Async I/O via io_ring

xNVMe utilizes the Windows io_ring interface, relying on its support for feature-probing both the io_ring interface itself and the individual io_ring opcodes.

When available, xNVMe sends io_ring-specific requests using the IORING_HANDLE_REF and IORING_BUFFER_REF structures for reads and writes via the Windows io_ring interface. Doing so improves command throughput at all I/O depths compared to sending the commands via NVMe driver IOCTLs.
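Besides the CLI, the same backend selection can be made programmatically. The following is a minimal sketch using the xNVMe C API, assuming the Windows backend name "windows" and async implementation name "io_ring" are available on the system; error handling is reduced to the essentials.

    #include <libxnvme.h>

    int main(void)
    {
    	// Start from the library defaults, then request the
    	// Windows backend with the io_ring async implementation
    	struct xnvme_opts opts = xnvme_opts_default();
    	struct xnvme_dev *dev;

    	opts.be = "windows";     // assumption: Windows backend name
    	opts.async = "io_ring";  // assumption: io_ring async implementation

    	// Note the escaped backslashes in the device identifier
    	dev = xnvme_dev_open("\\\\.\\PhysicalDrive0", &opts);
    	if (!dev) {
    		return 1;  // device not found or io_ring unavailable
    	}

    	// ... allocate buffers, initialize a queue, submit I/O ...

    	xnvme_dev_close(dev);
    	return 0;
    }

If io_ring is not supported on the running Windows version, opening the device with this option is expected to fail, and one can fall back to the IOCTL-based path instead.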

One can explicitly tell xNVMe to utilize io_ring for async I/O by encoding it in the command invocation, like so:

xnvme_io_async read \\.\PhysicalDrive0 --slba 0x0 --qdepth 1 --async io_ring

Yielding the output:

# Allocating and filling buf of nbytes: 512
# Initializing queue and setting default callback function and arguments
# Read uri: '\\.\PhysicalDrive0', qd: 1
xnvme_lba_range:
  slba: 0x0000000000000000
  elba: 0x0000000000000000
  naddrs: 1
  nbytes: 512
  attr: { is_zones: 0, is_valid: 1}
wall-clock: {elapsed: 0.0003, mib: 0.00, mib_sec: 1.92}
# cb_args: {submitted: 1, completed: 1, ecount: 0}