  1. Jan 24, 2025
    • nvmet: fix a memory leak in controller identify · 58f5c8d5
      Sagi Grimberg authored
      
      
      Simply free an allocated buffer once we copied its content
      to the request sgl.
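
      A minimal sketch of the fix, assuming the identify handler follows the
      usual nvmet pattern of allocating a zeroed buffer and copying it to the
      request SGL (the handler name and body below are illustrative, not the
      exact patch):

        static void nvmet_execute_id_ns_nvm(struct nvmet_req *req)
        {
                struct nvme_id_ns_nvm *id;
                u16 status;

                id = kzalloc(sizeof(*id), GFP_KERNEL);
                if (!id) {
                        status = NVME_SC_INTERNAL;
                        goto out;
                }
                status = nvmet_copy_to_sgl(req, 0, id, sizeof(*id));
                kfree(id);      /* the missing free added by this fix */
        out:
                nvmet_req_complete(req, status);
        }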
      
      kmemleak complaint:
      unreferenced object 0xffff8cd40c388000 (size 4096):
        comm "kworker/2:2H", pid 14739, jiffies 4401313113
        hex dump (first 32 bytes):
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
          00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00  ................
        backtrace (crc 0):
          [<ffffffff9e01087a>] kmemleak_alloc+0x4a/0x90
          [<ffffffff9d30324a>] __kmalloc_cache_noprof+0x35a/0x420
          [<ffffffffc180b0e2>] nvmet_execute_identify+0x912/0x9f0 [nvmet]
          [<ffffffffc181a72c>] nvmet_tcp_try_recv_pdu+0x84c/0xc90 [nvmet_tcp]
          [<ffffffffc181ac02>] nvmet_tcp_io_work+0x82/0x8b0 [nvmet_tcp]
          [<ffffffff9cfa7158>] process_one_work+0x178/0x3e0
          [<ffffffff9cfa8e9c>] worker_thread+0x2ec/0x420
          [<ffffffff9cfb2140>] kthread+0xf0/0x120
          [<ffffffff9cee36a4>] ret_from_fork+0x44/0x70
          [<ffffffff9ce7fdda>] ret_from_fork_asm+0x1a/0x30
      
      Fixes: 84909f7d ("nvmet: use kzalloc instead of ZERO_PAGE in nvme_execute_identify_ns_nvm()")
      Signed-off-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Nilay Shroff <nilay@linux.ibm.com>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
  2. Jan 23, 2025
    • nvme-fc: do not ignore connectivity loss during connecting · ee59e382
      Daniel Wagner authored
      
      
      When a connectivity loss occurs while nvme_fc_create_association is
      being executed, it's possible that the ctrl ends up stuck in the LIVE
      state:
      
        1) nvme nvme10: NVME-FC{10}: create association : ...
        2) nvme nvme10: NVME-FC{10}: controller connectivity lost.
                        Awaiting Reconnect
           nvme nvme10: queue_size 128 > ctrl maxcmd 32, reducing to maxcmd
        3) nvme nvme10: Could not set queue count (880)
           nvme nvme10: Failed to configure AEN (cfg 900)
        4) nvme nvme10: NVME-FC{10}: controller connect complete
        5) nvme nvme10: failed nvme_keep_alive_end_io error=4
      
      A connection attempt starts 1) and the ctrl is in the CONNECTING state.
      Shortly after, the LLDD driver detects a connectivity loss event and
      calls nvme_fc_ctrl_connectivity_loss 2). Because we are still in the
      CONNECTING state, this event is ignored.
      
      nvme_fc_create_association continues to run in parallel and tries to
      communicate with the controller; these commands will fail. These errors
      are filtered out, though: e.g. in 3) setting the number of I/O queues
      fails, which leads to an early exit from nvme_fc_create_io_queues.
      Because the number of I/O queues is 0 at this point, there is nothing
      left in nvme_fc_create_association that could detect the connection
      drop. Thus the ctrl enters the LIVE state 4).
      
      Eventually the keep-alive handler times out 5), but because nothing is
      done about it, the ctrl stays in the LIVE state.
      
      The ASSOC_FAILED flag already exists to track connectivity loss events,
      but this bit is set too late in the recovery code path. Move setting it
      into the connectivity loss event handler and synchronize it with the
      state change. This ensures that the ASSOC_FAILED flag is seen by
      nvme_fc_create_io_queues and the ctrl does not enter the LIVE state
      after a connectivity loss event. If the connectivity loss event happens
      after we have entered the LIVE state, the normal error recovery path is
      executed.
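
      A rough sketch of the idea (not the exact patch; locking and the
      surrounding logic are simplified): the flag is set in the connectivity
      loss handler under the controller lock so it is serialized with the
      controller state check, and the association setup path bails out
      instead of going LIVE when it sees the flag.

        static void nvme_fc_ctrl_connectivity_loss(struct nvme_fc_ctrl *ctrl)
        {
                unsigned long flags;

                spin_lock_irqsave(&ctrl->lock, flags);
                /*
                 * Record the loss before the state is checked so that a
                 * concurrent nvme_fc_create_association() cannot miss it
                 * and transition the ctrl to LIVE.
                 */
                set_bit(ASSOC_FAILED, &ctrl->flags);
                spin_unlock_irqrestore(&ctrl->lock, flags);

                /* Loss after we went LIVE: normal error recovery. */
                if (nvme_ctrl_state(&ctrl->ctrl) == NVME_CTRL_LIVE)
                        nvme_reset_ctrl(&ctrl->ctrl);
        }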
      
      Signed-off-by: Daniel Wagner <wagi@kernel.org>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvme: handle connectivity loss in nvme_set_queue_count · 294b2b75
      Daniel Wagner authored
      
      
      When the set features attempt fails with any NVMe status code in
      nvme_set_queue_count, the function still reports success, though the
      number of queues is set to 0. This is done to support controllers in a
      degraded state (the admin queue is still up and running but there are
      no I/O queues).
      
      There is an exception, though: when nvme_set_features reports a host
      path error, nvme_set_queue_count should propagate this error, as
      connectivity is lost, which means the admin queue is not working
      anymore either.
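
      In code terms, the intended behavior is roughly the following (a sketch
      of the relevant part of nvme_set_queue_count(), not the exact diff; the
      path-error check shown is one plausible way to express it):

        status = nvme_set_features(ctrl, NVME_FEAT_NUM_QUEUES, q_count,
                                   NULL, 0, &result);
        /*
         * A negative value is a kernel error; a path-related NVMe status
         * means connectivity is lost and even the admin queue is unusable,
         * so propagate the error instead of pretending we have 0 queues.
         */
        if (status < 0 || nvme_is_path_error(status))
                return status;
        if (status > 0) {
                /* Degraded controller: keep the admin queue, no I/O queues. */
                *count = 0;
                return 0;
        }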
      
      Fixes: 9a0be7ab ("nvme: refactor set_queue_count")
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Signed-off-by: Daniel Wagner <wagi@kernel.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvme-fc: go straight to connecting state when initializing · d3d380ed
      Daniel Wagner authored
      
      
      The initial controller initialization mimics the reconnect loop
      behavior by switching from NEW to RESETTING and then to CONNECTING.
      
      The transition from NEW to CONNECTING is valid, so there is no point
      entering the RESETTING state first. TCP and RDMA also transition
      directly to the CONNECTING state.
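
      The change boils down to something like the following in the controller
      creation path (a sketch; the error label is illustrative):

        /* Transition directly from NEW to CONNECTING, as TCP and RDMA do. */
        if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
                dev_err(ctrl->ctrl.device,
                        "NVME-FC{%d}: failed to set CONNECTING state\n",
                        ctrl->cnum);
                goto fail_ctrl;
        }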
      
      Reviewed-by: Sagi Grimberg <sagi@grimberg.me>
      Reviewed-by: Hannes Reinecke <hare@suse.de>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Signed-off-by: Daniel Wagner <wagi@kernel.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
  3. Jan 11, 2025
    • Documentation: Document the NVMe PCI endpoint target driver · 002ec8f1
      Damien Le Moal authored
      
      
      Add a documentation file
      (Documentation/nvme/nvme-pci-endpoint-target.rst) for the new NVMe PCI
      endpoint target driver. This provides an overview of the driver
      requirements, capabilities and limitations. A user guide describing how
      to set up an NVMe PCI endpoint device using this driver is also provided.
      
      This document is also made accessible from the PCI endpoint
      documentation using a link. Furthermore, since the existing nvme
      documentation was not accessible from the top documentation index, an
      index file is added to Documentation/nvme, and this index is listed as
      "NVMe Subsystem" in the "Storage interfaces" section of the subsystem
      API index.
      
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Acked-by: Bjorn Helgaas <bhelgaas@google.com>
      Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvmet: New NVMe PCI endpoint function target driver · 0faa0fe6
      Damien Le Moal authored
      
      
      Implement a PCI target driver using the PCI endpoint framework. This
      requires hardware with a PCI controller capable of executing in endpoint
      mode.
      
      The PCI endpoint framework is used to set up a PCI endpoint function
      and its BAR to be compatible with an NVMe PCI controller. The framework
      is also used to map local memory to the PCI address space to execute
      MMIO accesses for retrieving NVMe commands from submission queues and
      posting completion entries to completion queues. If supported, DMA is
      used for command retrieval and command data transfers, based on the PCI
      address segments indicated by the command using either PRPs or SGLs.
      
      The NVMe target driver relies on the NVMe target core code to execute
      all commands issued by the host. The PCI target driver is mainly
      responsible for the following:
       - Initialization and teardown of the endpoint device and its backend
         PCI target controller. The PCI target controller is created using a
         subsystem and a port defined through configfs. The port used must be
         initialized with the "pci" transport type. The target controller is
         allocated and initialized when the PCI endpoint is started by binding
         it to the endpoint PCI device (nvmet_pci_epf_epc_init() function).
      
       - Manage the endpoint controller state according to the PCI link state
         and the actions of the host (e.g. checking the CC.EN register) and
         propagate these actions to the PCI target controller. Polling of the
         controller enable/disable is done using a delayed work scheduled
         every 5ms (nvmet_pci_epf_poll_cc() function; see the sketch after
         this list). This work is started
         whenever the PCI link comes up (nvmet_pci_epf_link_up() notifier
         function) and stopped when the PCI link comes down
         (nvmet_pci_epf_link_down() notifier function).
         nvmet_pci_epf_poll_cc() enables and disables the PCI controller using
         the functions nvmet_pci_epf_enable_ctrl() and
         nvmet_pci_epf_disable_ctrl(). The controller admin queue is created
         using nvmet_pci_epf_create_cq(), which calls nvmet_cq_create(), and
         nvmet_pci_epf_create_sq() which uses nvmet_sq_create().
         nvmet_pci_epf_disable_ctrl() always resets the PCI controller to its
         initial state so that nvmet_pci_epf_enable_ctrl() can be called
         again. This ensures correct operation if, for instance, the host
         reboots causing the PCI link to be temporarily down.
      
       - Manage the controller admin and I/O submission queues using local
         memory. Commands are obtained from submission queues using a work
         item that constantly polls the doorbells of all submission queues
         (nvmet_pci_epf_poll_sqs() function). This work is started whenever
         the controller is enabled (nvmet_pci_epf_enable_ctrl() function) and
         stopped when the controller is disabled (nvmet_pci_epf_disable_ctrl()
         function). When new commands are submitted by the host, DMA transfers
         are used to retrieve the commands.
      
       - Initiate the execution of all admin and I/O commands using the target
         core code, by calling a request's execute() function. All commands are
         individually handled using a per-command work item
         (nvmet_pci_epf_iod_work() function). A command's overall execution
         includes: initializing a struct nvmet_req request for the command,
         using nvmet_req_transfer_len() to get the command data transfer
         length, parsing the command PRPs or SGLs to get the PCI address
         segments of the command data buffer, retrieving data from the host
         (if the command is a write command), calling req->execute() to
         execute the command, and transferring data to the host (for read
         commands).
      
       - Handle the completions of commands as notified by the
         ->queue_response() operation of the PCI target controller
         (nvmet_pci_epf_queue_response() function). Completed commands are
         added to a list of completed commands for their CQ. Each CQ's list of
         completed commands is processed using a work item
         (nvmet_pci_epf_cq_work() function) which posts entries for the
         completed commands in the CQ memory and raises an IRQ to the host to
         signal the completions. IRQ coalescing is supported as mandated by the
         NVMe base specification for PCI controllers. Of note is that
         completion entries are transmitted to the host using MMIO, after
         mapping the completion queue memory to the host PCI address space.
         Unlike for retrieving commands from SQs, DMA is not used as it
         degrades performance due to the transfer serialization needed (which
         delays completion entries transmission).
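
      As a rough illustration of the CC polling pattern mentioned in the
      second item above, a sketch using the usual kernel delayed-work idiom;
      the register accessor and the ctrl fields used here are assumptions,
      only the function names mentioned in the text are taken from the
      driver:

        #define NVMET_PCI_EPF_CC_POLL_INTERVAL  msecs_to_jiffies(5)

        static void nvmet_pci_epf_poll_cc(struct work_struct *work)
        {
                struct nvmet_pci_epf_ctrl *ctrl =
                        container_of(work, struct nvmet_pci_epf_ctrl, poll_cc.work);
                u32 cc = nvmet_pci_epf_bar_read32(ctrl, NVME_REG_CC);

                if ((cc & NVME_CC_ENABLE) && !ctrl->enabled)
                        nvmet_pci_epf_enable_ctrl(ctrl);
                else if (!(cc & NVME_CC_ENABLE) && ctrl->enabled)
                        nvmet_pci_epf_disable_ctrl(ctrl);

                /* Re-arm: keep polling CC as long as the PCI link is up. */
                schedule_delayed_work(&ctrl->poll_cc,
                                      NVMET_PCI_EPF_CC_POLL_INTERVAL);
        }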
      
      The configuration of an NVMe PCI endpoint controller is done using
      configfs. First, the NVMe PCI target controller must be configured to
      set up a subsystem and a port with the "pci" addr_trtype attribute. The
      subsystem can be set up using a file- or block-device-backed namespace
      or using a passthrough NVMe device. After this, the PCI endpoint can be
      configured and bound to the PCI endpoint controller to start the NVMe
      endpoint controller.
      
      In order not to overcomplicate this initial implementation of an
      endpoint PCI target controller driver, protection information is not
      supported for now. If the PCI controller port and namespace are
      configured with protection information support, an error is returned
      when the controller is created and initialized as the endpoint function
      is started. Protection information support will be added in a follow-up
      patch series.
      
      Using a Rock5B board (Rockchip RK3588 SoC, PCI Gen3x4 endpoint
      controller) with a target PCI controller setup with 4 I/O queues and a
      null_blk block device as a namespace, the maximum performance using fio
      was measured at 131 KIOPS for random 4K reads and up to 2.8 GB/s
      throughput. Some data points are:
      
      Rnd read,   4KB,  QD=1, 1 job : IOPS=16.9k, BW=66.2MiB/s (69.4MB/s)
      Rnd read,   4KB, QD=32, 1 job : IOPS=78.5k, BW=307MiB/s (322MB/s)
      Rnd read,   4KB, QD=32, 4 jobs: IOPS=131k, BW=511MiB/s (536MB/s)
      Seq read, 512KB, QD=32, 1 job : IOPS=5381, BW=2691MiB/s (2821MB/s)
      
      The NVMe PCI endpoint target driver is not intended for production use.
      It is a tool for learning NVMe, exploring existing features and testing
      implementations of new NVMe features.
      
      Co-developed-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Reviewed-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Reviewed-by: Krzysztof Wilczyński <kwilczynski@kernel.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvmet: Implement arbitration feature support · a0ed77d4
      Damien Le Moal authored
      
      
      NVMe base specification v2.1 mandates support for the arbitration
      feature (NVME_FEAT_ARBITRATION). Introduce the data structure
      struct nvmet_feat_arbitration to define the high, medium and low
      priority weight fields and the arbitration burst field of this feature
      and implement the functions nvmet_get_feat_arbitration() and
      nvmet_set_feat_arbitration() to get and set these fields.
      
      Since there is no generic way to implement support for the arbitration
      feature, these functions respectively use the controller get_feature()
      and set_feature() operations to process the feature with the help of
      the controller driver. If the controller driver does not implement these
      operations and a get feature command or a set feature command for this
      feature is received, the command is failed with an invalid field error.
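
      Based on that description, the feature data structure is presumably
      along these lines (field names are inferred from the text, not copied
      from the patch):

        struct nvmet_feat_arbitration {
                u8      hpw;    /* high priority weight */
                u8      mpw;    /* medium priority weight */
                u8      lpw;    /* low priority weight */
                u8      ab;     /* arbitration burst (log2 of the burst size) */
        };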
      
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvmet: Implement interrupt config feature support · f1ecd491
      Damien Le Moal authored
      
      
      The NVMe base specification v2.1 mandates support for the interrupt
      config feature (NVME_FEAT_IRQ_CONFIG) for PCI controllers. Introduce the
      data structure struct nvmet_feat_irq_config to define the coalescing
      disabled (cd) and interrupt vector (iv) fields of this feature and
      implement the functions nvmet_get_feat_irq_config() and
      nvmet_set_feat_irq_config() to get and set these fields. These
      functions respectively use the controller get_feature() and
      set_feature() operations to fill and handle the fields of struct
      nvmet_feat_irq_config.
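
      The corresponding structure is presumably as simple as the following
      (a guess from the fields named above, not copied from the patch):

        struct nvmet_feat_irq_config {
                u16     iv;     /* interrupt vector */
                bool    cd;     /* coalescing disabled for this vector */
        };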
      
      Support for this feature is prohibited for fabrics controllers. If a get
      feature command or a set feature command for this feature is received
      for a fabrics controller, the command is failed with an invalid field
      error.
      
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvmet: Implement interrupt coalescing feature support · 89b94a6c
      Damien Le Moal authored
      
      
      The NVMe base specification v2.1 mandates support for the interrupt
      coalescing feature (NVME_FEAT_IRQ_COALESCE) for PCI controllers.
      Introduce the data structure struct nvmet_feat_irq_coalesce to define
      the time and threshold (thr) fields of this feature and implement the
      functions nvmet_get_feat_irq_coalesce() and
      nvmet_set_feat_irq_coalesce() to get and set this feature. These
      functions respectively use the controller get_feature() and
      set_feature() operations to fill and handle the fields of struct
      nvmet_feat_irq_coalesce.
      
      While the Linux kernel nvme driver does not use this feature and thus
      will not complain if it is not implemented, other major OSes fail to
      initialize the NVMe device if support for this feature is missing.
      
      Support for this feature is prohibited for fabrics controllers. If a get
      feature or set feature command for this feature is received for a
      fabrics controller, the command is failed with an invalid field error.
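
      A sketch of how the set side might look, including the fabrics
      prohibition (the nvmet_is_pci_ctrl() helper, the structure layout and
      the handler name are assumptions; per the NVMe specification, CDW11
      carries the aggregation threshold in bits 7:0 and the aggregation time
      in bits 15:8):

        struct nvmet_feat_irq_coalesce {
                u8      thr;    /* aggregation threshold */
                u8      time;   /* aggregation time, in 100us increments */
        };

        static u16 nvmet_set_feat_irq_coalesce(struct nvmet_req *req, u32 cdw11)
        {
                struct nvmet_ctrl *ctrl = req->sq->ctrl;
                struct nvmet_feat_irq_coalesce irqc = {
                        .thr  = cdw11 & 0xff,
                        .time = (cdw11 >> 8) & 0xff,
                };

                /* This feature is not defined for fabrics controllers. */
                if (!nvmet_is_pci_ctrl(ctrl) || !ctrl->ops->set_feature)
                        return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;

                return ctrl->ops->set_feature(ctrl, NVME_FEAT_IRQ_COALESCE,
                                              &irqc);
        }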
      
      Suggested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvmet: Implement host identifier set feature support · 2f2b20fa
      Damien Le Moal authored
      
      
      The NVMe specifications mandate support for the host identifier
      set_features for controllers that also support reservations. Satisfy
      this requirement by implementing handling of the NVME_FEAT_HOST_ID
      feature for the nvme_set_features command. For now, this implementation
      is effective only for PCI target controllers. For other controller
      types, the set features command is failed with an NVME_SC_CMD_SEQ_ERROR
      status as before.
      
      As noted in the code, 128-bit host identifiers are supported, since the
      NVMe base specification version 2.1 indicates in section 5.1.25.1.28.1
      that "The controller may support a 64-bit Host Identifier...".
      
      The RHII (Reservations and Host Identifier Interaction) bit of the
      controller attribute (ctratt) field of the identify controller data is
      also set to indicate that a host ID of "0" is supported but that the
      host ID must be a non-zero value to use reservations.
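
      A plausible shape for the handler (a sketch only; the function name and
      the nvmet_is_pci_ctrl() helper are assumptions, and the exact flow may
      differ):

        static u16 nvmet_set_feat_host_id(struct nvmet_req *req)
        {
                struct nvmet_ctrl *ctrl = req->sq->ctrl;

                if (!nvmet_is_pci_ctrl(ctrl))
                        return NVME_SC_CMD_SEQ_ERROR | NVME_STATUS_DNR;

                /*
                 * The 64-bit host identifier is only optional in the NVMe
                 * base specification v2.1, so require the 128-bit format
                 * (EXHID set in CDW11 bit 0).
                 */
                if (!(le32_to_cpu(req->cmd->common.cdw11) & 0x1))
                        return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;

                return nvmet_copy_from_sgl(req, 0, &ctrl->hostid,
                                           sizeof(ctrl->hostid));
        }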
      
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvmet: Introduce get/set_feature controller operations · 08461535
      Damien Le Moal authored
      
      
      The implementation of some features cannot always be done generically by
      the target core code. Arbitration and IRQ coalescing features are
      examples of such features: their implementation must be provided (at
      least partially) by the target controller driver.
      
      Introduce the set_feature() and get_feature() controller fabrics
      operations (in struct nvmet_fabrics_ops) to allow supporting such
      features.
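
      The new hooks presumably slot into struct nvmet_fabrics_ops roughly as
      follows (the prototypes are an assumption inferred from the feature
      patches above):

        struct nvmet_fabrics_ops {
                /* ... existing operations (queue_response, add_port, ...) ... */

                /*
                 * Optional hooks for features the target core cannot handle
                 * generically; both return an NVMe status code.
                 */
                u16 (*get_feature)(const struct nvmet_ctrl *ctrl, u8 feat,
                                   void *data);
                u16 (*set_feature)(const struct nvmet_ctrl *ctrl, u8 feat,
                                   void *data);
        };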
      
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvmet: Do not require SGL for PCI target controller commands · 1ad8630f
      Damien Le Moal authored
      
      
      Support for SGLs is optional for the PCI transport. Modify
      nvmet_req_init() to not require the NVME_CMD_SGL_METABUF command flag to
      be set if the target controller transport type is NVMF_TRTYPE_PCI.
      In addition to this, the NVMe base specification v2.1 mandates that all
      admin commands use PRPs, that is, have CDW0.PSDT cleared to 0. Modify
      nvmet_parse_admin_cmd() to check this.
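
      For instance, the admin command check presumably amounts to something
      like this in nvmet_parse_admin_cmd() (a sketch; nvmet_is_pci_ctrl() is
      an assumed helper):

        /* On PCI controllers, admin commands must use PRPs (CDW0.PSDT == 0). */
        if (nvmet_is_pci_ctrl(req->sq->ctrl) &&
            (req->cmd->common.flags & NVME_CMD_SGL_ALL))
                return NVME_SC_INVALID_FIELD | NVME_STATUS_DNR;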
      
      Finally, modify nvmet_check_transfer_len() and
      nvmet_check_data_len_lte() to return the appropriate error status
      depending on whether the command uses SGLs or PRPs. Since for fabrics
      nvmet_req_init() always checks that a command uses SGLs, this change
      affects only PCI target controllers.
      
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvmet: Add support for I/O queue management admin commands · 60d3cd85
      Damien Le Moal authored
      
      
      The I/O queue management admin commands
      (nvme_admin_delete_sq, nvme_admin_create_sq, nvme_admin_delete_cq,
      and nvme_admin_create_cq) are mandatory admin commands for I/O
      controllers using the PCI transport, that is, support for these commands
      is mandatory for a PCI target I/O controller.
      
      Implement support for these commands by adding the functions
      nvmet_execute_delete_sq(), nvmet_execute_create_sq(),
      nvmet_execute_delete_cq() and nvmet_execute_create_cq(), which are set
      as the execute method of requests for these commands. These functions
      will return an invalid opcode error for any controller that is not a
      PCI target controller. Support for the I/O queue management commands is
      also reported in the command effects log of PCI target controllers
      (using nvmet_get_cmd_effects_admin()).
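
      For illustration, the admin parser dispatch for these opcodes
      presumably looks something like this (a sketch assuming the usual local
      variables of nvmet_parse_admin_cmd(); the PCI-only check lives inside
      the execute handlers as described above):

        switch (cmd->common.opcode) {
        case nvme_admin_create_cq:
                req->execute = nvmet_execute_create_cq;
                return 0;
        case nvme_admin_delete_cq:
                req->execute = nvmet_execute_delete_cq;
                return 0;
        case nvme_admin_create_sq:
                req->execute = nvmet_execute_create_sq;
                return 0;
        case nvme_admin_delete_sq:
                req->execute = nvmet_execute_delete_sq;
                return 0;
        }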
      
      Each management command is backed by a controller fabrics operation
      that can be defined by a PCI target controller driver to set up I/O
      queues using nvmet_sq_create() and nvmet_cq_create() or delete I/O
      queues using nvmet_sq_destroy().
      
      As noted in a comment in nvmet_execute_create_sq(), we do not yet
      support sharing a single CQ between multiple SQs.
      
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvmet: Introduce nvmet_sq_create() and nvmet_cq_create() · 1eb380ca
      Damien Le Moal authored
      
      
      Introduce the new functions nvmet_sq_create() and nvmet_cq_create() to
      allow a target driver to initialize and set up admin and I/O queues
      directly, without needing to execute connect fabrics commands.
      The helper functions nvmet_check_cqid() and nvmet_check_sqid() are
      implemented to check the correctness of SQ and CQ IDs when
      nvmet_sq_create() and nvmet_cq_create() are called.
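
      The new interfaces are presumably along these lines (the exact argument
      lists are an assumption based on the description):

        u16 nvmet_check_cqid(struct nvmet_ctrl *ctrl, u16 cqid);
        u16 nvmet_check_sqid(struct nvmet_ctrl *ctrl, u16 sqid, bool create);
        u16 nvmet_cq_create(struct nvmet_ctrl *ctrl, struct nvmet_cq *cq,
                            u16 qid, u16 size);
        u16 nvmet_sq_create(struct nvmet_ctrl *ctrl, struct nvmet_sq *sq,
                            u16 qid, u16 size);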
      
      nvmet_sq_create() and nvmet_cq_create() are primarily intended for use
      with PCI target controller drivers and thus are not well integrated
      with the current queue creation of fabrics controllers using the connect
      command. These fabrics drivers are not modified to use these functions.
      This simple implementation of SQ and CQ management for PCI target
      controller drivers does not allow multiple SQs to share the same CQ,
      similarly to other fabrics transports. This is a specification
      violation. A more involved set of changes will follow to add support for
      this required completion queue sharing feature.
      
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>
    • nvmet: Introduce nvmet_req_transfer_len() · 43043c9b
      Damien Le Moal authored
      
      
      Add the new function nvmet_req_transfer_len() to parse a request command
      to extract the transfer length of the command. This function's
      implementation relies on multiple helper functions for parsing I/O
      commands (nvmet_io_cmd_transfer_len()), admin commands
      (nvmet_admin_cmd_data_len()) and fabrics connect commands
      (nvmet_connect_cmd_data_len()).
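
      The dispatch presumably follows the usual I/O queue vs. admin queue vs.
      fabrics split (a sketch, not the exact implementation):

        u32 nvmet_req_transfer_len(struct nvmet_req *req)
        {
                if (likely(req->sq->qid != 0))
                        return nvmet_io_cmd_transfer_len(req);
                if (unlikely(nvme_is_fabrics(req->cmd)))
                        return nvmet_connect_cmd_data_len(req);
                return nvmet_admin_cmd_data_len(req);
        }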
      
      Signed-off-by: Damien Le Moal <dlemoal@kernel.org>
      Reviewed-by: Christoph Hellwig <hch@lst.de>
      Tested-by: Rick Wertenbroek <rick.wertenbroek@gmail.com>
      Tested-by: Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
      Signed-off-by: Keith Busch <kbusch@kernel.org>