Re: [PATCH] blk-mq-debugfs: Also show requests that have not yet been started

2018-10-05 Thread Jens Axboe
On 10/5/18 4:37 PM, Omar Sandoval wrote:
> On Fri, Oct 05, 2018 at 08:18:00AM -0600, Jens Axboe wrote:
>> On 10/4/18 11:35 AM, Bart Van Assche wrote:
>>> When debugging e.g. the SCSI timeout handler it is important that
>>> requests that have not yet been started or that already have
>>> completed are also reported through debugfs.
>>
>> Thanks, I like this better - applied. BTW, what's up with the
>> reverse ordering on this:
>>
>>> Signed-off-by: Bart Van Assche 
>>> Cc: Christoph Hellwig 
>>> Cc: Ming Lei 
>>> Cc: Hannes Reinecke 
>>> Cc: Johannes Thumshirn 
>>> Cc: Martin K. Petersen 
>>
>> For some reason that really annoys me, and I see it in various
>> patches these days. IMHO the SOB should be last, with whatever
>> acks, reviews, CC, before that.
> 
> I could've sworn that this guideline was even documented somewhere, but
> I can't find it now ¯\_(ツ)_/¯

My guess is that it's some newer git thing - but if it is, it's really
annoying and should be reverted. I end up fixing these up by hand.

-- 
Jens Axboe
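
A trailer block in the order Jens describes would look roughly like this (names are placeholders, addresses elided as in the quoted tags above):

Cc: Reviewer One
Cc: Reviewer Two
Reviewed-by: Reviewer Three
Signed-off-by: Patch Author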



Re: [EXT] Re: [PATCH v2] lightnvm: pblk: consider max hw sectors supported for max_write_pgs

2018-10-05 Thread Zhoujie Wu




On 10/05/2018 06:01 PM, Matias Bjørling wrote:

On 10/05/2018 07:39 PM, Zhoujie Wu wrote:

During GC, the number of read/write sectors is determined
by max_write_pgs (see the gc_rq preparation in pblk_gc_line_prepare_ws).

Because max_write_pgs does not consider the max hw sectors
supported by the nvme controller (128K), GC tries to read
64 * 4K in one command and hits the error below, raised by
pblk_bio_map_addr in pblk_submit_read_gc.

[ 2923.005376] pblk: could not add page to bio
[ 2923.005377] pblk: could not allocate GC bio (18446744073709551604)

Signed-off-by: Zhoujie Wu 
---
v2: Changed according to Javier's comments.
Removed the unnecessary comment and moved the definition of bqueue to
maintain ordering.

  drivers/lightnvm/pblk-init.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index e357388..0ef9ac5 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -391,6 +391,7 @@ static void pblk_put_global_caches(void)
  static int pblk_core_init(struct pblk *pblk)
  {
  struct nvm_tgt_dev *dev = pblk->dev;
+struct request_queue *bqueue = dev->q;
  struct nvm_geo *geo = &dev->geo;
  int ret, max_write_ppas;
  @@ -407,6 +408,8 @@ static int pblk_core_init(struct pblk *pblk)
  pblk->min_write_pgs = geo->ws_opt;
  max_write_ppas = pblk->min_write_pgs * geo->all_luns;
  pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA);
+pblk->max_write_pgs =  min_t(int, pblk->max_write_pgs,
+queue_max_hw_sectors(bqueue) / (geo->csecs >> 9));
  pblk_set_sec_per_write(pblk, pblk->min_write_pgs);
pblk->pad_dist = kcalloc(pblk->min_write_pgs - 1, 
sizeof(atomic64_t),




Thanks Zhoujie. I've done the following update and applied it to 4.20. 
Note that I also removed the bqueue.

Good for me, thanks a lot.


-   pblk->max_write_pgs =  min_t(int, pblk->max_write_pgs,
-   queue_max_hw_sectors(bqueue) / (geo->csecs >> 9));
+   pblk->max_write_pgs = min_t(int, pblk->max_write_pgs,
+   queue_max_hw_sectors(dev->q) / (geo->csecs >> SECTOR_SHIFT));


Javier, I've carried over your Reviewed-by, let me know if you want it 
to be removed.




Re: [PATCH v2] lightnvm: pblk: consider max hw sectors supported for max_write_pgs

2018-10-05 Thread Matias Bjørling

On 10/05/2018 07:39 PM, Zhoujie Wu wrote:

During GC, the number of read/write sectors is determined
by max_write_pgs (see the gc_rq preparation in pblk_gc_line_prepare_ws).

Because max_write_pgs does not consider the max hw sectors
supported by the nvme controller (128K), GC tries to read
64 * 4K in one command and hits the error below, raised by
pblk_bio_map_addr in pblk_submit_read_gc.

[ 2923.005376] pblk: could not add page to bio
[ 2923.005377] pblk: could not allocate GC bio (18446744073709551604)

Signed-off-by: Zhoujie Wu 
---
v2: Changed according to Javier's comments.
Removed the unnecessary comment and moved the definition of bqueue to
maintain ordering.

  drivers/lightnvm/pblk-init.c | 3 +++
  1 file changed, 3 insertions(+)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index e357388..0ef9ac5 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -391,6 +391,7 @@ static void pblk_put_global_caches(void)
  static int pblk_core_init(struct pblk *pblk)
  {
struct nvm_tgt_dev *dev = pblk->dev;
+   struct request_queue *bqueue = dev->q;
	struct nvm_geo *geo = &dev->geo;
int ret, max_write_ppas;
  
@@ -407,6 +408,8 @@ static int pblk_core_init(struct pblk *pblk)

pblk->min_write_pgs = geo->ws_opt;
max_write_ppas = pblk->min_write_pgs * geo->all_luns;
pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA);
+   pblk->max_write_pgs =  min_t(int, pblk->max_write_pgs,
+   queue_max_hw_sectors(bqueue) / (geo->csecs >> 9));
pblk_set_sec_per_write(pblk, pblk->min_write_pgs);
  
  	pblk->pad_dist = kcalloc(pblk->min_write_pgs - 1, sizeof(atomic64_t),




Thanks Zhoujie. I've done the following update and applied it to 4.20. 
Note that I also removed the bqueue.


-   pblk->max_write_pgs =  min_t(int, pblk->max_write_pgs,
-   queue_max_hw_sectors(bqueue) / (geo->csecs >> 9));
+   pblk->max_write_pgs = min_t(int, pblk->max_write_pgs,
+   queue_max_hw_sectors(dev->q) / (geo->csecs >> SECTOR_SHIFT));


Javier, I've carried over your Reviewed-by, let me know if you want it 
to be removed.


Re: [PATCH] blk-mq-debugfs: Also show requests that have not yet been started

2018-10-05 Thread Omar Sandoval
On Fri, Oct 05, 2018 at 08:18:00AM -0600, Jens Axboe wrote:
> On 10/4/18 11:35 AM, Bart Van Assche wrote:
> > When debugging e.g. the SCSI timeout handler it is important that
> > requests that have not yet been started or that already have
> > completed are also reported through debugfs.
> 
> Thanks, I like this better - applied. BTW, what's up with the
> reverse ordering on this:
> 
> > Signed-off-by: Bart Van Assche 
> > Cc: Christoph Hellwig 
> > Cc: Ming Lei 
> > Cc: Hannes Reinecke 
> > Cc: Johannes Thumshirn 
> > Cc: Martin K. Petersen 
> 
> For some reason that really annoys me, and I see it in various
> patches these days. IMHO the SOB should be last, with whatever
> acks, reviews, CC, before that.

I could've sworn that this guideline was even documented somewhere, but
I can't find it now ¯\_(ツ)_/¯


Re: [PATCH blktests 0/3] Add NVMeOF multipath tests

2018-10-05 Thread Omar Sandoval
On Thu, Sep 27, 2018 at 04:26:42PM -0700, Bart Van Assche wrote:
> On Tue, 2018-09-18 at 17:18 -0700, Omar Sandoval wrote:
> > On Tue, Sep 18, 2018 at 05:02:47PM -0700, Bart Van Assche wrote:
> > > On 9/18/18 4:24 PM, Omar Sandoval wrote:
> > > > On Tue, Sep 18, 2018 at 02:20:59PM -0700, Bart Van Assche wrote:
> > > > > Can you have a look at the updated master branch of
> > > > > https://github.com/bvanassche/blktests? That code should no longer 
> > > > > fail if
> > > > > unloading the nvme kernel module fails. Please note that you will need
> > > > > kernel v4.18 to test these scripts - a KASAN complaint appears if I 
> > > > > run
> > > > > these tests against kernel v4.19-rc4.
> > > > 
> > > > Thanks, these pass now. Is it expected that my nvme device gets a
> > > > multipath device configured after running these tests?
> > > > 
> > > > $ lsblk
> > > > NAME MAJ:MIN RM SIZE RO TYPE  MOUNTPOINT
> > > > vda  254:00  16G  0 disk
> > > > └─vda1   254:10  16G  0 part  /
> > > > vdb  254:16   0   8G  0 disk
> > > > vdc  254:32   0   8G  0 disk
> > > > vdd  254:48   0   8G  0 disk
> > > > nvme0n1  259:00   8G  0 disk
> > > > └─mpatha 253:00   8G  0 mpath
> > > 
> > > No, all multipath devices that were created during a test should be 
> > > removed
> > > before that test finishes. I will look into this.
> > > 
> > > > Also, can you please fix:
> > > > 
> > > > _have_kernel_option NVME_MULTIPATH && exit 1
> > > > 
> > > > to not exit on failure? It should use SKIP_REASON and return 1. You
> > > > might need to add something like _dont_have_kernel_option to properly
> > > > handle the case where the config is not found.
> > > 
> > > OK, I will change this.
> > > 
> > > > Side note which isn't a blocker for merging is that there's a lot of
> > > > duplicated code between these helpers and the srp helpers. How hard
> > > > would it be to refactor that?
> > > 
> > > Are you perhaps referring to the code that is shared between the
> > > tests/srp/rc tests/nvmeof-mp/rc shell scripts?
> > 
> > Yes, those.
> > 
> > > The hardest part is probably
> > > to chose a location where to store these functions. Should I create a file
> > > with common code under common/, under tests/srp/, under tests/nvmeof-mp/ 
> > > or
> > > perhaps somewhere else?
> > 
> > Just put it under common.
> 
> Hi Omar,
> 
> All feedback mentioned above has been addressed. The following pull request
> has been updated: https://github.com/osandov/blktests/pull/33. Please let me
> know if you want me to post these patches on the linux-block mailing list.
> 
> Note: neither the upstream kernel v4.18 nor v4.19-rc4 is stable enough to
> pass all nvmeof-mp tests if kernel debugging options like KASAN are enabled.
> Additionally, the NVMe device_add_disk() race condition often causes
> multipathd to refuse to consider /dev/nvme... devices. The output on my test
> setup is as follows (all tests pass):
> 
> # ./check -q nvmeof-mp
> nvmeof-mp/001 (Log in and log out)   [passed]
> runtime  1.528s  ...  1.909s
> nvmeof-mp/002 (File I/O on top of multipath concurrently with logout and login (mq)) [passed]
> runtime  38.968s  ...  38.571s
> nvmeof-mp/004 (File I/O on top of multipath concurrently with logout and login (sq-on-mq)) [passed]
> runtime  38.632s  ...  37.529s
> nvmeof-mp/005 (Direct I/O with large transfer sizes and bs=4M) [passed]
> runtime  13.382s  ...  13.684s
> nvmeof-mp/006 (Direct I/O with large transfer sizes and bs=8M) [passed]
> runtime  13.511s  ...  13.480s
> nvmeof-mp/009 (Buffered I/O with large transfer sizes and bs=4M) [passed]
> runtime  13.665s  ...  13.763s
> nvmeof-mp/010 (Buffered I/O with large transfer sizes and bs=8M) [passed]
> runtime  13.442s  ...  13.900s
> nvmeof-mp/011 (Block I/O on top of multipath concurrently with logout and login) [passed]
> runtime  37.988s  ...  37.945s
> nvmeof-mp/012 (dm-mpath on top of multiple I/O schedulers)   [passed]
> runtime  21.659s  ...  21.733s

Thanks, Bart, merged.
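
As an illustration of the helper change Omar asked for above, a minimal sketch of a skip helper might look like the following. The name _dont_have_kernel_option, the SKIP_REASON convention, and the config-detection logic are assumptions based on the discussion, not the final blktests implementation:

_dont_have_kernel_option() {
	local opt="$1" config

	if [ -r /proc/config.gz ]; then
		config="$(zcat /proc/config.gz)"
	elif [ -r "/boot/config-$(uname -r)" ]; then
		config="$(cat "/boot/config-$(uname -r)")"
	else
		SKIP_REASON="kernel config not found"
		return 1
	fi

	if echo "$config" | grep -q "^${opt}=[ym]$"; then
		SKIP_REASON="$opt is enabled"
		return 1
	fi
	return 0
}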


[PATCH v2] lightnvm: pblk: consider max hw sectors supported for max_write_pgs

2018-10-05 Thread Zhoujie Wu
During GC, the number of read/write sectors is determined
by max_write_pgs (see the gc_rq preparation in pblk_gc_line_prepare_ws).

Because max_write_pgs does not consider the max hw sectors
supported by the nvme controller (128K), GC tries to read
64 * 4K in one command and hits the error below, raised by
pblk_bio_map_addr in pblk_submit_read_gc.

[ 2923.005376] pblk: could not add page to bio
[ 2923.005377] pblk: could not allocate GC bio (18446744073709551604)

Signed-off-by: Zhoujie Wu 
---
v2: Changed according to Javier's comments.
Removed the unnecessary comment and moved the definition of bqueue to
maintain ordering.

 drivers/lightnvm/pblk-init.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index e357388..0ef9ac5 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -391,6 +391,7 @@ static void pblk_put_global_caches(void)
 static int pblk_core_init(struct pblk *pblk)
 {
struct nvm_tgt_dev *dev = pblk->dev;
+   struct request_queue *bqueue = dev->q;
	struct nvm_geo *geo = &dev->geo;
int ret, max_write_ppas;
 
@@ -407,6 +408,8 @@ static int pblk_core_init(struct pblk *pblk)
pblk->min_write_pgs = geo->ws_opt;
max_write_ppas = pblk->min_write_pgs * geo->all_luns;
pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA);
+   pblk->max_write_pgs =  min_t(int, pblk->max_write_pgs,
+   queue_max_hw_sectors(bqueue) / (geo->csecs >> 9));
pblk_set_sec_per_write(pblk, pblk->min_write_pgs);
 
pblk->pad_dist = kcalloc(pblk->min_write_pgs - 1, sizeof(atomic64_t),
-- 
1.9.1
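
A quick back-of-the-envelope check of the new cap, using the hypothetical values from the commit message (128K max hw transfer, 4K device sectors; queue_max_hw_sectors() reports 512-byte units, hence the csecs >> 9 divisor):

max_hw_sectors=$((128 * 1024 / 512))	# 256 x 512-byte sectors per 128K transfer
csecs=4096				# device sector size
echo $((max_hw_sectors / (csecs >> 9)))	# 32 pages per command, vs the 64 GC was issuing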



Re: [EXT] Re: [PATCH] lightnvm: consider max hw sectors supported for max_write_pgs

2018-10-05 Thread Zhoujie Wu




On 10/05/2018 01:05 AM, Javier González wrote:


On 5 Oct 2018, at 02.26, Zhoujie Wu  wrote:

During GC, the number of read/write sectors is determined
by max_write_pgs (see the gc_rq preparation in pblk_gc_line_prepare_ws).

Because max_write_pgs does not consider the max hw sectors
supported by the nvme controller (128K), GC tries to read
64 * 4K in one command and hits the error below, raised by
pblk_bio_map_addr in pblk_submit_read_gc.

[ 2923.005376] pblk: could not add page to bio
[ 2923.005377] pblk: could not allocate GC bio (18446744073709551604)

Signed-off-by: Zhoujie Wu 
---
drivers/lightnvm/pblk-init.c | 4 
1 file changed, 4 insertions(+)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index e357388..2e51875 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -393,6 +393,7 @@ static int pblk_core_init(struct pblk *pblk)
struct nvm_tgt_dev *dev = pblk->dev;
	struct nvm_geo *geo = &dev->geo;
int ret, max_write_ppas;
+   struct request_queue *bqueue = dev->q;


Detail: Can you move this under struct nvm_tgt_dev *dev = pblk->dev;? So
that we maintain ordering?

Good suggestion.



	atomic64_set(&pblk->user_wa, 0);
	atomic64_set(&pblk->pad_wa, 0);
@@ -407,6 +408,9 @@ static int pblk_core_init(struct pblk *pblk)
pblk->min_write_pgs = geo->ws_opt;
max_write_ppas = pblk->min_write_pgs * geo->all_luns;
pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA);
+   /* consider the max hw sector as well */

No need for this comment.

ok, will remove it.

+   pblk->max_write_pgs =  min_t(int, pblk->max_write_pgs,
+   queue_max_hw_sectors(bqueue) / (geo->csecs >> 9));
pblk_set_sec_per_write(pblk, pblk->min_write_pgs);

pblk->pad_dist = kcalloc(pblk->min_write_pgs - 1, sizeof(atomic64_t),
--
1.9.1

Besides the comment above, it looks good to me.

Will send out v2 soon. Thanks so much.


Reviewed-by: Javier González 





Re: [PATCH] blk-mq-debugfs: Also show requests that have not yet been started

2018-10-05 Thread Jens Axboe
On 10/4/18 11:35 AM, Bart Van Assche wrote:
> When debugging e.g. the SCSI timeout handler it is important that
> requests that have not yet been started or that already have
> completed are also reported through debugfs.

Thanks, I like this better - applied. BTW, what's up with the
reverse ordering on this:

> Signed-off-by: Bart Van Assche 
> Cc: Christoph Hellwig 
> Cc: Ming Lei 
> Cc: Hannes Reinecke 
> Cc: Johannes Thumshirn 
> Cc: Martin K. Petersen 

For some reason that really annoys me, and I see it in various
patches these days. IMHO the SOB should be last, with whatever
acks, reviews, CC, before that.

-- 
Jens Axboe
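
For anyone wanting to see the effect of this change, the per-hctx debugfs entries are a rough way to inspect request state; a sketch, assuming CONFIG_BLK_DEBUG_FS is enabled (device name and exact file layout are illustrative and vary by kernel version):

for f in /sys/kernel/debug/block/nvme0n1/hctx*/busy; do
	echo "== $f =="
	cat "$f"
done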



Re: [GIT PULL] nvme updates for 4.20

2018-10-05 Thread Jens Axboe
On 10/5/18 7:41 AM, Christoph Hellwig wrote:
> A relatively boring merge window:
> 
>  - better AEN tracing (Chaitanya)
>  - NUMA aware PCIe multipathing (me)
>  - RDMA workqueue fixes (Sagi)
>  - better bio usage in the target (Sagi)
>  - FC rework for target removal (James)
>  - better multipath handling of ->queue_rq failures (James)
>  - various cleanups (Milan)
> 
> The following changes since commit c0aac682fa6590cb660cb083dbc09f55e799d2d2:
> 
>   Merge tag 'v4.19-rc6' into for-4.20/block (2018-10-01 08:58:57 -0600)
> 
> are available in the Git repository at:
> 
>   git://git.infradead.org/nvme.git nvme-4.20
> 
> for you to fetch changes up to 2acf70ade79d26b97611a8df52eb22aa33814cd4:
> 
>   nvmet-rdma: use a private workqueue for delete (2018-10-05 09:25:18 +0200)
> 
> 
> Chaitanya Kulkarni (2):
>   nvmet: remove redundant module prefix
>   nvme-core: add async event trace helper
> 
> Christoph Hellwig (1):
>   nvme: take node locality into account when selecting a path
> 
> James Smart (3):
>   nvmet_fc: support target port removal with nvmet layer
>   nvme_fc: add 'nvme_discovery' sysfs attribute to fc transport device
>   nvme: call nvme_complete_rq when nvmf_check_ready fails for mpath I/O
> 
> Milan P. Gandhi (2):
>   nvme: fix typo in nvme_identify_ns_descs
>   nvme-fc: fix for a minor typos
> 
> Sagi Grimberg (2):
>   nvmet: don't split large I/Os unconditionally
>   nvmet-rdma: use a private workqueue for delete
> 
>  drivers/nvme/host/core.c  |  20 --
>  drivers/nvme/host/fabrics.c   |   7 +-
>  drivers/nvme/host/fc.c| 108 +++
>  drivers/nvme/host/multipath.c |  57 +
>  drivers/nvme/host/nvme.h  |  25 +++-
>  drivers/nvme/host/trace.h |  28 
>  drivers/nvme/target/admin-cmd.c   |   2 +-
>  drivers/nvme/target/fc.c  | 130 +++---
>  drivers/nvme/target/io-cmd-bdev.c |   9 ++-
>  drivers/nvme/target/nvmet.h   |   1 +
>  drivers/nvme/target/rdma.c|  19 --
>  include/linux/nvme.h  |   1 +
>  12 files changed, 347 insertions(+), 60 deletions(-)

Pulled, thanks.

-- 
Jens Axboe




[GIT PULL] nvme updates for 4.20

2018-10-05 Thread Christoph Hellwig
A relatively boring merge window:

 - better AEN tracing (Chaitanya)
 - NUMA aware PCIe multipathing (me)
 - RDMA workqueue fixes (Sagi)
 - better bio usage in the target (Sagi)
 - FC rework for target removal (James)
 - better multipath handling of ->queue_rq failures (James)
 - various cleanups (Milan)

The following changes since commit c0aac682fa6590cb660cb083dbc09f55e799d2d2:

  Merge tag 'v4.19-rc6' into for-4.20/block (2018-10-01 08:58:57 -0600)

are available in the Git repository at:

  git://git.infradead.org/nvme.git nvme-4.20

for you to fetch changes up to 2acf70ade79d26b97611a8df52eb22aa33814cd4:

  nvmet-rdma: use a private workqueue for delete (2018-10-05 09:25:18 +0200)


Chaitanya Kulkarni (2):
  nvmet: remove redundant module prefix
  nvme-core: add async event trace helper

Christoph Hellwig (1):
  nvme: take node locality into account when selecting a path

James Smart (3):
  nvmet_fc: support target port removal with nvmet layer
  nvme_fc: add 'nvme_discovery' sysfs attribute to fc transport device
  nvme: call nvme_complete_rq when nvmf_check_ready fails for mpath I/O

Milan P. Gandhi (2):
  nvme: fix typo in nvme_identify_ns_descs
  nvme-fc: fix for a minor typos

Sagi Grimberg (2):
  nvmet: don't split large I/Os unconditionally
  nvmet-rdma: use a private workqueue for delete

 drivers/nvme/host/core.c  |  20 --
 drivers/nvme/host/fabrics.c   |   7 +-
 drivers/nvme/host/fc.c| 108 +++
 drivers/nvme/host/multipath.c |  57 +
 drivers/nvme/host/nvme.h  |  25 +++-
 drivers/nvme/host/trace.h |  28 
 drivers/nvme/target/admin-cmd.c   |   2 +-
 drivers/nvme/target/fc.c  | 130 +++---
 drivers/nvme/target/io-cmd-bdev.c |   9 ++-
 drivers/nvme/target/nvmet.h   |   1 +
 drivers/nvme/target/rdma.c|  19 --
 include/linux/nvme.h  |   1 +
 12 files changed, 347 insertions(+), 60 deletions(-)
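
For reference, fetching this on the maintainer side amounts to something like the following (remote URL taken from the message above, exact workflow illustrative):

git pull git://git.infradead.org/nvme.git nvme-4.20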


[PATCH 0/5] lightnvm: pblk: Flexible metadata

2018-10-05 Thread Igor Konopko
This series of patches extends the way pblk stores L2P sector
metadata. After this set of changes, any size of NVMe metadata
(including 0) is supported.

Igor Konopko (5):
  lightnvm: pblk: Do not reuse DMA memory on partial read
  lightnvm: pblk: Helpers for OOB metadata
  lightnvm: Flexible DMA pool entry size
  lightnvm: Disable interleaved metadata
  lightnvm: pblk: Support for packed metadata

 drivers/lightnvm/core.c  | 33 ++
 drivers/lightnvm/pblk-core.c | 77 +---
 drivers/lightnvm/pblk-init.c | 54 +-
 drivers/lightnvm/pblk-map.c  | 21 ++---
 drivers/lightnvm/pblk-rb.c   |  3 ++
 drivers/lightnvm/pblk-read.c | 56 +++
 drivers/lightnvm/pblk-recovery.c | 28 +++-
 drivers/lightnvm/pblk-sysfs.c|  7 +++
 drivers/lightnvm/pblk-write.c| 14 --
 drivers/lightnvm/pblk.h  | 55 +--
 drivers/nvme/host/lightnvm.c |  7 ++-
 include/linux/lightnvm.h |  9 ++--
 12 files changed, 278 insertions(+), 86 deletions(-)

-- 
2.17.1



[PATCH 5/5] lightnvm: pblk: Support for packed metadata

2018-10-05 Thread Igor Konopko
In the current pblk implementation, the L2P mapping for lines that are
not yet closed is stored only in OOB metadata and recovered from it.

Such a solution does not provide data integrity when the drive has no
OOB metadata space.

The goal of this patch is to add support for so-called packed metadata,
which stores the L2P mapping for open lines in the last sector of every
write unit.

Signed-off-by: Igor Konopko 
---
 drivers/lightnvm/pblk-core.c | 52 +---
 drivers/lightnvm/pblk-init.c | 37 +--
 drivers/lightnvm/pblk-rb.c   |  3 ++
 drivers/lightnvm/pblk-recovery.c |  5 +--
 drivers/lightnvm/pblk-sysfs.c|  7 +
 drivers/lightnvm/pblk-write.c| 14 ++---
 drivers/lightnvm/pblk.h  |  5 ++-
 7 files changed, 110 insertions(+), 13 deletions(-)

diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index 131972b13e27..e11a46c05067 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -376,7 +376,7 @@ void pblk_write_should_kick(struct pblk *pblk)
 {
	unsigned int secs_avail = pblk_rb_read_count(&pblk->rwb);
 
-   if (secs_avail >= pblk->min_write_pgs)
+   if (secs_avail >= pblk->min_write_pgs_data)
pblk_write_kick(pblk);
 }
 
@@ -407,7 +407,9 @@ struct list_head *pblk_line_gc_list(struct pblk *pblk, 
struct pblk_line *line)
struct pblk_line_meta *lm = >lm;
struct pblk_line_mgmt *l_mg = >l_mg;
struct list_head *move_list = NULL;
-   int vsc = le32_to_cpu(*line->vsc);
+   int packed_meta = (le32_to_cpu(*line->vsc) / pblk->min_write_pgs_data)
+   * (pblk->min_write_pgs - pblk->min_write_pgs_data);
+   int vsc = le32_to_cpu(*line->vsc) + packed_meta;
 
lockdep_assert_held(>lock);
 
@@ -620,12 +622,15 @@ struct bio *pblk_bio_map_addr(struct pblk *pblk, void *data,
 }
 
 int pblk_calc_secs(struct pblk *pblk, unsigned long secs_avail,
-  unsigned long secs_to_flush)
+  unsigned long secs_to_flush, bool skip_meta)
 {
int max = pblk->sec_per_write;
int min = pblk->min_write_pgs;
int secs_to_sync = 0;
 
+   if (skip_meta && pblk->min_write_pgs_data != pblk->min_write_pgs)
+   min = max = pblk->min_write_pgs_data;
+
if (secs_avail >= max)
secs_to_sync = max;
else if (secs_avail >= min)
@@ -851,7 +856,7 @@ int pblk_line_emeta_read(struct pblk *pblk, struct pblk_line *line,
 next_rq:
	memset(&rqd, 0, sizeof(struct nvm_rq));
 
-   rq_ppas = pblk_calc_secs(pblk, left_ppas, 0);
+   rq_ppas = pblk_calc_secs(pblk, left_ppas, 0, false);
rq_len = rq_ppas * geo->csecs;
 
bio = pblk_bio_map_addr(pblk, emeta_buf, rq_ppas, rq_len,
@@ -2161,3 +2166,42 @@ void pblk_lookup_l2p_rand(struct pblk *pblk, struct ppa_addr *ppas,
	}
	spin_unlock(&pblk->trans_lock);
 }
+
+void pblk_set_packed_meta(struct pblk *pblk, struct nvm_rq *rqd)
+{
+   void *meta_list = rqd->meta_list;
+   void *page;
+   int i = 0;
+
+   if (pblk_is_oob_meta_supported(pblk))
+   return;
+
+   /* We need to zero out metadata corresponding to packed meta page */
+   pblk_set_meta_lba(pblk, meta_list, rqd->nr_ppas - 1, ADDR_EMPTY);
+
+   page = page_to_virt(rqd->bio->bi_io_vec[rqd->bio->bi_vcnt - 1].bv_page);
+   /* We need to fill last page of request (packed metadata)
+* with data from oob meta buffer.
+*/
+   for (; i < rqd->nr_ppas; i++)
+   memcpy(page + (i * sizeof(struct pblk_sec_meta)),
+   pblk_get_meta_buffer(pblk, meta_list, i),
+   sizeof(struct pblk_sec_meta));
+}
+
+void pblk_get_packed_meta(struct pblk *pblk, struct nvm_rq *rqd)
+{
+   void *meta_list = rqd->meta_list;
+   void *page;
+   int i = 0;
+
+   if (pblk_is_oob_meta_supported(pblk))
+   return;
+
+   page = page_to_virt(rqd->bio->bi_io_vec[rqd->bio->bi_vcnt - 1].bv_page);
+   /* We need to fill oob meta buffer with data from packed metadata */
+   for (; i < rqd->nr_ppas; i++)
+   memcpy(pblk_get_meta_buffer(pblk, meta_list, i),
+   page + (i * sizeof(struct pblk_sec_meta)),
+   sizeof(struct pblk_sec_meta));
+}
diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index 1529aa37b30f..d2a63494def6 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -407,8 +407,40 @@ static int pblk_core_init(struct pblk *pblk)
pblk->min_write_pgs = geo->ws_opt;
max_write_ppas = pblk->min_write_pgs * geo->all_luns;
pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA);
+   pblk->min_write_pgs_data = pblk->min_write_pgs;
pblk_set_sec_per_write(pblk, pblk->min_write_pgs);
 
+   if (!pblk_is_oob_meta_supported(pblk)) {
+   /* For drives which does not have OOB metadata 

[PATCH 4/5] lightnvm: Disable interleaved metadata

2018-10-05 Thread Igor Konopko
Currently pblk and lightnvm only check the size of the OOB metadata
and do not care whether this metadata is located in a separate buffer
or is interleaved with the data in a single buffer.

In reality only the first scenario is supported; the second mode will
break pblk functionality during any IO operation.

The goal of this patch is to block the creation of pblk devices in
case of interleaved metadata.

Signed-off-by: Igor Konopko 
---
 drivers/lightnvm/pblk-init.c | 6 ++
 drivers/nvme/host/lightnvm.c | 1 +
 include/linux/lightnvm.h | 1 +
 3 files changed, 8 insertions(+)

diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
index b794e279da31..1529aa37b30f 100644
--- a/drivers/lightnvm/pblk-init.c
+++ b/drivers/lightnvm/pblk-init.c
@@ -1152,6 +1152,12 @@ static void *pblk_init(struct nvm_tgt_dev *dev, struct gendisk *tdisk,
return ERR_PTR(-EINVAL);
}
 
+   if (geo->ext) {
+   pblk_err(pblk, "extended metadata not supported\n");
+   kfree(pblk);
+   return ERR_PTR(-EINVAL);
+   }
+
	spin_lock_init(&pblk->resubmit_lock);
	spin_lock_init(&pblk->trans_lock);
	spin_lock_init(&pblk->lock);
diff --git a/drivers/nvme/host/lightnvm.c b/drivers/nvme/host/lightnvm.c
index e370793f52d5..7020e87bcee4 100644
--- a/drivers/nvme/host/lightnvm.c
+++ b/drivers/nvme/host/lightnvm.c
@@ -989,6 +989,7 @@ void nvme_nvm_update_nvm_info(struct nvme_ns *ns)
 
geo->csecs = 1 << ns->lba_shift;
geo->sos = ns->ms;
+   geo->ext = ns->ext;
 }
 
 int nvme_nvm_register(struct nvme_ns *ns, char *disk_name, int node)
diff --git a/include/linux/lightnvm.h b/include/linux/lightnvm.h
index c6c998716ee7..abd29f50f2a1 100644
--- a/include/linux/lightnvm.h
+++ b/include/linux/lightnvm.h
@@ -357,6 +357,7 @@ struct nvm_geo {
u32 clba;   /* sectors per chunk */
u16 csecs;  /* sector size */
u16 sos;/* out-of-band area size */
+   u16 ext;/* metadata in extended data buffer */
 
/* device write constrains */
u32 ws_min; /* minimum write size */
-- 
2.17.1



[PATCH 1/5] lightnvm: pblk: Do not reuse DMA memory on partial read

2018-10-05 Thread Igor Konopko
Currently, DMA-allocated memory is reused on the partial read path
for some internal pblk structs. In preparation for dynamic DMA pool
sizes, change this to kmalloc-allocated memory.

Signed-off-by: Igor Konopko 
---
 drivers/lightnvm/pblk-read.c | 20 +---
 drivers/lightnvm/pblk.h  |  2 ++
 2 files changed, 7 insertions(+), 15 deletions(-)

diff --git a/drivers/lightnvm/pblk-read.c b/drivers/lightnvm/pblk-read.c
index d340dece1d00..08f6ebd4bc48 100644
--- a/drivers/lightnvm/pblk-read.c
+++ b/drivers/lightnvm/pblk-read.c
@@ -224,7 +224,6 @@ static void pblk_end_partial_read(struct nvm_rq *rqd)
unsigned long *read_bitmap = pr_ctx->bitmap;
int nr_secs = pr_ctx->orig_nr_secs;
int nr_holes = nr_secs - bitmap_weight(read_bitmap, nr_secs);
-   __le64 *lba_list_mem, *lba_list_media;
void *src_p, *dst_p;
int hole, i;
 
@@ -237,13 +236,9 @@ static void pblk_end_partial_read(struct nvm_rq *rqd)
rqd->ppa_list[0] = ppa;
}
 
-   /* Re-use allocated memory for intermediate lbas */
-   lba_list_mem = (((void *)rqd->ppa_list) + pblk_dma_ppa_size);
-   lba_list_media = (((void *)rqd->ppa_list) + 2 * pblk_dma_ppa_size);
-
for (i = 0; i < nr_secs; i++) {
-   lba_list_media[i] = meta_list[i].lba;
-   meta_list[i].lba = lba_list_mem[i];
+   pr_ctx->lba_list_media[i] = meta_list[i].lba;
+   meta_list[i].lba = pr_ctx->lba_list_mem[i];
}
 
/* Fill the holes in the original bio */
@@ -255,7 +250,7 @@ static void pblk_end_partial_read(struct nvm_rq *rqd)
line = pblk_ppa_to_line(pblk, rqd->ppa_list[i]);
	kref_put(&line->ref, pblk_line_put);
 
-   meta_list[hole].lba = lba_list_media[i];
+   meta_list[hole].lba = pr_ctx->lba_list_media[i];
 
src_bv = new_bio->bi_io_vec[i++];
dst_bv = bio->bi_io_vec[bio_init_idx + hole];
@@ -295,13 +290,9 @@ static int pblk_setup_partial_read(struct pblk *pblk, struct nvm_rq *rqd,
struct pblk_g_ctx *r_ctx = nvm_rq_to_pdu(rqd);
struct pblk_pr_ctx *pr_ctx;
struct bio *new_bio, *bio = r_ctx->private;
-   __le64 *lba_list_mem;
int nr_secs = rqd->nr_ppas;
int i;
 
-   /* Re-use allocated memory for intermediate lbas */
-   lba_list_mem = (((void *)rqd->ppa_list) + pblk_dma_ppa_size);
-
new_bio = bio_alloc(GFP_KERNEL, nr_holes);
 
if (pblk_bio_add_pages(pblk, new_bio, GFP_KERNEL, nr_holes))
@@ -312,12 +303,12 @@ static int pblk_setup_partial_read(struct pblk *pblk, struct nvm_rq *rqd,
goto fail_free_pages;
}
 
-   pr_ctx = kmalloc(sizeof(struct pblk_pr_ctx), GFP_KERNEL);
+   pr_ctx = kzalloc(sizeof(struct pblk_pr_ctx), GFP_KERNEL);
if (!pr_ctx)
goto fail_free_pages;
 
for (i = 0; i < nr_secs; i++)
-   lba_list_mem[i] = meta_list[i].lba;
+   pr_ctx->lba_list_mem[i] = meta_list[i].lba;
 
new_bio->bi_iter.bi_sector = 0; /* internal bio */
bio_set_op_attrs(new_bio, REQ_OP_READ, 0);
@@ -325,7 +316,6 @@ static int pblk_setup_partial_read(struct pblk *pblk, struct nvm_rq *rqd,
rqd->bio = new_bio;
rqd->nr_ppas = nr_holes;
 
-   pr_ctx->ppa_ptr = NULL;
pr_ctx->orig_bio = bio;
bitmap_copy(pr_ctx->bitmap, read_bitmap, NVM_MAX_VLBA);
pr_ctx->bio_init_idx = bio_init_idx;
diff --git a/drivers/lightnvm/pblk.h b/drivers/lightnvm/pblk.h
index 0f98ea24ee59..aea09879636f 100644
--- a/drivers/lightnvm/pblk.h
+++ b/drivers/lightnvm/pblk.h
@@ -132,6 +132,8 @@ struct pblk_pr_ctx {
unsigned int bio_init_idx;
void *ppa_ptr;
dma_addr_t dma_ppa_list;
+   __le64 lba_list_mem[NVM_MAX_VLBA];
+   __le64 lba_list_media[NVM_MAX_VLBA];
 };
 
 /* Pad context */
-- 
2.17.1



[PATCH 2/5] lightnvm: pblk: Helpers for OOB metadata

2018-10-05 Thread Igor Konopko
Currently pblk assumes that the size of the OOB metadata on the drive
is always equal to the size of the pblk_sec_meta struct. This commit
adds helpers which allow handling different sizes of OOB metadata on
the drive.

Signed-off-by: Igor Konopko 
---
 drivers/lightnvm/pblk-core.c |  6 ++---
 drivers/lightnvm/pblk-map.c  | 21 ++--
 drivers/lightnvm/pblk-read.c | 41 +++-
 drivers/lightnvm/pblk-recovery.c | 14 ++-
 drivers/lightnvm/pblk.h  | 37 +++-
 5 files changed, 86 insertions(+), 33 deletions(-)

diff --git a/drivers/lightnvm/pblk-core.c b/drivers/lightnvm/pblk-core.c
index 6944aac43b01..7cb39d84c833 100644
--- a/drivers/lightnvm/pblk-core.c
+++ b/drivers/lightnvm/pblk-core.c
@@ -743,7 +743,6 @@ int pblk_line_smeta_read(struct pblk *pblk, struct pblk_line *line)
rqd.opcode = NVM_OP_PREAD;
rqd.nr_ppas = lm->smeta_sec;
rqd.is_seq = 1;
-
for (i = 0; i < lm->smeta_sec; i++, paddr++)
rqd.ppa_list[i] = addr_to_gen_ppa(pblk, paddr, line->id);
 
@@ -796,10 +795,11 @@ static int pblk_line_smeta_write(struct pblk *pblk, struct pblk_line *line,
rqd.is_seq = 1;
 
for (i = 0; i < lm->smeta_sec; i++, paddr++) {
-   struct pblk_sec_meta *meta_list = rqd.meta_list;
+   void *meta_list = rqd.meta_list;
 
rqd.ppa_list[i] = addr_to_gen_ppa(pblk, paddr, line->id);
-   meta_list[i].lba = lba_list[paddr] = addr_empty;
+   pblk_set_meta_lba(pblk, meta_list, i, ADDR_EMPTY);
+   lba_list[paddr] = addr_empty;
}
 
	ret = pblk_submit_io_sync_sem(pblk, &rqd);
diff --git a/drivers/lightnvm/pblk-map.c b/drivers/lightnvm/pblk-map.c
index 6dcbd44e3acb..4c7a9909308e 100644
--- a/drivers/lightnvm/pblk-map.c
+++ b/drivers/lightnvm/pblk-map.c
@@ -22,7 +22,7 @@
 static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry,
  struct ppa_addr *ppa_list,
  unsigned long *lun_bitmap,
- struct pblk_sec_meta *meta_list,
+ void *meta_list,
  unsigned int valid_secs)
 {
struct pblk_line *line = pblk_line_get_data(pblk);
@@ -68,14 +68,15 @@ static int pblk_map_page_data(struct pblk *pblk, unsigned int sentry,
	kref_get(&line->ref);
	w_ctx = pblk_rb_w_ctx(&pblk->rwb, sentry + i);
w_ctx->ppa = ppa_list[i];
-   meta_list[i].lba = cpu_to_le64(w_ctx->lba);
+   pblk_set_meta_lba(pblk, meta_list, i, w_ctx->lba);
lba_list[paddr] = cpu_to_le64(w_ctx->lba);
if (lba_list[paddr] != addr_empty)
line->nr_valid_lbas++;
else
atomic64_inc(>pad_wa);
} else {
-   lba_list[paddr] = meta_list[i].lba = addr_empty;
+   lba_list[paddr] = addr_empty;
+   pblk_set_meta_lba(pblk, meta_list, i, ADDR_EMPTY);
__pblk_map_invalidate(pblk, line, paddr);
}
}
@@ -88,7 +89,7 @@ void pblk_map_rq(struct pblk *pblk, struct nvm_rq *rqd, unsigned int sentry,
 unsigned long *lun_bitmap, unsigned int valid_secs,
 unsigned int off)
 {
-   struct pblk_sec_meta *meta_list = rqd->meta_list;
+   void *meta_list = rqd->meta_list;
struct ppa_addr *ppa_list = nvm_rq_to_ppa_list(rqd);
unsigned int map_secs;
int min = pblk->min_write_pgs;
@@ -97,7 +98,10 @@ void pblk_map_rq(struct pblk *pblk, struct nvm_rq *rqd, unsigned int sentry,
for (i = off; i < rqd->nr_ppas; i += min) {
map_secs = (i + min > valid_secs) ? (valid_secs % min) : min;
	if (pblk_map_page_data(pblk, sentry + i, &ppa_list[i],
-   lun_bitmap, &meta_list[i], map_secs)) {
+   lun_bitmap,
+   pblk_get_meta_buffer(pblk,
+meta_list, i),
+   map_secs)) {
bio_put(rqd->bio);
pblk_free_rqd(pblk, rqd, PBLK_WRITE);
pblk_pipeline_stop(pblk);
@@ -113,7 +117,7 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd,
	struct nvm_tgt_dev *dev = pblk->dev;
	struct nvm_geo *geo = &dev->geo;
	struct pblk_line_meta *lm = &pblk->lm;
-   struct pblk_sec_meta *meta_list = rqd->meta_list;
+   void *meta_list = rqd->meta_list;
struct ppa_addr *ppa_list = nvm_rq_to_ppa_list(rqd);
struct pblk_line *e_line, *d_line;
unsigned int map_secs;
@@ -123,7 +127,10 @@ void pblk_map_erase_rq(struct pblk *pblk, struct nvm_rq *rqd,

Re: [PATCH] lightnvm: consider max hw sectors supported for max_write_pgs

2018-10-05 Thread Javier González
> On 5 Oct 2018, at 02.26, Zhoujie Wu  wrote:
> 
> During GC, the number of read/write sectors is determined
> by max_write_pgs (see the gc_rq preparation in pblk_gc_line_prepare_ws).
> 
> Because max_write_pgs does not consider the max hw sectors
> supported by the nvme controller (128K), GC tries to read
> 64 * 4K in one command and hits the error below, raised by
> pblk_bio_map_addr in pblk_submit_read_gc.
> 
> [ 2923.005376] pblk: could not add page to bio
> [ 2923.005377] pblk: could not allocate GC bio (18446744073709551604)
> 
> Signed-off-by: Zhoujie Wu 
> ---
> drivers/lightnvm/pblk-init.c | 4 
> 1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/lightnvm/pblk-init.c b/drivers/lightnvm/pblk-init.c
> index e357388..2e51875 100644
> --- a/drivers/lightnvm/pblk-init.c
> +++ b/drivers/lightnvm/pblk-init.c
> @@ -393,6 +393,7 @@ static int pblk_core_init(struct pblk *pblk)
>   struct nvm_tgt_dev *dev = pblk->dev;
>   struct nvm_geo *geo = &dev->geo;
>   int ret, max_write_ppas;
> + struct request_queue *bqueue = dev->q;
> 

Detail: Can you move this under struct nvm_tgt_dev *dev = pblk->dev;? So
that we maintain ordering?

>   atomic64_set(&pblk->user_wa, 0);
>   atomic64_set(&pblk->pad_wa, 0);
> @@ -407,6 +408,9 @@ static int pblk_core_init(struct pblk *pblk)
>   pblk->min_write_pgs = geo->ws_opt;
>   max_write_ppas = pblk->min_write_pgs * geo->all_luns;
>   pblk->max_write_pgs = min_t(int, max_write_ppas, NVM_MAX_VLBA);
> + /* consider the max hw sector as well */

No need for this comment.

> + pblk->max_write_pgs =  min_t(int, pblk->max_write_pgs,
> + queue_max_hw_sectors(bqueue) / (geo->csecs >> 9));
>   pblk_set_sec_per_write(pblk, pblk->min_write_pgs);
> 
>   pblk->pad_dist = kcalloc(pblk->min_write_pgs - 1, sizeof(atomic64_t),
> --
> 1.9.1

Besides the comment above, it looks good to me.

Reviewed-by: Javier González 





Re: [PATCH] block: BFQ default for single queue devices

2018-10-05 Thread Pavel Machek
Hi!

> I talked to Pavel a bit back and it turns out he has a
> usecase for BFQ as well and I bet he also would like it
> as default scheduler for that system (Pavel tell us more,
> I don't remember what it was!)

I'm not sure I remember clearly, either.

IIRC I was working with ionice on spinning disks, and it had no
effect. I switched to BFQ and suddenly ionice was effective.

Best regards,
Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) 
http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html
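
Roughly the setup Pavel describes, for anyone wanting to try it (device name and workload are illustrative; switching the scheduler requires the bfq module and root):

echo bfq > /sys/block/sda/queue/scheduler
ionice -c 2 -n 7 tar czf /backup/home.tgz /home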




Re: [PATCH] blk-mq-debugfs: Also show requests that have not yet been started

2018-10-05 Thread Johannes Thumshirn
Looks good,
Reviewed-by: Johannes Thumshirn 
-- 
Johannes Thumshirn  Storage
jthumsh...@suse.de+49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham Norton
HRB 21284 (AG Nürnberg)
Key fingerprint = EC38 9CAB C2C4 F25D 8600 D0D0 0393 969D 2D76 0850