> -Original Message-
> From: Javier González
> Sent: Tuesday, 7 July 2020 10.43
> To: Matias Bjorling
> Cc: Damien Le Moal ; ax...@kernel.dk;
> kbu...@kernel.org; h...@lst.de; s...@grimberg.me;
> martin.peter...@oracle.com; Niklas Cassel ; Hans
> Holmberg ; li
> -Original Message-
> From: Javier González
> Sent: Monday, 29 June 2020 21.39
> To: Damien Le Moal
> Cc: Matias Bjorling ; ax...@kernel.dk;
> kbu...@kernel.org; h...@lst.de; s...@grimberg.me;
> martin.peter...@oracle.com; Niklas Cassel ; Hans
> Holmberg ; li
> -Original Message-
> From: Bart Van Assche
> Sent: Monday, 29 June 2020 03.36
> To: Matias Bjorling ; ax...@kernel.dk;
> kbu...@kernel.org; h...@lst.de; s...@grimberg.me;
> martin.peter...@oracle.com; Damien Le Moal ;
> Niklas Cassel ; Hans Holmberg
>
> Cc:
> -Original Message-
> From: Niklas Cassel
> Sent: Monday, 29 June 2020 11.04
> To: Damien Le Moal
> Cc: Matias Bjorling ; ax...@kernel.dk;
> kbu...@kernel.org; h...@lst.de; s...@grimberg.me;
> martin.peter...@oracle.com; Hans Holmberg ;
> linux-s...@vger.
On 02/20/2016 08:52 AM, Matias Bjørling wrote:
> Hi Jens,
>
> Sorry, I was living in a fairy tale land, where patches are
> miraculously applied without being sent upstream, leading me to test
> on top of the wrong base.
>
> I was missing three patches, which I should have sent for previous -rc
On 11/02/2015 04:37 PM, Jens Axboe wrote:
> On 11/02/2015 05:43 AM, Matias Bjorling wrote:
>> On 11/02/2015 02:16 AM, Randy Dunlap wrote:
>>> On 11/01/15 08:53, Stephen Rothwell wrote:
>>>> Hi all,
>>>>
>>>> I start again a day early, and thi
On 11/02/2015 02:16 AM, Randy Dunlap wrote:
> On 11/01/15 08:53, Stephen Rothwell wrote:
>> Hi all,
>>
>> I start again a day early, and this is how you all repay me? ;-)
>>
>> Changes since 20151022:
>>
>
> on i386:
>
> ../include/linux/lightnvm.h:143:4: error: width of 'resved' exceeds its
Den 02-09-2015 kl. 20:39 skrev Ross Zwisler:
On Mon, Aug 31, 2015 at 02:17:18PM +0200, Matias Bjørling wrote:
From: Matias Bjørling
Driver was not freeing the memory allocated for internal nullb queues.
This patch frees the memory during driver unload.
You may want to consider devm_* style
I don't think the current abuses of the block API are acceptable though.
The crazy deep merging shouldn't be too relevant for SSD-type devices
so I think you'd do better than trying to reuse the TYPE_FS level
blk-mq merging code. If you want to reuse the request
allocation/submission code that's
On 06/11/2015 12:29 PM, Christoph Hellwig wrote:
> On Wed, Jun 10, 2015 at 08:11:42PM +0200, Matias Bjorling wrote:
>> 1. A get/put flash block API, that user-space applications can use.
>> That will enable application-driven FTLs. E.g. RocksDB can be integrated
>> tightly with the SSD. Allowing data
On 06/09/2015 09:46 AM, Christoph Hellwig wrote:
> Hi Matias,
>
> I've been looking over this and I really think it needs a fundamental
> rearchitecture still. The design of using a separate stacking
> block device and all kinds of private hooks does not look very
> maintainable.
>
> Here is my
-
+#if defined(CONFIG_NVM)
+ struct bio_nvm_payload *bi_nvm; /* open-channel ssd backend */
+#endif
 unsigned short bi_vcnt; /* how many bio_vec's */
Jens suggests this to be implemented using a bio clone. Will do in the next
refresh.
--
To unsubscribe from this
--- a/drivers/block/nvme-core.c
+++ b/drivers/block/nvme-core.c
@@ -39,6 +39,7 @@
 #include <linux/slab.h>
 #include <linux/t10-pi.h>
 #include <linux/types.h>
+#include <linux/lightnvm.h>
 #include <scsi/sg.h>
 #include <asm-generic/io-64-nonatomic-lo-hi.h>
@@ -134,6 +135,11 @@ static inline void _nvme_check_size(void)
 BUILD_BUG_ON(sizeof(struct nvme_id_ns) != 4096);
On Sat, Apr 18, 2015 at 08:45:19AM +0200, Matias Bjorling wrote:
The reason it shouldn't be under a single block device is that a target
should be able to provide a global address space. That allows the address
space to grow/shrink dynamically with the disks. Allowing a continuously
Den 17-04-2015 kl. 19:46 skrev Christoph Hellwig:
On Fri, Apr 17, 2015 at 10:15:46AM +0200, Matias Bjørling wrote:
Just the prep/unprep, or other pieces as well?
All of it - it's functionality that lies logically below the block
layer, so that's where it should be handled.
In fact it should
Den 16-04-2015 kl. 16:55 skrev Keith Busch:
On Wed, 15 Apr 2015, Matias Bjørling wrote:
@@ -2316,7 +2686,9 @@ static int nvme_dev_add(struct nvme_dev *dev)
struct nvme_id_ctrl *ctrl;
void *mem;
dma_addr_t dma_addr;
-int shift = NVME_CAP_MPSMIN(readq(&dev->bar->cap)) + 12;
+u64
On 08/15/2014 01:09 AM, Keith Busch wrote:
The allocation and freeing of blk-mq parts seems a bit asymmetrical
to me. The 'tags' belong to the tagset, but any request_queue using
that tagset may free the tags. I looked to separate the tag allocation
concerns, but that's more time than I have,
I haven't even tried debugging this next one: doing an insmod+rmmod
caused this warning followed by a panic:
I'll look into it. Thanks
nr_tags must be uninitialized or screwed up somehow, otherwise I don't
see how that kmalloc() could warn on being too large. Keith, are you
running with
On 07/14/2014 02:41 PM, Christoph Hellwig wrote:
+static int nvme_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
+ unsigned int hctx_idx)
+ struct nvme_queue *nvmeq = dev->queues[
+ (hctx_idx % dev->queue_count) + 1];
+
+
On 07/22/2014 07:46 AM, Hannes Reinecke wrote:
On 07/21/2014 09:28 PM, Kent Overstreet wrote:
On Mon, Jul 21, 2014 at 04:23:41PM +0200, Hannes Reinecke wrote:
On 07/18/2014 07:04 PM, John Utz wrote:
On 07/18/2014 05:31 AM, John Utz wrote:
Thank you very much for the exhaustive answer! I
Den 16-06-2014 17:57, Keith Busch skrev:
On Fri, 13 Jun 2014, Matias Bjørling wrote:
This converts the current NVMe driver to utilize the blk-mq layer.
static void nvme_reset_notify(struct pci_dev *pdev, bool prepare)
{
- struct nvme_dev *dev = pci_get_drvdata(pdev);
+struct
On 06/04/2014 08:52 PM, Keith Busch wrote:
> On Wed, 4 Jun 2014, Jens Axboe wrote:
>> On 06/04/2014 12:28 PM, Keith Busch wrote:
>> Are you testing against 3.13? You really need the current tree for this,
>> otherwise I'm sure you'll run into issues (as you appear to be :-)
>
> I'm using Matias'
On 06/03/2014 01:06 AM, Keith Busch wrote:
> Depending on the timing, it might fail in alloc instead of free:
>
> Jun 2 16:45:40 kbgrz1 kernel: [ 265.421243] NULL pointer dereference
> at (null)
> Jun 2 16:45:40 kbgrz1 kernel: [ 265.434284] PGD 429acf067 PUD
> 42ce28067 PMD 0
> Jun
On 05/30/2014 06:48 PM, Keith Busch wrote:
> On Thu, 29 May 2014, Matias Bjørling wrote:
>> This converts the current NVMe driver to utilize the blk-mq layer.
>
> I'm pretty darn sure this new nvme_remove can cause a process
> with an open reference to use queues after they're freed in the
>
On 05/30/2014 01:12 AM, Jens Axboe wrote:
> On 05/29/2014 05:06 PM, Jens Axboe wrote:
>> Ah I see, yes that code apparently got axed. The attached patch brings
>> it back. Totally untested, I'll try and synthetically hit it to ensure
>> that it does work. Note that it currently does unmap and iod
On 04/03/2014 11:01 AM, Christoph Hellwig wrote:
> On Thu, Apr 03, 2014 at 09:45:11AM -0700, Matias Bjorling wrote:
>>> I'd still create a request_queue for the internal queue, just not register
>>> a block device for it. For example SCSI sets up queues for each LUN
>>
On 04/03/2014 12:36 AM, Christoph Hellwig wrote:
> On Wed, Apr 02, 2014 at 09:10:12PM -0700, Matias Bjorling wrote:
>> For the nvme driver, there's a single admin queue, which is outside
>> blk-mq's control, and the X normal queues. Should we allow the shared
>> tags structur
On 04/02/2014 12:46 AM, Christoph Hellwig wrote:
> On Tue, Apr 01, 2014 at 05:16:21PM -0700, Matias Bjorling wrote:
>> Hi Christoph,
>>
>> Can you rebase it on top of 3.14. I have trouble applying it for testing.
>
> Hi Martin,
>
> the series is based on top of
On 03/31/2014 07:46 AM, Christoph Hellwig wrote:
> This series adds support for sharing tags (and thus requests) between
> multiple request_queues. We'll need this for SCSI, and I think Martin
> also wants something similar for nvme.
>
> Besides the mess with request constructors/destructors the
On 03/24/2014 08:08 PM, David Lang wrote:
> On Fri, 21 Mar 2014, Matias Bjorling wrote:
>
>> On 03/21/2014 02:06 AM, Joe Thornber wrote:
>>> Hi Matias,
>>>
>>> This looks really interesting and I'd love to get involved. Do you
>>> have any recommendations for what hardware I should pick up?
On 03/24/2014 07:22 PM, Akira Hayakawa wrote:
> Hi, Matias,
>
> Sorry for jumping in. I am interested in this new feature, too.
>
>>> Does it even make sense to expose the underlying devices as block
>>> devices? It surely would help to send this together with a driver
>>> that you plan to use
On 03/23/2014 11:13 PM, Bart Van Assche wrote:
> On 03/21/14 16:37, Christoph Hellwig wrote:
>> Just curious: why do you think implementing this as a block remapper
>> inside device mapper is a better idea than as a blk-mq driver?
>>
>> At the request layer you already get a lot of infrastructure
On 03/21/2014 08:32 AM, Richard Weinberger wrote:
> On Fri, Mar 21, 2014 at 7:32 AM, Matias Bjørling wrote:
>
> This sounds very interesting!
>
> Is there also a way to expose the flash directly as MTD device?
> I'm thinking of UBI. Maybe both projects can benefit from each others.
>
Hi
On 03/21/2014 08:37 AM, Christoph Hellwig wrote:
> Just curious: why do you think implementing this as a block remapper
> inside device mapper is a better idea than as a blk-mq driver?
Hi Christoph,
I imagine the layer to interact with a compatible SSD, that either uses
SATA, NVMe or PCI-e as
>> compatible device, the device will be queued upon initialized for the
>> relevant values.
>>
>> The last part is still in progress and a fully working prototype will be
>> presented in upcoming patches.
>>
>> Contributions to make this possible by the following pe
On 03/21/2014 02:06 AM, Joe Thornber wrote:
> Hi Matias,
>
> This looks really interesting and I'd love to get involved. Do you
> have any recommendations for what hardware I should pick up?
Hi Joe,
The most easily available platform is OpenSSD
(http://www.openssd-project.org). It's a little
for the
relevant values.
The last part is still in progress and a fully working prototype will be
presented in upcoming patches.
Contributions to make this possible by the following people:
Aviad Zuck aviad...@tau.ac.il
Jesper Madsen j...@itu.dk
Signed-off-by: Matias Bjorling m
fers() could return NULL on (1) bs > PAGE_SIZE
> + * (2) low memory case. Ensure that we don't dereference null ptr
> + */
> + BUG_ON(!head);
> bh = head;
> do {
> bh->b_state |= b_state;
>
Reviewed-by: Matias Bjorling
;
> + bs = PAGE_SIZE;
> + }
>
> if (queue_mode == NULL_Q_MQ && use_per_node_hctx) {
> if (submit_queues < nr_online_nodes) {
>
Reviewed-by: Matias Bjorling
On 01/20/2014 04:58 AM, Raghavendra K T wrote:
> If we load the null_blk module with bs=8k we get following oops:
> [ 3819.812190] BUG: unable to handle kernel NULL pointer dereference at
> 0008
> [ 3819.812387] IP: [81170aa5] create_empty_buffers+0x28/0xaf
> [ 3819.812527] PGD 219244067 PUD
On 01/17/2014 01:22 AM, Raghavendra K T wrote:
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index a2e69d2..6b0e049 100644
--- a/drivers/block/null_blk.c
+++ b/drivers/block/null_blk.c
@@ -535,6 +535,11 @@ static int null_add_dev(void)
if (!nullb)
On 01/09/2014 10:33 PM, Muthu Kumar wrote:
Thanks Matias. Yes, Ming Lei's 4th patch does make the function internal.
So, which branch has the latest patches... I am checking for-3.14/core...
Depends on what you want to do. You may use the Jens' for-linus branch
for the latest including blk
On 01/09/2014 07:54 PM, Muthu Kumar wrote:
Jens,
Compiling null_blk.ko failed with error that blk_mq_free_queue() was
defined implicitly. So, moved the declaration from block/blk-mq.h to
include/linux/blk-mq.h and exported it.
The patch from Ming Lei is missing in -rc6
From: Matias Bjørling
Randy Dunlap reported a couple of grammar errors and unfortunate usages of
socket/node/core.
Signed-off-by: Matias Bjorling
---
Documentation/block/null_blk.txt | 20 ++--
1 file changed, 10 insertions(+), 10 deletions(-)
diff --git a/Documentation/block
queues are mapped as node0[0,1],
node1[2,3], ...
If uneven, we are left with an uneven number of submit_queues that must be
mapped. These are mapped toward the first node and onward. E.g. 5
submit queues mapped onto 4 nodes are mapped as node0[0,1], node1[2], ...
Signed-off-by: Matias Bjorling
From: Matias Bjørling
The default for the module is to instantiate itself with blk-mq and a
submit queue for each CPU node in the system.
To save resources, initialize instead with a single submit queue.
Signed-off-by: Matias Bjorling
---
Documentation/block/null_blk.txt | 9
Hi,
These three patches cover:
* Incorporated the feedback from Randy Dunlap into documentation.
Can be merged with the previous documentation commit (6824518).
* Set use_per_node_hctx to false per default to save resources.
* Allow submit_queues and use_per_node_hctx to be used simultaneously.
parameter is ignored.
Thanks,
Matias
Matias Bjorling (3):
null_blk: documentation
null_blk: refactor init and init errors code paths
null_blk: warning on ignored submit_queues param
Documentation/block/null_blk.txt | 71
drivers/block/null_blk.c
Add description of module and its parameters.
Signed-off-by: Matias Bjorling
---
Documentation/block/null_blk.txt | 71
1 file changed, 71 insertions(+)
create mode 100644 Documentation/block/null_blk.txt
diff --git a/Documentation/block/null_blk.txt b
Let the user know when the number of submission queues is being
ignored.
Signed-off-by: Matias Bjorling
---
drivers/block/null_blk.c | 7 +--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index f0aeb2a..8f2e7c3 100644
paths.
Signed-off-by: Matias Bjorling
---
drivers/block/null_blk.c | 63 +---
1 file changed, 38 insertions(+), 25 deletions(-)
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index f370fc1..f0aeb2a 100644
--- a/drivers/block/null_blk.c
garbage into struct request_queue's
mq_map.
Signed-off-by: Matias Bjorling
---
drivers/block/null_blk.c | 8
1 file changed, 4 insertions(+), 4 deletions(-)
diff --git a/drivers/block/null_blk.c b/drivers/block/null_blk.c
index ea192ec..f370fc1 100644
--- a/drivers/block/null_blk.c
+++ b