On 01/11/2018 03:12 AM, Mike Snitzer wrote:
> DM is now no longer prone to having its request_queue be improperly
> initialized.
>
> Summary of changes:
>
> - defer DM's blk_register_queue() from add_disk()-time until
> dm_setup_md_queue() by setting QUEUE_FLAG_DEFER_REG in alloc_dev().
>
> -
On 01/11/2018 03:12 AM, Mike Snitzer wrote:
> Since I can remember DM has forced the block layer to allow the
> allocation and initialization of the request_queue to be distinct
> operations. Reason for this is block/genhd.c:add_disk() requires
> that the request_queue (and associated bdi) be
On 01/11/2018 03:12 AM, Mike Snitzer wrote:
> device_add_disk() will only call bdi_register_owner() if
> !GENHD_FL_HIDDEN, so it follows that del_gendisk() should only call
> bdi_unregister() if !GENHD_FL_HIDDEN.
>
> Found with code inspection. bdi_unregister() won't do any harm if
>
On 01/10/2018 08:39 PM, Bart Van Assche wrote:
> Both add_wait_queue() and blk_mq_dispatch_wake() protect wait queue
> manipulations with the wait queue lock. Hence also protect the
> !list_empty(&wait->entry) test with the wait queue lock instead of
> the hctx lock.
>
> Signed-off-by: Bart Van Assche
On 01/10/2018 08:39 PM, Bart Van Assche wrote:
> This patch does not change any functionality but makes the
> blk_mq_mark_tag_wait() code slightly easier to read.
>
> Signed-off-by: Bart Van Assche
> Cc: Christoph Hellwig
> Cc: Omar Sandoval
On 2018-01-09 11:05 AM, Dmitry Vyukov wrote:
Hello,
syzkaller has found the following memory leak:
unreferenced object 0x88004c19 (size 8328):
comm "syz-executor", pid 4627, jiffies 4294749150 (age 45.507s)
hex dump (first 32 bytes):
00 00 00 00 00 00 00 00 00 00 00 00 00 00
In the following patch, we will use blk_mq_try_issue_directly() for DM
to return the dispatch result, and DM needs this information to improve
I/O merge.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 23 ++-
1 file changed, 14 insertions(+), 9 deletions(-)
blk_insert_cloned_request() is called in the fast path of the dm-rq driver,
and in this function we append the request to the hctx->dispatch_list of the
underlying queue directly.
1) This way isn't efficient enough because the hctx lock is always required
2) With blk_insert_cloned_request(), we bypass underlying
No functional change, just a bit of code cleanup, so that the following
change of using direct issue for blk_mq_request_bypass_insert(), which is
needed by DM, is easier to do.
Signed-off-by: Ming Lei
---
block/blk-mq.c | 39 +++
1
blk-mq will rerun the queue via RESTART or dispatch wake after one request
is completed, so it isn't necessary to wait a random amount of time before
requeuing; we should trust blk-mq to do it.
More importantly, we need to return BLK_STS_RESOURCE to blk-mq so that
dequeuing from the I/O scheduler can be stopped, then I/O merge
If .queue_rq() returns BLK_STS_RESOURCE, blk-mq will rerun the queue in
the following three situations:
1) if BLK_MQ_S_SCHED_RESTART is set
- queue is rerun after one rq is completed, see blk_mq_sched_restart()
which is run from blk_mq_free_request()
2) run out of driver tag
- queue is rerun after one tag
Hi Guys,
The 1st patch removes the workaround of blk_mq_delay_run_hw_queue() in
case of requeue; this workaround isn't necessary and, worse, it keeps
BLK_MQ_S_SCHED_RESTART from working and degrades I/O performance.
The 2nd patch returns DM_MAPIO_REQUEUE to dm-rq if underlying request
allocation
Now blktrace supports outputting cgroup info for trace action and
trace message, however, it can only be enabled globally by writing
"blk_cgroup" to trace_options file, and there is no per-device API
for the new functionality.
Adding a new field (enable_cg_info) by using the pad after act_mask
in
On Wed, Jan 10, 2018 at 09:12:56PM -0500, Mike Snitzer wrote:
> DM is now no longer prone to having its request_queue be improperly
> initialized.
>
> Summary of changes:
>
> - defer DM's blk_register_queue() from add_disk()-time until
> dm_setup_md_queue() by setting QUEUE_FLAG_DEFER_REG in
On Wed, Jan 10, 2018 at 09:12:55PM -0500, Mike Snitzer wrote:
> Since I can remember DM has forced the block layer to allow the
> allocation and initialization of the request_queue to be distinct
> operations. Reason for this is block/genhd.c:add_disk() requires
> that the request_queue (and
On Wed, Jan 10, 2018 at 09:12:54PM -0500, Mike Snitzer wrote:
> device_add_disk() will only call bdi_register_owner() if
> !GENHD_FL_HIDDEN, so it follows that del_gendisk() should only call
> bdi_unregister() if !GENHD_FL_HIDDEN.
>
> Found with code inspection. bdi_unregister() won't do any
On Wed, Jan 10, 2018 at 10:18:16AM -0800, Bart Van Assche wrote:
> Several SCSI transport and LLD drivers surround code that does not
> tolerate concurrent calls of .queuecommand() with scsi_target_block() /
> scsi_target_unblock(). These last two functions use
> blk_mq_quiesce_queue() /
Hi Jens,
I eliminated my implementation that set disk->queue = NULL before
calling add_disk(). As we discussed it left way too much potential
for NULL pointer crashes and I agree it was too fragile.
This v3's approach is much simpler. It adjusts block core so that
blk_register_queue() can be
DM is now no longer prone to having its request_queue be improperly
initialized.
Summary of changes:
- defer DM's blk_register_queue() from add_disk()-time until
dm_setup_md_queue() by setting QUEUE_FLAG_DEFER_REG in alloc_dev().
- dm_setup_md_queue() is updated to fully initialize DM's
Since I can remember DM has forced the block layer to allow the
allocation and initialization of the request_queue to be distinct
operations. Reason for this is block/genhd.c:add_disk() requires
that the request_queue (and associated bdi) be tied to the gendisk
before add_disk() is called --
device_add_disk() will only call bdi_register_owner() if
!GENHD_FL_HIDDEN, so it follows that del_gendisk() should only call
bdi_unregister() if !GENHD_FL_HIDDEN.
Found with code inspection. bdi_unregister() won't do any harm if
bdi_register_owner() wasn't used but best to avoid the unnecessary
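The symmetry that patch restores can be sketched in plain C: the unregister path should be guarded by the same !GENHD_FL_HIDDEN condition as the register path so the calls stay paired. All names, stubs, and the flag value below are illustrative stand-ins, not the kernel code:

```c
#include <stdbool.h>

#define GENHD_FL_HIDDEN 0x1   /* flag value is illustrative */

static int registered;        /* counts paired register/unregister calls */

static void bdi_register_owner_stub(void) { registered++; }
static void bdi_unregister_stub(void)     { registered--; }

/* add side: register the bdi only for non-hidden disks */
static void device_add_disk_sketch(int flags)
{
    if (!(flags & GENHD_FL_HIDDEN))
        bdi_register_owner_stub();
}

/* del side: mirror the same condition so register/unregister stay paired */
static void del_gendisk_sketch(int flags)
{
    if (!(flags & GENHD_FL_HIDDEN))
        bdi_unregister_stub();
}
```

With the mirrored guard, hidden disks never see either call and visible disks always see both, so the counter balances in both cases.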
Hello Jens,
This patch series reworks the blk_mq_mark_tag_wait() implementation and also
fixes a race condition in that function. Please consider these two patches for
kernel v4.16.
Thanks,
Bart.
Changes compared to v3:
- Reworked patch 1/2 such that it uses if (...) ...; ... instead of
if
This patch does not change any functionality but makes the
blk_mq_mark_tag_wait() code slightly easier to read.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Omar Sandoval
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Both add_wait_queue() and blk_mq_dispatch_wake() protect wait queue
manipulations with the wait queue lock. Hence also protect the
!list_empty(&wait->entry) test with the wait queue lock.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Omar Sandoval
On Wed, 2018-01-10 at 13:30 -0700, Jens Axboe wrote:
> On 1/10/18 12:39 PM, Bart Van Assche wrote:
> > This patch does not change any functionality but makes the
> > blk_mq_mark_tag_wait() code slightly easier to read.
>
> I agree it could do with a cleanup, but how about something like the
>
On 1/10/18 12:39 PM, Bart Van Assche wrote:
> This patch does not change any functionality but makes the
> blk_mq_mark_tag_wait() code slightly easier to read.
I agree it could do with a cleanup, but how about something like the
below? I think that's easier to read.
diff --git a/block/blk-mq.c
Hello Jens,
This patch series reworks the blk_mq_mark_tag_wait() implementation and also
fixes a race condition in that function. Please consider these two patches for
kernel v4.16.
Thanks,
Bart.
Changes compared to v1:
- Split a single patch into two patches to make reviewing easier.
- The
This patch does not change any functionality but makes the
blk_mq_mark_tag_wait() code slightly easier to read.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Omar Sandoval
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
Both add_wait_queue() and blk_mq_dispatch_wake() protect wait queue
manipulations with the wait queue lock. Hence also protect the
!list_empty(&wait->entry) test with the wait queue lock instead of
the hctx lock.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
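The locking rule behind that fix can be sketched in user-space C: the emptiness test must take the same lock that guards list manipulation, or it races with concurrent updates. The types, names, and pthread locking below are illustrative stand-ins for the kernel's wait-queue primitives, not the real blk-mq code:

```c
#include <pthread.h>

/* Toy stand-ins for the kernel's circular-list/wait-queue types. */
struct entry {
    struct entry *prev, *next;
};

static struct entry waitqueue = { &waitqueue, &waitqueue }; /* empty ring */
static pthread_mutex_t wq_lock = PTHREAD_MUTEX_INITIALIZER;

/* mirrors list_empty(): a self-linked node is not on any list */
static int entry_queued(const struct entry *e)
{
    return e->next != e;
}

static void add_entry(struct entry *e)          /* add_wait_queue() side */
{
    pthread_mutex_lock(&wq_lock);
    e->next = waitqueue.next;
    e->prev = &waitqueue;
    waitqueue.next->prev = e;
    waitqueue.next = e;
    pthread_mutex_unlock(&wq_lock);
}

/* The patch's point: the !list_empty() *test* must hold the same lock
 * that add/remove hold, not some unrelated lock. */
static int entry_queued_locked(struct entry *e)
{
    pthread_mutex_lock(&wq_lock);
    int queued = entry_queued(e);
    pthread_mutex_unlock(&wq_lock);
    return queued;
}
```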
On 1/10/18 12:34 PM, Bart Van Assche wrote:
> This patch avoids that sparse reports the following:
>
> block/blk-mq.c:637:33: warning: context imbalance in 'hctx_unlock' -
> unexpected unlock
> block/blk-mq.c:642:9: warning: context imbalance in 'hctx_lock' - wrong count
> at exit
Thanks Bart,
This patch avoids that sparse reports the following:
block/blk-mq.c:637:33: warning: context imbalance in 'hctx_unlock' - unexpected
unlock
block/blk-mq.c:642:9: warning: context imbalance in 'hctx_lock' - wrong count
at exit
Signed-off-by: Bart Van Assche
Cc: Tejun
On 1/10/18 8:54 AM, Paolo Bonzini wrote:
> After the first few months, the message has not led to many bug reports.
> It's been almost five years now, and in practice the main source of
> it seems to be MTIOCGET that someone is using to detect tape devices.
> While we could whitelist it just like
On Wed, Jan 10, 2018 at 10:04:28AM -0800, Bart Van Assche wrote:
> Use prepare_to_wait() and finish_wait() instead of open-coding these
> functions. Reduce the number of if-statements to make
> blk_mq_mark_tag_wait() easier to read. Both add_wait_queue() and
> blk_mq_dispatch_wake() protect wait
On Wed, Jan 10, 2018 at 06:42:17PM +, Bart Van Assche wrote:
> On Wed, 2018-01-10 at 11:35 -0700, Jens Axboe wrote:
> > On 1/10/18 11:33 AM, Bart Van Assche wrote:
> > > On Wed, 2018-01-10 at 11:32 -0700, Jens Axboe wrote:
> > > > On 1/10/18 11:29 AM, Bart Van Assche wrote:
> > > > > On Tue,
On 1/10/18 11:43 AM, Omar Sandoval wrote:
>> -INIT_LIST_HEAD(&rq->queuelist);
>> /* csd/requeue_work/fifo_time is initialized before use */
>> rq->q = data->q;
>> rq->mq_ctx = data->ctx;
>> +rq->rq_flags = 0;
>> +rq->cpu = -1;
>> rq->cmd_flags = op;
>> if
On Tue, Jan 09, 2018 at 05:29:27PM -0700, Jens Axboe wrote:
> Move completion related items (like the call single data) near the
> end of the struct, instead of mixing them in with the initial
> queueing related fields.
>
> Move queuelist below the bio structures. Then we have all
> queueing
On Wed, 2018-01-10 at 11:35 -0700, Jens Axboe wrote:
> On 1/10/18 11:33 AM, Bart Van Assche wrote:
> > On Wed, 2018-01-10 at 11:32 -0700, Jens Axboe wrote:
> > > On 1/10/18 11:29 AM, Bart Van Assche wrote:
> > > > On Tue, 2018-01-09 at 17:29 -0700, Jens Axboe wrote:
> > > > > @@ -313,8 +307,6 @@
On Tue, Jan 09, 2018 at 05:29:25PM -0700, Jens Axboe wrote:
> We reduce the resolution of request expiry, but since we're already
> using jiffies for this where resolution depends on the kernel
> configuration and since the timeout resolution is coarse anyway,
> that should be fine.
Reviewed-by:
On Tue, Jan 09, 2018 at 05:29:24PM -0700, Jens Axboe wrote:
> We don't need this to be an atomic flag, it can be a regular
> flag. We either end up on the same CPU for the polling, in which
> case the state is sane, or we did the sleep which would imply
> the needed barrier to ensure we see the
On 1/10/18 11:33 AM, Bart Van Assche wrote:
> On Wed, 2018-01-10 at 11:32 -0700, Jens Axboe wrote:
>> On 1/10/18 11:29 AM, Bart Van Assche wrote:
>>> On Tue, 2018-01-09 at 17:29 -0700, Jens Axboe wrote:
@@ -313,8 +307,6 @@ int __blk_mq_debugfs_rq_show(struct seq_file *m,
struct request
On Tue, 2018-01-09 at 17:29 -0700, Jens Axboe wrote:
> Move completion related items (like the call single data) near the
> end of the struct, instead of mixing them in with the initial
> queueing related fields.
>
> Move queuelist below the bio structures. Then we have all
> queueing related
On Wed, 2018-01-10 at 11:32 -0700, Jens Axboe wrote:
> On 1/10/18 11:29 AM, Bart Van Assche wrote:
> > On Tue, 2018-01-09 at 17:29 -0700, Jens Axboe wrote:
> > > @@ -313,8 +307,6 @@ int __blk_mq_debugfs_rq_show(struct seq_file *m,
> > > struct request *rq)
> > > seq_puts(m, ", .rq_flags=");
> >
On 1/10/18 11:25 AM, Bart Van Assche wrote:
> On Tue, 2018-01-09 at 17:29 -0700, Jens Axboe wrote:
>> We don't need this to be an atomic flag, it can be a regular
>> flag. We either end up on the same CPU for the polling, in which
>> case the state is sane, or we did the sleep which would imply
>>
On 1/10/18 11:29 AM, Bart Van Assche wrote:
> On Tue, 2018-01-09 at 17:29 -0700, Jens Axboe wrote:
>> @@ -313,8 +307,6 @@ int __blk_mq_debugfs_rq_show(struct seq_file *m, struct
>> request *rq)
>> seq_puts(m, ", .rq_flags=");
>> blk_flags_show(m, (__force unsigned int)rq->rq_flags,
On Tue, 2018-01-09 at 17:29 -0700, Jens Axboe wrote:
> @@ -313,8 +307,6 @@ int __blk_mq_debugfs_rq_show(struct seq_file *m, struct
> request *rq)
> seq_puts(m, ", .rq_flags=");
> blk_flags_show(m, (__force unsigned int)rq->rq_flags, rqf_name,
>
On Tue, 2018-01-09 at 17:29 -0700, Jens Axboe wrote:
> We reduce the resolution of request expiry, but since we're already
> using jiffies for this where resolution depends on the kernel
> configuration and since the timeout resolution is coarse anyway,
> that should be fine.
Reviewed-by: Bart
On Tue, 2018-01-09 at 17:29 -0700, Jens Axboe wrote:
> We don't need this to be an atomic flag, it can be a regular
> flag. We either end up on the same CPU for the polling, in which
> case the state is sane, or we did the sleep which would imply
> the needed barrier to ensure we see the right
Introduce functions that allow block drivers to wait while a request
queue is in the quiesced state (blk-mq) or in the stopped state (legacy
block layer). The next patch will add calls to these functions in the
SCSI core.
Signed-off-by: Bart Van Assche
Cc: Martin K.
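The wait-while-quiesced idea can be sketched with a condition variable. This is a user-space stand-in: `quiesce`, `unquiesce`, `wait_unquiesced`, and the flag are illustrative names, not the kernel API the patch introduces:

```c
#include <pthread.h>
#include <stdbool.h>

static pthread_mutex_t q_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  q_cond = PTHREAD_COND_INITIALIZER;
static bool quiesced;

static void quiesce(void)
{
    pthread_mutex_lock(&q_lock);
    quiesced = true;
    pthread_mutex_unlock(&q_lock);
}

static void unquiesce(void)
{
    pthread_mutex_lock(&q_lock);
    quiesced = false;
    pthread_cond_broadcast(&q_cond);   /* wake everyone parked below */
    pthread_mutex_unlock(&q_lock);
}

/* Callers block here until the quiesced state is cleared. */
static void wait_unquiesced(void)
{
    pthread_mutex_lock(&q_lock);
    while (quiesced)                   /* loop guards spurious wakeups */
        pthread_cond_wait(&q_cond, &q_lock);
    pthread_mutex_unlock(&q_lock);
}
```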
Several SCSI transport and LLD drivers surround code that does not
tolerate concurrent calls of .queuecommand() with scsi_target_block() /
scsi_target_unblock(). These last two functions use
blk_mq_quiesce_queue() / blk_mq_unquiesce_queue() for scsi-mq request
queues to prevent concurrent
Hello Jens,
A longstanding issue with the SCSI core is that several SCSI transport drivers
use scsi_target_block() and scsi_target_unblock() to avoid concurrent
.queuecommand() calls during e.g. transport recovery but that this is not
sufficient to protect from such calls. Hence this patch
The previous two patches guarantee that srp_queuecommand() does not get
invoked while reconnecting occurs. Hence remove the code from
srp_queuecommand() that prevents command queueing while reconnecting.
This patch avoids that the following can appear in the kernel log:
BUG: sleeping function
Rename a waitqueue in struct request_queue since the next patch will
add code that uses this waitqueue outside the request queue freezing
implementation.
Signed-off-by: Bart Van Assche
Reviewed-by: Hannes Reinecke
Cc: Christoph Hellwig
Cc:
On 1/10/18 1:28 AM, Christoph Hellwig wrote:
> The whole series looks fine to me:
>
> Reviewed-by: Christoph Hellwig
>
> Jens, do you want me to apply this to the nvme tree, or pick it up
> directly?
I queued it up, thanks.
--
Jens Axboe
On Wed, Jan 10, 2018 at 09:34:02AM -0700, Jens Axboe wrote:
> It's yet another check that adds part lookup and rcu lock/unlock in that
> path. Can we combine some of them? Make this part of the remap? This
> overhead impacts every IO, let's not bloat it more than absolutely
> necessary.
Yes, we
On Wed, Jan 10, 2018 at 04:54:52PM +0100, Paolo Bonzini wrote:
> After the first few months, the message has not led to many bug reports.
> It's been almost five years now, and in practice the main source of
> it seems to be MTIOCGET that someone is using to detect tape devices.
> While we could
On Wed, Jan 10, 2018 at 5:34 PM, Jens Axboe wrote:
> On 1/10/18 9:18 AM, Ilya Dryomov wrote:
>> Regular block device writes go through blkdev_write_iter(), which does
>> bdev_read_only(), while zeroout/discard/etc requests are never checked,
>> both userspace- and
On 1/10/18 9:33 AM, Bart Van Assche wrote:
> It is nontrivial to derive from the blk-mq source code when
> blk_mq_tags.active_queues is decremented. Hence add a comment that
> explains this.
That is how it works, applied the patch, thanks Bart.
--
Jens Axboe
bdi debugfs dir/file creation may fail; add an error log here.
Signed-off-by: weiping zhang
---
V1->V2:
fix indentation and make the log message clearer
mm/backing-dev.c | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/mm/backing-dev.c
On 1/10/18 9:18 AM, Ilya Dryomov wrote:
> Regular block device writes go through blkdev_write_iter(), which does
> bdev_read_only(), while zeroout/discard/etc requests are never checked,
> both userspace- and kernel-triggered. Add a generic catch-all check to
> generic_make_request_checks() to
It is nontrivial to derive from the blk-mq source code when
blk_mq_tags.active_queues is decremented. Hence add a comment that
explains this.
Signed-off-by: Bart Van Assche
Cc: Christoph Hellwig
Cc: Hannes Reinecke
Cc: Johannes Thumshirn
2018-01-11 0:10 GMT+08:00 Bart Van Assche :
> On Wed, 2018-01-10 at 23:18 +0800, weiping zhang wrote:
>> bdi debugfs dir/file creation may fail; add an error log here.
>>
>> Signed-off-by: weiping zhang
>> ---
>> mm/backing-dev.c | 3 ++-
>> 1
Similar to blkdev_write_iter(), return -EPERM if the partition is
read-only. This covers ioctl(), fallocate() and most in-kernel users
but isn't meant to be exhaustive -- everything else will be caught in
generic_make_request_checks(), fail with -EIO and can be fixed later.
Signed-off-by: Ilya
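The check being added can be sketched as follows; `struct part` and `check_writable` are toy stand-ins for the kernel's `hd_struct->policy` flag and `bdev_read_only()`-style test, not the actual patch:

```c
#include <errno.h>
#include <stdbool.h>

/* Illustrative stand-in for the per-partition read-only flag
 * set via ioctl(BLKROSET); not the real kernel types. */
struct part {
    bool policy;   /* true => partition is read-only */
};

/* Mirrors the early check: write-like requests (writes, zeroout,
 * discard, ...) against a read-only partition fail with -EPERM
 * before being issued; reads pass through. */
static int check_writable(const struct part *p, bool is_write)
{
    if (is_write && p->policy)
        return -EPERM;
    return 0;
}
```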
Hello,
I was doing some cleanup work on rbd BLKROSET handler and discovered
that we ignore partition rw/ro setting (hd_struct->policy) for pretty
much everything but straight writes.
David (CCed) has blktests patches standing by.
(Another aspect of this is that we don't enforce open(2) mode.
Regular block device writes go through blkdev_write_iter(), which does
bdev_read_only(), while zeroout/discard/etc requests are never checked,
both userspace- and kernel-triggered. Add a generic catch-all check to
generic_make_request_checks() to actually enforce ioctl(BLKROSET) and
On Wed, 2018-01-10 at 23:18 +0800, weiping zhang wrote:
> bdi debugfs dir/file creation may fail; add an error log here.
>
> Signed-off-by: weiping zhang
> ---
> mm/backing-dev.c | 3 ++-
> 1 file changed, 2 insertions(+), 1 deletion(-)
>
> diff --git
After the first few months, the message has not led to many bug reports.
It's been almost five years now, and in practice the main source of
it seems to be MTIOCGET that someone is using to detect tape devices.
While we could whitelist it just like CDROM_GET_CAPABILITY, this patch
just removes the
On 10.01.2018 05:40, Ming Lei wrote:
> On Tue, Jan 09, 2018 at 08:02:53PM +0300, Dmitry Osipenko wrote:
>> On 09.01.2018 17:33, Ming Lei wrote:
>>> On Tue, Jan 09, 2018 at 04:18:39PM +0300, Dmitry Osipenko wrote:
On 09.01.2018 05:34, Ming Lei wrote:
> On Tue, Jan 09, 2018 at 12:09:27AM
On 1/10/18 7:58 AM, Paolo Valente wrote:
> Commit ("block, bfq: release oom-queue ref to root group on exit")
> added a missing put of the root bfq group for the oom queue. That put
> has to be, and can be, performed only if CONFIG_BFQ_GROUP_IOSCHED is
> defined: the function doing the put is even
bdi debugfs dir/file creation may fail; add an error log here.
Signed-off-by: weiping zhang
---
mm/backing-dev.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/mm/backing-dev.c b/mm/backing-dev.c
index b5f940c..9117c21 100644
---
Hello,
On Tue, Jan 09, 2018 at 11:28:55PM +0100, Paolo Valente wrote:
> Yep. So, do you guys think that our proposal may be ok? We are
> waiting just for the green light to start implementing it.
Yeah, sounds reasonable to me. Non-debug stats are already in blkcg
core anyway and I wouldn't
Hi Jens,
unfortunately, the patch of mine that you applied yesterday ("block,
bfq: release oom-queue ref to root group on exit"), does not compile
if CONFIG_BFQ_GROUP_IOSCHED is not defined. I forgot to test that patch
with that option disabled. Honestly, I'm more and more uncertain
about how
Commit ("block, bfq: release oom-queue ref to root group on exit")
added a missing put of the root bfq group for the oom queue. That put
has to be, and can be, performed only if CONFIG_BFQ_GROUP_IOSCHED is
defined: the function doing the put is not even defined at all if
CONFIG_BFQ_GROUP_IOSCHED
On 1/10/18 7:24 AM, Chiara Bruschi wrote:
> Hi Jens,
> have you had time to look into this?
I missed that Paolo had already reviewed it, I will queue it up.
--
Jens Axboe
Hi Jens,
have you had time to look into this?
Thank you,
Chiara Bruschi
On 12/18/17 5:21 PM, Chiara Bruschi wrote:
Commit '7b9e93616399' ("blk-mq-sched: unify request finished methods")
changed the old name of current bfq_finish_request method, but left it
unchanged elsewhere in the code
On Wed, Jan 10 2018 at 2:55am -0500,
Ming Lei wrote:
> On Wed, Jan 10, 2018 at 12:21 PM, Mike Snitzer wrote:
> > On Tue, Jan 09 2018 at 10:46pm -0500,
> > Ming Lei wrote:
> >
> >> Another related issue is that blk-mq debugfs
On 01/10/2018 09:32 AM, Christoph Hellwig wrote:
> On Tue, Jan 09, 2018 at 09:41:03PM -0500, Mike Snitzer wrote:
>> Since I can remember DM has forced the block layer to allow the
>> allocation and initialization of the request_queue to be distinct
>> operations. Reason for this was
From: Tang Junhui
After a long run of random small-IO writes, the machine was rebooted,
and after power-on bcache got stuck. The stack is:
[root@ceph153 ~]# cat /proc/2510/task/*/stack
[] closure_sync+0x25/0x90 [bcache]
[] bch_journal+0x118/0x2b0 [bcache]
[]
On Tue, Jan 09, 2018 at 09:41:03PM -0500, Mike Snitzer wrote:
> Since I can remember DM has forced the block layer to allow the
> allocation and initialization of the request_queue to be distinct
> operations. Reason for this was block/genhd.c:add_disk() has required
> that the request_queue (and
It's in fact completely harmless :) But not calling it at all is
obviously just as fine.
The whole series looks fine to me:
Reviewed-by: Christoph Hellwig
Jens, do you want me to apply this to the nvme tree, or pick it up
directly?
Looks good,
Reviewed-by: Johannes Thumshirn
--
Johannes Thumshirn Storage
jthumsh...@suse.de    +49 911 74053 689
SUSE LINUX GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: Felix Imendörffer, Jane Smithard, Graham