Looks good.
Reviewed-by: Keith Busch
On Sun, May 21, 2017 at 08:20:02AM +0200, Christoph Hellwig wrote:
> > index d5e0906262ea..ce0d96913ee6 100644
> > --- a/drivers/nvme/host/core.c
> > +++ b/drivers/nvme/host/core.c
> > @@ -2437,7 +2437,13 @@ void nvme_kill_queues(struct nvme_ctrl *ctrl)
> > revalidate_disk(ns->disk);
>
The code in block/partitions/msdos.c recognizes FreeBSD, OpenBSD
and NetBSD partitions and does a reasonable job picking out OpenBSD
and NetBSD UFS subpartitions.
But for FreeBSD the subpartitions are always "bad".
Kernel:
---
Changelog v1->v2:
- Improve style, use +=
---
On Wed, 17 May 2017, Christoph Hellwig wrote:
> Thanks Richard,
>
> this looks good to me.
>
> Reviewed-by: Christoph Hellwig
>
> On Wed, May 17, 2017 at 06:28:53PM -0700, Richard Narron wrote:
> > The code in block/partitions/msdos.c recognizes FreeBSD, OpenBSD
> > and NetBSD
On Fri, 19 May 2017, Christoph Hellwig wrote:
> Factor out code from the x86 cpu hot plug code to program the affinity
> for a vector for a hot plug / hot unplug event.
> +bool irq_affinity_set(int irq, struct irq_desc *desc, const cpumask_t *mask)
> +{
> + struct irq_data *data =
On Fri, 19 May 2017, Christoph Hellwig wrote:
> - /* Stabilize the cpumasks */
> - get_online_cpus();
How is that protected against physical CPU hotplug? Physical CPU hotplug
manipulates the present mask.
> - nodes = get_nodes_in_cpumask(cpu_online_mask, );
> + nodes =
On Sun, 2017-05-21 at 08:32 +0200, Christoph Hellwig wrote:
> And btw, I didn't get your cover letter [0/18], did that get lost
> somewhere?
Hello Christoph,
Thanks for the review comments. The cover letter should have made it to at
least the linux-scsi mailing list since it shows up in at least
On Sun, May 21, 2017 at 12:04:27AM -0700, Christoph Hellwig wrote:
> On Wed, May 17, 2017 at 05:32:12PM +0900, Minchan Kim wrote:
> > Is block device(esp, zram which is compressed ram block device) okay to
> > return garbage when ongoing overwrite IO fails?
> >
> > O_DIRECT write 4 block "aaa.."
On Thu, May 11, 2017 at 12:30:32AM -0700, Christoph Hellwig wrote:
> Jens, can you pick this up for 4.12?
ping?
> On Wed, May 10, 2017 at 07:20:44PM +0400, Dmitry Monakhov wrote:
> > If bio has no data, such as ones from blkdev_issue_flush(),
> > then we have nothing to protect.
> >
> > This
On Thu, May 18, 2017 at 05:31:32PM +0800, Anand Jain wrote:
> You mean at btrfs: write_dev_flush()
> OR
> block: blkdev_issue_flush() ?
> Where I find
> q = bdev_get_queue(bdev);
> if (!q)
> return -ENXIO;
> isn't needed, since generic_make_request_checks()
On Wed, May 17, 2017 at 05:32:12PM +0900, Minchan Kim wrote:
> Is block device(esp, zram which is compressed ram block device) okay to
> return garbage when ongoing overwrite IO fails?
>
> O_DIRECT write 4 block "aaa.." -> success
> read 4 block "aaa.." -> success
> O_DIRECT write 4 block
On Thu, May 18, 2017 at 03:29:45PM +0200, Johannes Thumshirn wrote:
> On 05/18/2017 03:19 PM, Christoph Hellwig wrote:
> > All SG_IO test should also apply to block device nodes that support
> > the ioctl..
> >
>
> But these are not necessarily SG_IO tests, are they?
>
> The test included is
On Fri, May 19, 2017 at 11:29:59AM -0700, Bart Van Assche wrote:
> This function will be used by later patches in this series.
And it could already be used to simplify blk_alloc_flush_queue a bit..
Reviewed-by: Christoph Hellwig
On Fri, May 19, 2017 at 11:30:06AM -0700, Bart Van Assche wrote:
> Instead of explicitly calling scsi_req_init(), let
> blk_get_request() call that function from inside blk_rq_init().
> Add an .initialize_rq_fn() callback function to the block drivers
> that need it.
Thanks Bart,
this looks like
On Fri, May 19, 2017 at 11:30:05AM -0700, Bart Van Assche wrote:
> Several block drivers need to initialize the driver-private data
> after having called blk_get_request() and before .prep_rq_fn() is
> called, e.g. when submitting a REQ_OP_SCSI_* request. Avoid that
> this initialization code has
Hi Bart,
I think this is the wrong kind of check - while we do care about the
size of the queue, we only do it as a side effect of the queue
being able to handle REQ_OP_SCSI_IN/REQ_OP_SCSI_OUT commands.
I think we'll need a flag for those in the queue instead.
And btw, I didn't get your cover
Looks good,
Reviewed-by: Christoph Hellwig