On Thu, Jan 26, 2017 at 10:17:58AM +0100, Greg KH wrote:
> Ok, but do you feel the "loop method" of using a char device node to
> create/control these devices is a good model to follow for new devices
> like nbd?
Yes. We've done the same for NVMe over fabrics.
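For comparison, this is what the loop model looks like from userspace; a minimal sketch against the existing /dev/loop-control interface (an nbd control node would presumably mirror it):

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/loop.h>

int main(void)
{
	/* open the control node, not an individual device */
	int ctl = open("/dev/loop-control", O_RDWR);
	if (ctl < 0) {
		perror("open /dev/loop-control");
		return 1;
	}

	/* ask the driver to allocate or find a free device index */
	int idx = ioctl(ctl, LOOP_CTL_GET_FREE);
	if (idx < 0) {
		perror("LOOP_CTL_GET_FREE");
		return 1;
	}

	printf("free device: /dev/loop%d\n", idx);
	close(ctl);
	return 0;
}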
On Wed, Jan 25, 2017 at 03:36:20PM -0600, Eric Blake wrote:
> How do you get an fd to an existing nbd block device? Your intent is to
> use an ioctl to request creating/opening a new nbd device that no one
> else is using; opening an existing device in order to send that ioctl
> may have negative
On Mon, Oct 03, 2016 at 09:51:49AM +0200, Wouter Verhelst wrote:
> Actually, I was pointing out the TCP head-of-line issue, where a delay
> on the socket that contains the flush reply would result in the arrival
> in the kernel block layer of a write reply before the said flush reply,
> resulting
On Mon, Oct 03, 2016 at 01:47:06AM +0000, Josef Bacik wrote:
> It's not "broken", it's working as designed, and any fs on top of this patch
> will be perfectly safe because they all wait for their io to complete before
> issuing the FLUSH. If somebody wants to address the paranoid case later
On Thu, Sep 15, 2016 at 01:39:11PM +0100, Alex Bligh wrote:
> That's probably right in the case of file-based back ends that
> are running on a Linux OS. But gonbdserver for instance supports
> (e.g.) Ceph based backends, where each connection might be talking
> to a completely separate ceph node,
On Thu, Sep 15, 2016 at 01:33:20PM +0100, Alex Bligh wrote:
> At an implementation level that is going to be a little difficult
> for some NBD servers, e.g. ones that fork() a different process per
> connection. There is in general no IPC to speak of between server
> instances. Such servers would
On Thu, Sep 15, 2016 at 02:21:20PM +0200, Wouter Verhelst wrote:
> Right. So do I understand you correctly that blk-mq currently doesn't
> look at multiple queues, and just assumes that if a FLUSH is sent over
> any one of the queues, it applies to all queues?
Yes. The same is true at the
On Thu, Sep 15, 2016 at 02:01:59PM +0200, Wouter Verhelst wrote:
> Yes. There was some discussion on that part, and we decided that setting
> the flag doesn't hurt, but the spec also clarifies that using it on READ
> does nothing, semantically.
>
>
> The problem is that there are clients in the
On Thu, Sep 15, 2016 at 01:11:24PM +0100, Alex Bligh wrote:
> > NBD_CMD_FLUSH (3)
> >
> > A flush request; a write barrier.
>
> I can see that's potentially confusing as it isn't meant to mean 'an old-style
> linux kernel block device write barrier'. I think in general terms it
> probably is some
On Thu, Sep 15, 2016 at 01:55:14PM +0200, Wouter Verhelst wrote:
> Maybe I'm not using the correct terminology here. The point is that
> after a FLUSH, the server asserts that all write commands *for which a
> reply has already been sent to the client* will also have reached
> permanent storage.
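To make that concrete: a file-backed server can satisfy this contract by syncing its backing file before it sends the flush reply. A minimal sketch (handle_flush is an illustrative name, not taken from any real server):

#include <unistd.h>

/* Serve NBD_CMD_FLUSH for a file-backed export. Any write the server
 * has already replied to is at worst in the page cache, so a single
 * fdatasync() makes all of them durable. */
static int handle_flush(int backing_fd)
{
	if (fdatasync(backing_fd) < 0)
		return -1;	/* send an error reply instead */

	/* only now may the flush reply go out: it asserts durability */
	return 0;
}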
On Thu, Sep 15, 2016 at 12:46:07PM +0100, Alex Bligh wrote:
> Essentially NBD does support FLUSH/FUA like this:
>
> https://www.kernel.org/doc/Documentation/block/writeback_cache_control.txt
>
> I.e. it supports the same FLUSH/FUA primitives as other block drivers (AIUI).
>
> Link to protocol (per
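In driver terms that maps onto the block layer roughly as follows (an illustrative sketch; nbd_send_flush/nbd_send_write are hypothetical stand-ins for the real command submission path):

#include <linux/blkdev.h>

static int nbd_send_flush(void);                           /* -> NBD_CMD_FLUSH */
static int nbd_send_write(struct request *req, bool fua);  /* FUA -> NBD_CMD_FLAG_FUA */

static int nbd_map_request(struct request *req)
{
	if (req_op(req) == REQ_OP_FLUSH)
		return nbd_send_flush();

	if (req_op(req) == REQ_OP_WRITE)
		return nbd_send_write(req, req->cmd_flags & REQ_FUA);

	return -EIO;
}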
On Thu, Sep 15, 2016 at 12:43:35PM +0100, Alex Bligh wrote:
> Sure, it's at:
>
> https://github.com/yoe/nbd/blob/master/doc/proto.md#ordering-of-messages-and-writes
>
> and that link takes you to the specific section.
>
> The treatment of FLUSH and FUA is meant to mirror exactly the
> linux
On Thu, Sep 15, 2016 at 12:09:28PM +0100, Alex Bligh wrote:
> A more general point is that with multiple queues requests
> may be processed in a different order even by those servers that
> currently process the requests in strict order, or in something
> similar to strict order. The server is
On Thu, Sep 15, 2016 at 12:49:35PM +0200, Wouter Verhelst wrote:
> A while back, we spent quite some time defining the semantics of the
> various commands in the face of the NBD_CMD_FLUSH and NBD_CMD_FLAG_FUA
> write barriers. At the time, we decided that it would be unreasonable
> to expect
Hi Josef,
I haven't read the full patch as I'm a bit in a hurry, but is there
a good reason to not simply have a socket per-hw_ctx and store it in
the hw_ctx private data instead of using the index in the nbd_cmd
structure?
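i.e. something along these lines (a hedged sketch; the nbd_device/socks layout is illustrative):

static int nbd_init_hctx(struct blk_mq_hw_ctx *hctx, void *data,
			 unsigned int index)
{
	struct nbd_device *nbd = data;

	/* one socket per hardware context */
	hctx->driver_data = nbd->socks[index];
	return 0;
}

static int nbd_queue_rq(struct blk_mq_hw_ctx *hctx,
			const struct blk_mq_queue_data *bd)
{
	struct socket *sock = hctx->driver_data;

	/* submit bd->rq over this queue's own socket; no index lookup */
	return BLK_MQ_RQ_QUEUE_OK;
}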
Hi Markus,
this looks great!
Reviewed-by: Christoph Hellwig <h...@lst.de>
One thing I noticed, which might be a good cleanup in the future:
> - spin_lock_irqsave(&nbd->tasks_lock, flags);
> nbd->task_recv = current;
> - spin_unlock_irqrestore(&nbd->tasks_lock, flags);
On Sun, Oct 25, 2015 at 03:27:13PM +0100, Oleg Nesterov wrote:
> It is not safe to use the task_struct returned by kthread_run(threadfn)
> if threadfn() can exit before the "owner" does kthread_stop(), nothing
> protects this task_struct.
>
> So __nbd_ioctl() looks buggy; a killed
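The usual way to honor that contract is to make the thread function exit only once kthread_stop() has been called, e.g. (minimal sketch, names illustrative):

#include <linux/kthread.h>

static int nbd_thread(void *data)
{
	while (!kthread_should_stop()) {
		/* do work ... */
	}
	return 0;	/* exits only via kthread_stop() */
}

static void nbd_start_stop_example(struct nbd_device *nbd)
{
	struct task_struct *t = kthread_run(nbd_thread, nbd, "nbd-recv");

	if (IS_ERR(t))
		return;

	/* t stays valid: nbd_thread cannot exit before kthread_stop() */
	kthread_stop(t);
}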
This series looks good to me,
Reviewed-by: Christoph Hellwig <h...@lst.de>
The series looks fine to me:
Reviewed-by: Christoph Hellwig <h...@lst.de>
On Mon, Apr 06, 2015 at 12:28:22AM +0800, Ming Lei wrote:
> Another simpler way is to make lo_refcnt an atomic_t and remove
> lo_ctrl_mutex in lo_open(), and freeze the request queue during clearing
> the fd; better to freeze the queue during setting the fd too, so I will
> update v1 with this approach.
Using an
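For reference, the freeze approach maps onto the existing blk-mq helpers roughly like this (an illustrative sketch, not the eventual patch):

#include <linux/blk-mq.h>

static int loop_clr_fd_sketch(struct loop_device *lo)
{
	/* wait out all in-flight requests and block new ones */
	blk_mq_freeze_queue(lo->lo_queue);

	/* ... safe to detach lo->lo_backing_file here ... */

	blk_mq_unfreeze_queue(lo->lo_queue);
	return 0;
}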
On Thu, Apr 02, 2015 at 10:11:39AM +0200, Markus Pargmann wrote:
> +/*
> + * Forcibly shutdown the socket causing all listeners to error
> + */
> static void sock_shutdown(struct nbd_device *nbd, int lock)
> {
> - /* Forcibly shutdown the socket causing all listeners
> - * to error
> - *