Device interfaces (was: USB Mass Storage with rump)

2015-09-26 Thread Olaf Buddenhagen
Hi,

On Thu, Sep 24, 2015 at 09:30:17AM +0200, Samuel Thibault wrote:
> Olaf Buddenhagen, on Thu 24 Sep 2015 00:11:26 +0200, wrote:

> > As I already mentioned on IRC, I don't think we should emulate Mach
> > device nodes at all here. Rather, the USB mass storage server(s) would
> > export UNIX device nodes, which ext2fs/libstore can already deal with
> > AFAIK.
> 
> But the fs RPC interface is much more involved to implement than the
> device RPC interface.

Is it really? While I haven't implemented a trivfs translator doing
actual file I/O yet, they do seem simple enough to me... Or am I missing
something?

Also, it's a one-time effort. And it avoids the need to modify
libstore/storeio.

> Storeio already does the conversion nicely, why not reuse it?

Why introduce an unnecessary indirection, which makes things more
confusing and harder to use, if we can just use the standard Hurd I/O
interface directly?

The point of the store infrastructure is to abstract different backends,
so we are not bound to Mach devices only...
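[The backend abstraction argued for here can be illustrated with a small
vtable in C. This is only a sketch of the idea; the names (blockdev_ops,
mem_backend, fs_read) are hypothetical and do not match the real libstore
API:]

```c
/* Sketch of backend abstraction in the style of libstore: a filesystem
 * talks to an ops table, not to any particular kind of device.  All
 * names here are invented for illustration. */
#include <stddef.h>
#include <string.h>
#include <sys/types.h>

struct blockdev_ops
{
  /* Read AMOUNT bytes at OFFSET into BUF; return bytes read or -1. */
  ssize_t (*read) (void *backend, off_t offset, void *buf, size_t amount);
};

/* One possible backend: a plain memory buffer, standing in for a Mach
 * device, a UNIX device node, a file, ... */
struct mem_backend
{
  unsigned char *base;
  size_t size;
};

static ssize_t
mem_read (void *backend, off_t offset, void *buf, size_t amount)
{
  struct mem_backend *m = backend;
  if (offset < 0 || (size_t) offset > m->size)
    return -1;
  if (amount > m->size - (size_t) offset)
    amount = m->size - (size_t) offset;   /* clamp at end of store */
  memcpy (buf, m->base + offset, amount);
  return amount;
}

/* Code written against blockdev_ops never needs to know which backend
 * it is talking to. */
static ssize_t
fs_read (struct blockdev_ops *ops, void *backend,
         off_t offset, void *buf, size_t amount)
{
  return ops->read (backend, offset, buf, amount);
}
```

[Swapping the memory backend for one that speaks the Mach device RPCs, or
one that does io_read on a UNIX device node, leaves fs_read untouched --
which is exactly the point of the store abstraction.]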

-antrik-



Re: Device interfaces (was: USB Mass Storage with rump)

2015-09-26 Thread Samuel Thibault
Olaf Buddenhagen, on Sat 26 Sep 2015 14:48:07 +0200, wrote:
> On Thu, Sep 24, 2015 at 09:30:17AM +0200, Samuel Thibault wrote:
> > Olaf Buddenhagen, on Thu 24 Sep 2015 00:11:26 +0200, wrote:
> 
> > > As I already mentioned on IRC, I don't think we should emulate Mach
> > > device nodes at all here. Rather, the USB mass storage server(s) would
> > > export UNIX device nodes, which ext2fs/libstore can already deal with
> > > AFAIK.
> > 
> > But the fs RPC interface is much more involved to implement than the
> > device RPC interface.
> 
> Is it really?

Yes.

> While I haven't implemented a trivfs translator doing actual file I/O
> yet, they do seem simple enough to me... Or am I missing something?

Simple enough, but still less simple than the device interface. Device is
only device_open/close, device_read/write, and set/get_status. trivfs
apparently has at least modify_stat, io_read/write, readable, select,
seek, file_set_size, owner, io_map, etc., most of which make very little
sense for a disk driver, and it requires offset-to-block conversions.
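[The offset-to-block conversion mentioned here is mechanical but easy to
get wrong at the edges. A minimal sketch in C; the in-memory disk array
and the names block_read/byte_read are invented for illustration and are
not part of any Hurd or rump API:]

```c
/* Serving byte-granular reads on top of a block-only interface:
 * read every covering block and copy out the requested span. */
#include <stdint.h>
#include <string.h>
#include <sys/types.h>

#define BLOCK_SIZE 512

/* Hypothetical backing store: an in-memory "disk" of 16 blocks. */
static uint8_t disk[16 * BLOCK_SIZE];

/* Stand-in for the driver's real block interface. */
static int
block_read (uint64_t blkno, void *buf)
{
  if ((blkno + 1) * BLOCK_SIZE > sizeof disk)
    return -1;
  memcpy (buf, disk + blkno * BLOCK_SIZE, BLOCK_SIZE);
  return 0;
}

/* What an io_read implementation would have to do: walk the covering
 * blocks, skipping the partial head of the first block and truncating
 * the tail of the last one. */
static ssize_t
byte_read (uint64_t offset, void *data, size_t amount)
{
  size_t done = 0;
  uint8_t blk[BLOCK_SIZE];

  while (done < amount)
    {
      uint64_t blkno = (offset + done) / BLOCK_SIZE;
      size_t skip = (offset + done) % BLOCK_SIZE;
      size_t chunk = BLOCK_SIZE - skip;
      if (chunk > amount - done)
        chunk = amount - done;
      if (block_read (blkno, blk) < 0)
        return -1;
      memcpy ((uint8_t *) data + done, blk + skip, chunk);
      done += chunk;
    }
  return done;
}
```

[The same loop, run against a real block driver, is essentially what
storeio already does for you -- plus the read-modify-write variant needed
for io_write, which is where most of the extra care goes.]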

> Also, it's a one-time effort. And it avoids the need to modify
> libstore/storeio.

Modifying libstore/storeio would be a one-time effort too, and it would
be useful for other use cases as well (perhaps it even already has the
needed support).

> > Storeio already does the conversion nicely, why not reuse it?
> 
> Why introduce an unnecessary indirection,

It's not unnecessary. The conversion from io_read/write to block
read/write has to be done somewhere.

Possibly the code could be shared by factoring the conversion layer out
into a library, but really, providing a *really working* fs interface is
much more involved than just using storeio. In the end it's up to the
person who actually works on the code, but the easiest way to implement
it is using the device RPC and storeio. Perhaps libstore can be used as
it is to do the layering, I don't know.

Samuel



Re: USB Mass Storage with rump

2015-09-24 Thread Robert Millan

On 24/09/15 at 00:05, Olaf Buddenhagen wrote:

Instead, you could run a Rump instance with USB mass storage only
which uses libusb as backend rather than its own *HCI driver (but that
requires some coding work as it's currently not implemented ;-))


Yeah, I guess that's the price to pay if we want a properly layered
driver stack based on an originally monolithic implementation :-) As
long as we don't need to jump through hoops to achieve that (and it gets
upstream support), I'd say it's worth the effort...


If someone implements a libusb backend for Rump, I think upstream will be
more than happy to accept it.

In my experience, Rump upstream is demanding in terms of code quality, but
very friendly and always open to discussing things.

--
Robert Millan



Re: USB Mass Storage with rump

2015-09-24 Thread Samuel Thibault
Olaf Buddenhagen, on Thu 24 Sep 2015 00:11:26 +0200, wrote:
> On Sat, Sep 19, 2015 at 10:59:39AM +0200, Samuel Thibault wrote:
> 
> > It'd probably be easy to make ext2fs open a device node, just like we
> > made pfinet do it.
> 
> As I already mentioned on IRC, I don't think we should emulate Mach
> device nodes at all here. Rather, the USB mass storage server(s) would
> export UNIX device nodes, which ext2fs/libstore can already deal with
> AFAIK.

But the fs RPC interface is much more involved to implement than the
device RPC interface. Storeio already does the conversion nicely, why
not reuse it?

Samuel



Re: USB Mass Storage with rump

2015-09-23 Thread Olaf Buddenhagen
Hi,

On Sat, Sep 19, 2015 at 11:57:03PM +0200, Robert Millan wrote:

> single Rump instance inside a single translator which exposes all of
> /dev in Rump namespace somewhere under host /dev hierarchy (e.g.
> /dev/rump/*).

This is certainly tempting, but also dangerous -- once a somewhat
working solution is in place, there is less motivation to do the "right"
thing...

Note that besides going fully monolithic (as described above), or fully
modular (as you suggested originally), a compromise solution that is
monolithic on the vertical axis (all layers in one server), but modular
on the horizontal one (a separate instance for each device) is also a
possibility. I'm not saying it's necessarily the best choice -- but it's
an option to consider. Ultimately it depends on how much effort the
person implementing it is willing to invest...

-antrik-



Re: USB Mass Storage with rump

2015-09-23 Thread Olaf Buddenhagen
Hi,

On Sat, Sep 19, 2015 at 10:52:13AM +0200, Robert Millan wrote:

> Since you most likely want to provide multiplexing, authorisation,
> etc, to any application that wants to access USB, I wouldn't recommend
> lumping USB mass storage and *HCI in the same Rump instance.

Quite frankly, I wouldn't either :-) It just sounded like Bruno wanted
to go with such a simplistic approach for the start...

> Instead, you could run a Rump instance with USB mass storage only
> which uses libusb as backend rather than its own *HCI driver (but that
> requires some coding work as it's currently not implemented ;-))

Yeah, I guess that's the price to pay if we want a properly layered
driver stack based on an originally monolithic implementation :-) As
long as we don't need to jump through hoops to achieve that (and it gets
upstream support), I'd say it's worth the effort...

-antrik-



Re: USB Mass Storage with rump

2015-09-23 Thread Olaf Buddenhagen
Hi,

On Sat, Sep 19, 2015 at 10:59:39AM +0200, Samuel Thibault wrote:

> It'd probably be easy to make ext2fs open a device node, just like we
> made pfinet do it.

As I already mentioned on IRC, I don't think we should emulate Mach
device nodes at all here. Rather, the USB mass storage server(s) would
export UNIX device nodes, which ext2fs/libstore can already deal with
AFAIK.

-antrik-



Re: USB Mass Storage with rump (was: Full-time developer available)

2015-09-19 Thread Samuel Thibault
Olaf Buddenhagen, on Sat 19 Sep 2015 00:52:08 +0200, wrote:
> This looks nice for generic USB. I doubt we have a mass storage driver
> using libusb though? Rather, I guess it's something rump implements
> internally, and would be exposed through a different entry point?

I'd say so, yes.

Samuel



Re: USB Mass Storage with rump

2015-09-19 Thread Robert Millan

On 19/09/15 at 00:52, Olaf Buddenhagen wrote:

On Wed, Sep 16, 2015 at 10:57:20PM +0200, Robert Millan wrote:

On 16/09/15 at 05:47, Bruno Félix Rezende Ribeiro wrote:



I'm interested in USB support.  I'd like to aim at mass storage devices
first.


For USB using Rump, I think most of the pieces exist already. Rump implements
the ugenhc NetBSD API, which is already supported by libusb, so if you want
to support all libusb-aware applications, I think you'd just need something
like:

(cups|sane|whatever) --> libusb --> /dev/rumpusb (in Hurd VFS) --> your
translator --> librump --> /dev/ugenhc


This looks nice for generic USB. I doubt we have a mass storage driver
using libusb though? Rather, I guess it's something rump implements
internally, and would be exposed through a different entry point?


Yes and no.

If you load *HCI support and USB mass storage into Rump, you can have
/dev/XXX pop up in the Rump namespace and that will be your disk node.
Then you can write a translator to link the host system into that disk
(or whatever way this is handled, does ext2fs open device nodes directly?).

Note, however, that unless the same Rump instance also exports raw USB
access as described above, it will be the only possible user of USB in
the system!

Since you most likely want to provide multiplexing, authorisation, etc, to
any application that wants to access USB, I wouldn't recommend lumping
USB mass storage and *HCI in the same Rump instance.

Instead, you could run a Rump instance with USB mass storage only which
uses libusb as backend rather than its own *HCI driver (but that requires
some coding work as it's currently not implemented ;-))

--
Robert Millan



Re: USB Mass Storage with rump

2015-09-19 Thread Samuel Thibault
Robert Millan, on Sat 19 Sep 2015 10:52:13 +0200, wrote:
> If you load *HCI support and USB mass storage into Rump, you can have
> /dev/XXX pop up in the Rump namespace and that will be your disk node.
> Then you can write a translator to link the host system into that disk
> (or whatever way this is handled, does ext2fs open device nodes directly?).

It'd probably be easy to make ext2fs open a device node, just like we
made pfinet do it.

> Since you most likely want to provide multiplexing, authorisation, etc, to
> any application that wants to access USB, I wouldn't recommend lumping
> USB mass storage and *HCI in the same Rump instance.
> 
> Instead, you could run a Rump instance with USB mass storage only which
> uses libusb as backend rather than its own *HCI driver (but that requires
> some coding work as it's currently not implemented ;-))

Indeed. We can however start with an all-in-one solution before adding
multiplexing.

Samuel



Re: USB Mass Storage with rump

2015-09-19 Thread Robert Millan

On 19/09/15 at 10:59, Samuel Thibault wrote:

Instead, you could run a Rump instance with USB mass storage only which
uses libusb as backend rather than its own *HCI driver (but that requires
some coding work as it's currently not implemented ;-))


Indeed. We can however start with an all-in-one solution before adding
multiplexing.


If you want an all-in-one solution, a simple way to do this could be to run
a single Rump instance inside a single translator which exposes all of /dev
in the Rump namespace somewhere under the host /dev hierarchy (e.g.
/dev/rump/*).

Then it becomes very easy to select what you want, and no matter what you do
it's always a single Rump instance. For example:

- if you want your /dev/hd1 to be a Rump USB mass storage device, a symlink
  will do.

- if you want raw access to network cards, one could have a translator
  which opens /dev/rump/bpf (Berkeley Packet Filter) to capture and inject
  packets.

- if you want OSS rather than Sun audio, maybe you'll want a translator which
  opens /dev/rump/audio and exports OSS in /dev/audio, /dev/dsp, etc.

--
Robert Millan



USB Mass Storage with rump (was: Full-time developer available)

2015-09-18 Thread Olaf Buddenhagen
Hi,

On Wed, Sep 16, 2015 at 10:57:20PM +0200, Robert Millan wrote:
> On 16/09/15 at 05:47, Bruno Félix Rezende Ribeiro wrote:

> >I'm interested in USB support.  I'd like to aim at mass storage devices
> >first.
> 
> For USB using Rump, I think most of the pieces exist already. Rump implements
> the ugenhc NetBSD API, which is already supported by libusb, so if you want
> to support all libusb-aware applications, I think you'd just need something
> like:
> 
> (cups|sane|whatever) --> libusb --> /dev/rumpusb (in Hurd VFS) --> your
> translator --> librump --> /dev/ugenhc

This looks nice for generic USB. I doubt we have a mass storage driver
using libusb though? Rather, I guess it's something rump implements
internally, and would be exposed through a different entry point?

-antrik-