On Wed, 22 Nov 2006 22:00:32 -0500 (EST), Alan Stern <[EMAIL PROTECTED]> wrote:
> On Wed, 22 Nov 2006, Pete Zaitcev wrote:

> Don't feel any need to reply or explain -- this is meant mainly to
> illustrate that eavesdropping sometimes doesn't provide much useful
> information...

I'll be brief; I know you can figure it out easily once you have
the mental model of it.

The "binary" reader has a buffer which contains events, up to two per
URB, just as the text reader did. Applications can mmap(2) that
buffer and parse packets in place.
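As a userspace sketch of that model (all structure and field names here
are made up for illustration; the real header layout is whatever the
driver defines): the application mmaps the buffer once, then walks
fixed-size headers, each followed by its captured URB data.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical event header; only the captured length matters here. */
struct mon_event_hdr {
	uint32_t len_cap;	/* bytes of URB data following this header */
};

/* Walk the events in a region previously obtained with mmap(2).
 * Returns how many complete records fit in the first 'size' bytes. */
static unsigned int parse_events(const unsigned char *buf, size_t size)
{
	size_t off = 0;
	unsigned int n = 0;

	while (off + sizeof(struct mon_event_hdr) <= size) {
		struct mon_event_hdr hdr;

		memcpy(&hdr, buf + off, sizeof(hdr)); /* no unaligned reads */
		off += sizeof(hdr) + hdr.len_cap;
		if (off > size)
			break;			      /* truncated record */
		n++;
	}
	return n;
}
```

In the real interface the application learns how many events are
pending from the driver rather than scanning blindly; this only shows
the header-plus-data layout.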

> > > One possible solution is to add another ioctl operation to remove a
> > > specified number of records (header+data) from the buffer. User use this
> > > ioctl after processing at least one urb.
> 
> How would the user know how many records to remove?  There might be other 
> unexpected URBs in among the expected ones; removing them would be a 
> mistake.

The user removes records it has finished processing. And yes, an
application cannot hold onto a record in the buffer indefinitely;
it has to remove records in order.

> > Right, this is what I was going to do. It's part of what I call
> > "mfetch". The mfetch takes this struct:
> 
> What on earth is "mfetch"?  Is that a made-up name for this combination of 
> reading and flushing records?

Made-up name for "mmap's fetch".

> Why make this a single operation?  Why not have one operation to drop 
> nflush events and another operation to do everything else?

It's done to reduce the number of syscalls per event.
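Schematically, and with made-up names (this only models the combined
semantics, not the driver's actual entry point): one call first drops
the events the caller has finished with, then returns the offsets of
up to nfetch pending ones.

```c
/* Userspace model of the combined flush-then-fetch operation.
 * All names are illustrative, not the driver's API. */
struct event_queue {
	unsigned int offs[32];	/* offsets of pending events in the buffer */
	unsigned int count;
};

/* Drop 'nflush' consumed events from the head, then copy up to
 * 'nfetch' pending offsets into 'offvec'. Returns how many were
 * fetched. Oversized counts are clamped rather than rejected. */
static unsigned int mfetch(struct event_queue *q, unsigned int nflush,
			   unsigned int *offvec, unsigned int nfetch)
{
	unsigned int i;

	if (nflush > q->count)
		nflush = q->count;
	for (i = 0; i + nflush < q->count; i++)
		q->offs[i] = q->offs[i + nflush];
	q->count -= nflush;

	if (nfetch > q->count)
		nfetch = q->count;
	for (i = 0; i < nfetch; i++)
		offvec[i] = q->offs[i];
	return nfetch;
}
```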

> > The idea here is that polling without any syscalls is a no-goal,
> 
> "no-goal"?  Does that mean nobody would ever want to do it so there's no 
> point implementing it?

It does not provide much of an advantage, I think. This is not
proven, of course.

> > considering systemic overhead elsewhere. By getting a bunch of
> > mmap offsets, applications use a "fraction of syscall per event"
> > model and do not need to think about wrapped buffers
> 
> Why should applications have to think about wrapped buffers in any case?

In libpcap, the library can pass a pointer up, and it has to point to
a contiguous area. If we let URB data wrap around the end of the
buffer, libpcap would need to detect this condition and copy the
data out.

> > (they see
> > the filler packets, but it's a very small overhead).
> 
> What are "filler packets"?

When a URB does not fit into the buffer without wrapping, a filler
record is added which takes up the remaining space, and the URB
starts from offset zero.

Paolo originally wanted each event header to point to the next one.
The unfortunate issue with this is that when a header is formed in
the buffer, it cannot possibly be known whether the next event will
fit without wrapping. Thus, when the next event comes, we would need
to go back and modify the previous header... unless the application
has already consumed it. I thought that had too many corner cases.
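The placement rule itself is simple; a sketch (illustrative only,
the driver's real bookkeeping is more involved):

```c
/* If a record of 'len' bytes starting at 'head' would run past the
 * end of a 'size'-byte buffer, a filler record consumes the tail and
 * the real record starts at offset zero. Returns the record's offset;
 * *fill_at is set to the start of the filler, or to 'size' when no
 * filler was needed. */
static unsigned int place_record(unsigned int head, unsigned int size,
				 unsigned int len, unsigned int *fill_at)
{
	if (head + len > size) {
		*fill_at = head;	/* filler occupies [head, size) */
		return 0;		/* record starts at offset zero */
	}
	*fill_at = size;		/* sentinel: no filler */
	return head;
}
```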

> > Why is that? I thought that it may be useful to start with INT_MAX
> > events to flush.
> 
> Does that mean you start with INT_MAX made-up events in the buffer just so 
> that the user can flush them?  That doesn't make any sense...

Yeah, this was a mistake. You can only consume as many events as you
have memory allocated for in the vector of offsets, which can't be
INT_MAX. My point was that it makes no sense to return an error if
there are more events than "mfetch" can return.

-- Pete

_______________________________________________
linux-usb-devel@lists.sourceforge.net
To unsubscribe, use the last form field at:
https://lists.sourceforge.net/lists/listinfo/linux-usb-devel