On Monday 03 August 2009, Stefani Seibold wrote:
> This is a proposal of a new generic kernel FIFO implementation.
> 
> The current kernel fifo API is not very widely used, because it has too many
> constraints. Only 13 files in the current 2.6.30 kernel use it. FIFOs, like
> lists, are a very basic thing, and a kfifo API which handles the most common
> use cases would save a lot of time and memory resources.
> 
> I think the following are the reasons why kfifo is not in wider use:
> 
> - A spinlock is required whether you need one or not (see the sketch below)
> - A fifo can only be allocated dynamically
> - There is no support for data records inside a fifo
> - The FIFO size can only be a power of two
> - The API is too small; some important functions are missing
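
For reference, minimal use of the existing 2.6.30 interface, illustrating
the first two constraints, looks roughly like the sketch below (written
from memory, not compile-tested):

#include <linux/kfifo.h>
#include <linux/spinlock.h>
#include <linux/err.h>

static DEFINE_SPINLOCK(my_lock);	/* required even with a single
					 * reader and a single writer */
static struct kfifo *my_fifo;

static int my_setup(void)
{
	/* dynamic allocation only; the size must be (or is rounded
	 * up to) a power of two */
	my_fifo = kfifo_alloc(256, GFP_KERNEL, &my_lock);
	if (IS_ERR(my_fifo))
		return PTR_ERR(my_fifo);
	return 0;
}

static void my_send(const unsigned char *buf, unsigned int len)
{
	/* plain byte stream; record boundaries are the caller's problem */
	kfifo_put(my_fifo, buf, len);
}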

My guess is that, more importantly,

- few people so far needed the functionality.

All of the existing users are in relatively obscure device drivers;
a total of 16 committers have touched code using it since 2.6.12,
and almost all the changes were in iSCSI (cc:ing the maintainer).

> So I decided to extend the kfifo in a more generic way without blowing up
> the API too much. These were my design goals:
> 
> - Generic usage: for kernel-internal use or device drivers
> - Provide an API for the most common use cases
> - Preserve memory resources
> - Linux style habits: DECLARE_KFIFO, DEFINE_KFIFO and INIT_KFIFO macros
>   (see the usage sketch after this list)
> - Slim API: the whole API provides 13 functions.
> - Ability to handle variable length records. Three types of records are
>   supported:
>   - Records between 0-255 bytes, with a record size field of 1 byte
>   - Records between 0-65535 bytes, with a record size field of 2 bytes
>   - Byte stream, with no record size field
>   It is not a good idea to mix these record types inside one fifo.
> - Direct copy_to_user from the fifo and copy_from_user into the fifo.
> - Single structure: The fifo structure contains the management variables and
>   the buffer. No extra indirection is needed to access the fifo buffer.
> - Lockless access: if only one reader and one writer are active on the fifo,
>   which is the common use case, no additional locking is necessary.
> - Performance
> - Easy to use
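
If the declaration macros behave the way their names suggest, lockless
single-producer/single-consumer use might look like the sketch below.
Only DECLARE_KFIFO, DEFINE_KFIFO and INIT_KFIFO are named in the
proposal; the macro arguments and the put/get function names are
invented here purely for illustration:

/* statically allocated 128-byte fifo: no kmalloc(), no spinlock
 * (hypothetical macro arguments) */
static DEFINE_KFIFO(my_fifo, 128);

static void producer(const unsigned char *buf, unsigned int len)
{
	/* with one producer and one consumer the proposal claims no
	 * extra locking is needed: the 'in' index is only advanced
	 * here and the 'out' index only in the consumer */
	kfifo_put(&my_fifo, buf, len);		/* hypothetical name */
}

static void consumer(unsigned char *buf, unsigned int len)
{
	/* a record-oriented variant would additionally read the 1- or
	 * 2-byte length field that precedes each record */
	kfifo_get(&my_fifo, buf, len);		/* hypothetical name */
}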

This all sounds nice, and your code looks clean and usable, but really,
what's your point?

If you have a new driver that will use all the new features, please
just tell us; that would make your point much clearer. Also, if
you can quantify an advantage (xxxx bytes of code size reduction, yy%
performance improvement on benchmark Y), that would be really
helpful.

> One thing is that the new API is not compatible with the old one. I had
> a look at the current users of the old kfifo API, and it is easy to adapt
> them to the new API. These are the files which currently use the kfifo API:
> 
> /usr/src/linux/./drivers/char/nozomi.c
> /usr/src/linux/./drivers/char/sonypi.c
> /usr/src/linux/./drivers/infiniband/hw/cxgb3/cxio_resource.c
> /usr/src/linux/./drivers/media/video/meye.c
> /usr/src/linux/./drivers/net/wireless/libertas/main.c
> /usr/src/linux/./drivers/platform/x86/fujitsu-laptop.c
> /usr/src/linux/./drivers/platform/x86/sony-laptop.c
> /usr/src/linux/./drivers/scsi/libiscsi.c
> /usr/src/linux/./drivers/scsi/libiscsi_tcp.c
> /usr/src/linux/./drivers/scsi/libsrp.c
> /usr/src/linux/./drivers/usb/host/fhci.h
> /usr/src/linux/./net/dccp/probe.c
> 
> I will do this job if there is a tendency towards substituting the old API.
> So I ask for comments....

I don't think we should risk introducing regressions if the only possible
benefit is making an interface that almost nobody uses easier to use.
We have also recently gained the include/linux/ring_buffer.h API, which
appears to be a superset of the kfifo API.

        Arnd <><
