Joel Becker wrote:
> On Mon, Aug 07, 2006 at 05:18:37PM -0400, Shailabh Nagar wrote:
>
>> ...so could we work out how & if configfs' 4K limit on attributes
>> can be removed?
>
> The 4K limit is one of memory. It's straight from the sysfs
> code. A page is allocated and pinned while the attribute file is opened.
> I don't think we should be arbitrarily allocating large amounts of RAM
> here.
> Actually, that buffer is perhaps large. Maybe it should be only
> 1K for simple attributes, though grabbing a page is easy.
> So, large things demand a lot of memory, or a lot of thought.
> Let's consider something similar to seq_file, but controlled by configfs.
> That is, you create a configfs_large_attribute structure, in which you
> specify your seq_show, seq_start, seq_next function pointers. Configfs,
> when asked for this object, will do the seq_open() for you, and will
> create the file_operations for you. Basically, you have the
> functionality of seq_file, but configfs is controlling the inode/dentry
> lifetimes.
> Now, reading from this object is pretty simple.
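If I'm reading the proposal right, the client-visible piece would be roughly
the below (just a sketch to check my understanding -- configfs_large_attribute
and its layout are my guesses, not existing code):

    #include <linux/configfs.h>
    #include <linux/seq_file.h>

    /*
     * Hypothetical "large" attribute: the client module supplies
     * seq_file-style iterators, configfs supplies the file_operations
     * and does seq_open()/seq_release() itself.
     */
    struct configfs_large_attribute {
            struct configfs_attribute attr;
            void *(*seq_start)(struct seq_file *sf, loff_t *pos);
            void *(*seq_next)(struct seq_file *sf, void *v, loff_t *pos);
            void (*seq_stop)(struct seq_file *sf, void *v);
            int (*seq_show)(struct seq_file *sf, void *v);
    };

On open, configfs would package these into a struct seq_operations of its
own and hand that to seq_open(), so the client module never touches the
inode/dentry side at all.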
Agreed. If file->private_data is not used as a config_buffer, seq_file can
be used as is to do the reading part.

> But what about
> writing to it? How big can a single write be -- there is no
> seq_write_file yet? Should we allocate that same page buffer if opened
> O_WRITE? Try and come up with a seq_write_file?

A couple of suggestions:

1. Don't manage the write operation at the configfs level. Let the
subsystem provide a write operation that reads the user-supplied buffer.
The subsystem can worry about how to sync between the seq_* and write ops.

2. Add iterator functions ->store_attribute_next and ->store_attribute_last
to configfs_item_operations. Calling ->store_attribute_next tells the
subsystem to expect more data as part of the same write; ->store_attribute_last
indicates the end, so the subsystem can do whatever it now does in
->store_attribute. You don't end up with a seq_write_file, but you
effectively get something like it. The granularity of data passed in one
->store_attribute_next call could be PAGE_SIZE for convenience, though the
count field makes a static choice unnecessary. (A rough sketch of this is
appended below my sig.)

> Also, what does a write do? Append to the file, or replace it?

That semantic is exported/enforced by the subsystem anyway, and not really
by configfs, even now, right? I mean, ->store_attribute can choose to do
whatever it likes with the data (looking at file->f_pos or not), even
though configfs uses generic_ll_seek to advance file->f_pos.

> Almost all attributes do a replace, which is a fair thing to do
> (open()+write() expects to be starting at 0 unless O_APPEND is
> specified). But ckrm looks to want append semantics. If this is
> genericized, how does one truncate if "append" is the default? If the
> write replaces the existing data, we'd really need a seq_write_file, so
> that multiple values can be written across multiple write calls.
> These are the sorts of things I'm pondering over.
>
>> Could you elaborate the uncontrolled set of lifetime semantics part?
>
> No generic files in configfs. Nothing that isn't created by the
> configfs infrastructure. The dirents and inodes are completely
> controlled by configfs, and the client module just defines what is
> needed.
> There will be no kernel-side "sysfs_create(parent, inode)" type
> thing.

That's fine. The problem here seems to be the way configfs is trying to
help simplify the buffer management (using ->store and ->show). The use of
configfs_buffer, and the unnecessarily constrained way in which it calls
the subsystem's callbacks only once (instead of multiple times if needed),
seems to be the root of the problem?

--Shailabh

> Joel
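P.S. A rough sketch of what suggestion 2 might look like. Purely
illustrative: the _next/_last callbacks and the wrapper struct are my own
invention, not anything in configfs today.

    #include <linux/configfs.h>

    /*
     * Hypothetical sketch of suggestion 2.  In practice these would just
     * be two new members of struct configfs_item_operations; they are
     * wrapped in their own struct here only to keep the example
     * self-contained.
     *
     * configfs' write path would copy at most PAGE_SIZE of the user buffer
     * per call, invoking ->store_attribute_next for every chunk that has
     * more data following it in the same write(), and
     * ->store_attribute_last for the final chunk, at which point the
     * subsystem commits the whole value.
     */
    struct configfs_chunked_store_ops {
            ssize_t (*store_attribute_next)(struct config_item *item,
                                            struct configfs_attribute *attr,
                                            const char *buf, size_t count);
            ssize_t (*store_attribute_last)(struct config_item *item,
                                            struct configfs_attribute *attr,
                                            const char *buf, size_t count);
    };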