> What a great proposal! To be able to simply browse a directory
> structure, rather than trying to remember the usage of a handful of
> tools, would really make it easy for a systems administrator to
> quickly get a picture of exactly what the host sees with respect to
> storage.
> 
> It would also be nice if the pseudo file system would represent
> mpxio, possibly with a directory structure that has the scsi_vhci
> device as a top-level directory, with subdirectories representing the
> primary, secondary, and standby paths, each with properties files.
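To make the quoted proposal concrete, here is a mock-up of what such a layout might look like. Every path and property name here is invented for illustration (no such pseudo-fs exists today); the script just builds the imagined tree out of ordinary directories so it can actually be browsed:

```shell
# Mock-up only: an invented directory layout for a hypothetical mpxio
# pseudo-fs, built from plain directories so the browsing idea is concrete.
DEMO=/tmp/devfs-demo
mkdir -p "$DEMO/scsi_vhci/disk0/primary" \
         "$DEMO/scsi_vhci/disk0/secondary" \
         "$DEMO/scsi_vhci/disk0/standby"

# Invented per-path property files, as the proposal suggests.
echo "state=online"  > "$DEMO/scsi_vhci/disk0/primary/properties"
echo "state=online"  > "$DEMO/scsi_vhci/disk0/secondary/properties"
echo "state=standby" > "$DEMO/scsi_vhci/disk0/standby/properties"

# "Browsing" is then just walking the tree and cat-ing the files.
find "$DEMO/scsi_vhci" -type f -exec sh -c 'echo "== $1 =="; cat "$1"' _ {} \;
```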

Why do I sense a desire to use Nautilus (or whatever file manager GUI
you use) as a configuration browser?

If that's the motivation here (as opposed to shell scripts that query or
manipulate configuration or settings), maybe there just needs to be a
proper storage management GUI with plugins for various device categories
and filesystems, which take care of presenting (and, if possible and
permitted, making changes to) all storage-related info.

Whether it's simplified command-line tools, or a GUI, or usable APIs,
or all of those, I think the answer is well-designed, documented, and
reasonably stable APIs, with the rest built on top (as an application,
not in the kernel).

I don't know, maybe it just seems really cheesy to me to have a pseudo-fs
with a bunch of mostly read-only text files with various snippets of
information in them.  I tend to think that if someone has to browse to
get to that info, they're probably not going to understand it once they
get there.

And then I think of writing a program to try to answer higher-level
questions, and the first thing it has to do is get the data from a model
that's halfway to a UI but is really poor for a program: the data gets
translated by the pseudo-fs to a textual form, then back by the app to
something it can work intelligently with.  That consumes CPU, and the
translation may not be fully and unambiguously reversible.  It's also
way too inefficient for large configurations; think of the performance
problem with "top" and very large numbers of processes.  You end up
opening crazy large numbers of /proc/PID files, either running with a
very high fd limit or closing and reopening them a lot.  It would be
much more efficient to have a single call that tells you how many procs
there are, and another that grabs a snapshot of info on all of them at
once into a buffer of specified maximum size (which one could scale
based on the number of procs last seen, plus some fudge for a sudden
surge).

Ideally I'd like a pseudo relational DB rather than a pseudo-fs, I
think; that lets one either just look at everything, or ask approximate
questions and get answers, while leaving it to the library-to-kernel
interface to worry about doing that efficiently, without necessarily
making as many system calls as a pseudo-fs would require, and perhaps
with the possibility of getting consistent snapshots.
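The two-call interface described above can be sketched like this. Everything here is invented to illustrate the shape of the idea, not any real Solaris API: proc_count and proc_snapshot are hypothetical calls, FakeKernel stands in for the kernel side, and the record size and fudge factor are arbitrary.

```python
# Sketch only: a hypothetical two-call process-table interface,
# as opposed to opening one /proc/PID file per process.
FUDGE = 1.25       # headroom for a sudden surge in process count
ENTRY_SIZE = 256   # assumed fixed-size record per process (invented)

def next_buffer_size(last_seen: int) -> int:
    """Size the snapshot buffer from the count last seen, plus some fudge."""
    return int(last_seen * ENTRY_SIZE * FUDGE)

class FakeKernel:
    """Stand-in for the kernel side of the imagined interface."""
    def __init__(self, nprocs: int):
        self.nprocs = nprocs

    def proc_count(self) -> int:
        """Call 1: how many procs there are right now."""
        return self.nprocs

    def proc_snapshot(self, bufsize: int) -> bytes:
        """Call 2: one consistent snapshot of all of them, truncated
        if the caller's buffer is too small."""
        needed = self.nprocs * ENTRY_SIZE
        return b"\x00" * min(needed, bufsize)

k = FakeKernel(nprocs=1000)
buf = k.proc_snapshot(next_buffer_size(k.proc_count()))
print(len(buf) // ENTRY_SIZE)  # -> 1000: the whole table in one call
```

The point of the sketch is the fd arithmetic: 1000 processes cost two calls here, versus on the order of 1000 open/read/close cycles through a /proc-style pseudo-fs.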

If the pseudo-fs interface is just too darn irresistible, it could
always be done as an option on top of a good API, using fuse, and with
the understanding that nothing shipped with the system would go through
the pseudo-fs, so that it would remain optional.
This message posted from opensolaris.org
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
