First of all, thanks for reviewing the proposal.

1) The whole idea of the project was to have something similar to the Linux 
/proc/scsi pseudo file system layer, but with certain extensible features. 
Some of the advantages I see in this project are:

 a) This file system is a simpler way of managing devices. The list of 
devices, their targets, etc. can be viewed through a single “ls” command, 
which makes administering the devices a lot simpler.

 b) Firmware updates and other HBA administration tasks could be performed easily.

 c) Applications can be developed on top of this file system without knowing the 
driver's complex ioctl interfaces. An application could be as simple as a 
shell script; for example, a small shell script should suffice to rescan the 
LUNs of a particular target.

 d) One of the main advantages I see is that it can be improved over time by 
adding different storage-related entities to the file system. It is modular 
in that sense; various features can be added in later releases.
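To make advantages (a) and (c) concrete, here is a runnable sketch. Since the file system does not exist yet, the script builds a mock of the proposed hierarchy in a temporary directory; the directory names and the rescan control file are assumptions for illustration only, not the final layout.

```shell
#!/bin/sh
# Mock of the proposed adapter/port/target/lun hierarchy, built in a
# temporary directory so the example is runnable without the (not yet
# existing) file system.  All path and file names are assumptions.
root=$(mktemp -d)

mkdir -p "$root/adapter0/port0/target0/lun0"
: > "$root/adapter0/port0/target0/rescan"    # write-only control file

# Advantage (a): a single ls shows what sits under a target
listing=$(ls "$root/adapter0/port0/target0")
echo "$listing"

# Advantage (c): rescanning the LUNs of one target is a single write
echo 1 > "$root/adapter0/port0/target0/rescan"

rm -rf "$root"
```

On the real file system the same two operations would simply point at the mounted tree instead of a mock directory.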

2) > To the question of risking the system with a poorly written shell script:
Yes, that is a perfectly valid point, and hence we can probably trim it down to 
the level of not sending any SCSI commands other than a minimal set (say, 
INQUIRY, TUR, etc.), or to rescanning the SCSI bus to see any newly added 
targets.
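One way such trimming could work, purely as an assumption about the eventual design, is a whitelist of the few operations the file system would expose, with everything else rejected at the file system layer. The operation names below are illustrative:

```shell
#!/bin/sh
# Hypothetical sketch of the "trimmed down" command set: only a small
# whitelist of safe operations is accepted; everything else is refused.
# The operation names are assumptions, not a defined interface.
allowed="inquiry tur rescan"

is_allowed() {
    for op in $allowed; do
        [ "$1" = "$op" ] && return 0
    done
    return 1
}

is_allowed inquiry && echo "inquiry: permitted"
is_allowed write_buffer || echo "write_buffer: rejected"
```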

3) 
[i]>What is the scope of this file system? Does it stop at the HBA driver level 
or does it go deeper? E.g., for a technology like Fibre Channel, does the file 
system stay only at the SCSI HBA level (which is fcp in this case) or does it 
go down to the Fibre Channel stack as well? Does it stop there? E.g., for 
technologies like FCoE and iSCSI, does it go all the way to the Ethernet card 
driver? If yes, then is the scope really limited to storage, or does it expand 
to networking also? If yes, then is it an attempt to create yet another /dev 
tree?[/i]

The scope of the file system is limited to storage only. It stops at the vendor 
HBA driver level. I don't think it is viable at present to go down to the FC 
stack level; as was rightly mentioned, that would be too much work. Also, with 
regard to iSCSI and FCoE, it does not go down to the Ethernet card driver 
level. In a hardware or software iSCSI solution, the file system goes down to 
the point where it sees the storage, not to how it reaches the storage. Hence 
we are not creating another complete /dev tree, but just a storage 
encapsulation.

4)
[i]> To accomplish something like this, I think every storage component must 
participate. This includes target drivers, HBA drivers, FCA drivers, and 
various storage-related frameworks (Leadville, SRP, scsa2usb, etc.). A lot of 
these components are developed and maintained by non-SUN vendors, and some of 
those vendors won't participate (or won't participate right away). So how will 
a hybrid environment be handled, especially in a multipathed 
configuration?[/i]

Here are the different layers:

HBA card and port: Most of these properties can be obtained from the dev tree. 
Still, some of the files (such as the f/w dump) need vendor driver 
modifications. However, if it is an HBA driver under Leadville, the information 
can be obtained through the Leadville layer.

Software iSCSI drivers: These would need some modifications, and I believe it 
is a SUN driver.

Targets/LUNs: These files mostly query for information rather than modifying 
things. I believe there is a standard way to send SCSI commands to the target, 
through sd, sg, etc., and hence I believe changes would not be required in 
those modules.

The design would take care that even if the vendor driver doesn't provide 
support, the file system could still show the basic properties, the 
targets/LUNs it sees, etc.

5)
[i]> All the components listed above already have /devices and /dev entries. 
So what will happen to those entries? Should they be removed? But then there 
are tools and scripts which depend upon those entries.[/i]

The /devices and /dev entries are not modified or touched. They would remain as 
is.

6)
[i]Looking at all of the above, this looks like a lot of work. So the final 
thing that comes to mind is: what is the real purpose? Is it observability? 
If so, then I don't think people are going to manually browse through the 
directories and files to get the data they need, especially in a multipathed 
environment. As far as I am aware, most administrators use tools and scripts 
for the same purpose. So, as an earlier post on this subject also suggested, a 
more consolidated toolset, or GUIs, or an API, or all of those are probably a 
better solution.[/i]

The purpose is to get a good snapshot of how the system sees the storage. I 
believe there is scope to extend it to FC switches and other storage entities. 
As Victor Engle mentioned, multipathing could definitely be one of the things 
that could fit here.

7)
[i]As Sumit pointed out, boundaries will be very difficult to define. I agree. 
Without knowing how each technology will consume underlying, existing or new 
subsystems, it's difficult to envision what will be important to an 
administrator when viewing the tree. It's being called a SCSI SAM filesystem 
but if it were purely SCSI, why are switches involved? Why would FC transport 
properties be involved? What happens when you end up running non-storage 
traffic through your so-called storage HBAs, like IPFC? Is the NIC a storage 
HBA or is it also responsible for HTTP traffic?[/i]

The name “SCSI SAM file system” was coined simply because it sees anything that 
the SAM architecture sees. Even with non-storage traffic, HBA drivers see the 
destination as targets/LUNs. However, the initial plan is to handle only 
storage traffic, not IP or other traffic.

The FC switch is just one of the extensions possible under this file system; it 
can be fitted under the port directory. If the transport is something else (say 
iSCSI), then that transport's router or its internals can be added under the 
port directory at a later point. The idea is to have a framework of adapter, 
port, target, and LUN directories, with the other entities positioned later at 
appropriate points.
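A sketch of how such extensions could slot into that framework follows. Only the adapter/port/target/LUN skeleton comes from the proposal itself; every concrete name below (fc_switch, fw_dump, inquiry, etc.) is an assumption for illustration:

```shell
#!/bin/sh
# Print a sketch of the proposed hierarchy with hypothetical extension
# points.  Only the adapter/port/target/lun skeleton is from the
# proposal; all concrete file and directory names are assumptions.
tree=$(cat <<'EOF'
adapter0/
  model                # HBA properties, from the dev tree
  fw_dump              # may need vendor driver support
  port0/
    fc_switch/         # possible FC extension under the port
    iscsi/             # or iSCSI transport internals, if applicable
    target0/
      rescan           # control file: rescan this target's LUNs
      lun0/
        inquiry        # minimal query commands (INQUIRY, TUR, ...)
EOF
)
printf '%s\n' "$tree"
```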

8)
[i]I think what we need, and what I believe we have been striving towards, is a 
more common approach to the various tools that we have such that, where 
possible, administrators can move between them and not feel as though they need 
to learn (and remember) a different set of commands and syntax for very similar 
operations. But when it comes to specifics for a particular subsystem, I 
believe the benefit of being able to intelligently communicate the semantics of 
that subsystem outweighs the benefit of remembering a single name, whether that 
is a root filesystem name or the name of a CLI.[/i]

I think the file system would give a standard view of the storage irrespective 
of the underlying protocol, and would also provide an easier, standard way to 
send commands to the targets.
 
 
This message posted from opensolaris.org
_______________________________________________
storage-discuss mailing list
[email protected]
http://mail.opensolaris.org/mailman/listinfo/storage-discuss
