Re: [osol-discuss] OpenSolaris with iSCSI LUNs and IPMP 3 x "1 GbE" eth SAN. Is it sensible?
Hey all, I tried several combinations of IPMP and link aggregation (the Nemo stuff) with Cisco and D-Link managed switches. Some of these switches have facilities to aggregate ports (port aggregation) on the switch side; I used them, to no avail. Only one active port is selected at any particular time, and in one of the parallel-mode settings packet loss went to an all-time high of 43%... lol. With an unmanaged switch, to which all of the aggregated NICs are connected, it also does not work: the aggregated link speed is the same as a single link's.

SO WHAT IS IT THAT WORKS? ... :) ... here we go. Consider that we have created an aggregated link with three interfaces, and we connect each of the three interfaces to a different unmanaged switch; on the other end, each interface of an IPMP group is connected to one of those switches. Then we have "optimum outbound spread", and I reached an outgoing traffic speed of 2.6 Gbps with three Intel GbE cards. We also have "optimum inbound spread", and I achieved an inbound traffic speed of nearly 2.4 Gbps on every aggregated link (actually one aggregated link of 3 NICs on each of two x64 servers).

This solution is beautiful, and now that even PGR is in, I will be testing quorum and related features with Solaris 10 and SC 3.2. I am also planning to start a quorum server on the storage node, where IPMP is set up, and add an additional quorum device for the cluster nodes. If this works, believe me guys, we can have an optimally and "superbly" performing, fault-tolerant and very cost-effective 3-node Sun Cluster. I will keep you guys updated. -- Chandan Maddanna This message posted from opensolaris.org ___ opensolaris-discuss mailing list opensolaris-discuss@opensolaris.org
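The aggregation side of the setup described above can be sketched with dladm (a minimal sketch, not a tested recipe: the e1000g* interface names and the address are assumptions, and the syntax is the Nevada/Nemo-era dladm form used elsewhere in this thread):

```shell
# Build aggregation key 1 over three GbE ports, each cabled
# to a different unmanaged switch as described above.
dladm create-aggr -d e1000g0 -d e1000g1 -d e1000g2 1

# Plumb the resulting aggr1 interface with a single IP address.
ifconfig aggr1 plumb 192.168.10.210 netmask 255.255.255.0 up

# Inspect per-port state and traffic distribution.
dladm show-aggr -s
```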
Re: [osol-discuss] Does Solaris have a command similar to Linux "mount -o"
Thanks Jörg, I just checked this on an ISO I had generated using "mkisofs" on Linux with the offset option for a session. My Best Regards, -- Chandan Maddanna On Dec 13, 2007 4:43 PM, Joerg Schilling <[EMAIL PROTECTED]> wrote: > Chandan Maddanna <[EMAIL PROTECTED]> wrote: > > > #!/usr/bin/bash > > mkdir $2 > > lofiadm -a $1 /dev/lofi/1 > > mount -F hsfs -o ro /dev/lofi/1 $2 && echo -e "\n\t I have mounted $1 under the folder $2\n\n" > > Just a note for the 'hsfs' case: > > I intend to add a sector offset option to the hsfs mount code. I added half of the code already and forgot to complete it. > > This offset feature will be needed soon, when cdrecord supports multi-border DVDs, because the sd driver does not yet support a correct MMC-based ioctl to read the session offset. If your problem is not the fact that lofiadm is a separate program, and you would like to specify offsets, this would be the way to go. > > BTW: The original idea that has been copied by Linux, FreeBSD and Sun is a driver called "fbk" that I wrote in 1988. > > ftp://ftp.berlios.de/pub/schily/kernel/fbk/fbk.tar.gz > > fbk used this syntax for mounting: > > mount -F fbk -o ro,type=hsfs /dev/fbk0:file_to_mount /mnt > > Jörg > > -- > EMail: [EMAIL PROTECTED] (home) Jörg Schilling D-13353 Berlin > [EMAIL PROTECTED] (uni) > [EMAIL PROTECTED] (work) Blog: http://schily.blogspot.com/ > URL: http://cdrecord.berlios.de/old/private/ ftp://ftp.berlios.de/pub/schily
Re: [osol-discuss] Hi all, I badly need LSI53C1030 LSI drivers (not Solaris mpt ones)
Thanks for the update Ian, Actually the Solaris mpt drivers panic for SCSI disks added on VMware enterprise server - Sol 10 - nodes, and using the LSI drivers seems to be the solution... that's why. Btw, if you feel that any other virtualization technique can act as a better base for implementing a Sun Cluster test bed, please let me know. I was also investigating xVM with Nevada build 76 and above, but it seems like it is still in its early stages, and I had lots of doubts about its working. Waiting to hear from you. With Warm Regards and Best Wishes, -- Chandan Maddanna
Re: [osol-discuss] Hi all, I badly need LSI53C1030 LSI drivers (not Solaris mpt ones)
Funny part is, I am able to locate the README text explaining the installation... lol... but not the driver package itself. Please, someone help, guys. -- Chandan
[osol-discuss] Hi all, I badly need LSI53C1030 LSI drivers (not Solaris mpt ones)
Hi Friends, I badly need the LSI53C1030 drivers, that is, the LSI-provided itmpt ones. I tried my best, but was not able to get the drivers from the LSI Logic site. Please, someone point me to them; it will be really helpful. With Warm Regards, -- Chandan Maddanna
Re: [osol-discuss] Does Solaris have a command similar to Linux "mount -o"
Thanks Frank, Yes, you can use the standard IMG format also, but I don't know about the mount-offset thing. Also, Casper's script is a better one, so you can use that too. And if you get to know about the offset usage, kindly share it with me. Thanks and Warm Regards, -- Chandan Maddanna On Dec 12, 2007 2:39 PM, Zhang, Frank F <[EMAIL PROTECTED]> wrote: > Chandan > Thanks for your detailed explanation, but my requirement is a little different from what you have done here: I want to mount an IMG file, not an ISO file, and I also want to specify the mount offset during mounting. > > Thanks! > Frank > > -Original Message- > From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Chandan Maddanna > Sent: December 12, 2007 16:50 > To: opensolaris-discuss@opensolaris.org > Subject: Re: [osol-discuss] Does Solaris have a command similar to Linux "mount -o" > > Dear Frank, > > I use this small script I had written for mounting ISO images. Hope it helps. > > Save the contents below in a file called "mountiso.sh": > > #!/usr/bin/bash > mkdir -p "$2" > dev=$(lofiadm -a "$1") > mount -F hsfs -o ro "$dev" "$2" && echo -e "\n\t I have mounted $1 under the folder $2\n\n" > > Then give this script executable permission from the command prompt: > > # chmod +x ./mountiso.sh > > and use this script to mount ISO images. > > For example, let's say you have an ISO image called "/opt/testfile1.iso" and you want to mount it under /testfilecontents. Then use the script as follows: > > # ./mountiso.sh /opt/testfile1.iso /testfilecontents > > and you are done; you will have all the contents of the ISO visible under the /testfilecontents folder. > > Best Regards, > > -- Chandan Maddanna
Re: [osol-discuss] Does Solaris have a command similar to Linux "mount -o"
Dear Frank, I use this small script I had written for mounting ISO images. Hope it helps. Save the contents below in a file called "mountiso.sh":

#!/usr/bin/bash
# mountiso.sh <iso-file> <mount-point>
mkdir -p "$2"
dev=$(lofiadm -a "$1")          # lofiadm prints the lofi device it allocated
mount -F hsfs -o ro "$dev" "$2" && echo -e "\n\t I have mounted $1 under the folder $2\n\n"

Then give this script executable permission from the command prompt:

# chmod +x ./mountiso.sh

and use this script to mount ISO images. For example, let's say you have an ISO image called "/opt/testfile1.iso" and you want to mount it under /testfilecontents. Then use the script as follows:

# ./mountiso.sh /opt/testfile1.iso /testfilecontents

and you are done; you will have all the contents of the ISO visible under the /testfilecontents folder. Best Regards, -- Chandan Maddanna
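For completeness, a matching cleanup sketch for the script above (the name umountiso.sh is my own invention; it assumes the ISO was attached with lofiadm as shown):

```shell
#!/usr/bin/bash
# umountiso.sh <iso-file> <mount-point>: undo mountiso.sh
umount "$2"        # unmount the hsfs filesystem first
lofiadm -d "$1"    # then detach the lofi device backing the ISO
```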
Re: [osol-discuss] Equivalent of OBP 'probe-scsi-all' from inside
Dear Kyle, The number of cables connected has nothing to do with the number of controllers that are visible. Even if you are connected through just one cable, the number of controllers visible will be the same. But it seems like what you are asking for is LUN masking: seeing some LUNs and not others. If this is what you want to do, then with normal entry-level storage arrays we will have to implement it on the client side (the Solaris box connected to the array). Hope this helps. -- Chandan Maddanna
Re: [osol-discuss] OpenSolaris with iSCSI LUNs and IPMP 3 x "1 GbE" eth SAN. Is it sensible?
Beautiful, crisp and elegant example. Thanks very, very much, James. -- Chandan Maddanna
Re: [osol-discuss] OpenSolaris with iSCSI LUNs and IPMP 3 x "1 GbE" eth SAN. Is it sensible?
Hi James, "That's not quite true. IPMP's inbound load spreading makes use of multiple data addresses in a group. When we make outbound connections to multiple peers and there are multiple data addresses, we'll intentionally round-robin select among those addresses to use as source addresses, each with a separate MAC address. That allows the return traffic to be spread among the available links. The other part of the picture is DNS. For spreading of inbound connections, you should insert all of the data addresses as IN A records for a single name, and configure your server so that it does round-robin responses. (If the peers are Solaris, disabling or configuring nscd may be necessary.)" Can you please tell me a bit more about this, that is, how IPMP can do inbound spreading? I saw in the documentation that test IPs are used explicitly for test pings and availability checks only, so I was not able to comprehend it. It would be helpful if you could help me understand. Humble Regards, -- Chandan Maddanna.
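The DNS part quoted above could look something like this (a hedged sketch of a BIND zone-file fragment; the host name "santarget" and the addresses are made-up examples):

```
; One IN A record per IPMP data address, all under a single name.
; With round-robin responses enabled on the server, different peers
; resolve the name to different addresses, spreading inbound traffic.
santarget    IN A 192.168.10.11
santarget    IN A 192.168.10.12
santarget    IN A 192.168.10.13
```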
Re: [osol-discuss] OpenSolaris with iSCSI LUNs and IPMP 3 x "1 GbE" eth SAN. Is it sensible?
Hey John, Thanks for the reply, "guru". Yes, I was about to test Sun Trunking, but that works only with Sun's hardware with hard-coded support, like ce, qfe etc., and not bge, pcn, rtls, e1000 etc. I found that the Nemo drivers work well with the other ones, and we can also aggregate more than four interfaces; with the trunking software the maximum is four.

The other thing I want to ask you: what strategy do I use on the switch? Because even managed switches (from Cisco etc.) do not aggregate ports on their side efficiently. So maybe for the switch I have to think of something else; please let me know your valuable suggestions on that part too. I am seeing how all your guidance is helping me tune and evolve my implementation.

If nothing works (please don't laugh), I am thinking of using a Unix box with some 6 or 8 GbE interfaces as the switch too, lol. Normal motherboards come with 3 PCI slots, and I will put the quad-port Gigabit Ethernet cards available from Intel in all three slots. If I aggregate three ports each, that makes a 4-port switch, with each port capable of doing 3 Gbps (that is, 6 Gbps counting both directions). Beautiful. :) We can also use Cat6 for cabling; that is not a problem and will easily support this speed, because after all it is three different cables working together.

I think we will be able to implement a very good iSCSI-based SAN, guys; thanks so much for all your help. Also, PGR is now supported from OpenSolaris build 74 onwards; Jim has already told me this and I too verified it. (Thanks Jim. For those who didn't know: Jim Dunham, from the Storage Platform Software Group at Sun.) Wow... I am floating... One way or the other I will get this done, and I will definitely make a note of the complete details of the commands and procedures and share them with you all for further tuning and evolution.
-- Chandan
Re: [osol-discuss] OpenSolaris with iSCSI LUNs and IPMP 3 x "1 GbE" eth SAN. Is it sensible?
Thanks Ux-Admin, I am working intensively on all of this, and I will also be conducting a test with the new Nemo drivers and dladm aggregation. So let me see how the aggregation works out. As far as I can see from their architecture, it should support inbound load spreading very well, because the virtual aggregated interface receives packets from all the interfaces in the group forming it; but I am not sure how re-ordering for TCP is handled, though. -- Chandan
Re: [osol-discuss] OpenSolaris with iSCSI LUNs and IPMP 3 x "1 GbE" eth SAN. Is it sensible?
First of all, thanks to all; I am getting so much valuable information here. Secondly, as of now I am thinking that on the transmitting end, where I am concerned with outbound load spreading, I will use an IPMP group. As James said, if there are three interfaces in the IPMP group, I will put an address on one, and the other two I will mark 0.0.0.0 and make them "up". So it does efficient outbound load spreading, round-robin style.

For the clients, at the receiving end, I am planning to use aggregated links. Something like this:

# dladm create-aggr -d e1000g0 -d e1000g1 -d e1000g2 1
# ifconfig aggr1 plumb 192.168.10.210 up

So now I expect "aggr1" to give me an *INBOUND* bandwidth of at least 2.5 to 2.6 Gbps, as all three (viz. e1000g0, e1000g1 and e1000g2) are GbE cards and very well supported by Solaris. (Please *NOTE*: I don't need outbound and inbound load spreading on the same machine.)

Now you all may be wondering why I am so concerned about the performance part. The reason is that I want to implement an iSCSI SAN which almost matches, or at least achieves something like 85% of, the performance of usual fabric SAN implementations. So you see, I want outbound load spreading and performance for the "iSCSI targets", and inbound load spreading and performance for the "iSCSI initiators". I am drawing a detailed architecture of how I want to proceed, and once that is done I will DEFINITELY share it with you guys, so that I may get valuable inputs and guidance from the "gurus" here.

But still, there is one problem with the iSCSI SAN replacement: our iSCSI SAN implementation with OpenSolaris does not support PGR (persistent SCSI reservations). This should not be a problem in normal cases, but it is definitely a problem if an application makes explicit use of this feature (e.g. Sun Cluster). Nice discussing with you people; I will keep you all updated.
-- Chandan Maddanna On Dec 10, 2007 7:20 PM, James Carlson <[EMAIL PROTECTED]> wrote: > Ceri Davies writes: > > On Sat, Dec 08, 2007 at 11:53:27PM -0800, Chandan Maddanna wrote: > > > Guys, to be more clear, look at the diagram below, and tell me how to get a single load-spread 2 Gbps link with IP multipathing on Solaris 10? > > > > > > Note: Please see the image below. > > > > > > http://img514.imageshack.us/img514/8928/ipmpmultipathinghz4.gif > > > > > > Now I don't want two public IPs; what I need is just one IP which utilizes the bandwidth provided by both NICs. Can anyone help me out, guys? > > > > > > Meaning I want an active-active configuration, I should use one IP on the client side, and it should make use of the bandwidth of both NICs. Can this be done, and how? Just outbound load spreading is enough for me, as there is not much inbound load generated, except a few SCSI commands through IP and such. > > > > You can't; there is no inbound load-spreading with IPMP. > > That's not quite true. > > IPMP's inbound load spreading makes use of multiple data addresses in a group. When we make outbound connections to multiple peers and there are multiple data addresses, we'll intentionally round-robin select among those addresses to use as source addresses, each with a separate MAC address. That allows the return traffic to be spread among the available links. > > The other part of the picture is DNS. For spreading of inbound connections, you should insert all of the data addresses as IN A records for a single name, and configure your server so that it does round-robin responses. (If the peers are Solaris, disabling or configuring nscd may be necessary.) > > As for outbound load spreading alone, as long as interfaces are marked "up" in the group, they'll be used.
> They don't all have to have addresses, and interfaces that are "up" but with a 0.0.0.0 address will be used for outbound load spreading, using source (data) addresses from other interfaces. > > -- > James Carlson, Solaris Networking <[EMAIL PROTECTED]> > Sun Microsystems / 35 Network Drive 71.232W Vox +1 781 442 2084 > MS UBUR02-212 / Burlington MA 01803-2757 42.496N Fax +1 781 442 1677
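James's point about 0.0.0.0 interfaces can be sketched concretely (a hypothetical example; the e1000g* names, the address, and the group name SANGroup are my own assumptions):

```shell
# One data address in the IPMP group...
ifconfig e1000g0 plumb 192.168.10.50 netmask 255.255.255.0 group SANGroup up

# ...and two more interfaces marked "up" with 0.0.0.0: they carry no
# data address of their own, but still join outbound load spreading,
# borrowing source (data) addresses from the other group members.
ifconfig e1000g1 plumb 0.0.0.0 group SANGroup up
ifconfig e1000g2 plumb 0.0.0.0 group SANGroup up
```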
Re: [osol-discuss] OpenSolaris with iSCSI LUNs and IPMP 3 x "1 GbE" eth SAN. Is it sensible?
Guys, to be more clear, look at the diagram below, and tell me how to get a single load-spread 2 Gbps link with IP multipathing on Solaris 10?

Note: Please see the image below.

http://img514.imageshack.us/img514/8928/ipmpmultipathinghz4.gif

Now I don't want two public IPs; what I need is just one IP which utilizes the bandwidth provided by both NICs. Can anyone help me out, guys? Meaning, I want an active-active configuration, I should use one IP on the client side, and it should make use of the bandwidth of both NICs. Can this be done, and how? Just outbound load spreading is enough for me, as there is not much inbound load generated, except a few SCSI commands through IP and such. -- With Warm Regards, Chandan Maddanna. Unix Services, Bangalore, CSC India.
Re: [osol-discuss] OpenSolaris with iSCSI LUNs and IPMP 3 x "1 GbE" eth SAN. Is it sensible?
Okay guys, it seems John pointed me in the right direction and I found the solution, but I just want confirmation, with a "this will work" stamp, from the gurus here, and I also request their suggestions. :) I am floating in air. So the solution I found is something like this:

Use an IPMP group on the iSCSI target to load-spread outbound traffic; it increases the amount of data that can be transmitted at a time. (I am looking at figures like 2.2 Gbps and above from an IPMP group of three 1 GbE NICs.)

Use multipathing (MPxIO) to import the same LUN from two different IPs (each IP attached to a separate NIC), so the inbound traffic will also be load-spread across the interfaces from which I import.

Now the only thing left is iSCSI PGR. Is this enabled, guys? Please, someone at least tell me if this is being developed, because I am ready to wait if it is going to be out in another 20 or 30 days or so. :) Waiting for all your guidance. -- Chandan Maddanna, Unix Services, Bangalore, CSC India.
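A rough sketch of the initiator side of this plan (the portal addresses are made-up examples; I am assuming Solaris iscsiadm sendtargets discovery, with MPxIO enabled for iSCSI, i.e. mpxio-disable="no" in /kernel/drv/iscsi.conf):

```shell
# Point discovery at the same target through two portals,
# one per NIC/subnet path, so MPxIO sees two paths to the LUN.
iscsiadm add discovery-address 192.168.10.50:3260
iscsiadm add discovery-address 192.168.20.50:3260
iscsiadm modify discovery --sendtargets enable

# Verify that the LUN shows up once, with multiple operational paths.
mpathadm list lu
```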
Re: [osol-discuss] OpenSolaris with iSCSI LUNs and IPMP 3 x "1 GbE" eth SAN. Is it sensible?
Thanks John, Yes, you are right; now I have an active-active IPMP group "SANGroup" of three interfaces (e1000g*, 1 GbE each). So in simple words, what I want to know is the effective speed of inbound traffic. I am sure of the outbound traffic, as I have seen it in action: it reaches approximately 2.6 Gbps (actually somewhere between 2.2 and 2.6 Gbps). Will I get a similar performance gain for inbound traffic? 99% of the traffic for nodes that import the LUN is inbound, and only SCSI commands and wrappers form the outbound traffic. Please let me know whether what I am predicting above is true. Thanks all for the help. Warm Regards, -- Chandan Maddanna, Unix Services, CSC India.
[osol-discuss] OpenSolaris with iSCSI LUNs and IPMP 3 x "1 GbE" eth SAN. Is it sensible?
Hi Guys, I just received a new request to provide a SAN for a new project. I was thinking of doing the following; please let me know if it is feasible: OpenSolaris + iSCSI LUN setup + an IPMP group of three 1 GbE Ethernet cards with a group IP and load spreading. But there are two things I am really worried about.

1.) I am trying the IPMP group for performance. "Outbound" load spreading I am not worried about, as I have seen it in action myself and it is quite efficient. But does inbound load spreading work with IPMP, and will it have the same effect? Because if it doesn't, then it introduces a bottleneck at the node which imports and makes use of the LUN.

2.) What happened to the PGR reservation bits? Is this done or still in progress? If in development, which build is it aimed at? This opens a lot of doorways and also completes a cheap replacement for real SANs.

Please help me out guys. I am particularly concerned about performance, and also about applications which make use of persistent reservations and whether they fail with this SAN implementation. Regards, -- Chandan Maddanna, Unix Services, CSC India.