Re: [zfs-discuss] COMSTAR iSCSI and two Windows computers

2010-06-18 Thread Brent Jones
On Thu, Jun 17, 2010 at 10:44 PM, Giovanni giof...@gmail.com wrote: Hi guys, I wanted to ask how I could set up an iSCSI device to be shared by two computers concurrently; by that I mean sharing files as if it were an NFS share, but using iSCSI instead. I tried to set up iSCSI on both computers and

Re: [zfs-discuss] COMSTAR iSCSI and two Windows computers

2010-06-18 Thread Arve Paalsrud
NTFS is not a clustered file system and thus can't handle multiple clients accessing the data concurrently. You could use MelioFS if you're Windows-based; it handles metadata updates and locking between the accessing nodes so an NTFS disk can be shared between them - even over iSCSI. MelioFS:

Re: [zfs-discuss] COMSTAR iSCSI and two Windows computers

2010-06-18 Thread Arve Paalsrud
And, if you're using iSCSI on top of ZFS and want shared access, take a look at GlusterFS, which you can run in front of multiple ZFS nodes as the access point. GlusterFS: http://www.gluster.org/ -Arve
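For reference, a minimal COMSTAR sketch for exporting a ZFS zvol as an iSCSI LUN from OpenSolaris (pool, dataset, and device names below are only examples, and a plain LUN like this still cannot be safely mounted by two NTFS clients at once):

    # enable the COMSTAR framework and the iSCSI target port provider
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default

    # create a 100 GB zvol to back the LUN
    zfs create -V 100g tank/xenlun

    # register the zvol as a SCSI logical unit and expose it to initiators
    stmfadm create-lu /dev/zvol/rdsk/tank/xenlun
    stmfadm add-view <GUID-printed-by-create-lu>

    # create an iSCSI target that initiators can log in to
    itadm create-target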

[zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread artiepen
Well, I've searched my brains out and I can't seem to find a reason for this. I'm getting bad to mediocre performance with my new test storage device. I've got 24 1.5 TB disks with 2 SSDs configured as a ZIL log device. I'm using the Areca RAID controller, the driver being arcmsr. Quad core AMD

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Pasi Kärkkäinen
On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote: Well, I've searched my brains out and I can't seem to find a reason for this. I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the

Re: [zfs-discuss] OCZ Devena line of enterprise SSD

2010-06-18 Thread Pasi Kärkkäinen
On Thu, Jun 17, 2010 at 09:58:25AM -0700, Ray Van Dolson wrote: On Thu, Jun 17, 2010 at 09:54:59AM -0700, Ragnar Sundblad wrote: On 17 Jun 2010, at 18.17, Richard Jahnel wrote: The EX specs page does list the supercap. The Pro specs page does not. They do for both on the

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Robert Milkowski
On 18/06/2010 00:18, Garrett D'Amore wrote: On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote: On the SS7000 series, you get an alert that the enclosure has been detached from the system. The fru-monitor code (generalization of the disk-monitor) that generates this sysevent has not

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Thomas Burgess
On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen pa...@iki.fi wrote: On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote: Well, I've searched my brains out and I can't seem to find a reason for this. I'm getting bad to medium performance with my new test storage device. I've got 24

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Pasi Kärkkäinen
On Fri, Jun 18, 2010 at 04:52:02AM -0400, Curtis E. Combs Jr. wrote: I am new to zfs, so I am still learning. I'm using zpool iostat to measure performance. Would you say that smaller raidz2 sets would give me more reliable and better performance? I'm willing to give it a shot... Yes, more

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread artiepen
40 MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, and 6 almost 10x as many times as I see 40 MB/sec. It really only bumps up to 40 very rarely. As for random vs. sequential: correct me if I'm wrong, but if I used dd to make files from /dev/zero, wouldn't that be
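For what it's worth, a rough sequential test along those lines might look like the sketch below (paths are illustrative; note that /dev/zero data compresses and dedups trivially, so compression and dedup should be off on the test dataset for the numbers to mean much):

    # rough sequential write: ~10 GB of zeroes
    dd if=/dev/zero of=/zpool1/fs/testfile bs=1M count=10240

    # rough sequential read of the same file
    dd if=/zpool1/fs/testfile of=/dev/null bs=1M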

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Pasi Kärkkäinen
On Fri, Jun 18, 2010 at 05:15:44AM -0400, Thomas Burgess wrote: On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen [1]pa...@iki.fi wrote: On Fri, Jun 18, 2010 at 01:26:11AM -0700, artiepen wrote: Well, I've searched my brains out and I can't seem to find a reason for this.

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread artiepen
Yes, and I apologize for the basic nature of these questions. Like I said, I'm pretty wet behind the ears with ZFS. The MB/sec metric comes from dd, not zpool iostat. zpool iostat usually gives me units of k. I think I'll try with smaller RAID sets and come back to the thread. Thanks, all -- This

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Ian Collins
On 06/18/10 09:21 PM, artiepen wrote: This is a test system. I'm wondering, now, if I should just reconfigure with maybe 7 disks and add another spare. Seems to be the general consensus that bigger RAID pools = worse performance. I thought the opposite was true... No, wider vdevs give

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Pasi Kärkkäinen
On Fri, Jun 18, 2010 at 02:21:15AM -0700, artiepen wrote: 40 MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, and 6 almost 10x as many times as I see 40 MB/sec. It really only bumps up to 40 very rarely. As for random vs. sequential: correct me if I'm wrong, but

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Arne Jansen
Curtis E. Combs Jr. wrote: Sure. And hey, maybe I just need some context to know what's normal IO for the zpool. It just...feels...slow, sometimes. It's hard to explain. I attached a log of iostat -xn 1 while doing mkfile 10g testfile on the zpool, as well as your dd with the bs set really

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Arne Jansen
Curtis E. Combs Jr. wrote: Um...I started 2 commands in 2 separate ssh sessions: in ssh session one: iostat -xn 1 stats in ssh session two: mkfile 10g testfile when the mkfile was finished i did the dd command... on the same zpool1 and zfs filesystem..that's it, really No, this doesn't

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Joerg Schilling
artiepen ceco...@uga.edu wrote: 40 MB/sec is the best that it gets. Really, the average is 5. I see 4, 5, 2, and 6 almost 10x as many times as I see 40 MB/sec. It really only bumps up to 40 very rarely. I get read/write speeds of approx. 630 MB/s into ZFS on a SunFire X4540. It seems that you

[zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Sendil
Hi. I currently have 400+ users with quotas set to a 500 MB limit. The file system is currently Veritas (VxFS). I am planning to migrate all these home directories to a new server with ZFS. How can I migrate the quotas? I could create 400+ file systems, one for each user, but will this affect

Re: [zfs-discuss] Monitoring filessytem access

2010-06-18 Thread Andreas Grüninger
Here is a DTrace script based on one of the examples for the nfs provider. Especially useful when you use NFS for ESX or other hypervisors. Andreas

#!/usr/sbin/dtrace -s
#pragma D option quiet

inline int TOP_FILES = 50;

dtrace:::BEGIN
{
    printf("Tracing... Hit Ctrl-C to end.\n");

[zfs-discuss] lsiutil for mpt_sas

2010-06-18 Thread Jeff Bacon
Is there a version of lsiutil that works for the LSI2008 controllers? I have a mix of both, and lsiutil is nifty, but not as nifty if it only works on half my controllers. :) -- This message posted from opensolaris.org ___ zfs-discuss mailing list

[zfs-discuss] WD caviar/mpt issues

2010-06-18 Thread Jeff Bacon
I know that this has been well-discussed already, but it's been a few months - WD Caviars with mpt/mpt_sas generating lots of retryable read errors, spitting out lots of the beloved "Log info 3108 received for target" messages, and just generally not working right. (SM 836EL1 and 836TQ chassis

Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread David Magda
On Fri, June 18, 2010 08:29, Sendil wrote: I could create 400+ file systems, one for each user, but will this affect my system performance during system boot up? Is this recommended, or is an alternative available for this issue? You can create a dataset for each user, and then set a per-dataset
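A minimal sketch of the per-user-dataset approach, with made-up pool and user names:

    # one dataset per user, each with its own 500 MB quota
    zfs create tank/home
    for u in alice bob carol; do
        zfs create tank/home/$u
        zfs set quota=500m tank/home/$u
    done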

Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Arne Jansen
David Magda wrote: On Fri, June 18, 2010 08:29, Sendil wrote: I could create 400+ file systems, one for each user, but will this affect my system performance during system boot up? Is this recommended, or is an alternative available for this issue? You can create a dataset for each user, and

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Eric Schrock
On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote: On 18/06/2010 00:18, Garrett D'Amore wrote: On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote: On the SS7000 series, you get an alert that the enclosure has been detached from the system. The fru-monitor code (generalization of

[zfs-discuss] Question : Sun Storage 7000 dedup ratio per share

2010-06-18 Thread ???
Dear All: Under the Sun Storage 7000 system, can we see a per-share ratio after enabling the dedup function? We would like to see each share's dedup ratio in detail. The Web GUI only shows the dedup ratio for the entire storage pool. Thanks a lot, -- Rex -- This message posted from opensolaris.org

Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Mike Gerdts
On Fri, Jun 18, 2010 at 8:09 AM, David Magda dma...@ee.ryerson.ca wrote: You could always split things up into groups of (say) 50. A few jobs ago, I was in an environment where we had a /home/students1/ and /home/students2/, along with a separate faculty/ (using Solaris and UFS). This had

Re: [zfs-discuss] VXFS to ZFS Quota

2010-06-18 Thread Cindy Swearingen
P.S. User/group quotas are available in Solaris 10, starting with the Solaris 10 10/09 release: http://docs.sun.com/app/docs/doc/819-5461/gazvb?l=ena=view Thanks, Cindy On 06/18/10 07:09, David Magda wrote: On Fri, June 18, 2010 08:29, Sendil wrote: I could create 400+ file systems
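A minimal sketch of that per-user quota style on a single dataset (names are examples; this assumes the zpool/zfs versions that ship with Solaris 10 10/09 or a comparable OpenSolaris build):

    # per-user quota without creating one filesystem per user
    zfs set userquota@alice=500m tank/home

    # check limits and usage
    zfs get userquota@alice tank/home
    zfs userspace tank/home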

Re: [zfs-discuss] hot detach of disks, ZFS and FMA integration

2010-06-18 Thread Garrett D'Amore
On Fri, 2010-06-18 at 09:07 -0400, Eric Schrock wrote: On Jun 18, 2010, at 4:56 AM, Robert Milkowski wrote: On 18/06/2010 00:18, Garrett D'Amore wrote: On Thu, 2010-06-17 at 18:38 -0400, Eric Schrock wrote: On the SS7000 series, you get an alert that the enclosure has been

Re: [zfs-discuss] COMSTAR iSCSI and two Windows computers

2010-06-18 Thread Giovanni
Thanks guys - I will take a look at those clustered file systems. My goal is not to stick with Windows - I would like to have a storage pool for XenServer (free) so that I can have guests, but using a storage server (OpenSolaris - ZFS) as the iSCSI storage pool. Any suggestions for the added

[zfs-discuss] raid-z - not even iops distribution

2010-06-18 Thread Robert Milkowski
Hi,

zpool create test raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 \
                  raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 \
                  raidz c0t2d0 c1t2d0 c2t2d0 c3t2d0 \
                  raidz c0t3d0 c1t3d0 c2t3d0 c3t3d0 \
                  [...]
                  raidz c0t10d0 c1t10d0 c2t10d0
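The uneven distribution can be watched per vdev and per leaf disk while the workload runs; a sketch, using the pool name from the command above:

    # one-second samples, broken out by vdev and device
    zpool iostat -v test 1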

Re: [zfs-discuss] Question : Sun Storage 7000 dedup ratio per share

2010-06-18 Thread Robert Milkowski
On 18/06/2010 14:47, ??? wrote: Dear All: Under the Sun Storage 7000 system, can we see a per-share ratio after enabling the dedup function? We would like to see each share's dedup ratio in detail. The Web GUI only shows the dedup ratio for the entire storage pool. Since dedup works across all datasets with
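On the CLI side, a quick sketch of what is exposed today (pool and share names are only examples): the ratio is a pool-level property, while the per-dataset property only says whether dedup is enabled, not how well it is working for that share:

    zpool get dedupratio pool-0
    zfs get dedup pool-0/local/share1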

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
I am new to ZFS, so I am still learning. I'm using zpool iostat to measure performance. Would you say that smaller raidz2 sets would give me more reliable and better performance? I'm willing to give it a shot... On Fri, Jun 18, 2010 at 4:42 AM, Pasi Kärkkäinen pa...@iki.fi wrote: On Fri, Jun 18,

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
I also have a dtrace script that I found that supposedly gives a more accurate reading. Usually, though, its output is very close to what zpool iostat says. Keep in mind this is a test environment; there's no production here, so I can make and destroy the pools as much as I want to play around

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
Yeah. I did bs sizes from 8 to 512k with counts from 256 on up. I just added zeros to the count to try to test performance for larger files. I didn't notice any difference at all, either with the dtrace script or zpool iostat. Thanks for your help, btw. On Fri, Jun 18, 2010 at 5:30 AM, Pasi

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
Sure. And hey, maybe I just need some context to know what's normal IO for the zpool. It just...feels...slow, sometimes. It's hard to explain. I attached a log of iostat -xn 1 while doing mkfile 10g testfile on the zpool, as well as your dd with the bs set really high. When I Ctrl-C'ed the dd it

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
Um... I started 2 commands in 2 separate ssh sessions: in ssh session one: iostat -xn 1 stats; in ssh session two: mkfile 10g testfile. When the mkfile was finished I did the dd command... on the same zpool1 and ZFS filesystem... that's it, really. On Fri, Jun 18, 2010 at 6:06 AM, Arne Jansen

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Curtis E. Combs Jr.
Oh! Yes. Dedup. Not compression, but dedup, yes. On Fri, Jun 18, 2010 at 6:30 AM, Arne Jansen sensi...@gmx.net wrote: Curtis E. Combs Jr. wrote: Um... I started 2 commands in 2 separate ssh sessions: in ssh session one: iostat -xn 1 stats; in ssh session two: mkfile 10g testfile when the

Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Linder, Doug
Another thing that Gmail does that I find infuriating is that it mucks with the formatting. For some reason it - and, to be fair, Outlook as well - seems to think that it knows how a message needs to be formatted better than I do. Try doing inline quoting/response with Outlook, where you quote

Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Geoff Nordli
-Original Message- From: Linder, Doug Sent: Friday, June 18, 2010 12:53 PM Try doing inline quoting/response with Outlook, where you quote one section, reply, quote again, etc. It's impossible. You can't split up the quoted section to add new text - no way, no how. Very infuriating.

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Cindy Swearingen
Hi Curtis, You might review the ZFS best practices info to help you determine the best pool configuration for your environment: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide If you're considering using dedup, particularly on a 24T pool, then review the current known
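One quick way to gauge whether dedup is viable on a pool this size is to look at the dedup table (DDT) itself; a sketch, assuming the pool is named zpool1 (each unique block tracked in the DDT costs very roughly a few hundred bytes of RAM/L2ARC to keep in core):

    # print DDT statistics and a histogram of reference counts
    zdb -DD zpool1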

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread artiepen
Thank you, all of you, for the super helpful responses; this is probably one of the most helpful forums I've been on. I've been working with ZFS on some SunFires for a little while now, in prod, and the testing environment with oSol is going really well. I love it. Nothing even comes close. If

Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Fredrich Maney
On Fri, Jun 18, 2010 at 3:52 PM, Linder, Doug doug.lin...@merchantlink.com wrote: Another thing that Gmail does that I find infuriating, is that it mucks with the formatting. For some reason it, and to be fair, Outlook as well, seem to think that they know how a message needs to be formatted

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Cindy Swearingen
If the device driver generates or fabricates device IDs, then moving devices around is probably okay. I recall the Areca controllers are problematic when it comes to moving devices under pools. Maybe someone with first-hand experience can comment. Consider exporting the pool first, moving the
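A sketch of that sequence (pool name is only an example):

    # quiesce and export before recabling or moving disks
    zpool export zpool1
    # ...move the devices, then...
    zpool import            # scan and list importable pools
    zpool import zpool1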

Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Linder, Doug
People still use Outhouse? Really?! Next you'll be suggesting that some people still put up with Internet Exploder... ;-) Those of us who are literally forced to use it aren't too happy. Nor am I happy with the giant stupid signature that gets tacked on that you all have to trim when you

Re: [zfs-discuss] Please trim posts

2010-06-18 Thread Marion Hakanson
doug.lin...@merchantlink.com said: Apparently, before Outlook there WERE no meetings, because it's clearly impossible to schedule one without it. Don't tell my boss, but I use Outlook for the scheduling, and fetchmail plus procmail to download email out of Exchange and into my favorite email

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Freddie Cash
On Fri, Jun 18, 2010 at 1:52 AM, Curtis E. Combs Jr. ceco...@uga.edu wrote: I am new to zfs, so I am still learning. I'm using zpool iostat to measure performance. Would you say that smaller raidz2 sets would give me more reliable and better performance? I'm willing to give it a shot... A ZFS
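For example, the same 24 drives laid out as four 6-disk raidz2 vdevs rather than one or two very wide vdevs might look like this sketch (controller/target numbers are made up):

    zpool create tank \
        raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 \
        raidz2 c0t6d0 c0t7d0 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
        raidz2 c1t4d0 c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0 \
        raidz2 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 c2t7d0 \
        log mirror c3t0d0 c3t1d0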

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Thomas Burgess
On Fri, Jun 18, 2010 at 6:34 AM, Curtis E. Combs Jr. ceco...@uga.edu wrote: Oh! Yes. Dedup. Not compression, but dedup, yes. Dedup may be your problem... it requires some heavy RAM and/or a decent L2ARC from what I've been reading. ___ zfs-discuss
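If the box has a spare SSD, adding it as L2ARC (or simply switching dedup off for the test dataset) is easy to try; a sketch with example device and dataset names:

    # add an SSD as a cache (L2ARC) device
    zpool add zpool1 cache c4t0d0

    # or take dedup out of the picture for the test filesystem
    zfs set dedup=off zpool1/fs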

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Sandon Van Ness
Sounds to me like something is wrong, as on my 20-disk backup machine with 20 1TB disks in a single raidz2 vdev I get the following with dd on sequential reads/writes: writes: r...@opensolaris: 11:36 AM :/data# dd bs=1M count=10 if=/dev/zero of=./100gb.bin 10+0 records in 10+0 records

[zfs-discuss] ZFS pool - label missing on invalid

2010-06-18 Thread Cott Lang
I split a mirror to reconfigure and recopy it. I detached one drive, reconfigured it ... all after unplugging the remaining pool drive during a shutdown to verify no accidents could happen. Later, I tried to import the original pool from the drive (now plugged back in), only to be greeted

Re: [zfs-discuss] ZFS pool - label missing on invalid

2010-06-18 Thread Frank Cusack
On 6/18/10 9:46 PM -0700 Cott Lang wrote: I split a mirror to reconfigure and recopy it. I detached one drive, reconfigured it ... all after unplugging the remaining pool drive during a shutdown to verify no accidents could happen. By detach, do you mean that you ran 'zpool detach'?
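The distinction matters here; a sketch of the two operations with example names (zpool split appeared in recent OpenSolaris builds, roughly b131 onward):

    # detach removes one side of a mirror and invalidates its labels,
    # so the detached disk normally cannot later be imported as a pool by itself
    zpool detach mypool c0t1d0

    # split peels one side of each mirror off into a new, importable pool
    zpool split mypool mypool2
    zpool import mypool2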

Re: [zfs-discuss] Erratic behavior on 24T zpool

2010-06-18 Thread Arne Jansen
Sandon Van Ness wrote: Sounds to me like something is wrong, as on my 20-disk backup machine with 20 1TB disks in a single raidz2 vdev I get the following with dd on sequential reads/writes: writes: r...@opensolaris: 11:36 AM :/data# dd bs=1M count=10 if=/dev/zero of=./100gb.bin 10+0