Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-24 Thread Ian Collins
Andrew Gabriel wrote: Ian Collins wrote: I don't see the 5 second bursty behaviour described in the bug report. It's more like 5 second interval gaps in the network traffic while the data is written to disk. That is exactly the issue. When the zfs recv data has been written, zfs recv

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-24 Thread Andrew Gabriel
Ian Collins wrote: I've just finished a small application to couple zfs_send and zfs_receive through a socket to remove ssh from the equation, and the speedup is better than 2x. I have a small (140K) buffer on the sending side to ensure the minimum number of sent packets. The times I get
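For illustration, a minimal sketch of the kind of ssh-free send/receive pipeline described here, assuming nc is available on both hosts; the host, dataset, snapshot and port names are hypothetical, and nc option syntax and EOF behaviour vary between implementations:

  # On the receiving host: listen on a TCP port and feed the stream to zfs receive
  nc -l 9090 | zfs receive -F tank/backup

  # On the sending host: stream the snapshot to the receiver
  zfs send tank/data@snap1 | nc recvhost 9090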

Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread Darren J Moffat
Kam wrote: Posted for my friend Marko: I've been reading up on ZFS with the idea to build a home NAS. My ideal home NAS would have: - high performance via striping - fault tolerance with selective use of multiple copies attribute - cheap by getting the most efficient space utilization
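For reference, the multiple-copies attribute mentioned here is a per-dataset ZFS property; a hedged example, with a hypothetical dataset name:

  # Keep two copies of every block in this dataset, on top of whatever
  # redundancy the pool itself provides
  zfs set copies=2 tank/photos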

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-11-24 Thread Ross
Yeah, it's not really 'easy', but certainly better than nothing. I would imagine it would be possible to write a script that could link all three drive identifiers, and from there it would be relatively simple to create a chart adding the physical location too. I agree with Tim that we really

Re: [zfs-discuss] Supermicro AOC-USAS-L8i

2008-11-24 Thread James C. McPherson
On Sat, 22 Nov 2008 10:42:51 -0800 (PST) Asa Durkee [EMAIL PROTECTED] wrote: My Supermicro H8DA3-2's onboard 1068E SAS chip isn't recognized in OpenSolaris, and I'd like to keep this particular system all Supermicro, so the L8i it is. I know there have been issues with Supermicro-branded

Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread BJ Quinn
Here's an idea - I understand that I need rsync on both sides if I want to minimize network traffic. What if I don't care about that - the entire file can come over the network, but I specifically only want rsync to write the changed blocks to disk. Does rsync offer a mode like that?

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Scara Maccai
Why would it be assumed to be a bug in Solaris? Seems more likely on balance to be a problem in the error reporting path or a controller/ firmware weakness. Weird: they would use a controller/firmware that doesn't work? Bad call... I'm pretty sure the first 2 versions of this demo I

Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Bob Friesenhahn
On Mon, 24 Nov 2008, BJ Quinn wrote: Here's an idea - I understand that I need rsync on both sides if I want to minimize network traffic. What if I don't care about that - the entire file can come over the network, but I specifically only want rsync to write the changed blocks to disk.

Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Erik Trimble
Bob Friesenhahn wrote: On Mon, 24 Nov 2008, BJ Quinn wrote: Here's an idea - I understand that I need rsync on both sides if I want to minimize network traffic. What if I don't care about that - the entire file can come over the network, but I specifically only want rsync to write

Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Bob Friesenhahn
On Mon, 24 Nov 2008, Erik Trimble wrote: One note here for ZFS users: On ZFS (or any other COW filesystem), rsync unfortunately does NOT do the Right Thing when syncing an existing file. From ZFS's standpoint, the most efficient way would be merely to rewrite the changed blocks, thus

Re: [zfs-discuss] ZFS space efficiency when copying files from another source

2008-11-24 Thread Albert Chin
On Mon, Nov 24, 2008 at 08:43:18AM -0800, Erik Trimble wrote: I _really_ wish rsync had an option to copy in place or something like that, where the updates are made directly to the file, rather than a temp copy. Isn't this what --inplace does? -- albert chin ([EMAIL PROTECTED])

Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-11-24 Thread Al Tobey
Rsync can update in-place. From rsync(1): --inplace update destination files in-place.

Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread marko b
At this point, this IS an academic exercise. I've tried to outline the motivations/justifications for wanting this particular functionality. I believe my architectural "why not?" and "is it possible?" questions are sufficiently valid. It's not about disk cost. It's about being able to grow the pool

Re: [zfs-discuss] ZFS space efficiency when copying files from

2008-11-24 Thread Erik Trimble
Al Tobey wrote: Rsync can update in-place. From rsync(1): --inplace update destination files in-place Whee! This is now newly working (for me). I've been using an older rsync, where this option didn't work properly on ZFS. It looks like this was fixed on newer
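For reference, a hedged example of an in-place sync; the paths are hypothetical and only --inplace is the point here:

  # Update destination files in place instead of writing a temporary copy
  # and renaming it over the original
  rsync -av --inplace /data/source/ /tank/backup/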

Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread Tim
On Mon, Nov 24, 2008 at 11:41 AM, marko b [EMAIL PROTECTED] wrote: At this point, this IS an academic exercise. I've tried to outline the motivations/justifications for wanting this particular functionality. I believe my architectural "why not?" and "is it possible?" questions are sufficiently

Re: [zfs-discuss] 'zfs recv' is very slow

2008-11-24 Thread Ian Collins
Andrew Gabriel wrote: Ian Collins wrote: I've just finished a small application to couple zfs_send and zfs_receive through a socket to remove ssh from the equation, and the speedup is better than 2x. I have a small (140K) buffer on the sending side to ensure the minimum number of sent

[zfs-discuss] Stuck pool

2008-11-24 Thread Ian Collins
I have a pool on a USB stick that has become stuck again. Any ZFS command on that pool will hang. Is there any worthwhile debugging information I can collect before rebooting the box (which might not help - the pool was stuck before I rebooted and it's still stuck now)? -- Ian.
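For illustration, the kind of state that is often worth capturing before a reboot, assuming mdb and fmdump are available on the box; a sketch, not a prescribed procedure:

  # Save kernel thread stacks so the hung zfs/zpool threads can be examined later
  echo "::threadlist -v" | mdb -k > /var/tmp/threadlist.txt
  # Save any device-level error telemetry recorded by the fault manager
  fmdump -eV > /var/tmp/fmdump-e.txt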

Re: [zfs-discuss] HELP! SNV_97, 98, 99 zfs with iscsitadm and VMWare!

2008-11-24 Thread Tano
Nigel, I have sent you an email with the output that you were looking for. Once a solution has been discovered I'll post it on here so everyone can see. Tano

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Toby Thain
On 24-Nov-08, at 10:40 AM, Scara Maccai wrote: Why would it be assumed to be a bug in Solaris? Seems more likely on balance to be a problem in the error reporting path or a controller/ firmware weakness. Weird: they would use a controller/firmware that doesn't work? Bad call... Seems

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Will Murnane
On Mon, Nov 24, 2008 at 10:40, Scara Maccai [EMAIL PROTECTED] wrote: Still don't understand why even the one on http://www.opensolaris.com/, ZFS – A Smashing Hit, doesn't show the app running in the moment the HD is smashed... weird... ZFS is primarily about protecting your data: correctness,

Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread Darren J Moffat
marko b wrote: At this point, this IS an academic exercise. I've tried to outline the motivations/justifications for wanting this particular functionality. I believe my architectural "why not?" and "is it possible?" questions are sufficiently valid. It's not about disk cost. It's about being

Re: [zfs-discuss] ZFS fragmentation with MySQL databases

2008-11-24 Thread Richard Elling
Luke Lonergan wrote: Actually, it does seem to work quite well when you use a read optimized SSD for the L2ARC. In that case, random read workloads have very fast access, once the cache is warm. One would expect so, yes. But the usefulness of this is limited to the cases where the
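For reference, attaching a read-optimized SSD as L2ARC is a single pool operation; a hedged example with hypothetical pool and device names:

  # Add the SSD as a cache (L2ARC) device; it warms up as reads are serviced
  zpool add tank cache c3t0d0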

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread C. Bergström
Will Murnane wrote: On Mon, Nov 24, 2008 at 10:40, Scara Maccai [EMAIL PROTECTED] wrote: Still don't understand why even the one on http://www.opensolaris.com/, ZFS – A Smashing Hit, doesn't show the app running in the moment the HD is smashed... weird... Sorry this is OT, but is

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Scara Maccai
if a disk vanishes like a sledgehammer hit it, ZFS will wait on the device driver to decide it's dead. OK I see it. That said, there have been several threads about wanting configurable device timeouts handled at the ZFS level rather than the device driver level. Uh, so I can

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Moore, Joe
C. Bergström wrote: Will Murnane wrote: On Mon, Nov 24, 2008 at 10:40, Scara Maccai [EMAIL PROTECTED] wrote: Still don't understand why even the one on http://www.opensolaris.com/, ZFS - A Smashing Hit, doesn't show the app running in the moment the HD is smashed... weird...

Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread Kam
Are there any performance penalties incurred by mixing vdevs? Say you start with a raidz1 with three 500GB disks. Then over time you add a mirror with two 1TB disks.
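For illustration, the kind of expansion being asked about, with hypothetical device names:

  # Original pool: a single three-disk raidz1 vdev
  zpool create tank raidz1 c1t0d0 c1t1d0 c1t2d0
  # Later: add a two-disk mirror as a second vdev; ZFS stripes new writes
  # across both vdevs
  zpool add tank mirror c2t0d0 c2t1d0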

[zfs-discuss] ESX integration

2008-11-24 Thread Ahmed Kamal
Hi, Not sure if this is the best place to ask, but do Sun's new Amber road storage boxes have any kind of integration with ESX? Most importantly, quiescing the VMs, before snapshotting the zvols, and/or some level of management integration thru either the web UI or ESX's console ? If there's

[zfs-discuss] replacing disk

2008-11-24 Thread Krzys
Somehow I have an issue replacing my disk.
[20:09:29] [EMAIL PROTECTED]: /root zpool status mypooladas
  pool: mypooladas
 state: DEGRADED
status: One or more devices could not be opened. Sufficient replicas exist for the pool to continue functioning in a degraded state.
action: Attach
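For context, a replacement in this situation usually looks something like the following; the device names here are hypothetical, and the real ones come from the zpool status output:

  # Replace the unopenable device with a new disk in the same slot
  zpool replace mypooladas c1t4d0
  # Or, if the replacement lives at a different target
  zpool replace mypooladas c1t4d0 c2t4d0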

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Miles Nordin
tt == Toby Thain [EMAIL PROTECTED] writes: tt Why would it be assumed to be a bug in Solaris? Seems more tt likely on balance to be a problem in the error reporting path tt or a controller/ firmware weakness. It's not really an assumption. It's been discussed in here a lot, and we

Re: [zfs-discuss] `zfs list` doesn't show my snapshot

2008-11-24 Thread Simon Breden
To be honest, I haven't considered the ease-of-use aspects of listing file systems and/or snapshots, simply that the way it is now is preferable (to me) than how it used to be. But you could see what others think perhaps. Yes, I think when a system is evolving it can be confusing to see cases
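For reference, the two relevant knobs for the listing behaviour being discussed (the pool name is hypothetical):

  # List snapshots explicitly
  zfs list -t snapshot
  # Or restore the old behaviour of including snapshots in a plain 'zfs list'
  zpool set listsnapshots=on tank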

Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread marko b
Darren, Perhaps I misspoke when I said that it wasn't about cost. It is _partially_ about cost. The monetary cost of drives isn't a major concern, at about $110-150 each. The loss of efficiency (50% for mirroring, 25% for raidz1) is a concern. The expense of SATA bays, either in a single chassis or an external

Re: [zfs-discuss] So close to better, faster, cheaper....

2008-11-24 Thread Tim
On Mon, Nov 24, 2008 at 4:04 PM, marko b [EMAIL PROTECTED] wrote: Darren, Perhaps I misspoke when I said that it wasn't about cost. It is _partially_ about cost. The monetary cost of drives isn't a major concern, at about $110-150 each. The loss of efficiency (50% for mirroring, 25% for raidz1) is a

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Toby Thain
On 24-Nov-08, at 3:49 PM, Miles Nordin wrote: tt == Toby Thain [EMAIL PROTECTED] writes: tt Why would it be assumed to be a bug in Solaris? Seems more tt likely on balance to be a problem in the error reporting path tt or a controller/ firmware weakness. It's not really an

Re: [zfs-discuss] ESX integration

2008-11-24 Thread Tim
On Mon, Nov 24, 2008 at 2:22 PM, Ahmed Kamal [EMAIL PROTECTED] wrote: Hi, Not sure if this is the best place to ask, but do Sun's new Amber road storage boxes have any kind of integration with ESX? Most importantly, quiescing the VMs, before snapshotting the zvols, and/or some level of

Re: [zfs-discuss] ESX integration

2008-11-24 Thread David Magda
On Nov 24, 2008, at 17:32, Tim wrote: On Mon, Nov 24, 2008 at 2:22 PM, Ahmed Kamal [EMAIL PROTECTED] wrote: Not sure if this is the best place to ask, but do Sun's new Amber road storage boxes have any kind of integration with ESX? Most importantly, quiescing the VMs, before

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Richard Elling
Toby Thain wrote: On 24-Nov-08, at 3:49 PM, Miles Nordin wrote: tt == Toby Thain [EMAIL PROTECTED] writes: tt Why would it be assumed to be a bug in Solaris? Seems more tt likely on balance to be a problem in the error reporting path tt or a controller/

[zfs-discuss] New 2 ZFS: How assign user permission to read, write, execute a new filesys?

2008-11-24 Thread Richard Catlin
I am new to OpenSolaris and ZFS. I created a new filesystem under an existing filesystem for a user. Exists: /rpool/export/home/user01. zfs create rpool/export/home/user01/fs1. As root, I can add a file to fs1, but as user01, I don't have permission. How do I give user01 permission? Can I

Re: [zfs-discuss] New 2 ZFS: How assign user permission to read, write, execute a new filesys?

2008-11-24 Thread Tim
On Mon, Nov 24, 2008 at 7:54 PM, Richard Catlin [EMAIL PROTECTED] wrote: I am new to OpenSolaris and ZFS. I created a new filesystem under an existing filesystem for a user. Exists: /rpool/export/home/user01. zfs create rpool/export/home/user01/fs1. As root, I can add a file to fs1, but as
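For illustration, the usual fix is simply to give the user ownership of the new dataset's mountpoint, using the paths from the original post:

  # fs1 mounts under the user's home directory but is created owned by root;
  # hand it over to user01
  chown user01 /rpool/export/home/user01/fs1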

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Scara Maccai
In the worst case, the device would be selectable, but not responding to data requests which would lead through the device retry logic and can take minutes. that's what I didn't know: that a driver could take minutes (hours???) to decide that a device is not working anymore. Now it comes
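For context, one commonly cited knob for this on Solaris is the sd driver's per-command timeout, set in /etc/system; a hedged sketch, where the value is illustrative and changing it requires a reboot and careful testing:

  * /etc/system fragment: shorten sd's per-command timeout from the default 60 seconds
  set sd:sd_io_time = 10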

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Richard Elling
Scara Maccai wrote: In the worst case, the device would be selectable, but not responding to data requests which would lead through the device retry logic and can take minutes. that's what I didn't know: that a driver could take minutes (hours???) to decide that a device is not

Re: [zfs-discuss] ZFS, Smashing Baby a fake???

2008-11-24 Thread Ross
But that's exactly the problem, Richard: AFAIK. Can you state, absolutely and categorically, that there is no failure mode out there (caused by hardware faults or bad drivers) that will lock a drive up for hours? You can't, obviously, which is why we keep saying that ZFS should have this kind