Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-08 Thread Khyron
It would be helpful if you posted more information about your configuration. Numbers *are* useful too, but minimally, describing your setup, use case, the hardware and other such facts would provide people a place to start. There are much brighter stars on this list than myself, but if you are

Re: [zfs-discuss] Drive showing as removed

2010-06-08 Thread Joe Auty
Richard Elling wrote: On Jun 7, 2010, at 4:50 PM, besson3c wrote: Hello, I have a drive that was a part of the pool showing up as "removed". I made no changes to the machine, and there are no errors being displayed, which is rather weird: # zpool status nm pool: nm state:

[zfs-discuss] Someone is porting ZFS to Linux(again)!!

2010-06-08 Thread ???
Just found this project: http://github.com/behlendorf/zfs Does it mean we will use ZFS as a Linux kernel module in the near future? :) Looking forward to it!

Re: [zfs-discuss] Drive showing as removed

2010-06-08 Thread Cindy Swearingen
Hi Joe, The REMOVED status generally means that a device was physically removed from the system. If necessary, physically reconnect c0t7d0 or if connected, check cabling, power, and so on. If the device is physically connected, see what cfgadm says about this device. For example, a device that
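A minimal sketch of that kind of check, assuming the c0t7d0 device and the nm pool named in this thread (actual output and device state will vary):

  # list attachment points and their receptacle/occupant state
  cfgadm -al
  # if the disk shows up as connected and configured, try bringing it back online
  zpool online nm c0t7d0
  zpool status nm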

Re: [zfs-discuss] Native ZFS for Linux

2010-06-08 Thread Hillel Lubman
A very interesting video from DebConf, which addresses CDDL and GPL incompatibility issues, and some original reasoning behind CDDL usage: http://caesar.acc.umu.se/pub/debian-meetings/2006/debconf6/theora-small/2006-05-14/tower/OpenSolaris_Java_and_Debian-Simon_Phipps__Alvaro_Lopez_Ortega.ogg

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-08 Thread besson3c
It would be helpful if you posted more information about your configuration. Numbers *are* useful too, but minimally, describing your setup, use case, the hardware and other such facts would provide people a place to start. There are much brighter stars on this list than myself, but

Re: [zfs-discuss] Drive showing as removed

2010-06-08 Thread Cindy Swearingen
Joe, Yes, the device should resilver when it's back online. You can use the fmdump -eV command to discover when this device was removed, along with other hardware-related events that help determine what happened. I would recommend exporting (not importing) the pool before physically
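A hedged sketch of that sequence, assuming the nm pool from this thread:

  # review hardware-related FMA events, including device removals
  fmdump -eV | less
  # export the pool before physically reseating or recabling the drive
  zpool export nm
  # after reconnecting the drive, import the pool and check its state
  zpool import nm
  zpool status nm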

[zfs-discuss] combining series of snapshots

2010-06-08 Thread BJ Quinn
I have a series of daily snapshots against a set of data that go for several months, but then the server crashed. In a hurry, we set up a new server and just copied over the live data and didn't bother with the snapshots (since zfs send/recv was too slow and would have taken hours and hours to

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-08 Thread Brandon High
On Tue, Jun 8, 2010 at 10:33 AM, besson3c j...@netmusician.org wrote: On heavy reads or writes (writes seem to be more problematic) my load averages on my VM host shoot up and overall performance is bogged down. I suspect that I do need a mirrored SLOG, but I'm wondering what the best way is
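For reference, adding a mirrored log device is a single zpool add; the pool and device names below are placeholders, not taken from this thread:

  # attach a mirrored pair of SSDs as a separate intent log (SLOG)
  zpool add tank log mirror c2t0d0 c2t1d0
  zpool status tank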

Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread Brandon High
On Tue, Jun 8, 2010 at 10:51 AM, BJ Quinn bjqu...@seidal.com wrote: 3. Take a snapshot on the new server and call it the same thing as the snapshot that I copied the data from (i.e. datap...@nightly20090715) It won't work, because the two snapshots are different. It doesn't matter if they
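The underlying rule is that an incremental receive only works if the receiving side already holds the identical snapshot (same GUID), not merely one with the same name; a sketch with placeholder dataset names:

  # succeeds only if newpool/data already holds the *same* @nightly20090715 snapshot,
  # i.e. one that arrived via zfs send/recv, not a look-alike created locally
  zfs send -i datapool/data@nightly20090715 datapool/data@nightly20090716 | zfs recv newpool/data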

[zfs-discuss] SATA/SAS Interposer Cards

2010-06-08 Thread Steve D. Jost
Hello all, We have 2 Solaris 10u8 boxes in a small cluster (active/passive) serving up a ZFS-formatted shared SAS tray as an NFS share. We are going to be adding a few SSDs into our disk pool and have determined that we need a SATA/SAS Interposer AAMUX card. Currently the storage tray

Re: [zfs-discuss] slog / log recovery is here!

2010-06-08 Thread R. Eulenberg
Hi, yesterday I changed the /etc/system file and ran: zdb -e -bcsvL tank1 with no output and no prompt coming back (the process hangs), and I get the same result running: zdb -eC tank1 Regards Ron

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-08 Thread Brandon High
On Tue, Jun 8, 2010 at 11:27 AM, Joe Auty j...@netmusician.org wrote: things. I've also read this on a VMWare forum, although I don't know if this is correct? This is in the context of me questioning why I don't seem to have these same load average problems running Virtualbox: The problem with the

Re: [zfs-discuss] Native ZFS for Linux

2010-06-08 Thread Joerg Schilling
Hillel Lubman shtetl...@gmail.com wrote: A very interesting video from DebConf, which addresses CDDL and GPL incompatibility issues, and some original reasoning behind CDDL usage:

Re: [zfs-discuss] Drive showing as removed

2010-06-08 Thread Cindy Swearingen
According to this report, I/O to this device caused a probe failure on May 31 because the device wasn't available. I was curious whether this device had any previous issues over a longer period of time. Failing or faulted drives can also kill your pool's performance. Thanks, Cindy On 06/08/10

Re: [zfs-discuss] Native ZFS for Linux

2010-06-08 Thread Hillel Lubman
Joerg Schilling wrote: This video is not interesting, it is wrong. Danese Cooper claims incorrect things and her claims have already been shown to be wrong by Simon Phipps. http://www.opensolaris.org/jive/message.jspa?messageID=55013#55008 Hope this helps. Jörg I see it's a pretty heated

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-08 Thread Miles Nordin
re == Richard Elling richard.ell...@gmail.com writes: re Please don't confuse Ethernet with IP. okay, but I'm not. seriously, if you'll look into it. Did you misread where I said FC can exert back-pressure? I was contrasting with Ethernet. Ethernet output queues are either FIFO or RED,

Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread BJ Quinn
Is there any way to merge them back together? I really need the history data going back as far as possible, and I'd like to be able to access it from the same place. I mean, worst case scenario, I could rsync the contents of each snapshot to the new filesystem and take a snapshot for each

Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread Scott Meilicke
You might bring over all of your old data and snaps, then clone that into a new volume. Bring your recent stuff into the clone. Since the clone only updates blocks that are different than the underlying snap, you may see a significant storage savings. Two clones could even be made - one for
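A sketch of that idea, with placeholder dataset and path names rather than anything from the thread:

  # replicate the old data with all of its snapshots onto the new pool
  zfs send -R oldpool/data@last | zfs recv newpool/olddata
  # clone the most recent old snapshot, then copy the recent live data into the clone
  zfs clone newpool/olddata@last newpool/data
  rsync -a /path/to/current/live/data/ /newpool/data/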

Re: [zfs-discuss] Drive showing as removed

2010-06-08 Thread Joe Auty
Cindy Swearingen wrote: Joe, Yes, the device should resilver when it's back online. You can use the fmdump -eV command to discover when this device was removed, along with other hardware-related events that help determine what happened. I would recommend exporting

Re: [zfs-discuss] Drive showing as removed

2010-06-08 Thread Joe Auty
Cindy Swearingen wrote: Hi Joe, The REMOVED status generally means that a device was physically removed from the system. If necessary, physically reconnect c0t7d0 or if connected, check cabling, power, and so on. If the device is physically connected, see what cfgadm

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-08 Thread Joe Auty
Brandon High wrote: On Tue, Jun 8, 2010 at 10:33 AM, besson3c j...@netmusician.org wrote: On heavy reads or writes (writes seem to be more problematic) my load averages on my VM host shoot up and overall performance is bogged down. I suspect that I do need a mirrored SLOG, but I'm

Re: [zfs-discuss] General help with understanding ZFS performance bottlenecks

2010-06-08 Thread Joe Auty
Brandon High wrote: On Tue, Jun 8, 2010 at 11:27 AM, Joe Auty j...@netmusician.org wrote: things. I've also read this on a VMWare forum, although I don't know if this is correct? This is in the context of me questioning why I don't seem to have these same load average problems

Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread Brandon High
On Tue, Jun 8, 2010 at 12:52 PM, BJ Quinn bjqu...@seidal.com wrote: Is there any way to merge them back together? I really need the history data going back as far as possible, and I'd like to be able to access it from the same place. I mean, worst case scenario, I could rsync the contents

Re: [zfs-discuss] are these errors dangerous

2010-06-08 Thread Gary Mitchell
I have seen this too. I'm guessing you have SATA disks which are on an iSCSI target. I'm also guessing you have used something like iscsitadm create target --type raw -b /dev/dsk/c4t0d00 c4t0d0, i.e. you are not using a zfs shareiscsi property on a zfs volume but creating the target from the
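For comparison, the shareiscsi route builds the target from a zfs volume instead of a raw disk; a minimal sketch with placeholder names:

  # create a zvol and export it as an iSCSI target via the zfs property
  zfs create -V 100g tank/iscsivol
  zfs set shareiscsi=on tank/iscsivol
  iscsitadm list target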

Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread BJ Quinn
Not exactly sure how to do what you're recommending -- are you suggesting I go ahead with using rsync to bring in each snapshot, but to bring it into a clone of the old set of snapshots? Is there another way to bring my recent stuff into the clone? If so, then as for the storage savings,

Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread BJ Quinn
Ugh, yeah, I've learned by now that you always want at least that one snapshot in common to keep the continuity in the dataset. Wouldn't I be able to recreate effectively the same thing by rsync'ing over each snapshot one by one? It may take a while, and I'd have to use the --inplace and
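A rough sketch of that loop, assuming the old snapshots are reachable under .zfs/snapshot and using placeholder dataset names:

  # replay each old snapshot into the new filesystem, snapshotting after each pass
  for snap in $(zfs list -H -t snapshot -o name -s creation -r oldpool/data | cut -d@ -f2); do
    rsync -a --inplace --delete "/oldpool/data/.zfs/snapshot/$snap/" /newpool/data/
    zfs snapshot "newpool/data@$snap"
  done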

Re: [zfs-discuss] Native ZFS for Linux

2010-06-08 Thread Anurag Agarwal
Hi Brandon, Thanks for providing an update on this. We at KQInfotech initially started on an independent port of ZFS to Linux. When we posted our progress on the port last year, we came to know about the work on the LLNL port. Since then we have been working to re-base our changes on top of Brian's

[zfs-discuss] ZFS host to host replication with AVS?

2010-06-08 Thread Moazam Raja
Hi all, I'm trying to accomplish server to server storage replication in synchronous mode where each server is a Solaris/OpenSolaris machine with its own local storage. For Linux, I've been able to achieve what I want with DRBD but I'm hoping I can find a similar solution on Solaris so that I can

Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread Brandon High
On Tue, Jun 8, 2010 at 4:29 PM, BJ Quinn bjqu...@seidal.com wrote: Ugh, yeah, I've learned by now that you always want at least that one snapshot in common to keep the continuity in the dataset. Wouldn't I be able to recreate effectively the same thing by rsync'ing over each snapshot one by

Re: [zfs-discuss] combining series of snapshots

2010-06-08 Thread BJ Quinn
In my case, snapshot creation time and atime don't matter. I think rsync can preserve mtime and ctime, though. I'll have to double check that. I'd love to enable dedup. Trying to stay on stable releases of OpenSolaris for whatever that's worth, and I can't seem to find a link to download
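For reference, dedup requires pool version 21 or later, which at the time of this thread meant a development build rather than 2009.06; a sketch with a placeholder pool name:

  # check the pool version, then enable dedup per dataset
  zpool get version tank
  zfs set dedup=on tank/data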

Re: [zfs-discuss] zfs corruptions in pool

2010-06-08 Thread Toby Thain
On 6-Jun-10, at 7:11 AM, Thomas Maier-Komor wrote: On 06.06.2010 08:06, devsk wrote: I had an unclean shutdown because of a hang and suddenly my pool is degraded (I realized something is wrong when python dumped core a couple of times). This is before I ran scrub: pool: mypool state:

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-08 Thread Bob Friesenhahn
On Tue, 8 Jun 2010, Miles Nordin wrote: re == Richard Elling richard.ell...@gmail.com writes: re Please don't confuse Ethernet with IP. okay, but I'm not. seriously, if you'll look into it. Did you misread where I said FC can exert back-pressure? I was contrasting with Ethernet.

Re: [zfs-discuss] ZFS host to host replication with AVS?

2010-06-08 Thread David Magda
On Jun 8, 2010, at 20:17, Moazam Raja wrote: One of the major concerns I have is what happens when the primary storage server fails. Will the secondary take over automatically (using some sort of heartbeat mechanism)? Once the secondary node takes over, can it fail-back to the primary node

Re: [zfs-discuss] Homegrown Hybrid Storage

2010-06-08 Thread Erik Trimble
On 6/8/2010 6:33 PM, Bob Friesenhahn wrote: On Tue, 8 Jun 2010, Miles Nordin wrote: re == Richard Elling richard.ell...@gmail.com writes: re Please don't confuse Ethernet with IP. okay, but I'm not. seriously, if you'll look into it. Did you misread where I said FC can exert