Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread Ross
In that case, this may be a much tougher nut to crack than I thought. I'll be the first to admit that, other than having seen a few presentations, I don't have a clue about the details of how ZFS works under the hood, however... You mention that moving the old block means updating all its

Re: [zfs-discuss] [desktop-discuss] ZFS snapshot GUI

2007-11-20 Thread Henry Zhang
Christian Kelly wrote: Hi Calum, heh, as it happens, I was tinkering with pygtk to see how difficult this would be :) Supposing I have a ZFS on my machine called root/export/home which is mounted on /export/home. Then I have my home dir as /export/home/chris. Say I only want to

Re: [zfs-discuss] [desktop-discuss] ZFS snapshot GUI

2007-11-20 Thread Christian Kelly
Time Machine stores everything on the system by default, but you can still deselect items you don't want to store. And Time Machine doesn't use ZFS. Here we will use ZFS snapshots, and what they work with is filesystems. In Nevada, the default file system is not ZFS, which means some

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread can you guess?
... My understanding of ZFS (in short: an upside down tree) is that each block is referenced by its parent. So regardless of how many snapshots you take, each block is only ever referenced by one other, and I'm guessing that the pointer and checksum are both stored there. If that's the

Re: [zfs-discuss] [desktop-discuss] ZFS snapshot GUI

2007-11-20 Thread Tim Foster
On Tue, 2007-11-20 at 13:35 +, Christian Kelly wrote: What I'm suggesting is that the configuration presents a list of pools and their ZFSes and that you have a checkbox, backup/don't backup sort of an option. That's basically the (hacked-up) zenity GUI I have at the moment on my blog,

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread Ross
Hmm... that's a pain if updating the parent also means updating the parent's checksum too. I guess the functionality is there for moving bad blocks, but since that's likely to be a rare occurrence, it wasn't something that would need to be particularly efficient. With regard to sharing the disk

Re: [zfs-discuss] [desktop-discuss] ZFS snapshot GUI

2007-11-20 Thread Calum Benson
On 20 Nov 2007, at 12:56, Christian Kelly wrote: Hi Calum, heh, as it happens, I was tinkering with pygtk to see how difficult this would be :) Supposing I have a ZFS on my machine called root/export/home which is mounted on /export/home. Then I have my home dir as /export/home/

Re: [zfs-discuss] [desktop-discuss] ZFS snapshot GUI

2007-11-20 Thread Christian Kelly
Calum Benson wrote: Right, for Phase 0 the thinking was that you'd really have to manually set up whatever pools and filesystems you required first. So in your example, you (or, perhaps, the Indiana installer) would have had to set up /export/home/chris/Documents as a ZFS filesystem in its

Re: [zfs-discuss] raidz DEGRADED state

2007-11-20 Thread MC
So there is no current way to specify the creation of a 3-disk raid-z array with a known missing disk? Can someone answer that? Or does the zpool command NOT accommodate the creation of a degraded raidz array?
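A workaround often suggested for this (the thread implies zpool itself has no missing-disk option) is to let a sparse file stand in for the absent drive and then offline it. A sketch with hypothetical device names and sizes:

    # sparse file the size of the missing disk
    mkfile -n 500g /var/tmp/fakedisk
    # build the 3-way raidz with the file standing in for the third drive
    zpool create tank raidz c0t0d0 c0t1d0 /var/tmp/fakedisk
    # take the placeholder offline; the pool runs DEGRADED but usable
    zpool offline tank /var/tmp/fakedisk
    # when the real disk arrives, swap it in and let it resilver
    zpool replace tank /var/tmp/fakedisk c0t2d0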

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread can you guess?
... With regard to sharing the disk resources with other programs, obviously it's down to the individual admins how they would configure this, Only if they have an unconstrained budget. but I would suggest that if you have a database with heavy enough requirements to be suffering noticeable

Re: [zfs-discuss] [desktop-discuss] ZFS snapshot GUI

2007-11-20 Thread Calum Benson
On 20 Nov 2007, at 13:35, Christian Kelly wrote: Take the example I gave before, where you have a pool called, say, pool1. In the pool you have two ZFSes: pool1/export and pool1/export/home. So, suppose the user chooses /export in nautilus and adds this to the backup list. Will the

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread Moore, Joe
Louwtjie Burger wrote: Richard Elling wrote: - COW probably makes that conflict worse This needs to be proven with a reproducible, real-world workload before it makes sense to try to solve it. After all, if we cannot measure where we are, how can we prove that we've

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread Ross
doing these writes now sounds like a lot of work. I'm guessing that needing two full-path updates to achieve this means you're talking about a much greater write penalty. Not all that much. Each full-path update is still only a single write request to the disk, since all the path

Re: [zfs-discuss] [storage-discuss] zpool io to 6140 is really slow

2007-11-20 Thread Asif Iqbal
On Nov 20, 2007 1:48 AM, Louwtjie Burger [EMAIL PROTECTED] wrote: That is still 256MB/s. I am getting about 194MB/s. No, I don't think you can take 2Gbit / 8 bits per byte and say 256MB is what you should get... Someone with far more FC knowledge can comment here. There must be some
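(For reference, 2Gb FC is usually rated at roughly 200 MB/s of usable payload per direction once 8b/10b encoding and framing overhead are subtracted, so ~194 MB/s is already close to the practical ceiling of a single link; dividing 2 Gbit by 8 to get 256 MB/s ignores that overhead.)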

Re: [zfs-discuss] [perf-discuss] [storage-discuss] zpool io to 6140 is really slow

2007-11-20 Thread Asif Iqbal
On Nov 20, 2007 7:01 AM, Chad Mynhier [EMAIL PROTECTED] wrote: On 11/20/07, Asif Iqbal [EMAIL PROTECTED] wrote: On Nov 19, 2007 1:43 AM, Louwtjie Burger [EMAIL PROTECTED] wrote: On Nov 17, 2007 9:40 PM, Asif Iqbal [EMAIL PROTECTED] wrote: (Including storage-discuss) I have 6

Re: [zfs-discuss] [desktop-discuss] ZFS snapshot GUI

2007-11-20 Thread Darren J Moffat
Calum Benson wrote: On 20 Nov 2007, at 13:35, Christian Kelly wrote: Take the example I gave before, where you have a pool called, say, pool1. In the pool you have two ZFSes: pool1/export and pool1/export/home. So, suppose the user chooses /export in nautilus and adds this to the backup

Re: [zfs-discuss] [perf-discuss] [storage-discuss] zpool io to 6140 is really slow

2007-11-20 Thread Chad Mynhier
On 11/20/07, Asif Iqbal [EMAIL PROTECTED] wrote: On Nov 20, 2007 7:01 AM, Chad Mynhier [EMAIL PROTECTED] wrote: On 11/20/07, Asif Iqbal [EMAIL PROTECTED] wrote: On Nov 19, 2007 1:43 AM, Louwtjie Burger [EMAIL PROTECTED] wrote: On Nov 17, 2007 9:40 PM, Asif Iqbal [EMAIL PROTECTED] wrote:

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread can you guess?
Rats - I was right the first time: there's a messy problem with snapshots. The problem is that the parent of the child that you're about to update in place may *already* be in one or more snapshots because one or more of its *other* children was updated since each snapshot was created. If so,

Re: [zfs-discuss] Why did resilvering restart?

2007-11-20 Thread Wade . Stuart
Resilver and scrub are broken and restart when a snapshot is created -- the current workaround is to disable snaps while resilvering; the ZFS team is working on the issue for a long-term fix. -Wade [EMAIL PROTECTED] wrote on 11/20/2007 09:58:19 AM: On b66: # zpool replace tww

[zfs-discuss] NFS performance considerations (Linux vs Solaris)

2007-11-20 Thread msl
Hello all... I think all of you agree that performance is a great topic in NFS. So, when we talk about NFS and ZFS we imagine a great combination/solution. But one is not dependent on the other; they are actually two quite distinct technologies. ZFS has a lot of features that we all know about, and

Re: [zfs-discuss] [perf-discuss] [storage-discuss] zpool io to 6140 is really slow

2007-11-20 Thread Andrew Wilson
And, just to add one more point, since pretty much everything the host writes to the controller eventually has to make it out to the disk drives, the long term average write rate cannot exceed the rate that the backend disk subsystem can absorb the writes, regardless of the workload. (An

Re: [zfs-discuss] Why did resilvering restart?

2007-11-20 Thread Albert Chin
On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote: Resilver and scrub are broken and restart when a snapshot is created -- the current workaround is to disable snaps while resilvering, the ZFS team is working on the issue for a long term fix. But, no snapshot was taken. If so,

Re: [zfs-discuss] [desktop-discuss] ZFS snapshot GUI

2007-11-20 Thread Calum Benson
On 20 Nov 2007, at 15:04, Darren J Moffat wrote: Calum Benson wrote: On 20 Nov 2007, at 13:35, Christian Kelly wrote: Take the example I gave before, where you have a pool called, say, pool1. In the pool you have two ZFSes: pool1/export and pool1/export/home. So, suppose the user

Re: [zfs-discuss] [desktop-discuss] ZFS snapshot GUI

2007-11-20 Thread Calum Benson
On 20 Nov 2007, at 14:31, Christian Kelly wrote: Ah, I see. So, for phase 0, the 'Enable Automatic Snapshots' option would only be available for/work for existing ZFSes. Then at some later stage, create them on the fly. Yes, that's the scenario for the mockups I posted, anyway... if the

Re: [zfs-discuss] [desktop-discuss] ZFS snapshot GUI

2007-11-20 Thread Darren J Moffat
Calum Benson wrote: You're right that they can, and while that probably does write it off, I wonder how many really do. (And we could possibly do something clever like a semi-opaque overlay anyway, we may not have to replace the background entirely.) Almost everyone I've seen using the

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread Chris Csanady
On Nov 19, 2007 10:08 PM, Richard Elling [EMAIL PROTECTED] wrote: James Cone wrote: Hello All, Here's a possibly-silly proposal from a non-expert. Summarising the problem: - there's a conflict between small ZFS record size, for good random update performance, and large ZFS record

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread can you guess?
But the whole point of snapshots is that they don't take up extra space on the disk. If a file (and hence a block) is in every snapshot it doesn't mean you've got multiple copies of it. You only have one copy of that block, it's just referenced by many snapshots. I used the wording copies

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread Ross
But the whole point of snapshots is that they don't take up extra space on the disk. If a file (and hence a block) is in every snapshot it doesn't mean you've got multiple copies of it. You only have one copy of that block, it's just referenced by many snapshots. The thing is, the location
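A quick way to see that accounting in practice (dataset names are illustrative):

    zfs snapshot tank/home@before
    # rewrite or delete some files in tank/home, then compare:
    zfs list -o name,used,refer tank/home
    zfs list -t snapshot -o name,used,refer
    # the snapshot's USED grows only by blocks the live filesystem has since
    # rewritten; blocks still shared with the snapshot are not duplicated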

[zfs-discuss] Unsubscribe

2007-11-20 Thread Hay, Mausul W
-Original Message- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Calum Benson Sent: Tuesday, November 20, 2007 11:12 AM To: Darren J Moffat Cc: Henry Zhang; zfs-discuss@opensolaris.org; Desktop discuss; Christian Kelly Subject: Re: [zfs-discuss] [desktop-discuss] ZFS

Re: [zfs-discuss] Why did resilvering restart?

2007-11-20 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM: On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote: Resilver and scrub are broken and restart when a snapshot is created -- the current workaround is to disable snaps while resilvering, the ZFS team is working on the issue

[zfs-discuss] raidz2 testing

2007-11-20 Thread Brian Lionberger
Is there a preferred method to test a raidz2? I would like to see the disks recover on their own after simulating a disk failure. I have a 4-disk configuration. Brian.
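One low-risk way to exercise this without physically pulling drives is to fault a member administratively; a sketch with hypothetical device names:

    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0
    zpool offline tank c1t3d0    # simulate a failed member
    zpool status tank            # pool should report DEGRADED
    zpool online tank c1t3d0     # bring it back; ZFS resilvers the disk
    zpool scrub tank             # verify everything reads back clean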

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread Will Murnane
On Nov 20, 2007 5:33 PM, can you guess? [EMAIL PROTECTED] wrote: But the whole point of snapshots is that they don't take up extra space on the disk. If a file (and hence a block) is in every snapshot it doesn't mean you've got multiple copies of it. You only have one copy of that

Re: [zfs-discuss] Why did resilvering restart?

2007-11-20 Thread Albert Chin
On Tue, Nov 20, 2007 at 11:10:20AM -0600, [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote on 11/20/2007 10:11:50 AM: On Tue, Nov 20, 2007 at 10:01:49AM -0600, [EMAIL PROTECTED] wrote: Resilver and scrub are broken and restart when a snapshot is created -- the current workaround is to

Re: [zfs-discuss] raidz2

2007-11-20 Thread Eric Schrock
On Tue, Nov 20, 2007 at 11:02:55AM +0100, Paul Boven wrote: I seem to be having exactly the problems you are describing (see my postings with the subject 'zfs on a raid box'). So I would very much like to give b77 a try. I'm currently running b76, as that's the latest sxce that's available.

Re: [zfs-discuss] raidz2

2007-11-20 Thread Richard Elling
comment on retries below... Paul Boven wrote: Hi Eric, everyone, Eric Schrock wrote: There have been many improvements in proactively detecting failure, culminating in build 77 of Nevada. Earlier builds: - Were unable to distinguish device removal from devices misbehaving, depending

[zfs-discuss] snv-76 panics on installation

2007-11-20 Thread Bill Moloney
I have an Intel based server running dual P3 Xeons (Intel A46044-609, 1.26GHz) with a BIOS from American Megatrends Inc (AMIBIOS, SCB2 production BIOS rev 2.0, BIOS build 0039) and 2GB of RAM. When I attempt to install snv-76, the system panics during the initial boot from CD. I've been using

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread Al Hopper
On Tue, 20 Nov 2007, Ross wrote: doing these writes now sounds like a lot of work. I'm guessing that needing two full-path updates to achieve this means you're talking about a much greater write penalty. Not all that much. Each full-path update is still only a single write request to the

Re: [zfs-discuss] snv-76 panics on installation

2007-11-20 Thread Michael Schuster
Bill Moloney wrote: I have an Intel based server running dual P3 Xeons (Intel A46044-609, 1.26GHz) with a BIOS from American Megatrends Inc (AMIBIOS, SCB2 production BIOS rev 2.0, BIOS build 0039) and 2GB of RAM. When I attempt to install snv-76, the system panics during the initial boot

Re: [zfs-discuss] Recommended many-port SATA controllers for budget ZFS

2007-11-20 Thread Al Hopper
On Mon, 19 Nov 2007, Brian Hechinger wrote: On Sun, Nov 18, 2007 at 02:18:21PM +0100, Peter Schuller wrote: Right now I have noticed that LSI has recently began offering some lower-budget stuff; specifically I am looking at the MegaRAID SAS 8208ELP/XLP, which are very reasonably priced. I

Re: [zfs-discuss] [perf-discuss] [storage-discuss] zpool io to 6140 is really slow

2007-11-20 Thread Asif Iqbal
On Nov 20, 2007 10:40 AM, Andrew Wilson [EMAIL PROTECTED] wrote: What kind of workload are you running? If you are doing these measurements with some sort of write-as-fast-as-possible microbenchmark, Oracle database with blocksize 16K .. populating the database as fast as I can once the 4
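For that kind of load the usual first step (assuming the datafiles sit on their own dataset; names below are illustrative) is to match the ZFS recordsize to the database block size before populating:

    zfs create tank/oradata
    zfs set recordsize=16k tank/oradata   # match the 16K Oracle block size
    zfs get recordsize tank/oradata       # confirm; only affects newly written files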

Re: [zfs-discuss] Recommended many-port SATA controllers for budget ZFS

2007-11-20 Thread Brian Hechinger
On Tue, Nov 20, 2007 at 02:01:34PM -0600, Al Hopper wrote: a) the SuperMicro AOC-SAT2-MV8 is an 8-port SATA card available for around $110 IIRC. Yeah, I'd like to spend a lot less than that, especially as I only need 2 ports. :) b) There is also a PCI-X version of the older LSI 4-port

Re: [zfs-discuss] Recommended many-port SATA controllers for budget ZFS

2007-11-20 Thread Jason P. Warr
the 3124 looks perfect. The only problem is the only thing I found on ebay was for the 3132, which is PCIe, which doesn't help me. :) I'm not finding anything for 3124 other than the data on silicon image's site. Do you know of any cards I should be looking for that uses this chip?

Re: [zfs-discuss] Recommended many-port SATA controllers for budget ZFS

2007-11-20 Thread Al Hopper
On Tue, 20 Nov 2007, Jason P. Warr wrote: the 3124 looks perfect. The only problem is the only thing I found on ebay was for the 3132, which is PCIe, which doesn't help me. :) I'm not finding anything for 3124 other than the data on silicon image's site. Do you know of any cards I should

Re: [zfs-discuss] zpool io to 6140 is really slow

2007-11-20 Thread Richard Elling
Asif Iqbal wrote: On Nov 19, 2007 11:47 PM, Richard Elling [EMAIL PROTECTED] wrote: Asif Iqbal wrote: I have the following layout: A 490 with 8 1.8GHz and 16G mem. 6 6140s with 2 FC controllers using A1 and B1 controller ports at 4Gbps speed. Each controller has 2G NVRAM. On 6140s I

[zfs-discuss] which would be faster

2007-11-20 Thread Tim Cook
So I have 8 drives total: 5x500GB Seagate 7200.10 and 3x300GB Seagate 7200.10. I'm trying to decide, would I be better off just creating two separate pools? pool1 = 5x500GB raidz, pool2 = 3x300GB raidz. Or would I be better off creating one large pool, with two raid sets? I'm trying to figure out
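The two layouts being compared would look roughly like this (device names are illustrative):

    # option 1: two independent pools
    zpool create pool1 raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0   # 5x500GB
    zpool create pool2 raidz c1t0d0 c1t1d0 c1t2d0                 # 3x300GB

    # option 2: one pool striped across two raidz vdevs
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 \
                      raidz c1t0d0 c1t1d0 c1t2d0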

Re: [zfs-discuss] raidz2 testing

2007-11-20 Thread Richard Elling
Brian Lionberger wrote: Is there a preferred method to test a raidz2? I would like to see the disks recover on their own after simulating a disk failure. I have a 4-disk configuration. It really depends on what failure mode you're interested in. The most common failure we see from

Re: [zfs-discuss] which would be faster

2007-11-20 Thread Al Hopper
On Tue, 20 Nov 2007, Tim Cook wrote: So I have 8 drives total. 5x500GB seagate 7200.10 3x300GB seagate 7200.10 I'm trying to decide, would I be better off just creating two separate pools? pool1 = 5x500gb raidz pool2= 3x300gb raidz ... reformatted ... or would I be better off creating

Re: [zfs-discuss] Recommended many-port SATA controllers for budget ZFS

2007-11-20 Thread Jaz
the 3124 looks perfect. The only problem is the only thing I found on ebay was for the 3132, which is PCIe, which doesn't help me. :) I'm not finding anything for 3124 other than the data on silicon image's site. Do you know of any cards I should be looking for that uses this chip?

Re: [zfs-discuss] Modify fsid/guid of dataset for NFS failover

2007-11-20 Thread asa
Well then this is probably the wrong list to be hounding. I am looking for something like http://blog.wpkg.org/2007/10/26/stale-nfs-file-handle/ where, when fileserver A dies, fileserver B can come up, grab the same IP address via some mechanism (in this case I am using Sun Cluster) and keep on

Re: [zfs-discuss] Modify fsid/guid of dataset for NFS failover

2007-11-20 Thread Richard Elling
asa wrote: Well then this is probably the wrong list to be hounding. I am looking for something like http://blog.wpkg.org/2007/10/26/stale-nfs-file-handle/ where, when fileserver A dies, fileserver B can come up, grab the same IP address via some mechanism (in this case I am using Sun

Re: [zfs-discuss] which would be faster

2007-11-20 Thread Rob Logan
On the other hand, the pool of 3 disks is obviously going to be much slower than the pool of 5 while today that's true, someday I/O will be balanced by the latency of vdevs rather than the number... plus two vdevs are always going to be faster than one vdev, even if one is slower than the

Re: [zfs-discuss] Modify fsid/guid of dataset for NFS failover

2007-11-20 Thread asa
I am rolling my own replication using zfs send|recv through the cluster agent framework and a custom HA shared local storage set of scripts (similar to http://www.posix.brte.com.br/blog/?p=75 but without AVS). I am not using zfs off of shared storage in the supported way. So this is a bit
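A minimal sketch of that style of send/recv replication (pool, dataset and host names are hypothetical):

    zfs snapshot tank/export@rep1
    zfs send tank/export@rep1 | ssh nodeB zfs recv -F tank/export
    # later passes ship only the delta since the previous snapshot
    zfs snapshot tank/export@rep2
    zfs send -i tank/export@rep1 tank/export@rep2 | ssh nodeB zfs recv -F tank/export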

Re: [zfs-discuss] raidz DEGRADED state

2007-11-20 Thread Joe Little
On Nov 20, 2007 6:34 AM, MC [EMAIL PROTECTED] wrote: So there is no current way to specify the creation of a 3 disk raid-z array with a known missing disk? Can someone answer that? Or does the zpool command NOT accommodate the creation of a degraded raidz array? can't started

Re: [zfs-discuss] ZFS + DB + fragments

2007-11-20 Thread can you guess?
... just rearrange your blocks sensibly - and to at least some degree you could do that while they're still cache-resident Lots of discussion has passed under the bridge since that observation above, but it may have contained the core of a virtually free solution: let your table become

Re: [zfs-discuss] ZFS snapshot GUI

2007-11-20 Thread Anton B. Rang
How does the ability to set a snapshot schedule for a particular *file* or *folder* interact with the fact that ZFS snapshots are on a per-filesystem basis? This seems a poor fit. If I choose to snapshot my Important Documents folder every 5 minutes, that's implicitly creating snapshots of my
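In other words, a per-folder schedule only maps cleanly onto ZFS if the folder is itself a filesystem; a sketch with illustrative names:

    # snapshots operate on datasets, so Documents must be its own filesystem
    zfs create -p tank/export/home/chris/Documents   # -p creates missing parents
    zfs snapshot tank/export/home/chris/Documents@2007-11-20-1200
    # snapshotting a parent dataset instead captures everything beneath it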