Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Matthew Ahrens
Torrey McMahon wrote: Matthew Ahrens wrote: The problem that this feature attempts to address is when you have some data that is more important (and thus needs a higher level of redundancy) than other data. Of course in some situations you can use multiple pools, but that is antithetical to

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread eric kustarz
David Dyer-Bennet wrote: On 9/12/06, eric kustarz [EMAIL PROTECTED] wrote: So it seems to me that having this feature per-file is really useful. Say I have a presentation to give in Pleasanton, and the presentation lives on my single-disk laptop - I want all the meta-data and the actual

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread eric kustarz
Torrey McMahon wrote: eric kustarz wrote: Matthew Ahrens wrote: Matthew Ahrens wrote: Here is a proposal for a new 'copies' property which would allow different levels of replication for different filesystems. Thanks everyone for your input. The problem that this feature attempts to
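For readers following the thread, usage of the proposed property would presumably look something like this (a sketch of the proposed interface only; 'copies' was a proposal under discussion at the time, not shipped behavior, and the dataset name is invented):

  zfs set copies=2 tank/home/important
  zfs get copies tank/home/important

Datasets left at the default would keep copies=1, so only the data you care most about pays the extra space cost.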

Re: [zfs-discuss] Re: Re: Re: Proposal: multiple copies of user data

2006-09-13 Thread Dick Davies
On 13/09/06, Matthew Ahrens [EMAIL PROTECTED] wrote: Dick Davies wrote: But they raise a lot of administrative issues Sure, especially if you choose to change the copies property on an existing filesystem. However, if you only set it at filesystem creation time (which is the recommended

Re: [zfs-discuss] ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Michael Schuster
I think this is user error; the man page explicitly says:

  -f    Forces import, even if the pool appears to be potentially active.

and that's exactly what you did. If the behaviour had been the same without the -f option, I guess this would be a bug. HTH

Re: [zfs-discuss] Memory Usage

2006-09-13 Thread Wee Yeh Tan
On 9/13/06, Thomas Burns [EMAIL PROTECTED] wrote: BTW -- did I guess right wrt where I need to set arc.c_max (/etc/system)? I think you need to use mdb. As Mark and Johansen mentioned, only do this as your last resort.

  # mdb -kw
  > arc::print -a c_max
  d3b0f874 c_max = 0x1d0fe800
  > d3b0f874/W
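The session above is cut off at the write. To complete its shape: the address comes from the printed output, and the new value (0x10000000 here, i.e. a 256 MB cap, purely illustrative and not from the original message) is written to it:

  > d3b0f874/W 0x10000000

/W writes a 32-bit value; the change takes effect immediately but does not survive a reboot.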

[zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Mathias F
Well, we are using the -f parameter to test failover functionality. If one system with a mounted ZFS is down, we have to use the force to mount it on the failover system. But when the failed system comes online again, it remounts the ZFS without errors, so it is mounted simultaneously on both

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Michael Schuster
Mathias F wrote: Well, we are using the -f parameter to test failover functionality. If one system with a mounted ZFS is down, we have to use the force to mount it on the failover system. But when the failed system comes online again, it remounts the ZFS without errors, so it is mounted

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Thomas Wagner
On Wed, Sep 13, 2006 at 12:28:23PM +0200, Michael Schuster wrote: Mathias F wrote: Well, we are using the -f parameter to test failover functionality. If one system with a mounted ZFS is down, we have to use the force to mount it on the failover system. But when the failed system comes

[zfs-discuss] 'zfs mirror as backup' status?

2006-09-13 Thread Dick Davies
Since we were just talking about resilience on laptops, I wondered if there had been any progress in sorting out some of the glitches that were involved in: http://www.opensolaris.org/jive/thread.jspa?messageID=25144#25144 ? -- Rasputin :: Jack of All Trades - Master of Nuns

[zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Mathias F
Without the -f option, the ZFS pool can't be imported while reserved for the other host, even if that host is down. As I said, we are testing ZFS as a replacement for VxVM, which we are using atm. So as a result our tests have failed and we have to keep on using Veritas. Thanks for all your

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Mike Gerdts
On 9/13/06, Richard Elling [EMAIL PROTECTED] wrote: * Mirroring offers slightly better redundancy, because one disk from each mirror can fail without data loss. Is this use of 'slightly' based upon disk failure modes? That is, when disks fail, do they tend to get isolated areas of badness

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Tobias Schacht
On 9/13/06, Mike Gerdts [EMAIL PROTECTED] wrote: The only part of the proposal I don't like is space accounting. Double or triple charging for data will only confuse those apps and users that check for free space or block usage. Why exactly isn't reporting the free space divided by the copies
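A concrete illustration of the two accounting choices (numbers invented): suppose the pool has 100 GB free and a dataset has copies=2. Reporting free space as 100/2 = 50 GB means a 10 GB write makes the reported figure drop by exactly 10 GB, which is what naive apps expect; reporting the raw 100 GB means the same write appears to consume 20 GB, which is the double charging being objected to here.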

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Michael Schuster
Mathias F wrote: Without the -f option, the ZFS pool can't be imported while reserved for the other host, even if that host is down. As I said, we are testing ZFS as a replacement for VxVM, which we are using atm. So as a result our tests have failed and we have to keep on using

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread James C. McPherson
Mathias F wrote: Without the -f option, the ZFS pool can't be imported while reserved for the other host, even if that host is down. This is the correct behaviour. What do you want to cause? Data corruption? As I said, we are testing ZFS as a replacement for VxVM, which we are using atm. So as

Re: [zfs-discuss] ZFS API (again!), need quotactl(7I)

2006-09-13 Thread Boyd Adamson
On 13/09/2006, at 2:29 AM, Eric Schrock wrote: On Tue, Sep 12, 2006 at 07:23:00AM -0400, Jeff A. Earickson wrote: Modify the dovecot IMAP server so that it can get zfs quota information to be able to implement the QUOTA feature of the IMAP protocol (RFC 2087). In this case pull the zfs
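A minimal sketch of pulling those numbers from the zfs CLI (dataset name invented; dovecot would parse the byte counts to answer an RFC 2087 GETQUOTA):

  zfs get -Hp -o value quota pool/mail/jeff
  zfs get -Hp -o value used  pool/mail/jeff

-H suppresses the header and -o value keeps only the number, and -p prints exact parseable values, which makes the output easy to consume from a script or a small popen() in the server.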

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Zoram Thanga
Hi Mathias, Mathias F wrote: Without the -f option, the ZFS pool can't be imported while reserved for the other host, even if that host is down. As I said, we are testing ZFS as a replacement for VxVM, which we are using atm. So as a result our tests have failed and we have to keep on using

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Al Hopper
On Tue, 12 Sep 2006, Matthew Ahrens wrote: Torrey McMahon wrote: Matthew Ahrens wrote: The problem that this feature attempts to address is when you have some data that is more important (and thus needs a higher level of redundancy) than other data. Of course in some situations you can

Re: [zfs-discuss] Re: Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Michael Schuster
Mathias F wrote: I think I get the whole picture, let me summarise:
- you create a pool P and an FS on host A
- host A crashes
- you import P on host B; this only works with -f, as zpool import otherwise refuses to do so
- now P is imported on B
- host A comes back up and re-accesses P,
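In command form, the sequence being summarised is roughly this (hostnames and device invented):

  hostA# zpool create P c1t0d0
  (host A crashes)
  hostB# zpool import -f P
  (host A boots, finds P in its /etc/zfs/zpool.cache, and re-imports it
  with no further check; at that point both hosts have P imported)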

Re: [zfs-discuss] Re: Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread James C. McPherson
Mathias F wrote: ... Yes it is, you got it ;) VxVM just notices that its previously imported DiskGroup(s) (for ZFS this is the Pool) were failed over and doesn't try to re-acquire them. It waits for an admin action. The topic of clustering ZFS is not the problem atm; we are just testing the failover

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Dale Ghent
James C. McPherson wrote: As I understand things, SunCluster 3.2 is expected to have support for HA-ZFS and until that version is released you will not be running in a supported configuration and so any errors you encounter are *your fault alone*. Still, after reading Mathias's

[zfs-discuss] zfs receive kernel panics the machine

2006-09-13 Thread Niclas Sodergard
Hi, I'm running some experiments with zfs send and receive on Solaris 10u2 between two different machines. On server 1 I have the following:

  data/zones/app1                838M  26.5G   836M  /zones/app1
  data/zones/[EMAIL PROTECTED]  2.35M      -   832M  -

I have a script that creates a new snapshot
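The script itself is cut off, but the usual pattern for this kind of experiment looks roughly like the following (snapshot names, target dataset and host are invented):

  zfs snapshot data/zones/app1@2006-09-13
  zfs send data/zones/app1@2006-09-13 | ssh server2 zfs receive data/backup/app1
  # later runs only send the delta between two snapshots:
  zfs send -i @2006-09-13 data/zones/app1@2006-09-14 | ssh server2 zfs receive data/backup/app1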

[zfs-discuss] Re: Re: Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Mathias F
[...] a product which is *not* currently multi-host-aware to behave in the same safe manner as one which is. That's the point we figured out while testing it ;) I just wanted to have our thoughts reviewed by other ZFS users. Our next steps, IF the failover had succeeded, would be to

Re[2]: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320

2006-09-13 Thread Robert Milkowski
Hello Frank, Tuesday, September 12, 2006, 9:41:05 PM, you wrote: FC> It would be interesting to have a zfs-enabled HBA to offload the checksum and parity calculations. How much of zfs would such an HBA have to understand? That won't be end-to-end checksumming anymore, right? That way you

Re[2]: [zfs-discuss] Re: Re: ZFS forces system to paging to the point it is

2006-09-13 Thread Robert Milkowski
Hello Philippe, It was recommended to lower ncsize and I did (to the default, ~128K). So far it has worked OK for the last few days, staying at about 1 GB free RAM (fluctuating between 900 MB and 1.4 GB). Do you think it's a long-term solution, or with more load and more data could the problem surface again even with
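For reference, the tunable in question lives in /etc/system and takes effect at the next boot; the value described above (the default, ~128K entries) would be written as:

  set ncsize = 131072

Note this caps the number of DNLC entries, not a byte count.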

Re: [zfs-discuss] Memory Usage

2006-09-13 Thread Robert Milkowski
Hello Thomas, Tuesday, September 12, 2006, 7:40:25 PM, you wrote: TB> Hi, we have been using zfs for a couple of months now and, overall, really like it. However, we have run into a major problem -- zfs's memory requirements crowd out our primary application. Ultimately, we have

[zfs-discuss] Re: Re[2]: System hang caused by a bad snapshot

2006-09-13 Thread Ben Miller
Hello Matthew, Tuesday, September 12, 2006, 7:57:45 PM, you wrote: MA> Ben Miller wrote: I had a strange ZFS problem this morning. The entire system would hang when mounting the ZFS filesystems. After trial and error I determined that the problem was with one of the 2500 ZFS

[zfs-discuss] when zfs enabled java

2006-09-13 Thread Jill Manfield
My customer is running java on a ZFS file system. His platform is Solaris 10 x86 on an SF X4200. When he enabled ZFS, his 18 GB of free memory dropped to 2 GB rather quickly. I had him do a # ps -e -o pid,vsz,comm | sort -n +1 and it came back: The culprit application you see is java: 507 89464

[zfs-discuss] Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
Hi, There's something really bizarre in the ZFS snapshot specs: Uses no separate backing store. Hmm... if I want to share one physical volume somewhere in my SAN as THE snapshot backing store... it becomes impossible to do! Really bad. Is there any chance to have a backing-store-file

Re: [zfs-discuss] Snapshots and backing store

2006-09-13 Thread Scott Howard
On Wed, Sep 13, 2006 at 07:38:22AM -0700, Nicolas Dorfsman wrote: There's something really bizarre in the ZFS snapshot specs: Uses no separate backing store. It's not at all bizarre once you understand how ZFS works. I'd suggest reading through some of the documentation available at

Re: [zfs-discuss] Re: Re: ZFS forces system to paging to the point it is

2006-09-13 Thread Mark Maybee
Robert Milkowski wrote: Hello Philippe, It was recommended to lower ncsize and I did (to the default, ~128K). So far it has worked OK for the last few days, staying at about 1 GB free RAM (fluctuating between 900 MB and 1.4 GB). Do you think it's a long-term solution, or with more load and more data could the problem

Re: [zfs-discuss] Snapshots and backing store

2006-09-13 Thread Matthew Ahrens
Nicolas Dorfsman wrote: Hi, There's something really bizarre in the ZFS snapshot specs: Uses no separate backing store. Hmm... if I want to share one physical volume somewhere in my SAN as THE snapshot backing store... it becomes impossible to do! Really bad. Is there any chance to have a

[zfs-discuss] Re: Re: Bizarre problem with ZFS filesystem

2006-09-13 Thread Anantha N. Srirama
I ran the DTrace script and the resulting output is rather large (1 million lines and 65MB), so I won't burden this forum with that much data. Here are the top 100 lines from the DTrace output. Let me know if you need the full output and I'll figure out a way for the group to get it. dtrace:

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Bill Sommerfeld
On Wed, 2006-09-13 at 02:30, Richard Elling wrote: The field data I have says that complete disk failures are the exception. I hate to leave this as a teaser; I'll expand my comments later. That matches my anecdotal experience with laptop drives; maybe I'm just lucky, or maybe I'm just paying

[zfs-discuss] Re: Re: Recommendation ZFS on StorEdge 3320

2006-09-13 Thread Anton B. Rang
It would be interesting to have a zfs enabled HBA to offload the checksum and parity calculations. How much of zfs would such an HBA have to understand? That's an interesting question. For parity, it's actually pretty easy. One can envision an HBA which took a group of related write commands

[zfs-discuss] Re: when zfs enabled java

2006-09-13 Thread Roch - PAE
Jill Manfield writes: My customer is running java on a ZFS file system. His platform is Solaris 10 x86 on an SF X4200. When he enabled ZFS, his 18 GB of free memory dropped to 2 GB rather quickly. I had him do a # ps -e -o pid,vsz,comm | sort -n +1 and it came back: The culprit

[zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
Well. ZFS isn't copy-on-write in the same way that things like ufssnap are. ufssnap is copy-on-write in that when you write something, it copies out the old data and writes it somewhere else (the backing store). ZFS doesn't need to do this - it simply writes the new data to a new

[zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Nicolas Dorfsman
If you want to copy your filesystems (or snapshots) to other disks, you can use 'zfs send' to send them to a different pool (which may even be on a different machine!). Oh no! That means copying the whole filesystem. The goal here is definitely to snapshot the filesystem and then back up

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 6:09:50 AM -0700 Mathias F [EMAIL PROTECTED] wrote: [...] a product which is *not* currently multi-host-aware to behave in the same safe manner as one which is. That's the point we figured out while testing it ;) I just wanted to have our thoughts reviewed by other ZFS

[zfs-discuss] Re: Re: Recommendation ZFS on StorEdge 3320

2006-09-13 Thread Anton B. Rang
just measured quickly that a 1.2 GHz SPARC can do 400-500 MB/sec of encoding (time spent in the misnamed function vdev_raidz_reconstruct) for a 3-disk RAID-Z group. Strange, that seems very low. Ah, I see. The current code loops through each buffer, either copying or XORing it into the parity.

[zfs-discuss] Re: Re: Recommendation ZFS on StorEdge 3320

2006-09-13 Thread Anton B. Rang
With ZFS however the in-between cache is obsolete, as individual disk caches can be used directly. I also openly question whether even the dedicated RAID HW is faster than the newest CPUs in modern servers. Individual disk caches are typically in the 8-16 MB range; for 15 disks, that gives you

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Eric Schrock
On Wed, Sep 13, 2006 at 09:14:36AM -0700, Frank Cusack wrote: Why again shouldn't zfs have a hostid written into the pool, to prevent import if the hostid doesn't match? See: 6282725 hostname/hostid should be stored in the label Keep in mind that this is not a complete clustering solution -

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Torrey McMahon
eric kustarz wrote: I want per pool, per dataset, and per file - where all are done by the filesystem (ZFS), not the application. I was talking about a further enhancement to copies beyond what Matt is currently proposing - per-file copies, but it's more work (one thing being we don't have

Re: [zfs-discuss] Re: Re: Proposal: multiple copies of user data

2006-09-13 Thread Gregory Shaw
On Sep 12, 2006, at 2:55 PM, Celso wrote: On 12/09/06, Celso [EMAIL PROTECTED] wrote: One of the great things about zfs is that it protects not just against mechanical failure, but against silent data corruption. Having this available to laptop owners seems to me to be important to making zfs even

[zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread James Dickens
I filed this RFE earlier; since there is no way for non-Sun personnel to see this RFE for a while, I am posting it here and asking for feedback from the community. [Fwd: CR 6470231 Created P5 opensolaris/triage-queue Add an inuse check that is enforced even if import -f is used.]

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 9:32:50 AM -0700 Eric Schrock [EMAIL PROTECTED] wrote: On Wed, Sep 13, 2006 at 09:14:36AM -0700, Frank Cusack wrote: Why again shouldn't zfs have a hostid written into the pool, to prevent import if the hostid doesn't match? See: 6282725 hostname/hostid should be

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Bart Smaalders
Torrey McMahon wrote: eric kustarz wrote: I want per pool, per dataset, and per file - where all are done by the filesystem (ZFS), not the application. I was talking about a further enhancement to copies beyond what Matt is currently proposing - per-file copies, but it's more work (one thing

Re: [zfs-discuss] Snapshots and backing store

2006-09-13 Thread Torrey McMahon
Matthew Ahrens wrote: Nicolas Dorfsman wrote: Hi, There's something really bizarre in the ZFS snapshot specs: Uses no separate backing store. Hmm... if I want to share one physical volume somewhere in my SAN as THE snapshot backing store... it becomes impossible to do! Really bad. Is there

Re: [zfs-discuss] Proposal: multiple copies of user data

2006-09-13 Thread Torrey McMahon
Bart Smaalders wrote: Torrey McMahon wrote: eric kustarz wrote: I want per pool, per dataset, and per file - where all are done by the filesystem (ZFS), not the application. I was talking about a further enhancement to copies beyond what Matt is currently proposing - per-file copies, but

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Dale Ghent
On Sep 13, 2006, at 12:32 PM, Eric Schrock wrote: Storing the hostid as a last-ditch check for administrative error is a reasonable RFE - just one that we haven't yet gotten around to. Claiming that it will solve the clustering problem oversimplifies the problem and will lead to people who

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 1:28:47 PM -0400 Dale Ghent [EMAIL PROTECTED] wrote: On Sep 13, 2006, at 12:32 PM, Eric Schrock wrote: Storing the hostid as a last-ditch check for administrative error is a reasonable RFE - just one that we haven't yet gotten around to. Claiming that it will solve the

[zfs-discuss] zpool always thinks it's mounted on another system

2006-09-13 Thread Rich
Hi zfs-discuss, I was running Solaris 11 b42 on x86, and I tried upgrading to b44. I didn't have space on the root for Live Upgrade, so I booted from disc to upgrade, but it failed on every attempt, so I ended up blowing away / and doing a clean b44 install. Now the zpool that was attached to that

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Darren J Moffat
Frank Cusack wrote: Sounds cool! Better than depending on an out-of-band heartbeat. I disagree; it sounds really, really bad. If you want a high-availability cluster you really need a faster interconnect than spinning rust, which is probably the slowest interface we have now! -- Darren J

Re: [zfs-discuss] zpool always thinks it's mounted on another system

2006-09-13 Thread Eric Schrock
Can you send the output of 'zdb -l /dev/dsk/c2t0d0s0' ? So you do the 'zpool import -f' and all is well, but then when you reboot, it doesn't show up, and you must import it again? Can you send the output of 'zdb -C' both before and after you do the import? Thanks, - Eric On Wed, Sep 13,
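For anyone following along, the requested diagnostics amount to (device and pool names as given elsewhere in this thread):

  # zdb -l /dev/dsk/c2t0d0s0    (dump the labels on the device)
  # zdb -C                      (dump the cached pool configuration)
  # zpool import -f moonside
  # zdb -C                      (and again after the import)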

Re: Re: [zfs-discuss] Marvell cards.. as recommended

2006-09-13 Thread Joe Little
On 9/12/06, James C. McPherson [EMAIL PROTECTED] wrote: Joe Little wrote: So, people here recommended the Marvell cards, and one even provided a link to acquire them for SATA jbod support. Well, this is what the latest bits (B47) say: Sep 12 13:51:54 vram marvell88sx: [ID 679681

Re: [zfs-discuss] Loss of compression with send/receive

2006-09-13 Thread Eric Schrock
You want: 6421959 want zfs send to preserve properties ('zfs send -p') Which Matt is currently working on. - Eric On Thu, Sep 14, 2006 at 02:04:32AM +0800, Darren Reed wrote: Using Solaris 10, Update 2 (b9a) I've just used zfs send | zfs receive to move some filesystems from one disk to

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Dale Ghent
On Sep 13, 2006, at 1:37 PM, Darren J Moffat wrote: That might be acceptable in some environments but that is going to cause disks to spin up. That will be very unacceptable in a laptop and maybe even in some energy conscious data centres. Introduce an option to 'zpool create'? Come to

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Ceri Davies
On Wed, Sep 13, 2006 at 06:37:25PM +0100, Darren J Moffat wrote: Dale Ghent wrote: On Sep 13, 2006, at 12:32 PM, Eric Schrock wrote: Storing the hostid as a last-ditch check for administrative error is a reasonable RFE - just one that we haven't yet gotten around to. Claiming that it will

Re: [zfs-discuss] zpool always thinks it's mounted on another system

2006-09-13 Thread Rich
I do the 'zpool import -f moonside' and all is well until I reboot, at which point I must 'zpool import -f' again. Below is the output of zdb -l /dev/dsk/c2t0d0s0:

  LABEL 0
    version=3
    name='moonside'
    state=0
    txg=1644418

Re: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread James Dickens
On 9/13/06, Eric Schrock [EMAIL PROTECTED] wrote: There are several problems I can see: - This is what the original '-f' flag is for. I think a better approach is to expand the default message of 'zpool import' with more information, such as which was the last host to access the pool and

[zfs-discuss] Re: Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread Anton B. Rang
I think there are at least two separate issues here. The first is that ZFS doesn't support multiple hosts accessing the same pool. That's simply a matter of telling people. UFS doesn't support multiple hosts, but it doesn't have any special features to prevent administrators from *trying* it.

[zfs-discuss] Re: Re: Marvell cards.. as recommended

2006-09-13 Thread Anton B. Rang
If I'm reading the source correctly, for the $60xx boards, the only supported revision is $09. Yours is $07, which presumably has some errata with no workaround, and which the Solaris driver refuses to support. Hope you can return it ... ?

[zfs-discuss] Re: Re: Marvell cards.. as recommended

2006-09-13 Thread Anton B. Rang
A quick peek at the Linux source shows a small workaround in place for the 07 revision... maybe if you file a bug against Solaris to support this revision it might be possible to get it added, at least if that's the only issue.

[zfs-discuss] Re: Proposal: multiple copies of user data

2006-09-13 Thread Anton B. Rang
Is this true for single-sector, vs. single-ZFS-block, errors? (Yes, it's pathological and probably nobody really cares.) I didn't see anything in the code which falls back on single-sector reads. (It's slightly annoying that the interface to the block device drivers loses the SCSI error

Re: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread Eric Schrock
On Wed, Sep 13, 2006 at 02:29:55PM -0500, James Dickens wrote: this would not be the first time that Solaris overrode an administrative command, because it's just not safe or sane to do so. For example: rm -rf /. As I've repeated before, and will continue to repeat, it's not actually possible

Re: [zfs-discuss] Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread James Dickens
On 9/13/06, Eric Schrock [EMAIL PROTECTED] wrote: On Wed, Sep 13, 2006 at 02:29:55PM -0500, James Dickens wrote: this would not be the first time that Solaris overrode an administrative command, because it's just not safe or sane to do so. For example: rm -rf /. As I've repeated before, and

[zfs-discuss] Re: when zfs enabled java

2006-09-13 Thread Mark Maybee
Jill Manfield wrote: My customer is running java on a ZFS file system. His platform is Solaris 10 x86 on an SF X4200. When he enabled ZFS, his 18 GB of free memory dropped to 2 GB rather quickly. I had him do a # ps -e -o pid,vsz,comm | sort -n +1 and it came back: The culprit application you see
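Mark's reply is truncated above; a standard way to confirm where the memory actually went (the ARC lives in the kernel, so it never shows up in java's vsz) is the kernel memory summary dcmd:

  # echo ::memstat | mdb -k

When the ARC has grown large, the Kernel line of that summary accounts for most of the missing 16 GB.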

Re: [zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Matthew Ahrens
Nicolas Dorfsman wrote: We need to think of ZFS as ZFS, and not as a new filesystem! I mean, the whole concept is different. Agreed. So what could be the best architecture? What is the problem? With UFS, I used to have separate metadevices/LUNs for each application. With ZFS, I thought

[zfs-discuss] Re: Bizarre problem with ZFS filesystem

2006-09-13 Thread Anantha N. Srirama
One more piece of information: I was able to ascertain that the slowdown happens only when ZFS is used heavily, meaning lots of in-flight I/O. This morning, when the system was quiet, my write performance to the /u099 filesystem was excellent; it has since gone south as I reported earlier. I am currently

Re: [zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Darren Dunham
Including performance considerations? For instance, if I have two Oracle databases with two I/O profiles (TP versus batch), what would be the best:
  1) Two pools, each one on two LUNs. Each LUN distributed on n trays.
  2) One pool on one LUN. This LUN distributed on 2 x n trays.
  3) One
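In zpool terms, the first two options come down to something like this (device names invented for illustration):

  # option 1: one pool per I/O profile
  zpool create tp    c2t0d0 c2t1d0
  zpool create batch c3t0d0 c3t1d0

  # option 2: a single pool on one big LUN
  zpool create dbpool c4t0d0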

Re: [zfs-discuss] Re: Snapshots and backing store

2006-09-13 Thread Torrey McMahon
Matthew Ahrens wrote: Nicolas Dorfsman wrote: We need to think of ZFS as ZFS, and not as a new filesystem! I mean, the whole concept is different. Agreed. So what could be the best architecture? What is the problem? With UFS, I used to have separate metadevices/LUNs for each

[zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread Erik Trimble
OK, this may seem like a stupid question (and we all know that there are such things...) I'm considering sharing a disk array (something like a 3510FC) between two different systems, a SPARC and an Opteron. Will ZFS transparently work to import/export pools between the two systems? That is, can

Re: [zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread Eric Schrock
If you're using EFI labels, yes (VTOC labels are not endian neutral). ZFS will automatically convert endianness from the on-disk format, and new data will be written using the native endianness, so data will gradually be rewritten to avoid the byteswap overhead. - Eric On Wed, Sep 13, 2006 at
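In practice the move is just an export/import pair (pool name invented), with the endianness conversion happening transparently as blocks are read:

  sparc# zpool export shared
  x86# zpool import shared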

Re: [zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread James C. McPherson
Erik Trimble wrote: OK, this may seem like a stupid question (and we all know that there are such things...) I'm considering sharing a disk array (something like a 3510FC) between two different systems, a SPARC and an Opteron. Will ZFS transparently work to import/export pools between the two

Re: [zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread Torrey McMahon
Erik Trimble wrote: OK, this may seem like a stupid question (and we all know that there are such things...) I'm considering sharing a disk array (something like a 3510FC) between two different systems, a SPARC and an Opteron. Will ZFS transparently work to import/export pools between the two

Re: [zfs-discuss] Importing ZFS filesystems across architectures...

2006-09-13 Thread James Dickens
On 9/13/06, Erik Trimble [EMAIL PROTECTED] wrote: OK, this may seem like a stupid question (and we all know that there are such things...) I'm considering sharing a disk array (something like a 3510FC) between two different systems, a SPARC and an Opteron. Will ZFS transparently work to

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread James C. McPherson
Frank Cusack wrote: ...[snip James McPherson's objections to PMC] I understand the objection to mickey mouse configurations, but I don't understand the objection to (what I consider) simply improving safety. ... And why should failover be limited to SC? Why shouldn't VCS be able to play? Why

Re: [zfs-discuss] Re: Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread Daniel Rock
Anton B. Rang wrote: The hostid solution that VxVM uses would catch this second problem, because when A came up after its reboot, it would find that -- even though it had created the pool -- it was not the last machine to access it, and could refuse to automatically mount it. If the

Re: [zfs-discuss] Re: Comments on a ZFS multiple use of a pool, RFE.

2006-09-13 Thread Frank Cusack
On September 14, 2006 1:25:01 AM +0200 Daniel Rock [EMAIL PROTECTED] wrote: Just to clear some things up. The OP who started the whole discussion would have had the same problems with VxVM as he has now with ZFS. If you force an import of a disk group on one host while it is still active on

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 4:33:31 PM -0700 Frank Cusack [EMAIL PROTECTED] wrote: You'd typically have a dedicated link for heartbeat; what if that cable gets yanked or that NIC port dies? The backup system could avoid mounting the pool if zfs had its own heartbeat. What if the cluster software has

[zfs-discuss] Re: zfs and Oracle ASM

2006-09-13 Thread Anantha N. Srirama
I did a non-scientific benchmark against ASM and ZFS. Just look for my posts and you'll see it. To summarize: it was a statistical tie for simple loads of around 2 GB of data, and we've chosen to stick with ASM for a variety of reasons, not the least of which is its ability to rebalance when disks

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Richard Elling
Dale Ghent wrote: James C. McPherson wrote: As I understand things, SunCluster 3.2 is expected to have support for HA-ZFS and until that version is released you will not be running in a supported configuration and so any errors you encounter are *your fault alone*. Still, after reading

Re: [zfs-discuss] Re: ZFS imported simultaneously on 2 systems...

2006-09-13 Thread Frank Cusack
On September 13, 2006 7:07:40 PM -0700 Richard Elling [EMAIL PROTECTED] wrote: Dale Ghent wrote: James C. McPherson wrote: As I understand things, SunCluster 3.2 is expected to have support for HA-ZFS and until that version is released you will not be running in a supported configuration

Re: [zfs-discuss] Re: zfs and Oracle ASM

2006-09-13 Thread Richard Elling
Anantha N. Srirama wrote: I did a non-scientific benchmark against ASM and ZFS. Just look for my posts and you'll see it. To summarize it was a statistical tie for simple loads of around 2GB of data and we've chosen to stick with ASM for a variety of reasons not the least of which is its

Re: [zfs-discuss] Re: Re: Re: Proposal: multiple copies of user data

2006-09-13 Thread Wee Yeh Tan
On 9/13/06, Matthew Ahrens [EMAIL PROTECTED] wrote: Sure, if you want *everything* in your pool to be mirrored, there is no real need for this feature (you could argue that setting up the pool would be easier if you didn't have to slice up the disk though). Not necessarily. Implementing this

Re: [zfs-discuss] Snapshots and backing store

2006-09-13 Thread David Magda
On Sep 13, 2006, at 10:52, Scott Howard wrote: It's not at all bizarre once you understand how ZFS works. I'd suggest reading through some of the documentation available at http://www.opensolaris.org/os/community/zfs/docs/ , in particular the slides available there. The presentation that

Re: [zfs-discuss] Re: Re: Marvell cards.. as recommended

2006-09-13 Thread Joe Little
Yeah. I got the message from a few others, and we are hoping to return it and buy the newer one. I'm sort of surprised by the limited set of SATA RAID or JBOD cards that one can actually use. Even the ones linked to on this list sometimes aren't supported :). I need to get up and running like

[zfs-discuss] any update on zfs root/boot ?

2006-09-13 Thread James C. McPherson
Hi folks, I'm in the annoying position of having to replace my rootdisk (since it's a [EMAIL PROTECTED]@$! Maxtor and dying). I'm currently running with zfsroot after following Tabriz' and TimF's procedure to enable that. However, I'd like to know whether there's a better way to get zfs