Re: [zfs-discuss] HP Proliant DL360 G7

2013-01-08 Thread mark
... not that they're bad but in this realm you're the man :) Thanks, Mark

Re: [zfs-discuss] HP Proliant DL360 G7

2013-01-08 Thread Mark -
Good call Saso. Sigh... I guess I wait to hear from HP on supported IT mode HBAs in their D2000s or other jbods. On Tue, Jan 8, 2013 at 11:40 AM, Sašo Kiselkov skiselkov...@gmail.com wrote: On 01/08/2013 04:27 PM, mark wrote: On Jul 2, 2012, at 7:57 PM, Richard Elling wrote: FYI, HP also

Re: [zfs-discuss] Repairing corrupted ZFS pool

2012-11-19 Thread Mark Shellenbaum
On 11/16/12 17:15, Peter Jeremy wrote: I have been tracking down a problem with zfs diff that reveals itself variously as a hang (unkillable process), panic or error, depending on the ZFS kernel version but seems to be caused by corruption within the pool. I am using FreeBSD but the issue looks

Re: [zfs-discuss] Repairing corrupted ZFS pool

2012-11-19 Thread Mark Shellenbaum
On 11/19/12 1:14 PM, Jim Klimov wrote: On 2012-11-19 20:58, Mark Shellenbaum wrote: There is probably nothing wrong with the snapshots. This is a bug in ZFS diff. The ZPL parent pointer is only guaranteed to be correct for directory objects. What you probably have is a file that was hard

[zfs-discuss] Trick to keeping NFS file references in kernel memory for Dtrace?

2012-10-03 Thread Mark
for monitoring usage? I wonder how they have it all working in Fishworks gear as some of the analytics demos show you being able to drill down on through file activity in real time. Any advice or suggestions greatly appreciated. Cheers, Mark

[zfs-discuss] Zpool recovery after too many failed disks

2012-08-27 Thread Mark Wolek
: Permanent errors have been detected in the following files: rpool/filemover:0x1 # zfs list NAME USED AVAIL REFER MOUNTPOINT rpool 6.64T 0 29.9K /rpool rpool/filemover 6.64T 323G 6.32T - Thanks Mark

Re: [zfs-discuss] Problem with ESX NFS store on ZFS

2012-03-01 Thread Mark Wolek
=192.168.1.52:192.168.1.51:192.168.1.53 local -Original Message- From: Jim Klimov [mailto:jimkli...@cos.ru] Sent: Wednesday, February 29, 2012 1:44 PM To: Mark Wolek Cc: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] Problem with ESX NFS store on ZFS 2012-02-29 21:15, Mark Wolek

[zfs-discuss] Problem with ESX NFS store on ZFS

2012-02-29 Thread Mark Wolek
? Thanks Mark

Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Mark Musante
You can see the original ARC case here: http://arc.opensolaris.org/caselog/PSARC/2009/557/20091013_lori.alt On 8 Dec 2011, at 16:41, Ian Collins wrote: On 12/ 9/11 12:39 AM, Darren J Moffat wrote: On 12/07/11 20:48, Mertol Ozyoney wrote: Unfortunetly the answer is no. Neither l1 nor l2

[zfs-discuss] First zone creation - getting ZFS error

2011-12-06 Thread Mark Creamer
. Do I have to do: zfs create datastore/zones/zonemaster before I can create a zone in that path? That's not in the documentation, so I didn't want to do anything until someone can point out my error for me. Thanks for your help! -- Mark ___ zfs-discuss

[zfs-discuss] Log disk with all ssd pool?

2011-10-28 Thread Mark Wolek
a drive to sorry no more writes allowed scenarios. Thanks Mark

Re: [zfs-discuss] Log disk with all ssd pool?

2011-10-28 Thread Mark Wolek
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Neil Perrin Sent: Friday, October 28, 2011 11:38 AM To: zfs-discuss@opensolaris.org Subject: Re: [zfs-discuss] Log disk with all ssd pool? On 10/28/11 00:54, Neil Perrin wrote: On 10/28/11 00:04, Mark Wolek wrote: Still kicking around this idea

Re: [zfs-discuss] File contents changed with no ZFS error

2011-10-22 Thread Mark Sandrock
Why don't you see which byte differs, and how it does? Maybe that would suggest the failure mode. Is it the same byte data in all affected files, for instance? Mark Sent from my iPhone On Oct 22, 2011, at 2:08 PM, Robert Watzlavick rob...@watzlavick.com wrote: On Oct 22, 2011, at 13:14

Re: [zfs-discuss] about btrfs and zfs

2011-10-18 Thread Mark Sandrock
. Sweet update. Mark

Re: [zfs-discuss] Mirror Gone

2011-09-27 Thread Mark Musante
On 27 Sep 2011, at 18:29, Edward Ned Harvey wrote: From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org] On Behalf Of Tony MacDoodle Now: mirror-0 ONLINE 0 0 0 c1t2d0 ONLINE 0 0 0 c1t3d0 ONLINE

Re: [zfs-discuss] All drives intact but vdev UNAVAIL in raidz1

2011-09-06 Thread Mark J Musante
On Tue, 6 Sep 2011, Tyler Benster wrote: It seems quite likely that all of the data is intact, and that something different is preventing me from accessing the pool. What can I do to recover the pool? I have downloaded the Solaris 11 express livecd if that would be of any use. Try running

Re: [zfs-discuss] zpool replace

2011-08-15 Thread Mark J Musante
Hi Doug, The vms pool was created in a non-redundant way, so there is no way to get the data off of it unless you can put back the original c0t3d0 disk. If you can still plug in the disk, you can always do a zpool replace on it afterwards. If not, you'll need to restore from backup,
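A minimal sketch of that recovery path, assuming the original c0t3d0 can be physically reattached (c0t4d0 is a made-up name for an eventual replacement disk):
  # zpool status vms                  # confirm c0t3d0 is visible again and the pool is ONLINE
  # zpool replace vms c0t3d0 c0t4d0   # optionally migrate onto a fresh disk afterwards
  # zpool status vms                  # let the resilver finish before pulling c0t3d0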

Re: [zfs-discuss] Large scale performance query

2011-08-06 Thread Mark Sandrock
Shouldn't the choice of RAID type also be based on the i/o requirements? Anyway, with RAID-10, even a second failed disk is not catastrophic, so long as it is not the counterpart of the first failed disk, no matter the no. of disks. (With 2-way mirrors.) But that's why we do backups, right? Mark

Re: [zfs-discuss] [illumos-Developer] zfs refratio property

2011-06-06 Thread Mark Musante
minor quibble: compressratio uses a lowercase x for the description text whereas the new prop uses an uppercase X On 6 Jun 2011, at 21:10, Eric Schrock wrote: Webrev has been updated: http://dev1.illumos.org/~eschrock/cr/zfs-refratio/ - Eric -- Eric Schrock Delphix 275

Re: [zfs-discuss] NFS acl inherit problem

2011-06-01 Thread Mark Shellenbaum
On 6/1/11 12:51 AM, lance wilson wrote: The problem is that nfs clients that connect to my solaris 11 express server are not inheriting the acl's that are set for the share. They create files that don't have any acl assigned to them, just the normal unix file permissions. Can someone please

Re: [zfs-discuss] Another zfs issue

2011-06-01 Thread Mark Musante
Yeah, this is a known problem. The DTL on the toplevel shows an outage, and is preventing the removal of the spare even though removing the spare won't make the outage worse. Unfortunately, for opensolaris anyway, there is no workaround. You could try doing a full scrub, replacing any disks

Re: [zfs-discuss] Recommended eSATA PCI cards

2011-05-06 Thread Mark Danico
external eSATA enclosures to these. You'll get two eSATA ports without needing to use any PCI slots and I believe that if you use the very bottom pci slot opening you won't even block any of the actual pci slots from future use. -Mark D. On 05/ 6/11 12:04 PM, Rich Teer wrote: Hi all, I'm looking

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Mark Sandrock
exactly the problem is? I don't follow? What else would an X4540 or a 7xxx box be used for, other than a storage appliance? Guess I'm slow. :-) Mark

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Mark Sandrock
On Apr 8, 2011, at 3:29 AM, Ian Collins i...@ianshome.com wrote: On 04/ 8/11 08:08 PM, Mark Sandrock wrote: On Apr 8, 2011, at 2:37 AM, Ian Collins i...@ianshome.com wrote: On 04/ 8/11 06:30 PM, Erik Trimble wrote: On 4/7/2011 10:25 AM, Chris Banal wrote: While I understand everything

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Mark Sandrock
line, which by the way has brilliant engineering design, the choice is gone now. Okay, so what is the great advantage of an X4540 versus an x86 server plus disk array(s)? Mark

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Mark Sandrock
On Apr 8, 2011, at 9:39 PM, Ian Collins i...@ianshome.com wrote: On 04/ 9/11 03:20 AM, Mark Sandrock wrote: On Apr 8, 2011, at 7:50 AM, Evaldas Auryla evaldas.aur...@edqm.eu wrote: On 04/ 8/11 01:14 PM, Ian Collins wrote: You have built-in storage failover with an AR cluster; and they do

Re: [zfs-discuss] X4540 no next-gen product?

2011-04-08 Thread Mark Sandrock
On Apr 8, 2011, at 11:19 PM, Ian Collins i...@ianshome.com wrote: On 04/ 9/11 03:53 PM, Mark Sandrock wrote: I'm not arguing. If it were up to me, we'd still be selling those boxes. Maybe you could whisper in the right ear? I wish. I'd have a long list if I could do that. Mark

[zfs-discuss] trouble replacing spare disk

2011-04-05 Thread Mahabir, Mark I.
drive in the software, before physically replacing it? I'm also not sure at exactly which juncture to do a 'zpool clear' and 'zpool scrub'? I'd appreciate any guidance - thanks in advance, Mark Mark Mahabir Systems Manager, X-Ray and Observational Astronomy Dept. of Physics & Astronomy

Re: [zfs-discuss] Any use for extra drives?

2011-03-25 Thread Mark Sandrock
(/export/...) on a separate pool, thus off-loading the root pool. My two cents, Mark

Re: [zfs-discuss] Any use for extra drives?

2011-03-24 Thread Mark Sandrock
as I would have done in the old days under SDS. :-) Mark

Re: [zfs-discuss] cannot replace c10t0d0 with c10t0d0: device is too small

2011-03-04 Thread Mark J Musante
The fix for 6991788 would probably let the 40mb drive work, but it would depend on the asize of the pool. On Fri, 4 Mar 2011, Cindy Swearingen wrote: Hi Robert, We integrated some fixes that allowed you to replace disks of equivalent sizes, but 40 MB is probably beyond that window. Yes,

[zfs-discuss] Investigating a hung system

2011-02-25 Thread Mark Logan
= 3836 MB arc_meta_max = 3951 MB Is it normal for arc_meta_used == arc_meta_limit? Does this explain the hang? Thanks, Mark

[zfs-discuss] ZFS and Virtual Disks

2011-02-14 Thread Mark Creamer
for trouble. This environment is completely available to mess with (no data at risk), so I'm willing to try any option you guys would recommend. Thanks! -- Mark

Re: [zfs-discuss] ZFS and spindle speed (7.2k / 10k / 15k)

2011-02-02 Thread Mark Sandrock
On Feb 2, 2011, at 8:10 PM, Eric D. Mudama wrote: All other things being equal, the 15k and the 7200 drive, which share electronics, will have the same max transfer rate at the OD. Is that true? So the only difference is in the access time? Mark

Re: [zfs-discuss] Best choice - file system for system

2011-01-31 Thread Mark Sandrock
Why do you say fssnap has the same problem? If it write locks the file system, it is only for a matter of seconds, as I recall. Years ago, I used it on a daily basis to do ufsdumps of large fs'es. Mark On Jan 30, 2011, at 5:41 PM, Torrey McMahon wrote: On 1/30/2011 5:26 PM, Joerg Schilling

Re: [zfs-discuss] Best choice - file system for system

2011-01-31 Thread Mark Sandrock
iirc, we would notify the user community that the FS'es were going to hang briefly. Locking the FS'es is the best way to quiesce it, when users are worldwide, imo. Mark On Jan 31, 2011, at 9:45 AM, Torrey McMahon wrote: A matter of seconds is a long time for a running Oracle database

Re: [zfs-discuss] A few questions

2010-12-20 Thread Mark Sandrock
. Mark

Re: [zfs-discuss] A few questions

2010-12-20 Thread Mark Sandrock
filesystems.) I'm supposing that a block-level snapshot is not doable -- or is it? Mark On Dec 20, 2010, at 1:27 PM, Erik Trimble wrote: On 12/20/2010 9:20 AM, Saxon, Will wrote: -Original Message- From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss- boun...@opensolaris.org

Re: [zfs-discuss] A few questions

2010-12-20 Thread Mark Sandrock
On Dec 20, 2010, at 2:05 PM, Erik Trimble wrote: On 12/20/2010 11:56 AM, Mark Sandrock wrote: Erik, just a hypothetical what-if ... In the case of resilvering on a mirrored disk, why not take a snapshot, and then resilver by doing a pure block copy from the snapshot? It would

Re: [zfs-discuss] A few questions

2010-12-20 Thread Mark Sandrock
It well may be that different methods are optimal for different use cases. Mechanical disk vs. SSD; mirrored vs. raidz[123]; sparse vs. populated; etc. It would be interesting to read more in this area, if papers are available. I'll have to take a look. ... Or does someone have pointers? Mark

Re: [zfs-discuss] Problem with a failed replace.

2010-12-07 Thread Mark J Musante
On Mon, 6 Dec 2010, Curtis Schiewek wrote: Hi Mark, I've tried running zpool attach media ad24 ad12 (ad12 being the new disk) and I get no response. I tried leaving the command run for an extended period of time and nothing happens. What version of solaris are you running

Re: [zfs-discuss] Zfs ignoring spares?

2010-12-05 Thread Mark Musante
On 5 Dec 2010, at 16:06, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote: Hot spares are dedicated spares in the ZFS world. Until you replace the actual bad drives, you will be running in a degraded state. The idea is that spares are only used in an emergency. You are degraded until your

Re: [zfs-discuss] Problem with a failed replace.

2010-12-03 Thread Mark J Musante
On Fri, 3 Dec 2010, Curtis Schiewek wrote: NAME STATE READ WRITE CKSUM media DEGRADED 0 0 0 raidz1 ONLINE 0 0 0 ad8 ONLINE 0 0 0 ad10 ONLINE 0 0 0

Re: [zfs-discuss] Problem with a failed replace.

2010-12-03 Thread Mark J Musante
ad24 ad18 for you. On Fri, Dec 3, 2010 at 1:38 PM, Mark J Musante mark.musa...@oracle.com wrote: On Fri, 3 Dec 2010, Curtis Schiewek wrote: NAME STATE READ WRITE CKSUM media DEGRADED 0 0 0 raidz1 ONLINE 0 0 0

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-19 Thread Mark Little
with their support and see if you can use something similar. Cheers, Mark

Re: [zfs-discuss] Excruciatingly slow resilvering on X4540 (build 134)

2010-11-15 Thread Mark Sandrock
On Nov 2, 2010, at 12:10 AM, Ian Collins wrote: On 11/ 2/10 08:33 AM, Mark Sandrock wrote: I'm working with someone who replaced a failed 1TB drive (50% utilized), on an X4540 running OS build 134, and I think something must be wrong. Last Tuesday afternoon, zpool status reported

Re: [zfs-discuss] Faster than 1G Ether... ESX to ZFS

2010-11-15 Thread Mark Sandrock
Edward, I recently installed a 7410 cluster, which had added Fiber Channel HBAs. I know the site also has Blade 6000s running VMware, but no idea if they were planning to run fiber to those blades (or even had the option to do so). But perhaps FC would be an option for you? Mark On Nov 12

Re: [zfs-discuss] zpool split how it works?

2010-11-10 Thread Mark J Musante
On Wed, 10 Nov 2010, Darren J Moffat wrote: On 10/11/2010 11:18, sridhar surampudi wrote: I was wondering how zpool split works or implemented. Or are you really asking about the implementation details ? If you want to know how it is implemented then you need to read the source code. Also

[zfs-discuss] Excruciatingly slow resilvering on X4540 (build 134)

2010-11-01 Thread Mark Sandrock
the archives, but they don't seem searchable. Or am I wrong about that? Thanks. Mark (subscription pending)

Re: [zfs-discuss] ZPOOL_CONFIG_IS_HOLE

2010-10-15 Thread Mark Musante
You should only see a HOLE in your config if you removed a slog after having added more stripes. Nothing to do with bad sectors. On 14 Oct 2010, at 06:27, Matt Keenan wrote: Hi, Can someone shed some light on what this ZPOOL_CONFIG is exactly. At a guess is it a bad sector of the disk,

Re: [zfs-discuss] zfs unmount versus umount?

2010-09-30 Thread Mark J Musante
On Thu, 30 Sep 2010, Linder, Doug wrote: Is there any technical difference between using zfs unmount to unmount a ZFS filesystem versus the standard unix umount command? I always use zfs unmount but some of my colleagues still just use umount. Is there any reason to use one over the other?

Re: [zfs-discuss] zfs unmount versus umount?

2010-09-30 Thread Mark J Musante
On Thu, 30 Sep 2010, Darren J Moffat wrote: * It can be applied recursively down a ZFS hierarchy True. * It will unshare the filesystems first Actually, because we use the zfs command to do the unmount, we end up doing the unshare on the filesystem first. See the opensolaris code for

Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Mark J Musante
On Mon, 20 Sep 2010, Valerio Piancastelli wrote: After a crash i cannot access one of my datasets anymore. ls -v cts brwxrwxrwx+ 2 root root 0, 0 ott 18 2009 cts zfs list sas/mail-cts NAME USED AVAIL REFER MOUNTPOINT sas/mail-cts 149G 250G 149G /sas/mail-cts

Re: [zfs-discuss] Cannot access dataset

2010-09-20 Thread Mark J Musante
On Mon, 20 Sep 2010, Valerio Piancastelli wrote: Yes, it is mounted r...@disk-00:/volumes/store# zfs get sas/mail-ccts NAME PROPERTY VALUE SOURCE sas/mail-cts mounted yes - OK - so the next question would be where the data is. I assume when you say you cannot access

Re: [zfs-discuss] moving rppol in laptop to spare SSD drive.

2010-09-19 Thread Mark Farmer
Hi Steve, Couple of options. Create a new boot environment on the SSD, and this will copy the data over. Or zfs send -R rp...@backup | zfs recv altpool I'd use the alt boot environment, rather than the send and receive. Cheers, -Mark. On 19/09/2010, at 5:37 PM, Steve Arkley wrote
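A rough sketch of the send/receive option, assuming a recursive snapshot named @backup and a pool called altpool on the SSD (both names are illustrative):
  # zfs snapshot -r rpool@backup
  # zfs send -R rpool@backup | zfs recv -F altpool   # -F lets the stream overwrite altpool's empty root dataset
The boot-environment route preferred above would be roughly beadm create -p altpool newBE (names illustrative), which copies the current BE onto the target pool for you.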

Re: [zfs-discuss] performance leakage when copy huge data

2010-09-09 Thread Mark Little
? Cheers, Mark

Re: [zfs-discuss] onnv_142 - vfs_mountroot: cannot mount root

2010-09-07 Thread Mark J Musante
Did you run installgrub before rebooting? On Tue, 7 Sep 2010, Piotr Jasiukajtis wrote: Hi, After upgrade from snv_138 to snv_142 or snv_145 I'm unable to boot the system. Here is what I get. Any idea why it's not able to import rpool? I saw this issue also on older builds on a different
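For reference, reinstalling the boot blocks on an x86 root disk looks roughly like this (the slice name is illustrative and should point at the rpool device):
  # installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t0d0s0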

Re: [zfs-discuss] new labelfix needed

2010-09-02 Thread Mark J Musante
On Wed, 1 Sep 2010, Benjamin Brumaire wrote: your point has only a rhetorical meaning. I'm not sure what you mean by that. I was asking specifically about your situation. You want to run labelfix on /dev/rdsk/c0d1s4 - what happened to that slice that requires a labelfix? Is there

Re: [zfs-discuss] How to rebuild raidz after system reinstall

2010-09-02 Thread Mark J Musante
What does 'zpool import' show? If that's empty, what about 'zpool import -d /dev'?

Re: [zfs-discuss] How to rebuild raidz after system reinstall

2010-09-02 Thread Mark J Musante
On Thu, 2 Sep 2010, Dominik Hoffmann wrote: I think, I just destroyed the information on the old raidz members by doing zpool create BackupRAID raidz /dev/disk0s2 /dev/disk1s2 /dev/disk2s2 It should have warned you that two of the disks were already formatted with a zfs pool. Did it not do

Re: [zfs-discuss] new labelfix needed

2010-08-31 Thread Mark J Musante
On Mon, 30 Aug 2010, Benjamin Brumaire wrote: As this feature didn't make it into zfs it would be nice to have it again. Better to spend time fixing the problem that requires a 'labelfix' as a workaround, surely. What's causing the need to fix vdev labels?

Re: [zfs-discuss] pool died during scrub

2010-08-30 Thread Mark J Musante
On Mon, 30 Aug 2010, Jeff Bacon wrote: All of this would be ok... except THOSE ARE THE ONLY DEVICES THAT WERE PART OF THE POOL. How can it be missing a device that didn't exist? The device(s) in question are probably the logs you refer to here: I can't obviously use b134 to import the

Re: [zfs-discuss] zpool status and format/kernel disagree about root disk

2010-08-27 Thread Mark J Musante
On Fri, 27 Aug 2010, Rainer Orth wrote: zpool status thinks rpool is on c1t0d0s3, while format (and the kernel) correctly believe it's c11t0d0(s3) instead. Any suggestions? Try removing the symlinks or using 'devfsadm -C' as suggested here:
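In practice the cleanup would look something like this, assuming stale device links from the old controller numbering are the culprit:
  # devfsadm -C          # prune dangling links under /dev/dsk and /dev/rdsk
  # zpool status rpool   # recheck; the vdev name may only update after a re-import or reboot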

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Mark
Hey thanks for the replies everyone. Sadly most of those options will not work, since we are using a SUN Unified Storage 7210, the only option is to buy the SUN SSD's for it, which is about $15k USD for a pair. We also don't have the ability to shut off ZIL or any of the other options that

Re: [zfs-discuss] VM's on ZFS - 7210

2010-08-27 Thread Mark
It does, it's on a pair of large APC's. Right now we're using NFS for our ESX Servers. The only iSCSI LUN's I have are mounted inside a couple Windows VM's. I'd have to migrate all our VM's to iSCSI, which I'm willing to do if it would help and not cause other issues. So far the 7210

[zfs-discuss] VM's on ZFS - 7210

2010-08-26 Thread Mark
We are using a 7210, 44 disks I believe, 11 stripes of RAIDz sets. When I installed I selected the best bang for the buck on the speed vs capacity chart. We run about 30 VM's on it, across 3 ESX 4 servers. Right now, its all running NFS, and it sucks... sooo slow. iSCSI was no better. I

Re: [zfs-discuss] Narrow escape with FAULTED disks

2010-08-23 Thread Mark Bennett
I have been testing to work properly. On the other hand, I could just use the spare 7210 Appliance boot disk I have lying about. Mark.

Re: [zfs-discuss] Cant't detach spare device from pool

2010-08-18 Thread Mark Musante
You need to let the resilver complete before you can detach the spare. This is a known problem, CR 6909724. http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724 On 18 Aug 2010, at 14:02, Dr. Martin Mundschenk wrote: Hi! I had trouble with my raidz in the way, that some of

Re: [zfs-discuss] ZFS pool and filesystem version list, OpenSolaris builds list

2010-08-16 Thread Mark J Musante
I keep the pool version information up-to-date here: http://blogs.sun.com/mmusante/entry/a_zfs_taxonomy On Sun, 15 Aug 2010, Haudy Kazemi wrote: Hello, This is a consolidated list of ZFS pool and filesystem versions, along with the builds and systems they are found in. It is based on

Re: [zfs-discuss] Replaced pool device shows up in zpool status

2010-08-16 Thread Mark J Musante
On Mon, 16 Aug 2010, Matthias Appel wrote: Can anybody tell me how to get rid of c1t3d0 and heal my zpool? Can you do a zpool detach performance c1t3d0/o? If that works, then zpool replace performance c1t3d0 c1t0d0 should replace the bad disk with the new hot spare. Once the resilver
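Spelled out as commands, the suggested sequence is roughly (device names are the ones from the thread):
  # zpool detach performance c1t3d0/o         # drop the leftover half of the interrupted replace
  # zpool replace performance c1t3d0 c1t0d0   # rebuild onto the hot spare
  # zpool status performance                  # and let the resilver run to completion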

Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread Mark Musante
On 16 Aug 2010, at 22:30, Robert Hartzell wrote: cd /mnt ; ls bertha export var ls bertha boot etc where is the rest of the file systems and data? By default, root filesystems are not mounted. Try doing a zfs mount bertha/ROOT/snv_134
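A fuller sketch, assuming the pool was imported with an altroot of /mnt as in the thread:
  # zpool import -R /mnt bertha
  # zfs mount bertha/ROOT/snv_134   # root BEs are canmount=noauto, so the root filesystem must be mounted by hand
  # ls /mnt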

Re: [zfs-discuss] Opensolaris is apparently dead

2010-08-14 Thread Mark Bennett
ran Red Hat 9 with updated packages for quite a few years. As long as the kernel is stable, and you can work through the hurdles, it can still do the job. Mark.

Re: [zfs-discuss] ZFS development moving behind closed doors

2010-08-14 Thread Mark Bennett
bleeding edge testing community will no doubt impact on the Solaris code quality. It is now even more likely Solaris will revert to its niche on SPARC over the next few years. Mark.

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Tue, 10 Aug 2010, seth keith wrote: # zpool status pool: brick state: UNAVAIL status: One or more devices could not be used because the label is missing or invalid. There are insufficient replicas for the pool to continue functioning. action: Destroy and re-create the pool

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Wed, 11 Aug 2010, Seth Keith wrote: When I do a zdb -l /dev/rdsk/any device I get the same output for all my drives in the pool, but I don't think it looks right: # zdb -l /dev/rdsk/c4d0 What about /dev/rdsk/c4d0s0?

Re: [zfs-discuss] zfs replace problems please please help

2010-08-11 Thread Mark J Musante
On Wed, 11 Aug 2010, seth keith wrote: NAME STATE READ WRITE CKSUM brick DEGRADED 0 0 0 raidz1 DEGRADED 0 0 0 c13d0 ONLINE 0 0 0 c4d0

Re: [zfs-discuss] zfs replace problems please please help

2010-08-10 Thread Mark J Musante
On Tue, 10 Aug 2010, seth keith wrote: first off I don't have the exact failure messages here, and I did not take good notes of the failures, so I will do the best I can. Please try and give me advice anyway. I have a 7 drive raidz1 pool with 500G drives, and I wanted to replace them all

Re: [zfs-discuss] How to identify user-created zfs filesystems?

2010-08-04 Thread Mark J Musante
You can use 'zpool history -l syspool' to show the username of the person who created the dataset. The history is in a ring buffer, so if too many pool operations have happened since the dataset was created, the information is lost. On Wed, 4 Aug 2010, Peter Taps wrote: Folks, In my
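A quick illustration, using the pool name from the post (the grep just narrows the output to dataset creations):
  # zpool history -l syspool | grep 'zfs create'
The -l (long) form appends [user ... on host ...] to each record, which is where the creator's username comes from.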

Re: [zfs-discuss] Splitting root mirror to prep for re-install

2010-08-04 Thread Mark Musante
You can also use the zpool split command and save yourself having to do the zfs send|zfs recv step - all the data will be preserved. zpool split rpool preserve does essentially everything up to and including the zpool export preserve commands you listed in your original email. Just don't try
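In command form, the split route from the post:
  # zpool split rpool preserve   # detaches one side of each mirror and turns it into a new, exported pool named preserve
All the data is retained on the detached half, so no send/receive pass is needed.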

[zfs-discuss] snapshot question

2010-07-29 Thread Mark
I'm trying to understand how snapshots work in terms of how I can use them for recovering and/or duplicating virtual machines, and how I should set up my file system. I want to use OpenSolaris as a storage platform with NFS/ZFS for some development VMs; that is, the VMs use the OpenSolaris box

Re: [zfs-discuss] root pool expansion

2010-07-28 Thread Mark J Musante
On Wed, 28 Jul 2010, Gary Gendel wrote: Right now I have a machine with a mirrored boot setup. The SAS drives are 43Gs and the root pool is getting full. I do a backup of the pool nightly, so I feel confident that I don't need to mirror the drive and can break the mirror and expand the pool

[zfs-discuss] VMGuest IOMeter numbers

2010-07-25 Thread Mark
Hello, first time posting. I've been working with zfs on and off with limited *nix experience for a year or so now, and have read a lot of things by a lot of you I'm sure. Still tons I don't understand/know I'm sure. We've been having awful IO latencies on our 7210 running about 40 VM's

Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-15 Thread Mark J Musante
On Thu, 15 Jul 2010, Tim Castle wrote: j...@opensolaris:~# zpool import -d /dev ...shows nothing after 20 minutes OK, then one other thing to try is to create a new directory, e.g. /mydev, and create in it symbolic links to only those drives that are part of your pool. Based on your
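A sketch of that workaround, with made-up device names (link only the disks that belong to the pool):
  # mkdir /mydev
  # ln -s /dev/dsk/c5t0d0s0 /mydev/   # repeat for each member drive
  # zpool import -d /mydev files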

Re: [zfs-discuss] invalid vdev configuration meltdown

2010-07-14 Thread Mark J Musante
What does 'zpool import -d /dev' show? On Wed, 14 Jul 2010, Tim Castle wrote: My raidz1 (ZFSv6) had a power failure, and a disk failure. Now: j...@opensolaris:~# zpool import pool: files id: 3459234681059189202 state: UNAVAIL status: One or

[zfs-discuss] ZFS crash

2010-07-07 Thread Mark Christooph
I had an interesting dilemma recently and I'm wondering if anyone here can illuminate on why this happened. I have a number of pools, including the root pool, in on-board disks on the server. I also have one pool on a SAN disk, outside the system. Last night the SAN crashed, and shortly

Re: [zfs-discuss] ZFS fsck?

2010-07-06 Thread Mark J Musante
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote: Hi all With several messages in here about troublesome zpools, would there be a good reason to be able to fsck a pool? As in, check the whole thing instead of having to boot into live CDs and whatnot? You can do this with zpool scrub. It
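For example (pool name is illustrative):
  # zpool scrub tank        # verifies every allocated block against its checksum while the pool stays online
  # zpool status -v tank    # shows scrub progress and lists any files with permanent errors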

Re: [zfs-discuss] ZFS fsck?

2010-07-06 Thread Mark J Musante
On Tue, 6 Jul 2010, Roy Sigurd Karlsbakk wrote: what I'm saying is that there are several posts in here where the only solution is to boot onto a live cd and then do an import, due to metadata corruption. This should be doable from the installed system Ah, I understand now. A couple of

[zfs-discuss] ZFS on external iSCSI storage

2010-07-01 Thread Mark
I'm new with ZFS, but I have had good success using it with raw physical disks. One of my systems has access to an iSCSI storage target. The underlying physical array is in a proprietary disk storage device from Promise. So the question is, when building an OpenSolaris host to store its data on

Re: [zfs-discuss] Zpool import not working

2010-06-12 Thread Mark Musante
I'm guessing that the VirtualBox VM is ignoring write cache flushes. See this for more info: http://forums.virtualbox.org/viewtopic.php?f=8&t=13661 On 12 Jun, 2010, at 5.30, zfsnoob4 wrote: Thanks, that works. But it only works when I do a proper export first. If I export the pool then I can
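If ignored flushes are indeed the cause, VirtualBox has a per-disk setting to stop it discarding flush requests; roughly (the VM name is a placeholder, and the device/LUN path depends on how the disk is attached, per the linked thread):
  # VBoxManage setextradata "MyVM" "VBoxInternal/Devices/piix3ide/0/LUN#[0]/Config/IgnoreFlush" 0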

[zfs-discuss] NOTICE: spa_import_rootpool: error 5

2010-06-07 Thread Mark S Durney
IHAC who has an x4500 (x86 box) with a ZFS root filesystem. They installed patches today, the latest Solaris 10 x86 recommended patch cluster, and the patching seemed to complete successfully. Then, when they tried to reboot the box, the machine would not boot. They get the following error

Re: [zfs-discuss] ZFS and IBM SDD Vpaths

2010-05-29 Thread Mark Musante
Can you find the devices in /dev/rdsk? I see there is a path in /pseudo at least, but the zpool import command only looks in /dev. One thing you can try is doing this: # mkdir /tmpdev # ln -s /pseudo/vpat...@1:1 /tmpdev/vpath1a And then see if 'zpool import -d /tmpdev' finds the pool. On

Re: [zfs-discuss] zpool vdev's

2010-05-28 Thread Mark Musante
On 28 May, 2010, at 17.21, Vadim Comanescu wrote: In a stripe zpool configuration (no redundancy) is a certain disk regarded as an individual vdev or do all the disks in the stripe represent a single vdev? In a raidz configuration I'm aware that every single group of raidz disks is

Re: [zfs-discuss] cannot import pool from another system, device-ids different! please help!

2010-05-24 Thread Mark J Musante
On Mon, 24 May 2010, h wrote: i had 6 disks in a raidz1 pool that i replaced from 1TB drives to 2TB drives. i have installed the older 1TB drives in another system and would like to import the old pool to access some files i accidentally deleted from the new pool. Did you use the 'zpool
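Assuming the labels on the old 1TB drives are intact, the import would look something like this (oldpool is a placeholder; -f is needed if the pool was never exported from the original system):
  # zpool import            # lists importable pools found on the attached drives
  # zpool import -f oldpool # or use the numeric pool id printed by the previous command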

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-20 Thread Mark J Musante
On Wed, 19 May 2010, John Andrunas wrote: ff001f45e830 unix:die+dd () ff001f45e940 unix:trap+177b () ff001f45e950 unix:cmntrap+e6 () ff001f45ea50 zfs:ddt_phys_decref+c () ff001f45ea80 zfs:zio_ddt_free+55 () ff001f45eab0 zfs:zio_execute+8d () ff001f45eb50

Re: [zfs-discuss] Very serious performance degradation

2010-05-20 Thread Mark J Musante
On Thu, 20 May 2010, Edward Ned Harvey wrote: Also, since you've got s0 on there, it means you've got some partitions on that drive. You could manually wipe all that out via format, but the above is pretty brainless and reliable. The s0 on the old disk is a bug in the way we're formatting

Re: [zfs-discuss] zfs mount -a kernel panic

2010-05-19 Thread Mark J Musante
Do you have a coredump? Or a stack trace of the panic? On Wed, 19 May 2010, John Andrunas wrote: Running ZFS on a Nexenta box, I had a mirror get broken and apparently the metadata is corrupt now. If I try and mount vol2 it works but if I try and mount -a or mount vol2/vm2 is instantly

Re: [zfs-discuss] MPT issues strikes back

2010-04-27 Thread Mark Ogden
, we have not encountered that issue. You might want to look at http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6894775 too. -Mark Machine specs: Dell R710, 16 GB memory, 2 Intel Quad-Core E5506 SunOS san01 5.11 snv_134 i86pc i386 i86pc

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 7.06, Phillip Oldham wrote: I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure already defined. Starting an instance from this image, without attaching the EBS volume, shows the pool structure exists and that the pool state is UNAVAIL (as

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 7.31, Phillip Oldham wrote: I'm not actually issuing any when starting up the new instance. None are needed; the instance is booted from an image which has the zpool configuration stored within, so simply starts and sees that the devices aren't available, which become

Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs ec2]

2010-04-23 Thread Mark Musante
On 23 Apr, 2010, at 8.38, Phillip Oldham wrote: The instances are ephemeral; once terminated they cease to exist, as do all their settings. Rebooting an image keeps any EBS volumes attached, but this isn't the case I'm dealing with - its when the instance terminates unexpectedly. For
