Re: [zfs-discuss] Improving L1ARC cache efficiency with dedup

2011-12-08 Thread Mark Musante

You can see the original ARC case here:

http://arc.opensolaris.org/caselog/PSARC/2009/557/20091013_lori.alt
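
For reference, a minimal sketch of the stream-dedup feature discussed below ('zfs send -D'); the pool, dataset and host names are placeholders, not taken from this thread:

  # zfs snapshot tank/data@monday
  # zfs send -D tank/data@monday | ssh backuphost zfs receive backup/data

Duplicate blocks are sent only once within the stream, so neither the source nor the target pool needs dedup enabled.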

On 8 Dec 2011, at 16:41, Ian Collins wrote:

 On 12/ 9/11 12:39 AM, Darren J Moffat wrote:
 On 12/07/11 20:48, Mertol Ozyoney wrote:
 Unfortunately the answer is no. Neither the L1 nor the L2 cache is dedup aware.
 
 The only vendor I know that can do this is NetApp.
 
 In fact, most of our functions, like replication, are not dedup aware.
 For example, technically it's possible to optimize our replication so that
 it does not send data chunks if a data chunk with the same checksum
 exists on the target, without enabling dedup on target and source.
 We already do that with 'zfs send -D':
 
   -D
 
   Perform dedup processing on the stream. Deduplicated
   streams  cannot  be  received on systems that do not
   support the stream deduplication feature.
 
 
 
 
 Is there any more published information on how this feature works?
 
 -- 
 Ian.
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Mirror Gone

2011-09-27 Thread Mark Musante

On 27 Sep 2011, at 18:29, Edward Ned Harvey wrote:

 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Tony MacDoodle
 
 
 Now:
 mirror-0  ONLINE   0 0 0
 c1t2d0  ONLINE   0 0 0
 c1t3d0  ONLINE   0 0 0
   c1t4d0ONLINE   0 0 0
   c1t5d0ONLINE   0 0 0
 
 There is only one way for this to make sense:  You did not have mirror-1 in
 the first place.  

An easy way to tell is to take a look at the 'zpool history' output for this pool.
What does that show?
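
For example, something like this (the pool name is a placeholder):

  # zpool history tank | egrep 'create|add|attach|detach'

The history log records every zpool command run against the pool, so it will show whether a mirror-1 was ever created or attached.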

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [illumos-Developer] zfs refratio property

2011-06-06 Thread Mark Musante

Minor quibble: compressratio uses a lowercase 'x' in the description text,
whereas the new prop uses an uppercase 'X'.


On 6 Jun 2011, at 21:10, Eric Schrock wrote:

 Webrev has been updated:
 
 http://dev1.illumos.org/~eschrock/cr/zfs-refratio/
 
 - Eric
 
 -- 
 Eric Schrock
 Delphix
 
 275 Middlefield Road, Suite 50
 Menlo Park, CA 94025
 http://www.delphix.com
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Another zfs issue

2011-06-01 Thread Mark Musante

Yeah, this is a known problem. The DTL on the top-level vdev shows an outage, and is
preventing the removal of the spare, even though removing the spare won't make
the outage worse.

Unfortunately, for opensolaris anyway, there is no workaround.

You could try doing a full scrub, replacing any disks that show errors, and 
waiting for the resilver to complete. That may clean up the DTL enough to 
detach the spare.
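
A rough sketch of that sequence, using the pool and spare names from your output below (the replacement disk c4t45d0 is only a placeholder):

  # zpool scrub dbpool
  # zpool status dbpool                  (wait for the scrub to finish)
  # zpool replace dbpool c4t2d0 c4t45d0
  # zpool status dbpool                  (wait for the resilver to complete)
  # zpool detach dbpool c4t44d0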

On 1 Jun 2011, at 20:20, Roy Sigurd Karlsbakk wrote:

 Hi all
 
 I have this pool that has been suffering from some bad backplanes etc. 
 Currently it's showing up ok, but after a resilver, a spare is stuck.
 
  raidz2-5 ONLINE   0 0 4
c4t1d0 ONLINE   0 0 0
c4t2d0 ONLINE   1 0 0
c4t3d0 ONLINE   0 0 0
c4t4d0 ONLINE   0 0 0
spare-4ONLINE   0 0 0
  c4t5d0   ONLINE   0 0 0
  c4t44d0  ONLINE   0 0 0
c4t6d0 ONLINE   0 0 0
c4t7d0 ONLINE   0 0 0
 
 So, the vdev seems OK, and the pool reports two data errors, which is sad but
 not a showstopper. However, trying to detach the spare from that vdev doesn't
 seem too easy:
 
 roy@dmz-backup:~$ sudo zpool detach dbpool c4t44d0
 cannot detach c4t44d0: no valid replicas
 
 iostat -en shows some issues with drives in that pool, but none on the two in 
 the spare mirror
 
0   0   0   0 c4t1d0
0  82 131 213 c4t2d0
0   0   0   0 c4t3d0
0   0   0   0 c4t4d0
0   0   0   0 c4t5d0
0   0   0   0 c4t6d0
0   0   0   0 c4t7d0
0   0   0   0 c4t44d0
 
 Is there a good explanation for why I can't detach this spare from the vdev?
 
 Vennlige hilsener / Best regards
 
 roy
 --
 Roy Sigurd Karlsbakk
 (+47) 97542685
 r...@karlsbakk.net
 http://blogg.karlsbakk.net/
 --
 In all pedagogy it is essential that the curriculum be presented intelligibly.
 It is an elementary imperative for all pedagogues to avoid excessive use of
 idioms of foreign origin. In most cases adequate and relevant synonyms exist
 in Norwegian.
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zfs ignoring spares?

2010-12-05 Thread Mark Musante

On 5 Dec 2010, at 16:06, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:

 Hot spares are dedicated spares in the ZFS world. Until you replace
 the actual bad drives, you will be running in a degraded state. The
 idea is that spares are only used in an emergency. You are degraded
 until your spares are no longer in use. 
 
 --Tim 
 
 Thanks for the clarification. Wouldn't it be nice if ZFS could fail over
 to a spare and then let the replacement disk become the new spare, as is
 done with most commercial hardware RAIDs?

If you use zpool detach to remove the disk that went bad, the spare is 
promoted to a proper member of the pool. Then, when you replace the bad disk, 
you can use zpool add to add it into the pool as a new spare.
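
A hedged sketch of that flow, with made-up pool and device names:

  # zpool detach tank c2t3d0        (drop the failed disk; the spare becomes a full member)
  ... physically replace the bad drive ...
  # zpool add tank spare c2t3d0     (the new disk becomes the new hot spare)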

Admittedly, this is all a manual procedure. It's unclear if you were asking for 
this to be fully automated.


 
 Vennlige hilsener / Best regards 
 
 roy 
 -- 
 Roy Sigurd Karlsbakk 
 (+47) 97542685 
 r...@karlsbakk.net 
 http://blogg.karlsbakk.net/ 
 -- 
 In all pedagogy it is essential that the curriculum be presented intelligibly.
 It is an elementary imperative for all pedagogues to avoid excessive use of
 idioms of foreign origin. In most cases adequate and relevant synonyms exist
 in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZPOOL_CONFIG_IS_HOLE

2010-10-15 Thread Mark Musante
You should only see a HOLE in your config if you removed a slog after having 
added more stripes.  Nothing to do with bad sectors.
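
A hedged illustration of a sequence that produces one (device names are made up):

  # zpool create tank c1t0d0
  # zpool add tank log c1t1d0
  # zpool add tank c1t2d0
  # zpool remove tank c1t1d0

Because top-level vdev IDs have to stay stable, removing the log device leaves a placeholder (HOLE) at its old position in the config.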

On 14 Oct 2010, at 06:27, Matt Keenan wrote:

 Hi,
 
 Can someone shed some light on what this ZPOOL_CONFIG is, exactly?
 At a guess, is it a bad sector of the disk, non-writable, and thus ZFS marks it
 as a hole?
 
 cheers
 
 Matt

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Cant't detach spare device from pool

2010-08-18 Thread Mark Musante
You need to let the resilver complete before you can detach the spare.  This is 
a known problem, CR 6909724.

http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6909724
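
Once zpool status no longer reports a resilver in progress, the detach should go through, e.g. with the names from your output below:

  # zpool status tank               (wait until the resilver has completed)
  # zpool detach tank c16t0d0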



On 18 Aug 2010, at 14:02, Dr. Martin Mundschenk wrote:

 Hi!
 
 I had trouble with my raidz in that some of the block devices were
 not found by the OSOL box the other day, so the spare device was brought in
 automatically.
 
 After fixing the problem, the missing device came back online, but I am 
 unable to detach the spare device, even though all devices are online and 
 functional.
 
 m...@iunis:~# zpool status tank
   pool: tank
  state: ONLINE
 status: One or more devices is currently being resilvered.  The pool will
 continue to function, possibly in a degraded state.
 action: Wait for the resilver to complete.
  scrub: resilver in progress for 1h5m, 1,76% done, 61h12m to go
 config:
 
 NAME   STATE READ WRITE CKSUM
 tank   ONLINE   0 0 0
   raidz1-0 ONLINE   0 0 0
 c9t0d1 ONLINE   0 0 0
 c9t0d3 ONLINE   0 0 0  15K resilvered
 c9t0d0 ONLINE   0 0 0
 spare-3ONLINE   0 0 0
   c9t0d2   ONLINE   0 0 0  37,5K resilvered
   c16t0d0  ONLINE   0 0 0  14,1G resilvered
 cache
   c18t0d0  ONLINE   0 0 0
 spares
   c16t0d0  INUSE currently in use
 
 errors: No known data errors
 
 m...@iunis:~# zpool detach tank c16t0d0
 cannot detach c16t0d0: no valid replicas
 
 How can I solve the problem?
 
 Martin
 
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] How do I Import rpool to an alternate location?

2010-08-16 Thread Mark Musante

On 16 Aug 2010, at 22:30, Robert Hartzell wrote:

 
 cd /mnt ; ls
 bertha export var
 ls bertha
 boot etc
 
 where is the rest of the file systems and data?

By default, root filesystems are not mounted.  Try doing a 'zfs mount bertha/ROOT/snv_134'.
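
A hedged sketch, assuming the pool was imported with an altroot (e.g. 'zpool import -R /mnt bertha'):

  # zfs mount bertha/ROOT/snv_134
  # zfs mount -a                    (pick up any remaining unmounted datasets)

The root BE dataset is typically created with canmount=noauto, which is why it doesn't show up until you mount it explicitly.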
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Splitting root mirror to prep for re-install

2010-08-04 Thread Mark Musante

You can also use the 'zpool split' command and save yourself having to do the
zfs send | zfs recv step - all the data will be preserved.

'zpool split rpool preserve' does essentially everything up to and including
the 'zpool export preserve' command you listed in your original email.  Just
don't try to boot off it.
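
A rough sketch of that path, using the pool names from your mail:

  # zpool split rpool preserve      (detaches one side of the mirror into a new, exported pool)
  ... reinstall ...
  # zpool import preserve           (bring the preserved copy back and copy the data off it)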

On 4 Aug 2010, at 20:58, Edward Ned Harvey wrote:

 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Chris Josephes
 
 I have a host running svn_133 with a root mirror pool that I'd like to
 rebuild with a fresh install on new hardware; but I still have data on
 the pool that I would like to preserve.
 
 So, after rebuilding, you don't want to restore the same OS that you're
 currently running.  But there are some files you'd like to save for after
 you reinstall.  Why not just copy them off somewhere, in a tarball or
 something like that?
 
 
 Given a rpool with disks c7d0s0 and c6d0s0, I think the following
 process will do what I need:
 
 1. Run these commands
 
 # zpool detach rpool c6d0s0
 # zpool create preserve c6d0s0
 
 The only reason you currently have the rpool in a slice (s0) is because
 that's a requirement for booting.  If you aren't planning to boot from the
 device after breaking it off the mirror ... Maybe just use the whole device
 instead of the slice.
 
 zpool create preserve c6d0
 
 
 # zfs create export/home
 # zfs send rpool/export/home | zfs receive preserve/home
 # zfs send (other filesystems)
 # zpool export preserve
 
 These are not right.  It should be something more like this:
 zfs create -o readonly=on preserve/rpool_export_home
 zfs snapshot rpool/export/h...@fubarsnap
 zfs send rpool/export/h...@fubarsnap | zfs receive -F
 preserve/rpool_export_home
 
 And finally
 zpool export preserve
 
 
 2. Build out new host with svn_134, placing new root pool on c6d0s0 (or
 whatever it's called on the new SATA controller)
 
 Um ... I assume that's just a typo ... 
 Yes, install fresh.  No, don't overwrite the existing preserve disk.
 
 For that matter, why break the mirror at all?  Just install the OS again,
 onto a single disk, which implicitly breaks the mirror.  Then when it's all
 done, use zpool import to import the other half of the mirror, which you
 didn't overwrite.
 
 
 3. Run zpool import against preserve, copy over data that should be
 migrated.
 
 4. Rebuild the mirror by destroying the preserve pool and attaching
 c7d0s0 to the rpool mirror.
 
 Am I missing anything?
 
 If you blow away the partition table of the 2nd disk (as I suggested above,
 but now retract) then you'll have to recreate the partition table of the
 second disk.  So you only attach s0 to s0.
 
 After attaching, and resilvering, you'll want to installgrub on the 2nd
 disk, or else it won't be bootable after the first disk fails.  See the ZFS
 Troubleshooting Guide for details.
 

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Zpool import not working

2010-06-12 Thread Mark Musante

I'm guessing that the VirtualBox VM is ignoring write cache flushes.  See this 
for more info:
http://forums.virtualbox.org/viewtopic.php?f=8&t=13661
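
If that turns out to be the cause, the VirtualBox manual describes an extradata key for making the emulated IDE controller honour flushes; something along these lines (the VM name and controller path are placeholders - check the manual for your setup):

  VBoxManage setextradata "MyOsolVM" "VBoxInternal/Devices/piix3ide/0/LUN#[0]/Config/IgnoreFlush" 0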

On 12 Jun, 2010, at 5.30, zfsnoob4 wrote:

 Thanks, that works. But it only works when I do a proper export first.
 
 If I export the pool then I can import with:
 zpool import -d /
 (test files are located in /)
 
 but if I destroy the pool, then I can no longer import it back, even though 
 the files are still there. Is this normal?
 
 
 Thanks for your help.
 -- 
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS and IBM SDD Vpaths

2010-05-29 Thread Mark Musante

Can you find the devices in /dev/rdsk?  I see there is a path in /pseudo at 
least, but the zpool import command only looks in /dev.  One thing you can try 
is doing this:

# mkdir /tmpdev
# ln -s /pseudo/vpat...@1:1 /tmpdev/vpath1a

And then see if 'zpool import -d /tmpdev' finds the pool.


On 29 May, 2010, at 19.53, morris hooten wrote:

 I have 6 ZFS pools, and after rebooting (init 6) the vpath device path names
 have changed for some unknown reason. But I can't detach, remove, and reattach
 to the new device names. ANY HELP, please!
 
 pjde43m01  -  -  -  -  FAULTED  -
 pjde43m02  -  -  -  -  FAULTED  -
 pjde43m03  -  -  -  -  FAULTED  -
 poas43m01  -  -  -  -  FAULTED  -
 poas43m02  -  -  -  -  FAULTED  -
 poas43m03  -  -  -  -  FAULTED  -
 
 
 One pool listed below as example
 
 pool: poas43m01
 state: UNAVAIL
 status: One or more devices could not be opened.  There are insufficient
replicas for the pool to continue functioning.
 action: Attach the missing device and online it using 'zpool online'.
   see: http://www.sun.com/msg/ZFS-8000-3C
 scrub: none requested
 config:
 
NAMESTATE READ WRITE CKSUM
poas43m01   UNAVAIL  0 0 0  insufficient replicas
  vpath4c   UNAVAIL  0 0 0  cannot open
 
 
 
 before
 
 30. vpath1a IBM-2145- cyl 8190 alt 2 hd 64 sec 256
  /pseudo/vpat...@1:1
  31. vpath2a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@2:2
  32. vpath3a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@3:3
  33. vpath4a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@4:4
  34. vpath5a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@5:5
  35. vpath6a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@6:6
  36. vpath7a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@7:7
 
 
 after
 
 30. vpath1a IBM-2145- cyl 8190 alt 2 hd 64 sec 256
  /pseudo/vpat...@1:1
  31. vpath8a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@8:8
  32. vpath9a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@9:9
  33. vpath10a IBM-2145- cyl 13822 alt 2 hd 64 sec 256
  /pseudo/vpat...@10:10
  34. vpath11a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@11:11
  35. vpath12a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@12:12
  36. vpath13a IBM-2145- cyl 27646 alt 2 hd 64 sec 256
  /pseudo/vpat...@13:13
 
 
 
 
 {usbderp...@root} zpool detach poas43m03 vpath2c
 cannot open 'poas43m03': pool is unavailable
 -- 
 This message posted from opensolaris.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool vdev's

2010-05-28 Thread Mark Musante

On 28 May, 2010, at 17.21, Vadim Comanescu wrote:

 In a striped zpool configuration (no redundancy), is each disk regarded as
 an individual vdev, or do all the disks in the stripe represent a single vdev?
 In a raidz configuration I'm aware that every group of raidz disks is
 regarded as a top-level vdev, but I was wondering how it is in the case I
 mentioned earlier. Thanks.

In a stripe config, each disk is considered a top-level vdev.
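
A quick illustration with made-up pool and device names:

  # zpool create stripe c1t0d0 c1t1d0             (two top-level vdevs, one per disk)
  # zpool create rz raidz c1t0d0 c1t1d0 c1t2d0    (one top-level raidz vdev containing three disks)

In zpool status the member disks of a raidz group are indented under it, whereas striped disks appear directly under the pool name.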


Regards,
markm
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs ec2]

2010-04-23 Thread Mark Musante

On 23 Apr, 2010, at 7.06, Phillip Oldham wrote:
 
 I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure 
 already defined. Starting an instance from this image, without attaching the 
 EBS volume, shows the pool structure exists and that the pool state is 
 UNAVAIL (as expected). Upon attaching the EBS volume to the instance the 
 status of the pool changes to ONLINE, the mount-point/directory is 
 accessible and I can write data to the volume.
 
 Now, if I terminate the instance, spin-up a new one, and connect the same 
 (now unattached) EBS volume to this new instance the data is no longer there 
 with the EBS volume showing as blank. 

Could you share with us the zpool commands you are using?


Regards,
markm
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs ec2]

2010-04-23 Thread Mark Musante

On 23 Apr, 2010, at 7.31, Phillip Oldham wrote:

 I'm not actually issuing any when starting up the new instance. None are
 needed; the instance is booted from an image which has the zpool
 configuration stored within, so it simply starts and sees that the devices
 aren't available; they become available after I've attached the EBS device.
 

Forgive my ignorance with EC2/EBS, but why doesn't the instance remember that 
there were EBS volumes attached?  Why aren't they automatically attached prior 
to booting solaris within the instance?  The error output from zpool status 
that you're seeing matches what I would expect if we are attempting to import 
the pool at boot, and the disks aren't present.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Re-attaching zpools after machine termination [amazon ebs ec2]

2010-04-23 Thread Mark Musante

On 23 Apr, 2010, at 8.38, Phillip Oldham wrote:

 The instances are ephemeral; once terminated they cease to exist, as do all 
 their settings. Rebooting an image keeps any EBS volumes attached, but this 
 isn't the case I'm dealing with - it's when the instance terminates
 unexpectedly, for instance if a reboot operation doesn't succeed or if
 there's an issue with the data centre.

OK, I think if this issue can be addressed, it would be by people familiar with
how EC2 and EBS interact.  The steps I see are:

- start a new instance
- attach the EBS volumes to it
- log into the instance and zpool online the disks

I know the last step can be automated with a script inside the instance, but 
I'm not sure about the other two steps.
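
The last step might look something like this from inside the new instance (the pool and device names are guesses):

  # zpool online tank c7d1
  # zpool status tank               (confirm the pool has left the UNAVAIL state)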


Regards,
markm

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss