Re: [zfs-discuss] compressratio vs. dedupratio

2009-12-15 Thread Craig S. Bell
Mike, I believe that ZFS treats runs of zeros as holes in a sparse file, rather 
than as regular data.  So they aren't really present to be counted for 
compressratio.

http://blogs.sun.com/bonwick/entry/seek_hole_and_seek_data
http://mail.opensolaris.org/pipermail/zfs-discuss/2008-April/017565.html
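A quick way to see this locally (a minimal sketch, assuming a scratch dataset
named tank/test with compression enabled; with compression on, all-zero blocks
are stored as holes, so used space and compressratio barely move even though
the file's logical size is large):

# zfs set compression=on tank/test
# dd if=/dev/zero of=/tank/test/zeros bs=1M count=1024
# sync
# ls -l /tank/test/zeros       (logical size: 1 GB)
# du -h /tank/test/zeros       (allocated: close to zero)
# zfs get compressratio,used tank/test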
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-15 Thread Cesare
Hi Cindy,

I downloaded that document and I'll follow the instructions before updating
the host. I just tried the procedure on a different host (one that did not
have the problem I wrote about) and it worked.

I'll follow up with news after I upgrade the host where the problem occurs.

Cesare

On Mon, Dec 14, 2009 at 9:12 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
 Hi Cesare,

 According to our CR 6524163, this problem was fixed in PowerPath 5.0.2, but
 then the problem reoccurred.

 According to the EMC PowerPath Release notes, here:

 www.emc.com/microsites/clariion-support/pdf/300-006-626.pdf

 This problem is fixed in 5.2 SP1.

 I would review the related ZFS information in this doc before proceeding.

 Thanks,

 Cindy

 On 12/14/09 03:53, Cesare wrote:

 On Wed, Dec 9, 2009 at 3:22 PM, Mike Johnston mijoh...@gmail.com wrote:

 Thanks for the info Alexander... I will test this out.  I'm just
 wondering what it's going to see after I install Power Path.  Since each
 drive will have 4 paths, plus the Power Path... after doing a zfs import
 how will I force it to use a specific path?  Thanks again!  Good to know
 that this can be done.

 I had a similar problem in the last few weeks. On my testbed server
 (Solaris 10.x Update 4) I have PowerPath 5.2, connected to two FC
 switches and then to a Clariion CX3.

 Each LUN on the Clariion creates 4 paths to the host. I created 8 LUNs,
 reconfigured Solaris to make them visible to the host, and then tried
 to create a ZFS pool. I encountered a problem when I ran this command:

 --
 # r...@solaris10# zpool status
  pool: tank
  state: ONLINE
  scrub: scrub completed with 0 errors on Mon Dec 14 05:00:01 2009
 config:

        NAME            STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower7a  ONLINE       0     0     0
            emcpower5a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower8a  ONLINE       0     0     0
            emcpower6a  ONLINE       0     0     0

 errors: No known data errors
 r...@solaris10# zpool history
 History for 'tank':
 2009-12-10.20:19:17 zpool create -f tank mirror emcpower7a emcpower5a
 2009-12-11.05:00:01 zpool scrub tank
 2009-12-11.14:28:33 zpool add tank mirror emcpower8a emcpower6a
 2009-12-14.05:00:01 zpool scrub tank

 r...@solaris10# zpool add tank mirror emcpower3a emcpower1a
 internal error: Invalid argument
 Abort (core dumped)
 r...@solaris#
 --

 The next task will be to upgrade PowerPath (from 5.2 to 5.2 SP 2) and then
 retry the command to see if the problem (internal error) disappears.
 Did anybody have a similar problem?

 Cesare




-- 

Mike Ditka  - If God had wanted man to play soccer, he wouldn't have
given us arms. -
http://www.brainyquote.com/quotes/authors/m/mike_ditka.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-15 Thread Darren J Moffat

Cyril Plisko wrote:

On Mon, Dec 14, 2009 at 9:32 PM, Andrey Kuzmin
andrey.v.kuz...@gmail.com wrote:

Right, but 'verify' seems to be 'extreme safety' and thus a rather rare
use case.


Hmm, dunno. I wouldn't set anything but a scratch file system to
dedup=on. Anything of even slight significance gets set to dedup=verify.


Why ?  Is it because you don't believe SHA256 (which is the default 
checksum used when dedup=on is specified) is strong enough ?
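For reference, a minimal sketch of the two policies being discussed (dataset
names are hypothetical, and this assumes a dedup-capable build, snv_128 or
later):

# zfs set dedup=on tank/scratch           (trust the SHA256 checksum alone)
# zfs set dedup=verify tank/important     (byte-compare blocks whose checksums match)
# zfs set dedup=sha256,verify tank/important   (same, with the checksum named explicitly)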


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Something wrong with zfs mount

2009-12-15 Thread Martin Uhl
The dirs blocking the mount are created at import/mount time.
how do you know that??

In the previous example I could reconstruct that using zfs mount.  Just look at 
the last post.
I doubt ZFS removes mount directories.

If you're correct you should have been able to reproduce
the problem by doing a clean shutdown (or an export/import); can you
reproduce it this way??

The server is in a production environment and we cannot afford the necessary 
downtime for that.
Unfortunately the server has lots of datasets which cause import/export times 
of 45 mins.

We import the pool with the -R parameter; might that contribute to the problem?
Perhaps a zfs mount -a bug in conjunction with the -R parameter?

Greetings, Martin
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-15 Thread Kjetil Torgrim Homme
Robert Milkowski mi...@task.gda.pl writes:
 On 13/12/2009 20:51, Steve Radich, BitShop, Inc. wrote:
 Because if you can de-dup anyway why bother to compress THEN check?
 This SEEMS to be the behaviour - i.e. I would suspect many of the
 files I'm writing are dups - however I see high cpu use even though
 on some of the copies I see almost no disk writes.

 First, the checksum is calculated after compression happens.

for some reason I, like Steve, thought the checksum was calculated on
the uncompressed data, but a look in the source confirms you're right,
of course.

thinking about the consequences of changing it, RAID-Z recovery would be
much more CPU intensive if hashing was done on uncompressed data --
every possible combination of the N-1 disks would have to be
decompressed (and most combinations would fail), and *then* the
remaining candidates would be hashed to see if the data is correct.

this would be done on a per recordsize basis, not per stripe, which
means reconstruction would fail if two disk blocks (512 octets) on
different disks and in different stripes go bad.  (doing an exhaustive
search for all possible permutations to handle that case doesn't seem
realistic.)

in addition, hashing becomes slightly more expensive since more data
needs to be hashed.

overall, my guess is that this choice (made before dedup!) will give
worse performance in normal situations in the future, when dedup+lzjb
will be very common, in exchange for faster and more reliable resilver.  in
any case, there is not much to be done about it now.

-- 
Kjetil T. Homme
Redpill Linpro AS - Changing the game

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Gluster Storage Platform 3.0 GA Release

2009-12-15 Thread Harshavardhana
Greetings!

 The Gluster Team is happy to announce the release of Gluster Storage
Platform 3.0. The Gluster Storage Platform is based on the popular open
source clustered file system GlusterFS, integrating the file system, an
operating system layer, a web based management interface, and an easy to use
installer.

 Gluster Storage Platform is an open source clustered storage solution. The
software is a powerful and flexible solution that simplifies the task of
managing unstructured file data whether you have a few terabytes of storage
or multiple petabytes.

 Gluster Storage Platform runs on industry standard hardware from any vendor
and delivers multiple times the scalability and performance of conventional
storage at a fraction of the cost.

 To learn more please check us out at www.gluster.org where you can download
source and binary, read release notes, and engage with the community.

 If you are already using Gluster, please help strengthen our community by
leaving your mark on the "Who is using Gluster" page:

 http://www.gluster.com/community/whoisusing.php

 Happy Hacking

-- 

Gluster Team

--
Harshavardhana
Gluster - http://www.gluster.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] FYI: NAS storage server in OpenSolaris JeOS Prototype

2009-12-15 Thread Rudolf Kutina

Hi All,

NAS storage server in OpenSolaris JeOS Prototype
http://blogs.sun.com/VirtualGuru/entry/nas_storage_server_in_opensolaris

Nice day
Rudolf Kutina
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-15 Thread Cesare
Hi all,

after upgrading PowerPath (from 5.2 to 5.2 SP 2) and retrying the commands
to create the zpool, they executed successfully:

--
r...@solaris10# zpool history
History for 'tank':
2009-12-15.14:37:00 zpool create -f tank mirror emcpower7a emcpower5a
2009-12-15.14:37:20 zpool add tank mirror emcpower8a emcpower6a
2009-12-15.14:37:56 zpool add tank mirror emcpower1a emcpower3a
2009-12-15.14:38:09 zpool add tank mirror emcpower2a emcpower4a
r...@solaris10# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower7a  ONLINE       0     0     0
            emcpower5a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower8a  ONLINE       0     0     0
            emcpower6a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower1a  ONLINE       0     0     0
            emcpower3a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower2a  ONLINE       0     0     0
            emcpower4a  ONLINE       0     0     0

errors: No known data errors
--

Before, the PowerPath version was 5.2.0.GA.b146; now it is 5.2.SP2.b012:

--
r...@solaris10# pkginfo -l EMCpower
   PKGINST:  EMCpower
      NAME:  EMC PowerPath (Patched with 5.2.SP2.b012)
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  5.2.0_b146
   BASEDIR:  /opt
    VENDOR:  EMC Corporation
    PSTAMP:  beavis951018123443
  INSTDATE:  Dec 15 2009 12:53
    STATUS:  completely installed
     FILES:  339 installed pathnames
              42 directories
             123 executables
             199365 blocks used (approx)
--

So SP2 incorporates the fix for PowerPath and ZFS using the pseudo
emcpower devices.

Cesare


On Mon, Dec 14, 2009 at 9:12 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:
 Hi Cesare,

 According to our CR 6524163, this problem was fixed in PowerPath 5.0.2, but
 then the problem reoccurred.

 According to the EMC PowerPath Release notes, here:

 www.emc.com/microsites/clariion-support/pdf/300-006-626.pdf

 This problem is fixed in 5.2 SP1.

 I would review the related ZFS information in this doc before proceeding.

 Thanks,

 Cindy

 On 12/14/09 03:53, Cesare wrote:

 On Wed, Dec 9, 2009 at 3:22 PM, Mike Johnston mijoh...@gmail.com wrote:

 Thanks for the info Alexander... I will test this out.  I'm just
 wondering what it's going to see after I install Power Path.  Since each
 drive will have 4 paths, plus the Power Path... after doing a zfs import
 how will I force it to use a specific path?  Thanks again!  Good to know
 that this can be done.

 I had a similar problem in the last few weeks. On my testbed server
 (Solaris 10.x Update 4) I have PowerPath 5.2, connected to two FC
 switches and then to a Clariion CX3.

 Each LUN on the Clariion creates 4 paths to the host. I created 8 LUNs,
 reconfigured Solaris to make them visible to the host, and then tried
 to create a ZFS pool. I encountered a problem when I ran this command:

 --
 # r...@solaris10# zpool status
  pool: tank
  state: ONLINE
  scrub: scrub completed with 0 errors on Mon Dec 14 05:00:01 2009
 config:

        NAME            STATE     READ WRITE CKSUM
        tank           ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower7a  ONLINE       0     0     0
            emcpower5a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower8a  ONLINE       0     0     0
            emcpower6a  ONLINE       0     0     0

 errors: No known data errors
 r...@solaris10# zpool history
 History for 'tank':
 2009-12-10.20:19:17 zpool create -f tank mirror emcpower7a emcpower5a
 2009-12-11.05:00:01 zpool scrub tank
 2009-12-11.14:28:33 zpool add tank mirror emcpower8a emcpower6a
 2009-12-14.05:00:01 zpool scrub tank

 r...@solaris10# zpool add tank mirror emcpower3a emcpower1a
 internal error: Invalid argument
 Abort (core dumped)
 r...@solaris#
 --

 The next task will be to upgrade PowerPath (from 5.2 to 5.2 SP 2) and then
 retry the command to see if the problem (internal error) disappears.
 Did anybody have a similar problem?

 Cesare




-- 

Pablo Picasso  - Computers are useless. They can only give you
answers. - http://www.brainyquote.com/quotes/authors/p/pablo_picasso.html
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Something wrong with zfs mount

2009-12-15 Thread Gonzalo Siero

Martin,

I think we should continue offline. Anyway, see my comments/answers inline.

Thanks,
Gonzalo.
--

Martin Uhl wrote:


The dirs blocking the mount are created at import/mount time.

how do you know that??

In the previous example I could reconstruct that using zfs mount.  Just look at
the last post.


You said "...the dirs blocking the mount are created at import/mount
time..." but your previous post suggests a different scenario: mount
points created prior to the import and not cleared when doing a umount.
Fixing the umount problem is expensive and will not resolve the
problem you have. I've tested on my lab system, setting a breakpoint in
zfs_umount(), which is called for each FS of a pool when you export it,
but not called for the FSs of an imported pool (other than rootpool)
when you stop your system via /etc/reboot.



I doubt ZFS removes mount directories.
 

It does. With a simple dtrace script I saw that this is done in
zpool_disable_datasets()->remove_mountpoint() when you export the pool.


# zfs list -o name,mountpoint,mounted,canmount -r tank
NAME  MOUNTPOINTMOUNTED  CANMOUNT
tank  /tank yeson
tank/gongui   /gonguinoon
tank/gongui/test  /gongui/test  yeson
# zfs mount tank/gongui
cannot mount '/gongui': directory is not empty
#  dtrace -q -n 'syscall::rmdir:entry{printf("Mountpoint deleted: 
%s\n",stringof(copyinstr(arg0)));ustack();}' -c "zpool export tank"

Mountpoint deleted: /tank

 libc.so.1`rmdir+0x7
 libzfs.so.1`zpool_disable_datasets+0x319
 zpool`zpool_do_export+0x10f
 zpool`main+0x158
 zpool`_start+0x7d
Mountpoint deleted: /gongui/test

 libc.so.1`rmdir+0x7
 libzfs.so.1`zpool_disable_datasets+0x32c
 zpool`zpool_do_export+0x10f
 zpool`main+0x158
 zpool`_start+0x7d

If you're correct you should have been able to reproduce
the problem by doing a clean shutdown (or an export/import); can you
reproduce it this way??
 



The server is in a production environment and we cannot afford the necessary 
downtime for that.
Unfortunately the server has lots of datasets which cause import/export times 
of 45 mins.

We import the pool with the -R parameter; might that contribute to the problem?
Perhaps a zfs mount -a bug in conjunction with the -R parameter?

 

See above. If you export the pool you shouldn't have problems. We should 
study if we can lower the time to import/export the pools but my 
recommendation is to do a proper shutdown.


Thanks,
Gonzalo.


Greetings, Martin
 



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] DeDup and Compression - Reverse Order?

2009-12-15 Thread Andrey Kuzmin
On Tue, Dec 15, 2009 at 3:06 PM, Kjetil Torgrim Homme
kjeti...@linpro.no wrote:
 Robert Milkowski mi...@task.gda.pl writes:
 On 13/12/2009 20:51, Steve Radich, BitShop, Inc. wrote:
 Because if you can de-dup anyway why bother to compress THEN check?
 This SEEMS to be the behaviour - i.e. I would suspect many of the
 files I'm writing are dups - however I see high cpu use even though
 on some of the copies I see almost no disk writes.

 First, the checksum is calculated after compression happens.

 for some reason I, like Steve, thought the checksum was calculated on
 the uncompressed data, but a look in the source confirms you're right,
 of course.

 thinking about the consequences of changing it, RAID-Z recovery would be
 much more CPU intensive if hashing was done on uncompressed data --

I don't quite see how dedupe (based on sha256) and parity (based on
crc32) are related.

Regards,
Andrey

 every possible combination of the N-1 disks would have to be
 decompressed (and most combinations would fail), and *then* the
 remaining candidates would be hashed to see if the data is correct.

 this would be done on a per recordsize basis, not per stripe, which
 means reconstruction would fail if two disk blocks (512 octets) on
 different disks and in different stripes go bad.  (doing an exhaustive
 search for all possible permutations to handle that case doesn't seem
 realistic.)

 in addition, hashing becomes slightly more expensive since more data
 needs to be hashed.

 overall, my guess is that this choice (made before dedup!) will give
 worse performance in normal situations in the future, when dedup+lzjb
 will be very common, in exchange for faster and more reliable resilver.  in
 any case, there is not much to be done about it now.

 --
 Kjetil T. Homme
 Redpill Linpro AS - Changing the game

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] snv_129 dedup panic

2009-12-15 Thread Markus Kovero
Hi, I encountered a panic and spontaneous reboot after canceling a zfs send from 
another server. It took around 2-3 hours to remove the 2 TB of data the server had 
sent, and then:

Dec 15 16:54:05 foo ^Mpanic[cpu2]/thread=ff0916724560:
Dec 15 16:54:05 foo genunix: [ID 683410 kern.notice] BAD TRAP: type=0 (#de 
Divide error) rp=ff003db82910 addr=ff003db82a10
Dec 15 16:54:05 foo unix: [ID 10 kern.notice]
Dec 15 16:54:05 foo unix: [ID 839527 kern.notice] zpool:
Dec 15 16:54:05 foo unix: [ID 753105 kern.notice] #de Divide error
Dec 15 16:54:05 foo unix: [ID 358286 kern.notice] addr=0xff003db82a10
Dec 15 16:54:05 foo unix: [ID 243837 kern.notice] pid=15520, 
pc=0xf794310a, sp=0xff003db82a00, eflags=0x10246
Dec 15 16:54:05 foo unix: [ID 211416 kern.notice] cr0: 
80050033pg,wp,ne,et,mp,pe cr4: 6f8xmme,fxsr,pge,mce,pae,pse,de
Dec 15 16:54:05 foo unix: [ID 624947 kern.notice] cr2: 80a7000
Dec 15 16:54:05 foo unix: [ID 625075 kern.notice] cr3: 4721dc000
Dec 15 16:54:05 foo unix: [ID 625715 kern.notice] cr8: c
Dec 15 16:54:05 foo unix: [ID 10 kern.notice]
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   rdi: ff129712b578 
rsi:  rdx:0
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   rcx:1  
r8:173724e00  r9:0
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   rax:173724e00 
rbx:8 rbp: ff003db82a90
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   r10: afd231db9a85b86e 
r11:  3fc244aaa90 r12:0
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   r13: ff12fed0e9d0 
r14: ff092953d000 r15: ff003db82a10
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   fsb:0 
gsb: ff09128e9000  ds:   4b
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]es:   4b  
fs:0  gs:  1c3
Dec 15 16:54:06 foo unix: [ID 592667 kern.notice]   trp:0 
err:0 rip: f794310a
Dec 15 16:54:06 foo unix: [ID 592667 kern.notice]cs:   30 
rfl:10246 rsp: ff003db82a00
Dec 15 16:54:06 foo unix: [ID 266532 kern.notice]ss:   38
Dec 15 16:54:06 foo unix: [ID 10 kern.notice]
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db827f0 
unix:die+10f ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82900 
unix:trap+1558 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82910 
unix:cmntrap+e6 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82a90 
zfs:ddt_get_dedup_object_stats+152 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82b00 
zfs:spa_config_generate+2d9 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82b90 
zfs:spa_open_common+1c2 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82c00 
zfs:spa_get_stats+50 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82c40 
zfs:zfs_ioc_pool_stats+32 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82cc0 
zfs:zfsdev_ioctl+175 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82d00 
genunix:cdev_ioctl+45 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82d40 
specfs:spec_ioctl+5a ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82dc0 
genunix:fop_ioctl+7b ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82ec0 
genunix:ioctl+18e ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82f10 
unix:brand_sys_syscall32+19d ()
Dec 15 16:54:06 foo unix: [ID 10 kern.notice]
Dec 15 16:54:06 foo genunix: [ID 672855 kern.notice] syncing file systems...
Dec 15 16:54:06 foo genunix: [ID 904073 kern.notice]  done
Dec 15 16:54:07 foo genunix: [ID 111219 kern.notice] dumping to 
/dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
Dec 15 16:55:07 foo genunix: [ID 10 kern.notice]
Dec 15 16:55:07 foo genunix: [ID 665016 kern.notice] ^M 64% done: 1881224 pages 
dumped,
Dec 15 16:55:07 foo genunix: [ID 495082 kern.notice] dump failed: error 28

Is it just me or everlasting Monday again.

Yours
Markus Kovero
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Changing ZFS drive pathing

2009-12-15 Thread Cindy Swearingen

Great news. Thanks for letting us know. Cindy

On 12/15/09 06:48, Cesare wrote:

Hi all,

after upgrading PowerPath (from 5.2 to 5.2 SP 2) and retrying the commands
to create the zpool, they executed successfully:

--
r...@solaris10# zpool history
History for 'tank':
2009-12-15.14:37:00 zpool create -f tank mirror emcpower7a emcpower5a
2009-12-15.14:37:20 zpool add tank mirror emcpower8a emcpower6a
2009-12-15.14:37:56 zpool add tank mirror emcpower1a emcpower3a
2009-12-15.14:38:09 zpool add tank mirror emcpower2a emcpower4a
r...@solaris10# zpool status
  pool: tank
 state: ONLINE
 scrub: none requested
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower7a  ONLINE       0     0     0
            emcpower5a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower8a  ONLINE       0     0     0
            emcpower6a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower1a  ONLINE       0     0     0
            emcpower3a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower2a  ONLINE       0     0     0
            emcpower4a  ONLINE       0     0     0

errors: No known data errors
--

Before, the PowerPath version was 5.2.0.GA.b146; now it is 5.2.SP2.b012:

--
r...@solaris10# pkginfo -l EMCpower
   PKGINST:  EMCpower
      NAME:  EMC PowerPath (Patched with 5.2.SP2.b012)
  CATEGORY:  system
      ARCH:  sparc
   VERSION:  5.2.0_b146
   BASEDIR:  /opt
    VENDOR:  EMC Corporation
    PSTAMP:  beavis951018123443
  INSTDATE:  Dec 15 2009 12:53
    STATUS:  completely installed
     FILES:  339 installed pathnames
              42 directories
             123 executables
             199365 blocks used (approx)
--

So SP2 incorporates the fix for PowerPath and ZFS using the pseudo
emcpower devices.

Cesare


On Mon, Dec 14, 2009 at 9:12 PM, Cindy Swearingen
cindy.swearin...@sun.com wrote:

Hi Cesare,

According to our CR 6524163, this problem was fixed in PowerPath 5.0.2, but
then the problem reoccurred.

According to the EMC PowerPath Release notes, here:

www.emc.com/microsites/clariion-support/pdf/300-006-626.pdf

This problem is fixed in 5.2 SP1.

I would review the related ZFS information in this doc before proceeding.

Thanks,

Cindy

On 12/14/09 03:53, Cesare wrote:

On Wed, Dec 9, 2009 at 3:22 PM, Mike Johnston mijoh...@gmail.com wrote:

Thanks for the info Alexander... I will test this out.  I'm just
wondering what it's going to see after I install Power Path.  Since each
drive will have 4 paths, plus the Power Path... after doing a zfs import
how will I force it to use a specific path?  Thanks again!  Good to know
that this can be done.

I had a similar problem in the last few weeks. On my testbed server
(Solaris 10.x Update 4) I have PowerPath 5.2, connected to two FC
switches and then to a Clariion CX3.

Each LUN on the Clariion creates 4 paths to the host. I created 8 LUNs,
reconfigured Solaris to make them visible to the host, and then tried
to create a ZFS pool. I encountered a problem when I ran this command:

--
# r...@solaris10# zpool status
 pool: tank
 state: ONLINE
 scrub: scrub completed with 0 errors on Mon Dec 14 05:00:01 2009
config:

        NAME            STATE     READ WRITE CKSUM
        tank            ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower7a  ONLINE       0     0     0
            emcpower5a  ONLINE       0     0     0
          mirror        ONLINE       0     0     0
            emcpower8a  ONLINE       0     0     0
            emcpower6a  ONLINE       0     0     0

errors: No known data errors
r...@solaris10# zpool history
History for 'tank':
2009-12-10.20:19:17 zpool create -f tank mirror emcpower7a emcpower5a
2009-12-11.05:00:01 zpool scrub tank
2009-12-11.14:28:33 zpool add tank mirror emcpower8a emcpower6a
2009-12-14.05:00:01 zpool scrub tank

r...@solaris10# zpool add tank mirror emcpower3a emcpower1a
internal error: Invalid argument
Abort (core dumped)
r...@solaris#
--

The next task will be to upgrade PowerPath (from 5.2 to 5.2 SP 2) and then
retry the command to see if the problem (internal error) disappears.
Did anybody have a similar problem?

Cesare





___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Something wrong with zfs mount

2009-12-15 Thread Martin Uhl
 We import the pool with the -R parameter; might that contribute to the 
 problem? Perhaps a zfs mount -a bug in conjunction with the -R parameter?

This bug report seems to confirm this:
http://bugs.opensolaris.org/view_bug.do?bug_id=6612218

Note that the /zz directory mentioned in the bug report does not exist before 
the zfs set mountpoint command.
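
For context, a minimal sketch of the kind of altroot import we do (pool name
and path are hypothetical); with -R, every dataset's mountpoint is re-rooted
under the given directory, which is where the interaction with zfs mount -a
seems to matter:

# zpool import -R /mnt/alt tank
# zfs get -r mountpoint,mounted tank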

Greetings, Martin
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] all zfs snapshot made by TimeSlider destroyed after upgrading to b129

2009-12-15 Thread Cindy Swearingen

Hi--

I haven't had a chance to reproduce this problem, but Niall's heads-up
message says that default schedules that include frequent still work:

http://mail.opensolaris.org/pipermail/zfs-auto-snapshot/2009-November/000199.html

I included a snippet of his instructions below.

If this doesn't help, I'll see if Niall can comment.

Thanks,

Cindy

*

For those who want to use time-slider without going through the GUI, simply
enable/configure (or create) the auto-snapshot instances you need, then
enable the time-slider SMF service. time-slider will pick up the enabled
auto-snapshot instances and schedule snapshots for them.

For folks who prefer to continue using zfs-auto-snapshot, you will need to
remove SUNWgnome-time-slider and install the existing zfs-auto-snapshot
packages instead.
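
A minimal sketch of that CLI route (assuming the service FMRIs shipped with
b129; adjust the instance names to the schedules you actually want):

# svcadm enable svc:/system/filesystem/zfs/auto-snapshot:frequent
# svcadm enable svc:/system/filesystem/zfs/auto-snapshot:daily
# svcadm enable svc:/application/time-slider:default
# svcs | egrep 'auto-snapshot|time-slider'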



On 12/12/09 11:05, Roman Ivanov wrote:

Am I missing something?

I have had monthly, weekly, daily, hourly, and frequent snapshots since March 2009.
Now with the new b129 I lost all of them.

From zpool history:


2009-12-12.20:30:02 zfs destroy -r 
rpool/ROOT/b...@zfs-auto-snap:weekly-2009-11-26-09:28
2009-12-12.20:30:03 zfs destroy -r 
rpool/ROOT/b...@zfs-auto-snap:weekly-2009-11-18-23:37
2009-12-12.20:30:04 zfs destroy -r 
rpool/ROOT/b...@zfs-auto-snap:monthly-2009-10-17-20:32
2009-12-12.20:30:04 zfs destroy -r 
rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-12-19:47
2009-12-12.20:30:05 zfs destroy -r 
rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-11-15:59
2009-12-12.20:30:05 zfs destroy -r 
rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-11-14:54
2009-12-12.20:30:06 zfs destroy -r 
rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-11-13:54
2009-12-12.20:30:07 zfs destroy -r 
rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-11-12:54
2009-12-12.20:30:07 zfs destroy -r 
rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-11-11:54
.
2009-12-12.20:30:43 zfs destroy -r 
rpool/rixxx...@zfs-auto-snap:monthly-2009-06-16-08:15
2009-12-12.20:30:44 zfs destroy -r 
rpool/rixxx...@zfs-auto-snap:monthly-2009-05-16-11:52
2009-12-12.20:30:44 zfs destroy -r 
rpool/rixxx...@zfs-auto-snap:monthly-2009-04-16-08:06
2009-12-12.20:30:46 zfs destroy -r 
rpool/rixxx...@zfs-auto-snap:monthly-2009-03-16-18:55

Current zfs list -t all:
NAME                                                       USED  AVAIL  REFER  MOUNTPOINT
rpool                                                     54,3G  83,5G  63,5K  /rpool
rpool/ROOT                                                17,1G  83,5G    18K  legacy
rpool/ROOT/b128a                                          28,5M  83,5G  9,99G  legacy
rpool/ROOT/b1...@zfs-auto-snap:frequent-2009-12-12-20:17  9,70M      -  9,99G  -
rpool/ROOT/b129                                           17,1G  83,5G  10,2G  legacy
rpool/ROOT/b...@2009-09-04-11:28:13                       3,74G      -  10,0G  -
rpool/ROOT/b...@zfs-auto-snap:weekly-2009-12-03-14:59     1,25G      -  10,2G  -
rpool/ROOT/b...@zfs-auto-snap:weekly-2009-12-10-14:59      550M      -  10,4G  -
rpool/ROOT/b...@2009-12-12-17:11:35                       29,9M      -  10,4G  -
rpool/ROOT/b...@zfs-auto-snap:hourly-2009-12-12-21:00         0      -  10,2G  -
rpool/ROOT/b...@zfs-auto-snap:-2009-12-12-21:00               0      -  10,2G  -
rpool/dump                                                1023M  83,5G  1023M  -
rpool/rixx                                                35,2G  83,5G  34,9G  /export/home/rixx
rpool/rixxx...@zfs-auto-snap:weekly-2009-12-03-14:59       190M      -  31,8G  -
rpool/rixxx...@zfs-auto-snap:weekly-2009-12-10-14:59       116M      -  34,9G  -
rpool/rixxx...@zfs-auto-snap:-2009-12-12-21:00            2,29M      -  34,9G  -
rpool/swap                                                1023M  84,3G   275M  -

The latest snapshot does not have the word frequent in it. Moreover, the hourly 
snapshot died right after it was born:
2009-12-12.21:00:02 zfs snapshot -r 
rpool/rixxx...@zfs-auto-snap:hourly-2009-12-12-21:00
2009-12-12.21:00:02 zfs destroy -r 
rpool/rixxx...@zfs-auto-snap:hourly-2009-12-12-21:00
2009-12-12.21:00:03 zfs destroy -r 
rpool/rixxx...@zfs-auto-snap:-2009-12-12-20:45
2009-12-12.21:00:03 zfs snapshot -r 
rpool/rixxx...@zfs-auto-snap:-2009-12-12-21:00

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] raid-z as a boot pool

2009-12-15 Thread Luca Morettoni

As reported here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsbootFAQ

we can't boot from a pool with raidz; any plan to have this feature?
--
Luca Morettoni luca(AT)morettoni.net | OpenSolaris SCA #OS0344
Web/BLOG: http://www.morettoni.net/ | http://twitter.com/morettoni
jugUmbria founder: https://jugUmbria.dev.java.net/
ITL-OSUG leader: http://hub.opensolaris.org/bin/view/User+Group+itl-osug
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] raid-z as a boot pool

2009-12-15 Thread Lori Alt

On 12/15/09 09:26, Luca Morettoni wrote:

As reported here:
http://hub.opensolaris.org/bin/view/Community+Group+zfs/zfsbootFAQ

we can't boot from a pool with raidz; any plan to have this feature?

At this time, there is no scheduled availability for raidz boot.  It's
on the list of possible enhancements, but not yet under development.


Lori
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_129 dedup panic

2009-12-15 Thread Cindy Swearingen

Hi Markus,

We're checking your panic below against a similar problem reported very
recently. The bug is filed and if it's the same problem, a fix should be
available soon.

We'll be in touch.

Cindy

On 12/15/09 08:07, Markus Kovero wrote:

Hi, I encountered a panic and spontaneous reboot after canceling a zfs send from 
another server. It took around 2-3 hours to remove the 2 TB of data the server had 
sent, and then:

Dec 15 16:54:05 foo ^Mpanic[cpu2]/thread=ff0916724560:
Dec 15 16:54:05 foo genunix: [ID 683410 kern.notice] BAD TRAP: type=0 (#de 
Divide error) rp=ff003db82910 addr=ff003db82a10
Dec 15 16:54:05 foo unix: [ID 10 kern.notice]
Dec 15 16:54:05 foo unix: [ID 839527 kern.notice] zpool:
Dec 15 16:54:05 foo unix: [ID 753105 kern.notice] #de Divide error
Dec 15 16:54:05 foo unix: [ID 358286 kern.notice] addr=0xff003db82a10
Dec 15 16:54:05 foo unix: [ID 243837 kern.notice] pid=15520, 
pc=0xf794310a, sp=0xff003db82a00, eflags=0x10246
Dec 15 16:54:05 foo unix: [ID 211416 kern.notice] cr0: 80050033pg,wp,ne,et,mp,pe 
cr4: 6f8xmme,fxsr,pge,mce,pae,pse,de
Dec 15 16:54:05 foo unix: [ID 624947 kern.notice] cr2: 80a7000
Dec 15 16:54:05 foo unix: [ID 625075 kern.notice] cr3: 4721dc000
Dec 15 16:54:05 foo unix: [ID 625715 kern.notice] cr8: c
Dec 15 16:54:05 foo unix: [ID 10 kern.notice]
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   rdi: ff129712b578 
rsi:  rdx:0
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   rcx:1  
r8:173724e00  r9:0
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   rax:173724e00 
rbx:8 rbp: ff003db82a90
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   r10: afd231db9a85b86e 
r11:  3fc244aaa90 r12:0
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   r13: ff12fed0e9d0 
r14: ff092953d000 r15: ff003db82a10
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   fsb:0 
gsb: ff09128e9000  ds:   4b
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]es:   4b  
fs:0  gs:  1c3
Dec 15 16:54:06 foo unix: [ID 592667 kern.notice]   trp:0 
err:0 rip: f794310a
Dec 15 16:54:06 foo unix: [ID 592667 kern.notice]cs:   30 
rfl:10246 rsp: ff003db82a00
Dec 15 16:54:06 foo unix: [ID 266532 kern.notice]ss:   38
Dec 15 16:54:06 foo unix: [ID 10 kern.notice]
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db827f0 
unix:die+10f ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82900 
unix:trap+1558 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82910 
unix:cmntrap+e6 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82a90 
zfs:ddt_get_dedup_object_stats+152 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82b00 
zfs:spa_config_generate+2d9 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82b90 
zfs:spa_open_common+1c2 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82c00 
zfs:spa_get_stats+50 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82c40 
zfs:zfs_ioc_pool_stats+32 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82cc0 
zfs:zfsdev_ioctl+175 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82d00 
genunix:cdev_ioctl+45 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82d40 
specfs:spec_ioctl+5a ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82dc0 
genunix:fop_ioctl+7b ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82ec0 
genunix:ioctl+18e ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82f10 
unix:brand_sys_syscall32+19d ()
Dec 15 16:54:06 foo unix: [ID 10 kern.notice]
Dec 15 16:54:06 foo genunix: [ID 672855 kern.notice] syncing file systems...
Dec 15 16:54:06 foo genunix: [ID 904073 kern.notice]  done
Dec 15 16:54:07 foo genunix: [ID 111219 kern.notice] dumping to 
/dev/zvol/dsk/rpool/dump, offset 65536, content: kernel
Dec 15 16:55:07 foo genunix: [ID 10 kern.notice]
Dec 15 16:55:07 foo genunix: [ID 665016 kern.notice] ^M 64% done: 1881224 pages 
dumped,
Dec 15 16:55:07 foo genunix: [ID 495082 kern.notice] dump failed: error 28

Is it just me or everlasting Monday again.

Yours
Markus Kovero
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Convert from FBSD

2009-12-15 Thread Allen
I would like to load OpenSolaris on my file server.  I have previously loaded 
FBSD using ZFS as the storage file system.  Will OpenSolaris be able to import 
the pool and mount the file system created on FBSD, or will I have to recreate 
the file system?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Convert from FBSD

2009-12-15 Thread Ed Jobs
On Tuesday 15 December 2009 20:47, Allen wrote:
 I would like to load OpenSolaris on my file server.  I have previously
 loaded FBSD using ZFS as the storage file system.  Will OpenSolaris be
 able to import the pool and mount the file system created on FBSD, or
 will I have to recreate the file system?
 
I have tried the exact same thing on a VM and it works perfectly.
A word of caution, though: if you upgrade the zpools/zfs to a newer version,
FBSD will probably be unable to read them again if you want to revert to BSD.
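
A minimal sketch of checking the on-disk versions without upgrading them
(pool name is hypothetical; run with no arguments, zpool upgrade and zfs
upgrade only report, they don't change anything):

# zpool upgrade               (lists pools not at the current version)
# zpool get version tank
# zfs upgrade                 (lists datasets not at the current version)
# zfs get version tank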

-- 
Real programmers don't document. If it was hard to write, it should be hard to 
understand.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-15 Thread Henrik Johansson
Hello,

On Dec 15, 2009, at 8:02 AM, Giridhar K R wrote:

 Hi,
 Created a zpool with 64k recordsize and enabled dedupe on it.
 zpool create -O recordsize=64k TestPool device1
 zfs set dedup=on TestPool
 
 I copied files onto this pool over nfs from a windows client.
 
 Here is the output of zpool list
 Prompt:~# zpool list
 NAME  SIZE  ALLOC   FREECAP  DEDUP  HEALTH  ALTROOT
 TestPool   696G  19.1G   677G 2%  1.13x  ONLINE  -
 
 When I ran a dir /s command on the share from a Windows client cmd, I see 
 the file size as 51,193,782,290 bytes. The alloc size reported by zpool, along 
 with the DEDUP ratio of 1.13x, does not add up to 51,193,782,290 bytes.
 
 According to the DEDUP (Dedupe ratio) the amount of data copied is 21.58G 
 (19.1G * 1.13) 

Are you sure this problem is related to ZFS, and not a Windows, link, or CIFS issue? 
Have you looked at the filesystem from the OpenSolaris host locally? Are you sure 
there are no links in the filesystems that the Windows client also counts? 
Henrik
http://sparcv9.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS Dedupe reporting incorrect savings

2009-12-15 Thread Giridhar K R
As I noted above after editing the initial post, it's the same locally too.

I found that ls -l on the zpool also reports 51,193,782,290 bytes.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snv_129 dedup panic

2009-12-15 Thread Cindy Swearingen

Hi Markus,

CR 6909931 is filed to cover this problem.

We'll let you know when a fix is putback.

Thanks,

Cindy

On 12/15/09 10:24, Cindy Swearingen wrote:

Hi Markus,

We're checking your panic below against a similar problem reported very
recently. The bug is filed and if it's the same problem, a fix should be
available soon.

We'll be in touch.

Cindy

On 12/15/09 08:07, Markus Kovero wrote:
Hi, I encountered a panic and spontaneous reboot after canceling a zfs 
send from another server. It took around 2-3 hours to remove the 2 TB of 
data the server had sent, and then:


Dec 15 16:54:05 foo ^Mpanic[cpu2]/thread=ff0916724560:
Dec 15 16:54:05 foo genunix: [ID 683410 kern.notice] BAD TRAP: type=0 
(#de Divide error) rp=ff003db82910 addr=ff003db82a10

Dec 15 16:54:05 foo unix: [ID 10 kern.notice]
Dec 15 16:54:05 foo unix: [ID 839527 kern.notice] zpool:
Dec 15 16:54:05 foo unix: [ID 753105 kern.notice] #de Divide error
Dec 15 16:54:05 foo unix: [ID 358286 kern.notice] addr=0xff003db82a10
Dec 15 16:54:05 foo unix: [ID 243837 kern.notice] pid=15520, 
pc=0xf794310a, sp=0xff003db82a00, eflags=0x10246
Dec 15 16:54:05 foo unix: [ID 211416 kern.notice] cr0: 
80050033pg,wp,ne,et,mp,pe cr4: 6f8xmme,fxsr,pge,mce,pae,pse,de

Dec 15 16:54:05 foo unix: [ID 624947 kern.notice] cr2: 80a7000
Dec 15 16:54:05 foo unix: [ID 625075 kern.notice] cr3: 4721dc000
Dec 15 16:54:05 foo unix: [ID 625715 kern.notice] cr8: c
Dec 15 16:54:05 foo unix: [ID 10 kern.notice]
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   rdi: 
ff129712b578 rsi:  rdx:0
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   
rcx:1  r8:173724e00  r9:0
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   rax:
173724e00 rbx:8 rbp: ff003db82a90
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   r10: 
afd231db9a85b86e r11:  3fc244aaa90 r12:0
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   r13: 
ff12fed0e9d0 r14: ff092953d000 r15: ff003db82a10
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]   
fsb:0 gsb: ff09128e9000  ds:   4b
Dec 15 16:54:05 foo unix: [ID 592667 kern.notice]
es:   4b  fs:0  gs:  1c3
Dec 15 16:54:06 foo unix: [ID 592667 kern.notice]   
trp:0 err:0 rip: f794310a
Dec 15 16:54:06 foo unix: [ID 592667 kern.notice]
cs:   30 rfl:10246 rsp: ff003db82a00
Dec 15 16:54:06 foo unix: [ID 266532 kern.notice]
ss:   38

Dec 15 16:54:06 foo unix: [ID 10 kern.notice]
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db827f0 
unix:die+10f ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82900 
unix:trap+1558 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82910 
unix:cmntrap+e6 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82a90 
zfs:ddt_get_dedup_object_stats+152 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82b00 
zfs:spa_config_generate+2d9 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82b90 
zfs:spa_open_common+1c2 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82c00 
zfs:spa_get_stats+50 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82c40 
zfs:zfs_ioc_pool_stats+32 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82cc0 
zfs:zfsdev_ioctl+175 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82d00 
genunix:cdev_ioctl+45 ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82d40 
specfs:spec_ioctl+5a ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82dc0 
genunix:fop_ioctl+7b ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82ec0 
genunix:ioctl+18e ()
Dec 15 16:54:06 foo genunix: [ID 655072 kern.notice] ff003db82f10 
unix:brand_sys_syscall32+19d ()

Dec 15 16:54:06 foo unix: [ID 10 kern.notice]
Dec 15 16:54:06 foo genunix: [ID 672855 kern.notice] syncing file 
systems...

Dec 15 16:54:06 foo genunix: [ID 904073 kern.notice]  done
Dec 15 16:54:07 foo genunix: [ID 111219 kern.notice] dumping to 
/dev/zvol/dsk/rpool/dump, offset 65536, content: kernel

Dec 15 16:55:07 foo genunix: [ID 10 kern.notice]
Dec 15 16:55:07 foo genunix: [ID 665016 kern.notice] ^M 64% done: 
1881224 pages dumped,
Dec 15 16:55:07 foo genunix: [ID 495082 kern.notice] dump failed: 
error 28


Is it just me or everlasting Monday again.

Yours
Markus Kovero
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Space not freed?

2009-12-15 Thread Cindy Swearingen

I do not see this problem in build 129:

# mkfile 10g /export1/file123
# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
export1 10.3G  33.0G  10.3G  /export1
export1/cindys21K  33.0G21K  /export1/cindys
# rm /export1/file123
.
.
.
# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
export1  285M  33.1G   282M  /export1
export1/cindys21K  33.1G21K  /export1/cindys


CR 6792701 was fixed in build 118.

Cindy


On 12/14/09 08:08, Henrik Johansson wrote:

Hello,

On 14 dec 2009, at 14.16, Markus Kovero markus.kov...@nebula.fi wrote:


Hi, if someone running 129 could try this out: turn off compression in 
your pool, mkfile 10g /pool/file123, see the used space, and then remove 
the file and see if it makes the used space available again. I'm having 
trouble with this; it reminds me of a similar bug that occurred in the 111 release.


I filed a bug about a year ago on a similar issue, bug ID 6792701, but it 
should have been fixed in snv_118.


Regards

Henrik
http://sparcv9.blogspot.com




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] all zfs snapshot made by TimeSlider destroyed after upgrading to b129

2009-12-15 Thread Daniel Carosone
None of these look like the issue either.  With 128, I did have to edit the 
code to avoid the month rollover error, and add the missing dependency 
dbus-python26.

I think I have a new install that went to 129 without having auto snapshots 
enabled yet.  When I can get to that machine later, I will enable them there 
and see whether the same fault is apparent, in case it's some kind of 
compatibility problem with older state.  Also not much help, sorry..  I don't 
have an opportunity to spend time digging into it much further just now.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Convert from FBSD

2009-12-15 Thread Allen
Thanks for letting me know.  I plan on attempting in a couple of weeks.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] force 4k writes?

2009-12-15 Thread Bill Sprouse
This is most likely a naive question on my part.  If recordsize is set
to 4k (or a multiple of 4k), will ZFS ever write a record that is less
than 4k or not a multiple of 4k?  This includes metadata.  Does
compression have any effect on this?


thanks for the help,
bill
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool fragmentation issues?

2009-12-15 Thread Bill Sommerfeld
On Tue, 2009-12-15 at 17:28 -0800, Bill Sprouse wrote:
 After  
 running for a while (couple of months) the zpool seems to get  
 fragmented, backups take 72 hours and a scrub takes about 180  
 hours. 

Are there periodic snapshots being created in this pool?  

Can they run with atime turned off?

(file tree walks performed by backups will update the atime of all
directories; this will generate extra write traffic and also cause
snapshots to diverge from their parents and take longer to scrub).
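
A minimal sketch of the atime suggestion (dataset name is hypothetical):

# zfs set atime=off thumper/mailstore
# zfs get atime thumper/mailstore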

- Bill

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] force 4k writes?

2009-12-15 Thread Richard Elling

On Dec 15, 2009, at 5:31 PM, Bill Sprouse wrote:

This is most likely a naive question on my part.  If recordsize is set
to 4k (or a multiple of 4k), will ZFS ever write a record that is less
than 4k or not a multiple of 4k?

Yes.  The recordsize is the upper limit for a file record.

This includes metadata.

Yes.  Metadata is compressed and seems to usually be one block.

Does compression have any effect on this?

Yes. 4KB is the minimum size that can be compressed for regular data.

NB. Physical writes may be larger because they are coalesced.  But if
you are worried about recordsize, then you are implicitly worried about
reads.
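
A hedged sketch for seeing this yourself (pool/dataset names are hypothetical;
zdb output format varies between builds):

# zfs create -o recordsize=4k tank/fixed
# echo "tiny" > /tank/fixed/small ; sync
# zdb -dddd tank/fixed          (the file's L0 data block is sized to the
                                 data, well under the 4k recordsize)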
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool fragmentation issues?

2009-12-15 Thread Brent Jones
On Tue, Dec 15, 2009 at 5:28 PM, Bill Sprouse bill.spro...@sun.com wrote:
 Hi Everyone,

 I hope this is the right forum for this question.  A customer is using a
 Thumper as an NFS file server to provide the mail store for multiple email
 servers (Dovecot).  They find that when a zpool is freshly created and
 populated with mail boxes, even to the extent of 80-90% capacity,
 performance is ok for the users, backups and scrubs take a few hours (4TB of
 data). There are around 100 file systems.  After running for a while (couple
 of months) the zpool seems to get fragmented, backups take 72 hours and a
 scrub takes about 180 hours.  They are running mirrors with about 5TB usable
 per pool (500GB disks).  Being a mail store, the writes and reads are small
 and random.  Record size has been set to 8k (improved performance
 dramatically).  The backup application is Amanda.  Once backups become too
 tedious, the remedy is to replicate the pool and start over.  Things get
 fast again for a while.

 Is this expected behavior given the application (email - small, random
 writes/reads)?  Are there recommendations for system/ZFS/NFS configurations
 to improve this sort of thing?  Are there best practices for structuring
 backups to avoid a directory walk?

 Thanks,
 bill
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Any reason in particular they chose to use Dovecot with the old Mbox format?
Mbox has been proven many times over to be painfully slow when the
files get larger, and in this day and age, I can't imagine anyone
having smaller than a 50MB mailbox. We have about 30,000 e-mail users
on various systems, and it seems the average size these days is
approaching close to a GB. Though Dovecot has done a lot to improve
the performance of Mbox mailboxes, Maildir might be a better fit for
your system.

I wonder if the soon-to-be-released block/parity rewrite tool will
freshen up a pool that's heavily fragmented, without having to redo
the pools.

-- 
Brent Jones
br...@servuhome.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] iscsi on 2 hosts for mutual backup

2009-12-15 Thread Frank Cusack

I'm considering setting up a poor man's cluster.  The hardware I'd
like to use for some critical services is especially attractive for
price/space/performance reasons, however it only has a single power
supply.  I'm using S10 U8 and can't migrate to OpenSolaris.

It's fine if a server dies (ie, power supply failure) and I have
to manually get the service running on the other server.  But what's
not ok is to lose data from the last transaction on the primary
server.  For this reason I'm reluctant to do a zfs send | zfs recv
setup (besides the bad vibes I get from such a plebeian method anyway).

I was thinking that rather than what I'd normally do which is mirror
two data drives locally, I could have one drive local and use one
drive from the partner server shared via iSCSI.  Comments?

If that's feasible and sane, then on the one hand it seems attractive
to use a zfs backing store, but I'd have a zfs filesystem on top of
the zfs backing store, which doesn't seem helpful and I'm guessing
would reduce the total space available, so I'd then have to not use
the whole local disk.  So I think I would want to use a raw backing
store, yes?
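
A rough sketch of that layout on S10 (device names, IPs, and the target name
are hypothetical; the second zpool device is whatever name the iSCSI LUN gets
locally):

On the partner server, export a raw slice as an iSCSI target:
# iscsitadm create target -b /dev/rdsk/c1t1d0s2 partner-disk

On this server, discover the LUN and mirror it with the local disk:
# iscsiadm add discovery-address 192.168.1.2
# iscsiadm modify discovery --sendtargets enable
# devfsadm -i iscsi
# zpool create datapool mirror c1t0d0 c4tXXXXXXXXd0   (iSCSI LUN as seen in format(1M))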

Performance is not a concern.

If no one is throwing red flags yet, then to extend the idea further
maybe I would move the redundant server offsite (I will have metro
ethernet connectivity) and do iSCSI over the WAN and get DR for free.

-frank
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss