[zfs-discuss] convert output zfs diff to something human readable

2010-12-07 Thread Yuri Vorobyev

Hello.

Is it possible to convert the octal escapes in zfs diff output to 
something human readable?

Maybe with iconv?

Please see the screenshot: http://i.imgur.com/bHhXV.png
I created a file with a Russian name there. The OS is Solaris 11 Express.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Problem with a failed replace.

2010-12-07 Thread Mark J Musante

On Mon, 6 Dec 2010, Curtis Schiewek wrote:


Hi Mark,

I've tried running zpool attach media ad24 ad12 (ad12 being the new 
disk) and I get no response.  I tried leaving the command running for an 
extended period of time and nothing happens.


What version of Solaris are you running?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] device retired?

2010-12-07 Thread Roy Sigurd Karlsbakk
hi all

I had a 7-vdev RAID on some drives, and after rearranging them a bit, I booted 
up once more and tried to create a new, albeit similar, zpool on them. This 
gave me an error message: c8t35d0 not available. The disk is there and is visible 
from the controller BIOS etc., so I tried booting from the OI install CD. That 
could see the drive without problems, so I created the zpool, pbpool. This 
worked well; I rebooted into OI and imported the pool (with -f, since I forgot to 
export it), which also worked well, but upon bootup I get this message:

Dec  7 17:00:16 prv-backup genunix: [ID 751201 kern.notice] NOTICE: One or more I/O devices have been retired

c8t35d0 (now a spare) is now UNAVAIL (http://pastebin.com/mBBZL4BF) and I can't 
understand why. The live CD sees it, and so does the controller. cfgadm on the 
installed OI does _not_ see it, though. I could possibly reinstall the whole box, 
but if this happens again, I really want to know how to deal with it without 
having to reinstall everything.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of idioms of 
foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [OpenIndiana-discuss] device retired?

2010-12-07 Thread Roy Sigurd Karlsbakk
- Original Message -
 hi all
 
 I had a 7-VDEV raid on some drives, and after rearranging them a bit
 <snip/>

11-vdev, even, but the problem was that FMA had stepped in and decided something was 
wrong with c8t35d0. To fix this, I did:

# fmadm repaired 
hc://:product-id=LSI-CORP-SAS2X28:server-id=:chassis-id=50030480009a4c7f:serial=WD-WMAY00247950:part=WDC-WD2001FASS-00W2B0:revision=05.01D05/ses-enclosure=3/bay=11/disk=0
# zpool remove pbpool c8t35d0
# zpool add pbpool spare c8t35d0

Done - works
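
(A note for anyone hitting the same thing - a sketch, not taken from this 
thread: the long FMRI string passed to fmadm repaired can normally be read 
out of the fault manager's own listing of faulted/retired resources, e.g.

# fmadm faulty    # lists faulted resources and their FMRIs
# fmdump -v       # shows the fault events that led to the retirement

and the FMRI shown there is what gets handed to fmadm repaired.)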

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of idioms of 
foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [OpenIndiana-discuss] iops...

2010-12-07 Thread Roy Sigurd Karlsbakk
 Bear a few things in mind:
 
 iops is not iops.
 <snip/>

I am totally aware of these differences, but it seems some people think RAIDz 
is nonsense unless you don't need speed at all. My testing shows (so far) that 
the speed is quite good, far better than single drives. Also, as Eric said, 
those speeds are for random I/O. I doubt there is very much out there that is 
truly random I/O except perhaps databases, but then, I would never use 
raid5/raidz for a DB unless at gunpoint.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It is 
an elementary imperative for all pedagogues to avoid excessive use of idioms of 
foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] snaps lost in space?

2010-12-07 Thread Joost Mulders

I was told that this could be caused by
 http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6792701
 Removing large holey file does not free space

However, attempts to verify the cause were fruitless as zdb dumps core:
root@onix# zdb -ddd p0
...
  partial [2473161,2473169) length 8
  outage [0,18446744073709551615) length 18446744073709551615

/dev/dsk/c9t1d0s0 [DTL-required]
Dataset p0/tmp [ZPL], ID 112, cr_txg 6724, 40.6M, 1387 objects

ZIL header: claim_txg 0, claim_blk_seq 0, claim_lr_seq 0 replay_seq 
0, flags 0x0

Memory fault(coredump)
root@onix#

I transferred the contents of p0 to p1 via send/receive, and I can tell 
that the p1 pool does *not* have the ghost allocation; zdb -ddd p1 
does its thing *without* dumping core.


To summarize: there are/were bugs that cause(d) ghost allocations, and 
zfs send/receive is a method to remove them.
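
(For completeness, a minimal sketch of that kind of migration - the snapshot 
name @migrate is only an example, and the target pool must already exist:

# zfs snapshot -r p0@migrate
# zfs send -R p0@migrate | zfs receive -F p1

-R sends the dataset together with all descendant filesystems, snapshots and 
properties; -F allows the receive to roll back/overwrite what is already on 
the target.)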


Best regards, Joost

On 6-12-2010 13:09, Joost Mulders wrote:

Hi,

I've output of space allocation which I can't explain. I hope someone
can point me at the right direction.

The allocation of my home filesystem looks like this:

joost@onix$ zfs list -o space p0/home
NAME     AVAIL  USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
p0/home  31.0G  156G     86.7G   69.7G              0          0

This tells me that *86.7G* is used by *snapshots* of this filesystem.
However, when I look at the space allocation of the individual snapshots, I
can't find that 86.7G anywhere!

joost@onix$ zfs list -t snapshot -o space | egrep 'NAME|^p0\/home'
NAME         AVAIL   USED  USEDSNAP  USEDDS  USEDREFRESERV  USEDCHILD
p0/home@s1       -  62.7M         -       -              -          -
p0/home@s2       -  53.1M         -       -              -          -
p0/home@s3       -  34.1M         -       -              -          -
p0/home@s4       -   277M         -       -              -          -
p0/home@s5       -  2.21G         -       -              -          -
p0/home@s6       -   175M         -       -              -          -
p0/home@s7       -  46.1M         -       -              -          -
p0/home@s8       -  47.6M         -       -              -          -
p0/home@s9       -  43.0M         -       -              -          -
p0/home@s10      -  64.1M         -       -              -          -
p0/home@s11      -   563M         -       -              -          -
p0/home@s12      -  76.6M         -       -              -          -

The sum of the USED column is only about 3.6G, so the question is: what
is the 86.7G of USEDSNAP allocated to? Ghost snapshots?

This is with zpool version 22. The zpool was used for a year or so on
onnv-129. I recently upgraded the host to build 151a, but I haven't
upgraded the pool yet.

Any pointers are appreciated!

Joost


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [OpenIndiana-discuss] iops...

2010-12-07 Thread Ross Walker
On Dec 7, 2010, at 12:46 PM, Roy Sigurd Karlsbakk r...@karlsbakk.net wrote:

 Bear a few things in mind:
 
 iops is not iops.
  <snip/>
 
 I am totally aware of these differences, but it seems some people think RAIDz 
 is nonsense unless you don't need speed at all. My testing shows (so far) 
 that the speed is quite good, far better than single drives. Also, as Eric 
 said, those speeds are for random I/O. I doubt there is very much out there 
 that is truly random I/O except perhaps databases, but then, I would never 
 use raid5/raidz for a DB unless at gunpoint.

Well, besides databases there are VM datastores, busy email servers, busy LDAP 
servers, busy web servers, and I'm sure the list goes on and on.

I'm sure it is much harder to list servers whose IO is truly sequential than 
random. This is especially true when you have thousands of users hitting them.

-Ross

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Increase Volume Size

2010-12-07 Thread Tony MacDoodle
Is it possible to expand the size of a ZFS volume?

It was created with the following command:

zfs create -V 20G ldomspool/test

Thanks
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-07 Thread Brandon High
On Mon, Dec 6, 2010 at 7:10 PM, taemun tae...@gmail.com wrote:
 Sorry, you're right. If they're using 512B internally, this is a non-event
 here. I think that most folks talking about 3TB drives in this list are
 looking for internal drives. That the desktop dock (USB, I presume)
 coalesces blocks doesn't really make any difference.

It's a shame that Seagate doesn't sell their 3TB drive bare, but right
now it's cheaper by about $30 to buy the 7200 rpm Seagate and throw
away the desktop dock than it is to buy a WD EARS drive. Consider it
fancy anti-shock packaging.

-B

-- 
Brandon High : bh...@freaks.com
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Increase Volume Size

2010-12-07 Thread Robert Milkowski

On 07/12/2010 23:54, Tony MacDoodle wrote:

Is it possible to expand the size of a ZFS volume?

It was created with the following command:

zfs create -V 20G ldomspool/test




See the zfs(1M) man page, specifically the section about the volsize property.
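
For example, a minimal sketch (the new size is arbitrary; note that growing 
the volume does not automatically grow whatever filesystem or guest disk 
label sits on top of it):

# zfs set volsize=40G ldomspool/test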

Best regards,
 Robert Milkowski
 http://milek.blogspot.com

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [OpenIndiana-discuss] iops...

2010-12-07 Thread Edward Ned Harvey
 From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
 
  Bear a few things in mind:
 
  iops is not iops.
  <snip/>
 
 I am totally aware of these differences, but it seems some people think
 RAIDz is nonsense unless you don't need speed at all. My testing shows (so
 far) that the speed is quite good, far better than single drives. 

There is a grain of truth to that.  For sequential IO, either reads or writes, 
raidz will be much faster than a single drive.  For random IO, it's more 
complex...

If you're doing random writes, ZFS will aggregate them into sequential IO, and 
hence your raidz will greatly outperform a single drive.
If you're doing random reads, you will get the performance of a single drive, 
at best.
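
(Back-of-the-envelope illustration, assuming roughly 100 random-read IOPS per 
7200 rpm disk - the numbers are only there for the arithmetic:

  7-disk raidz, random reads:     ~1 x 100 = ~100 IOPS   (each block spans all data disks)
  6 of those disks as 3 mirrors:  ~6 x 100 = ~600 IOPS   (each disk can serve reads independently)

Sequential throughput, on the other hand, scales with the number of data disks 
in both layouts.)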

In order to test random reads, you have to configure iozone to use a data set 
which is much larger than physical RAM.  Since iozone writes a big file and 
then, immediately afterward, starts reading it back, the whole file will still 
be in cache unless it is much larger than physical RAM, and you'll get false 
read results that are unnaturally high.

For this reason, when I'm running an iozone benchmark, I remove as much RAM from 
the system as possible.
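
(A minimal sketch of such a run - file size, record size, and path are just 
examples; pick a -s value several times larger than whatever RAM is left in 
the box:

# iozone -i 0 -i 2 -r 128k -s 64g -f /tank/test/iozone.tmp

-i 0 runs the initial write/rewrite pass that creates the file, -i 2 runs the 
random read/write pass, -r sets the record size and -s the file size.)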



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [OpenIndiana-discuss] iops...

2010-12-07 Thread Edward Ned Harvey
 From: Ross Walker [mailto:rswwal...@gmail.com]
 
 Well besides databases there are VM datastores, busy email servers, busy
 ldap servers, busy web servers, and I'm sure the list goes on and on.
 
 I'm sure it is much harder to list servers that are truly sequential in IO
then
 random. This is especially true when you have thousands of users hitting
it.

Depends on the purpose of your server.  For example, I have a ZFS server
whose sole purpose is to receive a backup data stream from another machine,
and then write it to tape.  This is a highly sequential operation, and I use
raidz.

Some people have video streaming servers.  And http/ftp servers with large
files.  And a fileserver which is the destination for laptop whole-disk
backups.  And a repository that stores ISO files and RPMs used for OS
installs on other machines.  And data capture from lab equipment.  And
packet sniffer / compliance email/data logger.

and I'm sure the list goes on and on.  ;-)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] convert output zfs diff to something human readable

2010-12-07 Thread Yuri Vorobyev



Is it possible to convert the octal escapes in zfs diff output to
something human readable?
Maybe with iconv?

Please see the screenshot: http://i.imgur.com/bHhXV.png
I created a file with a Russian name there. The OS is Solaris 11 Express.


This command did the job:
zfs diff  | perl -plne 's#\\\d{8}(\d{3})#chr(oct ($1)-oct (400))#ge; s#\\040# #g'
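
(The backslashes in the one-liner above may have been mangled by the list 
archive. A minimal variant, assuming zfs diff escapes each non-ASCII byte as a 
backslash followed by octal digits, is to turn every escape back into a raw 
byte and let a UTF-8 terminal render the result:

zfs diff <snapshot> <snapshot-or-filesystem> | perl -pe 's#\\(0?\d{3})#chr(oct($1))#ge'

chr(oct($1)) converts each octal escape back into the original byte, so 
multi-byte UTF-8 names such as Russian file names come out readable again.)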


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] 3TB HDD in ZFS

2010-12-07 Thread Eugen Leitl
On Tue, Dec 07, 2010 at 05:17:08PM -0800, Brandon High wrote:
 On Mon, Dec 6, 2010 at 7:10 PM, taemun tae...@gmail.com wrote:
  Sorry, you're right. If they're using 512B internally, this is a non-event
  here. I think that most folks talking about 3TB drives in this list are
  looking for internal drives. That the desktop dock (USB, I presume)
  coalesces blocks doesn't really make any difference.
 
 It's a shame that Seagate doesn't sell their 3TB drive bare, but right
 now it's cheaper by about $30 to buy the 7200 rpm Seagate and throw
 away the desktop dock than it is to buy a WD EARS drive. Consider it a
 fancy anti-shock packaging.

What about Hitachi HDS723030ALA640 (aka Deskstar 7K3000, claimed
24/7)?

-- 
Eugen* Leitl <a href="http://leitl.org">leitl</a> http://leitl.org
__
ICBM: 48.07100, 11.36820 http://www.ativel.com http://postbiota.org
8B29F6BE: 099D 78BA 2FD3 B014 B08A  7779 75B0 2443 8B29 F6BE
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss