Re: [zfs-discuss] just can't import

2011-04-12 Thread Matt Harrison

On 13/04/2011 00:36, David Magda wrote:

> On Apr 11, 2011, at 17:54, Brandon High wrote:
>
>> I suspect that the minimum memory for most moderately sized pools is
>> over 16GB. There has been a lot of discussion regarding how much
>> memory each dedup'd block requires, and I think it was about 250-270
>> bytes per block. 1TB of data (at max block size and no duplicate data)
>> will require about 2GB of memory to run effectively. (This seems high
>> to me, hopefully someone else can confirm.)
>
> There was a thread on the topic with the subject "Newbie ZFS Question:
> RAM for Dedup". I think it was summarized pretty well by Erik Trimble:
>
>> bottom line: 270 bytes per record
>>
>> so, for 4k record size, that works out to be 67GB per 1 TB of unique data.
>> 128k record size means about 2GB per 1 TB.
>>
>> dedup means buy a (big) SSD for L2ARC.
>
> http://mail.opensolaris.org/pipermail/zfs-discuss/2010-October/045720.html
>
> Remember that 270 bytes per block means you're allocating one 512-byte
> sector for most current disks (a 4K sector for each block, RSN).
>
> See also:
>
> http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/037978.html
> http://mail.opensolaris.org/pipermail/zfs-discuss/2010-February/037300.html



Thanks for the info guys.

I decided that the overhead involved in managing (especially deleting)
deduped datasets far outweighed the benefits it was bringing me. I'm
currently remaking the datasets without dedup, and now that I know about
the "hang" I'm a lot more patient :D
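
For the archives, the rough procedure I'm following (dataset names are
placeholders; note that zfs set dedup=off only affects new writes, so
already-written blocks stay in the DDT until the old dataset is
destroyed):

  # stop new writes from being deduped
  zfs set dedup=off tank/data

  # make a fresh non-dedup dataset and copy the data across
  zfs create tank/data.new
  rsync -a /tank/data/ /tank/data.new/

  # destroy the old dataset -- this is the step that can "hang"
  zfs destroy tank/data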


Thanks

Matt


Re: [zfs-discuss] just can't import

2011-04-12 Thread David Magda
On Apr 11, 2011, at 17:54, Brandon High wrote:

> I suspect that the minimum memory for most moderately sized pools is
> over 16GB. There has been a lot of discussion regarding how much
> memory each dedup'd block requires, and I think it was about 250-270
> bytes per block. 1TB of data (at max block size and no duplicate data)
> will require about 2GB of memory to run effectively. (This seems high
> to me, hopefully someone else can confirm.) 

There was a thread on the topic with the subject "Newbie ZFS Question:
RAM for Dedup". I think it was summarized pretty well by Erik Trimble:

> bottom line: 270 bytes per record
> 
> so, for 4k record size, that works out to be 67GB per 1 TB of unique data.
> 128k record size means about 2GB per 1 TB.
> 
> dedup means buy a (big) SSD for L2ARC.

http://mail.opensolaris.org/pipermail/zfs-discuss/2010-October/045720.html

Remember that 270 bytes per block means you're allocating one 512-byte
sector for most current disks (a 4K sector for each block, RSN).
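
If you want to plug in your own numbers, the estimate is just (unique
blocks) x 270 bytes. A quick sanity check with bc, assuming Erik's 270
bytes/entry figure (your pool's real average block size will vary):

  $ echo "scale=1; (1024^4 / (4 * 1024)) * 270 / 1024^3" | bc
  67.5
  $ echo "scale=1; (1024^4 / (128 * 1024)) * 270 / 1024^3" | bc
  2.1

That is, roughly 67 GB of DDT per TB of unique data at 4k records and
about 2 GB per TB at 128k, matching Erik's figures above.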

See also:

http://mail.opensolaris.org/pipermail/zfs-discuss/2010-March/037978.html
http://mail.opensolaris.org/pipermail/zfs-discuss/2010-February/037300.html



Re: [zfs-discuss] just can't import

2011-04-11 Thread Brandon High
On Mon, Apr 11, 2011 at 10:55 AM, Matt Harrison
 wrote:
> It did finish eventually, not sure how long it took in the end. Things are
> looking good again :)

If you want to continue using dedup, you should invest in (a lot) more
memory. The amount of memory required depends on the size of your pool
and the type of data that you're storing. Data that uses large blocks
will use less memory.

I suspect that the minimum memory for most moderately sized pools is
over 16GB. There has been a lot of discussion regarding how much
memory each dedup'd block requires, and I think it was about 250-270
bytes per block. 1TB of data (at max block size and no duplicate data)
will require about 2GB of memory to run effectively. (This seems high
to me, hopefully someone else can confirm.) This is memory that is
available to the ARC, above and beyond what is being used by the
system and applications. Of course, using all your ARC to hold dedup
data won't help much either, as either cacheable data or dedup info
will be evicted rather quickly. Forcing the system to read dedup
tables from the pool is slow, since it's a lot of random reads.

All I know is that I have 8GB in my home system, and it is not enough
to work with the 8TB pool that I have. Adding a fast SSD as L2ARC can
help reduce the memory requirements somewhat by keeping dedup data
more easily accessible. (And make sure that your L2ARC device is large
enough. I fried a 30GB OCZ Vertex in just a few months of use, I
suspect from the constant writes.)
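
If you want hard numbers rather than rules of thumb, zdb can report on
the dedup table directly. A rough sketch ("tank" is a placeholder pool
name; note that -S walks all the data in the pool, so it can take
hours):

  # dedup table statistics for a deduped pool, including entry
  # counts and the in-core size per entry
  zdb -DD tank

  # simulate dedup on a pool that doesn't use it yet, to see what
  # ratio you would actually get
  zdb -S tank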

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] just can't import

2011-04-11 Thread Matt Harrison

On 11/04/2011 10:04, Brandon High wrote:

> On Sun, Apr 10, 2011 at 10:01 PM, Matt Harrison
> wrote:
>
>> The machine only has 4G RAM I believe.
>
> There's your problem. 4G is not enough memory for dedup, especially
> without a fast L2ARC device.
>
>> It's time I should be heading to bed so I'll let it sit overnight, and if
>> I'm still stuck with it I'll give Ian's recent suggestions a go and report
>> back.
>
> I'd suggest waiting for it to finish the destroy. It will, if you give it time.
>
> Trying to force the import is only going to put you back in the same
> situation: the system will attempt to complete the destroy and seem
> to hang until it's completed.
>
> -B



Thanks Brandon,

It did finish eventually, not sure how long it took in the end. Things 
are looking good again :)


Thanks for the help everyone

Matt


Re: [zfs-discuss] just can't import

2011-04-11 Thread Brandon High
On Sun, Apr 10, 2011 at 10:01 PM, Matt Harrison
 wrote:
> The machine only has 4G RAM I believe.

There's your problem. 4G is not enough memory for dedup, especially
without a fast L2ARC device.

> It's time I should be heading to bed so I'll let it sit overnight, and if
> I'm still stuck with it I'll give Ian's recent suggestions a go and report
> back.

I'd suggest waiting for it to finish the destroy. It will, if you give it time.

Trying to force the import is only going to put you back in the same
situation: the system will attempt to complete the destroy and seem
to hang until it's completed.
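
If you want reassurance that it's working rather than truly hung, watch
the disks while it runs; a destroy in progress shows steady (if
unimpressive) read activity. Something like:

  # extended per-device statistics every 5 seconds; look for ongoing
  # reads on the pool's disks
  iostat -xn 5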

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] just can't import

2011-04-10 Thread Matt Harrison

On 11/04/2011 05:25, Brandon High wrote:

> On Sun, Apr 10, 2011 at 9:01 PM, Matt Harrison
> wrote:
>
>> I had a de-dup dataset and tried to destroy it. The command hung and so did
>> anything else zfs related. I waited half an hour or so, the dataset was
>> only 15G, and rebooted.
>
> How much RAM does the system have? Dedup uses a LOT of memory, and it
> can take a long time to destroy dedup'd datasets.
>
> If you keep waiting, it'll eventually return. It could be a few hours or longer.
>
>> The machine refused to boot, stuck at Reading ZFS Config. Asking around on
>
> The system resumed the destroy that was in progress. If you let it
> sit, it'll eventually complete.
>
>> Well the livecd is also hanging on import, anything else zfs hangs. iostat
>> shows some reads but they drop off to almost nothing after 2 mins or so.
>
> Likewise, it's trying to complete the destroy. Be patient and it'll
> complete. Newer versions of OpenSolaris or Solaris 11 Express may
> complete it faster.
>
>> Any tips greatly appreciated,
>
> Just wait...
>
> -B



Thanks for the replies,

The machine only has 4G RAM I believe.

It's time I should be heading to bed so I'll let it sit overnight, and 
if I'm still stuck with it I'll give Ian's recent suggestions a go and 
report back.


Many thanks

Matt


Re: [zfs-discuss] just can't import

2011-04-10 Thread Brandon High
On Sun, Apr 10, 2011 at 9:01 PM, Matt Harrison
 wrote:
> I had a de-dup dataset and tried to destroy it. The command hung and so did
> anything else zfs related. I waited half an hour or so, the dataset was
> only 15G, and rebooted.

How much RAM does the system have? Dedup uses a LOT of memory, and it
can take a long time to destroy dedup'd datasets.

If you keep waiting, it'll eventually return. It could be a few hours or longer.

> The machine refused to boot, stuck at Reading ZFS Config. Asking around on

The system resumed the destroy that was in progress. If you let it
sit, it'll eventually complete.

> Well the livecd is also hanging on import, anything else zfs hangs. iostat
> shows some reads but they drop off to almost nothing after 2 mins or so.

Likewise, it's trying to complete the destroy. Be patient and it'll
complete. Newer versions of OpenSolaris or Solaris 11 Express may
complete it faster.

> Any tips greatly appreciated,

Just wait...

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] just can't import

2011-04-10 Thread Ian Collins

On 04/11/11 04:01 PM, Matt Harrison wrote:
> I'm running a slightly old version of OSOL, I'm sorry I can't remember
> the version.
>
> I had a de-dup dataset and tried to destroy it. The command hung and
> so did anything else zfs related. I waited half an hour or so, the
> dataset was only 15G, and rebooted.
>
> The machine refused to boot, stuck at Reading ZFS Config. Asking
> around on the OSOL list someone kindly suggested I try a livecd and
> import, scrub, export the pool from there.
>
> Well the livecd is also hanging on import, anything else zfs hangs.
> iostat shows some reads but they drop off to almost nothing after 2
> mins or so. Truss'ing the import process just loops this over and over:
>
> 3134/6: lwp_park(0xFDE02F38, 0) (sleeping...)
> 3134/6: lwp_park(0xFDE02F38, 0) Err#62 ETIME
>
> I wouldn't mind waiting for the pool to right itself, but to my
> inexperienced eyes it doesn't actually seem to be doing anything.
>
> Any tips greatly appreciated,


Hello again,

Two suggestions spring to mind:

Try to import read only.

Try a recovery import: zpool import -Fn should tell you if the pool can
be imported. It may take a while!
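
Roughly, assuming your pool is called tank (a placeholder; note too
that the read-only import option needs a fairly recent ZFS version, so
it may not exist on an older livecd):

  # import without writing anything, so the pending destroy is not
  # resumed
  zpool import -o readonly=on tank

  # recovery-mode dry run: -n reports whether discarding the last few
  # transactions would make the pool importable, without doing it
  zpool import -Fn tank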


--
Ian.



[zfs-discuss] just can't import

2011-04-10 Thread Matt Harrison
I'm running a slightly old version of OSOL, I'm sorry I can't remember 
the version.


I had a de-dup dataset and tried to destroy it. The command hung and so
did anything else zfs related. I waited half an hour or so, the dataset
was only 15G, and rebooted.


The machine refused to boot, stuck at Reading ZFS Config. Asking around 
on the OSOL list someone kindly suggested I try a livecd and import, 
scrub, export the pool from there.


Well the livecd is also hanging on import, anything else zfs hangs. 
iostat shows some reads but they drop off to almost nothing after 2 mins 
or so. Truss'ing the import process just loops this over and over:


3134/6: lwp_park(0xFDE02F38, 0) (sleeping...)
3134/6: lwp_park(0xFDE02F38, 0) Err#62 ETIME

I wouldn't mind waiting for the pool to right itself, but to my 
inexperienced eyes it doesn't actually seem to be doing anything.


Any tips greatly appreciated,

thanks

Matt Harrison
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss