Re: [zfs-discuss] Growing a root ZFS mirror on b134?

2010-09-25 Thread Carl Brewer
Thank you very much to Casper for his help. I now have an rpool with 2 x 2TB 
HDDs, and it's all good:

# zpool list rpool
NAME    SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
rpool  1.82T   640G  1.19T    34%  1.09x  ONLINE  -

It gets quite unhappy if you don't use detach (offline isn't so useful!),
and booting with -m milestone=none is a lifesaver!
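
For the archives, the procedure boiled down to roughly this (a sketch from
memory; the c0t*d0s0 device names are placeholders, and details vary by build):

# zpool attach rpool c0t0d0s0 c0t2d0s0
(wait for the resilver to finish; watch "zpool status rpool")
# zpool detach rpool c0t0d0s0
# installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c0t2d0s0

Repeat for the second disk. Once both old disks are detached, the pool picks
up the new size (on some builds you may need "zpool set autoexpand=on rpool"
first), and on x86 the installgrub step keeps the new disks bootable.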


Re: [zfs-discuss] dedup testing?

2010-09-25 Thread Richard Elling
On Sep 25, 2010, at 9:54 AM, Roy Sigurd Karlsbakk wrote:

> Hi all
> 
> Has anyone done any testing of dedup with OI? On OpenSolaris there is a 
> nifty "feature" that allows the system to hang for hours or days if 
> attempting to delete a dataset on a deduped pool. This is said to be fixed, 
> but I haven't seen that myself, so I'm just wondering...

Yes, there is at least one fix that improves this scenario over b134.

> I'll get a 10TB test box released for testing OI in a few weeks, but before 
> then, has anyone tested this?

Yes. 
 -- richard



Re: [zfs-discuss] [osol-discuss] zfs send/receive?

2010-09-25 Thread Richard Elling
On Sep 25, 2010, at 7:42 PM, "Edward Ned Harvey" wrote:

>> From: opensolaris-discuss-boun...@opensolaris.org [mailto:opensolaris-
>> discuss-boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
>> 
>> I'm using a custom snapshot scheme which snapshots every hour, day,
>> week and month, rotating 24h, 7d, 4w and so on. What would be the best
>> way to zfs send/receive these things? I'm a little confused about how
>> this works for delta updates...
> 
> Out of curiosity, why custom?  It sounds like a default config.
> 
> Anyway, as long as the present destination filesystem matches a snapshot from 
> the source system, you can incrementally send any newer snapshot.  Generally 
> speaking, you don't want to send anything that's extremely volatile such as 
> hourly...  because if the snap of the source disappears, then you have 
> nothing to send incrementally from anymore.  Make sense?

It is relatively easy to find the latest common snapshot on two file systems.
Once you know the latest common snapshot, you can send the incrementals up to
the latest.
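
A rough sketch of that, assuming date-stamped snapshot names so lexical order
matches creation order (tank/data, backup/data, and the host name "backup" are
all placeholders):

# zfs list -H -o name -t snapshot -r tank/data | sed 's/.*@//' | sort > /tmp/src
# ssh backup zfs list -H -o name -t snapshot -r backup/data | sed 's/.*@//' | sort > /tmp/dst
# BASE=`comm -12 /tmp/src /tmp/dst | tail -1`
# zfs send -I tank/data@$BASE tank/data@2010-09-25 | ssh backup zfs receive -F backup/data

The -I sends every intermediate snapshot between the common base and the named
one ("2010-09-25" standing in for whatever is newest on the source).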

> 
> I personally send incrementals once a day, and only send the daily 
> incrementals.

For NexentaStor customers, the auto-sync service manages this rather well.
 -- richard



Re: [zfs-discuss] Dedup relationship between pool and filesystem

2010-09-25 Thread Edward Ned Harvey
> From: Roy Sigurd Karlsbakk [mailto:r...@karlsbakk.net]
> 
> > For now, the rule of thumb is 3G of RAM for every 1TB of unique data,
> > including snapshots and vdevs.
> 
> 3 gigs? Last I checked it was a little more than 1GB, perhaps 2 if you
> have small files.

http://opensolaris.org/jive/thread.jspa?threadID=131761

The true answer is "it varies," depending on things like block size, so whether 
you say 1G or 3G, despite sounding like a big difference, it's in the noise.  
We're only talking rule of thumb here, based on vague (very vague) and widely 
variable estimates of your personal usage characteristics.

It's just a rule of thumb, and in this context "slightly over 1G" ~= "slightly 
under 3G".

Hence, the comment:

> After a system is running, I don't know how/if you can measure current
> mem usage, to gauge the results of your own predictions.



Re: [zfs-discuss] [osol-discuss] zfs send/receive?

2010-09-25 Thread Edward Ned Harvey
> From: opensolaris-discuss-boun...@opensolaris.org [mailto:opensolaris-
> discuss-boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
> 
> I'm using a custom snapshot scheme which snapshots every hour, day,
> week and month, rotating 24h, 7d, 4w and so on. What would be the best
> way to zfs send/receive these things? I'm a little confused about how
> this works for delta updates...

Out of curiosity, why custom?  It sounds like a default config.

Anyway, as long as the present destination filesystem matches a snapshot from 
the source system, you can incrementally send any newer snapshot.  Generally 
speaking, you don't want to send anything that's extremely volatile such as 
hourly...  because if the snap of the source disappears, then you have nothing 
to send incrementally from anymore.  Make sense?

I personally send incrementals once a day, and only send the daily incrementals.
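
Concretely, that amounts to a nightly pipe along these lines (dataset,
snapshot, and host names are all hypothetical):

# zfs send -i tank/home@2010-09-24 tank/home@2010-09-25 | \
      ssh backuphost zfs receive -F backup/home

The -F on the receive rolls the destination back to its newest snapshot before
applying the delta, which covers any stray local changes on the backup side.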



Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-25 Thread R.G. Keen
> Erik Trimble sez:
> Honestly, I've said it before, and I'll say it (yet) again: unless you 
> have very stringent power requirements (or some other unusual 
> requirement, like very, very low noise), used (or even new-in-box, 
> previous-generation excess inventory) OEM stuff is far superior to any 
> build-it-yourself rig you can come up with.
It's horses for courses, I guess. I've had to live with server fan noise and
power draw, and it's not pleasant. I very much like the reliability
characteristics of older servers, but they eat a lot of power and are noisy,
as you say.

On the other hand, I did the calculation at my local electricity rates (central
Texas), and it's easy to save a few hundred dollars over two years of 24/7
operation with low-power systems. I am NOT necessarily saying that my system is
something to emulate, nor that my choices are right for everyone, particularly
hardware-building amateurs. My past includes a lot of hardware design and
build, so putting together a Frankenserver is not daunting to me. I also have a
history of making educated guesses about failure rates and the cost of losing
data. So I made choices based on my experience and skills.

For **me**, putting together a server out of commercial parts is a far better
bet than running a server in a virtual machine on desktop parts of any vintage,
which was the original question - whether a virtual server on top of Windows
running on some hardware was advisable for a beginner. For me, it's not. The
relative merits will vary from user to user according to their skills and
experience level. I was willing to learn Solaris to get zfs. Given what's
happened with Oracle since I started, that may have been a bad bet, but my
server and data do now live and breathe, for better or worse.

But I have no fear of breathing life into new hardware and copying the old
data over. Nor is it a trial for me to fire up a last-generation server,
install a new OS, and copy the data over. To me, that's all a cost/benefit
calculation.

> So much so, in fact, that we should really consider the reference 
> recommendation for a ZFS fileserver to be certain configs of brand-name 
> hardware, and NOT try to recommend other things to folks.
I personally would have loved to have had that when I started the
zfs/OpenSolaris trek a year ago. It was not available, and I paid my dues
learning the OS and zfs. I'm not sure, given where Oracle is taking Solaris,
that there is any need to recommend any particular hardware to folks in
general. I think the number of people following the path I took - using
OpenSolaris to get zfs and buying/building a home machine to do it - is going
to nosedive dramatically, by Oracle's design.

To me, the data stability issues dictated zfs, and OpenSolaris was where I got
it. I put up with the labyrinthine mess of figuring out what would and would
not run OpenSolaris to get zfs, and it worked OK. Data integrity was what I
was after.

I had sub-issues. It's silly (in my estimation) to worry about data integrity
on disks and not in memory. That made ECC an issue, hence my burrowing into the
most cost-efficient way to get ECC. Oh, yeah, cost: I wanted it to be as cheap
as possible, given the other constraints. Then hardware reliability. I actually
bought an off-duty server locally because of the cost advantages and the
perceived hardware reliability. I can't get OpenSolaris to work on it - yet, at
least - and I'm sure that's down to my being an OpenSolaris neophyte. But it
sure is noisy.

What **my** compromises were:
- new hardware, to stay inside the shallow end of the failure-rate bathtub
- burn-in, to get past the infant-mortality issues
- ECC as cheaply as possible, given that I actually wanted it to work
- modern SATA controllers for the storage, which dragged in PCIe and
controllers compatible with OpenSolaris
- as low a power draw as possible, as that can save about $100 a year *for me*
- as low a noise factor as possible, because I've spent too much of my life
listening to machines desperately trying to stay cool

What I could trade for this was not caring whether the hardware was
particularly fast; it was a layer of data backup, not a mission-critical
server. And it had to run zfs, which is why I started this mess. Also, I don't
have huge data storage problems: I enforce the live backup data to be under
4TB. Yep, that's tiny by comparison. I have small problems. 8-)

Result: a new-components server that runs zfs, works on my house network, uses
under 100W as measured at the wall socket, and stores 4TB. I got what I set out
to get, so I'm happy with it. 

This is not the system for everybody, but it works for me. Writing down what
you're trying to do is a great tool. People used to get really mad at me for
saying, in Very Serious Business Meetings, "If we were completely successful,
what would that look like?" It almost always ...

[zfs-discuss] zfs send/receive?

2010-09-25 Thread Roy Sigurd Karlsbakk
Hi all

I'm using a custom snapshot scheme which snapshots every hour, day, week and 
month, rotating 24h, 7d, 4w and so on. What would be the best way to zfs 
send/receive these things? I'm a little confused about how this works for 
delta updates...
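
For concreteness, the scheme is roughly this shape (a sketch only; the
snaprotate helper and the tank/data dataset name are illustrative, not my
actual setup):

# crontab entries: tag and retention count per schedule
0 * * * * /usr/local/bin/snaprotate hourly 24
0 0 * * * /usr/local/bin/snaprotate daily 7
0 0 * * 0 /usr/local/bin/snaprotate weekly 4

#!/bin/sh
# snaprotate TAG KEEP: take a snapshot tagged TAG, keep only the newest KEEP
TAG=$1; KEEP=$2; FS=tank/data
zfs snapshot $FS@$TAG-`date +%Y%m%d%H%M`
N=0
# timestamped names sort lexically, so sort -r lists newest first
for SNAP in `zfs list -H -o name -t snapshot -r $FS | grep "@$TAG-" | sort -r`
do
    N=`expr $N + 1`
    [ $N -gt $KEEP ] && zfs destroy $SNAP
done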

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly.
It is an elementary imperative for all pedagogues to avoid excessive use of
idioms of foreign origin. In most cases adequate and relevant synonyms exist
in Norwegian.


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-25 Thread Ian Collins

On 09/26/10 07:25 AM, Erik Trimble wrote:
> On 9/25/2010 1:57 AM, Ian Collins wrote:
>> On 09/25/10 02:54 AM, Erik Trimble wrote:
>>> Honestly, I've said it before, and I'll say it (yet) again: unless you
>>> have very stringent power requirements (or some other unusual
>>> requirement, like very, very low noise), used (or even new-in-box,
>>> previous-generation excess inventory) OEM stuff is far superior to any
>>> build-it-yourself rig you can come up with. So much so, in fact, that
>>> we should really consider the reference recommendation for a ZFS
>>> fileserver to be certain configs of brand-name hardware, and NOT try
>>> to recommend other things to folks.
>>
>> Unless you live somewhere with a very small used server market, that is!
>
> But, I hear there's this newfangled thingy, called some darned fool
> thing like "the interanets" or some such, that lets you, you know,
> *order* things from far away places using one of those funny PeeCee
> dood-ads, and they like, *deliver* to your door.

Have you ever had to pay international shipping to the other side of the
world on a second-hand server?!  Not all sellers will ship internationally.

I do bring in a lot of system components, but chassis aren't worth the cost.

--
Ian.



Re: [zfs-discuss] dedup testing?

2010-09-25 Thread Markus Kovero
> On Sat, Sep 25, 2010 at 10:19 AM, Piotr Jasiukajtis wrote:
>> AFAIK that part of the dedup code is unchanged in b147.

> I think I remember seeing that there was a change made in 142 that
> helps, though I'm not sure to what extent.

> -B

OI seemed to behave much better than 134 in a low-disk-space situation with
dedup turned on, after the server crashed during a (terabytes-large) snapshot
destroy. The import took some time, but it did not block I/O; the most
time-consuming part was mounting datasets, and already-mounted datasets could
be used during the import. Performance is also a lot better.

Yours
Markus Kovero


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-25 Thread Giovanni Tirloni
On Thu, Sep 23, 2010 at 1:08 PM, Dick Hoogendijk wrote:

>  And what Sun systems are you thinking of for 'home use'?
> The likelihood of memory failures might be much higher than that of becoming
> a millionaire, but in years past I have never had one. And my home systems
> are rather cheap. Mind you, not the cheapest, but rather cheap. I do buy
> good memory, though. So, to me, with a good backup I feel rather safe using
> ZFS. I also had it running for quite some time on a 32-bit machine, and that
> also worked out fine.
>

We see correctable memory errors on ECC systems on a monthly basis. It's not
a question of whether they'll happen, but how often.

-- 
Giovanni Tirloni
gtirl...@sysdroid.com


Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-25 Thread Erik Trimble

On 9/25/2010 1:57 AM, Ian Collins wrote:
> On 09/25/10 02:54 AM, Erik Trimble wrote:
>> Honestly, I've said it before, and I'll say it (yet) again: unless you
>> have very stringent power requirements (or some other unusual
>> requirement, like very, very low noise), used (or even new-in-box,
>> previous-generation excess inventory) OEM stuff is far superior to any
>> build-it-yourself rig you can come up with. So much so, in fact, that
>> we should really consider the reference recommendation for a ZFS
>> fileserver to be certain configs of brand-name hardware, and NOT try
>> to recommend other things to folks.
>
> Unless you live somewhere with a very small used server market, that is!

But, I hear there's this newfangled thingy, called some darned fool
thing like "the interanets" or some such, that lets you, you know,
*order* things from far away places using one of those funny PeeCee
dood-ads, and they like, *deliver* to your door.


And here I was, just getting used to all those nice things from the 
Sears catalog.  Gonna have to learn me a whole new thing again.

Damn kids.

Of course, living in certain countries it's hard to get the hardware 
through customs...


--
Erik Trimble
Java System Support
Mailstop:  usca22-123
Phone:  x17195
Santa Clara, CA



Re: [zfs-discuss] dedup testing?

2010-09-25 Thread Brandon High
On Sat, Sep 25, 2010 at 10:19 AM, Piotr Jasiukajtis wrote:
> AFAIK that part of the dedup code is unchanged in b147.

I think I remember seeing that there was a change made in 142 that
helps, though I'm not sure to what extent.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] Dedup relationship between pool and filesystem

2010-09-25 Thread Scott Meilicke
When I do the calculations, assuming a conservative 300 bytes per block, with 
128K blocks I get 2.34G of cache (RAM, L2ARC) per terabyte of deduped data. 
But block size is dynamic, so you will need more than this.
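
Spelled out (a quick sanity check with bc; the 128K block size and 300 bytes
per DDT entry are the assumptions above):

# echo '(1024^4 / (128*1024)) * 300' | bc
2516582400

...about 2.34G, and halving the average block size doubles it.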

Scott


Re: [zfs-discuss] dedup testing?

2010-09-25 Thread Piotr Jasiukajtis
AFAIK that part of the dedup code is unchanged in b147.

On Sat, Sep 25, 2010 at 6:54 PM, Roy Sigurd Karlsbakk wrote:
> Hi all
>
> Has anyone done any testing of dedup with OI? On OpenSolaris there is a 
> nifty "feature" that allows the system to hang for hours or days if 
> attempting to delete a dataset on a deduped pool. This is said to be fixed, 
> but I haven't seen that myself, so I'm just wondering...
>
> I'll get a 10TB test box released for testing OI in a few weeks, but before 
> then, has anyone tested this?



-- 
Piotr Jasiukajtis | estibi | SCA OS0072
http://estseg.blogspot.com


Re: [zfs-discuss] Dedup relationship between pool and filesystem

2010-09-25 Thread Roy Sigurd Karlsbakk
> > For de-duplication to perform well you need to be able to fit the
> > de-dup table in memory. Is a good rule of thumb for needed RAM
> > Size = (pool capacity / avg block size) * 270 bytes? Or perhaps it's
> > Size / expected_dedup_ratio?
> 
> For now, the rule of thumb is 3G of RAM for every 1TB of unique data,
> including snapshots and vdevs.
> 
> After a system is running, I don't know how/if you can measure current
> mem usage, to gauge the results of your own predictions.

3 gigs? Last I checked it was a little more than 1GB, perhaps 2 if you have 
small files.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/


[zfs-discuss] dedup testing?

2010-09-25 Thread Roy Sigurd Karlsbakk
Hi all

Has anyone done any testing of dedup with OI? On OpenSolaris there is a nifty 
"feature" that allows the system to hang for hours or days if attempting to 
delete a dataset on a deduped pool. This is said to be fixed, but I haven't 
seen that myself, so I'm just wondering...

I'll get a 10TB test box released for testing OI in a few weeks, but before 
then, has anyone tested this?

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/



Re: [zfs-discuss] Dedup relationship between pool and filesystem

2010-09-25 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Brad Stone
> 
> For de-duplication to perform well you need to be able to fit the
> de-dup table in memory. Is a good rule of thumb for needed RAM
> Size = (pool capacity / avg block size) * 270 bytes? Or perhaps it's
> Size / expected_dedup_ratio?

For now, the rule of thumb is 3G of RAM for every 1TB of unique data, including
snapshots and vdevs.

After a system is running, I don't know how/if you can measure current mem
usage, to gauge the results of your own predictions.
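
One partial answer may be zdb's dedup statistics, which can at least size the
DDT on a live pool (a sketch; "tank" is a placeholder and output details vary
by build):

# zdb -D tank
(one-line DDT summary: entry counts plus dedup/compress ratios)
# zdb -DD tank
(full DDT histogram, including per-entry on-disk and in-core sizes)

Multiplying the entry count by the reported in-core size gives a rough figure
for the RAM the table wants.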



Re: [zfs-discuss] non-ECC Systems and ZFS for home users

2010-09-25 Thread Ian Collins

On 09/25/10 02:54 AM, Erik Trimble wrote:
> Honestly, I've said it before, and I'll say it (yet) again: unless you
> have very stringent power requirements (or some other unusual
> requirement, like very, very low noise), used (or even new-in-box,
> previous-generation excess inventory) OEM stuff is far superior to any
> build-it-yourself rig you can come up with. So much so, in fact, that
> we should really consider the reference recommendation for a ZFS
> fileserver to be certain configs of brand-name hardware, and NOT try
> to recommend other things to folks.

Unless you live somewhere with a very small used server market, that is!

--
Ian.



Re: [zfs-discuss] zfs property aclmode gone in 147?

2010-09-25 Thread Ralph Böhme
> I must admit though, that the new ACL/ACE model in
> 147 is really nice and slick.

The Darwin ACL model is nice and slick; the new NFSv4 one in 147 is just
braindead. chmod resulting in ACLs being discarded is a bizarre design decision.
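
To illustrate (a sketch; the file and user names are made up):

# chmod A+user:fred:read_data:allow /tank/fs/file
# ls -v /tank/fs/file
(shows the added ACE)
# chmod 644 /tank/fs/file
(on 147 this throws the non-trivial ACEs away)
# ls -v /tank/fs/file
(only the trivial owner/group/everyone entries remain)

Pre-147 you could set aclmode=passthrough on the filesystem and chmod would
leave the ACEs alone; that property is exactly what's gone.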