Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Richard Elling
On Jun 14, 2011, at 10:31 PM, Fred Liu wrote:
> 
>> -Original Message-
>> From: Richard Elling [mailto:richard.ell...@gmail.com]
>> Sent: Wednesday, June 15, 2011 11:59
>> To: Fred Liu
>> Cc: Jim Klimov; zfs-discuss@opensolaris.org
>> Subject: Re: [zfs-discuss] zfs global hot spares?
>> 
>> On Jun 14, 2011, at 2:36 PM, Fred Liu wrote:
>> 
>>> What is the difference between warm spares and hot spares?
>> 
>> Warm spares are connected and powered. Hot spares are connected,
>> powered, and automatically brought online to replace a "failed" disk.
>> The reason I'm leaning towards warm spares is because I see more
>> replacements than "failed" disks... a bad thing.
>> -- richard
>> 
> 
> You mean so-called "failed" disks replaced by hot spares are not really
> physically damaged? Do I misunderstand?

That is not how I would phrase it; let's try: assuming the disk has failed
because you can't access it or it returns bad data is a bad assumption.
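
For anyone following along, the difference in practice is roughly this (pool and
device names below are made up): a hot spare is added to the pool and kicks in
automatically, while a warm spare is left connected and powered but is only
attached by hand once you have confirmed the disk really failed.

# hot spare: part of the pool, used automatically on failure
zpool add tank spare c0t9d0

# warm spare: keep the disk connected and powered but outside the pool;
# after verifying the failure yourself, replace manually:
zpool replace tank c0t8d0 c0t9d0
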
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Fred Liu


> -Original Message-
> From: Richard Elling [mailto:richard.ell...@gmail.com]
> Sent: Wednesday, June 15, 2011 11:59
> To: Fred Liu
> Cc: Jim Klimov; zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] zfs global hot spares?
> 
> On Jun 14, 2011, at 2:36 PM, Fred Liu wrote:
> 
> > What is the difference between warm spares and hot spares?
> 
> Warm spares are connected and powered. Hot spares are connected,
> powered, and automatically brought online to replace a "failed" disk.
> The reason I'm leaning towards warm spares is because I see more
> replacements than "failed" disks... a bad thing.
>  -- richard
> 

You mean so-called "failed" disks replaced by hot spares are not really
physically damaged? Do I misunderstand?


Thanks.

Fred
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Richard Elling
On Jun 14, 2011, at 2:36 PM, Fred Liu wrote:

> What is the difference between warm spares and hot spares?

Warm spares are connected and powered. Hot spares are connected,
powered, and automatically brought online to replace a "failed" disk.
The reason I'm leaning towards warm spares is because I see more
replacements than "failed" disks... a bad thing.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about COW and snapshots

2011-06-14 Thread Richard Elling
On Jun 14, 2011, at 10:25 AM, Simon Walter wrote:

> I'm looking to create a NAS with versioning for non-technical users (Windows 
> and Mac). I want the users to be able to simply save a file, and a 
> revision/snapshot is created. I could use a revision control software like 
> SVN (it has autoversioning with WebDAV), and I will if this filesystem level 
> idea does not pan out. However, since this is a NAS for any files(Word, 
> JPEGs, MP3, etc), not just source code, I thought doing this at the 
> filesystem level via some kind of COW would be better.

I can't answer the "better" question, but with ZFS you can delegate to the user 
the ability to make
snapshots when they want. You can even have applications like databases make 
snapshots when
they want. Using this feature, you don't need to archive every write.
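
For reference, a minimal sketch of that delegation (user and dataset names are
made up):

# let the user create, destroy and mount snapshots of his own dataset
zfs allow simon snapshot,destroy,mount tank/home/simon

# the user can then snapshot before risky edits, without root:
zfs snapshot tank/home/simon@before-edit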

> 
> So now my (ignorant) question: can ZFS make a snapshot every time it's 
> written to?

I hope not, that would suck most heinously.

> Can all writes be available as snapshots so all previous "versions" are 
> available?

That would suck worse.

> Or does one need to explicitly create a snapshot?

Yes.

> 
> I've read about auto-snapshot. But that seems like nice cron job (scheduled 
> snapshots). Am I mistaken?

No, that is how it works.
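
In other words, something like this crontab entry (a sketch; the dataset name is
made up, and the zfs-auto-snapshot SMF service packages essentially the same idea
with retention policies):

# hourly snapshot named after the date and hour
0 * * * * /usr/sbin/zfs snapshot tank/data@auto-$(date +\%Y-\%m-\%d-\%H)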

> 
> I looked in to ext3cow, next3, and then some. ZFS seems the most stable and 
> ready for use. However, is the above possible? Or how is it possible? Is 
> there a better way of doing this? Other filesystems?
> 
> Thanks for ideas and suggestions,

I use TimeMachine with a ZFS repository. It is time-based snapshots, too.
 -- richard


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Fajar A. Nugraha
On Tue, Jun 14, 2011 at 7:15 PM, Jim Klimov  wrote:
> Hello,
>
>  A college friend of mine is using Debian Linux on his desktop,
> and wondered if he could tap into ZFS goodness without adding
> another server in his small quiet apartment or changing the
> desktop OS. According to his research, there are some kernel
> modules for Debian which implement ZFS, or a FUSE variant.
>
>  Can anyone comment how stable and functional these are?
> Performance is a secondary issue, as long as it does not
> lead to system crashes due to timeouts, etc. ;)

zfs-fuse has been around for a long time, and is quite stable. Ubuntu
natty has it in the universe repository (I don't know about Debian's
repository, but you should be able to use Ubuntu's). It has the
benefits and drawbacks of a FUSE implementation (namely: it does not
support zvols).
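
For reference, a minimal sketch of trying it out on Ubuntu (assuming the zfs-fuse
package mentioned above; the file-backed pool is just for experimenting):

sudo apt-get install zfs-fuse
# create a throwaway pool backed by a plain file
sudo dd if=/dev/zero of=/var/tmp/zpool.img bs=1M count=1024
sudo zpool create testpool /var/tmp/zpool.img
sudo zfs create testpool/data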

zfsonlinux is somewhat new, and has some problems relating to memory
management (in some cases arc usage can get very high, and then you'll
see high cpu usage by the arc_reclaim thread). It's not recommended for
32-bit OSes. Being in-kernel, it has the potential to be more stable and
faster than zfs-fuse. It has zvol support. The latest rc version is
somewhat stable for normal uses.

Performance-wise, from my test it can be 4 times slower compared to
ext4 (depending on the load).

-- 
Fajar
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Jim Klimov
> 
>A college friend of mine is using Debian Linux on his desktop,
> and wondered if he could tap into ZFS goodness without adding
> another server in his small quiet apartment or changing the

Most likely BTRFS will be your best friend, if what you care about is mostly
snapshots.  Unfortunately, one major deficiency of BTRFS is the inability to
do something on par with 'zfs send' onto a remote system.  Maybe you care,
maybe not.  BTRFS is included now (and has been for the last couple of years) on
Ubuntu, Fedora, and surely some other major distros.
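
For what it's worth, the snapshot side of btrfs looks roughly like this (a sketch
with made-up paths, assuming the filesystem was created with mkfs.btrfs):

# create a subvolume to hold the data, then snapshot it at any point
btrfs subvolume create /data/docs
btrfs subvolume snapshot /data/docs /data/docs-snap-2011-06-14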

If you want to consider the Solaris guest idea...  I certainly do this in
some situations.  Here's what you should know.  Even with VT (or whatever)
enabled and guest tools installed (or whatever) I have never seen VirtualBox
perform disk IO at a rate satisfactorily similar to the native OS.
Furthermore, even if you network the host & guest via a virtual network
interface (speed limited only by cpu & ram) it doesn't go nearly as fast as
you would think...  I see something like a sustainable maximum of 3Gbit going
through the virtual network interfaces.  And of course you give up a
significant chunk of ram to run the virtual guest.

Yes, it works.  Yes, it's appropriate in some cases.  My personal advice
would be to look at BTRFS first, and virtual guest second.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs usable space?

2011-06-14 Thread Ding Honghui

I have 15x1TB disks; each disk's usable space should be
1 TB = 1,000,000,000,000 B = 1,000,000,000,000/1024/1024/1024 GiB ≈ 931 GiB.

As it shows in command format:
# echo | format | grep MD
   3. c4t60026B900053AA1502C74B8F0EADd0 
   4. c4t60026B900053AA1502C94B8F0EE3d0 
   5. c4t60026B900053AA1502CB4B8F0F0Dd0 
   6. c4t60026B900053AA1502CD4B8F0F3Dd0 
   7. c4t60026B900053AA1502CF4B8F0F6Dd0 
   8. c4t60026B900053AA1502D14B8F0F9Cd0 
   9. c4t60026B900053AA1502D34B8F0FC8d0 
  10. c4t60026B900053AA1802A04B8F0D91d0 
  11. c4t60026B900053AA1802A24B8F0DC5d0 
  12. c4t60026B900053AA1802A44B8F0DEDd0 
  13. c4t60026B900053AA18029C4B8F0D35d0 
  14. c4t60026B900053AA18029E4B8F0D61d0 
  15. c4t60026B900053AA18036E4DBF6BA6d0 
  16. c4t60026B900053AA1802984B8F0CD2d0 
  17. c4t60026B900053AA1503074B901CF3d0 
#

I created 2 raidz1 vdevs (7 disks each) and 1 global hot spare in the zpool:
        NAME                                     STATE     READ WRITE CKSUM
        datapool                                 ONLINE       0     0     0
          raidz1                                 ONLINE       0     0     0
            c4t60026B900053AA1502C74B8F0EADd0    ONLINE       0     0     0
            c4t60026B900053AA1502C94B8F0EE3d0    ONLINE       0     0     0
            c4t60026B900053AA1502CB4B8F0F0Dd0    ONLINE       0     0     0
            c4t60026B900053AA1502CD4B8F0F3Dd0    ONLINE       0     0     0
            c4t60026B900053AA1502CF4B8F0F6Dd0    ONLINE       0     0     0
            c4t60026B900053AA1502D14B8F0F9Cd0    ONLINE       0     0     0
            c4t60026B900053AA1502D34B8F0FC8d0    ONLINE       0     0     0
          raidz1                                 ONLINE       0     0     0
            spare                                ONLINE       0     0     7
              c4t60026B900053AA1802A04B8F0D91d0  ONLINE      10     0     0  194K resilvered
              c4t60026B900053AA18036E4DBF6BA6d0  ONLINE       0     0     0  531G resilvered
            c4t60026B900053AA1802A24B8F0DC5d0    ONLINE       0     0     0
            c4t60026B900053AA1802A44B8F0DEDd0    ONLINE       0     0     0
            c4t60026B900053AA1503074B901CF3d0    ONLINE       0     0     0
            c4t60026B900053AA18029C4B8F0D35d0    ONLINE       0     0     0
            c4t60026B900053AA18029E4B8F0D61d0    ONLINE       0     0     0
            c4t60026B900053AA1802984B8F0CD2d0    ONLINE       0     0     0

        spares
          c4t60026B900053AA18036E4DBF6BA6d0      INUSE     currently in use


I expect to have 14*931/1024=12.7TB of zpool space, but actually it only
has 12.6TB of zpool space:

# zpool list
NAME   SIZE   USED  AVAILCAP  HEALTH  ALTROOT
datapool  12.6T  9.96T  2.66T78%  ONLINE  -
#

And I expect the zfs usable space to be 12*931/1024=10.91TB, but actually
it only has 10.58TB of zfs space.
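
For reference, the expected figures above in shell form (a sketch; the remaining
gap is exactly what I am asking about):

# per-disk usable space: 1 TB = 10^12 bytes, expressed in GiB
echo "scale=1; 1000000000000/1024/1024/1024" | bc   # ~931.3 GiB
# raw pool size shown by 'zpool list' (14 disks, parity included, spare excluded)
echo "scale=2; 14*931.3/1024" | bc                  # ~12.73 TiB
# usable space after raidz1 parity (2 vdevs x 6 data disks)
echo "scale=2; 12*931.3/1024" | bc                  # ~10.91 TiB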


Can anyone explain where the disk space goes?

Regards,
Ding
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] question about COW and snapshots

2011-06-14 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Simon Walter
> 
> I'm looking to create a NAS with versioning for non-technical users
> (Windows and Mac). I want the users to be able to simply save a file,
> and a revision/snapshot is created. I could use a revision control
> software like SVN (it has autoversioning with WebDAV), and I will if
> this filesystem level idea does not pan out. However, since this is a
> NAS for any files(Word, JPEGs, MP3, etc), not just source code, I
> thought doing this at the filesystem level via some kind of COW would be
> better.

CVS works on individual files.
SVN works on a subset of a directory.
ZFS works on whole file systems.

You want to save snapshots or revs of individual files, upon writes.  Saves,
Closes, etc.  The closest I know to get what you want is ... 
Google Docs
Alfresco
Sharepoint

There's probably something... I'm sure this isn't the first time anyone ever
thought of that.  But the answer isn't ZFS.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool/zfs properties in SNMP

2011-06-14 Thread Fred Liu
Hi,

Has anyone successfully polled the zpool/zfs properties through SNMP?
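
The only approach I can think of is exposing command output through net-snmp's
extend mechanism in snmpd.conf (a sketch, untested; the config path varies by
platform):

# expose pool health and capacity as strings under NET-SNMP-EXTEND-MIB
extend zpool-health   /usr/sbin/zpool list -H -o name,health
extend zpool-capacity /usr/sbin/zpool list -H -o name,cap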

Thanks.

Fred
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] The length of zpool history

2011-06-14 Thread Fred Liu
I assume the history is stored in the pool metadata. Is it possible to configure
how much history can be stored/displayed?
I know it is doable via external/additional automation, like porting it to a
database.
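
For example, a minimal sketch of such external automation (pool name and log path
are made up):

# daily cron job: keep a permanent copy of the pool history, including
# internal events (-i) and user/host details (-l)
0 0 * * * /usr/sbin/zpool history -il datapool >> /var/log/zpool-history-datapool.log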

Thanks.

Fred
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import hangs - pls help

2011-06-14 Thread Nathan Kroenert

 Hi Max,

Unhelpful questions about your CPU aside, what else is your box doing?

Can you run up a second or third shell (ssh or whatever) and watch if 
the disks / system are doing any work?


Were it Solaris, I'd run:
iostat -x
prstat -a
vmstat
mpstat (Though as discussed, you have only a single core CPU)
echo "::memstat" | mdb -k  (No idea how you might do that in BSD)

Some other things to think about:
 - Have you tried removing the extra memory? I have indeed seen crappy
PC hardware where more than 3GB caused some really bad behaviour in
Solaris.
 - Have you tried booting into a current Solaris (from CD) and seeing 
if it can import the pool? (Don't upgrade - just import) ;)



I'm aware that there were some long import issues discussed on the list
recently - someone had an import take some 12 hours or more - so it would be
worth looking over the last few weeks' posts.


Also - getting a truss or pstack (if FreeBSD has that?) of the process
trying to initiate the import might help some of the more serious folks
on the list to see where it's getting stuck. (Or if, indeed, it's
actually getting stuck, and not simply catastrophically slow.)
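
On Solaris that would look something like this (PID is made up); I believe
FreeBSD has truss as well, and procstat can dump kernel stacks:

# user-level stack and system-call trace of the hung import (Solaris)
pstack 1234
truss -p 1234
# FreeBSD: kernel stacks of the process, useful if it is stuck in the kernel
procstat -kk 1234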


Hope this helps at least a little.

Cheers,

Nathan.

On 06/14/11 03:20 PM, Maximilian Sarte wrote:

Hi,
   I am posting here in a tad of desperation. FYI, I am running FreeNAS 8.0.
Anyhow, I created a raidz1 (tank1) with 4 x 2Tb WD EARS hdds.
All was doing ok until I decided to up the RAM to 4 GB, since that is what was
recommended. As soon as I re-started data migration, ZFS issued messages
indicating that the pool was unavailable and froze the system.
After reboot (FN is based in FreeBSD) and re-installing FN (did not want to 
complete booting - probably a corruption on the USB stick it was running from), 
tank1 was unavailable.
Status indicates that there are no pools, as does List.
Import indicates that tank1 is OK and all 4 hdds are ONLINE and their status 
seems OK.
When I try either:

zpool import tank1
zpool import -f tank1
zpool import -fF tank1

the commands simply hang forever (FreeNAS seems OK).

Any suggestions would be immensely appreciated.
Tx!


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Fred Liu
What is the difference between warm spares and hot spares?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import hangs any zfs-related programs, eats all RAM and dies in swapping hell

2011-06-14 Thread Jim Klimov

2011-06-15 0:16, Frank Van Damme wrote:

2011/6/10 Tim Cook:

While your memory may be sufficient, that cpu is sorely lacking.  Is it even
64bit?  There's a reason intel couldn't give those things away in the early
2000s and amd was eating their lunch.

A Pentium 4 is 32-bit.



Technically, this is not a false statement. The first P4s were 32-bit.
Then they weren't. They did or did not support virtualization or
whatever newer features popped up, but they became 64-bit capable
quite a long time ago.

The one in question is 64-bit dual-core without virtualization
or hyperthreading:
http://ark.intel.com/Product.aspx?id=27512

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import hangs any zfs-related programs, eats all RAM and dies in swapping hell

2011-06-14 Thread Tim Cook
On Tue, Jun 14, 2011 at 3:16 PM, Frank Van Damme
wrote:

> 2011/6/10 Tim Cook :
> > While your memory may be sufficient, that cpu is sorely lacking.  Is it
> even
> > 64bit?  There's a reason intel couldn't give those things away in the
> early
> > 2000s and amd was eating their lunch.
>
> A Pentium 4 is 32-bit.
>
>  



EM64T was added to the Pentium 4 architecture with the "D" nomenclature,
which is what he has.

--Tim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zpool import hangs any zfs-related programs, eats all RAM and dies in swapping hell

2011-06-14 Thread Frank Van Damme
2011/6/10 Tim Cook :
> While your memory may be sufficient, that cpu is sorely lacking.  Is it even
> 64bit?  There's a reason intel couldn't give those things away in the early
> 2000s and amd was eating their lunch.

A Pentium 4 is 32-bit.

-- 
Frank Van Damme
No part of this copyright message may be reproduced, read or seen,
dead or alive or by any means, including but not limited to telepathy
without the benevolence of the author.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Erik Trimble

On 6/14/2011 12:50 PM, Roy Sigurd Karlsbakk wrote:

Are there estimates on how performant and stable would
it be to run VirtualBox with a Solaris-derived NAS with
dedicated hardware disks, and use that from the same
desktop? I did actually suggest this as a considered
variant as well ;)

I am going to try and build such a VirtualBox for my ailing
HomeNAS as well - so it would import that iSCSI "dcpool"
and try to process its defer-free blocks. At least if the
hardware box doesn't stall so that a human has to be
around to go and push reset, this would be a more
viable solution for my repair-reboot cycles...

If you want good performance and ZFS, I'd suggest using something like 
OpenIndiana or Solaris 11EX or perhaps FreeBSD for the host and VirtualBox for 
a linux guest if that's needed. Doing so, you'll get good I/O performance, and 
you can use the operating system or distro you like for the rest of the 
services.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/

The other option is to make sure you have a newer CPU that supports 
Virtualized I/O. I'd have to look at the desktop CPUs, but all Intel 
Nehalem and later CPUs have this feature, and I'm pretty sure all AMD 
Magny-Cours and later CPUs do also.


Without V-IO, doing anything that pounds on a disk under *any* 
Virtualization product is sure to make you cry.


--
Erik Trimble
Java Platform Group Infrastructure
Mailstop:  usca22-317
Phone:  x17195
Santa Clara, CA
Timezone: US/Pacific (UTC-0800)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Roy Sigurd Karlsbakk
> Are there estimates on how performant and stable would
> it be to run VirtualBox with a Solaris-derived NAS with
> dedicated hardware disks, and use that from the same
> desktop? I did actually suggest this as a considered
> variant as well ;)
> 
> I am going to try and build such a VirtualBox for my ailing
> HomeNAS as well - so it would import that iSCSI "dcpool"
> and try to process its defer-free blocks. At least if the
> hardware box doesn't stall so that a human has to be
> around to go and push reset, this would be a more
> viable solution for my repair-reboot cycles...

If you want good performance and ZFS, I'd suggest using something like 
OpenIndiana or Solaris 11 Express or perhaps FreeBSD for the host and VirtualBox for 
a Linux guest if that's needed. Doing so, you'll get good I/O performance, and 
you can use the operating system or distro you like for the rest of the 
services.

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms of
foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Sriram Narayanan
I just learned from the Phoronix website that KQ Infotech has stopped
working on ZFS for Linux, but that their github repo is still active.

Also, zfsonlinux.org mentioned earlier on this mail thread is seeing
active development.

-- Sriram

On 6/14/11, Sriram Narayanan  wrote:
> There's also ZFS from KQInfotech.
>
> -- Sriram
>
> On 6/14/11, David Magda  wrote:
>> On Tue, June 14, 2011 08:15, Jim Klimov wrote:
>>> Hello,
>>>
>>>A college friend of mine is using Debian Linux on his desktop,
>>> and wondered if he could tap into ZFS goodness without adding
>>> another server in his small quiet apartment or changing the
>>> desktop OS. According to his research, there are some kernel
>>> modules for Debian which implement ZFS, or a FUSE variant.
>>
>> Besides FUSE, there's also this:
>>
>> http://zfsonlinux.org/
>>
>> Btrfs also has many ZFS-like features:
>>
>> http://en.wikipedia.org/wiki/Btrfs
>>
>>>Can anyone comment how stable and functional these are?
>>> Performance is a secondary issue, as long as it does not
>>> lead to system crashes due to timeouts, etc. ;)
>>
>> A better bet would probably be to check out the lists of the porting
>> projects themselves. Most of the folks on zfs-discuss are probably people
>> that use ZFS on platforms that have more official support for it
>> (OpenSolaris-based stuff and FreeBSD).
>>
>>
>> ___
>> zfs-discuss mailing list
>> zfs-discuss@opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
>
> --
> Sent from my mobile device
>
> ==
> Belenix: www.belenix.org
>

-- 
Sent from my mobile device

==
Belenix: www.belenix.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk replacement need to scan full pool ?

2011-06-14 Thread Bill Sommerfeld
On 06/14/11 04:15, Rasmus Fauske wrote:
> I want to replace some slow consumer drives with new edc re4 ones but
> when I do a replace it needs to scan the full pool and not only that
> disk set (or just the old drive)
> 
> Is this normal ? (the speed is always slow in the start so thats not
> what I am wondering about, but that it needs to scan all of my 18.7T to
> replace one drive)

This is normal.  The resilver is not reading all data blocks; it's
reading all of the metadata blocks which contain one or more block
pointers, which is the only way to find all the allocated data (and in
the case of raidz, know precisely how it's spread and encoded across the
members of the vdev).  And it's reading all the data blocks needed to
reconstruct the disk to be replaced.
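
One way to confirm that only the affected vdev is doing the heavy reading is the
per-vdev form of the command Edward suggested elsewhere in this thread:

# per-vdev I/O statistics every 30 seconds while the resilver runs
zpool iostat -v tank 30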

- Bill




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk replacement need to scan full pool ?

2011-06-14 Thread Roy Sigurd Karlsbakk
> Hi,
> 
> I want to replace some slow consumer drives with new edc re4 ones but
> when I do a replace it needs to scan the full pool and not only that
> disk set (or just the old drive)
> 
> Is this normal ? (the speed is always slow in the start so thats not
> what I am wondering about, but that it needs to scan all of my 18.7T
> to replace one drive)

I had a pool where three VDEVs got full, and I replaced the 2TB (green)
members with new and faster 3TB drives. To do this, I replaced one drive in
each VDEV at a time. This worked well for me. Just make sure you have a backup
in case the shit hits the fan.
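
In command terms that one-at-a-time procedure is just the following (device names
borrowed from the status output earlier in the thread; wait for each resilver to
finish before starting the next one):

# replace the old drive with the new one in the same vdev
zpool replace tank c10t35d0 c10t22d0
# watch progress; move on to the next vdev only when this one completes
zpool status tank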

Vennlige hilsener / Best regards

roy
--
Roy Sigurd Karlsbakk
(+47) 97542685
r...@karlsbakk.net
http://blogg.karlsbakk.net/
--
In all pedagogy it is essential that the curriculum be presented intelligibly. It
is an elementary imperative for all pedagogues to avoid excessive use of idioms of
foreign origin. In most cases, adequate and relevant synonyms exist in Norwegian.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk replacement need to scan full pool ?

2011-06-14 Thread Stephan Budach

On 14.06.11 15:12, Rasmus Fauske wrote:

On 14.06.2011 14:06, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Rasmus Fauske

I want to replace some slow consumer drives with new edc re4 ones but
when I do a replace it needs to scan the full pool and not only that
disk set (or just the old drive)

Is this normal ? (the speed is always slow in the start so thats not
what I am wondering about, but that it needs to scan all of my 18.7T to
replace one drive)

The disk config:
   pool: tank
  state: ONLINE
 status: One or more devices is currently being resilvered.  The pool will
         continue to function, possibly in a degraded state.
 action: Wait for the resilver to complete.
   scan: resilver in progress since Tue Jun 14 10:03:37 2011
         16.8G scanned out of 18.7T at 1.52M/s, (scan is slow, no estimated time)
         388M resilvered, 0.09% done
 config:

         NAME             STATE     READ WRITE CKSUM
         tank             ONLINE       0     0     0
           raidz2-1       ONLINE       0     0     0
             c10t21d0     ONLINE       0     0     0
             replacing-1  ONLINE       0     0     0
               c10t35d0   ONLINE       0     0     0
               c10t22d0   ONLINE       0     0     0  (resilvering)
             c10t23d0     ONLINE       0     0     0
             c10t24d0     ONLINE       0     0     0
             c10t25d0     ONLINE       0     0     0
             c10t26d0     ONLINE       0     0     0
             c10t27d0     ONLINE       0     0     0

Only this raidz2 vdev is resilvering.  The other raidz2-vdev's are idle.
You can verify this with the command:
zpool iostat 30


Yes but still it wants to scan all of the data in the pool:

action: Wait for the resilver to complete.
scan: resilver in progress since Tue Jun 14 10:03:37 2011
   16.8G scanned out of 18.7T at 1.52M/s, (scan is slow, no estimated time)


Each raidz is 7 x 1TB disks, around 50% filled. So should it not only 
scan that data and not the full pool that holds around 18T ?
This is what I experienced as well. I had a zpool made out of 16 mirrors;
in that zpool, 12 mirrors were made out of 1 TB drives, while 4 mirrors
were made out of 2 TB drives.
When I started to swap out the 1 TB drives for 2 TB ones, zfs didn't read
just from the corresponding drive of the mirrored vdev, but from all
drives in the zpool.


iostat showed that all disks were nearly equally busy. The scan started
very slowly and picked up speed after some time.
Interestingly, I was able to swap a number of 1 TB drives for 2 TB ones -
I think I had up to 8 drives in progress at once - so I was able to
complete the whole task in less than 48 hrs.



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Jim Klimov

2011-06-14 21:38, Marty Scholes wrote:

Just for completeness, there is also VirtualBox which runs Solaris nicely.

Are there estimates of how performant and stable it would
be to run VirtualBox with a Solaris-derived NAS with
dedicated hardware disks, and use that from the same
desktop? I did actually suggest this as a considered
variant as well ;)

I am going to try and build such a VirtualBox for my ailing
HomeNAS as well - so it would import that iSCSI "dcpool"
and try to process its defer-free blocks. At least if the
hardware box doesn't stall so that a human has to be
around to go and push reset, this would be a more
viable solution for my repair-reboot cycles...

//Jim

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Richard Elling

On Jun 14, 2011, at 10:38 AM, Jim Klimov wrote:

> 2011-06-14 19:23, Richard Elling wrote:
>> On Jun 14, 2011, at 5:18 AM, Jim Klimov wrote:
>> 
>>> Hello all,
>>> 
>>>  Is there any sort of a "Global Hot Spare" feature in ZFS,
>>> i.e. that one sufficiently-sized spare HDD would automatically
>>> be pulled into any faulted pool on the system?
>> Yes. See the ZFS Admin Guide section on Designating Hot Spares in Your 
>> Storage Pool.
>>  -- richard
> Yes, thanks. I've read that, but the guide and its examples
> only refer to hot spares for a certain single pool, except
> in the starting phrase: "The hot spares feature enables you
> to identify disks that could be used to replace a failed or
> faulted device in one or more storage pools. "
> 
> Further on in the examples, hot spares are added to specific
> pools, etc.
> 
> Can the same spare be added to several pools?

Yes
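
In practice that is just adding the same device as a spare to each pool, e.g.
(made-up pool and device names):

zpool add pool1 spare c0t9d0
zpool add pool2 spare c0t9d0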

> 
> So far it is a theoretical question - I don't have a box with
> several pools and a spare disk in place at once, which I'd
> want to sacrifice to uncertain experiments ;)

ZFS won't let you intentionally compromise a pool without warning.

Now, as to whether a hot spare for multiple pools is a good thing, that
depends on many factors. I find for most systems I see today, warm
spares are superior to hot spares.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Jim Klimov

2011-06-14 19:23, Richard Elling wrote:

On Jun 14, 2011, at 5:18 AM, Jim Klimov wrote:


Hello all,

  Is there any sort of a "Global Hot Spare" feature in ZFS,
i.e. that one sufficiently-sized spare HDD would automatically
be pulled into any faulted pool on the system?

Yes. See the ZFS Admin Guide section on Designating Hot Spares in Your Storage 
Pool.
  -- richard

Yes, thanks. I've read that, but the guide and its examples
only refer to hot spares for a certain single pool, except
in the starting phrase: "The hot spares feature enables you
to identify disks that could be used to replace a failed or
faulted device in one or more storage pools. "

Further on in the examples, hot spares are added to specific
pools, etc.

Can the same spare be added to several pools?

So far it is a theoretical question - I don't have a box with
several pools and a spare disk in place at once, which I'd
want to sacrifice to uncertain experiments ;)

//Jim


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Marty Scholes
Just for completeness, there is also VirtualBox which runs Solaris nicely.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] question about COW and snapshots

2011-06-14 Thread Simon Walter
I'm looking to create a NAS with versioning for non-technical users 
(Windows and Mac). I want the users to be able to simply save a file, 
and a revision/snapshot is created. I could use a revision control 
software like SVN (it has autoversioning with WebDAV), and I will if 
this filesystem level idea does not pan out. However, since this is a 
NAS for any files (Word, JPEGs, MP3, etc.), not just source code, I 
thought doing this at the filesystem level via some kind of COW would be 
better.


So now my (ignorant) question: can ZFS make a snapshot every time it's 
written to? Can all writes be available as snapshots so all previous 
"versions" are available? Or does one need to explicitly create a snapshot?


I've read about auto-snapshot. But that seems like a nice cron job 
(scheduled snapshots). Am I mistaken?


I looked into ext3cow, next3, and then some. ZFS seems the most stable 
and ready for use. However, is the above possible? Or how is it 
possible? Is there a better way of doing this? Other filesystems?


Thanks for ideas and suggestions,

Simon
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-06-14 Thread Paul Kraus
On Tue, Jun 14, 2011 at 11:48 AM, Eric D. Mudama
 wrote:
> On Tue, Jun 14 at  8:04, Paul Kraus wrote:

>> I saw some stats a year or more ago that indicated the MTTDL for raidZ2
>> was better than for a 2-way mirror. In order of best to worst I
>> remember the rankings as:
>>
>> raidZ3 (least likely to lose data)
>> 3-way mirror
>> raidZ2
>> 2-way mirror
>> raidZ1 (most likely to lose data)

> Google "mttdl raidz zfs" digs up:
>
> http://blogs.oracle.com/relling/entry/zfs_raid_recommendations_space_performance
> http://blogs.oracle.com/relling/entry/raid_recommendations_space_vs_mttdl
> http://blog.richardelling.com/2010/02/zfs-data-protection-comparison.html
>
> I think the second picture is the one you were thinking of.  The 3rd
> link adds raidz3 data to the charts.

Yup, that was it, although I got the information via a Sun FE and not directly.

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-06-14 Thread Eric D. Mudama

On Tue, Jun 14 at  8:04, Paul Kraus wrote:

On Mon, Jun 13, 2011 at 6:01 PM, Erik Trimble  wrote:


I'd have to re-look at the exact numbers, but, I'd generally say that
2x6raidz2 vdevs would be better than either 1x12raidz3 or 4x3raidz1 (or
3x4raidz1, for a home server not looking for super-critical protection (in
which case, you should be using mirrors with spares, not raidz*).


I saw some stats a year or more ago that indicated the MTTDL for raidZ2
was better than for a 2-way mirror. In order of best to worst I
remember the rankings as:

raidZ3 (least likely to lose data)
3-way mirror
raidZ2
2-way mirror
raidZ1 (most likely to lose data)

This is for Mean Time to Data Loss, or essentially the odds of losing
_data_ due to one (or more) drive failures. I do not know if this took
number of devices per vdev and time to resilver into account.
Non-redundant configurations were not even discussed. This information
came out of Sun (pre-Oracle) and _may_ have been traceable back to
Brendan Gregg.


Google "mttdl raidz zfs" digs up:

http://blogs.oracle.com/relling/entry/zfs_raid_recommendations_space_performance
http://blogs.oracle.com/relling/entry/raid_recommendations_space_vs_mttdl
http://blog.richardelling.com/2010/02/zfs-data-protection-comparison.html

I think the second picture is the one you were thinking of.  The 3rd
link adds raidz3 data to the charts.



--
Eric D. Mudama
edmud...@bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread Sriram Narayanan
There's also ZFS from KQInfotech.

-- Sriram

On 6/14/11, David Magda  wrote:
> On Tue, June 14, 2011 08:15, Jim Klimov wrote:
>> Hello,
>>
>>A college friend of mine is using Debian Linux on his desktop,
>> and wondered if he could tap into ZFS goodness without adding
>> another server in his small quiet apartment or changing the
>> desktop OS. According to his research, there are some kernel
>> modules for Debian which implement ZFS, or a FUSE variant.
>
> Besides FUSE, there's also this:
>
> http://zfsonlinux.org/
>
> Btrfs also has many ZFS-like features:
>
> http://en.wikipedia.org/wiki/Btrfs
>
>>Can anyone comment how stable and functional these are?
>> Performance is a secondary issue, as long as it does not
>> lead to system crashes due to timeouts, etc. ;)
>
> A better bet would probably be to check out the lists of the porting
> projects themselves. Most of the folks on zfs-discuss are probably people
> that use ZFS on platforms that have more official support for it
> (OpenSolaris-based stuff and FreeBSD).
>
>
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>

-- 
Sent from my mobile device

==
Belenix: www.belenix.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] # disks per vdev

2011-06-14 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lanky Doodle
> 
> But as it is for a home media server, which is mainly WORM access and will be
> storing (legal!) DVD/Blu-ray rips, I'm not so sure I can sacrifice the space.

For your purposes, raidzN will work very well.  And since you're going to
sequentially write your data once initially and leave it in place, even the
resilver should perform pretty well.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] zfs global hot spares?

2011-06-14 Thread Richard Elling
On Jun 14, 2011, at 5:18 AM, Jim Klimov wrote:

> Hello all,
> 
>  Is there any sort of a "Global Hot Spare" feature in ZFS,
> i.e. that one sufficiently-sized spare HDD would automatically
> be pulled into any faulted pool on the system?

Yes. See the ZFS Admin Guide section on Designating Hot Spares in Your Storage 
Pool.
 -- richard

> 
>  So far I saw adding hot spares dedicated to a certain pool.
> And perhaps scripts to detach-attach hotspares between pools
> when bad things happen - but that takes some latency (parsing
> logs or cronjobs with "zpool status" output - if that does
> not hang on problems, etc.)
> 
>  Are there any kernel-side implementations?
> 
> Thanks,
> //Jim Klimov
> 
> ___
> zfs-discuss mailing list
> zfs-discuss@opensolaris.org
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] # disks per vdev

2011-06-14 Thread Lanky Doodle
Thanks martysch.

That is what I meant about adding disks to vdevs - not adding disks to vdevs 
but adding vdevs to pools.

If the geometry of the vdevs should ideally be the same, it would make sense to 
buy one more disk now and have a 7 disk raid-z2 to start with, then buy disks 
as and when and create a further 7 disk raid-z2 leaving the 15th disk as a hot 
spare. Would 'only' give 10TB usable though.

The only thing, though: I seem to remember reading that when vdevs are added to a
pool long after its creation, once data has already been written, things
aren't spread evenly - is that right? So it might actually make sense to buy
all the disks now and start fresh with the final build.

Starting with only 6 disks would leave growth for another 6 disk raid-z2 (to 
keep matching geometry) leaving 3 disks spare which is not ideal.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] # disks per vdev

2011-06-14 Thread Marty Scholes
I am assuming you will put all of the vdevs into a single pool, which is a 
good idea unless you have a specific reason for keeping them separate, e.g. you 
want to be able to destroy / rebuild a particular vdev while leaving the others 
intact.

Fewer disks per vdev implies more vdevs, providing better random performance, 
lower scrub and resilver times and the ability to expand a vdev by replacing 
only the few disks in it.

The downside of more vdevs is that you dedicate your parity to each vdev, e.g. 
a RAIDZ2 would need two parity disks per vdev.

> I'm in two minds with mirrors. I know they provide
> the best performance and protection, and if this was
> a business critical machine I wouldn't hesitate.
> 
> But as it is for a home media server, which is mainly
> WORM access and will be storing (legal!) DVD/Blu-ray
> rips, I'm not so sure I can sacrifice the space.

For a home media server, all accesses are essentially sequential, so random 
performance should not be a deciding factor.

> 7x 2 way mirrors would give me 7TB usable with 1 hot
> spare, using 1TB disks, which is a big drop from
> 12TB! I could always jump to 2TB disks giving me 14TB
> usable but I already have 6x 1TB disks in my WHS
> build which i'd like to re-use.

I would be tempted to start with a 4+2 (six disk RAIDZ2) vdev using your 
current disks and plan from there.  There is no reason you should feel 
compelled to buy more 1TB disks just because you already have some.

> Am I right in saying that single disks cannot be
> added to a raid-z* vdev so a minimum of 3 would be
> required each time. However a mirror is just 2 disks
> so if adding disks over a period of time mirrors
> would be cheaper each time.

That is not correct.  You cannot ever add disks to a vdev.  Well, you can add 
additional disks to a mirror vdev, but otherwise, once you set the geometry, a 
vdev is stuck for life.

However, you can add any vdev you want to an existing pool.  You can take a 
pool with a single vdev set up as a 6x RAIDZ2 and add a single disk to that 
pool.  The previous example is a horrible idea because it makes the entire pool 
dependent upon a single disk.  The example also illustrates that you can add 
any type of vdev to a pool.

Most agree it is best to make the pool from vdevs of identical geometry, but 
that is not enforced by zfs.
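
A small sketch of that growth path (made-up device names): start with one 4+2
RAIDZ2 vdev, and later grow the pool by adding a second vdev of identical
geometry.

# initial pool: a single 6-disk raidz2 vdev
zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
# later: add another raidz2 vdev of the same geometry to the same pool
zpool add tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0
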
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk replacement need to scan full pool ?

2011-06-14 Thread Rasmus Fauske

On 14.06.2011 14:06, Edward Ned Harvey wrote:

From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
boun...@opensolaris.org] On Behalf Of Rasmus Fauske

I want to replace some slow consumer drives with new edc re4 ones but
when I do a replace it needs to scan the full pool and not only that
disk set (or just the old drive)

Is this normal ? (the speed is always slow in the start so thats not
what I am wondering about, but that it needs to scan all of my 18.7T to
replace one drive)

The disk config:
pool: tank
   state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
  continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
   scan: resilver in progress since Tue Jun 14 10:03:37 2011
  16.8G scanned out of 18.7T at 1.52M/s, (scan is slow, no estimated
time)
  388M resilvered, 0.09% done
config:

  NAME STATE READ WRITE CKSUM
  tank ONLINE   0 0 0
raidz2-1   ONLINE   0 0 0
  c10t21d0 ONLINE   0 0 0
  replacing-1  ONLINE   0 0 0
c10t35d0   ONLINE   0 0 0
c10t22d0   ONLINE   0 0 0  (resilvering)
  c10t23d0 ONLINE   0 0 0
  c10t24d0 ONLINE   0 0 0
  c10t25d0 ONLINE   0 0 0
  c10t26d0 ONLINE   0 0 0
  c10t27d0 ONLINE   0 0 0

Only this raidz2 vdev is resilvering.  The other raidz2-vdev's are idle.
You can verify this with the command:
zpool iostat 30


Yes but still it wants to scan all of the data in the pool:

action: Wait for the resilver to complete.
scan: resilver in progress since Tue Jun 14 10:03:37 2011
   16.8G scanned out of 18.7T at 1.52M/s, (scan is slow, no estimated  time)

Each raidz is 7 x 1TB disks, around 50% filled. So shouldn't it only scan that 
data and not the full pool, which holds around 18T?

--
Rasmus F



___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool import hangs - pls help

2011-06-14 Thread Maximilian Sarte
Hi,
  I am posting here in a tad of desperation. FYI, I am running FreeNAS 8.0.
Anyhow, I created a raidz1 (tank1) with 4 x 2Tb WD EARS hdds.
All was doing ok until I decided to up the RAM to 4 GB, since that is what was 
recommended. As soon as I re-started data migration, ZFS issued messages 
indicating that the pool was unavailable and froze the system. 
After reboot (FN is based in FreeBSD) and re-installing FN (did not want to 
complete booting - probably a corruption on the USB stick it was running from), 
tank1 was unavailable.
Status indicates that there are no pools, as does List.
Import indicates that tank1 is OK and all 4 hdds are ONLINE and their status 
seems OK.
When I try either:

zpool import tank1
zpool import -f tank1
zpool import -fF tank1

the commands simply hang forever (FreeNAS seems OK).

Any suggestions would be immensely appreciated.
Tx!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] ZFS for Linux?

2011-06-14 Thread David Magda
On Tue, June 14, 2011 08:15, Jim Klimov wrote:
> Hello,
>
>A college friend of mine is using Debian Linux on his desktop,
> and wondered if he could tap into ZFS goodness without adding
> another server in his small quiet apartment or changing the
> desktop OS. According to his research, there are some kernel
> modules for Debian which implement ZFS, or a FUSE variant.

Besides FUSE, there's also this:

http://zfsonlinux.org/

Btrfs also has many ZFS-like features:

http://en.wikipedia.org/wiki/Btrfs

>Can anyone comment how stable and functional these are?
> Performance is a secondary issue, as long as it does not
> lead to system crashes due to timeouts, etc. ;)

A better bet would probably be to check out the lists of the porting
projects themselves. Most of the folks on zfs-discuss are probably people
that use ZFS on platforms that have more official support for it
(OpenSolaris-based stuff and FreeBSD).


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] # disks per vdev

2011-06-14 Thread Lanky Doodle
Thanks Edward.

I'm in two minds with mirrors. I know they provide the best performance and 
protection, and if this was a business critical machine I wouldn't hesitate.

But as it is for a home media server, which is mainly WORM access and will be 
storing (legal!) DVD/Blu-ray rips, I'm not so sure I can sacrifice the space.

7x 2-way mirrors would give me 7TB usable with 1 hot spare, using 1TB disks, 
which is a big drop from 12TB! I could always jump to 2TB disks, giving me 14TB 
usable, but I already have 6x 1TB disks in my WHS build which I'd like to re-use.

Hmmm!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zfs global hot spares?

2011-06-14 Thread Jim Klimov

Hello all,

  Is there any sort of a "Global Hot Spare" feature in ZFS,
i.e. that one sufficiently-sized spare HDD would automatically
be pulled into any faulted pool on the system?

  So far I have only seen hot spares dedicated to a certain pool.
And perhaps scripts to detach/attach hot spares between pools
when bad things happen - but that adds some latency (parsing
logs or cron jobs with "zpool status" output - if that does
not hang on problems, etc.)

  Are there any kernel-side implementations?

Thanks,
//Jim Klimov

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] ZFS for Linux?

2011-06-14 Thread Jim Klimov

Hello,

  A college friend of mine is using Debian Linux on his desktop,
and wondered if he could tap into ZFS goodness without adding
another server in his small quiet apartment or changing the
desktop OS. According to his research, there are some kernel
modules for Debian which implement ZFS, or a FUSE variant.

  Can anyone comment how stable and functional these are?
Performance is a secondary issue, as long as it does not
lead to system crashes due to timeouts, etc. ;)

Thanks,
//Jim Klimov

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Disk replacement need to scan full pool ?

2011-06-14 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Rasmus Fauske
> 
> I want to replace some slow consumer drives with new edc re4 ones but
> when I do a replace it needs to scan the full pool and not only that
> disk set (or just the old drive)
> 
> Is this normal ? (the speed is always slow in the start so thats not
> what I am wondering about, but that it needs to scan all of my 18.7T to
> replace one drive)
> 
> The disk config:
>pool: tank
>   state: ONLINE
> status: One or more devices is currently being resilvered.  The pool will
>  continue to function, possibly in a degraded state.
> action: Wait for the resilver to complete.
>   scan: resilver in progress since Tue Jun 14 10:03:37 2011
>  16.8G scanned out of 18.7T at 1.52M/s, (scan is slow, no estimated
> time)
>  388M resilvered, 0.09% done
> config:
> 
>  NAME STATE READ WRITE CKSUM
>  tank ONLINE   0 0 0
>raidz2-0   ONLINE   0 0 0
>  c10t14d0 ONLINE   0 0 0
>  c10t15d0 ONLINE   0 0 0
>  c10t16d0 ONLINE   0 0 0
>  c10t17d0 ONLINE   0 0 0
>  c10t18d0 ONLINE   0 0 0
>  c10t19d0 ONLINE   0 0 0
>  c10t20d0 ONLINE   0 0 0
>raidz2-1   ONLINE   0 0 0
>  c10t21d0 ONLINE   0 0 0
>  replacing-1  ONLINE   0 0 0
>c10t35d0   ONLINE   0 0 0
>c10t22d0   ONLINE   0 0 0  (resilvering)
>  c10t23d0 ONLINE   0 0 0
>  c10t24d0 ONLINE   0 0 0
>  c10t25d0 ONLINE   0 0 0
>  c10t26d0 ONLINE   0 0 0
>  c10t27d0 ONLINE   0 0 0

Only this raidz2 vdev is resilvering.  The other raidz2-vdev's are idle.
You can verify this with the command:
zpool iostat 30

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] optimal layout for 8x 1 TByte SATA (consumer)

2011-06-14 Thread Paul Kraus
On Mon, Jun 13, 2011 at 6:01 PM, Erik Trimble  wrote:

> I'd have to re-look at the exact numbers, but, I'd generally say that
> 2x6raidz2 vdevs would be better than either 1x12raidz3 or 4x3raidz1 (or
> 3x4raidz1, for a home server not looking for super-critical protection (in
> which case, you should be using mirrors with spares, not raidz*).

I saw some stats a year or more ago that indicated the MTTDL for raidZ2
was better than for a 2-way mirror. In order of best to worst I
remember the rankings as:

raidZ3 (least likely to lose data)
3-way mirror
raidZ2
2-way mirror
raidZ1 (most likely to lose data)

This is for Mean Time to Data Loss, or essentially the odds of losing
_data_ due to one (or more) drive failures. I do not know if this took
number of devices per vdev and time to resilver into account.
Non-redundant configurations were not even discussed. This information
came out of Sun (pre-Oracle) and _may_ have been traceable back to
Brendan Gregg.
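
For reference, the commonly cited first-order MTTDL approximations behind
rankings like this (a sketch only: they ignore unrecoverable read errors and
assume independent failures, with N disks per vdev) are roughly

  MTTDL_{single\;parity} \approx \frac{\mathrm{MTBF}^2}{N\,(N-1)\,\mathrm{MTTR}}
  \qquad
  MTTDL_{double\;parity} \approx \frac{\mathrm{MTBF}^3}{N\,(N-1)\,(N-2)\,\mathrm{MTTR}^2}

so each extra level of parity multiplies in another factor of MTBF/MTTR, which is
why raidz2 can come out ahead of a 2-way mirror despite the wider vdev.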

-- 
{1-2-3-4-5-6-7-}
Paul Kraus
-> Senior Systems Architect, Garnet River ( http://www.garnetriver.com/ )
-> Sound Coordinator, Schenectady Light Opera Company (
http://www.sloctheater.org/ )
-> Technical Advisor, RPI Players
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] # disks per vdev

2011-06-14 Thread Edward Ned Harvey
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Lanky Doodle
> 
> The ZFS install will be mirrored, but I am not sure how to configure the 15
> data disks from a performance (inc. resilvering) vs protection vs usable space
> perspective;
> 
> 3x 5 disk raid-z. 3 disk failures in the right scenario, 12TB storage
> 2x 7 disk raid-z + hot spare. 2 disk failures in the right scenario, 12TB storage
> 1x 15 disk raid-z2. 2 disk failures, 13TB storage
> 2x 7 disk raid-z2 + hot spare. 4 disk failures in the right scenario, 10TB storage

The above all provide highest usable space (lowest hardware cost.)
But if you want performance, go for mirrors.  (highest hardware cost.)

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Disk replacement need to scan full pool ?

2011-06-14 Thread Rasmus Fauske

Hi,

I want to replace some slow consumer drives with new edc re4 ones but 
when I do a replace it needs to scan the full pool and not only that 
disk set (or just the old drive)


Is this normal? (The speed is always slow at the start, so that's not 
what I am wondering about, but rather that it needs to scan all of my 18.7T to 
replace one drive.)


The disk config:
  pool: tank
 state: ONLINE
status: One or more devices is currently being resilvered.  The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
 scan: resilver in progress since Tue Jun 14 10:03:37 2011
 16.8G scanned out of 18.7T at 1.52M/s, (scan is slow, no estimated time)
 388M resilvered, 0.09% done
config:

NAME STATE READ WRITE CKSUM
tank ONLINE   0 0 0
  raidz2-0   ONLINE   0 0 0
c10t14d0 ONLINE   0 0 0
c10t15d0 ONLINE   0 0 0
c10t16d0 ONLINE   0 0 0
c10t17d0 ONLINE   0 0 0
c10t18d0 ONLINE   0 0 0
c10t19d0 ONLINE   0 0 0
c10t20d0 ONLINE   0 0 0
  raidz2-1   ONLINE   0 0 0
c10t21d0 ONLINE   0 0 0
replacing-1  ONLINE   0 0 0
  c10t35d0   ONLINE   0 0 0
  c10t22d0   ONLINE   0 0 0  (resilvering)
c10t23d0 ONLINE   0 0 0
c10t24d0 ONLINE   0 0 0
c10t25d0 ONLINE   0 0 0
c10t26d0 ONLINE   0 0 0
c10t27d0 ONLINE   0 0 0
  raidz2-2   ONLINE   0 0 0
c10t28d0 ONLINE   0 0 0
c10t29d0 ONLINE   0 0 0
c10t30d0 ONLINE   0 0 0
c10t31d0 ONLINE   0 0 0
c10t32d0 ONLINE   0 0 0
c10t33d0 ONLINE   0 0 0
c10t34d0 ONLINE   0 0 0
  raidz2-3   ONLINE   0 0 0
c10t0d0  ONLINE   0 0 0
c10t1d0  ONLINE   0 0 0
c10t2d0  ONLINE   0 0 0
c10t3d0  ONLINE   0 0 0
c10t4d0  ONLINE   0 0 0
c10t5d0  ONLINE   0 0 0
c10t6d0  ONLINE   0 0 0
  raidz2-4   ONLINE   0 0 0
c10t7d0  ONLINE   0 0 0
c10t8d0  ONLINE   0 0 0
c10t9d0  ONLINE   0 0 0
c10t10d0 ONLINE   0 0 0
c10t11d0 ONLINE   0 0 0
c10t12d0 ONLINE   0 0 0
c10t13d0 ONLINE   0 0 0

errors: No known data errors
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] # disks per vdev

2011-06-14 Thread Lanky Doodle
Hiya,

I am just in the planning stages for my ZFS Home Media Server build at the 
moment (to replace WHS v1).

I plan to use 2x motherboard ports and 2x Supermicro AOC-SASLP-MV8 8 port SATA 
cards to give 17* drive connections; 2 disks (120GB SATA 2.5") will be used for 
the ZFS install using the motherboard ports and the remaining 15 disks (1TB SATA) 
will be used for data using the 2x 8 port cards.

* = the total number of ports is 18 but I only have enough space in the chassis 
for 17 drives (2x 2.5" in 1x 3.5" bay and 15x 3.5" by using 5-in-3 hot-swap 
caddies in 9x 5.25" bays).

All disks are 5400RPM to keep power requirements down.

The ZFS install will be mirrored, but I am not sure how to configure the 15 
data disks from a performance (inc. resilvering) vs protection vs usable space 
perspective;

3x 5 disk raid-z. 3 disk failures in the right scenario, 12TB storage
2x 7 disk raid-z + hot spare. 2 disk failures in the right scenario, 12TB 
storage
1x 15 disk raid-z2. 2 disk failures, 13TB storage
2x 7 disk raid-z2 + hot spare. 4 disk failures in the right scenario, 10TB 
storage

Without having a mash of different raid-z* levels I can't think of any other 
options.

I am leaning towards the first option as it gives separation between all the 
disks; I would have separate Movie folders on each of them while having 
critical data (pictures, home videos, documents etc.) stored on each set of 
raid-z.

Suggestions welcomed.

Thanks
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss