[zfs-discuss] SSD ZIL/L2ARC partitioning

2012-11-14 Thread Michel Jansens

Hi,

I've ordered a new server with:
- 4x600GB Toshiba 10K SAS2 Disks
- 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no SAS/SATA
problems). Specs: http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html


I want to use the two OCZ SSDs as mirrored intent log devices, but since
the intent log only needs a small amount of space (10GB?), I was
wondering if I can use the rest of each disk as L2ARC?


I have a few questions about this:

- Is 10GB enough for a log device?
- Can I partition the disks (10GB + 90GB) and use the unused 90GB of
space as L2ARC?
- If I use the rest of the disks as L2ARC, do I have to mirror the
L2ARC, or can I just add the two partitions (e.g. 2 x 90GB = 180GB)?
- If I use non-mirrored L2ARC, could something go wrong if one L2ARC
device failed (pool unavailable, lock in the kernel, panic, ...)?




--
Michel Jansens

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD ZIL/L2ARC partitioning

2012-11-14 Thread Sašo Kiselkov
On 11/14/2012 11:14 AM, Michel Jansens wrote:
> Hi,
>
> I've ordered a new server with:
> - 4x600GB Toshiba 10K SAS2 Disks
> - 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no
> SAS/SATA problems). Specs:
> http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
>
> I want to use the 2 OCZ SSDs as mirrored intent log devices, but as the
> intent log needs quite a small amount of the disks (10GB?), I was
> wondering if I can use the rest of the disks as L2ARC?
>
> I have a few questions about this:
>
> -Is 10GB enough for a log device?

A log device, essentially, only needs to hold a single
transaction's worth of small sync writes, so unless you write more than
that, you'll be fine. In fact, DDRdrive's X1 is only 4GB and works just
fine.

> -Can I partition the disks (10GB + 90 GB) and use the unused (90GB)
> space as L2ARC?

Yes, you can.
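
For example, with each SSD split into a small s0 slice for the log and a
large s1 slice for cache, something like this would do it (device names
below are made up; use whatever your system actually calls them, and your
real pool name instead of "tank"):

  zpool add tank log mirror c4t0d0s0 c4t1d0s0
  zpool add tank cache c4t0d0s1 c4t1d0s1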

> -If I use the rest of the disks as L2ARC, do I have to mirror the L2ARC
> or can I just add 2 partitions (eg: 2 x 90GB = 180GB)

L2ARC doesn't have mirroring; cache devices are always striped.

> -If I used non mirrored L2ARC, could something go wrong if one L2ARC
> device failed (pool unavailable, lock in the kernel, panic, ...)?

No, an L2ARC only holds non-dirty data from the main storage pool for
quick access, so if a read to an L2ARC device fails (due to it failing,
being removed, or returning bad data), then the read is simply reissued
to the main pool drives transparently (and fault-management kicks in to
handle the problem with the L2ARC, such as taking it offline, etc.).
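
And since cache devices are expendable, you can also drop a dead one from
the pool online and add a replacement later, e.g. (again, made-up names):

  zpool remove tank c4t1d0s1
  zpool add tank cache c4t1d0s1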

Cheers,
--
Saso
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD ZIL/L2ARC partitioning

2012-11-14 Thread Tomas Forsman
On 14 November, 2012 - Michel Jansens sent me these 1,0K bytes:

> Hi,
>
> I've ordered a new server with:
> - 4x600GB Toshiba 10K SAS2 Disks
> - 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no SAS/SATA
> problems). Specs:
> http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
>
> I want to use the 2 OCZ SSDs as mirrored intent log devices, but as the
> intent log needs quite a small amount of the disks (10GB?), I was
> wondering if I can use the rest of the disks as L2ARC?
>
> I have a few questions about this:
>
> -Is 10GB enough for a log device?

Our log device for the department file server, serving roughly 100
workstations etc., seems to hover at about 2MB used. Only sync writes
go there, and it's emptied at the next transaction group.

So check how many sync writes you have per flush (the flush interval is
normally 5 seconds nowadays; it used to be 30, I think). If you are pushing
more than 2GB of sync writes per second, then I think you should get
something beefier ;)
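
A quick way to eyeball your actual sync load is to watch the log vdev while
the normal workload runs, e.g. (assuming the pool is called "tank"):

  zpool iostat -v tank 5

The write bandwidth shown on the log device line, times the flush interval,
gives a rough per-txg figure.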

> -Can I partition the disks (10GB + 90 GB) and use the unused (90GB)
> space as L2ARC?
> -If I use the rest of the disks as L2ARC, do I have to mirror the L2ARC
> or can I just add 2 partitions (eg: 2 x 90GB = 180GB)
> -If I used non mirrored L2ARC, could something go wrong if one L2ARC
> device failed (pool unavailable, lock in the kernel, panic, ...)?

L2ARC contents are checksummed and verified, so it shouldn't be a problem
even if a device fails (it could be a problem if the device is half-failing
and just being slow; if so, get rid of it).



 --
 Michel Jansens

 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


/Tomas
-- 
Tomas Forsman, st...@acc.umu.se, http://www.acc.umu.se/~stric/
|- Student at Computing Science, University of Umeå
`- Sysadmin at {cs,acc}.umu.se
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread dswartz
> What was wrong with the suggestion to use VMWare ESXi and Nexenta or
> OpenIndiana to do this?

Sorry, I don't know which specific post you're referring to. I am already running
ESXi with OI on top and serving storage back to the other guests.  Ned made a good
point that running the virtualization solution on top of the storage host
would eliminate any network traffic for disk access by guests.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread dswartz
> On 11/14/12 15:20, Dan Swartzendruber wrote:
>> Well, I think I give up for now.  I spent quite a few hours over the last
>> couple of days trying to get gnome desktop working on bare-metal OI,
>> followed by virtualbox.  Supposedly that works in headless mode with RDP
>> for management, but nothing but fail for me.  Found quite a few posts on
>> various forums of people complaining that RDP with external auth doesn't
>> work (or not reliably), and that was my experience.  The final straw was
>> when I rebooted the OI server as part of cleaning things up, and... It
>> hung.  Last line in verbose boot log is 'ucode0 is /pseudo/ucode@0'.  I
>> power-cycled it to no avail.  Even tried a backup BE from hours earlier,
>> to no avail.  Likely whatever was bunged happened prior to that.  If I
>> could get something that ran like xen or kvm reliably for a headless
>> setup, I'd be willing to give it a try, but for now, no...
>
> SmartOS.

Interesting, I may take a play with this...

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Karl Wagner
 

On 2012-11-14 12:55, dswa...@druber.com wrote:

>> On 11/14/12 15:20, Dan Swartzendruber wrote:
>>> Well, I think I give up for now. I spent quite a few hours over the last
>>> couple of days trying to get gnome desktop working on bare-metal OI,
>>> followed by virtualbox. Supposedly that works in headless mode with RDP
>>> for management, but nothing but fail for me. Found quite a few posts on
>>> various forums of people complaining that RDP with external auth doesn't
>>> work (or not reliably), and that was my experience. The final straw was
>>> when I rebooted the OI server as part of cleaning things up, and... It
>>> hung. Last line in verbose boot log is 'ucode0 is /pseudo/ucode@0'. I
>>> power-cycled it to no avail. Even tried a backup BE from hours earlier,
>>> to no avail. Likely whatever was bunged happened prior to that. If I
>>> could get something that ran like xen or kvm reliably for a headless
>>> setup, I'd be willing to give it a try, but for now, no...
>>
>> SmartOS.
>
> Interesting, I may take a play with this...

That _does_ look interesting. All I need is some time to try it out. Looks
perfect for me to consolidate my home servers.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Dan Swartzendruber
 

-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
Sent: Tuesday, November 13, 2012 10:08 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Dedicated server running ESXi with no RAID card,
ZFS for storage?

> On 2012-11-14 03:20, Dan Swartzendruber wrote:
>> Well, I think I give up for now.  I spent quite a few hours over the
>> last couple of days trying to get gnome desktop working on bare-metal
>> OI, followed by virtualbox.  Supposedly that works in headless mode
>> with RDP for management, but nothing but fail for me.  Found quite a
>> few posts on various forums of people complaining that RDP with
>> external auth doesn't work (or not reliably), and that was my experience.


> I can't say I used VirtualBox RDP extensively, certainly not in the newer
> 4.x series, yet. For my tasks it sufficed to switch the VM from headless to
> GUI and back via savestate, as automated by my script from vboxsvc (vbox.sh
> -s vmname startgui for a VM config'd as a vboxsvc SMF service already).

*** The thing for me is my wife needs to be able to get to her XP desktop
console when she reboots, since her company mandates encrypted disks, so at
boot time, she gets a PGP prompt before it will even boot :(  Currently she
has two icons on her desktop - an RDP one for her normal production use,
and a VNC one to get to the ESXi machine console.  If I have to diddle
things manually (or show her how to), the WAF will take a big hit.  I can
already imagine the response: everything works just fine now!  Why do you
have to 'improve' things??? :) Most of my frustration is with the apparent
lack of QA at Oracle for basic things like this.  Jim, I did nothing weird
or unusual.  I installed virtualbox from their site, installed their
extensions pack, configured RDP for the 3 guests, fired up RDP client from
windows, and... FAIL.  Google indicates their RDP server does the
authentication somehow differently than Windows expects.  I dunno about that
- all I know is that something basic doesn't work, and searching the web found
literally a dozen threads going back 3-4 years on this same issue.  Things
like not being able to run RDP for a guest if you run it as non-root, since
it can't bind to the default low-numbered port number, so I had to use
higher-numbered ports.  And on and on and on.  Sorry to rant here, just
frustrated.  If I was hacking on some bleeding edge setup, my expectations
are to run into things like this and get lots of paper cuts, but not for
something so basic.

>> The final straw was when I rebooted the OI server as part of cleaning
>> things up, and... It hung.  Last line in verbose boot log is 'ucode0 is
>> /pseudo/ucode@0'.  I power-cycled it to no avail.  Even tried a backup
>> BE from hours earlier, to no avail.  Likely whatever was bunged happened
>> prior to that.  If I could get something that ran like xen or kvm
>> reliably for a headless setup, I'd be willing to give it a try, but for
>> now, no...

> I can't say much about OI desktop problems either - works for me (along with
> VBox 4.2.0 release), suboptimally due to lack of drivers, but reliably.
>
> Try to boot with -k option to use a kmdb debugger as well - maybe the
> system would enter it upon getting stuck (does so instead of rebooting when
> it is panicking) and you can find some more details there?..

*** Well, I still have the OI boot disk, so I may try that sometime this
weekend when I have cycles...  


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Dan Swartzendruber
 

-Original Message-
From: zfs-discuss-boun...@opensolaris.org
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jim Klimov
Sent: Tuesday, November 13, 2012 10:08 PM
To: zfs-discuss@opensolaris.org
Subject: Re: [zfs-discuss] Dedicated server running ESXi with no RAID card,
ZFS for storage?


>> The final straw was when I rebooted the OI server as part of cleaning
>> things up, and... It hung.  Last line in verbose boot log is 'ucode0 is
>> /pseudo/ucode@0'.  I power-cycled it to no avail.  Even tried a backup
>> BE from hours earlier, to no avail.  Likely whatever was bunged happened
>> prior to that.  If I could get something that ran like xen or kvm
>> reliably for a headless setup, I'd be willing to give it a try, but for
>> now, no...

> Try to boot with -k option to use a kmdb debugger as well - maybe the
> system would enter it upon getting stuck (does so instead of rebooting when
> it is panicking) and you can find some more details there?..

*** I just re-read my notes, and in fact I *was* booting with '-k -v', so no
help there, I'm afraid :(


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
> boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
>
> Well, I think I give up for now.  I spent quite a few hours over the last
> couple of days trying to get gnome desktop working on bare-metal OI,
> followed by virtualbox.

I would recommend installing OI desktop, not OI server, because I, too, tried
to get gnome working in OI server, to no avail.  But if you install OI desktop,
it simply goes in, brainlessly simple.


> Found quite a few posts on various forums of people complaining that RDP
> with external auth doesn't work (or not reliably),

Actually, it does work, and it works reliably, but the setup is very much not 
straightforward.  I'm likely to follow up on this later today, because as 
coincidence would have it, this is on my to-do for today.

Right now, I'll say this much:  When you RDP from a windows machine to a
windows machine, you get prompted for password.  Nice, right?  Seems pretty
obvious.   ;-)   But the VirtualBox RDP server doesn't have that capability.
Pt...  You need to enter the username & password into the RDP client, and
save it, before attempting the connection.
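
For what it's worth, the VirtualBox 4.x manual describes a setup roughly
along these lines for VRDP with the bundled VBoxAuthSimple library (the VM
name, user and port below are made up; double-check the exact syntax against
the manual for your version):

  # enable VRDE on a high, unprivileged port (needed when not running as root)
  VBoxManage modifyvm "oi-guest" --vrde on --vrdeport 5010 --vrdeauthtype external
  # use the simple built-in auth library instead of the host's account database
  VBoxManage setproperty vrdeauthlibrary VBoxAuthSimple
  # store a password hash for the user the RDP client will send
  VBoxManage internalcommands passwordhash "secret"
  VBoxManage setextradata "oi-guest" "VBoxAuthSimple/users/myuser" <hash printed above>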


> The final straw was when I
> rebooted the OI server as part of cleaning things up, and... It hung.

Bummer.  That might be some unsupported hardware for running OI.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Dan Swartzendruber
On 11/14/2012 9:44 AM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
>>
>> Well, I think I give up for now.  I spent quite a few hours over the last
>> couple of days trying to get gnome desktop working on bare-metal OI,
>> followed by virtualbox.
>
> I would recommend installing OI desktop, not OI server, because I, too, tried
> to get gnome working in OI server, to no avail.  But if you install OI desktop,
> it simply goes in, brainlessly simple.
Ned, sorry if I was unclear.  This was a fresh desktop install.  I tried 
to back-fit GUI to my old server install but it didn't work well...

>> Found quite a few posts on various forums of people complaining that RDP
>> with external auth doesn't work (or not reliably),
>
> Actually, it does work, and it works reliably, but the setup is very much not
> straightforward.  I'm likely to follow up on this later today, because as
> coincidence would have it, this is on my to-do for today.
>
> Right now, I'll say this much:  When you RDP from a windows machine to a
> windows machine, you get prompted for password.  Nice, right?  Seems pretty
> obvious.   ;-)   But the VirtualBox RDP server doesn't have that capability.
> Pt...  You need to enter the username & password into the RDP client, and
> save it, before attempting the connection.

Oh, okay.  Yuck...

>> The final straw was when I
>> rebooted the OI server as part of cleaning things up, and... It hung.
>
> Bummer.  That might be some unsupported hardware for running OI.
But it worked!  I did the install on the bare metal, it worked fine.  
Did some updates and rebooted, and that worked fine.  Installed 
virtualbox and some other stuff, and when I rebooted to get a clean 
baseline to work from, that is when it hung.

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Dedicated server running ESXi with no RAID card, ZFS for storage?

2012-11-14 Thread Dan Swartzendruber
On 11/14/2012 9:44 AM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) wrote:

>> From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
>> boun...@opensolaris.org] On Behalf Of Dan Swartzendruber
>>
>> Well, I think I give up for now.  I spent quite a few hours over the last
>> couple of days trying to get gnome desktop working on bare-metal OI,
>> followed by virtualbox.
>
> I would recommend installing OI desktop, not OI server, because I, too, tried
> to get gnome working in OI server, to no avail.  But if you install OI desktop,
> it simply goes in, brainlessly simple.

>> Found quite a few posts on various forums of people complaining that RDP
>> with external auth doesn't work (or not reliably),
>
> Actually, it does work, and it works reliably, but the setup is very much not
> straightforward.  I'm likely to follow up on this later today, because as
> coincidence would have it, this is on my to-do for today.

Please post your results, I'd like to know this.

>> The final straw was when I
>> rebooted the OI server as part of cleaning things up, and... It hung.
>
> Bummer.  That might be some unsupported hardware for running OI.
I had two backup BEs, and I only tried the more recent one. Maybe the 
older one would not hang.  I have to say my perception of OI is that it 
is a lot more fragile when it comes to things like this than Linux is.  
I'm used to windows being 'the boy in the bubble', but not Unix 
variants.  I'd be more charitably inclined if it threw error messages and 
still came up okay - a silent hang at bootload is frustrating, to put it 
mildly.  I'm wondering if virtualbox ties in here somewhere?  This was 
the first time I rebooted this box since the vbox install.  My previous 
attempt to put gui+vbox on my old server install also hung at bootload, 
but I assumed that was just stuff that hadn't been intended to be done 
the way it was.


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] SSD ZIL/L2ARC partitioning

2012-11-14 Thread Neil Perrin

On 11/14/12 03:24, Sašo Kiselkov wrote:

> On 11/14/2012 11:14 AM, Michel Jansens wrote:
>> Hi,
>>
>> I've ordered a new server with:
>> - 4x600GB Toshiba 10K SAS2 Disks
>> - 2x100GB OCZ DENEVA 2R SYNC eMLC SATA (no expander so I hope no
>> SAS/SATA problems). Specs:
>> http://www.oczenterprise.com/ssd-products/deneva-2-r-sata-6g-2.5-emlc.html
>>
>> I want to use the 2 OCZ SSDs as mirrored intent log devices, but as the
>> intent log needs quite a small amount of the disks (10GB?), I was
>> wondering if I can use the rest of the disks as L2ARC?
>>
>> I have a few questions about this:
>>
>> -Is 10GB enough for a log device?
>
> A log device, essentially, only needs to hold a single
> transaction's worth of small sync writes,


Actually, it needs to hold up to three transaction groups' worth.
There are three phases to ZFS's transaction group model: open, quiescing and
syncing. Nowadays the sync phase is targeted at 5s, so the log device needs
to be able to hold up to 15s of synchronous data.
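
A rough back-of-the-envelope sizing (the 200MB/s figure below is just an
assumed sustained sync-write rate, already generous for a pool of four 10K
SAS disks behind it):

  3 txgs x 5 s per txg x 200 MB/s = 3000 MB, i.e. about 3GB

so a 10GB slice leaves plenty of headroom.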


> so unless you write more than
> that, you'll be fine. In fact, DDRdrive's X1 is only 4GB and works just
> fine.


Agreed, 10GB should be fine for your system.

Neil.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Metaslab load performance

2012-11-14 Thread f-toyohara
Hi,

I have a ZFS performance issue.

My OpenSolaris system is as follows:
- snv_134 + CR:6917066

When the current metaslab is inactivated and another metaslab
is activated, it takes too much time (> 10 seconds) to load it
into memory.

Please let me know how to avoid this issue.


Toyo
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel DC S3700

2012-11-14 Thread Eric D. Mudama

On Wed, Nov 14 at  0:28, Jim Klimov wrote:
> All in all, I can't come up with anything offensive against it quickly ;)
> One possible nit regards the ratings being geared towards 4KB block
> (which is not unusual with SSDs), so it may be further from announced
> performance with other block sizes - i.e. when caching ZFS metadata.


Would an ashift of 12 conceivably address that issue?


--
Eric D. Mudama
edmud...@bounceswoosh.org

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel DC S3700

2012-11-14 Thread Jim Klimov

On 2012-11-14 18:05, Eric D. Mudama wrote:

> On Wed, Nov 14 at  0:28, Jim Klimov wrote:
>> All in all, I can't come up with anything offensive against it quickly
>> ;) One possible nit regards the ratings being geared towards 4KB block
>> (which is not unusual with SSDs), so it may be further from announced
>> performance with other block sizes - i.e. when caching ZFS metadata.
>
> Would an ashift of 12 conceivably address that issue?



Performance-wise (and wear-wise) - probably. Gotta test how bad it is
at 512b IOs ;) Also I am not sure if ashift applies to (can be set for)
L2ARC cache devices...
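
Something quick and dirty along these lines would show the difference
(device name is made up; these are sequential reads from the raw device,
so only a rough proxy for real cache reads, and both commands read about
100MB - compare the elapsed times):

  ptime dd if=/dev/rdsk/c4t0d0s1 of=/dev/null bs=512 count=200000
  ptime dd if=/dev/rdsk/c4t0d0s1 of=/dev/null bs=4k count=25000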

Actually, if read performance does not happen to suck at smaller block
sizes, ashift is not needed - the L2ARC writes seem to be streamed
sequentially (as in an infinite tape) so smaller writes would still
coalesce into big HW writes and not cause excessive wear by banging
many random flash cells. IMHO :)

//Jim
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] LUN expansion choices

2012-11-14 Thread Peter Tribble
On Tue, Nov 13, 2012 at 6:16 PM, Karl Wagner k...@mouse-hole.com wrote:
> On 2012-11-13 17:42, Peter Tribble wrote:
>
>> Given storage provisioned off a SAN (I know, but sometimes that's
>> what you have to work with), what's the best way to expand a pool?
>>
>> Specifically, I can either grow existing LUNs, or add new LUNs.
>>
>> As an example, if I have 24x 2TB LUNs, and wish to double the
>> size of the pool, is it better to add 24 additional 2TB LUNs, or
>> get each of the existing LUNs expanded to 4TB each?
>
> This is only my opinion, but I would say you'd be better off expanding your
> current LUNs.
>
> The reason for this is balance. Currently, your data should be spread fairly
> evenly over the LUNs. If you add more, those will be empty, which will
> affect how data is written (data will try to go to those first).
>
> If you just expand your current LUNs, the data will remain balanced, and ZFS
> will just use the additional space.

Maybe, or maybe not. If you think in terms of metaslabs, then there
isn't any difference between creating extra metaslabs by growing an
existing LUN and creating new LUNs. With pooled storage on the
SAN back-end, there's no difference in I/O placement either.
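
One practical note if you do grow the existing LUNs: the pool won't use the
new space until you let it, e.g. (pool and device names here are just
examples):

  zpool set autoexpand=on tank
  # or, per LUN, once the OS sees the new size:
  zpool online -e tank c2t0d0

New LUNs added with zpool add show up immediately.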

Peripherally, this note by Adam Leventhal may be of interest

http://dtrace.org/blogs/ahl/2012/11/08/zfs-trivia-metaslabs/

-- 
-Peter Tribble
http://www.petertribble.co.uk/ - http://ptribble.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss