[zfs-discuss] recreating stack (zpool and zfs) once hardware snapshot is taken.

2010-09-15 Thread sridhar surampudi
Hi,

Following is what I am looking for.

I need to take a hardware snapshot of devices which are provided to a ZFS file
system.

I can identify the type of configuration using zpool status and get the list of
disks.
For example: a single LUN is assigned to a zpool and a file system mypool/myfs
is created on it.
Say the LUN is 1002.

So the stack would be the myfs file system under mypool, which is created from
LUN 1002.

Once I take a hardware snapshot of LUN 1002 onto 1003, the new device I get is
1003 as the snapshot device.

I would like to know how I can recreate a similar stack from that snapshot
device, say zpool mypool_snap and file system myfs_snap.

I found some information that a zpool export and import needs to be done before
taking the snapshot, and I am wondering how I can use that for my requirement.
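
In outline, the flow being described might look like this (pool and device
names are hypothetical, and the cloned LUN generally has to be imported on a
host where the original pool is not active, because both copies carry the
same pool GUID):

    # quiesce and export the pool so the on-disk state is consistent
    zpool export mypool

    # ... take the hardware snapshot of LUN 1002 onto LUN 1003 here ...

    # re-import the original pool
    zpool import mypool

    # on a host (or device directory) where only the snapshot LUN is visible,
    # import the copy under a new name; /snapdevs holds links to LUN 1003 only
    zpool import -d /snapdevs mypool mypool_snap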


Thanks,
sridhar.


[zfs-discuss] ZFS file system without pool

2010-09-15 Thread Ramesh Babu
I would like to know if I can create a ZFS file system without a ZFS storage
pool. I would also like to know if I can create a ZFS pool/file system on a
Veritas volume.


Re: [zfs-discuss] ZFS file system without pool

2010-09-15 Thread Todd Urie
A ZFS filesystem without a zpool doesn't make much sense.  Unless I'm badly
mistaken, you have to have the pool to get the filesystem.

As far as using a Veritas volume for the zpools, that is easily done.  We do
that where I work for almost all of our ZFS filesystems as a way to
facilitate cluster failover of the filesystems and ensure that the zpool
can't be imported on two nodes simultaneously.  You just specify the VxVM
volumes as the block devices when you configure the zpool.  Of course, you
have to import the relevant disk group before you can import the zpool.
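
A minimal sketch of that setup, with hypothetical disk group and volume names:

    # build the pool on top of an existing VxVM volume
    zpool create mypool /dev/vx/dsk/mydg/vol01

    # on failover, bring over the disk group before importing the pool
    vxdg import mydg
    vxvol -g mydg startall
    zpool import mypool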

On Wed, Sep 15, 2010 at 8:25 AM, Ramesh Babu rama.b...@gmail.com wrote:

 I would like to know if I can create a ZFS file system without a ZFS storage
 pool. I would also like to know if I can create a ZFS pool/file system on a
 Veritas volume.



Re: [zfs-discuss] ZFS file system without pool

2010-09-15 Thread Edward Ned Harvey
 From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-
 boun...@opensolaris.org] On Behalf Of Ramesh Babu
 
 I would like to know if I can create a ZFS file system without a ZFS
 storage pool. I would also like to know if I can create a ZFS pool/file
 system on a Veritas volume.

Unless I'm mistaken, you seem to be confused, thinking zpools can only be
created from physical devices.  You can make zpools from files, sparse
files, physical devices, remote devices, in memory ... basically any type of
storage you can communicate with.  You create the zpool, and the zfs
filesystem optionally comes along with it.
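
For example, a throwaway pool backed by plain files (paths are arbitrary):

    # create two file-backed vdevs and mirror them
    mkfile 256m /var/tmp/vdev0 /var/tmp/vdev1
    zpool create testpool mirror /var/tmp/vdev0 /var/tmp/vdev1

    # the root filesystem comes along with the pool; more can be layered on it
    zfs list -r testpool
    zfs create testpool/myfs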

All the magic is done in the pool - snapshots, dedup, etc.  The only reason
you would want a zfs filesystem is because it's specifically designed to
leverage the magic of a zpool natively.  If it were possible to create a zfs
filesystem without a zpool, you might as well just use ufs.



Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-15 Thread Richard Elling
On Sep 14, 2010, at 6:59 AM, Wolfraider wrote:

 We are looking into the possibility of adding dedicated ZIL and/or L2ARC
 devices to our pool. We are looking into getting four 32 GB Intel X25-E SSD
 drives. Would this be a good solution for slow write speeds?

Maybe, maybe not.  Use zilstat to check whether the ZIL is actually in use
before spending money or raising expectations.
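
For example (zilstat is a DTrace-based script; the invocation here is just a
sketch):

    # sample ZIL activity in 10-second intervals while the slow workload runs;
    # consistently non-zero byte/op counts mean synchronous writes are going
    # through the ZIL, and a dedicated log device may help
    ./zilstat 10 6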

 We are currently sharing out different slices of the pool to Windows servers
 using COMSTAR and Fibre Channel. We are currently getting around 300 MB/sec
 performance with 70-100% disk busy.

This seems high; is the blocksize/recordsize matched?
 -- richard

-- 
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com

Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com


Re: [zfs-discuss] dedicated ZIL/L2ARC

2010-09-15 Thread Chris Mosetick
We have two Intel X25-E 32GB SSD drives in one of our servers.  I'm using
one for ZIL and one for L2ARC, and we are having great results so far.
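
If it helps anyone, attaching such devices looks roughly like this (pool and
device names are placeholders):

    zpool add tank log c2t0d0      # dedicated ZIL (slog) device
    zpool add tank cache c2t1d0    # L2ARC device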

Cheers,

-Chris

On Wed, Sep 15, 2010 at 9:43 AM, Richard Elling rich...@nexenta.com wrote:

 On Sep 14, 2010, at 6:59 AM, Wolfraider wrote:

  We are looking into the possibility of adding a dedicated ZIL and/or
 L2ARC devices to our pool. We are looking into getting 4 – 32GB  Intel X25-E
 SSD drives. Would this be a good solution to slow write speeds?

 Maybe, maybe not.  Use zilstat to check whether the ZIL is actually in use
 before spending money or raising expectations.

  We are currently sharing out different slices of the pool to windows
 servers using comstar and fibrechannel. We are currently getting around
 300MB/sec performance with 70-100% disk busy.

 This seems high, is the blocksize/recordsize matched?
  -- richard

 --
 OpenStorage Summit, October 25-27, Palo Alto, CA
 http://nexenta-summit2010.eventbrite.com

 Richard Elling
 rich...@nexenta.com   +1-760-896-4422
 Enterprise class storage for everyone
 www.nexenta.com


Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-09-15 Thread Nabil
Any resolution to this issue?  I'm experiencing the same annoying lockd thing
with Mac OS X 10.6 clients.  I am at pool version 14, filesystem version 3.
Would somehow going back to the earlier 8/2 setup make things better?

-Nabil


[zfs-discuss] ZFS on solid state as disk rather than L2ARC...

2010-09-15 Thread Brad Diggs
Has anyone done much testing of just using the solid state devices (F20 or
F5100) as devices for ZFS pools?  Are there any concerns with running in this
mode versus using solid state devices for L2ARC cache?

Second, has anyone done this sort of testing with MLC based solid state drives?
What has your experience been?

Thanks in advance!

Brad

Brad Diggs | Principal Security Sales Consultant
Oracle North America Technology Organization
16000 Dallas Parkway, Dallas, TX 75248
Tech Blog: http://TheZoneManager.com
LinkedIn: http://www.linkedin.com/in/braddiggs


Re: [zfs-discuss] ZFS on solid state as disk rather than L2ARC...

2010-09-15 Thread Matt Cowger
We actually did some pretty serious testing with SATA SLCs from Sun directly
hosting zpools (not as L2ARC).  We saw some really bad performance - as though
something were wrong - but we couldn't find it.

If you search my name on this list you'll find the description of the problem.

--m

m a t t  c o w g e r
sr. systems engineer | internal services / techops
salesforce.com
+1 206 262 8078 (office)
+1 415 401 5930 (mobile)
mattkcowger (aim  |  yahoo)

From: zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Brad Diggs
Sent: Wednesday, September 15, 2010 12:27 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] ZFS on solid state as disk rather than L2ARC...

Has anyone done much testing of just using the solid state devices (F20 or 
F5100) as
devices for ZFS pools?  Are there any concerns with running in this mode versus 
using
solid state devices for L2ARC cache?

Second, has anyone done this sort of testing with MLC based solid state drives?
What has your experience been?

Thanks in advance!

Brad
Brad Diggs | Principal Security Sales Consultant
Oracle North America Technology Organization
16000 Dallas Parkway, Dallas, TX 75248
Tech Blog: http://TheZoneManager.com
LinkedIn: http://www.linkedin.com/in/braddiggs



Re: [zfs-discuss] Mac OS X clients with ZFS server

2010-09-15 Thread Mike Mackovitch
On Wed, Sep 15, 2010 at 12:08:20PM -0700, Nabil wrote:
 any resolution to this issue?  I'm experiencing the same annoying
 lockd thing with mac osx 10.6 clients.  I am at pool ver 14, fs ver
 3.  Would somehow going back to the earlier 8/2 setup make things
 better?

As noted in the earlier thread, the annoying lockd thing is not a
ZFS issue, but rather a networking issue.

FWIW, I never saw a resolution.  But the suggestions for how to debug
situations like this still stand:

 So, it looks like you need to investigate why the client isn't
 getting responses from the server's lockd.

 This is usually caused by a firewall or NAT getting in the way.

 I would also check /var/log/system.log and /var/log/kernel.log on the Mac to
 see if any other useful messages are getting logged.

 Then I'd grab packet traces with wireshark/tcpdump/snoop *simultaneously* on
 the client and the server, reproduce the problem, and then determine which
 packets are being sent and which packets are being received.
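
A rough capture recipe along those lines (interface names, addresses, and the
port filter are assumptions; Solaris lockd normally listens on 4045, while the
Mac side gets a dynamic port via rpcbind):

    # on the OpenSolaris server
    snoop -d e1000g0 -o /tmp/server.cap host <mac-client-ip>

    # on the Mac client
    sudo tcpdump -i en0 -w /tmp/client.cap \
        host <server-ip> and '(port 111 or port 2049 or port 4045)'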

HTH
--macko


Re: [zfs-discuss] resilver = defrag?

2010-09-15 Thread Richard Elling
On Sep 14, 2010, at 4:58 AM, Edward Ned Harvey wrote:

 From: Haudy Kazemi [mailto:kaze0...@umn.edu]
 
 With regard to multiuser systems and how that negates the need to
 defragment, I think that is only partially true.  As long as the files
 are defragmented enough so that each particular read request only
 requires one seek before it is time to service the next read request,
 further defragmentation may offer only marginal benefit.  On the other
 
 Here's a great way to quantify how much fragmentation is acceptable:
 
 Suppose you want to ensure at least 99% efficiency of the drive.  At most 1%
 time wasted by seeking.

This is practically impossible on a HDD.  If you need this, use SSD.
This phenomenon is why short-stroking became popular, until SSDs
killed short-stroking.

 Suppose you're talking about 7200rpm sata drives, which sustain 500Mbit/s
 transfer, and have average seek time 8ms.
 
 8ms is 1% of 800ms.
 In 800ms, the drive could read 400 Mbit of sequential data.
 That's 40 MB

In UFS we have cylinder groups -- they haven't survived the test of time.
In ZFS we have metaslabs, perhaps with a better chance of longevity.
The vdev is divided into a set of metaslabs, and the allocator tries to
use space in one metaslab before changing to another.
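
You can look at that layout directly with zdb (pool name hypothetical; output
details vary by build):

    # list each vdev's metaslabs and how much space is allocated in each
    zdb -m tank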

Several features work against HDD optimization.  Redundant copies
of the metadata are intentionally spread across the media, so that there
is some resilience to media errors.  Entries in the ZIL can also be of
varying size and are allocated in the pool -- solved by using a separate
log device.  COW can lead to classic disk fragmentation (in the Wikipedia
sense) for files which are larger than the recordsize.

Continuing to try to optimize for HDD performance is just a matter of
changing the lipstick on the pig.
 -- richard

-- 
OpenStorage Summit, October 25-27, Palo Alto, CA
http://nexenta-summit2010.eventbrite.com

Richard Elling
rich...@nexenta.com   +1-760-896-4422
Enterprise class storage for everyone
www.nexenta.com


Re: [zfs-discuss] resilver = defrag?

2010-09-15 Thread Edward Ned Harvey
 From: Richard Elling [mailto:rich...@nexenta.com]

  Suppose you want to ensure at least 99% efficiency of the drive.  At
 most 1%
  time wasted by seeking.
 
 This is practically impossible on a HDD.  If you need this, use SSD.

Lately, Richard, you're saying some of the craziest illogical things I've
ever heard, about fragmentation and/or raid.

It is absolutely not difficult to avoid fragmentation on a spindle drive, at
the level I described.  Just keep plenty of empty space in your drive, and
you won't have a fragmentation problem.  (Except as required by COW.)  How
on earth do you conclude this is practically impossible?

For example, if you start with an empty drive, and you write a large amount
of data to it, you will have no fragmentation.  (At least, no significant
fragmentation; you may get a little bit based on random factors.)  As life
goes on, as long as you keep plenty of empty space on the drive, there's
never any reason for anything to become significantly fragmented.

Again, except for COW.  It is known that COW will cause fragmentation if you
write randomly in the middle of a file that is protected by snapshots.



Re: [zfs-discuss] resilver = defrag?

2010-09-15 Thread Richard Elling

On Sep 15, 2010, at 2:18 PM, Edward Ned Harvey wrote:

 From: Richard Elling [mailto:rich...@nexenta.com]
 
 Suppose you want to ensure at least 99% efficiency of the drive.  At
 most 1%
 time wasted by seeking.
 
 This is practically impossible on a HDD.  If you need this, use SSD.
 
 Lately, Richard, you're saying some of the craziest illogical things I've
 ever heard, about fragmentation and/or raid.
 
 It is absolutely not difficult to avoid fragmentation on a spindle drive, at
 the level I described.  Just keep plenty of empty space in your drive, and
 you won't have a fragmentation problem.  (Except as required by COW.)  How
 on earth do you conclude this is practically impossible?

It is practically impossible to keep a drive from seeking.  It is also
practically impossible to keep from blowing a rev (missing a rotation).  Cute
little piggy, eh? :-)

 For example, if you start with an empty drive, and you write a large amount
 of data to it, you will have no fragmentation.  (At least, no significant
 fragmentation; you may get a little bit based on random factors.)  As life
 goes on, as long as you keep plenty of empty space on the drive, there's
 never any reason for anything to become significantly fragmented.
 
 Again, except for COW.  It is known that COW will cause fragmentation if you
 write randomly in the middle of a file that is protected by snapshots.

IFF the file is larger than recordsize.
 -- richard



Re: [zfs-discuss] resilver = defrag?

2010-09-15 Thread Ian Collins

On 09/16/10 09:18 AM, Edward Ned Harvey wrote:

 From: Richard Elling [mailto:rich...@nexenta.com]

 Suppose you want to ensure at least 99% efficiency of the drive.  At
 most 1% time wasted by seeking.

 This is practically impossible on a HDD.  If you need this, use SSD.

 Lately, Richard, you're saying some of the craziest illogical things I've
 ever heard, about fragmentation and/or raid.

 It is absolutely not difficult to avoid fragmentation on a spindle drive, at
 the level I described.  Just keep plenty of empty space in your drive, and
 you won't have a fragmentation problem.  (Except as required by COW.)  How
 on earth do you conclude this is practically impossible?


Drives seek, there isn't a lot you can do to stop that.

--
Ian.



Re: [zfs-discuss] resilver = defrag?

2010-09-15 Thread Nicolas Williams
On Wed, Sep 15, 2010 at 05:18:08PM -0400, Edward Ned Harvey wrote:
 It is absolutely not difficult to avoid fragmentation on a spindle drive, at
 the level I described.  Just keep plenty of empty space in your drive, and
 you won't have a fragmentation problem.  (Except as required by COW.)  How
 on earth do you conclude this is practically impossible?

That's expensive.  It's also approaching short-stroking (which is
expensive).  Which is what Richard said (in so many words, that it's
expensive).  Can you make HDDs perform awesome?  Yes, but you'll need
lots of them, and you'll need to use them very inefficiently.

Nico


Re: [zfs-discuss] ZFS on solid state as disk rather than L2ARC...

2010-09-15 Thread Brandon High
On Wed, Sep 15, 2010 at 12:27 PM, Brad Diggs brad.di...@oracle.com wrote:

 Has anyone done much testing of just using the solid state devices (F20 or
 F5100) as
 devices for ZFS pools?  Are there any concerns with running in this mode
 versus using
 solid state devices for L2ARC cache?


Depending on the configuration (raidz vs. mirror) you can wind up issuing
unaligned writes. Current SSDs work best when writes are aligned on a 4k
boundary, which raidz won't always do.

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] zfs compression with Oracle - anyone implemented?

2010-09-15 Thread Brad
Ed,

See my answers inline:

 I don't think your question is clear.  What do you mean on oracle backed by
 storage luns?

We'll be using LUNs from a storage array rather than disks controlled directly
by ZFS.  The LUNs are mapped to the DB server and initialized under ZFS from
there.

 Do you mean on oracle hardware?

On Sun/Oracle x86 hardware.

 Do you mean you plan to run oracle database on the server, with ZFS under
 the database?

Yes.

 Generally speaking, you can enable compression on any zfs filesystem, and
 the cpu overhead is not very big, and the compression level is not very
 strong by default.  However, if the data you have is generally
 uncompressible, any overhead is a waste.
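
A minimal sketch of what that looks like (dataset name and the recordsize
tuning are assumptions to verify against your own workload):

    zfs set compression=on tank/oradata       # default lzjb, light on CPU
    zfs set recordsize=8k tank/oradata        # often matched to db_block_size
    zfs get compression,compressratio tank/oradata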


[zfs-discuss] Compression block sizes

2010-09-15 Thread Brandon High
When using compression, are the on-disk record sizes determined before
or after compression is applied? In other words, if record size is set
to 128k, is that the amount of data fed into the compression engine,
or is the output size trimmed to fit? I think it's the former, but I'm
not certain.

This could affect block alignment for SSDs, leading to worse write
performance. It could also have a negative effect on 4k sector disks
that lie about the sector size, such as the recent WD EARS drives.
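
One way to check empirically on a scratch pool (names are hypothetical, and
zdb output details vary by release) is to dump a file's block pointers and
compare the logical and physical sizes:

    ls -i /tank/myfs/somefile        # the inode number is the ZFS object id
    zdb -ddddd tank/myfs <object-id> # block pointers report lsize (logical,
                                     # e.g. 0x20000 = 128K) vs. psize (allocated)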

-B

-- 
Brandon High : bh...@freaks.com


Re: [zfs-discuss] resilver = defrag?

2010-09-15 Thread Edward Ned Harvey
 From: Richard Elling [mailto:rich...@nexenta.com]
 
 It is practically impossible to keep a drive from seeking.  It is also

The first time somebody (Richard) said you can't prevent a drive from
seeking, I just decided to ignore it.  But then it was said twice.  (Ian.)

I don't get why anybody is saying drives seek.  Did anybody say drives
don't seek?

I said you can quantify how much fragmentation is acceptable, given drive
speed characteristics and the percentage of time you consider acceptable for
seeking.  I suggested that acceptable was 99% efficiency and 1% of time wasted
on seeking.  Roughly calculated, I came up with 40 MB of sequential data per
random seek to yield 99% efficiency.

For some situations, that's entirely possible and likely to be the norm.
For other cases, it may be unrealistic, and you may suffer badly from
fragmentation.

Is there some point we're talking about here?  I don't get why the
conversation seems to have taken such a tangent.



Re: [zfs-discuss] ZFS on solid state as disk rather than L2ARC...

2010-09-15 Thread Richard Jahnel
14x 256 GB MLC SSDs in a raidz2 array have worked fine for us.  Performance
seems to be mostly limited by the RAID controller operating in JBOD mode.

Raidz2 provides sufficient redundancy to replace any MLC drives that develop
issues, and when you have that many consumer-level SSDs, some will develop
issues until you get all the weak ones weeded out.

If I had more space in the box, or didn't need quite as much capacity, I would
have split the array into 2 or more striped raidz2 arrays.
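
For illustration, splitting the same 14 drives into two striped raidz2 vdevs
would look roughly like this (device names are placeholders):

    zpool create ssdpool \
        raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
        raidz2 c1t7d0 c1t8d0 c1t9d0 c1t10d0 c1t11d0 c1t12d0 c1t13d0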

Your mileage may vary.
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss