Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Jorgen Lundman



Bob Friesenhahn wrote:
Something to be aware of is that not all SSDs are the same.  In fact, 
some faster SSDs may use a RAM write cache (they all do) and then 
ignore a cache sync request while not including hardware/firmware 
support to ensure that the data is persisted if there is power loss. 
Perhaps your fast CF device does that.  If so, that would be really 
bad for zfs if your server was to spontaneously reboot or lose power. 
This is why you really want a true enterprise-capable SSD device for 
your slog.


Naturally, we just wanted to try the various technologies to see how 
they compared. Store-bought CF card took 26s, store-bought SSD 48s. We 
have not found a PCI NVRam card yet.


When talking to our Sun vendor, they have no solutions, which is annoying.

X25-E would be good, but some pools have no spares, and since you can't 
remove vdevs, we'd have to move all customers off the x4500 before we 
can use it.


CF cards need a reboot before they are seen, but 6 of the servers are x4500, not 
x4540, so that is not really a global solution.


PCI NVRAM cards also need a reboot, but should work in both the x4500 and x4540 
without rebuilding the zpool. We can't actually find any with Solaris drivers, though.


Peculiar.

Lund


--
Jorgen Lundman   | lund...@lundman.net
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Markus Kovero
By the way, a new Intel X25-M (G2) is coming next month that will offer better 
random reads/writes than the E-series at a seriously cheap price tag; worth a try, 
I'd say.

Yours
Markus Kovero


-Original Message-
From: zfs-discuss-boun...@opensolaris.org 
[mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Jorgen Lundman
Sent: 30. heinäkuuta 2009 9:55
To: ZFS Discussions
Subject: Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 
10 10/08



Bob Friesenhahn wrote:
 Something to be aware of is that not all SSDs are the same.  In fact, 
 some faster SSDs may use a RAM write cache (they all do) and then 
 ignore a cache sync request while not including hardware/firmware 
 support to ensure that the data is persisted if there is power loss. 
 Perhaps your fast CF device does that.  If so, that would be really 
 bad for zfs if your server was to spontaneously reboot or lose power. 
 This is why you really want a true enterprise-capable SSD device for 
 your slog.

Naturally, we just wanted to try the various technologies to see how 
they compared. Store-bought CF card took 26s, store-bought SSD 48s. We 
have not found a PCI NVRam card yet.

When talking to our Sun vendor, they have no solutions, which is annoying.

X25-E would be good, but some pools have no spares, and since you can't 
remove vdevs, we'd have to move all customers off the x4500 before we 
can use it.

CF cards need a reboot before they are seen, but 6 of the servers are x4500, not 
x4540, so that is not really a global solution.

PCI NVRAM cards also need a reboot, but should work in both the x4500 and x4540 
without rebuilding the zpool. We can't actually find any with Solaris drivers, though.

Peculiar.

Lund


-- 
Jorgen Lundman   | lund...@lundman.net
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Install and boot from USB stick?

2009-07-30 Thread Thomas Nau
Hi

 I've tried to find any hard information on how to install, and boot, 
 OpenSolaris from a USB stick. I've seen a few people write successful 
 stories about this, but I can't seem to get it to work.
 
 The procedure:
 Boot from the LiveCD, insert the USB drive, find it using `format', start the installer. 
 The USB stick is not found (the installer just sits at Finding disks). Remove the USB 
 stick, hit back in the installer, insert the USB stick again, the USB stick is found, 
 and start installing.
 
 At 19%, it just sits there. I have no idea why.
 
 Suggestions?


In my experience the system is actually installing, just very slowly.
Have a look at the ZFS Evil Tuning Guide and disable the ZIL while
installing; it really speeds things up. I don't know why it's so slow on a
USB memory stick, but it's good enough as a workaround and not really a risk,
as it's a fresh install anyway.
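
For reference, the guide's method on a running system was (as far as I recall) a
one-liner against the live kernel, run as root before starting the installer and
undone with W0t0 afterwards:

   echo zil_disable/W0t1 | mdb -kw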

Thomas


-- 
-
GPG fingerprint: B1 EE D2 39 2C 82 26 DA  A5 4D E0 50 35 75 9E ED
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Darren J Moffat

Roman V Shaposhnik wrote:
On the read-only front: wouldn't it be cool to *not* run zfs sends 
explicitly but have:

.zfs/send/<snap name>
.zfs/sendr/<from-snap-name>-to-<snap-name>
give you the same data automagically? 


On the read-write front: wouldn't it be cool to be able to snapshot
things by:
$ mkdir .zfs/snapshot/snap-name


That already works if you have the snapshot delegation as that user.  It 
even works over NFS and CIFS.


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Cyril Plisko
On Thu, Jul 30, 2009 at 11:33 AM, Darren J
Moffatdarr...@opensolaris.org wrote:
 Roman V Shaposhnik wrote:

 On the read-only front: wouldn't it be cool to *not* run zfs sends
 explicitly but have:
    .zfs/send/<snap name>
    .zfs/sendr/<from-snap-name>-to-<snap-name>
 give you the same data automagically?
 On the read-write front: wouldn't it be cool to be able to snapshot
 things by:
    $ mkdir .zfs/snapshot/snap-name

 That already works if you have the snapshot delegation as that user.  It
 even works over NFS and CIFS.

WOW !  That's incredible !  When did that happen ? I was completely
unaware of that feature and I am sure plenty of people out there never
heard of that as well.


-- 
Regards,
Cyril
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2009-07-30 Thread Ralf Gans
Hello there,

I'm working for a large customer in Germany.
The customer has some thousand TB of storage.

The information that the zpool shrink feature will not be implemented soon
is no problem, we just keep using Veritas Storage Foundation.

Shrinking a pool is not the only problem with ZFS:
try setting up a Jumpstart server with Solaris 10u7
with the media copy on a separate ZFS filesystem.

Jumpstart puts a loopback mount into the vfstab,
and the next boot fails.

Solaris does the mountall before ZFS starts,
so the filesystem service fails and you don't even have
an sshd to log in over the network.
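
For illustration, the offending entry is a lofs line whose source lives on a
ZFS dataset that is not yet mounted when mountall runs (paths here are
hypothetical):

   /export/install/sol10u7/boot  -  /tftpboot/jumpstart  lofs  -  yes  -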

Best regards,

rapega
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Darren J Moffat

Cyril Plisko wrote:

On Thu, Jul 30, 2009 at 11:33 AM, Darren J
Moffatdarr...@opensolaris.org wrote:

Roman V Shaposhnik wrote:

On the read-only front: wouldn't it be cool to *not* run zfs sends
explicitly but have:
   .zfs/send/<snap name>
   .zfs/sendr/<from-snap-name>-to-<snap-name>
give you the same data automagically?
On the read-write front: wouldn't it be cool to be able to snapshot
things by:
   $ mkdir .zfs/snapshot/snap-name

That already works if you have the snapshot delegation as that user.  It
even works over NFS and CIFS.


WOW !  That's incredible !  When did that happen ? I was completely
unaware of that feature and I am sure plenty of people out there never
heard of that as well.


Initially introduced in:

changeset:   4543:12bb2876a62e
user:marks
date:Tue Jun 26 07:44:24 2007 -0700
description:
PSARC/2006/465 ZFS Delegated Administration
PSARC/2006/577 zpool property to disable delegation
PSARC/2006/625 Enhancements to zpool history
PSARC/2007/228 ZFS delegation amendments
PSARC/2007/295 ZFS Delegated Administration Addendum
6280676 restore owner property
6349470 investigate non-root restore/backup
6572465 'zpool set bootfs=...' records history as 'zfs set 
bootfs=...'



Bug fix for CIFS clients in:

changeset:   6803:468e12a53baf
user:marks
date:Fri May 16 08:55:36 2008 -0700
description:
6700649 zfs_ctldir snapshot creation issues with CIFS clients



--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Ross
Without spare drive bays I don't think you're going to find one solution that 
works for x4500 and x4540 servers.  However, are these servers physically close 
together?  Have you considered running the slog devices externally?

One possible choice may be to run something like the Supermicro SC216 chassis 
(2U with 24x 2.5" drive bays):
http://www.supermicro.com/products/chassis/2U/216/SC216E2-R900U.cfm

Buy the chassis with redundant power (SC216E2-R900UB), and the JBOD power 
module (CSE-PTJBOD-CB1) to convert it to a dumb JBOD unit.  The standard 
backplane has six SAS connectors, each of which connects to four drives.  You 
might struggle if you need to connect more than six servers, although it may be 
possible to run it in a rather non standard configuration, removing the 
backplane and powering and connecting drives individually.

However, for up to six servers, you can just fit Adaptec raid cards with 
external ports to each (PCI-e or PCI-x as needed), and use external cables to 
connect those to the SSD drives in the external chassis.

If you felt like splashing out on the raid cards, that would let you run the 
ZIL on up to four Intel X25-E drives per server, backed up by 512MB of battery 
backed cache.

I think that would have a dramatic effect on NFS speed to say the least :-)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Ross
Whoah!  Seriously?  When did that get added and how did I miss it?

That is absolutely superb!  And an even stronger case for mkdir creating 
filesystems.  A filesystem per user that they can snapshot at will o_0

Ok, it'll need some automated pruning of old snapshots, but even so, that has 
some serious potential!
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-30 Thread James Lever

Hi Darren,

On 30/07/2009, at 6:33 PM, Darren J Moffat wrote:

That already works if you have the snapshot delegation as that  
user.  It even works over NFS and CIFS.


Can you give us an example of how to correctly get this working?

I've read through the manpage but have not managed to get the correct  
set of permissions for it to work as a normal user (so far).


I'm sure others here would be keen to see a correct recipe to allow  
user managed snapshots remotely via mkdir/rmdir.


cheers,
James

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Mike Gerdts
On Thu, Jul 30, 2009 at 5:27 AM, Rossno-re...@opensolaris.org wrote:
 Without spare drive bays I don't think you're going to find one solution that 
 works for x4500 and x4540 servers.  However, are these servers physically 
 close together?  Have you considered running the slog devices externally?

It appears as though there is an upgrade path.

http://www.c0t0d0s0.org/archives/5750-Upgrade-of-a-X4500-to-a-X4540.html

However, the troll that you have to pay to follow that path demands a
hefty sum ($7995 list).  Oh, and a reboot is required.  :)

-- 
Mike Gerdts
http://mgerdts.blogspot.com/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Darren J Moffat

James Lever wrote:

Hi Darren,

On 30/07/2009, at 6:33 PM, Darren J Moffat wrote:

That already works if you have the snapshot delegation as that user.  
It even works over NFS and CIFS.


Can you give us an example of how to correctly get this working?


On the host that has the ZFS datasets (i.e. the NFS/CIFS server) you need 
to give the user the delegation to create snapshots and to mount them:


# zfs allow -u james snapshot,mount,destroy tank/home/james

If you don't give the destroy delegation, users won't be able to remove 
the snapshots they create.


Now on the client you should be able to:

cd .zfs/snapshot
mkdir newsnap
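
and, since the destroy delegation was granted above, remove the snapshot again with:

rmdir newsnap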

I've read through the manpage but have not managed to get the correct 
set of permissions for it to work as a normal user (so far).


What did you try ?
What release of OpenSolaris are you running ?

--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Richard Elling

On Jul 30, 2009, at 2:15 AM, Cyril Plisko wrote:


On Thu, Jul 30, 2009 at 11:33 AM, Darren J
Moffatdarr...@opensolaris.org wrote:

Roman V Shaposhnik wrote:


On the read-only front: wouldn't it be cool to *not* run zfs sends
explicitly but have:
   .zfs/send/<snap name>
   .zfs/sendr/<from-snap-name>-to-<snap-name>
give you the same data automagically?
On the read-write front: wouldn't it be cool to be able to snapshot
things by:
   $ mkdir .zfs/snapshot/snap-name


That already works if you have the snapshot delegation as that  
user.  It

even works over NFS and CIFS.


WOW !  That's incredible !  When did that happen ? I was completely
unaware of that feature and I am sure plenty of people out there never
heard of that as well.


Most folks don't RTFM :-)  Cindy does an excellent job of keeping track
of new features and procedures in the ZFS Administration Guide. This
one is example 9-6 under the Using ZFS Delegated Administration
section.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] shrinking a zpool - roadmap

2009-07-30 Thread Kyle McDonald

Ralf Gans wrote:


Jumpstart puts a loopback mount into the vfstab,
and the next boot fails.

The Solaris will do the mountall before ZFS starts,
so the filesystem service fails and you have not even
an sshd to login over the network.
  
This is why I don't use the mountpoint settings in ZFS. I set them all 
to 'legacy', and put them in the /etc/vfstab myself.
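
A hypothetical example (dataset name and mount point made up): set the dataset
to a legacy mountpoint,

   zfs set mountpoint=legacy tank/install

and then list it in /etc/vfstab like any other filesystem:

   tank/install  -  /export/install  zfs  -  yes  -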


I keep many .ISO files on a ZFS filesystem, and I lofi-mount them onto 
subdirectories of the same ZFS tree, and then (since they are for 
Jumpstart) loopback-mount parts of each of the ISOs into /tftpboot.
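
Roughly like this, with hypothetical names:

   lofiadm -a /export/isos/sol-10-u7-ga-x86-dvd.iso    # prints e.g. /dev/lofi/1
   mount -F hsfs -o ro /dev/lofi/1 /export/install/sol10u7
   mount -F lofs -o ro /export/install/sol10u7/boot /tftpboot/sol10u7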


When you've got to manage all this other stuff in /etc/vfstab anyway, 
it's easier to manage ZFS there too. I don't see it as a hardship, and I 
don't see the value of doing it in ZFS to be honest (unless every 
filesystem you have is in ZFS maybe.)


The same goes for sharing this stuff through NFS. Since the lofi mounts 
are separate filesystems, I have to share them with share (or sharemgr), 
and it's easier to share the ZFS directories through those commands at 
the same time.


I must be missing something, but I'm not sure I get the rationale behind 
duplicating all this admin stuff inside ZFS.


 -Kyle

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Ross
That should work just as well Bob, although rather than velcro I'd be tempted 
to drill some holes into the server chassis somewhere and screw the drives on.  
These things do use a bit of power, but with the airflow in a thumper I don't 
think I'd be worried.

If they were my own servers I'd be very tempted, but it really depends on how 
happy you would be voiding the warranty on a rather expensive piece of kit :-)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Andrew Gabriel

Richard Elling wrote:

On Jul 30, 2009, at 9:26 AM, Bob Friesenhahn wrote:

Do these SSDs require a lot of cooling?  


No. During the Turbo Charge your Apps presentations I was doing around 
the UK, I often pulled one out of a server to hand around the audience 
when I'd finished the demos on it. The first thing I noticed when doing 
this is that the disk is stone cold, which is not what you expect when 
you pull an operating disk out of a system.


Note that they draw all their power from the 5V rail, and can draw more 
current on the 5V rail than some HDDs, which is something to check if 
you're putting lots in a disk rack.


Traditional drive slots are designed for hard drives which need to 
avoid vibration and have specific cooling requirements.  What are the 
environmental requirements for the Intel X25-E?


Operating and non-operating shock: 1,000 G/0.5 msec (vs operating shock
for Barracuda ES.2 of 63G/2ms)
Power spec: 2.4 W @ 32 GB, 2.6W @ 64 GB. (less than HDDs @ ~8-15W)
MTBF: 2M hours (vs 1.2M hours for Barracuda ES.2)
Vibration specs are not consistent for comparison.
Compare:
http://download.intel.com/design/flash/nand/extreme/319984.pdf
vs
http://www.seagate.com/docs/pdf/datasheet/disc/ds_barracuda_es_2.pdf

Interesting that they are now specifying write endurance as:
1 PB of random writes for 32GB, 2 PB of random writes for 64GB.

Except for price/GB, it is game over for HDDs.  Since price/GB is based on
Moore's Law, it is just a matter of time.


SSDs are a sufficiently new technology that I suspect there's a 
significant probability of discovering new techniques which give larger 
step improvements than Moore's Law for some years yet. However, HDDs 
aren't standing still either when it comes to capacity, although 
improvements in other HDD performance characteristics have been very 
disappointing this decade (e.g. IOPS haven't improved much at all; 
indeed they've only seen a 10-fold improvement over the last 25 years).


--
Andrew
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Bob Friesenhahn

On Thu, 30 Jul 2009, Andrew Gabriel wrote:


Except for price/GB, it is game over for HDDs.  Since price/GB is based on
Moore's Law, it is just a matter of time.


SSDs are a sufficiently new technology that I suspect there's a significant 
probability of discovering new techniques which give larger step improvements 
than Moore's Law for some years yet. However, HDDs aren't standing still


FLASH technology is highly mature and has been around since the '80s. 
Given this, it is perhaps the case that (through continual refinement) 
FLASH has finally made it to the point of usability for bulk mass 
storage.  It is not clear if FLASH will obey Moore's Law or if it has 
already started its trailing off stage (similar to what happened with 
single-core CPU performance).


Only time will tell.  Currently (after rounding) SSDs occupy 0% of the 
enterprise storage market even though they dominate in some other 
markets.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10

2009-07-30 Thread Will Murnane
On Thu, Jul 30, 2009 at 14:50, Kurt Olsenno-re...@opensolaris.org wrote:
 I'm using an Acard ANS-9010B (configured with 12 GB battery backed ECC RAM w/ 
 16 GB CF card for longer term power losses. Device cost $250, RAM cost about 
 $120, and the CF around $100.) It just shows up as a SATA drive. Works fine 
 attached to an LSI 1068E. Since -- as I understand it -- one's ZIL doesn't 
 need to be particularly large, I've split that into 2 GB of ZIL and 10 GB of 
 L2ARC. Simple tests show it can do around 3200 sync 4k writes/sec over NFS 
 into a RAID-Z pool of five western digital 1 TB caviar green drives.
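
For reference, a split like that is typically done by slicing the device and
adding the slices as separate vdevs; the pool and device names below are
hypothetical:

   zpool add tank log c3t0d0s0
   zpool add tank cache c3t0d0s1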

I, too, have one of these, and am mostly happy with it.  The biggest
inconvenience about it is the form factor: it occupies a 5.25" bay.
Since my case has no 5.25" bays (Norco RPC-4220) I improvised by
drilling a pair of correctly spaced holes into the lid of the case and
screwing it in there.  This isn't really recommended for enterprise
use, where drilling holes in the equipment is discouraged.

I don't have benchmarks for my setup, but anecdotally I no longer see
the stalls accessing files over NFS that I had before adding the Acard
to my pool as a log device.  I only have 1GB in it, and that seems
plenty for the purpose: it only ever seems to show up as 8k used, even
with 100 MB/s or more of writes to it.

Also, I should point out that the device doesn't support SMART.  Some
raid controllers may be unhappy about this.

Will
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] resizing zpools by growing LUN

2009-07-30 Thread A Darren Dunham
On Wed, Jul 29, 2009 at 03:51:22AM -0700, Jan wrote:
 Hi all,
 I need to know if it is possible to expand the capacity of a zpool
 without loss of data by growing the LUN (2TB) presented from an HP EVA
 to a Solaris 10 host.

Yes.

 I know that there is a possible way in Solaris Express Community
 Edition, b117 with the autoexpand property. But I still work with
 Solaris 10 U7. Besides, when will this feature be integrated in
 Solaris 10?

Not sure.

 Is there a workaround? I have checked it out with format tool -
 without effects.

What did you try?  

Since you're larger than 1T, you certainly have an EFI label.  What you
have to do is destroy the existing EFI label, then have format create a
new one for the larger LUN.  Finally, create slice 0 as the size of the
entire (now larger) disk.

There are four ZFS labels inside the EFI data slice.  Two at front, two
at end.  After enlarging, it probably won't be able to find the end two,
but it should import just fine (and will then write new labels at the
end).

As always, if you haven't done this before, you'll want to test it and
make a backup before trying on live data.
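
A rough sketch of that sequence (the pool and device names are hypothetical):

   zpool export tank       # stop ZFS from using the LUN while relabeling
   format -e c2t0d0        # in format: write a new EFI label, then recreate
                           # slice 0 to span the whole (now larger) LUN
   zpool import tank       # the pool imports from the front labels and
                           # rewrites the two labels at the new end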

-- 
Darren
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fed up with ZFS causing data loss

2009-07-30 Thread roland
what`s your disk controller?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] avail drops to 32.1T from 40.8T after create -o mountpoint

2009-07-30 Thread Bill Sommerfeld
On Wed, 2009-07-29 at 06:50 -0700, Glen Gunselman wrote:
 There was a time when manufacturers knew about base-2, but those days
 are long gone.

Oh, they know all about base-2; it's just that disks seem bigger when
you use base-10 units.

Measure a disk's size in 10^(3n)-based KB/MB/GB/TB units, and you get a
bigger number than its size in the natural-for-software 2^(10n)-sized
units.
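
For example, a drive sold as 1 TB holds 10^12 bytes, which is only about
0.91 TiB (10^12 / 2^40), so the same capacity looks roughly 9% smaller once
it is reported in binary units.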

So it's obvious which numbers end up on the marketing glossies, and it's
all downhill from there...

- Bill


___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] feature proposal

2009-07-30 Thread Darren J Moffat

James Lever wrote:


On 30/07/2009, at 11:32 PM, Darren J Moffat wrote:

On the host that has the ZFS datasets (ie the NFS/CIFS server) you 
need to give the user the delegation to create snapshots and to mount 
them:


# zfs allow -u james snapshot,mount,destroy tank/home/james


Ahh, it was the lack of mount that caught me!  Thanks Darren.


It is documented in the zfs(1M) man page:

 Permissions are generally the ability to use a  ZFS  subcom-
 mand or change a ZFS property. The following permissions are
 available:

...
   snapshot subcommand   Must also have the 'mount' ability.


I've read through the manpage but have not managed to get the correct 
set of permissions for it to work as a normal user (so far).


What did you try ?
What release of OpenSolaris are you running ?


snv_118.  I blame being tired and not trying enough options!  I was 
trying to do it with just snapshot and destroy, expecting that a snapshot 
didn't need to be mounted for some reason.


Thanks for the clarification.  Next time I think I'll also consult the 
administration guide as well as the manpage though I guess an explicit 
example for the snapshot delegation wouldn't go astray in the manpage.


Like the one that is already there ?

Example 18 Delegating ZFS Administration  Permissions  on  a
 ZFS Dataset


 The following example shows how to set permissions  so  that
 user  cindys  can create, destroy, mount, and take snapshots

   # zfs allow cindys create,destroy,mount,snapshot tank/cindys


--
Darren J Moffat
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fed up with ZFS causing data loss

2009-07-30 Thread Ross
Supermicro AOC-SAT2-MV8, based on the Marvell chipset.  I figured it was the 
best available at the time since it's using the same chipset as the x4500 
Thumper servers.

Our next machine will be using LSI controllers, but I'm still not entirely 
happy with the way ZFS handles timeout-type errors.  It seems that it handles 
drive-reported read or write errors fine, and also handles checksum errors, but 
it completely misses the drive-timeout handling that hardware RAID 
controllers use.

Personally, I feel that when a pool usually responds to requests in the order 
of milliseconds, a timeout of even a tenth of a second is too long.  Several 
minutes before a pool responds is just a joke.

I'm still a big fan of ZFS, and modern hardware may have better error handling, 
but I can't help but feel this is a little short sighted.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Richard Elling


On Jul 30, 2009, at 12:07 PM, Bob Friesenhahn wrote:


On Thu, 30 Jul 2009, Andrew Gabriel wrote:
Except for price/GB, it is game over for HDDs.  Since price/GB is based on
Moore's Law, it is just a matter of time.


SSDs are a sufficiently new technology that I suspect there's a 
significant probability of discovering new techniques which give 
larger step improvements than Moore's Law for some years yet. 
However, HDDs aren't standing still


FLASH technology is highly mature and has been around since the  
'80s. Given this, it is perhaps the case that (through continual  
refinement) FLASH has finally made it to the point of usability for  
bulk mass storage.  It is not clear if FLASH will obey Moore's Law  
or if it has already started its trailing off stage (similar to what  
happened with single-core CPU performance).


Only time will tell.  Currently (after rounding) SSDs occupy 0% of  
the enterprise storage market even though they dominate in some  
other markets.


According to Gartner, enterprise SSDs accounted for $92.6M of a
$585.5M SSD market in June 2009, representing 15.8% of the SSD
market. STEC recently announced an order for $120M of ZeusIOPS
drives from a single enterprise storage customer.  From 2007 to
2008, the SSD market grew by 100%. IDC reports Q1CY09 had
$4,203M for the external disk storage factory revenue, down 16%
from Q1CY08 while total disk storage systems were down 25.8%
YOY to $5,616M[*]. So while it looks like enterprise SSDs represented
less than 1% of total storage revenue in 2008, it is the part that is
growing rapidly. I would not be surprised to see enterprise SSDs at  
5-10%

of the total disk storage systems market in 2010. I would also expect to
see  total disk storage systems revenue continue to decline as fewer
customers buy expensive RAID controllers.  IMHO, the total disk storage
systems market has already peaked, so the enterprise SSD gains at
the expense of overall market size.  Needless to say, whether or not
Sun can capitalize on its OpenStorage strategy, the market is moving
in the same direction, perhaps at a more rapid pace due to current
economic conditions.

[*] IDC defines a Disk Storage System as a set of storage elements,
including controllers, cables, and (in some instances) host bus adapters,
associated with three or more disks. A system may be located outside of
or within a server cabinet, and the average cost of the disk storage systems
does not include infrastructure storage hardware (i.e. switches) and
non-bundled storage software.
 -- richard

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fed up with ZFS causing data loss

2009-07-30 Thread C. Bergström

Ross wrote:

Supermicro AOC-SAT2-MV8, based on the Marvell chipset.  I figured it was the 
best available at the time since it's using the same chipset as the x4500 
Thumper servers.

Our next machine will be using LSI controllers, but I'm still not entirely 
happy with the way ZFS handles timeout-type errors.  It seems that it handles 
drive-reported read or write errors fine, and also handles checksum errors, but 
it completely misses the drive-timeout handling that hardware RAID 
controllers use.

Personally, I feel that when a pool usually responds to requests in the order 
of milliseconds, a timeout of even a tenth of a second is too long.  Several 
minutes before a pool responds is just a joke.

I'm still a big fan of ZFS, and modern hardware may have better error handling, 
but I can't help but feel this is a little short sighted.
  

patches welcomed

./C

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Bob Friesenhahn

On Thu, 30 Jul 2009, Richard Elling wrote:


According to Gartner, enterprise SSDs accounted for $92.6M of a 
$585.5M SSD market in June 2009, representing 15.8% of the SSD 
market. STEC recently announced an order for $120M of ZeusIOPS 
drives from a single enterprise storage customer.  From 2007 to 
2008, SSD market grew by 100%. IDC reports Q1CY09 had $4,203M for 
the external disk storage factory revenue, down 16% from Q1CY08 
while total disk storage systems were down 25.8% YOY to $5,616M[*]. 
So while it looks like enterprise SSDs represented less than 1% of 
total storage revenue in 2008, it is the part that is growing 
rapidly. I would not be surprised to see enterprise SSDs at 5-10%


While $$$ are important for corporate bottom lines, when it comes to 
the number of units deployed, $$$ are a useless measure when comparing 
disk drives to SSDs since SSDs are much more expensive and offer much 
less storage space.


Bob
--
Bob Friesenhahn
bfrie...@simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
GraphicsMagick Maintainer,http://www.GraphicsMagick.org/
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] deduplication

2009-07-30 Thread Nathan Hudson-Crim
I'll maintain hope for seeing/hearing the presentation until you guys announce 
that you had NASA store the tape for safe-keeping.

Bump'd.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] crossmnt ?

2009-07-30 Thread roland
Hello !

How can I export a filesystem /export1 so that sub-filesystems within that 
filesystem will be available and usable on the client side without additional 
mount/share effort?

This is possible with the Linux nfsd, and I wonder how this can be done with 
Solaris NFS.
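
For reference, the Linux behaviour I mean is the crossmnt export option, i.e. a
hypothetical /etc/exports line like:

   /export1  *(rw,crossmnt)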

I'd like to use /export1 as a datastore for ESX and create ZFS sub-filesystems 
for each VM in that datastore, for better snapshot handling.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Best ways to contribute WAS: Fed up with ZFS causing data loss

2009-07-30 Thread C. Bergström

Rob Terhaar wrote:

I'm sure this has been discussed in the past. But it's very hard to
understand, or even patch, incredibly advanced software such as ZFS
without a deep understanding of the internals.

It will take quite a while before anyone can start understanding a
file system which was developed behind closed doors for nearly a
decade, and then released into open-source land via tarballs thrown
over the wall. Only recently has the source become more
available to normal humans via projects such as Indiana.

Saying if you don't like it, patch it is an ignorant cop-out, and a
troll response to people's problems with software.
  
bs.  I'm entirely *outside* of Sun and just tired of hearing whining and 
complaints about features not being implemented.  So that the facts are a bit 
clearer, in case you think I'm ignorant...


#1 The source has been available to, and modified by, people outside Sun for, 
I think, 3 years now.


#2 I fully agree the threshold to contribute is *significantly* high.  
(I'm working on a project to reduce this)


#3 ZFS, unlike other things such as the build system, is extremely well 
documented.  There are books on it, code to read, and even instructors 
(Max Bruning) who can teach you about the internals.  My project even 
organized a free online training for this.


This isn't zfs-haters or zfs-.

Use it, love it or help out...

documentation, patches to help lower the barrier of entry, irc support, 
donations, detailed and accurate feedback on needed features and lots of 
other things welcomed..  maybe there's a more productive way to get what 
you need implemented?


I think what I'm really getting at is instead of dumping on this list 
all the problems that need to be fixed and the long drawn out stories..  
File a bug report..  put the time in to explore the issue on your own.. 
I'd bet that if even 5% of the developers using zfs sent a patch of some 
nature we would avoid this whole thread.


Call me a troll if you like.. I'm still going to lose my tact every once 
in a while when all I see is whiny/noisy threads for days..  I actually 
don't mean to single you out.. there just seems to be a lot of 
negativity lately..



./C

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Fed up with ZFS causing data loss

2009-07-30 Thread Glenn Lagasse
* Rob Terhaar (rob...@robbyt.net) wrote:
 I'm sure this has been discussed in the past. But it's very hard to
 understand, or even patch, incredibly advanced software such as ZFS
 without a deep understanding of the internals.

It's also very hard for the primary ZFS developers to satisfy everyone's
itch :-)

 It will take quite a while before anyone can start understanding a
 file system which was developed behind closed doors for nearly a
 decade, and then released into open-source land via tarballs thrown
 over the wall. Only recently has the source become more
 available to normal humans via projects such as Indiana.

I don't think you've got your facts straight.  OpenSolaris was launched
in June 2005.  ZFS was integrated October 31st, 2005 after being in
development (of a sort) from October 31st 2001[1].  It hasn't been
developed behind closed doors for nearly a decade.  Four years at most
and it was available for all to see (in much better form than 'tarballs
thrown over the wall') LONG before Indiana was even a gleam in Ian's
eyes.

 Saying if you don't like it, patch it is an ignorant cop-out, and a
 troll response to people's problems with software.

And people seemingly expecting that the ZFS team (or any technology team
working on OpenSolaris) has infinite cycles to solve everyone's itches
is equally ignorant imo.  OpenSolaris (the project) is meant to be a
community project.  As in allowing contributions from entities outside
of sun.com.  So, saying 'patches welcomed' is mostly an appropriate
response (depending on how it's presented) because they are in fact
welcome.  That's sort of how opensource works (at least in my
experience).  If the primary developers aren't 'scratching your itch'
then you (or someone you can get to do the work for you) can fix your
own problems and contribute them back to the community as a whole where
everyone wins.

Cheers,

-- 
Glenn

1 - http://blogs.sun.com/bonwick/entry/zfs_the_last_word_in
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] Managing ZFS Replication

2009-07-30 Thread Joseph L. Casale
Anyone come up with a solution to manage the replication of ZFS snapshots?
The send/recv bookkeeping gets tricky for everything after the first transfer,
unless you purge the destination of snapshots and then force a full stream into it.

I was hoping to script a daily update, but I see that I would have to keep track
of what's been done on both sides when using the -i|-I syntax, so it would not
be reliable in a hands-off script.
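
For illustration, a minimal sketch of what I mean (the dataset, host, and snapshot
naming are hypothetical, and it assumes the destination already holds the previous
snapshot):

   #!/bin/sh
   # hypothetical names throughout
   DS=tank/data
   DEST=backuphost
   # newest existing snapshot of $DS becomes the incremental source
   PREV=`zfs list -H -t snapshot -o name -s creation | grep "^$DS@" | tail -1`
   NOW=$DS@daily-`date +%Y%m%d`
   zfs snapshot $NOW
   # send only the changes since $PREV; the receive side must already have $PREV
   zfs send -i "$PREV" "$NOW" | ssh $DEST zfs recv -F tank/backup/data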

Would AVS be a possible solution in a mixed S10/OSol/SXCE environment? I presume
that would make it fairly trivial, but right now I am duplicating data from an
S10 box to an OSol snv_118 box, with hardware/application needs forcing the
two platforms.

Thanks for any ideas!
jlc
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Intel X25-E SSD in x4500 followup

2009-07-30 Thread Alex Li
We found lots of SAS controller resets and errors to the SSDs on our servers 
(OpenSolaris 2008.05 and 2009.06 with a third-party JBOD and X25-Es). Whenever 
there is an error, a MySQL insert takes more than 4 seconds. It was quite 
scary.

Eventually our engineer disabled the Fault Management SMART polling and it 
seems to be working.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Managing ZFS Replication

2009-07-30 Thread Brent Jones
On Thu, Jul 30, 2009 at 3:54 PM, Joseph L.
Casalejcas...@activenetwerx.com wrote:
 Anyone come up with a solution to manage the replication of ZFS snapshots?
 The send/recv criteria gets tricky with all but the first unless you purge
 the destination of snapshots, then force a full stream into it.

 I was hoping to script a daily update but I see that I would have to keep 
 track
 of what's been done on both sides when using the -i|I syntax so it would not
 be reliable in a hands off script.

 Would AVS be a possible solution in a mixed S10/Osol/SXCE environment? I 
 presume
 that would make it fairly trivially but right now I am duplicating data from 
 an
 s10 box to an osol snv118 box based on hardware/application needs forcing the
 two platforms.

 Thanks for any ideas!
 jlc
 ___
 zfs-discuss mailing list
 zfs-discuss@opensolaris.org
 http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


I came up with a somewhat custom script, using some pre-existing
scripts I found about the land.

http://www.brentrjones.com/?p=45

I schedule some file systems every 5 minutes, hour, and nightly
depending on requirements. It has worked quite well for me, and proved
to be quite useful in restoring as well (already had to use it).
E-mails status reports, handles conflicts in a simple but effective
way, and replication can be reversed by just starting to run it from
the other system.

I expanded on it by being able to handle A->B and B->A replication
(mirror half of A to B, and half of B to A for paired redundancy).
I'll post that version up in a few weeks when I clean it up a little.

Credits go to Constantin Gonzalez for inspiration and source for parts
of my script.
http://blogs.sun.com/constantin/


-- 
Brent Jones
br...@servuhome.net
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Jorgen Lundman


X25-E would be good, but some pools have no spares, and since you can't 
remove vdevs, we'd have to move all customers off the x4500 before we 
can use it.


Ah, it just occurred to me that perhaps, for our specific problem, we will 
buy two X25-Es and replace the root mirror. The OS and the ZIL log can live 
together, and we can put /var in the data pool. That way we would not need to 
rebuild the data pool and do all the work that comes with that.


Shame I can't zpool replace to a smaller disk (500GB HDD to 32GB SSD) 
though, so I will have to lucreate and reboot once.


Lund

--
Jorgen Lundman   | lund...@lundman.net
Unix Administrator   | +81 (0)3 -5456-2687 ext 1017 (work)
Shibuya-ku, Tokyo| +81 (0)90-5578-8500  (cell)
Japan| +81 (0)3 -3375-1767  (home)
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] article on btrfs, comparison with zfs

2009-07-30 Thread James C. McPherson


An introduction to btrfs, from somebody who used to work on ZFS:

http://www.osnews.com/story/21920/A_Short_History_of_btrfs



James C. McPherson
--
Senior Kernel Software Engineer, Solaris
Sun Microsystems
http://blogs.sun.com/jmcp   http://www.jmcp.homeunix.com/blog
Kernel Conference Australia - http://au.sun.com/sunnews/events/2009/kernel
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] Install and boot from USB stick?

2009-07-30 Thread tore
Well, that seems to work well! :) Still, the issue has now changed from not 
being able to install to USB, to not being able to properly boot from USB.

The GRUB menu is presented, no problem there, and then the OpenSolaris progress 
bar. But I'm unable to find a way to view any details of what's happening there. 
The progress bar just keeps scrolling and scrolling.

Suggestions?
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] article on btrfs, comparison with zfs

2009-07-30 Thread C. Bergström

James C. McPherson wrote:

An introduction to btrfs, from somebody who used to work on ZFS:

http://www.osnews.com/story/21920/A_Short_History_of_btrfs
  
*very* interesting article.. Not sure why James didn't directly link to it, but 
it is courtesy of Valerie Aurora (formerly Henson):


http://lwn.net/Articles/342892/




___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] [n/zfs-discuss] Strange speeds with x4500, Solaris 10 10/08

2009-07-30 Thread Ross
Great idea, much neater than most of my suggestions too :-)
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss