Re: [zfs-discuss] Zfs deduplication

2009-08-03 Thread Andre van Eyssen

On Tue, 4 Aug 2009, James C. McPherson wrote:


If so, did anyone see the presentation?


Yes. Everybody who attended.


You know, I think we might even have some evidence of their attendance!

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2177.jpg.html

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2178.jpg.html

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2179.jpg.html

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2184.jpg.html

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2186.jpg.html

http://mexico.purplecow.org/static/kca_spk/tn/IMG_2228.jpg.html

So they obviously attended, but it takes time to get video and 
documentation out the door.


You can already watch their participation in the ZFS panel online:

http://www.ustream.tv/recorded/1810931

--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, Andriy Gapon wrote:


"Subdirectory is automatically a new filesystem" is the property - an 
administrator turns on this magic property of a filesystem, and after that 
every mkdir *in the root* of that filesystem creates a new filesystem. The new 
filesystems have default/inherited properties, except for the magic property, 
which is off.

Right now I see this as being mostly useful for /home. The main benefit in 
this case is that various user administration tools can work unmodified and do 
the right thing when an administrator wants a policy of a separate fs per 
user. But I am sure that there could be other interesting uses for this.


It's a nice idea, but zfs filesystems consume memory and have overhead. 
This would make it trivial for a non-root user (assuming they have 
permissions) to crush the host under the weight of .. mkdir.


$ mkdir -p waste/resources/now/waste/resources/now/waste/resources/now

(now make that much longer and put it in a loop)
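
For illustration, here's a hypothetical sketch of that loop in C - not tested 
against the proposed feature, but any unprivileged account with write access 
could run something like it:

    /*
     * Hypothetical sketch: runaway directory creation. On a filesystem
     * with the proposed magic property, each mkdir() would create a new
     * ZFS filesystem, so a few seconds of this would pin a lot of
     * kernel memory.
     */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <sys/types.h>

    int main(void)
    {
        char name[32];
        unsigned long i;

        for (i = 0; i < 100000; i++) {  /* or simply: for (;;) */
            (void) snprintf(name, sizeof (name), "waste%lu", i);
            if (mkdir(name, 0755) != 0) {
                perror("mkdir");
                break;
            }
        }
        return (0);
    }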

Also, will rmdir call zfs destroy? Snapshots interacting with that could 
be somewhat unpredictable. What about rm -rf?


It'd either require major surgery to userland tools, including every 
single program that might want to create a directory, or major surgery to 
the kernel. The former is unworkable, the latter .. scary.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, David Magda wrote:


Which makes me wonder: is there a programmatic way to determine if a path
is on ZFS?


statvfs(2)
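
A minimal sketch of the check, assuming Solaris, where struct statvfs carries 
the filesystem name in f_basetype:

    /* Sketch: report the filesystem type of a path via statvfs(2). */
    #include <sys/statvfs.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        struct statvfs vfs;

        if (argc != 2) {
            (void) fprintf(stderr, "usage: %s path\n", argv[0]);
            return (2);
        }
        if (statvfs(argv[1], &vfs) != 0) {
            perror("statvfs");
            return (2);
        }
        (void) printf("%s is on %s\n", argv[1], vfs.f_basetype);
        return (strcmp(vfs.f_basetype, "zfs") == 0 ? 0 : 1);
    }

(On other platforms the field differs - Linux statfs(2) reports a numeric 
f_type, for example - so treat this as illustrative rather than portable.)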

--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, Andriy Gapon wrote:


Well, I specifically stated that this property should not be recursive, i.e. 
it should work only in the root of a filesystem. When setting this property on 
a filesystem, an administrator should carefully set permissions to make sure 
that only trusted entities can create directories there.


Even limited to the root of a filesystem, it still gives a user the 
ability to consume resources rapidly. While I appreciate the fact that it 
would be restricted by permissions, I can think of a number of use cases 
where it could suddenly tank a host. One use that might pop up, for 
example, would be cache spools - which often contain *many* directories. 
One runaway and kaboom.


We generally use hosts now with plenty of RAM and the per-filesystem 
overhead for ZFS doesn't cause much concern. However, on a scratch box, 
try creating a big stack of filesystems - you can end up with a pool that 
consumes so much memory you can't import it!



The 'rmdir' question requires some thinking; my first reaction is that it 
should do a zfs destroy...


.. which will fail if there's a snapshot, for example. The problem seems 
to be reasonably complex - compounded by the fact that many programs that 
create or remove directories do so directly - not by calling externals 
that would be ZFS aware.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] feature proposal

2009-07-29 Thread Andre van Eyssen

On Wed, 29 Jul 2009, Mark J Musante wrote:


Yes, if it's local. Just use df -n $path and it'll spit out the filesystem 
type.  If it's mounted over NFS, it'll just say something like nfs or autofs, 
though.


$ df -n /opt
Filesystem             kbytes     used     avail capacity  Mounted on
/dev/md/dsk/d24      33563061 11252547  21974884    34%    /opt
$ df -n /sata750
Filesystem             kbytes     used     avail capacity  Mounted on
sata750            2873622528       77 322671575     1%    /sata750

Neither gives the filesystem type. It's easy to spot the zfs filesystem by the 
lack of a recognisable device path, though.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] Another user loses his pool (10TB) in this case and 40 days work

2009-07-19 Thread Andre van Eyssen

On Sun, 19 Jul 2009, Richard Elling wrote:

I do, even though I have a small business.  Neither InDesign nor 
Illustrator will be ported to Linux or OpenSolaris in my lifetime... 
besides, iTunes rocks and it is the best iPhone developer's environment 
on the planet.


Richard,

I think the point that Gavin was trying to make is that a sensible 
business would commit their valuable data back to a fileserver running on 
solid hardware with a solid operating system rather than relying on their 
single-spindle laptops to store their valuable content - not making any 
statement on the actual desktop platform.


For example, I use a mixture of Windows, MacOS, Solaris and OpenBSD around 
here, but all the valuable data is stored on a zpool located on a SPARC 
server (obviously with ECC RAM) with UPS power. With Windows around, I 
like the fact that I don't need to think twice before reinstalling those 
machines.


Andre.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] deduplication

2009-07-12 Thread Andre van Eyssen

On Sun, 12 Jul 2009, Cyril Plisko wrote:


There is ongoing speculation about what/when/how deduplication will be in 
ZFS, and I am curious: what is the reason to keep the thing secret? I always 
thought open source assumes an open development process. What exactly are the 
people behind the deduplication effort trying to prove by keeping their 
mouths shut?

Something feels wrong...


The conference is less than a week away. It's hardly keeping things secret 
to announce at a public conference!


You should remember that the Amber Road product was announced at a 
conference, too - and was kept reasonably quiet until the announcement. 
I'm just glad I was at the conference where Amber Road was announced and 
will be attending KCA!


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] deduplication

2009-07-12 Thread Andre van Eyssen

On Sun, 12 Jul 2009, Cyril Plisko wrote:


Open source is much more than throwing the code over the wall. Heck, in the 
early pilot days I was told by a number of Sun engineers that the reason 
things were taking time was exactly that - we do not want to just throw the 
code over the wall; we want to build a community.


With respect, Sun is entitled to develop new features in whichever manner 
suits their ends. While the community may desire fresh, juicy source to 
dig through on a regular basis, it's not always going to land in your lap.


You can't always get what you want. In this case, however, you will get 
what you need - the finished product.


Finally, there is one rather simple way to pull development out into the 
open - write some relevant code and be part of the development process. If 
people delivered code as quickly as they deliver words, the development 
process would be wide open.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] Why is Solaris 10 ZFS performance so terrible?

2009-07-06 Thread Andre van Eyssen

On Mon, 6 Jul 2009, Gary Mills wrote:


As for a business case, we just had an extended and catastrophic
performance degradation that was the result of two ZFS bugs.  If we
have another one like that, our director is likely to instruct us to
throw away all our Solaris toys and convert to Microsoft products.


If you change platform every time you get two bugs in a product, you must 
cycle platforms on a pretty regular basis!


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-07-01 Thread Andre van Eyssen

On Thu, 2 Jul 2009, Ian Collins wrote:


5+ is typical for telco use.


Aah, but we start getting into rooms full of giant 2V wet lead-acid cells 
and giant busbars the size of railway tracks.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] ZFS, power failures, and UPSes

2009-06-30 Thread Andre van Eyssen

On Tue, 30 Jun 2009, Monish Shah wrote:

The evil tuning guide says "The ZIL is an essential part of ZFS and should 
never be disabled."  However, if you have a UPS, what can go wrong that 
really requires the ZIL?


Without addressing a single ZFS-specific issue:

* panics
* crashes
* hardware failures
- dead RAM
- dead CPU
- dead systemboard
- dead something else
* natural disasters
* UPS failure
* UPS failure (must be said twice)
* Human error (what does this button do?)
* Cabling problems (say, where did my disks go?)
* Malicious actions (Fired? Let me turn their power off!)

That's just a warm-up; I'm sure people can add the ZFS-specific reasons, and 
also point out the fallacy that a UPS does anything more than mitigate one 
particular single point of failure.


Don't forget to buy two UPSes and split your machine across both. And 
don't forget to actually maintain the UPS. And check the batteries. And 
schedule a load test.


The single best way to learn about the joys of UPS behaviour is to sit 
down and have a drink with a facilities manager who has been doing the job 
for at least ten years. At least you'll hear some funny stories about the 
day a loose screw on one floor took out a house UPS and 100+ hosts and NEs 
with it.


Andre.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] Any news on deduplication?

2009-06-30 Thread Andre van Eyssen

On Tue, 30 Jun 2009, MC wrote:


Any news on the ZFS deduplication work being done?  I hear Jeff Bonwick might 
speak about it this month.


Yes, it is definitely on the agenda for Kernel Conference Australia 
(http://www.kernelconference.net) - you should come along!


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] Best controller card for 8 SATA drives ?

2009-06-21 Thread Andre van Eyssen

On Sun, 21 Jun 2009, Carson Gaspar wrote:

I'll chime in as a happy owner of the LSI SAS 3081E-R PCI-E board. It works 
just fine. You need to get lsiutil from the LSI web site to fully access 
all the functionality, and they cleverly hide the download link only under 
their FC HBAs on their support site, even though it works for everything.


I'll add another vote for the LSI products. I have a four port PCI-X card 
in my V880, and the performance is good and the product is well behaved. 
The only caveats:


1. Make sure you upgrade the firmware ASAP
2. You may need to use lsiutil to fiddle the target mappings

Andre.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] how to destroy a pool by id?

2009-06-20 Thread Andre van Eyssen

On Sat, 20 Jun 2009, Cindy Swearingen wrote:


I wish we had a zpool destroy option like this:

# zpool destroy -really_dead tank2


Cindy,

The moment we implemented such a thing, there would be a rash of requests 
saying:


a) I just destroyed my pool with -really_dead - how can I get my data 
back??!
b) I was able to recover my data from -really_dead - can we have 
-ultra-nuke please?


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] What can I do to shorten the long awkward names of snapshots?

2009-04-15 Thread Andre van Eyssen

On Wed, 15 Apr 2009, Harry Putnam wrote:



Would become:
 a:freq-041509_1630


Can I suggest perhaps something inspired by the old convention for DNS 
serials, along the lines of fyyyymmddhhmm? Like:


a:f200904151630

This makes things easier to sort and lines up in a tidy manner.
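
If you wanted to generate that suffix programmatically, here's a minimal 
sketch in C - strftime(3C) does the formatting, and the a:f prefix is just 
the naming from the example above:

    /* Sketch: build a sortable snapshot suffix in the fyyyymmddhhmm style. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        char buf[32];
        time_t now = time(NULL);

        (void) strftime(buf, sizeof (buf), "f%Y%m%d%H%M", localtime(&now));
        (void) printf("a:%s\n", buf);   /* e.g. a:f200904151630 */
        return (0);
    }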


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] ZFS Panic

2009-04-09 Thread Andre van Eyssen

On Fri, 10 Apr 2009, Rince wrote:


FWIW, I strongly expect live ripping of a SATA device not to panic the disk
layer. It explicitly shouldn't panic the ZFS layer, as ZFS is supposed to be
fault-tolerant and a drive dropping away at any time is a rather expected
scenario.


Ripping a SATA device out runs a goodly chance of confusing the 
controller. If you'd had this problem with fibre channel or even SCSI, I'd 
find it a far bigger concern. IME, IDE and SATA just don't hold up to the 
abuses we'd like to level at them. Of course, this boils down to 
controller and enclosure and a lot of other random chances for disaster.


In addition, where there is a procedure to gently remove the device, use 
it. We don't just yank disks from the FC-AL backplanes on V880s, because 
there is a procedure for handling this even for failed disks. The five 
minutes to do it properly is a good investment compared to much longer 
downtime from a fault condition arising from careless manhandling of 
hardware.


--
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Andre van Eyssen
On Sun, 1 Feb 2009, Richard Elling wrote:


 The drives that Sun sells will come with the correct bracket.
 Ergo, there is no reason to sell the bracket as a separate
 item unless the customer wishes to place non-Sun disks in
 them.  That represents a service liability for Sun, so they are
 not inclined to do so.  It is really basic business.
 -- richard

This thread has been running for a little too long, considering the issues 
are pretty simple.

Sun sells a JBOD storage product, along with disks and accessories. The 
disks they provide are mounted in the correct carriers for the array. The 
pricing of the disks and accessories is part of the price calculation for 
the entire system - you could provide the array empty, with a full set of 
empty carriers, but the price would go up.

No brand name storage vendor supports or encourages installation of third 
party disks. It's not the way the business works. If the customer wants 
the reassurance, quality, (etc, etc) associated with buying brand name 
storage, they purchase the disks from the same vendor. If price is more 
critical than these factors, there's a wide range of "white box" 
solutions on the market. Try approaching IBM, HP, EMC, HDS, NetApp or 
similar and ask to buy an empty JBOD and spare trays - it's not happening.

Yes, this is unfortunate for those who would like to purchase a Sun JBOD 
for home or for a microbusiness. However, these users are probably aware 
that if they want to buy their own spindles and run an unsupported 
configuration, their local metal shop will be happy to bang out some 
frames. Not to mention the fact that one could always run up a set of 
sleds in the shed without too much strife - in fact, in the past when 
spuds were less commonly available, I've seen a home user make sleds out 
of wood that did the job.

Now, I'm looking forward to seeing the first Sun JBOD loaded up with 
CNC-milled mahogany sleds. It'll look great.

-- 
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org



Re: [zfs-discuss] j4200 drive carriers

2009-02-01 Thread Andre van Eyssen
On Sun, 1 Feb 2009, Bob Friesenhahn wrote:


 I am worried that Sun is primarily interested in new business and
 tends to not offer replacement/new drives for as long as the actual
 service-life of the array.  What is the poor customer to do when Sun
 is no longer willing to offer a service contract and Sun is no longer
 willing to sell drives (or even the carriers) for the array?

You can still procure replacement drives for real vintage kit, like the 
A1000/D1000 arrays. I doubt your argument is valid. As a side point, by 
the time these arrays are dead  buried, the sleds for them will no doubt 
be as common as spuds (and don't we all have at least 30 of those lying 
around?)

 Sometimes it is only a matter of weeks before Sun stops offering
 supportive components.  For example, my Ultra 40 was only discontinued
 a month or so ago but already Sun somehow no longer lists memory for
 it (huh?).

If your trusty Sun partner couldn't supply you with memory for an 
Ultra-40, I'd take that as a sign to find a new partner. EOSL products 
vanish from websites but parts can still be ordered for them.

-- 
Andre van Eyssen.
mail: an...@purplecow.org  jabber: an...@interact.purplecow.org
purplecow.org: UNIX for the masses http://www2.purplecow.org
purplecow.org: PCOWpix http://pix.purplecow.org
