Re: [zfs-discuss] repost - high read iops

2009-12-29 Thread przemolicc
On Mon, Dec 28, 2009 at 01:40:03PM -0800, Brad wrote:
 This doesn't make sense to me. You've got 32 GB, why not use it?
 Artificially limiting the memory use to 20 GB seems like a waste of
 good money.
 
 I'm having a hard time convincing the dbas to increase the size of the SGA to 
 20GB because their philosophy is, no matter what eventually you'll have to 
 hit disk to pick up data that's not stored in cache (arc or l2arc).  The 
 typical database server in our environment holds over 3TB of data.

Brad,

are your DBAs aware that if you increase your SGA (currently 4 GB)
- to 8  GB - you get 100 % more memory for SGA
- to 16 GB - you get 300 % more memory for SGA
- to 20 GB - you get 400 % ...

If they are not aware, well ...

But try to be patient - I had a similar situation. It took quite a long time
to convince our DBAs to increase the SGA from 16 GB to 20 GB. Finally they
did :-)

You can always use the stronger argument that not using memory you have
already bought is wasting _money_.

Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/


Re: [zfs-discuss] new logbias property

2009-08-11 Thread przemolicc
On Tue, Aug 11, 2009 at 05:42:31AM -0700, Robert Milkowski wrote:
 Hi,
 
 I like the new feature but I was thinking that maybe the keywords being used 
 should be different?
 
 Currently it is:
 
 # zfs set logbias=latency {dataset}
 # zfs set logbias=throughput {dataset}
 
 Maybe it would be more clear this way:
 
 # zfs set logdest=dedicated {dataset}
 # zfs set logdest=pool {dataset}
 
 or even intentlog={dedicated|pool} ?
 
 IMHO it would better describe what the feature actually does.

I am not sure whether 'logbias' is better than 'logdest', but IMHO
'latency' and 'throughput' are much more intuitive :-)

Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/


Re: [zfs-discuss] zfs compression - btrfs compression

2008-11-03 Thread przemolicc
On Mon, Nov 03, 2008 at 12:33:52PM -0600, Bob Friesenhahn wrote:
 On Mon, 3 Nov 2008, Robert Milkowski wrote:
  Now, the good filter could be to use MAGIC numbers within files or
  approach btrfs come up with, or maybe even both combined.
 
 You are suggesting that ZFS should detect a GIF or JPEG image stored 
 in a database BLOB.  That is pretty fancy functionality. ;-)

Maybe some general approach (not strictly GIF- or JPEG-oriented)
could be useful.

Give people a choice and they will love ZFS even more.
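
One rough, admin-level way to give people that choice today (the dataset and
file names below are made up) is to test-compress a representative sample
before enabling compression on a dataset:

# compress a 10 MB sample and compare the output size with the input size
dd if=/pool/data/sample.file bs=1024k count=10 2>/dev/null | gzip -1 | wc -c

# enable compression only if the sample shrank noticeably
zfs set compression=on pool/data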

Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/


Re: [zfs-discuss] Automatic removal of old snapshots

2008-09-25 Thread przemolicc
On Thu, Sep 25, 2008 at 11:43:51AM +0200, Nils Goroll wrote:
  Before re-inventing the wheel, does anyone have any nice shell script to do 
  this 
  kind of thing (to be executed from cron)?
 
 http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_10
 http://blogs.sun.com/timf/entry/zfs_automatic_snapshots_0_11

Storage Checkpoints in Veritas software have this feature (removing the
oldest checkpoint when the filesystem reaches 100% usage) by default.
Why not add such an option to ZFS?
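
Until then, something like the rough cron sketch below could approximate it
(the dataset, mount point and threshold are made-up examples, and it assumes
a zfs list that supports -H, -o, -t and -s):

#!/bin/sh
# Destroy the oldest snapshot of a dataset once its filesystem crosses
# a usage threshold.  Not production code - just a sketch.
DATASET=tank/home
MOUNTPOINT=/tank/home
LIMIT=90

USED=`df -k $MOUNTPOINT | awk 'NR==2 { sub(/%/,"",$5); print $5 }'`
if [ "$USED" -ge "$LIMIT" ]; then
    # oldest snapshot first ('-s creation' sorts ascending, '-r' includes children)
    OLDEST=`zfs list -H -t snapshot -o name -s creation -r $DATASET | head -1`
    [ -n "$OLDEST" ] && zfs destroy "$OLDEST"
fi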

Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/


Re: [zfs-discuss] Automatic removal of old snapshots

2008-09-25 Thread przemolicc
On Thu, Sep 25, 2008 at 11:30:04AM +0100, Tim Foster wrote:
 On Thu, 2008-09-25 at 12:07 +0200, [EMAIL PROTECTED] wrote:
  On Thu, Sep 25, 2008 at 11:43:51AM +0200, Nils Goroll wrote:
 
  Storage Checkpoints in Veritas software has this feature (removing
  the oldest checkpoint in case of 100% filesystem usage) by default.
  Why not add such option to ZFS ?
 
  - because we shouldn't assume a user's intentions - some would consider
 automatic destruction of data (even historical data) a p1 bug.

Tim,

nobody is asking you to assume the user's intentions. Just give us a
snapshot-related property which we can set to on/off, and everybody can
set up ZFS according to his/her needs.
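
Purely as an illustration of the kind of switch being asked for (no such
property exists today), plus a real flag that can already be attached on
builds which support ZFS user properties and then honoured by a cleanup
script run from cron:

# hypothetical - this property does NOT exist:
#   zfs set snapshot_autopurge=on tank/home

# real user property, usable as an opt-in flag for a cleanup script:
zfs set com.example:snap-autopurge=on tank/home
zfs get -H -o value com.example:snap-autopurge tank/home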

Regards
Przemyslaw Bak (przemol)
--
http://przemol.blogspot.com/


Re: [zfs-discuss] filebench for Solaris 10?

2008-02-17 Thread przemolicc
On Sat, Feb 16, 2008 at 07:26:27PM -0600, Bob Friesenhahn wrote:
 Some of us are still using Solaris 10 since it is the version of 
 Solaris released and supported by Sun.  The 'filebench' software from 
 SourceForge does not seem to install or work on Solaris 10. 
 [...]

I am not sure if you are right:

bash-3.00# cat /etc/release
Solaris 10 8/07 s10x_u4wos_12b X86
   Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
Assembled 16 August 2007
bash-3.00# pkginfo|grep -i filebench
application filebench      FileBench
bash-3.00# pkginfo -l filebench
   PKGINST:  filebench
      NAME:  FileBench
  CATEGORY:  application
      ARCH:  i386
   VERSION:  1.1.0
   BASEDIR:  /opt
    VENDOR:  filebench.org
      DESC:  FileBench
    PSTAMP:  1.1.0_snv
  INSTDATE:  Jan 28 2008 13:20
     EMAIL:  [EMAIL PROTECTED]
    STATUS:  completely installed
     FILES:  261 installed pathnames
                  5 linked files
                 27 directories
                 62 executables
                  1 setuid/setgid executables
              20158 blocks used (approx)

Regards
przemol

-- 
http://przemol.blogspot.com/


[zfs-discuss] vxfs vs ufs vs zfs

2008-02-17 Thread przemolicc
Hello,

I have just done a comparison of all the above filesystems using the latest
filebench. If you are interested:
http://przemol.blogspot.com/2008/02/zfs-vs-vxfs-vs-ufs-on-x4500-thumper.html

Regards
przemol

-- 
http://przemol.blogspot.com/


Re: [zfs-discuss] Which DTrace provider to use

2008-02-14 Thread przemolicc
On Tue, Feb 12, 2008 at 10:21:44PM -0800, Jonathan Loran wrote:
 
 Hi List,
 
 I'm wondering if one of you expert DTrace guru's can help me.  I want to 
 write a DTrace script to print out a histogram of how long IO requests 
 sit in the service queue.  I can output the results with the quantize 
 method.  I'm not sure which provider I should be using for this.  Does 
 anyone know?  I can easily adapt one of the DTrace Toolkit routines for 
 this, if I can find the provider.
 
 I'll also throw out the problem I'm trying to meter.  We are using ZFS 
 on a large SAN array (4TB).  The pool on this array serves up a lot of 
 users, (250 home file systems/directories) and also /usr/local and other 
 OTS software.  It works fine most of the time, but then gets overloaded 
 during busy periods.  I'm going to reconfigure the array to help with 
 this, but I sure would love to have some metrics to know how big a 
 difference my tweaks are making.  Basically, the problem users 
 experience, when the load shoots up are huge latencies.  An ls on a 
 non-cached directory, which usually is instantaneous, will take 20, 30, 
 40 seconds or more.   Then when the storage array catches up, things get 
 better.  My clients are not happy campers. 
 
 I know, I know, I should have gone with a JBOD setup, but it's too late 
 for that in this iteration of this server.  We we set this up, I had the 
 gear already, and it's not in my budget to get new stuff right now.
 
 Thanks for any help anyone can offer.

I have faced a similar problem (although not exactly the same one) and was
going to monitor the disk queue with DTrace, but couldn't find any docs/URLs
about it. Finally I asked Chris Gerhard for help. He partially answered via
his blog: http://blogs.sun.com/chrisg/entry/latency_bubble_in_your_io

Maybe it will help you.
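
If it is of any use, here is a rough, untested sketch of the quantize
histogram you describe, keyed on the buf pointer that the io provider passes
to both probes (it assumes every io:::start eventually gets a matching
io:::done):

dtrace -n '
io:::start { ts[arg0] = timestamp; }
io:::done /ts[arg0]/ {
    @["I/O service time (ns)"] = quantize(timestamp - ts[arg0]);
    ts[arg0] = 0;
}'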

Regards
przemol

--
http://przemol.blogspot.com/


[zfs-discuss] Ditto blocks in S10U4 ?

2008-01-22 Thread przemolicc
bash-3.00# cat /etc/release
Solaris 10 8/07 s10x_u4wos_12b X86
   Copyright 2007 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
Assembled 16 August 2007

(with all the latest patches)

bash-3.00# zpool list
NAME     SIZE    USED   AVAIL    CAP  HEALTH  ALTROOT
zpool1  20.8T   5.44G   20.8T     0%  ONLINE  -

bash-3.00# zpool upgrade -v
This system is currently running ZFS version 4.

The following versions are supported:

VER  DESCRIPTION
---  --------------------------------------------------------
 1   Initial ZFS version
 2   Ditto blocks (replicated metadata)
 3   Hot spares and double parity RAID-Z
 4   zpool history

For more information on a particular version, including supported
releases, see:

http://www.opensolaris.org/os/community/zfs/version/N

Where 'N' is the version number.

bash-3.00# zfs set copies=2 zpool1
cannot set property for 'zpool1': invalid property 'copies'

From http://www.opensolaris.org/os/community/zfs/version/2/ :
"... This version includes support for Ditto Blocks, or replicated metadata."

Can anybody shed any light on this?

Regards
przemol

-- 
http://przemol.blogspot.com/


Re: [zfs-discuss] ZFS write time performance question

2007-12-06 Thread przemolicc
On Wed, Dec 05, 2007 at 09:02:43PM -0800, Tim Cook wrote:
 what firmware revision are you at?

Revision: 415G


Regards
przemol

-- 
http://przemol.blogspot.com/


Re: [zfs-discuss] ZFS write time performance question

2007-12-03 Thread przemolicc
On Thu, Nov 29, 2007 at 02:05:13PM -0800, Brendan Gregg - Sun Microsystems 
wrote:
 
 I'd recommend running filebench for filesystem benchmarks, and see what
 the results are:
 
   http://www.solarisinternals.com/wiki/index.php/FileBench
 
 Filebench is able to purge the ZFS cache (export/import) between runs,
 and can be customised to match real world workloads.  It should improve
 the accuracy of the numbers.  I'm expecting filebench to become *the*
 standard tool for filesystem benchmarks.

And some results (for OLTP workload):

http://przemol.blogspot.com/2007/08/zfs-vs-vxfs-vs-ufs-on-scsi-array.html

Regards
przemol

--
http://przemol.blogspot.com/


Re: [zfs-discuss] Direct I/O ability with zfs?

2007-10-02 Thread przemolicc
On Tue, Oct 02, 2007 at 01:20:24PM -0600, eric kustarz wrote:
 
 On Oct 2, 2007, at 1:11 PM, David Runyon wrote:
 
  We are using MySQL, and love the idea of using zfs for this.  We  
  are used to using Direct I/O to bypass file system caching (let the  
  DB do this).  Does this exist for zfs?
 
 Not yet, see:
 6429855 Need way to tell ZFS that caching is a lost cause
 
 Is there a specific reason why you need to do the caching at the DB  
 level instead of the file system?  I'm really curious as i've got  
 conflicting data on why people do this.  If i get more data on real  
 reasons on why we shouldn't cache at the file system, then this could  
 get bumped up in my priority queue.

At least two reasons:
http://developers.sun.com/solaris/articles/mysql_perf_tune.html#6
http://blogs.sun.com/glennf/entry/where_do_you_cache_oracle
(the first example proves that this issue is not Oracle-specific)

Regards
przemol


--
http://przemol.blogspot.com/


Re: [zfs-discuss] Performance Tuning - ZFS, Oracle and T2000

2007-08-20 Thread przemolicc
On Sun, Aug 19, 2007 at 09:32:02AM +0200, Louwtjie Burger wrote:
 http://blogs.sun.com/realneel/entry/zfs_and_databases
 
 http://www.sun.com/servers/coolthreads/tnb/parameters.jsp
 
 http://www.sun.com/servers/coolthreads/tnb/applications_oracle.jsp
 
 Be careful with long running single queries...  you want to throw lots
 of users at it ... or parallelize as much as possible!
 
 Also keep in mind that a T2000 running flat out has the potential to
 generate LOTS of I/O... it's vital to have a decent storage system
 supporting it.
 
 On 8/19/07, Damian Reed - Systems Support Engineer [EMAIL PROTECTED] wrote:
  Hi All,
 
  We are currently in the process of testing Solaris 10 with ZFS and
  Oracle and are running it on a T2000.
 
  When checking performance statistics on the T2000, we notice that only
  one thread of the CPU appears to be doing any of the processing.
  Leaving all other threads seemingly idle.
 
  Are there any tuning parameters that need to be set or changed when
  running Oracle on T2000 using ZFS?  Can anyone suggest what I might look
  for?
 
  Any initial suggestions would be greatly appreciated.
 
  Regards,
 
  Damian Reed

The following page might also be helpful:
http://blogs.sun.com/ValdisFilks/entry/improving_i_o_throughput_for

Regards
przemol

--
http://przemol.blogspot.com/


Re: [zfs-discuss] ZFS and VXVM/VXFS

2007-07-04 Thread przemolicc
On Mon, Jul 02, 2007 at 12:30:32PM -0700, Richard Elling wrote:
 Magesh R wrote:
  We are looking at the alternatives to VXVM/VXFS. One of the feature
  which we liked in Veritas, apart from the obvious ones is the
  ability to call the disks by name and group them in to a disk group.
 
  Especially in SAN based environment where the disks may be shared by
  multiple machines, it is very easy to manage them by disk group
  names rather than cxtxdx numbers.
 
  Does zfs offer such capabilities?

 ZFS greatly simplifies disk management.  I would argue that it eliminates the
 need for vanity naming or some features of diskgroups.  I suggest you
 read through
 the docs on how to administer and setup ZFS, try a few examples, and
 then ask
 specific questions.

 Nit: You confused me with disks may be shared by multiple machines
 because LUNs
 have no protection below the LUN level, and if your disk is a LUN,
 then sharing it
 leaves the data unprotected.  Perhaps you are speaking of LUNs on a
 RAID array?
   -- richard

Richard,
It is rather rare that in a SAN-based environment you are given access to
particular disks. :-)

Magesh,
in the ZFS world you create pools. Each pool has its own name. Later you
refer to each pool (and each filesystem) by its name.
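
For example (device names are made up), once the pool exists you never touch
cXtXdX names again in day-to-day work:

# build a pool from two SAN LUNs (hypothetical device names)
zpool create orapool c4t0d0 c5t0d0

# everything else is referred to by name
zfs create orapool/oradata
zfs set mountpoint=/u01/oradata orapool/oradata
zfs list -r orapool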

Regards
przemol

--
http://przemol.blogspot.com/


Re: [zfs-discuss] Layout for multiple large streaming writes.

2007-03-12 Thread przemolicc
On Mon, Mar 12, 2007 at 09:34:22AM +0100, Robert Milkowski wrote:
 Hello przemolicc,
 
 Monday, March 12, 2007, 8:50:57 AM, you wrote:
 
 ppf On Sat, Mar 10, 2007 at 12:08:22AM +0100, Robert Milkowski wrote:
  Hello Carisdad,
  
  Friday, March 9, 2007, 7:05:02 PM, you wrote:
  
  C I have a setup with a T2000 SAN attached to 90 500GB SATA drives 
  C presented as individual luns to the host.  We will be sending mostly 
  C large streaming writes to the filesystems over the network (~2GB/file)
  C in 5/6 streams per filesystem.  Data protection is pretty important, but
  C we need to have at most 25% overhead for redundancy.
  
  C Some options I'm considering are:
  C 10 x 7+2 RAIDZ2 w/ no hotspares
  C 7 x 10+2 RAIDZ2 w/ 6 spares
  
  C Does any one have advice relating to the performance or reliability to
  C either of these?  We typically would swap out a bad drive in 4-6 hrs and
  C we expect the drives to be fairly full most of the  time ~70-75% fs 
  C utilization.
  
  On x4500 with a config: 4x 9+2 RAID-z2 I get ~600MB/s logical
  (~700-800 with redundancy overhead). It's somewhat jumpy but it's a
  known bug in zfs...
  So in your config, assuming host/SAN/array is not a bottleneck,
  you should be able to write at least two times more throughput.
 
 ppf Look also at:
 ppf http://sunsolve.sun.com/search/document.do?assetkey=1-9-88385-1
 ppf where you have a way to increase the I/O and application performance on
 ppf T2000.
 
 
 I was talking about x4500 not T2000.

But Carisdad mentioned T2000.

Regards
przemol



Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread przemolicc
On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote:
 Hi Przemol,
 
 I think Casper had a good point bringing up the data integrity
 features when using ZFS for RAID. Big companies do a lot of things
 just because that's the certified way that end up biting them in the
 rear. Trusting your SAN arrays is one of them. That all being said,
 the need to do migrations is a very valid concern.

Jason,

I don't claim that SAN/RAID solutions are the best or that they never have
any mistakes/failures/problems. But if SAN/RAID is so bad, why do the
companies using it survive?

Imagine also that some company has been using SAN/RAID for a few years
without any problems (or with one every few months). From time to time they
also need to migrate between arrays (for whatever reason). Now you come
along and say that their SAN/RAID is unreliable, and you offer something new
(ZFS) which is going to make things much more reliable, but migration to
another array will be painful. Which do you think they will choose?

BTW: I am a fan of ZFS. :-)

przemol



Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-27 Thread przemolicc
On Tue, Feb 27, 2007 at 08:29:04PM +1100, Shawn Walker wrote:
 On 27/02/07, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 On Thu, Feb 22, 2007 at 12:21:50PM -0700, Jason J. W. Williams wrote:
  Hi Przemol,
 
  I think Casper had a good point bringing up the data integrity
  features when using ZFS for RAID. Big companies do a lot of things
  just because that's the certified way that end up biting them in the
  rear. Trusting your SAN arrays is one of them. That all being said,
  the need to do migrations is a very valid concern.
 
 Jason,
 
 I don't claim that SAN/RAID solutions are the best and don't have any
 mistakes/failures/problems. But if SAN/RAID is so bad why companies
 using them survive ?
 
 I think he was trying to say that people believe those solutions are
 reliable just because they are based on SAN/RAID technology, and are not
 aware of the true situation surrounding them.

Is the true situation really so bad?

My feeling was that he was trying to say that there is no SAN/RAID solution
without data integrity problems. Is that really true?
Does anybody have a paper (*) on what percentage of SAN/RAID problems are
caused by data integrity issues? Is it 5 %? Or 30 %? Or maybe 60 %?

(*) Maybe such a paper/report should be the starting point for our discussion.

przemol



Re: [zfs-discuss] Re: Re: How much do we really want zpool remove?

2007-02-22 Thread przemolicc
On Wed, Feb 21, 2007 at 04:43:34PM +0100, [EMAIL PROTECTED] wrote:
 
 I cannot let you say that.
 Here in my company we are very interested in ZFS, but we do not care
 about the RAID/mirror features, because we already have a SAN with
 RAID-5 disks, and dual fabric connection to the hosts.
 
 But you understand that these underlying RAID mechanisms give absolutely
 no guarantee about data integrity, only that some data was found where
 some (possibly other) data was written?  (RAID5 never verifies the
 checksum is correct on reads; it only uses it to reconstruct data when
 reads fail)

But you understand that perhaps he knows that, yet so far nothing has gone
wrong [*] and migration is still a very important feature for him?

[*] Almost every big company has a data center with SAN and FC connections,
with RAID-5 or RAID-10 in its storage arrays, and these are treated as
reliable.

przemol



Re: [zfs-discuss] How much do we really want zpool remove?

2007-01-18 Thread przemolicc
On Thu, Jan 18, 2007 at 06:55:39PM +0800, Jeremy Teo wrote:
 On the issue of the ability to remove a device from a zpool, how
 useful/pressing is this feature? Or is this more along the line of
 nice to have?

If you take "remove a device from a zpool" to mean "shrink a pool", then it
is really useful. Definitely really useful.
Do you need an example?


przemol



Re: [zfs-discuss] Difference between ZFS and UFS with one LUN from a SAN

2006-12-22 Thread przemolicc
On Thu, Dec 21, 2006 at 04:45:34PM +0100, Robert Milkowski wrote:
 Hello Shawn,
 
 Thursday, December 21, 2006, 4:28:39 PM, you wrote:
 
 SJ All,
 
 SJ I understand that ZFS gives you more error correction when using
 SJ two LUNS from a SAN. But, does it provide you with less features
 SJ than UFS does on one LUN from a SAN (i.e is it less stable).
 
 With only one LUN you still get error detection which UFS doesn't give
 you. You still can use snapshots, clones, quotas, etc. so in general
 you still have more features than UFS.
 
 Now when in comes to stability - depends. UFS is for years in use
 while ZFS much younger.
 
 More and more people are using ZFS in production and while there're
 some corner cases mostly performance related, it works really good.
 And I haven't heard of verified data lost due to ZFS. I've been using
 ZFS for quite some time (much sooner than it was available in SX) and
 I haven't also lost any data.

Robert,

I don't understand why not losing any data is an advantage of ZFS.
No filesystem should lose any data. It is like saying that an advantage of a
football player is that he/she plays football (he/she should do that!), or
that an advantage of a chef is that he/she cooks (he/she should do that!).
Every filesystem should _save_ our data, not lose it.

Regards
przemol



[zfs-discuss] Re: !

2006-12-22 Thread przemolicc
Ulrich,

in his e-mail Robert mentioned _two_ things regarding ZFS:
[1] the ability to detect errors (checksums)
[2] using ZFS hasn't caused any data loss so far
I completely agree that [1] is wonderful and a huge advantage. And you also
underlined [1] in your e-mail!
The _only_ thing I commented on is [2]. And I guess Robert wrote about it
only because ZFS is relatively young. When you talk about VxFS/UFS you don't
underline that they don't lose data - it would be ridiculous.

Regards
przemol

On Fri, Dec 22, 2006 at 11:39:44AM +0100, Ulrich Graef wrote:
 [EMAIL PROTECTED] wrote:
 
 Robert,
 
 I don't understand why not loosing any data is an advantage of ZFS.
 No filesystem should lose any data. It is like saying that an advantage
 of football player is that he/she plays football (he/she should do that !)
 or an advantage of chef is that he/she cooks (he/she should do that !).
 Every filesystem should _save_ our data, not lose it.
  
 
 yes, you are right: every filesystem should save the data.
 (... and every program should have no error! ;-)
 
 Unfortunately there are some cases, where the disks lose data,
 these cannot be detected by traditional filesystems but with ZFS:
 
* bit rot: some bits on the disk gets flipped (~ 1 in 10^11)
  (cosmic rays, static particles in airflow, random thermodynamics)
* phantom writes: a disk 'forgets' to write data (~ 1 in 10^8)
  (positioning errors, disk firmware errors, ...)
* misdirected reads/writes: disk writes to the wrong position (~ 1
  in 10^8)
  (disks use very small structures, head can move after positioning)
* errors on the data transfer connection
 
  You can look up the probabilities at several disk vendors; they are published.
  Traditional filesystems do not check the data they read. You get strange
  effects when the filesystem code runs with wrong metadata (worst case: panic).
  If you use the wrong data in your application, you 'only' have the wrong
  results...
 
 ZFS on the contrary checks every block it reads and is able to find the 
 mirror
 or reconstruct the data in a raidz config.
 Therefore ZFS uses only valid data and is able to repair the data blocks 
 automatically.
 This is not possible in a traditional filesystem/volume manager 
 configuration.
 
 You may say, you never heard of a disk losing data; but you have heard 
 of systems,
 which behave strange and a re-installation fixed everything.
 Or some data have gone bad and you have to recover from backup.
 
 It may be, that this was one of these cases.
 Our service encounters a number of these cases every year,
 where the customer was not able to re-install or did not want to restore 
 his data,
 which can be traced back to such a disk error.
 These are always nasty problems and it gets nastier, because customers
 have more and more data and there is a trend to save money on backup/restore
 infrastructures which make it hurt to restore data.
 
 Regards,
 
Ulrich
 
 -- 
 | Ulrich Graef, Senior Consultant, OS Ambassador \
 |  Operating Systems, Performance \ Platform Technology   \
 |   Mail: [EMAIL PROTECTED] \ Global Systems Enginering \
 |Phone: +49 6103 752 359\ Sun Microsystems Inc  \
 



Re: [zfs-discuss] A versioning FS

2006-10-09 Thread przemolicc
On Fri, Oct 06, 2006 at 11:57:36AM -0700, Matthew Ahrens wrote:
 [EMAIL PROTECTED] wrote:
 On Fri, Oct 06, 2006 at 01:14:23AM -0600, Chad Leigh -- Shire.Net LLC 
 wrote:
 But I would dearly like to have a versioning capability.
 
 Me too.
 Example (real life scenario): there is a samba server for about 200
 concurrent connected users. They keep mainly doc/xls files on the
 server.  From time to time they (somehow) currupt their files (they
 share the files so it is possible) so they are recovered from backup.
 Having versioning they could be said that if their main file is
 corrupted they can open previous version and keep working.
 ZFS snapshots is not solution in this case because we would have to
 create snapshots for 400 filesystems (yes, each user has its filesystem
 and I said that there are 200 concurrent connections but there much more
 accounts on the server) each hour or so.
 
 I completely disagree.  In this scenario (and almost all others), use of 
 regular snapshots will solve the problem.  'zfs snapshot -r' is 
 extremely fast, and I'm working on some new features that will make 
 using snapshots for this even easier and better-performing.
 
 If you disagree, please tell us *why* you think snapshots don't solve 
 the problem.

Matt,

think of the night, when some (maybe 5 %) of the people are still working.
With snapshots I would still have to create snapshots for all 400 filesystems
each hour, because I don't know which of them are in use. And what about the
weekend? Still 400 snapshots each hour? And 'zfs list' will show me
400*24*2=19200 lines? And what about organizations which have thousands of
people keeping their files on one server? Or ISPs/free e-mail account
providers who have millions?

Imagine just ordinary people who use ZFS in their homes and forget to create
snapshots. Or they turn their computer on once and then never turn it off:
they work daily (and create a snapshot every hour), don't turn it off in the
evening, and leave it running, downloading films and music. Still one
snapshot an hour? How many snapshots a day, a week, a month? Thousands? And
given that ZFS is _so_easy_ to use, is managing so many snapshots a ZFS-like
feature? (ZFS-like = extremely easy.)

The way ZFS works right now, it takes care of disks (checksumming),
redundancy (raid*) and performance. Having versioning would let ZFS also take
care of people's mistakes. And people do make mistakes.
Yes, Matt, you are right that snapshots are a feature which might be used
here, but they are not the most convenient one in such scenarios. Snapshots
are probably much more useful than versioning in predictable scenarios:
backup at night, software development (committing a new version), etc. In a
highly unpredictable environment (many users working at _different_ hours in
different parts of the world) you would have to create many thousands of
snapshots. Dealing with them might be painful.

Matt, I agree with you that snapshots *solve* the problem with 400
filesystems, because in an SVM/UFS environment I _wouldn't_ have any solution
at all. But I feel that versioning would be much more convenient here.
Imagine that you are the admin of the server and ZFS has versioning: having
the choice, which would you choose in this case?

przemol


Re: [zfs-discuss] A versioning FS

2006-10-09 Thread przemolicc
On Fri, Oct 06, 2006 at 02:08:34PM -0700, Erik Trimble wrote:
 Also, save-early-save-often  results in a version explosion, as does 
 auto-save in the app.  While this may indeed mean that you have all of 
 your changes around, figuring out which version has them can be 
 massively time-consuming.  Let's say you have auto-save set for 5 
 minutes (very common in MS Word). That gives you 12 versions per hour.  
 If you suddenly decide you want to back up a couple of hours, that 
 leaves you with looking at a whole bunch of files, trying to figure out 
 which one you want.  E.g. I want a file from about 3 hours ago. Do I 
 want the one from 2:45, 2:50, 2:55, 3:00, 3:05, 3:10, or 3:15 hours 
 ago?  And, what if I've mis-remembered, and it really was closer to 4 
 hours ago?  Yes, the data is eventually there. However, wouldn't a 
 1-hour snapshot capability have saved you an enormous amount of time, by 
 being able to simplify your search (and, yes, you won't have _exactly_ 
 the version you want, but odds are you will have something close, and 
 you can put all the time you would have spent searching the FV tree into 
 restarting work from the snapshot-ed version).

Erik,

versioning could be governed by a versioning policy managed by the users.
E.g. if a file which is about to be saved right now (auto-save) already has
a previous version created within the last 30 minutes, don't create another
one:

10:00   open file f.xls
10:10   (...working...)
10:20   f.xls;1  (...auto save...)
10:30   (...working...)
10:40   (...auto save...) - don't create another previous version,
                            because there is already one from within
                            the last 30 minutes

Another policy might be based on the number of previous versions: e.g. if
there are more than 10, purge the oldest.
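
In the snapshot world a similar count-based policy is already easy to script;
a rough sketch (the dataset name and the number 10 are just examples, and it
assumes a zfs list that supports -H, -o, -t and -s):

#!/bin/sh
# Keep only the KEEP newest snapshots of a dataset; destroy the older ones.
DATASET=tank/home
KEEP=10

TOTAL=`zfs list -H -t snapshot -o name -r $DATASET | wc -l`
EXCESS=`expr $TOTAL - $KEEP`
if [ "$EXCESS" -gt 0 ]; then
    # '-s creation' sorts oldest first, so the first EXCESS lines are the oldest
    zfs list -H -t snapshot -o name -s creation -r $DATASET |
        head -$EXCESS |
        while read SNAP; do
            zfs destroy "$SNAP"
        done
fi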


 [...]
 
 
 To me, FV is/was very useful in TOPS-20 and VMS, where you were looking 
 at a system DESIGNED with the idea in mind, already have a user base 
 trained to use and expect it, and virtually all usage was local (i.e. no 
 network filesharing). None of this is true in the UNIX/POSIX world.

Versioning could be turned off per filesystem. And also could be
inherited from a parent - exactly like current compression.

przemol


Re: [zfs-discuss] A versioning FS

2006-10-06 Thread przemolicc
On Fri, Oct 06, 2006 at 01:14:23AM -0600, Chad Leigh -- Shire.Net LLC wrote:
 
 But I would dearly like to have a versioning capability.

Me too.
Example (real-life scenario): there is a Samba server with about 200
concurrently connected users. They keep mainly doc/xls files on the server.
From time to time they (somehow) corrupt their files (they share the files,
so it is possible) and the files are then recovered from backup. With
versioning they could be told that if their main file gets corrupted, they
can open the previous version and keep working.
ZFS snapshots are not a solution in this case, because we would have to
create snapshots for 400 filesystems (yes, each user has his own filesystem,
and while I said there are 200 concurrent connections, there are many more
accounts on the server) every hour or so.


przemol




Re: [zfs-discuss] Info on OLTP Perf

2006-09-26 Thread przemolicc
On Mon, Sep 25, 2006 at 08:37:23AM -0700, Neelakanth Nadgir wrote:
 We did an experiment where we placed the logs on UFS+DIO and the
 rest on ZFS. This was a write heavy benchmark. We did not see
 much gain in performance by doing that (around 5%). I suspect
 you would be willing to trade 5% for all the benefits of ZFS.
 Moreover this penalty is for the current version, ZFS can/will
 get much faster.
 -neel

Neel,

I thought of the opposite split: the logs on ZFS and the rest on UFS+DIO.
I suspect that writes on ZFS are faster than on UFS+DIO (in an OLTP
environment).

przemol


Re: [zfs-discuss] Info on OLTP Perf

2006-09-25 Thread przemolicc
On Fri, Sep 22, 2006 at 03:38:05PM +0200, Roch wrote:
 
 
   http://blogs.sun.com/roch/entry/zfs_and_oltp

After reading this page, and taking into account my (not so big) knowledge of
ZFS, it occurred to me that putting Oracle on both UFS+DIO _and_ ZFS would be
the best solution _at_the_moment_: e.g. the redo logs and the undo tablespace
on ZFS (because of its COW nature, writes to these files go at full speed)
and all the remaining database files on UFS+DIO.
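
A minimal sketch of what such a split could look like (pool, device and
mount point names below are made up):

# ZFS side: redo logs and undo tablespace
zpool create orapool c2t0d0 c2t1d0
zfs create orapool/redo
zfs create orapool/undo
zfs set mountpoint=/u02/redo orapool/redo
zfs set mountpoint=/u02/undo orapool/undo

# UFS side: the remaining datafiles, mounted with direct I/O
mount -F ufs -o forcedirectio /dev/dsk/c3t0d0s0 /u01/oradata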

What do you think ?

przemol


Re: [zfs-discuss] zpool iostat

2006-09-19 Thread przemolicc
On Mon, Sep 18, 2006 at 11:05:07AM -0400, Krzys wrote:
 Hello folks, is there any way to get timestamps when doing zpool iostat 1 
 for example?
 
 Well, I did run zpool iostat 60 starting last night and I saw some load
 spikes along the way, but without timestamps I can't figure out around what
 time they happened.

Workaround (how I did it with vmstat):

/usr/bin/vmstat 1 85530 | ./time.pl

where time.pl is a Perl script:

#!/usr/bin/perl -w

use FileHandle;
STDOUT->autoflush(1);

while (<STDIN>) {
    chomp $_;
    my @Time = localtime();
    printf "%04d-%02d-%02d %02d:%02d:%02d %-100s\n", 1900 + $Time[5],
        1 + $Time[4], $Time[3], $Time[2], $Time[1], $Time[0], $_;
}
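
The same wrapper should work for zpool iostat too, e.g.:

zpool iostat 60 | ./time.pl

(one caveat: lines may appear in bursts if zpool iostat block-buffers its
output when writing to a pipe).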

przemol


Re: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320 - offtopic

2006-09-08 Thread przemolicc
On Thu, Sep 07, 2006 at 12:14:20PM -0700, Richard Elling - PAE wrote:
 [EMAIL PROTECTED] wrote:
  This is the case where I don't understand Sun's politics at all: Sun
  doesn't offer a really cheap JBOD which can be bought just for ZFS. And
  don't even tell me about 3310/3320 JBODs - they are horribly expensive :-(
 
 Yep, multipacks are EOL for some time now -- killed by big disks.  Back when
 disks were small, people would buy multipacks to attach to their 
 workstations.
 There was a time when none of the workstations had internal disks, but I'd
 be dating myself :-)
 
 For datacenter-class storage, multipacks were not appropriate.  They only
 had single-ended SCSI interfaces which have a limited cable budget which
 limited their use in racks.  Also, they weren't designed to be used in a
 rack environment, so they weren't mechanically appropriate either.  I 
 suppose
 you can still find them on eBay.
 If Sun wants ZFS to be absorbed quicker it should have such _really_ cheap
 JBOD.
 
 I don't quite see this in my crystal ball.  Rather, I see all of the 
 SAS/SATA
 chipset vendors putting RAID in the chipset.  Basically, you can't get a
 dumb interface anymore, except for fibre channel :-).  In other words, if
 we were to design a system in a chassis with perhaps 8 disks, then we would
 also use a controller which does RAID.  So, we're right back to square 1.

Richard, when I talk about a cheap JBOD I am thinking of home users/small
servers/small companies. I guess you could sell 100 X4500s and at the same
time 1000 (or even more) cheap JBODs to the small companies which for sure
will not buy the big boxes. Yes, I know, you earn more selling the X4500.
But how do you think Linux found its way into data centers and became an
important player in the OS space? Through home users/enthusiasts who became
familiar with it and then started using those familiar things at work.

A proven way to achieve world domination.  ;-))

przemol


Re: [zfs-discuss] Re: Recommendation ZFS on StorEdge 3320 - offtopic

2006-09-08 Thread przemolicc
On Fri, Sep 08, 2006 at 09:41:58AM +0100, Darren J Moffat wrote:
 [EMAIL PROTECTED] wrote:
 Richard, when I talk about cheap JBOD I think about home users/small
 servers/small companies. I guess you can sell 100 X4500 and at the same
 time 1000 (or even more) cheap JBODs to the small companies which for sure
 will not buy the big boxes. Yes, I know, you earn more selling
 X4500. But what do you think, how Linux found its way to data centers
 and become important player in OS space ? Through home users/enthusiasts 
 who
 become familiar with it and then started using the familiar things in
 their job. 
 
 But Linux isn't a hardware vendor and doesn't make cheap JBOD or 
 multipack for the home user.

Linux is used as a symbol.

 So I don't see how we get from Sun should make cheap home user JBOD 
 (which BTW we don't really have the channel to sell for anyway) to but 
 Linux dominated this way.

Home user = tech/geek/enthusiast who is an admin at work.

[ Linux ]
The home user runs Linux at home and is satisfied with it. He/she then goes
to work and says: let's install/use it on the less important servers. He/she
(and management) is again satisfied with it. So let's use it on more
important servers ... etc.

[ ZFS ]
The home user runs ZFS (Solaris) at home (remember how easy it is, and there
is even a web interface to ZFS operations!) to keep photos, music, etc. and
is satisfied with it. He/she then goes to work and says: I have been using a
fantastic filesystem for a while, let's use it on the less important servers.
OK. Later on: it works fine, let's use it on more important ones. Etc...

Yes, I know, a bit naive. But remember that not only Linux spreads this way;
Solaris does as well. I guess most of the downloaded Solaris CDs/DVDs are for
x86. You as a company attack at the high-end/midrange level. Let
users/admins/fans attack at the lower-end level.


przemol


Re: [zfs-discuss] Oracle on ZFS

2006-08-30 Thread przemolicc
On Fri, Aug 25, 2006 at 06:52:17PM +0200, Daniel Rock wrote:
 [EMAIL PROTECTED] schrieb:
 Hi all,
 
 Does anybody use Oracle on ZFS in production (but not as a
 background/testing database but as a front line) ?

[ ... ]

Robert and Daniel,

How did you lay out Oracle on ZFS:
- one zpool + one filesystem
- one zpool + many filesystems
- a few zpools + one filesystem on each
- a few zpools + many filesystems on each

przemol


Re: [zfs-discuss] Re: Re: Re: zpool status panics server

2006-08-30 Thread przemolicc
On Fri, Aug 25, 2006 at 01:21:25PM -0600, Cindy Swearingen wrote:
 Hi Neal,
 
 The ZFS administration class, available in the fall, I think, covers
 basically the same content as the ZFS admin guide only with extensive
 lab exercises.

You could skip offering a ZFS class entirely and advertise it as: "ZFS is so
simple that we don't need a ZFS class!"

Just kidding ;-)

przemol


[zfs-discuss] Oracle on ZFS

2006-08-25 Thread przemolicc
Hi all,

Does anybody use Oracle on ZFS in production (not as a background/testing
database but as a front-line one)?
I am especially interested in:
- how it behaves after a long time of use. Because of the COW nature of ZFS,
  I don't know how it influences query performance.
- a general opinion of such a pairing (ZFS + Oracle)
- which version of Oracle

przemol


Re: [zfs-discuss] ZFS Performance compared to UFS VxFS - offtopic

2006-08-23 Thread przemolicc
On Tue, Aug 22, 2006 at 06:15:08AM -0700, Tony Galway wrote:
 A question (well lets make it 3 really) ??? Is vdbench a useful tool when 
 testing file system performance of a ZFS file system? Secondly - is ZFS write 
 performance really much worse than UFS or VxFS? and Third - what is a good 
 benchmarking tool to test ZFS vs UFS vs VxFS?

Are vdbench and SWAT going to be released to the public ?

przemol


Re: [zfs-discuss] Unreliable ZFS backups or....

2006-08-14 Thread przemolicc
On Fri, Aug 11, 2006 at 05:25:11PM -0700, Peter Looyenga wrote:
 I looked into backing up ZFS and quite honestly I can't say I am convinced
 about its usefulness here when compared to the traditional ufsdump/restore.
 While snapshots are nice they can never substitute offline backups. And 
 although you can keep quite some snapshots lying about it will consume 
 diskspace, one of the reasons why people also keep offline backups.
 
 However, while you can make one using 'zfs send' it somewhat worries me that 
 the only way to perform a restore is by restoring the entire filesystem 
 (/snapshot). I somewhat shudder at the thought of having to restore 
 /export/home this way to retrieve but a single file/directory.

To achieve that you can combine ZFS snapshots + ZFS send/receive.

[ UFS ]
  - to restore one/two/... files:       use ufsrestore
  - to restore an entire filesystem:    use ufsrestore

[ ZFS ]
  - to restore one/two/... files:       use snapshot(s)
  - to restore an entire filesystem:    use 'zfs receive'


 Am I overlooking something here or are people indeed resorting to tools like 
 tar and the likes again to overcome all this? In my opinion ufsdump / 
 ufsrestore was a major advantage over tar and I really would consider it a 
 major drawback if that would be the only way to backup data in such a way 
 where it can be more easily restored.

Now, in the ZFS era :-), it seems we have to change the way we think about
filesystems. Usually system administrators restore two things:
- particular files
- entire filesystems
Both are achievable with the combination above. What is more, the ZFS way of
restoring a single file seems to be easier.
:-)
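
For instance, with the snapdir property set to visible, pulling one file back
from a snapshot is just a copy (the dataset, snapshot and file names below
are made up):

zfs set snapdir=visible tank/export/home
# every snapshot appears read-only under <mountpoint>/.zfs/snapshot/<snapname>
cp /export/home/.zfs/snapshot/hourly-20060814/jdoe/report.doc \
   /export/home/jdoe/report.doc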

przemol


Re: [zfs-discuss] DTrace IO provider and oracle

2006-08-09 Thread przemolicc
On Tue, Aug 08, 2006 at 04:47:51PM +0200, Robert Milkowski wrote:
 Hello przemolicc,
 
 Tuesday, August 8, 2006, 3:54:26 PM, you wrote:
 
 ppf Hello,
 
 ppf Solaris 10 GA + latest recommended patches:
 
 ppf while runing dtrace:
 
 ppf bash-3.00# dtrace -n 'io:::start {@[execname, args[2]->fi_pathname] = count();}'
 ppf ...
 ppf   vim        /zones/obsdb3/root/opt/sfw/bin/vim       296
 ppf   tnslsnr    <none>                                  2373
 ppf   fsflush    <none>                                  2952
 ppf   sched      <none>                                  9949
 ppf   ar60run    <none>                                 13590
 ppf   RACUST     <none>                                 39252
 ppf   RAXTRX     <none>                                 39789
 ppf   RAXMTR     <none>                                 40671
 ppf   FNDLIBR    <none>                                 64956
 ppf   oracle     <none>                               2096052
 
 ppf How can I interpret '<none>' ? Is it possible to get full path (like in vim) ?
 
 I guess Oracle is on ZFS.


Oops! After your e-mail I realised I should have sent it to dtrace-discuss,
not zfs-discuss!
No, Oracle is not on ZFS. Sorry!

 Well right now IO provider won't work properly with ZFS.
 I guess it would be hard to implement IO provider in ZFS - but maybe
 not?
 
 However you can use syscall provider and monitor pread, pwrite, etc.
 syscalls.
 
 -- 
 Best regards,
  Robertmailto:[EMAIL PROTECTED]
http://milek.blogspot.com
 


Re: [zfs-discuss] DTrace IO provider and oracle

2006-08-09 Thread przemolicc
On Tue, Aug 08, 2006 at 11:33:28AM -0500, Tao Chen wrote:
 On 8/8/06, [EMAIL PROTECTED] [EMAIL PROTECTED] wrote:
 
 Hello,
 
 Solaris 10 GA + latest recommended patches:
 
 while runing dtrace:
 
 bash-3.00# dtrace -n 'io:::start {@[execname, args[2]->fi_pathname] = count();}'
 ...
   oracle                 <none>                              2096052
 
 How can I interpret '<none>' ? Is it possible to get full path (like in vim) ?
 
 
 Section 27.2.3 fileinfo_t of DTrace Guide
 explains in detail why you see 'none' in many cases.
 http://www.sun.com/bigadmin/content/dtrace/d10_latest.pdf
 or
 http://docs.sun.com/app/docs/doc/817-6223/6mlkidllf?a=view
 
 The execname part can also be misleading, as many I/O activities are
 asynchronous (including but not limited to Asynchronous I/O), so whatever
 thread running on CPU may have nothing to do with the I/O that's occuring.
 
 This is working as designed and not a problem that limited to ZFS, IMO.

Thanks, Tao, for the doc pointers. I hadn't noticed them.
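
As Robert suggested elsewhere in the thread, here is a rough, untested sketch
of the syscall-provider alternative (it assumes a DTrace build that provides
the fds[] array):

dtrace -n '
syscall::read:entry, syscall::write:entry,
syscall::pread*:entry, syscall::pwrite*:entry
{
    /* arg0 is the file descriptor; fds[] maps it to file info */
    @[execname, fds[arg0].fi_pathname] = count();
}'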

przemol


[zfs-discuss] DTrace IO provider and oracle

2006-08-08 Thread przemolicc
Hello,

Solaris 10 GA + latest recommended patches:

while runing dtrace:

bash-3.00# dtrace -n 'io:::start {@[execname, args[2]->fi_pathname] = count();}'
...
  vim        /zones/obsdb3/root/opt/sfw/bin/vim       296
  tnslsnr    <none>                                  2373
  fsflush    <none>                                  2952
  sched      <none>                                  9949
  ar60run    <none>                                 13590
  RACUST     <none>                                 39252
  RAXTRX     <none>                                 39789
  RAXMTR     <none>                                 40671
  FNDLIBR    <none>                                 64956
  oracle     <none>                               2096052

How can I interpret '<none>' ? Is it possible to get full path (like in vim) ?

Regards
przemol


[zfs-discuss] zfs list - column width

2006-07-10 Thread przemolicc
Hello,

because creating/using filesystems in ZFS has become cheap, it is now useful
to create/organize filesystems in a hierarchy:

bash-3.00# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
dns-pool   136K  43.1G  25.5K  /dns-pool
dns-pool/zones  50K  43.1G  25.5K  /dns-pool/zones
dns-pool/zones/dns1   24.5K  43.1G  24.5K  /dns-pool/zones/dns1

But if I want a bigger hierarchy (and for sure I will!), 'zfs list' becomes
less readable:

bash-3.00# zfs create dns-pool/zones/dns1/root
bash-3.00# zfs create dns-pool/zones/dns1/local
bash-3.00# zfs list
NAME   USED  AVAIL  REFER  MOUNTPOINT
dns-pool   197K  43.1G  25.5K  /dns-pool
dns-pool/zones 100K  43.1G  25.5K  /dns-pool/zones
dns-pool/zones/dns1   74.5K  43.1G  25.5K  /dns-pool/zones/dns1
dns-pool/zones/dns1/local  24.5K  43.1G  24.5K  /dns-pool/zones/dns1/local
dns-pool/zones/dns1/root  24.5K  43.1G  24.5K  /dns-pool/zones/dns1/root

I would like to customise the column widths to get a nicer view of the
hierarchy. Wouldn't it be good to have some sort of configuration file in
which I could set the column widths?
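
In the meantime, on builds where zfs list supports -H and -o, the columns can
at least be selected and re-padded by hand, e.g.:

# tab-separated, script-friendly output, re-padded with awk
zfs list -H -o name,used,available,referenced,mountpoint |
    awk -F'\t' '{ printf "%-40s %8s %8s %8s  %s\n", $1, $2, $3, $4, $5 }'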

przemol


Re: [zfs-discuss] COW question

2006-07-07 Thread przemolicc
On Fri, Jul 07, 2006 at 11:59:29AM +0800, Raymond Xiong wrote:
 It doesn't. Page 11 of the following slides illustrates how COW
 works in ZFS:
 
 http://www.opensolaris.org/os/community/zfs/docs/zfs_last.pdf
 
 Blocks containing active data are never overwritten in place;
 instead, a new block is allocated, modified data is written to
 it, and then any metadata blocks referencing it are similarly
 read, reallocated, and written. To reduce the overhead of this
 process, multiple updates are grouped into transaction groups,
 and an intent log is used when synchronous write semantics are
 required.(from http://en.wikipedia.org/wiki/ZFS)
 
 IN snapshot scenario, COW consumes much less disk space and is
 much faster.

It also says that updating the uberblock is an atomic operation. How is that
achieved?

przemol


Re: [zfs-discuss] ZFS and Storage

2006-06-29 Thread przemolicc
On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote:
 ppf What I wanted to point out is the Al's example: he wrote about damaged 
 data. Data
 ppf were damaged by firmware _not_ disk surface ! In such case ZFS doesn't 
 help. ZFS can
 ppf detect (and repair) errors on disk surface, bad cables, etc. But cannot 
 detect and repair
 ppf errors in its (ZFS) code.
 
 Not in its code but definitely in a firmware code in a controller.

As Jeff pointed out: if you mirror two different storage arrays.
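
For example (hypothetical LUN names), with each side of the mirror coming
from a different array:

# c4... is a LUN on array A, c5... is a LUN on array B
zpool create tank mirror c4t0d0 c5t0d0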

przemol


Re: [zfs-discuss] ZFS and Storage

2006-06-29 Thread przemolicc
On Thu, Jun 29, 2006 at 10:01:15AM +0200, Robert Milkowski wrote:
 Hello przemolicc,
 
 Thursday, June 29, 2006, 8:01:26 AM, you wrote:
 
 ppf On Wed, Jun 28, 2006 at 03:30:28PM +0200, Robert Milkowski wrote:
  ppf What I wanted to point out is the Al's example: he wrote about 
  damaged data. Data
  ppf were damaged by firmware _not_ disk surface ! In such case ZFS 
  doesn't help. ZFS can
  ppf detect (and repair) errors on disk surface, bad cables, etc. But 
  cannot detect and repair
  ppf errors in its (ZFS) code.
  
  Not in its code but definitely in a firmware code in a controller.
 
 ppf As Jeff pointed out: if you mirror two different storage arrays.
 
 Not only I belive. There are some classes of problems that even in one
 array ZFS could help for fw problems (with many controllers in
 active-active config like Symetrix).

Any real example ?

przemol


Re: [zfs-discuss] ZFS and Storage

2006-06-28 Thread przemolicc
On Wed, Jun 28, 2006 at 02:23:32PM +0200, Robert Milkowski wrote:
 Hello przemolicc,
 
 Wednesday, June 28, 2006, 10:57:17 AM, you wrote:
 
 ppf On Tue, Jun 27, 2006 at 04:16:13PM -0500, Al Hopper wrote:
  Case in point, there was a gentleman who posted on the Yahoo Groups solx86
  list and described how faulty firmware on a Hitach HDS system damaged a
  bunch of data.  The HDS system moves disk blocks around, between one disk
  and another, in the background, to optimized the filesystem layout.  Long
  after he had written data, blocks from one data set were intermingled with
  blocks for other data sets/files causing extensive data corruption.
 
 ppf Al,
 
 ppf the problem you described comes probably from failures in code of 
 firmware
 ppf not the failure of disk surface.  Sun's engineers can also do some 
 mistakes
 ppf in ZFS code, right ?
 
 But the point is that ZFS should detect also such errors and take
 proper actions. Other filesystems can't.

Does that mean ZFS can detect errors in its own code? ;-)

What I wanted to point out is Al's example: he wrote about damaged data. The
data were damaged by firmware, _not_ by the disk surface! In such a case ZFS
doesn't help. ZFS can detect (and repair) errors on the disk surface, bad
cables, etc., but it cannot detect and repair errors in its own (ZFS) code.

I am comparing firmware code to ZFS code.

przemol