Hello Matthew,
Tuesday, August 8, 2006, 8:08:39 PM, you wrote:
MA On Tue, Aug 08, 2006 at 10:42:41AM -0700, Robert Milkowski wrote:
filebench in varmail by default creates 16 threads - I confirmed it
with prstat; 16 threads are created and running.
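For the record, this is roughly how I checked (a sketch; I'm assuming the process is actually named filebench):
# prstat -L -p `pgrep -n filebench`
(-L prints one line per LWP, so the thread count is easy to read off)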
MA Ah, OK. Looking at these results, it
The core dump timed out (related to the SCSI bus reset?), so I don't
have one. I can try it again, though, it's easy enough to reproduce.
I was seeing errors on the fibre channel disks as well, so it's possible
the whole thing was locked up.
BP
Steffen,
On 8/10/06 8:12 AM, Steffen Weiberle wrote:
Those are compelling numbers! Have you seen them yourself? Or know who has?
O'Reilly Research is a good one; they were using MySQL for data mining work,
and each query was taking 10 hours despite all tuning on modern
Robert Milkowski wrote:
Hello Matthew,
Thursday, August 10, 2006, 6:55:41 PM, you wrote:
MA On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
btw: wouldn't it be possible to write the block only once (for synchronous
IO) and then just point to that block instead of copying it
Hi Eric,
Eric Schrock wrote:
...
Second, it forced the CLI to distinguish between a container and a
filesystem. At first this was accomplished with a trailing slash on the
name, and later by introducing the 'ctr' type. Both were confusing to
users for different reasons.
Speaking of
On Thu, Aug 10, 2006 at 10:23:20AM -0700, Eric Schrock wrote:
A new option will be added, 'canmount', which specifies whether the
given filesystem can be mounted with 'zfs mount'. This is a boolean
property, and is not inherited.
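Just to check I'm reading the proposal right, I'd expect usage along these lines (a sketch only, dataset names invented):
# zfs set canmount=off tank/export
(the dataset keeps its properties and children, but 'zfs mount -a' skips it)
# zfs get canmount tank/export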
Cool, looks good. Do you plan to implement this using the
Yet another reason it was removed. This proposal specifically does not
use the word 'container', nor will the documentation refer to it as
such. I was merely providing background (possibly too much) for why
this option was originally implemented and then removed.
- Eric
On Thu, Aug 10, 2006 at
Right now I'm using the generic property mechanism, but have a special
case in dsl_prop_get_all() to ignore searching parents for this
particular property. I'm not thrilled about it, but I only see two
other options:
1. Do not use the generic infrastructure. This requires much more
invasive
On Thu, Aug 10, 2006 at 10:44:46AM -0700, Eric Schrock wrote:
Right now I'm using the generic property mechanism, but have a special
case in dsl_prop_get_all() to ignore searching parents for this
particular property. I'm not thrilled about it, but I only see two
other options:
1. Do not
Robert Milkowski wrote:
Hello Neil,
Thursday, August 10, 2006, 7:02:58 PM, you wrote:
NP Robert Milkowski wrote:
Hello Matthew,
Thursday, August 10, 2006, 6:55:41 PM, you wrote:
MA On Thu, Aug 10, 2006 at 06:50:45PM +0200, Robert Milkowski wrote:
btw: wouldn't it be possible to write
So far I understand that if a file is to be modified, it will first copy the
data to be modified to a new location in the ZFS pool, then modify that new
data and do all the ZFS voodoo it does, and ultimately do the very last step of
the operation (which I think is the final change of the
On Thu, Aug 10, 2006 at 12:11:46PM -0700, David Coronel wrote:
So far I understand that if a file is to be modified, it will first
copy the data to be modified to a new location in the ZFS pool, then
modify that new data and do all the ZFS voodoo it does, and ultimately
do the very last step
Myron Scott wrote:
Is there any difference between fdatasync and fsync on ZFS?
No. ZFS does not log data and metadata separately; rather,
it logs essentially the system call records, e.g. writes, mkdir,
truncate, setattr, etc. So fdatasync and fsync are identical
on ZFS.
On 08/08/2006, at 10:44 PM, Luke Scharf wrote:
The release I'm playing with (Alpha 5) does, indeed, have ZFS.
However, I can't determine what version of ZFS is included.
Dselect gives the following information, which doesn't ring any
bells for me:
*** Req base sunwzfsr
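If the packaging info doesn't say, one thing that might help (assuming the zpool utility is installed) is asking the tools directly:
# zpool upgrade
(reports the ZFS on-disk version the system is running)
# zpool upgrade -v
(lists every version this build supports, with a one-line description of each)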
I remember a discussion about I/O write failures causing a panic for a
non-replicated pool and a plan to fix this in the future. I couldn't
find a bug for this work though. Is there still a plan to fix this?
Phi
Yes, there are three incremental fixes that we plan in this area:
6417772 need nicer message on write failure
This just cleans up the failure mode so that we get a nice
FMA failure message and can distinguish this from a random
failed assert.
6417779 ZFS: I/O failure
Thanks for the list.
Phi
Eric Schrock wrote:
Yes, there are three incremental fixes that we plan in this area:
6417772 need nicer message on write failure
This just cleans up the failure mode so that we get a nice
FMA failure message and can distinguish this from a random
I'm wondering if I can get all the benefits of EVMS and LVM with ZFS.
I'm planning on ZFS with a RAID-Z.
1.- Can I expand volumes like I do with LVM?
2.- Is there a central console (like EVMS) to do all the management?
3.- How do I monitor the hard drives?
4.- If there is a drive failure do the
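From what I've read so far, 1 and 3 would look something like this (just my guess at the syntax, pool and device names invented; a pool grows by adding whole vdevs rather than resizing an existing raidz):
# zpool add mypool raidz c1t4d0 c1t5d0 c1t6d0
(adds a second raidz vdev, growing the pool's capacity)
# zpool status -v mypool
(per-device state and error counters)
# zpool iostat -v mypool 5
(per-device I/O statistics every 5 seconds)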
Eric Schrock wrote:
Also, I am working on 6367103, which would allow for this option (and
any others) to be set at create time, so you don't have the weird
situation where the filesystem is temporarily mounted.
I'm assuming since you don't use that syntax in your examples that
there will be a
On Fri, Aug 11, 2006 at 01:50:05AM +0100, Darren J Moffat wrote:
I'm assuming since you don't use that syntax in your examples that
there will be a separate proposal/ARC case for the set at create time.
Yes. I have a prototype almost done and will send out a similar
proposal, probably
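Something along these lines is what I have in mind (a sketch of the likely syntax, not the final proposal; dataset names invented):
# zfs create -o canmount=off -o mountpoint=none tank/export
(properties take effect atomically at creation, so the filesystem is never mounted with default settings first)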
Leon Koll wrote:
...
So having 4 pools isn't a recommended config - I would destroy those 4
pools and just create 1 RAID-0 pool:
#zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
c4t001738010140001Cd0 c4t0017380101400012d0
each of those devices is a 64GB lun, right?
I did
Hello eric,
Friday, August 11, 2006, 3:04:38 AM, you wrote:
ek Leon Koll wrote:
...
So having 4 pools isn't a recommended config - I would destroy those 4
pools and just create 1 RAID-0 pool:
#zpool create sfsrocks c4t00173801014Bd0 c4t00173801014Cd0
c4t001738010140001Cd0