Hello Matthew,
Thursday, April 5, 2007, 1:08:25 AM, you wrote:
MA Lori Alt wrote:
Can write-cache not be turned on manually as the user is sure that it is
only ZFS that is using the entire disk?
yes it can be turned on. But I don't know if ZFS would then know about it.
I'd still feel
Hello Adam,
Wednesday, April 4, 2007, 11:41:58 PM, you wrote:
AL On Wed, Apr 04, 2007 at 11:04:06PM +0200, Robert Milkowski wrote:
If I stop all activity to x4500 with a pool made of several raidz2 and
then I issue spare attach I get really poor performance (1-2MB/s) on a
pool with lot of
Hi,
- RAID-Z is _very_ slow when one disk is broken.
Do you have data on this? The reconstruction should be relatively cheap,
especially when compared with the initial disk access.
Also, what is your definition of broken? Does this mean the device
appears as FAULTED in the pool status, or
Hello James,
Thursday, April 5, 2007, 7:19:41 AM, you wrote:
JCM Ivan Wang wrote:
So, any date on when the install utility will support a fresh ZFS root
install? I almost can't wait for that.
JCM Hi Ivan,
JCM there's no firm date for this yet, though the install team are
JCM working *really* hard at
On Apr 5, 2007, at 08:28, Robert Milkowski wrote:
Hello Matthew,
Thursday, April 5, 2007, 1:08:25 AM, you wrote:
MA Lori Alt wrote:
Can write-cache not be turned on manually as the user is sure
that it is
only ZFS that is using the entire disk?
yes it can be turned on. But I don't know
Now, given proper I/O concurrency (like recently improved NCQ in our
drivers) or SCSI CTQ,
I don't expect the write caches to provide much performance
gain, if any, over the situation
with write caches off.
Write caches can be extremely effective when dealing with drives
that do not
On Apr 4, 2007, at 10:01, Paul Boven wrote:
Hi everyone,
Swap would probably have to go on a zvol - would that be best
placed on
the n-way mirror, or on the raidz?
From the book of Richard Elling,
Shouldn't matter. The 'existence' of a swap device is sometimes
required.
If the
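For reference, putting swap on a zvol looks roughly like the following. This is a command sketch, not a tested recipe: the pool name "tank" and the 2 GB size are hypothetical, and the commands need a live ZFS pool on a Solaris system.

```shell
# Create a 2 GB zvol to back the swap device (pool name is hypothetical).
zfs create -V 2G tank/swap

# Add the zvol's block device as swap, then verify it shows up.
swap -a /dev/zvol/dsk/tank/swap
swap -l
```

Whether the zvol lives on the n-way mirror or the raidz is, as noted above, mostly a matter of taste once the device exists.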
I have some further data now, and I don't think that it is a hardware
problem. Halfway through the scrub, I rebooted and exchanged the
controller and cable used with the bad disk. After restarting the
scrub, it proceeded error free until about the point where it left
off, and then it resumed
My two cents,
Assuming that you may pick a specific compression algorithm,
most algorithms can have different levels/percentages of
deflations/inflations which affects the time to compress
and/or inflate wrt the CPU capacity.
Secondly, if I can add an
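The level-vs-CPU tradeoff described above is easy to see with any general-purpose compressor. A small illustration (plain zlib, not ZFS code): higher levels tend to squeeze out a better ratio at the cost of more compression time.

```python
# Illustration (not ZFS code): zlib compression levels trade CPU time
# for compression ratio, the tradeoff described above.
import time
import zlib

data = b"the quick brown fox jumps over the lazy dog " * 10000

for level in (1, 6, 9):
    start = time.perf_counter()
    compressed = zlib.compress(data, level)
    elapsed = time.perf_counter() - start
    ratio = len(compressed) / len(data)
    print(f"level={level} ratio={ratio:.4f} time={elapsed * 1000:.1f}ms")
```

On highly repetitive data like this, all levels compress well; the spread in time and ratio grows with less compressible input.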
Hi.
What do you think about adding functionality similar to disk's spare
sectors - if a sector dies, a new one is assigned from the spare sector
pool. This will be very helpful especially for laptops, where you have
only one disk. I simulated returning EIO for one sector from a one-disk
pool and
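The spare-sector idea sketched above can be modeled in a few lines. This is a toy illustration of the mechanism, not ZFS code: writes that hit a bad sector are redirected into a reserved spare region, and reads follow the remap table.

```python
# Toy sketch (assumption: not actual ZFS code) of spare-sector
# remapping: writes that fail on a bad sector are redirected to a
# reserved spare pool, and reads follow the remap table.
import errno


class SpareSectorDisk:
    def __init__(self, size, spares, bad=()):
        self.data = {}                                   # sector -> bytes
        self.bad = set(bad)                              # sectors that fail
        self.spares = list(range(size, size + spares))   # reserved sectors
        self.remap = {}                                  # bad -> spare

    def _resolve(self, sector):
        return self.remap.get(sector, sector)

    def write(self, sector, payload):
        target = self._resolve(sector)
        if target in self.bad:
            if not self.spares:
                raise OSError(errno.EIO, "no spare sectors left")
            target = self.spares.pop(0)   # assign a spare and remember it
            self.remap[sector] = target
        self.data[target] = payload

    def read(self, sector):
        target = self._resolve(sector)
        if target in self.bad:
            raise OSError(errno.EIO, "unreadable sector")
        return self.data[target]


disk = SpareSectorDisk(size=100, spares=4, bad=[7])
disk.write(7, b"important")   # sector 7 is bad; write lands on a spare
print(disk.read(7))           # -> b'important'
```

On a one-disk laptop pool this kind of remap would let a write survive a dying sector without any redundancy at the pool level.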
After some discussions with Matt I removed all the previous snapshots before
the one causing the memory issues.
Guess what - it worked. I was able to remove that snapshot after I removed all
previous ones. It took 2 seconds.
I will definitely have to upgrade that machine these days to 64 bit
Hello Pawel,
Thursday, April 5, 2007, 3:10:11 PM, you wrote:
PJD Hi.
PJD What do you think about adding functionality similar to disk's spare
PJD sectors - if a sector dies, a new one is assigned from the spare sector
PJD pool. This will be very helpful especially for laptops, where you have
This sounds a lot like:
6417779 ZFS: I/O failure (write on ...) -- need to reallocate writes
Which would allow us to retry write failures on alternate vdevs.
- Eric
On Thu, Apr 05, 2007 at 03:10:11PM +0200, Pawel Jakub Dawidek wrote:
Hi.
What do you think about adding functionality similar
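The "reallocate writes" idea Eric references (6417779) amounts to: on a write failure, try the block somewhere else instead of failing the I/O. A sketch of that retry loop (an assumption on my part, not the actual 6417779 implementation):

```python
# Sketch (assumption, not the 6417779 implementation): on a write
# failure, retry the block on an alternate vdev instead of failing
# the whole I/O.
import errno


class Vdev:
    def __init__(self, name, healthy=True):
        self.name, self.healthy, self.blocks = name, healthy, {}

    def write(self, block_id, payload):
        if not self.healthy:
            raise OSError(errno.EIO, f"{self.name}: write failed")
        self.blocks[block_id] = payload


def write_with_reallocation(vdevs, block_id, payload):
    """Try each vdev in turn; return the vdev that accepted the write."""
    last_err = None
    for vdev in vdevs:
        try:
            vdev.write(block_id, payload)
            return vdev
        except OSError as e:
            last_err = e        # this vdev failed; try an alternate
    raise OSError(errno.EIO, "all vdevs failed") from last_err


vdevs = [Vdev("vdev0", healthy=False), Vdev("vdev1")]
chosen = write_with_reallocation(vdevs, 42, b"data")
print(chosen.name)   # the failed write was retried on vdev1
```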
hi all,
I am new to solaris.
I am creating a zfs filestore which should boot via rootfs.
The version of the system is: SunOS store1 5.10 Generic_118855-33 i86pc
i386 i86pc.
Now I have seen that there is a new rootfs support for solaris starting
with build: snv_62.
On 4/5/07, Jakob Praher [EMAIL PROTECTED] wrote:
hi all,
Hi Jacob,
I am new to solaris.
I am creating a zfs filestore which should boot via rootfs.
The version of the system is: SunOS store1 5.10 Generic_118855-33 i86pc
i386 i86pc.
Now I have seen that there is a new rootfs support for
Is there anyone interested in a kernel dump?
We are still unable to import the corrupted zpool, even in readonly mode ..
Apr 5 22:27:34 SERVER142 ^Mpanic[cpu2]/thread=fffec9eef0e0:
Apr 5 22:27:34 SERVER142 genunix: [ID 603766 kern.notice] assertion failed:
ss->ss_start <= start (0x67b800
Hi Cyril,
thanks for your quick response!
Cyril Plisko wrote:
On 4/5/07, Jakob Praher [EMAIL PROTECTED] wrote:
hi all,
I am new to solaris.
I am creating a zfs filestore which should boot via rootfs.
The version of the system is: SunOS store1 5.10 Generic_118855-33 i86pc
i386 i86pc.
a) possible to start from a raidz pool?
No. At this point raidz pool is not usable as a boot pool.
Is this possible then to use a mirror pool?
Yes (to some extent). You can use one disk/slice or a mirror.
2. Prepare the disk for a ZFS rootpool. A rootpool can be a single
disk
But even with b62, you won't be able to start with the ZFS mirror, right?
You'll have to install on UFS and then convert it?
Malachi
On 4/5/07, Darren Dunham [EMAIL PROTECTED] wrote:
a) possible to start from a raidz pool?
No. At this point raidz pool is not usable as a boot pool.
Is this possible
On Thu, 2007-04-05 at 14:27 -0700, Malachi de Ælfweald wrote:
But even with b62, you won't be able to start with the zfs mirror
right? You'll have to do UFS then convert it?
Malachi
Correct. The direct support for installing on ZFS as root will come
with the fixing of the Install binary,
On 05/04/07, Jakob Praher [EMAIL PROTECTED] wrote:
Hi Cyril,
thanks for your quick response!
Cyril Plisko wrote:
On 4/5/07, Jakob Praher [EMAIL PROTECTED] wrote:
hi all,
I am new to solaris.
I am creating a zfs filestore which should boot via rootfs.
The version of the system is: SunOS
On Thu, 2007-04-05 at 22:59 +0200, Jakob Praher wrote:
Hi Cyril,
thanks for your quick response!
Cyril Plisko wrote:
On 4/5/07, Jakob Praher [EMAIL PROTECTED] wrote:
hi all,
I am new to solaris.
I am creating a zfs filestore which should boot via rootfs.
The version of the
Assuming that you may pick a specific compression algorithm,
most algorithms can have different levels/percentages of
deflations/inflations which affects the time to compress
and/or inflate wrt the CPU capacity.
Yes? I'm not sure what your point is. Are you suggesting that, rather than
Hi.
I'm happy to inform that the ZFS file system is now part of the FreeBSD
operating system. ZFS is available in the HEAD branch and will be
available in FreeBSD 7.0-RELEASE as an experimental feature.
Commit log:
Please welcome ZFS - The last word in file systems.
ZFS file system was
On 05/04/07, Pawel Jakub Dawidek [EMAIL PROTECTED] wrote:
I'm happy to inform that the ZFS file system is now part of the FreeBSD
operating system. ZFS is available in the HEAD branch and will be
available in FreeBSD 7.0-RELEASE as an experimental feature.
Wow! This is great news Pawel, and
Isn't it more likely that these are errors on data as well? I think zfs
retries read operations when there's a checksum failure, so maybe these
are transient hardware problems (faulty cables, high temperature..)?
This would explain the non-existence of unrecoverable errors.
Robert Milkowski
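The retry-on-checksum-failure behavior speculated about above can be sketched as a read loop: reread until the data matches its expected checksum, which would mask transient errors (flaky cables, heat) exactly as described. A simplified illustration, not ZFS code:

```python
# Sketch (assumption, simplified): a checksummed read that retries,
# which would mask transient errors the way described above.
import hashlib


def checksum(payload):
    return hashlib.sha256(payload).digest()


def read_verified(read_fn, expected, retries=3):
    """Retry read_fn until its data matches the expected checksum."""
    for _attempt in range(retries):
        data = read_fn()
        if checksum(data) == expected:
            return data
    raise OSError("persistent checksum failure")


# A flaky device: first read is corrupted, second is clean.
good = b"block contents"
reads = [b"corrupted bits", good]
result = read_verified(lambda: reads.pop(0), checksum(good))
print(result == good)   # -> True
```

A persistent (unrecoverable) error would exhaust the retries, which is consistent with such errors not showing up when the fault is transient.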
Hi Pawel,
Pawel Jakub Dawidek wrote:
Other than that, ZFS should be fully-functional.
Congratulations, nice work! :)
I'm interested in the cross-platform portability of ZFS pools, so I have
one question: did you implement the Solaris ZFS whole-disk support
(specifically, the creation and
I'm happy to inform that the ZFS file system is now part of the FreeBSD
operating system. ZFS is available in the HEAD branch and will be
available in FreeBSD 7.0-RELEASE as an experimental feature.
This is fantastic news! At the risk of raking over ye olde arguments,
as the old saying goes:
Pawel Jakub Dawidek wrote:
Hi.
I'm happy to inform that the ZFS file system is now part of the FreeBSD
operating system. ZFS is available in the HEAD branch and will be
available in FreeBSD 7.0-RELEASE as an experimental feature.
Well done, team! Everyone who cares about their data will be