I believe this is what you're hitting:
6456888 zpool attach leads to memory exhaustion and system hang
We are currently looking at fixing this so stay tuned.
Thanks,
George
Daniel Rock wrote:
Joseph Mocker wrote:
Today I attempted to upgrade to S10_U2 and migrate some mirrored UFS
SVM
Hello Eric,
Wednesday, August 16, 2006, 4:48:46 PM, you wrote:
ES What does 'zfs list -o name,mountpoint' and 'zfs mount' show after the
ES import? My only guess is that you have some explicit mountpoint set
ES that's confusing the DSL-ordered mounting code. If this is the case,
ES this was
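For anyone debugging the same symptom, the commands Eric names can be run like this; `tank` is a placeholder pool name, and the `zfs get` line is an extra suggestion for seeing where an explicit mountpoint setting came from:

```shell
# 'tank' is a placeholder pool name.
zfs list -o name,mountpoint   # configured mountpoint for each dataset
zfs mount                     # datasets actually mounted right now
zfs get -r mountpoint tank    # SOURCE column: default, local, or inherited
```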
And it started replacement/resilvering... after a few minutes the system became
unavailable. A reboot only buys a few minutes before resilvering makes the
system unresponsive again.
Is there any workaround or patch for this problem?
Argh, sorry -- the problem is that we don't do aggressive enough
WYT said:
Hi all,
My company will be acquiring the Sun SE6920 for our storage
virtualization project and we intend to use quite a bit of ZFS as
well. The two technologies seem somewhat at odds, since the 6920 implies
layers of hardware abstraction while ZFS seems to prefer more direct
Hi,
IHAC who is simulating disk failure and came across behaviour which seems
wrong:
1. zpool status -v
pool: data
state: ONLINE
scrub: resilver completed with 0 errors on Thu Aug 10 16:55:22 2006
config:
NAME    STATE     READ WRITE CKSUM
data    ONLINE       0
Therein lies my dilemma:
- We know the I/O sub-system is capable of much higher I/O rates
- Under the test setup I have SAS datasets which lend themselves well to
compression. This should manifest itself as lots of read I/O resulting in much
smaller (4x) write I/O due to compression. This
Anantha N. Srirama writes:
Therein lies my dilemma:
- We know the I/O sub-system is capable of much higher I/O rates
- Under the test setup I have SAS datasets which lend
themselves to compression. This should manifest itself as lots of read
I/O resulting in much smaller
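A quick sanity check of the expected numbers, assuming the ~4x ratio (as would be reported by 'zfs get compressratio') holds; the figures below are illustrative, not from the original post:

```shell
# Illustrative only: with a 4x compression ratio, N MB read from the
# uncompressed SAS source should produce roughly N/4 MB of write I/O.
read_mb=1024                    # assumed amount read, in MB
ratio=4                         # assumed compressratio
write_mb=$((read_mb / ratio))
echo "$write_mb"                # -> 256
```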
Hello Eric,
Wednesday, August 16, 2006, 4:49:27 PM, you wrote:
ES This seems like a reasonable RFE. Feel free to file it at
ES bugs.opensolaris.org.
I just did :)
However, currently 'zpool import A B' means importing pool A and
renaming it to pool B.
I think it would be better to change
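For context, the current syntax being discussed looks like this; the pool names are placeholders:

```shell
# Import the exported pool 'A' under its original name:
zpool import A

# Import the exported pool 'A' but rename it to 'B' on import:
zpool import A B
```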
Hello Roch,
Thursday, August 17, 2006, 11:08:37 AM, you wrote:
R My general principles are:
R If you can, to improve your 'Availability' metrics,
R let ZFS handle one level of redundancy;
R For Random Read performance prefer mirrors over
R raid-z. If you use
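Roch's two principles map onto pool layouts roughly as follows; device names are placeholders, and this is a sketch rather than a sizing recommendation:

```shell
# Mirrors: either side can service a read, so random-read IOPS scale
# with the number of disks.
zpool create tank mirror c0t0d0 c0t1d0 mirror c0t2d0 c0t3d0

# raid-z: better usable capacity, but a random read touches the whole
# stripe, so IOPS scale with the number of vdevs, not disks.
zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0
```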
Hello zfs-discuss,
Is someone actually working on it? Or any other algorithms?
Any dates?
--
Best regards,
Robert mailto:[EMAIL PROTECTED]
http://milek.blogspot.com
Hi there
Did a backup/restore on TSM, works fine.
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Robert Milkowski writes:
Hello Roch,
Thursday, August 17, 2006, 11:08:37 AM, you wrote:
R My general principles are:
R If you can, to improve your 'Availability' metrics,
R let ZFS handle one level of redundancy;
R For Random Read performance prefer
Hi Bob,
you are using a non-Sun SCSI HBA. Could you please be more specific
about the HBA model and driver?
You are getting pretty much the same high CPU load with write to single-disk
UFS and raid-z. This may mean that the problem is not with ZFS itself.
Victor
Bob Evans wrote:
Robert,
Sorry
No ACL's ...
First, I apologize: I listed the Antares card in my original post; it was one of
two SCSI cards I tested with. The posted CPU snapshots were from the LSI 22320
card (mentioned below).
I've tried this with two different SCSI cards. As far as I know, both are
standard SCSI cards used in Sun systems.
On Thu, Aug 17, 2006 at 02:53:09PM +0200, Robert Milkowski wrote:
Hello zfs-discuss,
Is someone actually working on it? Or any other algorithms?
Any dates?
Not that I know of. Any volunteers? :-)
(Actually, I think that a RLE compression algorithm for metadata is a
higher priority, but
On Thu, Aug 17, 2006 at 10:00:32AM -0700, Matthew Ahrens wrote:
(Actually, I think that a RLE compression algorithm for metadata is a
higher priority, but if someone from the community wants to step up, we
won't turn your code away!)
Is RLE likely to be more efficient for metadata? Have you
On Thu, Aug 17, 2006 at 10:28:10AM -0700, Adam Leventhal wrote:
On Thu, Aug 17, 2006 at 10:00:32AM -0700, Matthew Ahrens wrote:
(Actually, I think that a RLE compression algorithm for metadata is a
higher priority, but if someone from the community wants to step up, we
won't turn your code
Bob Evans wrote:
Hi, this is a follow-up to "Significant pauses to zfs writes".
I'm getting about 15% slower performance using ZFS raidz than if I just mount
the same type of drive using ufs.
What is your expectation?
-- richard
Following up on a string of related proposals, here is another draft
proposal for user-defined properties. As usual, all feedback and
comments are welcome.
The prototype is finished, and I would expect the code to be integrated
sometime within the next month.
- Eric
INTRODUCTION
ZFS currently
On 8/15/06, Kevin Maguire [EMAIL PROTECTED] wrote:
Hi
Is the following an accurate statement of the current status with (for me) the
3 main commercial backup software solutions out there?
It seems to me that if zfs send/receive were hooked in with ndmp
(http://ndmp.org), that zfs would very
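The send/receive plumbing such an NDMP integration would build on looks like this today; the dataset, snapshot, and path names are placeholders:

```shell
# Snapshot, then serialize the snapshot to a stream:
zfs snapshot tank/home@backup1
zfs send tank/home@backup1 > /backup/home.zfs

# Restore the stream into a new dataset:
zfs receive tank/home_restored < /backup/home.zfs
```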