Does un-tarring something count? It's what I used for our tests.
I tested with the ZIL disabled, the ZIL cache on /tmp/zil, a CF card (300x), and a
cheap SSD. I'm waiting for X25-E SSDs to arrive so I can test those:
http://mail.opensolaris.org/pipermail/zfs-discuss/2009-July/030183.html
If you want a quick ans
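For reference, a slog test like the one described above might be run as follows; the pool name, device name, and archive path here are hypothetical placeholders:

```shell
# Attach a dedicated ZIL log device (slog) to an existing pool;
# c2t0d0 stands in for the CF card or SSD under test.
zpool add tank log c2t0d0

# Time a synchronous-write workload (e.g. untarring a source tree
# over NFS) with and without the slog to measure the benefit.
time tar xf /var/tmp/archive.tar
```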
Hello Greg,
I'm curious how much performance benefit you gain from the ZIL accelerator.
Have you measured that? If not, do you have a gut feel about how much it
helped? Also, for what kind of applications does it help?
(I know it helps with synchronous writes. I'm looking for real world
a
The reboot maintenance window finally came around, so I rebooted the x4540 to
make it see the newly replaced HDD.
I tried reboot, then a power-cycle, then reboot -- -r,
but I cannot make the x4540 accept any HDD in that bay. I'm starting to
think that perhaps we did not lose the original HDD, but rather the
Hi David,
We are using them in our Sun X4540 filers. We are actually using 2 SSDs
per pool, to improve throughput (since the logbias feature isn't in an
official release of OpenSolaris yet). I kind of wish they made an 8G or
16G part, since the 32G capacity is kind of a waste.
We had to go t
On Aug 18, 2009, at 1:16 PM, Paul Kraus wrote:
Is the speed of a 'zfs send' dependent on file size / number of
files ?
Not directly. It is dependent on the amount of changes per unit time.
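To illustrate the point, here is a sketch of an incremental send, where the stream size tracks the changes between two snapshots rather than the total file count; the dataset and snapshot names are made up:

```shell
# Only the blocks that changed between @monday and @tuesday are
# streamed, so a multi-TB dataset with few changes sends quickly.
zfs snapshot tank/data@monday
# ... a day of changes accumulates ...
zfs snapshot tank/data@tuesday
zfs send -i tank/data@monday tank/data@tuesday | zfs receive backup/data
```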
We have a system with some large datasets (3.3 TB and about 35
million files) and conventional
On 08/19/09 14:57, Kris Kasner wrote:
Hi.
First - Thanks very much for releasing this as a patch and not making
us wait until U8. It's very much appreciated. Things we put on hold
can start moving again which is a good thing.
Is there further documentation on this yet?
I just asked C
We have a setup with ZFS/ESX/NFS and I am looking to move our ZIL to a solid
state drive.
So far I am looking into this one
http://www.newegg.com/Product/Product.aspx?Item=N82E16820167013
Does anyone have any experience with this drive as a poor man's logzilla?
And also what have other people done
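For anyone following along, attaching an SSD as a dedicated log device is a one-liner; the pool and device names below are hypothetical:

```shell
# Add the SSD as a separate intent log (slog) for the pool.
zpool add tank log c3t0d0

# Verify: the device should appear under a "logs" section.
zpool status tank
```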
It's not too often we see good news on the zfs-discuss list, so here's some:
We at the High Performance Computing Center at MSU have finally worked
out the root cause of a long-standing issue with our OpenSolaris NFS
servers. It was a minor configuration issue, involving a ZFS file system
prop
I bought a 1 TB external USB disk from Western Digital (1) and put it in my
2008.11 machine. The machine detected the disk immediately and I ran a 'zpool
create xpool c11t0d0' command:
# zpool list
NAME    SIZE   USED   AVAIL   CAP   HEALTH   ALTROOT
[...]
xpool   928G   81K    928G    0%    ONL
Hi.
First - Thanks very much for releasing this as a patch and not making us wait
until U8. It's very much appreciated. Things we put on hold can start moving
again which is a good thing.
Is there further documentation on this yet? I haven't been able to find it, and
it looks like most of
I have a raidz1 zpool ("Companion") I am unable to import. One of the
disks has physically failed and cannot be recognized by any computer,
and it is unlikely physical repairs or data recovery is possible.
No other disks are reporting errors. One other disk (not the failed
one) threw occasi
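Since raidz1 tolerates a single missing disk, the pool should still import in a degraded state; the usual sequence is below (the flags are standard zpool options, and the pool name is taken from the message above):

```shell
# List pools visible for import, even with a missing member.
zpool import

# Force the import if the pool was never cleanly exported.
zpool import -f Companion

# Check which vdev is faulted once the pool is back.
zpool status -x Companion
```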
I'm running into a problem trying to do "zfs receive", with data being
replicated from a Solaris 10 (11/06 release) to a storage server running OS
118. Here is the error:
r...@lznas2:/backup# cat backup+mcc+use...@zn2---pre_messages_delete_20090430 |
zfs receive backup/mcc/users
cannot restore
I had
> a LUN on a system that died but the pool on the LUN was still good. I made
> it available to my new system but can't import it. It appears in format but
> zpool import has not given me any results. The pool was not exported or
> anything; it was just detached after the net connection and that
I have a zfs dataset that I use for network home directories. The box is
running 2008.11 with the auto-snapshot service enabled. To help debug some
mysterious file deletion issues, I've enabled nfs logging (all my clients are
NFSv3 Linux boxes).
I keep seeing lines like this in the nfslog:
Thank you for all your replies, I'm collecting my responses in one
message below:
On Tue, Aug 18, 2009 at 7:43 PM, Nicolas
Williams wrote:
> On Tue, Aug 18, 2009 at 04:22:19PM -0400, Paul Kraus wrote:
>> We have a system with some large datasets (3.3 TB and about 35
>> million files) and co