Doug has been doing some performance optimization of sharemgr to allow
faster boot-up when loading shares.
Doug has blogged about his performance numbers here:
http://blogs.sun.com/dougm/entry/recent_performance_improvement_in_zfs
This message posted from opensolaris.org
On 9-May-07, at 3:44 PM, Bakul Shah wrote:
Robert Milkowski wrote:
Hello Mario,
Wednesday, May 9, 2007, 5:56:18 PM, you wrote:
MG I've read that it's supposed to go at full speed, i.e. as fast as
MG possible. I'm doing a disk replace and what zpool reports kind of
MG surprises me. The
Brothers,
I've fixed the issue by reconfiguring the system device tree:
# devfsadm -Cv
Some new devices were added, and then ZFS worked fine.
Thanks for your kind attention.
Rgds,
Simon
On 5/10/07, Simon [EMAIL PROTECTED] wrote:
Gurus,
My freshly installed Solaris 10 U3 can't boot up normally
Hi Malachi
Tim's SMF bits work well (and also support remote backups via send/recv).
I use something like the process laid out at the bottom of:
http://blogs.sun.com/mmusante/entry/rolling_snapshots_made_easy
because it's dirt-simple and easily understandable.
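The core of that rolling-snapshot process can be sketched in a few lines of shell; the dataset name tank/home is hypothetical, and the blog entry has the full version:

```shell
#!/bin/sh
# Rolling snapshots, minimal sketch: one snapshot slot per weekday,
# so every snapshot is rotated out after a week.
FS=tank/home                         # hypothetical dataset
DAY=$(date +%a)                      # e.g. "Mon"
zfs destroy "$FS@$DAY" 2>/dev/null   # drop last week's snapshot, if any
zfs snapshot "$FS@$DAY"              # take today's in its place
```

Run from a daily cron job, this keeps the last seven days browsable under .zfs/snapshot.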
On 10/05/07, Malachi de
which one is the most performant: copies=2 or zfs-mirror?
What type of copies are you talking about?
Mirrored data in underlying storage subsystem or a (new) feature in zfs?
- Andreas
Hello,
I've got some weird problem: ZFS does not seem to be utilizing all disks in my
pool properly. For some reason, it's only using 2 of the 3 disks in my pool:
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM Hello,
LM I've got some weird problem: ZFS does not seem to be utilizing
LM all disks in my pool properly. For some reason, it's only using
LM 2 of the 3 disks in my pool:
LM                capacity     operations    bandwidth
LM
Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM Hello,
LM I've got some weird problem: ZFS does not seem to be utilizing
LM all disks in my pool properly. For some reason, it's only using
LM 2 of the 3 disks in my pool:
LM capacity
Simple test - mkfile 8gb now and see where the data goes... :)
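Spelled out, that test would look something like this (pool name taken as "database" from the thread; the file path is illustrative):

```shell
# Write 8GB of zeroed data into the pool...
mkfile 8g /database/testfile
# ...then watch per-vdev activity: 5-second samples, 6 of them.
zpool iostat -v database 5 6
# Clean up when done.
rm /database/testfile
```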
Victor Latushkin wrote:
Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM Hello,
LM I've got some weird problem: ZFS does not seem to be utilizing
LM all disks in my pool properly. For some
Hello Victor,
Thursday, May 10, 2007, 11:26:35 AM, you wrote:
VL Robert Milkowski wrote:
Hello Leon,
Thursday, May 10, 2007, 10:43:27 AM, you wrote:
LM Hello,
LM I've got some weird problem: ZFS does not seem to be utilizing
LM all disks in my pool properly. For some reason, it's only
What does zpool status database say?
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
What does zpool status database say?
Hello,
As far as I can see, there are no real errors:
-bash-3.00# zpool status database
pool: database
state: ONLINE
scrub: none requested
config:
        NAME        STATE     READ WRITE CKSUM
        database    ONLINE       0     0     0
The host for this is up in the air. I'd hope I could use a Shuttle XPC.
It's an 8 drive USB enclosure. The total bandwidth to all 8 drives
would be 480Mbps, which is fine for me. I was hoping to do a RAID-Z or
RAID-Z2. I would have it export the drives as JBOD.
To clarify further: the EMC note "EMC Host Connectivity Guide for Solaris"
indicates that ZFS is supported on 11/06 (aka Update 3) and onwards. However,
they sneak in a cautionary disclaimer that the snapshot and clone features are
supported by Sun. If one reads it carefully it appears that they do
Lots of small files perhaps? What kind of protection
have you used?
No protection, and as many small files as a full distro install has, plus some
more source code for some libs. It's just 28GB that needs to be resilvered, yet
it takes about 6 hours at this abysmal speed.
At first I thought it
Oh god, I found it. So freakin' bizarre. I'm now pushing 27MB/s average,
instead of a meager 1.6MB/s. That's more like it.
This is what happened:
Back in the day when I bought my first SATA drive, incidentally a WD Raptor, I
wanted Windows to boot off it, including bootloader placement on it and
That page says the b62 (which I have installed, with the ZFS-root bits)
doesn't support the recursive '-r'???
So it looks like I have to learn something else first. So how do I
upgrade to b63 without corrupting the existing root ZFS mirroring bits?
Thanks,
Malachi
On 5/10/07, Dick Davies
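For reference, the recursive '-r' being discussed takes one atomic snapshot of a dataset and all of its descendants, e.g. (pool name hypothetical):

```shell
zfs snapshot -r rootpool@pre-upgrade   # one consistent snapshot of every dataset
zfs list -t snapshot                   # the new snapshots should all be listed
```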
On Thu, 10 May 2007, mike wrote:
The host for this is up in the air. I'd hope I could use a Shuttle XPC.
It's an 8 drive USB enclosure. The total bandwidth to all 8 drives
would be 480Mbps, which is fine for me. I was hoping to do a RAID-Z or
RAID-Z2. I would have it export the drives as
We have around 1000 users all with quotas set on their ZFS filesystems on
Solaris 10 U3. We take snapshots daily and rotate out the week old ones. The
situation is that some users ignore the advice of keeping space used below 80%
and keep creating large temporary files. They then try to
Andreas Koppenhoefer wrote:
which one is the most performant: copies=2 or zfs-mirror?
Good question, hope to have some data soon. From the back of the napkin
analysis, for the 2-disk case, it will be very similar. However, copies
offers more possibilities than just 2 disks, so there is more
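For anyone following along, the two layouts being compared are created like this (device names are hypothetical):

```shell
# A classic 2-way zfs-mirror: redundancy across two disks.
zpool create mpool mirror c0t0d0 c0t1d0

# Ditto blocks: two copies of every block, even in a single-disk pool.
zpool create cpool c0t2d0
zfs set copies=2 cpool
```

Note that copies=2 on a single disk guards against localized media errors, not whole-disk failure.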
I have the same problem: the users can't remove their files when the quota is
reached.
The workaround is to raise the quota, remove the files, and set the original
quota again, so you can keep your snapshots.
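Concretely, that workaround is (dataset name and sizes hypothetical):

```shell
zfs get quota tank/home/alice       # note the current quota, say 10G
zfs set quota=12G tank/home/alice   # raise it temporarily
rm /tank/home/alice/big-temp-file   # the delete can now complete
zfs set quota=10G tank/home/alice   # put the original quota back
```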
On 10 May, 2007 - Bakul Shah sent me these 3,2K bytes:
[1] Top down resilvering seems very much like a copying
garbage collector. That similarity makes me wonder if the
physical layout can be rearranged in some way for a more
efficient access to data -- the idea is to resilver and
compactify
Bart wrote:
Adam Leventhal wrote:
On Wed, May 09, 2007 at 11:52:06AM +0100, Darren J Moffat wrote:
Can you give some more info on what these problems are.
I was thinking of this bug:
6460622 zio_nowait() doesn't live up to its name
Which I was surprised to find was fixed by
Side note: Is this right? ditto blocks are extra parity blocks
stored on the same disk (won't prevent total disk failures, but could
provide data recovery if enough parity is available)
Yes. See Richard Elling's excellent blog entry titled ZFS, copies, and data
protection, where one picture
On Thu, 2007-05-10 at 10:10 -0700, Jürgen Keil wrote:
Btw: In one experiment I tried to boot the kernel under kmdb
control (-kd), patched minclsyspri := 61 and used a
breakpoint inside spa_active() to patch the spa_zio_* taskq
to use prio 60 when importing the gzip compressed pool
(so that
On Wed, 9 May 2007, Anantha N. Srirama wrote:
However, the poor performance of the destroy is still valid. It is quite
possible that we might create another clone for reasons beyond my
original reason.
There are a few open bugs against destroy. It sounds like you may be
running into 6509628
I have a scenario where I have several ORACLE databases. I'm trying to
keep system downtime to a minimum for business reasons. I've created
zpools on three devices, an internal 148 GB drive (data) and two
partitions on an HP SAN. HP won't do JBOD so I'm stuck with relying
upon HP to give me a
On Thu, 10 May 2007, Bruce Shaw wrote:
I don't have enough disk to do clones and I haven't figured out how to
mount snapshots directly.
Maybe I'm misunderstanding what you're saying, but 'zfs clone' is exactly
the way to mount a snapshot. Creating a clone uses up a negligible amount
of disk
Mark J Musante [EMAIL PROTECTED] wrote:
Maybe I'm misunderstanding what you're saying, but 'zfs clone' is
exactly
the way to mount a snapshot. Creating a clone uses up a negligible
amount
of disk space, provided you never write to it. And you can always set
readonly=on if that's a concern.
So
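As a sketch, mounting a snapshot via a clone looks like this (snapshot and clone names hypothetical):

```shell
zfs clone tank/data@monday tank/monday-view   # near-instant, negligible space
zfs set readonly=on tank/monday-view          # optional: guard against writes
ls /tank/monday-view                          # browse the snapshot's contents
zfs destroy tank/monday-view                  # discard the clone when done
```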
On 5/8/07, Mario Goebbels [EMAIL PROTECTED] wrote:
While trying some things earlier in figuring out how zpool iostat is
supposed to be interpreted, I noticed that ZFS behaves kind of weird when
writing data. Not to say that it's bad, just interesting. I wrote 160MB of
zeroed data with dd. I had
mike wrote:
This is exactly the kind of feedback I was hoping for.
I'm wondering if some people consider FireWire to be better in OpenSolaris?
I've written some about a 4-drive Firewire-attached box based on the
Oxford 911 chipset, and I've had I/O grind to a halt in the face of
media errors
mike wrote:
Thanks for the reply.
On 5/10/07, Al Hopper [EMAIL PROTECTED] wrote:
Suggestion - try two 4-way raidz pools.
Wouldn't that bring usable space down to 2 pairs of 3x750?
Can those be combined into a single filesystem (for a total of 6x750
usable, but underlying would actually
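On the combining question: yes, two raidz vdevs can live in one pool, and ZFS stripes across them, so all the space appears in a single pool of filesystems (device names hypothetical):

```shell
# One pool with two 4-disk raidz top-level vdevs; writes are striped
# across both. With 750GB disks that is roughly 2 x (3x750) usable.
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                  raidz c1t4d0 c1t5d0 c1t6d0 c1t7d0
```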
[EMAIL PROTECTED] wrote on 05/10/2007 02:19:17 PM:
I have a scenario where I have several ORACLE databases. I'm trying to
keep system downtime to a minimum for business reasons. I've created
zpools on three devices, an internal 148 GB drive (data) and two
partitions on an HP SAN. HP
No, you will not be able to change the number of disks in a raid-z set
(I think that answers questions 1-4). There is no plan to implement
this feature.
Am I interpreting this correctly that there are no plans to allow
expansion of raid-z vdevs? This is one feature that I see as
Hi,
I have a test server that I use for testing my different jumpstart
installations. This system is continuously installed and reinstalled with
different system builds.
For some builds I have a finish script that creates a zpool using the utility
found in the Solaris 10 update 3 miniroot.
I
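Such a finish script might contain little more than this (pool and device names hypothetical):

```shell
#!/bin/sh
# Jumpstart finish-script fragment: create a pool with the zpool binary
# shipped in the U3 miniroot; -f reuses disks from the previous build.
/usr/sbin/zpool create -f tank c0t1d0 c0t2d0
```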