Bill Sommerfeld, Sorry,
However, I am trying to explain what I think is
happening on your system and why I consider this
normal.
Most of the reads/FS "replace" are normally
at the block level.
To copy a FS, some level of reading MUST be done
Hi Robert,
Yes, it could be related, or even the bug. Certainly the replay
was (prior to this bug fix) extremely slow. I don't really have enough
information to determine if it's the exact problem, though after
re-reading your original post I strongly suspect it is.
I also putback a companion f
On Thu, 2006-11-09 at 19:18 -0800, Erblichs wrote:
> Bill Sommerfield,
Again, that's not how my name is spelled.
> With some normal sporadic read failure, accessing
> the whole spool may force repeated reads for
> the replace.
please look again at the iostat I posted:
Bill Sommerfield,
Because, first, I have seen a lot of I/O
occur while a snapshot is being aged out
of a system.
I don't think that accesses (reads, writes) to the orig_dev
are completely stopped during the resilvering process.
I expect at l
Thanks to Anton, Robert, and Mark for your replies. Your answers verified my
observation ;-).
The reason I want to use up the inodes is that we need to test the
behavior in the case where both blocks and inodes are used up. If we only fill
up the blocks, creating an empty file still succeeds.
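As an aside, here is a minimal sketch (a hypothetical helper, not from this thread) of that inode-exhaustion test: create empty files until the filesystem refuses with ENOSPC.

```python
import errno
import os

def create_until_enospc(directory, limit=1_000_000):
    """Create empty files until the filesystem refuses with ENOSPC
    (on UFS this happens once the fixed inode table is exhausted).
    Returns the number of files actually created."""
    created = 0
    for i in range(limit):
        try:
            # O_EXCL guarantees a fresh name; each empty file costs one inode
            fd = os.open(os.path.join(directory, f"f{i}"),
                         os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            created += 1
        except OSError as e:
            if e.errno == errno.ENOSPC:
                break
            raise
    return created
```

On ZFS the same loop would instead run until the pool itself is out of space, since inodes are allocated dynamically.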
Thanks,
Anto
Thanks for all the feedback. This PSARC case was approved yesterday and
will be integrated relatively soon.
Adam
On Wed, Nov 01, 2006 at 01:33:33AM -0800, Adam Leventhal wrote:
> Rick McNeal and I have been working on building support for sharing ZVOLs
> as iSCSI targets directly into ZFS. Below
Hello Neil,
I can see that http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6478388
has been integrated. I guess it could be related to the problem I described here,
right?
--
Best regards,
Robert                          mailto:[EMAIL PROTECTED]
http://milek.bl
Tomas Ögren wrote On 11/09/06 13:47,:
On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes:
Tomas Ögren wrote On 11/09/06 09:59,:
1. DNLC-through-ZFS doesn't seem to listen to ncsize.
The filesystem currently has ~550k inodes and large portions of it are
frequently looked ov
Hello flama,
Friday, November 10, 2006, 12:09:04 AM, you wrote:
f> thx,I hope that it is soon. Great blog Robert, is in my list of favorite
blogs.
Glad you like it :)
--
Best regards,
Robert                          mailto:[EMAIL PROTECTED]
http://mil
thx, I hope that it is soon. Great blog, Robert; it is on my list of favorite blogs.
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
Hello Richard,
Thursday, November 2, 2006, 8:09:44 PM, you wrote:
REP> I've added my RAIDoptimizer predictions below...
RAIDoptimizer - what is it exactly and where can I get it? :)
--
Best regards,
Robert                          mailto:[EMAIL PROTECTED]
Hello Richard,
Tuesday, November 7, 2006, 5:19:07 PM, you wrote:
REP> Robert Milkowski wrote:
>> Saturday, November 4, 2006, 12:46:05 AM, you wrote:
>> REP> Incidentally, since ZFS schedules the resync iops itself, then it can
>> REP> really move along on a mostly idle system. You should be able
Hello Tomas,
Thursday, November 9, 2006, 9:47:17 PM, you wrote:
TÖ> On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes:
TÖ> Current memory usage (for some values of usage ;):
TÖ> # echo ::memstat|mdb -k
TÖ> Page Summary                Pages                MB  %Tot
TÖ>
Hello flama,
Thursday, November 9, 2006, 5:44:36 PM, you wrote:
f> Hi people,
f> Is it possible to detach a device from a striped ZFS pool without destroying the pool?
f> ZFS is similar to domains in Tru64, which can detach a device from a
f> stripe and reallocate the datasets' space onto the remaining free disks.
No
Brian Wong wrote:
eric kustarz wrote:
If the ARC detects low memory (via arc_reclaim_needed()), then we call
arc_kmem_reap_now() and subsequently dnlc_reduce_cache() - which
reduces the # of dnlc entries by 3% (ARC_REDUCE_DNLC_PERCENT).
So yeah, dnlc_nentries would be really interesting to
On 09 November, 2006 - Tomas Ögren sent me these 4,4K bytes:
> On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes:
>
> > nfs does have a maximum number of rnodes which is calculated from the
> > memory available. It doesn't look like nrnode_max can be overridden.
>
> rnode seems to take
eric kustarz wrote:
If the ARC detects low memory (via arc_reclaim_needed()), then we call
arc_kmem_reap_now() and subsequently dnlc_reduce_cache() - which
reduces the # of dnlc entries by 3% (ARC_REDUCE_DNLC_PERCENT).
So yeah, dnlc_nentries would be really interesting to see (especially
if
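The arithmetic of that shrink path can be modeled in a few lines (an illustrative sketch only; ARC_REDUCE_DNLC_PERCENT is taken from the description above, and this is not the actual kernel code):

```python
# Mirrors the ARC_REDUCE_DNLC_PERCENT value quoted above (3%)
ARC_REDUCE_DNLC_PERCENT = 3

def dnlc_entries_after_reap(nentries, rounds=1):
    """Model how repeated low-memory reaps shrink the DNLC: each call to
    dnlc_reduce_cache() trims ARC_REDUCE_DNLC_PERCENT percent of entries."""
    for _ in range(rounds):
        nentries -= nentries * ARC_REDUCE_DNLC_PERCENT // 100
    return nentries
```

So a cache of ~550k entries loses roughly 16,500 entries per reap; it takes many reaps under sustained memory pressure to shrink it substantially.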
On 09 November, 2006 - Darren Dunham sent me these 0,7K bytes:
> > I don't think you'd see the same performance benefits on RAID-Z since
> > parity isn't always on the same disk. Are you seeing hot/cool disks?
>
> In addition, doesn't it always have to read all columns so that the
> parity can be
On 09 November, 2006 - Neil Perrin sent me these 1,6K bytes:
>
>
> Tomas Ögren wrote On 11/09/06 09:59,:
>
> >1. DNLC-through-ZFS doesn't seem to listen to ncsize.
> >
> >The filesystem currently has ~550k inodes and large portions of it are
> >frequently looked over with rsync (over nfs). mdb s
Neil Perrin wrote:
Tomas Ögren wrote On 11/09/06 09:59,:
1. DNLC-through-ZFS doesn't seem to listen to ncsize.
The filesystem currently has ~550k inodes and large portions of it are
frequently looked over with rsync (over nfs). mdb said ncsize was about
68k and vmstat -s said we had a hitrat
> I don't think you'd see the same performance benefits on RAID-Z since
> parity isn't always on the same disk. Are you seeing hot/cool disks?
In addition, doesn't it always have to read all columns so that the
parity can be validated?
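As a sketch of why validation needs every column: with simple XOR parity (an illustrative single-parity model, not the exact RAID-Z on-disk layout), both checking a stripe and rebuilding a lost column require reading all the other columns.

```python
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR across equal-length data columns."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Made-up two-byte columns for illustration
data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]
parity = xor_blocks(data)

# Verifying parity touches every column: data + parity must XOR to zero
assert xor_blocks(data + [parity]) == b"\x00\x00"

# Rebuilding a missing column likewise reads all survivors plus parity
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
```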
--
Darren Dunham [
Hi--
ZFS stripes data across all pool configurations, but you can only detach
a device from a mirrored storage pool.
For more information, see this section:
http://docs.sun.com/app/docs/doc/817-2271/6mhupg6ft?a=view
However, figuring out that this operation is only supported in a
mirrored conf
A UFS file system has a fixed number of inodes, set when the file system is
created. df can simply report how many of those have been used, and how many
are free.
Most file systems, including ZFS and QFS, allocate inodes dynamically. In this
case, there really isn’t a “number of files free” tha
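A minimal sketch of how that difference is visible from userland, assuming a POSIX statvfs(2) interface (illustrative only; not from the original message):

```python
import os

def files_free(path):
    """Return the 'free file nodes' count reported by statvfs.
    On UFS this comes from the fixed inode table; on ZFS it is an
    estimate derived from free pool space, so it can rise and fall
    as blocks are consumed."""
    return os.statvfs(path).f_ffree
```

`df -e` reports essentially this same f_ffree value, which is why the number behaves differently on the two filesystems.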
Tomas Ögren wrote On 11/09/06 09:59,:
1. DNLC-through-ZFS doesn't seem to listen to ncsize.
The filesystem currently has ~550k inodes and large portions of it are
frequently looked over with rsync (over nfs). mdb said ncsize was about
68k and vmstat -s said we had a hitrate of ~30%, so I set
I don't think you'd see the same performance benefits on RAID-Z since
parity isn't always on the same disk. Are you seeing hot/cool disks?
Adam
On Sun, Nov 05, 2006 at 04:03:18PM +0100, Pawel Jakub Dawidek wrote:
> In my opinion RAID-Z is closer to RAID-3 than to RAID-5. In RAID-3 you
> do only f
Greetings, all.
I put myself into a bit of a predicament, and I'm hoping there's a way out.
I had a drive (EIDE) in a ZFS mirror die on me. Not a big deal, right? Well, I
bought two SATA drives to build a new mirror. Since they were about the same
size (I wanted bigger drives, but they were out
Richard Elling - PAE wrote:
ZFS fans,
Recalling our conversation about hot-plug and hot-swap terminology and use,
I'm afraid to say that CR 6483250 has been closed as will-not-fix. No
explanation was given.
A bug that is closed will-not-fix should, at the very least, have some
rationale as
ZFS fans,
Recalling our conversation about hot-plug and hot-swap terminology and use,
I'm afraid to say that CR 6483250 has been closed as will-not-fix. No
explanation was given. If you feel strongly about this, please open another CR and pile on.
*Change Request ID*: 6483250
*Synopsis*: X2100 r
Hello.
We're currently using a Sun Blade1000 (2x750MHz, 1G ram, 2x160MB/s mpt
scsi buses, skge GigE network) as a NFS backend with ZFS for
distribution of free software like Debian (cdimage.debian.org,
ftp.se.debian.org) and have run into some performance issues.
We are running SX snv_48 and have
Hi people,
Is it possible to detach a device from a striped ZFS pool without destroying the pool?
ZFS is similar to domains in Tru64, which can detach a device from a stripe and
reallocate the datasets' space onto the remaining free disks.
thx.
Robert Milkowski wrote:
Hello John,
Thursday, November 9, 2006, 12:03:58 PM, you wrote:
JC> Hi all,
JC> When testing our programs, I got a problem. On UFS, we get the number of
JC> free inodes via 'df -e', then do some things based on this value, such as
JC> creating an empty file; the value will de
Hello John,
Thursday, November 9, 2006, 12:03:58 PM, you wrote:
JC> Hi all,
JC> When testing our programs, I got a problem. On UFS, we get the number of
JC> free inodes via 'df -e', then do some things based on this value, such as
JC> creating an empty file; the value will decrease by 1. But on ZFS, i
Hi all,
When testing our programs, I ran into a problem. On UFS, we get the number of
free inodes via 'df -e', then do some things based on this value, such as
creating an empty file; the value will decrease by 1. But on ZFS, it does
not work. I still can get a number via 'df -e', and create a same empty