On Thu, Jun 14, 2012 at 09:56:43AM +1000, Daniel Carosone wrote:
On Tue, Jun 12, 2012 at 03:46:00PM +1000, Scott Aitken wrote:
Hi all,
Hi Scott. :-)
I have a 5 drive RAIDZ volume with data that I'd like to recover.
Yeah, still..
I tried using Jeff Bonwick's labelfix binary to
Offlist/OT - Sheer guess, straight out of my parts - maybe a cronjob to
rebuild the locate db or something similar is hammering it once a week?
In the problem condition, there appears to be very little going on on the
system, e.g.:
root@server5:/tmp# /usr/local/bin/top
last pid: 3828;
On Jun 13, 2012, at 4:51 PM, Daniel Carosone wrote:
On Wed, Jun 13, 2012 at 05:56:56PM -0500, Timothy Coalson wrote:
client: ubuntu 11.10
/etc/fstab entry: server:/mainpool/storage /mnt/myelin nfs bg,retry=5,soft,proto=tcp,intr,nfsvers=3,noatime,nodiratime,async 0 0
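For comparison, a variant of that entry (illustrative only, not a recommendation made in the thread): mounting with "sync" instead of "async" makes the client issue NFSv3 stable writes up front, which avoids the separate COMMIT calls discussed later in the thread at the cost of per-write latency; "hard" rather than "soft" is also the usual advice for writable mounts.

```
# Hypothetical alternative fstab line (sketch, options are illustrative):
server:/mainpool/storage /mnt/myelin nfs bg,retry=5,hard,intr,proto=tcp,nfsvers=3,noatime,nodiratime,sync 0 0
```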
In message 201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk, tpc...@mklab.ph.rhul.ac.uk writes:
Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
My WAG is that your zpool history is hanging due to lack of
RAM.
John
groenv...@acm.org
In message 201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk, tpc...@mklab.ph.rhul.ac.uk writes:
Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
My WAG is that your zpool history is hanging due to lack of
RAM.
Interesting. In the problem state the system is
2012-06-14 19:11, tpc...@mklab.ph.rhul.ac.uk wrote:
In message 201206141413.q5eedvzq017...@mklab.ph.rhul.ac.uk, tpc...@mklab.ph.rhul.ac.uk writes:
Memory: 2048M phys mem, 32M free mem, 16G total swap, 16G free swap
My WAG is that your zpool history is hanging due to lack of
RAM.
Thanks for the script. Here is some sample output from 'sudo
./nfssvrtop -b 512 5' (my disks are 512B-sector emulated and the pool
is ashift=9, some benchmarking didn't show much difference with
ashift=12 other than giving up 8% of available space) during a copy
operation from 37.30 with
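The "giving up 8% of available space" observation can be illustrated with a rough model. This is my own sketch, not something posted in the thread: the function name and the simplified allocation rule (round data up to whole 2^ashift sectors, add one parity sector per stripe row, pad the total to a multiple of nparity+1 sectors, roughly the shape of raidz1 allocation) are assumptions for illustration.

```python
# Sketch (not from the thread): why a raidz1 pool can give up space at
# ashift=12 versus ashift=9.  Allocations are rounded up to 2^ashift-byte
# sectors, so small blocks waste proportionally more on 4K sectors.

def raidz1_alloc_bytes(psize, ashift, ndisks=5):
    """Approximate on-disk allocation for one block on a raidz1 vdev.

    Simplified model: data rounded up to whole sectors, one parity sector
    per stripe row, total padded to a multiple of (nparity + 1) sectors.
    """
    nparity = 1
    sector = 1 << ashift
    ndata = ndisks - nparity
    data_sectors = -(-psize // sector)                   # ceil division
    parity_sectors = -(-data_sectors // ndata) * nparity # one per stripe row
    total = data_sectors + parity_sectors
    total += (-total) % (nparity + 1)                    # pad to multiple of 2
    return total * sector

# A 16 KiB block on a 5-disk raidz1 allocates more at ashift=12 than at
# ashift=9, while a 128 KiB block comes out the same either way.
print(raidz1_alloc_bytes(16384, 9), raidz1_alloc_bytes(16384, 12))
print(raidz1_alloc_bytes(131072, 9), raidz1_alloc_bytes(131072, 12))
```

Under this model the overhead depends heavily on the block-size mix, which is consistent with benchmarks showing only a modest difference.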
Hi Tim,
On Jun 14, 2012, at 12:20 PM, Timothy Coalson wrote:
Thanks for the script. Here is some sample output from 'sudo
./nfssvrtop -b 512 5' (my disks are 512B-sector emulated and the pool
is ashift=9, some benchmarking didn't show much difference with
ashift=12 other than giving up 8% of available space)
The client is using async writes, which include commits. Sync writes do
not need commits.
What happens is that the ZFS transaction group commit occurs at more-
or-less regular intervals, likely 5 seconds for more modern ZFS
systems. When the commit occurs, any data that is in the ARC but not
The client is using async writes, which include commits. Sync writes do not
need commits.
Are you saying nfs commit operations sent by the client aren't always
reported by that script?
What happens is that the ZFS transaction group commit occurs at more-or-less
regular intervals, likely 5
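The interval described here is the zfs_txg_timeout tunable on Solaris-derived systems (the default dropped to 5 seconds on builds of that era). A sketch of how it could be pinned in /etc/system, as an illustration only, with the value shown being an example rather than advice from the thread:

```
* Illustrative /etc/system fragment: set the ZFS transaction group
* commit interval described above, in seconds (example value).
set zfs:zfs_txg_timeout = 5
```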
On 14 Jun 2012, at 23:15, Timothy Coalson tsc...@mst.edu wrote:
The client is using async writes, which include commits. Sync writes do not
need commits.
Are you saying nfs commit operations sent by the client aren't always
reported by that script?
They are not reported in your case because
Indeed they are there, shown with 1 second interval. So, it is the
client's fault after all. I'll have to see whether it is somehow
possible to get the server to write cached data sooner (and hopefully
asynchronous), and the client to issue commits less often. Luckily I
can live with the
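On a Linux client, one knob for getting commits issued less often is the kernel's dirty-page writeback tuning; these are standard Linux VM sysctls, but the values below are illustrative examples, not something prescribed in the thread:

```
# Illustrative /etc/sysctl.conf fragment: let dirty pages age longer so the
# NFS client batches writeback into fewer, larger WRITE+COMMIT bursts.
# Values are examples only.
vm.dirty_writeback_centisecs = 1500
vm.dirty_expire_centisecs = 6000
vm.dirty_background_ratio = 10
```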