Hi All,
Just a follow-up - it seems that whatever it was doing, it eventually got
done and the speed picked back up again. The send/recv finally
finished -- I guess I could do with a little patience :)
Lachlan
On Mon, Dec 5, 2011 at 10:47 AM, Lachlan Mulcahy wrote:
> Hi All,
>
> We are cur
Hi Bob,
On Mon, Dec 5, 2011 at 12:31 PM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
>
>>
>> Anything else you suggest I'd check for faults? (Though I'm sort of
>> doubting it is an issue, I'm happy to be thorough)
>>
>
> Try running
>
>
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
Anything else you suggest I'd check for faults? (Though I'm sort of doubting it
is an issue, I'm happy to be thorough)
Try running
fmdump -ef
and see if new low-level fault events are coming in during the zfs
receive.
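A minimal sketch of that, if it helps -- the dataset names below are just
placeholders for whatever you are actually sending:

  # terminal 1: follow the FMA error (telemetry) log in real time
  #   -e  show low-level error events rather than diagnosed faults
  #   -f  follow mode, like tail -f
  fmdump -ef

  # terminal 2: run the transfer as before, e.g.
  zfs send tank/fs@snap | zfs receive backup/fs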
Bob
--
Bob Friesenhahn
Hi Bob,
On Mon, Dec 5, 2011 at 11:19 AM, Bob Friesenhahn <bfrie...@simple.dallas.tx.us> wrote:
> On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
>
>> genunix`list_next                                  5822   3.7%
>> unix`mach_cpu_idle                               150261  96.1%
>>
>
>
On 12/05/11 10:47, Lachlan Mulcahy wrote:
> zfs`lzjb_decompress                                  10   0.0%
> unix`page_nextn                                      31   0.0%
> genunix`fsflush_do_pages                             37   0.0%
> zfs`dbuf_free_range
On Mon, 5 Dec 2011, Lachlan Mulcahy wrote:
genunix`list_next 5822 3.7%
unix`mach_cpu_idle 150261 96.1%
Rather idle.
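FWIW, a profile like the one above can be gathered with a DTrace one-liner
along these lines -- the 997 Hz rate and the 30 second window are just
illustrative choices, and it prints raw sample counts per kernel function
rather than percentages:

  # sample the on-CPU kernel PC at 997 Hz for 30 seconds, then print a
  # count for each module`function seen (idle time shows up as
  # unix`mach_cpu_idle, as above)
  dtrace -n 'profile-997 /arg0/ { @[func(arg0)] = count(); } tick-30s { exit(0); }'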
Top shows:
PID USERNAME NLWP PRI NICE SIZE RES STATE TIME CPU COMMAND
22945 root