On 06/19/2012 11:05 AM, Sašo Kiselkov wrote:
On 06/18/2012 07:50 PM, Roch wrote:
Are we hitting :
7167903 Configuring VLANs results in single threaded soft ring fanout
Confirmed, it is definitely this.
Hold the phone, I just tried unconfiguring all of the VLANs in the
system and went
On 06/18/2012 12:05 AM, Richard Elling wrote:
You might try some of the troubleshooting techniques described in Chapter 5
of the DTrace book by Brendan Gregg and Jim Mauro. It is not clear from your
description that you are seeing the same symptoms, but the technique should
apply.
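For reference, the xcall activity itself can be observed directly with DTrace; a minimal sketch, assuming the standard illumos sysinfo and profile providers, with CPU 31 (the pegged CPU seen later in the thread) as a placeholder:

```shell
# Count cross-calls by the kernel stack that issued them (Ctrl-C to stop):
dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'

# Sample kernel stacks on the pegged CPU to see where the sys time goes
# (the CPU id is an assumption -- substitute the one mpstat flags):
dtrace -n 'profile-997 /cpu == 31 && arg0 != 0/ { @[stack()] = count(); }'
```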
--
On 06/13/2012 03:43 PM, Roch wrote:
Sašo Kiselkov writes:
On 06/12/2012 05:37 PM, Roch Bourbonnais wrote:
So the xcalls are a necessary part of memory reclaiming, when one needs to
tear down the TLB entry mapping the physical memory (which can from here on
be repurposed).
So
Seems the problem is somewhat more egregious than I thought. The xcall
storm causes my network drivers to stop receiving IP multicast packets
and subsequently my recording applications record bad data, so
ultimately, this kind of isn't workable... I need to somehow resolve
this... I'm running four
-discuss] Occasional storm of xcalls on segkmem_zio_free
So the xcalls are just part of this. They should not cause trouble, but they
do: they consume a CPU for some time.
That in turn can
On 06/12/2012 05:21 PM, Matt Breitbach wrote:
I saw this _exact_ problem after I bumped ram from 48GB to 192GB. Low
memory pressure seemed to be the culprit. Happened usually during storage
vmotions or something like that which effectively nullified the data in the
ARC (sometimes 50GB of
On 06/12/2012 06:06 PM, Jim Mauro wrote:
So try unbinding the mac threads; it may help you here.
How do I do that? All I can find on interrupt fencing and the like is to
simply set certain processors to no-intr, which moves all of the
interrupts, and it doesn't prevent the xcall storm choosing to affect
these CPUs either…
On 06/12/2012 05:58 PM, Andy Bowers - Performance Engineering wrote:
find where your nics are bound to
mdb -k
::interrupts
create a processor set including those cpus [ so just the nic code will
run there ]
andy
Tried and didn't help, unfortunately. I'm still seeing drops. What's
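For reference, Andy's two steps map to commands along these lines (a sketch; the CPU ids are placeholders, substitute whatever ::interrupts reports for the NICs):

```shell
# 1. Find which CPUs the NIC interrupts are bound to:
echo ::interrupts | mdb -k

# 2. Fence those CPUs off into their own processor set so that only
#    work explicitly bound there runs on them (2 and 3 are hypothetical):
psrset -c 2 3
```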
On 06/12/2012 07:19 PM, Roch Bourbonnais wrote:
Try with this /etc/system tunings :
set mac:mac_soft_ring_thread_bind=0
set mac:mac_srs_thread_bind=0
set zfs:zio_taskq_batch_pct=50
Thanks for the recommendations, I'll try and see whether it helps, but
this is going to take me a while
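For reference, each set directive belongs on its own line in /etc/system and takes effect at the next boot; whether a value actually took can be checked afterwards with mdb, e.g. (a sketch using the variable names from the tunings above):

```shell
# Print the current values of the tunables as decimal integers:
echo 'mac_soft_ring_thread_bind/D' | mdb -k
echo 'zio_taskq_batch_pct/D' | mdb -k
```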
On Jun 6, 2012, at 12:48 AM, Sašo Kiselkov wrote:
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of
On 06/06/2012 05:01 PM, Sašo Kiselkov wrote:
I'll try and load the machine with dd(1) to the max to see if access
patterns of my software have something to do with it.
Tried and tested, any and all write I/O to the pool causes this xcall
storm issue, writing more data to it only exacerbates it
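The dd(1) load described above can be reproduced with something along these lines (a sketch; the target path is a placeholder, point it at a file on the affected pool to generate sustained write I/O):

```shell
# Stream zeros at the filesystem to generate sustained write load;
# /tmp/ddtest.bin is a stand-in for a file on the pool under test.
TARGET=/tmp/ddtest.bin
dd if=/dev/zero of="$TARGET" bs=1M count=64
# Inspect the result, then clean up.
ls -l "$TARGET"
rm -f "$TARGET"
```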
On 06/06/2012 09:43 PM, Jim Mauro wrote:
I can't help but be curious about something, which perhaps you verified but
did not post.
What the data here shows is:
- CPU 31 is buried in the kernel (100% sys).
- CPU 31 is handling a moderate-to-high rate of xcalls.
What the data does not prove empirically is that the 100% sys time of
CPU