Changing the order seems to have solved the problem.
Jim
On 10/26/2016 06:31 AM, Alfredo Cardigliano wrote:
> Hi Jim
> ideally you should kill snort/suri and then zbalance_ipc, however killing
> zbalance_ipc before should not lead to issues, this is something we
> usually do in our tests.
Hi Jim
ideally you should kill snort/suri and then zbalance_ipc, however killing
zbalance_ipc before should not lead to issues, this is something we usually
do in our tests.
Is this something always reproducible? Could you try changing the order
(killing zbalance_ipc after snort/suri) to check?
Yes, this is the ixgbe driver. All I did was run my version of
/etc/init.d/{suri,snortd} stop (testing both at the moment), and
both hosts locked up. I had to do a hard reset as the reboot hung
as well.
All the script does is kill zbalance_ipc then snort/suri. Should
I do that in reverse order?
This seems to be driver-related. You were using ixgbe in this test, right?
Did you do something like putting the interface down or unloading the
driver perhaps? Trying to figure out what caused this...
Thank you
Alfredo
> On 14 Oct 2016, at 22:25, Jim Hranicky wrote:
>
> Logs attached.
Logs attached.
Jim
On 10/14/2016 03:44 PM, Alfredo Cardigliano wrote:
> Uhm, hard to say, could you also provide dmesg?
>
> Alfredo
>
>> On 14 Oct 2016, at 18:07, Jim Hranicky wrote:
>>
>> And one more, sorry. I tried to stop zbalance_ipc to move to
>> 32 queues and am getting this error:
Hi Jim
please note the hashing algorithm and the distribution function themselves
handle more than 32 queues; the limit is in the fan-out support (multiple
applications), which uses a 32-bit mask. In essence, if you use -n 72 in
place of -n 72,1 you are able to handle 72 instances. Changing the
Uhm, hard to say, could you also provide dmesg?
Alfredo
> On 14 Oct 2016, at 18:07, Jim Hranicky wrote:
>
> And one more, sorry. I tried to stop zbalance_ipc to move to
> 32 queues and am getting this error:
>
> Message from syslogd@host at Oct 14 12:05:23 ...
> kernel:BUG: soft lockup - CPU#17 stuck for 22s! [migration/17:237]
Yes, RSS in the 82599 supports up to 16 queues; if you need more, moving to
the fm10k could be an option.
Alfredo
> On 14 Oct 2016, at 18:04, Jim Hranicky wrote:
>
> Another question. It seems that suricata can go into ZC mode
> without using zbalance_ipc, however, the card I have (82599)
> only supports RSS values of up to 16.
I have 2 CPUs, 18 cores each; HT gives 36 cores per CPU for a total
of 72 "cpus" as per /proc/cpuinfo.
I only have a 10g feed currently, but it appears the fm10k can take
10g SFPs and supports RSS values up to 128 according to fm10k_type.h:
#define FM10K_MAX_RSS_INDICES 128
Jim
Hi Jim,
I faced a similar problem with the same NIC some time ago. I found the same
upper bound on a 32-physical-core (64 cores with HT) server. The point is
that this server had two different CPUs (each with 16 physical cores, 32
cores with HT) and, maybe I'm wrong, but I seem to remember
And one more, sorry. I tried to stop zbalance_ipc to move to
32 queues and am getting this error:
Message from syslogd@host at Oct 14 12:05:23 ...
kernel:BUG: soft lockup - CPU#17 stuck for 22s! [migration/17:237]
Message from syslogd@host at Oct 14 12:05:23 ...
kernel:BUG: soft lockup
Another question. It seems that suricata can go into ZC mode
without using zbalance_ipc, however, the card I have (82599)
only supports RSS values of up to 16. Would I be able to take
advantage of all the cores I have with suri in this instance
if I moved to a card that can support more RSS entries?
How difficult would it be to add a hashing algorithm based
on the 5-tuple that can support more cores? Is that even
feasible?
Jim
On 10/14/2016 03:53 AM, Alfredo Cardigliano wrote:
> Hi Jim
> please note that when using distribution to multiple applications (using a
> comma-separated list in