[Bug 1805256] Re: [Qemu-devel] qemu_futex_wait() lockups in ARM64: 2 possible issues
On Fri, Oct 11, 2019 at 10:18:18AM +0200, Paolo Bonzini wrote:
> On 11/10/19 08:05, Jan Glauber wrote:
> > On Wed, Oct 09, 2019 at 11:15:04AM +0200, Paolo Bonzini wrote:
> >>> ...but if I bump notify_me size to uint64_t the issue goes away.
> >>
> >> Ouch. :) Is this with or without my patch(es)?
>
> You didn't answer this question.

Oh, sorry... I did, but the mail probably didn't make it out. I have
both of your changes applied (as I think they make sense).

> >> Also, what if you just add a dummy uint32_t after notify_me?
> >
> > With the dummy the testcase also runs fine for 500 iterations.
>
> You might be lucky and causing list_lock to be in another cache line.
> What if you add __attribute__((aligned(16))) to notify_me (and keep
> the dummy)?

Good point. I'll try to force both into the same cacheline.

--Jan

> Paolo
>
> > Dann, can you check whether this works on the Hi1620 too?

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1805256

Title:
  qemu-img hangs on rcu_call_ready_event logic in Aarch64 when
  converting images

To manage notifications about this bug go to:
https://bugs.launchpad.net/kunpeng920/+bug/1805256/+subscriptions

-- 
ubuntu-bugs mailing list
ubuntu-bugs@lists.ubuntu.com
https://lists.ubuntu.com/mailman/listinfo/ubuntu-bugs
[Bug 1805256] Re: [Qemu-devel] qemu_futex_wait() lockups in ARM64: 2 possible issues
On Wed, Oct 09, 2019 at 11:15:04AM +0200, Paolo Bonzini wrote:
> On 09/10/19 10:02, Jan Glauber wrote:
> > I'm still not sure what the actual issue is here, but could it be some bad
> > interaction between the notify_me and the list_lock? They are both 4 bytes
> > and side-by-side:
> >
> > address notify_me: 0xdb528aa0 sizeof notify_me: 4
> > address list_lock: 0xdb528aa4 sizeof list_lock: 4
> >
> > AFAICS the generated code looks OK (all load/store exclusive done
> > with 32 bit size):
> >
> >  e6c: 885ffc01  ldaxr  w1, [x0]
> >  e70: 11000821  add    w1, w1, #0x2
> >  e74: 8802fc01  stlxr  w2, w1, [x0]
> >
> > ...but if I bump notify_me size to uint64_t the issue goes away.
>
> Ouch. :) Is this with or without my patch(es)?
>
> Also, what if you just add a dummy uint32_t after notify_me?

With the dummy the testcase also runs fine for 500 iterations.

Dann, can you check whether this works on the Hi1620 too?

--Jan
[Bug 1805256] Re: [Qemu-devel] qemu_futex_wait() lockups in ARM64: 2 possible issues
On Mon, Oct 07, 2019 at 04:58:30PM +0200, Paolo Bonzini wrote:
> On 07/10/19 16:44, dann frazier wrote:
> > On Mon, Oct 07, 2019 at 01:06:20PM +0200, Paolo Bonzini wrote:
> >> On 02/10/19 11:23, Jan Glauber wrote:
> >>> I've looked into this on ThunderX2. The arm64 code generated for the
> >>> atomic_[add|sub] accesses of ctx->notify_me doesn't contain any
> >>> memory barriers. It is just plain ldaxr/stlxr.
> >>>
> >>> From my understanding this is not sufficient for SMP sync.
> >>>
> >>> If I read this comment correctly:
> >>>
> >>> void aio_notify(AioContext *ctx)
> >>> {
> >>>     /* Write e.g. bh->scheduled before reading ctx->notify_me.  Pairs
> >>>      * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
> >>>      */
> >>>     smp_mb();
> >>>     if (ctx->notify_me) {
> >>>
> >>> it points out that the smp_mb() should be paired. But as
> >>> I said the used atomics don't generate any barriers at all.
> >>
> >> Based on the rest of the thread, this patch should also fix the bug:
> >>
> >> diff --git a/util/async.c b/util/async.c
> >> index 47dcbfa..721ea53 100644
> >> --- a/util/async.c
> >> +++ b/util/async.c
> >> @@ -249,7 +249,7 @@ aio_ctx_check(GSource *source)
> >>      aio_notify_accept(ctx);
> >>
> >>      for (bh = ctx->first_bh; bh; bh = bh->next) {
> >> -        if (bh->scheduled) {
> >> +        if (atomic_mb_read(&bh->scheduled)) {
> >>              return true;
> >>          }
> >>      }
> >>
> >> And also the memory barrier in aio_notify can actually be replaced
> >> with a SEQ_CST load:
> >>
> >> diff --git a/util/async.c b/util/async.c
> >> index 47dcbfa..721ea53 100644
> >> --- a/util/async.c
> >> +++ b/util/async.c
> >> @@ -349,11 +349,11 @@ LinuxAioState *aio_get_linux_aio(AioContext *ctx)
> >>
> >>  void aio_notify(AioContext *ctx)
> >>  {
> >> -    /* Write e.g. bh->scheduled before reading ctx->notify_me.  Pairs
> >> -     * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
> >> +    /* Using atomic_mb_read ensures that e.g. bh->scheduled is written before
> >> +     * ctx->notify_me is read.  Pairs with atomic_or in aio_ctx_prepare or
> >> +     * atomic_add in aio_poll.
> >>       */
> >> -    smp_mb();
> >> -    if (ctx->notify_me) {
> >> +    if (atomic_mb_read(&ctx->notify_me)) {
> >>          event_notifier_set(&ctx->notifier);
> >>          atomic_mb_set(&ctx->notified, true);
> >>      }
> >>
> >> Would you be able to test these (one by one possibly)?
> >
> > Paolo,
> >   I tried them both separately and together on a Hi1620 system, each
> > time it hung in the first iteration. Here's a backtrace of a run with
> > both patches applied:
>
> Ok, not a great start... I'll find myself an aarch64 machine and look
> at it more closely. I'd like the patch to be something we can
> understand and document, since this is probably the second most-used
> memory barrier idiom in QEMU.
>
> Paolo

I'm still not sure what the actual issue is here, but could it be some
bad interaction between the notify_me and the list_lock? They are both
4 bytes and side-by-side:

address notify_me: 0xdb528aa0 sizeof notify_me: 4
address list_lock: 0xdb528aa4 sizeof list_lock: 4

AFAICS the generated code looks OK (all load/store exclusive done
with 32 bit size):

 e6c: 885ffc01  ldaxr  w1, [x0]
 e70: 11000821  add    w1, w1, #0x2
 e74: 8802fc01  stlxr  w2, w1, [x0]

...but if I bump notify_me size to uint64_t the issue goes away.

BTW, the image file I convert in the testcase is ~20 GB.

--Jan

diff --git a/include/block/aio.h b/include/block/aio.h
index a1d6b9e24939..e8a5ea3860bb 100644
--- a/include/block/aio.h
+++ b/include/block/aio.h
@@ -83,7 +83,7 @@ struct AioContext {
      * Instead, the aio_poll calls include both the prepare and the
      * dispatch phase, hence a simple counter is enough for them.
      */
-    uint32_t notify_me;
+    uint64_t notify_me;

     /* A lock to protect between QEMUBH and AioHandler adders and deleter,
      * and to ensure that no callbacks are removed while we're walking and
[Bug 1805256] Re: [Qemu-devel] qemu_futex_wait() lockups in ARM64: 2 possible issues
On Mon, Oct 07, 2019 at 01:06:20PM +0200, Paolo Bonzini wrote:
> On 02/10/19 11:23, Jan Glauber wrote:
> > I've looked into this on ThunderX2. The arm64 code generated for the
> > atomic_[add|sub] accesses of ctx->notify_me doesn't contain any
> > memory barriers. It is just plain ldaxr/stlxr.
> >
> > From my understanding this is not sufficient for SMP sync.
> >
> > If I read this comment correctly:
> >
> > void aio_notify(AioContext *ctx)
> > {
> >     /* Write e.g. bh->scheduled before reading ctx->notify_me.  Pairs
> >      * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
> >      */
> >     smp_mb();
> >     if (ctx->notify_me) {
> >
> > it points out that the smp_mb() should be paired. But as
> > I said the used atomics don't generate any barriers at all.
>
> Based on the rest of the thread, this patch should also fix the bug:
>
> diff --git a/util/async.c b/util/async.c
> index 47dcbfa..721ea53 100644
> --- a/util/async.c
> +++ b/util/async.c
> @@ -249,7 +249,7 @@ aio_ctx_check(GSource *source)
>      aio_notify_accept(ctx);
>
>      for (bh = ctx->first_bh; bh; bh = bh->next) {
> -        if (bh->scheduled) {
> +        if (atomic_mb_read(&bh->scheduled)) {
>              return true;
>          }
>      }
>
> And also the memory barrier in aio_notify can actually be replaced
> with a SEQ_CST load:
>
> diff --git a/util/async.c b/util/async.c
> index 47dcbfa..721ea53 100644
> --- a/util/async.c
> +++ b/util/async.c
> @@ -349,11 +349,11 @@ LinuxAioState *aio_get_linux_aio(AioContext *ctx)
>
>  void aio_notify(AioContext *ctx)
>  {
> -    /* Write e.g. bh->scheduled before reading ctx->notify_me.  Pairs
> -     * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
> +    /* Using atomic_mb_read ensures that e.g. bh->scheduled is written before
> +     * ctx->notify_me is read.  Pairs with atomic_or in aio_ctx_prepare or
> +     * atomic_add in aio_poll.
>       */
> -    smp_mb();
> -    if (ctx->notify_me) {
> +    if (atomic_mb_read(&ctx->notify_me)) {
>          event_notifier_set(&ctx->notifier);
>          atomic_mb_set(&ctx->notified, true);
>      }
>
> Would you be able to test these (one by one possibly)?

Sure.

> > I've tried to verify my theory with this patch and didn't run into the
> > issue for ~500 iterations (usually I would trigger the issue after ~20
> > iterations).
>
> Sorry for asking the obvious---500 iterations of what?

The testcase mentioned in the Canonical issue:
https://bugs.launchpad.net/qemu/+bug/1805256

It's a simple image convert:
qemu-img convert -f qcow2 -O qcow2 ./disk01.qcow2 ./output.qcow2

Usually it got stuck after 3-20 iterations.

--Jan
[Bug 1805256] Re: [Qemu-devel] qemu_futex_wait() lockups in ARM64: 2 possible issues
On Wed, Oct 02, 2019 at 11:45:19AM +0200, Paolo Bonzini wrote:
> On 02/10/19 11:23, Jan Glauber wrote:
> > I've tried to verify my theory with this patch and didn't run into the
> > issue for ~500 iterations (usually I would trigger the issue after ~20
> > iterations).
>
> Awesome! That would be a compiler bug though, as atomic_add and atomic_sub
> are defined as sequentially consistent:
>
> #define atomic_add(ptr, n) ((void) __atomic_fetch_add(ptr, n, __ATOMIC_SEQ_CST))
> #define atomic_sub(ptr, n) ((void) __atomic_fetch_sub(ptr, n, __ATOMIC_SEQ_CST))

A compiler bug sounds kind of unlikely...

> What compiler are you using and what distro? Can you compile util/aio-posix.c
> with "-fdump-rtl-all -fdump-tree-all", zip the boatload of debugging files and
> send them my way?

This is on Ubuntu 18.04.3, gcc version 7.4.0
(Ubuntu/Linaro 7.4.0-1ubuntu1~18.04.1).

I've uploaded the debug files to:
https://bugs.launchpad.net/qemu/+bug/1805256/+attachment/5293619/+files/aio-posix.tar.xz

Thanks,
Jan

> Thanks,
>
> Paolo
[Bug 1805256] Re: qemu-img hangs on rcu_call_ready_event logic in Aarch64 when converting images
Debug files for aio-posix generated on 18.04 on ThunderX2.

Compiler: gcc version 7.4.0 (Ubuntu/Linaro 7.4.0-1ubuntu1~18.04.1)
Distro: Ubuntu 18.04.3 LTS

** Attachment added: "aio-posix.tar.xz"
   https://bugs.launchpad.net/qemu/+bug/1805256/+attachment/5293619/+files/aio-posix.tar.xz
[Bug 1805256] Re: [Qemu-devel] qemu_futex_wait() lockups in ARM64: 2 possible issues
I've looked into this on ThunderX2. The arm64 code generated for the
atomic_[add|sub] accesses of ctx->notify_me doesn't contain any memory
barriers. It is just plain ldaxr/stlxr.

From my understanding this is not sufficient for SMP sync.

If I read this comment correctly:

void aio_notify(AioContext *ctx)
{
    /* Write e.g. bh->scheduled before reading ctx->notify_me.  Pairs
     * with atomic_or in aio_ctx_prepare or atomic_add in aio_poll.
     */
    smp_mb();
    if (ctx->notify_me) {

it points out that the smp_mb() should be paired. But as I said the
used atomics don't generate any barriers at all.

I've tried to verify my theory with this patch and didn't run into the
issue for ~500 iterations (usually I would trigger the issue after ~20
iterations).

--Jan

diff --git a/util/aio-posix.c b/util/aio-posix.c
index d8f0cb4af8dd..d07dcd4e9993 100644
--- a/util/aio-posix.c
+++ b/util/aio-posix.c
@@ -591,6 +591,7 @@ bool aio_poll(AioContext *ctx, bool blocking)
      */
     if (blocking) {
         atomic_add(&ctx->notify_me, 2);
+        smp_mb();
     }

     qemu_lockcnt_inc(&ctx->list_lock);
@@ -632,6 +633,7 @@ bool aio_poll(AioContext *ctx, bool blocking)

     if (blocking) {
         atomic_sub(&ctx->notify_me, 2);
+        smp_mb();
     }

     /* Adjust polling time */
diff --git a/util/async.c b/util/async.c
index 4dd9d95a9e73..92ac209c4615 100644
--- a/util/async.c
+++ b/util/async.c
@@ -222,6 +222,7 @@ aio_ctx_prepare(GSource *source, gint *timeout)
     AioContext *ctx = (AioContext *) source;

     atomic_or(&ctx->notify_me, 1);
+    smp_mb();

     /* We assume there is no timeout already supplied */
     *timeout = qemu_timeout_ns_to_ms(aio_compute_timeout(ctx));
@@ -240,6 +241,7 @@ aio_ctx_check(GSource *source)
     QEMUBH *bh;

     atomic_and(&ctx->notify_me, ~1);
+    smp_mb();
     aio_notify_accept(ctx);

     for (bh = ctx->first_bh; bh; bh = bh->next) {
[Bug 1755073] Re: ubuntu_zram_smoke test will cause soft lockup on Artful ThunderX ARM64
Patches posted:

https://marc.info/?l=linux-kernel&m=152224242828223&w=2
https://marc.info/?l=linux-kernel&m=152224242228218&w=2

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1755073

Title:
  ubuntu_zram_smoke test will cause soft lockup on Artful ThunderX ARM64

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1755073/+subscriptions
[Bug 1755073] Re: ubuntu_zram_smoke test will cause soft lockup on Artful ThunderX ARM64
This is a regression caused by CONFIG_VMAP_STACK. The driver uses __pa
(virt_to_phys) on a stack address, which does not work with virtually
mapped stacks. Solving this upstream will require a bit more work; I'm
attaching a minimal patch that works around it by using kmalloc for the
problematic allocation.

** Patch added: "zip.patch"
   https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1755073/+attachment/5090802/+files/zip.patch
[Bug 1755073] Re: ubuntu_zram_smoke test will cause soft lockup on Artful ThunderX ARM64
I can reproduce this issue with 4.13.0-37-generic. The debugfs info for
the zip driver shows that 2 requests are pending. Either the requests
are not submitted properly (Decomp Req Submitted = 0) or not completed
(DOORBELL regs = 0).

root@crb1s:/sys/kernel/debug/thunderx_zip# cat zip_stats
ZIP Device 0 Stats
---
Comp Req Submitted        : 0
Comp Req Completed        : 0
Compress In Bytes         : 0
Compressed Out Bytes      : 0
Average Chunk size        : 0
Average Compression ratio : 0
Decomp Req Submitted      : 0
Decomp Req Completed      : 0
Decompress In Bytes       : 0
Decompressed Out Bytes    : 0
Decompress Bad requests   : 0
Pending Req               : 2
---

root@crb1s:/sys/kernel/debug/thunderx_zip# cat zip_regs
ZIP Device 0 Registers
ZIP_CMD_CTL        : 0x0002
ZIP_THROTTLE       : 0x0010
ZIP_CONSTANTS      : 0x02017c002006
ZIP_QUE0_MAP       : 0x0003
ZIP_QUE1_MAP       : 0x0003
ZIP_QUE_ENA        : 0x0003
ZIP_QUE_PRI        : 0x0003
ZIP_QUE0_DONE      : 0x
ZIP_QUE1_DONE      : 0x
ZIP_QUE0_DOORBELL  : 0x
ZIP_QUE1_DOORBELL  : 0x
ZIP_QUE0_SBUF_ADDR : 0xfa040100
ZIP_QUE1_SBUF_ADDR : 0xfa042000
ZIP_QUE0_SBUF_CTL  : 0x03f1
ZIP_QUE1_SBUF_CTL  : 0x03f1
[Bug 1747523] Re: support thunderx2 vendor pmu events
I've successfully verified that this issue is resolved after updating
to -proposed:

thunderx2 imp def:
  bus_access_rd        [Bus access read]
  bus_access_wr        [Bus access write]
  l1d_cache_rd         [L1D cache read]
  l1d_cache_refill_rd  [L1D cache refill read]
  l1d_cache_refill_wr  [L1D refill write]
  l1d_cache_wr         [L1D cache write]
  l1d_tlb_rd           [L1D tlb read]
  l1d_tlb_refill_rd    [L1D tlb refill read]
  l1d_tlb_refill_wr    [L1D tlb refill write]
  l1d_tlb_wr           [L1D tlb write]

Linux ubuntu 4.13.0-38-generic #43~16.04.1-Ubuntu SMP Wed Mar 14
17:49:43 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1747523

Title:
  support thunderx2 vendor pmu events

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1747523/+subscriptions
[Bug 1745246] Re: perf stat segfaults on uncore events w/o -a
Hi Stefan :)

I've successfully verified that this issue is resolved after updating
to -proposed:

ubuntu@ubuntu:~$ sudo perf stat -e l1d_cache_rd -- sleep 1

 Performance counter stats for 'sleep 1':

           159,369      l1d_cache_rd

       1.001034006 seconds time elapsed

Linux ubuntu 4.13.0-38-generic #43~16.04.1-Ubuntu SMP Wed Mar 14
17:49:43 UTC 2018 aarch64 aarch64 aarch64 GNU/Linux

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1745246

Title:
  perf stat segfaults on uncore events w/o -a

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1745246/+subscriptions
[Bug 1744875] [NEW] Enable Qlogic network drivers for installer
Public bug reported:

Not all Qlogic network drivers are enabled in the installer kernel
(Ubuntu 16.04), so detection of the network interface fails during the
installer network dialog. Please enable these drivers:

1) Add all E4 FastLinQ qed* drivers (qed, qede, qedr, qedi, qedf)
2) Add all E3 FastLinQ bnx* drivers (bnx2, cnic, bnx2x, bnx2i, bnx2fc)

qed and qede might already be enabled with a recent change:
https://bugs.launchpad.net/yarmouth2/+bug/1743569

** Affects: linux (Ubuntu)
   Importance: Undecided
   Status: Incomplete

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1744875

Title:
  Enable Qlogic network drivers for installer

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/linux/+bug/1744875/+subscriptions
[Bug 1718638] Re: Crash on Cavium ThunderX when using Openvswitch-DPDK: nicvf_eth_dev_init(): Failed to get ready message from PF / eal-intr-thread[41505]: unhandled level 2 translation fault"
Comments from Jerin Jacob:

A secondary queue set (>8 queues per port) won't work with the upstream
kernel. Can you please test with <=8 queues per port? If more queues are
needed, use XFI over XAUI to make it 4 * 10G instead of 1 * 40G.

-- 
You received this bug notification because you are a member of Ubuntu
Bugs, which is subscribed to Ubuntu.
https://bugs.launchpad.net/bugs/1718638

Title:
  Crash on Cavium ThunderX when using Openvswitch-DPDK:
  nicvf_eth_dev_init(): Failed to get ready message from PF / eal-intr-
  thread[41505]: unhandled level 2 translation fault"

To manage notifications about this bug go to:
https://bugs.launchpad.net/ubuntu/+source/dpdk/+bug/1718638/+subscriptions