[vpp-dev] Latest News on FD.io

2022-06-23 Thread Dave Wallace

Folks,

FD.io TSC member Ray Kinsella has been blogging recently on FD.io VPP.  
His blog posts are now included on the FD.io web site under 'The 
Latest'->'Latest News' [0].


I highly recommend that you check them out.

Many thanks to Ray for his contributions to the FD.io Community!
-daw-

[0] https://fd.io/latest/news/




Re: [vpp-dev] [tsc-private] [csit-dev] TRex - replacing CVL with MLX for 100GbE

2022-06-23 Thread Maciek Konstantynowicz (mkonstan) via lists.fd.io
Hi Dave,

Per my action item from last week's TSC meeting (item 4.c. in [1]), here is
the list of HW that the FD.io project needs and that we can order at any
time:

1. 28 NICs, 2p100 GbE from Nvidia / Mellanox - preferred:
   MCX613106A-VDAT, less preferred: MCX556A-EDAT, to cover the following
   testbeds:

   a. Performance 3-Node-ICX, 2 testbeds, 4 SUTs, 2 TGs
   b. Performance 2-Node-ICX, 4 testbeds, 4 SUTs, 4 TGs
   c. ICX TGs for other systems, 3 TGs
   d. 3-Node-Alt (Ampere Altra Arm N1), 1 testbed, 2 SUTs, 1 TG
   e. (exact breakdown in my email from 28 Jan 2022 in the thread below)

2. If we also want to add an MLX NIC for functional vpp_device tests, that
   would be an additional 2 MLX 2p100GbE NICs.

Things that we originally planned but can't place orders for yet, as the HW
is not available:

3. TBC number of 2-socket Xeon Sapphire Rapids servers

   a. Intel Xeon processor SKUs are not yet available to us - expecting an
      update any week now.
   b. Related SuperMicro SKUs are not yet available to us - expecting an
      update any week now.

Hope this helps. Happy to answer any questions.

Cheers,
-Maciek

[1] 
https://ircbot.wl.linuxfoundation.org/meetings/fdio-meeting/2022/fd_io_tsc/fdio-meeting-fd_io_tsc.2022-06-09-15.00.html

On 5 Apr 2022, at 12:55, Maciek Konstantynowicz (mkonstan) <mkons...@cisco.com> wrote:

Super, thanks!

On 4 Apr 2022, at 20:22, Dave Wallace <dwallac...@gmail.com> wrote:

Hi Maciek,

I have added this information to the TSC Agenda [0].

Thanks,
-daw-
[0] https://wiki.fd.io/view/TSC#Agenda

On 4/4/2022 10:46 AM, Maciek Konstantynowicz (mkonstan) wrote:


Begin forwarded message:

From: mkonstan <mkons...@cisco.com>
Subject: Re: [tsc-private] [csit-dev] TRex - replacing CVL with MLX for 100GbE
Date: 3 March 2022 at 16:23:08 GMT
To: Ed Warnicke <e...@cisco.com>
Cc: "tsc-priv...@lists.fd.io" 
mailto:tsc-priv...@lists.fd.io>>, Lijian Zhang 
mailto:lijian.zh...@arm.com>>

+Lijian

Hi,

Resending the email from January so that it's refreshed in our collective memory, as
discussed on the TSC call just now.

The number of 2p100GbE MLX NICs needed for performance testing of Ampere Altra
servers is listed under point 4 below.

Let me know if anything is unclear or if you have any questions.

Cheers,
Maciek

On 28 Jan 2022, at 17:35, Maciek Konstantynowicz (mkonstan) via lists.fd.io <mkonstan=cisco@lists.fd.io> wrote:

Hi Ed, Trishan,

One correction regarding my last email from 25-Jan:-

For the Intel Xeon Icelake testbeds, apart from just replacing the E810s on the TRex
servers, we should also consider adding MLX 100GbE NICs for the SUTs, so that
FD.io could benchmark MLX on the latest Intel Xeon CPUs. Exactly as
discussed in our side conversation, Ed.

Here is an updated calculation with a breakdown for the Icelake (ICX) builds (the Cascadelake
part stays as per my previous email):

// Sorry for the TL;DR - if you just want the number of NICs, scroll to the bottom of
this message :)

(SUT = system under test: the server running VPP and the NICs under test)
(TG = traffic generator: the server running TRex; its link speeds need to match the SUTs')

1. 3-Node-ICX, 2 testbeds, 4 SUTs, 2 TGs
   - 4 SUT/VPP/dpdk servers
     - 4 ConnectX NICs, 1 per SUT - test ConnectX on SUT
   - 2 TG/TRex servers
     - 2 ConnectX NICs, 1 per TG - replace E810s and test E810 on SUT
     - 2 ConnectX NICs, 1 per TG - test ConnectX on SUT
     - 1 ConnectX NIC, 1 per testbed type - for TRex calibration
   - sub-total 9 NICs

2. 2-Node-ICX, 4 testbeds, 4 SUTs, 4 TGs
   - 4 SUT/VPP/dpdk servers
     - 4 ConnectX NICs, 1 per SUT - test ConnectX on SUT
   - 4 TG/TRex servers
     - 4 ConnectX NICs, 1 per TG - replace E810s and test E810 on SUT
     - 4 ConnectX NICs, 1 per TG - test ConnectX on SUT
     - 1 ConnectX NIC, 1 per testbed type - for TRex calibration
   - sub-total 13 NICs

3. ICX TGs for other systems, 3 TGs
   - 3 TG/TRex servers
     - 3 ConnectX NICs, 1 per TG - replace E810s and test ConnectX and other 100GbE NICs on SUTs
     - 1 ConnectX NIC, 1 per testbed type - for TRex calibration
   - sub-total 4 NICs

4. 3-Node-Alt (Ampere Altra Arm N1), 1 testbed, 2 SUTs, 1 TG
   - 2 SUT/VPP/dpdk servers
     - 2 ConnectX NICs, 1 per SUT - test ConnectX on SUT
   - 1 TG/TRex server
     - will use one of the ICX TGs as listed in point 3.
   - sub-total 2 NICs

Total 28 NICs.
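
For a quick sanity check of the arithmetic, here is a tiny Python tally of the counts
above (the testbed labels and field names are shorthand used only for this sketch, not
official identifiers):

# Per-testbed NIC counts from the breakdown above.
nics_per_testbed = {
    "3-Node-ICX":   {"sut": 4, "tg_replace_e810": 2, "tg_connectx": 2, "trex_calibration": 1},
    "2-Node-ICX":   {"sut": 4, "tg_replace_e810": 4, "tg_connectx": 4, "trex_calibration": 1},
    "ICX-TG-other": {"tg_replace_e810": 3, "trex_calibration": 1},
    "3-Node-Alt":   {"sut": 2},  # its TG is shared with the ICX TGs in point 3
}

subtotals = {name: sum(counts.values()) for name, counts in nics_per_testbed.items()}
print(subtotals)                # {'3-Node-ICX': 9, '2-Node-ICX': 13, 'ICX-TG-other': 4, '3-Node-Alt': 2}
print(sum(subtotals.values()))  # 28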

Hope this makes sense ...

Cheers,
Maciek

P.S. I'm on PTO now until 7-Feb, so email responses will be delayed.

On 25 Jan 2022, at 16:38, mkonstan <mkons...@cisco.com> wrote:

Hi Ed, Trishan,

Following up from the last TSC call, here are the details of the Nvidia Mellanox
NICs that we are after for CSIT.

For existing Intel Xeon Cascadelake testbeds we have one option:

- MCX556A-EDAT NIC 2p100GbE - $1,195.00 - details in [2].
- need 4 NICs, plus 1 spare => 5 NICs

For the new Intel Xeon Icelake testbeds we have two options:

- MCX556A-EDAT NIC 2p100GbE - $1,195.00 - same as above, OR
- MCX613106A-VDAT 

Re: [vpp-dev] Is Vpp21.06 compatible with Redhat8.5 ?

2022-06-23 Thread Benoit Ganne (bganne) via lists.fd.io
Hi Chetan,

> You mean we don't support CentOS anymore and we only support Ubuntu?

I mean it is not run in CI anymore, so things will break without us realizing it, but
the packaging scripts etc. still exist.
So it is mostly a matter of someone using VPP on CentOS volunteering to
maintain it and proposing patches in Gerrit.
For example, SUSE is not run in CI but is regularly maintained.

Best
ben




Re: [vpp-dev] Is Vpp21.06 compatible with Redhat8.5 ?

2022-06-23 Thread chetan bhasin
Hi Ben,

You mean we don't support CentOS anymore and we only support Ubuntu?

Thanks,
Chetan


On Wed, Jun 22, 2022, 12:48 Benoit Ganne (bganne) via lists.fd.io  wrote:

> We do not run CentOS/opensource RHEL-equivalent anymore and I do not think
> anybody is looking at it.
> Chances are it is quietly bit-rotting...
>
> ben
>
> > -----Original Message-----
> > From: vpp-dev@lists.fd.io  On Behalf Of chetan
> bhasin
> > Sent: Wednesday, June 22, 2022 8:00
> > To: vpp-dev 
> > Subject: [vpp-dev] Is Vpp21.06 compatible with Redhat8.5 ?
> >
> > Hi ,
> >
> > We are trying to compile vpp21.06 on Redhat 8.5. We are facing some
> > errors in make install-dep. Is it compatible?
> >
> > Thanks,
> > CB
