RE: [PATCH net-next] liquidio: add support for OVS offload

2017-06-04 Thread Chickles, Derek


> From: David Miller [mailto:da...@davemloft.net]
> Sent: Saturday, May 27, 2017 5:07 PM
> Subject: Re: [PATCH net-next] liquidio: add support for OVS offload
> 
> From: Felix Manlunas <felix.manlu...@cavium.com>
> Date: Sat, 27 May 2017 08:56:33 -0700
> 
> > From: VSR Burru <veerasenareddy.bu...@cavium.com>
> >
> > Add support for OVS offload.  By default PF driver runs in basic NIC
> > mode as usual.  To run in OVS mode, use the insmod parameter
> "fw_type=ovs".
> >
> > For OVS mode, create a management interface for communication with NIC
> > firmware.  This communication channel uses PF0's I/O rings.
> >
> > Bump up driver version to 1.6.0 to match newer firmware.
> >
> > Signed-off-by: VSR Burru <veerasenareddy.bu...@cavium.com>
> > Signed-off-by: Felix Manlunas <felix.manlu...@cavium.com>
> 

Hi David,

We probably should have included a cover letter with this patch, so we'll try 
to explain how everything works here. We've also reassessed the mechanism for 
triggering the creation of the management interface, and will be submitting a 
revised patch shortly that creates the interface in response to a message from 
the firmware at startup, instead of checking a module parameter.
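
To sketch what that looks like (illustrative only, not the revised patch 
itself: the subcode OPCODE_NIC_MGMT_NOTICE and the handler name are made-up 
placeholders, while octeon_register_dispatch_fn(), octeon_free_recv_info() and 
lio_mgmt_init() are existing driver/patch functions), the revised driver 
registers a dispatch handler for that startup message:

static int lio_mgmt_notice_handler(struct octeon_recv_info *recv_info,
                                   void *arg)
{
        struct octeon_device *oct = arg;

        /* Firmware announced that it runs an OS image which wants a
         * host-side management netdev, so create the interface now
         * instead of keying off a module parameter.
         */
        lio_mgmt_init(oct);

        octeon_free_recv_info(recv_info);
        return 0;
}

/* Registered from liquidio_init_nic_module(), alongside the other
 * OPCODE_NIC dispatch registrations:
 *
 *      octeon_register_dispatch_fn(oct, OPCODE_NIC,
 *                                  OPCODE_NIC_MGMT_NOTICE,
 *                                  lio_mgmt_notice_handler, oct);
 */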

Anyway, this patch is really a foundational change for a series of new 
features that depend on having a private ethernet interface into the adapter. 
It will enable OVS offload, traffic monitoring, remote management, and other 
features planned for LiquidIO down the road.

> How does this work?
> 

LiquidIO can be thought of as a second computer in a NIC form factor. It has a 
processor complex and various I/Os, and it can run a full-blown operating 
system such as Linux. In such a configuration it is natural to have an 
ethernet interface from the host operating system so one can communicate with 
the Linux instance running on the card. Regular TCP sockets, SSH, and the like 
all become possible. That's what this patch specifically enables. Again, we're 
no longer going to create this interface based on a module parameter; instead, 
the card will tell the host that it needs this interface when it starts.
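
To give a feel for what liquidio_mgmt.c sets up on the host side, here's a 
minimal sketch; everything beyond the standard kernel netdev API is 
illustrative, not the actual implementation:

#include <linux/etherdevice.h>
#include <linux/netdevice.h>

static netdev_tx_t lio_mgmt_xmit(struct sk_buff *skb,
                                 struct net_device *ndev)
{
        /* The real code wraps the frame in an Octeon command and posts
         * it to one of PF0's instruction rings; frames from the card
         * arrive on a PF0 output ring and are passed up via netif_rx().
         */
        dev_kfree_skb_any(skb);         /* placeholder */
        return NETDEV_TX_OK;
}

static const struct net_device_ops lio_mgmt_ndo = {
        .ndo_start_xmit = lio_mgmt_xmit,
};

int lio_mgmt_init(struct octeon_device *oct)
{
        /* the real code ties this netdev to oct's PF0 I/O rings */
        struct net_device *ndev = alloc_etherdev(0);

        if (!ndev)
                return -ENOMEM;
        ndev->netdev_ops = &lio_mgmt_ndo;
        eth_hw_addr_random(ndev);       /* or a MAC reported by firmware */
        if (register_netdev(ndev)) {
                free_netdev(ndev);
                return -ENODEV;
        }
        return 0;
}

Once the interface is registered, it can be assigned an address and used with 
ssh, scp, or anything else that runs over ethernet.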

Our initial versions of the LiquidIO host driver required LiquidIO firmware 
that provided basic NIC features, as seen in our previous submissions to the 
driver tree. Now we're moving to running Linux on the LiquidIO adapter itself. 

So, in the OVS case, we simply replace the ethernet bridge (VEB) in the "basic 
NIC" implementation with Linux running OVS. Since the management interface is 
available to the host, this OVS instance can communicate with an external 
controller residing on the host.

> What in userspace installs the OVS rules onto the card?
> 

There is no direct host involvement in the LiquidIO OVS configuration. 
ovs-vswitchd runs on the LiquidIO processor and communicates with an 
ovsdb-server running either on the card or on the host, over the management 
interface supplied in this patch.

> We do not support direct offload of OVS, as an OVS entity, instead we
> required all vendors to make their OVS offloads visible as packet
> scheduler classifiers and actions.
> 
> The same rules apply to liquidio.
> 

We are running OVS as the switching infrastructure on the card instead of a 
VEB. If someone wants to run OVS on the host, they can still do that. We do 
have plans to add the ndo interfaces for supporting classifiers and filters, 
so the host could have accelerated OVS when a LiquidIO card is installed. In 
the near term, however, we're focusing on enabling an ethernet interface to 
the Linux instance running on the card, so you can simply connect to 
ovs-vswitchd running there. This also allows a lot of flexibility in the types 
of applications that can run on the adapter.
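
For a rough idea of that direction (this is not code in this patch), the hook 
would resemble the TC_SETUP_CLSFLOWER handling other NIC drivers do through 
ndo_setup_tc; lio_flower_replace() is a hypothetical helper, and the signature 
shown is the kernel's later, simplified form of that hook:

static int lio_setup_tc(struct net_device *ndev, enum tc_setup_type type,
                        void *type_data)
{
        switch (type) {
        case TC_SETUP_CLSFLOWER:
                /* Translate the flower match plus actions into a flow
                 * rule the firmware understands.
                 */
                return lio_flower_replace(ndev, type_data);
        default:
                return -EOPNOTSUPP;
        }
}

/* hooked up via .ndo_setup_tc = lio_setup_tc in our net_device_ops */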

> If there is some special set of userspace interfaces that are used to
> communicate with these different firmwares in some liquidio specific way, I
> am going to be very upset.  That is definitely not allowed.
> 

Our solution does not require any special user-space or kernel-space 
components on the host. The OVS-based LiquidIO firmware can be configured just 
like OVS on the host, by the usual means such as a remote OpenFlow controller 
on the network.


> I'm not applying this patch until the above is resolved and at least more
> information is added to this commit log message to explain how this stuff
> works.

Thanks and regards,
Derek


Re: [PATCH net-next] liquidio: add support for OVS offload

2017-05-27 Thread David Miller
From: Felix Manlunas 
Date: Sat, 27 May 2017 08:56:33 -0700

> From: VSR Burru 
> 
> Add support for OVS offload.  By default PF driver runs in basic NIC mode
> as usual.  To run in OVS mode, use the insmod parameter "fw_type=ovs".
> 
> For OVS mode, create a management interface for communication with NIC
> firmware.  This communication channel uses PF0's I/O rings.
> 
> Bump up driver version to 1.6.0 to match newer firmware.
> 
> Signed-off-by: VSR Burru 
> Signed-off-by: Felix Manlunas 

How does this work?

What in userspace installs the OVS rules onto the card?

We do not support direct offload of OVS, as an OVS entity, instead we
required all vendors to make their OVS offloads visible as packet
scheduler classifiers and actions.

The same rules apply to liquidio.

If there is some special set of userspace interfaces that are used to
communicate with these different firmwares in some liquidio specific
way, I am going to be very upset.  That is definitely not allowed.

I'm not applying this patch until the above is resolved and at least
more information is added to this commit log message to explain how
this stuff works.


[PATCH net-next] liquidio: add support for OVS offload

2017-05-27 Thread Felix Manlunas
From: VSR Burru 

Add support for OVS offload.  By default PF driver runs in basic NIC mode
as usual.  To run in OVS mode, use the insmod parameter "fw_type=ovs".

For OVS mode, create a management interface for communication with NIC
firmware.  This communication channel uses PF0's I/O rings.

Bump up driver version to 1.6.0 to match newer firmware.

Signed-off-by: VSR Burru 
Signed-off-by: Felix Manlunas 
---
 drivers/net/ethernet/cavium/liquidio/Makefile  |   1 +
 drivers/net/ethernet/cavium/liquidio/lio_main.c|  27 +-
 .../net/ethernet/cavium/liquidio/liquidio_common.h |  23 +-
 .../net/ethernet/cavium/liquidio/liquidio_image.h  |   1 +
 .../net/ethernet/cavium/liquidio/liquidio_mgmt.c   | 439 +
 .../net/ethernet/cavium/liquidio/octeon_console.c  |  27 +-
 drivers/net/ethernet/cavium/liquidio/octeon_main.h |   9 +
 7 files changed, 516 insertions(+), 11 deletions(-)

diff --git a/drivers/net/ethernet/cavium/liquidio/Makefile b/drivers/net/ethernet/cavium/liquidio/Makefile
index c4d411d..2064157 100644
--- a/drivers/net/ethernet/cavium/liquidio/Makefile
+++ b/drivers/net/ethernet/cavium/liquidio/Makefile
@@ -15,6 +15,7 @@ liquidio-$(CONFIG_LIQUIDIO) += lio_ethtool.o \
octeon_mailbox.o   \
octeon_mem_ops.o   \
octeon_droq.o  \
+   liquidio_mgmt.o  \
octeon_nic.o
 
 liquidio-objs := lio_main.o octeon_console.o $(liquidio-y)
diff --git a/drivers/net/ethernet/cavium/liquidio/lio_main.c b/drivers/net/ethernet/cavium/liquidio/lio_main.c
index ba01242..b22eb74 100644
--- a/drivers/net/ethernet/cavium/liquidio/lio_main.c
+++ b/drivers/net/ethernet/cavium/liquidio/lio_main.c
@@ -43,6 +43,8 @@ MODULE_FIRMWARE(LIO_FW_DIR LIO_FW_BASE_NAME LIO_210SV_NAME LIO_FW_NAME_SUFFIX);
 MODULE_FIRMWARE(LIO_FW_DIR LIO_FW_BASE_NAME LIO_210NV_NAME LIO_FW_NAME_SUFFIX);
 MODULE_FIRMWARE(LIO_FW_DIR LIO_FW_BASE_NAME LIO_410NV_NAME LIO_FW_NAME_SUFFIX);
 MODULE_FIRMWARE(LIO_FW_DIR LIO_FW_BASE_NAME LIO_23XX_NAME LIO_FW_NAME_SUFFIX);
+MODULE_FIRMWARE(LIO_FW_DIR LIO_FW_BASE_NAME LIO_23XX_NAME "_"
+LIO_FW_NAME_TYPE_OVS LIO_FW_NAME_SUFFIX);
 
 static int ddr_timeout = 1;
 module_param(ddr_timeout, int, 0644);
@@ -57,7 +59,7 @@ MODULE_PARM_DESC(debug, "NETIF_MSG debug bits");
 
 static char fw_type[LIO_MAX_FW_TYPE_LEN];
module_param_string(fw_type, fw_type, sizeof(fw_type), 0000);
-MODULE_PARM_DESC(fw_type, "Type of firmware to be loaded. Default \"nic\"");
+MODULE_PARM_DESC(fw_type, "Type of firmware to be loaded (nic,ovs,none). Default \"nic\".  Use \"none\" to load firmware from flash on LiquidIO adapter.");
 
 static int ptp_enable = 1;
 
@@ -1414,6 +1416,12 @@ static bool fw_type_is_none(void)
   sizeof(LIO_FW_NAME_TYPE_NONE)) == 0;
 }
 
+static bool is_fw_type_ovs(void)
+{
+   return strncmp(fw_type, LIO_FW_NAME_TYPE_OVS,
+  sizeof(LIO_FW_NAME_TYPE_OVS)) == 0;
+}
+
 /**
 * \brief Destroy resources associated with octeon device
  * @param pdev PCI device structure
@@ -1776,6 +1784,9 @@ static void liquidio_remove(struct pci_dev *pdev)
 
dev_dbg(&oct_dev->pci_dev->dev, "Stopping device\n");
 
+   if (is_fw_type_ovs())
+   lio_mgmt_exit();
+
if (oct_dev->watchdog_task)
kthread_stop(oct_dev->watchdog_task);
 
@@ -3933,6 +3944,8 @@ static int setup_nic_devices(struct octeon_device *octeon_dev)
u32 resp_size, ctx_size, data_size;
u32 ifidx_or_pfnum;
struct lio_version *vdata;
+   union oct_nic_vf_info vf_info;
+
 
/* This is to handle link status changes */
octeon_register_dispatch_fn(octeon_dev, OPCODE_NIC,
@@ -4001,9 +4014,16 @@ static int setup_nic_devices(struct octeon_device *octeon_dev)
 
sc->iq_no = 0;
 
+   /* Populate VF info for OVS firmware */
+   vf_info.u64 = 0;
+
+   vf_info.s.bus_num = octeon_dev->pci_dev->bus->number;
+   vf_info.s.dev_fn = octeon_dev->pci_dev->devfn;
+   vf_info.s.max_vfs = octeon_dev->sriov_info.max_vfs;
+
octeon_prepare_soft_command(octeon_dev, sc, OPCODE_NIC,
OPCODE_NIC_IF_CFG, 0,
-   if_cfg.u64, 0);
+   if_cfg.u64, vf_info.u64);
 
sc->callback = if_cfg_callback;
sc->callback_arg = sc;
@@ -4382,6 +4402,9 @@ static int liquidio_init_nic_module(struct octeon_device *oct)
goto octnet_init_failure;
}
 
+   if (is_fw_type_ovs())
+   lio_mgmt_init(oct);
+
liquidio_ptp_init(oct);
 
dev_dbg(&oct->pci_dev->dev, "Network interfaces ready\n");
diff --git a/drivers/net/ethernet/cavium/liquidio/liquidio_common.h