Re: [PATCH v2 0/3] net: Add Keystone NetCP ethernet driver support

2014-09-11 Thread Santosh Shilimkar
Dave,

On Monday 08 September 2014 10:41 AM, Santosh Shilimkar wrote:
> Hi Dave,
> 
> On 8/22/14 3:45 PM, Santosh Shilimkar wrote:
>> Hi David,
>>
>> On Thursday 21 August 2014 07:36 PM, David Miller wrote:
>>> From: Santosh Shilimkar 
>>> Date: Fri, 15 Aug 2014 11:12:39 -0400
>>>
>>>> Update version after incorporating David Miller's comment from earlier
>>>> posting [1]. I would like to get these merged for the upcoming 3.18 merge
>>>> window if there are no concerns on this version.
>>>>
>>>> The network coprocessor (NetCP) is a hardware accelerator that processes
>>>> Ethernet packets. NetCP has a gigabit Ethernet (GbE) subsystem with an
>>>> Ethernet switch sub-module to send and receive packets. NetCP also includes
>>>> a packet accelerator (PA) module to perform packet classification operations
>>>> such as header matching, and packet modification operations such as checksum
>>>> generation. NetCP can also optionally include a Security Accelerator (SA)
>>>> capable of performing IPsec operations on ingress/egress packets.
>>>>
>>>> Keystone SoCs also have a 10 Gigabit Ethernet Subsystem (XGbE) which
>>>> includes a 3-port Ethernet switch sub-module capable of 10 Gb/s and
>>>> 1 Gb/s rates per Ethernet port.
>>>>
>>>> The NetCP driver has a plug-in module architecture where each of the NetCP
>>>> sub-modules exists as a loadable kernel module that plugs into the netcp
>>>> core. These sub-modules are represented as "netcp-devices" in the dts
>>>> bindings. The Ethernet switch sub-module is mandatory for the Ethernet
>>>> interface to be operational; any other sub-module, like the PA, is optional.
>>>>
>>>> Both the GbE and XGbE network processors are supported by a common driver,
>>>> which is also designed to handle future variants of NetCP.
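
For illustration, a device-tree fragment for such a binding might look roughly like the sketch below. The node and property names here are hypothetical, chosen only to show the "netcp-devices" sub-module structure the cover letter describes, and are not taken from the actual binding document.

```dts
/* Hypothetical sketch only -- node names, compatibles and register
 * offsets are illustrative, not the actual merged binding. */
netcp: netcp@2000000 {
	compatible = "ti,netcp-1.0";
	reg = <0x2000000 0x100000>;

	netcp-devices {
		/* Mandatory Ethernet switch sub-module */
		gbe@90000 {
			label = "netcp-gbe";
			reg = <0x90000 0x300>;
		};

		/* Optional packet accelerator sub-module */
		pa@0 {
			label = "netcp-pa";
			reg = <0x0 0x10000>;
		};
	};
};
```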
>>>
>>> I don't want to see an offload driver that doesn't plug into the existing
>>> generic frameworks for configuration et al.
>>>
>>> If no existing facility supports what you need, you must work
>>> with the upstream maintainers to design and create one.
>>>
>>> It is absolutely not reasonable for every "switch on a chip" driver to
>>> export its own configuration knobs; we need a standard interface that all
>>> such drivers will plug into and provide.
>>>
As discussed on the other thread, we are dropping the custom exports. I
will spin an updated version with the exports removed.

For future offload support additions, we will plug into the
generic frameworks as and when they become available.

Regards,
Santosh
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/


Re: [PATCH v2 0/3] net: Add Keystone NetCP ethernet driver support

2014-09-11 Thread Santosh Shilimkar
On Wednesday 10 September 2014 07:33 AM, Jamal Hadi Salim wrote:
> On 09/09/14 11:19, Santosh Shilimkar wrote:
> 
>> All the documentation is open, including the packet accelerator offload,
>> on ti.com.
> 
> Very nice.
> Would you do me a kindness and point to the switch interface
> documentation (and other ones on that soc)?
>
You can find them here: [1], [2], [3].

>> We got such requests from customers but couldn't
>> support it for Linux.
> 
> It has been difficult because every chip vendor is trying
> to do their own thing. Some have huge (fugly) SDKs in user space
> which make it worse. That's the struggle we are trying to
> deal with. Of course none of those vendors want to open
> up their specs. You present a nice opportunity to not follow
> that path.
> 
>> We are also looking for such
>> support, and any direction is welcome. Your slide
>> deck seems to capture the key topics, like L2/IPsec
>> offload, which we are also interested to hear about.
>>
> 
> The slides list the most popular offloads, but not necessarily
> all known offloads.
> 
>> Just to be clear, your point was about L2 switch offload,
>> which the driver doesn't support at the moment; it might confuse
>> others. The driver doesn't implement anything non-standard.
>>
> 
> If I understood you correctly:
> your initial patches don't intend to expose any offloads - you are just
> abstracting this as a NIC. I think that is a legit reason.
Yes. The NetCP hardware is abstracted as a regular NIC.

> However, the problem is you are also exposing the packet processors
> and switch offloading in a proprietary way.
> For a sample of how basic L2 functions like FDB tables are controlled
> within a NIC - take a look at the Intel NICs.
> Either that or you hide all the offload interfaces and over time add
> them (starting with L2 - NICs with L2 are common).
>
Switch offload isn't supported, but we do agree that for the packet
accelerator we are using custom hooks, for lack of any other
mechanism.

We will definitely use the new ndo-based FDB offload scheme when
we get to it. We understand that the way forward is to have
ndo-operation-based offloads, and that is probably the right way.
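
The ndo-based FDB scheme mentioned here refers to the ndo_fdb_add/ndo_fdb_del callbacks in struct net_device_ops. A rough sketch of how a driver would wire this up is below; netcp_ndo_fdb_add is a hypothetical name used only for illustration, and the exact callback signatures vary across kernel versions.

```c
/* Sketch of the ndo-based FDB offload hookup (roughly the 3.17-era
 * signature; later kernels add a vid argument).  netcp_ndo_fdb_add
 * is hypothetical and shown only to indicate where the offload would
 * plug in. */
static int netcp_ndo_fdb_add(struct ndmsg *ndm, struct nlattr *tb[],
			     struct net_device *dev,
			     const unsigned char *addr, u16 flags)
{
	/* Program 'addr' into the hardware FDB here. */
	return 0;
}

static const struct net_device_ops netcp_netdev_ops = {
	/* ... other ndo callbacks (open, stop, start_xmit, ...) ... */
	.ndo_fdb_add	= netcp_ndo_fdb_add,	/* FDB offload entry point */
};
```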

We will update the patch and drop all the custom exports. In any case,
the current driver doesn't support any offloads now; we can add
support as the frameworks evolve.

Thanks a lot for the informative discussion and those links.

Regards,
Santosh
[1] http://www.ti.com/lit/pdf/sprugv9
[2] http://www.ti.com/lit/pdf/spruhj5
[3] http://www.ti.com/lit/pdf/sprugs4




Re: [PATCH v2 0/3] net: Add Keystone NetCP ethernet driver support

2014-09-10 Thread Jamal Hadi Salim

On 09/09/14 11:19, Santosh Shilimkar wrote:

> All the documentation is open, including the packet accelerator offload,
> on ti.com.

Very nice.
Would you do me a kindness and point to the switch interface
documentation (and the other ones on that SoC)?

> We got such requests from customers but couldn't
> support it for Linux.

It has been difficult because every chip vendor is trying
to do their own thing. Some have huge (fugly) SDKs in user space
which make it worse. That's the struggle we are trying to
deal with. Of course none of those vendors want to open
up their specs. You present a nice opportunity to not follow
that path.

> We are also looking for such
> support, and any direction is welcome. Your slide
> deck seems to capture the key topics, like L2/IPsec
> offload, which we are also interested to hear about.

The slides list the most popular offloads, but not necessarily
all known offloads.

> Just to be clear, your point was about L2 switch offload,
> which the driver doesn't support at the moment; it might confuse
> others. The driver doesn't implement anything non-standard.

If I understood you correctly:
your initial patches don't intend to expose any offloads - you are just
abstracting this as a NIC. I think that is a legit reason.
However, the problem is you are also exposing the packet processors
and switch offloading in a proprietary way.
For a sample of how basic L2 functions like FDB tables are controlled
within a NIC - take a look at the Intel NICs.
Either that, or you hide all the offload interfaces and over time add
them (starting with L2 - NICs with L2 are common).

cheers,
jamal



Re: [PATCH v2 0/3] net: Add Keystone NetCP ethernet driver support

2014-09-09 Thread Santosh Shilimkar
On Tuesday 09 September 2014 07:44 AM, Jamal Hadi Salim wrote:
> On 09/08/14 10:41, Santosh Shilimkar wrote:
> 
>>> The NetCP plug-in module infrastructure uses all the standard kernel
>>> infrastructure and is very tiny.
> 
> So I found this manual here:
> http://www.silica.com/fileadmin/02_Products/Productdetails/Texas_Instruments/SILICA_TI_66AK2E05-ds.pdf
> 
> Glad there is an open document!
> There are a couple of ethernet switch chips I can spot there.
> 
All the documentation is open, including the packet accelerator offload,
on ti.com.

> Can I control those with the "bridge" or, say, "brctl" utilities?
> 
> I can see the bridge ports are exposed, and I should be able to
> control them via ifconfig or ip link. That's what "standard
> kernel infrastructure" means. Magic hidden in a driver is
> not.
> 
There is nothing magic hidden in the driver. The bridge ports
are exposed as standard network interfaces. Currently the
driver doesn't support bridge offload functionality, and
bridging is disabled in the switch by default.
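
Concretely, since the ports appear as standard network interfaces, the usual iproute2 tooling drives them like any other NIC. The interface names below (eth0, eth1, br0) are hypothetical, and these commands require root:

```sh
# Bring a NetCP port up and inspect it like any other NIC:
ip link set eth0 up
ip -s link show eth0

# Bridging, if desired, happens in the kernel's software bridge,
# not in the (disabled) hardware switch:
ip link add br0 type bridge
ip link set eth0 master br0
ip link set eth1 master br0
bridge fdb show br br0
```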

> Take a look at recent netconf discussion (as well as earlier
> referenced discussions):
> http://vger.kernel.org/netconf-nf-offload.pdf
> 
> Maybe we can help by providing you some direction?
> The problem is it doesn't seem that the offload specs for
> those other pieces are open? E.g. how do I add an entry
> to the L2 switch?
> 
We got such requests from customers but couldn't
support it for Linux. We are also looking for such
support, and any direction is welcome. Your slide
deck seems to capture the key topics, like L2/IPsec
offload, which we are also interested to hear about.

Just to be clear, your point was about L2 switch offload,
which the driver doesn't support at the moment; it might confuse
others. The driver doesn't implement anything non-standard.

Regards,
Santosh




Re: [PATCH v2 0/3] net: Add Keystone NetCP ethernet driver support

2014-09-09 Thread Jamal Hadi Salim

On 09/08/14 10:41, Santosh Shilimkar wrote:


> The NetCP plug-in module infrastructure uses all the standard kernel
> infrastructure and is very tiny.


So I found this manual here:
http://www.silica.com/fileadmin/02_Products/Productdetails/Texas_Instruments/SILICA_TI_66AK2E05-ds.pdf

Glad there is an open document!
There are a couple of ethernet switch chips I can spot there.

Can I control those with the "bridge" or, say, "brctl" utilities?

I can see the bridge ports are exposed, and I should be able to
control them via ifconfig or ip link. That's what "standard
kernel infrastructure" means. Magic hidden in a driver is
not.

Take a look at recent netconf discussion (as well as earlier
referenced discussions):
http://vger.kernel.org/netconf-nf-offload.pdf

Maybe we can help by providing you some direction?
The problem is it doesn't seem that the offload specs for
those other pieces are open? E.g. how do I add an entry
to the L2 switch?

cheers,
jamal




Re: [PATCH v2 0/3] net: Add Keystone NetCP ethernet driver support

2014-09-08 Thread Santosh Shilimkar

Hi Dave,

On 8/22/14 3:45 PM, Santosh Shilimkar wrote:

> Hi David,
>
> On Thursday 21 August 2014 07:36 PM, David Miller wrote:
>
>> From: Santosh Shilimkar 
>> Date: Fri, 15 Aug 2014 11:12:39 -0400
>>
>>> Update version after incorporating David Miller's comment from earlier
>>> posting [1]. I would like to get these merged for the upcoming 3.18 merge
>>> window if there are no concerns on this version.
>>>
>>> The network coprocessor (NetCP) is a hardware accelerator that processes
>>> Ethernet packets. NetCP has a gigabit Ethernet (GbE) subsystem with an
>>> Ethernet switch sub-module to send and receive packets. NetCP also includes
>>> a packet accelerator (PA) module to perform packet classification operations
>>> such as header matching, and packet modification operations such as checksum
>>> generation. NetCP can also optionally include a Security Accelerator (SA)
>>> capable of performing IPsec operations on ingress/egress packets.
>>>
>>> Keystone SoCs also have a 10 Gigabit Ethernet Subsystem (XGbE) which
>>> includes a 3-port Ethernet switch sub-module capable of 10 Gb/s and
>>> 1 Gb/s rates per Ethernet port.
>>>
>>> The NetCP driver has a plug-in module architecture where each of the NetCP
>>> sub-modules exists as a loadable kernel module that plugs into the netcp
>>> core. These sub-modules are represented as "netcp-devices" in the dts
>>> bindings. The Ethernet switch sub-module is mandatory for the Ethernet
>>> interface to be operational; any other sub-module, like the PA, is optional.
>>>
>>> Both the GbE and XGbE network processors are supported by a common driver,
>>> which is also designed to handle future variants of NetCP.
>>
>> I don't want to see an offload driver that doesn't plug into the existing
>> generic frameworks for configuration et al.
>>
>> If no existing facility supports what you need, you must work
>> with the upstream maintainers to design and create one.
>>
>> It is absolutely not reasonable for every "switch on a chip" driver to
>> export its own configuration knobs; we need a standard interface that all
>> such drivers will plug into and provide.
>
> The NetCP plug-in module infrastructure uses all the standard kernel
> infrastructure and is very tiny. To best represent the network processor
> and its sub-module hardware, which have inter-dependency and ordering
> needs, we needed such infrastructure. This lets us handle all the
> hardware needs without any code duplication per module.
>
> To elaborate more, there are 4 variants of network switch modules, and
> then a few accelerator modules like the packet accelerator, QoS and
> security accelerator. There can be multiple instances of switches on the
> same SoC, for example the 1 GbE and 10 GbE switches. The additional
> accelerator modules are interconnected with the switch, the streaming
> fabric and the packet DMA. Packet routing changes based on which offload
> modules are present, and hence the tx/rx hooks need to be called in a
> particular order with special handling. This scheme is very hardware
> specific and doesn't have ways to isolate the modules from each other.
>
> On the other hand, we definitely wanted to have minimal code
> instead of duplicating ndo operations and core packet processing logic
> in multiple drivers or layers. The module approach helps
> to isolate the code based on customer choice: a customer can choose,
> say, not to build the 10 GbE support, or to drop the QoS or security
> accelerators. That way we keep the packet processing hot path to just
> what we need, without any overhead.
>
> As you can see, the tiny module handling was added mostly to represent
> the hardware, keep the modularity and avoid code duplication. The
> infrastructure is very minimal and NetCP specific. With this small
> infrastructure we are able to re-use code for NetCP 1.0, NetCP 1.5,
> 10 GbE and upcoming NetCP variants from just *one* driver.
>
> Hope this gives you a better idea of the rationale behind the design.

Did you happen to see the reply?
I am hoping to get this driver in for the upcoming merge window.

Regards,
Santosh





Re: [PATCH v2 0/3] net: Add Keystone NetCP ethernet driver support

2014-08-22 Thread Santosh Shilimkar
Hi David,

On Thursday 21 August 2014 07:36 PM, David Miller wrote:
> From: Santosh Shilimkar 
> Date: Fri, 15 Aug 2014 11:12:39 -0400
> 
>> Update version after incorporating David Miller's comment from earlier
>> posting [1]. I would like to get these merged for the upcoming 3.18 merge
>> window if there are no concerns on this version.
>>
>> The network coprocessor (NetCP) is a hardware accelerator that processes
>> Ethernet packets. NetCP has a gigabit Ethernet (GbE) subsystem with an
>> Ethernet switch sub-module to send and receive packets. NetCP also includes
>> a packet accelerator (PA) module to perform packet classification operations
>> such as header matching, and packet modification operations such as checksum
>> generation. NetCP can also optionally include a Security Accelerator (SA)
>> capable of performing IPsec operations on ingress/egress packets.
>>
>> Keystone SoCs also have a 10 Gigabit Ethernet Subsystem (XGbE) which
>> includes a 3-port Ethernet switch sub-module capable of 10 Gb/s and
>> 1 Gb/s rates per Ethernet port.
>>
>> The NetCP driver has a plug-in module architecture where each of the NetCP
>> sub-modules exists as a loadable kernel module that plugs into the netcp
>> core. These sub-modules are represented as "netcp-devices" in the dts
>> bindings. The Ethernet switch sub-module is mandatory for the Ethernet
>> interface to be operational; any other sub-module, like the PA, is optional.
>>
>> Both the GbE and XGbE network processors are supported by a common driver,
>> which is also designed to handle future variants of NetCP.
> 
> I don't want to see an offload driver that doesn't plug into the existing
> generic frameworks for configuration et al.
> 
> If no existing facility exists to support what you need, you must work
> with the upstream maintainers to design and create one.
> 
> It is absolutely not reasonable for every "switch on a chip" driver to
> export its own configuration knob; we need a standard interface all
> such drivers will plug into and provide.
> 
The NetCP plug-in module infrastructure uses the standard kernel
infrastructure and is very small. We needed it to best represent the
network processor and its sub-module hardware, which have inter-dependency
and ordering requirements. This lets us handle all the hardware needs
without duplicating code in every module.

To elaborate, there are four variants of the network switch module, plus a
few accelerator modules such as the packet accelerator, QoS, and security
accelerator. There can be multiple switch instances on the same SoC, for
example the 1 GbE and 10 GbE switches. The accelerator modules are
interconnected with the switch through the streaming fabric and packet DMA.
Packet routing changes depending on which offload modules are present, so
tx/rx hooks must be called in a particular order with special handling.
This scheme is very hardware specific and there is no way to fully isolate
the modules from each other.

On the other hand, we definitely wanted minimal code instead of duplicating
ndo operations and the core packet processing logic across multiple drivers
or layers. The module approach isolates the code based on customer choice:
a customer can choose not to build, say, the 10 GbE support, or the QoS or
security accelerators. That way the packet processing hot path contains
only what is needed, without any overhead.

As you can see, the tiny module handling was added mainly to represent
the hardware, keep the modularity, and avoid code duplication. The
infrastructure is very minimal and NetCP specific. With it we are able
to re-use code for NetCP 1.0, NetCP 1.5, 10 GbE, and upcoming NetCP
variants from just *one* driver.

Hope this gives you a better idea and rationale behind the design.

Regards,
Santosh



Re: [PATCH v2 0/3] net: Add Keystone NetCP ethernet driver support

2014-08-21 Thread David Miller
From: Santosh Shilimkar 
Date: Fri, 15 Aug 2014 11:12:39 -0400

> Update version after incorporating David Miller's comment from earlier
> posting [1]. I would like to get these merged for upcoming 3.18 merge
> window if there are no concerns on this version.
> 
> The network coprocessor (NetCP) is a hardware accelerator that processes
> Ethernet packets. NetCP has a gigabit Ethernet (GbE) subsystem with an
> Ethernet switch sub-module to send and receive packets. NetCP also includes
> a packet accelerator (PA) module to perform packet classification operations
> such as header matching, and packet modification operations such as checksum
> generation. NetCP can also optionally include a Security Accelerator (SA)
> capable of performing IPSec operations on ingress/egress packets.
> 
> Keystone SoCs also have a 10 Gigabit Ethernet Subsystem (XGbE) which
> includes a 3-port Ethernet switch sub-module capable of 10Gb/s and
> 1Gb/s rates per Ethernet port.
> 
> The NetCP driver has a plug-in module architecture where each of the NetCP
> sub-modules exists as a loadable kernel module that plugs into the netcp
> core. These sub-modules are represented as "netcp-devices" in the dts
> bindings. It is mandatory to have the ethernet switch sub-module for
> the ethernet interface to be operational. Any other sub-module like the
> PA is optional.
> 
> Both GbE and XGbE network processors are supported using a common driver,
> which is also designed to handle future variants of NetCP.

I don't want to see an offload driver that doesn't plug into the existing
generic frameworks for configuration et al.

If no existing facility exists to support what you need, you must work
with the upstream maintainers to design and create one.

It is absolutely not reasonable for every "switch on a chip" driver to
export its own configuration knob; we need a standard interface all
such drivers will plug into and provide.



[PATCH v2 0/3] net: Add Keystone NetCP ethernet driver support

2014-08-15 Thread Santosh Shilimkar
Update version after incorporating David Miller's comment from the earlier
posting [1]. I would like to get these merged for the upcoming 3.18 merge
window if there are no concerns on this version.

The network coprocessor (NetCP) is a hardware accelerator that processes
Ethernet packets. NetCP has a gigabit Ethernet (GbE) subsystem with an
Ethernet switch sub-module to send and receive packets. NetCP also includes
a packet accelerator (PA) module to perform packet classification operations
such as header matching, and packet modification operations such as checksum
generation. NetCP can also optionally include a Security Accelerator (SA)
capable of performing IPSec operations on ingress/egress packets.

Keystone SoCs also have a 10 Gigabit Ethernet Subsystem (XGbE) which
includes a 3-port Ethernet switch sub-module capable of 10Gb/s and
1Gb/s rates per Ethernet port.

The NetCP driver has a plug-in module architecture where each of the NetCP
sub-modules exists as a loadable kernel module that plugs into the netcp
core. These sub-modules are represented as "netcp-devices" in the dts
bindings. It is mandatory to have the ethernet switch sub-module for
the ethernet interface to be operational. Any other sub-module like the
PA is optional.

Both GbE and XGbE network processors are supported using a common driver,
which is also designed to handle future variants of NetCP.

Cc: David Miller 
Cc: Rob Herring 
Cc: Grant Likely 
Cc: Sandeep Nair 

Sandeep Nair (3):
  Documentation: dt: net: Add binding doc for Keystone NetCP ethernet
driver
  net: Add Keystone NetCP ethernet driver
  MAINTAINER: net: Add TI NETCP Ethernet driver entry

 .../devicetree/bindings/net/keystone-netcp.txt |  197 ++
 MAINTAINERS                                    |    6 +
 drivers/net/ethernet/ti/Kconfig                |   15 +-
 drivers/net/ethernet/ti/Makefile               |    4 +
 drivers/net/ethernet/ti/netcp.h                |  227 ++
 drivers/net/ethernet/ti/netcp_core.c           | 2276 ++++++++++++++++++++
 drivers/net/ethernet/ti/netcp_ethss.c          | 2178 +++++++++++++++++++
 drivers/net/ethernet/ti/netcp_sgmii.c          |  130 ++
 drivers/net/ethernet/ti/netcp_xgbepcsr.c       |  502 +++++
 9 files changed, 5532 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/net/keystone-netcp.txt
 create mode 100644 drivers/net/ethernet/ti/netcp.h
 create mode 100644 drivers/net/ethernet/ti/netcp_core.c
 create mode 100644 drivers/net/ethernet/ti/netcp_ethss.c
 create mode 100644 drivers/net/ethernet/ti/netcp_sgmii.c
 create mode 100644 drivers/net/ethernet/ti/netcp_xgbepcsr.c

regards,
Santosh

[1] https://lkml.org/lkml/2014/4/22/805

-- 
1.7.9.5



[PATCH v2 0/3] net: Add Keystone NetCP ethernet driver support

2014-08-15 Thread Santosh Shilimkar
Update version after incorporating David Miller's comment from earlier
posting [1]. I would like to get these merged for upcoming 3.18 merge
window if there are no concerns on this version.

The network coprocessor (NetCP) is a hardware accelerator that processes
Ethernet packets. NetCP has a gigabit Ethernet (GbE) subsystem with a ethernet
switch sub-module to send and receive packets. NetCP also includes a packet
accelerator (PA) module to perform packet classification operations such as
header matching, and packet modification operations such as checksum
generation. NetCP can also optionally include a Security Accelerator(SA)
capable of performing IPSec operations on ingress/egress packets.

Keystone SoC's also have a 10 Gigabit Ethernet Subsystem (XGbE) which
includes a 3-port Ethernet switch sub-module capable of 10Gb/s and
1Gb/s rates per Ethernet port.

NetCP driver has a plug-in module architecture where each of the NetCP
sub-modules exist as a loadable kernel module which plug in to the netcp
core. These sub-modules are represented as netcp-devices in the dts
bindings. It is mandatory to have the ethernet switch sub-module for
the ethernet interface to be operational. Any other sub-module like the
PA is optional.

Both GBE and XGBE network processors supported using common driver. It
is also designed to handle future variants of NetCP.

Cc: David Miller da...@davemloft.net
Cc: Rob Herring robh...@kernel.org
Cc: Grant Likely grant.lik...@linaro.org
Cc: Sandeep Nair sandee...@ti.com

Sandeep Nair (3):
  Documentation: dt: net: Add binding doc for Keystone NetCP ethernet
driver
  net: Add Keystone NetCP ethernet driver
  MAINTAINER: net: Add TI NETCP Ethernet driver entry

 .../devicetree/bindings/net/keystone-netcp.txt |  197 ++
 MAINTAINERS|6 +
 drivers/net/ethernet/ti/Kconfig|   15 +-
 drivers/net/ethernet/ti/Makefile   |4 +
 drivers/net/ethernet/ti/netcp.h|  227 ++
 drivers/net/ethernet/ti/netcp_core.c   | 2276 
 drivers/net/ethernet/ti/netcp_ethss.c  | 2178 +++
 drivers/net/ethernet/ti/netcp_sgmii.c  |  130 ++
 drivers/net/ethernet/ti/netcp_xgbepcsr.c   |  502 +
 9 files changed, 5532 insertions(+), 3 deletions(-)
 create mode 100644 Documentation/devicetree/bindings/net/keystone-netcp.txt
 create mode 100644 drivers/net/ethernet/ti/netcp.h
 create mode 100644 drivers/net/ethernet/ti/netcp_core.c
 create mode 100644 drivers/net/ethernet/ti/netcp_ethss.c
 create mode 100644 drivers/net/ethernet/ti/netcp_sgmii.c
 create mode 100644 drivers/net/ethernet/ti/netcp_xgbepcsr.c

regards,
Santosh

[1] https://lkml.org/lkml/2014/4/22/805

-- 
1.7.9.5

--
To unsubscribe from this list: send the line unsubscribe linux-kernel in
the body of a message to majord...@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/