[PATCH net-next 1/6] net: ethernet: ti: cpsw: use cpdma channels in backward order for txq

2018-06-11 Thread Ivan Khoronzhuk
The cpdma channel priority goes from the highest channel number to the
lowest. The driver has a limited number of descriptors that are shared
between the cpdma channels. The number of queues can be tuned with
ethtool, which avoids spending descriptors on unneeded cpdma channels.
For AVB, two rate-limited tx queues are usually enough. Rate limiting
can be applied only to the high-priority queues, so in order to use just
two of them all eight currently have to be created, which is wasteful.

So, to allow using only the needed number of rate-limited tx queues,
save resources, and still be able to set a rate limit on them, assign
tx cpdma channels to queues in backward order.
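
For illustration only (not part of the patch, the helper name is made up),
a minimal sketch of the resulting queue-to-channel mapping on the
multi-queue path, assuming 8 cpdma tx channels:

/* Sketch: tx queue 0 gets the highest-priority channel 7, queue 1 gets
 * channel 6, and so on; rx queues keep the direct mapping.
 */
static int cpsw_queue_to_cpdma_chan(int queue, int rx)
{
	return rx ? queue : 7 - queue;	/* mirrors "vch = rx ? *ch : 7 - *ch" */
}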

Signed-off-by: Ivan Khoronzhuk 
---
 drivers/net/ethernet/ti/cpsw.c | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 534596ce00d3..406537d74ec1 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -967,8 +967,8 @@ static int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int 
budget)
 
/* process every unprocessed channel */
ch_map = cpdma_ctrl_txchs_state(cpsw->dma);
-   for (ch = 0, num_tx = 0; ch_map; ch_map >>= 1, ch++) {
-   if (!(ch_map & 0x01))
+   for (ch = 0, num_tx = 0; ch_map & 0xff; ch_map <<= 1, ch++) {
+   if (!(ch_map & 0x80))
continue;
 
txv = &cpsw->txv[ch];
@@ -2431,7 +2431,7 @@ static int cpsw_update_channels_res(struct cpsw_priv 
*priv, int ch_num, int rx)
void (*handler)(void *, int, int);
struct netdev_queue *queue;
struct cpsw_vector *vec;
-   int ret, *ch;
+   int ret, *ch, vch;
 
if (rx) {
ch = &cpsw->rx_ch_num;
@@ -2444,7 +2444,8 @@ static int cpsw_update_channels_res(struct cpsw_priv 
*priv, int ch_num, int rx)
}
 
while (*ch < ch_num) {
-   vec[*ch].ch = cpdma_chan_create(cpsw->dma, *ch, handler, rx);
+   vch = rx ? *ch : 7 - *ch;
+   vec[*ch].ch = cpdma_chan_create(cpsw->dma, vch, handler, rx);
queue = netdev_get_tx_queue(priv->ndev, *ch);
queue->tx_maxrate = 0;
 
@@ -2980,7 +2981,7 @@ static int cpsw_probe(struct platform_device *pdev)
u32 slave_offset, sliver_offset, slave_size;
const struct soc_device_attribute *soc;
struct cpsw_common  *cpsw;
-   int ret = 0, i;
+   int ret = 0, i, ch;
int irq;
 
cpsw = devm_kzalloc(&pdev->dev, sizeof(struct cpsw_common), GFP_KERNEL);
@@ -3155,7 +3156,8 @@ static int cpsw_probe(struct platform_device *pdev)
if (soc)
cpsw->quirk_irq = 1;
 
-   cpsw->txv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_tx_handler, 0);
+   ch = cpsw->quirk_irq ? 0 : 7;
+   cpsw->txv[0].ch = cpdma_chan_create(cpsw->dma, ch, cpsw_tx_handler, 0);
if (IS_ERR(cpsw->txv[0].ch)) {
dev_err(priv->dev, "error initializing tx dma channel\n");
ret = PTR_ERR(cpsw->txv[0].ch);
-- 
2.17.1



[PATCH net-next 4/6] net: ethernet: ti: cpsw: add CBS Qdisc offload

2018-06-11 Thread Ivan Khoronzhuk
The cpsw has up to 4 FIFOs per port, and the upper 3 FIFOs can each feed
a rate-limited queue with shaping. In order to set and enable shaping for
those 3 FIFO queues, a network device with a CBS qdisc attached is
needed. The CBS configuration is added for dual-emac/single port mode
only, but potentially it can also be used in switch mode, based on
switchdev for instance.

Although the FIFO shapers can work without cpdma-level shapers, the basic
usage must be in combination with the cpdma-level shapers described in
the TRM, which are set as maximum rates for the interface queues via
sysfs.

One of the possible configurations with txq shapers and CBS shapers:

  Configured with echo RATE >
  /sys/class/net/eth0/queues/tx-0/tx_maxrate
 /---
/
   /cpdma level shapers
++ ++ ++ ++ ++ ++ ++ ++
| c7 | | c6 | | c5 | | c4 | | c3 | | c2 | | c1 | | c0 |
\/ \/ \/ \/ \/ \/ \/ \/
 \  /   \  /   \  /   \  /   \  /   \  /   \  /   \  /
  \/ \/ \/ \/ \/ \/ \/ \/
+-|--|--|--|-+
|++  |  |  +---+ |
||  ++  |  | |
|v  v   v  v |
| ++ ++ ++ ++ pp++ ++ ++ ++  |
| || || || || oo|| || || ||  |
| | f3 | | f2 | | f1 | | f0 | r  CPSW  r| f3 | | f2 | | f1 | | f0 |  |
| || || || || tt|| || || ||  |
| \/ \/ \/ \/ 01\/ \/ \/ \/  |
|  \  X   \  /   \  /   \  / \  /   \  /   \  /   \  /   |
|   \/ \   \/ \/ \/   \/ \/ \/ \/|
+---\+
 \
  \ FIFO shaper, set with CBS offload added in this patch,
   \ FIFO0 cannot be rate limited
--

The CBS shaper configuration is supposed to be used together with a root
MQPRIO Qdisc offload, which allows adding sk_prio->tc->txq maps that
direct traffic to the appropriate tx queue and map L2 priority to a FIFO
shaper.

The CBS shaper is intended to be used for AVB, where the L2 priority
(pcp field) is used to differentiate classes of traffic. So, additionally,
a vlan needs to be created with an appropriate egress sk_prio->L2 prio map.

If several tx queues are assigned to a CBS shaper, the sum of their
bandwidth must not exceed the bandwidth set for CBS. It's recommended
that the CBS bandwidth be a little bit higher.
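
As a rough illustration of that recommendation (a sketch only, with a
made-up helper name, not driver code):

/* The sum of tx_maxrate of the queues feeding one shaped FIFO should not
 * exceed the CBS bandwidth (idleslope) of that FIFO; ideally the CBS
 * bandwidth is set a little above the sum, e.g. two 20000 kbit/s queues
 * want an idleslope a bit above 40000.
 */
static int cbs_bw_sufficient(unsigned int cbs_kbps,
			     const unsigned int *txq_kbps, int n)
{
	unsigned int sum = 0;
	int i;

	for (i = 0; i < n; i++)
		sum += txq_kbps[i];

	return sum <= cbs_kbps;
}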

The CBS shaper is configured via the CBS qdisc offload interface, using
the tc tool from the iproute2 package.

For instance:

$ tc qdisc replace dev eth0 handle 100: parent root mqprio num_tc 3 \
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 queues 1@0 1@1 2@2 hw 1

$ tc -g class show dev eth0
+---(100:ffe2) mqprio
|    +---(100:3) mqprio
|    +---(100:4) mqprio
|    
+---(100:ffe1) mqprio
|    +---(100:2) mqprio
|    
+---(100:ffe0) mqprio
 +---(100:1) mqprio

$ tc qdisc add dev eth0 parent 100:1 cbs locredit -1440 \
hicredit 60 sendslope -96 idleslope 4 offload 1

$ tc qdisc add dev eth0 parent 100:2 cbs locredit -1470 \
hicredit 62 sendslope -98 idleslope 2 offload 1

The above commands set CBS shapers for tc0 and tc1, for which txq0 and
txq1 are used. Note that the actual configured bandwidth can differ a bit
due to the discreteness of the configuration parameters.

Here, parameters like locredit, hicredit and sendslope are ignored
internally and are supposed to be set under the assumption that the
maximum frame size is 1500 bytes.
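
For reference, a small user-space sketch (an assumption based on the
usual CBS credit formulas, shown only to explain where the class A
numbers above come from for a 100 Mb/s link and 1500 byte frames):

#include <stdio.h>

int main(void)
{
	const int port_rate = 100;	/* same unit as the tc example rates */
	const int max_frame = 1500;	/* bytes, assumed maximum frame size */
	const int idleslope = 4;	/* as passed to tc for 100:1 */

	int sendslope = idleslope - port_rate;			/* -96 */
	int locredit  = max_frame * sendslope / port_rate;	/* -1440 */
	int hicredit  = max_frame * idleslope / port_rate;	/* 60 */

	printf("sendslope %d locredit %d hicredit %d\n",
	       sendslope, locredit, hicredit);
	return 0;
}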

It's assumed that the interface speed does not change across a
reconnection, which is not always true, so inform the user in case the
interface speed has changed, as it can impact the configuration of the
dependent shapers.

For more examples see Documentation.

Signed-off-by: Ivan Khoronzhuk 
---
 drivers/net/ethernet/ti/cpsw.c | 221 +
 1 file changed, 221 insertions(+)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index fd967d2bce5d..87a5586c5ea5 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -46,6 +46,8 @@
 #include "cpts.h"
 #include "davinci_cpdma.h"
 
+#include 
+
 #define CPSW_DEBUG (NETIF_MSG_HW   | NETIF_MSG_WOL | \
 NETIF_MSG_DRV  | NETIF_MSG_LINK| \
 NETIF_MSG_IFUP | NETIF_MSG_INTR| \
@@ -154,8 +156,12 @@ do {   
\
 #define IRQ_NUM2
 #define CPSW_MAX_QUEUES8
 #define CPSW_CPDMA_DESCS_POOL_SIZE_DEFAULT 256
+#define CPSW_FIFO_

[PATCH net-next 3/6] net: ethernet: ti: cpsw: add MQPRIO Qdisc offload

2018-06-11 Thread Ivan Khoronzhuk
This makes it possible to offload the vlan-to-tc priority mapping, under
the assumption that sk_prio == L2 prio.

Example:
$ ethtool -L eth0 rx 1 tx 4

$ tc qdisc replace dev eth0 handle 100: parent root mqprio num_tc 3 \
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 queues 1@0 1@1 2@2 hw 1

$ tc -g class show dev eth0
+---(100:ffe2) mqprio
|    +---(100:3) mqprio
|    +---(100:4) mqprio
|    
+---(100:ffe1) mqprio
|    +---(100:2) mqprio
|    
+---(100:ffe0) mqprio
 +---(100:1) mqprio

Here, 100:1 is txq0, 100:2 is txq1, 100:3 is txq2 and 100:4 is txq3;
txq0 belongs to tc0, txq1 to tc1, txq2 and txq3 to tc2.
The offload part only maps L2 priority to classes of traffic, not to
transmit queues, so to direct traffic to a traffic class a vlan has to
be created with an appropriate egress map.
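
To make the mapping concrete, here is a small user-space sketch (an
illustration only, not driver code) that computes the TX_PRI_MAP register
value produced for the example above:

#include <stdio.h>

/* mirrors cpsw_tc_to_fifo() from this patch, with CPSW_FIFO_SHAPERS_NUM = 3 */
static int tc_to_fifo(int tc, int num_tc)
{
	return tc == num_tc - 1 ? 0 : 3 - tc;
}

int main(void)
{
	const int prio_tc_map[8] = { 2, 2, 1, 0, 2, 2, 2, 2 };	/* "map 2 2 1 0 ..." */
	unsigned int map = 0;
	int i;

	for (i = 0; i < 8; i++)
		map |= tc_to_fifo(prio_tc_map[i], 3) << (4 * i);

	/* prio 3 -> tc0 -> FIFO 3, prio 2 -> tc1 -> FIFO 2, rest -> tc2 -> FIFO 0 */
	printf("tx_prio_map = 0x%08x\n", map);	/* prints 0x00003200 */
	return 0;
}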

Signed-off-by: Ivan Khoronzhuk 
---
 drivers/net/ethernet/ti/cpsw.c | 82 ++
 1 file changed, 82 insertions(+)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 406537d74ec1..fd967d2bce5d 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -39,6 +39,7 @@
 #include 
 
 #include 
+#include 
 
 #include "cpsw.h"
 #include "cpsw_ale.h"
@@ -153,6 +154,8 @@ do {
\
 #define IRQ_NUM2
 #define CPSW_MAX_QUEUES8
 #define CPSW_CPDMA_DESCS_POOL_SIZE_DEFAULT 256
+#define CPSW_TC_NUM4
+#define CPSW_FIFO_SHAPERS_NUM  (CPSW_TC_NUM - 1)
 
 #define CPSW_RX_VLAN_ENCAP_HDR_PRIO_SHIFT  29
 #define CPSW_RX_VLAN_ENCAP_HDR_PRIO_MSKGENMASK(2, 0)
@@ -453,6 +456,7 @@ struct cpsw_priv {
u8  mac_addr[ETH_ALEN];
boolrx_pause;
booltx_pause;
+   boolmqprio_hw;
u32 emac_port;
struct cpsw_common *cpsw;
 };
@@ -1577,6 +1581,14 @@ static void cpsw_slave_stop(struct cpsw_slave *slave, 
struct cpsw_common *cpsw)
soft_reset_slave(slave);
 }
 
+static int cpsw_tc_to_fifo(int tc, int num_tc)
+{
+   if (tc == num_tc - 1)
+   return 0;
+
+   return CPSW_FIFO_SHAPERS_NUM - tc;
+}
+
 static int cpsw_ndo_open(struct net_device *ndev)
 {
struct cpsw_priv *priv = netdev_priv(ndev);
@@ -2190,6 +2202,75 @@ static int cpsw_ndo_set_tx_maxrate(struct net_device 
*ndev, int queue, u32 rate)
return ret;
 }
 
+static int cpsw_set_tc(struct net_device *ndev, void *type_data)
+{
+   struct tc_mqprio_qopt_offload *mqprio = type_data;
+   struct cpsw_priv *priv = netdev_priv(ndev);
+   struct cpsw_common *cpsw = priv->cpsw;
+   int fifo, num_tc, count, offset;
+   struct cpsw_slave *slave;
+   u32 tx_prio_map = 0;
+   int i, tc, ret;
+
+   num_tc = mqprio->qopt.num_tc;
+   if (num_tc > CPSW_TC_NUM)
+   return -EINVAL;
+
+   if (mqprio->mode != TC_MQPRIO_MODE_DCB)
+   return -EINVAL;
+
+   ret = pm_runtime_get_sync(cpsw->dev);
+   if (ret < 0) {
+   pm_runtime_put_noidle(cpsw->dev);
+   return ret;
+   }
+
+   if (num_tc) {
+   for (i = 0; i < 8; i++) {
+   tc = mqprio->qopt.prio_tc_map[i];
+   fifo = cpsw_tc_to_fifo(tc, num_tc);
+   tx_prio_map |= fifo << (4 * i);
+   }
+
+   netdev_set_num_tc(ndev, num_tc);
+   for (i = 0; i < num_tc; i++) {
+   count = mqprio->qopt.count[i];
+   offset = mqprio->qopt.offset[i];
+   netdev_set_tc_queue(ndev, i, count, offset);
+   }
+   }
+
+   if (!mqprio->qopt.hw) {
+   /* restore default configuration */
+   netdev_reset_tc(ndev);
+   tx_prio_map = TX_PRIORITY_MAPPING;
+   }
+
+   priv->mqprio_hw = mqprio->qopt.hw;
+
+   offset = cpsw->version == CPSW_VERSION_1 ?
+CPSW1_TX_PRI_MAP : CPSW2_TX_PRI_MAP;
+
+   slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
+   slave_write(slave, tx_prio_map, offset);
+
+   pm_runtime_put_sync(cpsw->dev);
+
+   return 0;
+}
+
+static int cpsw_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type,
+void *type_data)
+{
+   switch (type) {
+   case TC_SETUP_QDISC_MQPRIO:
+   return cpsw_set_tc(ndev, type_data);
+
+   default:
+   return -EOPNOTSUPP;
+   }
+}
+
 static const struct net_device_ops cpsw_netdev_ops = {
.ndo_open   = cpsw_ndo_open,
.ndo_stop   = cpsw_ndo_stop,
@@ -2205,6 +2286,7 @@ static const struct net_device_ops cpsw_netdev_ops = {
 #endif
.ndo_vlan_rx_add_vid= cpsw_ndo_vlan_rx_add_vid,
.ndo_vlan_rx_kill_vid   = cpsw_ndo_vlan

[PATCH net-next 2/6] net: ethernet: ti: cpdma: fit rated channels in backward order

2018-06-11 Thread Ivan Khoronzhuk
According to the TRM, rated tx channels should be used in 7..0 order,
so correct it.
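
For illustration only, a small user-space sketch (the helper is made up,
it just mirrors the rule the check enforces, it is not driver code):

#include <stdbool.h>

/* Rate-limited tx channels must occupy the highest channel numbers
 * contiguously (7, 6, ...); an unlimited channel above a rate-limited
 * one is rejected, like cpdma_chan_fit_rate() now does.
 */
static bool rates_fit(const unsigned int rate[8])
{
	bool rlim = false;
	int ch;

	for (ch = 0; ch < 8; ch++) {	/* walk from ch0 up to ch7 */
		if (rate[ch]) {
			rlim = true;	/* inside the rate-limited block */
			continue;
		}
		if (rlim)
			return false;	/* unlimited channel above a rated one */
	}
	return true;
}

/* { 0, 0, 0, 0, 0, 0, 20, 40 }  (ch7/ch6 rated)  -> true
 * { 0, 0, 0, 0, 0, 20, 0, 40 }  (hole at ch6)    -> false
 */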

Signed-off-by: Ivan Khoronzhuk 
---
 drivers/net/ethernet/ti/davinci_cpdma.c | 31 -
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index cdbddf16dd29..19bb63902997 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -406,37 +406,36 @@ static int cpdma_chan_fit_rate(struct cpdma_chan *ch, u32 
rate,
struct cpdma_chan *chan;
u32 old_rate = ch->rate;
u32 new_rmask = 0;
-   int rlim = 1;
+   int rlim = 0;
int i;
 
-   *prio_mode = 0;
for (i = tx_chan_num(0); i < tx_chan_num(CPDMA_MAX_CHANNELS); i++) {
chan = ctlr->channels[i];
-   if (!chan) {
-   rlim = 0;
+   if (!chan)
continue;
-   }
 
if (chan == ch)
chan->rate = rate;
 
if (chan->rate) {
-   if (rlim) {
-   new_rmask |= chan->mask;
-   } else {
-   ch->rate = old_rate;
-   dev_err(ctlr->dev, "Prev channel of %dch is not 
rate limited\n",
-   chan->chan_num);
-   return -EINVAL;
-   }
-   } else {
-   *prio_mode = 1;
-   rlim = 0;
+   rlim = 1;
+   new_rmask |= chan->mask;
+   continue;
}
+
+   if (rlim)
+   goto err;
}
 
*rmask = new_rmask;
+   *prio_mode = rlim;
return 0;
+
+err:
+   ch->rate = old_rate;
+   dev_err(ctlr->dev, "Upper cpdma ch%d is not rate limited\n",
+   chan->chan_num);
+   return -EINVAL;
 }
 
 static u32 cpdma_chan_set_factors(struct cpdma_ctlr *ctlr,
-- 
2.17.1



[PATCH net-next 5/6] net: ethernet: ti: cpsw: restore shaper configuration while down/up

2018-06-11 Thread Ivan Khoronzhuk
The shaper configuration needs to be restored after the interface goes
down and up again. This is needed because the corresponding configuration
is still replicated in the kernel settings. Only the shaper context is
restored here, so the vlan configuration should be restored by the user
if needed, especially for devices with one port where vlan frames are
sent via the ALE.

Signed-off-by: Ivan Khoronzhuk 
---
 drivers/net/ethernet/ti/cpsw.c | 47 ++
 1 file changed, 47 insertions(+)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 87a5586c5ea5..f39d2662c5ab 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1807,6 +1807,51 @@ static int cpsw_set_cbs(struct net_device *ndev,
return ret;
 }
 
+static void cpsw_cbs_resume(struct cpsw_slave *slave, struct cpsw_priv *priv)
+{
+   int fifo, bw;
+
+   for (fifo = CPSW_FIFO_SHAPERS_NUM; fifo > 0; fifo--) {
+   bw = priv->fifo_bw[fifo];
+   if (!bw)
+   continue;
+
+   cpsw_set_fifo_rlimit(priv, fifo, bw);
+   }
+}
+
+static void cpsw_mqprio_resume(struct cpsw_slave *slave, struct cpsw_priv 
*priv)
+{
+   struct cpsw_common *cpsw = priv->cpsw;
+   u32 tx_prio_map = 0;
+   int i, tc, fifo;
+   u32 tx_prio_rg;
+
+   if (!priv->mqprio_hw)
+   return;
+
+   for (i = 0; i < 8; i++) {
+   tc = netdev_get_prio_tc_map(priv->ndev, i);
+   fifo = CPSW_FIFO_SHAPERS_NUM - tc;
+   tx_prio_map |= fifo << (4 * i);
+   }
+
+   tx_prio_rg = cpsw->version == CPSW_VERSION_1 ?
+CPSW1_TX_PRI_MAP : CPSW2_TX_PRI_MAP;
+
+   slave_write(slave, tx_prio_map, tx_prio_rg);
+}
+
+/* restore resources after port reset */
+static void cpsw_restore(struct cpsw_priv *priv)
+{
+   /* restore MQPRIO offload */
+   for_each_slave(priv, cpsw_mqprio_resume, priv);
+
+   /* restore CBS offload */
+   for_each_slave(priv, cpsw_cbs_resume, priv);
+}
+
 static int cpsw_ndo_open(struct net_device *ndev)
 {
struct cpsw_priv *priv = netdev_priv(ndev);
@@ -1886,6 +1931,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
 
}
 
+   cpsw_restore(priv);
+
/* Enable Interrupt pacing if configured */
if (cpsw->coal_intvl != 0) {
struct ethtool_coalesce coal;
-- 
2.17.1



[PATCH net-next 0/6] net: ethernet: ti: cpsw: add MQPRIO and CBS Qdisc offload

2018-06-11 Thread Ivan Khoronzhuk
This series adds MQPRIO and CBS Qdisc offload for TI cpsw driver.
It potentially can be used in audio video bridging (AVB) and time
sensitive networking (TSN).

The patchset was tested on AM572x EVM and BBB boards. The last patch of
this series adds a detailed description of the configuration with
examples. For consistency, tools from the patchset "TSN: Add qdisc based
config interface for CBS" were used in the role of talker and listener;
they can be seen here: https://www.spinics.net/lists/netdev/msg460869.html

Based on net-next/master

Ivan Khoronzhuk (6):
  net: ethernet: ti: cpsw: use cpdma channels in backward order for txq
  net: ethernet: ti: cpdma: fit rated channels in backward order
  net: ethernet: ti: cpsw: add MQPRIO Qdisc offload
  net: ethernet: ti: cpsw: add CBS Qdisc offload
  net: ethernet: ti: cpsw: restore shaper configuration while down/up
  Documentation: networking: cpsw: add MQPRIO & CBS offload examples

 Documentation/networking/cpsw.txt   | 540 
 drivers/net/ethernet/ti/cpsw.c  | 364 +++-
 drivers/net/ethernet/ti/davinci_cpdma.c |  31 +-
 3 files changed, 913 insertions(+), 22 deletions(-)
 create mode 100644 Documentation/networking/cpsw.txt

-- 
2.17.1



[PATCH net-next 6/6] Documentation: networking: cpsw: add MQPRIO & CBS offload examples

2018-06-11 Thread Ivan Khoronzhuk
This document describes MQPRIO and CBS Qdisc offload configuration
for cpsw driver based on examples. It potentially can be used in
audio video bridging (AVB) and time sensitive networking (TSN).

Signed-off-by: Ivan Khoronzhuk 
---
 Documentation/networking/cpsw.txt | 540 ++
 1 file changed, 540 insertions(+)
 create mode 100644 Documentation/networking/cpsw.txt

diff --git a/Documentation/networking/cpsw.txt 
b/Documentation/networking/cpsw.txt
new file mode 100644
index ..f5d58f502e52
--- /dev/null
+++ b/Documentation/networking/cpsw.txt
@@ -0,0 +1,540 @@
+* Texas Instruments CPSW ethernet driver
+
+Multiqueue & CBS & MQPRIO
+=========================
+=========================
+
+The cpsw has 3 CBS shapers for each external port. This document
+describes the MQPRIO and CBS Qdisc offload configuration for the cpsw
+driver, based on examples. It can potentially be used in audio video
+bridging (AVB) and time sensitive networking (TSN).
+
+The following examples were tested on AM572x EVM and BBB boards.
+
+Test setup
+==
+
+Under consideration are two examples with an AM572x EVM running the
+cpsw driver in dual_emac mode.
+
+Several prerequisites:
+- TX queues must be rated starting from txq0, which has the highest priority
+- Traffic classes are used starting from 0, which has the highest priority
+- CBS shapers should be used with rated queues
+- The bandwidth for CBS shapers has to be set a little bit higher than the
+  potential incoming rate; thus, the rate of all incoming tx queues has
+  to be a little lower
+- Real rates can differ, due to discreteness
+- Mapping skb priority to txq is not enough; an skb-priority to L2 prio
+  map also has to be created with the ip or vconfig tool
+- Any L2/socket prio (0 - 7) can be used for the classes, but for
+  simplicity the default values are used: 3 and 2
+- Only 2 classes are tested: A and B, but it was checked that it can work
+  with more; the maximum allowed is 4, but a rate can be set only for 3 of them.
+
+Test setup for examples
+=======================
++---+
+|--+|
+|  |  Workstation0  |
+|E |  MAC 18:03:73:66:87:42 |
++-+  +--|t ||
+|| 1  | E |  |  |h |./tsn_listener -d \ |
+|  Target board: | 0  | t |--+  |0 | 18:03:73:66:87:42 -i eth0 \|
+|  AM572x EVM| 0  | h | |  | -s 1500|
+|| 0  | 0 | |--+|
+|  Only 2 classes:   |Mb  +---| +---+
+|  class A, class B  ||
+||+---| +---+
+|| 1  | E | |--+|
+|| 0  | t | |  |  Workstation1  |
+|| 0  | h |--+  |E |  MAC 20:cf:30:85:7d:fd |
+||Mb  | 1 |  +--|t ||
++-+ |h |./tsn_listener -d \ |
+|0 | 20:cf:30:85:7d:fd -i eth0 \|
+|  | -s 1500|
+|--+|
++---+
+
+*
+*
+*
+Example 1: One port tx AVB configuration scheme for target board
+-----------------------------------------------------------------
+(prints and scheme for AM52xx evm, applicable for single port boards)
+
+tc - traffic class
+txq - transmit queue
+p - priority
+f - fifo (cpsw fifo)
+S - shaper configured
+
++--+ u
+| +---+  +---+  +--+ +--+  | s
+| |   |  |   |  |  | |  |  | e
+| | App 1 |  | App 2 |  | Apps | | Apps |  | r
+| | Class A   |  | Class B   |  | Rest | | Rest |  |
+| | Eth0  |  | Eth0  |  | Eth0 | | Eth1 |  | s
+| | VLAN100   |  | VLAN100   |  |   |  | |   |  |  | p
+| | 40 Mb/s   |  | 20 Mb/s   |  |   |  | |   |  |  | a
+| | SO_PRIORITY=3 |  | SO_PRIORITY=2 |  |   |  | |   |  |  | c
+| |   |   |  |   |   |  |   |  | |   |  | 

Re: [PATCH 0/4] RFC CPSW switchdev mode

2018-06-06 Thread Ivan Khoronzhuk

On Tue, Jun 05, 2018 at 06:23:45PM -0500, Grygorii Strashko wrote:



On 06/02/2018 07:26 PM, Andrew Lunn wrote:

*After this patch set*: goal keep things working the same as max as
possible and get rid of TI custom tool.


We are happy to keep things the same, if they fit with the switchdev
model. Anything in your customer TI tool/model which does not fit the
switchdev model you won't be able to keep, except if we agree to
extend the model.


Right. That's the main goal of RFC to identify those gaps.



I can say now, sw0p0 is going to cause problems. I really do suggest
you drop it for the moment in order to get a minimal driver
accepted. sw0p0 does not fit the switchdev model.


Honestly, this is not the first patchset and we started without sw0p0,
but then (with current LKML)
- default vlan offloading breaks traffic reception to P0
(Ilias saying it's fixed in next - good)
- adding vlan to P1/P2 works, but not for P0 (again as per Ilias -fixed)
- mcast - no way to manually add static record and include or exclude P0.


:( above are basic functionality required.




Below I've described some tested use cases (not including the full static
configuration), but regarding sw0p0 - there is work done by Ivan Khoronzhuk [1]
which adds MQPRIO and CBS Qdisc offload and targets AVB network features. It
requires offloading MQPRIO and CBS parameters on all ports including P0. In the
case of P0, the CPDMA TX channel shapers need to be configured, and in the case
of sw0p1/sw0p2, the internal FIFOs.
sw0p0 is also expected to be used to configure the CPDMA interface in general -
number of tx/rx channels, rates, ring sizes.
number of tx/rx channels, rates, ring sizes.


Can this be derived from the configuration on sw0p1 and sw0p2?
sw0p1 has 1 tx channel, sw0p2 has 2 tx channels, so give p0 3 tx
channels?


This is not exactly what is required, sorry, I will probably just need to repeat
what Ivan described in https://lkml.org/lkml/2018/5/18/1135. I'll try to provide
this info tomorrow.

Unfortunately, I'm not sure all this reworking and switchdev conversion would
make sense if we are not able to fit Ivan's work into the new CPSW driver
model ;..( and do an AVB bridge.


Not sure I tracked everything, but the current link example is only for
dual-emac mode, where I guess there is no switchdev, and thus we have only 2
ports with phys and no need for some common configuration. The only common
resource is the tx queues, which are easily shared between the two ports. In
the case of switchdev it is the same, there is no need for port 0, at least for
the cpsw implementation (not sure about others); the tx queue cpdma shapers are
configured separately and can be set through any netdev representing a port,
thus p0 can be avoided in this case as well. For instance, I've created a short
description, based on the current switchdev RFC, showing simple configuration
commands for the shapers from the host side and CBS for every port, w/o p0 usage:
https://git.linaro.org/people/ivan.khoronzhuk/tsn_conf.git/tree/README_switchdev

Will add it once the switchdev decision is stabilized and applied. Basically
nothing changes with the dual-emac configuration except that different rates
are set for the cpdma shapers from the host side.

Probably, p0 is needed for other configurations, or for compatibility reasons
with old versions, but a list of all cases should definitely be created, and in
my opinion, any resource common to both ports can be configured through any
netdev representing a port if it doesn't conflict with the port configuration
itself.






In addition there is a set of global CPSW parameters (not related to P1/P2),
like MAC Authorization Mode, OUI Deny Mode, crc, which I thought could be
added to sw0p0 (using ethtool -priv-flags).


You should describe these features, and then we can figure out how
best to model them. devlink might be an option if they are switch
global.


Ok. that can be postponed.

--
regards,
-grygorii


--
Regards,
Ivan Khoronzhuk


[RFC PATCH 1/6] net: ethernet: ti: cpsw: use cpdma channels in backward order for txq

2018-05-18 Thread Ivan Khoronzhuk
The cpdma channel priority goes from the highest channel number to the
lowest. The driver has a limited number of descriptors that are shared
between the cpdma channels. The number of queues can be tuned with
ethtool, which avoids spending descriptors on unneeded cpdma channels.
For AVB, two rate-limited tx queues are usually enough. Rate limiting
can be applied only to the high-priority queues, so in order to use just
two of them all eight currently have to be created, which is wasteful.

So, to allow using only the needed number of rate-limited tx queues,
save resources, and still be able to set a rate limit on them, assign
tx cpdma channels to queues in backward order.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 14 --
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index a7285dddfd29..9bd615da04d3 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -967,8 +967,8 @@ static int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int 
budget)
 
/* process every unprocessed channel */
ch_map = cpdma_ctrl_txchs_state(cpsw->dma);
-   for (ch = 0, num_tx = 0; ch_map; ch_map >>= 1, ch++) {
-   if (!(ch_map & 0x01))
+   for (ch = 0, num_tx = 0; ch_map & 0xff; ch_map <<= 1, ch++) {
+   if (!(ch_map & 0x80))
continue;
 
txv = &cpsw->txv[ch];
@@ -2431,7 +2431,7 @@ static int cpsw_update_channels_res(struct cpsw_priv 
*priv, int ch_num, int rx)
void (*handler)(void *, int, int);
struct netdev_queue *queue;
struct cpsw_vector *vec;
-   int ret, *ch;
+   int ret, *ch, vch;
 
if (rx) {
ch = &cpsw->rx_ch_num;
@@ -2444,7 +2444,8 @@ static int cpsw_update_channels_res(struct cpsw_priv 
*priv, int ch_num, int rx)
}
 
while (*ch < ch_num) {
-   vec[*ch].ch = cpdma_chan_create(cpsw->dma, *ch, handler, rx);
+   vch = rx ? *ch : 7 - *ch;
+   vec[*ch].ch = cpdma_chan_create(cpsw->dma, vch, handler, rx);
queue = netdev_get_tx_queue(priv->ndev, *ch);
queue->tx_maxrate = 0;
 
@@ -2980,7 +2981,7 @@ static int cpsw_probe(struct platform_device *pdev)
u32 slave_offset, sliver_offset, slave_size;
const struct soc_device_attribute *soc;
struct cpsw_common  *cpsw;
-   int ret = 0, i;
+   int ret = 0, i, ch;
int irq;
 
cpsw = devm_kzalloc(&pdev->dev, sizeof(struct cpsw_common), GFP_KERNEL);
@@ -3155,7 +3156,8 @@ static int cpsw_probe(struct platform_device *pdev)
if (soc)
cpsw->quirk_irq = 1;
 
-   cpsw->txv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_tx_handler, 0);
+   ch = cpsw->quirk_irq ? 0 : 7;
+   cpsw->txv[0].ch = cpdma_chan_create(cpsw->dma, ch, cpsw_tx_handler, 0);
if (IS_ERR(cpsw->txv[0].ch)) {
dev_err(priv->dev, "error initializing tx dma channel\n");
ret = PTR_ERR(cpsw->txv[0].ch);
-- 
2.17.0



[RFC PATCH 4/6] net: ethernet: ti: cpsw: add CBS Qdisc offload

2018-05-18 Thread Ivan Khoronzhuk
The cpsw has up to 4 FIFOs per port, and the upper 3 FIFOs can each feed
a rate-limited queue with shaping. In order to set and enable shaping for
those 3 FIFO queues, a network device with a CBS qdisc attached is
needed. The CBS configuration is added for dual-emac/single port mode
only, but potentially it can also be used in switch mode, based on
switchdev for instance.

Although the FIFO shapers can work without cpdma-level shapers, the basic
usage must be in combination with the cpdma-level shapers described in
the TRM, which are set as maximum rates for the interface queues via
sysfs.

One of the possible configurations with txq shapers and CBS shapers:

  Configured with echo RATE >
  /sys/class/net/eth0/queues/tx-0/tx_maxrate
 /---
/
   /cpdma level shapers
++ ++ ++ ++ ++ ++ ++ ++
| c7 | | c6 | | c5 | | c4 | | c3 | | c2 | | c1 | | c0 |
\/ \/ \/ \/ \/ \/ \/ \/
 \  /   \  /   \  /   \  /   \  /   \  /   \  /   \  /
  \/ \/ \/ \/ \/ \/ \/ \/
+-|--|--|--|-+
|++  |  |  +---+ |
||  ++  |  | |
|v  v   v  v |
| ++ ++ ++ ++ pp++ ++ ++ ++  |
| || || || || oo|| || || ||  |
| | f3 | | f2 | | f1 | | f0 | r  CPSW  r| f3 | | f2 | | f1 | | f0 |  |
| || || || || tt|| || || ||  |
| \/ \/ \/ \/ 01\/ \/ \/ \/  |
|  \  X   \  /   \  /   \  / \  /   \  /   \  /   \  /   |
|   \/ \   \/ \/ \/   \/ \/ \/ \/|
+---\+
 \
  \ FIFO shaper, set with CBS offload added in this patch,
   \ FIFO0 cannot be rate limited
--

The CBS shaper configuration is supposed to be used together with a root
MQPRIO Qdisc offload, which allows adding sk_prio->tc->txq maps that
direct traffic to the appropriate tx queue and map L2 priority to a FIFO
shaper.

The CBS shaper is intended to be used for AVB, where the L2 priority
(pcp field) is used to differentiate classes of traffic. So, additionally,
a vlan needs to be created with an appropriate egress sk_prio->L2 prio map.

If several tx queues are assigned to a CBS shaper, the sum of their
bandwidth must not exceed the bandwidth set for CBS. It's recommended
that the CBS bandwidth be a little bit higher.

The CBS shaper is configured via the CBS qdisc offload interface, using
the tc tool from the iproute2 package.

For instance:

$ tc qdisc replace dev eth0 handle 100: parent root mqprio num_tc 3 \
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 queues 1@0 1@1 2@2 hw 1

$ tc -g class show dev eth0
+---(100:ffe2) mqprio
|    +---(100:3) mqprio
|    +---(100:4) mqprio
|    
+---(100:ffe1) mqprio
|    +---(100:2) mqprio
|    
+---(100:ffe0) mqprio
 +---(100:1) mqprio

$ tc qdisc add dev eth0 parent 100:1 cbs locredit -1440 \
hicredit 60 sendslope -96 idleslope 4 offload 1

$ tc qdisc add dev eth0 parent 100:2 cbs locredit -1470 \
hicredit 62 sendslope -98 idleslope 2 offload 1

The above commands set CBS shapers for tc0 and tc1, for which txq0 and
txq1 are used. Note that the actual configured bandwidth can differ a bit
due to the discreteness of the configuration parameters.

Here, parameters like locredit, hicredit and sendslope are ignored
internally and are supposed to be set under the assumption that the
maximum frame size is 1500 bytes.

It's assumed that the interface speed does not change across a
reconnection, which is not always true, so inform the user in case the
interface speed has changed, as it can impact the configuration of the
dependent shapers.

For more examples see Documentation.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 221 +
 1 file changed, 221 insertions(+)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 4b232cda5436..c7710b0e1c17 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -46,6 +46,8 @@
 #include "cpts.h"
 #include "davinci_cpdma.h"
 
+#include 
+
 #define CPSW_DEBUG (NETIF_MSG_HW   | NETIF_MSG_WOL | \
 NETIF_MSG_DRV  | NETIF_MSG_LINK| \
 NETIF_MSG_IFUP | NETIF_MSG_INTR| \
@@ -154,8 +156,12 @@ do {   
\
 #define IRQ_NUM2
 #define CPSW_MAX_QUEUES8
 #define CPSW_CPDMA_DESCS_POOL_SIZE_DEFAULT 2

[RFC PATCH 0/6] net: ethernet: ti: cpsw: add MQPRIO and CBS Qdisc offload

2018-05-18 Thread Ivan Khoronzhuk
This series adds MQPRIO and CBS Qdisc offload for TI cpsw driver.
It potentially can be used in audio video bridging (AVB) and time
sensitive networking (TSN).

The patchset was tested on AM572x EVM and BBB boards. The last patch of
this series adds a detailed description of the configuration with
examples. For consistency, tools from the patchset "TSN: Add qdisc based
config interface for CBS" were used in the role of talker and listener;
they can be seen here: https://www.spinics.net/lists/netdev/msg460869.html

Based on net-next/master

Ivan Khoronzhuk (6):
  net: ethernet: ti: cpsw: use cpdma channels in backward order for txq
  net: ethernet: ti: cpdma: fit rated channels in backward order
  net: ethernet: ti: cpsw: add MQPRIO Qdisc offload
  net: ethernet: ti: cpsw: add CBS Qdisc offload
  net: ethernet: ti: cpsw: restore shaper configuration while down/up
  Documentation: networking: cpsw: add MQPRIO & CBS offload examples

 Documentation/networking/cpsw.txt   | 540 
 drivers/net/ethernet/ti/cpsw.c  | 364 +++-
 drivers/net/ethernet/ti/davinci_cpdma.c |  31 +-
 3 files changed, 913 insertions(+), 22 deletions(-)
 create mode 100644 Documentation/networking/cpsw.txt

-- 
2.17.0



[RFC PATCH 3/6] net: ethernet: ti: cpsw: add MQPRIO Qdisc offload

2018-05-18 Thread Ivan Khoronzhuk
This makes it possible to offload the vlan-to-tc priority mapping, under
the assumption that sk_prio == L2 prio.

Example:
$ ethtool -L eth0 rx 1 tx 4

$ tc qdisc replace dev eth0 handle 100: parent root mqprio num_tc 3 \
map 2 2 1 0 2 2 2 2 2 2 2 2 2 2 2 2 queues 1@0 1@1 2@2 hw 1

$ tc -g class show dev eth0
+---(100:ffe2) mqprio
|    +---(100:3) mqprio
|    +---(100:4) mqprio
|    
+---(100:ffe1) mqprio
|    +---(100:2) mqprio
|    
+---(100:ffe0) mqprio
 +---(100:1) mqprio

Here, 100:1 is txq0, 100:2 is txq1, 100:3 is txq2 and 100:4 is txq3;
txq0 belongs to tc0, txq1 to tc1, txq2 and txq3 to tc2.
The offload part only maps L2 priority to classes of traffic, not to
transmit queues, so to direct traffic to a traffic class a vlan has to
be created with an appropriate egress map.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 82 ++
 1 file changed, 82 insertions(+)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 9bd615da04d3..4b232cda5436 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -39,6 +39,7 @@
 #include 
 
 #include 
+#include 
 
 #include "cpsw.h"
 #include "cpsw_ale.h"
@@ -153,6 +154,8 @@ do {
\
 #define IRQ_NUM2
 #define CPSW_MAX_QUEUES8
 #define CPSW_CPDMA_DESCS_POOL_SIZE_DEFAULT 256
+#define CPSW_TC_NUM4
+#define CPSW_FIFO_SHAPERS_NUM  (CPSW_TC_NUM - 1)
 
 #define CPSW_RX_VLAN_ENCAP_HDR_PRIO_SHIFT  29
 #define CPSW_RX_VLAN_ENCAP_HDR_PRIO_MSKGENMASK(2, 0)
@@ -453,6 +456,7 @@ struct cpsw_priv {
u8  mac_addr[ETH_ALEN];
boolrx_pause;
booltx_pause;
+   boolmqprio_hw;
u32 emac_port;
struct cpsw_common *cpsw;
 };
@@ -1577,6 +1581,14 @@ static void cpsw_slave_stop(struct cpsw_slave *slave, 
struct cpsw_common *cpsw)
soft_reset_slave(slave);
 }
 
+static int cpsw_tc_to_fifo(int tc, int num_tc)
+{
+   if (tc == num_tc - 1)
+   return 0;
+
+   return CPSW_FIFO_SHAPERS_NUM - tc;
+}
+
 static int cpsw_ndo_open(struct net_device *ndev)
 {
struct cpsw_priv *priv = netdev_priv(ndev);
@@ -2190,6 +2202,75 @@ static int cpsw_ndo_set_tx_maxrate(struct net_device 
*ndev, int queue, u32 rate)
return ret;
 }
 
+static int cpsw_set_tc(struct net_device *ndev, void *type_data)
+{
+   struct tc_mqprio_qopt_offload *mqprio = type_data;
+   struct cpsw_priv *priv = netdev_priv(ndev);
+   struct cpsw_common *cpsw = priv->cpsw;
+   int fifo, num_tc, count, offset;
+   struct cpsw_slave *slave;
+   u32 tx_prio_map = 0;
+   int i, tc, ret;
+
+   num_tc = mqprio->qopt.num_tc;
+   if (num_tc > CPSW_TC_NUM)
+   return -EINVAL;
+
+   if (mqprio->mode != TC_MQPRIO_MODE_DCB)
+   return -EINVAL;
+
+   ret = pm_runtime_get_sync(cpsw->dev);
+   if (ret < 0) {
+   pm_runtime_put_noidle(cpsw->dev);
+   return ret;
+   }
+
+   if (num_tc) {
+   for (i = 0; i < 8; i++) {
+   tc = mqprio->qopt.prio_tc_map[i];
+   fifo = cpsw_tc_to_fifo(tc, num_tc);
+   tx_prio_map |= fifo << (4 * i);
+   }
+
+   netdev_set_num_tc(ndev, num_tc);
+   for (i = 0; i < num_tc; i++) {
+   count = mqprio->qopt.count[i];
+   offset = mqprio->qopt.offset[i];
+   netdev_set_tc_queue(ndev, i, count, offset);
+   }
+   }
+
+   if (!mqprio->qopt.hw) {
+   /* restore default configuration */
+   netdev_reset_tc(ndev);
+   tx_prio_map = TX_PRIORITY_MAPPING;
+   }
+
+   priv->mqprio_hw = mqprio->qopt.hw;
+
+   offset = cpsw->version == CPSW_VERSION_1 ?
+CPSW1_TX_PRI_MAP : CPSW2_TX_PRI_MAP;
+
+   slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
+   slave_write(slave, tx_prio_map, offset);
+
+   pm_runtime_put_sync(cpsw->dev);
+
+   return 0;
+}
+
+static int cpsw_ndo_setup_tc(struct net_device *ndev, enum tc_setup_type type,
+void *type_data)
+{
+   switch (type) {
+   case TC_SETUP_QDISC_MQPRIO:
+   return cpsw_set_tc(ndev, type_data);
+
+   default:
+   return -EOPNOTSUPP;
+   }
+}
+
 static const struct net_device_ops cpsw_netdev_ops = {
.ndo_open   = cpsw_ndo_open,
.ndo_stop   = cpsw_ndo_stop,
@@ -2205,6 +2286,7 @@ static const struct net_device_ops cpsw_netdev_ops = {
 #endif
.ndo_vlan_rx_add_vid= cpsw_ndo_vlan_rx_add_v

[RFC PATCH 6/6] Documentation: networking: cpsw: add MQPRIO & CBS offload examples

2018-05-18 Thread Ivan Khoronzhuk
This document describes MQPRIO and CBS Qdisc offload configuration
for cpsw driver based on examples. It potentially can be used in
audio video bridging (AVB) and time sensitive networking (TSN).

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 Documentation/networking/cpsw.txt | 540 ++
 1 file changed, 540 insertions(+)
 create mode 100644 Documentation/networking/cpsw.txt

diff --git a/Documentation/networking/cpsw.txt 
b/Documentation/networking/cpsw.txt
new file mode 100644
index ..28c64896d59d
--- /dev/null
+++ b/Documentation/networking/cpsw.txt
@@ -0,0 +1,540 @@
+* Texas Instruments CPSW ethernet driver
+
+Multiqueue & CBS & MQPRIO
+=
+=
+
+The cpsw has 3 CBS shapers for each external port. This document
+describes the MQPRIO and CBS Qdisc offload configuration for the cpsw
+driver, based on examples. It can potentially be used in audio video
+bridging (AVB) and time sensitive networking (TSN).
+
+The following examples were tested on AM572x EVM and BBB boards.
+
+Test setup
+==
+
+Under consideration are two examples with an AM572x EVM running the
+cpsw driver in dual_emac mode.
+
+Several prerequisites:
+- TX queues must be rated starting from txq0, which has the highest priority
+- Traffic classes are used starting from 0, which has the highest priority
+- CBS shapers should be used with rated queues
+- The bandwidth for CBS shapers has to be set a little bit higher than the
+  potential incoming rate; thus, the rate of all incoming tx queues has
+  to be a little lower
+- Real rates can differ, due to discreteness
+- Mapping skb priority to txq is not enough; an skb-priority to L2 prio
+  map also has to be created with the ip or vconfig tool
+- Any L2/socket prio (0 - 7) can be used for the classes, but for
+  simplicity the default values are used: 3 and 2
+- Only 2 classes are tested: A and B, but it was checked that it can work
+  with more; the maximum allowed is 4, but a rate can be set only for 3 of them.
+
+Test setup for examples
+===
++---+
+|--+|
+|  |  Workstation0  |
+|E |  MAC 18:03:73:66:87:42 |
++-+  +--|t ||
+|| 1  | E |  |  |h |./tsn_listener -d \ |
+|  Target board: | 0  | t |--+  |0 | 18:03:73:66:87:42 -i eth0 \|
+|  AM572x EVM| 0  | h | |  | -s 1500|
+|| 0  | 0 | |--+|
+|  Only 2 classes:   |Mb  +---| +---+
+|  class A, class B  ||
+||+---| +---+
+|| 1  | E | |--+|
+|| 0  | t | |  |  Workstation1  |
+|| 0  | h |--+  |E |  MAC 20:cf:30:85:7d:fd |
+||Mb  | 1 |  +--|t ||
++-+ |h |./tsn_listener -d \ |
+|0 | 20:cf:30:85:7d:fd -i eth0 \|
+|  | -s 1500|
+|--+|
++---+
+
+*
+*
+*
+Example 1: One port tx AVB configuration scheme for target board
+--
+(prints and scheme for AM52xx evm, applicable for single port boards)
+
+tc - traffic class
+txq - transmit queue
+p - priority
+f - fifo (cpsw fifo)
+S - shaper configured
+
++--+ u
+| +---+  +---+  +--+ +--+  | s
+| |   |  |   |  |  | |  |  | e
+| | App 1 |  | App 2 |  | Apps | | Apps |  | r
+| | Class A   |  | Class B   |  | Rest | | Rest |  |
+| | Eth0  |  | Eth0  |  | Eth0 | | Eth1 |  | s
+| | VLAN100   |  | VLAN100   |  |   |  | |   |  |  | p
+| | 40 Mb/s   |  | 20 Mb/s   |  |   |  | |   |  |  | a
+| | SO_PRIORITY=3 |  | SO_PRIORITY=2 |  |   |  | |   |  |  | c
+| |   |   |  |   |   |  |   |  | |   |  

[RFC PATCH 5/6] net: ethernet: ti: cpsw: restore shaper configuration while down/up

2018-05-18 Thread Ivan Khoronzhuk
The shaper configuration needs to be restored after the interface goes
down and up again. This is needed because the corresponding configuration
is still replicated in the kernel settings. Only the shaper context is
restored here, so the vlan configuration should be restored by the user
if needed, especially for devices with one port where vlan frames are
sent via the ALE.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 47 ++
 1 file changed, 47 insertions(+)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index c7710b0e1c17..c3e88be36c1b 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1807,6 +1807,51 @@ static int cpsw_set_cbs(struct net_device *ndev,
return ret;
 }
 
+static void cpsw_cbs_resume(struct cpsw_slave *slave, struct cpsw_priv *priv)
+{
+   int fifo, bw;
+
+   for (fifo = CPSW_FIFO_SHAPERS_NUM; fifo > 0; fifo--) {
+   bw = priv->fifo_bw[fifo];
+   if (!bw)
+   continue;
+
+   cpsw_set_fifo_rlimit(priv, fifo, bw);
+   }
+}
+
+static void cpsw_mqprio_resume(struct cpsw_slave *slave, struct cpsw_priv 
*priv)
+{
+   struct cpsw_common *cpsw = priv->cpsw;
+   u32 tx_prio_map = 0;
+   int i, tc, fifo;
+   u32 tx_prio_rg;
+
+   if (!priv->mqprio_hw)
+   return;
+
+   for (i = 0; i < 8; i++) {
+   tc = netdev_get_prio_tc_map(priv->ndev, i);
+   fifo = CPSW_FIFO_SHAPERS_NUM - tc;
+   tx_prio_map |= fifo << (4 * i);
+   }
+
+   tx_prio_rg = cpsw->version == CPSW_VERSION_1 ?
+CPSW1_TX_PRI_MAP : CPSW2_TX_PRI_MAP;
+
+   slave_write(slave, tx_prio_map, tx_prio_rg);
+}
+
+/* restore resources after port reset */
+static void cpsw_restore(struct cpsw_priv *priv)
+{
+   /* restore MQPRIO offload */
+   for_each_slave(priv, cpsw_mqprio_resume, priv);
+
+   /* restore CBS offload */
+   for_each_slave(priv, cpsw_cbs_resume, priv);
+}
+
 static int cpsw_ndo_open(struct net_device *ndev)
 {
struct cpsw_priv *priv = netdev_priv(ndev);
@@ -1886,6 +1931,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
 
}
 
+   cpsw_restore(priv);
+
/* Enable Interrupt pacing if configured */
if (cpsw->coal_intvl != 0) {
struct ethtool_coalesce coal;
-- 
2.17.0



[RFC PATCH 2/6] net: ethernet: ti: cpdma: fit rated channels in backward order

2018-05-18 Thread Ivan Khoronzhuk
According to the TRM, rated tx channels should be used in 7..0 order,
so correct it.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/davinci_cpdma.c | 31 -
 1 file changed, 15 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index 31ae04117f0a..37fbdc668cc7 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -406,37 +406,36 @@ static int cpdma_chan_fit_rate(struct cpdma_chan *ch, u32 
rate,
struct cpdma_chan *chan;
u32 old_rate = ch->rate;
u32 new_rmask = 0;
-   int rlim = 1;
+   int rlim = 0;
int i;
 
-   *prio_mode = 0;
for (i = tx_chan_num(0); i < tx_chan_num(CPDMA_MAX_CHANNELS); i++) {
chan = ctlr->channels[i];
-   if (!chan) {
-   rlim = 0;
+   if (!chan)
continue;
-   }
 
if (chan == ch)
chan->rate = rate;
 
if (chan->rate) {
-   if (rlim) {
-   new_rmask |= chan->mask;
-   } else {
-   ch->rate = old_rate;
-   dev_err(ctlr->dev, "Prev channel of %dch is not 
rate limited\n",
-   chan->chan_num);
-   return -EINVAL;
-   }
-   } else {
-   *prio_mode = 1;
-   rlim = 0;
+   rlim = 1;
+   new_rmask |= chan->mask;
+   continue;
}
+
+   if (rlim)
+   goto err;
}
 
*rmask = new_rmask;
+   *prio_mode = rlim;
return 0;
+
+err:
+   ch->rate = old_rate;
+   dev_err(ctlr->dev, "Upper cpdma ch%d is not rate limited\n",
+   chan->chan_num);
+   return -EINVAL;
 }
 
 static u32 cpdma_chan_set_factors(struct cpdma_ctlr *ctlr,
-- 
2.17.0



[PATCH] net: ethernet: ti: cpsw: disable mq feature for "AM33xx ES1.0" devices

2018-05-16 Thread Ivan Khoronzhuk
The early versions of am33xx devices, related to the ES1.0 SoC revision,
have an erratum limiting mq support. It is the same erratum as in
commit 7da1160002f1 ("drivers: net: cpsw: add am335x errata workarround for
interrutps")

AM33xx Errata [1] Advisory 1.0.9
http://www.ti.com/lit/er/sprz360f/sprz360f.pdf

After additional investigation it was found that the driver's w/a is
applied to all AM33xx SoCs and to DM814x, but the erratum exists only for
ES1.0 of the AM33xx family, so the w/a needlessly limits mq support for
revisions after ES1.0. So, disable mq support only for the affected SoCs
and use separate polls for the revisions that allow mq.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---

Based on net-next/master

 drivers/net/ethernet/ti/cpsw.c | 109 ++---
 1 file changed, 60 insertions(+), 49 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 28d893b93d30..a7285dddfd29 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -36,6 +36,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 
@@ -957,7 +958,7 @@ static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id)
return IRQ_HANDLED;
 }
 
-static int cpsw_tx_poll(struct napi_struct *napi_tx, int budget)
+static int cpsw_tx_mq_poll(struct napi_struct *napi_tx, int budget)
 {
u32 ch_map;
int num_tx, cur_budget, ch;
@@ -984,7 +985,21 @@ static int cpsw_tx_poll(struct napi_struct *napi_tx, int 
budget)
if (num_tx < budget) {
napi_complete(napi_tx);
writel(0xff, &cpsw->wr_regs->tx_en);
-   if (cpsw->quirk_irq && cpsw->tx_irq_disabled) {
+   }
+
+   return num_tx;
+}
+
+static int cpsw_tx_poll(struct napi_struct *napi_tx, int budget)
+{
+   struct cpsw_common *cpsw = napi_to_cpsw(napi_tx);
+   int num_tx;
+
+   num_tx = cpdma_chan_process(cpsw->txv[0].ch, budget);
+   if (num_tx < budget) {
+   napi_complete(napi_tx);
+   writel(0xff, &cpsw->wr_regs->tx_en);
+   if (cpsw->tx_irq_disabled) {
cpsw->tx_irq_disabled = false;
enable_irq(cpsw->irqs_table[1]);
}
@@ -993,7 +1008,7 @@ static int cpsw_tx_poll(struct napi_struct *napi_tx, int 
budget)
return num_tx;
 }
 
-static int cpsw_rx_poll(struct napi_struct *napi_rx, int budget)
+static int cpsw_rx_mq_poll(struct napi_struct *napi_rx, int budget)
 {
u32 ch_map;
int num_rx, cur_budget, ch;
@@ -1020,7 +1035,21 @@ static int cpsw_rx_poll(struct napi_struct *napi_rx, int 
budget)
if (num_rx < budget) {
napi_complete_done(napi_rx, num_rx);
writel(0xff, &cpsw->wr_regs->rx_en);
-   if (cpsw->quirk_irq && cpsw->rx_irq_disabled) {
+   }
+
+   return num_rx;
+}
+
+static int cpsw_rx_poll(struct napi_struct *napi_rx, int budget)
+{
+   struct cpsw_common *cpsw = napi_to_cpsw(napi_rx);
+   int num_rx;
+
+   num_rx = cpdma_chan_process(cpsw->rxv[0].ch, budget);
+   if (num_rx < budget) {
+   napi_complete_done(napi_rx, num_rx);
+   writel(0xff, &cpsw->wr_regs->rx_en);
+   if (cpsw->rx_irq_disabled) {
cpsw->rx_irq_disabled = false;
enable_irq(cpsw->irqs_table[0]);
}
@@ -2364,9 +2393,9 @@ static void cpsw_get_channels(struct net_device *ndev,
 {
struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
 
+   ch->max_rx = cpsw->quirk_irq ? 1 : CPSW_MAX_QUEUES;
+   ch->max_tx = cpsw->quirk_irq ? 1 : CPSW_MAX_QUEUES;
ch->max_combined = 0;
-   ch->max_rx = CPSW_MAX_QUEUES;
-   ch->max_tx = CPSW_MAX_QUEUES;
ch->max_other = 0;
ch->other_count = 0;
ch->rx_count = cpsw->rx_ch_num;
@@ -2377,6 +2406,11 @@ static void cpsw_get_channels(struct net_device *ndev,
 static int cpsw_check_ch_settings(struct cpsw_common *cpsw,
  struct ethtool_channels *ch)
 {
+   if (cpsw->quirk_irq) {
+   dev_err(cpsw->dev, "Maximum one tx/rx queue is allowed");
+   return -EOPNOTSUPP;
+   }
+
if (ch->combined_count)
return -EINVAL;
 
@@ -2917,44 +2951,20 @@ static int cpsw_probe_dual_emac(struct cpsw_priv *priv)
return ret;
 }
 
-#define CPSW_QUIRK_IRQ BIT(0)
-
-static const struct platform_device_id cpsw_devtype[] = {
-   {
-   /* keep it for existing comaptibles */
-   .name = "cpsw",
-   .driver_data = CPSW_QUIRK_IRQ,
-   }, {
-   .name = "am335x-cpsw",
-   .driver_data = CPSW_QUIRK_IRQ,
-   }, {
-   .name

[PATCH v2] net: ethernet: ti: cpsw: fix tx vlan priority mapping

2018-04-19 Thread Ivan Khoronzhuk
The CPDMA_TX_PRIORITY_MAP is in reality the vlan pcp field priority
mapping register and basically replaces the vlan pcp field for tagged
packets. So, set it to a 1:1 mapping. Otherwise, it causes an unexpected
change of egress vlan-tagged packets, like prio 2 -> prio 5.
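
For illustration, a tiny user-space sketch (an assumption about how a
4-bit-per-priority map register behaves, not driver code) showing the
remap the old value caused:

#include <stdio.h>

static unsigned int map_prio(unsigned int map, unsigned int pcp)
{
	return (map >> (4 * pcp)) & 0xf;	/* one nibble per priority */
}

int main(void)
{
	/* old value remaps egress prio 2 to 5; the 1:1 value keeps prio 2 */
	printf("old: 2 -> %u\n", map_prio(0x01234567, 2));	/* prints 5 */
	printf("new: 2 -> %u\n", map_prio(0x76543210, 2));	/* prints 2 */
	return 0;
}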

Fixes: e05107e6b747 ("net: ethernet: ti: cpsw: add multi queue support")
Reviewed-by: Grygorii Strashko <grygorii.stras...@ti.com>
Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net/master

 drivers/net/ethernet/ti/cpsw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 3037127..74f8284 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -129,7 +129,7 @@ do {
\
 
 #define RX_PRIORITY_MAPPING0x76543210
 #define TX_PRIORITY_MAPPING0x33221100
-#define CPDMA_TX_PRIORITY_MAP  0x01234567
+#define CPDMA_TX_PRIORITY_MAP  0x76543210
 
 #define CPSW_VLAN_AWAREBIT(1)
 #define CPSW_RX_VLAN_ENCAP BIT(2)
-- 
2.7.4



[PATCH] net: ethernet: ti: cpsw: fix tx vlan priority mapping

2018-04-12 Thread Ivan Khoronzhuk
The CPDMA_TX_PRIORITY_MAP is in reality the vlan pcp field priority
mapping register and basically replaces the vlan pcp field for tagged
packets. So, set it to a 1:1 mapping.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net/master

 drivers/net/ethernet/ti/cpsw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 3037127..74f8284 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -129,7 +129,7 @@ do {
\
 
 #define RX_PRIORITY_MAPPING0x76543210
 #define TX_PRIORITY_MAPPING0x33221100
-#define CPDMA_TX_PRIORITY_MAP  0x01234567
+#define CPDMA_TX_PRIORITY_MAP  0x76543210
 
 #define CPSW_VLAN_AWAREBIT(1)
 #define CPSW_RX_VLAN_ENCAP BIT(2)
-- 
2.7.4



Re: [PATCH] net: ethernet: ti: cpsw: fix net watchdog timeout

2018-02-07 Thread Ivan Khoronzhuk
Rechecked once again; it seems to cover every case at this moment, so

Reviewed-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>

-- 
Regards,
Ivan Khoronzhuk


Re: [PATCH] net: ethernet: ti: cpsw: fix net watchdog timeout

2018-02-07 Thread Ivan Khoronzhuk

On Wed, Feb 07, 2018 at 12:19:11PM -0600, Grygorii Strashko wrote:

On 02/07/2018 07:31 AM, Ivan Khoronzhuk wrote:

On Wed, Feb 07, 2018 at 05:03:19AM +0200, Ivan Khoronzhuk wrote:

On Tue, Feb 06, 2018 at 07:17:06PM -0600, Grygorii Strashko wrote:

It was discovered that simple program which indefinitely sends 200b UDP
packets and runs on TI AM574x SoC (SMP) under RT Kernel triggers network
watchdog timeout in TI CPSW driver (<6 hours run). The network watchdog
timeout is triggered due to race between cpsw_ndo_start_xmit() and
cpsw_tx_handler() [NAPI]

cpsw_ndo_start_xmit()
if (unlikely(!cpdma_check_free_tx_desc(txch))) {
txq = netdev_get_tx_queue(ndev, q_idx);
netif_tx_stop_queue(txq);

^^ as per [1] barrier has to be used after set_bit() otherwise new value
might not be visible to other cpus
}

cpsw_tx_handler()
if (unlikely(netif_tx_queue_stopped(txq)))
netif_tx_wake_queue(txq);

and when it happens ndev TX queue became disabled forever while driver's HW
TX queue is empty.

I'm sure it fixes the test case somehow, but there is some strangeness
(I thought about this some months ago):
1. If there is no free desc, then there is a bunch of descs on the queue ready
to be sent.
2. If one of these descs is missed while this happens, the next one will wake
the queue, because there is a bunch of them in flight. So, if the desc at the
top of the sent queue misses enabling the queue, the next one will most likely
enable it anyway... then how could it happen? The described race is possible
only on the last descriptor; yes, packets are small and the speed is high, so
the probability is very small, but then the following situation is also possible:
- packets are sent fast
- all packets were sent, but no descriptors have been freed yet by the sw
interrupt (NAPI)
- when the interrupt started NAPI, the queue was enabled; all further
interrupts are throttled while NAPI has not finished its work yet
- when a new packet is submitted, no free descs are present yet (NAPI has not
freed any yet), but all packets are sent, so nothing can wake the tx queue, as
the interrupt will not arise: once NAPI has started to free the first
descriptor, interrupts are disabled, because the h/w queue to be sent is empty...
- how can it happen if submitting a packet and handling a packet are under the
channel lock? Not exactly; the period between handling and freeing the
descriptor to the pool is not under the channel lock, here:

spin_unlock_irqrestore(&chan->lock, flags);
if (unlikely(status & CPDMA_DESC_TD_COMPLETE))
cb_status = -ENOSYS;
else
cb_status = status;

__cpdma_chan_free(chan, desc, outlen, cb_status);
return status;

unlock_ret:
spin_unlock_irqrestore(&chan->lock, flags);
return status;

And:
__cpdma_chan_free(chan, desc, outlen, cb_status);
-> cpdma_desc_free(pool, desc, 1);

As a result, the queue deadlocks as you've described.
Just a thought, not checked, but theoretically possible.
What do you think?


Yep. And if this happens, desc_alloc_fail should be >0, while I usually see it
at 0 when the watchdog triggers.

It has to be 0 for the situation I'm trying to describe.
See the example below, to keep it all in one place.


I was able to reproduce this situation once, but with a bunch of debug code
added, which changes the timings:

I also unintentionally observed it several times but usually thought it was
connected with my other experiments. It was with am572x. But now it seems it
can actually happen with plain sources. And maybe more than one fix is needed
here, that's my concern.



Prerequisite: only one free desc available.

cpsw_ndo_start_xmit1:
    prepare and send packet
    -> free desc queue is empty at this moment
    if (unlikely(!cpdma_check_free_tx_desc(txch)))
            netif_tx_stop_queue(txq);
    --- tx queue stopped

cpsw_tx_poll:
    cpdma_chan_process()
            spin_lock_irqsave(&chan->lock, flags);
            chan->count--
            spin_unlock_irqrestore(&chan->lock, flags)
    cpsw_tx_handler()
            if (unlikely(netif_tx_queue_stopped(txq)))
                    netif_tx_wake_queue(txq);
    --- tx queue is woken up

cpsw_ndo_start_xmit2:
    prepare packet
    ret = cpsw_tx_packet_submit(priv, skb, txch);
    // fails as the desc is not returned to the pool yet
    if (unlikely(ret != 0)) {
            cpsw_err(priv, tx_err, "desc submit failed\n");
            goto fail;
    }

cpsw_tx_poll:
    cpdma_desc_free()

cpsw_ndo_start_xmit2:
    fail:
            ndev->stats.tx_dropped++;
            netif_tx_stop_queue(txq);
            // oops.

That's why I added double check and barri

Re: [PATCH] net: ethernet: ti: cpsw: fix net watchdog timeout

2018-02-07 Thread Ivan Khoronzhuk
On Wed, Feb 07, 2018 at 05:03:19AM +0200, Ivan Khoronzhuk wrote:
> On Tue, Feb 06, 2018 at 07:17:06PM -0600, Grygorii Strashko wrote:
> > It was discovered that simple program which indefinitely sends 200b UDP
> > packets and runs on TI AM574x SoC (SMP) under RT Kernel triggers network
> > watchdog timeout in TI CPSW driver (<6 hours run). The network watchdog
> > timeout is triggered due to race between cpsw_ndo_start_xmit() and
> > cpsw_tx_handler() [NAPI]
> > 
> > cpsw_ndo_start_xmit()
> > if (unlikely(!cpdma_check_free_tx_desc(txch))) {
> > txq = netdev_get_tx_queue(ndev, q_idx);
> > netif_tx_stop_queue(txq);
> > 
> > ^^ as per [1] barier has to be used after set_bit() otherwise new value
> > might not be visible to other cpus
> > }
> > 
> > cpsw_tx_handler()
> > if (unlikely(netif_tx_queue_stopped(txq)))
> > netif_tx_wake_queue(txq);
> > 
> > and when it happens ndev TX queue became disabled forever while driver's HW
> > TX queue is empty.
> I'm sure it fixes test case somehow but there is some strangeness.
> (I've thought about this some X months ago):
> 1. If no free desc, then there is bunch of descs on the queue ready to be sent
> 2. If one of this desc while this process was missed then next will wake 
> queue,
> because there is bunch of them on the fly. So, if desc on top of the sent 
> queue
> missed to enable the queue, then next one more likely will enable it anyway..
> then how it could happen? The described race is possible only on last
> descriptor, yes, packets are small the speed is hight, possibility is very 
> small
> .but then next situation is also possible:
> - packets are sent fast
> - all packets were sent, but no any descriptors are freed now by sw interrupt 
> (NAPI)
> - when interrupt had started NAPI, the queue was enabled, all other next 
> interrupts are throttled once NAPI not finished it's work yet.
> - when new packet submitted, no free descs are present yet (NAPI has not freed
> any yet), but all packets are sent, so no one can awake tx queue, as 
> interrupt 
> will not arise when NAPI is started to free first descriptor interrupts are 
> disabled.because h/w queue to be sent is empty...
> - how it can happen as submitting packet and handling packet operations is 
> under 
> channel lock? Not exactly, a period between handling and freeing the 
> descriptor
> to the pool is not under channel lock, here:
> 
>   spin_unlock_irqrestore(>lock, flags);
>   if (unlikely(status & CPDMA_DESC_TD_COMPLETE))
>   cb_status = -ENOSYS;
>   else
>   cb_status = status;
> 
>   __cpdma_chan_free(chan, desc, outlen, cb_status);
>   return status;
> 
> unlock_ret:
>   spin_unlock_irqrestore(>lock, flags);
>   return status;
> 
> And:
> __cpdma_chan_free(chan, desc, outlen, cb_status);
>   -> cpdma_desc_free(pool, desc, 1);
> 
> As result, queue deadlock as you've described.
> Just thought, not checked, but theoretically possible.
> What do you think?

A better explanation, for the rare race:

start conditions:
- all descs are submitted, except the last one, and the queue is not stopped
- no desc has been returned to the pool yet (but packets can be sent)

time flows downward:

submit process:
    new packet is scheduled for submission
    stated that no free descs (with locks)
    lock is freed

NAPI poll process:
    returned all descs to the pool
    and queue is enabled
    interrupt enabled, poll exit

submit process:
    queue is disabled
    submit exit

Result:
- all descs are returned to the pool, submission to the queue disabled
- NAPI cannot wake queue, as all desc were handled already

Assuming a packet size of 200B:
Data size: 200B * 63 descs * ~10 bits/byte ≈ 128000 bits
Time to send all of them: 128000 bits / 1Gbps = 128us

That's enough time for the CPU to be occupied by another process, even on an RT kernel.
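
For readers following the thread: the pattern under discussion pairs the queue stop
with a memory barrier and a re-check of the free descriptors. A minimal sketch of
that pattern (names follow the cpsw/cpdma code, exact signatures may differ):

	/* xmit path: stop the queue when descriptors run out, then re-check */
	if (unlikely(!cpdma_check_free_tx_desc(txch))) {
		txq = netdev_get_tx_queue(ndev, q_idx);
		netif_tx_stop_queue(txq);

		/* make the stopped state visible to other CPUs before
		 * re-reading the descriptor count, so a concurrent
		 * completion cannot miss the wake-up
		 */
		smp_mb__after_atomic();

		if (cpdma_check_free_tx_desc(txch))
			netif_tx_wake_queue(txq);
	}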

> 
> > 
> > Fix this, by adding smp_mb__after_atomic() after netif_tx_stop_queue()
> > calls and double check for free TX descriptors after stopping ndev TX queue
> > - if there are free TX descriptors wake up ndev TX queue.
> > 
> > [1] https://www.kernel.org/doc/html/latest/core-api/atomic_ops.html
> > Signed-off-by: Grygorii Strashko <grygorii.stras...@ti.com>
> > ---
> >  drivers/net/ethernet/ti/cpsw.c | 16 ++--
> >  1 file changed, 14 insertions(+), 2 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethern

Re: [PATCH] net: ethernet: ti: cpsw: fix net watchdog timeout

2018-02-06 Thread Ivan Khoronzhuk
tx_desc(txch))
> + netif_tx_wake_queue(txq);
> +
>   return NETDEV_TX_BUSY;
>  }
>  
> -- 
> 2.10.5
> 

-- 
Regards,
Ivan Khoronzhuk


Re: Issue with commit fea23fb591cc "net: phy: convert read-modify-write to phy_modify()"

2018-01-04 Thread Ivan Khoronzhuk
+ G.Strashko
The change below also breaks phy connect for am572x.

 int genphy_restart_aneg(struct phy_device *phydev)
 {
-   int ctl = phy_read(phydev, MII_BMCR);
-
-   if (ctl < 0)
-   return ctl;
-
-   ctl |= BMCR_ANENABLE | BMCR_ANRESTART;
-
/* Don't isolate the PHY if we're negotiating */
-   ctl &= ~BMCR_ISOLATE;
-
-   return phy_write(phydev, MII_BMCR, ctl);
+   return phy_modify(phydev, MII_BMCR, ~BMCR_ISOLATE,
+ BMCR_ANENABLE | BMCR_ANRESTART);
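
For context on why this conversion breaks things: phy_modify(phydev, regnum, mask, set)
clears the bits in 'mask' and then sets the bits in 'set', so passing ~BMCR_ISOLATE as
the mask clears every BMCR bit except ISOLATE, and ISOLATE is the one bit that was
supposed to be cleared. A hedged sketch of the intended conversion, assuming those
phy_modify() semantics:

	/* only BMCR_ISOLATE belongs in the clear-mask; the rest of BMCR
	 * must be preserved while autonegotiation is restarted
	 */
	return phy_modify(phydev, MII_BMCR, BMCR_ISOLATE,
			  BMCR_ANENABLE | BMCR_ANRESTART);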



-- 
Regards,
Ivan Khoronzhuk


[PATCH v3 net-next] net: ethernet: ti: cpdma: correct error handling for chan create

2017-12-12 Thread Ivan Khoronzhuk
It's not correct to return NULL when that is actually an error and the
function returns error pointers in every other failure case. At the same
time, the cpsw and davinci emac drivers don't check the error case while
creating a channel, so an actual error can be missed. Also remove the
WARNs, replacing them with dev_err messages.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c  | 12 +---
 drivers/net/ethernet/ti/davinci_cpdma.c |  2 +-
 drivers/net/ethernet/ti/davinci_emac.c  | 11 +--
 3 files changed, 19 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index a60a378..3c85a08 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -3065,10 +3065,16 @@ static int cpsw_probe(struct platform_device *pdev)
}
 
cpsw->txv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_tx_handler, 0);
+   if (IS_ERR(cpsw->txv[0].ch)) {
+   dev_err(priv->dev, "error initializing tx dma channel\n");
+   ret = PTR_ERR(cpsw->txv[0].ch);
+   goto clean_dma_ret;
+   }
+
cpsw->rxv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_rx_handler, 1);
-   if (WARN_ON(!cpsw->rxv[0].ch || !cpsw->txv[0].ch)) {
-   dev_err(priv->dev, "error initializing dma channels\n");
-   ret = -ENOMEM;
+   if (IS_ERR(cpsw->rxv[0].ch)) {
+   dev_err(priv->dev, "error initializing rx dma channel\n");
+   ret = PTR_ERR(cpsw->rxv[0].ch);
goto clean_dma_ret;
}
 
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index e4d6edf..6f9173f 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -893,7 +893,7 @@ struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr 
*ctlr, int chan_num,
chan_num = rx_type ? rx_chan_num(chan_num) : tx_chan_num(chan_num);
 
if (__chan_linear(chan_num) >= ctlr->num_chan)
-   return NULL;
+   return ERR_PTR(-EINVAL);
 
chan = devm_kzalloc(ctlr->dev, sizeof(*chan), GFP_KERNEL);
if (!chan)
diff --git a/drivers/net/ethernet/ti/davinci_emac.c 
b/drivers/net/ethernet/ti/davinci_emac.c
index f58c0c6..abceea8 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -1870,10 +1870,17 @@ static int davinci_emac_probe(struct platform_device 
*pdev)
 
priv->txchan = cpdma_chan_create(priv->dma, EMAC_DEF_TX_CH,
 emac_tx_handler, 0);
+   if (IS_ERR(priv->txchan)) {
+   dev_err(>dev, "error initializing tx dma channel\n");
+   rc = PTR_ERR(priv->txchan);
+   goto no_cpdma_chan;
+   }
+
priv->rxchan = cpdma_chan_create(priv->dma, EMAC_DEF_RX_CH,
 emac_rx_handler, 1);
-   if (WARN_ON(!priv->txchan || !priv->rxchan)) {
-   rc = -ENOMEM;
+   if (IS_ERR(priv->rxchan)) {
+   dev_err(>dev, "error initializing rx dma channel\n");
+   rc = PTR_ERR(priv->rxchan);
goto no_cpdma_chan;
}
 
-- 
2.7.4



Re: [PATCH v2 net-next] net: ethernet: ti: cpdma: correct error handling for chan create

2017-12-12 Thread Ivan Khoronzhuk
On Tue, Dec 12, 2017 at 11:08:51AM -0600, Grygorii Strashko wrote:
> 
> 
> On 12/12/2017 10:35 AM, Ivan Khoronzhuk wrote:
> > It's not correct to return NULL when that is actually an error and
> > function returns errors in any other wrong case. In the same time,
> > the cpsw driver and davinci emac doesn't check error case while
> > creating channel and it can miss actual error. Also remove WARNs
> > duplicated dev_err msgs.
> > 
> > Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> > ---
> >   drivers/net/ethernet/ti/cpsw.c  | 12 +---
> >   drivers/net/ethernet/ti/davinci_cpdma.c |  2 +-
> >   drivers/net/ethernet/ti/davinci_emac.c  |  9 +++--
> >   3 files changed, 17 insertions(+), 6 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
> > index a60a378..3c85a08 100644
> > --- a/drivers/net/ethernet/ti/cpsw.c
> > +++ b/drivers/net/ethernet/ti/cpsw.c
> > @@ -3065,10 +3065,16 @@ static int cpsw_probe(struct platform_device *pdev)
> > }
> >   
> > cpsw->txv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_tx_handler, 0);
> > +   if (IS_ERR(cpsw->txv[0].ch)) {
> > +   dev_err(priv->dev, "error initializing tx dma channel\n");
> > +   ret = PTR_ERR(cpsw->txv[0].ch);
> > +   goto clean_dma_ret;
> > +   }
> > +
> > cpsw->rxv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_rx_handler, 1);
> > -   if (WARN_ON(!cpsw->rxv[0].ch || !cpsw->txv[0].ch)) {
> > -   dev_err(priv->dev, "error initializing dma channels\n");
> > -   ret = -ENOMEM;
> > +   if (IS_ERR(cpsw->rxv[0].ch)) {
> > +   dev_err(priv->dev, "error initializing rx dma channel\n");
> > +   ret = PTR_ERR(cpsw->rxv[0].ch);
> > goto clean_dma_ret;
> > }
> >   
> > diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
> > b/drivers/net/ethernet/ti/davinci_cpdma.c
> > index e4d6edf..6f9173f 100644
> > --- a/drivers/net/ethernet/ti/davinci_cpdma.c
> > +++ b/drivers/net/ethernet/ti/davinci_cpdma.c
> > @@ -893,7 +893,7 @@ struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr 
> > *ctlr, int chan_num,
> > chan_num = rx_type ? rx_chan_num(chan_num) : tx_chan_num(chan_num);
> >   
> > if (__chan_linear(chan_num) >= ctlr->num_chan)
> > -   return NULL;
> > +   return ERR_PTR(-EINVAL);
> >   
> > chan = devm_kzalloc(ctlr->dev, sizeof(*chan), GFP_KERNEL);
> > if (!chan)
> > diff --git a/drivers/net/ethernet/ti/davinci_emac.c 
> > b/drivers/net/ethernet/ti/davinci_emac.c
> > index f58c0c6..3d4af64 100644
> > --- a/drivers/net/ethernet/ti/davinci_emac.c
> > +++ b/drivers/net/ethernet/ti/davinci_emac.c
> > @@ -1870,10 +1870,15 @@ static int davinci_emac_probe(struct 
> > platform_device *pdev)
> >   
> > priv->txchan = cpdma_chan_create(priv->dma, EMAC_DEF_TX_CH,
> >  emac_tx_handler, 0);
> > +   if (WARN_ON(IS_ERR(priv->txchan))) {
> 
> So, logically WARN_ON() should be removed in  davinci_emac.c also. Right?
It doesn't have a duplicate dev_err(), so not exactly.
But it would be better to replace them with dev_err() if there is no objection.


> 
> > +   rc = PTR_ERR(priv->txchan);
> > +   goto no_cpdma_chan;
> > +   }
> > +
> > priv->rxchan = cpdma_chan_create(priv->dma, EMAC_DEF_RX_CH,
> >  emac_rx_handler, 1);
> > -   if (WARN_ON(!priv->txchan || !priv->rxchan)) {
> > -   rc = -ENOMEM;
> > +   if (WARN_ON(IS_ERR(priv->rxchan))) {
> > +   rc = PTR_ERR(priv->rxchan);
> > goto no_cpdma_chan;
> > }
> >   
> > 
> 
> -- 
> regards,
> -grygorii

-- 
Regards,
Ivan Khoronzhuk


[PATCH v2 net-next] net: ethernet: ti: cpdma: correct error handling for chan create

2017-12-12 Thread Ivan Khoronzhuk
It's not correct to return NULL when that is actually an error and the
function returns error pointers in every other failure case. At the same
time, the cpsw and davinci emac drivers don't check the error case while
creating a channel, so an actual error can be missed. Also remove WARNs
that duplicate dev_err msgs.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c  | 12 +---
 drivers/net/ethernet/ti/davinci_cpdma.c |  2 +-
 drivers/net/ethernet/ti/davinci_emac.c  |  9 +++--
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index a60a378..3c85a08 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -3065,10 +3065,16 @@ static int cpsw_probe(struct platform_device *pdev)
}
 
cpsw->txv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_tx_handler, 0);
+   if (IS_ERR(cpsw->txv[0].ch)) {
+   dev_err(priv->dev, "error initializing tx dma channel\n");
+   ret = PTR_ERR(cpsw->txv[0].ch);
+   goto clean_dma_ret;
+   }
+
cpsw->rxv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_rx_handler, 1);
-   if (WARN_ON(!cpsw->rxv[0].ch || !cpsw->txv[0].ch)) {
-   dev_err(priv->dev, "error initializing dma channels\n");
-   ret = -ENOMEM;
+   if (IS_ERR(cpsw->rxv[0].ch)) {
+   dev_err(priv->dev, "error initializing rx dma channel\n");
+   ret = PTR_ERR(cpsw->rxv[0].ch);
goto clean_dma_ret;
}
 
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index e4d6edf..6f9173f 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -893,7 +893,7 @@ struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr 
*ctlr, int chan_num,
chan_num = rx_type ? rx_chan_num(chan_num) : tx_chan_num(chan_num);
 
if (__chan_linear(chan_num) >= ctlr->num_chan)
-   return NULL;
+   return ERR_PTR(-EINVAL);
 
chan = devm_kzalloc(ctlr->dev, sizeof(*chan), GFP_KERNEL);
if (!chan)
diff --git a/drivers/net/ethernet/ti/davinci_emac.c 
b/drivers/net/ethernet/ti/davinci_emac.c
index f58c0c6..3d4af64 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -1870,10 +1870,15 @@ static int davinci_emac_probe(struct platform_device 
*pdev)
 
priv->txchan = cpdma_chan_create(priv->dma, EMAC_DEF_TX_CH,
 emac_tx_handler, 0);
+   if (WARN_ON(IS_ERR(priv->txchan))) {
+   rc = PTR_ERR(priv->txchan);
+   goto no_cpdma_chan;
+   }
+
priv->rxchan = cpdma_chan_create(priv->dma, EMAC_DEF_RX_CH,
 emac_rx_handler, 1);
-   if (WARN_ON(!priv->txchan || !priv->rxchan)) {
-   rc = -ENOMEM;
+   if (WARN_ON(IS_ERR(priv->rxchan))) {
+   rc = PTR_ERR(priv->rxchan);
goto no_cpdma_chan;
}
 
-- 
2.7.4



Re: [PATCH net-next] net: ethernet: ti: cpdma: rate is not changed - correct case

2017-12-07 Thread Ivan Khoronzhuk
On Thu, Dec 07, 2017 at 03:13:15PM -0500, David Miller wrote:
> From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> Date: Thu, 7 Dec 2017 22:10:06 +0200
> 
> > On Thu, Dec 07, 2017 at 02:50:24PM -0500, David Miller wrote:
> >> From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> >> Date: Thu, 7 Dec 2017 21:48:56 +0200
> >> 
> >> > On Wed, Dec 06, 2017 at 04:35:45PM -0500, David Miller wrote:
> >> >> From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> >> >> Date: Wed,  6 Dec 2017 16:41:18 +0200
> >> >> 
> >> >> > If rate is the same as set it's correct case.
> >> >> > 
> >> >> > Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> >> >> > ---
> >> >> > Based on net-next/master
> >> >> > 
> >> >> >  drivers/net/ethernet/ti/davinci_cpdma.c | 2 +-
> >> >> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >> >> > 
> >> >> > diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
> >> >> > b/drivers/net/ethernet/ti/davinci_cpdma.c
> >> >> > index e4d6edf..dbe9167 100644
> >> >> > --- a/drivers/net/ethernet/ti/davinci_cpdma.c
> >> >> > +++ b/drivers/net/ethernet/ti/davinci_cpdma.c
> >> >> > @@ -841,7 +841,7 @@ int cpdma_chan_set_rate(struct cpdma_chan *ch, 
> >> >> > u32 rate)
> >> >> >   return -EINVAL;
> >> >> >  
> >> >> >   if (ch->rate == rate)
> >> >> > - return rate;
> >> >> > + return 0;
> >> >> 
> >> >> Looking at the one and only caller of this function, 
> >> >> cpsw_ndo_set_tx_maxrate, it
> >> >> makes sure this can never, ever, happen.
> >> > In current circumstances yes, it will never happen.
> >> > But I caught it while adding related code and better return 0 if upper 
> >> > caller
> >> > doesn't have such check. Suppose that cpdma module is responsible for 
> >> > itself
> >> > and if it's critical I can send this patch along with whole related 
> >> > series.
> >> 
> >> You have to decide one way or the other, who is responsible.
> >> 
> >> I think checking higher up is better because it's cheaper at that point to
> >> look at the per-netdev queue rate setting before moving down deeper into 
> >> the
> >> driver specific data-structures.
> > 
> > No objection, but upper caller not always knows current rate and for doing 
> > like
> > this it needs read it first, and this is also some redundancy.
> 
> How can the upper caller not know the current rate?  The rate is
> always stored in the generic netdev per-queue datastructure.
> 
> And that's what existing code checks right now.
Right now, when the generic netdev is the only caller - yes.
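
For reference, the per-queue rate David refers to is cached by the core in struct
netdev_queue; a minimal sketch of the caller-side check (illustrative only, the
field name is assumed to be tx_maxrate as in current kernels):

	struct netdev_queue *txq = netdev_get_tx_queue(ndev, queue);

	/* the core stores the last accepted rate per tx queue */
	if (txq->tx_maxrate == rate)
		return 0;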

-- 
Regards,
Ivan Khoronzhuk


Re: [PATCH net-next] net: ethernet: ti: cpdma: rate is not changed - correct case

2017-12-07 Thread Ivan Khoronzhuk
On Thu, Dec 07, 2017 at 02:50:24PM -0500, David Miller wrote:
> From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> Date: Thu, 7 Dec 2017 21:48:56 +0200
> 
> > On Wed, Dec 06, 2017 at 04:35:45PM -0500, David Miller wrote:
> >> From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> >> Date: Wed,  6 Dec 2017 16:41:18 +0200
> >> 
> >> > If rate is the same as set it's correct case.
> >> > 
> >> > Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> >> > ---
> >> > Based on net-next/master
> >> > 
> >> >  drivers/net/ethernet/ti/davinci_cpdma.c | 2 +-
> >> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >> > 
> >> > diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
> >> > b/drivers/net/ethernet/ti/davinci_cpdma.c
> >> > index e4d6edf..dbe9167 100644
> >> > --- a/drivers/net/ethernet/ti/davinci_cpdma.c
> >> > +++ b/drivers/net/ethernet/ti/davinci_cpdma.c
> >> > @@ -841,7 +841,7 @@ int cpdma_chan_set_rate(struct cpdma_chan *ch, u32 
> >> > rate)
> >> >  return -EINVAL;
> >> >  
> >> >  if (ch->rate == rate)
> >> > -return rate;
> >> > +return 0;
> >> 
> >> Looking at the one and only caller of this function, 
> >> cpsw_ndo_set_tx_maxrate, it
> >> makes sure this can never, ever, happen.
> > In current circumstances yes, it will never happen.
> > But I caught it while adding related code and better return 0 if upper 
> > caller
> > doesn't have such check. Suppose that cpdma module is responsible for itself
> > and if it's critical I can send this patch along with whole related series.
> 
> You have to decide one way or the other, who is responsible.
> 
> I think checking higher up is better because it's cheaper at that point to
> look at the per-netdev queue rate setting before moving down deeper into the
> driver specific data-structures.

No objection, but the upper caller doesn't always know the current rate; to do it
like this it would need to read the rate first, which is also somewhat redundant.

-- 
Regards,
Ivan Khoronzhuk


Re: [PATCH net-next] net: ethernet: ti: cpdma: correct error handling for chan create

2017-12-07 Thread Ivan Khoronzhuk
On Wed, Dec 06, 2017 at 04:38:14PM -0500, David Miller wrote:
> From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> Date: Wed,  6 Dec 2017 16:54:09 +0200
> 
> > @@ -3065,10 +3065,16 @@ static int cpsw_probe(struct platform_device *pdev)
> > }
> >  
> > cpsw->txv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_tx_handler, 0);
> > +   if (WARN_ON(IS_ERR(cpsw->txv[0].ch))) {
> > +   dev_err(priv->dev, "error initializing tx dma channel\n");
> > +   ret = PTR_ERR(cpsw->txv[0].ch);
> > +   goto clean_dma_ret;
> > +   }
> > +
> 
> You're already emitting a proper dev_err() message, therefore WARN_ON()
> is a duplicate notification to the logs and therefore not appropriate.

Yes, will remove unneeded WARNs

-- 
Regards,
Ivan Khoronzhuk


Re: [PATCH net-next] net: ethernet: ti: cpdma: rate is not changed - correct case

2017-12-07 Thread Ivan Khoronzhuk
On Wed, Dec 06, 2017 at 04:35:45PM -0500, David Miller wrote:
> From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> Date: Wed,  6 Dec 2017 16:41:18 +0200
> 
> > If rate is the same as set it's correct case.
> > 
> > Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> > ---
> > Based on net-next/master
> > 
> >  drivers/net/ethernet/ti/davinci_cpdma.c | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
> > b/drivers/net/ethernet/ti/davinci_cpdma.c
> > index e4d6edf..dbe9167 100644
> > --- a/drivers/net/ethernet/ti/davinci_cpdma.c
> > +++ b/drivers/net/ethernet/ti/davinci_cpdma.c
> > @@ -841,7 +841,7 @@ int cpdma_chan_set_rate(struct cpdma_chan *ch, u32 rate)
> > return -EINVAL;
> >  
> > if (ch->rate == rate)
> > -   return rate;
> > +   return 0;
> 
> Looking at the one and only caller of this function, cpsw_ndo_set_tx_maxrate, 
> it
> makes sure this can never, ever, happen.
In the current circumstances, yes, it will never happen.
But I caught it while adding related code, and it's better to return 0 in case the
upper caller doesn't have such a check. I suppose the cpdma module is responsible
for itself; if it's critical I can send this patch along with the whole related series.

> 
> So I would instead remove this check completely since it can never trigger.

-- 
Regards,
Ivan Khoronzhuk


[PATCH net-next] net: ethernet: ti: cpdma: correct error handling for chan create

2017-12-06 Thread Ivan Khoronzhuk
It's not correct to return NULL when that is actually an error and the
function returns error pointers in every other failure case. At the same
time, the cpsw and davinci emac drivers don't check the error case while
creating a channel, so an actual error can be missed. Correct this mess.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c  | 12 +---
 drivers/net/ethernet/ti/davinci_cpdma.c |  2 +-
 drivers/net/ethernet/ti/davinci_emac.c  |  9 +++--
 3 files changed, 17 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index a60a378..ee58f48 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -3065,10 +3065,16 @@ static int cpsw_probe(struct platform_device *pdev)
}
 
cpsw->txv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_tx_handler, 0);
+   if (WARN_ON(IS_ERR(cpsw->txv[0].ch))) {
+   dev_err(priv->dev, "error initializing tx dma channel\n");
+   ret = PTR_ERR(cpsw->txv[0].ch);
+   goto clean_dma_ret;
+   }
+
cpsw->rxv[0].ch = cpdma_chan_create(cpsw->dma, 0, cpsw_rx_handler, 1);
-   if (WARN_ON(!cpsw->rxv[0].ch || !cpsw->txv[0].ch)) {
-   dev_err(priv->dev, "error initializing dma channels\n");
-   ret = -ENOMEM;
+   if (WARN_ON(IS_ERR(cpsw->rxv[0].ch))) {
+   dev_err(priv->dev, "error initializing rx dma channel\n");
+   ret = PTR_ERR(cpsw->rxv[0].ch);
goto clean_dma_ret;
}
 
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index dbe9167..a9fab42 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -893,7 +893,7 @@ struct cpdma_chan *cpdma_chan_create(struct cpdma_ctlr 
*ctlr, int chan_num,
chan_num = rx_type ? rx_chan_num(chan_num) : tx_chan_num(chan_num);
 
if (__chan_linear(chan_num) >= ctlr->num_chan)
-   return NULL;
+   return ERR_PTR(-EINVAL);
 
chan = devm_kzalloc(ctlr->dev, sizeof(*chan), GFP_KERNEL);
if (!chan)
diff --git a/drivers/net/ethernet/ti/davinci_emac.c 
b/drivers/net/ethernet/ti/davinci_emac.c
index f58c0c6..3d4af64 100644
--- a/drivers/net/ethernet/ti/davinci_emac.c
+++ b/drivers/net/ethernet/ti/davinci_emac.c
@@ -1870,10 +1870,15 @@ static int davinci_emac_probe(struct platform_device 
*pdev)
 
priv->txchan = cpdma_chan_create(priv->dma, EMAC_DEF_TX_CH,
 emac_tx_handler, 0);
+   if (WARN_ON(IS_ERR(priv->txchan))) {
+   rc = PTR_ERR(priv->txchan);
+   goto no_cpdma_chan;
+   }
+
priv->rxchan = cpdma_chan_create(priv->dma, EMAC_DEF_RX_CH,
 emac_rx_handler, 1);
-   if (WARN_ON(!priv->txchan || !priv->rxchan)) {
-   rc = -ENOMEM;
+   if (WARN_ON(IS_ERR(priv->rxchan))) {
+   rc = PTR_ERR(priv->rxchan);
goto no_cpdma_chan;
}
 
-- 
2.7.4



Re: [next-queue PATCH v7 4/6] net/sched: Introduce Credit Based Shaper (CBS) qdisc

2017-10-13 Thread Ivan Khoronzhuk
On Thu, Oct 12, 2017 at 05:40:03PM -0700, Vinicius Costa Gomes wrote:
> This queueing discipline implements the shaper algorithm defined by
> the 802.1Q-2014 Section 8.6.8.2 and detailed in Annex L.
> 
> It's primary usage is to apply some bandwidth reservation to user
> defined traffic classes, which are mapped to different queues via the
> mqprio qdisc.
> 
> Only a simple software implementation is added for now.
> 
> Signed-off-by: Vinicius Costa Gomes 
> Signed-off-by: Jesus Sanchez-Palencia 
> ---
>  include/uapi/linux/pkt_sched.h |  18 +++
>  net/sched/Kconfig  |  11 ++
>  net/sched/Makefile |   1 +
>  net/sched/sch_cbs.c| 314 
> +
>  4 files changed, 344 insertions(+)
>  create mode 100644 net/sched/sch_cbs.c
> 
> diff --git a/include/uapi/linux/pkt_sched.h b/include/uapi/linux/pkt_sched.h
> index 099bf5528fed..41e349df4bf4 100644
> --- a/include/uapi/linux/pkt_sched.h
> +++ b/include/uapi/linux/pkt_sched.h
> @@ -871,4 +871,22 @@ struct tc_pie_xstats {
>   __u32 maxq; /* maximum queue size */
>   __u32 ecn_mark; /* packets marked with ecn*/
>  };
> +
> +/* CBS */
> +struct tc_cbs_qopt {
> + __u8 offload;
> + __s32 hicredit;
> + __s32 locredit;
> + __s32 idleslope;
> + __s32 sendslope;
> +};
> +
> +enum {
> + TCA_CBS_UNSPEC,
> + TCA_CBS_PARMS,
> + __TCA_CBS_MAX,
> +};
> +
> +#define TCA_CBS_MAX (__TCA_CBS_MAX - 1)
> +
>  #endif
> diff --git a/net/sched/Kconfig b/net/sched/Kconfig
> index e70ed26485a2..c03d86a7775e 100644
> --- a/net/sched/Kconfig
> +++ b/net/sched/Kconfig
> @@ -172,6 +172,17 @@ config NET_SCH_TBF
> To compile this code as a module, choose M here: the
> module will be called sch_tbf.
>  
> +config NET_SCH_CBS
> + tristate "Credit Based Shaper (CBS)"
> + ---help---
> +   Say Y here if you want to use the Credit Based Shaper (CBS) packet
> +   scheduling algorithm.
> +
> +   See the top of <file:net/sched/sch_cbs.c> for more details.
> +
> +   To compile this code as a module, choose M here: the
> +   module will be called sch_cbs.
> +
>  config NET_SCH_GRED
>   tristate "Generic Random Early Detection (GRED)"
>   ---help---
> diff --git a/net/sched/Makefile b/net/sched/Makefile
> index 7b915d226de7..80c8f92d162d 100644
> --- a/net/sched/Makefile
> +++ b/net/sched/Makefile
> @@ -52,6 +52,7 @@ obj-$(CONFIG_NET_SCH_FQ_CODEL)  += sch_fq_codel.o
>  obj-$(CONFIG_NET_SCH_FQ) += sch_fq.o
>  obj-$(CONFIG_NET_SCH_HHF)+= sch_hhf.o
>  obj-$(CONFIG_NET_SCH_PIE)+= sch_pie.o
> +obj-$(CONFIG_NET_SCH_CBS)+= sch_cbs.o
>  
>  obj-$(CONFIG_NET_CLS_U32)+= cls_u32.o
>  obj-$(CONFIG_NET_CLS_ROUTE4) += cls_route.o
> diff --git a/net/sched/sch_cbs.c b/net/sched/sch_cbs.c
> new file mode 100644
> index ..0643587e6dc8
> --- /dev/null
> +++ b/net/sched/sch_cbs.c
> @@ -0,0 +1,314 @@
> +/*
> + * net/sched/sch_cbs.c   Credit Based Shaper
> + *
> + *   This program is free software; you can redistribute it and/or
> + *   modify it under the terms of the GNU General Public License
> + *   as published by the Free Software Foundation; either version
> + *   2 of the License, or (at your option) any later version.
> + *
> + * Authors:  Vinicius Costa Gomes 
> + *
> + */
> +
> +/* Credit Based Shaper (CBS)
> + * =
> + *
> + * This is a simple rate-limiting shaper aimed at TSN applications on
> + * systems with known traffic workloads.
> + *
> + * Its algorithm is defined by the IEEE 802.1Q-2014 Specification,
> + * Section 8.6.8.2, and explained in more detail in the Annex L of the
> + * same specification.
> + *
> + * There are four tunables to be considered:
> + *
> + *   'idleslope': Idleslope is the rate of credits that is
> + *   accumulated (in kilobits per second) when there is at least
> + *   one packet waiting for transmission. Packets are transmitted
> + *   when the current value of credits is equal or greater than
> + *   zero. When there is no packet to be transmitted the amount of
> + *   credits is set to zero. This is the main tunable of the CBS
> + *   algorithm.
> + *
> + *   'sendslope':
> + *   Sendslope is the rate of credits that is depleted (it should be a
> + *   negative number of kilobits per second) when a transmission is
> + *   ocurring. It can be calculated as follows, (IEEE 802.1Q-2014 Section
> + *   8.6.8.2 item g):
> + *
> + *   sendslope = idleslope - port_transmit_rate
> + *
> + *   'hicredit': Hicredit defines the maximum amount of credits (in
> + *   bytes) that can be accumulated. Hicredit depends on the
> + *   characteristics of interfering traffic,
> + *   'max_interference_size' is the maximum size of any burst of
> + *   traffic that can delay the transmission of a frame that is
> + *   available for transmission for this traffic class, 
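
A quick worked example of the slope relation quoted above, with hypothetical
numbers: reserving idleslope = 20000 kbit/s for the class on a 100 Mbit/s port
gives sendslope = idleslope - port_transmit_rate = 20000 - 100000 = -80000 kbit/s.
With tc that would look roughly like "tc qdisc replace dev eth0 parent 100:1 cbs
idleslope 20000 sendslope -80000 hicredit 30 locredit -1470" (the parent handle
and credit values here are illustrative assumptions, not taken from this thread).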

[PATCH] net: ethernet: ti: netcp_core: no need in netif_napi_del

2017-09-07 Thread Ivan Khoronzhuk
Don't remove rx_napi explicitly just before free_netdev(); that is supposed
to be done inside it, and doing it here is confusing without a matching tx_napi deletion.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next/master

 drivers/net/ethernet/ti/netcp_core.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/netcp_core.c 
b/drivers/net/ethernet/ti/netcp_core.c
index eb96a69..437d362 100644
--- a/drivers/net/ethernet/ti/netcp_core.c
+++ b/drivers/net/ethernet/ti/netcp_core.c
@@ -2145,7 +2145,6 @@ static void netcp_delete_interface(struct netcp_device 
*netcp_device,
 
of_node_put(netcp->node_interface);
unregister_netdev(ndev);
-   netif_napi_del(>rx_napi);
free_netdev(ndev);
 }
 
-- 
2.7.4



Re: [PATCH v3 3/3] net: ethernet: ti: cpts: fix tx timestamping timeout

2017-07-28 Thread Ivan Khoronzhuk
On Wed, Jul 26, 2017 at 05:11:38PM -0500, Grygorii Strashko wrote:
> With the low speed Ethernet connection CPDMA notification about packet
> processing can be received before CPTS TX timestamp event, which is set
> when packet actually left CPSW while cpdma notification is sent when packet
> pushed in CPSW fifo.  As result, when connection is slow and CPU is fast
> enough TX timestamping is not working properly.
> 
> Fix it, by introducing TX SKB queue to store PTP SKBs for which Ethernet
> Transmit Event hasn't been received yet and then re-check this queue
> with new Ethernet Transmit Events by scheduling CPTS overflow
> work more often (every 1 jiffies) until TX SKB queue is not empty.
> 
> Side effect of this change is:
>  - User space tools require to take into account possible delay in TX
> timestamp processing (for example ptp4l works with tx_timestamp_timeout=400
> under net traffic and tx_timestamp_timeout=25 in idle).
> 
> Signed-off-by: Grygorii Strashko 


> ---
>  drivers/net/ethernet/ti/cpts.c | 86 
> --
>  drivers/net/ethernet/ti/cpts.h |  1 +
>  2 files changed, 84 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ti/cpts.c b/drivers/net/ethernet/ti/cpts.c
> index 3ed438e..c2121d2 100644
> --- a/drivers/net/ethernet/ti/cpts.c
> +++ b/drivers/net/ethernet/ti/cpts.c
> @@ -31,9 +31,18 @@
>  
>  #include "cpts.h"
>  
> +#define CPTS_SKB_TX_WORK_TIMEOUT 1 /* jiffies */
> +
> +struct cpts_skb_cb_data {
> + unsigned long tmo;
> +};
> +
>  #define cpts_read32(c, r)		readl_relaxed(&c->reg->r)
>  #define cpts_write32(c, v, r)	writel_relaxed(v, &c->reg->r)
>  
> +static int cpts_match(struct sk_buff *skb, unsigned int ptp_class,
> +   u16 ts_seqid, u8 ts_msgtype);
> +
>  static int event_expired(struct cpts_event *event)
>  {
>   return time_after(jiffies, event->tmo);
> @@ -77,6 +86,47 @@ static int cpts_purge_events(struct cpts *cpts)
>   return removed ? 0 : -1;
>  }
>  
> +static bool cpts_match_tx_ts(struct cpts *cpts, struct cpts_event *event)
> +{
> + struct sk_buff *skb, *tmp;
> + u16 seqid;
> + u8 mtype;
> + bool found = false;
> +
> + mtype = (event->high >> MESSAGE_TYPE_SHIFT) & MESSAGE_TYPE_MASK;
> + seqid = (event->high >> SEQUENCE_ID_SHIFT) & SEQUENCE_ID_MASK;
> +
> + /* no need to grab txq.lock as access is always done under cpts->lock */
> + skb_queue_walk_safe(&cpts->txq, skb, tmp) {
> + struct skb_shared_hwtstamps ssh;
> + unsigned int class = ptp_classify_raw(skb);
> + struct cpts_skb_cb_data *skb_cb =
> + (struct cpts_skb_cb_data *)skb->cb;
> +
> + if (cpts_match(skb, class, seqid, mtype)) {
> + u64 ns = timecounter_cyc2time(&cpts->tc, event->low);
> +
> + memset(&ssh, 0, sizeof(ssh));
> + ssh.hwtstamp = ns_to_ktime(ns);
> + skb_tstamp_tx(skb, &ssh);
> + found = true;
> + __skb_unlink(skb, &cpts->txq);
> + dev_consume_skb_any(skb);
> + dev_dbg(cpts->dev, "match tx timestamp mtype %u seqid 
> %04x\n",
> + mtype, seqid);
> + } else if (time_after(jiffies, skb_cb->tmo)) {
> + /* timeout any expired skbs over 1s */
> + dev_dbg(cpts->dev,
> + "expiring tx timestamp mtype %u seqid %04x\n",
> + mtype, seqid);
> + __skb_unlink(skb, &cpts->txq);
> + dev_consume_skb_any(skb);
> + }
> + }
> +
> + return found;
> +}
> +
>  /*
>   * Returns zero if matching event type was found.
>   */
> @@ -101,9 +151,15 @@ static int cpts_fifo_read(struct cpts *cpts, int match)
>   event->low = lo;
>   type = event_type(event);
>   switch (type) {
> + case CPTS_EV_TX:
> + if (cpts_match_tx_ts(cpts, event)) {
> + /* if the new event matches an existing skb,
> +  * then don't queue it
> +  */
> + break;
> + }
>   case CPTS_EV_PUSH:
>   case CPTS_EV_RX:
> - case CPTS_EV_TX:
>   list_del_init(&event->list);
>   list_add_tail(&event->list, &cpts->events);
>   break;
> @@ -229,8 +285,15 @@ static long cpts_overflow_check(struct ptp_clock_info 
> *ptp)
>   struct cpts *cpts = container_of(ptp, struct cpts, info);
>   unsigned long delay = cpts->ov_check_period;
>   struct timespec64 ts;
> + unsigned long flags;
> +
> + spin_lock_irqsave(&cpts->lock, flags);
> + ts = ns_to_timespec64(timecounter_read(&cpts->tc));
> +
> + if (!skb_queue_empty(&cpts->txq))
> + delay = 
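
For reference, the ptp4l tuning mentioned in the commit message above lives in the
ptp4l configuration file; an illustrative fragment (option name per linuxptp, value
as quoted for the under-load case):

	[global]
	# allow up to 400 ms for the TX timestamp to arrive under traffic
	tx_timestamp_timeout	400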

Re: [RFC/RFT PATCH 2/4] net: ethernat: ti: cpts: enable irq

2017-07-03 Thread Ivan Khoronzhuk
On Mon, Jul 03, 2017 at 02:31:06PM -0500, Grygorii Strashko wrote:
> 
> 
> On 06/30/2017 08:31 PM, Ivan Khoronzhuk wrote:
> > On Tue, Jun 13, 2017 at 06:16:21PM -0500, Grygorii Strashko wrote:
> >> There are two reasons for this change:
> >> 1) enabling of HW_TS_PUSH events as suggested by Richard Cochran and
> >> discussed in [1]
> >> 2) fixing an TX timestamping miss issue which happens with low speed
> >> ethernet connections and was reproduced on am57xx and am335x boards.
> >> Issue description: With the low Ethernet connection speed CPDMA 
> >> notification
> >> about packet processing can be received before CPTS TX timestamp event,
> >> which is sent when packet actually left CPSW while cpdma notification is
> >> sent when packet pushed in CPSW fifo.  As result, when connection is slow
> >> and CPU is fast enough TX timestamp can be missed and not working properly.
> >>
> >> This patch converts CPTS driver to use IRQ instead of polling in the
> >> following way:
> >>
> >>   - CPTS_EV_PUSH: CPTS_EV_PUSH is used to get current CPTS counter value 
> >> and
> >> triggered from PTP callbacks and cpts_overflow_check() work. With this
> >> change current CPTS counter value will be read in IRQ handler and saved in
> >> CPTS context "cur_timestamp" field. The compeltion event will be signalled 
> >> to the
> >> requestor. The timecounter->read() will just read saved value. Access to
> >> the "cur_timestamp" is protected by mutex "ptp_clk_mutex".
> >>
> >> cpts_get_time:
> >>reinit_completion(>ts_push_complete);
> >>cpts_write32(cpts, TS_PUSH, ts_push);
> >>wait_for_completion_interruptible_timeout(>ts_push_complete, HZ);
> >>ns = timecounter_read(>tc);
> >>
> >> cpts_irq:
> >>case CPTS_EV_PUSH:
> >>cpts->cur_timestamp = lo;
> >>complete(>ts_push_complete);
> >>
> >> - CPTS_EV_TX: signals when CPTS timestamp is ready for valid TX PTP
> >> packets. The TX timestamp is requested from cpts_tx_timestamp() which is
> >> called for each transmitted packet from NAPI cpsw_tx_poll() callback. With
> >> this change, CPTS event queue will be checked for existing CPTS_EV_TX
> >> event, corresponding to the current TX packet, and if event is not found - 
> >> packet
> >> will be placed in CPTS TX packet queue for later processing. CPTS TX packet
> >> queue will be processed from hi-priority cpts_ts_work() work which is 
> >> scheduled
> >> as from cpts_tx_timestamp() as from CPTS IRQ handler when CPTS_EV_TX event
> >> is received.
> >>
> >> cpts_tx_timestamp:
> >>   check if packet is PTP packet
> >>   try to find corresponding CPTS_EV_TX event
> >> if found: report timestamp
> >> if not found: put packet in TX queue, schedule cpts_ts_work()
> > I've not read patch itself yet, but why schedule is needed if timestamp is 
> > not
> > found? Anyway it is scheduled with irq when timestamp arrives. It's rather 
> > should
> > be scheduled if timestamp is found,
> 
> CPTS IRQ, cpts_ts_work and Net SoftIRQ processing might happen on
> different CPUs, as result - CPTS IRQ will detect TX event and schedule 
> cpts_ts_work on
> one CPU and this work might race with SKB processing in Net SoftIRQ on 
> another, so
> both SKB and CPTS TX event might be queued, but no cpts_ts_work scheduled 
> until
> next CPTS event is received (worst case for cpts_overflow_check period).

Couldn't it be better to put the packet in the TX/RX queue under cpts->lock?
Then, probably, there is no need to schedule the work from rx/tx timestamping, and
potentially cpts_ts_work() will not be scheduled twice. I know it makes the IRQ
handler wait a little, but it waits anyway while the Net SoftIRQ retrieves the ts.

> 
> Situation became even more complex on RT kernel where everything is
> executed in kthread contexts.
> 
> > 
> >>
> >> cpts_irq:
> >>   case CPTS_EV_TX:
> >>   put event in CPTS event queue
> >>   schedule cpts_ts_work()
> >>
> >> cpts_ts_work:
> >> for each packet in  CPTS TX packet queue
> >> try to find corresponding CPTS_EV_TX event
> >> if found: report timestamp
> >> if timeout: drop packet
> >>
> >> - CPTS_EV_RX: signals when CPTS timestamp is ready for valid RX PTP
> >> packets. The RX timestamp is requested from cpts_rx_timestamp() which is
> >> called for each rece

Re: [RFC/RFT PATCH 2/4] net: ethernat: ti: cpts: enable irq

2017-06-30 Thread Ivan Khoronzhuk
On Tue, Jun 13, 2017 at 06:16:21PM -0500, Grygorii Strashko wrote:
> There are two reasons for this change:
> 1) enabling of HW_TS_PUSH events as suggested by Richard Cochran and
> discussed in [1]
> 2) fixing an TX timestamping miss issue which happens with low speed
> ethernet connections and was reproduced on am57xx and am335x boards.
> Issue description: With the low Ethernet connection speed CPDMA notification
> about packet processing can be received before CPTS TX timestamp event,
> which is sent when packet actually left CPSW while cpdma notification is
> sent when packet pushed in CPSW fifo.  As result, when connection is slow
> and CPU is fast enough TX timestamp can be missed and not working properly.
> 
> This patch converts CPTS driver to use IRQ instead of polling in the
> following way:
> 
>  - CPTS_EV_PUSH: CPTS_EV_PUSH is used to get current CPTS counter value and
> triggered from PTP callbacks and cpts_overflow_check() work. With this
> change current CPTS counter value will be read in IRQ handler and saved in
> CPTS context "cur_timestamp" field. The compeltion event will be signalled to 
> the
> requestor. The timecounter->read() will just read saved value. Access to
> the "cur_timestamp" is protected by mutex "ptp_clk_mutex".
> 
> cpts_get_time:
>   reinit_completion(&cpts->ts_push_complete);
>   cpts_write32(cpts, TS_PUSH, ts_push);
>   wait_for_completion_interruptible_timeout(&cpts->ts_push_complete, HZ);
>   ns = timecounter_read(&cpts->tc);
> 
> cpts_irq:
>   case CPTS_EV_PUSH:
>   cpts->cur_timestamp = lo;
>   complete(&cpts->ts_push_complete);
> 
> - CPTS_EV_TX: signals when CPTS timestamp is ready for valid TX PTP
> packets. The TX timestamp is requested from cpts_tx_timestamp() which is
> called for each transmitted packet from NAPI cpsw_tx_poll() callback. With
> this change, CPTS event queue will be checked for existing CPTS_EV_TX
> event, corresponding to the current TX packet, and if event is not found - 
> packet
> will be placed in CPTS TX packet queue for later processing. CPTS TX packet
> queue will be processed from hi-priority cpts_ts_work() work which is 
> scheduled
> as from cpts_tx_timestamp() as from CPTS IRQ handler when CPTS_EV_TX event
> is received.
> 
> cpts_tx_timestamp:
>  check if packet is PTP packet
>  try to find corresponding CPTS_EV_TX event
>if found: report timestamp
>if not found: put packet in TX queue, schedule cpts_ts_work()
I've not read the patch itself yet, but why is a schedule needed if the timestamp
is not found? It is scheduled from the irq anyway when the timestamp arrives. It
should rather be scheduled if the timestamp is found,

> 
> cpts_irq:
>  case CPTS_EV_TX:
>  put event in CPTS event queue
>  schedule cpts_ts_work()
> 
> cpts_ts_work:
>for each packet in  CPTS TX packet queue
>try to find corresponding CPTS_EV_TX event
>if found: report timestamp
>if timeout: drop packet
> 
> - CPTS_EV_RX: signals when CPTS timestamp is ready for valid RX PTP
> packets. The RX timestamp is requested from cpts_rx_timestamp() which is
> called for each received packet from NAPI cpsw_rx_poll() callback. With
> this change, CPTS event queue will be checked for existing CPTS_EV_RX
> event, corresponding to the current RX packet, and if event is not found - 
> packet
> will be placed in CPTS RX packet queue for later processing. CPTS RX packet
> queue will be processed from hi-priority cpts_ts_work() work which is 
> scheduled
> as from cpts_rx_timestamp() as from CPTS IRQ handler when CPTS_EV_RX event
> is received. cpts_rx_timestamp() has been updated to return failure in case
> of RX timestamp processing delaying and, in such cases, caller of
> cpts_rx_timestamp() should not call netif_receive_skb().
It's very similar to the tx path, but according to the targets of the patch the fix
is needed for tx only, so why does rx use the same approach? Does rx have the same
issue? If so, how does it happen that the delay which causes the race for a tx
packet would also allow a race for an rx packet?
tx : send packet -> tx poll (no ts) -> latency -> hw timestamp (race)
rx : hw timestamp -> latency -> rx poll (ts) -> rx packet (no race)

Is it just to be consistent, or is the race really present?

> 
> cpts_rx_timestamp:
>  check if packet is PTP packet
>  try to find corresponding CPTS_EV_RX event
>if found: report timestamp, return success
>if not found: put packet in RX queue, schedule cpts_ts_work(),
>  return failure
> 
> cpts_irq:
>  case CPTS_EV_RX:
>  put event in CPTS event queue
>  schedule cpts_ts_work()
> 
> cpts_ts_work:
>for each packet in  CPTS RX packet queue
>try to find corresponding CPTS_EV_RX event
>if found: add timestamp and report packet netif_receive_skb()
>if timeout: drop packet
> 
> there are some statistic added in cpsw_get_strings() for debug purposes.
> 
> User space tools (like ptp4l) might require to take into account possible
> delay in timestamp reception (for example ptp4l works with
> tx_timestamp_timeout=100) as TX 

[PATCH net-next 1/3] net: ethernet: ti: cpsw: move skb timestamp to packet_submit

2017-06-27 Thread Ivan Khoronzhuk
Move sw timestamp function close to channel submit function.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index b7a0f5e..422994e 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1236,6 +1236,7 @@ static inline int cpsw_tx_packet_submit(struct cpsw_priv 
*priv,
 {
struct cpsw_common *cpsw = priv->cpsw;
 
+   skb_tx_timestamp(skb);
return cpdma_chan_submit(txch, skb, skb->data, skb->len,
 priv->emac_port + cpsw->data.dual_emac);
 }
@@ -1611,8 +1612,6 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff 
*skb,
cpts_is_tx_enabled(cpsw->cpts))
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
 
-   skb_tx_timestamp(skb);
-
q_idx = skb_get_queue_mapping(skb);
if (q_idx >= cpsw->tx_ch_num)
q_idx = q_idx % cpsw->tx_ch_num;
-- 
2.7.4



[PATCH net-next 3/3] net: ethernet: ti: netcp_ethss: use cpts to check if packet needs timestamping

2017-06-27 Thread Ivan Khoronzhuk
There is a cpts function to check whether a packet can be timestamped with cpts.
It seems that ptp_classify_raw covers all the cases listed with "case".

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/netcp_ethss.c | 18 +-
 1 file changed, 1 insertion(+), 17 deletions(-)

diff --git a/drivers/net/ethernet/ti/netcp_ethss.c 
b/drivers/net/ethernet/ti/netcp_ethss.c
index 0847a8f..28cb38a 100644
--- a/drivers/net/ethernet/ti/netcp_ethss.c
+++ b/drivers/net/ethernet/ti/netcp_ethss.c
@@ -2503,24 +2503,8 @@ static bool gbe_need_txtstamp(struct gbe_intf *gbe_intf,
  const struct netcp_packet *p_info)
 {
struct sk_buff *skb = p_info->skb;
-   unsigned int class = ptp_classify_raw(skb);
 
-   if (class == PTP_CLASS_NONE)
-   return false;
-
-   switch (class) {
-   case PTP_CLASS_V1_IPV4:
-   case PTP_CLASS_V1_IPV6:
-   case PTP_CLASS_V2_IPV4:
-   case PTP_CLASS_V2_IPV6:
-   case PTP_CLASS_V2_L2:
-   case (PTP_CLASS_V2_VLAN | PTP_CLASS_L2):
-   case (PTP_CLASS_V2_VLAN | PTP_CLASS_IPV4):
-   case (PTP_CLASS_V2_VLAN | PTP_CLASS_IPV6):
-   return true;
-   }
-
-   return false;
+   return cpts_can_timestamp(gbe_intf->gbe_dev->cpts, skb);
 }
 
 static int gbe_txtstamp_mark_pkt(struct gbe_intf *gbe_intf,
-- 
2.7.4



[PATCH net-next 2/3] net: ethernet: ti: cpsw: fix sw timestamping for non PTP packets

2017-06-27 Thread Ivan Khoronzhuk
The cpts can timestamp only PTP packets at the moment, so the driver
cannot mark every packet as though it's going to be timestamped
only because h/w timestamping for the given skb is enabled with
SKBTX_HW_TSTAMP. That doesn't allow sw timestamping to be used; as a
result an outgoing packet is not timestamped at all if it's not PTP and
h/w timestamping is enabled. So fix it by setting SKBTX_IN_PROGRESS
only for PTP packets.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c |  3 ++-
 drivers/net/ethernet/ti/cpts.h | 16 
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 422994e..1850e34 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1598,6 +1598,7 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff 
*skb,
 {
struct cpsw_priv *priv = netdev_priv(ndev);
struct cpsw_common *cpsw = priv->cpsw;
+   struct cpts *cpts = cpsw->cpts;
struct netdev_queue *txq;
struct cpdma_chan *txch;
int ret, q_idx;
@@ -1609,7 +1610,7 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff 
*skb,
}
 
if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP &&
-   cpts_is_tx_enabled(cpsw->cpts))
+   cpts_is_tx_enabled(cpts) && cpts_can_timestamp(cpts, skb))
skb_shinfo(skb)->tx_flags |= SKBTX_IN_PROGRESS;
 
q_idx = skb_get_queue_mapping(skb);
diff --git a/drivers/net/ethernet/ti/cpts.h b/drivers/net/ethernet/ti/cpts.h
index c96eca2..01ea82b 100644
--- a/drivers/net/ethernet/ti/cpts.h
+++ b/drivers/net/ethernet/ti/cpts.h
@@ -30,6 +30,7 @@
 #include 
 #include 
 #include 
+#include 
 #include 
 
 struct cpsw_cpts {
@@ -155,6 +156,16 @@ static inline bool cpts_is_tx_enabled(struct cpts *cpts)
return !!cpts->tx_enable;
 }
 
+static inline bool cpts_can_timestamp(struct cpts *cpts, struct sk_buff *skb)
+{
+   unsigned int class = ptp_classify_raw(skb);
+
+   if (class == PTP_CLASS_NONE)
+   return false;
+
+   return true;
+}
+
 #else
 struct cpts;
 
@@ -203,6 +214,11 @@ static inline bool cpts_is_tx_enabled(struct cpts *cpts)
 {
return false;
 }
+
+static inline bool cpts_can_timestamp(struct cpts *cpts, struct sk_buff *skb)
+{
+   return false;
+}
 #endif
 
 
-- 
2.7.4



[PATCH net-next 0/3] fix sw timestamping for non PTP packets

2017-06-27 Thread Ivan Khoronzhuk
This series contains several corrections connected with timestamping
for cpsw and netcp drivers based on same cpts module.

Based on net/next

Ivan Khoronzhuk (3):
  net: ethernet: ti: cpsw: move skb timestamp to packet_submit
  net: ethernet: ti: cpsw: fix sw timestamping for non PTP packets
  net: ethernet: ti: netcp_ethss: use cpts to check if packet needs
timestamping

 drivers/net/ethernet/ti/cpsw.c|  6 +++---
 drivers/net/ethernet/ti/cpts.h| 16 
 drivers/net/ethernet/ti/netcp_ethss.c | 18 +-
 3 files changed, 20 insertions(+), 20 deletions(-)

-- 
2.7.4



[PATCH] net: ethernet: ti: netcp_core: return error while dma channel open issue

2017-05-10 Thread Ivan Khoronzhuk
Fix the error path for the dma open channel issue. Also, there is no need to
check the output for NULL if NULL is never returned.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next

 drivers/net/ethernet/ti/netcp_core.c | 6 --
 drivers/soc/ti/knav_dma.c| 2 +-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/ti/netcp_core.c 
b/drivers/net/ethernet/ti/netcp_core.c
index 729a7da..e6222e5 100644
--- a/drivers/net/ethernet/ti/netcp_core.c
+++ b/drivers/net/ethernet/ti/netcp_core.c
@@ -1353,9 +1353,10 @@ int netcp_txpipe_open(struct netcp_tx_pipe *tx_pipe)
 
tx_pipe->dma_channel = knav_dma_open_channel(dev,
tx_pipe->dma_chan_name, &config);
-   if (IS_ERR_OR_NULL(tx_pipe->dma_channel)) {
+   if (IS_ERR(tx_pipe->dma_channel)) {
dev_err(dev, "failed opening tx chan(%s)\n",
tx_pipe->dma_chan_name);
+   ret = PTR_ERR(tx_pipe->dma_channel);
goto err;
}
 
@@ -1673,9 +1674,10 @@ static int netcp_setup_navigator_resources(struct 
net_device *ndev)
 
netcp->rx_channel = knav_dma_open_channel(netcp->netcp_device->device,
netcp->dma_chan_name, &config);
-   if (IS_ERR_OR_NULL(netcp->rx_channel)) {
+   if (IS_ERR(netcp->rx_channel)) {
dev_err(netcp->ndev_dev, "failed opening rx chan(%s\n",
netcp->dma_chan_name);
+   ret = PTR_ERR(netcp->rx_channel);
goto fail;
}
 
diff --git a/drivers/soc/ti/knav_dma.c b/drivers/soc/ti/knav_dma.c
index ecebe2e..026182d 100644
--- a/drivers/soc/ti/knav_dma.c
+++ b/drivers/soc/ti/knav_dma.c
@@ -413,7 +413,7 @@ static int of_channel_match_helper(struct device_node *np, 
const char *name,
  * @name:  slave channel name
  * @config:dma configuration parameters
  *
- * Returns pointer to appropriate DMA channel on success or NULL.
+ * Returns pointer to appropriate DMA channel on success or error.
  */
 void *knav_dma_open_channel(struct device *dev, const char *name,
struct knav_dma_cfg *config)
-- 
2.7.4



[PATCH] net: ethernet: ti: netcp_core: remove unused compl queue mapping

2017-04-24 Thread Ivan Khoronzhuk
This code is unused and probably was unintentionally left while
moving completion queue mapping in submit function.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next/master

 drivers/net/ethernet/ti/netcp_core.c | 4 
 1 file changed, 4 deletions(-)

diff --git a/drivers/net/ethernet/ti/netcp_core.c 
b/drivers/net/ethernet/ti/netcp_core.c
index 9027c9c..729a7da 100644
--- a/drivers/net/ethernet/ti/netcp_core.c
+++ b/drivers/net/ethernet/ti/netcp_core.c
@@ -1134,7 +1134,6 @@ netcp_tx_map_skb(struct sk_buff *skb, struct netcp_intf 
*netcp)
u32 buf_len = skb_frag_size(frag);
dma_addr_t desc_dma;
u32 desc_dma_32;
-   u32 pkt_info;
 
dma_addr = dma_map_page(dev, page, page_offset, buf_len,
DMA_TO_DEVICE);
@@ -1151,9 +1150,6 @@ netcp_tx_map_skb(struct sk_buff *skb, struct netcp_intf 
*netcp)
}
 
desc_dma = knav_pool_desc_virt_to_dma(netcp->tx_pool, ndesc);
-   pkt_info =
-   (netcp->tx_compl_qid & KNAV_DMA_DESC_RETQ_MASK) <<
-   KNAV_DMA_DESC_RETQ_SHIFT;
set_pkt_info(dma_addr, buf_len, 0, ndesc);
desc_dma_32 = (u32)desc_dma;
set_words(&desc_dma_32, 1, &pdesc->next_desc);
-- 
2.7.4



[PATCH] net: ethernet: ti: cpsw: correct ale dev to cpsw

2017-02-15 Thread Ivan Khoronzhuk
The ale is a property of cpsw, so change dev to cpsw->dev,
aka pdev->dev, to be consistent.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next/master

 drivers/net/ethernet/ti/cpsw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index e86f226..57c8308 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -3032,7 +3032,7 @@ static int cpsw_probe(struct platform_device *pdev)
goto clean_dma_ret;
}
 
-   ale_params.dev  = &ndev->dev;
+   ale_params.dev  = &pdev->dev;
ale_params.ale_ageout   = ale_ageout;
ale_params.ale_entries  = data->ale_entries;
ale_params.ale_ports= data->slaves;
-- 
2.7.4



[PATCH] net: ethernet: ti: cpsw: use var instead of func for usage count

2017-02-14 Thread Ivan Khoronzhuk
The usage count function is based on the ndev_running flag, which is
updated before calling ndo_open/close; but if the ndo is called from
another place, as with suspend/resume, the counter is not changed,
and that breaks suspend/resume. For a common resource it makes no
difference which device is using it, only the device count matters.
So, replace the usage count function with a variable and increment
and decrement it in ndo_open/close.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next/master

 drivers/net/ethernet/ti/cpsw.c | 42 --
 1 file changed, 12 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 503fa8a..e86f226 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -399,6 +399,7 @@ struct cpsw_common {
struct cpts *cpts;
int rx_ch_num, tx_ch_num;
int speed;
+   int usage_count;
 };
 
 struct cpsw_priv {
@@ -671,18 +672,6 @@ static void cpsw_intr_disable(struct cpsw_common *cpsw)
return;
 }
 
-static int cpsw_get_usage_count(struct cpsw_common *cpsw)
-{
-   u32 i;
-   u32 usage_count = 0;
-
-   for (i = 0; i < cpsw->data.slaves; i++)
-   if (cpsw->slaves[i].ndev && netif_running(cpsw->slaves[i].ndev))
-   usage_count++;
-
-   return usage_count;
-}
-
 static void cpsw_tx_handler(void *token, int len, int status)
 {
struct netdev_queue *txq;
@@ -716,8 +705,7 @@ static void cpsw_rx_handler(void *token, int len, int 
status)
 
if (unlikely(status < 0) || unlikely(!netif_running(ndev))) {
/* In dual emac mode check for all interfaces */
-   if (cpsw->data.dual_emac &&
-   cpsw_get_usage_count(cpsw) &&
+   if (cpsw->data.dual_emac && cpsw->usage_count &&
(status >= 0)) {
/* The packet received is for the interface which
 * is already down and the other interface is up
@@ -1492,11 +1480,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
 CPSW_MAJOR_VERSION(reg), CPSW_MINOR_VERSION(reg),
 CPSW_RTL_VERSION(reg));
 
-   /* Initialize host and slave ports.
-* Given ndev is marked as opened already, so init port only if 1 ndev
-* is opened
-*/
-   if (cpsw_get_usage_count(cpsw) < 2)
+   /* Initialize host and slave ports */
+   if (!cpsw->usage_count)
cpsw_init_host_port(priv);
for_each_slave(priv, cpsw_slave_open, priv);
 
@@ -1507,10 +1492,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
cpsw_ale_add_vlan(cpsw->ale, cpsw->data.default_vlan,
  ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
 
-   /* Given ndev is marked as opened already, so if more ndev
-* are opened - no need to init shared resources.
-*/
-   if (cpsw_get_usage_count(cpsw) < 2) {
+   /* initialize shared resources for every ndev */
+   if (!cpsw->usage_count) {
/* disable priority elevation */
__raw_writel(0, &cpsw->regs->ptype);
 
@@ -1552,6 +1535,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
 
cpdma_ctlr_start(cpsw->dma);
cpsw_intr_enable(cpsw);
+   cpsw->usage_count++;
 
return 0;
 
@@ -1572,10 +1556,7 @@ static int cpsw_ndo_stop(struct net_device *ndev)
netif_tx_stop_all_queues(priv->ndev);
netif_carrier_off(priv->ndev);
 
-   /* Given ndev is marked as close already,
-* so disable shared resources if no open devices
-*/
-   if (!cpsw_get_usage_count(cpsw)) {
+   if (cpsw->usage_count <= 1) {
napi_disable(&cpsw->napi_rx);
napi_disable(&cpsw->napi_tx);
cpts_unregister(cpsw->cpts);
@@ -1588,6 +1569,7 @@ static int cpsw_ndo_stop(struct net_device *ndev)
if (cpsw_need_resplit(cpsw))
cpsw_split_res(ndev);
 
+   cpsw->usage_count--;
pm_runtime_put_sync(cpsw->dev);
return 0;
 }
@@ -2393,7 +2375,7 @@ static int cpsw_resume_data_pass(struct net_device *ndev)
netif_dormant_off(slave->ndev);
 
/* After this receive is started */
-   if (cpsw_get_usage_count(cpsw)) {
+   if (cpsw->usage_count) {
ret = cpsw_fill_rx_channels(priv);
if (ret)
return ret;
@@ -2447,7 +2429,7 @@ static int cpsw_set_channels(struct net_device *ndev,
}
}
 
-   if (cpsw_get_usage_count(cpsw))
+   if (cpsw->usage_count)
cpsw_split_res(ndev);
 
ret = cpsw_resume_data_pass(ndev);

[PATCH v2] net: ethernet: ti: cpsw: fix cpsw assignment in resume

2017-02-14 Thread Ivan Khoronzhuk
There is a copy-paste error which hides the breakage of resume for the
CPSW driver: netdev_priv() was replaced with ndev_to_cpsw(ndev) in
suspend, but was left unchanged in resume.

Fixes: 606f39939595a4d4540406bfc11f265b2036af6d
(ti: cpsw: move platform data and slaves info to cpsw_common)

Reported-by: Alexey Starikovskiy <astarikovs...@topcon.com>
Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net/master

v1:
https://patchwork.ozlabs.org/patch/725876/

Since v1:
- rebased on net/master

 drivers/net/ethernet/ti/cpsw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index b203143..6508822 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -3160,7 +3160,7 @@ static int cpsw_resume(struct device *dev)
 {
struct platform_device  *pdev = to_platform_device(dev);
struct net_device   *ndev = platform_get_drvdata(pdev);
-   struct cpsw_common  *cpsw = netdev_priv(ndev);
+   struct cpsw_common  *cpsw = ndev_to_cpsw(ndev);
 
/* Select default pin state */
pinctrl_pm_select_default_state(dev);
-- 
2.7.4



[PATCH] net: ethernet: ti: cpsw: return NET_XMIT_DROP if skb_padto failed

2017-02-10 Thread Ivan Khoronzhuk
If skb_padto failed, the skb has already been dropped, so it was
consumed; but that doesn't mean it was sent, thus there is no need to
update the queue tx time, etc. So return NET_XMIT_DROP as the more
appropriate value.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next/master

 drivers/net/ethernet/ti/cpsw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 4d1c0c3..503fa8a 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1604,7 +1604,7 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff 
*skb,
if (skb_padto(skb, CPSW_MIN_PACKET_SIZE)) {
cpsw_err(priv, tx_err, "packet pad failed\n");
ndev->stats.tx_dropped++;
-   return NETDEV_TX_OK;
+   return NET_XMIT_DROP;
}
 
if (skb_shinfo(skb)->tx_flags & SKBTX_HW_TSTAMP &&
-- 
2.7.4



Re: [PATCH] net: ethernet: ti: netcp_core: return netdev_tx_t in xmit

2017-02-10 Thread Ivan Khoronzhuk
On Fri, Feb 10, 2017 at 02:45:21PM -0500, David Miller wrote:
> From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> Date: Thu,  9 Feb 2017 16:24:14 +0200
> 
> > @@ -1300,7 +1301,7 @@ static int netcp_ndo_start_xmit(struct sk_buff *skb, 
> > struct net_device *ndev)
> > dev_warn(netcp->ndev_dev, "padding failed (%d), packet 
> > dropped\n",
> >  ret);
> > tx_stats->tx_dropped++;
> > -   return ret;
> > +   return NETDEV_TX_BUSY;
> > }
> > skb->len = NETCP_MIN_PACKET_SIZE;
> > }
> > @@ -1329,7 +1330,7 @@ static int netcp_ndo_start_xmit(struct sk_buff *skb, 
> > struct net_device *ndev)
> > if (desc)
> > netcp_free_tx_desc_chain(netcp, desc, sizeof(*desc));
> > dev_kfree_skb(skb);
> > -   return ret;
> > +   return NETDEV_TX_BUSY;
> >  }
> 
> I really think these should be returning NET_XMIT_DROP.

Yes, it seems there can be a few more changes then; I will send a new version
later.
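
A hedged sketch of the drop path those return values describe (hypothetical
fragment, not the actual netcp code): NETDEV_TX_BUSY asks the stack to requeue the
skb, while NET_XMIT_DROP means the driver has already consumed it:

	/* error path after a failed submit: the skb is freed here, so the
	 * stack must not retry it (unlike NETDEV_TX_BUSY, which requeues)
	 */
	tx_stats->tx_dropped++;
	dev_kfree_skb(skb);
	return NET_XMIT_DROP;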


Re: [PATCH 0/2] net: ethernet: ti: cpsw: fix susp/resume

2017-02-10 Thread Ivan Khoronzhuk
On Fri, Feb 10, 2017 at 12:05:07PM -0600, Grygorii Strashko wrote:
> 
> 
> On 02/09/2017 07:45 PM, David Miller wrote:
> >From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> >Date: Fri, 10 Feb 2017 00:54:24 +0200
> >
> >>On Thu, Feb 09, 2017 at 05:21:26PM -0500, David Miller wrote:
> >>>From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> >>>Date: Thu,  9 Feb 2017 02:07:34 +0200
> >>>
> >>>>These two patches fix suspend/resume chain.
> >>>
> >>>Patch 2 doesn't apply cleanly to the 'net' tree, please
> >>>respin this series.
> >>
> >>Strange, I've just checked it on net-next/master, it was applied w/o any
> >>warnings.
> >
> >It makes no sense to test "net-next" when I am telling you that it is
> >the "net" tree it doesn't apply to.
> >
> >This is a bug fix, so it should be targetting the "net" tree.
> >
> 
> Looks like the first fix is for net, but the second one is for net-next
> I do not see
> 03fd01ad0eead23eb79294b6fb4d71dcac493855
> "net: ethernet: ti: cpsw: don't duplicate ndev_running"
> in net.

There is a dependency: both are for net-next, and only the first one is for the net tree.

> 
> -- 
> regards,
> -grygorii


Re: [PATCH 0/2] net: ethernet: ti: cpsw: fix susp/resume

2017-02-09 Thread Ivan Khoronzhuk
On Thu, Feb 09, 2017 at 05:21:26PM -0500, David Miller wrote:
> From: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> Date: Thu,  9 Feb 2017 02:07:34 +0200
> 
> > These two patches fix suspend/resume chain.
> 
> Patch 2 doesn't apply cleanly to the 'net' tree, please
> respin this series.

Strange, I've just checked it on net-next/master, it was applied w/o any
warnings.


[PATCH] net: ethernet: ti: netcp_core: return netdev_tx_t in xmit

2017-02-09 Thread Ivan Khoronzhuk
The _xmit function should return netdev_tx_t type.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next/master

 drivers/net/ethernet/ti/netcp_core.c | 7 ---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/ti/netcp_core.c 
b/drivers/net/ethernet/ti/netcp_core.c
index 7c7ae08..5845ad8 100644
--- a/drivers/net/ethernet/ti/netcp_core.c
+++ b/drivers/net/ethernet/ti/netcp_core.c
@@ -1280,7 +1280,8 @@ static int netcp_tx_submit_skb(struct netcp_intf *netcp,
 }
 
 /* Submit the packet */
-static int netcp_ndo_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+static netdev_tx_t netcp_ndo_start_xmit(struct sk_buff *skb,
+   struct net_device *ndev)
 {
struct netcp_intf *netcp = netdev_priv(ndev);
	struct netcp_stats *tx_stats = &netcp->stats;
@@ -1300,7 +1301,7 @@ static int netcp_ndo_start_xmit(struct sk_buff *skb, 
struct net_device *ndev)
dev_warn(netcp->ndev_dev, "padding failed (%d), packet 
dropped\n",
 ret);
tx_stats->tx_dropped++;
-   return ret;
+   return NETDEV_TX_BUSY;
}
skb->len = NETCP_MIN_PACKET_SIZE;
}
@@ -1329,7 +1330,7 @@ static int netcp_ndo_start_xmit(struct sk_buff *skb, 
struct net_device *ndev)
if (desc)
netcp_free_tx_desc_chain(netcp, desc, sizeof(*desc));
dev_kfree_skb(skb);
-   return ret;
+   return NETDEV_TX_BUSY;
 }
 
 int netcp_txpipe_close(struct netcp_tx_pipe *tx_pipe)
-- 
2.7.4



[PATCH] net: ethernet: ti: netcp_core: remove netif_trans_update

2017-02-09 Thread Ivan Khoronzhuk
No need to update jiffies in txq->trans_start twice, and only for tx queue 0
at that; this is supposed to be done in netdev_start_xmit(), per tx queue.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next/master

 drivers/net/ethernet/ti/netcp_core.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/ti/netcp_core.c 
b/drivers/net/ethernet/ti/netcp_core.c
index ebab1473..7c7ae08 100644
--- a/drivers/net/ethernet/ti/netcp_core.c
+++ b/drivers/net/ethernet/ti/netcp_core.c
@@ -1316,8 +1316,6 @@ static int netcp_ndo_start_xmit(struct sk_buff *skb, 
struct net_device *ndev)
if (ret)
goto drop;
 
-   netif_trans_update(ndev);
-
/* Check Tx pool count & stop subqueue if needed */
desc_count = knav_pool_count(netcp->tx_pool);
if (desc_count < netcp->tx_pause_threshold) {
-- 
2.7.4
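
For context, a rough sketch of why the driver-side update is redundant,
paraphrased from the core's netdev_start_xmit() behaviour around this kernel
version (treat the exact helper names as an assumption, not a quote): the
stack refreshes the per-queue trans_start after every successful transmit.

	/* roughly what the core does for each transmitted skb (paraphrased) */
	rc = __netdev_start_xmit(ops, skb, dev, more);
	if (rc == NETDEV_TX_OK)
		txq_trans_update(txq);	/* txq->trans_start = jiffies, per tx queue */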



[PATCH 2/2] net: ethernet: ti: cpsw: fix resume because of usage count

2017-02-08 Thread Ivan Khoronzhuk
The usage count function is based on the ndev_running flag, which is
updated before calling ndo_open/close; but when the ndo is called from
elsewhere, in this case from suspend/resume, the counter does not change,
which breaks suspend/resume. For a common resource it does not matter
which device is using it, only how many devices are. So replace the usage
count function with a variable and increment/decrement it in ndo_open/close.

Fixes: 03fd01ad0eead23eb79294b6fb4d71dcac493855
"net: ethernet: ti: cpsw: don't duplicate ndev_running"

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 42 --
 1 file changed, 12 insertions(+), 30 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 9714fab..1ffaad1 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -399,6 +399,7 @@ struct cpsw_common {
struct cpts *cpts;
int rx_ch_num, tx_ch_num;
int speed;
+   int usage_count;
 };
 
 struct cpsw_priv {
@@ -671,18 +672,6 @@ static void cpsw_intr_disable(struct cpsw_common *cpsw)
return;
 }
 
-static int cpsw_get_usage_count(struct cpsw_common *cpsw)
-{
-   u32 i;
-   u32 usage_count = 0;
-
-   for (i = 0; i < cpsw->data.slaves; i++)
-   if (cpsw->slaves[i].ndev && netif_running(cpsw->slaves[i].ndev))
-   usage_count++;
-
-   return usage_count;
-}
-
 static void cpsw_tx_handler(void *token, int len, int status)
 {
struct netdev_queue *txq;
@@ -716,8 +705,7 @@ static void cpsw_rx_handler(void *token, int len, int 
status)
 
if (unlikely(status < 0) || unlikely(!netif_running(ndev))) {
/* In dual emac mode check for all interfaces */
-   if (cpsw->data.dual_emac &&
-   cpsw_get_usage_count(cpsw) &&
+   if (cpsw->data.dual_emac && cpsw->usage_count &&
(status >= 0)) {
/* The packet received is for the interface which
 * is already down and the other interface is up
@@ -1492,11 +1480,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
 CPSW_MAJOR_VERSION(reg), CPSW_MINOR_VERSION(reg),
 CPSW_RTL_VERSION(reg));
 
-   /* Initialize host and slave ports.
-* Given ndev is marked as opened already, so init port only if 1 ndev
-* is opened
-*/
-   if (cpsw_get_usage_count(cpsw) < 2)
+   /* Initialize host and slave ports */
+   if (!cpsw->usage_count)
cpsw_init_host_port(priv);
for_each_slave(priv, cpsw_slave_open, priv);
 
@@ -1507,10 +1492,8 @@ static int cpsw_ndo_open(struct net_device *ndev)
cpsw_ale_add_vlan(cpsw->ale, cpsw->data.default_vlan,
  ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
 
-   /* Given ndev is marked as opened already, so if more ndev
-* are opened - no need to init shared resources.
-*/
-   if (cpsw_get_usage_count(cpsw) < 2) {
+   /* initialize shared resources for every ndev */
+   if (!cpsw->usage_count) {
/* disable priority elevation */
		__raw_writel(0, &cpsw->regs->ptype);
 
@@ -1552,6 +1535,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
 
cpdma_ctlr_start(cpsw->dma);
cpsw_intr_enable(cpsw);
+   cpsw->usage_count++;
 
return 0;
 
@@ -1572,10 +1556,7 @@ static int cpsw_ndo_stop(struct net_device *ndev)
netif_tx_stop_all_queues(priv->ndev);
netif_carrier_off(priv->ndev);
 
-   /* Given ndev is marked as close already,
-* so disable shared resources if no open devices
-*/
-   if (!cpsw_get_usage_count(cpsw)) {
+   if (cpsw->usage_count <= 1) {
		napi_disable(&cpsw->napi_rx);
		napi_disable(&cpsw->napi_tx);
cpts_unregister(cpsw->cpts);
@@ -1588,6 +1569,7 @@ static int cpsw_ndo_stop(struct net_device *ndev)
if (cpsw_need_resplit(cpsw))
cpsw_split_res(ndev);
 
+   cpsw->usage_count--;
pm_runtime_put_sync(cpsw->dev);
return 0;
 }
@@ -2393,7 +2375,7 @@ static int cpsw_resume_data_pass(struct net_device *ndev)
netif_dormant_off(slave->ndev);
 
/* After this receive is started */
-   if (cpsw_get_usage_count(cpsw)) {
+   if (cpsw->usage_count) {
ret = cpsw_fill_rx_channels(priv);
if (ret)
return ret;
@@ -2447,7 +2429,7 @@ static int cpsw_set_channels(struct net_device *ndev,
}
}
 
-   if (cpsw_get_usage_count(cpsw))
+   if (cpsw->usage_count)
		cpsw_split_res(ndev);

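A compact sketch of the counting scheme introduced above (illustrative, not
the full driver code): a plain shared counter tracks how many ports have
completed ndo_open, so suspend/resume, which call ndo_open/ndo_close directly
without toggling __LINK_STATE_START, still see a correct count.

	static int example_ndo_open(struct net_device *ndev)
	{
		struct cpsw_common *cpsw = ndev_to_cpsw(ndev);

		if (!cpsw->usage_count) {
			/* first opened port: init shared host port, cpdma, irqs */
		}
		cpsw->usage_count++;
		return 0;
	}

	static int example_ndo_stop(struct net_device *ndev)
	{
		struct cpsw_common *cpsw = ndev_to_cpsw(ndev);

		if (cpsw->usage_count <= 1) {
			/* last open port: stop cpdma, disable irqs and napi */
		}
		cpsw->usage_count--;
		return 0;
	}
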
[PATCH 1/2] net: ethernet: ti: cpsw: fix cpsw assignment in resume

2017-02-08 Thread Ivan Khoronzhuk
A copy-paste error hides a resume regression in the CPSW driver:
netdev_priv() was replaced with ndev_to_cpsw(ndev) in suspend, but the
resume path was left unchanged.

Fixes: 606f39939595a4d4540406bfc11f265b2036af6d
(ti: cpsw: move platform data and slaves info to cpsw_common)

Reported-by: Alexey Starikovskiy <astarikovs...@topcon.com>
Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 4d1c0c3..9714fab 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -3225,7 +3225,7 @@ static int cpsw_resume(struct device *dev)
 {
struct platform_device  *pdev = to_platform_device(dev);
struct net_device   *ndev = platform_get_drvdata(pdev);
-   struct cpsw_common  *cpsw = netdev_priv(ndev);
+   struct cpsw_common  *cpsw = ndev_to_cpsw(ndev);
 
/* Select default pin state */
pinctrl_pm_select_default_state(dev);
-- 
2.7.4



[PATCH 0/2] net: ethernet: ti: cpsw: fix susp/resume

2017-02-08 Thread Ivan Khoronzhuk
These two patches fix suspend/resume chain.

Ivan Khoronzhuk (2):
  net: ethernet: ti: cpsw: fix cpsw assignment in resume
  net: ethernet: ti: cpsw: fix resume because of usage count

 drivers/net/ethernet/ti/cpsw.c | 44 +-
 1 file changed, 13 insertions(+), 31 deletions(-)

-- 
2.7.4



[PATCH] net: ethernet: ti: cpsw: remove netif_trans_update

2017-02-06 Thread Ivan Khoronzhuk
No need to update jiffies in txq->trans_start twice; this is supposed to be
done in netdev_start_xmit() and is overwritten anyway. Also, there is no
reason to update the trans time in case of an error.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next/master

 drivers/net/ethernet/ti/cpsw.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 35a95dc..4d1c0c3 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1601,8 +1601,6 @@ static netdev_tx_t cpsw_ndo_start_xmit(struct sk_buff 
*skb,
struct cpdma_chan *txch;
int ret, q_idx;
 
-   netif_trans_update(ndev);
-
if (skb_padto(skb, CPSW_MIN_PACKET_SIZE)) {
cpsw_err(priv, tx_err, "packet pad failed\n");
ndev->stats.tx_dropped++;
-- 
2.7.4



Re: [PATCH] net: ethernet: ti: cpsw: fix NULL pointer dereference in switch mode

2017-02-01 Thread Ivan Khoronzhuk
On Tue, Jan 31, 2017 at 02:04:04PM -0600, Grygorii Strashko wrote:
> In switch mode on struct cpsw_slave->ndev field will be initialized with
> proper value only for the one cpsw slave port, as result
> cpsw_get_usage_count() will generate "Unable to handle kernel NULL pointer
> dereference" exception when first ethernet interface is opening
> cpsw_ndo_open(). This issue causes boot regression on AM335x EVM and
> reproducible on am57xx-evm (switch mode).
> Fix it by adding additional check for !cpsw->slaves[i].ndev in
> cpsw_get_usage_count().
> 
> Cc: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> fixes: 03fd01ad0eea ("net: ethernet: ti: cpsw: don't duplicate ndev_running")
> Signed-off-by: Grygorii Strashko <grygorii.stras...@ti.com>
> ---

Yes, unfortunately I forgot to add it. Thanks.
Reviewed-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>


>  drivers/net/ethernet/ti/cpsw.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
> index 67b7323..35a95dc 100644
> --- a/drivers/net/ethernet/ti/cpsw.c
> +++ b/drivers/net/ethernet/ti/cpsw.c
> @@ -677,7 +677,7 @@ static int cpsw_get_usage_count(struct cpsw_common *cpsw)
>   u32 usage_count = 0;
>  
>   for (i = 0; i < cpsw->data.slaves; i++)
> - if (netif_running(cpsw->slaves[i].ndev))
> + if (cpsw->slaves[i].ndev && netif_running(cpsw->slaves[i].ndev))
>   usage_count++;
>  
>   return usage_count;
> -- 
> 2.10.1.dirty
> 


[PATCH v2 5/5] net: ethernet: ti: cpsw: clarify ethtool ops changing num of descs

2017-01-19 Thread Ivan Khoronzhuk
After adding the cpsw_set_ringparam ethtool op, it is better to factor
out the parts common to similar ops that re-split descriptors at runtime.
This allows those parts to be reused and makes clearer what the ops
actually do.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 132 ++---
 1 file changed, 59 insertions(+), 73 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 1f14afd..897ebbe 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -2358,17 +2358,11 @@ static int cpsw_update_channels(struct cpsw_priv *priv,
return 0;
 }
 
-static int cpsw_set_channels(struct net_device *ndev,
-struct ethtool_channels *chs)
+static void cpsw_suspend_data_pass(struct net_device *ndev)
 {
-   struct cpsw_priv *priv = netdev_priv(ndev);
-   struct cpsw_common *cpsw = priv->cpsw;
+   struct cpsw_common *cpsw = ndev_to_cpsw(ndev);
struct cpsw_slave *slave;
-   int i, ret;
-
-   ret = cpsw_check_ch_settings(cpsw, chs);
-   if (ret < 0)
-   return ret;
+   int i;
 
/* Disable NAPI scheduling */
cpsw_intr_disable(cpsw);
@@ -2386,6 +2380,51 @@ static int cpsw_set_channels(struct net_device *ndev,
 
/* Handle rest of tx packets and stop cpdma channels */
cpdma_ctlr_stop(cpsw->dma);
+}
+
+static int cpsw_resume_data_pass(struct net_device *ndev)
+{
+   struct cpsw_priv *priv = netdev_priv(ndev);
+   struct cpsw_common *cpsw = priv->cpsw;
+   struct cpsw_slave *slave;
+   int i, ret;
+
+   /* Allow rx packets handling */
+   for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++)
+   if (slave->ndev && netif_running(slave->ndev))
+   netif_dormant_off(slave->ndev);
+
+   /* After this receive is started */
+   if (cpsw_get_usage_count(cpsw)) {
+   ret = cpsw_fill_rx_channels(priv);
+   if (ret)
+   return ret;
+
+   cpdma_ctlr_start(cpsw->dma);
+   cpsw_intr_enable(cpsw);
+   }
+
+   /* Resume transmit for every affected interface */
+   for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++)
+   if (slave->ndev && netif_running(slave->ndev))
+   netif_tx_start_all_queues(slave->ndev);
+
+   return 0;
+}
+
+static int cpsw_set_channels(struct net_device *ndev,
+struct ethtool_channels *chs)
+{
+   struct cpsw_priv *priv = netdev_priv(ndev);
+   struct cpsw_common *cpsw = priv->cpsw;
+   struct cpsw_slave *slave;
+   int i, ret;
+
+   ret = cpsw_check_ch_settings(cpsw, chs);
+   if (ret < 0)
+   return ret;
+
+   cpsw_suspend_data_pass(ndev);
ret = cpsw_update_channels(priv, chs);
if (ret)
goto err;
@@ -2408,30 +2447,14 @@ static int cpsw_set_channels(struct net_device *ndev,
dev_err(priv->dev, "cannot set real number of rx 
queues\n");
goto err;
}
-
-   /* Enable rx packets handling */
-   netif_dormant_off(slave->ndev);
}
 
-   if (cpsw_get_usage_count(cpsw)) {
-   ret = cpsw_fill_rx_channels(priv);
-   if (ret)
-   goto err;
-
+   if (cpsw_get_usage_count(cpsw))
cpsw_split_res(ndev);
 
-   /* After this receive is started */
-   cpdma_ctlr_start(cpsw->dma);
-   cpsw_intr_enable(cpsw);
-   }
-
-   /* Resume transmit for every affected interface */
-   for (i = cpsw->data.slaves, slave = cpsw->slaves; i; i--, slave++) {
-   if (!(slave->ndev && netif_running(slave->ndev)))
-   continue;
-   netif_tx_start_all_queues(slave->ndev);
-   }
-   return 0;
+   ret = cpsw_resume_data_pass(ndev);
+   if (!ret)
+   return 0;
 err:
dev_err(priv->dev, "cannot update channels number, closing device\n");
dev_close(ndev);
@@ -2492,8 +2515,7 @@ static int cpsw_set_ringparam(struct net_device *ndev,
 {
struct cpsw_priv *priv = netdev_priv(ndev);
struct cpsw_common *cpsw = priv->cpsw;
-   struct cpsw_slave *slave;
-   int i, ret;
+   int ret;
 
/* ignore ering->tx_pending - only rx_pending adjustment is supported */
 
@@ -2505,54 +2527,18 @@ static int cpsw_set_ringparam(struct net_device *ndev,
if (ering->rx_pending == cpdma_get_num_rx_descs(cpsw->dma))
return 0;
 
-   /* Disable NAPI scheduling */
-   cpsw_intr_disable(cpsw);
-
-   /* Stop all transmit queues for every network device.
-  

[PATCH v2 1/5] net: ethernet: ti: cpsw: remove dual check from common res usage function

2017-01-19 Thread Ivan Khoronzhuk
Common resource usage is possible only when an interface is running.
In the non dual-emac case there can be only one interface, so in switch
mode, while ndo_open runs, only one interface can be opened; thus, when
open is called, no interface is running yet and no common resources are
in use. So remove the dual-emac check: it simplifies the code and its
understanding, and matches the name the function is called by.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 02b03ee..296ddf2 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1240,9 +1240,6 @@ static int cpsw_common_res_usage_state(struct cpsw_common 
*cpsw)
u32 i;
u32 usage_count = 0;
 
-   if (!cpsw->data.dual_emac)
-   return 0;
-
for (i = 0; i < cpsw->data.slaves; i++)
if (cpsw->slaves[i].open_stat)
usage_count++;
-- 
2.7.4



[PATCH v2 3/5] net: ethernet: ti: cpsw: don't duplicate ndev_running

2017-01-19 Thread Ivan Khoronzhuk
No need to create an additional variable to identify whether an interface
is running. Simplify the code by removing the redundant variable and
checking the usage counter instead.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 31 +--
 1 file changed, 17 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index f798905..c681d39 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -357,7 +357,6 @@ struct cpsw_slave {
struct phy_device   *phy;
struct net_device   *ndev;
u32 port_vlan;
-   u32 open_stat;
 };
 
 static inline u32 slave_read(struct cpsw_slave *slave, u32 offset)
@@ -1235,13 +1234,13 @@ static void cpsw_get_ethtool_stats(struct net_device 
*ndev,
}
 }
 
-static int cpsw_common_res_usage_state(struct cpsw_common *cpsw)
+static int cpsw_get_usage_count(struct cpsw_common *cpsw)
 {
u32 i;
u32 usage_count = 0;
 
for (i = 0; i < cpsw->data.slaves; i++)
-   if (cpsw->slaves[i].open_stat)
+   if (netif_running(cpsw->slaves[i].ndev))
usage_count++;
 
return usage_count;
@@ -1501,8 +1500,11 @@ static int cpsw_ndo_open(struct net_device *ndev)
 CPSW_MAJOR_VERSION(reg), CPSW_MINOR_VERSION(reg),
 CPSW_RTL_VERSION(reg));
 
-   /* initialize host and slave ports */
-   if (!cpsw_common_res_usage_state(cpsw))
+   /* Initialize host and slave ports.
+* Given ndev is marked as opened already, so init port only if 1 ndev
+* is opened
+*/
+   if (cpsw_get_usage_count(cpsw) < 2)
cpsw_init_host_port(priv);
for_each_slave(priv, cpsw_slave_open, priv);
 
@@ -1513,7 +1515,10 @@ static int cpsw_ndo_open(struct net_device *ndev)
cpsw_ale_add_vlan(cpsw->ale, cpsw->data.default_vlan,
  ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
 
-   if (!cpsw_common_res_usage_state(cpsw)) {
+   /* Given ndev is marked as opened already, so if more ndev
+* are opened - no need to init shared resources.
+*/
+   if (cpsw_get_usage_count(cpsw) < 2) {
/* disable priority elevation */
		__raw_writel(0, &cpsw->regs->ptype);
 
@@ -1556,9 +1561,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
cpdma_ctlr_start(cpsw->dma);
cpsw_intr_enable(cpsw);
 
-   if (cpsw->data.dual_emac)
-   cpsw->slaves[priv->emac_port].open_stat = true;
-
return 0;
 
 err_cleanup:
@@ -1578,7 +1580,10 @@ static int cpsw_ndo_stop(struct net_device *ndev)
netif_tx_stop_all_queues(priv->ndev);
netif_carrier_off(priv->ndev);
 
-   if (cpsw_common_res_usage_state(cpsw) <= 1) {
+   /* Given ndev is marked as close already,
+* so disable shared resources if no open devices
+*/
+   if (!cpsw_get_usage_count(cpsw)) {
		napi_disable(&cpsw->napi_rx);
		napi_disable(&cpsw->napi_tx);
cpts_unregister(cpsw->cpts);
@@ -1592,8 +1597,6 @@ static int cpsw_ndo_stop(struct net_device *ndev)
cpsw_split_res(ndev);
 
pm_runtime_put_sync(cpsw->dev);
-   if (cpsw->data.dual_emac)
-   cpsw->slaves[priv->emac_port].open_stat = false;
return 0;
 }
 
@@ -2418,7 +2421,7 @@ static int cpsw_set_channels(struct net_device *ndev,
netif_dormant_off(slave->ndev);
}
 
-   if (cpsw_common_res_usage_state(cpsw)) {
+   if (cpsw_get_usage_count(cpsw)) {
ret = cpsw_fill_rx_channels(priv);
if (ret)
goto err;
@@ -2537,7 +2540,7 @@ static int cpsw_set_ringparam(struct net_device *ndev,
netif_dormant_off(slave->ndev);
}
 
-   if (cpsw_common_res_usage_state(cpsw)) {
+   if (cpsw_get_usage_count(cpsw)) {
cpdma_chan_split_pool(cpsw->dma);
 
ret = cpsw_fill_rx_channels(priv);
-- 
2.7.4



[PATCH v2 4/5] net: ethernet: ti: cpsw: don't duplicate common res in rx handler

2017-01-19 Thread Ivan Khoronzhuk
No need to duplicate the same function in the rx handler to find out
whether any interface is running.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 40 
 1 file changed, 16 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index c681d39..1f14afd 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -671,6 +671,18 @@ static void cpsw_intr_disable(struct cpsw_common *cpsw)
return;
 }
 
+static int cpsw_get_usage_count(struct cpsw_common *cpsw)
+{
+   u32 i;
+   u32 usage_count = 0;
+
+   for (i = 0; i < cpsw->data.slaves; i++)
+   if (netif_running(cpsw->slaves[i].ndev))
+   usage_count++;
+
+   return usage_count;
+}
+
 static void cpsw_tx_handler(void *token, int len, int status)
 {
struct netdev_queue *txq;
@@ -703,18 +715,10 @@ static void cpsw_rx_handler(void *token, int len, int 
status)
cpsw_dual_emac_src_port_detect(cpsw, status, ndev, skb);
 
if (unlikely(status < 0) || unlikely(!netif_running(ndev))) {
-   bool ndev_status = false;
-   struct cpsw_slave *slave = cpsw->slaves;
-   int n;
-
-   if (cpsw->data.dual_emac) {
-   /* In dual emac mode check for all interfaces */
-   for (n = cpsw->data.slaves; n; n--, slave++)
-   if (netif_running(slave->ndev))
-   ndev_status = true;
-   }
-
-   if (ndev_status && (status >= 0)) {
+   /* In dual emac mode check for all interfaces */
+   if (cpsw->data.dual_emac &&
+   cpsw_get_usage_count(cpsw) &&
+   (status >= 0)) {
/* The packet received is for the interface which
 * is already down and the other interface is up
 * and running, instead of freeing which results
@@ -1234,18 +1238,6 @@ static void cpsw_get_ethtool_stats(struct net_device 
*ndev,
}
 }
 
-static int cpsw_get_usage_count(struct cpsw_common *cpsw)
-{
-   u32 i;
-   u32 usage_count = 0;
-
-   for (i = 0; i < cpsw->data.slaves; i++)
-   if (netif_running(cpsw->slaves[i].ndev))
-   usage_count++;
-
-   return usage_count;
-}
-
 static inline int cpsw_tx_packet_submit(struct cpsw_priv *priv,
struct sk_buff *skb,
struct cpdma_chan *txch)
-- 
2.7.4



[PATCH v2 2/5] net: ethernet: ti: cpsw: don't disable interrupts in ndo_open

2017-01-19 Thread Ivan Khoronzhuk
No need to disable interrupts if there are no open devices; they are
disabled anyway.

There is also no need to disable interrupts if some ndev is already
opened: in this case shared resources are not touched, only parameters
of the ndev shell, so there is no reason to disable them either. The
removed lines prove it.

So the redundant check and interrupt disable can be dropped.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 296ddf2..f798905 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1480,8 +1480,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
return ret;
}
 
-   if (!cpsw_common_res_usage_state(cpsw))
-   cpsw_intr_disable(cpsw);
netif_carrier_off(ndev);
 
/* Notify the stack of the actual queue counts. */
-- 
2.7.4



[PATCH v2 0/5] net: ethernet: ti: cpsw: correct common res usage

2017-01-19 Thread Ivan Khoronzhuk
This series removes unneeded redundancies connected with the common
resource usage function.

Since v1:
- changed name to cpsw_get_usage_count()
- added comments to open/closw for cpsw_get_usage_count()
- added patch:
  net: ethernet: ti: cpsw: clarify ethtool ops changing num of descs

Based on net-next/master

Ivan Khoronzhuk (5):
  net: ethernet: ti: cpsw: remove dual check from common res usage
function
  net: ethernet: ti: cpsw: don't disable interrupts in ndo_open
  net: ethernet: ti: cpsw: don't duplicate ndev_running
  net: ethernet: ti: cpsw: don't duplicate common res in rx handler
  net: ethernet: ti: cpsw: clarify ethtool ops changing num of descs

 drivers/net/ethernet/ti/cpsw.c | 200 ++---
 1 file changed, 88 insertions(+), 112 deletions(-)

-- 
2.7.4



Re: [PATCH 3/4] net: ethernet: ti: cpsw: don't duplicate ndev_running

2017-01-17 Thread Ivan Khoronzhuk
On Thu, Jan 12, 2017 at 11:34:47AM -0600, Grygorii Strashko wrote:

Hi Grygorii,
Sorry for late reply.

> 
> 
> On 01/10/2017 07:56 PM, Ivan Khoronzhuk wrote:
> > On Mon, Jan 09, 2017 at 11:25:38AM -0600, Grygorii Strashko wrote:
> >>
> >>
> >> On 01/08/2017 10:41 AM, Ivan Khoronzhuk wrote:
> >>> No need to create additional vars to identify if interface is running.
> >>> So simplify code by removing redundant var and checking usage counter
> >>> instead.
> >>>
> >>> Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> >>> ---
> >>>  drivers/net/ethernet/ti/cpsw.c | 14 --
> >>>  1 file changed, 4 insertions(+), 10 deletions(-)
> >>>
> >>> diff --git a/drivers/net/ethernet/ti/cpsw.c 
> >>> b/drivers/net/ethernet/ti/cpsw.c
> >>> index 40d7fc9..daae87f 100644
> >>> --- a/drivers/net/ethernet/ti/cpsw.c
> >>> +++ b/drivers/net/ethernet/ti/cpsw.c
> >>> @@ -357,7 +357,6 @@ struct cpsw_slave {
> >>>   struct phy_device   *phy;
> >>>   struct net_device   *ndev;
> >>>   u32 port_vlan;
> >>> - u32 open_stat;
> >>>  };
> >>>  
> >>>  static inline u32 slave_read(struct cpsw_slave *slave, u32 offset)
> >>> @@ -1241,7 +1240,7 @@ static int cpsw_common_res_usage_state(struct 
> >>> cpsw_common *cpsw)
> >>>   u32 usage_count = 0;
> >>>  
> >>>   for (i = 0; i < cpsw->data.slaves; i++)
> >>> - if (cpsw->slaves[i].open_stat)
> >>> + if (netif_running(cpsw->slaves[i].ndev))
> >>>   usage_count++;
> >>
> >> Not sure this will work as you expected, but may be I've missed smth :(
> > I've changed conditions, will work.
> > 
> >>
> >> code in static int __dev_open(struct net_device *dev)
> >> ..
> >>set_bit(__LINK_STATE_START, >state);
> >>
> >>if (ops->ndo_validate_addr)
> >>ret = ops->ndo_validate_addr(dev);
> >>
> >>if (!ret && ops->ndo_open)
> >>ret = ops->ndo_open(dev);
> >>
> >>netpoll_poll_enable(dev);
> >>
> >>if (ret)
> >>clear_bit(__LINK_STATE_START, >state);
> >> ..
> >>
> >> so, netif_running(ndev) will start returning true before calling 
> >> ops->ndo_open(dev);
> > Yes, It's done bearing it in mind of course.
> > 
> >>
> >>>  
> >>>   return usage_count;
> >>> @@ -1502,7 +1501,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
> >>>CPSW_RTL_VERSION(reg));
> >>>  
> >>>   /* initialize host and slave ports */
> >>> - if (!cpsw_common_res_usage_state(cpsw))
> >>> + if (cpsw_common_res_usage_state(cpsw) < 2)
> >>
> >> Ah. You've changed the condition here.
> >>
> >> I think it might be reasonable to hide this inside 
> >> cpsw_common_res_usage_state()
> >> and seems it can be renamed to smth like cpsw_is_running().
> > It probably needs to be renamed to smth a little different,
> > like cpsw_get_usage_count ...or cpsw_get_open_ndev_count
> 
> cpsw_get_usage_count () sounds good
Like it more also. Will change it.

> 
> > 
> >>
> >>
> >>>   cpsw_init_host_port(priv);
> >>>   for_each_slave(priv, cpsw_slave_open, priv);
> >>>  
> >>> @@ -1513,7 +1512,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
> >>>   cpsw_ale_add_vlan(cpsw->ale, cpsw->data.default_vlan,
> >>> ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
> >>>  
> >>> - if (!cpsw_common_res_usage_state(cpsw)) {
> >>> + if (cpsw_common_res_usage_state(cpsw) < 2) {
> >>>   /* disable priority elevation */
> >>>   __raw_writel(0, >regs->ptype);
> >>>  
> >>> @@ -1556,9 +1555,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
> >>>   cpdma_ctlr_start(cpsw->dma);
> >>>   cpsw_intr_enable(cpsw);
> >>>  
> >>> - if (cpsw->data.dual_emac)
> >>> - cpsw->slaves[priv->emac_port].open_stat = true;
> >>> -
> >>>   return 0;
> >>>  
> >>>  err_cleanup:
> >>>

[PATCH] net: ethernet: ti: davinci_cpdma: correct check on NULL in set rate

2017-01-17 Thread Ivan Khoronzhuk
Check "ch" on NULL first, then get ctlr.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---

Based on net-next/master

 drivers/net/ethernet/ti/davinci_cpdma.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index d80bff1..7ecc6b7 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -835,8 +835,8 @@ EXPORT_SYMBOL_GPL(cpdma_chan_get_min_rate);
  */
 int cpdma_chan_set_rate(struct cpdma_chan *ch, u32 rate)
 {
-   struct cpdma_ctlr *ctlr = ch->ctlr;
unsigned long flags, ch_flags;
+   struct cpdma_ctlr *ctlr;
int ret, prio_mode;
u32 rmask;
 
@@ -846,6 +846,7 @@ int cpdma_chan_set_rate(struct cpdma_chan *ch, u32 rate)
if (ch->rate == rate)
return rate;
 
+   ctlr = ch->ctlr;
	spin_lock_irqsave(&ctlr->lock, flags);
	spin_lock_irqsave(&ch->lock, ch_flags);
 
-- 
2.7.4



Re: [PATCH 3/4] net: ethernet: ti: cpsw: don't duplicate ndev_running

2017-01-10 Thread Ivan Khoronzhuk
On Mon, Jan 09, 2017 at 11:25:38AM -0600, Grygorii Strashko wrote:
> 
> 
> On 01/08/2017 10:41 AM, Ivan Khoronzhuk wrote:
> > No need to create additional vars to identify if interface is running.
> > So simplify code by removing redundant var and checking usage counter
> > instead.
> > 
> > Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> > ---
> >  drivers/net/ethernet/ti/cpsw.c | 14 --
> >  1 file changed, 4 insertions(+), 10 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
> > index 40d7fc9..daae87f 100644
> > --- a/drivers/net/ethernet/ti/cpsw.c
> > +++ b/drivers/net/ethernet/ti/cpsw.c
> > @@ -357,7 +357,6 @@ struct cpsw_slave {
> > struct phy_device   *phy;
> > struct net_device   *ndev;
> > u32 port_vlan;
> > -   u32 open_stat;
> >  };
> >  
> >  static inline u32 slave_read(struct cpsw_slave *slave, u32 offset)
> > @@ -1241,7 +1240,7 @@ static int cpsw_common_res_usage_state(struct 
> > cpsw_common *cpsw)
> > u32 usage_count = 0;
> >  
> > for (i = 0; i < cpsw->data.slaves; i++)
> > -   if (cpsw->slaves[i].open_stat)
> > +   if (netif_running(cpsw->slaves[i].ndev))
> > usage_count++;
> 
> Not sure this will work as you expected, but may be I've missed smth :(
I've changed the conditions, so it will work.

> 
> code in static int __dev_open(struct net_device *dev)
> ..
>   set_bit(__LINK_STATE_START, &dev->state);
> 
>   if (ops->ndo_validate_addr)
>   ret = ops->ndo_validate_addr(dev);
> 
>   if (!ret && ops->ndo_open)
>   ret = ops->ndo_open(dev);
> 
>   netpoll_poll_enable(dev);
> 
>   if (ret)
>   clear_bit(__LINK_STATE_START, &dev->state);
> ..
> 
> so, netif_running(ndev) will start returning true before calling 
> ops->ndo_open(dev);
Yes, it's done with that in mind, of course.

> 
> >  
> > return usage_count;
> > @@ -1502,7 +1501,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
> >  CPSW_RTL_VERSION(reg));
> >  
> > /* initialize host and slave ports */
> > -   if (!cpsw_common_res_usage_state(cpsw))
> > +   if (cpsw_common_res_usage_state(cpsw) < 2)
> 
> Ah. You've changed the condition here.
> 
> I think it might be reasonable to hide this inside 
> cpsw_common_res_usage_state()
> and seems it can be renamed to smth like cpsw_is_running().
It probably needs to be renamed to smth a little different,
like cpsw_get_usage_count ...or cpsw_get_open_ndev_count

> 
> 
> > cpsw_init_host_port(priv);
> > for_each_slave(priv, cpsw_slave_open, priv);
> >  
> > @@ -1513,7 +1512,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
> > cpsw_ale_add_vlan(cpsw->ale, cpsw->data.default_vlan,
> >   ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
> >  
> > -   if (!cpsw_common_res_usage_state(cpsw)) {
> > +   if (cpsw_common_res_usage_state(cpsw) < 2) {
> > /* disable priority elevation */
> > __raw_writel(0, >regs->ptype);
> >  
> > @@ -1556,9 +1555,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
> > cpdma_ctlr_start(cpsw->dma);
> > cpsw_intr_enable(cpsw);
> >  
> > -   if (cpsw->data.dual_emac)
> > -   cpsw->slaves[priv->emac_port].open_stat = true;
> > -
> > return 0;
> >  
> >  err_cleanup:
> > @@ -1578,7 +1574,7 @@ static int cpsw_ndo_stop(struct net_device *ndev)
> > netif_tx_stop_all_queues(priv->ndev);
> > netif_carrier_off(priv->ndev);
> >  
> > -   if (cpsw_common_res_usage_state(cpsw) <= 1) {
> > +   if (!cpsw_common_res_usage_state(cpsw)) {
> 
> and here __LINK_STATE_START will be cleared before calling ops->ndo_stop(dev);
Actually it's changed because of it.

> So, from one side netif_running(ndev) usage will simplify 
> cpsw_common_res_usage_state() internals,
> but from another side - it will make places where it's used even more 
> entangled :( as for me,
> because when cpsw_common_res_usage_state() will return 1 in cpsw_ndo_open() 
> it will mean
> "no interfaces is really running yet", but the same value 1 in cpsw_ndo_stop()
Why not? No interfaces are running, except the one executing ndo_open now.
It's clearer than duplicating it and using two different ways in
different places to identify a running device.
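
For completeness, a companion sketch to the __dev_open() excerpt quoted above,
showing the close side (paraphrased from net/core/dev.c of that era, not a
verbatim quote): the running bit is cleared before ndo_stop() is called, which
is why a netif_running()-based count already excludes the port that is
currently being closed.

	/* paraphrased ordering in the core close path */
	clear_bit(__LINK_STATE_START, &dev->state);
	...
	if (ops->ndo_stop)
		ops->ndo_stop(dev);	/* netif_running(dev) is already false here */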

[PATCH] net: ethernet: ti: cpsw: extend limits for cpsw_get/set_ringparam

2017-01-08 Thread Ivan Khoronzhuk
Allow setting the number of descriptors close to the possible limits.
The minimum limit is equal to the number of channels, so that at least
one descriptor can be set per channel. For the maximum limit, leave
enough descriptors for the tx channels.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---

Based on net-next/master

 drivers/net/ethernet/ti/cpsw.c | 7 +++
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 458298d..09e0ed6 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -2474,8 +2474,7 @@ static void cpsw_get_ringparam(struct net_device *ndev,
/* not supported */
ering->tx_max_pending = 0;
ering->tx_pending = cpdma_get_num_tx_descs(cpsw->dma);
-   /* Max 90% RX buffers */
-   ering->rx_max_pending = (descs_pool_size * 9) / 10;
+   ering->rx_max_pending = descs_pool_size - CPSW_MAX_QUEUES;
ering->rx_pending = cpdma_get_num_rx_descs(cpsw->dma);
 }
 
@@ -2490,8 +2489,8 @@ static int cpsw_set_ringparam(struct net_device *ndev,
/* ignore ering->tx_pending - only rx_pending adjustment is supported */
 
if (ering->rx_mini_pending || ering->rx_jumbo_pending ||
-   ering->rx_pending < (descs_pool_size / 10) ||
-   ering->rx_pending > ((descs_pool_size * 9) / 10))
+   ering->rx_pending < CPSW_MAX_QUEUES ||
+   ering->rx_pending > (descs_pool_size - CPSW_MAX_QUEUES))
return -EINVAL;
 
if (ering->rx_pending == cpdma_get_num_rx_descs(cpsw->dma))
-- 
2.7.4



[PATCH 0/4] net: ethernet: ti: cpsw: correct common res usage

2017-01-08 Thread Ivan Khoronzhuk
This series removes unneeded redundancies connected with the common
resource usage function.

Based on net-next/master
Tested on am572x idk

Ivan Khoronzhuk (4):
  net: ethernet: ti: cpsw: remove dual check from common res usage
function
  net: ethernet: ti: cpsw: don't disable interrupts in ndo_open
  net: ethernet: ti: cpsw: don't duplicate ndev_running
  net: ethernet: ti: cpsw: don't duplicate common res in rx handler

 drivers/net/ethernet/ti/cpsw.c | 57 ++
 1 file changed, 19 insertions(+), 38 deletions(-)

-- 
2.7.4



[PATCH 2/4] net: ethernet: ti: cpsw: don't disable interrupts in ndo_open

2017-01-08 Thread Ivan Khoronzhuk
If any interface is running, the interrupts are disabled anyway.
It would make sense to disable interrupts when one of the interfaces is
running, but in this place it obviously has no effect. So there is no
need for the redundant check and interrupt disable.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index d261024..40d7fc9 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1480,8 +1480,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
return ret;
}
 
-   if (!cpsw_common_res_usage_state(cpsw))
-   cpsw_intr_disable(cpsw);
netif_carrier_off(ndev);
 
/* Notify the stack of the actual queue counts. */
-- 
2.7.4



[PATCH 1/4] net: ethernet: ti: cpsw: remove dual check from common res usage function

2017-01-08 Thread Ivan Khoronzhuk
Common resource usage is possible only when an interface is running.
In the non dual-emac case there can be only one interface, so in switch
mode, while ndo_open runs, only one interface can be opened; thus, when
open is called, no interface is running yet and no common resources are
in use. So remove the dual-emac check: it simplifies the code and its
understanding, and matches the name the function is called by.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index f339268..d261024 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1240,9 +1240,6 @@ static int cpsw_common_res_usage_state(struct cpsw_common 
*cpsw)
u32 i;
u32 usage_count = 0;
 
-   if (!cpsw->data.dual_emac)
-   return 0;
-
for (i = 0; i < cpsw->data.slaves; i++)
if (cpsw->slaves[i].open_stat)
usage_count++;
-- 
2.7.4



[PATCH 4/4] net: ethernet: ti: cpsw: don't duplicate common res in rx handler

2017-01-08 Thread Ivan Khoronzhuk
No need to duplicate the same function in the rx handler to find out
whether any interface is running.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 40 
 1 file changed, 16 insertions(+), 24 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index daae87f..458298d 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -671,6 +671,18 @@ static void cpsw_intr_disable(struct cpsw_common *cpsw)
return;
 }
 
+static int cpsw_common_res_usage_state(struct cpsw_common *cpsw)
+{
+   u32 i;
+   u32 usage_count = 0;
+
+   for (i = 0; i < cpsw->data.slaves; i++)
+   if (netif_running(cpsw->slaves[i].ndev))
+   usage_count++;
+
+   return usage_count;
+}
+
 static void cpsw_tx_handler(void *token, int len, int status)
 {
struct netdev_queue *txq;
@@ -703,18 +715,10 @@ static void cpsw_rx_handler(void *token, int len, int 
status)
cpsw_dual_emac_src_port_detect(cpsw, status, ndev, skb);
 
if (unlikely(status < 0) || unlikely(!netif_running(ndev))) {
-   bool ndev_status = false;
-   struct cpsw_slave *slave = cpsw->slaves;
-   int n;
-
-   if (cpsw->data.dual_emac) {
-   /* In dual emac mode check for all interfaces */
-   for (n = cpsw->data.slaves; n; n--, slave++)
-   if (netif_running(slave->ndev))
-   ndev_status = true;
-   }
-
-   if (ndev_status && (status >= 0)) {
+   /* In dual emac mode check for all interfaces */
+   if (cpsw->data.dual_emac &&
+   cpsw_common_res_usage_state(cpsw) &&
+   (status >= 0)) {
/* The packet received is for the interface which
 * is already down and the other interface is up
 * and running, instead of freeing which results
@@ -1234,18 +1238,6 @@ static void cpsw_get_ethtool_stats(struct net_device 
*ndev,
}
 }
 
-static int cpsw_common_res_usage_state(struct cpsw_common *cpsw)
-{
-   u32 i;
-   u32 usage_count = 0;
-
-   for (i = 0; i < cpsw->data.slaves; i++)
-   if (netif_running(cpsw->slaves[i].ndev))
-   usage_count++;
-
-   return usage_count;
-}
-
 static inline int cpsw_tx_packet_submit(struct cpsw_priv *priv,
struct sk_buff *skb,
struct cpdma_chan *txch)
-- 
2.7.4



[PATCH 3/4] net: ethernet: ti: cpsw: don't duplicate ndev_running

2017-01-08 Thread Ivan Khoronzhuk
No need to create an additional variable to identify whether an interface
is running. Simplify the code by removing the redundant variable and
checking the usage counter instead.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 14 --
 1 file changed, 4 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 40d7fc9..daae87f 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -357,7 +357,6 @@ struct cpsw_slave {
struct phy_device   *phy;
struct net_device   *ndev;
u32 port_vlan;
-   u32 open_stat;
 };
 
 static inline u32 slave_read(struct cpsw_slave *slave, u32 offset)
@@ -1241,7 +1240,7 @@ static int cpsw_common_res_usage_state(struct cpsw_common 
*cpsw)
u32 usage_count = 0;
 
for (i = 0; i < cpsw->data.slaves; i++)
-   if (cpsw->slaves[i].open_stat)
+   if (netif_running(cpsw->slaves[i].ndev))
usage_count++;
 
return usage_count;
@@ -1502,7 +1501,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
 CPSW_RTL_VERSION(reg));
 
/* initialize host and slave ports */
-   if (!cpsw_common_res_usage_state(cpsw))
+   if (cpsw_common_res_usage_state(cpsw) < 2)
cpsw_init_host_port(priv);
for_each_slave(priv, cpsw_slave_open, priv);
 
@@ -1513,7 +1512,7 @@ static int cpsw_ndo_open(struct net_device *ndev)
cpsw_ale_add_vlan(cpsw->ale, cpsw->data.default_vlan,
  ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
 
-   if (!cpsw_common_res_usage_state(cpsw)) {
+   if (cpsw_common_res_usage_state(cpsw) < 2) {
/* disable priority elevation */
		__raw_writel(0, &cpsw->regs->ptype);
 
@@ -1556,9 +1555,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
cpdma_ctlr_start(cpsw->dma);
cpsw_intr_enable(cpsw);
 
-   if (cpsw->data.dual_emac)
-   cpsw->slaves[priv->emac_port].open_stat = true;
-
return 0;
 
 err_cleanup:
@@ -1578,7 +1574,7 @@ static int cpsw_ndo_stop(struct net_device *ndev)
netif_tx_stop_all_queues(priv->ndev);
netif_carrier_off(priv->ndev);
 
-   if (cpsw_common_res_usage_state(cpsw) <= 1) {
+   if (!cpsw_common_res_usage_state(cpsw)) {
		napi_disable(&cpsw->napi_rx);
		napi_disable(&cpsw->napi_tx);
cpts_unregister(cpsw->cpts);
@@ -1592,8 +1588,6 @@ static int cpsw_ndo_stop(struct net_device *ndev)
cpsw_split_res(ndev);
 
pm_runtime_put_sync(cpsw->dev);
-   if (cpsw->data.dual_emac)
-   cpsw->slaves[priv->emac_port].open_stat = false;
return 0;
 }
 
-- 
2.7.4



Re: [PATCH] net: ethernet: ti: cpsw: remove dual check from common res usage function

2017-01-08 Thread Ivan Khoronzhuk
Please ignore this one; I've included it in a new series.

On Sun, Jan 08, 2017 at 03:56:27PM +0200, Ivan Khoronzhuk wrote:
> Common res usage is possible only in case an interface is
> running. In case of not dual emac here can be only one interface,
> so while ndo_open and switch mode, only one interface can be opened,
> thus if open is called no any interface is running ... and no common
> res are used. So remove check on dual emac, it will simplify
> code/understanding and will match the name it's called.
> 
> Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> ---
> 
> Based on linux-next/master
> 
>  drivers/net/ethernet/ti/cpsw.c | 3 ---
>  1 file changed, 3 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
> index b203143..91684f1 100644
> --- a/drivers/net/ethernet/ti/cpsw.c
> +++ b/drivers/net/ethernet/ti/cpsw.c
> @@ -1235,9 +1235,6 @@ static int cpsw_common_res_usage_state(struct 
> cpsw_common *cpsw)
>   u32 i;
>   u32 usage_count = 0;
>  
> - if (!cpsw->data.dual_emac)
> - return 0;
> -
>   for (i = 0; i < cpsw->data.slaves; i++)
>   if (cpsw->slaves[i].open_stat)
>   usage_count++;
> -- 
> 2.7.4
> 


[PATCH] net: ethernet: ti: cpsw: remove dual check from common res usage function

2017-01-08 Thread Ivan Khoronzhuk
Common resource usage is possible only when an interface is running.
In the non dual-emac case there can be only one interface, so in switch
mode, while ndo_open runs, only one interface can be opened; thus, when
open is called, no interface is running yet and no common resources are
in use. So remove the dual-emac check: it simplifies the code and its
understanding, and matches the name the function is called by.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---

Based on linux-next/master

 drivers/net/ethernet/ti/cpsw.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index b203143..91684f1 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1235,9 +1235,6 @@ static int cpsw_common_res_usage_state(struct cpsw_common 
*cpsw)
u32 i;
u32 usage_count = 0;
 
-   if (!cpsw->data.dual_emac)
-   return 0;
-
for (i = 0; i < cpsw->data.slaves; i++)
if (cpsw->slaves[i].open_stat)
usage_count++;
-- 
2.7.4



Re: [PATCH] net: ethernet: ti: davinci_cpdma: fix access to uninitialized variable in cpdma_chan_set_descs()

2016-12-28 Thread Ivan Khoronzhuk
On Wed, Dec 28, 2016 at 05:42:13PM -0600, Grygorii Strashko wrote:
Grygorii,

> Now below code sequence causes "Unable to handle kernel NULL pointer
> dereference.." exception and system crash during CPSW CPDMA initialization:
> 
> cpsw_probe
> |-cpdma_chan_create (TX channel)
>   |-cpdma_chan_split_pool
> |-cpdma_chan_set_descs(for TX channels)
> |-cpdma_chan_set_descs(for RX channels) [1]
> 
> - and -
> static void cpdma_chan_set_descs(struct cpdma_ctlr *ctlr,
>int rx, int desc_num,
>int per_ch_desc)
> {
>   struct cpdma_chan *chan, *most_chan = NULL;
> 
> ...
> 
>   for (i = min; i < max; i++) {
>   chan = ctlr->channels[i];
>   if (!chan)
>   continue;
> ...
> 
>   if (most_dnum < chan->desc_num) {
>   most_dnum = chan->desc_num;
>   most_chan = chan;
>   }
>   }
>   /* use remains */
>   most_chan->desc_num += desc_cnt; [2]
> }
> 
> So, most_chan value will never be reassigned when cpdma_chan_set_descs() is
> called second time [1], because there are no RX channels yet and system
> will crash at [2].

How did you get this? I remember fixing it before sending the patchset.

Maybe it was some experiment with it. I just wonder and want to find the
actual reason for what's happening.

Look below:

cpsw_probe
|-cpdma_chan_create (TX channel)
  |-cpdma_chan_split_pool
|-cpdma_chan_set_descs(for TX channels)
|-cpdma_chan_set_descs(for RX channels) [1]

|-cpdma_chan_set_descs(for RX channels) in the case you've described has to be
called with rx_desc_num = 0, because all descriptors are already assigned to
the tx channel. And if desc_num = 0, cpdma_chan_set_descs just exits, so there
is no issue. Could you please explain how you got this, and in what circumstances?

> 
> Hence, fix the issue by checking most_chan for NULL before accessing it.
> 
> Fixes: 0fc6432cc78d ("net: ethernet: ti: davinci_cpdma: add weight function 
> for channels")
> Signed-off-by: Grygorii Strashko 
> ---
>  drivers/net/ethernet/ti/davinci_cpdma.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
> b/drivers/net/ethernet/ti/davinci_cpdma.c
> index 36518fc..b349d572 100644
> --- a/drivers/net/ethernet/ti/davinci_cpdma.c
> +++ b/drivers/net/ethernet/ti/davinci_cpdma.c
> @@ -708,7 +708,8 @@ static void cpdma_chan_set_descs(struct cpdma_ctlr *ctlr,
>   }
>   }
>   /* use remains */
> - most_chan->desc_num += desc_cnt;
> + if (most_chan)
> + most_chan->desc_num += desc_cnt;
>  }
>  
>  /**
> -- 
> 2.10.1.dirty
> 


Re: [PATCH net 2/3] cpsw/netcp: davinci_cpdma: sanitize inter-module API

2016-12-16 Thread Ivan Khoronzhuk
On Fri, Dec 16, 2016 at 10:19:58AM +0100, Arnd Bergmann wrote:
> The davinci_cpdma module is a helper library that is used by the
> actual device drivers and does nothing by itself, so all its API
> functions need to be exported.
> 
> Four new functions were added recently without an export, so now
> we get a build error:
> 
> ERROR: "cpdma_chan_set_weight" [drivers/net/ethernet/ti/ti_cpsw.ko] undefined!
> ERROR: "cpdma_chan_get_rate" [drivers/net/ethernet/ti/ti_cpsw.ko] undefined!
> ERROR: "cpdma_chan_get_min_rate" [drivers/net/ethernet/ti/ti_cpsw.ko] 
> undefined!
> ERROR: "cpdma_chan_set_rate" [drivers/net/ethernet/ti/ti_cpsw.ko] undefined!
/\
||
A patch correcting this has been applied.
(397c5ad153f0ea62023accb67b3d2b07e5039948)

> 
> This exports those symbols. After taking a closer look, I found two
> more global functions in this file that are not exported and not used
> anywhere, so they can obviously be removed. There is also one function
> that is only used internally in this module, and should be 'static'.
> 
> Fixes: 8f32b90981dc ("net: ethernet: ti: davinci_cpdma: add set rate for a 
> channel")
> Fixes: 0fc6432cc78d ("net: ethernet: ti: davinci_cpdma: add weight function 
> for channels")
> Signed-off-by: Arnd Bergmann 
> ---
>  drivers/net/ethernet/ti/davinci_cpdma.c | 59 
> ++---
>  drivers/net/ethernet/ti/davinci_cpdma.h |  4 ---
>  2 files changed, 17 insertions(+), 46 deletions(-)
> 
> diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
> b/drivers/net/ethernet/ti/davinci_cpdma.c
> index 36518fc5c7cc..b9d40f0cdf6c 100644
> --- a/drivers/net/ethernet/ti/davinci_cpdma.c
> +++ b/drivers/net/ethernet/ti/davinci_cpdma.c
> @@ -628,6 +628,23 @@ int cpdma_ctlr_destroy(struct cpdma_ctlr *ctlr)
>  }
>  EXPORT_SYMBOL_GPL(cpdma_ctlr_destroy);
>  
> +static int cpdma_chan_int_ctrl(struct cpdma_chan *chan, bool enable)
> +{
> + unsigned long flags;
> +
> + spin_lock_irqsave(&chan->lock, flags);
> + if (chan->state != CPDMA_STATE_ACTIVE) {
> + spin_unlock_irqrestore(&chan->lock, flags);
> + return -EINVAL;
> + }
> +
> + dma_reg_write(chan->ctlr, enable ? chan->int_set : chan->int_clear,
> +   chan->mask);
> + spin_unlock_irqrestore(&chan->lock, flags);
> +
> + return 0;
> +}
> +
>  int cpdma_ctlr_int_ctrl(struct cpdma_ctlr *ctlr, bool enable)
>  {
>   unsigned long flags;
> @@ -1274,46 +1291,4 @@ int cpdma_chan_stop(struct cpdma_chan *chan)
>  }
>  EXPORT_SYMBOL_GPL(cpdma_chan_stop);
>  
> -int cpdma_chan_int_ctrl(struct cpdma_chan *chan, bool enable)
> -{
> - unsigned long flags;
> -
> - spin_lock_irqsave(&chan->lock, flags);
> - if (chan->state != CPDMA_STATE_ACTIVE) {
> - spin_unlock_irqrestore(&chan->lock, flags);
> - return -EINVAL;
> - }
> -
> - dma_reg_write(chan->ctlr, enable ? chan->int_set : chan->int_clear,
> -   chan->mask);
> - spin_unlock_irqrestore(&chan->lock, flags);
> -
> - return 0;
> -}
> -
> -int cpdma_control_get(struct cpdma_ctlr *ctlr, int control)
> -{
> - unsigned long flags;
> - int ret;
> -
> - spin_lock_irqsave(&ctlr->lock, flags);
> - ret = _cpdma_control_get(ctlr, control);
> - spin_unlock_irqrestore(&ctlr->lock, flags);
> -
> - return ret;
> -}
> -
> -int cpdma_control_set(struct cpdma_ctlr *ctlr, int control, int value)
> -{
> - unsigned long flags;
> - int ret;
> -
> - spin_lock_irqsave(&ctlr->lock, flags);
> - ret = _cpdma_control_set(ctlr, control, value);
> - spin_unlock_irqrestore(&ctlr->lock, flags);
> -
> - return ret;
> -}
> -EXPORT_SYMBOL_GPL(cpdma_control_set);
> -
>  MODULE_LICENSE("GPL");
> diff --git a/drivers/net/ethernet/ti/davinci_cpdma.h 
> b/drivers/net/ethernet/ti/davinci_cpdma.h
> index 4a167db2abab..36d0a09a3d44 100644
> --- a/drivers/net/ethernet/ti/davinci_cpdma.h
> +++ b/drivers/net/ethernet/ti/davinci_cpdma.h
> @@ -87,7 +87,6 @@ int cpdma_chan_process(struct cpdma_chan *chan, int quota);
>  
>  int cpdma_ctlr_int_ctrl(struct cpdma_ctlr *ctlr, bool enable);
>  void cpdma_ctlr_eoi(struct cpdma_ctlr *ctlr, u32 value);
> -int cpdma_chan_int_ctrl(struct cpdma_chan *chan, bool enable);
>  u32 cpdma_ctrl_rxchs_state(struct cpdma_ctlr *ctlr);
>  u32 cpdma_ctrl_txchs_state(struct cpdma_ctlr *ctlr);
>  bool cpdma_check_free_tx_desc(struct cpdma_chan *chan);
> @@ -111,7 +110,4 @@ enum cpdma_control {
>   CPDMA_RX_BUFFER_OFFSET, /* read-write */
>  };
>  
> -int cpdma_control_get(struct cpdma_ctlr *ctlr, int control);
> -int cpdma_control_set(struct cpdma_ctlr *ctlr, int control, int value);
> -
>  #endif
> -- 
> 2.9.0
> 

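What the missing exports amount to, as a minimal illustration (assumed form;
the actual fix landed separately, as noted in the reply above): helper-library
symbols that ti_cpsw.ko calls must be exported from davinci_cpdma.

	int cpdma_chan_set_rate(struct cpdma_chan *ch, u32 rate)
	{
		/* ... body lives in davinci_cpdma.c ... */
		return 0;
	}
	EXPORT_SYMBOL_GPL(cpdma_chan_set_rate);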

[PATCH 1/5] net: ethernet: ti: cpsw: use same macros to get active slave

2016-12-10 Thread Ivan Khoronzhuk
Use the same, more convenient macro to get the active slave.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 5 +
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index b62d958..c45f7d2 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1624,10 +1624,7 @@ static void cpsw_hwtstamp_v2(struct cpsw_priv *priv)
struct cpsw_common *cpsw = priv->cpsw;
u32 ctrl, mtype;
 
-   if (cpsw->data.dual_emac)
-   slave = &cpsw->slaves[priv->emac_port];
-   else
-   slave = &cpsw->slaves[cpsw->data.active_slave];
+   slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
 
ctrl = slave_read(slave, CPSW2_CONTROL);
switch (cpsw->version) {
-- 
2.7.4
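
For readers outside the driver, the presumed shape of the macro used above
(illustrative; the real definition lives in cpsw.c and may differ in detail):
pick the per-port slave in dual-emac mode, or the configured active slave in
switch mode.

	#define example_slave_index(cpsw, priv)				\
		((cpsw)->data.dual_emac ? (priv)->emac_port :		\
		 (cpsw)->data.active_slave)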



[PATCH 2/5] net: ethernet: ti: cpsw: don't start queue twice

2016-12-10 Thread Ivan Khoronzhuk
No need to start queues after cpsw is started, as this is done in
cpsw_adjust_link() after the phy connection.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 2 --
 1 file changed, 2 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index c45f7d2..23213a3 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1506,8 +1506,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
if (cpsw->data.dual_emac)
cpsw->slaves[priv->emac_port].open_stat = true;
 
-   netif_tx_start_all_queues(ndev);
-
return 0;
 
 err_cleanup:
-- 
2.7.4



[PATCH 3/5] net: ethernet: ti: cpsw: combine budget and weight split and check

2016-12-10 Thread Ivan Khoronzhuk
Re-split the weight along with the budget. It simplifies the code a little
and updates the state after every rate change. It is also necessary to move
the argument checks into this combined function. Replace the maximum rate
check for an interface with the maximum possible rate.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 107 +
 1 file changed, 34 insertions(+), 73 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 23213a3..a2c2c06 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -753,27 +753,18 @@ static void cpsw_rx_handler(void *token, int len, int 
status)
dev_kfree_skb_any(new_skb);
 }
 
-/* split budget depending on channel rates */
-static void cpsw_split_budget(struct net_device *ndev)
+static void cpsw_split_res(struct net_device *ndev)
 {
struct cpsw_priv *priv = netdev_priv(ndev);
+   u32 consumed_rate = 0, bigest_rate = 0;
struct cpsw_common *cpsw = priv->cpsw;
struct cpsw_vector *txv = cpsw->txv;
-   u32 consumed_rate, bigest_rate = 0;
+   int i, ch_weight, rlim_ch_num = 0;
int budget, bigest_rate_ch = 0;
struct cpsw_slave *slave;
-   int i, rlim_ch_num = 0;
u32 ch_rate, max_rate;
int ch_budget = 0;
 
-   if (cpsw->data.dual_emac)
-   slave = &cpsw->slaves[priv->emac_port];
-   else
-   slave = &cpsw->slaves[cpsw->data.active_slave];
-
-   max_rate = slave->phy->speed * 1000;
-
-   consumed_rate = 0;
for (i = 0; i < cpsw->tx_ch_num; i++) {
ch_rate = cpdma_chan_get_rate(txv[i].ch);
if (!ch_rate)
@@ -785,7 +776,14 @@ static void cpsw_split_budget(struct net_device *ndev)
 
if (cpsw->tx_ch_num == rlim_ch_num) {
max_rate = consumed_rate;
+   } else if (!rlim_ch_num) {
+   ch_budget = CPSW_POLL_WEIGHT / cpsw->tx_ch_num;
+   bigest_rate = 0;
+   max_rate = consumed_rate;
} else {
+   slave = &cpsw->slaves[cpsw_slave_index(cpsw, priv)];
+   max_rate = slave->phy->speed * 1000;
+
ch_budget = (consumed_rate * CPSW_POLL_WEIGHT) / max_rate;
ch_budget = (CPSW_POLL_WEIGHT - ch_budget) /
(cpsw->tx_ch_num - rlim_ch_num);
@@ -793,22 +791,28 @@ static void cpsw_split_budget(struct net_device *ndev)
  (cpsw->tx_ch_num - rlim_ch_num);
}
 
-   /* split tx budget */
+   /* split tx weight/budget */
budget = CPSW_POLL_WEIGHT;
for (i = 0; i < cpsw->tx_ch_num; i++) {
ch_rate = cpdma_chan_get_rate(txv[i].ch);
if (ch_rate) {
txv[i].budget = (ch_rate * CPSW_POLL_WEIGHT) / max_rate;
if (!txv[i].budget)
-   txv[i].budget = 1;
+   txv[i].budget++;
if (ch_rate > bigest_rate) {
bigest_rate_ch = i;
bigest_rate = ch_rate;
}
+
+   ch_weight = (ch_rate * 100) / max_rate;
+   if (!ch_weight)
+   ch_weight++;
+   cpdma_chan_set_weight(cpsw->txv[i].ch, ch_weight);
} else {
txv[i].budget = ch_budget;
if (!bigest_rate_ch)
bigest_rate_ch = i;
+   cpdma_chan_set_weight(cpsw->txv[i].ch, 0);
}
 
budget -= txv[i].budget;
@@ -1017,7 +1021,7 @@ static void cpsw_adjust_link(struct net_device *ndev)
for_each_slave(priv, _cpsw_adjust_link, priv, );
 
if (link) {
-   cpsw_split_budget(priv->ndev);
+   cpsw_split_res(priv->ndev);
netif_carrier_on(ndev);
if (netif_running(ndev))
netif_tx_wake_all_queues(ndev);
@@ -1962,64 +1966,25 @@ static int cpsw_ndo_vlan_rx_kill_vid(struct net_device 
*ndev,
 static int cpsw_ndo_set_tx_maxrate(struct net_device *ndev, int queue, u32 
rate)
 {
struct cpsw_priv *priv = netdev_priv(ndev);
-   int tx_ch_num = ndev->real_num_tx_queues;
-   u32 consumed_rate, min_rate, max_rate;
struct cpsw_common *cpsw = priv->cpsw;
-   struct cpsw_slave *slave;
-   int ret, i, weight;
-   int rlim_num = 0;
+   u32 min_rate;
u32 ch_rate;
+   int ret;
 
ch_rate = netdev_get_tx_queue(ndev, queue)->tx_maxrate;
if (ch_rate == rate)
return 0;
 
-   if (cpsw->data.dual_emac)
-   slave = >slaves[priv->emac_port];
-   else
-   slave = >slaves[cpsw->dat

[PATCH 1/5] net: ethernet: ti: cpsw: improve re-split policy

2016-12-10 Thread Ivan Khoronzhuk
This patch set adds several simplifications and improvements for setting
the maximum rate for channels, taking into account switch and dual emac modes.

Don't re-split res in the following cases:
- speed of phys is not changed
- speed of phys is changed and no rate limited channels
- speed of phys is changed and all channels are rate limited
- phy is unlinked while dev is open
- phy is linked back but speed is not changed

The maximum speed is the sum of the "linked" phys, thus resources are split
taking both interfaces into account, for dual emac mode as well as for
switch mode.
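
Condensed into a predicate, the "don't re-split" rules listed above look
roughly like this (an illustrative sketch with invented names, not the
driver function):

/* Illustrative sketch of the re-split decision (names invented here). */
int need_resplit(int new_speed, int old_speed, int rlim_ch_num, int tx_ch_num)
{
	/* no link, or speed of the phys is not changed */
	if (!new_speed || new_speed == old_speed)
		return 0;

	/* speed changed, but no channels or all channels are rate limited */
	if (!rlim_ch_num || rlim_ch_num == tx_ch_num)
		return 0;

	return 1;
}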

Tested on am572x

Based on net-next/master

Ivan Khoronzhuk (5):
  net: ethernet: ti: cpsw: use same macros to get active slave
  net: ethernet: ti: cpsw: don't start queue twice
  net: ethernet: ti: cpsw: combine budget and weight split and check
  net: ethernet: ti: cpsw: re-split res only when speed is changed
  net: ethernet: ti: cpsw: sync rates for channels in dual emac mode

 drivers/net/ethernet/ti/cpsw.c | 178 +++--
 1 file changed, 99 insertions(+), 79 deletions(-)

-- 
2.7.4



[PATCH 4/5] net: ethernet: ti: cpsw: re-split res only when speed is changed

2016-12-10 Thread Ivan Khoronzhuk
Don't re-split res in the following cases:
- speed of phys is not changed
- speed of phys is changed and no rate limited channels
- speed of phys is changed and all channels are rate limited
- phy is unlinked while dev is open
- phy is linked back but speed is not changed

The maximum speed is the sum of the "linked" phys, thus resources are split
taking both interfaces into account, for dual emac mode as well as for
switch mode.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 64 ++
 1 file changed, 59 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index a2c2c06..7ccfa63 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -394,6 +394,7 @@ struct cpsw_common {
u32 irqs_table[IRQ_NUM];
struct cpts *cpts;
int rx_ch_num, tx_ch_num;
+   int speed;
 };
 
 struct cpsw_priv {
@@ -761,7 +762,6 @@ static void cpsw_split_res(struct net_device *ndev)
struct cpsw_vector *txv = cpsw->txv;
int i, ch_weight, rlim_ch_num = 0;
int budget, bigest_rate_ch = 0;
-   struct cpsw_slave *slave;
u32 ch_rate, max_rate;
int ch_budget = 0;
 
@@ -781,8 +781,16 @@ static void cpsw_split_res(struct net_device *ndev)
bigest_rate = 0;
max_rate = consumed_rate;
} else {
-   slave = >slaves[cpsw_slave_index(cpsw, priv)];
-   max_rate = slave->phy->speed * 1000;
+   max_rate = cpsw->speed * 1000;
+
+   /* if max_rate is less than expected due to reduced link speed,
+* split proportionally according to the next potential max speed
+*/
+   if (max_rate < consumed_rate)
+   max_rate *= 10;
+
+   if (max_rate < consumed_rate)
+   max_rate *= 10;
 
ch_budget = (consumed_rate * CPSW_POLL_WEIGHT) / max_rate;
ch_budget = (CPSW_POLL_WEIGHT - ch_budget) /
@@ -1013,15 +1021,56 @@ static void _cpsw_adjust_link(struct cpsw_slave *slave,
slave->mac_control = mac_control;
 }
 
+static int cpsw_get_common_speed(struct cpsw_common *cpsw)
+{
+   int i, speed;
+
+   for (i = 0, speed = 0; i < cpsw->data.slaves; i++)
+   if (cpsw->slaves[i].phy && cpsw->slaves[i].phy->link)
+   speed += cpsw->slaves[i].phy->speed;
+
+   return speed;
+}
+
+static int cpsw_need_resplit(struct cpsw_common *cpsw)
+{
+   int i, rlim_ch_num;
+   int speed, ch_rate;
+
+   /* re-split resources only in case speed was changed */
+   speed = cpsw_get_common_speed(cpsw);
+   if (speed == cpsw->speed || !speed)
+   return 0;
+
+   cpsw->speed = speed;
+
+   for (i = 0, rlim_ch_num = 0; i < cpsw->tx_ch_num; i++) {
+   ch_rate = cpdma_chan_get_rate(cpsw->txv[i].ch);
+   if (!ch_rate)
+   break;
+
+   rlim_ch_num++;
+   }
+
+   /* cases not dependent on speed */
+   if (!rlim_ch_num || rlim_ch_num == cpsw->tx_ch_num)
+   return 0;
+
+   return 1;
+}
+
 static void cpsw_adjust_link(struct net_device *ndev)
 {
struct cpsw_priv*priv = netdev_priv(ndev);
+   struct cpsw_common  *cpsw = priv->cpsw;
boollink = false;
 
for_each_slave(priv, _cpsw_adjust_link, priv, );
 
if (link) {
-   cpsw_split_res(priv->ndev);
+   if (cpsw_need_resplit(cpsw))
+   cpsw_split_res(ndev);
+
netif_carrier_on(ndev);
if (netif_running(ndev))
netif_tx_wake_all_queues(ndev);
@@ -1538,6 +1587,10 @@ static int cpsw_ndo_stop(struct net_device *ndev)
cpsw_ale_stop(cpsw->ale);
}
for_each_slave(priv, cpsw_slave_stop, cpsw);
+
+   if (cpsw_need_resplit(cpsw))
+   cpsw_split_res(ndev);
+
pm_runtime_put_sync(cpsw->dev);
if (cpsw->data.dual_emac)
cpsw->slaves[priv->emac_port].open_stat = false;
@@ -1983,7 +2036,7 @@ static int cpsw_ndo_set_tx_maxrate(struct net_device 
*ndev, int queue, u32 rate)
return -EINVAL;
}
 
-   if (rate > 2000) {
+   if (rate > cpsw->speed) {
dev_err(priv->dev, "The channel rate cannot be more than 
2Gbps");
return -EINVAL;
}
@@ -2998,6 +3051,7 @@ static int cpsw_probe(struct platform_device *pdev)
ndev->ethtool_ops = _ethtool_ops;
netif_napi_add(ndev, >napi_rx, cpsw_rx_poll, CPSW_POLL_WEIGHT);
netif_tx_napi_add(ndev, >napi_tx, cpsw_tx_poll, CPSW_POLL_WEIGHT);
+   cpsw_split_res(ndev);
 
/* register the network device */
SET_NETDEV_DEV(ndev, >dev);
-- 
2.7.4



[PATCH 5/5] net: ethernet: ti: cpsw: sync rates for channels in dual emac mode

2016-12-10 Thread Ivan Khoronzhuk
The channels are common for both ndevs in dual emac mode. Hence, keep
their rates in sync.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 12 +++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 7ccfa63..b203143 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -2020,9 +2020,10 @@ static int cpsw_ndo_set_tx_maxrate(struct net_device 
*ndev, int queue, u32 rate)
 {
struct cpsw_priv *priv = netdev_priv(ndev);
struct cpsw_common *cpsw = priv->cpsw;
+   struct cpsw_slave *slave;
u32 min_rate;
u32 ch_rate;
-   int ret;
+   int i, ret;
 
ch_rate = netdev_get_tx_queue(ndev, queue)->tx_maxrate;
if (ch_rate == rate)
@@ -2053,6 +2054,15 @@ static int cpsw_ndo_set_tx_maxrate(struct net_device 
*ndev, int queue, u32 rate)
if (ret)
return ret;
 
+   /* update rates for slaves tx queues */
+   for (i = 0; i < cpsw->data.slaves; i++) {
+   slave = >slaves[i];
+   if (!slave->ndev)
+   continue;
+
+   netdev_get_tx_queue(slave->ndev, queue)->tx_maxrate = rate;
+   }
+
cpsw_split_res(ndev);
return ret;
 }
-- 
2.7.4



Re: [PATCH 0/5] cpsw: add per channel shaper configuration

2016-12-06 Thread Ivan Khoronzhuk
On Mon, Dec 05, 2016 at 02:33:40PM -0600, Grygorii Strashko wrote:
> Hi Ivan,

Hi, Grygorii

I've sent a patch that fixes the issue in question:
https://lkml.org/lkml/2016/12/5/811
Could you please review it? I'm also preparing several patches
to better handle cases with unexpected speed values.

> 
> On 11/29/2016 09:00 AM, Ivan Khoronzhuk wrote:
> > This series is intended to allow user to set rate for per channel
> > shapers at cpdma level. This patchset doesn't have impact on performance.
> > The rate can be set with:
> > 
> > echo 100 > /sys/class/net/ethX/queues/tx-0/tx_maxrate
> > 
> > Tested on am572xx
> > Based on net-next/master
> > 
> > Ivan Khoronzhuk (5):
> >   net: ethernet: ti: davinci_cpdma: add weight function for channels
> >   net: ethernet: ti: davinci_cpdma: add set rate for a channel
> >   net: ethernet: ti: cpsw: add .ndo to set per-queue rate
> >   net: ethernet: ti: cpsw: optimize end of poll cycle
> >   net: ethernet: ti: cpsw: split tx budget according between channels
> > 
> >  drivers/net/ethernet/ti/cpsw.c  | 264 +++
> >  drivers/net/ethernet/ti/davinci_cpdma.c | 453 
> > 
> >  drivers/net/ethernet/ti/davinci_cpdma.h |   6 +
> >  3 files changed, 624 insertions(+), 99 deletions(-)
> > 
> 
> 
> I've just tried net-next on BBB and got below back-trace:
> INIT: Entering runlevel: 5
> Configuring network interfaces... [   15.018356] net eth0: initializing cpsw 
> version 1.12 (0)
> [   15.120153] SMSC LAN8710/LAN8720 4a101000.mdio:00: attached PHY driver 
> [SMSC LAN8710/LAN8720] (mii_bus:phy_addr=4a101000.mdio:00, irq=-1)
> [   15.138578] Division by zero in kernel.
> [   15.142667] CPU: 0 PID: 755 Comm: ifconfig Not tainted 
> 4.9.0-rc7-01617-g6ea3f00 #5
> [   15.150277] Hardware name: Generic AM33XX (Flattened Device Tree)
> [   15.156399] Backtrace: 
> [   15.158898] [] (dump_backtrace) from [] 
> (show_stack+0x18/0x1c)
> [   15.166508]  r7: r6:600f0013 r5: r4:c0d395d0
> [   15.172200] [] (show_stack) from [] 
> (dump_stack+0x8c/0xa0)
> [   15.179460] [] (dump_stack) from [] (__div0+0x1c/0x20)
> [   15.186368]  r7: r6:ddf1c010 r5:0001 r4:0001
> [   15.192068] [] (__div0) from [] (Ldiv0+0x8/0x14)
> [   15.198503] [] (cpsw_split_budget [ti_cpsw]) from [] 
> (cpsw_ndo_open+0x4b8/0x5e4 [ti_cpsw])
> [   15.208554]  r10:ddf1c010 r9: r8: r7:ddf1c010 r6:dcc88000 
> r5:dcc88500
> [   15.216418]  r4:ddf1c0b0
> [   15.218985] [] (cpsw_ndo_open [ti_cpsw]) from [] 
> (__dev_open+0xb0/0x114)
> [   15.227466]  r10: r9: r8: r7:dcc88030 r6:bf080364 
> r5:
> [   15.235330]  r4:dcc88000
> [   15.237880] [] (__dev_open) from [] 
> (__dev_change_flags+0x9c/0x14c)
> [   15.245923]  r7:1002 r6:0001 r5:1043 r4:dcc88000
> [   15.251613] [] (__dev_change_flags) from [] 
> (dev_change_flags+0x20/0x50)
> [   15.260093]  r9: r8: r7:dcbabf0c r6:1002 r5:dcc8813c 
> r4:dcc88000
> [   15.267886] [] (dev_change_flags) from [] 
> (devinet_ioctl+0x6d4/0x794)
> [   15.276105]  r9: r8:bee8cc64 r7:dcbabf0c r6: r5:dc921e80 
> r4:
> [   15.283891] [] (devinet_ioctl) from [] 
> (inet_ioctl+0x19c/0x1c8)
> [   15.291587]  r10: r9:dc92 r8:bee8cc64 r7:c0d64bc0 r6:bee8cc64 
> r5:dd2728e0
> [   15.299450]  r4:8914
> [   15.302002] [] (inet_ioctl) from [] 
> (sock_ioctl+0x14c/0x300)
> [   15.309443] [] (sock_ioctl) from [] 
> (do_vfs_ioctl+0xa8/0x98c)
> [   15.316962]  r7:0003 r6:ddf0a780 r5:dd2728e0 r4:bee8cc64
> [   15.322653] [] (do_vfs_ioctl) from [] 
> (SyS_ioctl+0x3c/0x64)
> [   15.33]  r10: r9:dc92 r8:bee8cc64 r7:8914 r6:ddf0a780 
> r5:0003
> [   15.337864]  r4:ddf0a780
> [   15.340416] [] (SyS_ioctl) from [] 
> (ret_fast_syscall+0x0/0x3c)
> [   15.348024]  r9:dc92 r8:c0107f24 r7:0036 r6:bee8cf4d r5:bee8ce4c 
> r4:000949f0
> [   15.361174] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
> udhcpc (v1.23.1) started
> 
> 
> -- 
> regards,
> -grygorii


[PATCH] net: ethernet: ti: cpsw: fix early budget split

2016-12-05 Thread Ivan Khoronzhuk
The budget split function requires the phy speed to be known.
At ndo_open time the phy speed identification is postponed until the
moment the link is up. Hence, move the budget split to the appropriate
callback, invoked when the link comes up.

Reported-by: Grygorii Strashko <grygorii.stras...@ti.com>
Fixes: 8feb0a196507 ("net: ethernet: ti: cpsw: split tx budget according 
between channels")
Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next/master

 drivers/net/ethernet/ti/cpsw.c | 154 -
 1 file changed, 77 insertions(+), 77 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 3f96c57..f373a4b 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -753,6 +753,82 @@ static void cpsw_rx_handler(void *token, int len, int 
status)
dev_kfree_skb_any(new_skb);
 }
 
+/* split budget depending on channel rates */
+static void cpsw_split_budget(struct net_device *ndev)
+{
+   struct cpsw_priv *priv = netdev_priv(ndev);
+   struct cpsw_common *cpsw = priv->cpsw;
+   struct cpsw_vector *txv = cpsw->txv;
+   u32 consumed_rate, bigest_rate = 0;
+   int budget, bigest_rate_ch = 0;
+   struct cpsw_slave *slave;
+   int i, rlim_ch_num = 0;
+   u32 ch_rate, max_rate;
+   int ch_budget = 0;
+
+   if (cpsw->data.dual_emac)
+   slave = >slaves[priv->emac_port];
+   else
+   slave = >slaves[cpsw->data.active_slave];
+
+   max_rate = slave->phy->speed * 1000;
+
+   consumed_rate = 0;
+   for (i = 0; i < cpsw->tx_ch_num; i++) {
+   ch_rate = cpdma_chan_get_rate(txv[i].ch);
+   if (!ch_rate)
+   continue;
+
+   rlim_ch_num++;
+   consumed_rate += ch_rate;
+   }
+
+   if (cpsw->tx_ch_num == rlim_ch_num) {
+   max_rate = consumed_rate;
+   } else {
+   ch_budget = (consumed_rate * CPSW_POLL_WEIGHT) / max_rate;
+   ch_budget = (CPSW_POLL_WEIGHT - ch_budget) /
+   (cpsw->tx_ch_num - rlim_ch_num);
+   bigest_rate = (max_rate - consumed_rate) /
+ (cpsw->tx_ch_num - rlim_ch_num);
+   }
+
+   /* split tx budget */
+   budget = CPSW_POLL_WEIGHT;
+   for (i = 0; i < cpsw->tx_ch_num; i++) {
+   ch_rate = cpdma_chan_get_rate(txv[i].ch);
+   if (ch_rate) {
+   txv[i].budget = (ch_rate * CPSW_POLL_WEIGHT) / max_rate;
+   if (!txv[i].budget)
+   txv[i].budget = 1;
+   if (ch_rate > bigest_rate) {
+   bigest_rate_ch = i;
+   bigest_rate = ch_rate;
+   }
+   } else {
+   txv[i].budget = ch_budget;
+   if (!bigest_rate_ch)
+   bigest_rate_ch = i;
+   }
+
+   budget -= txv[i].budget;
+   }
+
+   if (budget)
+   txv[bigest_rate_ch].budget += budget;
+
+   /* split rx budget */
+   budget = CPSW_POLL_WEIGHT;
+   ch_budget = budget / cpsw->rx_ch_num;
+   for (i = 0; i < cpsw->rx_ch_num; i++) {
+   cpsw->rxv[i].budget = ch_budget;
+   budget -= ch_budget;
+   }
+
+   if (budget)
+   cpsw->rxv[0].budget += budget;
+}
+
 static irqreturn_t cpsw_tx_interrupt(int irq, void *dev_id)
 {
struct cpsw_common *cpsw = dev_id;
@@ -941,6 +1017,7 @@ static void cpsw_adjust_link(struct net_device *ndev)
for_each_slave(priv, _cpsw_adjust_link, priv, );
 
if (link) {
+   cpsw_split_budget(priv->ndev);
netif_carrier_on(ndev);
if (netif_running(ndev))
netif_tx_wake_all_queues(ndev);
@@ -1280,82 +1357,6 @@ static void cpsw_init_host_port(struct cpsw_priv *priv)
}
 }
 
-/* split budget depending on channel rates */
-static void cpsw_split_budget(struct net_device *ndev)
-{
-   struct cpsw_priv *priv = netdev_priv(ndev);
-   struct cpsw_common *cpsw = priv->cpsw;
-   struct cpsw_vector *txv = cpsw->txv;
-   u32 consumed_rate, bigest_rate = 0;
-   int budget, bigest_rate_ch = 0;
-   struct cpsw_slave *slave;
-   int i, rlim_ch_num = 0;
-   u32 ch_rate, max_rate;
-   int ch_budget = 0;
-
-   if (cpsw->data.dual_emac)
-   slave = >slaves[priv->emac_port];
-   else
-   slave = >slaves[cpsw->data.active_slave];
-
-   max_rate = slave->phy->speed * 1000;
-
-   consumed_rate = 0;
-   for (i = 0; i < cpsw->tx_ch_num; i++) {
-   ch_rate = cpdma_chan_get_rate(txv[i].ch);
-   if (!ch_rate)
-

Re: [PATCH 0/5] cpsw: add per channel shaper configuration

2016-12-05 Thread Ivan Khoronzhuk
On Mon, Dec 05, 2016 at 02:33:40PM -0600, Grygorii Strashko wrote:
> Hi Ivan,
> 
> On 11/29/2016 09:00 AM, Ivan Khoronzhuk wrote:
> > This series is intended to allow user to set rate for per channel
> > shapers at cpdma level. This patchset doesn't have impact on performance.
> > The rate can be set with:
> > 
> > echo 100 > /sys/class/net/ethX/queues/tx-0/tx_maxrate
> > 
> > Tested on am572xx
> > Based on net-next/master
> > 
> > Ivan Khoronzhuk (5):
> >   net: ethernet: ti: davinci_cpdma: add weight function for channels
> >   net: ethernet: ti: davinci_cpdma: add set rate for a channel
> >   net: ethernet: ti: cpsw: add .ndo to set per-queue rate
> >   net: ethernet: ti: cpsw: optimize end of poll cycle
> >   net: ethernet: ti: cpsw: split tx budget according between channels
> > 
> >  drivers/net/ethernet/ti/cpsw.c  | 264 +++
> >  drivers/net/ethernet/ti/davinci_cpdma.c | 453 
> > 
> >  drivers/net/ethernet/ti/davinci_cpdma.h |   6 +
> >  3 files changed, 624 insertions(+), 99 deletions(-)
> > 
> 
> 
> I've just tried net-next on BBB and got below back-trace:
I hadn't tested on BBB, and expected some review before applying.
It seems that's because of the phy speed; let me finish a fix patch.
Thanks.


> INIT: Entering runlevel: 5
> Configuring network interfaces... [   15.018356] net eth0: initializing cpsw 
> version 1.12 (0)
> [   15.120153] SMSC LAN8710/LAN8720 4a101000.mdio:00: attached PHY driver 
> [SMSC LAN8710/LAN8720] (mii_bus:phy_addr=4a101000.mdio:00, irq=-1)
> [   15.138578] Division by zero in kernel.
> [   15.142667] CPU: 0 PID: 755 Comm: ifconfig Not tainted 
> 4.9.0-rc7-01617-g6ea3f00 #5
> [   15.150277] Hardware name: Generic AM33XX (Flattened Device Tree)
> [   15.156399] Backtrace: 
> [   15.158898] [] (dump_backtrace) from [] 
> (show_stack+0x18/0x1c)
> [   15.166508]  r7: r6:600f0013 r5: r4:c0d395d0
> [   15.172200] [] (show_stack) from [] 
> (dump_stack+0x8c/0xa0)
> [   15.179460] [] (dump_stack) from [] (__div0+0x1c/0x20)
> [   15.186368]  r7: r6:ddf1c010 r5:0001 r4:0001
> [   15.192068] [] (__div0) from [] (Ldiv0+0x8/0x14)
> [   15.198503] [] (cpsw_split_budget [ti_cpsw]) from [] 
> (cpsw_ndo_open+0x4b8/0x5e4 [ti_cpsw])
> [   15.208554]  r10:ddf1c010 r9: r8: r7:ddf1c010 r6:dcc88000 
> r5:dcc88500
> [   15.216418]  r4:ddf1c0b0
> [   15.218985] [] (cpsw_ndo_open [ti_cpsw]) from [] 
> (__dev_open+0xb0/0x114)
> [   15.227466]  r10: r9: r8: r7:dcc88030 r6:bf080364 
> r5:
> [   15.235330]  r4:dcc88000
> [   15.237880] [] (__dev_open) from [] 
> (__dev_change_flags+0x9c/0x14c)
> [   15.245923]  r7:1002 r6:0001 r5:1043 r4:dcc88000
> [   15.251613] [] (__dev_change_flags) from [] 
> (dev_change_flags+0x20/0x50)
> [   15.260093]  r9: r8: r7:dcbabf0c r6:1002 r5:dcc8813c 
> r4:dcc88000
> [   15.267886] [] (dev_change_flags) from [] 
> (devinet_ioctl+0x6d4/0x794)
> [   15.276105]  r9: r8:bee8cc64 r7:dcbabf0c r6: r5:dc921e80 
> r4:
> [   15.283891] [] (devinet_ioctl) from [] 
> (inet_ioctl+0x19c/0x1c8)
> [   15.291587]  r10: r9:dc92 r8:bee8cc64 r7:c0d64bc0 r6:bee8cc64 
> r5:dd2728e0
> [   15.299450]  r4:8914
> [   15.302002] [] (inet_ioctl) from [] 
> (sock_ioctl+0x14c/0x300)
> [   15.309443] [] (sock_ioctl) from [] 
> (do_vfs_ioctl+0xa8/0x98c)
> [   15.316962]  r7:0003 r6:ddf0a780 r5:dd2728e0 r4:bee8cc64
> [   15.322653] [] (do_vfs_ioctl) from [] 
> (SyS_ioctl+0x3c/0x64)
> [   15.33]  r10: r9:dc92 r8:bee8cc64 r7:8914 r6:ddf0a780 
> r5:0003
> [   15.337864]  r4:ddf0a780
> [   15.340416] [] (SyS_ioctl) from [] 
> (ret_fast_syscall+0x0/0x3c)
> [   15.348024]  r9:dc92 r8:c0107f24 r7:0036 r6:bee8cf4d r5:bee8ce4c 
> r4:000949f0
> [   15.361174] IPv6: ADDRCONF(NETDEV_UP): eth0: link is not ready
> udhcpc (v1.23.1) started
> 
> 
> -- 
> regards,
> -grygorii


[PATCH] net: ethernet: ti: cpdma: use desc_read in chan_process instead of raw read

2016-12-02 Thread Ivan Khoronzhuk
There is a desc_read() macro for reading desc fields, so there is no need
to use __raw_readl() directly.
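
For context, desc_read()/desc_write() are thin wrappers over the raw
accessors, roughly along these lines (simplified sketch; see
davinci_cpdma.c for the exact definitions):

/* Simplified sketch of the accessor pair (check the driver for the
 * exact definitions).
 */
#define desc_read(desc, fld)		__raw_readl(&(desc)->fld)
#define desc_write(desc, fld, v)	__raw_writel((u32)(v), &(desc)->fld)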

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
Based on net-next/master

 drivers/net/ethernet/ti/davinci_cpdma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index c776e45..d96dca5 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -1132,7 +1132,7 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
}
desc_dma = desc_phys(pool, desc);
 
-   status  = __raw_readl(>hw_mode);
+   status = desc_read(desc, hw_mode);
outlen  = status & 0x7ff;
if (status & CPDMA_DESC_OWNER) {
chan->stats.busy_dequeue++;
-- 
2.7.4



Re: [PATCH 5/7] Documentation: DT: net: cpsw: allow to specify descriptors pool size

2016-12-02 Thread Ivan Khoronzhuk
On Fri, Dec 02, 2016 at 11:22:28AM -0600, Grygorii Strashko wrote:
> 
> 
> On 12/02/2016 05:28 AM, Ivan Khoronzhuk wrote:
> > On Thu, Dec 01, 2016 at 05:34:30PM -0600, Grygorii Strashko wrote:
> >> Add optional property "descs_pool_size" to specify buffer descriptor's
> >> pool size. The "descs_pool_size" should define total number of CPDMA
> >> CPPI descriptors to be used for both ingress/egress packets
> >> processing. If not specified - the default value 256 will be used
> >> which will allow to place descriptor's pool into the internal CPPI
> >> RAM on most of TI SoC.
> >>
> >> Signed-off-by: Grygorii Strashko <grygorii.stras...@ti.com>
> >> ---
> >>  Documentation/devicetree/bindings/net/cpsw.txt | 5 +
> >>  1 file changed, 5 insertions(+)
> >>
> >> diff --git a/Documentation/devicetree/bindings/net/cpsw.txt 
> >> b/Documentation/devicetree/bindings/net/cpsw.txt
> >> index 5ad439f..b99d196 100644
> >> --- a/Documentation/devicetree/bindings/net/cpsw.txt
> >> +++ b/Documentation/devicetree/bindings/net/cpsw.txt
> >> @@ -35,6 +35,11 @@ Optional properties:
> >>  For example in dra72x-evm, pcf gpio has to be
> >>  driven low so that cpsw slave 0 and phy data
> >>  lines are connected via mux.
> >> +- descs_pool_size : total number of CPDMA CPPI descriptors to be used for
> >> +both ingress/egress packets processing. if not
> >> +specified the default value 256 will be used which
> >> +will allow to place descriptors pool into the
> >> +internal CPPI RAM.
> > Does it describe h/w? Why not a module parameter? Or even something like
> > the ethtool number of ring entries?
> > 
> 
> It can be module parameter too. in general this is expected to be 
>  one-time boot setting only.  
> 
> - OR
> So, do you propose to use 
>ethtool -g ethX
> 
>ethtool -G ethX [rx N] [tx N]
> ?
It has slightly different names, but yes, why not?
Maybe there is no need, but it's just a proposition; at least I was thinking
about it after the proposal from +cc Schuyler Patton to keep the rx desc num
property. In this case it's possible to tune the tx/rx desc num ratio, even
with SRAM descs.

> 
> Now cpdma has one pool for all RX/TX channels, so changing this settings
> by ethtool will require: pause interfaces, reallocate cpdma pool, 
A pause can lead to losses only for rx, and only for a very short time, so
it's not that bad, especially when the user knows what they are doing.


> re-arrange buffers between channels, resume interface. Correct?
correct.

But some alternative approaches can be used, like replacing descriptors:
shrink the number of descriptors for every channel to 1, replace/add the
others, and expand again. In this case there are no losses, but it's harder
to debug issues afterwards.

> 
> How do you think - we can move forward with one pool or better to have two 
> (Rx and Tx)?
I think one is enough, just split it, as long as there is no harm to performance.

> 
> Wouldn't it be reasonable to still have DT (or module) parameter to avoid 
> cpdma reconfiguration on system startup (pause/resume interfaces) (faster 
> boot)?
It would be, your choice, but it's not flexible.

> 
> How about cpdma re-allocation policy (with expectation that is shouldn't 
> happen too often)?
> - increasing of Rx, Tx will grow total number of physically allocated buffers 
> (total_desc_num)
> - decreasing of Rx, Tx will just change number of available buffers (no 
> memory re-allocation)
> 
> - OR 
> Can we move forward with current patch (total number of CPDMA CPPI 
> descriptors defined in DT) 
> and add ethtool -G ethX [rx N] [tx N] which will allow to re-split descs 
> between RX and TX?
No objections. It requires re-allocations anyway. A re-split of Rx and Tx
will not need a lot of changes as most of the code already exists.
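
If we go that way, the ethtool side would look roughly like the sketch
below; the handler names, the cpsw_common fields and the
cpsw_resplit_pool() helper are assumptions made up for illustration, not
existing driver code:

/* Hypothetical ethtool -g/-G hooks for re-splitting the common pool
 * between RX and TX descriptors (descs_pool_size, rx/tx_descs_num and
 * cpsw_resplit_pool() are invented for this sketch).
 */
static void cpsw_get_ringparam(struct net_device *ndev,
			       struct ethtool_ringparam *ering)
{
	struct cpsw_priv *priv = netdev_priv(ndev);
	struct cpsw_common *cpsw = priv->cpsw;

	ering->rx_max_pending = cpsw->descs_pool_size;
	ering->tx_max_pending = cpsw->descs_pool_size;
	ering->rx_pending = cpsw->rx_descs_num;
	ering->tx_pending = cpsw->tx_descs_num;
}

static int cpsw_set_ringparam(struct net_device *ndev,
			      struct ethtool_ringparam *ering)
{
	/* pause the interfaces, re-split the pool, resume */
	return cpsw_resplit_pool(ndev, ering->rx_pending, ering->tx_pending);
}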

> 
> 
> 
> -- 
> regards,
> -grygorii


Re: [PATCH 2/7] net: ethernet: ti: cpdma: fix desc re-queuing

2016-12-02 Thread Ivan Khoronzhuk
On Fri, Dec 02, 2016 at 10:45:07AM -0600, Grygorii Strashko wrote:
> 
> 
> On 12/02/2016 05:03 AM, Ivan Khoronzhuk wrote:
> > On Thu, Dec 01, 2016 at 05:34:27PM -0600, Grygorii Strashko wrote:
> >> The currently processing cpdma descriptor with EOQ flag set may
> >> contain two values in Next Descriptor Pointer field:
> >> - valid pointer: means CPDMA missed addition of new desc in queue;
> > It shouldn't happen in normal circumstances, right?
> 
> it might happen, because desc push compete with desc pop.
> You can check stats values:
> chan->stats.misqueued
> chan->stats.requeue
>  under different types of net-loads.
I've done this, of course.
By the overall logic the misqueued counter should cover all cases,
but that's not true.

> 
> TRM:
> "
> If the pNext pointer is initially NULL, and more packets need to be queued 
> for transmit, the software
> application may alter this pointer to point to a newly appended descriptor. 
> The EMAC will use the new
> pointer value and proceed to the next descriptor unless the pNext value has 
> already been read. In this
> latter case, the transmitter will halt on the transmit channel in question, 
> and the software application may
> restart it at that time. The software can detect this case by checking for an 
> end of queue (EOQ) condition
> flag on the updated packet descriptor when it is returned by the EMAC.
> "
That's true. There are no issues in the descriptor itself.
In the code there is no place to update next_desc except the submit function.

And this case is supposed to be caught here:

For submit:
cpdma_chan_submit()
    spin_lock_irqsave(&chan->lock, flags);
    ...
    --> __cpdma_chan_submit()
        ...
        --> desc_write(prev, hw_next, desc_dma); /* next pointer is updated here, it can be not in time */
        ...
        --> mode = desc_read(prev, hw_mode); /* pay attention, it's read after updating the next pointer */
        --> if ((mode & CPDMA_DESC_EOQ) &&
                (chan->state == CPDMA_STATE_ACTIVE)) { /* here it's checked whether the update was late */
            --> chan_write(chan, hdp, desc_dma); /* transmit is restarted here, if needed */

For process it only catches the fact of a late update, but it should have
been caught in submit() already:
__cpdma_chan_process()
    spin_lock_irqsave(&chan->lock, flags);
    --> if (mode & CPDMA_DESC_EOQ) /* transmit is restarted here, if needed */
        --> chan_write(chan, hdp, desc_dma); /* but w/o updating the next pointer */

It seems there is no place where hw_next is updated w/o updating hdp :-| in
the case of a late hw_next set. And that is strange. I know it happens, I've
checked it before, of course. Then I thought that maybe there is some problem
with the write ordering, thus getting out of sync, nothing more.

> 
> 
> > So, why does it happen only for egress channels? And does that mean
> > there is some resynchronization between the submit and process functions,
> > or is this a h/w issue?
> 
> no hw issues. this patch just removes one unnecessary I/O access 
No objections against the patch. Anyway, it's better than before.
I just want to know the real reason why it happens; maybe there is something else.

> 
> > 
> >> - null: no more descriptors in queue.
> >> In the later case, it's not required to write to HDP register, but now
> >> CPDMA does it.
> >>
> >> Hence, add additional check for Next Descriptor Pointer != null in
> >> cpdma_chan_process() function before writing in HDP register.
> >>
> >> Signed-off-by: Grygorii Strashko <grygorii.stras...@ti.com>
> >> ---
> >>  drivers/net/ethernet/ti/davinci_cpdma.c | 2 +-
> >>  1 file changed, 1 insertion(+), 1 deletion(-)
> >>
> >> diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
> >> b/drivers/net/ethernet/ti/davinci_cpdma.c
> >> index 0924014..379314f 100644
> >> --- a/drivers/net/ethernet/ti/davinci_cpdma.c
> >> +++ b/drivers/net/ethernet/ti/davinci_cpdma.c
> >> @@ -1152,7 +1152,7 @@ static int __cpdma_chan_process(struct cpdma_chan 
> >> *chan)
> >>chan->count--;
> >>chan->stats.good_dequeue++;
> >>  
> >> -  if (status & CPDMA_DESC_EOQ) {
> >> +  if ((status & CPDMA_DESC_EOQ) && chan->head) {
> >>chan->stats.requeue++;
> >>chan_write(chan, hdp, desc_phys(pool, chan->head));
> >>}
> >> -- 
> >> 2.10.1
> >>
> 
> -- 
> regards,
> -grygorii


Re: [PATCH 5/7] Documentation: DT: net: cpsw: allow to specify descriptors pool size

2016-12-02 Thread Ivan Khoronzhuk
On Thu, Dec 01, 2016 at 05:34:30PM -0600, Grygorii Strashko wrote:
> Add optional property "descs_pool_size" to specify buffer descriptor's
> pool size. The "descs_pool_size" should define total number of CPDMA
> CPPI descriptors to be used for both ingress/egress packets
> processing. If not specified - the default value 256 will be used
> which will allow to place descriptor's pool into the internal CPPI
> RAM on most of TI SoC.
> 
> Signed-off-by: Grygorii Strashko 
> ---
>  Documentation/devicetree/bindings/net/cpsw.txt | 5 +
>  1 file changed, 5 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/net/cpsw.txt 
> b/Documentation/devicetree/bindings/net/cpsw.txt
> index 5ad439f..b99d196 100644
> --- a/Documentation/devicetree/bindings/net/cpsw.txt
> +++ b/Documentation/devicetree/bindings/net/cpsw.txt
> @@ -35,6 +35,11 @@ Optional properties:
> For example in dra72x-evm, pcf gpio has to be
> driven low so that cpsw slave 0 and phy data
> lines are connected via mux.
> +- descs_pool_size: total number of CPDMA CPPI descriptors to be used for
> +   both ingress/egress packets processing. if not
> +   specified the default value 256 will be used which
> +   will allow to place descriptors pool into the
> +   internal CPPI RAM.
Does it describe h/w? Why not a module parameter? Or even something like the
ethtool number of ring entries?

>  
>  Slave Properties:
> -- 
> 2.10.1
> 


Re: [PATCH 2/7] net: ethernet: ti: cpdma: fix desc re-queuing

2016-12-02 Thread Ivan Khoronzhuk
On Thu, Dec 01, 2016 at 05:34:27PM -0600, Grygorii Strashko wrote:
> The currently processing cpdma descriptor with EOQ flag set may
> contain two values in Next Descriptor Pointer field:
> - valid pointer: means CPDMA missed addition of new desc in queue;
It shouldn't happen in normal circumstances, right?
So, why does it happen only for egress channels? And does that mean
there is some resynchronization between the submit and process functions,
or is this a h/w issue?

> - null: no more descriptors in queue.
> In the later case, it's not required to write to HDP register, but now
> CPDMA does it.
> 
> Hence, add additional check for Next Descriptor Pointer != null in
> cpdma_chan_process() function before writing in HDP register.
> 
> Signed-off-by: Grygorii Strashko 
> ---
>  drivers/net/ethernet/ti/davinci_cpdma.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
> b/drivers/net/ethernet/ti/davinci_cpdma.c
> index 0924014..379314f 100644
> --- a/drivers/net/ethernet/ti/davinci_cpdma.c
> +++ b/drivers/net/ethernet/ti/davinci_cpdma.c
> @@ -1152,7 +1152,7 @@ static int __cpdma_chan_process(struct cpdma_chan *chan)
>   chan->count--;
>   chan->stats.good_dequeue++;
>  
> - if (status & CPDMA_DESC_EOQ) {
> + if ((status & CPDMA_DESC_EOQ) && chan->head) {
>   chan->stats.requeue++;
>   chan_write(chan, hdp, desc_phys(pool, chan->head));
>   }
> -- 
> 2.10.1
> 


Re: [PATCH] net: ethernet: ti: cpsw: fix ASSERT_RTNL() warning during resume

2016-11-29 Thread Ivan Khoronzhuk
On Tue, Nov 29, 2016 at 04:27:03PM -0600, Grygorii Strashko wrote:
> netif_set_real_num_tx/rx_queues() are required to be called with rtnl_lock
> taken, otherwise ASSERT_RTNL() warning will be triggered - which happens
> now during System resume from suspend:
> cpsw_resume()
> |- cpsw_ndo_open()
>   |- netif_set_real_num_tx/rx_queues()
>  |- ASSERT_RTNL();
> 
> Hence, fix it by surrounding cpsw_ndo_open() by rtnl_lock/unlock() calls.
> 
> Cc: Dave Gerlach <d-gerl...@ti.com>
> Cc: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
> Fixes: commit e05107e6b747 ("net: ethernet: ti: cpsw: add multi queue 
> support")
> Signed-off-by: Grygorii Strashko <grygorii.stras...@ti.com>
Reviewed-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>

> ---
>  drivers/net/ethernet/ti/cpsw.c | 4 
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
> index ae1ec6a..fd6c03b 100644
> --- a/drivers/net/ethernet/ti/cpsw.c
> +++ b/drivers/net/ethernet/ti/cpsw.c
> @@ -2944,6 +2944,8 @@ static int cpsw_resume(struct device *dev)
>   /* Select default pin state */
>   pinctrl_pm_select_default_state(dev);
>  
> + /* shut up ASSERT_RTNL() warning in netif_set_real_num_tx/rx_queues */
> + rtnl_lock();
>   if (cpsw->data.dual_emac) {
>   int i;
>  
> @@ -2955,6 +2957,8 @@ static int cpsw_resume(struct device *dev)
>   if (netif_running(ndev))
>   cpsw_ndo_open(ndev);
>   }
> + rtnl_unlock();
> +
>   return 0;
>  }
>  #endif
> -- 
> 2.10.1
> 


[PATCH 3/5] net: ethernet: ti: cpsw: add .ndo to set per-queue rate

2016-11-29 Thread Ivan Khoronzhuk
This patch allows rate limiting of tx queues for a cpsw interface.
The rate is set in absolute Mb/s units and cannot exceed the speed
the interface is connected at.

The rate for a tx queue can be tested with:

ethtool -L eth0 rx 4 tx 4

echo 100 > /sys/class/net/eth0/queues/tx-0/tx_maxrate
echo 200 > /sys/class/net/eth0/queues/tx-1/tx_maxrate
echo 50 > /sys/class/net/eth0/queues/tx-2/tx_maxrate
echo 30 > /sys/class/net/eth0/queues/tx-3/tx_maxrate

tc qdisc add dev eth0 root handle 1: multiq

tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip\
dport 5001 0x action skbedit queue_mapping 0

tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip\
dport 5002 0x action skbedit queue_mapping 1

tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip\
dport 5003 0x action skbedit queue_mapping 2

tc filter add dev eth0 parent 1: protocol ip prio 1 u32 match ip\
dport 5004 0x action skbedit queue_mapping 3

iperf -c 192.168.2.1 -b 110M -p 5001 -f m -t 60
iperf -c 192.168.2.1 -b 215M -p 5002 -f m -t 60
iperf -c 192.168.2.1 -b 55M -p 5003 -f m -t 60
iperf -c 192.168.2.1 -b 32M -p 5004 -f m -t 60

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 87 ++
 1 file changed, 87 insertions(+)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index da40ea5..c102675 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1872,6 +1872,88 @@ static int cpsw_ndo_vlan_rx_kill_vid(struct net_device 
*ndev,
return ret;
 }
 
+static int cpsw_ndo_set_tx_maxrate(struct net_device *ndev, int queue, u32 
rate)
+{
+   struct cpsw_priv *priv = netdev_priv(ndev);
+   int tx_ch_num = ndev->real_num_tx_queues;
+   u32 consumed_rate, min_rate, max_rate;
+   struct cpsw_common *cpsw = priv->cpsw;
+   struct cpsw_slave *slave;
+   int ret, i, weight;
+   int rlim_num = 0;
+   u32 ch_rate;
+
+   ch_rate = netdev_get_tx_queue(ndev, queue)->tx_maxrate;
+   if (ch_rate == rate)
+   return 0;
+
+   if (cpsw->data.dual_emac)
+   slave = >slaves[priv->emac_port];
+   else
+   slave = >slaves[cpsw->data.active_slave];
+   max_rate = slave->phy->speed;
+
+   consumed_rate = 0;
+   for (i = 0; i < tx_ch_num; i++) {
+   if (i == queue)
+   ch_rate = rate;
+   else
+   ch_rate = netdev_get_tx_queue(ndev, i)->tx_maxrate;
+
+   if (!ch_rate)
+   continue;
+
+   rlim_num++;
+   consumed_rate += ch_rate;
+   }
+
+   if (consumed_rate > max_rate)
+   dev_info(priv->dev, "The common rate shouldn't be more than 
%dMbps",
+max_rate);
+
+   if (consumed_rate > max_rate) {
+   if (max_rate == 10 && consumed_rate <= 100) {
+   max_rate = 100;
+   } else if (max_rate <= 100 && consumed_rate <= 1000) {
+   max_rate = 1000;
+   } else {
+   dev_err(priv->dev, "The common rate cannot be more than 
%dMbps",
+   max_rate);
+   return -EINVAL;
+   }
+   }
+
+   if (consumed_rate > max_rate) {
+   dev_err(priv->dev, "The common rate cannot be more than %dMbps",
+   max_rate);
+   return -EINVAL;
+   }
+
+   rate *= 1000;
+   min_rate = cpdma_chan_get_min_rate(cpsw->dma);
+   if ((rate < min_rate && rate)) {
+   dev_err(priv->dev, "The common rate cannot be less than %dMbps",
+   min_rate);
+   return -EINVAL;
+   }
+
+   ret = pm_runtime_get_sync(cpsw->dev);
+   if (ret < 0) {
+   pm_runtime_put_noidle(cpsw->dev);
+   return ret;
+   }
+
+   if (rlim_num == tx_ch_num)
+   max_rate = consumed_rate;
+
+   weight = (rate * 100) / (max_rate * 1000);
+   cpdma_chan_set_weight(cpsw->txch[queue], weight);
+
+   ret = cpdma_chan_set_rate(cpsw->txch[queue], rate);
+   pm_runtime_put(cpsw->dev);
+   return ret;
+}
+
 static const struct net_device_ops cpsw_netdev_ops = {
.ndo_open   = cpsw_ndo_open,
.ndo_stop   = cpsw_ndo_stop,
@@ -1881,6 +1963,7 @@ static const struct net_device_ops cpsw_netdev_ops = {
.ndo_validate_addr  = eth_validate_addr,
.ndo_tx_timeout = cpsw_ndo_tx_timeout,
.ndo_set_rx_mode= cpsw_ndo_set_rx_mode,
+   .ndo_set_tx_maxrate = cpsw_ndo_set_tx_maxrate,
 #ifdef CONFIG_NET_POLL_CONTROLLER
.ndo_poll

[PATCH 4/5] net: ethernet: ti: cpsw: optimize end of poll cycle

2016-11-29 Thread Ivan Khoronzhuk
Check budget exhaustion only after it's updated, and update the
channel mask only once to keep the budget balanced between channels.
It's also needed for further changes.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 24 ++--
 1 file changed, 6 insertions(+), 18 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index c102675..8a70298 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -788,19 +788,13 @@ static int cpsw_tx_poll(struct napi_struct *napi_tx, int 
budget)
 
/* process every unprocessed channel */
ch_map = cpdma_ctrl_txchs_state(cpsw->dma);
-   for (ch = 0, num_tx = 0; num_tx < budget; ch_map >>= 1, ch++) {
-   if (!ch_map) {
-   ch_map = cpdma_ctrl_txchs_state(cpsw->dma);
-   if (!ch_map)
-   break;
-
-   ch = 0;
-   }
-
+   for (ch = 0, num_tx = 0; ch_map; ch_map >>= 1, ch++) {
if (!(ch_map & 0x01))
continue;
 
num_tx += cpdma_chan_process(cpsw->txch[ch], budget - num_tx);
+   if (num_tx >= budget)
+   break;
}
 
if (num_tx < budget) {
@@ -823,19 +817,13 @@ static int cpsw_rx_poll(struct napi_struct *napi_rx, int 
budget)
 
/* process every unprocessed channel */
ch_map = cpdma_ctrl_rxchs_state(cpsw->dma);
-   for (ch = 0, num_rx = 0; num_rx < budget; ch_map >>= 1, ch++) {
-   if (!ch_map) {
-   ch_map = cpdma_ctrl_rxchs_state(cpsw->dma);
-   if (!ch_map)
-   break;
-
-   ch = 0;
-   }
-
+   for (ch = 0, num_rx = 0; ch_map; ch_map >>= 1, ch++) {
if (!(ch_map & 0x01))
continue;
 
num_rx += cpdma_chan_process(cpsw->rxch[ch], budget - num_rx);
+   if (num_rx >= budget)
+   break;
}
 
if (num_rx < budget) {
-- 
2.7.4



[PATCH 5/5] net: ethernet: ti: cpsw: split tx budget according between channels

2016-11-29 Thread Ivan Khoronzhuk
Split device budget between channels according to channel rate.

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c | 159 +
 1 file changed, 130 insertions(+), 29 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index 8a70298..ec4873f 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -365,6 +365,11 @@ static inline void slave_write(struct cpsw_slave *slave, 
u32 val, u32 offset)
__raw_writel(val, slave->regs + offset);
 }
 
+struct cpsw_vector {
+   struct cpdma_chan *ch;
+   int budget;
+};
+
 struct cpsw_common {
struct device   *dev;
struct cpsw_platform_data   data;
@@ -380,8 +385,8 @@ struct cpsw_common {
int rx_packet_max;
struct cpsw_slave   *slaves;
struct cpdma_ctlr   *dma;
-   struct cpdma_chan   *txch[CPSW_MAX_QUEUES];
-   struct cpdma_chan   *rxch[CPSW_MAX_QUEUES];
+   struct cpsw_vector  txv[CPSW_MAX_QUEUES];
+   struct cpsw_vector  rxv[CPSW_MAX_QUEUES];
struct cpsw_ale *ale;
boolquirk_irq;
boolrx_irq_disabled;
@@ -741,7 +746,7 @@ static void cpsw_rx_handler(void *token, int len, int 
status)
return;
}
 
-   ch = cpsw->rxch[skb_get_queue_mapping(new_skb)];
+   ch = cpsw->rxv[skb_get_queue_mapping(new_skb)].ch;
ret = cpdma_chan_submit(ch, new_skb, new_skb->data,
skb_tailroom(new_skb), 0);
if (WARN_ON(ret < 0))
@@ -783,8 +788,9 @@ static irqreturn_t cpsw_rx_interrupt(int irq, void *dev_id)
 static int cpsw_tx_poll(struct napi_struct *napi_tx, int budget)
 {
u32 ch_map;
-   int num_tx, ch;
+   int num_tx, cur_budget, ch;
struct cpsw_common  *cpsw = napi_to_cpsw(napi_tx);
+   struct cpsw_vector  *txv;
 
/* process every unprocessed channel */
ch_map = cpdma_ctrl_txchs_state(cpsw->dma);
@@ -792,7 +798,13 @@ static int cpsw_tx_poll(struct napi_struct *napi_tx, int 
budget)
if (!(ch_map & 0x01))
continue;
 
-   num_tx += cpdma_chan_process(cpsw->txch[ch], budget - num_tx);
+   txv = >txv[ch];
+   if (unlikely(txv->budget > budget - num_tx))
+   cur_budget = budget - num_tx;
+   else
+   cur_budget = txv->budget;
+
+   num_tx += cpdma_chan_process(txv->ch, cur_budget);
if (num_tx >= budget)
break;
}
@@ -812,8 +824,9 @@ static int cpsw_tx_poll(struct napi_struct *napi_tx, int 
budget)
 static int cpsw_rx_poll(struct napi_struct *napi_rx, int budget)
 {
u32 ch_map;
-   int num_rx, ch;
+   int num_rx, cur_budget, ch;
struct cpsw_common  *cpsw = napi_to_cpsw(napi_rx);
+   struct cpsw_vector  *rxv;
 
/* process every unprocessed channel */
ch_map = cpdma_ctrl_rxchs_state(cpsw->dma);
@@ -821,7 +834,13 @@ static int cpsw_rx_poll(struct napi_struct *napi_rx, int 
budget)
if (!(ch_map & 0x01))
continue;
 
-   num_rx += cpdma_chan_process(cpsw->rxch[ch], budget - num_rx);
+   rxv = >rxv[ch];
+   if (unlikely(rxv->budget > budget - num_rx))
+   cur_budget = budget - num_rx;
+   else
+   cur_budget = rxv->budget;
+
+   num_rx += cpdma_chan_process(rxv->ch, cur_budget);
if (num_rx >= budget)
break;
}
@@ -1063,7 +1082,7 @@ static void cpsw_get_ethtool_stats(struct net_device 
*ndev,
cpsw_gstrings_stats[l].stat_offset);
 
for (ch = 0; ch < cpsw->rx_ch_num; ch++) {
-   cpdma_chan_get_stats(cpsw->rxch[ch], _stats);
+   cpdma_chan_get_stats(cpsw->rxv[ch].ch, _stats);
for (i = 0; i < CPSW_STATS_CH_LEN; i++, l++) {
p = (u8 *)_stats +
cpsw_gstrings_ch_stats[i].stat_offset;
@@ -1072,7 +1091,7 @@ static void cpsw_get_ethtool_stats(struct net_device 
*ndev,
}
 
for (ch = 0; ch < cpsw->tx_ch_num; ch++) {
-   cpdma_chan_get_stats(cpsw->txch[ch], _stats);
+   cpdma_chan_get_stats(cpsw->txv[ch].ch, _stats);
for (i = 0; i < CPSW_STATS_CH_LEN; i++, l++) {
p = (u8 *)_stats +
 

[PATCH 1/5] net: ethernet: ti: davinci_cpdma: add weight function for channels

2016-11-29 Thread Ivan Khoronzhuk
The weight of a channel is needed to split descriptors between
channels. The weight can depend on the maximum rate of the channels, the
maximum rate of an interface, or other factors. The channel weight is a
percentage and is independent for rx and tx channels.
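
Put another way, a weighted channel takes its percentage of the pool and a
channel with weight 0 gets an even share of whatever is left; as a
standalone model (invented name, not the driver function):

/* Standalone model of the rule above (not the driver code). */
int descs_for_chan(int weight, int pool_size, int free_descs, int free_chans)
{
	if (weight)
		return weight * pool_size / 100;

	/* weight == 0: split the remaining descriptors evenly */
	return free_chans ? free_descs / free_chans : 0;
}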

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/davinci_cpdma.c | 124 +---
 drivers/net/ethernet/ti/davinci_cpdma.h |   1 +
 2 files changed, 115 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index 56708a7..87456a9 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -122,6 +122,7 @@ struct cpdma_chan {
struct cpdma_chan_stats stats;
/* offsets into dmaregs */
int int_set, int_clear, td;
+   int weight;
 };
 
 struct cpdma_control_info {
@@ -474,29 +475,131 @@ u32 cpdma_ctrl_txchs_state(struct cpdma_ctlr *ctlr)
 }
 EXPORT_SYMBOL_GPL(cpdma_ctrl_txchs_state);
 
+static void cpdma_chan_set_descs(struct cpdma_ctlr *ctlr,
+int rx, int desc_num,
+int per_ch_desc)
+{
+   struct cpdma_chan *chan, *most_chan = NULL;
+   int desc_cnt = desc_num;
+   int most_dnum = 0;
+   int min, max, i;
+
+   if (!desc_num)
+   return;
+
+   if (rx) {
+   min = rx_chan_num(0);
+   max = rx_chan_num(CPDMA_MAX_CHANNELS);
+   } else {
+   min = tx_chan_num(0);
+   max = tx_chan_num(CPDMA_MAX_CHANNELS);
+   }
+
+   for (i = min; i < max; i++) {
+   chan = ctlr->channels[i];
+   if (!chan)
+   continue;
+
+   if (chan->weight)
+   chan->desc_num = (chan->weight * desc_num) / 100;
+   else
+   chan->desc_num = per_ch_desc;
+
+   desc_cnt -= chan->desc_num;
+
+   if (most_dnum < chan->desc_num) {
+   most_dnum = chan->desc_num;
+   most_chan = chan;
+   }
+   }
+   /* use remains */
+   most_chan->desc_num += desc_cnt;
+}
+
 /**
  * cpdma_chan_split_pool - Splits ctrl pool between all channels.
  * Has to be called under ctlr lock
  */
-static void cpdma_chan_split_pool(struct cpdma_ctlr *ctlr)
+static int cpdma_chan_split_pool(struct cpdma_ctlr *ctlr)
 {
+   int tx_per_ch_desc = 0, rx_per_ch_desc = 0;
struct cpdma_desc_pool *pool = ctlr->pool;
+   int free_rx_num = 0, free_tx_num = 0;
+   int rx_weight = 0, tx_weight = 0;
+   int tx_desc_num, rx_desc_num;
struct cpdma_chan *chan;
-   int ch_desc_num;
-   int i;
+   int i, tx_num = 0;
 
if (!ctlr->chan_num)
-   return;
-
-   /* calculate average size of pool slice */
-   ch_desc_num = pool->num_desc / ctlr->chan_num;
+   return 0;
 
-   /* split ctlr pool */
for (i = 0; i < ARRAY_SIZE(ctlr->channels); i++) {
chan = ctlr->channels[i];
-   if (chan)
-   chan->desc_num = ch_desc_num;
+   if (!chan)
+   continue;
+
+   if (is_rx_chan(chan)) {
+   if (!chan->weight)
+   free_rx_num++;
+   rx_weight += chan->weight;
+   } else {
+   if (!chan->weight)
+   free_tx_num++;
+   tx_weight += chan->weight;
+   tx_num++;
+   }
+   }
+
+   if (rx_weight > 100 || tx_weight > 100)
+   return -EINVAL;
+
+   tx_desc_num = (tx_num * pool->num_desc) / ctlr->chan_num;
+   rx_desc_num = pool->num_desc - tx_desc_num;
+
+   if (free_tx_num) {
+   tx_per_ch_desc = tx_desc_num - (tx_weight * tx_desc_num) / 100;
+   tx_per_ch_desc /= free_tx_num;
+   }
+   if (free_rx_num) {
+   rx_per_ch_desc = rx_desc_num - (rx_weight * rx_desc_num) / 100;
+   rx_per_ch_desc /= free_rx_num;
}
+
+   cpdma_chan_set_descs(ctlr, 0, tx_desc_num, tx_per_ch_desc);
+   cpdma_chan_set_descs(ctlr, 1, rx_desc_num, rx_per_ch_desc);
+
+   return 0;
+}
+
+/* cpdma_chan_set_weight - set weight of a channel in percentage.
+ * Tx and Rx channels have separate weights. That is 100% for RX
+ * and 100% for Tx. The weight is used to split cpdma resources
+ * in correct proportion required by the channels, including number
+ * of descriptors. The channel rate is not enough to know the
+ * weight of a channel as the maximum rate of an interface is needed.
+ * If weight = 0, then channel uses rest of descriptors leaved b

[PATCH 2/5] net: ethernet: ti: davinci_cpdma: add set rate for a channel

2016-11-29 Thread Ivan Khoronzhuk
The cpdma has 8 rate limited tx channels. This patch adds the
ability for the cpdma driver to use the 8 tx h/w shapers. If at least
one channel is not rate limited then it must have a higher number;
this is because the rate limited channels have to have higher priority
than the non rate limited channels. The channel priority is already
set in low-to-high direction, so when a new channel is added with
ethtool and it doesn't have a rate yet, it cannot affect the rate
limited channels. This can be useful for TSN streams and in general
whenever h/w rate limited channels are needed.
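
Per the rule above, the rate limited channels must be grouped at the lower
channel numbers: once a channel without a rate appears, no higher-numbered
channel may carry a rate. As a standalone sketch (invented name, not driver
code):

/* Standalone check of the ordering rule above (not driver code):
 * once a channel without a rate is met, no higher-numbered channel
 * may be rate limited.
 */
int rlim_order_ok(const unsigned int *rate_kbps, int ch_num)
{
	int i, seen_unlimited = 0;

	for (i = 0; i < ch_num; i++) {
		if (!rate_kbps[i])
			seen_unlimited = 1;
		else if (seen_unlimited)
			return 0;
	}
	return 1;
}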

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/davinci_cpdma.c | 329 +++-
 drivers/net/ethernet/ti/davinci_cpdma.h |   5 +
 2 files changed, 289 insertions(+), 45 deletions(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index 87456a9..c776e45 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -32,6 +32,7 @@
 #define CPDMA_RXCONTROL0x14
 #define CPDMA_SOFTRESET0x1c
 #define CPDMA_RXTEARDOWN   0x18
+#define CPDMA_TX_PRI0_RATE 0x30
 #define CPDMA_TXINTSTATRAW 0x80
 #define CPDMA_TXINTSTATMASKED  0x84
 #define CPDMA_TXINTMASKSET 0x88
@@ -68,6 +69,8 @@
 
 #define CPDMA_TEARDOWN_VALUE   0xfffc
 
+#define CPDMA_MAX_RLIM_CNT 16384
+
 struct cpdma_desc {
/* hardware fields */
u32 hw_next;
@@ -123,6 +126,8 @@ struct cpdma_chan {
/* offsets into dmaregs */
int int_set, int_clear, td;
int weight;
+   u32 rate_factor;
+   u32 rate;
 };
 
 struct cpdma_control_info {
@@ -135,6 +140,7 @@ struct cpdma_control_info {
 };
 
 static struct cpdma_control_info controls[] = {
+   [CPDMA_TX_RLIM]   = {CPDMA_DMACONTROL,  8,  0x, ACCESS_RW},
[CPDMA_CMD_IDLE]  = {CPDMA_DMACONTROL,  3,  1,  ACCESS_WO},
[CPDMA_COPY_ERROR_FRAMES] = {CPDMA_DMACONTROL,  4,  1,  ACCESS_RW},
[CPDMA_RX_OFF_LEN_UPDATE] = {CPDMA_DMACONTROL,  2,  1,  ACCESS_RW},
@@ -302,6 +308,186 @@ static int _cpdma_control_set(struct cpdma_ctlr *ctlr, 
int control, int value)
return 0;
 }
 
+static int _cpdma_control_get(struct cpdma_ctlr *ctlr, int control)
+{
+   struct cpdma_control_info *info = [control];
+   int ret;
+
+   if (!ctlr->params.has_ext_regs)
+   return -ENOTSUPP;
+
+   if (ctlr->state != CPDMA_STATE_ACTIVE)
+   return -EINVAL;
+
+   if (control < 0 || control >= ARRAY_SIZE(controls))
+   return -ENOENT;
+
+   if ((info->access & ACCESS_RO) != ACCESS_RO)
+   return -EPERM;
+
+   ret = (dma_reg_read(ctlr, info->reg) >> info->shift) & info->mask;
+   return ret;
+}
+
+/* cpdma_chan_set_chan_shaper - set shaper for a channel
+ * Has to be called under ctlr lock
+ */
+static int cpdma_chan_set_chan_shaper(struct cpdma_chan *chan)
+{
+   struct cpdma_ctlr *ctlr = chan->ctlr;
+   u32 rate_reg;
+   u32 rmask;
+   int ret;
+
+   if (!chan->rate)
+   return 0;
+
+   rate_reg = CPDMA_TX_PRI0_RATE + 4 * chan->chan_num;
+   dma_reg_write(ctlr, rate_reg, chan->rate_factor);
+
+   rmask = _cpdma_control_get(ctlr, CPDMA_TX_RLIM);
+   rmask |= chan->mask;
+
+   ret = _cpdma_control_set(ctlr, CPDMA_TX_RLIM, rmask);
+   return ret;
+}
+
+static int cpdma_chan_on(struct cpdma_chan *chan)
+{
+   struct cpdma_ctlr *ctlr = chan->ctlr;
+   struct cpdma_desc_pool  *pool = ctlr->pool;
+   unsigned long flags;
+
+   spin_lock_irqsave(>lock, flags);
+   if (chan->state != CPDMA_STATE_IDLE) {
+   spin_unlock_irqrestore(>lock, flags);
+   return -EBUSY;
+   }
+   if (ctlr->state != CPDMA_STATE_ACTIVE) {
+   spin_unlock_irqrestore(>lock, flags);
+   return -EINVAL;
+   }
+   dma_reg_write(ctlr, chan->int_set, chan->mask);
+   chan->state = CPDMA_STATE_ACTIVE;
+   if (chan->head) {
+   chan_write(chan, hdp, desc_phys(pool, chan->head));
+   if (chan->rxfree)
+   chan_write(chan, rxfree, chan->count);
+   }
+
+   spin_unlock_irqrestore(>lock, flags);
+   return 0;
+}
+
+/* cpdma_chan_fit_rate - set rate for a channel and check if it's possible.
+ * rmask - mask of rate limited channels
+ * Returns min rate in Kb/s
+ */
+static int cpdma_chan_fit_rate(struct cpdma_chan *ch, u32 rate,
+  u32 *rmask, int *prio_mode)
+{
+   struct cpdma_ctlr *ctlr = ch->ctlr;
+   struct cpdma_chan *chan;
+   u32 old_rate = ch->rate;
+   u32 new_rmask = 0;
+   int rlim = 1;
+ 

[PATCH 0/5] cpsw: add per channel shaper configuration

2016-11-29 Thread Ivan Khoronzhuk
This series is intended to allow the user to set the rate for per-channel
shapers at the cpdma level. This patchset has no impact on performance.
The rate can be set with:

echo 100 > /sys/class/net/ethX/queues/tx-0/tx_maxrate

Tested on am572xx
Based on net-next/master

Ivan Khoronzhuk (5):
  net: ethernet: ti: davinci_cpdma: add weight function for channels
  net: ethernet: ti: davinci_cpdma: add set rate for a channel
  net: ethernet: ti: cpsw: add .ndo to set per-queue rate
  net: ethernet: ti: cpsw: optimize end of poll cycle
  net: ethernet: ti: cpsw: split tx budget according between channels

 drivers/net/ethernet/ti/cpsw.c  | 264 +++
 drivers/net/ethernet/ti/davinci_cpdma.c | 453 
 drivers/net/ethernet/ti/davinci_cpdma.h |   6 +
 3 files changed, 624 insertions(+), 99 deletions(-)

-- 
2.7.4



[PATCH] net: ethernet: ti: davinci_cpdma: don't stop ctlr if it was stopped

2016-11-11 Thread Ivan Khoronzhuk
No need to stop the ctlr if it was already stopped, as doing so can cause
timeout warnings. Steps to reproduce:
- ifconfig eth0 down
- ethtool -l eth0 rx 8 tx 8
- ethtool -l eth0 rx 1 tx 1

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---

Based on net-next/master

 drivers/net/ethernet/ti/davinci_cpdma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index 56395ce..56708a7 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -387,7 +387,7 @@ int cpdma_ctlr_stop(struct cpdma_ctlr *ctlr)
int i;
 
spin_lock_irqsave(>lock, flags);
-   if (ctlr->state == CPDMA_STATE_TEARDOWN) {
+   if (ctlr->state != CPDMA_STATE_ACTIVE) {
spin_unlock_irqrestore(>lock, flags);
return -EINVAL;
}
-- 
1.9.1



[PATCH v2] net: ethernet: ti: davinci_cpdma: fix fixed prio cpdma ctlr configuration

2016-11-11 Thread Ivan Khoronzhuk
The dma ctlr register is reset to 0 during the cpdma soft reset, thus the
cpdma ctlr cannot be configured after cpdma is stopped. So the content of
the cpdma ctlr needs to be restored during the off/on procedure. The cpdma
ctlr off/on procedure happens on interface down/up and when changing the
number of channels with ethtool. In order not to restore the content in
many places, move it to cpdma_ctlr_start().

Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---

Based on net-next/master

Since v1:
- don't use redundant parameters for cpdma, make prio fixed to be constant

 drivers/net/ethernet/ti/cpsw.c  |   4 --
 drivers/net/ethernet/ti/davinci_cpdma.c | 102 +---
 2 files changed, 53 insertions(+), 53 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index b1ddf89..39d06e8 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1376,10 +1376,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
  ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);
 
if (!cpsw_common_res_usage_state(cpsw)) {
-   /* setup tx dma to fixed prio and zero offset */
-   cpdma_control_set(cpsw->dma, CPDMA_TX_PRIO_FIXED, 1);
-   cpdma_control_set(cpsw->dma, CPDMA_RX_BUFFER_OFFSET, 0);
-
/* disable priority elevation */
__raw_writel(0, >regs->ptype);
 
diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c 
b/drivers/net/ethernet/ti/davinci_cpdma.c
index c3f35f1..c2fb1b6 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -124,6 +124,29 @@ struct cpdma_chan {
int int_set, int_clear, td;
 };
 
+struct cpdma_control_info {
+   u32 reg;
+   u32 shift, mask;
+   int access;
+#define ACCESS_RO  BIT(0)
+#define ACCESS_WO  BIT(1)
+#define ACCESS_RW  (ACCESS_RO | ACCESS_WO)
+};
+
+static struct cpdma_control_info controls[] = {
+   [CPDMA_CMD_IDLE]  = {CPDMA_DMACONTROL,  3,  1,  ACCESS_WO},
+   [CPDMA_COPY_ERROR_FRAMES] = {CPDMA_DMACONTROL,  4,  1,  ACCESS_RW},
+   [CPDMA_RX_OFF_LEN_UPDATE] = {CPDMA_DMACONTROL,  2,  1,  ACCESS_RW},
+   [CPDMA_RX_OWNERSHIP_FLIP] = {CPDMA_DMACONTROL,  1,  1,  ACCESS_RW},
+   [CPDMA_TX_PRIO_FIXED] = {CPDMA_DMACONTROL,  0,  1,  ACCESS_RW},
+   [CPDMA_STAT_IDLE] = {CPDMA_DMASTATUS,   31, 1,  ACCESS_RO},
+   [CPDMA_STAT_TX_ERR_CODE]  = {CPDMA_DMASTATUS,   20, 0xf,ACCESS_RW},
+   [CPDMA_STAT_TX_ERR_CHAN]  = {CPDMA_DMASTATUS,   16, 0x7,ACCESS_RW},
+   [CPDMA_STAT_RX_ERR_CODE]  = {CPDMA_DMASTATUS,   12, 0xf,ACCESS_RW},
+   [CPDMA_STAT_RX_ERR_CHAN]  = {CPDMA_DMASTATUS,   8,  0x7,ACCESS_RW},
+   [CPDMA_RX_BUFFER_OFFSET]  = {CPDMA_RXBUFFOFS,   0,  0x, ACCESS_RW},
+};
+
 #define tx_chan_num(chan)  (chan)
 #define rx_chan_num(chan)  ((chan) + CPDMA_MAX_CHANNELS)
 #define is_rx_chan(chan)   ((chan)->chan_num >= CPDMA_MAX_CHANNELS)
@@ -253,6 +276,31 @@ static void cpdma_desc_free(struct cpdma_desc_pool *pool,
gen_pool_free(pool->gen_pool, (unsigned long)desc, pool->desc_size);
 }
 
+static int _cpdma_control_set(struct cpdma_ctlr *ctlr, int control, int value)
+{
+   struct cpdma_control_info *info = [control];
+   u32 val;
+
+   if (!ctlr->params.has_ext_regs)
+   return -ENOTSUPP;
+
+   if (ctlr->state != CPDMA_STATE_ACTIVE)
+   return -EINVAL;
+
+   if (control < 0 || control >= ARRAY_SIZE(controls))
+   return -ENOENT;
+
+   if ((info->access & ACCESS_WO) != ACCESS_WO)
+   return -EPERM;
+
+   val  = dma_reg_read(ctlr, info->reg);
+   val &= ~(info->mask << info->shift);
+   val |= (value & info->mask) << info->shift;
+   dma_reg_write(ctlr, info->reg, val);
+
+   return 0;
+}
+
 struct cpdma_ctlr *cpdma_ctlr_create(struct cpdma_params *params)
 {
struct cpdma_ctlr *ctlr;
@@ -324,6 +372,10 @@ int cpdma_ctlr_start(struct cpdma_ctlr *ctlr)
if (ctlr->channels[i])
cpdma_chan_start(ctlr->channels[i]);
}
+
+   _cpdma_control_set(ctlr, CPDMA_TX_PRIO_FIXED, 1);
+   _cpdma_control_set(ctlr, CPDMA_RX_BUFFER_OFFSET, 0);
+
spin_unlock_irqrestore(>lock, flags);
return 0;
 }
@@ -874,29 +926,6 @@ int cpdma_chan_int_ctrl(struct cpdma_chan *chan, bool enable)
return 0;
 }
 
-struct cpdma_control_info {
-   u32 reg;
-   u32 shift, mask;
-   int access;
-#define ACCESS_RO  BIT(0)
-#define ACCESS_WO  BIT(1)
-#define ACCESS_RW  (ACCESS_RO | ACCESS_WO)
-};
-
-static struct cpdma_control_info controls[] = {
-   [CPDMA_CMD_IDLE]  = {CPDMA_DMACONTROL,  3,  1

[PATCH v2] net: ethernet: ti: davinci_cpdma: free memory while channel destroy

2016-11-11 Thread Ivan Khoronzhuk
Memory is not freed during the channel create/destroy operation. It was
assumed that the memory would be freed on driver removal, but a channel
can be created and destroyed many times while changing the number of
channels with ethtool.

Reviewed-by: Grygorii Strashko <grygorii.stras...@ti.com>
Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>

---

Based on net-next/master

 drivers/net/ethernet/ti/davinci_cpdma.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
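
For illustration only, a user-space analogue (not kernel code) of why the
allocation is never released without this change: devm-managed memory, like
the channel allocated with devm_kzalloc() in cpdma_chan_create(), is freed
automatically only when the device itself goes away, so every ethtool-driven
create/destroy cycle pins one more allocation:

#include <stdio.h>
#include <stdlib.h>

struct chan { struct chan *next; };

static struct chan *devres;	/* devm-style list, freed only at device removal */
static int pinned;

static struct chan *chan_create(void)
{
	struct chan *ch = calloc(1, sizeof(*ch));

	if (!ch)
		exit(1);
	ch->next = devres;	/* allocation stays tied to the device */
	devres = ch;
	pinned++;
	return ch;
}

static void chan_destroy(struct chan *ch)
{
	(void)ch;		/* before the fix: nothing freed here */
}

int main(void)
{
	int i;

	for (i = 0; i < 100; i++)	/* e.g. 100 channel-count changes */
		chan_destroy(chan_create());
	printf("allocations still pinned: %d\n", pinned);	/* 100 */
	return 0;
}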

diff --git a/drivers/net/ethernet/ti/davinci_cpdma.c b/drivers/net/ethernet/ti/davinci_cpdma.c
index 05afc05..07fc92d 100644
--- a/drivers/net/ethernet/ti/davinci_cpdma.c
+++ b/drivers/net/ethernet/ti/davinci_cpdma.c
@@ -586,7 +586,7 @@ int cpdma_chan_destroy(struct cpdma_chan *chan)
cpdma_chan_stop(chan);
ctlr->channels[chan->chan_num] = NULL;
ctlr->chan_num--;
-
+   devm_kfree(ctlr->dev, chan);
cpdma_chan_split_pool(ctlr);
 
spin_unlock_irqrestore(&ctlr->lock, flags);
-- 
1.9.1



Re: [PATCH] net: ethernet: ti: davinci_cpdma: fix fixed prio cpdma ctlr configuration

2016-11-09 Thread Ivan Khoronzhuk



On 09.11.16 23:09, Grygorii Strashko wrote:



On 11/08/2016 07:10 AM, Ivan Khoronzhuk wrote:

The dma ctlr is reset to 0 during cpdma start, thus the cpdma ctlr


I assume this is because cpdma_ctlr_start() does a soft reset. Is that correct?

Probably not. I've seen that this register doesn't hold any previous settings
(just trash) after cpdma_ctlr_stop(), actually after the last channel is
stopped (inside cpdma_ctlr_stop()). Then cpdma_ctlr_start() just resets it to 0.
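
To make the ordering concrete, a small user-space model (hypothetical names,
not driver code): whatever is written into the control register before the
stop/start cycle is gone afterwards, because start performs the soft reset;
only defaults re-applied inside start itself survive:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint32_t dmacontrol;	/* stands in for the CPDMA control register */

static void ctlr_start(bool restore_defaults)
{
	dmacontrol = 0;			/* soft reset clears previous settings */
	if (restore_defaults)
		dmacontrol |= 1;	/* e.g. TX_PRIO_FIXED re-applied here */
}

int main(void)
{
	ctlr_start(false);
	dmacontrol |= 1;		/* set once, e.g. at ndo_open() time */

	/* later, e.g. a channel reconfiguration: stop + start again */
	ctlr_start(false);
	printf("without restore in start(): 0x%08x\n", dmacontrol);	/* 0x00000000 */

	dmacontrol |= 1;
	ctlr_start(true);
	printf("with restore in start():    0x%08x\n", dmacontrol);	/* 0x00000001 */
	return 0;
}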




cannot be configured after cpdma is stopped. So, restore the cpdma ctlr
content across the off/on procedure.

Based on net-next/master


^ remove it

sure





Signed-off-by: Ivan Khoronzhuk <ivan.khoronz...@linaro.org>
---
 drivers/net/ethernet/ti/cpsw.c  |   6 +-
 drivers/net/ethernet/ti/davinci_cpdma.c | 103 +---
 drivers/net/ethernet/ti/davinci_cpdma.h |   2 +
 3 files changed, 58 insertions(+), 53 deletions(-)

diff --git a/drivers/net/ethernet/ti/cpsw.c b/drivers/net/ethernet/ti/cpsw.c
index b1ddf89..4d04b8e 100644
--- a/drivers/net/ethernet/ti/cpsw.c
+++ b/drivers/net/ethernet/ti/cpsw.c
@@ -1376,10 +1376,6 @@ static int cpsw_ndo_open(struct net_device *ndev)
  ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);

if (!cpsw_common_res_usage_state(cpsw)) {
-   /* setup tx dma to fixed prio and zero offset */
-   cpdma_control_set(cpsw->dma, CPDMA_TX_PRIO_FIXED, 1);
-   cpdma_control_set(cpsw->dma, CPDMA_RX_BUFFER_OFFSET, 0);
-
/* disable priority elevation */
__raw_writel(0, &cpsw->regs->ptype);

@@ -2710,6 +2706,8 @@ static int cpsw_probe(struct platform_device *pdev)
dma_params.desc_align   = 16;
dma_params.has_ext_regs = true;
dma_params.desc_hw_addr = dma_params.desc_mem_phys;
+   dma_params.rxbuf_offset = 0;
+   dma_params.fixed_prio   = 1;


Do we really need these new parameters? Do you have plans to use other values?

I'm OK with this being static (like a number of the other dma_params); I see
no reason to use other values, at least for now. But the issue is not only in
these two parameters and not only in cpsw_ndo_open(). It also touches
cpsw_set_channels(), where a ctlr stop/start sequence is present. To avoid
copying cpdma_control_set(cpsw->dma, CPDMA_TX_PRIO_FIXED, 1)... into all such
places in ethernet drivers, it is better to let the cpdma manage its own
parameters...

The cpdma ctlr register holds a few more parameters (but only two of them are
set in cpsw); maybe there is a reason to save those as well. I actually hit
this problem when playing with switching channel shapers on/off:
unfortunately the cpdma ctlr holds that info too, and it was lost across the
off/on cycle (but I'm going to restore it in chan_start()).






--
Regards,
Ivan Khoronzhuk

