Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-19 Thread Jisheng Zhang
On Wed, 19 Dec 2018 10:24:37 +0100 Marcin Wojtas wrote:

> Hi Jisheng,
> 
> Wed, 19 Dec 2018 at 04:11 Jisheng Zhang  wrote:
> >
> >
> > On Mon, 17 Dec 2018 08:37:35 +0100 Thomas Petazzoni wrote:
> >  
> > > Hello Marcin,
> > >
> > > On Mon, 17 Dec 2018 00:25:58 +0100, Marcin Wojtas wrote:
> > >  
> > > > Thanks. Indeed, the patch is valid as a fix for current version of SW
> > > > BM. However, because this concept is broken, I will rework it and
> > > > submit patch/patches some time early 2019.  
> > >
> > > I know some people are working on XDP support in mvneta, and this work
> > > also needs to change parts of the memory allocation strategy in this
> > > driver. I'd suggest to get in touch with those folks. Antoine can give
> > > you the contact details, I don't have them off-hand. Or perhaps they
> > > will see this e-mail :-)  
> >
> > Great. So the problem with the current memory allocation is acknowledged;
> > glad to know a rework is underway.
> >
> > Besides the memory waste, there's another issue with commit 7e47fd84b56b:
> > it always allocates a page, so the rx buffer is mapped with dma_map_page(),
> > but the unmap routine isn't updated, so there's a mismatch here.
> >  
> 
> Indeed, despite the upcoming rework, which will be more complex, how
> about I submit a quick patch for this?

That's better. Thank you


Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-19 Thread Marcin Wojtas
Hi Jisheng,

Wed, 19 Dec 2018 at 04:11 Jisheng Zhang  wrote:
>
>
> On Mon, 17 Dec 2018 08:37:35 +0100 Thomas Petazzoni wrote:
>
> > Hello Marcin,
> >
> > On Mon, 17 Dec 2018 00:25:58 +0100, Marcin Wojtas wrote:
> >
> > > Thanks. Indeed, the patch is valid as a fix for current version of SW
> > > BM. However, because this concept is broken, I will rework it and
> > > submit patch/patches some time early 2019.
> >
> > I know some people are working on XDP support in mvneta, and this work
> > also needs to change parts of the memory allocation strategy in this
> > driver. I'd suggest to get in touch with those folks. Antoine can give
> > you the contact details, I don't have them off-hand. Or perhaps they
> > will see this e-mail :-)
>
> Great. So the problem with the current memory allocation is acknowledged;
> glad to know a rework is underway.
>
> Besides the memory waste, there's another issue with commit 7e47fd84b56b:
> it always allocates a page, so the rx buffer is mapped with dma_map_page(),
> but the unmap routine isn't updated, so there's a mismatch here.
>

Indeed, despite the upcoming rework, which will be more complex, how
about I submit a quick patch for this?

Best regards,
Marcin


Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-18 Thread Jisheng Zhang


On Mon, 17 Dec 2018 08:37:35 +0100 Thomas Petazzoni wrote:

> Hello Marcin,
> 
> On Mon, 17 Dec 2018 00:25:58 +0100, Marcin Wojtas wrote:
> 
> > Thanks. Indeed, the patch is valid as a fix for current version of SW
> > BM. However, because this concept is broken, I will rework it and
> > submit patch/patches some time early 2019.  
> 
> I know some people are working on XDP support in mvneta, and this work
> also needs to change parts of the memory allocation strategy in this
> driver. I'd suggest to get in touch with those folks. Antoine can give
> you the contact details, I don't have them off-hand. Or perhaps they
> will see this e-mail :-)

Great. So the problem with the current memory allocation is acknowledged;
glad to know a rework is underway.

Besides the memory waste, there's another issue with commit 7e47fd84b56b:
it always allocates a page, so the rx buffer is mapped with dma_map_page(),
but the unmap routine isn't updated, so there's a mismatch here.

thanks
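
For illustration, a minimal sketch of the DMA API pairing rule referred
to above (the helpers are generic placeholders, not the actual mvneta
refill/unmap paths):

/*
 * Minimal sketch, not mvneta code: a buffer mapped with dma_map_page()
 * must be released with dma_unmap_page(); pairing the page mapping with
 * dma_unmap_single() is exactly the kind of mismatch described above.
 */
#include <linux/dma-mapping.h>
#include <linux/mm.h>

static dma_addr_t rx_buf_map(struct device *dev, struct page *page)
{
	/* Map the whole page for device-to-CPU (RX) transfers. */
	return dma_map_page(dev, page, 0, PAGE_SIZE, DMA_FROM_DEVICE);
}

static void rx_buf_unmap(struct device *dev, dma_addr_t addr)
{
	/* Must mirror dma_map_page(): same size and direction. */
	dma_unmap_page(dev, addr, PAGE_SIZE, DMA_FROM_DEVICE);
}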


Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-16 Thread Thomas Petazzoni
Hello Marcin,

On Mon, 17 Dec 2018 00:25:58 +0100, Marcin Wojtas wrote:

> Thanks. Indeed, the patch is valid as a fix for current version of SW
> BM. However, because this concept is broken, I will rework it and
> submit patch/patches some time early 2019.

I know some people are working on XDP support in mvneta, and this work
also needs to change parts of the memory allocation strategy in this
driver. I'd suggest to get in touch with those folks. Antoine can give
you the contact details, I don't have them off-hand. Or perhaps they
will see this e-mail :-)

Best regards,

Thomas
-- 
Thomas Petazzoni, CTO, Bootlin
Embedded Linux and Kernel engineering
https://bootlin.com


Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-16 Thread Marcin Wojtas
Hi David,

Sun, 16 Dec 2018 at 21:41 David Miller  wrote:
>
> From: Marcin Wojtas 
> Date: Tue, 11 Dec 2018 13:56:49 +0100
>
> > Recent changes in the mvneta driver reworked allocation
> > and handling of the ingress buffers to use entire pages.
> > Apart from that in SW BM scenario the HW must be informed
> > via PRXDQS about the biggest possible incoming buffer
> > that can be propagated by RX descriptors.
> >
> > The BufferSize field was filled according to the MTU-dependent
> > pkt_size value. Later change to PAGE_SIZE broke RX operation
> > when using 64K pages, as the field is simply too small.
> >
> > This patch conditionally limits the value passed to the BufferSize
> > of the PRXDQS register, depending on the PAGE_SIZE used.
> > On the occasion remove now unused frag_size field of the mvneta_port
> > structure.
> >
> > Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation method for 
> > SWBM")
> > Signed-off-by: Marcin Wojtas 
>
> The discussion died on this, but the bug should be fixed.
>
> So in the short term I am applying this and queueing it up for v4.19
> -stable.
>
> Thanks.

Thanks. Indeed, the patch is valid as a fix for current version of SW
BM. However, because this concept is broken, I will rework it and
submit patch/patches some time early 2019.

Best regards,
Marcin


Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-16 Thread David Miller
From: Marcin Wojtas 
Date: Tue, 11 Dec 2018 13:56:49 +0100

> Recent changes in the mvneta driver reworked allocation
> and handling of the ingress buffers to use entire pages.
> Apart from that in SW BM scenario the HW must be informed
> via PRXDQS about the biggest possible incoming buffer
> that can be propagated by RX descriptors.
> 
> The BufferSize field was filled according to the MTU-dependent
> pkt_size value. Later change to PAGE_SIZE broke RX operation
> when using 64K pages, as the field is simply too small.
> 
> This patch conditionally limits the value passed to the BufferSize
> of the PRXDQS register, depending on the PAGE_SIZE used.
> On the occasion remove now unused frag_size field of the mvneta_port
> structure.
> 
> Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation method for 
> SWBM")
> Signed-off-by: Marcin Wojtas 

The discussion died on this, but the bug should be fixed.

So in the short term I am applying this and queueing it up for v4.19
-stable.

Thanks.


Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-12 Thread Marcin Wojtas
Hi Jisheng,

Wed, 12 Dec 2018 at 10:25 Jisheng Zhang  wrote:
>
> Hi Marcin,
>
> On Wed, 12 Dec 2018 09:22:57 +0100 Marcin Wojtas  wrote:
>
> > Hi Jisheng,
> >
> > Wed, 12 Dec 2018 at 03:48 Jisheng Zhang  wrote:
> > >
> > > Hi,
> > >
> > > On Tue, 11 Dec 2018 13:56:49 +0100 Marcin Wojtas wrote:
> > >
> > > > Recent changes in the mvneta driver reworked allocation
> > > > and handling of the ingress buffers to use entire pages.
> > > > Apart from that in SW BM scenario the HW must be informed
> > > > via PRXDQS about the biggest possible incoming buffer
> > > > that can be propagated by RX descriptors.
> > > >
> > > > The BufferSize field was filled according to the MTU-dependent
> > > > pkt_size value. Later change to PAGE_SIZE broke RX operation
> > > > when using 64K pages, as the field is simply too small.
> > > >
> > > > This patch conditionally limits the value passed to the BufferSize
> > > > of the PRXDQS register, depending on the PAGE_SIZE used.
> > > > On the occasion remove now unused frag_size field of the mvneta_port
> > > > structure.
> > > >
> > > > Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation
> > > > method for SWBM")
> > >
> > > IMHO, we'd better revert 562e2f467e71 and 7e47fd84b56bb
> > >
> > > The issue commit 562e2f467e71 wants to solve was introduced by commit
> > > 7e47fd84b56bb. It looks a bit weird to introduce a regression and then
> > > submit another commit (in the same patch set) to solve it.
> > >
> > > Per my test, after reverting 562e2f467e71 and 7e47fd84b56bb, I can't 
> > > reproduce
> > > what's claimed in commit 562e2f467e71 -- "With system having a small 
> > > memory
> > > (around 256MB), the state "cannot allocate memory to refill with new 
> > > buffer"
> > > is reach pretty quickly."
> >
> > I am not the one to decide about patch reverting. From what I
> > understand, commit 7e47fd84b56bb was introduced in order to increase
> > performance by replacing mvneta_frag_alloc/free with entire pages for
> > RX buffers. I have 2 questions:
> > - without reverting anything, do you observe memory allocation
> > problems during refill?
>
> I see memory waste: for a normal 1500 MTU, before commit 7e47fd84b56bb we
> allocate 1920 bytes for rx. After commit 7e47fd84b56bb, we always allocate
> PAGE_SIZE bytes; with PAGE_SIZE=4096, we waste 53% of the memory for each
> rx buf.
>
> > - are you able to check L2 forwarding numbers on top of the pure
> > mainline branch and after reverting the mentioned patches? I'm
> > wondering what would be the performance penalty (if any).
>
> I don't have the numbers. IMHO, the performance numbers should have
> been put into the commit msg when introducing commit 7e47fd84b56bb.
>

In general I agree with you about the memory waste and the lack of numbers
backing the 7e47fd84b56bb change. However, the improved refill mechanism
from 562e2f467e71 is IMO worth keeping, so simple reverts may not be the
best idea. We should focus on dropping the full-page-per-descriptor
dependency - I want to do it, but since it's a slightly bigger rework, I
cannot promise it will happen fast.

Best regards,
Marcin


Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-12 Thread Jisheng Zhang
Hi Marcin,

On Wed, 12 Dec 2018 09:22:57 +0100 Marcin Wojtas  wrote:

> Hi Jisheng,
> 
> Wed, 12 Dec 2018 at 03:48 Jisheng Zhang  wrote:
> >
> > Hi,
> >
> > On Tue, 11 Dec 2018 13:56:49 +0100 Marcin Wojtas wrote:
> >  
> > > Recent changes in the mvneta driver reworked allocation
> > > and handling of the ingress buffers to use entire pages.
> > > Apart from that in SW BM scenario the HW must be informed
> > > via PRXDQS about the biggest possible incoming buffer
> > > that can be propagated by RX descriptors.
> > >
> > > The BufferSize field was filled according to the MTU-dependent
> > > pkt_size value. Later change to PAGE_SIZE broke RX operation
> > > when using 64K pages, as the field is simply too small.
> > >
> > > This patch conditionally limits the value passed to the BufferSize
> > > of the PRXDQS register, depending on the PAGE_SIZE used.
> > > On the occasion remove now unused frag_size field of the mvneta_port
> > > structure.
> > >
> > > Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation
> > > method for SWBM")  
> >
> > IMHO, we'd better revert 562e2f467e71 and 7e47fd84b56bb
> >
> > The issue commit 562e2f467e71 wants to solve was introduced by commit 7e47fd84b56bb.
> > It looks a bit weird to introduce a regression and then submit another commit (in
> > the same patch set) to solve it.
> >
> > Per my test, after reverting 562e2f467e71 and 7e47fd84b56bb, I can't 
> > reproduce
> > what's claimed in commit 562e2f467e71 -- "With system having a small memory
> > (around 256MB), the state "cannot allocate memory to refill with new buffer"
> > is reach pretty quickly."  
> 
> I am not the one to decide about patch reverting. From what I
> understand, commit 7e47fd84b56bb was introduced in order to increase
> performance by replacing mvneta_frag_alloc/free with entire pages for
> RX buffers. I have 2 questions:
> - without reverting anything, do you observe memory allocation
> problems during refill?

I see memory waste: for a normal 1500 MTU, before commit 7e47fd84b56bb we
allocate 1920 bytes for rx. After commit 7e47fd84b56bb, we always allocate
PAGE_SIZE bytes; with PAGE_SIZE=4096, we waste 53% of the memory for each
rx buf.
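
As a quick sanity check on that 53% figure, an illustrative userspace
calculation (not driver code), using the 1920-byte pre-7e47fd84b56bb
buffer size quoted above:

#include <stdio.h>

int main(void)
{
	const int frag_size = 1920; /* rx buffer size for MTU 1500 before 7e47fd84b56bb */
	const int page_size = 4096; /* PAGE_SIZE on a 4K-page system */

	/* Fraction of each PAGE_SIZE rx buffer that goes unused. */
	printf("waste: %.1f%%\n", 100.0 * (page_size - frag_size) / page_size);
	return 0;
}

This prints "waste: 53.1%", consistent with the number above.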

> - are you able to check L2 forwarding numbers on top of the pure
> mainline branch and after reverting the mentioned patches? I'm
> wondering what would be the performance penalty (if any).

I don't have the numbers. IMHO, the performance numbers should have
been put into the commit msg when introducing commit 7e47fd84b56bb.

Thanks

> 
> Best regards,
> Marcin
> 
> >
> >  
> > >
> > > Signed-off-by: Marcin Wojtas 
> > > ---
> > >  drivers/net/ethernet/marvell/mvneta.c | 6 +++---
> > >  1 file changed, 3 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/drivers/net/ethernet/marvell/mvneta.c 
> > > b/drivers/net/ethernet/marvell/mvneta.c
> > > index e5397c8..61b2349 100644
> > > --- a/drivers/net/ethernet/marvell/mvneta.c
> > > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > > @@ -408,7 +408,6 @@ struct mvneta_port {
> > >   struct mvneta_pcpu_stats __percpu   *stats;
> > >
> > >   int pkt_size;
> > > - unsigned int frag_size;
> > >   void __iomem *base;
> > >   struct mvneta_rx_queue *rxqs;
> > >   struct mvneta_tx_queue *txqs;
> > > @@ -2905,7 +2904,9 @@ static void mvneta_rxq_hw_init(struct mvneta_port 
> > > *pp,
> > >   if (!pp->bm_priv) {
> > >   /* Set Offset */
> > >   mvneta_rxq_offset_set(pp, rxq, 0);
> > > - mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
> > > + mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
> > > + PAGE_SIZE :
> > > + MVNETA_RX_BUF_SIZE(pp->pkt_size));
> > >   mvneta_rxq_bm_disable(pp, rxq);
> > >   mvneta_rxq_fill(pp, rxq, rxq->size);
> > >   } else {
> > > @@ -3760,7 +3761,6 @@ static int mvneta_open(struct net_device *dev)
> > >   int ret;
> > >
> > >   pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
> > > - pp->frag_size = PAGE_SIZE;
> > >
> > >   ret = mvneta_setup_rxqs(pp);
> > >   if (ret)  
> >  



Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-12 Thread Marcin Wojtas
Hi Jisheng,

Wed, 12 Dec 2018 at 03:48 Jisheng Zhang  wrote:
>
> Hi,
>
> On Tue, 11 Dec 2018 13:56:49 +0100 Marcin Wojtas wrote:
>
> > Recent changes in the mvneta driver reworked allocation
> > and handling of the ingress buffers to use entire pages.
> > Apart from that in SW BM scenario the HW must be informed
> > via PRXDQS about the biggest possible incoming buffer
> > that can be propagated by RX descriptors.
> >
> > The BufferSize field was filled according to the MTU-dependent
> > pkt_size value. Later change to PAGE_SIZE broke RX operation
> > when using 64K pages, as the field is simply too small.
> >
> > This patch conditionally limits the value passed to the BufferSize
> > of the PRXDQS register, depending on the PAGE_SIZE used.
> > On the occasion remove now unused frag_size field of the mvneta_port
> > structure.
> >
> > Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation
> > method for SWBM")
>
> IMHO, we'd better revert 562e2f467e71 and 7e47fd84b56bb
>
> The issue commit 562e2f467e71 wants to solve was introduced by commit 7e47fd84b56bb.
> It looks a bit weird to introduce a regression and then submit another commit (in
> the same patch set) to solve it.
>
> Per my test, after reverting 562e2f467e71 and 7e47fd84b56bb, I can't reproduce
> what's claimed in commit 562e2f467e71 -- "With system having a small memory
> (around 256MB), the state "cannot allocate memory to refill with new buffer"
> is reach pretty quickly."

I am not the one to decide about patch reverting. From what I
understand, commit 7e47fd84b56bb was introduced in order to increase
performance by replacing mvneta_frag_alloc/free with entire pages for
RX buffers. I have 2 questions:
- without reverting anything, do you observe memory allocation
problems during refill?
- are you able to check L2 forwarding numbers on top of the pure
mainline branch and after reverting the mentioned patches? I'm
wondering what would be the performance penalty (if any).

Best regards,
Marcin

>
>
> >
> > Signed-off-by: Marcin Wojtas 
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 6 +++---
> >  1 file changed, 3 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c 
> > b/drivers/net/ethernet/marvell/mvneta.c
> > index e5397c8..61b2349 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -408,7 +408,6 @@ struct mvneta_port {
> >   struct mvneta_pcpu_stats __percpu   *stats;
> >
> >   int pkt_size;
> > - unsigned int frag_size;
> >   void __iomem *base;
> >   struct mvneta_rx_queue *rxqs;
> >   struct mvneta_tx_queue *txqs;
> > @@ -2905,7 +2904,9 @@ static void mvneta_rxq_hw_init(struct mvneta_port *pp,
> >   if (!pp->bm_priv) {
> >   /* Set Offset */
> >   mvneta_rxq_offset_set(pp, rxq, 0);
> > - mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
> > + mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
> > + PAGE_SIZE :
> > + MVNETA_RX_BUF_SIZE(pp->pkt_size));
> >   mvneta_rxq_bm_disable(pp, rxq);
> >   mvneta_rxq_fill(pp, rxq, rxq->size);
> >   } else {
> > @@ -3760,7 +3761,6 @@ static int mvneta_open(struct net_device *dev)
> >   int ret;
> >
> >   pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
> > - pp->frag_size = PAGE_SIZE;
> >
> >   ret = mvneta_setup_rxqs(pp);
> >   if (ret)
>


Re: [PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-11 Thread Jisheng Zhang
Hi,

On Tue, 11 Dec 2018 13:56:49 +0100 Marcin Wojtas wrote:

> Recent changes in the mvneta driver reworked allocation
> and handling of the ingress buffers to use entire pages.
> Apart from that in SW BM scenario the HW must be informed
> via PRXDQS about the biggest possible incoming buffer
> that can be propagated by RX descriptors.
> 
> The BufferSize field was filled according to the MTU-dependent
> pkt_size value. Later change to PAGE_SIZE broke RX operation
> when using 64K pages, as the field is simply too small.
> 
> This patch conditionally limits the value passed to the BufferSize
> of the PRXDQS register, depending on the PAGE_SIZE used.
> On the occasion remove now unused frag_size field of the mvneta_port
> structure.
> 
> Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation
> method for SWBM")

IMHO, we'd better revert 562e2f467e71 and 7e47fd84b56bb

The issue commit 562e2f467e71 wants to solve was introduced by commit 7e47fd84b56bb.
It looks a bit weird to introduce a regression and then submit another commit (in
the same patch set) to solve it.

Per my test, after reverting 562e2f467e71 and 7e47fd84b56bb, I can't reproduce
what's claimed in commit 562e2f467e71 -- "With system having a small memory
(around 256MB), the state "cannot allocate memory to refill with new buffer"
is reach pretty quickly."


> 
> Signed-off-by: Marcin Wojtas 
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/ethernet/marvell/mvneta.c 
> b/drivers/net/ethernet/marvell/mvneta.c
> index e5397c8..61b2349 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -408,7 +408,6 @@ struct mvneta_port {
>   struct mvneta_pcpu_stats __percpu   *stats;
>  
>   int pkt_size;
> - unsigned int frag_size;
>   void __iomem *base;
>   struct mvneta_rx_queue *rxqs;
>   struct mvneta_tx_queue *txqs;
> @@ -2905,7 +2904,9 @@ static void mvneta_rxq_hw_init(struct mvneta_port *pp,
>   if (!pp->bm_priv) {
>   /* Set Offset */
>   mvneta_rxq_offset_set(pp, rxq, 0);
> - mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
> + mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
> + PAGE_SIZE :
> + MVNETA_RX_BUF_SIZE(pp->pkt_size));
>   mvneta_rxq_bm_disable(pp, rxq);
>   mvneta_rxq_fill(pp, rxq, rxq->size);
>   } else {
> @@ -3760,7 +3761,6 @@ static int mvneta_open(struct net_device *dev)
>   int ret;
>  
>   pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
> - pp->frag_size = PAGE_SIZE;
>  
>   ret = mvneta_setup_rxqs(pp);
>   if (ret)



[PATCH net] net: mvneta: fix operation for 64K PAGE_SIZE

2018-12-11 Thread Marcin Wojtas
Recent changes in the mvneta driver reworked allocation
and handling of the ingress buffers to use entire pages.
Apart from that, in the SW BM scenario the HW must be informed
via PRXDQS about the biggest possible incoming buffer
that can be propagated by the RX descriptors.

The BufferSize field was filled according to the MTU-dependent
pkt_size value. The later change to PAGE_SIZE broke RX operation
when using 64K pages, as the field is simply too small.

This patch conditionally limits the value passed to the BufferSize
field of the PRXDQS register, depending on the PAGE_SIZE used.
On this occasion, remove the now-unused frag_size field of the
mvneta_port structure.

Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation method for SWBM")

Signed-off-by: Marcin Wojtas 
---
 drivers/net/ethernet/marvell/mvneta.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c 
b/drivers/net/ethernet/marvell/mvneta.c
index e5397c8..61b2349 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -408,7 +408,6 @@ struct mvneta_port {
struct mvneta_pcpu_stats __percpu   *stats;
 
int pkt_size;
-   unsigned int frag_size;
void __iomem *base;
struct mvneta_rx_queue *rxqs;
struct mvneta_tx_queue *txqs;
@@ -2905,7 +2904,9 @@ static void mvneta_rxq_hw_init(struct mvneta_port *pp,
if (!pp->bm_priv) {
/* Set Offset */
mvneta_rxq_offset_set(pp, rxq, 0);
-   mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
+   mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
+   PAGE_SIZE :
+   MVNETA_RX_BUF_SIZE(pp->pkt_size));
mvneta_rxq_bm_disable(pp, rxq);
mvneta_rxq_fill(pp, rxq, rxq->size);
} else {
@@ -3760,7 +3761,6 @@ static int mvneta_open(struct net_device *dev)
int ret;
 
pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
-   pp->frag_size = PAGE_SIZE;
 
ret = mvneta_setup_rxqs(pp);
if (ret)
-- 
2.7.4
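
For readers skimming the hunk above, a minimal standalone sketch of the
clamping idea (illustrative only; the helper name is made up, and the
real driver programs the value through mvneta_rxq_buf_size_set() as in
the diff):

#include <linux/mm.h>     /* PAGE_SIZE */
#include <linux/sizes.h>  /* SZ_64K */

/*
 * Illustrative helper, not part of the driver: the BufferSize field of
 * PRXDQS cannot hold a full 64K value, so on 64K-page systems the
 * MTU-derived buffer size is programmed instead of PAGE_SIZE.
 */
static inline unsigned int rxq_buf_size_sketch(unsigned int mtu_derived_buf_size)
{
	return PAGE_SIZE < SZ_64K ? PAGE_SIZE : mtu_derived_buf_size;
}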