net: mvneta: fix operation for 64K PAGE_SIZE
author		Marcin Wojtas <mw@semihalf.com>
		Tue, 11 Dec 2018 12:56:49 +0000 (13:56 +0100)
committer	Greg Kroah-Hartman <gregkh@linuxfoundation.org>
		Wed, 9 Jan 2019 16:38:36 +0000 (17:38 +0100)
commit		e15413d61d4e05f6d819fa165116b3d1781d2157
tree		a721305beb4cedce0be8fece0fd6a85a80139abf
parent		e97ecb19fee9eeb104a21bed14e9af78f5368117
net: mvneta: fix operation for 64K PAGE_SIZE

[ Upstream commit e735fd55b94bb48363737db3b1d57627c1a16b47 ]

Recent changes in the mvneta driver reworked the allocation and
handling of ingress buffers to use entire pages. Apart from that,
in the SWBM scenario the HW must be informed via the PRXDQS
(Port RX Descriptor Queue Size) register about the biggest possible
incoming buffer that can be propagated by RX descriptors.
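For context, below is a sketch of the helper that programs the
BufferSize field of PRXDQS. The register offset, field shift/mask
and the 8-byte granularity are stated from memory as assumptions
for illustration, not as a verbatim quote of mvneta.c:

  #define MVNETA_RXQ_SIZE_REG(q)        (0x1414 + ((q) << 2))
  #define MVNETA_RXQ_BUF_SIZE_SHIFT     19
  #define MVNETA_RXQ_BUF_SIZE_MASK      (0x1fff << 19)

  static void mvneta_rxq_buf_size_set(struct mvneta_port *pp,
                                      struct mvneta_rx_queue *rxq,
                                      int buf_size)
  {
          u32 val;

          val = mvreg_read(pp, MVNETA_RXQ_SIZE_REG(rxq->id));

          /* BufferSize is held in 8-byte units in a 13-bit field */
          val &= ~MVNETA_RXQ_BUF_SIZE_MASK;
          val |= (buf_size >> 3) << MVNETA_RXQ_BUF_SIZE_SHIFT;

          mvreg_write(pp, MVNETA_RXQ_SIZE_REG(rxq->id), val);
  }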

The BufferSize field used to be filled according to the
MTU-dependent pkt_size value. The later switch to PAGE_SIZE broke
RX operation when using 64K pages, as the field is simply too
small to hold such a value.
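Assuming the 13-bit, 8-byte-granularity encoding sketched above,
the largest buffer size the field can express is:

  0x1fff * 8 = 65528 bytes

which falls just short of a 64K (65536-byte) page, hence the
breakage.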

This patch conditionally limits the value written to the BufferSize
field of the PRXDQS register, depending on the PAGE_SIZE in use, as
sketched below. While at it, remove the now-unused frag_size field
of the mvneta_port structure.
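A minimal sketch of the resulting SWBM branch of the RX queue init
path; SZ_64K is the standard kernel size constant, while the
MTU-based fallback value shown here is an assumption standing in
for whatever the patch actually programs:

  if (!pp->bm_priv) {
          /* Set Offset */
          mvneta_rxq_offset_set(pp, rxq, 0);

          /* A 64K PAGE_SIZE does not fit in the BufferSize field,
           * so cap the programmed value instead of passing
           * PAGE_SIZE verbatim (fallback value illustrative).
           */
          mvneta_rxq_buf_size_set(pp, rxq,
                                  PAGE_SIZE < SZ_64K ?
                                  PAGE_SIZE :
                                  MVNETA_RX_BUF_SIZE(pp->pkt_size));
          mvneta_rxq_bm_disable(pp, rxq);
          mvneta_rxq_fill(pp, rxq, rxq->size);
  }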

Fixes: 562e2f467e71 ("net: mvneta: Improve the buffer allocation method for SWBM")
Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: David S. Miller <davem@davemloft.net>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
drivers/net/ethernet/marvell/mvneta.c