From: Linus Torvalds <[email protected]>

-------------------
This is a commit scheduled for the next v2.6.34 longterm release.
If you see a problem with using this for longterm, please comment.
-------------------

commit 1b1f693d7ad6d193862dcb1118540a030c5e761f upstream.

As reported by Thomas Pollet, the rdma page counting can overflow.  We
get the rdma sizes in 64-bit unsigned entities, but then limit it to
UINT_MAX bytes and shift them down to pages (so with a possible "+1" for
an unaligned address).

So each individual page count fits comfortably in an 'unsigned int' (not
even close to overflowing into signed), but as they are added up, they
might end up resulting in a signed return value.  Which would be wrong.

Catch the case of tot_pages turning negative, and return the appropriate
error code.

[PG: In 34, var names are slightly different, 1b1f6's tot_pages is 34's
nr_pages, and 1b1f6's nr_pages is 34's nr; so map accordingly.]

Reported-by: Thomas Pollet <[email protected]>
Signed-off-by: Linus Torvalds <[email protected]>
Signed-off-by: Andy Grover <[email protected]>
Signed-off-by: David S. Miller <[email protected]>
Signed-off-by: Paul Gortmaker <[email protected]>
---
 net/rds/rdma.c |    9 +++++++++
 1 files changed, 9 insertions(+), 0 deletions(-)

diff --git a/net/rds/rdma.c b/net/rds/rdma.c
index cf0dfa7..3481931 100644
--- a/net/rds/rdma.c
+++ b/net/rds/rdma.c
@@ -498,6 +498,15 @@ static struct rds_rdma_op *rds_rdma_prepare(struct rds_sock *rs,
 		max_pages = max(nr, max_pages);
 		nr_pages += nr;
+
+		/*
+		 * nr for one entry is limited to (UINT_MAX>>PAGE_SHIFT)+1,
+		 * so nr_pages cannot overflow without first going negative.
+		 */
+		if ((int)nr_pages < 0) {
+			ret = -EINVAL;
+			goto out;
+		}
 	}
 
 	pages = kcalloc(max_pages, sizeof(struct page *), GFP_KERNEL);
-- 
1.7.4.4

_______________________________________________
stable mailing list
[email protected]
http://linux.kernel.org/mailman/listinfo/stable
