RDMA/mlx5: Remove dead check for EAGAIN after alloc_mr_from_cache()
author		Jason Gunthorpe <jgg@nvidia.com>	Mon, 14 Sep 2020 11:26:49 +0000 (14:26 +0300)
committer	Jason Gunthorpe <jgg@nvidia.com>	Fri, 18 Sep 2020 16:02:42 +0000 (13:02 -0300)
alloc_mr_from_cache() no longer returns EAGAIN, so this check is just dead
code now.
Fixes: aad719dcf379 ("RDMA/mlx5: Allow MRs to be created in the cache synchronously")
Link: https://lore.kernel.org/r/20200914112653.345244-2-leon@kernel.org
Signed-off-by: Leon Romanovsky <leonro@nvidia.com>
Signed-off-by: Jason Gunthorpe <jgg@nvidia.com>
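
For context, the caller only needs to distinguish "got an MR from the cache"
from "fall back to the regular path", so the generic IS_ERR() test is enough
once -EAGAIN can no longer be returned. Below is a minimal, self-contained
userspace sketch of the kernel's ERR_PTR/IS_ERR/PTR_ERR idiom; the helper
definitions and alloc_from_cache() are simplified stand-ins for illustration,
not the real <linux/err.h> or mlx5 code.

#include <stdio.h>
#include <errno.h>

/* Simplified stand-ins for the kernel's <linux/err.h> helpers (assumption:
 * the real macros differ in detail, this only illustrates the idiom). */
#define MAX_ERRNO 4095
static inline void *ERR_PTR(long err) { return (void *)err; }
static inline long PTR_ERR(const void *ptr) { return (long)ptr; }
static inline int IS_ERR(const void *ptr)
{
	return (unsigned long)ptr >= (unsigned long)-MAX_ERRNO;
}

/* Hypothetical callee standing in for alloc_mr_from_cache(): after the
 * change referenced in the Fixes: tag it either succeeds or returns a
 * real error pointer, never ERR_PTR(-EAGAIN). */
static void *alloc_from_cache(int fail)
{
	static int object;

	if (fail)
		return ERR_PTR(-ENOMEM);	/* any error except -EAGAIN */
	return &object;
}

int main(void)
{
	void *mr = alloc_from_cache(1);

	/* Dead branch: the callee can no longer hand back -EAGAIN, so this
	 * specific comparison never matches. */
	if (PTR_ERR(mr) == -EAGAIN)
		printf("cache empty, retry\n");

	/* Live branch: the generic IS_ERR() test covers every error and lets
	 * the caller fall back to the non-cache allocation path. */
	if (IS_ERR(mr)) {
		printf("cache alloc failed (%ld), falling back\n", PTR_ERR(mr));
		mr = NULL;
	}

	return 0;
}

The hunk below switches mlx5_ib_reg_user_mr() to the same pattern: any cache
failure simply clears mr so the function proceeds with the regular (non-cache)
registration path.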
drivers/infiniband/hw/mlx5/mr.c
diff --git a/drivers/infiniband/hw/mlx5/mr.c b/drivers/infiniband/hw/mlx5/mr.c
index 82cee96..3b040f6 100644
--- a/drivers/infiniband/hw/mlx5/mr.c
+++ b/drivers/infiniband/hw/mlx5/mr.c
@@ -1392,10 +1392,8 @@ struct ib_mr *mlx5_ib_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
 	if (order <= mr_cache_max_order(dev) && use_umr) {
 		mr = alloc_mr_from_cache(pd, umem, virt_addr, length, ncont,
 					 page_shift, order, access_flags);
-		if (PTR_ERR(mr) == -EAGAIN) {
-			mlx5_ib_dbg(dev, "cache empty for order %d\n", order);
+		if (IS_ERR(mr))
 			mr = NULL;
-		}
 	} else if (!MLX5_CAP_GEN(dev->mdev, umr_extended_translation_offset)) {
 		if (access_flags & IB_ACCESS_ON_DEMAND) {
 			err = -EINVAL;