From: Dan Williams
Date: Sat, 5 Jan 2019 19:45:13 +0000 (-0800)
Subject: dax: Use non-exclusive wait in wait_entry_unlocked()
X-Git-Tag: rel_imx_4.19.35_1.1.0~7498
X-Git-Url: https://git.somdevices.com/?a=commitdiff_plain;h=9621ea6b9c4c8268d715785759678b401ff36509;p=linux.git

dax: Use non-exclusive wait in wait_entry_unlocked()

commit d8a706414af4827fc0b4b1c0c631c607351938b9 upstream.

get_unlocked_entry() uses an exclusive wait because it is guaranteed to
eventually obtain the lock and follow on with an unlock + wakeup cycle.
The wait_entry_unlocked() path does not have the same guarantee. Rather
than open-code an extra wakeup, just switch to a non-exclusive wait.

Cc: Matthew Wilcox
Reported-by: Linus Torvalds
Reviewed-by: Jan Kara
Signed-off-by: Dan Williams
Signed-off-by: Sasha Levin
---

diff --git a/fs/dax.c b/fs/dax.c
index 415605fafaeb..09fa70683c41 100644
--- a/fs/dax.c
+++ b/fs/dax.c
@@ -275,18 +275,16 @@ static void wait_entry_unlocked(struct address_space *mapping, pgoff_t index,
 	ewait.wait.func = wake_exceptional_entry_func;
 	wq = dax_entry_waitqueue(mapping, index, entry, &ewait.key);
 
-	prepare_to_wait_exclusive(wq, &ewait.wait, TASK_UNINTERRUPTIBLE);
+	/*
+	 * Unlike get_unlocked_entry() there is no guarantee that this
+	 * path ever successfully retrieves an unlocked entry before an
+	 * inode dies. Perform a non-exclusive wait in case this path
+	 * never successfully performs its own wake up.
+	 */
+	prepare_to_wait(wq, &ewait.wait, TASK_UNINTERRUPTIBLE);
 	xa_unlock_irq(&mapping->i_pages);
 	schedule();
 	finish_wait(wq, &ewait.wait);
-
-	/*
-	 * Entry lock waits are exclusive. Wake up the next waiter since
-	 * we aren't sure we will acquire the entry lock and thus wake
-	 * the next waiter up on unlock.
-	 */
-	if (waitqueue_active(wq))
-		__wake_up(wq, TASK_NORMAL, 1, &ewait.key);
 }
 
 static void unlock_mapping_entry(struct address_space *mapping, pgoff_t index)
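
Note (not part of the patch): the rationale above hinges on how wake_up() treats
the two waiter types: it wakes every non-exclusive waiter but stops after waking a
single exclusive one, so an exclusive waiter is only safe if the woken task is
guaranteed to pass the wakeup along, as get_unlocked_entry() is. The sketch below
illustrates that difference in isolation; the waitqueue, flag, and function names
(example_wq, example_ready, example_wait_shared, example_wait_exclusive) are
hypothetical and do not appear in fs/dax.c.

	/* Illustrative sketch only; identifiers here are hypothetical. */
	#include <linux/wait.h>
	#include <linux/sched.h>

	static DECLARE_WAIT_QUEUE_HEAD(example_wq);
	static bool example_ready;

	/* Non-exclusive wait: every wake_up(&example_wq) reaches this waiter. */
	static void example_wait_shared(void)
	{
		DEFINE_WAIT(wait);

		prepare_to_wait(&example_wq, &wait, TASK_UNINTERRUPTIBLE);
		if (!example_ready)
			schedule();
		finish_wait(&example_wq, &wait);
	}

	/*
	 * Exclusive wait: wake_up() stops after waking one exclusive waiter,
	 * so this pattern is only safe when the woken task is guaranteed to
	 * issue a follow-on wake_up() of its own; otherwise other waiters
	 * can be left sleeping, which is the problem this patch avoids.
	 */
	static void example_wait_exclusive(void)
	{
		DEFINE_WAIT(wait);

		prepare_to_wait_exclusive(&example_wq, &wait, TASK_UNINTERRUPTIBLE);
		if (!example_ready)
			schedule();
		finish_wait(&example_wq, &wait);

		/* ... do the work, then hand the wakeup to the next waiter ... */
		wake_up(&example_wq);
	}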