Fix rtas_log_read() to re-check its buffer after sleeping.  A competing reader
may have swiped the error we were trying to retrieve in the window between
being woken up and retaking the lock, yet we currently return and account for
an event without checking that one is still there.

Any positive result from testing rtas_log_size is invalidated as soon as
rtasd_log_lock is dropped, and is meaningless if the lock is not held at all.

It is not correct to rely on userspace doing the right thing, i.e. to assume
that only one userspace process (rtasd) will attempt to read the log at any
one time.
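
For illustration, the pattern being applied is roughly the following.  This is
a minimal sketch with placeholder names (log_lock, log_size, log_wait,
consume_one_event), not the actual rtasd.c code:

	#include <linux/spinlock.h>
	#include <linux/wait.h>

	static DEFINE_SPINLOCK(log_lock);
	static DECLARE_WAIT_QUEUE_HEAD(log_wait);
	static int log_size;

	/* Hypothetical placeholder: a real reader would copy the event out. */
	static int consume_one_event(void)
	{
		log_size--;
		return 0;
	}

	static int read_one_event(void)
	{
		unsigned long flags;
		int error;

		spin_lock_irqsave(&log_lock, flags);

		/*
		 * The buffer may have been drained by a competing reader
		 * while we were asleep, so the emptiness test must be
		 * repeated every time the lock is retaken - hence a while
		 * loop rather than a single if.
		 */
		while (log_size == 0) {
			spin_unlock_irqrestore(&log_lock, flags);

			/* Sleep until a producer bumps log_size and wakes us. */
			error = wait_event_interruptible(log_wait, log_size);
			if (error)
				return error;	/* -ERESTARTSYS on signal */

			spin_lock_irqsave(&log_lock, flags);
		}

		/* Still holding the lock, so log_size > 0 is trustworthy here. */
		error = consume_one_event();

		spin_unlock_irqrestore(&log_lock, flags);
		return error;
	}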

Signed-off-by: David Howells <[EMAIL PROTECTED]>
---

 arch/powerpc/platforms/pseries/rtasd.c |   14 +++++++-------
 1 files changed, 7 insertions(+), 7 deletions(-)


diff --git a/arch/powerpc/platforms/pseries/rtasd.c b/arch/powerpc/platforms/pseries/rtasd.c
index c9ffd8c..1ce132f 100644
--- a/arch/powerpc/platforms/pseries/rtasd.c
+++ b/arch/powerpc/platforms/pseries/rtasd.c
@@ -298,16 +298,16 @@ static ssize_t rtas_log_read(struct file * file, char __user * buf,
 
        spin_lock_irqsave(&rtasd_log_lock, s);
        /* if it's 0, then we know we got the last one (the one in NVRAM) */
-       if (rtas_log_size == 0 && logging_enabled)
+       while (rtas_log_size == 0 && logging_enabled) {
                nvram_clear_error_log();
-       spin_unlock_irqrestore(&rtasd_log_lock, s);
-
 
-       error = wait_event_interruptible(rtas_log_wait, rtas_log_size);
-       if (error)
-               goto out;
+               spin_unlock_irqrestore(&rtasd_log_lock, s);
+               error = wait_event_interruptible(rtas_log_wait, rtas_log_size);
+               if (error)
+                       goto out;
+               spin_lock_irqsave(&rtasd_log_lock, s);
+       }
 
-       spin_lock_irqsave(&rtasd_log_lock, s);
        offset = rtas_error_log_buffer_max * (rtas_log_start & LOG_NUMBER_MASK);
        memcpy(tmp, &rtas_log_buf[offset], count);
 
