The original in-kernel suspend (swsusp) frees the in-memory hibernation
image before powering off the machine.  s2disk doesn't, so there is
_much_ less free memory when it tries to power off.

This is a gratuitous difference.  The userspace suspend interface
/dev/snapshot only allows the hibernation image to be read once.
Once the s2disk program has read the last page, we can free the entire
image.

This also avoids a hang after writing the hibernation image.  The hang
was triggered by commit 5f8dcc21211a3d4e3a7a5ca366b469fb88117f61
"page-allocator: split per-cpu list into one-list-per-migrate-type":

   [top of trace lost due to screen height]
   ? shrink_zone
   ? try_to_free_pages
   ? isolate_pages_global
   ? __alloc_pages_nodemask
   ? kthread
   ? __get_free_pages
   ? copy_process
   ? kthread
   ? do_fork
   ...

   INFO: task s2disk:2036 blocked for more than 120 seconds
   ...
   Call Trace:
   ...
   ? wait_for_common
   ? default_wake_function
   ? kthread_create
   ? worker_thread
   ? create_workqueue_thread
   ? worker_thread
   ? __create_workqueue_key
   ? stop_machine_create
   ? disable_nonboot_cpus
   ? hibernation_platform_enter
   ? snapshot_ioctl
   ...
   ? sys_ioctl
   ...

Signed-off-by: Alan Jenkins <[email protected]>
---
kernel/power/user.c |    4 ++++
1 files changed, 4 insertions(+), 0 deletions(-)

diff --git a/kernel/power/user.c b/kernel/power/user.c
index bf0014d..94d0210 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -165,6 +165,10 @@ static ssize_t snapshot_read(struct file *filp, char __user *buf,
                        res = -EFAULT;
                else
                        *offp = data->handle.offset;
+       } else {
+               swsusp_free();
+               memset(&data->handle, 0, sizeof(struct snapshot_handle));
+               data->ready = 0;
        }

 Unlock:
--
1.6.3.3


