Re: [PATCH v3 1/3] resource: Use list_head to link resource sibling

2018-04-10 Thread Baoquan He
Hi Rob,

Thanks a lot for looking into this and for bringing Nico into this thread!

On 04/09/18 at 09:49am, Rob Herring wrote:
> +Nico who has been working on tinification of the kernel.
> 
> On Mon, Apr 9, 2018 at 4:08 AM, Baoquan He  wrote:
> > The struct resource uses singly linked list to link siblings. It's not
> > easy to do reverse iteration on sibling list. So replace it with list_head.
> 
> Why is reverse iteration needed?

This is the explanation I gave when Andrew reviewed the v1 post:
https://lkml.org/lkml/2018/3/23/78

We have long been using the kexec-tools utility to search for available
System RAM space, for loading kernel/initrd/purgatory, from top to
bottom. That is done in user space by parsing /proc/iomem. The later
added kexec_file interface does the searching in the kernel instead,
but it only searches System RAM regions bottom up, then takes an area
in the found RAM region from top to bottom. We need to unify these two
interfaces in behaviour, since from the users' point of view they are
essentially the same even though the implementations differ. As you
know, the singly linked list currently used to link resource siblings
makes searching from top to bottom very hard to do cleanly.

In the v1 post below, we made a temporary array to copy
iomem_resource's first level of children, then iterated the array in
reverse. Andrew suggested I try list_head after reviewing it. We could
optimize that patch to copy only resource pointers into the array, but
the approach is still ugly.
https://lkml.org/lkml/2018/3/21/952

Then Wei posted a patch he had made, linked below. He didn't mention
whether he also needs reverse iteration of resources. His approach is
O(n^2), and personally I find it hard to say whether it's better than
the v1 post.
https://lkml.org/lkml/2018/3/24/157

That's why I would like to try linking siblings with list_head.

> 
> > And code refactoring makes codes in kernel/resource.c more readable than
> > pointer operation.
> 
> resource_for_each_* helpers could solve that without the size increase.

Nico mentioned llist, the kernel's standard singly linked list, which
has very elegant code, but it still cannot satisfy the reverse
iteration requirement.

> 
> > Besides, type of member variables of struct resource, sibling and child, are
> > changed from 'struct resource *' to 'struct list_head'. Kernel size will
> > increase because of those statically defined struct resource instances.
> 
> The DT struct device_node also has the same tree structure with
> parent, child, sibling pointers and converting to list_head had been
> on the todo list for a while. ACPI also has some tree walking
> functions (drivers/acpi/acpica/pstree.c). Perhaps there should be a
> common tree struct and helpers defined either on top of list_head or a
> new struct if that saves some size.

We already have many trees of this kind in the kernel; obvious
examples are struct task_struct and struct task_group. They use
list_head already:
struct task_struct {
	...
	/* Real parent process: */
	struct task_struct __rcu	*real_parent;

	/* Recipient of SIGCHLD, wait4() reports: */
	struct task_struct __rcu	*parent;
	struct list_head		children;
	struct list_head		sibling;
	...
}

                     root
                  /   |   \
          (child)<->(child)<->(child)
                  /   |   \
  (grandchild)<->(grandchild)<->(grandchild)

What if we define a new struct, say tree_list or tlist?

struct tlist {
	void			*parent;
	struct list_head	sibling;
	struct list_head	child;
}

This is just a rough idea; I feel it might not bring much benefit
compared with direct list operations.

> 
> >
> > Signed-off-by: Baoquan He 
> > ---
> > v2->v3:
> >   Make sibling() and first_child() global so that they can be called
> >   out of kernel/resource.c to simplify code.
> 
> These should probably be inline functions. Or exported if not.
> 
> >
> >   Fix several code bugs found by kbuild test robot.
> >
> >   Got report from lkp that kernel size increased. It's on purpose since
> >   the type change of sibling and 

Re: [PATCH v3 1/3] resource: Use list_head to link resource sibling

2018-04-09 Thread Baoquan He
On 04/09/18 at 07:34pm, Dan Williams wrote:
> On Mon, Apr 9, 2018 at 7:10 PM, Baoquan He  wrote:
> > On 04/09/18 at 08:38am, Dan Williams wrote:
> >> On Mon, Apr 9, 2018 at 2:08 AM, Baoquan He  wrote:
> >> > The struct resource uses singly linked list to link siblings. It's not
> >> > easy to do reverse iteration on sibling list. So replace it with 
> >> > list_head.
> >> >
> >> > And code refactoring makes codes in kernel/resource.c more readable than
> >> > pointer operation.
> >> >
> >> > Besides, type of member variables of struct resource, sibling and child, 
> >> > are
> >> > changed from 'struct resource *' to 'struct list_head'. Kernel size will
> >> > increase because of those statically defined struct resource instances.
> >> >
> >> > Signed-off-by: Baoquan He 
> >> > ---
> >> [..]
> >> > diff --git a/kernel/resource.c b/kernel/resource.c
> >> > index e270b5048988..473c624606f9 100644
> >> > --- a/kernel/resource.c
> >> > +++ b/kernel/resource.c
> >> > @@ -31,6 +31,8 @@ struct resource ioport_resource = {
> >> > .start  = 0,
> >> > .end= IO_SPACE_LIMIT,
> >> > .flags  = IORESOURCE_IO,
> >> > +   .sibling = LIST_HEAD_INIT(ioport_resource.sibling),
> >> > +   .child  = LIST_HEAD_INIT(ioport_resource.child),
> >> >  };
> >> >  EXPORT_SYMBOL(ioport_resource);
> >> >
> >> > @@ -39,6 +41,8 @@ struct resource iomem_resource = {
> >> > .start  = 0,
> >> > .end= -1,
> >> > .flags  = IORESOURCE_MEM,
> >> > +   .sibling = LIST_HEAD_INIT(iomem_resource.sibling),
> >> > +   .child  = LIST_HEAD_INIT(iomem_resource.child),
> >> >  };
> >> >  EXPORT_SYMBOL(iomem_resource);
> >> >
> >> > @@ -57,20 +61,32 @@ static DEFINE_RWLOCK(resource_lock);
> >> >   * by boot mem after the system is up. So for reusing the resource entry
> >> >   * we need to remember the resource.
> >> >   */
> >> > -static struct resource *bootmem_resource_free;
> >> > +static struct list_head bootmem_resource_free = 
> >> > LIST_HEAD_INIT(bootmem_resource_free);
> >> >  static DEFINE_SPINLOCK(bootmem_resource_lock);
> >> >
> >> > +struct resource *sibling(struct resource *res)
> >> > +{
> >> > +   if (res->parent && !list_is_last(&res->sibling,
> >> > &res->parent->child))
> >> > +   return list_next_entry(res, sibling);
> >> > +   return NULL;
> >> > +}
> >> > +
> >> > +struct resource *first_child(struct list_head *head)
> >> > +{
> >> > +   return list_first_entry_or_null(head, struct resource, sibling);
> >> > +}
> >> > +
> >>
> >> These names are too generic for new global symbols. A "resource_"
> >> prefix is warranted.
> >
> > Thanks, sounds reasonable, will change them as resource_sibling() and
> > resource_first_child(). Or res_sibling()/res_1st_child()?
> >
> 
> resource_sibling() and resource_first_child()

OK, will change, thanks.

___
Linux-nvdimm mailing list
Linux-nvdimm@lists.01.org
https://lists.01.org/mailman/listinfo/linux-nvdimm


Re: [PATCH v3 1/3] resource: Use list_head to link resource sibling

2018-04-09 Thread Dan Williams
On Mon, Apr 9, 2018 at 7:10 PM, Baoquan He  wrote:
> On 04/09/18 at 08:38am, Dan Williams wrote:
>> On Mon, Apr 9, 2018 at 2:08 AM, Baoquan He  wrote:
>> > The struct resource uses singly linked list to link siblings. It's not
>> > easy to do reverse iteration on sibling list. So replace it with list_head.
>> >
>> > And code refactoring makes codes in kernel/resource.c more readable than
>> > pointer operation.
>> >
>> > Besides, type of member variables of struct resource, sibling and child, 
>> > are
>> > changed from 'struct resource *' to 'struct list_head'. Kernel size will
>> > increase because of those statically defined struct resource instances.
>> >
>> > Signed-off-by: Baoquan He 
>> > ---
>> [..]
>> > diff --git a/kernel/resource.c b/kernel/resource.c
>> > index e270b5048988..473c624606f9 100644
>> > --- a/kernel/resource.c
>> > +++ b/kernel/resource.c
>> > @@ -31,6 +31,8 @@ struct resource ioport_resource = {
>> > .start  = 0,
>> > .end= IO_SPACE_LIMIT,
>> > .flags  = IORESOURCE_IO,
>> > +   .sibling = LIST_HEAD_INIT(ioport_resource.sibling),
>> > +   .child  = LIST_HEAD_INIT(ioport_resource.child),
>> >  };
>> >  EXPORT_SYMBOL(ioport_resource);
>> >
>> > @@ -39,6 +41,8 @@ struct resource iomem_resource = {
>> > .start  = 0,
>> > .end= -1,
>> > .flags  = IORESOURCE_MEM,
>> > +   .sibling = LIST_HEAD_INIT(iomem_resource.sibling),
>> > +   .child  = LIST_HEAD_INIT(iomem_resource.child),
>> >  };
>> >  EXPORT_SYMBOL(iomem_resource);
>> >
>> > @@ -57,20 +61,32 @@ static DEFINE_RWLOCK(resource_lock);
>> >   * by boot mem after the system is up. So for reusing the resource entry
>> >   * we need to remember the resource.
>> >   */
>> > -static struct resource *bootmem_resource_free;
>> > +static struct list_head bootmem_resource_free = 
>> > LIST_HEAD_INIT(bootmem_resource_free);
>> >  static DEFINE_SPINLOCK(bootmem_resource_lock);
>> >
>> > +struct resource *sibling(struct resource *res)
>> > +{
>> > +   if (res->parent && !list_is_last(&res->sibling,
>> > &res->parent->child))
>> > +   return list_next_entry(res, sibling);
>> > +   return NULL;
>> > +}
>> > +
>> > +struct resource *first_child(struct list_head *head)
>> > +{
>> > +   return list_first_entry_or_null(head, struct resource, sibling);
>> > +}
>> > +
>>
>> These names are too generic for new global symbols. A "resource_"
>> prefix is warranted.
>
> Thanks, sounds reasonable, will change them as resource_sibling() and
> resource_first_child(). Or res_sibling()/res_1st_child()?
>

resource_sibling() and resource_first_child()


Re: [PATCH v3 1/3] resource: Use list_head to link resource sibling

2018-04-09 Thread Baoquan He
On 04/09/18 at 08:38am, Dan Williams wrote:
> On Mon, Apr 9, 2018 at 2:08 AM, Baoquan He  wrote:
> > The struct resource uses singly linked list to link siblings. It's not
> > easy to do reverse iteration on sibling list. So replace it with list_head.
> >
> > And code refactoring makes codes in kernel/resource.c more readable than
> > pointer operation.
> >
> > Besides, type of member variables of struct resource, sibling and child, are
> > changed from 'struct resource *' to 'struct list_head'. Kernel size will
> > increase because of those statically defined struct resource instances.
> >
> > Signed-off-by: Baoquan He 
> > ---
> [..]
> > diff --git a/kernel/resource.c b/kernel/resource.c
> > index e270b5048988..473c624606f9 100644
> > --- a/kernel/resource.c
> > +++ b/kernel/resource.c
> > @@ -31,6 +31,8 @@ struct resource ioport_resource = {
> > .start  = 0,
> > .end= IO_SPACE_LIMIT,
> > .flags  = IORESOURCE_IO,
> > +   .sibling = LIST_HEAD_INIT(ioport_resource.sibling),
> > +   .child  = LIST_HEAD_INIT(ioport_resource.child),
> >  };
> >  EXPORT_SYMBOL(ioport_resource);
> >
> > @@ -39,6 +41,8 @@ struct resource iomem_resource = {
> > .start  = 0,
> > .end= -1,
> > .flags  = IORESOURCE_MEM,
> > +   .sibling = LIST_HEAD_INIT(iomem_resource.sibling),
> > +   .child  = LIST_HEAD_INIT(iomem_resource.child),
> >  };
> >  EXPORT_SYMBOL(iomem_resource);
> >
> > @@ -57,20 +61,32 @@ static DEFINE_RWLOCK(resource_lock);
> >   * by boot mem after the system is up. So for reusing the resource entry
> >   * we need to remember the resource.
> >   */
> > -static struct resource *bootmem_resource_free;
> > +static struct list_head bootmem_resource_free = 
> > LIST_HEAD_INIT(bootmem_resource_free);
> >  static DEFINE_SPINLOCK(bootmem_resource_lock);
> >
> > +struct resource *sibling(struct resource *res)
> > +{
> > +   if (res->parent && !list_is_last(&res->sibling,
> > &res->parent->child))
> > +   return list_next_entry(res, sibling);
> > +   return NULL;
> > +}
> > +
> > +struct resource *first_child(struct list_head *head)
> > +{
> > +   return list_first_entry_or_null(head, struct resource, sibling);
> > +}
> > +
> 
> These names are too generic for new global symbols. A "resource_"
> prefix is warranted.

Thanks, sounds reasonable, will change them to resource_sibling() and
resource_first_child(). Or res_sibling()/res_1st_child()?



Re: [PATCH v3 1/3] resource: Use list_head to link resource sibling

2018-04-09 Thread Nicolas Pitre
On Mon, 9 Apr 2018, Rob Herring wrote:

> +Nico who has been working on tinification of the kernel.
> 
> On Mon, Apr 9, 2018 at 4:08 AM, Baoquan He  wrote:
> > The struct resource uses singly linked list to link siblings. It's not
> > easy to do reverse iteration on sibling list. So replace it with list_head.
> 
> Why is reverse iteration needed?
> 
> > And code refactoring makes codes in kernel/resource.c more readable than
> > pointer operation.
> 
> resource_for_each_* helpers could solve that without the size increase.
> 
> > Besides, type of member variables of struct resource, sibling and child, are
> > changed from 'struct resource *' to 'struct list_head'. Kernel size will
> > increase because of those statically defined struct resource instances.
> 
> The DT struct device_node also has the same tree structure with
> parent, child, sibling pointers and converting to list_head had been
> on the todo list for a while. ACPI also has some tree walking
> functions (drivers/acpi/acpica/pstree.c). Perhaps there should be a
> common tree struct and helpers defined either on top of list_head or a
> new struct if that saves some size.
> 
> >
> > Signed-off-by: Baoquan He 
> > ---
> > v2->v3:
> >   Make sibling() and first_child() global so that they can be called
> >   out of kernel/resource.c to simplify code.
> 
> These should probably be inline functions. Or exported if not.
> 
> >
> >   Fix several code bugs found by kbuild test robot.
> >
> >   Got report from lkp that kernel size increased. It's on purpose since
> >   the type change of sibling and child inside struct resource{}. For
> >   each struct resource variable, it will cost another 16 bytes on x86 64.
> 
> The size increase should be mentioned in the commit log. More
> generally, the size increase is 2 pointers.

Tiny kernels have much fewer resources anyway, and usually run on 
platforms with 32-bit pointers, so this probably won't matter much in 
the end.

This is if reverse iteration is actually needed as you say though.
Unless I'm mistaken, resource iteration doesn't happen that often, and 
not in critical paths either.

Making the code clearer while keeping the same structure size could be 
considered with the help of llist.h instead.


Nicolas


Re: [PATCH v3 1/3] resource: Use list_head to link resource sibling

2018-04-09 Thread Rob Herring
+Nico who has been working on tinification of the kernel.

On Mon, Apr 9, 2018 at 4:08 AM, Baoquan He  wrote:
> The struct resource uses singly linked list to link siblings. It's not
> easy to do reverse iteration on sibling list. So replace it with list_head.

Why is reverse iteration needed?

> And code refactoring makes codes in kernel/resource.c more readable than
> pointer operation.

resource_for_each_* helpers could solve that without the size increase.

> Besides, type of member variables of struct resource, sibling and child, are
> changed from 'struct resource *' to 'struct list_head'. Kernel size will
> increase because of those statically defined struct resource instances.

The DT struct device_node also has the same tree structure with
parent, child, sibling pointers and converting to list_head had been
on the todo list for a while. ACPI also has some tree walking
functions (drivers/acpi/acpica/pstree.c). Perhaps there should be a
common tree struct and helpers defined either on top of list_head or a
new struct if that saves some size.

>
> Signed-off-by: Baoquan He 
> ---
> v2->v3:
>   Make sibling() and first_child() global so that they can be called
>   out of kernel/resource.c to simplify code.

These should probably be inline functions. Or exported if not.

>
>   Fix several code bugs found by kbuild test robot.
>
>   Got report from lkp that kernel size increased. It's on purpose since
>   the type change of sibling and child inside struct resource{}. For
>   each struct resource variable, it will cost another 16 bytes on x86 64.

The size increase should be mentioned in the commit log. More
generally, the size increase is 2 pointers.

Rob

>
>  arch/sparc/kernel/ioport.c  |   2 +-
>  drivers/gpu/drm/drm_memory.c|   3 +-
>  drivers/gpu/drm/gma500/gtt.c|   5 +-
>  drivers/hv/vmbus_drv.c  |  52 
>  drivers/input/joystick/iforce/iforce-main.c |   4 +-
>  drivers/nvdimm/e820.c   |   2 +-
>  drivers/nvdimm/namespace_devs.c |   6 +-
>  drivers/nvdimm/nd.h |   5 +-
>  drivers/of/address.c|   4 +-
>  drivers/parisc/lba_pci.c|   4 +-
>  drivers/pci/host/vmd.c  |   8 +-
>  drivers/pci/probe.c |   2 +
>  drivers/pci/setup-bus.c |   2 +-
>  include/linux/ioport.h  |   6 +-
>  kernel/resource.c   | 193 
> ++--
>  15 files changed, 149 insertions(+), 149 deletions(-)
>
> diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
> index 3bcef9ce74df..4e91fbbbedcc 100644
> --- a/arch/sparc/kernel/ioport.c
> +++ b/arch/sparc/kernel/ioport.c
> @@ -669,7 +669,7 @@ static int sparc_io_proc_show(struct seq_file *m, void *v)
> struct resource *root = m->private, *r;
> const char *nm;
>
> -   for (r = root->child; r != NULL; r = r->sibling) {
> +   list_for_each_entry(r, >child, sibling) {
> if ((nm = r->name) == NULL) nm = "???";
> seq_printf(m, "%016llx-%016llx: %s\n",
> (unsigned long long)r->start,
> diff --git a/drivers/gpu/drm/drm_memory.c b/drivers/gpu/drm/drm_memory.c
> index 3c54044214db..53e300a993dc 100644
> --- a/drivers/gpu/drm/drm_memory.c
> +++ b/drivers/gpu/drm/drm_memory.c
> @@ -155,9 +155,8 @@ u64 drm_get_max_iomem(void)
> struct resource *tmp;
> resource_size_t max_iomem = 0;
>
> -   for (tmp = iomem_resource.child; tmp; tmp = tmp->sibling) {
> > +   list_for_each_entry(tmp, &iomem_resource.child, sibling)
> max_iomem = max(max_iomem,  tmp->end);
> -   }
>
> return max_iomem;
>  }
> diff --git a/drivers/gpu/drm/gma500/gtt.c b/drivers/gpu/drm/gma500/gtt.c
> index 3949b0990916..addd3bc009af 100644
> --- a/drivers/gpu/drm/gma500/gtt.c
> +++ b/drivers/gpu/drm/gma500/gtt.c
> @@ -565,7 +565,7 @@ int psb_gtt_init(struct drm_device *dev, int resume)
>  int psb_gtt_restore(struct drm_device *dev)
>  {
> struct drm_psb_private *dev_priv = dev->dev_private;
> -   struct resource *r = dev_priv->gtt_mem->child;
> +   struct resource *r;
> struct gtt_range *range;
> unsigned int restored = 0, total = 0, size = 0;
>
> @@ -573,14 +573,13 @@ int psb_gtt_restore(struct drm_device *dev)
> > mutex_lock(&dev_priv->gtt_mutex);
> psb_gtt_init(dev, 1);
>
> -   while (r != NULL) {
> > +   list_for_each_entry(r, &dev_priv->gtt_mem->child, sibling) {
> range = container_of(r, struct gtt_range, resource);
> if (range->pages) {
> psb_gtt_insert(dev, range, 1);
> size += range->resource.end - range->resource.start;
> restored++;
> }
> -   r = r->sibling;
> 

Re: [PATCH v3 1/3] resource: Use list_head to link resource sibling

2018-04-09 Thread Baoquan He
The struct resource uses a singly linked list to link siblings, which
makes reverse iteration over the sibling list hard. So replace it with
list_head.

The accompanying refactoring also makes the code in kernel/resource.c
more readable than the open-coded pointer operations.

Besides, the type of the struct resource members sibling and child is
changed from 'struct resource *' to 'struct list_head'. Kernel size
will increase because of the statically defined struct resource
instances.

Signed-off-by: Baoquan He 
---
v2->v3:
  Make sibling() and first_child() global so that they can be called
  out of kernel/resource.c to simplify code.

  Fix several code bugs found by kbuild test robot.

  Got report from lkp that kernel size increased. It's on purpose since
  the type change of sibling and child inside struct resource{}. For
  each struct resource variable, it will cost another 16 bytes on x86 64.

 arch/sparc/kernel/ioport.c  |   2 +-
 drivers/gpu/drm/drm_memory.c|   3 +-
 drivers/gpu/drm/gma500/gtt.c|   5 +-
 drivers/hv/vmbus_drv.c  |  52 
 drivers/input/joystick/iforce/iforce-main.c |   4 +-
 drivers/nvdimm/e820.c   |   2 +-
 drivers/nvdimm/namespace_devs.c |   6 +-
 drivers/nvdimm/nd.h |   5 +-
 drivers/of/address.c|   4 +-
 drivers/parisc/lba_pci.c|   4 +-
 drivers/pci/host/vmd.c  |   8 +-
 drivers/pci/probe.c |   2 +
 drivers/pci/setup-bus.c |   2 +-
 include/linux/ioport.h  |   6 +-
 kernel/resource.c   | 193 ++--
 15 files changed, 149 insertions(+), 149 deletions(-)

diff --git a/arch/sparc/kernel/ioport.c b/arch/sparc/kernel/ioport.c
index 3bcef9ce74df..4e91fbbbedcc 100644
--- a/arch/sparc/kernel/ioport.c
+++ b/arch/sparc/kernel/ioport.c
@@ -669,7 +669,7 @@ static int sparc_io_proc_show(struct seq_file *m, void *v)
struct resource *root = m->private, *r;
const char *nm;
 
-   for (r = root->child; r != NULL; r = r->sibling) {
+   list_for_each_entry(r, >child, sibling) {
if ((nm = r->name) == NULL) nm = "???";
seq_printf(m, "%016llx-%016llx: %s\n",
(unsigned long long)r->start,
diff --git a/drivers/gpu/drm/drm_memory.c b/drivers/gpu/drm/drm_memory.c
index 3c54044214db..53e300a993dc 100644
--- a/drivers/gpu/drm/drm_memory.c
+++ b/drivers/gpu/drm/drm_memory.c
@@ -155,9 +155,8 @@ u64 drm_get_max_iomem(void)
struct resource *tmp;
resource_size_t max_iomem = 0;
 
-   for (tmp = iomem_resource.child; tmp; tmp = tmp->sibling) {
+   list_for_each_entry(tmp, &iomem_resource.child, sibling)
max_iomem = max(max_iomem,  tmp->end);
-   }
 
return max_iomem;
 }
diff --git a/drivers/gpu/drm/gma500/gtt.c b/drivers/gpu/drm/gma500/gtt.c
index 3949b0990916..addd3bc009af 100644
--- a/drivers/gpu/drm/gma500/gtt.c
+++ b/drivers/gpu/drm/gma500/gtt.c
@@ -565,7 +565,7 @@ int psb_gtt_init(struct drm_device *dev, int resume)
 int psb_gtt_restore(struct drm_device *dev)
 {
struct drm_psb_private *dev_priv = dev->dev_private;
-   struct resource *r = dev_priv->gtt_mem->child;
+   struct resource *r;
struct gtt_range *range;
unsigned int restored = 0, total = 0, size = 0;
 
@@ -573,14 +573,13 @@ int psb_gtt_restore(struct drm_device *dev)
mutex_lock(&dev_priv->gtt_mutex);
psb_gtt_init(dev, 1);
 
-   while (r != NULL) {
+   list_for_each_entry(r, &dev_priv->gtt_mem->child, sibling) {
range = container_of(r, struct gtt_range, resource);
if (range->pages) {
psb_gtt_insert(dev, range, 1);
size += range->resource.end - range->resource.start;
restored++;
}
-   r = r->sibling;
total++;
}
mutex_unlock(&dev_priv->gtt_mutex);
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index bc65c4d79c1f..7ba8a25520d9 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -1413,9 +1413,8 @@ static acpi_status vmbus_walk_resources(struct 
acpi_resource *res, void *ctx)
 {
resource_size_t start = 0;
resource_size_t end = 0;
-   struct resource *new_res;
+   struct resource *new_res, *tmp;
struct resource **old_res = &hyperv_mmio;
-   struct resource **prev_res = NULL;
 
switch (res->type) {
 
@@ -1462,44 +1461,36 @@ static acpi_status vmbus_walk_resources(struct 
acpi_resource *res, void *ctx)
/*
 * If two ranges are adjacent, merge them.
 */
-   do {
-   if (!*old_res) {
-   *old_res = new_res;
-   break;
-   }
-
-   if (((*old_res)->end + 1) ==