Re: [zathura] [Zathura PATCH] RFC

2013-04-02 Thread Sebastian Ramacher
On 2013-03-25 04:14:25, Marwan Tanager wrote:
  Also, I remember that I hit a mem-leak recently when working on a
  poster made with LaTeX. I was recompiling the same page over and over
  again with LaTeX, and the memory consumption went very high. I was
  wondering if this is because zathura was remembering the pages from the
  previous versions of the recompiled document. Maybe zathura should free
  memory on document reloads?
 
 Yes, this is an actual bug that is reported at http://bugs.pwmt.org/issue274 
 and it should be fixed. Maybe you're the one who reported it!

Everything except the file monitor and the zathura_t instance should be released
and re-created (if I remember that correctly), i.e. the document gets closed and
re-opened. zathura is just a PITA to run with valgrind. There are so many
GTK-related errors, which we can't fix, that valgrind's output is pretty much
useless.

We might have missed a free, g_object_unref, etc. somewhere, of course. So
ideas to find memory leaks without valgrind, or a usable suppression file for
valgrind, are very welcome.
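One commonly suggested combination for taming GLib/GTK noise under valgrind is to disable GLib's slice allocator and let valgrind generate suppression blocks on a first run (the file names here are illustrative):

```shell
# Make GLib's slice allocator fall back to plain malloc so valgrind can
# track every allocation, and keep freed memory valgrind-friendly:
G_SLICE=always-malloc G_DEBUG=gc-friendly \
  valgrind --leak-check=full --gen-suppressions=all \
           --log-file=vg.log zathura document.pdf

# Copy the generated suppression blocks from vg.log into gtk.supp,
# then pass them back on subsequent runs:
G_SLICE=always-malloc G_DEBUG=gc-friendly \
  valgrind --leak-check=full --suppressions=gtk.supp zathura document.pdf
```

This won't make every one-time GTK allocation disappear, but it usually shrinks the noise enough that zathura's own leaks stand out.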

  Also, I thought a while ago, that zathura's rendering experience could
  be even further enhanced by caching the pages, which have not been seen
  yet. For example at one moment we see a row of pages and zathura caches
  the row above and below the one we currently view. Then if we scroll in
  either direction, the page is already rendered, which might be visually
  more pleasing than waiting for a large page to come up. Of course, this
  should be implemented in such a way, that we do not recache the pages,
  which are already among the ones which have been looked at recently.
 
 That's exactly what I've just been thinking about: simply a sliding window of a
 configurable number of rows centered at the currently visible one and moving
 with us as we scroll around.
 
 Combined with the above caching, pages outside of this window shouldn't be
 freed unless they 1) aren't in the cache, or 2) are in the cache, but need to
 be invalidated in order to make room for the currently visible ones if they
 aren't already in the cache.

That'd be nice to have. Instead of a configurable number of rows we'd need
something that also loads pages left and right of the current page if
pages-per-row is larger than 1.
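As a rough sketch of that idea (the names and the row arithmetic are illustrative, not zathura's actual API), the window test could take pages-per-row into account directly, which also keeps the pages left and right of the current one:

```c
#include <stdbool.h>

/* Hypothetical sketch of the sliding-window test; none of these names
 * are zathura's actual API. */
typedef struct {
  int pages_per_row; /* zathura's pages-per-row setting */
  int window_rows;   /* rows to keep above and below the visible row */
} cache_policy_t;

/* Row that a zero-based page index lands in for the given layout. */
static int page_row(int page, int pages_per_row) {
  return page / pages_per_row;
}

/* A page stays cached if its row lies within window_rows of the current
 * row. Because the window is expressed in rows, a pages-per-row setting
 * larger than 1 automatically keeps the pages left and right of the
 * current one as well. */
bool page_in_window(const cache_policy_t *p, int current_page, int page) {
  int diff = page_row(page, p->pages_per_row) -
             page_row(current_page, p->pages_per_row);
  if (diff < 0) {
    diff = -diff;
  }
  return diff <= p->window_rows;
}
```

Anything outside the window would then be a candidate for the freeing/invalidating rules Marwan describes above.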

Regards
-- 
Sebastian Ramacher


___
zathura mailing list
zathura@lists.pwmt.org
http://lists.pwmt.org/mailman/listinfo/zathura


Re: [zathura] [Zathura PATCH] RFC

2013-04-02 Thread Marwan Tanager
On Tue, Apr 02, 2013 at 09:45:44PM +0200, Sebastian Ramacher wrote:
 On 2013-03-25 04:14:25, Marwan Tanager wrote:
   Also, I remember that I hit a mem-leak recently when working on a
   poster made with LaTeX. I was recompiling the same page over and over
   again with LaTeX, and the memory consumption went very high. I was
   wondering if this is because zathura was remembering the pages from the
   previous versions of the recompiled document. Maybe zathura should free
   memory on document reloads?
  
  Yes, this is an actual bug that is reported at http://bugs.pwmt.org/issue274
  and it should be fixed. Maybe you're the one who reported it!
 
 Everything except the file monitor and the zathura_t instance should be released
 and re-created (if I remember that correctly), i.e. the document gets closed and
 re-opened. zathura is just a PITA to run with valgrind. There are so many
 GTK-related errors, which we can't fix, that valgrind's output is pretty much
 useless.
 
 We might have missed a free, g_object_unref, etc. somewhere, of course. So
 ideas to find memory leaks without valgrind, or a usable suppression file for
 valgrind, are very welcome.

I had actually done a quick scan of the code as far as freeing the memory upon
document closing is concerned, but wasn't able to spot anything missing.
Everything seems to be freed: the poppler pages, the widgets, the cairo
surfaces. But, of course, I'm not omniscient, and I could have missed
something.

While thinking about this issue, I got an idea that might be related: could
this memory be allocated to the target surfaces of the widgets (i.e., the
targets of the cairo contexts that are passed to zathura_page_widget_draw), as
opposed to the source surfaces stored in zathura_page_widget_private_s, which
are used to draw on those targets, and which now seem to be freed when the
cache entries are invalidated?

Now, with a reasonable cache size, the rate of memory consumption is much less
than it was before, which confirms that the source surfaces are actually freed.
But if memory were allocated only to the source surfaces, the memory
consumption should stay fixed as soon as ${page-cache-size} pages hit the
screen. This is not what actually happens: the memory consumption keeps
increasing until each page in the document has been viewed at least once. At
that point, no more memory is consumed while scrolling.

This makes me think that if the seemingly leaking memory is allocated to the 
target surfaces of the widgets, and GTK doesn't free this memory when freeing 
the widgets, then this might explain all of the above issues (creeping memory 
while scrolling, and after document reloads), and unfortunately, in this case, 
I think we wouldn't be able to do much about it.  So, I hope I'm wrong ;)


--
Marwan Tanager


Re: [zathura] [Zathura PATCH] RFC

2013-03-24 Thread Marwan Tanager
On Sun, Mar 24, 2013 at 03:34:33PM +0100, Sebastian Ramacher wrote:
 On 2013-03-24 07:28:03, Marwan Tanager wrote:
  This patch tries to solve a serious problem with zathura leaking a horrendous
  amount of memory during scrolling through a document.
  
  The problem is that the current way of periodically reclaiming the cairo
  surfaces of the page widgets is not working at all. The only case in which it
  seems to work is the first time (and only the first time) the timeout
  function purge_pages is triggered, and even this first time sometimes doesn't
  free the memory if it happened while scrolling through the document.
  
  To see it in action, try opening a large document (e.g. 500+ pages), and
  scroll through it by dragging the scrollbar or continuously pressing shift-j
  or shift-k. Meanwhile, watch the output of 'top $(pidof zathura)' until you
  end up with a throttled system riddled with continuous swapping. Zathura will
  fiercely eat all the physical system memory while scrolling through a
  relatively big document.
  
  I've spent a while trying to figure out why the memory isn't freed after the
  first time the timeout function is triggered, but quite frankly, I couldn't
  manage to get to any reasonable conclusion. Injecting debug messages in
  zathura_page_widget_update_surface shows that all the pages but the currently
  visible one are eventually reclaimed. This means that cairo_surface_destroy
  must have been called on every surface of a page that exceeded its
  end-of-life threshold, but without actually doing its job of freeing the
  surface memory.
  
  So, this patch bypasses this mechanism by freeing a surface as soon as its
  associated page becomes invisible. Try repeating the same experiment as above
  with this patch, and watch the rate of memory consumption drop dramatically.
  I scrolled the entire length of a 1300-page document and the end result was
  500 MB. And I don't think this is memory leaked by cairo surface pixel
  buffers, because it never increases after reaching this limit. Maybe it's
  some internal allocations done by GTK or GDK. This doesn't seem very
  far-fetched for an application with 1300+ GTK widgets in this case.
 
 No, please don't. There is a reason why we decided to keep the pages in
 memory: consider a document with some pages that take ages to render ([1] has
 an extreme example of such a file, but any document with larger images should
 do). In 0.0.8.x scrolling was a PITA with such documents: you had to wait
 until the page was fully rendered to do anything else.

Personally, I haven't encountered such extreme cases, so I hadn't thought of
this, but of course it's something that should be considered.

 Keeping the pages in memory for some time is our attempt to improve the
 scrolling experience in that case. Scrolling back and forth between two
 pages becomes a much better experience since you don't have to wait
 until the page is re-rendered to see it. So it's a compromise between
 memory usage and time spent to re-render pages.

Of course, it's an obvious trade-off, but when it is as unbalanced as it is
here (I don't want to sacrifice all of my system memory and crash running
applications just to have a good experience scrolling around a PDF document),
we should reconsider how things are done.

 The code to destroy the cairo surfaces periodically is not optimal, but
 if anything, we should fix this portion of the code. Maybe a timing-based
 solution is not a good idea and there are better ways to approach this,
 but I really don't want to ruin the above use case again.
 
 Ideas and patches to improve the page reclaiming code are very welcome.

That's why I sent the patch in the first place. I didn't actually intend it to 
be applied, but just to shed some light on the source of the problem.



Marwan