Re: [matplotlib-devel] path simplification with nan (or move_to)

2008-10-08 Thread Eric Firing
The patch in that last message of mine was clearly not quite right.  I 
have gone through several iterations, and have seemed tantalizingly 
close, but I still don't have it right yet.  I need to leave it alone 
for a while, but I do think it is important to get this working 
correctly ASAP--certainly it is for my own work, at least.

What happens with a nan should be somewhat similar to what happens with 
clipping, so perhaps one could take advantage of part of the clipping 
logic, but I have not looked at this approach closely.

Eric


Eric Firing wrote:
> Michael Droettboom wrote:
>> Eric Firing wrote:
>>> Mike, John,
>>>
>>> Because path simplification does not work with anything but a 
>>> continuous line, it is turned off if there are any nans in the path.  
>>> The result is that if one does this:
>>>
>>> import numpy as np
>>> xx = np.arange(200000)
>>> yy = np.random.rand(200000)
>>> #plot(xx, yy)
>>> yy[1000] = np.nan
>>> plot(xx, yy)
>>>
>>> the plot fails with an incomplete rendering and general 
>>> unresponsiveness; apparently some mysterious agg limit is quietly 
>>> exceeded.
>> The limit in question is "cell_block_limit" in 
>> agg_rasterizer_cells_aa.h.  The relationship between the number of 
>> vertices and the number of rasterization cells, I suspect, depends on 
>> the nature of the values.
>> However, if we want to increase the limit, each "cell_block" is 4096 
>> cells, each with 16 bytes, and currently it maxes out at 1024 cell 
>> blocks, for a total of 67,108,864 bytes.  So, the question is, how 
>> much memory should be devoted to rasterization, when the data set is 
>> large like this?  I think we could safely quadruple this number for a 
>> lot of modern machines, and this maximum won't affect people plotting 
>> smaller data sets, since the memory is dynamically allocated anyway.  
>> It works for me, but I have 4GB RAM here at work.
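A quick check of the arithmetic quoted above, using the figures Mike gives (the variable names are illustrative, not the actual Agg identifiers):

    cells_per_block = 4096      # cells in one "cell_block"
    bytes_per_cell = 16         # bytes per rasterization cell
    cell_block_limit = 1024     # current maximum number of blocks

    current_cap = cells_per_block * bytes_per_cell * cell_block_limit
    print(current_cap)          # 67108864 bytes, i.e. 64 MB
    print(current_cap * 4)      # 268435456 bytes (256 MB) if the limit were quadrupled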
> 
> It sounds like we have little to lose by increasing the limit as you 
> suggest here.  In addition, it would be nice if hitting that limit 
> triggered an informative exception instead of a puzzling and quiet 
> failure, but maybe that would be hard to arrange.  I have no idea how to 
> approach it.
> 
>>> With or without the nan, this test case also shows the bizarre 
>>> slowness of add_line that I asked about in a message yesterday, and 
>>> that has me completely baffled.
>> lsprofcalltree is my friend!
> 
> Thank you very much for finding that!
> 
>>>
>>> Both of these are major problems for real-world use.
>>>
>>> Do you have any thoughts on timing and strategy for solving this 
>>> problem?  A few weeks ago, when the problem with nans and path 
>>> simplification turned up, I tried to figure out what was going on and 
>>> how to fix it, but I did not get very far.  I could try again, but as 
>>> you know I don't get along well with C++.
>> That simplification code is pretty hairy, particularly because it 
>> tries to avoid a copy by doing everything in an iterator/generator 
>> way.  I think even just supporting MOVETOs there would be tricky, but 
>> probably the easiest first thing.
> 
> The attached patch seems to work, based on cursory testing.  I can make 
> an array of 1M points, salt it with nans, and plot it, complete with 
> gaps, and all in a reasonably snappy fashion, thanks to your units fix.
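A minimal script along the lines Eric describes (the size and nan spacing are illustrative, and it assumes a build with the gap-aware simplification patch applied):

    import numpy as np
    import matplotlib.pyplot as plt

    n = 1000000
    x = np.arange(n)
    y = np.random.rand(n)
    y[::3000] = np.nan      # salt the series with nans to create gaps
    plt.plot(x, y)
    plt.show()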
> 
> I will hold off on committing it until I hear from you or John; or if 
> either of you want to polish and commit it (or an alternative), that's 
> even better.
> 
> Eric
> 
>>>
>>> I am also wondering whether more than straightforward path 
>>> simplification with nan/moveto might be needed.  Suppose there is a 
>>> nightmarish time series with every third point being bad, so it is 
>>> essentially a sequence of 2-point line segments.  The simplest form 
>>> of path simplification fix might be to reset the calculation whenever 
>>> a moveto is encountered, but this would yield no simplification in 
>>> this case.  I assume Agg would still choke. Is there a need for some 
>>> sort of automatic chunking of the rendering operation in addition to 
>>> path simplification?
>>>
>> Chunking is probably something worth looking into (for lines, at 
>> least), as it might also reduce memory usage vs. the "increase the 
>> cell_block_limit" scenario.
>>
>> I also think for the special case of high-resolution time series data, 
>> where x is uniform, there is an opportunity to do something completely 
>> different that should be far faster.  Audio editors (such as 
>> Audacity) draw each column of pixels based on the min/max and/or mean 
>> and/or RMS of the values within that column.  This makes the rendering 
>> extremely fast and simple.  See:
>>
>> http://audacity.sourceforge.net/about/images/audacity-macosx.png
>>
>> Of course, that would mean writing a bunch of new code, but it 
>> shouldn't be incredibly tricky new code.  It could convert the time 
>> series data to an image and plot that, or to a filled polygon whose 
>> vertices are down
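A rough sketch of the min/max-per-pixel-column idea described above, assuming uniformly spaced x; the function name and the choice of a filled envelope are illustrative only:

    import numpy as np
    import matplotlib.pyplot as plt

    def minmax_per_column(y, ncolumns):
        # reduce y to one (min, max) pair per pixel column
        n = (len(y) // ncolumns) * ncolumns   # drop the ragged tail
        blocks = y[:n].reshape(ncolumns, -1)
        return blocks.min(axis=1), blocks.max(axis=1)

    y = np.random.randn(1000000).cumsum()
    lo, hi = minmax_per_column(y, 800)        # roughly one pair per pixel column
    plt.fill_between(np.arange(800), lo, hi)  # filled envelope, as in an audio editor
    plt.show()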

Re: [matplotlib-devel] path simplification with nan (or move_to)

2008-10-08 Thread Michael Droettboom
Eric Firing wrote:
> Michael Droettboom wrote:
>> Eric Firing wrote:
>>> Mike, John,
>>>
>>> Because path simplification does not work with anything but a 
>>> continuous line, it is turned off if there are any nans in the 
>>> path.  The result is that if one does this:
>>>
>>> import numpy as np
>>> xx = np.arange(200000)
>>> yy = np.random.rand(200000)
>>> #plot(xx, yy)
>>> yy[1000] = np.nan
>>> plot(xx, yy)
>>>
>>> the plot fails with an incomplete rendering and general 
>>> unresponsiveness; apparently some mysterious agg limit is quietly 
>>> exceeded.
>> The limit in question is "cell_block_limit" in 
>> agg_rasterizer_cells_aa.h.  The relationship between the number of 
>> vertices and the number of rasterization cells, I suspect, depends on 
>> the nature of the values.
>> However, if we want to increase the limit, each "cell_block" is 4096 
>> cells, each with 16 bytes, and currently it maxes out at 1024 cell 
>> blocks, for a total of 67,108,864 bytes.  So, the question is, how 
>> much memory should be devoted to rasterization, when the data set is 
>> large like this?  I think we could safely quadruple this number for a 
>> lot of modern machines, and this maximum won't affect people plotting 
>> smaller data sets, since the memory is dynamically allocated anyway.  
>> It works for me, but I have 4GB RAM here at work.
>
> It sounds like we have little to lose by increasing the limit as you 
> suggest here.  In addition, it would be nice if hitting that limit 
> triggered an informative exception instead of a puzzling and quiet 
> failure, but maybe that would be hard to arrange.  I have no idea how 
> to approach it.
Agreed.  But also, I'm not sure how to do that. I can see where the 
limit is tested and no more memory is allocated, but not where it shuts 
down drawing after that.  If we can find that point, we should be able 
to throw an exception back to Python somehow.
>
>>> With or without the nan, this test case also shows the bizarre 
>>> slowness of add_line that I asked about in a message yesterday, and 
>>> that has me completely baffled.
>> lsprofcalltree is my friend!
>
> Thank you very much for finding that!
>
>>>
>>> Both of these are major problems for real-world use.
>>>
>>> Do you have any thoughts on timing and strategy for solving this 
>>> problem?  A few weeks ago, when the problem with nans and path 
>>> simplification turned up, I tried to figure out what was going on 
>>> and how to fix it, but I did not get very far.  I could try again, 
>>> but as you know I don't get along well with C++.
>> That simplification code is pretty hairy, particularly because it 
>> tries to avoid a copy by doing everything in an iterator/generator 
>> way.  I think even just supporting MOVETOs there would be tricky, but 
>> probably the easiest first thing.
>
> The attached patch seems to work, based on cursory testing.  I can 
> make an array of 1M points, salt it with nans, and plot it, complete 
> with gaps, and all in a reasonably snappy fashion, thanks to your 
> units fix.
Very nice!  It looks like a sound approach --- though I see from your 
second message that things aren't quite perfect yet.  I, too, feel it's 
close.

One possible minor improvement might be to change the "should_simplify" 
expression to be true if codes is not None and contains only LINETOs and 
MOVETOs (but not curves, obviously).  I don't imagine a lot of people 
are building up their own paths with MOVETOs in them, but your 
improvement would at least make simplifying those possible.
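In numpy terms, the check Mike suggests might look something like this (a sketch only; the actual expression and attribute names in path.py may differ):

    import numpy as np
    from matplotlib.path import Path

    def can_simplify(codes):
        # no codes at all means a plain polyline; otherwise allow only MOVETO/LINETO
        if codes is None:
            return True
        return bool(np.all((codes == Path.MOVETO) | (codes == Path.LINETO)))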

Mike
>
> Eric
>
>>>
>>> I am also wondering whether more than straightforward path 
>>> simplification with nan/moveto might be needed.  Suppose there is a 
>>> nightmarish time series with every third point being bad, so it is 
>>> essentially a sequence of 2-point line segments.  The simplest form 
>>> of path simplification fix might be to reset the calculation 
>>> whenever a moveto is encountered, but this would yield no 
>>> simplification in this case.  I assume Agg would still choke. Is 
>>> there a need for some sort of automatic chunking of the rendering 
>>> operation in addition to path simplification?
>>>
>> Chunking is probably something worth looking into (for lines, at 
>> least), as it might also reduce memory usage vs. the "increase the 
>> cell_block_limit" scenario.
>>
>> I also think for the special case of high-resolution time series 
>> data, where x is uniform, there is an opportunity to do something 
>> completely different that should be far faster.  Audio editors (such 
>> as Audacity) draw each column of pixels based on the min/max and/or 
>> mean and/or RMS of the values within that column.  This makes the 
>> rendering extremely fast and simple.  See:
>>
>> http://audacity.sourceforge.net/about/images/audacity-macosx.png
>>
>> Of course, that would mean writing a bunch of new code, but it 
>> shouldn't be incredibly tricky new code.  It could convert the time 
>> series data to an image and plot tha

Re: [matplotlib-devel] path simplification with nan (or move_to)

2008-10-08 Thread Michael Droettboom
Michael Droettboom wrote:
> Eric Firing wrote:
>   
>> Michael Droettboom wrote:
>> 
>>> Eric Firing wrote:
>>>   
 Mike, John,

 Because path simplification does not work with anything but a 
 continuous line, it is turned off if there are any nans in the 
 path.  The result is that if one does this:

 import numpy as np
 xx = np.arange(200000)
 yy = np.random.rand(200000)
 #plot(xx, yy)
 yy[1000] = np.nan
 plot(xx, yy)

 the plot fails with an incomplete rendering and general 
 unresponsiveness; apparently some mysterious agg limit is quietly 
 exceeded.
 
>>> The limit in question is "cell_block_limit" in 
>>> agg_rasterizer_cells_aa.h.  The relationship between the number of 
>>> vertices and the number of rasterization cells, I suspect, depends on 
>>> the nature of the values.
>>> However, if we want to increase the limit, each "cell_block" is 4096 
>>> cells, each with 16 bytes, and currently it maxes out at 1024 cell 
>>> blocks, for a total of 67,108,864 bytes.  So, the question is, how 
>>> much memory should be devoted to rasterization, when the data set is 
>>> large like this?  I think we could safely quadruple this number for a 
>>> lot of modern machines, and this maximum won't affect people plotting 
>>> smaller data sets, since the memory is dynamically allocated anyway.  
>>> It works for me, but I have 4GB RAM here at work.
>>>   
>> It sounds like we have little to lose by increasing the limit as you 
>> suggest here.  In addition, it would be nice if hitting that limit 
>> triggered an informative exception instead of a puzzling and quiet 
>> failure, but maybe that would be hard to arrange.  I have no idea how 
>> to approach it.
>> 
> Agreed.  But also, I'm not sure how to do that. I can see where the 
> limit is tested and no more memory is allocated, but not where it shuts 
> down drawing after that.  If we can find that point, we should be able 
> to throw an exception back to Python somehow.
I figured this out.  When this happens, a RuntimeError("Agg rendering 
complexity exceeded") is thrown.

Cheers,
Mike

-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA




Re: [matplotlib-devel] path simplification with nan (or move_to)

2008-10-08 Thread John Hunter
On Wed, Oct 8, 2008 at 11:37 AM, Michael Droettboom <[EMAIL PROTECTED]> wrote:

> I figured this out.  When this happens, a RuntimeError("Agg rendering
> complexity exceeded") is thrown.

Do you think it is a good idea to put a little helper note in the
exception along the lines of

  throw "Agg rendering complexity exceeded; you may want to increase
the cell_block_size in agg_rasterizer_cells_aa.h"

in case someone gets this exception two years from now and none of us
can remember this brilliant fix :-)



Re: [matplotlib-devel] path simplification with nan (or move_to)

2008-10-08 Thread Michael Droettboom
John Hunter wrote:
> On Wed, Oct 8, 2008 at 11:37 AM, Michael Droettboom <[EMAIL PROTECTED]> wrote:
>
>   
>> I figured this out.  When this happens, a RuntimeError("Agg rendering
>> complexity exceeded") is thrown.
>> 
>
> Do you think it is a good idea to put a little helper note in the
> exception along the lines of
>
>   throw "Agg rendering complexity exceeded; you may want to increase
> the cell_block_size in agg_rasterizer_cells_aa.h"
>
> in case someone gets this exception two years from now and none of us
> can remember this brilliant fix :-)
>   
We can suggest that, or suggest that the size of the data is too large 
(which is easier for most users to fix, I would suspect).  What about:

"Agg rendering complexity exceeded.  Consider downsampling or decimating 
your data."

along with a comment (not thrown), saying

/* If this is thrown too often, increase cell_block_limit. */
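From the user's side, the suggested wording maps onto a simple workaround like the one below (a sketch only: the error string is the one proposed above, and naive slicing is just one way to decimate):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.arange(2000000)
    y = np.random.rand(2000000)
    try:
        plt.plot(x, y)
        plt.savefig("big.png")
    except RuntimeError:
        # "Agg rendering complexity exceeded" -- decimate and try again
        plt.clf()
        plt.plot(x[::10], y[::10])
        plt.savefig("big.png")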

Mike

-- 
Michael Droettboom
Science Software Branch
Operations and Engineering Division
Space Telescope Science Institute
Operated by AURA for NASA




[matplotlib-devel] Bug in legend

2008-10-08 Thread David Huard
I just updated matplotlib from svn, and here is the traceback I get after
calling legend with the pad argument:

/usr/local/lib64/python2.5/site-packages/matplotlib/pyplot.pyc in
legend(*args, **kwargs)
   2390 def legend(*args, **kwargs):
   2391
-> 2392 ret =  gca().legend(*args, **kwargs)
   2393 draw_if_interactive()
   2394 return ret

/usr/local/lib64/python2.5/site-packages/matplotlib/axes.pyc in legend(self,
*args, **kwargs)
   3662
   3663 handles = cbook.flatten(handles)
-> 3664 self.legend_ = mlegend.Legend(self, handles, labels,
**kwargs)
   3665 return self.legend_
   3666

/usr/local/lib64/python2.5/site-packages/matplotlib/legend.pyc in
__init__(self, parent, handles, labels, loc, numpoints, prop, pad,
borderpad, markerscale, labelsep, handlelen, handletextsep, axespad, shadow)
125 setattr(self,name,value)
126 if pad:
--> 127 warnings.DeprecationWarning("Use 'borderpad' instead of 'pad'.")
128 # 2008/10/04
129 if self.numpoints <= 0:

AttributeError: 'module' object has no attribute 'DeprecationWarning'

This is with Python 2.5.

Here is a patch:


Index: lib/matplotlib/legend.py
===================================================================
--- lib/matplotlib/legend.py	(revision 6171)
+++ lib/matplotlib/legend.py	(working copy)
@@ -124,7 +124,7 @@
             value=rcParams["legend."+name]
             setattr(self,name,value)
         if pad:
-            warnings.DeprecationWarning("Use 'borderpad' instead of 'pad'.")
+            DeprecationWarning("Use 'borderpad' instead of 'pad'.")
         # 2008/10/04
         if self.numpoints <= 0:
             raise ValueError("numpoints must be >= 0; it was %d"% numpoints)



Regards,

David


Re: [matplotlib-devel] Bug in legend

2008-10-08 Thread John Hunter
On Wed, Oct 8, 2008 at 1:44 PM, David Huard <[EMAIL PROTECTED]> wrote:

> /usr/local/lib64/python2.5/site-packages/matplotlib/legend.pyc in
> __init__(self, parent, handles, labels, loc, numpoints, prop, pad,
> borderpad, markerscale, labelsep, handlelen, handletextsep, axespad, shadow)
> 125 setattr(self,name,value)
> 126 if pad:
> --> 127 warnings.DeprecationWarning("Use 'borderpad' instead of
> 'pad'.")
> 128 # 2008/10/04
> 129 if self.numpoints <= 0:
>
> AttributeError: 'module' object has no attribute 'DeprecationWarning'

I just replaced this with

  warnings.warn("Use 'borderpad' instead of 'pad'.", DeprecationWarning)

which is what we have been doing in other parts of the code, so please
give svn 6173 a try.
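The difference is easy to miss: merely constructing a warning object does nothing, while warnings.warn() actually routes it through the warnings machinery.  A quick standard-library illustration:

    import warnings

    # what the old code effectively did: build a warning instance, then discard it
    DeprecationWarning("Use 'borderpad' instead of 'pad'.")

    # what r6173 does: actually emit the warning (it may still be filtered,
    # depending on the active warning filters)
    warnings.warn("Use 'borderpad' instead of 'pad'.", DeprecationWarning)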

Thanks,
JDH



Re: [matplotlib-devel] path simplification with nan (or move_to)

2008-10-08 Thread Eric Firing
Michael Droettboom wrote:
> John Hunter wrote:
>> On Wed, Oct 8, 2008 at 11:37 AM, Michael Droettboom <[EMAIL PROTECTED]> 
>> wrote:
>>
>>   
>>> I figured this out.  When this happens, a RuntimeError("Agg rendering
>>> complexity exceeded") is thrown.
>>> 
>> Do you think it is a good idea to put a little helper note in the
>> exception along the lines of
>>
>>   throw "Agg rendering complexity exceeded; you may want to increase
>> the cell_block_size in agg_rasterizer_cells_aa.h"
>>
>> in case someone gets this exception two years from now and none of us
>> can remember this brilliant fix :-)
>>   
> We can suggest that, or suggest that the size of the data is too large 
> (which is easier for most users to fix, I would suspect).  What about:
> 
> "Agg rendering complexity exceeded.  Consider downsampling or decimating 
> your data."
> 
> along with a comment (not thrown), saying
> 
> /* If this is thrown too often, increase cell_block_limit. */
> 
> Mike
> 

Mike,

Thanks for doing this--it has already helped me in my testing of the 
gappy-path simplification support, which I have now committed.  As you 
suggested earlier, I included in path.py a check for a compatible codes 
array.

The agg limit still can be a problem.  It looks like chunking could be 
added easily by making the backend_agg draw_path a python method calling 
the renderer method; if the path length exceeds some threshold, then 
subpaths would be generated and passed to the renderer method.
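A rough sketch of what that chunking wrapper might look like at the Python level (the names, the threshold, and the way the renderer's draw_path is passed in are all illustrative, not the actual backend_agg API):

    import numpy as np
    from matplotlib.path import Path

    def draw_path_chunked(renderer_draw_path, gc, path, transform, rgbFace=None,
                          threshold=100000):
        # split a long, code-less polyline into subpaths before rendering;
        # anything with codes or a fill is passed through unchanged
        verts = path.vertices
        if path.codes is not None or rgbFace is not None or len(verts) <= threshold:
            renderer_draw_path(gc, path, transform, rgbFace)
            return
        for start in range(0, len(verts), threshold):
            # overlap each chunk by one vertex so the line stays connected
            chunk = verts[max(start - 1, 0):start + threshold]
            renderer_draw_path(gc, Path(chunk), transform, rgbFace)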

Eric



Re: [matplotlib-devel] path simplification with nan (or move_to)

2008-10-08 Thread John Hunter
On Wed, Oct 8, 2008 at 8:40 PM, Eric Firing <[EMAIL PROTECTED]> wrote:
> Thanks for doing this--it has already helped me in my testing of the
> gappy-path simplification support, which I have now committed.  As you
> suggested earlier, I included in path.py a check for a compatible codes
> array.
>
> The agg limit still can be a problem.  It looks like chunking could be added
> easily by making the backend_agg draw_path a python method calling the
> renderer method; if the path length exceeds some threshold, then subpaths
> would be generated and passed to the renderer method.

In unrelated news, I am not in favor of the recent change to warn on
non-GUI backends when "show" is called.  I realize this may sometimes
cause head-scratching behavior for some users who call show and no
figure pops up, but I think this must be pretty rare.  99% of users
have a matplotlibrc which defines a GUI default.  AFAIK, the only
exceptions to this are 1) when the user has changed the rc (power
user, needs no protection) or 2) no GUI was available at build time
and the image backend was the default backend chosen (warning more
appropriate at build time).  If I am missing a use case, let me know.

I like the design where the same script can be used to generate either a
UI figure or hardcopy, depending on an rc setting or a command flag
(e.g. backend driver), and would rather not do the warning.  This has
been a very infrequent problem for users over the years (a few times
at most?), so I am not sure that the annoyance of the warning is
justified.

If 2) in the choices above is the case you are concerned about, and you
want this warning feature in that case, we can add an rc param which is
autoset at build time, something like "show.warn = True|False": since
the build script is setting the default image backend, it can set
"show.warn = True" when it chooses an image backend by default, and
False otherwise.
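For what it's worth, consulting such a param could be as small as this (a sketch only; "show.warn" is the hypothetical rc key suggested above, not an existing matplotlib setting):

    import warnings
    import matplotlib

    def maybe_warn_on_show():
        # "show.warn" would be set by the build script when it falls back to an
        # image backend; it is not a real rcParams key today
        if matplotlib.rcParams.get("show.warn", False):
            warnings.warn("show() has no effect with a non-interactive backend")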

JDH



Re: [matplotlib-devel] Patch for scatter plot legend enhancement

2008-10-08 Thread Erik Tollerud
Ah, that makes more sense, Jae-Joon - thanks!

With this addition it all looks fine to me - I've attached a patch
against 6174 that does everything I'd like to see it do.  Note that
I've left the TODO in collections.py simply because I'm still not
certain that what I'm doing there is correct in all use cases.

On Mon, Oct 6, 2008 at 11:22 PM, Jae-Joon Lee <[EMAIL PROTECTED]> wrote:
> Hi Eric,
>
> As far as I know, get_window_extent is meant to return the extent of
> the object in display coordinates.  And, as you may have noticed, this
> is often used to calculate the relative positions of objects.
>
> I quickly went through your patch and my guess is your implementation
> of get_window_extent is correct in this regard, but I haven't
> considered this seriously so I may be wrong.
>
> On the other hand, I guess the original problem you had is not related
> to the get_window_extent() method.
> The legend class has an _update_positions() method, which is called
> before the legend is drawn, and you have to update the positions of
> your handles within this method.
>
> In the simple patch below, I tried to implement some basic update code
> for the poly collections (I also slightly adjusted the y-offsets; this
> is just my personal preference).  See if this patch works for you.
>
> Regards,
>
> -JJ
>
>
>
> Index: lib/matplotlib/legend.py
> ===================================================================
> --- lib/matplotlib/legend.py	(revision 6163)
> +++ lib/matplotlib/legend.py	(working copy)
> @@ -532,6 +540,12 @@
>              elif isinstance(handle, Rectangle):
>                  handle.set_y(y+1/4*h)
>                  handle.set_height(h/2)
> +            elif isinstance(handle, RegularPolyCollection):
> +                offsets = handle.get_offsets()
> +                xvals = [x for (x, _) in offsets]
> +                yy = y + h
> +                yvals=[yy-4./8*h,yy-3./8*h,yy-4./8*h]
> +                handle.set_offsets(zip(xvals, yvals))
>
>          # Set the data for the legend patch
>          bbox = self._get_handle_text_bbox(renderer)
>
>
>
> On Tue, Oct 7, 2008 at 12:15 AM, Erik Tollerud <[EMAIL PROTECTED]> wrote:
>> Does anyone have anything new here? I'm perfectly willing to
>> experiment, but I'm really at a loss as to what
>> get_window_extent(self, renderer) is supposed to do (clearly get some
>> window extent, but exactly which window and what coordinates the
>> extent is in is what is confusing me).
>>
>> On Tue, Sep 23, 2008 at 11:41 AM, John Hunter <[EMAIL PROTECTED]> wrote:
>>> On Tue, Sep 23, 2008 at 12:20 AM, Erik Tollerud <[EMAIL PROTECTED]> wrote:
 Attached is a diff against revision 6115 that contains a patch to
 improve the behavior of the legend function when showing legends for
>>>
>>> Erik,
>>>
>>> I haven't had a chance to get to this yet.  Could you please also post
>>> it on the sf patch tracker so it doesn't get dropped, and ping us with
>>> a reminder in a few days if nothing has happened
>>>
>>> JDH
>>>
>>
>>
>



-- 
Erik Tollerud
Graduate Student
Center For Cosmology
Department of Physics and Astronomy
2142 Frederick Reines Hall
University of California, Irvine
Office Phone: (949)824-2587
Cell: (651)307-9409
[EMAIL PROTECTED]
Index: lib/matplotlib/legend.py
===================================================================
--- lib/matplotlib/legend.py	(revision 6174)
+++ lib/matplotlib/legend.py	(working copy)
@@ -306,18 +306,26 @@
                 ret.append(legline)
 
             elif isinstance(handle, RegularPolyCollection):
-                if self.numpoints == 1:
-                    xdata = np.array([left])
-                p = Rectangle(xy=(min(xdata), y-3/4*HEIGHT),
-                              width = self.handlelen, height=HEIGHT/2,
-                              )
-                p.set_facecolor(handle._facecolors[0])
-                if handle._edgecolors != 'none' and len(handle._edgecolors):
-                    p.set_edgecolor(handle._edgecolors[0])
-                self._set_artist_props(p)
-                p.set_clip_box(None)
-                p.set_clip_path(None)
-                ret.append(p)
+                xvals=[min(xdata),min(xdata)+self.handlelen/2,min(xdata)+self.handlelen]
+                yvals=[y-3/4*HEIGHT,y-1/4*HEIGHT,y-3/4*HEIGHT]
+                nverts=handle.get_paths()[0].vertices.shape[0]-1
+                szs=[min(handle._sizes),max(handle.