Re: [Numpy-discussion] Numpy.dot segmentation fault
You can build numpy against Accelerate through MacPorts by specifying the +no_atlas variant. Last time I tried I ran into this issue: http://trac.macports.org/ticket/22201 but it looks like it should be fixed now.

Cheers
Robin

On Mon, Jan 18, 2010 at 8:15 PM, Mark Lescroart wrote:
> Hi David (et al),
>
> Thanks for the reply. The version of ATLAS I was using was v3.8.3_1,
> installed via MacPorts, compiled and built on my machine with a gcc 4.3
> compiler. I uninstalled numpy 1.4 and ATLAS, re-installed (i.e.,
> re-compiled) the same version (3.8.3_1), re-installed numpy (for
> python2.6, version 1.4), and got the same bug.
>
> I don't know if this means that there's something fundamentally wrong
> with the version of ATLAS on MacPorts (probably less likely) or
> something wrong with the way my system is configured (probably more
> likely). If anyone can give me any more insight into how to test my
> installation of ATLAS, I would be much obliged (I read through a fair
> bit of the ATLAS installation notes on the ATLAS SourceForge page, and
> could not figure out how to "run the whole test suite").
>
> If possible, I would like to solve this problem within MacPorts (and
> thus not with the Accelerate framework). I am using numpy mostly
> through the pyMVPA package for fMRI multi-voxel analysis; pyMVPA
> depends on a number of other libraries, and that mess of dependencies
> is most easily managed within MacPorts.
>
> Cheers,
>
> Mark
>
> On Jan 17, 2010, at 8:38 PM, David Cournapeau wrote:
>
>> Mark Lescroart wrote:
>>> Hello,
>>>
>>> I've encountered a segfault in numpy when trying to compute a dot
>>> product for two arrays - see code below. The problem only seems to
>>> occur when the arrays reach a certain size.
>>
>> Your ATLAS is most likely broken. You will have to double-check how
>> you built it, and maybe run the whole test suite (as indicated in the
>> ATLAS installation notes).
>>
>> Note that you can use the Accelerate framework on Mac OS X; this is
>> much easier to get numpy working on Mac.
>>
>> cheers,
>>
>> David

_______________________________________________
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Numpy.dot segmentation fault
Hi David (et al),

Thanks for the reply. The version of ATLAS I was using was v3.8.3_1, installed via MacPorts, compiled and built on my machine with a gcc 4.3 compiler. I uninstalled numpy 1.4 and ATLAS, re-installed (i.e., re-compiled) the same version (3.8.3_1), re-installed numpy (for python2.6, version 1.4), and got the same bug.

I don't know if this means that there's something fundamentally wrong with the version of ATLAS on MacPorts (probably less likely) or something wrong with the way my system is configured (probably more likely). If anyone can give me any more insight into how to test my installation of ATLAS, I would be much obliged (I read through a fair bit of the ATLAS installation notes on the ATLAS SourceForge page, and could not figure out how to "run the whole test suite").

If possible, I would like to solve this problem within MacPorts (and thus not with the Accelerate framework). I am using numpy mostly through the pyMVPA package for fMRI multi-voxel analysis; pyMVPA depends on a number of other libraries, and that mess of dependencies is most easily managed within MacPorts.

Cheers,

Mark

On Jan 17, 2010, at 8:38 PM, David Cournapeau wrote:
> Mark Lescroart wrote:
>> Hello,
>>
>> I've encountered a segfault in numpy when trying to compute a dot
>> product for two arrays - see code below. The problem only seems to
>> occur when the arrays reach a certain size.
>
> Your ATLAS is most likely broken. You will have to double-check how you
> built it, and maybe run the whole test suite (as indicated in the ATLAS
> installation notes).
>
> Note that you can use the Accelerate framework on Mac OS X; this is
> much easier to get numpy working on Mac.
>
> cheers,
>
> David
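A minimal way to exercise the code path that was segfaulting, short of running the full ATLAS test suite, is a large dot product whose exact result is known in advance. This is a quick sanity check, not a substitute for the ATLAS tests the installation notes describe:

```python
import numpy as np

n = 512  # large enough to hit the blocked GEMM path in most BLAS builds
a = np.ones((n, n))
b = np.ones((n, n))

c = np.dot(a, b)  # a broken ATLAS typically crashes here, not in numpy itself

# every entry of ones @ ones is exactly n, so the result is easy to validate
assert c.shape == (n, n)
assert (c == n).all()
print("dot product looks sane")
```

If this crashes or produces values other than n, the BLAS that numpy was linked against is the likely culprit.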
Re: [Numpy-discussion] Is this expected behavior?
Oh, of course. I can reverse it myself. Thanks, I did not think of that.

Russel

Warren Weckesser wrote:
> Russel Howe wrote:
>> Since they are iterators, is it possible to check for the second
>> condition and reverse both of them so the behavior I expect happens or
>> does this break something else?
>
> You may already know this, but just in case...
>
> In the second case, you can accomplish the shift by using reversed slices:
>
> a[:, -1:0:-1] = a[:, -2::-1]
>
> Warren
>
>> Russel
>> Robert Kern wrote:
>>> On Mon, Jan 18, 2010 at 13:41, Russel Howe wrote:
>>>> This looks like the difference between memmove and memcpy to me, but
>>>> I am not sure what the expected behavior of numpy should be. The
>>>> first shift behaves the way I expect, the second is surprising.
>>>
>>> memmove() and memcpy() are not used for these operations (and in
>>> general, they can't be). Rather, iterators are created and looped over
>>> to do the assignments. Because you are not making copies on the
>>> right-hand-side, you are modifying the RHS as the iterators assign to
>>> the LHS.
>>>
>>>> In [3]: a[:, :-1] = a[:, 1:]
>>>>
>>>> In [4]: a
>>>> Out[4]:
>>>> array([[0, 5, 4, 8, 2, 7, 8, 7, 6, 6],
>>>>        [6, 3, 3, 9, 8, 0, 8, 9, 5, 5],
>>>>        [0, 1, 1, 2, 5, 8, 2, 5, 3, 3],
>>>>        [0, 0, 2, 8, 2, 0, 7, 7, 0, 0],
>>>>        [8, 6, 9, 6, 3, 9, 4, 4, 5, 5],
>>>>        [7, 6, 9, 3, 8, 9, 9, 6, 9, 9],
>>>>        [8, 8, 4, 0, 3, 7, 6, 7, 6, 6],
>>>>        [4, 9, 2, 4, 7, 3, 6, 7, 4, 4],
>>>>        [2, 0, 7, 0, 7, 6, 6, 1, 6, 6],
>>>>        [3, 8, 8, 9, 6, 7, 2, 5, 0, 0]], dtype=uint8)
>>>
>>> The first one works because the RHS pointer is always one step ahead
>>> of the LHS pointer, thus it always reads pristine data.
>>>
>>>> In [5]: a[:, 1:] = a[:, :-1]
>>>>
>>>> In [6]: a
>>>> Out[6]:
>>>> array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>>>>        [6, 6, 6, 6, 6, 6, 6, 6, 6, 6],
>>>>        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>>>>        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>>>>        [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
>>>>        [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
>>>>        [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
>>>>        [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
>>>>        [2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
>>>>        [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]], dtype=uint8)
>>>
>>> The second one fails to work as you expect because the RHS pointer is
>>> always one step behind the LHS pointer, thus it always reads the data
>>> that just got modified in the previous step. The data you expected it
>>> to read has already been wiped out.
Re: [Numpy-discussion] Is this expected behavior?
Russel Howe wrote:
> Since they are iterators, is it possible to check for the second
> condition and reverse both of them so the behavior I expect happens or
> does this break something else?

You may already know this, but just in case...

In the second case, you can accomplish the shift by using reversed slices:

a[:, -1:0:-1] = a[:, -2::-1]

Warren

> Russel
> Robert Kern wrote:
>> On Mon, Jan 18, 2010 at 13:41, Russel Howe wrote:
>>> This looks like the difference between memmove and memcpy to me, but I
>>> am not sure what the expected behavior of numpy should be. The first
>>> shift behaves the way I expect, the second is surprising.
>>
>> memmove() and memcpy() are not used for these operations (and in
>> general, they can't be). Rather, iterators are created and looped over
>> to do the assignments. Because you are not making copies on the
>> right-hand-side, you are modifying the RHS as the iterators assign to
>> the LHS.
>>
>>> In [3]: a[:, :-1] = a[:, 1:]
>>>
>>> In [4]: a
>>> Out[4]:
>>> array([[0, 5, 4, 8, 2, 7, 8, 7, 6, 6],
>>>        [6, 3, 3, 9, 8, 0, 8, 9, 5, 5],
>>>        [0, 1, 1, 2, 5, 8, 2, 5, 3, 3],
>>>        [0, 0, 2, 8, 2, 0, 7, 7, 0, 0],
>>>        [8, 6, 9, 6, 3, 9, 4, 4, 5, 5],
>>>        [7, 6, 9, 3, 8, 9, 9, 6, 9, 9],
>>>        [8, 8, 4, 0, 3, 7, 6, 7, 6, 6],
>>>        [4, 9, 2, 4, 7, 3, 6, 7, 4, 4],
>>>        [2, 0, 7, 0, 7, 6, 6, 1, 6, 6],
>>>        [3, 8, 8, 9, 6, 7, 2, 5, 0, 0]], dtype=uint8)
>>
>> The first one works because the RHS pointer is always one step ahead
>> of the LHS pointer, thus it always reads pristine data.
>>
>>> In [5]: a[:, 1:] = a[:, :-1]
>>>
>>> In [6]: a
>>> Out[6]:
>>> array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>>>        [6, 6, 6, 6, 6, 6, 6, 6, 6, 6],
>>>        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>>>        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>>>        [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
>>>        [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
>>>        [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
>>>        [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
>>>        [2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
>>>        [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]], dtype=uint8)
>>
>> The second one fails to work as you expect because the RHS pointer is
>> always one step behind the LHS pointer, thus it always reads the data
>> that just got modified in the previous step. The data you expected it
>> to read has already been wiped out.
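Warren's reversed-slice trick can be verified against an explicit copy. The array values below are arbitrary illustration data:

```python
import numpy as np

a = np.arange(20, dtype=np.uint8).reshape(4, 5)

# the intended right shift, computed safely via an explicit copy
expected = a.copy()
expected[:, 1:] = a[:, :-1].copy()

# the reversed-slice version: reads now happen before the matching
# writes, so no element is overwritten before it has been copied
b = a.copy()
b[:, -1:0:-1] = b[:, -2::-1]

assert (b == expected).all()
```

The key point is that reversing both slices flips the traversal order so that each source element is read before the destination pass reaches it.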
Re: [Numpy-discussion] Is this expected behavior?
On Mon, Jan 18, 2010 at 13:51, Russel Howe wrote:
> Since they are iterators, is it possible to check for the second
> condition and reverse both of them so the behavior I expect happens or
> does this break something else?

In general, no, I don't think it would be possible. It would create a weird special case in the semantics, and slow down common assignments that don't have the issue. It would be nice to have a fast in-place roll(), though this is not how one should implement it.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
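For illustration, an in-place row-wise roll can be sketched on top of the reversed-slice workaround. The helper name is made up, and this is not the implementation Robert has in mind, just a demonstration that the idea composes:

```python
import numpy as np

def roll_rows_right(a):
    """Roll each row of a 2-D array one step to the right, in place."""
    last = a[:, -1].copy()        # save the column that wraps around
    a[:, -1:0:-1] = a[:, -2::-1]  # right-shift via reversed slices
    a[:, 0] = last                # re-insert the wrapped column
    return a

a = np.arange(6).reshape(2, 3)
roll_rows_right(a)
assert (a == np.roll(np.arange(6).reshape(2, 3), 1, axis=1)).all()
```

Unlike numpy.roll, which returns a new array, this sketch only allocates one temporary column.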
Re: [Numpy-discussion] Is this expected behavior?
Since they are iterators, is it possible to check for the second condition and reverse both of them so the behavior I expect happens, or does this break something else?

Russel

Robert Kern wrote:
> On Mon, Jan 18, 2010 at 13:41, Russel Howe wrote:
>> This looks like the difference between memmove and memcpy to me, but I
>> am not sure what the expected behavior of numpy should be. The first
>> shift behaves the way I expect, the second is surprising.
>
> memmove() and memcpy() are not used for these operations (and in
> general, they can't be). Rather, iterators are created and looped over
> to do the assignments. Because you are not making copies on the
> right-hand-side, you are modifying the RHS as the iterators assign to
> the LHS.
>
>> In [3]: a[:, :-1] = a[:, 1:]
>>
>> In [4]: a
>> Out[4]:
>> array([[0, 5, 4, 8, 2, 7, 8, 7, 6, 6],
>>        [6, 3, 3, 9, 8, 0, 8, 9, 5, 5],
>>        [0, 1, 1, 2, 5, 8, 2, 5, 3, 3],
>>        [0, 0, 2, 8, 2, 0, 7, 7, 0, 0],
>>        [8, 6, 9, 6, 3, 9, 4, 4, 5, 5],
>>        [7, 6, 9, 3, 8, 9, 9, 6, 9, 9],
>>        [8, 8, 4, 0, 3, 7, 6, 7, 6, 6],
>>        [4, 9, 2, 4, 7, 3, 6, 7, 4, 4],
>>        [2, 0, 7, 0, 7, 6, 6, 1, 6, 6],
>>        [3, 8, 8, 9, 6, 7, 2, 5, 0, 0]], dtype=uint8)
>
> The first one works because the RHS pointer is always one step ahead
> of the LHS pointer, thus it always reads pristine data.
>
>> In [5]: a[:, 1:] = a[:, :-1]
>>
>> In [6]: a
>> Out[6]:
>> array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>>        [6, 6, 6, 6, 6, 6, 6, 6, 6, 6],
>>        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>>        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>>        [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
>>        [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
>>        [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
>>        [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
>>        [2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
>>        [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]], dtype=uint8)
>
> The second one fails to work as you expect because the RHS pointer is
> always one step behind the LHS pointer, thus it always reads the data
> that just got modified in the previous step. The data you expected it
> to read has already been wiped out.
Re: [Numpy-discussion] logic problem
18/01/10 @ 13:18 (-0600), thus spake Warren Weckesser:
> Ernest Adrogué wrote:
>> Hi,
>>
>> This is hard to explain. In this code:
>>
>> reduce(np.logical_or, [m1 & m2, m1 & m3, m2 & m3])
>>
>> where m1, m2 and m3 are boolean arrays, I'm trying to figure
>> out an expression that works with an arbitrary number of
>> arrays, not just 3. Any idea??
>
> If I understand the problem correctly, you want the result to be True
> whenever any pair of the corresponding elements of the arrays are True.

Exactly.

> This could work:
>
> reduce(np.add, [m.astype(int) for m in mlist]) > 1
>
> where mlist is a list of the boolean arrays (e.g. mlist = [m1, m2, m3]
> in your example).

Very clever. Thanks a lot!
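Warren's counting trick can be checked on a small example. The three masks below are made up for illustration:

```python
import numpy as np

# made-up masks: positions 0 and 2 are True in at least two of them
mlist = [np.array([True,  True,  False, False]),
         np.array([True,  False, True,  False]),
         np.array([False, False, True,  False])]

# summing the boolean arrays counts, per element, how many masks are
# True; sum() here is equivalent to reduce(np.add, ...)
result = sum(m.astype(int) for m in mlist) > 1

assert result.tolist() == [True, False, True, False]
```

The same expression works unchanged for any number of masks, which is exactly what the original question asked for.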
Re: [Numpy-discussion] Is this expected behavior?
On Mon, Jan 18, 2010 at 13:41, Russel Howe wrote:
> This looks like the difference between memmove and memcpy to me, but I
> am not sure what the expected behavior of numpy should be. The first
> shift behaves the way I expect, the second is surprising.

memmove() and memcpy() are not used for these operations (and in general, they can't be). Rather, iterators are created and looped over to do the assignments. Because you are not making copies on the right-hand-side, you are modifying the RHS as the iterators assign to the LHS.

> In [3]: a[:, :-1] = a[:, 1:]
>
> In [4]: a
> Out[4]:
> array([[0, 5, 4, 8, 2, 7, 8, 7, 6, 6],
>        [6, 3, 3, 9, 8, 0, 8, 9, 5, 5],
>        [0, 1, 1, 2, 5, 8, 2, 5, 3, 3],
>        [0, 0, 2, 8, 2, 0, 7, 7, 0, 0],
>        [8, 6, 9, 6, 3, 9, 4, 4, 5, 5],
>        [7, 6, 9, 3, 8, 9, 9, 6, 9, 9],
>        [8, 8, 4, 0, 3, 7, 6, 7, 6, 6],
>        [4, 9, 2, 4, 7, 3, 6, 7, 4, 4],
>        [2, 0, 7, 0, 7, 6, 6, 1, 6, 6],
>        [3, 8, 8, 9, 6, 7, 2, 5, 0, 0]], dtype=uint8)

The first one works because the RHS pointer is always one step ahead of the LHS pointer, thus it always reads pristine data.

> In [5]: a[:, 1:] = a[:, :-1]
>
> In [6]: a
> Out[6]:
> array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>        [6, 6, 6, 6, 6, 6, 6, 6, 6, 6],
>        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>        [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
>        [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
>        [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
>        [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
>        [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
>        [2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
>        [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]], dtype=uint8)

The second one fails to work as you expect because the RHS pointer is always one step behind the LHS pointer, thus it always reads the data that just got modified in the previous step. The data you expected it to read has already been wiped out.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
Re: [Numpy-discussion] logic problem
18/01/10 @ 14:17 (-0500), thus spake josef.p...@gmail.com:
> 2010/1/18 Ernest Adrogué:
>> Hi,
>>
>> This is hard to explain. In this code:
>>
>> reduce(np.logical_or, [m1 & m2, m1 & m3, m2 & m3])
>>
>> where m1, m2 and m3 are boolean arrays, I'm trying to figure
>> out an expression that works with an arbitrary number of
>> arrays, not just 3. Any idea??
>
> What's the shape of mi (dimension)? fixed or arbitrary number of
> dimensions? a loop is the most memory efficient

I forgot to mention, mi are 1-dimensional, all the same length of course.

> array broadcasting builds large arrays (and maybe has redundant
> calculations), but might be a one-liner
>
> or something like a list comprehension:
>
> m = [m1, m2, ... mn]
> reduce(np.logical_or, [mi & mj for (i, mi) in enumerate(m)
>                               for (j, mj) in enumerate(m) if i < j])
>
> >>> m = [np.arange(10)<5, np.arange(10)>3, np.arange(10)>8]
> >>> m
> [array([ True,  True,  True,  True,  True, False, False, False, False,
>        False], dtype=bool),
>  array([False, False, False, False,  True,  True,  True,  True,  True,
>         True], dtype=bool),
>  array([False, False, False, False, False, False, False, False, False,
>         True], dtype=bool)]
>
> >>> reduce(np.logical_or, [mi & mj for (i, mi) in enumerate(m)
> ...                                for (j, mj) in enumerate(m) if i < j])
> array([False, False, False, False,  True, False, False, False, False,
>         True], dtype=bool)
>
> Josef

> Bye.
[Numpy-discussion] Is this expected behavior?
This looks like the difference between memmove and memcpy to me, but I am not sure what the expected behavior of numpy should be. The first shift behaves the way I expect, the second is surprising.

I know about numpy.roll. I was hoping for something faster, which this would be if it worked.

In [1]: a = (np.random.random((10,10))*10).astype('u1')

In [2]: a
Out[2]:
array([[8, 0, 5, 4, 8, 2, 7, 8, 7, 6],
       [6, 6, 3, 3, 9, 8, 0, 8, 9, 5],
       [5, 0, 1, 1, 2, 5, 8, 2, 5, 3],
       [9, 0, 0, 2, 8, 2, 0, 7, 7, 0],
       [9, 8, 6, 9, 6, 3, 9, 4, 4, 5],
       [2, 7, 6, 9, 3, 8, 9, 9, 6, 9],
       [2, 8, 8, 4, 0, 3, 7, 6, 7, 6],
       [2, 4, 9, 2, 4, 7, 3, 6, 7, 4],
       [3, 2, 0, 7, 0, 7, 6, 6, 1, 6],
       [2, 3, 8, 8, 9, 6, 7, 2, 5, 0]], dtype=uint8)

In [3]: a[:, :-1] = a[:, 1:]

In [4]: a
Out[4]:
array([[0, 5, 4, 8, 2, 7, 8, 7, 6, 6],
       [6, 3, 3, 9, 8, 0, 8, 9, 5, 5],
       [0, 1, 1, 2, 5, 8, 2, 5, 3, 3],
       [0, 0, 2, 8, 2, 0, 7, 7, 0, 0],
       [8, 6, 9, 6, 3, 9, 4, 4, 5, 5],
       [7, 6, 9, 3, 8, 9, 9, 6, 9, 9],
       [8, 8, 4, 0, 3, 7, 6, 7, 6, 6],
       [4, 9, 2, 4, 7, 3, 6, 7, 4, 4],
       [2, 0, 7, 0, 7, 6, 6, 1, 6, 6],
       [3, 8, 8, 9, 6, 7, 2, 5, 0, 0]], dtype=uint8)

In [5]: a[:, 1:] = a[:, :-1]

In [6]: a
Out[6]:
array([[0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [6, 6, 6, 6, 6, 6, 6, 6, 6, 6],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
       [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
       [7, 7, 7, 7, 7, 7, 7, 7, 7, 7],
       [8, 8, 8, 8, 8, 8, 8, 8, 8, 8],
       [4, 4, 4, 4, 4, 4, 4, 4, 4, 4],
       [2, 2, 2, 2, 2, 2, 2, 2, 2, 2],
       [3, 3, 3, 3, 3, 3, 3, 3, 3, 3]], dtype=uint8)

In [7]: np.__version__
Out[7]: '1.3.0'
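Given the aliasing explained elsewhere in this thread, forcing a copy of the right-hand side produces the intended right shift at the cost of a temporary array. A sketch of the standard workaround, with made-up data:

```python
import numpy as np

a = np.arange(12, dtype=np.uint8).reshape(3, 4)

# .copy() materializes the right-hand side before any element of `a`
# is overwritten, so the assignment cannot read freshly modified data
a[:, 1:] = a[:, :-1].copy()

assert a.tolist() == [[0, 0, 1, 2], [4, 4, 5, 6], [8, 8, 9, 10]]
```

The copy is what numpy.roll effectively does internally; the reversed-slice trick discussed later in the thread avoids the temporary.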
Re: [Numpy-discussion] performance matrix multiplication vs. matlab
On Mon, Jan 18, 2010 at 13:34, Vicente Sole wrote:
> You are taking point 4.d)0 while I am taking 4.d)1:
>
> """
> 1) Use a suitable shared library mechanism for linking with the
> Library. A suitable mechanism is one that (a) uses at run time a copy
> of the Library already present on the user's computer system, and (b)
> will operate properly with a modified version of the Library that is
> interface-compatible with the Linked Version.
> """
>
> If you are using the library as a shared library (what you do most of
> the time in Python), you are quite free.

numpy would not be using Eigen2 as a shared library. It is true that numpy would act as a shared library with respect to some downstream application, but incorporating Eigen2 into numpy would make those numpy binaries effectively under the LGPL license with respect to the downstream application.

> In any case, it seems I am not the only one seeing it like that:
>
> http://qt.nokia.com/downloads
>
> The key point is if you use the library "as is" or you have modified it.

With respect to numpy and the way that Eigen2 was proposed as being used, no, it is not the key point. We will not incorporate Eigen2 code into numpy, particularly not as the default linear algebra implementation, because we wish to keep numpy's source as being only BSD. This is a policy decision of the numpy team, not a legal incompatibility.

-- 
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
Re: [Numpy-discussion] logic problem
josef.p...@gmail.com wrote:
> On Mon, Jan 18, 2010 at 2:18 PM, Warren Weckesser wrote:
>> Ernest Adrogué wrote:
>>> Hi,
>>>
>>> This is hard to explain. In this code:
>>>
>>> reduce(np.logical_or, [m1 & m2, m1 & m3, m2 & m3])
>>>
>>> where m1, m2 and m3 are boolean arrays, I'm trying to figure
>>> out an expression that works with an arbitrary number of
>>> arrays, not just 3. Any idea??
>>
>> If I understand the problem correctly, you want the result to be True
>> whenever any pair of the corresponding elements of the arrays are True.
>>
>> This could work:
>>
>> reduce(np.add, [m.astype(int) for m in mlist]) > 1
>>
>> where mlist is a list of the boolean arrays (e.g. mlist = [m1, m2, m3]
>> in your example).
>
> much nicer than what I came up with. Does an iterator instead of an
> intermediate list work (same for my list comprehension)?
>
> reduce(np.add, (m.astype(int) for m in mlist)) > 1

Yes, that works and is preferable, especially if the arrays are large or the list is long.

Warren
Re: [Numpy-discussion] performance matrix multiplication vs. matlab
Quoting Bruce Southey:
> On 01/18/2010 12:47 PM, Vicente Sole wrote:
>> Quoting Bruce Southey:
>>> If you obtain the code from any package then you are bound by the terms
>>> of that code. So while a user might not be 'inconvenienced' by the LGPL,
>>> they are required to meet the terms as required. For some licenses (like
>>> the LGPL) these terms do not really apply until you distribute the code
>>> but that does not mean that the user is exempt from the licensing terms
>>> of that code because they have not distributed their code (yet).
>>>
>>> Furthermore there are a number of numpy users that download the numpy
>>> project for further distribution such as Enthought, packagers for Linux
>>> distributions and developers of projects like Python(x,y). Some of these
>>> users would be inconvenienced because binary-only distributions would
>>> not be permitted in any form.
>>
>> I think people are confusing LGPL and GPL...
>
> Not at all.
>
>> I can distribute my code in binary form without any restriction
>> when using an LGPL library UNLESS I have modified the library itself.
>
> I do not interpret the LGPL version 3 in this way:
>
> A "Combined Work" is a work produced by combining or linking an
> Application with the Library.
>
> So you must apply section 4, in particular, provide the "Minimal
> Corresponding Source":
>
> The "Minimal Corresponding Source" for a Combined Work means the
> Corresponding Source for the Combined Work, excluding any source code
> for portions of the Combined Work that, considered in isolation, are
> based on the Application, and not on the Linked Version.
>
> So a binary-only is usually not appropriate.

You are taking point 4.d)0 while I am taking 4.d)1:

"""
1) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (a) uses at run time a copy of the Library already present on the user's computer system, and (b) will operate properly with a modified version of the Library that is interface-compatible with the Linked Version.
"""

If you are using the library as a shared library (what you do most of the time in Python), you are quite free.

In any case, it seems I am not the only one seeing it like that:

http://qt.nokia.com/downloads

The key point is if you use the library "as is" or you have modified it.

Armando
Re: [Numpy-discussion] logic problem
On Mon, Jan 18, 2010 at 2:18 PM, Warren Weckesser wrote:
> Ernest Adrogué wrote:
>> Hi,
>>
>> This is hard to explain. In this code:
>>
>> reduce(np.logical_or, [m1 & m2, m1 & m3, m2 & m3])
>>
>> where m1, m2 and m3 are boolean arrays, I'm trying to figure
>> out an expression that works with an arbitrary number of
>> arrays, not just 3. Any idea??
>
> If I understand the problem correctly, you want the result to be True
> whenever any pair of the corresponding elements of the arrays are True.
>
> This could work:
>
> reduce(np.add, [m.astype(int) for m in mlist]) > 1
>
> where mlist is a list of the boolean arrays (e.g. mlist = [m1, m2, m3]
> in your example).

much nicer than what I came up with. Does an iterator instead of an intermediate list work (same for my list comprehension)?

reduce(np.add, (m.astype(int) for m in mlist)) > 1

Josef
Re: [Numpy-discussion] logic problem
Ernest Adrogué wrote:
> Hi,
>
> This is hard to explain. In this code:
>
> reduce(np.logical_or, [m1 & m2, m1 & m3, m2 & m3])
>
> where m1, m2 and m3 are boolean arrays, I'm trying to figure
> out an expression that works with an arbitrary number of
> arrays, not just 3. Any idea??

If I understand the problem correctly, you want the result to be True whenever any pair of the corresponding elements of the arrays are True.

This could work:

reduce(np.add, [m.astype(int) for m in mlist]) > 1

where mlist is a list of the boolean arrays (e.g. mlist = [m1, m2, m3] in your example).

Warren
Re: [Numpy-discussion] logic problem
2010/1/18 Ernest Adrogué:
> Hi,
>
> This is hard to explain. In this code:
>
> reduce(np.logical_or, [m1 & m2, m1 & m3, m2 & m3])
>
> where m1, m2 and m3 are boolean arrays, I'm trying to figure
> out an expression that works with an arbitrary number of
> arrays, not just 3. Any idea??

What's the shape of mi (dimension)? fixed or arbitrary number of dimensions?

a loop is the most memory efficient

array broadcasting builds large arrays (and maybe has redundant calculations), but might be a one-liner

or something like a list comprehension:

m = [m1, m2, ... mn]
reduce(np.logical_or, [mi & mj for (i, mi) in enumerate(m)
                              for (j, mj) in enumerate(m) if i < j])

>>> m = [np.arange(10)<5, np.arange(10)>3, np.arange(10)>8]
>>> m
[array([ True,  True,  True,  True,  True, False, False, False, False,
       False], dtype=bool),
 array([False, False, False, False,  True,  True,  True,  True,  True,
        True], dtype=bool),
 array([False, False, False, False, False, False, False, False, False,
        True], dtype=bool)]

>>> reduce(np.logical_or, [mi & mj for (i, mi) in enumerate(m)
...                                for (j, mj) in enumerate(m) if i < j])
array([False, False, False, False,  True, False, False, False, False,
        True], dtype=bool)

Josef
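Josef's pairwise one-liner can be cross-checked against a brute-force count, using the same example masks as in his session. `functools.reduce` is imported so the snippet also runs on Python 3 (on Python 2, reduce is a builtin):

```python
import numpy as np
from functools import reduce

m = [np.arange(10) < 5, np.arange(10) > 3, np.arange(10) > 8]

# OR together the AND of every unordered pair of masks
pairwise = reduce(np.logical_or,
                  [mi & mj for (i, mi) in enumerate(m)
                           for (j, mj) in enumerate(m) if i < j])

# brute-force check: count how many masks are True at each position
counts = sum(mask.astype(int) for mask in m)
assert (pairwise == (counts > 1)).all()
```

Note that the list comprehension builds all n*(n-1)/2 pairwise arrays up front, which is why the counting formulation scales better for many masks.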
Re: [Numpy-discussion] performance matrix multiplication vs. matlab
On 01/18/2010 12:47 PM, Vicente Sole wrote:
> Quoting Bruce Southey:
>> If you obtain the code from any package then you are bound by the terms
>> of that code. So while a user might not be 'inconvenienced' by the LGPL,
>> they are required to meet the terms as required. For some licenses (like
>> the LGPL) these terms do not really apply until you distribute the code
>> but that does not mean that the user is exempt from the licensing terms
>> of that code because they have not distributed their code (yet).
>>
>> Furthermore there are a number of numpy users that download the numpy
>> project for further distribution such as Enthought, packagers for Linux
>> distributions and developers of projects like Python(x,y). Some of these
>> users would be inconvenienced because binary-only distributions would
>> not be permitted in any form.
>
> I think people are confusing LGPL and GPL...

Not at all.

> I can distribute my code in binary form without any restriction
> when using an LGPL library UNLESS I have modified the library itself.

I do not interpret the LGPL version 3 in this way:

A "Combined Work" is a work produced by combining or linking an Application with the Library.

So you must apply section 4, in particular, provide the "Minimal Corresponding Source":

The "Minimal Corresponding Source" for a Combined Work means the Corresponding Source for the Combined Work, excluding any source code for portions of the Combined Work that, considered in isolation, are based on the Application, and not on the Linked Version.

So a binary-only is usually not appropriate.

Bruce
[Numpy-discussion] logic problem
Hi,

This is hard to explain. In this code:

reduce(np.logical_or, [m1 & m2, m1 & m3, m2 & m3])

where m1, m2 and m3 are boolean arrays, I'm trying to figure out an expression that works with an arbitrary number of arrays, not just 3. Any idea??

Bye.
Re: [Numpy-discussion] performance matrix multiplication vs. matlab
On 01/18/2010 10:46 AM, Benoit Jacob wrote: > 2010/1/18 Robert Kern: > >> On Mon, Jan 18, 2010 at 10:26, Benoit Jacob wrote: >> >>> 2010/1/18 Robert Kern: >>> On Mon, Jan 18, 2010 at 09:35, Benoit Jacob wrote: > Sorry for continuing the licensing noise on your list --- I though > that now that I've started, I should let you know that I think I > understand things more clearly now ;) > No worries. > First, Section 5 of the LGPL is horrible indeed, so let's forget about > that. > I don't think it's that horrible, honestly. It just applies to a different deployment use case and a different set of technologies. > If you were using a LGPL-licensed binary library, Section 4 would > rather be what you want. It would require you to: > 4a) say somewhere ("prominently" is vague, the bottom of a README is > OK) that you use the library > 4b) distribute copies of the GPL and LGPL licenses text. Pointless, > but not a big issue. > > the rest doesn't matter: > 4c) not applicable to you > 4d1) this is what you would be doing anyway > Possibly, but shared libraries are not easy for a variety of boring, Python-specific, technical reasons. >>> Ah, that I didn't know. >>> >>> > 4e) not applicable to you > Yes, it is. The exception where Installation Information is not required is only when installation is impossible, such as embedded devices where the code is in a ROM chip. >>> OK, I didn't understand that. >>> >>> > Finally and this is the important point: you would not be passing any > requirement to your own users. Indeed, the LGPL license, contrary to > the GPL license, does not propagate through dependency chains. So if > NumPy used a LGPL-licensed lib Foo, the conditions of the LGPL must be > met when distributing NumPy, but NumPy itself isn't LGPL at all and an > application using NumPy does not have to care at all about the LGPL. > So there should be no concern at all of "passing on LGPL requirements > to users" > No, not at all. 
The GPL "propagates" by requiring that the rest of the code be licensed compatibly with the GPL. This is an unusual and particular feature of the GPL. The LGPL does not require that rest of the code be licensed in a particular way. However, that doesn't mean that the license of the "outer layer" insulates the downstream user from the LGPL license of the wrapped component. It just means that there is BSD code and LGPL code in the total product. The downstream user must accept and deal with the licenses of *all* of the components simultaneously. This is how most licenses work. I think that the fact that the GPL is particularly "viral" may be obscuring the normal way that licenses work when combined with other licenses. If I had a proprietary application that used an LGPL library, and I gave my customers some limited rights to modify and resell my application, they would still be bound by the LGPL with respect to the library. They could not modify the LGPLed library and sell it under a proprietary license even if I allow them to do that with the application as a whole. For us to use Eigen2 in numpy such that our users could use, modify and redistribute numpy+Eigen2, in its entirety, under the terms of the BSD license, we would have to get permission from you to distribute Eigen2 under the BSD license. It's only polite. >>> OK, so the Eigen code inside of NumPy would still be protected by the >>> LGPL. But what I meant when I said that the LGPL requirements don't >>> propagate to your users, was that, for example, they don't have to >>> distribute copies of the LGPL text, installation information for >>> Eigen, or links to Eigen's website. >>> >> Yes, they do. They are redistributing Eigen; they must abide by its >> license in all respects. It doesn't matter how much it is wrapped. >> > Well this is where I'm not sure if I agree, I am asking the FSF right > now as, if this were the case, I too would find such a clause very > inconvenient for users. 
> > If you obtain the code from any package, then you are bound by the terms of that code. So while a user might not be 'inconvenienced' by the LGPL, they are still required to meet its terms. For some licenses (like the LGPL) these terms do not really apply until you distribute the code, but that does not mean that the user is exempt from the licensing terms of that code just because they have not distributed their code (yet). Furthermore, there are a number of numpy users that download the numpy project for further distribution suc
[Numpy-discussion] EPD 6.0 and IPython Webinar Friday
Happy 2010! To start the year off, we've released a new version of EPD and lined up a solid set of training options. Scientific Computing with Python Webinar: This Friday, Travis Oliphant will provide an introduction to multiprocessing and IPython.kernel. Scientific Computing with Python Webinar, Multiprocessing and IPython.kernel, Friday, January 22: 1pm CST/7pm UTC. Register. Enthought Live Training: Enthought's intensive training courses are offered in 3-5 day sessions. The Python skills you'll acquire will save you and your organization time and money in 2010. Enthought Open Course, February 22-26, Austin, TX • Python for Scientists and Engineers • Interfacing with C / C++ and Fortran • Introduction to UIs and Visualization. Enjoy! The Enthought Team EPD 6.0 Released: Now available in our repository, EPD 6.0 includes Python 2.6, PiCloud's cloud library, and NumPy 1.4... Not to mention 64-bit support for Windows, OSX, and Linux. Details. Download now. New: Enthought channel on YouTube: Short instructional videos straight from the desktops of our developers. Get started with a 4-part series on interpolation with SciPy. Our mailing address is: Enthought, Inc. 515 Congress Ave. Austin, TX 78701 Copyright (C) 2009 Enthought, Inc. All rights reserved. ___ NumPy-Discussion mailing list NumPy-Discussion@scipy.org http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] performance matrix multiplication vs. matlab
2010/1/18 Robert Kern : > On Mon, Jan 18, 2010 at 10:26, Benoit Jacob wrote: >> 2010/1/18 Robert Kern : >>> On Mon, Jan 18, 2010 at 09:35, Benoit Jacob >>> wrote: >>> Sorry for continuing the licensing noise on your list --- I though that now that I've started, I should let you know that I think I understand things more clearly now ;) >>> >>> No worries. >>> First, Section 5 of the LGPL is horrible indeed, so let's forget about that. >>> >>> I don't think it's that horrible, honestly. It just applies to a >>> different deployment use case and a different set of technologies. >>> If you were using a LGPL-licensed binary library, Section 4 would rather be what you want. It would require you to: 4a) say somewhere ("prominently" is vague, the bottom of a README is OK) that you use the library 4b) distribute copies of the GPL and LGPL licenses text. Pointless, but not a big issue. the rest doesn't matter: 4c) not applicable to you 4d1) this is what you would be doing anyway >>> >>> Possibly, but shared libraries are not easy for a variety of boring, >>> Python-specific, technical reasons. >> >> Ah, that I didn't know. >> 4e) not applicable to you >>> >>> Yes, it is. The exception where Installation Information is not >>> required is only when installation is impossible, such as embedded >>> devices where the code is in a ROM chip. >> >> OK, I didn't understand that. >> >>> Finally and this is the important point: you would not be passing any requirement to your own users. Indeed, the LGPL license, contrary to the GPL license, does not propagate through dependency chains. So if NumPy used a LGPL-licensed lib Foo, the conditions of the LGPL must be met when distributing NumPy, but NumPy itself isn't LGPL at all and an application using NumPy does not have to care at all about the LGPL. So there should be no concern at all of "passing on LGPL requirements to users" >>> >>> No, not at all. 
The GPL "propagates" by requiring that the rest of the >>> code be licensed compatibly with the GPL. This is an unusual and >>> particular feature of the GPL. The LGPL does not require that rest of >>> the code be licensed in a particular way. However, that doesn't mean >>> that the license of the "outer layer" insulates the downstream user >>> from the LGPL license of the wrapped component. It just means that >>> there is BSD code and LGPL code in the total product. The downstream >>> user must accept and deal with the licenses of *all* of the components >>> simultaneously. This is how most licenses work. I think that the fact >>> that the GPL is particularly "viral" may be obscuring the normal way >>> that licenses work when combined with other licenses. >>> >>> If I had a proprietary application that used an LGPL library, and I >>> gave my customers some limited rights to modify and resell my >>> application, they would still be bound by the LGPL with respect to the >>> library. They could not modify the LGPLed library and sell it under a >>> proprietary license even if I allow them to do that with the >>> application as a whole. For us to use Eigen2 in numpy such that our >>> users could use, modify and redistribute numpy+Eigen2, in its >>> entirety, under the terms of the BSD license, we would have to get >>> permission from you to distribute Eigen2 under the BSD license. It's >>> only polite. >> >> OK, so the Eigen code inside of NumPy would still be protected by the >> LGPL. But what I meant when I said that the LGPL requirements don't >> propagate to your users, was that, for example, they don't have to >> distribute copies of the LGPL text, installation information for >> Eigen, or links to Eigen's website. > > Yes, they do. They are redistributing Eigen; they must abide by its > license in all respects. It doesn't matter how much it is wrapped. 
Well, this is where I'm not sure I agree; I am asking the FSF right now, as, if this were the case, I too would find such a clause very inconvenient for users. > >> The only requirement, if I understand well, is that _if_ a NumPy user >> wanted to make modifications to Eigen itself, he would have to >> conform to the LGPL requirements about sharing the modified source >> code. >> >> But is it really a requirement of NumPy that all its dependencies must >> be free to modify without redistributing the modified source code? > > For the default build and the official binaries, yes. OK. > >> Don't you use MKL, for which the source code is not available at all? > > No, we don't. It is a build option. If you were to provide a BLAS > interface to Eigen, Eigen would be another option. OK, then I guess that this is what will happen once we release the BLAS library. Thanks for your patience. Benoit
Re: [Numpy-discussion] performance matrix multiplication vs. matlab
On Mon, Jan 18, 2010 at 10:26, Benoit Jacob wrote: > 2010/1/18 Robert Kern : >> On Mon, Jan 18, 2010 at 09:35, Benoit Jacob wrote: >> >>> Sorry for continuing the licensing noise on your list --- I though >>> that now that I've started, I should let you know that I think I >>> understand things more clearly now ;) >> >> No worries. >> >>> First, Section 5 of the LGPL is horrible indeed, so let's forget about that. >> >> I don't think it's that horrible, honestly. It just applies to a >> different deployment use case and a different set of technologies. >> >>> If you were using a LGPL-licensed binary library, Section 4 would >>> rather be what you want. It would require you to: >>> 4a) say somewhere ("prominently" is vague, the bottom of a README is >>> OK) that you use the library >>> 4b) distribute copies of the GPL and LGPL licenses text. Pointless, >>> but not a big issue. >>> >>> the rest doesn't matter: >>> 4c) not applicable to you >>> 4d1) this is what you would be doing anyway >> >> Possibly, but shared libraries are not easy for a variety of boring, >> Python-specific, technical reasons. > > Ah, that I didn't know. > >>> 4e) not applicable to you >> >> Yes, it is. The exception where Installation Information is not >> required is only when installation is impossible, such as embedded >> devices where the code is in a ROM chip. > > OK, I didn't understand that. > >> >>> Finally and this is the important point: you would not be passing any >>> requirement to your own users. Indeed, the LGPL license, contrary to >>> the GPL license, does not propagate through dependency chains. So if >>> NumPy used a LGPL-licensed lib Foo, the conditions of the LGPL must be >>> met when distributing NumPy, but NumPy itself isn't LGPL at all and an >>> application using NumPy does not have to care at all about the LGPL. >>> So there should be no concern at all of "passing on LGPL requirements >>> to users" >> >> No, not at all. 
The GPL "propagates" by requiring that the rest of the >> code be licensed compatibly with the GPL. This is an unusual and >> particular feature of the GPL. The LGPL does not require that rest of >> the code be licensed in a particular way. However, that doesn't mean >> that the license of the "outer layer" insulates the downstream user >> from the LGPL license of the wrapped component. It just means that >> there is BSD code and LGPL code in the total product. The downstream >> user must accept and deal with the licenses of *all* of the components >> simultaneously. This is how most licenses work. I think that the fact >> that the GPL is particularly "viral" may be obscuring the normal way >> that licenses work when combined with other licenses. >> >> If I had a proprietary application that used an LGPL library, and I >> gave my customers some limited rights to modify and resell my >> application, they would still be bound by the LGPL with respect to the >> library. They could not modify the LGPLed library and sell it under a >> proprietary license even if I allow them to do that with the >> application as a whole. For us to use Eigen2 in numpy such that our >> users could use, modify and redistribute numpy+Eigen2, in its >> entirety, under the terms of the BSD license, we would have to get >> permission from you to distribute Eigen2 under the BSD license. It's >> only polite. > > OK, so the Eigen code inside of NumPy would still be protected by the > LGPL. But what I meant when I said that the LGPL requirements don't > propagate to your users, was that, for example, they don't have to > distribute copies of the LGPL text, installation information for > Eigen, or links to Eigen's website. Yes, they do. They are redistributing Eigen; they must abide by its license in all respects. It doesn't matter how much it is wrapped. 
> The only requirement, if I understand well, is that _if_ a NumPy user > wanted to make modifications to Eigen itself, he would have to > conform to the LGPL requirements about sharing the modified source > code. > > But is it really a requirement of NumPy that all its dependencies must > be free to modify without redistributing the modified source code? For the default build and the official binaries, yes. > Don't you use MKL, for which the source code is not available at all? No, we don't. It is a build option. If you were to provide a BLAS interface to Eigen, Eigen would be another option. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
Re: [Numpy-discussion] performance matrix multiplication vs. matlab
2010/1/18 Robert Kern : > On Mon, Jan 18, 2010 at 09:35, Benoit Jacob wrote: > >> Sorry for continuing the licensing noise on your list --- I though >> that now that I've started, I should let you know that I think I >> understand things more clearly now ;) > > No worries. > >> First, Section 5 of the LGPL is horrible indeed, so let's forget about that. > > I don't think it's that horrible, honestly. It just applies to a > different deployment use case and a different set of technologies. > >> If you were using a LGPL-licensed binary library, Section 4 would >> rather be what you want. It would require you to: >> 4a) say somewhere ("prominently" is vague, the bottom of a README is >> OK) that you use the library >> 4b) distribute copies of the GPL and LGPL licenses text. Pointless, >> but not a big issue. >> >> the rest doesn't matter: >> 4c) not applicable to you >> 4d1) this is what you would be doing anyway > > Possibly, but shared libraries are not easy for a variety of boring, > Python-specific, technical reasons. Ah, that I didn't know. >> 4e) not applicable to you > > Yes, it is. The exception where Installation Information is not > required is only when installation is impossible, such as embedded > devices where the code is in a ROM chip. OK, I didn't understand that. > >> Finally and this is the important point: you would not be passing any >> requirement to your own users. Indeed, the LGPL license, contrary to >> the GPL license, does not propagate through dependency chains. So if >> NumPy used a LGPL-licensed lib Foo, the conditions of the LGPL must be >> met when distributing NumPy, but NumPy itself isn't LGPL at all and an >> application using NumPy does not have to care at all about the LGPL. >> So there should be no concern at all of "passing on LGPL requirements >> to users" > > No, not at all. The GPL "propagates" by requiring that the rest of the > code be licensed compatibly with the GPL. This is an unusual and > particular feature of the GPL. 
The LGPL does not require that rest of > the code be licensed in a particular way. However, that doesn't mean > that the license of the "outer layer" insulates the downstream user > from the LGPL license of the wrapped component. It just means that > there is BSD code and LGPL code in the total product. The downstream > user must accept and deal with the licenses of *all* of the components > simultaneously. This is how most licenses work. I think that the fact > that the GPL is particularly "viral" may be obscuring the normal way > that licenses work when combined with other licenses. > > If I had a proprietary application that used an LGPL library, and I > gave my customers some limited rights to modify and resell my > application, they would still be bound by the LGPL with respect to the > library. They could not modify the LGPLed library and sell it under a > proprietary license even if I allow them to do that with the > application as a whole. For us to use Eigen2 in numpy such that our > users could use, modify and redistribute numpy+Eigen2, in its > entirety, under the terms of the BSD license, we would have to get > permission from you to distribute Eigen2 under the BSD license. It's > only polite. OK, so the Eigen code inside of NumPy would still be protected by the LGPL. But what I meant when I said that the LGPL requirements don't propagate to your users, was that, for example, they don't have to distribute copies of the LGPL text, installation information for Eigen, or links to Eigen's website. The only requirement, if I understand well, is that _if_ a NumPy user wanted to make modifications to Eigen itself, he would have to conform to the LGPL requirements about sharing the modified source code. But is it really a requirement of NumPy that all its dependencies must be free to modify without redistributing the modified source code? Don't you use MKL, for which the source code is not available at all? 
I am not sure that I understand how that is better than having source code subject to LGPL requirements. Benoit
Re: [Numpy-discussion] performance matrix multiplication vs. matlab
On Mon, Jan 18, 2010 at 09:35, Benoit Jacob wrote: > Sorry for continuing the licensing noise on your list --- I though > that now that I've started, I should let you know that I think I > understand things more clearly now ;) No worries. > First, Section 5 of the LGPL is horrible indeed, so let's forget about that. I don't think it's that horrible, honestly. It just applies to a different deployment use case and a different set of technologies. > If you were using a LGPL-licensed binary library, Section 4 would > rather be what you want. It would require you to: > 4a) say somewhere ("prominently" is vague, the bottom of a README is > OK) that you use the library > 4b) distribute copies of the GPL and LGPL licenses text. Pointless, > but not a big issue. > > the rest doesn't matter: > 4c) not applicable to you > 4d1) this is what you would be doing anyway Possibly, but shared libraries are not easy for a variety of boring, Python-specific, technical reasons. 4d0 would be easier for the official binaries (because we provide official source). But that would still force people building a proprietary application using numpy to rebuild a binary without Eigen2 or else make sure that they allow users to rebuild numpy. For a number of deployment options (py2app, py2exe, bbfreeze, etc.), this is annoying, particularly when combined with the 4e requirement, as I explain below. > 4e) not applicable to you Yes, it is. The exception where Installation Information is not required is only when installation is impossible, such as embedded devices where the code is in a ROM chip. > Finally and this is the important point: you would not be passing any > requirement to your own users. Indeed, the LGPL license, contrary to > the GPL license, does not propagate through dependency chains. 
So if > NumPy used a LGPL-licensed lib Foo, the conditions of the LGPL must be > met when distributing NumPy, but NumPy itself isn't LGPL at all and an > application using NumPy does not have to care at all about the LGPL. > So there should be no concern at all of "passing on LGPL requirements > to users" No, not at all. The GPL "propagates" by requiring that the rest of the code be licensed compatibly with the GPL. This is an unusual and particular feature of the GPL. The LGPL does not require that the rest of the code be licensed in a particular way. However, that doesn't mean that the license of the "outer layer" insulates the downstream user from the LGPL license of the wrapped component. It just means that there is BSD code and LGPL code in the total product. The downstream user must accept and deal with the licenses of *all* of the components simultaneously. This is how most licenses work. I think that the fact that the GPL is particularly "viral" may be obscuring the normal way that licenses work when combined with other licenses. If I had a proprietary application that used an LGPL library, and I gave my customers some limited rights to modify and resell my application, they would still be bound by the LGPL with respect to the library. They could not modify the LGPLed library and sell it under a proprietary license even if I allow them to do that with the application as a whole. For us to use Eigen2 in numpy such that our users could use, modify and redistribute numpy+Eigen2, in its entirety, under the terms of the BSD license, we would have to get permission from you to distribute Eigen2 under the BSD license. It's only polite. -- Robert Kern "I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth." -- Umberto Eco
Re: [Numpy-discussion] performance matrix multiplication vs. matlab
2010/1/17 Benoit Jacob : > 2010/1/17 Robert Kern : >> On Sun, Jan 17, 2010 at 13:18, Benoit Jacob wrote: >>> 2010/1/17 Robert Kern : On Sun, Jan 17, 2010 at 12:11, Benoit Jacob wrote: > 2010/1/17 Robert Kern : >> On Sun, Jan 17, 2010 at 08:52, Benoit Jacob >> wrote: >>> 2010/1/17 David Cournapeau : >> There are several issues with eigen2 for NumPy usage: - using it as a default implementation does not make much sense IMHO, as it would make distributed binaries non 100 % BSD. >>> >>> But the LGPL doesn't impose restrictions on the usage of binaries, so >>> how does it matter? The LGPL and the BSD licenses are similar as far >>> as the binaries are concerned (unless perhaps one starts disassembling >>> them). >>> >>> The big difference between LGPL and BSD is at the level of source >>> code, not binary code: one modifies LGPL-based source code and >>> distributes a binary form of it, then one has to release the modified >>> source code as well. >> >> This is not true. Binaries that contain LGPLed code must be able to be >> relinked with a modified version of the LGPLed component. > > This doesn't apply to Eigen which is a header-only pure template > library, hence can't be 'linked' to. > > Actually you seem to be referring to Section 4 of the LGPL3, we have > already asked the FSF about this and their reply was that it just > doesn't apply in the case of Eigen: > > http://listengine.tuxfamily.org/lists.tuxfamily.org/eigen/2009/01/msg00083.html > > In your case, what matters is Section 5. You mean Section 3. Good. >>> >>> Section 3 is for using Eigen directly in a C++ program, yes, but I got >>> a bit ahead of myself there: see below >>> I admit to being less up on the details of LGPLv3 than I was of LGPLv2 which had a problem with C++ header templates. >>> >>> Indeed, it did, that's why we don't use it. 
>>> That said, we will not be using the C++ templates directly in numpy for technical reasons (not least that we do not want to require a C++ compiler for the default build). At best, we would be using a BLAS interface which requires linking of objects, not just header templates. That *would* impose the Section 4 requirements. >>> >>> ... or rather Section 5: that is what I was having in mind: >>> " 5. Combined Libraries. " >>> >>> I have to admit that I don't understand what 5.a) means. >> >> I don't think it applies. Let's say I write some routines that use an >> LGPLed Library (let's call them Routines A). I can include those >> routines in a larger library with routines that do not use the LGPLed >> library (Routines B). The Routines B can be under whatever license you >> like. However, one must make a library containing only Routines A and >> the LGPLed Library and release that under the LGPLv3, distribute it >> along with the combined work, and give notice about how to obtain >> Routines A+Library separate from Routines B. Basically, it's another >> exception for needing to be able to relink object code in a particular >> technical use case. >> >> This cannot apply to numpy because we cannot break out numpy.linalg >> from the rest of numpy. Even if we could, we do not wish to make >> numpy.linalg itself LGPLed. > > Indeed, that seems very cumbersome. I will ask the FSF about this, as > this is definitely not something that we want to impose on Eigen > users. > Sorry for continuing the licensing noise on your list --- I thought that now that I've started, I should let you know that I think I understand things more clearly now ;) First, Section 5 of the LGPL is horrible indeed, so let's forget about that. If you were using a LGPL-licensed binary library, Section 4 would rather be what you want. 
It would require you to: 4a) say somewhere ("prominently" is vague, the bottom of a README is OK) that you use the library 4b) distribute copies of the GPL and LGPL licenses text. Pointless, but not a big issue. the rest doesn't matter: 4c) not applicable to you 4d1) this is what you would be doing anyway 4e) not applicable to you Finally and this is the important point: you would not be passing any requirement to your own users. Indeed, the LGPL license, contrary to the GPL license, does not propagate through dependency chains. So if NumPy used a LGPL-licensed lib Foo, the conditions of the LGPL must be met when distributing NumPy, but NumPy itself isn't LGPL at all and an application using NumPy does not have to care at all about the LGPL. So there should be no concern at all of "passing on LGPL requirements to users" Again, IANAL. Benoit
Re: [Numpy-discussion] Structured array sorting
On Mon, Jan 18, 2010 at 6:39 AM, Thomas Robitaille < thomas.robitai...@gmail.com> wrote: > > > Warren Weckesser-3 wrote: > > > > Looks like 'sort' is not handling endianness of the column data > > correctly. If you change the type of the floating point data to '<f8', > > the sort works. > > > > Thanks for identifying the issue - should I submit a bug report? > > Yes. Chuck
Re: [Numpy-discussion] Structured array sorting
Warren Weckesser-3 wrote: > > Looks like 'sort' is not handling endianness of the column data > correctly. If you change the type of the floating point data to '<f8', > the sort works. > Thanks for identifying the issue - should I submit a bug report? Thomas -- View this message in context: http://old.nabble.com/Structured-array-sorting-tp27200785p27210615.html Sent from the Numpy-discussion mailing list archive at Nabble.com.
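[Editor's note: the workaround Warren describes above can be sketched as follows. The sample data and field names are made up for illustration, and the underlying sort bug has long since been fixed in modern NumPy; this only shows the cast-to-native-byte-order technique that was being discussed.]

```python
import numpy as np

# A structured array whose float column is stored big-endian ('>f8'),
# i.e. non-native byte order on most machines -- the situation that
# triggered the reported sort bug.
a = np.array([(3, 1.5), (1, 2.5), (2, 0.5)],
             dtype=[('id', '<i4'), ('val', '>f8')])

# The workaround: cast the float column to the native little-endian
# '<f8' before sorting on it.
b = a.astype([('id', '<i4'), ('val', '<f8')])
b.sort(order='val')
print(b['id'].tolist())  # rows now ordered by 'val': [2, 3, 1]
```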