On 1 April 2010 02:49, David Cournapeau wrote:
> Charles R Harris wrote:
>> On Thu, Apr 1, 2010 at 12:04 AM, Anne Archibald
>> wrote:
Hi,
A calculation which goes like this...

import numpy as np

n = 5
a = np.arange(1000)
b = np.arange(n - 1, 1000)
l = []
for i in range(b.size):
    # Absolute difference of n a elements and the i-th b element
    x = np.abs(a[i:i + n] - b[i])
    # Average of x
    y = x.sum() / n
    l.append(y)

It takes a while
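A loop like this can be replaced by array operations (a sketch, not from the original message; the helper name `idx` is introduced here). Each row of `idx` selects one length-`n` window of `a`, so a single broadcasted expression computes every windowed mean at once:

```python
import numpy as np

n = 5
a = np.arange(1000)
b = np.arange(n - 1, 1000)

# Row i of idx holds the indices i, i+1, ..., i+n-1, i.e. the window a[i:i+n]
idx = np.arange(n) + np.arange(b.size)[:, None]
# Mean absolute difference of each window against the matching b element
y = np.abs(a[idx] - b[:, None]).sum(axis=1) / n
```

This trades the Python-level loop for two vectorized operations at the cost of a temporary of shape (b.size, n).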
On Wed, Mar 31, 2010 at 7:06 PM, Charles R Harris
wrote:
>
> That is a 32 bit kernel, right?
>
Correct.
Regarding the config.h, which config.h? I have a numpyconfig.h.
Which compilation options should I obtain and how? When I run
setup.py, I see:
C compiler: gcc -pthread -fno-strict-aliasing
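To answer the "which compilation options" question, NumPy records its own build configuration and can print it (a sketch; this shows what NumPy was built with, which is only part of what the thread is after):

```python
import numpy as np

# Prints the compiler/library configuration NumPy recorded at build time
np.show_config()
```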
On Wed, Mar 31, 2010 at 9:24 PM, Shailendra wrote:
Hi All,
I want to make a function which should be like this:

cordinates1 = (x1, y1)  # x1 and y1 are the x- and y-coords of a large number of points
cordinates2 = (x2, y2)  # similar to cordinates1
indices1, indices2 = match_cordinates(cordinates1, cordinates2)

(x1[indices1], y1[indices1]) "matches" (x2[indices
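One way to implement `match_cordinates` for moderate point counts is a brute-force nearest-neighbour search with broadcasting (a sketch under the interface described above, not from the thread; for large sets a k-d tree such as `scipy.spatial.cKDTree` scales far better than the O(N*M) distance matrix built here):

```python
import numpy as np

def match_cordinates(cordinates1, cordinates2):
    """For each point in set 1, return the index of the nearest point in set 2."""
    x1, y1 = cordinates1
    x2, y2 = cordinates2
    # Squared distance between every pair of points, shape (len(x1), len(x2))
    d2 = (x1[:, None] - x2[None, :]) ** 2 + (y1[:, None] - y2[None, :]) ** 2
    indices2 = d2.argmin(axis=1)
    indices1 = np.arange(x1.size)
    return indices1, indices2
```

Note the (len(x1), len(x2)) temporary: fine for thousands of points, prohibitive for millions.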
Ryan May wrote:
> On Wed, Mar 31, 2010 at 5:37 PM, Warren Weckesser
> wrote:
>> T J wrote:
>>> On Wed, Mar 31, 2010 at 1:21 PM, Charles R Harris
>>> wrote:
>>>
Looks like roundoff error.
>>> So this is "expected" behavior?
>>>
>>> In [1]: np.logaddexp2(-1.5849625007211563, -53.5849
On Wed, Mar 31, 2010 at 3:38 PM, David Warde-Farley wrote:
> Unfortunately there's no good way of getting around
> order-of-operations-related rounding error using the reduce()
> machinery, that I know of.
>
That seems reasonable, but receiving a nan, in this case, does not.
Are my expectations
On 31-Mar-10, at 6:15 PM, T J wrote:
>
> In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154)
> Out[1]: -1.5849625007211561
>
> In [2]: np.logaddexp2(-0.5849625007211563, -53.584962500721154)
> Out[2]: nan
>
> In [3]: np.log2(np.exp2(-0.5849625007211563) +
> np.exp2(-53.5849625007211
On Wed, Mar 31, 2010 at 1:21 PM, Charles R Harris
wrote:
>
> Looks like roundoff error.
>
So this is "expected" behavior?
In [1]: np.logaddexp2(-1.5849625007211563, -53.584962500721154)
Out[1]: -1.5849625007211561
In [2]: np.logaddexp2(-0.5849625007211563, -53.584962500721154)
Out[2]: nan
In [
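Since logaddexp2(a, b) is defined as log2(2**a + 2**b), the value In [2] should have produced can be checked directly (a sketch run outside the original thread; the direct form is safe here because neither argument under- or overflows exp2 in double precision):

```python
import numpy as np

a, b = -0.5849625007211563, -53.584962500721154
# 2**b is ~53 bits smaller than 2**a, so it vanishes into rounding and
# the result is essentially a itself -- in any case finite, not nan
expected = np.log2(np.exp2(a) + np.exp2(b))
```

So whatever the internal rounding, a nan is not a defensible answer for these inputs.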
On Wed, Mar 31, 2010 at 15:05, Jeremy Lewi wrote:
> So my question is what is the best way to ensure I’m using the correct
> memory deallocator for the buffer? i.e the deallocator for what ever
> allocator numpy used to allocate the array?
PyArray_free() (note the capitalization). This is almost
Hello,
I'm passing a numpy array into a C-extension. I would like my C-extension to
take ownership of the data and handle deallocating the memory when it is no
longer needed. (The data is large so I want to avoid unnecessarily copying
the data).
So my question is what is the best way to en
Hi,
I'm getting some strange behavior with logaddexp2.reduce:

from itertools import permutations
import numpy as np

x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
for p in permutations([0, 1, 2]):
    print p, np.logaddexp2.reduce(x[list(p)])

Essentially, the result
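The six orderings can be checked against a directly computed reference value (a sketch; on a NumPy build where logaddexp2 behaves correctly, every reduce order should agree with the reference up to rounding, since these inputs are small enough to exponentiate without overflow):

```python
import numpy as np
from itertools import permutations

x = np.array([-53.584962500721154, -1.5849625007211563, -0.5849625007211563])
# Reference: log2 of the plain sum of the exponentials (no overflow risk here)
ref = np.log2(np.exp2(x).sum())
for p in permutations([0, 1, 2]):
    r = np.logaddexp2.reduce(x[list(p)])
    # Each ordering should give a finite value close to the reference
    assert np.isfinite(r) and abs(r - ref) < 1e-9
```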