Re: [Numpy-discussion] Ignorance question
On Sun, Nov 1, 2009 at 7:26 PM, wrote:
> On Sun, Nov 1, 2009 at 9:58 PM, David Goldsmith wrote:
>> I Googled scipy brownian and the top hit was the doc for numpy.random.wald,
>> but said doc has a "tone" that suggests there are more "sophisticated" ways
>> to generate a random Brownian signal? Or is wald indeed SotA? Thanks!
>>
>> DG
>
> Do you mean generating a random sample of a Brownian motion? The
> standard approach, I have seen, is just cumsum of random normals, with
> time steps depending on the usage, e.g.

Oddly enough, if you divide an interval into an infinite integer number of
samples this also works for the theory side ;) Euler would understand, but
such odd constructions with extended number systems have fallen out of
favour...

Chuck

___
NumPy-Discussion mailing list
NumPy-Discussion@scipy.org
http://mail.scipy.org/mailman/listinfo/numpy-discussion
Re: [Numpy-discussion] Formatting uint64 number
On Sun, Nov 1, 2009 at 8:37 PM, Thomas Robitaille <thomas.robitai...@gmail.com> wrote:
> Hello,
>
> I have a question concerning uint64 numbers - let's say I want to
> format a uint64 number that is > 2**31, at the moment it's necessary
> to wrap the numpy number inside long before formatting
>
> In [3]: "%40i" % np.uint64(2**64-1)
> Out[3]: ' -1'
>
> In [4]: "%40i" % long(np.uint64(2**64-1))
> Out[4]: '18446744073709551615'
>
> Would it be easy to modify numpy such that it automatically converts
> uint64 numbers to long() instead of int() when implicitly converted to
> python types?

Hmm, I suspect this is a bug whose source is uint64 having an integer
conversion function as part of the type whereas it should be undefined. A
quick look at the source leaves me befuddled, so tracking down just how
this happens might be a bit of work.

Chuck
Re: [Numpy-discussion] Random int64 and float64 numbers
Sturla Molden wrote:
> Sturla Molden skrev:
>> Robert Kern skrev:
>>> Then let me clarify: it was written to support integer ranges up to
>>> sys.maxint. Absolutely, it would be desirable to extend it.
>>
>> Actually it only supports integers up to sys.maxint-1, as
>> random_integers calls randint. random_integers includes the upper range,
>> but randint excludes the upper range. Thus, this happens on line 1153 in
>> mtrand.pyx:
>>
>>     return self.randint(low, high+1, size)
>>
>> The one with the inclusive upper interval should call rk_interval.
>
> I love this one:
>
>     cdef long lo, hi, diff
>     [...]
>     diff = hi - lo - 1
>
> which silently overflows, and is the reason for this strange exception:
>
> >>> np.random.random_integers(-2147483648, high=2147483646, size=10)
> Traceback (most recent call last):
>   File "", line 1, in
>     np.random.random_integers(-2147483648,high=2147483646,size=10)
>   File "mtrand.pyx", line 950, in mtrand.RandomState.random_integers
>   File "mtrand.pyx", line 750, in mtrand.RandomState.randint
> ValueError: low >= high
>
> I'll call this a bug.

Yep, I was bitten by it as well:

http://projects.scipy.org/numpy/ticket/965

David
Re: [Numpy-discussion] Random int64 and float64 numbers
josef.p...@gmail.com wrote:
> No, it wouldn't be a proper distribution. However in Bayesian analysis
> it is used as an improper (diffuse) prior

Ah, right - I wonder how this is handled rigorously, though. I know some
basics of Bayesian statistics, but I don't know much about Bayesian
statistics from a theoretical POV (i.e. a rigorous mathematical
development).

> To simulate huge uniform integers, I think it should be possible to use
> the floating point random numbers and rescale and round them.

Rescaling and especially rounding may bias the distribution, no? The best
(but long term) strategy would be to support arbitrary precision integers,
as mentioned by Robert.

David
Re: [Numpy-discussion] Random int64 and float64 numbers
On Sun, Nov 1, 2009 at 23:14, Sturla Molden wrote:
> I'll call this a bug.

Yes.

--
Robert Kern

"I have come to believe that the whole world is an enigma, a harmless
enigma that is made terrible by our own mad attempt to interpret it as
though it had an underlying truth." -- Umberto Eco
Re: [Numpy-discussion] Random int64 and float64 numbers
Sturla Molden skrev:
> Robert Kern skrev:
>> Then let me clarify: it was written to support integer ranges up to
>> sys.maxint. Absolutely, it would be desirable to extend it.
>
> Actually it only supports integers up to sys.maxint-1, as
> random_integers calls randint. random_integers includes the upper range,
> but randint excludes the upper range. Thus, this happens on line 1153 in
> mtrand.pyx:
>
>     return self.randint(low, high+1, size)
>
> The one with the inclusive upper interval should call rk_interval.

I love this one:

    cdef long lo, hi, diff
    [...]
    diff = hi - lo - 1

which silently overflows, and is the reason for this strange exception:

>>> np.random.random_integers(-2147483648, high=2147483646, size=10)
Traceback (most recent call last):
  File "", line 1, in
    np.random.random_integers(-2147483648,high=2147483646,size=10)
  File "mtrand.pyx", line 950, in mtrand.RandomState.random_integers
  File "mtrand.pyx", line 750, in mtrand.RandomState.randint
ValueError: low >= high

I'll call this a bug.

Sturla
Re: [Numpy-discussion] Random int64 and float64 numbers
Robert Kern skrev:
> Then let me clarify: it was written to support integer ranges up to
> sys.maxint. Absolutely, it would be desirable to extend it.

Actually it only supports integers up to sys.maxint-1, as random_integers
calls randint. random_integers includes the upper range, but randint
excludes the upper range. Thus, this happens on line 1153 in mtrand.pyx:

    return self.randint(low, high+1, size)

The main source of the problem is that numbers smaller than sys.maxint can
become a long. (I have asked why on python-dev; it does not make any
sense.) So when random_integers passes "high+1" to randint, it is
unnecessarily converted to a long. Then, there is an exception on line 847:

    hi = high

With hi previously declared as a C long, Cython refuses the conversion.
Now, we could try a downcast to int like this:

    hi = int(high)

which would make Cython only raise an exception in case of an integer
overflow.

>>> int(2**31)
2147483648L
>>> int(2**31-1)
2147483647

If there is no overflow, high becomes an int and conversion to C long is
allowed. Still, this will only support integer ranges up to sys.maxint - 1.
We thus have to swap the order of randint and random_integers. The one with
the inclusive upper interval should call rk_interval.

Sturla
Re: [Numpy-discussion] odd behaviour with basic operations
Seems like this was a rookie mistake with code later in the function. Thanks
for suggesting the use of numpy.where, that is a much better function for
the purpose.

Benjamin
Re: [Numpy-discussion] Random int64 and float64 numbers
On Sun, Nov 1, 2009 at 10:55 PM, David Cournapeau wrote:
> josef.p...@gmail.com wrote:
>> array([ Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf])
>>
>> might actually be the right answer if you want a uniform distribution
>> on the real line.
>
> Does it make sense to define a uniform random variable whose range is
> the extended real line ? It would not have a distribution w.r.t the
> Lebesgue measure, right ?
>
> I must confess I am quickly lost in the maths in statistics, though,

No, it wouldn't be a proper distribution. However in Bayesian analysis it
is used as an improper (diffuse) prior, which often replicates frequentist
results. But it's a theoretical derivation; I don't think anyone tries to
simulate this.

To simulate huge uniform integers, I think it should be possible to use the
floating point random numbers and rescale and round them.

Josef
[Numpy-discussion] Formatting uint64 number
Hello,

I have a question concerning uint64 numbers - let's say I want to format a
uint64 number that is > 2**31, at the moment it's necessary to wrap the
numpy number inside long before formatting

In [3]: "%40i" % np.uint64(2**64-1)
Out[3]: ' -1'

In [4]: "%40i" % long(np.uint64(2**64-1))
Out[4]: '18446744073709551615'

Would it be easy to modify numpy such that it automatically converts uint64
numbers to long() instead of int() when implicitly converted to python
types?

Thanks,

Thomas
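The workaround Thomas describes - forcing a Python integer before formatting - can be sketched as below. This is a hedged illustration, not NumPy's eventual fix; int() plays the role here that long() plays in the Python 2 session above, where "%40i" on the raw uint64 went through a signed C long and printed -1.

```python
import numpy as np

x = np.uint64(2**64 - 1)

# Convert explicitly to a Python integer before formatting, so no signed
# narrowing can occur (int() is the Python 3 spelling of Python 2's long()).
s = "%40i" % int(x)
```

The formatted string then contains the full unsigned value, right-aligned in a 40-character field.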
Re: [Numpy-discussion] Random int64 and float64 numbers
Robert Kern skrev:
> Then let me clarify: it was written to support integer ranges up to
> sys.maxint. Absolutely, it would be desirable to extend it.

I know, but look at this:

>>> import sys
>>> sys.maxint
2147483647
>>> 2**31-1
2147483647L

sys.maxint becomes a long, which is what confuses mtrand.
Re: [Numpy-discussion] Random int64 and float64 numbers
Sturla Molden wrote:
> Robert Kern skrev:
>> 64-bit and larger integers could be done, but it requires
>> modification. The integer distributions were written to support C
>> longs, not anything larger. You could also use .bytes() and
>> np.fromstring().
>
> But as of Python 2.6.4, even 32-bit integers fail, at least on Windows.

It fails on linux as well - I think it is a 32 vs 64 bits issue, not a
windows vs linux one. I don't know what happens on windows 64, though: we
may have issues if we use long.

cheers,

David
Re: [Numpy-discussion] Random int64 and float64 numbers
On Sun, Nov 1, 2009 at 22:17, Sturla Molden wrote:
> Robert Kern skrev:
>> 64-bit and larger integers could be done, but it requires
>> modification. The integer distributions were written to support C
>> longs, not anything larger. You could also use .bytes() and
>> np.fromstring().
>
> But as of Python 2.6.4, even 32-bit integers fail, at least on Windows.

Then let me clarify: it was written to support integer ranges up to
sys.maxint. Absolutely, it would be desirable to extend it.

--
Robert Kern
Re: [Numpy-discussion] Random int64 and float64 numbers
Robert Kern skrev:
> 64-bit and larger integers could be done, but it requires
> modification. The integer distributions were written to support C
> longs, not anything larger. You could also use .bytes() and
> np.fromstring().

But as of Python 2.6.4, even 32-bit integers fail, at least on Windows.

Sturla
Re: [Numpy-discussion] Random int64 and float64 numbers
josef.p...@gmail.com wrote:
> array([ Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf])
>
> might actually be the right answer if you want a uniform distribution
> on the real line.

Does it make sense to define a uniform random variable whose range is the
extended real line ? It would not have a distribution w.r.t the Lebesgue
measure, right ?

I must confess I am quickly lost in the maths in statistics, though,

David
Re: [Numpy-discussion] Random int64 and float64 numbers
On Sun, Nov 1, 2009 at 20:57, Thomas Robitaille wrote:
> Hi,
>
> I'm trying to generate random 64-bit integer values for integers and
> floats using Numpy, within the entire range of valid values for that
> type.

64-bit and larger integers could be done, but it requires modification.
The integer distributions were written to support C longs, not anything
larger. You could also use .bytes() and np.fromstring().

> To generate random 32-bit floats, I can use:
>
> np.random.uniform(low=np.finfo(np.float32).min,high=np.finfo(np.float32).max,size=10)

What is the use case here? I know of none. Floating point is a bit weird
and will cause you many problems over such an extended range.

--
Robert Kern
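Robert's .bytes()-plus-np.fromstring() route might look like the sketch below. np.frombuffer is used here as the spelling that avoids a copy (np.fromstring was later deprecated for binary input); treat the details as an illustration, not a recommended API.

```python
import numpy as np

rs = np.random.RandomState(12345)

# Draw 8 random bytes per value and reinterpret them as int64; every bit
# pattern is equally likely, so this covers the full 64-bit range uniformly.
n = 10
samples = np.frombuffer(rs.bytes(n * 8), dtype=np.int64)
```

The resulting array holds n independent draws from the full signed 64-bit range.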
Re: [Numpy-discussion] Random int64 and float64 numbers
On Sun, Nov 1, 2009 at 10:20 PM, David Cournapeau wrote:
> Thomas Robitaille wrote:
>> Hi,
>>
>> I'm trying to generate random 64-bit integer values for integers and
>> floats using Numpy, within the entire range of valid values for that
>> type. To generate random 32-bit floats, I can use:
>>
>> np.random.uniform(low=np.finfo(np.float32).min,high=np.finfo(np.float32).max,size=10)
>>
>> which gives for example
>>
>> array([  1.47351436e+37,   9.93620693e+37,   2.22893053e+38,
>>         -3.33828977e+38,   1.08247781e+37,  -8.37481260e+37,
>>          2.64176554e+38,  -2.72207226e+37,   2.54790459e+38,
>>         -2.47883866e+38])
>>
>> but if I try and use this for 64-bit numbers, i.e.
>>
>> np.random.uniform(low=np.finfo(np.float64).min,high=np.finfo(np.float64).max,size=10)
>>
>> I get
>>
>> array([ Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf])
>>
>> Similarly, for integers, I can successfully generate random 32-bit
>> integers:
>>
>> np.random.random_integers(np.iinfo(np.int32).min,high=np.iinfo(np.int32).max,size=10)
>>
>> which gives
>>
>> array([-1506183689,   662982379, -1616890435, -1519456789,  1489753527,
>>         -604311122,  2034533014,   449680073,  -444302414, -1924170329])
>>
>> but am unsuccessful for 64-bit integers, i.e.
>>
>> np.random.random_integers(np.iinfo(np.int64).min,high=np.iinfo(np.int64).max,size=10)
>>
>> which produces the following error:
>>
>> OverflowError: long int too large to convert to int
>>
>> Is this expected behavior, or are these bugs?
>
> I think those are bugs, but it may be difficult to fix.
>
> You can check that if you restrict a tiny bit your interval, you get
> better results:
>
> import numpy as np
> # max/min for double precision is ~ 1.8e308
> low, high = -1e308, 1e308
> np.random.uniform(low, high, 100) # bunch of inf
> low, high = -1e307, 1e307
> np.random.uniform(low, high, 100) # much more reasonable
>
> It may be that you are pushing the limits of the random generator. Your
> min and max may be border cases: if you use the min/max representable
> numbers, and the random generator needs to do any addition of a positive
> number, you will 'overflow' your float number (Robert will have a better
> answer to this). The problem is that it may be difficult to detect this
> in advance.
>
> David

array([ Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf])

might actually be the right answer if you want a uniform distribution on
the real line. I never realized how many numbers are out there when I saw
that most numbers in the example are e+37 or e+38.

Josef
Re: [Numpy-discussion] Random int64 and float64 numbers
Thomas Robitaille wrote:
> Hi,
>
> I'm trying to generate random 64-bit integer values for integers and
> floats using Numpy, within the entire range of valid values for that
> type. To generate random 32-bit floats, I can use:
>
> np.random.uniform(low=np.finfo(np.float32).min,high=np.finfo(np.float32).max,size=10)
>
> which gives for example
>
> array([  1.47351436e+37,   9.93620693e+37,   2.22893053e+38,
>         -3.33828977e+38,   1.08247781e+37,  -8.37481260e+37,
>          2.64176554e+38,  -2.72207226e+37,   2.54790459e+38,
>         -2.47883866e+38])
>
> but if I try and use this for 64-bit numbers, i.e.
>
> np.random.uniform(low=np.finfo(np.float64).min,high=np.finfo(np.float64).max,size=10)
>
> I get
>
> array([ Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf])
>
> Similarly, for integers, I can successfully generate random 32-bit
> integers:
>
> np.random.random_integers(np.iinfo(np.int32).min,high=np.iinfo(np.int32).max,size=10)
>
> which gives
>
> array([-1506183689,   662982379, -1616890435, -1519456789,  1489753527,
>         -604311122,  2034533014,   449680073,  -444302414, -1924170329])
>
> but am unsuccessful for 64-bit integers, i.e.
>
> np.random.random_integers(np.iinfo(np.int64).min,high=np.iinfo(np.int64).max,size=10)
>
> which produces the following error:
>
> OverflowError: long int too large to convert to int
>
> Is this expected behavior, or are these bugs?

I think those are bugs, but it may be difficult to fix.

You can check that if you restrict a tiny bit your interval, you get
better results:

import numpy as np
# max/min for double precision is ~ 1.8e308
low, high = -1e308, 1e308
np.random.uniform(low, high, 100) # bunch of inf
low, high = -1e307, 1e307
np.random.uniform(low, high, 100) # much more reasonable

It may be that you are pushing the limits of the random generator. Your
min and max may be border cases: if you use the min/max representable
numbers, and the random generator needs to do any addition of a positive
number, you will 'overflow' your float number (Robert will have a better
answer to this). The problem is that it may be difficult to detect this
in advance.

David
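The border case David describes is easy to see directly: the width of the full float64 interval is itself not representable, so any computation of the form low + (high - low) * u overflows before a single sample is drawn. A minimal check:

```python
import numpy as np

lo = np.finfo(np.float64).min   # ~ -1.8e308
hi = np.finfo(np.float64).max   # ~ +1.8e308

# The interval width is ~3.6e308, larger than the largest representable
# float64, so the subtraction overflows to inf - which is why
# uniform(lo, hi) returns arrays full of inf.
with np.errstate(over='ignore'):
    width = hi - lo
```

Halving the interval (e.g. ±1e307, as in David's example) keeps the width finite and the samples reasonable.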
Re: [Numpy-discussion] Random int64 and float64 numbers
Thomas Robitaille skrev:
> np.random.random_integers(np.iinfo(np.int32).min,high=np.iinfo(np.int32).max,size=10)
>
> which gives
>
> array([-1506183689,   662982379, -1616890435, -1519456789,  1489753527,
>         -604311122,  2034533014,   449680073,  -444302414, -1924170329])

This fails on my computer (Python 2.6.4, NumPy 1.3.0 on Win32).

>>> np.random.random_integers(np.iinfo(np.int32).min,high=np.iinfo(np.int32).max,size=10)
Traceback (most recent call last):
  File "", line 2, in
    (np.int32).max,size=10)
  File "mtrand.pyx", line 950, in mtrand.RandomState.random_integers
  File "mtrand.pyx", line 746, in mtrand.RandomState.randint
OverflowError: long int too large to convert to int

It might have something to do with this:

>>> 2**31-1
2147483647L
>>> -2**31
-2147483648L

In light of this annoying behaviour:

def random_int64(size):
    # build each value from four independent 16-bit draws
    a0 = np.random.random_integers(0, 0xFFFF, size=size).astype(np.uint64)
    a1 = np.random.random_integers(0, 0xFFFF, size=size).astype(np.uint64)
    a2 = np.random.random_integers(0, 0xFFFF, size=size).astype(np.uint64)
    a3 = np.random.random_integers(0, 0xFFFF, size=size).astype(np.uint64)
    a = a0 + (a1 << 16) + (a2 << 32) + (a3 << 48)
    return a.view(dtype=np.int64)

Sturla
Re: [Numpy-discussion] Ignorance question
On Sun, Nov 1, 2009 at 21:27, wrote:
> On Sun, Nov 1, 2009 at 10:26 PM, wrote:
>> On Sun, Nov 1, 2009 at 9:58 PM, David Goldsmith wrote:
>>> I Googled scipy brownian and the top hit was the doc for numpy.random.wald,
>>> but said doc has a "tone" that suggests there are more "sophisticated" ways
>>> to generate a random Brownian signal? Or is wald indeed SotA? Thanks!
>
> What's a SotA?

"State of the Art"

--
Robert Kern
Re: [Numpy-discussion] Ignorance question
On Sun, Nov 1, 2009 at 10:26 PM, wrote:
> On Sun, Nov 1, 2009 at 9:58 PM, David Goldsmith wrote:
>> I Googled scipy brownian and the top hit was the doc for numpy.random.wald,
>> but said doc has a "tone" that suggests there are more "sophisticated" ways
>> to generate a random Brownian signal? Or is wald indeed SotA? Thanks!

What's a SotA?

Josef

>> DG
>
> Do you mean generating a random sample of a Brownian motion? The
> standard approach, I have seen, is just cumsum of random normals, with
> time steps depending on the usage, e.g.
> http://groups.google.com/group/sympy/browse_thread/thread/65bf82164cae83be?pli=1
>
> However, I never really checked the details of how they generate
> Brownian Motions or Brownian Bridges in larger Monte Carlo studies.
>
> Josef
Re: [Numpy-discussion] Ignorance question
On Sun, Nov 1, 2009 at 9:58 PM, David Goldsmith wrote:
> I Googled scipy brownian and the top hit was the doc for numpy.random.wald,
> but said doc has a "tone" that suggests there are more "sophisticated" ways
> to generate a random Brownian signal? Or is wald indeed SotA? Thanks!
>
> DG

Do you mean generating a random sample of a Brownian motion? The standard
approach, I have seen, is just cumsum of random normals, with time steps
depending on the usage, e.g.

http://groups.google.com/group/sympy/browse_thread/thread/65bf82164cae83be?pli=1

However, I never really checked the details of how they generate Brownian
Motions or Brownian Bridges in larger Monte Carlo studies.

Josef
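The cumsum-of-normals construction Josef mentions fits in a few lines; brownian_path is just an illustrative name, not a numpy or scipy function:

```python
import numpy as np

def brownian_path(n_steps, dt=1.0, seed=None):
    """Sample a standard Brownian motion at n_steps equally spaced times.

    Increments over a step of length dt are independent N(0, dt), so the
    path is the cumulative sum of normal draws with std sqrt(dt).
    """
    rng = np.random.RandomState(seed)
    increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=n_steps)
    return np.cumsum(increments)

path = brownian_path(1000, dt=0.01, seed=0)
```

Varying dt (or passing an array of per-step scales) covers the "time steps depending on the usage" part.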
Re: [Numpy-discussion] odd behaviour with basic operations
On Sun, Nov 1, 2009 at 21:09, Benjamin Deschamps wrote:
> I am getting strange behaviour with the following code:
>
> Pd = ((numpy.sign(C_02) == 1) * Pd_pos) + ((numpy.sign(C_02) == -1) * Pd_neg)
> Ps = ((numpy.sign(C_02) == 1) * Ps_pos) + ((numpy.sign(C_02) == -1) * Ps_neg)
>
> where Pd, Ps, C_02, Pd_pos, Pd_neg, Ps_pos and Ps_neg are all Float32 numpy
> arrays of the same shape.
>
> The problem is that the first line evaluates correctly (Pd is what it should
> be), but the second line does not. However, if I run the same line of code
> manually in IDLE, then it evaluates correctly! In other words, Ps as
> returned by the function does not match the value that I should get and
> obtain when entering the exact same code in IDLE.

Please provide a self-contained example that demonstrates the problem.

Also, be aware that where C_02==0, sign(C_02)==0. You will need to consider
what should happen then. A better way to do what you want is to use where():

    Pd = numpy.where(C_02 > 0.0, Pd_pos, Pd_neg)
    Ps = numpy.where(C_02 > 0.0, Ps_pos, Ps_neg)

Change the > to >= if you want C_02==0 to use the pos values.

--
Robert Kern
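A small self-contained illustration of the where() suggestion; the array values here are made up for the demo:

```python
import numpy as np

C_02   = np.array([-1.5,  0.0,  2.0], dtype=np.float32)
Pd_pos = np.array([10.0, 11.0, 12.0], dtype=np.float32)
Pd_neg = np.array([20.0, 21.0, 22.0], dtype=np.float32)

# Using >= sends the C_02 == 0 elements to the "pos" values; with the
# sign()-multiplication idiom those elements would silently become 0.
Pd = np.where(C_02 >= 0.0, Pd_pos, Pd_neg)
```

Pd picks Pd_neg where C_02 is negative and Pd_pos everywhere else, including the zero element.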
[Numpy-discussion] odd behaviour with basic operations
I am getting strange behaviour with the following code:

Pd = ((numpy.sign(C_02) == 1) * Pd_pos) + ((numpy.sign(C_02) == -1) * Pd_neg)
Ps = ((numpy.sign(C_02) == 1) * Ps_pos) + ((numpy.sign(C_02) == -1) * Ps_neg)

where Pd, Ps, C_02, Pd_pos, Pd_neg, Ps_pos and Ps_neg are all Float32 numpy
arrays of the same shape.

The problem is that the first line evaluates correctly (Pd is what it should
be), but the second line does not. However, if I run the same line of code
manually in IDLE, then it evaluates correctly! In other words, Ps as returned
by the function does not match the value that I should get and obtain when
entering the exact same code in IDLE.

Basically, (numpy.sign(C_02) == 1) evaluates to either True or False, and
multiplying with another array will give either 0 (when false) or the value
of the array. The purpose of this code is to compute Pd and Ps without loops,
and to take the value from Pd_pos or Ps_pos when C_02 is positive, or from
Pd_neg and Ps_neg when C_02 is negative. Using loops, it looks like this:

for index in numpy.ndindex(ysize, xsize):
    if numpy.sign(C_02[index]) == 1:
        Pd[index] = Pd_pos[index]
        Ps[index] = Ps_pos[index]
    elif numpy.sign(C_02[index]) == -1:
        Pd[index] = Pd_neg[index]
        Ps[index] = Ps_neg[index]

which also works fine, but takes much longer.

Python 2.6.3, IDLE 2.6.1, Numpy 1.3.0, Snow Leopard; the script also uses
some GDAL, matplotlib and scipy functions...

Ideas?

Benjamin
[Numpy-discussion] Ignorance question
I Googled scipy brownian and the top hit was the doc for numpy.random.wald,
but said doc has a "tone" that suggests there are more "sophisticated" ways
to generate a random Brownian signal? Or is wald indeed SotA? Thanks!

DG
[Numpy-discussion] Random int64 and float64 numbers
Hi,

I'm trying to generate random 64-bit integer values for integers and floats
using Numpy, within the entire range of valid values for that type. To
generate random 32-bit floats, I can use:

np.random.uniform(low=np.finfo(np.float32).min,high=np.finfo(np.float32).max,size=10)

which gives for example

array([  1.47351436e+37,   9.93620693e+37,   2.22893053e+38,
        -3.33828977e+38,   1.08247781e+37,  -8.37481260e+37,
         2.64176554e+38,  -2.72207226e+37,   2.54790459e+38,
        -2.47883866e+38])

but if I try and use this for 64-bit numbers, i.e.

np.random.uniform(low=np.finfo(np.float64).min,high=np.finfo(np.float64).max,size=10)

I get

array([ Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf, Inf])

Similarly, for integers, I can successfully generate random 32-bit integers:

np.random.random_integers(np.iinfo(np.int32).min,high=np.iinfo(np.int32).max,size=10)

which gives

array([-1506183689,   662982379, -1616890435, -1519456789,  1489753527,
        -604311122,  2034533014,   449680073,  -444302414, -1924170329])

but am unsuccessful for 64-bit integers, i.e.

np.random.random_integers(np.iinfo(np.int64).min,high=np.iinfo(np.int64).max,size=10)

which produces the following error:

OverflowError: long int too large to convert to int

Is this expected behavior, or are these bugs?

Thanks for any help,

Thomas
Re: [Numpy-discussion] Single view on multiple arrays
Bill Blinn skrev:
> v = multiview((3, 4))
> #the idea of the following lines is that the 0th row of v is
> #a view on the first row of a. the same would hold true for
> #the 1st and 2nd row of v and the 0th rows of b and c, respectively
> v[0] = a[0]

This would not even work, because a[0] does not return a view array but a
scalar array, which is a different type of numpy object. To get a view, you
will need to:

    v = a[0:1] # view of element 0 in a

Also you cannot assign to v[0], as that would trigger a copy as well.

> v[1] = b[0]
> v[2] = c[0]

As I mentioned in the answer to Anne, it would take a completely different
array object. It would need to internally store an array with memory
addresses. I have not made up my mind if ndarray can be subclassed for this,
or if it takes a completely different object (e.g. similar to numpy.memmap).
What it would require is __setitem__ to store pointers and __getitem__ to
dereference (return an ndarray with values).

Good luck hacking, it is not even difficult, just tedious.

Sturla
Re: [Numpy-discussion] Single view on multiple arrays
Anne Archibald skrev:
> The short answer is, you can't.

Not really true. It is possible to create an array (sub)class that stores
memory addresses (pointers) instead of values. It is doable, but I am not
wasting my time implementing it.

Sturla
Re: [Numpy-discussion] Single view on multiple arrays
2009/11/1 Bill Blinn :
> What is the best way to create a view that is composed of sections of many
> different arrays?

The short answer is, you can't. Numpy arrays must be located in contiguous
blocks of memory, and the elements along any dimension must be equally
spaced. A view is simply another array that references (some of) the same
underlying memory, possibly with different strides.

> For example, imagine I had
> a = np.array(range(0, 12)).reshape(3, 4)
> b = np.array(range(12, 24)).reshape(3, 4)
> c = np.array(range(24, 36)).reshape(3, 4)
>
> v = multiview((3, 4))
> #the idea of the following lines is that the 0th row of v is
> #a view on the first row of a. the same would hold true for
> #the 1st and 2nd row of v and the 0th rows of b and c, respectively
> v[0] = a[0]
> v[1] = b[0]
> v[2] = c[0]
>
> #change the underlying arrays
> a[0, 0] = 50
> b[0, 0] = 51
> c[0, 0] = 52
>
> #I would want all these assertions to pass because the view
> #refers to the rows of a, b and c
> assert v[0, 0] == 50
> assert v[1, 0] == 51
> assert v[2, 0] == 52
>
> Is there any way to do this?

If you need to be able to do this, you're going to have to rearrange your
code somewhat, so that your original arrays are views of parts of an
initial array.

It's worth noting that if what you're worried about is the time it takes to
copy data, you might well be surprised at how cheap data copying and memory
allocation really are. Given that numpy is written in python, only for
really enormous arrays will copying data be expensive, and allocating
memory is really a very cheap operation (modern malloc()s average something
like a few cycles). What's more, since modern CPUs are so heavily
cache-bound, using strided views can be quite slow, since you end up
loading whole 64-byte cache lines for each 8-byte double you need.

In short, you probably need to rethink your design, but while you're doing
it, don't worry about copying data.

Anne
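Anne's rearrangement - allocate the combined array first and make the per-array data views into it - can be sketched as follows. The names mirror Bill's example, but note this only covers the rows that are meant to be shared:

```python
import numpy as np

# Allocate the combined array up front...
v = np.empty((3, 4))

# ...then take a, b, c as row views into it (basic slices share memory).
a, b, c = v[0:1], v[1:2], v[2:3]

# Writes through the views are visible in v, which is what Bill asked for.
a[0, 0] = 50
b[0, 0] = 51
c[0, 0] = 52
```

After these assignments, v[0, 0], v[1, 0] and v[2, 0] hold 50, 51 and 52 without any copying.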
[Numpy-discussion] Single view on multiple arrays
What is the best way to create a view that is composed of sections of many
different arrays? For example, imagine I had

a = np.array(range(0, 12)).reshape(3, 4)
b = np.array(range(12, 24)).reshape(3, 4)
c = np.array(range(24, 36)).reshape(3, 4)

v = multiview((3, 4))
#the idea of the following lines is that the 0th row of v is
#a view on the first row of a. the same would hold true for
#the 1st and 2nd row of v and the 0th rows of b and c, respectively
v[0] = a[0]
v[1] = b[0]
v[2] = c[0]

#change the underlying arrays
a[0, 0] = 50
b[0, 0] = 51
c[0, 0] = 52

#I would want all these assertions to pass because the view
#refers to the rows of a, b and c
assert v[0, 0] == 50
assert v[1, 0] == 51
assert v[2, 0] == 52

Is there any way to do this?

Thanks,

bill
[Numpy-discussion] November 6 EPD Webinar: How do I... use Envisage for GUIs?
Friday, November 6: How do I... use Envisage for GUIs?

Envisage is a Python-based framework for building extensible applications.
The Envisage Core and corresponding Envisage Plugins are components of the
Enthought Tool Suite. We've found that Envisage grants us a degree of
immediate functionality in our custom applications and have come to rely on
the framework in much of our development.

For November's EPD webinar, Corran Webster will show how you can hook
together existing Envisage plugins to quickly create a new GUI. We'll also
look at how you can easily turn an existing Traits UI interface into an
Envisage plugin.

New: Linux-ready webinars! In order to better serve the Linux-users among
our subscribers, we've decided to begin hosting our EPD webinars on WebEx
instead of GoToMeeting. This means that our original limit of 35 attendees
will be scaled back to 30. As usual, EPD subscribers at a Basic level or
above will be guaranteed seats for the event while the general public may
add their name to the wait list.

EPD Webinar: How do I... use Envisage for GUIs?
Friday, November 6, 1pm CDT/6pm UTC

We look forward to seeing you Friday! As always, feel free to contact us
with questions, concerns, or suggestions for future webinar topics.

Thanks,
The Enthought Team