On 3/7/07, Daniel Mahler <[EMAIL PROTECTED]> wrote:

My problem is not space, but time.
I am creating a small array over and over,
and this is turning out to be a bottleneck.
My experiments suggest that the problem is the allocation,
not the random number generation.
Allocating all the arrays as one (n+1)-dimensional array and grabbing rows from it
is faster than allocating the small arrays individually.
I am iterating too many times to allocate everything at once, though.
I can just do a nested loop
where I create manageably large arrays in the outer loop
and grab the rows in the inner one,
but I wanted something cleaner.
Besides, I thought avoiding allocation altogether would be even faster.
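
For concreteness, here is a minimal sketch of the nested-loop batching
described above. The array size, batch size, and the per-row work are
all assumptions for illustration, not taken from the original post:

    import numpy as np

    def batched_rows(total, n, batch=1000):
        # yield `total` length-`n` normal arrays, allocating one
        # large block per outer iteration instead of one small
        # array per inner iteration
        for start in range(0, total, batch):
            count = min(batch, total - start)
            block = np.random.normal(0, 1, (count, n))
            for row in block:
                # `row` is a view into `block`; no new allocation here
                yield row

    acc = 0.0
    for row in batched_rows(total=10000, n=5):
        acc += row.sum()   # stand-in for the real per-array work
    print(acc)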


The slowdown is probably related to this, from a previous thread:

In [46]: def test1() :
  ....:     x = normal(0,1,1000)
  ....:

In [47]: def test2() :
  ....:     for i in range(1000) :
  ....:         x = normal(0,1)

In [50]: t = timeit.Timer('test1()', "from __main__ import test1, normal")

In [51]: t.timeit(100)
Out[51]: 0.022681951522827148

In [52]: t = timeit.Timer('test2()', "from __main__ import test2, normal")

In [53]: t.timeit(100)
Out[53]: 4.3481810092926025

Robert thought this might relate to Travis' changes adding broadcasting to
the random number generator. Whatever the cause, the numbers above show that
generating random numbers one small array at a time carries a very high
per-call overhead: roughly a factor of 200 here.
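
For anyone who wants to reproduce the comparison outside an IPython
session, a standalone equivalent of the two tests above might look like
this (a sketch; absolute times will of course vary by machine):

    import timeit

    setup = "from numpy.random import normal"

    # one call generating 1000 samples at once
    t1 = timeit.Timer("x = normal(0, 1, 1000)", setup)
    # 1000 calls, each generating a single sample
    t2 = timeit.Timer("for i in range(1000): x = normal(0, 1)", setup)

    print("one call, 1000 samples: %.4f s" % t1.timeit(100))
    print("1000 calls, 1 sample:   %.4f s" % t2.timeit(100))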

Chuck