At 08:20 AM 8/23/2007, Kent Johnson wrote:
>Dick Moores wrote:
>>At 07:34 PM 8/22/2007, Kent Johnson wrote:
>>>FWIW here is my fastest solution:
>>>
>>>01 from itertools import chain
>>>02 def compute():
>>>03     str_ = str; int_ = int; slice_ = slice(None, None, -1)
>>>04     for x in chain(xrange(1, 1000001, 10), xrange(2, 1000001, 10),
>>>05             xrange(3, 1000001, 10), xrange(4, 1000001, 10), xrange(5, 1000001, 10),
>>>06             xrange(6, 1000001, 10), xrange(7, 1000001, 10), xrange(8, 1000001, 10),
>>>07             xrange(9, 1000001, 10)):
>>>08         rev = int_(str_(x)[slice_])
>>>09         if rev >= x: continue
>>>10         if not x % rev:
>>>11             print x,
>>>12 compute()
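For readers following along, here is the same idea unwrapped from the mail 
quoting and display line numbers, collecting results in a list instead of 
printing them. This is a sketch, not Kent's exact code: range stands in for 
xrange, so it also runs on modern Python.

```python
from itertools import chain

def compute(end=1000001):
    # Cache the global names as locals, and build the reversing slice once.
    str_ = str
    int_ = int
    slice_ = slice(None, None, -1)
    found = []
    # Nine ranges, one per final digit 1-9; multiples of 10 are skipped.
    for x in chain(*(range(d, end, 10) for d in range(1, 10))):
        rev = int_(str_(x)[slice_])   # x with its digits reversed
        if rev >= x:
            continue
        if not x % rev:               # rev divides x evenly
            found.append(x)
    return found

# compute(10000) finds the four-digit cases, e.g. 8712 (= 4 * 2178).
```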
>
>>And at first I thought your use of str_, int_, and slice_ was 
>>pretty weird; instead of your line 08, why not just use the straightforward
>>rev = int(str(x)[::-1])
>>but upon testing I see they all make their respective 
>>contributions to speed.
>>But I don't understand why. Why is your line 08 faster than "rev = 
>>int(str(x)[::-1])"?
>
>Two reasons. First, looking up a name that is local to a function is 
>faster than looking up a global name. To find the value of 'str', 
>the interpreter has to look at the module namespace, then the 
>built-in namespace. These are each dictionary lookups. Local values 
>are stored in an array and accessed by offset so it is much faster.

Wow, what DON'T you understand?

Please explain "accessed by offset".
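One way to see the two lookup paths is with the dis module (a sketch in 
modern Python; the exact opcode names vary between versions). A cached local 
name is fetched with a LOAD_FAST-style opcode, which is an index into the 
frame's local-variable array, while the bare builtin needs LOAD_GLOBAL, 
which probes the module and builtin dictionaries by name:

```python
import dis

def uses_global(x):
    return str(x)       # 'str' found via the module, then builtins dict

def uses_local(x):
    str_ = str          # look 'str' up once, bind it to a local slot
    return str_(x)      # now fetched by array offset, not by name

def opnames(func):
    return {ins.opname for ins in dis.get_instructions(func)}

# The bare builtin compiles to a LOAD_GLOBAL (dict probes at run time) ...
assert "LOAD_GLOBAL" in opnames(uses_global)
# ... while the cached alias is fetched with a LOAD_FAST-style opcode.
assert any(name.startswith("LOAD_FAST") for name in opnames(uses_local))
```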

>Second, the slice notation [::-1] actually creates a slice object 
>and passes that to the __getitem__() method of the object being 
>sliced. The slice is created each time through the loop. My code 
>explicitly creates and caches the slice object so it can be reused each time.
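A small sketch makes the mechanism visible: [::-1] and the prebuilt slice 
object are interchangeable, and __getitem__() really does receive a slice 
object. Here a throwaway Spy class (my name for it) just hands back whatever 
it is given:

```python
class Spy(object):
    def __getitem__(self, item):
        # Return whatever the subscript machinery passed in.
        return item

# The [::-1] notation builds a fresh slice object on every use ...
assert Spy()[::-1] == slice(None, None, -1)

# ... so building it once and reusing it does the same slicing work
# without reconstructing the object each time through a loop.
slice_ = slice(None, None, -1)
assert "8712"[slice_] == "8712"[::-1] == "2178"
```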
>>BTW I also thought that
>>for x in chain(xrange(1, 1000001, 10), xrange(2, 1000001, 10),
>>xrange(3, 1000001, 10), xrange(4, 1000001, 10), xrange(5, 1000001, 10),
>>xrange(6, 1000001, 10), xrange(7, 1000001, 10), xrange(8, 1000001, 10),
>>xrange(9, 1000001, 10)):
>>      pass
>>MUST take longer than
>>for x in xrange(1000000):
>>      pass
>>because there appears to be so much more going on, but their 
>>timings are essentially identical.
>
>Well for one thing it is looping 9/10 of the number of iterations.
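Right, and a quick count confirms it: with a small end, the chained ranges 
cover exactly the nine tenths of the numbers that don't end in 0 (a sketch 
using Python 3's range, which plays xrange's role):

```python
from itertools import chain

end = 101
chained = list(chain(*(range(d, end, 10) for d in range(1, 10))))

# Nine ranges of ten numbers each: 90 iterations versus range(100)'s 100.
assert len(chained) == 90
assert len(range(100)) == 100

# And they are exactly the numbers below 100 that don't end in 0.
assert sorted(chained) == [n for n in range(1, end) if n % 10]
```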

This time I added "xrange(10, end, 10)" and did the timing by filling in 
the template string from the timeit module:

template = """
def inner(_it, _timer):
    _t0 = _timer()
    from itertools import chain
    end = 1000000
    for _i in _it:
        for x in chain(xrange(1, end, 10), xrange(2, end, 10),
                xrange(3, end, 10), xrange(4, end, 10), xrange(5, end, 10),
                xrange(6, end, 10), xrange(7, end, 10), xrange(8, end, 10),
                xrange(9, end, 10), xrange(10, end, 10)):
            pass
    _t1 = _timer()
    return _t1 - _t0
"""
This consistently comes in close to 71 msec per loop.


template = """
def inner(_it, _timer):
    _t0 = _timer()
    end = 1000000
    for _i in _it:
        for x in xrange(end):
            pass
    _t1 = _timer()
    return _t1 - _t0
"""
This consistently comes in close to 84 msec per loop.

So they're not the same, and yours is the faster one! Because 
itertools.chain() is written in C, I suppose.
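For the record, the same comparison can be scripted with timeit.timeit() 
instead of hand-editing the module's template. A sketch in Python 3, where 
range plays xrange's role; the absolute numbers depend entirely on the 
machine:

```python
import timeit

setup = "from itertools import chain; end = 1000000"

chained = """\
for x in chain(*(range(d, end, 10) for d in range(1, 11))):
    pass
"""
plain = """\
for x in range(end):
    pass
"""

n = 3
t_chain = timeit.timeit(chained, setup=setup, number=n)
t_plain = timeit.timeit(plain, setup=setup, number=n)

# Per-loop times in msec, comparable to the template's output.
print("chain():       %.1f msec" % (t_chain / n * 1000))
print("plain range(): %.1f msec" % (t_plain / n * 1000))
```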

>  Also, chain() doesn't have to do much, it just forwards the values 
> returned by the underlying iterators to the caller, and all the 
> itertools methods are written in C to be fast. If you read 
> comp.lang.python you will often see creative uses of itertools in 
> optimizations.

I just did a search on "itertools" in the comp.lang.python group at 
Google Groups, and got 1,940 hits. <http://tinyurl.com/3a2abz>. That 
should keep me busy!

Dick



======================================
                       Bagdad Weather
<http://weather.yahoo.com/forecast/IZXX0008_f.html>  

_______________________________________________
Tutor maillist  -  Tutor@python.org
http://mail.python.org/mailman/listinfo/tutor
