Dear All,

While optimizing some Sage code, I found the same cause of a silly
slowdown in multiple places: namely, using "for i in
range(huge_number):". Instead of manually changing each of these
occurrences, I thought it easier to write a bash one-liner that does
it for me.
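For context, a minimal illustration (not Sage code) of why this matters: in Python 2, range(n) allocates a full n-element list up front, while xrange(n) yields values lazily; Python 3's range already behaves like xrange.

```python
import sys

# Python 2: range(n) builds an n-element list; xrange(n) is a lazy object.
# Python 3: range(n) is already lazy, and xrange no longer exists.
if sys.version_info[0] == 2:
    lazy = xrange(10**6)
else:
    lazy = range(10**6)

# The lazy object stays tiny no matter how large n is.
print(sys.getsizeof(lazy))
```

With a huge_number on the order of 10**6 or more, the eager list is the whole cost of the loop setup; the lazy object costs a few dozen bytes.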

maarten-derickxs-macbook-pro:sage maarten$ find ./ -type f -name '*.pyx' -print0 | xargs -0 sed -e 's/for \(.*\) in range(/for \1 in xrange(/' -i ""
maarten-derickxs-macbook-pro:sage maarten$ find ./ -type f -name '*.py' -print0 | xargs -0 sed -e 's/for \(.*\) in range(/for \1 in xrange(/' -i ""

I have two questions.

1. Are such computer-generated changes allowable?

2. Could we maybe add some sort of "check for common bad practices" to
the doctest or coverage framework? I think something like pylint with
some Sage-specific plugins could be very useful in the review
process.
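As a rough sketch of what such a check could look like (using only the standard-library ast module for illustration; a real pylint plugin would subclass pylint's BaseChecker instead, and the bad-practice framing assumes Python 2 semantics where range() in a loop builds a whole list):

```python
import ast

def find_range_loops(source):
    """Return line numbers of 'for ... in range(...)' loops in source."""
    hits = []
    for node in ast.walk(ast.parse(source)):
        # Flag for-loops whose iterable is a direct call to range().
        if (isinstance(node, ast.For)
                and isinstance(node.iter, ast.Call)
                and isinstance(node.iter.func, ast.Name)
                and node.iter.func.id == 'range'):
            hits.append(node.lineno)
    return hits

code = "for i in range(10**6):\n    pass\n"
print(find_range_loops(code))
```

Running this over a file during review would point straight at the loops worth converting, instead of relying on a blind text substitution.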

-- 
To post to this group, send an email to [email protected]
To unsubscribe from this group, send an email to 
[email protected]
For more options, visit this group at http://groups.google.com/group/sage-devel
URL: http://www.sagemath.org