On Tue, 2007-01-16 at 12:28 -0800, Dick Moores wrote:
> So I go with working up "an algorithm for first converting n to an
> int (for example, multiplying the above n by 1000), converting to a
> string, putting the decimal point back in between indices 2 and 3,
> then using that string as n (thereby avoiding the use of quotes
> around n as the first argument)."
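If I'm reading that right, the detour would look roughly like this sketch
(the value 42.415 is made up for illustration, since the actual n isn't
shown in this message):

    from decimal import Decimal

    n = 42.415                      # made-up example value; the real n may differ

    # Scale up to an int, go through a string, then splice the decimal
    # point back in between indices 2 and 3.
    as_int = int(round(n * 1000))   # 42415
    s = str(as_int)                 # "42415"
    s = s[:2] + "." + s[2:]         # "42.415"
    d = Decimal(s)                  # Decimal("42.415"), no hand-typed quotes around n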
This seems like a lot of effort for not much reward. Where is n coming from? If you already have something that holds the required level of precision, why does it need to be transformed? Could it already be a string? If n is entered through raw_input, then you received it as a string.

I don't want to push you in the wrong direction, but it seems like there must be a better way.

-- 
Lloyd Kvam
Venix Corp

_______________________________________________
Tutor maillist  -  Tutor@python.org
http://mail.python.org/mailman/listinfo/tutor
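For comparison, here is a minimal sketch of the direct route suggested
above, assuming Python 2's raw_input and a hypothetical prompt string:

    from decimal import Decimal

    # raw_input already returns a string, so it can go straight to
    # Decimal with no scaling or index juggling.
    n = raw_input("Enter the number: ")   # e.g. the user types 42.415
    d = Decimal(n)                        # Decimal("42.415"), precision preserved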