"Lorn" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
> I'm trying to work with a dataset whose primary numbers are saved as
> floats in string format. I'd like to handle them as integers with an
> implied decimal point at the hundredths place. The problem is that the
> precision varies: some numbers have 4 decimal places while others have
> 2, etc. (10.7435 vs 1074.35)... all numbers are of fixed length.
> I have some ideas of how to do this, but I'm wondering if there's a
> better way. My current approach is to brute-force search for the
> decimal point by slicing and then cut off the extraneous digits;
> however, it would be nice to avoid a pile of if/thens.
>
> Does anyone have any ideas on how to do this more efficiently?
If you can live with a small possibility of error (binary floats can't
represent most decimal fractions exactly), then:

    int(float(numIn) * 100.0)

should do the trick.
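For example, at the interpreter (0.29 is a typical value where the
binary representation error bites):

    >>> int(float("0.29") * 100.0)
    28
    >>> int(round(float("0.29") * 100.0))
    29

Rounding instead of truncating repairs the off-by-one, but it's still
a round trip through float; the string splitting below avoids that
entirely.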
If you can't, and the numbers are guaranteed to have a decimal point,
this (untested) could do what you want:
    aList = numIn.split(".")
    # whole part scaled by 100, plus the first two fractional digits
    result = int(aList[0]) * 100 + int(aList[1][:2])
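Note that this gets the sign wrong for negative inputs (int("-1") * 100
+ int("50") is -50, not -150). If the data can be negative, or the
fraction can run shorter than two digits, here's a fuller (equally
untested) sketch; the name implied_hundredths is only illustrative:

    def implied_hundredths(numIn):
        # "10.7435" -> ("10", "7435")
        whole, frac = numIn.split(".")
        # pad/truncate the fraction to exactly two digits: "7" -> "70"
        cents = int(frac.ljust(2, "0")[:2])
        # apply the whole part's sign to the cents ("-1.50" -> -150)
        sign = -1 if whole.startswith("-") else 1
        return int(whole) * 100 + sign * cents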
HTH
John Roth
>
> Many Thanks,
> Lorn
>