Whenever you work with floating point, some loss of precision is to be expected -- a finite type cannot represent a number of infinite precision.
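For instance (a minimal C sketch; the printed digits are just enough to expose the stored value), even a constant as simple as 0.1 has no exact binary floating-point representation:

```c
#include <stdio.h>

int main(void) {
    double x = 0.1;        /* stores the nearest representable double, not 0.1 */
    printf("%.20f\n", x);  /* prints 0.10000000000000000555..., not 0.1 */
    return 0;
}
```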
A loss of precision should still trigger a warning. If I am using reals, then I obviously needed a certain level of precision; I don't want to lose that precision accidentally somewhere because the compiler decided it was not important enough to warn me about.
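As a sketch of the kind of warning I mean (using C and GCC/Clang's -Wfloat-conversion flag as an assumed toolchain; other compilers spell this differently), an implicit narrowing from double to float is exactly the sort of silent precision loss that ought to be diagnosed:

```c
#include <stdio.h>

int main(void) {
    double precise = 0.1234567890123456; /* uses most of a double's ~15-17 significant digits */
    float  narrow  = precise;            /* implicit double -> float narrowing: accepted
                                            silently by default; GCC/Clang diagnose it only
                                            with -Wfloat-conversion (implied by -Wconversion) */
    printf("as double: %.17g\n", precise);
    printf("as float : %.17g\n", (double)narrow);
    return 0;
}
```

Compiled with `gcc -Wfloat-conversion`, the assignment to `narrow` produces a conversion warning; without the flag, the precision is dropped without a word.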