"John Matthews" <jm5...@...> wrote:
> Jim Dougherty <j_dougherty@> wrote:
> > [email protected] wrote:
> > > "jimdougherty" <j_dougherty@> wrote:
> > > > I still don't get it. Can you give me an example where
> > > > it would be wrong to use 1000 rather than 1000L ?
> > > 
> > > It would be useful to do this in an environment where
> > > sizeof(long) > sizeof(int) eg. sizeof(long)=4 (32 bits)
> > > and sizeof(int)=2 (16 bits).

There are implementations with 32-bit longs where sizeof(long)
is 1 (on such targets a char is also 32 bits wide). As with
coding itself, it is better to talk in terms of the ranges of
the types than of their sizes in bytes.

> > > Then:
> > > 
> > > int n, x, y;
> > > 
> > > y = (n * MILLISECONDS_PER_SECOND) / x;
> > > 
> > > Say n = 4567, x = 1500. Then 4567 * 1000 would overflow
> > > in the 16-bit arithmetic used if MILLISECONDS_PER_SECOND
> > > was specified without the L. With the L, 32-bit arithmetic
> > > would be used and the 'correct' result would be obtained.
> > > 
> > > Of course you don't need the L if you cast:
> > > 
> > > y = ((long)n * MILLISECONDS_PER_SECOND) / x;
> > > 
> > > but I suppose including the L acts as a safety net.
> > > 
> > > In an environment where sizeof(long) == sizeof(int), which
> > > is quite common (but not the one I code for), I think you
> > > just gain a bit of portability.

Accuracy tends to be of more concern than portability.
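
To make the quoted example concrete, here is a minimal sketch,
assuming a target with 16-bit int and 32-bit long (the macro
names MSEC_PER_SEC and MSEC_PER_SEC_L are invented for the
sketch; the values are taken from the example above):

#include <stdio.h>

#define MSEC_PER_SEC    1000    /* plain int constant               */
#define MSEC_PER_SEC_L  1000L   /* long constant, with the L suffix */

int main(void)
{
    int n = 4567, x = 1500;

    /* With 16-bit int, 4567 * 1000 overflows before the divide
     * (signed overflow is undefined; in practice it usually wraps). */
    int y_bad = (n * MSEC_PER_SEC) / x;

    /* The L suffix converts the int operand to long, so the whole
     * expression is evaluated in 32 bits and the result is 3044.    */
    int y_good = (int)((n * MSEC_PER_SEC_L) / x);

    printf("%d %d\n", y_bad, y_good);
    return 0;
}

On a host where int is already 32 bits both values print as 3044;
the difference only shows up on the 16-bit targets being discussed.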

> > Thanks, this helps.  I do embedded work and our longs are 32
> > bits and our ints are 16.
> 
> Ditto,
> 
> > The idea seems to be that someone was worried about
> > numeric overflow. If that is the case, I do not think I like
> > the approach that was taken. #defining
> > MILLISECONDS_PER_SECOND as a long will cause all calculations
> > that use it to do 32-bit math, which is substantially slower
> > on our system than 16-bit math.

Which do you want, slow accurate values or fast inaccurate ones?
How often do you actually do this calculation?
Have you done any profiling?

Note that 16-bit CPUs often store the result of a multiply
as a 32-bit value anyway. On the 68000 the '16-bit' division
instruction even requires a 32-bit dividend. So you would
lose nothing by converting to long in the sample case above!

> > It seems to me that a better
> > approach would be to #define MILLISECONDS_PER_SECOND as 
> > 1000 (no L) and then to also use typecasting locally in
> > those calculations that need to worry about overflow.
> 
> I agree.

I don't. Casts cause as many problems as they solve.
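
For comparison, a minimal sketch of the locally-cast variant
being discussed, again assuming 16-bit int and 32-bit long (the
helper name seconds_to_ticks is made up for the sketch):

#define MILLISECONDS_PER_SECOND 1000   /* plain int, no L suffix */

/* The overflow is handled at the point of use by casting one
 * operand to long. Forget the cast at any one call site and the
 * 16-bit overflow quietly comes back, which is the risk with
 * relying on casts.                                              */
long seconds_to_ticks(int secs, int ms_per_tick)
{
    return ((long)secs * MILLISECONDS_PER_SECOND) / ms_per_tick;
}

The trade-off is the one already described: the macro stays cheap
for purely 16-bit uses, but correctness now depends on remembering
the cast in every expression that can overflow.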

-- 
Peter
