--- In c-prog@yahoogroups.com, Jim Dougherty <j_doughe...@...> wrote:
>
> 
> 
> John wrote:
> > 
> > --- In c-prog@yahoogroups.com, "jimdougherty" <j_dougherty@> wrote:
> >  >
> >  > --- In c-prog@yahoogroups.com, Brett McCoy <idragosani@> wrote:
> >  > >
> >  > > On Wed, Feb 11, 2009 at 11:31 AM, Jim Dougherty <j_dougherty@> wrote:
> >  > >
> >  > > > I was just browsing some C code that I did not write and I saw a
> >  > > > line that said:
> >  > > >
> >  > > > #define MILISECONDS_PER_SECOND 1000L
> >  > > >
> >  > > > If I wrote the same line I would have written it without the 'L'.
> >  > > > What is the point of the 'L'?
> >  > >
> >  > > It makes it a long integer.
> >  > >
> >  >
> >  > I still don't get it. Can you give me an example where it would be
> >  > wrong to use 1000 rather than 1000L?
> > 
> > It would be useful in an environment where sizeof(long) >
> > sizeof(int), e.g. sizeof(long) == 4 (32 bits) and sizeof(int) == 2
> > (16 bits). Then:
> > 
> > int n, x, y;
> > 
> > y = (n * MILISECONDS_PER_SECOND) / x;
> > 
> > Say n = 30000, x = 1500 (both fit in a 16-bit int). Then 30000 * 1000
> > would overflow the 16-bit arithmetic used if MILISECONDS_PER_SECOND
> > were specified without the L. With the L, 32-bit arithmetic would be
> > used and the correct result (20000) would be obtained.
> > 
> > Of course you don't need the L if you cast:
> > 
> > y = ((long)n * MILISECONDS_PER_SECOND) / x;
> > 
> > but I suppose including the L acts as a safety net.
> > 
> > In an environment where sizeof(long) == sizeof(int), which is quite
> > common (but not the one I code for), I think you just gain a bit of
> > portability.
> > 
> > John
> > 
> 
> Thanks, this helps.  I do embedded work and our longs are 32 bits and
> our ints are 16.

Ditto,
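
For anyone following along, here is John's scenario in a minimal,
self-contained form, assuming a target where int is 16 bits and long
is 32 bits (on a 32-bit-int host it compiles and runs the same, just
without any drama):

    #include <stdio.h>

    #define MILISECONDS_PER_SECOND 1000L

    int main(void)
    {
        int n = 30000, x = 1500;
        long y;

        /* With the L, n is converted to long and the multiply is done
           in 32-bit arithmetic: 30000 * 1000L == 30000000L, y == 20000. */
        y = (n * MILISECONDS_PER_SECOND) / x;

        /* Had the macro been plain 1000, n * 1000 would be a 16-bit int
           multiply on such a target; 30000000 does not fit in 16 bits,
           so the result would be garbage (signed overflow). */
        printf("y = %ld\n", y);
        return 0;
    }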

> The idea seems to be that someone was worried about
> numeric overflow.  If that is the case, I do not think I like the
> approach that was taken.  #defining MILISECONDS_PER_SECOND as a long
> causes every calculation that uses it to be done in 32-bit math, which
> is substantially slower on our system than 16-bit math.  It seems to me
> that a better approach would be to #define MILISECONDS_PER_SECOND as
> 1000 (no L) and then cast locally in just those calculations that need
> to worry about overflow.

I agree.
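
Something along these lines, presumably (half_second_ms and
seconds_to_ms are made-up names, just for illustration):

    #define MILISECONDS_PER_SECOND 1000  /* plain int: everyday uses
                                            stay in fast 16-bit math */

    /* No overflow possible here, so this remains a cheap 16-bit
       calculation. */
    int half_second_ms(void)
    {
        return MILISECONDS_PER_SECOND / 2;
    }

    /* Only the calculation that can actually overflow pays for 32-bit
       math: the cast widens the multiply to long. */
    long seconds_to_ms(int seconds)
    {
        return (long)seconds * MILISECONDS_PER_SECOND;
    }

That way the cost of 32-bit arithmetic is visible at the point where it
is needed, instead of being hidden inside the macro.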
