--- In [email protected], "jimdougherty" <j_doughe...@...> wrote:
>
> --- In [email protected], Brett McCoy <idragosani@> wrote:
> >
> > On Wed, Feb 11, 2009 at 11:31 AM, Jim Dougherty <j_dougherty@> wrote:
> > 
> > > I was just browsing some C-code that I did not write and I saw a
> > > line that said:
> > >
> > > #define  MILISECONDS_PER_SECOND  1000L
> > >
> > > If I wrote the same line I would have written it without the 'L'.
> > > What is the point of the 'L'?
> > 
> > It makes it a long integer.
> > 
> 
> I still don't get it.  Can you give me an example where it would be
> wrong to use 1000 rather than 1000L ?

It would be useful in an environment where sizeof(long) >
sizeof(int), e.g. sizeof(long) == 4 (32 bits) and sizeof(int) == 2
(16 bits). Then:

  int n, x, y;

  y = (n * MILISECONDS_PER_SECOND) / x;

Say n = 30000, x = 1500. Then 30000 * 1000 would overflow in the
16-bit arithmetic used if MILISECONDS_PER_SECOND were specified
without the L. With the L, 32-bit arithmetic would be used and the
correct result (20000) would be obtained.
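
For what it's worth, you can watch the same thing happen on a typical
modern platform where int is 32 bits and long long is 64 -- the same
principle, just one size up. The macro names and numbers below are
mine, purely for illustration:

  #include <stdio.h>

  #define MS_PER_SEC     1000     /* plain int constant      */
  #define MS_PER_SEC_LL  1000LL   /* long long constant      */

  int main(void)
  {
      int n = 3000000, x = 1500;

      /* 3000000 * 1000 = 3e9 overflows a 32-bit int (undefined
         behaviour; on common hardware it typically wraps) */
      long long bad  = (n * MS_PER_SEC) / x;

      /* here n is promoted to long long, so the multiply is
         done in 64 bits and the result is exact */
      long long good = (n * MS_PER_SEC_LL) / x;

      printf("without the suffix: %lld\n", bad);
      printf("with the suffix:    %lld\n", good);  /* 2000000 */
      return 0;
  }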

Of course you don't need the L if you cast:

  y = ((long)n * MILISECONDS_PER_SECOND) / x;

but I suppose including the L acts as a safety net.
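
One caveat worth spelling out: the cast has to land on an operand,
not on the result of the multiplication. Supposing the macro were
defined as plain 1000, these two lines are not equivalent:

  y = (long)(n * 1000) / x;   /* too late: the multiply has already
                                 overflowed in int arithmetic        */
  y = ((long)n * 1000) / x;   /* right: n is converted before the
                                 multiply, which is then done in long */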

In an environment where sizeof(long) == sizeof(int), which is quite
common (but not the one I code for), I think you just gain a bit of
portability.
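
And if you want to know which kind of environment you are compiling
for, the preprocessor can tell you; a minimal sketch using the
standard <limits.h>:

  #include <limits.h>

  #if LONG_MAX > INT_MAX
    /* long really is wider than int: the L suffix buys extra range */
  #else
    /* long is no wider than int here: the L suffix costs nothing,
       and keeps the code correct where long is wider */
  #endif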

John
