It's not my code, but the reason to #define it is documentation. It is
not clear what is happening if a calculation multiplies something by
1000, but it is clear what is happening if a calculation multiplies
something by MILLISECONDS_PER_SECOND.
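
For example (wait_time here is just a made-up variable for illustration):

/* Opaque: is 1000 a unit conversion, a timeout, a buffer size? */
wait_time = 5 * 1000;

/* Self-documenting: clearly converting 5 seconds to milliseconds */
wait_time = 5 * MILLISECONDS_PER_SECOND;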


[email protected] wrote:
> 
> why do you need to define that anyway? it's not like it's going to 
> change...
> 
> ----- Original Message -----
> From: Jim Dougherty
> To: [email protected]
> Sent: Wednesday, February 11, 2009 3:15 PM
> Subject: Re: [c-prog] Re: Integer constant suffix
> 
> [email protected] wrote:
>  >
>  > --- In [email protected], "jimdougherty" <j_doughe...@.> wrote:
>  > >
>  > > --- In [email protected], Brett McCoy <idragosani@> wrote:
>  > > >
>  > > > On Wed, Feb 11, 2009 at 11:31 AM, Jim Dougherty <j_dougherty@> wrote:
>  > > >
>  > > > I was just browsing some C-code that I did not write and I saw a
>  > > > line that said:
>  > > > >
>  > > > > #define MILISECONDS_PER_SECOND 1000L
>  > > > >
>  > > > > If I wrote the same line I would have written it without the 'L'.
>  > > > > What is the point of the 'L'?
>  > > >
>  > > > It makes it a long integer.
>  > > >
>  > >
>  > > I still don't get it. Can you give me an example where it would be
>  > > wrong to use 1000 rather than 1000L?
>  >
>  > It would be useful to do this in an environment where sizeof(long) >
>  > sizeof(int), e.g. sizeof(long)=4 (32 bits) and sizeof(int)=2 (16 bits).
>  > Then:
>  >
>  > int n, x, y;
>  >
>  > y = (n * MILISECONDS_PER_SECOND) / x;
>  >
>  > Say n = 4567, x = 1500 (note that n itself still fits in a 16-bit
>  > int). Then 4567 * 1000 = 4,567,000 would overflow in the 16-bit
>  > arithmetic used if MILISECONDS_PER_SECOND were specified without the
>  > L. With the L, 32-bit arithmetic would be used and the correct
>  > result (3044) would be obtained.
>  >
>  > Of course you don't need the L if you cast:
>  >
>  > y = ((long)n * MILISECONDS_PER_SECOND) / x;
>  >
>  > but I suppose including the L acts as a safety net.
>  >
>  > In an environment where sizeof(long) == sizeof(int), which is quite
>  > common (but not the one I code for), I think you just gain a bit of
>  > portability.
>  >
>  > John
>  >
> 
> Thanks, this helps. I do embedded work and our longs are 32 bits and our
> ints are 16. The idea seems to be that someone was worried about
> numeric overflow. If that is the case I do not think I like the
> approach that was taken. #defining MILLISECONDS_PER_SECOND as a long
> will cause all calculations that use it to do 32-bit math, which is
> substantially slower on our system than 16-bit math. It seems to me
> that a better approach would be to #define MILLISECONDS_PER_SECOND as
> 1000 (no L) and then to also use typecasting locally in those
> calculations that need to worry about overflow. Along the same lines,
> we certainly would not want to make all of our #define'd integer
> constants long.
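
To make the approach from my earlier message concrete, here is a minimal
sketch (the values are made up, and it assumes 16-bit int and 32-bit long
as on our target):

#include <stdio.h>

#define MILLISECONDS_PER_SECOND 1000   /* no L suffix: stays a plain int */

int main(void)
{
    int n = 4567;    /* fits in a 16-bit int */
    int x = 1500;
    long y;

    /* On a 16-bit-int target this would overflow: 4567 * 1000 > 32767 */
    /* y = (n * MILLISECONDS_PER_SECOND) / x; */

    /* Cast locally instead, so only this calculation pays for 32-bit math */
    y = ((long)n * MILLISECONDS_PER_SECOND) / x;   /* 4567000 / 1500 = 3044 */

    printf("y = %ld\n", y);
    return 0;
}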