David <[EMAIL PROTECTED]> wrote:
> Could this group shine some light on a debate we are having.
> Given the two sets up code, which both do the same thing:
>
>     CustomType* ct;
>
>     ct = (CustomType*) malloc(sizeof(CustomType));
>     memset(ct, 0, sizeof(CustomType));

If you want to write portable code, you shouldn't assume that
all-bits-zero represents 0.0 for floating-point members or a null
pointer for pointer members. On a style note, puritanical C
programmers shun casting the result of malloc: the cast is
unnecessary in C, and it can hide the diagnostic you'd otherwise get
for calling malloc without <stdlib.h> in scope.
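
For example, a minimal sketch (the member layout of CustomType here
is invented purely to illustrate the point):

  #include <stdlib.h>
  #include <string.h>

  /* Hypothetical layout, for illustration only. */
  typedef struct {
      int     count;
      double  scale;
      char   *name;
  } CustomType;

  int main(void)
  {
      CustomType *ct = malloc(sizeof *ct);   /* no cast needed in C */

      if (ct != NULL) {
          /* All-bits-zero is guaranteed to make count 0, but it is
             NOT guaranteed to make scale 0.0 or name a null pointer
             on every platform. */
          memset(ct, 0, sizeof *ct);
      }

      free(ct);
      return 0;
  }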

> or
>
>     CustomType* ct;
>     int sizeOfCustomType = sizeof(CustomType);
>     ct = (RingQueueType*) malloc(sizeOfCustomType);

Unless CustomType and RingQueueType are synonyms for void or for
each other, this assignment violates a constraint. [Most commonly
manifested as a warning or error about incompatible pointer types.]

Stylistically, sizeof yields a size_t value, so you should use
size_t rather than int.
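
i.e. something along the lines of (keeping your variable names):

  size_t sizeOfCustomType = sizeof(CustomType);
  CustomType *ct = malloc(sizeOfCustomType);    /* still no cast */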

>     memset(ct, 0, sizeOfCustomType);
>
> Which is more efficient in terms of speed and size?

The ONLY answer to that is 'try them both on your given compiler and
see'. The language definition does not specify how large the final
executable should be, nor how fast it must run. These are 'Quality
of Implementation' (QoI) issues, and will vary from compiler to
compiler.
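
[One practical way to do that, assuming a gcc-style toolchain:
compile each version with, say, 'gcc -O2 -S' and compare the
generated assembly, or simply measure the size and timing of the
resulting executables.]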

> My thought is that they both compile out to the same thing, while
> the second option gives you better insight into debugging (i.e.
> look at the sizeof value  [our embedded debugger can't evaluate
> methods (sizeof(CustomType)) ]).

Another alternative is...

  static const CustomType ctZero = { 0 };  /* every member zeroed */
  CustomType *ct = malloc(sizeof *ct);

  if (ct != NULL)
      *ct = ctZero;                        /* plain structure assignment */

Unlike memset, the { 0 } initialiser is guaranteed to give you a
genuine 0.0 for floating-point members and a genuine null pointer
for pointer members, whatever their bit representations happen to
be.

> The argument against is that the second option uses an extra int
> value, therefore increasing stack size.

Whether a variable is optimised out is highly dependent on your
compiler and its settings.

You seem to be focussing (unnecessarily) on a specific class of
architecture. That can lead you into beliefs which either don't
apply to other architectures or are flatly contradicted by them.
Such beliefs can come back to bite you!

> Some believe, which may be true depending on optimization settings,
> that the first is less efficient from a speed standpoint because it
> calls sizeof() an extra time.

That's unlikely, given that sizeof can be evaluated statically at
compile time. In other words, there should be no need to 'evaluate'
it at runtime.

[C99 introduced an exception to this with variable-length arrays
(VLAs). GNU C has had variable-length arrays as an extension for a
while too, though few people use them (deliberately).]
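
A minimal sketch of the distinction (the struct layout below is made
up purely for illustration):

  #include <stddef.h>
  #include <stdio.h>

  struct CustomType { int a; double b; };   /* made-up layout */

  /* sizeof an ordinary type is a compile-time constant expression,
     so it can even size an array at file scope. */
  static char buf[sizeof(struct CustomType)];

  /* sizeof a VLA is the C99 exception: it is evaluated at runtime. */
  static size_t vla_size(size_t n)
  {
      int vla[n];
      return sizeof vla;     /* computed at runtime as n * sizeof(int) */
  }

  int main(void)
  {
      printf("%zu %zu\n", sizeof buf, vla_size(4));
      return 0;
  }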

> Any thoughts?  Replace the sizeof() call with any frequently used
> call in a given method.  I.E. is it better to set a variable based
> on a single call and then re-use the variable, or make multiple
> calls to the same method, with the same parameters.
>
>     doSomething(customCall(constantVal));
>     doSomethingElse(customCall(constantVal));
>     doYetAThirdThing(customCall(constantVal));
>
> or
>
>     int customCallVal = customCall(constantVal);
>    
>     doSomething(customCallVal);
>     doSomethingElse(customCallVal);
>     doYetAThirdThing(customCallVal);

These could have different semantics depending on what 'customCall'
does, so I can't answer that. If customCall has side effects, or its
result can change between calls with the same argument, the two
versions simply aren't equivalent. If it's a pure function of
constantVal, caching the result in a local is the clearer choice and
will usually be at least as fast.
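
A sketch of why the answer depends on customCall (the body below is
invented purely to show a side effect):

  #include <stdio.h>

  /* Hypothetical stand-in for customCall: the static counter is a
     side effect, so repeated calls with the same argument return
     different values. */
  static int counter = 0;

  static int customCall(int x)
  {
      return x + counter++;
  }

  int main(void)
  {
      int a = customCall(5);   /* 5 */
      int b = customCall(5);   /* 6 */
      int c = customCall(5);   /* 7: same argument, different result */
      printf("%d %d %d\n", a, b, c);

      /* Caching changes the behaviour here, not just the performance. */
      int customCallVal = customCall(5);   /* 8 */
      printf("%d %d %d\n", customCallVal, customCallVal, customCallVal);

      return 0;
  }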

Your first priority is to make sure the code works. Not far behind is
making the code readable and easy to maintain. As for debugging, that
should be avoided (or reduced) as much as possible. That you're so
concerned with it suggests you're not thinking enough about the
design and implementation of your code and test suites.

Your concerns fall into the category of 'micro-optimisations'.
Unless you're writing some extremely time- and/or space-critical
code (in which case you'd probably be using assembler anyway), such
trivialities aren't worth wasting time on. If a given compiler
doesn't produce satisfactory executables, try another compiler.

Compilers tend to be much better at optimisation these days than
they used to be. Modern compilers also tend to come with a wealth of
optimisation options. If you want to tweak code, then write the code
as you want it written, and adjust the compiler flags in your make
or project files as appropriate for a given machine.
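
[With gcc, for example, that usually means picking between -O2 for
speed and -Os for size, rather than contorting the C source itself.]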

Modifying source code to suit a given compiler's code generation
will lock you into a target compiler and/or machine, and it often
makes your code run dramatically worse when you port it to other
implementations. You'll also end up having to revisit your code
every time you upgrade your compiler.

You should also realise that compiler writers and chip designers are
constantly working on optimisations on your behalf without you
needing to lift a finger. [Although you may have to shell out a few
clams...;-]

So why worry about it? Let the compiler writers sort out the issues;
their solutions will likely be better than anything you and I can
think of!

At the end of the day, programmers are generally busy enough as it is
without making even more work for themselves. ;-) If you want a
faster or smaller program, write a better algorithm.

--
Peter




