Date: Tue, 4 Dec 2001 01:57:26 +0100 (CET)

   On  4 Dec, Sven Neumann wrote:

   > Using them for error reporting is definitely a bad idea. Using a
   > negative value to indicate that a value has not been set and needs to
   > be computed is IMO a reasonable usage.

   IMHO not, because you're abusing the real value for errors (and thus
   one variable for two purposes, which is a bad idea), and using signed
   integers drags down performance.

By how much?  If it can't be measured, it's probably not enough to be
worthwhile.  And if you use explicit error returns (which may require
an additional pointer to be passed in), you'll quickly eat up any
performance gain you might achieve.
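To make that trade-off concrete, here is a minimal sketch of the two
error-reporting styles being compared (hypothetical functions for
illustration, not code from GIMP):

```c
/* Style A: a negative return value doubles as the error indicator,
   so no extra parameter needs to be passed in. */
int lookup_sentinel(int key)
{
    if (key < 0)
        return -1;          /* error folded into the value itself */
    return key * 2;         /* stand-in for the real computation */
}

/* Style B: the value stays non-negative, but an extra out-parameter
   must be passed in (and written through) on every single call. */
unsigned lookup_explicit(int key, int *ok)
{
    if (key < 0) {
        *ok = 0;            /* explicit error flag */
        return 0;
    }
    *ok = 1;
    return (unsigned) key * 2;
}
```

The extra pointer in style B is exactly the overhead the paragraph
above refers to: another argument to push, and another store on every
call, error or not.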

Using signed integers also catches a particularly common error case
(subtracting a value from a smaller value) that otherwise has to be
checked for explicitly, which requires more instructions (and, even
worse on modern processors, branches).
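A small sketch of that error case (illustrative code, not from GIMP):

```c
/* With a signed width, the "subtracted from a smaller value" bug
   shows up as a negative result that a single comparison catches
   after the fact. */
int width_signed(int right, int left)
{
    int w = right - left;
    if (w < 0)              /* the easy check described above */
        w = 0;
    return w;
}

/* With unsigned operands, 3u - 5u wraps around to a huge positive
   value instead of going negative, so the check must happen
   *before* the subtraction -- an extra compare-and-branch. */
unsigned width_unsigned(unsigned right, unsigned left)
{
    if (right < left)
        return 0;
    return right - left;
}
```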

Anyway, integer arithmetic (additions and subtractions) is usually one
cycle (plus however long it takes to retrieve the operands and
instruction from memory, cache, whatever) on modern processors, so
it's hard for me to see how unsigned arithmetic is going to help.

   > If code makes assumptions about parameters (like for example
   > assuming that width is > 0), there has to be code like
   > g_return_if_fail() to check this assumption. Defining width as an
   > unsigned integer would also guarantee this, but it needlessly
   > introduces a possibility for a bug if width is ever used in a
   > subtraction.

   Not in the subtraction itself, only if the destination is also
   unsigned; and if one expects a positive result and gets a negative
   one, there's a bug in the code anyway.

Fine, but

a = c - d;
if (a < 0)
  /* handle the error */

is likely to be more efficient than

if (c < d)
  /* handle the error */
else
  a = c - d;

And yes, maybe there's a bug in the code, but isn't it better to have
a reasonably easy way of catching it?  If you put assertions in the
code, they can be compiled in during debugging, and then turned off
(thereby not consuming any extra time) when the code is solid.
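Standard C's assert() works exactly this way, and glib's g_assert()
behaves similarly. A minimal sketch:

```c
#include <assert.h>

/* The assertion is active in debug builds; compiling with -DNDEBUG
   removes it entirely, so it costs nothing once the code is solid. */
int region_width(int x1, int x2)
{
    int w = x2 - x1;
    assert(w >= 0);     /* catches x2 < x1 during development */
    return w;
}
```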

   > I agree about the return value thing, but I doubt the use of
   > signed or unsigned makes up for any noticeable speed difference
   > except under very rare conditions. Before any integer is changed
   > to an unsigned value, a benchmark should prove that it really is
   > an optimization, and all code using the value needs to be reviewed
   > carefully.

   FWIW I've converted lots of parameters and variables in paint-funcs
   to unsigned and I've seen a 2.5% object-code reduction in this part
   of the GIMP, and since the calculations were simplified by the
   compiler (verified by examining the assembly) it's surely also
   faster, though it's quite hard to profile this code since there's
   no benchmark generating reproducible results.

Get some numbers first before worrying about micro-optimizations like
this.  Code that looks simpler isn't necessarily faster, and in
particular it may not be *significantly* faster.  You're better off
looking for hot spots in the code; if this isn't a hot spot, it
doesn't matter what you do.

I've done some stuff like this in Gimp-print (although I haven't found
unsigned vs. signed to matter very much, unless I can replace a
division by a power of two with a right shift, in which case it's
sometimes worthwhile to take the branch penalty), but I've spent a
lot of time
with the profiler to actually find the real hot spots.  Simply moving
constant computations out of loops (into a lookup table, or some such)
often saves a lot more time than any of this bit fiddling.
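Two sketches of the kinds of changes described above (illustrative
code, not actual Gimp-print source): a power-of-two division
strength-reduced to a shift, and a per-pixel computation hoisted into
a table built once outside the loop.

```c
/* x / 8 replaced by a right shift; for unsigned operands the two
   are exactly equivalent. */
unsigned div_by_8(unsigned x)
{
    return x >> 3;
}

/* Build a 256-entry table once, before the pixel loop, instead of
   recomputing i * scale for every pixel in the image. */
void build_scale_table(unsigned char table[256], double scale)
{
    for (int i = 0; i < 256; i++)
        table[i] = (unsigned char) (i * scale);
}
```

After build_scale_table() runs, the inner loop does a single indexed
load per pixel instead of a multiply -- the sort of win the profiler,
not intuition, should confirm.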

   I still think GIMP is too fat and the code in parts too ugly.
   Though my changes in general show only small improvements, I
   usually also fix stylistic problems and find suboptimal code and
   bogosity during the audits. Having a clear view of what a function
   does and which parameters it expects is pretty important, and
   correctly typing those parameters is a good step toward quality
   code that always runs at maximum speed.

I agree, but that's exactly why this kind of micro-optimization is
premature.  You'll get much better results looking for the hot spots
first, and understanding exactly what's going on and how you can
devise an overall more efficient approach, than by blindly converting
signed ints to unsigned and using bit fields.

Robert Krawitz <[EMAIL PROTECTED]>

Tall Clubs International  -- or 1-888-IM-TALL-2
Member of the League for Programming Freedom -- mail [EMAIL PROTECTED]
Project lead for Gimp Print/stp --

"Linux doesn't dictate how I work, I dictate how Linux works."
--Eric Crampton