Robert Story <rst...@freesnmp.com>:
>                        And as far as supported configurations, we're
> very big on backwards compatibility.

I think you are spending more effort on this than field conditions justify.
And there is a cost you probably have not audited.

I learned my Unix programming chops back in the 1980s when having thickets
of portability #ifdefs was definitely the right thing to do because there
was so much variation in platform APIs. I too acquired the reflex of never
throwing any of those out, just in case.

This attitude has been obsolete for at least 9 years.

The reason I'm that precise about it is a learning
experience I had while maintaining GPSD in 2009. The important fact
about GPSD in this context is that it is deployed on a *huge* breadth
of hardware, not just datacenter machines but laptops and smartphones
and embedded gear of all sorts.

Then came the day when, doing some routine cleanup, I tripped over a port
#ifdef for a species of big-iron Unix that will never again walk the
earth, and the following thought intruded: "What? Is this really needed?"

Slightly amazed at my own heresy, I continued to think "You know, this
is a build variant I can no longer test.  Why am I letting it clutter
up my code and complicate my build recipe?"

*blink*

Because I have a reflex about these things. I screen out their complexity
cost by habit.

So I went on an #ifdef hunt.  I had never really totaled up the number of
LOC added by alternate build options before.  It was significant.

My next step was to ask how I could reduce this.  The obvious thing to
try was to assume that the standards people had won the war: anywhere I
found an #ifdef where one of the paths assumed SUSv2 + C99, rip out
all the other paths.

When I did that, the resulting patch was large but obviously reversible.
So I tried the bold thing.  I removed all that code and shipped a point release.

My reasoning was thus: point releases are cheap.  This change, if it's
bad, will throw an error at compile time well before it disrupts any
runtime behavior.  I can put back the pieces I actually need when the
build failures hit my bugtracker.

I never saw even one.

And that's how I learned that the standards people had succeeded.

Six years later I performed a similar cruftectomy on the NTP code.
Again, never a peep of complaint from anyone downstream.

The benefits: (1) fewer LOC of more readable code, (2) fewer test paths,
(3) simpler build recipe.

> As 5.8 is getting really close to going out the door, this type of
> cleanup likely won't make it into that release.
> 
> 
> Got any cmake experts? One of the planned items for 5.9 is moving
> to cmake. The bulk of the work is done (patches from VMware against
> 5.7), but work will be needed to integrate to master and put on the
> finishing touches.

I occasionally used cmake when I was on the Battle for Wesnoth project.

I don't like it.  Not so much because cmake is bad in itself, it isn't.
It's a reasonable implementation of its design premises.

The problem is that one of cmake's premises is being a two-phase builder,
generating makefiles, rather than a one-phase builder that directly executes
its recipe.  This repeats the autoconf tragedy, making build-failure
diagnosis *far* more complex and gnarly than it needs to be.

I have come to believe that all two-phase build engines should be shot
with silver bullets and buried at crossroads with stakes through
their hearts.
-- 
                <a href="http://www.catb.org/~esr/">Eric S. Raymond</a>

My work is funded by the Internet Civil Engineering Institute: https://icei.org
Please visit their site and donate: the civilization you save might be your own.


