On Sun, Jan 17, 2016 at 4:11 AM, anatoly techtonik <techto...@gmail.com> wrote:
> And actually I don't understand what "a tool to help you configure
> projects" really means, because I am not familiar with autoconf
> (that's why I asked the question in the first place). So I see that C
> files already have #define mechanism inside, but it looks like C
> compilers can not agree what #define's to set and what do they mean.
> That's why users need to create #defines themselves and define the
> meaning for them. Am I right?
> Is there any attempt to define standard set for those #define's and
> their meaning across compilers?

If you're just starting out, it's not unusual to select compiler- and
platform-specific functionality using the defines built into the
compiler.  However, most experienced C/C++ developers consider it bad
practice to do so.  Instead, they create their own defines to specify
which platforms should use the subsets of code that may not be
portable.  Why go to all this trouble and create defines independent
of the compiler and platform?  Take a look at some real-world
examples.  What happens when you try to build a library against X11 on
Windows, but the library hard-codes the Win32 headers behind the
compiler's built-in flag for detecting a Windows platform?  You have
to go through and fix every define, because you only want the X11
headers, not the Win32 headers.  Cygwin went so far as to remove the
Windows define from their compiler so they wouldn't have this issue.
However, if you're using X11 with a Windows compiler other than Cygwin
and the library or application relies on a built-in define, you're
stuck fixing every instance of it.  I often find myself going in and
fixing defines because someone assumed functionality was or wasn't
available based on a compiler flag, and my compiler doesn't behave the
way they assumed.  That's typically why using built-in compiler
defines to decide which code to use is not considered best practice.

What about when a compiler doesn't support a feature in one version,
but does in a later version?  Then you need to check not only which
operating system and which compiler, but also which versions do and
don't work with a particular feature.  What happens when you need to
know the size of a particular type on a particular system?  Do you
assume you know what it is based on compiler and platform?  How do you
know whether you're on a 32- or 64-bit system?  Do you look for yet
more compiler-specific defines?  What if there aren't any?  C
libraries like musl went to great lengths not to expose any
compiler-specific define that would let you know you're working with
musl in place of something like glibc or uclibc.  The developer wanted
to be able to add new features in later versions and to discourage
developers from hard-coding what works based on compiler-specific
defines.

Things also get complicated if you're building with a cross-compiler.
If you check what platform you're on according to the compiler, it
might tell you Linux, but you may be building for Windows or for an
embedded system running DOS or some other platform.

If you're familiar with a language like JavaScript, you'll find that
JavaScript developers solve the problem of platform-specific code by
using a runtime check to see whether functionality is available.  Some
developers have taken a similar approach with C/C++.  AT&T came up
with one configure/build system that does runtime checks for
functionality.  Typically, however, the most widely adopted methods
for checking functionality in C/C++ are build systems such as autoconf
and cmake.  They create small test programs and run them to see
whether a feature works or fails, to check the sizes of types, to
check which flags work, whether a library is available, and so on.  So
it is, in effect, a runtime check performed during the build.  The
information gathered from these checks is used to create defines or to
select which code is compiled when the program is built.  So build
systems for most C/C++ programs check for functionality during the
build instead of having the programs check when they're run.  It's
usually much faster for a program to do the check ahead of time and
use the information at compile time than to try to check everything
dynamically at runtime.  It's also easier for the developer to let the
build tool simply check whether the code works on a platform than to
try to code in every possible combination of compiler-specific flags
that might be available, in order to decide what to do with code that
might not be portable.

It would be much simpler if developers stuck with code that was
portable to all systems, but in practice this usually only happens for
very small sample programs.  Even command-line tools like many of the
common GNU utilities (find, ls, etc.) use a lot of platform-specific
code that's not part of the official C language specification.  There
are several sets of extensions to the C language, such as the POSIX
standards.  Many of the POSIX interfaces are not part of the official
C standards; POSIX is typically a superset of them, and POSIX code is
not necessarily portable to non-POSIX platforms.

> Then tools like autoconf are used to detect which #define is correct
> for specific system and set its value accordingly. Right? This is made
> by perl script named ./configure. Right? What is the role of cDetect
> then? Is it a C program that does the job of Perl ./configure?

configure is typically a bash script.  Some applications create their
own configure scripts using other languages like Perl or Tcl, but
those projects probably aren't using the standard GNU autoconf
program.  The GNU configure system uses several languages, including
m4 and Perl, just to build a basic C or C++ program.  If you're
attempting to build libraries/applications on a minimal system or from
scratch, ideally it would be nice to only require one language to do
the job.

Using CDetect or other configure programs lets you determine which
features will compile successfully and which settings to use before
you actually build a program.  Most build systems also use some type
of make program to perform the actual compile steps once the configure
step has figured out how best to build the application or library on
that particular system.

So that's more than most people probably want to know about the
rationale behind build systems for compiled programs.  However, if you
don't understand what others have done, and where other systems have
worked well or failed, you're bound to make the same mistakes all over
again.
