> On Jul 26, 2016, at 12:50 PM, Warren D Smith <[email protected]> wrote:
>
> ...
> Sigh. It's really hard to get compiler and language guys to do anything.
I find it puzzling that you appear to think that insulting your audience is the
best way to influence them.
> ...
> There is absolutely no good reason why things have to be *legislated*
> to be an integer number of bytes. They could be single bits. It
> would be fine. PASCAL already provided
> it 40 years ago.
So what? Pascal is a different language with different goals. The reason
there are hundreds of programming languages in the world -- and dozens in
current use -- is that each design is a different tradeoff among conflicting
goals, a different set of choices made for a particular set of reasons. Pascal,
Cobol, Lisp, C, and Python all make very different choices. They are all good
choices in some situations, and bad choices in others; this is why you
sometimes write in C and sometimes in Python.
Support for data in sizes other than those native to most modern machine
architectures comes at a substantial cost in compiler complexity, code size,
and execution time. It's clearly doable, and a few languages have done it.
But omitting it is the more common tradeoff, and clearly a defensible one,
given how well the languages that omit it have fared in the marketplace.
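For what it's worth, a programmer who wants packed bits in C can get them with
a handful of helpers over an array of words. A rough, untested sketch (the
names are mine, purely for illustration):

    #include <stdint.h>
    #include <stddef.h>
    #include <limits.h>

    /* Illustrative only: packed bits stored in an array of uint32_t words. */
    #define BITS_PER_WORD (CHAR_BIT * sizeof(uint32_t))

    static inline int get_bit(const uint32_t *a, size_t i)
    {
        return (a[i / BITS_PER_WORD] >> (i % BITS_PER_WORD)) & 1u;
    }

    static inline void set_bit(uint32_t *a, size_t i, int v)
    {
        uint32_t m = (uint32_t)1 << (i % BITS_PER_WORD);
        if (v) a[i / BITS_PER_WORD] |= m;
        else   a[i / BITS_PER_WORD] &= ~m;
    }

That is the extra work the language asks of the programmer; the cost to the
compiler is zero, which is exactly the tradeoff C chose.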
> If you wanted to make a packed array of 63 bools,
> you could pad it up to 64 to fit it in an integer number of bytes.
> I'd be ok with that. I'm not ok with gratuitously wasting a factor of
> 8 in memory and/or forcing programmers to do lots more work and use
> cruddy syntax, merely because the compiler writers were too lazy to
> just change a few numbers in their code.
Since you clearly don't know much about how compilers work, it would be better
to study the subject before expressing an opinion. You might also study the
art of communicating persuasively.
> And it is an absolute outrage that every processor in the universe
> provides "add with carry" but the C language insists on preventing you
> from accessing that, while providing a way
> to access combined divide & remainder instead. It is simply not a
> justifiable decision.
You might also study processor architecture some more. If by "every processor"
you mean every x86 processor, you might be correct. But of the 15 or so
processor architectures I've looked at, I think only a modest minority have an
add-with-carry instruction. For example, MIPS, an architecture still in wide
use, has no such operation -- it has no carry flag at all.
Quite apart from that, it is not the goal of most (if any) high-level languages
to provide direct access to all CPU facilities. Instead, the more common goal
is to provide a clean set of abstractions complete enough to let people write
reliable programs for the problem area in question, with minimal effort. So it
is with C (and Pascal, for that matter, which doesn't have an add-with-carry
primitive either). For those who want access to all the bells and whistles,
there's a simple answer: assembly language.
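That said, multi-word arithmetic does not actually require an add-with-carry
primitive; the usual portable idiom in C looks roughly like this (an untested
sketch of mine, not anyone's production code):

    #include <stdint.h>
    #include <stddef.h>

    /* r = a + b over n 64-bit words, least-significant word first.
       Returns the carry out of the top word.  Illustrative sketch only. */
    static uint64_t add_n(uint64_t *r, const uint64_t *a,
                          const uint64_t *b, size_t n)
    {
        uint64_t carry = 0;
        for (size_t i = 0; i < n; i++) {
            uint64_t s = a[i] + carry;
            uint64_t c1 = (s < carry);   /* overflow from a[i] + carry */
            r[i] = s + b[i];
            uint64_t c2 = (r[i] < s);    /* overflow from s + b[i] */
            carry = c1 | c2;             /* at most one of these can be 1 */
        }
        return carry;
    }

A good compiler may turn those comparisons into an add-with-carry on hardware
that has one; on hardware that doesn't (MIPS again), this is more or less what
the hand-written assembly would have to do anyway.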
paul