On Fri, Jun 3, 2011 at 6:26 PM, Marc Espie <[email protected]> wrote:
> On Fri, Jun 03, 2011 at 06:11:31PM -0400, Nico Kadel-Garcia wrote:
>> On Tue, May 31, 2011 at 6:51 AM, Marc Espie <[email protected]> wrote:
>>
>> > How comes nobody in other OSes noticed ? Well, people probably did, and
>> > tweaked their allocators to "work", by using preferably the low address space,
>> > and having addresses that increase slowly, so that a lot of pointers are below
>> > 4GB, and a lot of pointer diffs are under 4GB.
>
>> Or you could just be engaging in an ad hominem attack without actually
>> looking at their implementations and assuming they're not doing it
>> right because they're not you or your favorite platform. But hey, we
>> don't know anyone who'd do *that* in the OpenBSD community. Right?
>
> Wrong.
>
> An ad hominem attack would require me asserting all this for a fact, which
> is not what I'm doing. Notice the "probably" ? it makes all the difference
> in the world.

No, I'm afraid it really doesn't require "asserting the truth". To
quote from Wikipedia, "An ad hominem (Latin: "to the man"), short for
argumentum ad hominem, is an attempt to link the truth of a claim to a
negative characteristic or belief of the person advocating it." It's
what I just did to you, in turn. How's it feel?

An example or two would have lent powerful credence to your claim. The
fix for mono, which Marc Espie notes elsewhere in this thread, is
exactly such an indicator.

> Now, up to you. What explanation do you have for those problems not being
> that visible on other 64 bit platforms ? That is, other than their mmap
> not being very random and stuff tending to group together near the low end
> of the address space...

I can personally comment on only a few: I see a mix. I've done some
software porting from 32-bit to 64-bit in the last few years,
including a fascinating open source project a few years ago that I
know works well on OpenBSD. I agree that poor randomization of memory
allocation contributes profoundly to the problem going unnoticed in
testing. (And how often do you *need* 2 GB of RAM for small
applications?) Others simply set the compilation architecture to
32-bit to avoid problems (which I've certainly done!). Still others
have relied extensively on the better-supported 32-bit Java from Sun
and never dug into the code to fix the issue. (They were solving a
problem, not designing an architecture, and were happy to wait and try
to fix it later, despite my personal advice.)

The Java projects have been a chronic source of this kind of problem:
the instability of the early 64-bit implementations led a lot of
Java programmers to insist on 32-bit toolkits, and they've never
bothered to upgrade and support both. Java isn't supposed to be
architecture-dependent, but that belief breaks down as soon as you
have to pay attention to resource allocation and optimization. I'm
doing other things this year: I'll be very curious whether OpenJDK is
good enough in 64-bit land to improve performance and testing there
and reduce this behavior.

> Waiting for a good explanation here.
>
> Do you think we make problems just to make other OSes look bad ?

No, but the snarking is sometimes tedious. It goes both ways.

> No, we encounter problems, and we do our best to fix them.
>
> In my personal experience, most of the time, those problems are bugs in
> apparently portable software.  In the case at hand, we already fixed several
> issues with some programs. All of them related to some very dirty assumptions
> about memory...

*GOOD*.

My last notable migration project involved well-written C. It partly
needed shifting from the older, loosely-sized "int" to "int32_t" or
other well-defined types where the width mattered, to avoid confusion
when compiled as 64-bit. But it also needed considerable attention to
inherent assumptions about its own, unique databases, and that kind of
time and attention is expensive.
