On 06.03.2012 18:10, Manu wrote:
On 6 March 2012 15:10, Timon Gehr <timon.g...@gmx.ch
<mailto:timon.g...@gmx.ch>> wrote:
On 03/06/2012 01:27 PM, Manu wrote:
concatenation, etc performs bucket loads of implicit GC
allocations
a~b
Nothing implicit about that.
That is the very definition of an implicit allocation. What about the
concatenation operator says that an allocation is to be expected?
And what if you do a sequence of concatenations, a ~ b ~ c? Now I've
even created a redundant intermediate allocation. Will it be cleaned up
immediately?
Just make an enhancement request ;)
Anyway, it's a good point as long as the GC stays sloppy.
Is there a convenient syntax to concatenate into a target buffer
(subverting the implicit allocation)? If the syntax isn't equally
convenient, nobody will use it.
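For what it's worth, here is a rough sketch of the usual Phobos answer,
std.array.appender. It still GC-allocates its backing array, but it avoids
the per-'~' temporaries and lets you reserve and reuse one buffer:

    import std.array : appender;

    void main()
    {
        string a = "foo", b = "bar", c = "baz";

        // Convenient, but the result (and, depending on how the compiler
        // lowers the chain, possibly an intermediate) is GC-allocated:
        string s = a ~ b ~ c;

        // Explicit alternative: grow into a buffer whose lifetime you control.
        auto buf = appender!string();
        buf.reserve(a.length + b.length + c.length); // one up-front allocation
        buf.put(a);
        buf.put(b);
        buf.put(c);
        assert(buf.data == s);
    }

Not as terse as '~', which is exactly the convenience problem being raised.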
This is my single biggest fear in D. I have explicit control
within my
own code, but I wonder if many D libraries will be sloppy and
over-allocate all over the place, and be generally unusable in many
applications.
IMHO this fear is unjustified. If the library developers are that
sloppy, chances are that the library is not worth using, even when
leaving all memory allocation concerns away. (It is likely that you
aren't the only programmer familiar with some of the issues.)
I don't think it is unjustified; this seems to be the rule in C/C++
rather than the exception, and there's nothing in D to suggest this will
be mitigated. Possibly it will be worsened...
Many libraries which are perfectly usable in any old 'app' are not
usable in realtime or embedded apps purely due to their internal
design/allocation habits.
Hopefully the D library authors will be more receptive to criticism...
but I doubt it. I think it'll be exactly as it is in C/C++ currently.
Consider C strings. You need to keep track of ownership of them. That
often means creating extra copies, rather than sharing a single copy.
Rubbish, strings are almost always either refcounted
Technically, refcounting is a form of GC.
Not really, it doesn't lock up the app at a random time for an
indeterminate amount of time.
or on the stack for
dynamic strings, or have fixed memory allocated within structures. I
don't think I've ever seen someone duplicating strings into separate
allocations liberally.
It is impossible to slice a zero-terminated string without copying
it in the general case and refcounting slices is not trivial.
This is when stack buffers are most common in C.
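To make the slicing point concrete: a D slice is just a pointer + length
pair into the parent array, so slicing itself never copies; it's only at a
C boundary that a copy is typically forced. A minimal sketch:

    import std.string : toStringz;

    void main()
    {
        string s = "hello world";

        // No copy, no allocation, no terminator to patch in.
        string word = s[6 .. 11];
        assert(word == "world");

        // Handing a slice to a C API is what forces the copy, because a
        // '\0' can't be written without mutating the parent string.
        const(char)* p = toStringz(s[0 .. 5]); // typically allocates "hello\0"
    }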
Who actually uses shared_ptr? Talking about the STL is
misleading... an
overwhelming number of C/C++ programmers avoid the STL like the
plague
(for these exact reasons). Performance-oriented programmers
rarely use
the STL out of the box, and that's what we're talking about here, right?
Possibly now you are the one who is to provide supporting statistics.
Touche :)
If you're not performance-oriented, then who cares about the GC
either?
There is a difference between not being performance-oriented and being
performance-agnostic. Probably everyone cares about performance to
some extent.
True.
On 6 March 2012 15:13, Dmitry Olshansky <dmitry.o...@gmail.com
<mailto:dmitry.o...@gmail.com>> wrote:
On 06.03.2012 16:27, Manu wrote:
Phobos/druntime allocate liberally - the CRT almost never
allocates
It's just that most of the CRT has incredibly bad usability, partly because
it lacks _any_ notion of allocators. And the policy of using statically
allocated shared data, as in localtime, srand, etc., has shown remarkably
bad multi-threaded scalability.
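A small D illustration of the localtime point, using the core.stdc.time
bindings; the exact behaviour varies by C runtime, so take the comments as
the common case rather than a guarantee:

    import core.stdc.time : time_t, time, tm, localtime;
    import std.stdio : writeln;

    void main()
    {
        time_t now = time(null);

        // On many C runtimes localtime() hands back a pointer into
        // statically allocated shared data, so a later call (possibly from
        // another thread) silently overwrites the result you're holding.
        tm* t = localtime(&now);
        writeln(t.tm_hour, ":", t.tm_min);
    }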
I agree to an extent. Most C APIs tend to expect you to provide the
result buffer,
...pointer and its supposed length, and then do something sucky if it
doesn't fit, like truncate it (strncpy, I'm looking at you!).
and that doesn't seem to be the prevailing pattern in D.
There are better abstractions than "pass a buffer".
Some might argue it's ugly to pass a result buffer in, and I agree to an
extent, but I'll take it every time over the library violating my app's
allocation patterns.
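For completeness, Phobos does have the "caller provides the buffer" pattern
too; a small sketch with std.format.sformat, which (as I understand it)
errors out instead of truncating:

    import std.format : sformat;

    void main()
    {
        char[64] buf;                    // caller-owned, stack storage, no GC
        auto res = sformat(buf[], "%s %s!", "Hello", "world");
        assert(res == "Hello world!");
        // res is a slice of buf; if the formatted text didn't fit, sformat
        // raises an error rather than silently truncating like strncpy does.
    }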
Who actually uses shared_ptr?
Like everybody? Though with C++11 move semantics, unique_ptr is
going to lessen its widespread use. And there are ways to spend
fewer than two proper memory allocations per shared_ptr, like keeping a
special block allocator for the ref-counters.
More importantly, smart pointers are here to stay in C++.
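The closest D analogue I can think of is std.typecons.RefCounted which, if
I remember the implementation correctly, keeps the payload and the count
together in a single malloc'd block (similar in spirit to make_shared
collapsing shared_ptr's two allocations into one). A minimal sketch:

    import std.typecons : RefCounted;

    struct Blob { int id; }

    void main()
    {
        // One heap block holds both the payload and the reference count.
        auto a = RefCounted!Blob(42);
        assert(a.id == 42);  // member access is forwarded to the payload

        auto b = a;          // copying bumps the count
        // When the last copy goes out of scope, the block is freed
        // immediately and deterministically: no GC pause involved.
    }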
Everybody eh.. :)
Well, speaking from within the games industry at least, there's a
prevailing trend back towards flat C or C-like C++, with many lectures and
talks on the topic. I have no contact with any C++ programmers who use the
STL beyond the most trivial containers like vector. Many games companies
re-invent some STL-ish thing internally which is less putrid ;)
Additionally, I can't think of many libraries I've used that go hard-out
C++. Most popular libraries are very conservative, or even flat C (most
old stable libs that EVERYONE uses: zlib, png, jpeg, mad, tinyxml, etc.).
Take into account how much discipline and manpower it took. Yet I
remember the gory days when libpng segfaulted quite often ;)
Havok, PhysX, FMOD, etc. are C++, but very light C++: light classes, no
STL, etc.
Havok is pretty old, btw; back in the day, the STL implementation + C++
compiler combination used to be slow and crappy, partly because of poor
inlining. Hence the so-called "abstraction cost"; it's almost the other
way around nowadays.
Unreal used to use STL... but they fixed it :P
--
Dmitry Olshansky