On Friday, 15 August 2014 at 03:10:43 UTC, Etienne Cimon wrote:
I'm looking into making a binding for the C++ API called Botan,
and the constructors in it take a std::function. I'm wondering
if there's a D equivalent for this binding to work out, or if I
have to make a C++ wrapper as well?
Quick test...
Ah, thanks a lot Jonathan. I kept telling myself I should probably test it
on a simple case.
OK, the good news is, Appender works in these cases (I mean, that's
good news for Phobos).
Now, I just have to find out why it's slower in my case :)
import std.array;
On Thursday, 14 August 2014 at 18:52:00 UTC, Sean Kelly wrote:
On 64 bit, reserve a huge chunk of memory, set a SEGV handler
and commit more as needed. Basically how kernel thread stacks
work. I've been meaning to do this but haven't gotten around to
it yet.
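The scheme Sean describes (reserve a big virtual range up front, commit pages lazily from a fault handler) can be sketched as below. This is a POSIX-only illustration with error handling omitted, not anything from druntime; in a real handler you would also need `sigaltstack`, since the overflowing stack cannot host its own handler.

```d
// Sketch: reserve a large stack with mmap(PROT_NONE), then commit pages
// from a SIGSEGV handler as the guard region is touched.
import core.sys.posix.sys.mman;
import core.sys.posix.signal;

enum reservedSize = 64 * 1024 * 1024; // virtual reservation only, not committed
__gshared void* stackBase;

extern (C) void onSegv(int sig, siginfo_t* info, void* ctx)
{
    auto addr = cast(size_t) info.si_addr;
    // If the fault is inside our reservation, commit the faulting page.
    if (addr >= cast(size_t) stackBase &&
        addr <  cast(size_t) stackBase + reservedSize)
    {
        enum page = 4096;
        mprotect(cast(void*) (addr & ~(page - 1)), page,
                 PROT_READ | PROT_WRITE);
        return;
    }
    // Otherwise it is a genuine segfault; re-raise / abort here.
}

void setup()
{
    stackBase = mmap(null, reservedSize, PROT_NONE,
                     MAP_PRIVATE | MAP_ANON, -1, 0);
    sigaction_t sa;
    sa.sa_flags = SA_SIGINFO | SA_ONSTACK; // handler runs on an alternate stack
    sa.sa_sigaction = &onSegv;
    sigaction(SIGSEGV, &sa, null);
}
```

As noted later in the thread, mmap already reserves lazily on common platforms, so the explicit handler may be unnecessary for plain stack growth.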
AFAIK, the OS already provides this
I wonder if using plain `Array` instead may result in better performance
where immutability is not needed.
Hmm, no:
module appendertest;
import std.array;
import std.datetime;
import std.stdio;
import std.container;
enum size = 1_000;
void test1()
{
    auto arr = appender!(int[])();
    // (body truncated in the search result; a minimal completion)
    foreach (n; 0 .. size)
        arr.put(n);
}
On Thu, Aug 14, 2014 at 11:33 PM, Joseph Rushton Wakeling via
Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:
On 14/08/14 19:16, Philippe Sigaud via Digitalmars-d-learn wrote:
Do people here get good results from Appender? And if yes, how are you
using it?
An example where it
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887%28v=vs.85%29.aspx
Allocates memory charges (from the overall size of memory and
the paging files on disk) for the specified reserved memory
pages. The function also guarantees that when the caller later
initially accesses the
On Thursday, 14 August 2014 at 07:46:29 UTC, Carl Sturtivant
wrote:
The default size of the runtime stack for a Fiber is 4*PAGESIZE
which is very small, and a quick test shows that a Fiber
suffers a stack overflow that doesn't lead to a clean
termination when this limit is exceeded.
Pass a
You'll certainly have to make a C++ wrapper. However, since a delegate is
implemented as a struct containing a context pointer and a function
pointer, you can get some degree of interoperability between C++ and D
(BUT note that it is an undocumented implementation detail, subject to
change without notice)
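The split described above can be seen from D itself: a delegate exposes its two halves as `.ptr` (the context) and `.funcptr` (the function), and they can be pulled apart and reassembled. A minimal sketch, relying on exactly the undocumented layout the post warns about:

```d
// Sketch: splitting a D delegate into the raw context/function pointers
// (what you would hand across to a C++ wrapper) and reassembling it.
// This depends on an undocumented implementation detail.
void demo()
{
    int counter;
    void bump(int n) { counter += n; } // delegate capturing `counter`
    auto dg = &bump;

    // The two raw pieces...
    void* context = dg.ptr;
    auto raw = dg.funcptr;

    // ...reassembled into a working delegate again.
    void delegate(int) dg2;
    dg2.ptr = context;
    dg2.funcptr = raw;
    dg2(3);
    assert(counter == 3);
}
```

A C++ wrapper would carry the same two pointers and invoke the function with the context as the hidden first argument, which is ABI-dependent; hence the advice to wrap rather than rely on it.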
On Friday, 15 August 2014 at 08:35:41 UTC, Philippe Sigaud via
Digitalmars-d-learn wrote:
I wonder if using plain `Array` instead may be result in
better performance
where immutability is not needed.
Hmm, no:
...
It is very different with a better compiler, though:
$ ldmd2 -release -O a.d
$
On Thursday, 14 August 2014 at 18:31:15 UTC, Dicebot wrote:
I don't know much about Phobos Appender implementation details,
but the key thing with reusable buffers is to avoid freeing them.
AFAIR Appender.clear frees the allocated memory but
`Appender.length = 0` does not, making it possible to
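The recollection above is worth checking against your Phobos version: in current releases, `Appender.clear` resets the length but keeps the allocation for reuse (it is disabled for const/immutable element types). A small sketch of the reuse pattern:

```d
// Sketch: reusing an Appender's buffer across iterations.
// Behaviour of clear() has varied between releases; verify on yours.
import std.array : appender;

void demo()
{
    auto app = appender!(int[])();
    foreach (iteration; 0 .. 10)
    {
        foreach (i; 0 .. 1_000)
            app.put(i);
        auto snapshot = app.data.dup; // copy out if the result must survive
        app.clear();                  // length -> 0, capacity retained
    }
}
```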
It is very different with a better compiler, though:
$ ldmd2 -release -O a.d
$ ./appendertest
Appender.put :378 ms, 794 μs, and 9 hnsecs
Appender ~=:378 ms, 416 μs, and 3 hnsecs
Std array :2 secs, 222 ms, 256 μs, and 2 hnsecs
Std array.reserve :2 secs, 199 ms, 64 μs,
On Friday, 15 August 2014 at 10:31:59 UTC, Dicebot wrote:
On Friday, 15 August 2014 at 08:35:41 UTC, Philippe Sigaud via
Digitalmars-d-learn wrote:
I wonder if using plain `Array` instead may result in
better performance
where immutability is not needed.
Hmm, no:
...
It is very
On Fri, Aug 15, 2014 at 1:57 PM, Messenger via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:
T[size] beats all of those on dmd head, though it is inarguably a
bit limiting.
I confirm (even with 2.065). With ldc2 it's optimized out of the way,
so it gives 0 hnsecs :-)
Hmm, what
On Friday, 15 August 2014 at 11:57:30 UTC, Messenger wrote:
T[size] beats all of those on dmd head, though it is inarguably
a
bit limiting.
Hey guys, just a bit of background and my own understanding of
Appender, having worked on it a fair bit.
First of all, Appender was not designed as a
On Friday, 15 August 2014 at 12:08:58 UTC, Philippe Sigaud via
Digitalmars-d-learn wrote:
Hmm, what about a sort of linked list of static arrays, that
allocates
a new one when necessary?
Appender is not a container, and has no freedom over the data it
manipulates. It has to be able to accept
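For contrast, the "linked list of static arrays" idea from the question could look like the sketch below. It is a hypothetical type, not anything in Phobos, and it shows the trade-off the reply is pointing at: growth never copies, but the elements are no longer contiguous, so you cannot hand out a single `T[]` slice without copying, which Appender must be able to do.

```d
// Sketch: an append-only buffer as a singly linked list of fixed-size
// chunks. Growing never relocates existing elements.
struct ChunkedBuffer(T, size_t chunkSize = 1024)
{
    static struct Chunk
    {
        T[chunkSize] data;
        size_t used;
        Chunk* next;
    }

    Chunk* head, tail;
    size_t length;

    void put(T value)
    {
        if (tail is null || tail.used == chunkSize)
        {
            auto c = new Chunk;
            if (tail is null) head = c;
            else tail.next = c;
            tail = c;
        }
        tail.data[tail.used++] = value;
        ++length;
    }
}
```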
On Friday, 15 August 2014 at 08:36:34 UTC, Kagamin wrote:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887%28v=vs.85%29.aspx
Allocates memory charges (from the overall size of memory and
the paging files on disk) for the specified reserved memory
pages. The function also
On Friday, 15 August 2014 at 08:41:30 UTC, Kagamin wrote:
On Thursday, 14 August 2014 at 07:46:29 UTC, Carl Sturtivant
wrote:
The default size of the runtime stack for a Fiber is
4*PAGESIZE which is very small, and a quick test shows that a
Fiber suffers a stack overflow that doesn't lead to a
On Friday, 15 August 2014 at 14:26:28 UTC, Sean Kelly wrote:
Oh handy, so there's basically no work to be done on Windows.
I'll have to check the behavior of mmap on Posix.
I heard that calloc behaves this way on Linux (a COW blank page mapped
over the entire range); it was discussed here some time
On Friday, 15 August 2014 at 14:28:34 UTC, Kagamin wrote:
On Friday, 15 August 2014 at 14:26:28 UTC, Sean Kelly wrote:
Oh handy, so there's basically no work to be done on Windows.
I'll have to check the behavior of mmap on Posix.
I heard that calloc behaves this way on Linux (COW blank page
Well, I created a wrapper around a std.array.uninitializedArray
call, to manage the interface I need (queue behavior: pushing at
the end, popping at the beginning). When hitting the end of the
current array, it either reuses the current buffer or creates a new
one, depending on the remaining
On Friday, 15 August 2014 at 14:28:34 UTC, Dicebot wrote:
Won't that kind of kill the purpose of Fiber as low-cost
context abstraction? Stack size does add up for thousands of
fibers.
As long as allocation speed is fast for large allocs (which I
have to test), I want to change the default
On Friday, 15 August 2014 at 14:28:34 UTC, Dicebot wrote:
Won't that kind of kill the purpose of Fiber as low-cost
context abstraction? Stack size does add up for thousands of
fibers.
I didn't measure it.
On Friday, 15 August 2014 at 14:26:28 UTC, Sean Kelly wrote:
On Friday, 15 August 2014 at 08:36:34 UTC, Kagamin wrote:
http://msdn.microsoft.com/en-us/library/windows/desktop/aa366887%28v=vs.85%29.aspx
Allocates memory charges (from the overall size of memory and
the paging files on disk) for
At least on OSX, it appears that mapping memory is constant time
regardless of size, but there is some max total memory I'm
allowed to map, presumably based on the size of a VMM lookup
table. The max block size I can allocate is 1 GB, and I can
allocate roughly 131,000 of these blocks before
On Friday, 15 August 2014 at 14:45:02 UTC, Sean Kelly wrote:
On Friday, 15 August 2014 at 14:28:34 UTC, Dicebot wrote:
Won't that kind of kill the purpose of Fiber as low-cost
context abstraction? Stack size does add up for thousands of
fibers.
As long as allocation speed is fast for large
On Friday, 15 August 2014 at 15:25:23 UTC, Dicebot wrote:
No, I was referring to the proposal to supply bigger stack size
to Fiber constructor - AFAIR it currently does allocate that
memory eagerly (and does not use any OS CoW tools), doesn't it?
I thought it did, but apparently the
On Friday, 15 August 2014 at 15:40:35 UTC, Sean Kelly wrote:
On Friday, 15 August 2014 at 15:25:23 UTC, Dicebot wrote:
No, I was referring to the proposal to supply bigger stack
size to Fiber constructor - AFAIR it currently does allocate
that memory eagerly (and does not use any OS CoW
On Friday, 15 August 2014 at 14:40:36 UTC, Philippe Sigaud wrote:
Well, I created a wrapper around a std.array.uninitializedArray
call, to manage the interface I need
Make sure you don't use that if your type has elaborate
construction, or assumes a certain initial state (unless you are
So I'm trying to use @safe, pure and nothrow.
If I understand correctly Adam Ruppe's Cookbook, by putting
@safe:
pure:
nothrow:
at the beginning of a module, they apply to all definitions,
right? Even methods, inner classes, and so on?
Because I did just that on half a dozen modules
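That is indeed how attribute labels behave (the thread confirms it below: a module-level `@safe:` flagged a class method). A small sketch:

```d
// Sketch: trailing-colon attribute labels apply to every declaration
// that follows them in the module, including members of aggregates
// declared afterwards.
@safe:
pure:
nothrow:

int twice(int x) { return 2 * x; }  // @safe pure nothrow

class C
{
    int y;
    int get() { return y; }         // also @safe pure nothrow
}

// A single symbol can still opt back out explicitly:
void lowLevel() @system { }
```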
On Friday, 15 August 2014 at 16:48:10 UTC, monarch_dodra wrote:
On Friday, 15 August 2014 at 14:40:36 UTC, Philippe Sigaud
wrote:
Well, I created a wrapper around a
std.array.uninitializedArray call, to manage the interface I
need
Make sure you don't use that if your type has elaborate
On Wednesday, 6 August 2014 at 18:07:08 UTC, H. S. Teoh via
Digitalmars-d-learn wrote:
import std.algorithm : reduce, max, min;
auto highest = reduce!((a,b) => max(a,b))(-double.max,
bids.byValue());
auto lowest = reduce!((a,b) => min(a,b))(double.max,
bids.byValue());
T
Take a
On Friday, 15 August 2014 at 16:54:54 UTC, Philippe Sigaud wrote:
So I'm trying to use @safe, pure and nothrow.
If I understand correctly Adam Ruppe's Cookbook, by putting
@safe:
pure:
nothrow:
at the beginning of a module, they apply to all
definitions, right? Even methods, inner
On Fri, Aug 15, 2014 at 04:51:59PM +, monarch_dodra via Digitalmars-d-learn
wrote:
On Wednesday, 6 August 2014 at 18:07:08 UTC, H. S. Teoh via
Digitalmars-d-learn wrote:
import std.algorithm : reduce, max, min;
auto highest = reduce!((a,b) => max(a,b))(-double.max,
On Friday, 15 August 2014 at 16:51:20 UTC, Philippe Sigaud wrote:
On Friday, 15 August 2014 at 16:48:10 UTC, monarch_dodra wrote:
Make sure you don't use that if your type has elaborate
construction, or assumes a certain initial state (unless you
are actually emplacing your objects of course).
In another module I marked as '@safe:' at the top, the compiler told
me that a class opEquals could not be @safe (because Object.opEquals
is @system).
So it seems that indeed a module-level '@safe:' affects everything,
since a class method was found lacking.
(I put a @trusted attribute on it).
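The workaround mentioned there can be sketched as follows: `Object.opEquals` is `@system`, so under a module-level `@safe:` the override is marked `@trusted` instead (a hypothetical class, for illustration).

```d
// Sketch: a @trusted opEquals override inside an otherwise @safe module.
@safe:

class Point
{
    int x, y;

    override bool opEquals(Object o) @trusted
    {
        auto p = cast(Point) o;
        return p !is null && x == p.x && y == p.y;
    }
}
```

`@trusted` is a promise to the compiler that the body is memory-safe despite calling into `@system` territory, so it deserves a careful look rather than a reflexive slap-on.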
On Thursday, 14 August 2014 at 17:16:42 UTC, Philippe Sigaud
wrote:
From time to time, I try to speed up some array-heavy code by
using std.array.Appender, reserving some capacity and so on.
It never works. Never. It gives me executables that are maybe
30-50% slower than bog-standard array
On Thursday, 14 August 2014 at 18:52:00 UTC, Sean Kelly wrote:
On 64 bit, reserve a huge chunk of memory, set a SEGV handler
and commit more as needed. Basically how kernel thread stacks
work. I've been meaning to do this but haven't gotten around to
it yet.
Very nice; the hardware VM
On Friday, 15 August 2014 at 08:41:30 UTC, Kagamin wrote:
On Thursday, 14 August 2014 at 07:46:29 UTC, Carl Sturtivant
wrote:
The default size of the runtime stack for a Fiber is
4*PAGESIZE which is very small, and a quick test shows that a
Fiber suffers a stack overflow that doesn't lead to a
On Friday, 15 August 2014 at 20:11:43 UTC, Carl Sturtivant wrote:
On Friday, 15 August 2014 at 08:41:30 UTC, Kagamin wrote:
On Thursday, 14 August 2014 at 07:46:29 UTC, Carl Sturtivant
wrote:
The default size of the runtime stack for a Fiber is
4*PAGESIZE which is very small, and a quick test
On Friday, 15 August 2014 at 15:40:35 UTC, Sean Kelly wrote:
On Friday, 15 August 2014 at 15:25:23 UTC, Dicebot wrote:
No, I was referring to the proposal to supply bigger stack
size to Fiber constructor - AFAIR it currently does allocate
that memory eagerly (and does not use any OS CoW
On Fri, Aug 15, 2014 at 10:04 PM, John Colvin via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:
compiler, version, OS, architecture, flags?
Compiler: DMD 2.065 and LDC 0.14
OS: Linux 64bits (8 cores, but there is no parallelism here)
flags: -O -release -inline (and -noboundscheck
I found out that the redirect was not responsible for the CPU
time; some other, totally unrelated part of the code was.
I also saw that in my case a redirect is much simpler using
spawnProcess:
auto logFile = File("errors.log", "w");
auto pid =
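The snippet is cut off above; a self-contained sketch of that kind of redirect (the program name is made up) looks like this, since `spawnProcess` takes the child's three standard streams directly:

```d
// Sketch: redirect a child process's stderr to a log file.
import std.process : spawnProcess, wait;
import std.stdio : File, stdin, stdout;

void demo()
{
    auto logFile = File("errors.log", "w");
    auto pid = spawnProcess(["./worker"], stdin, stdout, logFile);
    wait(pid);
}
```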
On Friday, 15 August 2014 at 16:48:10 UTC, monarch_dodra wrote:
If you are using raw GC arrays, then the raw append
operation will outweigh the relocation cost on extension. So
pre-allocation wouldn't really help in this situation (though
the use of Appender *should*)
Is that because it's
On Friday, 15 August 2014 at 20:17:51 UTC, Carl Sturtivant wrote:
On Friday, 15 August 2014 at 15:40:35 UTC, Sean Kelly wrote:
I thought it did, but apparently the behavior of VirtualAlloc
and mmap (which Fiber uses to allocate the stack) simply
reserves the range and then commits it lazily,
On Friday, 15 August 2014 at 21:24:25 UTC, Jonathan M Davis wrote:
On Friday, 15 August 2014 at 16:48:10 UTC, monarch_dodra wrote:
If you are using raw GC arrays, then the raw append
operation will outweigh the relocation cost on extension. So
pre-allocation wouldn't really help in this
On Fri, 15 Aug 2014 20:19:18 +
Carl Sturtivant via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:
Should have read further down the thread --- you're right, as the
memory is in effect merely reserved virtual memory and isn't
actually allocated.
and we -- 32-bit addicts --
On Friday, 15 August 2014 at 16:54:54 UTC, Philippe Sigaud wrote:
So I'm trying to use @safe, pure and nothrow.
If I understand correctly Adam Ruppe's Cookbook, by putting
@safe:
pure:
nothrow:
at the beginning of a module, they apply to all
definitions, right? Even methods, inner
On Fri, 15 Aug 2014 19:04:10 -0700
Timothee Cour via Digitalmars-d-learn
digitalmars-d-learn@puremagic.com wrote:
sounds like my C library based on this article:
http://e98cuenc.free.fr/wordprocessor/piecetable.html
I'm slowly converting my C code to D (nothing fancy yet, still C-style).
it's
On Friday, 15 August 2014 at 23:22:27 UTC, Vlad Levenfeld wrote:
On Friday, 15 August 2014 at 16:54:54 UTC, Philippe Sigaud
wrote:
So I'm trying to use @safe, pure and nothrow.
If I understand correctly Adam Ruppe's Cookbook, by putting
@safe:
pure:
nothrow:
at the beginning of a module, I