On Friday, 22 January 2016 at 05:15:13 UTC, Mike Parker wrote:
On Thursday, 21 January 2016 at 23:06:55 UTC, Dibyendu Majumdar
wrote:
On Thursday, 21 January 2016 at 22:44:14 UTC, H. S. Teoh wrote:
Hi - I want to be sure that my code is not allocating memory
via the GC allocator; but when
On Thu, 21 Jan 2016 21:54:36 +, Dibyendu Majumdar wrote:
> Is there a way to disable GC in D?
> I am aware of the @nogc qualifier but I would like to completely disable
> GC for the whole app/library.
>
> Regards Dibyendu
In order to suppress GC collection
On Thursday, 21 January 2016 at 22:44:14 UTC, H. S. Teoh wrote:
Hi - I want to be sure that my code is not allocating memory
via the GC allocator; but when shipping I don't need to
disable GC - it is mostly a development check.
I want to manage all memory allocation manually via
malloc/free
On Thursday, 21 January 2016 at 21:54:36 UTC, Dibyendu Majumdar
wrote:
Is there a way to disable GC in D?
I am aware of the @nogc qualifier but I would like to
completely disable GC for the whole app/library.
Regards
Dibyendu
You should read the core.memory.GC section (
http://dlang.org
Is there a way to disable GC in D?
I am aware of the @nogc qualifier but I would like to completely
disable GC for the whole app/library.
Regards
Dibyendu
On Thursday, 21 January 2016 at 22:34:43 UTC, cym13 wrote:
Out of curiosity, why would you force not being able to
allocate memory?
Hi - I want to be sure that my code is not allocating memory via
the GC allocator; but when shipping I don't need to disable GC -
it is mostly a development
not allocating memory via the
> GC allocator; but when shipping I don't need to disable GC - it is
> mostly a development check.
>
> I want to manage all memory allocation manually via malloc/free.
Just write "@nogc:" at the top of every module and the compiler will
tell you
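As a minimal sketch of the "@nogc: at the top of the module" approach: everything declared after the attribute must be GC-free, so any accidental GC allocation becomes a compile-time error, while manual allocation via the C heap stays allowed.

```d
// With `@nogc:` at the top of the module, every declaration below it
// must avoid GC allocation; the compiler rejects `new`, array
// literals, closures, etc. at compile time.
@nogc:

import core.stdc.stdlib : free, malloc;

int[] makeBuffer(size_t n)
{
    // Manual allocation via the C heap is fine under @nogc.
    auto p = cast(int*) malloc(n * int.sizeof);
    return p[0 .. n];
}

void main()
{
    auto buf = makeBuffer(4);
    scope(exit) free(buf.ptr);
    buf[0] = 42;
    assert(buf[0] == 42);
    // auto a = [1, 2, 3]; // would not compile: array literal needs the GC
}
```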
On Thursday, 21 January 2016 at 21:54:36 UTC, Dibyendu Majumdar
wrote:
Is there a way to disable GC in D?
I am aware of the @nogc qualifier but I would like to
completely disable GC for the whole app/library.
Regards
Dibyendu
GC.disable();
This prevents the garbage collector from running
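A small sketch of that usage; note that GC.disable only suppresses automatic collection cycles — allocations still go through the GC, and explicit collections can still be forced.

```d
import core.memory : GC;

int[] allocWhileDisabled(size_t n)
{
    GC.disable();            // no automatic collection cycles from here on
    scope(exit) GC.enable(); // re-enable on the way out

    auto a = new int[](n);   // GC allocation still works while disabled;
    a[0] = 1;                // it just won't trigger a collection cycle
    return a;
}

void main()
{
    auto a = allocWhileDisabled(1000);
    GC.collect();            // an explicit collection can still be forced
    assert(a[0] == 1);
}
```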
On Thursday, 21 January 2016 at 22:15:13 UTC, Chris Wright wrote:
Finally, you can use gc_setProxy() with a GC proxy you
create. Have it throw an exception instead of allocating. That
means you will get crashes instead of memory leaks if something
uses the GC when it shouldn't
On Thursday, 21 January 2016 at 22:20:13 UTC, Dibyendu Majumdar
wrote:
On Thursday, 21 January 2016 at 22:15:13 UTC, Chris Wright
wrote:
Finally, you can use gc_setProxy() with a GC proxy you
create. Have it throw an exception instead of allocating. That
means you will get crashes instead
On Thursday, 21 January 2016 at 23:06:55 UTC, Dibyendu Majumdar
wrote:
On Thursday, 21 January 2016 at 22:44:14 UTC, H. S. Teoh wrote:
Hi - I want to be sure that my code is not allocating memory
via the GC allocator; but when shipping I don't need to
disable GC - it is mostly a development
to consume about 3Gb of memory.
Does GC greediness depend on available RAM?
My understanding is that the GC won't return collected memory
to the OS unless a threshold relative to the system total is
crossed. You can use GC.minimize() from core.memory to
decrease this. This could result
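A minimal sketch of the GC.minimize suggestion above, assuming the behavior described in the thread (minimize is best-effort: only pools with no live data can be released back to the OS):

```d
import core.memory : GC;

void churnAndShrink()
{
    // Allocate a large block inside a nested function so the stack
    // reference is gone afterwards, then collect and ask the GC to
    // return fully free pools to the operating system.
    static void allocate()
    {
        auto big = new ubyte[](32 * 1024 * 1024);
        big[] = 1;
    }
    allocate();

    GC.collect();  // reclaim the unreferenced block inside the GC heap
    GC.minimize(); // best effort: hand free physical memory back to the OS
}

void main()
{
    churnAndShrink();
}
```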
Rainer Schuetze <r.sagita...@gmx.de> writes:
> On 05.01.2016 01:39, Dan Olson wrote:
>> I haven't played with any of the new GC configuration options introduced
>> in 2.067, but now need to. An application on watchOS currently has
>> about 30 MB of RAM. Is there
On a server with 4GB of RAM our D application consumes about 1GB.
Today we have increased server memory to 6 Gb and the same
application under the same conditions began to consume about 3Gb
of memory.
Does GC greediness depend on available RAM?
On Tue, 05 Jan 2016 16:07:36 +, Jack Applegame wrote:
> On a server with 4GB of RAM our D application consumes about 1GB.
> Today we have increased server memory to 6 Gb and the same application
> under the same conditions began to consume about 3Gb of memory.
> Does GC greed
On 05.01.2016 01:39, Dan Olson wrote:
I haven't played with any of the new GC configuration options introduced
in 2.067, but now need to. An application on watchOS currently has
about 30 MB of RAM. Is there any more documentation than the web page
https://dlang.org/spec/garbage.html
On Sunday, 25 October 2015 at 08:56:52 UTC, Jonathan M Davis
wrote:
It is my understanding that the GC does not normally ever
return memory to the OS
It seems that it does now. In smallAlloc() and bigAlloc(), if
allocation fails it collects garbage and then:
if (lowMem) minimize
Correction: you said
"the GC does not normally ever return memory"
and you're right, because applications do not "normally" consume
>95% of their address space.
On Sunday, October 25, 2015 05:49:42 Richard White via Digitalmars-d-learn
wrote:
> Just wondering if D's GC releases memory back to the OS?
> The documentation for the GC.minimize
> (http://dlang.org/phobos/core_memory.html#.GC.minimize) seems to
> imply that it does,
> but w
Just wondering if D's GC releases memory back to the OS?
The documentation for the GC.minimize
(http://dlang.org/phobos/core_memory.html#.GC.minimize) seems to
imply that it does,
but watching my OS's memory usage for various D apps doesn't
support this.
On Thursday, 8 October 2015 at 09:25:36 UTC, tcak wrote:
On Thursday, 8 October 2015 at 05:46:31 UTC, ketmar wrote:
On Thursday, 8 October 2015 at 04:38:43 UTC, tcak wrote:
Is it possible to modify GC (without rebuilding the
compiler), so it uses a given shared memory area instead of
heap
On Thursday, 8 October 2015 at 05:46:31 UTC, ketmar wrote:
On Thursday, 8 October 2015 at 04:38:43 UTC, tcak wrote:
Is it possible to modify GC (without rebuilding the compiler),
so it uses a given shared memory area instead of heap for
allocations?
sure. you don't need to rebuild
GC is chosen at link time simply to satisfy unresolved symbols.
You only need to compile your modified GC and link with it; it
will be chosen instead of the GC from druntime, no need to recompile
anything else.
Is it possible to modify GC (without rebuilding the compiler), so
it uses a given shared memory area instead of heap for
allocations?
On Thursday, 8 October 2015 at 04:38:43 UTC, tcak wrote:
Is it possible to modify GC (without rebuilding the compiler),
so it uses a given shared memory area instead of heap for
allocations?
sure. you don't need to rebuild the compiler, only druntime.
On Tuesday, 25 August 2015 at 05:12:55 UTC, Laeeth Isharc wrote:
What's the best reference to learn more about PGAS?
I've seen a few presentations,
https://www.osc.edu/sites/osc.edu/files/staff_files/dhudak/pgas-tutorial.pdf
More info on the Go 1.5 concurrent GC, a classic one:
https://blog.golang.org/go15gc
On Tuesday, 25 August 2015 at 07:18:24 UTC, Ola Fosheim Grøstad
wrote:
On Tuesday, 25 August 2015 at 05:09:56 UTC, Laeeth Isharc wrote:
On Monday, 24 August 2015 at 21:57:41 UTC, rsw0x wrote:
[...]
Horses for courses ? Eg for Andy Smith's problem of
processing trade information of tens of
On Tuesday, 25 August 2015 at 05:09:56 UTC, Laeeth Isharc wrote:
On Monday, 24 August 2015 at 21:57:41 UTC, rsw0x wrote:
On Monday, 24 August 2015 at 21:20:39 UTC, Russel Winder wrote:
For Python and native code, D is a great fit, perhaps more so
than Rust, except that Rust is getting more
On Tuesday, 25 August 2015 at 07:21:13 UTC, rsw0x wrote:
An option implies you can turn it off, has this changed since
the last time I used Rust?(admittedly, a while back)
Rust supports other reference types, so you decide by design
whether you want to use linear typing or not?
On Monday, 24 August 2015 at 21:20:39 UTC, Russel Winder wrote:
The issue here for me is that Chapel provides something that C,
C++, D, Rust, Numba, NumPy, cannot – Partitioned Global Address
Space (PGAS) programming. This directly attacks the
multicore/multiprocessor/cluster side of
On Monday, 24 August 2015 at 21:20:39 UTC, Russel Winder wrote:
For Python and native code, D is a great fit, perhaps more so
than Rust, except that Rust is getting more mind share,
probably because it is new.
I'm of the opinion that Rust's popularity will quickly die when
people realize
On Sun, 2015-08-23 at 19:42 +, via Digitalmars-d-learn wrote:
[…]
Yes, of course it is, but given its typical use context I find
it odd that they didn't go more towards higher level constructs.
For me Go displaces Python where more speed is required, though I
wish it was more
. Of course systems like Numba change the Python performance
game, which undermines D's potential in the Python-verse, as it
does C and C++. Currently I am investigating
Python/Numba/Chapel as the way of doing performance computing.
Anyone who just uses Python/NumPy/SciPy is probably not doing
On Monday, 24 August 2015 at 21:57:41 UTC, rsw0x wrote:
On Monday, 24 August 2015 at 21:20:39 UTC, Russel Winder wrote:
For Python and native code, D is a great fit, perhaps more so
than Rust, except that Rust is getting more mind share,
probably because it is new.
I'm of the opinion that
On Saturday, 22 August 2015 at 06:54:43 UTC, Ola Fosheim Grøstad
wrote:
On Saturday, 22 August 2015 at 06:48:48 UTC, Russel Winder
wrote:
But one that Google are entirely happy to fully fund.
Yes, they have made Go fully supported on Google Cloud now, so
I think it is safe to say that Google
On Sunday, 23 August 2015 at 12:49:35 UTC, Russel Winder wrote:
You are mixing too many factors here. General purpose has
nothing to do with performance; it is to do with whether the
language can describe most if not all forms of computation. Go is a
general purpose programming language just like C,
On Sun, 2015-08-23 at 11:26 +, rsw0x via Digitalmars-d-learn wrote:
[…]
https://groups.google.com/forum/#!msg/golang-dev/pIuOcqAlvKU/C0wooVzXLZwJ
25-50% performance decrease across the board in 1.4 with the
addition of write barriers, to an already slow language.
Garbage collection
.
On the other hand gc is blindingly fast at compilation compared to
gccgo. This seems reminiscent of dmd vs. ldc and gdc!
--
Russel.
=
Dr Russel Winder t: +44 20 7585 2200 voip: sip:russel.win...@ekiga.net
41 Buckmaster Road
On Sat, 2015-08-22 at 09:27 +, rsw0x via Digitalmars-d-learn wrote:
[…]
The performance decrease has been there since 1.4 and there is no
way to remove it - write barriers are the cost you pay for
concurrent collection. Go was already much slower than other
compiled languages, now it
On Saturday, 22 August 2015 at 12:48:31 UTC, rsw0x wrote:
The problem with D's GC is that there's no scaffolding there
for it, so you can't really improve it.
At best you could make the collector parallel.
If I had the runtime hooks and language guarantees I needed I'd
begin work on a per
On Sunday, 23 August 2015 at 11:06:20 UTC, Russel Winder wrote:
On Sat, 2015-08-22 at 09:27 +, rsw0x via
Digitalmars-d-learn wrote:
[…]
The performance decrease has been there since 1.4 and there is
no way to remove it - write barriers are the cost you pay for
concurrent collection. Go
has had). However, because of current
traction in Web servers and general networking, it is clear that that
is where the bulk of the libraries are. Canonical also use it for Qt UI
applications. I am not sure of Google real intent for Go on Android,
but there is one.
A concurrent GC for D would kill
On Saturday, 22 August 2015 at 09:16:32 UTC, Russel Winder wrote:
On Sat, 2015-08-22 at 07:30 +, rsw0x via
Digitalmars-d-learn wrote:
[...]
Not entirely true. Go is a general purpose language, it is a
successor to C as envisioned by Rob Pike, Russ Cox, and others
(I am not sure how much
intent for Go on Android, but there
is one.
A concurrent GC for D would kill D. Go programs saw a 25-50%
performance decrease across the board for the lower latencies.
They also saw a 100% increase in performance when it was
rewritten, and a 20% fall with this latest rewrite. I
anticipate great
applications. I am not sure of Google real intent for Go on
Android, but there is one.
A concurrent GC for D would kill D. Go programs saw a 25-50%
performance decrease across the board for the lower latencies.
They also saw a 100% increase in performance when it was rewritten,
and a 20% fall
released GC
improvement plans for 1.6:
https://docs.google.com/document/d/1kBx98ulj5V5M9Zdeamy7v6ofZXX3yPziAf0V27A64Mo/edit
It is rather obvious that building a good concurrent GC is
a time-consuming effort.
But one that Google are entirely happy to fully fund.
because Go is not a general
On Saturday, 22 August 2015 at 10:47:55 UTC, Laeeth Isharc wrote:
Out of curiosity, how much funding is required to develop the
more straightforward kind of GCs ?
A classical GC like D has is very straightforward. It has been
used since the '60s; I even have a paper from 1974 or so
describing
On Saturday, 22 August 2015 at 07:02:40 UTC, Russel Winder wrote:
I think Go 2 is a long way off, and even then generics will not
be part of the plan.
I agree that Go from Google will stay close to the ideals of the
creators. I think it would be difficult to get beyond that for
social reasons.
On Fri, 2015-08-21 at 10:47 +, via Digitalmars-d-learn wrote:
Yes, Go has sacrificed some compute performance in favour of
latency and convenience. They have also released GC improvement
plans for 1.6:
https://docs.google.com/document/d/1kBx98ulj5V5M9Zdeamy7v6ofZXX3yPziAf0V27A64Mo
On Saturday, 22 August 2015 at 06:48:48 UTC, Russel Winder wrote:
But one that Google are entirely happy to fully fund.
Yes, they have made Go fully supported on Google Cloud now, so I
think it is safe to say that Google management is backing Go
fully.
I'm kinda hoping for Go++...
On Sat, 2015-08-22 at 06:54 +, via Digitalmars-d-learn wrote:
On Saturday, 22 August 2015 at 06:48:48 UTC, Russel Winder wrote:
But one that Google are entirely happy to fully fund.
Yes, they have made Go fully supported on Google Cloud now, so I
think it is safe to say that Google
On Saturday, 22 August 2015 at 06:48:48 UTC, Russel Winder wrote:
On Fri, 2015-08-21 at 10:47 +, via Digitalmars-d-learn
wrote:
Yes, Go has sacrificed some compute performance in favour of
latency and convenience. They have also released GC
improvement plans for 1.6:
https
On Saturday, 22 August 2015 at 10:47:55 UTC, Laeeth Isharc wrote:
On Saturday, 22 August 2015 at 09:16:32 UTC, Russel Winder
wrote:
[...]
I didn't mean to start again the whole GC and Go vs D thing.
Just that one ought to know the lay of the land as it develops.
Out of curiosity, how much
Yes, Go has sacrificed some compute performance in favour of
latency and convenience. They have also released GC improvement
plans for 1.6:
https://docs.google.com/document/d/1kBx98ulj5V5M9Zdeamy7v6ofZXX3yPziAf0V27A64Mo/edit
It is rather obvious that building a good concurrent GC
On Thursday, 20 August 2015 at 17:13:33 UTC, Ilya Yaroshenko
wrote:
Hi All!
Does GC scan manually allocated memory?
Only if you ask GC to do it - by calling core.memory.addRange.
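As a sketch of the addRange advice: a malloc'd table is never scanned and never collected, so if it stores references to GC objects it must be registered, or those objects can be collected out from under it.

```d
import core.memory : GC;
import core.stdc.stdlib : free, malloc;

int readThroughTable()
{
    // A malloc'd block is invisible to the GC. If it stores references
    // to GC objects, register it with addRange so those objects are
    // treated as live during collection.
    enum n = 256;
    auto table = cast(void**) malloc(n * (void*).sizeof);
    table[0 .. n] = null;          // clear so stale bits don't look like pointers
    GC.addRange(table, n * (void*).sizeof);

    table[0] = cast(void*) new int(123); // rooted only through the table
    GC.collect();                        // the int survives: the range is scanned

    auto value = *cast(int*) table[0];

    GC.removeRange(table);               // unregister before freeing
    free(table);
    return value;
}

void main()
{
    assert(readThroughTable() == 123);
}
```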
https://medium.com/@robin.verlangen/billions-of-request-per-day-meet-go-1-5-362bfefa0911
We then started analyzing the behavior of our Go application. On
average the application spent ~ 2ms per request, which was great!
It gave us 98 milliseconds to spare for network overhead, SSL
handshake,
Hi All!
Does GC scan manually allocated memory?
I want to use huge manually allocated hash tables and I don't
want the GC to scan them, for performance reasons.
Best regards,
Ilya
On Thursday, 20 August 2015 at 17:13:33 UTC, Ilya Yaroshenko
wrote:
Hi All!
Does GC scan manually allocated memory?
I want to use huge manually allocated hash tables and I don't
want the GC to scan them, for performance reasons.
Best regards,
Ilya
Yes, just don't store any GC managed
On Thursday, 20 August 2015 at 17:13:33 UTC, Ilya Yaroshenko
wrote:
Hi All!
Does GC scan manually allocated memory?
I want to use huge manually allocated hash tables and I don't
want the GC to scan them, for performance reasons.
Best regards,
Ilya
GC does not scan memory allocated
I'm writing a game engine in D. Try to minimize allocations and
that will be OK.
I'm using delegates and all the phobos stuff. I allocate only in
few places at every frame.
So i can reach 1K fps on a complicated scene.
GC is not a problem. DMD optimizes so poorly that all the math is
very
On Wednesday, 29 July 2015 at 09:25:50 UTC, Snape wrote:
I'm in the early stages of building a little game with OpenGL
(in D) and I just want to know the facts about the GC before I
decide to either use it or work around it. Lots of people have
said lots of things about it, but some
On 29/07/2015 9:25 p.m., Snape wrote:
I'm in the early stages of building a little game with OpenGL (in D) and
I just want to know the facts about the GC before I decide to either use
it or work around it. Lots of people have said lots of things about it,
but some of that information is old, so
I'm in the early stages of building a little game with OpenGL (in
D) and I just want to know the facts about the GC before I decide
to either use it or work around it. Lots of people have said lots
of things about it, but some of that information is old, so as of
today, what effect does the GC
On Wednesday, 29 July 2015 at 17:09:52 UTC, Namespace wrote:
On Wednesday, 29 July 2015 at 09:25:50 UTC, Snape wrote:
I'm in the early stages of building a little game with OpenGL
(in D) and I just want to know the facts about the GC before I
decide to either use it or work around it. Lots
On Sunday, 26 July 2015 at 17:43:42 UTC, Martin Nowak wrote:
On 07/26/2015 04:16 PM, Gary Willoughby wrote:
I thought there is a recently added compiler option that
profiles the GC and creates a report now?
That's an allocation profiler, the other one mentioned by me
reports GC stats
stopped, how
much total memory and how many blocks were recovered. i.e.
how much garbage was created in between collections. Are
there any hooks on the runtime?
http://dlang.org/changelog.html#gc-options
https://github.com/D-Programming-Language/druntime/blob
. i.e. how
much garbage was created in between collections. Are there any
hooks on the runtime?
http://dlang.org/changelog.html#gc-options
https://github.com/D-Programming-Language/druntime/blob/1e25749cd01ad08dc08319a3853fbe86356c3e62/src/rt/config.d#L14
I thought there is a recently added
On 07/26/2015 04:16 PM, Gary Willoughby wrote:
I thought there is a recently added compiler option that profiles the GC
and creates a report now?
That's an allocation profiler, the other one mentioned by me reports GC
stats as requested by the OP.
Hello!
I was wondering if anyone has suggestions on the easiest way to
time how long GC collections take? I haven't seen anything in the
docs.
What I want is a clean non-intrusive way to log when a collection
happened, how long my threads were stopped, how much total memory
and how many
. Are there any hooks
on the runtime?
http://dlang.org/changelog.html#gc-options
https://github.com/D-Programming-Language/druntime/blob/1e25749cd01ad08dc08319a3853fbe86356c3e62/src/rt/config.d#L14
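The gc-options changelog entry linked above describes the --DRT-gcopt runtime switch; options can also be embedded in the binary through druntime's rt_options hook. A minimal sketch (the exact statistics printed depend on the druntime version):

```d
// GC options can be passed on the command line
// (e.g. ./app "--DRT-gcopt=profile:1") or embedded in the binary via
// druntime's rt_options hook, as here. With profile:1 the runtime
// prints GC summary statistics (collections, pause times, memory
// used) when the program exits.
extern(C) __gshared string[] rt_options = ["gcopt=profile:1"];

void main()
{
    foreach (i; 0 .. 1_000)
        cast(void) new int(i); // some GC churn for the profiler to report
}
```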
On Monday, 8 June 2015 at 20:11:31 UTC, Oleg B wrote:
On Monday, 8 June 2015 at 13:37:40 UTC, Marc Schütz wrote:
On Monday, 8 June 2015 at 12:24:56 UTC, Oleg B wrote:
I guess you should follow andrei's post about new allocators!
Can you get link to this post?
These are some of his posts:
On Thursday, 28 May 2015 at 10:11:38 UTC, zhmt wrote:
On Thursday, 28 May 2015 at 02:00:57 UTC, zhmt wrote:
the throughput is steady now: if buffer size is set to 1,
throughput is about 20K responses/second; when buffer size is
big enough, the throughput is about 60K responses/second.
Thanks for all help.
On Thursday, 28 May 2015 at 02:00:57 UTC, zhmt wrote:
I think it is not a problem of the GC, it is my fault:
the operations are serialized:
client send - server recv - server send - client recv,
so if one operation takes too long, the throughput will
definitely fall down.
I can't explain why
On 05/28/2015 03:13 AM, zhmt wrote:
Thanks for all help.
Thank you for debugging and reporting. :) I am sure this will be helpful
to many others.
Ali
On Wednesday, 27 May 2015 at 08:42:01 UTC, zhmt wrote:
When I enable --profile, I get something like this; it doesn't
give me too much help:
[...]
Tried callgrind and kcachegrind? If nothing else it's better at
illustrating the same output, assuming you have graphviz/dot
installed.
Given
On Wednesday, 27 May 2015 at 09:39:42 UTC, Anonymouse wrote:
On Wednesday, 27 May 2015 at 08:42:01 UTC, zhmt wrote:
When I enable --profile, I get something like this; it doesn't
give me too much help:
[...]
Tried callgrind and kcachegrind? If nothing else it's better at
illustrating the
When I enable --profile, I get something like this; it doesn't
give me too much help:
Timer Is 3579545 Ticks/Sec, Times are in Microsecs
    Num        Tree        Func        Per
    Calls      Time        Time        Call
  1298756  4996649885  4987577377    3840
@jklp
And also you could try to surround the whole block with
`GC.disable` and `GC.enable`. This would help to determine if
the GC is involved:
---
Ptr!Conn conn = connect("127.0.0.1", 8881);
GC.disable;
ubyte[100] buf;
string str;
for (int i = 0; i < N; i++)
{
str = format("%s", i
On Wednesday, 27 May 2015 at 10:24:59 UTC, zhmt wrote:
@Anonymouse
Thank you very much, I have tried this:
valgrind --tool=callgrind ./gamelibdtest
callgrind_annotate callgrind.out.29234
Ir
What happens when the code changes a little? Who can give an
explanation? Thanks a lot.
What happens if you use sformat instead?
Ptr!Conn conn = connect("127.0.0.1", 8881);
ubyte[100] buf;
char[100] buf2;
for (int i = 0; i < N; i++)
{
auto str = sformat(buf2, "%s", i);
If I pass a timeout of 1ms to epoll_wait, the CPU will not be
busy when throughput falls down.
It seems that the dlang library is not so efficient?
On 5/27/15 2:42 AM, zhmt wrote:
When I enable --profile, I get something like this; it doesn't give me
too much help:
I don't see any GC function here, I don't think you are focusing on
the right portion of the code. Seems like the gamelib library is
consuming all the time. You may want
On Wednesday, 27 May 2015 at 10:27:08 UTC, zhmt wrote:
It seems that the dlang library is not so efficient?
The gamelibd could be doing a lot more than just echoing... it
sounds to me that your socket might be blocking and epoll is busy
looping waiting for it to become available again.
On Wednesday, 27 May 2015 at 05:48:13 UTC, zhmt wrote:
The code you posted is the client code, but the issue seems to be
on the server side.
Can you post the server code and also the timing code?
On Wednesday, 27 May 2015 at 05:51:21 UTC, zhmt wrote:
I noticed that the cpu% falls from 99% down to 4% as well when
the throughput falls down.
I tried again for several times, the cpu is still busy, 98.9%.
I think it is not a problem of the GC, it is my fault:
the operations are serialized:
client send - server recv - server send - client recv,
so if one operation takes too long, the throughput will
definitely fall down.
I can't explain why it is so fast when the buffer is big enough, and so
low when
On Wednesday, 27 May 2015 at 19:04:53 UTC, Márcio Martins wrote:
On Wednesday, 27 May 2015 at 05:48:13 UTC, zhmt wrote:
The code you posted is the client code, but the issue seems to
be on the server side.
Can you post the server code and also the timing code?
@Márcio Martins
here is
On Wednesday, 27 May 2015 at 05:48:13 UTC, zhmt wrote:
What happens when the code changes a little? Who can give an
explanation? Thanks a lot.
Yes, at first sight it looks like your allocations in the
loop make GC spend too much time. I don't think scope does
anything here. Try adding
i = 0; i < N; i++)
{
str = format("%s", i);
conn.write(cast(ubyte[]) str);
conn.read(buf[0 .. str.length]);
n++;
}
conn.close();
---
And also you could try to surround the whole block with
`GC.disable` and `GC.enable`. This would help to determine if the
GC is involved:
---
Ptr!Conn conn
On Wed, 27 May 2015 05:48:11 +
zhmt via Digitalmars-d-learn digitalmars-d-learn@puremagic.com wrote:
I am writing an echoclient, as below:
Ptr!Conn conn = connect("127.0.0.1", 8881);
ubyte[100] buf;
for (int i = 0; i < N; i++)
{
scope string str = format("%s", i);
On Wednesday, 27 May 2015 at 05:48:13 UTC, zhmt wrote:
I am writing an echoclient, as below:
Ptr!Conn conn = connect("127.0.0.1", 8881);
ubyte[100] buf;
for (int i = 0; i < N; i++)
{
scope string str = format("%s", i);
conn.write((cast(ubyte*) str.ptr)[0 .. str.length]);
I am writing an echoclient, as below:
Ptr!Conn conn = connect("127.0.0.1", 8881);
ubyte[100] buf;
for (int i = 0; i < N; i++)
{
scope string str = format("%s", i);
conn.write((cast(ubyte*) str.ptr)[0 .. str.length]);
conn.read(buf[0 .. str.length]);
n++;
}
conn.close();
When it
I noticed that the cpu% falls from 99% down to 4% as well when
the throughput falls down.
On 5/20/15 11:09 AM, Kagamin wrote:
On Wednesday, 20 May 2015 at 13:54:29 UTC, bitwise wrote:
Yes, but D claims to support manual memory management. It seems to get
second class treatment though.
It's WIP. There were thoughts to run finalizers on the thread where the
object was allocated (I