Re: alloca without runtime?

2017-05-10 Thread via Digitalmars-d-learn

On Wednesday, 10 May 2017 at 20:25:45 UTC, aberba wrote:

On Thursday, 4 May 2017 at 14:54:58 UTC, 岩倉 澪 wrote:

On Thursday, 4 May 2017 at 12:50:02 UTC, Kagamin wrote:

You can try ldc and llvm intrinsics
http://llvm.org/docs/LangRef.html#alloca-instruction
http://llvm.org/docs/LangRef.html#llvm-stacksave-intrinsic


Ah, yep!

pragma(LDC_alloca) void* alloca(size_t);

This appears to work with ldc. It would be nice if there was a 
way to do this with dmd/other compilers as well though. If it 
were up to me I'd have alloca defined by the language standard 
and every compiler would have to provide an implementation 
like this. At the very least I'd like to have an alloca that 
works with dmd, as I want to do debug builds with dmd and 
release builds with ldc.


embedded platform?


An embedded platform would be a good use-case for this, but I'm 
just trying to do this on Linux x86_64 personally. It's a fun 
experiment to see how far I can push D to give me low-level 
control without dependencies.


Re: alloca without runtime?

2017-05-04 Thread via Digitalmars-d-learn

On Thursday, 4 May 2017 at 12:50:02 UTC, Kagamin wrote:

You can try ldc and llvm intrinsics
http://llvm.org/docs/LangRef.html#alloca-instruction
http://llvm.org/docs/LangRef.html#llvm-stacksave-intrinsic


Ah, yep!

pragma(LDC_alloca) void* alloca(size_t);

This appears to work with ldc. It would be nice if there was a 
way to do this with dmd/other compilers as well though. If it 
were up to me I'd have alloca defined by the language standard 
and every compiler would have to provide an implementation like 
this. At the very least I'd like to have an alloca that works 
with dmd, as I want to do debug builds with dmd and release 
builds with ldc.
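
For anyone landing here later, a minimal sketch of how that 
declaration can be used with ldc (sumSquares is a made-up example; 
memory returned by alloca is only valid until the enclosing 
function returns):

pragma(LDC_alloca) void* alloca(size_t size);

extern (C) int sumSquares(int n)
{
    // Stack allocation: no GC, no C runtime involved.
    auto buf = cast(int*) alloca(n * int.sizeof);
    foreach (i; 0 .. n)
        buf[i] = i * i;

    int total = 0;
    foreach (i; 0 .. n)
        total += buf[i];
    return total;
}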


Re: alloca without runtime?

2017-05-04 Thread via Digitalmars-d-learn

On Sunday, 30 April 2017 at 05:07:31 UTC, 岩倉 澪 wrote:
I've been playing around with using D with no runtime on Linux, 
but recently I was thinking it would be nice to have an alloca 
implementation. I was thinking I could just bump the stack 
pointer (with alignment considerations) but from what I 
understand compilers sometimes generate code that references 
variables relative to RSP instead of RBP? I've seen people 
saying that a proper alloca can't be implemented without help 
from the compiler...


I took a peek in druntime and found rt.alloca which has 
__alloca implemented with inline asm. I tried throwing that in 
my project and calling it but it segfaults on rep movsq. The 
comments in the code suggest it is trying to copy temps on the 
stack but I seem to get a really large garbage RCX, I don't 
fully follow what is going on yet.


Is there any way I can get a working alloca without using 
druntime, the C runtime, etc.?


As a follow-up, here is a simple example of what I mean:

first, let's create a main.d, we'll define our own entry point 
and make a call to alloca in main:


extern (C):
void _start()
{
    asm nothrow @nogc
    {
        naked;
        xor RBP, RBP;
        pop RDI;
        mov RSI, RSP;
        and RSP, -16;
        call main;
        mov RDI, RAX;
        mov RAX, 60;
        syscall;
        ret;
    }
}
pragma(startaddress, _start);

int main(int argc, char** argv)
{
    import rt.alloca;
    void* a = __alloca(42);
    return 0;
}

Next, let's make an rt directory and copy the source of 
druntime's rt.alloca into rt/alloca.d


Now let's compile these:

dmd -betterC -debuglib= -defaultlib= -boundscheck=off -vgc -vtls 
-c -gc main.d rt/alloca.d


Great, now we need to strip symbols out to make this work, like 
so:


objcopy -R '.data.*[0-9]TypeInfo_*' -R '.[cd]tors.*' -R .eh_frame 
-R minfo -R .group.d_dso -R .data.d_dso_rec -R .text.d_dso_init 
-R .dtors.d_dso_dtor -R .ctors.d_dso_ctor -N __start_minfo -N 
__stop_minfo main.o
objcopy -R '.data.*[0-9]TypeInfo_*' -R '.[cd]tors.*' -R .eh_frame 
-R minfo -R .group.d_dso -R .data.d_dso_rec -R .text.d_dso_init 
-R .dtors.d_dso_dtor -R .ctors.d_dso_ctor -N __start_minfo -N 
__stop_minfo alloca.o


With that out of the way, we are ready to link:

ld main.o alloca.o

And when we try to run ./a.out we get a segfault.

What I want is a way to allocate on the stack (size of allocation 
not necessarily known at compile-time) and for the compiler to be 
aware that it can't generate code that refers to variables on the 
stack relative to rsp, or anything else that might break the 
naive implementation of alloca as simply bumping rsp with inline 
asm. Apparently this "magic" __alloca can't be used outside of 
the compiler, or is there a way to make it work?
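
One workaround that sidesteps alloca entirely, if an upper bound 
on the allocation size is known, is a fixed-size stack buffer; a 
rough sketch (MaxScratch and useScratch are made-up names, not 
anything from druntime):

enum MaxScratch = 4096;

extern (C) int useScratch(size_t n)
{
    if (n > MaxScratch)
        return -1;                     // caller has to handle the oversize case

    ubyte[MaxScratch] scratch = void;  // uninitialized stack storage
    foreach (i; 0 .. n)                // touch only the first n bytes
        scratch[i] = 0;
    return 0;
}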


alloca without runtime?

2017-04-29 Thread via Digitalmars-d-learn
I've been playing around with using D with no runtime on Linux, 
but recently I was thinking it would be nice to have an alloca 
implementation. I was thinking I could just bump the stack 
pointer (with alignment considerations) but from what I 
understand compilers sometimes generate code that references 
variables relative to RSP instead of RBP? I've seen people saying 
that a proper alloca can't be implemented without help from the 
compiler...


I took a peek in druntime and found rt.alloca which has __alloca 
implemented with inline asm. I tried throwing that in my project 
and calling it but it segfaults on rep movsq. The comments in the 
code suggest it is trying to copy temps on the stack but I seem 
to get a really large garbage RCX, I don't fully follow what is 
going on yet.


Is there any way I can get a working alloca without using 
druntime, the C runtime, etc.?


Re: The Mystery of the Misbehaved Malloc

2016-07-29 Thread via Digitalmars-d-learn

On Saturday, 30 July 2016 at 05:21:26 UTC, ag0aep6g wrote:

On 07/30/2016 07:00 AM, 岩倉 澪 wrote:

auto mem = malloc(2^^31);


2^^31 is negative. 2^^31-1 is the maximum positive value of an 
int, so 2^^31 wraps around to int.min.


Try 2u^^31.


bah, I'm an idiot! CASE CLOSED. Thanks for the help :P
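
For anyone skimming the archive, the corrected test is just the 
literal made unsigned (a sketch of the fix, nothing more):

void main()
{
    import core.stdc.stdlib : free, malloc;

    auto mem = malloc(2u ^^ 31);   // unsigned, so no wraparound to int.min
    assert(mem);
    free(mem);
}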


The Mystery of the Misbehaved Malloc

2016-07-29 Thread via Digitalmars-d-learn
So I ran into a problem earlier - trying to allocate 2GB or more 
on Windows would fail even if there was enough room. Mentioned it 
in the D irc channel and a few fine folks pointed out that 
Windows only allows 2GB for 32-bit applications unless you pass a 
special flag which may or may not be a good idea.


I think to myself, "Easy solution, I'll just compile as 64-bit!"

But alas, my 64-bit executable suffers the same problem.

I boiled it down to a simple test:

void main()
{
    import core.stdc.stdlib : malloc;
    auto mem = malloc(2^^31);
    assert(mem);
    import core.stdc.stdio : getchar;
    getchar();
}

I wrote this test with the C functions so that I can do a direct 
comparison with a C program compiled with VS 2015:


#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <assert.h>
int main(int argc, char *argv[])
{
    void *ptr = malloc((size_t)pow(2, 31));
    assert(ptr);
    getchar();
    return 0;
}

I compile the D test with: `ldc2 -m64 -test.d`
I compile the C test with: `CL test.c`

`file` reports "PE32+ executable (console) x86-64, for MS 
Windows" for both executables.


When the C executable runs, I see the allocation under "commit 
change" in the Resource Monitor. When the D executable runs, the 
assertion fails!


The D program is able to allocate up to 2^31 - 1 before failing. 
And yes, I do have enough available memory to make a larger 
allocation.


Can you help me solve this mystery?


Re: Create Windows "shortcut" (.lnk) with D?

2016-03-06 Thread via Digitalmars-d-learn

On Sunday, 6 March 2016 at 11:00:35 UTC, John wrote:

On Sunday, 6 March 2016 at 03:13:23 UTC, 岩倉 澪 wrote:

IShellLinkA* shellLink;
IPersistFile* linkFile;

Any help would be highly appreciated as I'm new to Windows 
programming in D and have no idea what I'm doing wrong!


In D, interfaces are references, so it should be:

  IShellLinkA shellLink;
  IPersistFile linkFile;


That's exactly what the problem was, thank you!!


Re: Create Windows "shortcut" (.lnk) with D?

2016-03-05 Thread via Digitalmars-d-learn

On Sunday, 6 March 2016 at 05:00:55 UTC, BBasile wrote:
If you don't want to mess with the Windows API then you can 
dynamically create a script (I do this in CE installer):


This might be an option, but I'd prefer to use the Windows API 
directly. I don't know VBScript, and maintaining such a script 
inline would just add cognitive overhead, I think.


If I can't get it working with the Windows API, I'll probably 
have to do it this way, though. Thanks for the suggestion!





Create Windows "shortcut" (.lnk) with D?

2016-03-05 Thread via Digitalmars-d-learn
I'm creating a small installation script in D, but I've been 
having trouble getting shortcut creation to work! I'm a linux 
guy, so I don't know much about Windows programming...


Here are the relevant bits of code I have:

import core.sys.windows.basetyps, core.sys.windows.com, 
core.sys.windows.objbase, core.sys.windows.objidl, 
core.sys.windows.shlobj, core.sys.windows.windef;
import std.conv, std.exception, std.file, std.format, std.path, 
std.stdio, std.string, std.utf, std.zip;


//Couldn't find these in core.sys.windows
extern(C) const GUID CLSID_ShellLink = {0x00021401, 0x0000, 0x0000,
    [0xC0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x46]};

extern(C) const IID IID_IShellLinkA  = {0x000214EE, 0x0000, 0x0000,
    [0xC0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x46]};

extern(C) const IID IID_IPersistFile = {0x0000010B, 0x0000, 0x0000,
    [0xC0, 0x00, 0x00, 0x00, 0x00, 0x00, 0x00, 0x46]};

void main()
{
    string programFolder, dataFolder, desktopFolder;
    {
        char[MAX_PATH] _programFolder, _dataFolder, _desktopFolder;
        SHGetFolderPathA(null, CSIDL_PROGRAM_FILESX86, null, 0, _programFolder.ptr);
        SHGetFolderPathA(null, CSIDL_LOCAL_APPDATA, null, 0, _dataFolder.ptr);
        SHGetFolderPathA(null, CSIDL_DESKTOPDIRECTORY, null, 0, _desktopFolder.ptr);
        programFolder = _programFolder.assumeUnique().ptr.fromStringz();
        dataFolder = _dataFolder.assumeUnique().ptr.fromStringz();
        desktopFolder = _desktopFolder.assumeUnique().ptr.fromStringz();
    }
    auto inputFolder = buildNormalizedPath(dataFolder, "Aker/agtoolbox/input");
    auto outputFolder = buildNormalizedPath(dataFolder, "Aker/agtoolbox/output");

    CoInitialize(null);
    scope(exit)
        CoUninitialize();

    IShellLinkA* shellLink;
    IPersistFile* linkFile;
    CoCreateInstance(&CLSID_ShellLink, null, CLSCTX_INPROC_SERVER,
        &IID_IShellLinkA, cast(void**)&shellLink);

    auto exePath = buildNormalizedPath(programFolder, "Aker/agtoolbox/agtoolbox.exe").toStringz();
    shellLink.SetPath(exePath);
    auto arguments = format("-i %s -o %s", inputFolder, outputFolder).toStringz();
    shellLink.SetArguments(arguments);
    auto workingDirectory = programFolder.toStringz();
    shellLink.SetWorkingDirectory(workingDirectory);

    shellLink.QueryInterface(&IID_IPersistFile, cast(void**)&linkFile);
    auto linkPath = buildNormalizedPath(desktopFolder, "agtoolbox.lnk").toUTF16z();
    linkFile.Save(linkPath, TRUE);
}

I tried sprinkling it with print statements, and it crashes on 
shellLink.SetPath(exePath);


This is the full script: 
https://gist.github.com/miotatsu/1cc55fe29d8a8dcccab5
I found the values for the missing Windows API bits from this: 
http://www.dsource.org/projects/tutorials/wiki/CreateLinkUsingCom


I compile with: dmd agtoolbox-installer.d ole32.lib
I got the ole32.lib from the dmd install

Any help would be highly appreciated as I'm new to Windows 
programming in D and have no idea what I'm doing wrong!


Re: Concurrency Confusion

2015-08-10 Thread

On Saturday, 8 August 2015 at 06:24:30 UTC, sigod wrote:
Use negative value for `receiveTimeout`. 
http://stackoverflow.com/q/31616339/944911


actually this no longer appears to be true?
Passing -1.msecs as the duration gives me an assertion failure:

core.exception.AssertError@std/concurrency.d(1902): Assertion 
failure


Took a look in phobos and it appears to be from this line:
https://github.com/D-Programming-Language/phobos/blob/master/std/concurrency.d#L1904

If you look at the implementation of receiveTimeout, you'll see 
that it no longer has these lines from the stack overflow answer:


if( period.isNegative || !m_putMsg.wait( period ) )
return false;

https://github.com/D-Programming-Language/phobos/blob/master/std/concurrency.d#L824
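
For reference, a zero-timeout poll avoids the assertion entirely; 
roughly like this (just a sketch, with an arbitrary string 
message type):

import std.concurrency, std.datetime, std.stdio;

void main()
{
    // Returns immediately: false when no matching message is waiting.
    bool got = receiveTimeout(0.msecs,
        (string msg) { writeln("received: ", msg); });

    if (!got)
        writeln("nothing in the mailbox");
}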


Re: Concurrency Confusion

2015-08-10 Thread

On Monday, 10 August 2015 at 22:50:15 UTC, sigod wrote:

On Monday, 10 August 2015 at 22:21:18 UTC, 岩倉 澪 wrote:

Took a look in phobos and it appears to be from this line:
https://github.com/D-Programming-Language/phobos/blob/master/std/concurrency.d#L1904


It looks like you're trying to use `receiveTimeout` like this:

bool value;
receiveTimeout(-1.msecs, value);


Ah, nope - I just assumed the closest assert to the line mentioned 
in the assertion failure was the culprit, without thinking about 
it much. You are correct that the assertion your pull request 
removes is the one that gave me trouble. I'll leave it as 0.msecs 
until your pull request is merged and a new release is made. 
Thanks for the help!





Re: Concurrency Confusion

2015-08-09 Thread

On Sunday, 9 August 2015 at 21:06:10 UTC, anonymous wrote:

On Sunday, 9 August 2015 at 17:43:59 UTC, 岩倉 澪 wrote:
Afaict it is the best way to do what I'm trying to do, and 
since the data is mutable and cast to immutable with 
assumeUnique, casting it back to mutable shouldn't be a 
problem. Technically casting away immutable might be undefined 
behaviour and it might be an ugly hack, but I don't see a more 
idiomatic solution.


I think casting to shared and back would be better.

Unfortunately, it looks like std.concurrency.send doesn't like 
shared arrays. I filed an issue: 
https://issues.dlang.org/show_bug.cgi?id=14893


I agree! I initially tried to cast to shared and back, but when I 
encountered that compiler error I decided to go with immutable. I 
assumed that there was a good reason it didn't work, rather than 
a deficiency in the language. Hopefully the issue can be 
resolved, leading to a nicer solution. :)


Re: Concurrency Confusion

2015-08-09 Thread

On Saturday, 8 August 2015 at 06:24:30 UTC, sigod wrote:
Use negative value for `receiveTimeout`. 
http://stackoverflow.com/q/31616339/944911


On Saturday, 8 August 2015 at 13:34:24 UTC, Chris wrote:
Note aside: if you only import what you need (say `import 
std.concurrency : receiveTimeout; std.datetime : msecs`), you 
can reduce the size of the executable considerably as your 
program grows.


Thanks for the tips!



Re: Concurrency Confusion

2015-08-09 Thread

On Saturday, 8 August 2015 at 05:14:20 UTC, Meta wrote:
I'm not completely sure that it's bad in this case, but you 
really shouldn't be casting away immutable. It's undefined 
behaviour in D.


Afaict it is the best way to do what I'm trying to do, and since 
the data is mutable and cast to immutable with assumeUnique, 
casting it back to mutable shouldn't be a problem. Technically 
casting away immutable might be undefined behaviour and it might 
be an ugly hack, but I don't see a more idiomatic solution.




Re: Concurrency Confusion

2015-08-07 Thread

On Friday, 7 August 2015 at 15:55:33 UTC, Chris wrote:
To stop threads immediately, I've found that the best way is to 
use a shared variable, typically a bool, that is changed only 
in one place.

...
Unfortunately, sending an abort message to a thread as in 
`send(thread, true)` takes too long. Setting a global flag like 
ABORT is instantaneous. Beware of data races though. You might 
want to have a look at:


http://ddili.org/ders/d.en/concurrency_shared.html

Especially `synchronized` and atomicOp.


Ah, I already had a variable like ABORT in my application for 
signaling the main thread to close, so this was a surprisingly 
painless change! I made that variable shared and then did the 
following:


instead of
ABORT = true;
I now do
import core.atomic;
atomicStore!(MemoryOrder.rel)(ABORT, true);
and instead of
if(ABORT) break;
I now do
import core.atomic;
if(atomicLoad!(MemoryOrder.acq)(ABORT)) break;

This works great, and with the memory ordering specified I do not 
see a noticeable difference in performance, whereas with the 
default memory ordering my ~36 second processing takes ~38 
seconds.


One concern I had was that `break` might be a bad idea inside of 
a parallel foreach. Luckily, it seems that the author(s) of 
std.parallelism thought of this - according to the documentation 
break inside of a parallel foreach throws an exception and some 
clever exception handling is done under the hood. I don't see an 
uncaught exception when I close my application, but it is now 
able to close without having to wait for the worker thread to 
complete, so everything seems fine and dandy! Thanks for the help!
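
Pulling the pieces above together, the pattern looks roughly like 
this (a sketch; ABORT matches the flag described above, while 
requestAbort and doWork are illustrative names):

import core.atomic;
import std.parallelism, std.range;

shared bool ABORT;

// Called from the main thread when the application is closing.
void requestAbort()
{
    atomicStore!(MemoryOrder.rel)(ABORT, true);
}

// Runs in the worker thread.
void doWork()
{
    foreach (i; parallel(iota(1_000)))
    {
        // Poll the flag each iteration; skipping the remaining work is the
        // simple option, since break gets special treatment inside a
        // parallel foreach.
        if (atomicLoad!(MemoryOrder.acq)(ABORT))
            continue;

        // ... expensive per-item work here ...
    }
}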


On Friday, 7 August 2015 at 15:55:33 UTC, Chris wrote:

receiveTimeout can be used like this: ...


My problem is that when you do this:


received = receiveTimeout(600.msecs,
    (string message) {  // <== Receiving a value
        writeln("received: ", message);
    });


message is local to the delegate that receiveTimeout takes.
I want to use message outside of the delegate in the receiving 
thread. However, if you send an immutable value from the worker 
thread, afaict there would be no way to assign it to a 
global/outer variable without making a mutable copy (expensive!)
I haven't really spent much time trying to pass my message as 
mutable via shared yet, but hopefully that could work...


Re: Concurrency Confusion

2015-08-07 Thread

On Friday, 7 August 2015 at 22:13:35 UTC, 岩倉 澪 wrote:

message is local to the delegate that receiveTimeout takes.
I want to use message outside of the delegate in the 
receiving thread. However, if you send an immutable value from 
the worker thread, afaict there would be no way to assign it to 
a global/outer variable without making a mutable copy 
(expensive!)
I haven't really spent much time trying to pass my message as 
mutable via shared yet, but hopefully that could work...


Found the answer to this :) 
http://forum.dlang.org/post/mailman.1706.1340318206.24740.digitalmars-d-le...@puremagic.com


I send the results from my worker thread with assumeUnique, and 
then simply cast away immutable in the receiving thread like so:


(in module scope)
Bar[] baz;

(in application loop)
import std.array;
if(baz.empty)
{
    import std.concurrency, std.datetime;
    receiveTimeout(0.msecs,
        (immutable Bar[] bar){ baz = cast(Bar[])bar; });
}



Re: Concurrency Confusion

2015-08-07 Thread

On Saturday, 8 August 2015 at 00:39:57 UTC, 岩倉 澪 wrote:

receiveTimeout(0.msecs,
(immutable Bar[] bar){ baz = cast(Bar[])bar; });


Whoops, that should be:
 receiveTimeout(0.msecs,
 (immutable(Bar)[] bar){ baz = cast(Bar[])bar; });
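
A self-contained sketch of the same idea (Bar and worker are made 
up for illustration): the worker casts its result to immutable 
with assumeUnique, sends it, and the receiver casts immutable away 
again to store it in a mutable slot:

import std.concurrency, std.datetime, std.exception, std.stdio;

struct Bar { int x; }

Bar[] baz;   // thread-local to the receiving thread

void worker(Tid owner)
{
    auto tmp = new Bar[](3);
    foreach (i, ref b; tmp)
        b.x = cast(int) i;
    owner.send(assumeUnique(tmp));   // sent as immutable(Bar)[]
}

void main()
{
    spawn(&worker, thisTid);
    while (baz.length == 0)
    {
        receiveTimeout(10.msecs,
            (immutable(Bar)[] bar) { baz = cast(Bar[]) bar; });
    }
    writeln(baz);
}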


Re: Concurrency Confusion

2015-08-06 Thread

On Tuesday, 4 August 2015 at 08:35:10 UTC, Dicebot wrote:

// in real app use `receiveTimeout` to do useful stuff until
// result message is received
auto output = receiveOnly!(immutable(Bar)[]);


New question: how would I receive an immutable value with 
receiveTimeout? I need the results from my worker thread outside 
of the delegate that receiveTimeout takes.


Also: what is the best way to kill off the worker thread when I 
close the application, without having to wait for the worker 
thread to complete? My first thought was to use receiveTimeout in 
the worker thread, but the work is being done in a parallel 
foreach loop, and I am not sure if there is a way to safely use 
receiveTimeout in a parallel situation...
I also found Thread.isDaemon in core.thread. I tried doing auto 
thread = Thread.getThis(); thread.isDaemon = true; at the start 
of the worker thread, but it still seems to wait for it to 
complete before closing.


Thanks again!



Re: Thread communication

2015-08-05 Thread

On Wednesday, 5 August 2015 at 14:31:20 UTC, Marc Schütz wrote:
It was a conscious decision not to provide a kill method for 
threads, because it is impossible to guarantee that your 
program is still consistent afterwards.


What about the situation where we want to kill worker threads off 
when closing a program? For example, I have a program with a 
thread that does some heavy computation in the background. When 
the application is closed, I want it to abort that computation, 
however I can't just slap a receiveTimeout in the worker thread 
because it is doing its work in a parallel foreach loop.


Re: Concurrency Confusion

2015-08-04 Thread

On Tuesday, 4 August 2015 at 08:36:26 UTC, John Colvin wrote:

Do you mean this instead?

spawn(fooPtrToBarArr, foo, bar);


Yep, that was a typo when writing up the post!

Anyway, you need to use shared, not __gshared, then it should 
work.


I have been wary of shared because of: 
https://p0nce.github.io/d-idioms/#The-truth-about-shared


Re: Concurrency Confusion

2015-08-04 Thread

On Tuesday, 4 August 2015 at 10:37:39 UTC, Dicebot wrote:
std.concurrency does by-value message passing (in this case 
just ptr+length), it never deep copies automatically


I assumed that it would deep copy (in the case of mutable data) 
since the data being sent is thread-local (unless I am 
misunderstanding something)





Concurrency Confusion

2015-08-04 Thread

Hi all, I'm a bit confused today (as usual, haha).

I have a pointer to a struct (let's call it Foo) allocated via a 
C library.
I need to do some expensive computation with the Foo* to create a 
Bar[], but I would like to do that computation in the background, 
because the Bar[] is not needed right away.


I definitely do not want there to be a copy of all elements of 
the Bar[] between threads, because it is very large.


I tried to implement it like this:

void fooPtrToBarArr(in shared Foo* f, out shared Bar[] b){ /*do work*/ }

__gshared Foo* foo;
foo = allocateFoo();
__gshared Bar[] bar;
spawn(foo, bar);

To my dismay, it results in a cryptic compiler error:

template std.concurrency.spawn cannot deduce function from argument types
!()(void function(shared(const(Foo*)) f, out shared(Bar[]) b), Foo*, Bar[]),
candidates are:
/usr/include/dlang/dmd/std/concurrency.d(466):
std.concurrency.spawn(F, T...)(F fn, T args) if (isSpawnable!(F, T))

Any help would be greatly appreciated :)




Re: Concurrency Confusion

2015-08-04 Thread

On Tuesday, 4 August 2015 at 08:35:10 UTC, Dicebot wrote:

auto output = receiveOnly!(immutable(Bar)[]);


Won't message passing like this result in an expensive copy, or 
does the cast to immutable via assumeUnique avoid that?


Re: Concurrency Confusion

2015-08-04 Thread

On Tuesday, 4 August 2015 at 11:42:54 UTC, Dicebot wrote:

On Tuesday, 4 August 2015 at 11:33:11 UTC, 岩倉 澪 wrote:

On Tuesday, 4 August 2015 at 10:37:39 UTC, Dicebot wrote:
std.concurrency does by-value message passing (in this case 
just ptr+length), it never deep copies automatically


I assumed that it would deep copy (in the case of mutable 
data) since the data being sent is thread-local (unless I am 
misunderstanding something)


It is heap-allocated, and there is no thread-local heap currently 
in D - only globals and static variables are thread-local.


std.concurrency never deep copies - if you are trying to send 
data which contains indirections (pointers/arrays) _and_ is not 
marked either immutable or shared, it will simply not compile.


Ahh, thanks for the clarification! That makes a lot of sense


Re: Do strings with enum allocate at usage point?

2015-03-17 Thread

On Tuesday, 17 March 2015 at 18:25:00 UTC, Ali Çehreli wrote:

Strings are fine and fortunately it is very easy to test:

enum arr = [ 1, 2 ];
enum s = "hello";

void main()
{
    assert(arr.ptr !is arr.ptr);
    assert(s.ptr is s.ptr);
}

Ali


Ah, that is good to know. Thanks!


Do strings with enum allocate at usage point?

2015-03-17 Thread
I often hear it advised to avoid using enum with arrays because 
they will allocate at the usage point, but does this also apply 
to strings?
strings are arrays, so naively it seems that they would, but that 
seems odd to me.
I would imagine string literals end up in the data segment as 
they are immutable.


As a continuation of this question, I know that string literals 
have an implicit null terminator, so it should be correct to pass 
a literal's .ptr to a function taking a C string, and presumably 
this still applies when using enum.
However, if enum implies allocation at the usage point for 
strings, one would be better served with static, meaning one 
would need to be explicit: static foo = "bar\0"?
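
A small sketch of the two variants being weighed here (printf 
stands in for any C function; this assumes the usual dmd/ldc 
behaviour of placing NUL-terminated string literals in read-only 
data):

import core.stdc.stdio : printf;

enum greeting = "hello";              // manifest constant: each use is the literal itself
static immutable farewell = "bye\0";  // one named symbol, explicitly NUL-terminated

void main()
{
    printf("%s\n", greeting.ptr);
    printf("%s\n", farewell.ptr);
}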


Re: Difference between concatenation and appendation

2015-01-25 Thread

On Monday, 26 January 2015 at 01:17:17 UTC, WhatMeWorry wrote:
Ok, I just made up that word. But what is the difference 
between appending and concatenating? Page 100 of TDPL says 
"The result of the concatenation is a new array..." and the 
section on appending talks about possibly needing expansion and 
reallocation of memory.


But I still don't feel like I have a grasp on the subtleties 
between them. Can someone give a short and sweet rule of 
thumb?


It might be so obvious that I'll regret posting this.

Thanks.


I'm no expert, so take what I say with a grain of salt. That 
said, here is my understanding:


When you append to an array with ~=, the runtime tries to grow 
the array in place: it keeps the existing storage and claims some 
more space past the end of the array. If there isn't enough free 
space after the array, it obviously can't do that, so it 
allocates memory somewhere else where the array fits and copies 
the contents over to the new location. If you were to do 
myArray = myArray ~ moreStuff; I assume this is no different from 
~=. Conceptually ~= is just syntactic sugar in the same way that 
+= or -= is: you are just doing a concatenation and then updating 
the array to point to the new result. The fact that it can grow 
in place when there is enough space is just an optimization, in 
my mind.
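
A quick way to poke at the difference yourself (the append result 
depends on how much spare capacity the runtime gave the block; 
the concatenation result does not):

import std.stdio;

void main()
{
    int[] a = [1, 2, 3];
    writeln("spare capacity: ", a.capacity - a.length);

    auto before = a.ptr;
    a ~= 4;                             // append: reuses spare capacity when it can
    writeln("append moved the array? ", a.ptr != before);

    auto b = a ~ [5, 6];                // concatenation: always a fresh array
    writeln("concat shares storage? ", b.ptr == a.ptr);   // always false
}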


Re: How to pass a member function/delegate into a mixin template?

2014-09-16 Thread

On Monday, 15 September 2014 at 21:37:50 UTC, John Colvin wrote:
would this work for you? alias is the usual way of taking a 
function as a template parameter.


mixin template EventListener(alias slot)


Ah, thanks for the help! That works great. :)
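
For completeness, a self-contained sketch of the alias-based 
version (Event, EventHandler and Foo are minimal stand-ins for 
the ones in the post below; the contract is folded into the 
constructor body to keep it short):

import std.signals;
import std.stdio;

class Event {}

class EventHandler
{
    mixin Signal!Event;
}

EventHandler event_handler;

mixin template EventListener(alias slot)
{
    private this()
    {
        assert(event_handler);
        event_handler.connect(&eventListener);
    }

    private void eventListener(Event e) { slot(e); }
}

class Foo
{
    mixin EventListener!((Event e) { writeln("Foo saw an event"); });
}

void main()
{
    event_handler = new EventHandler;
    auto foo = new Foo;
    event_handler.emit(new Event);   // prints "Foo saw an event"
}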



How to pass a member function/delegate into a mixin template?

2014-09-15 Thread

Hi all,

I've been reading D Cookbook, in which the author recommends the 
use of mixin templates to essentially hold boilerplate code for 
classes (page 28). Referencing TDPL reaffirms this strategy. With 
this design choice in mind, I would like to be able to use a 
mixin template that creates a slot for me (as in signals & slots) 
and provides a constructor that connects it to the signal.


The closest I have come is in the simplified example given below:

EventHandler event_handler;

class EventHandler{
    ...

    mixin std.signals.Signal!Event;

    ...
}

mixin template EventListener(void delegate(Event) slot){
    private this()
    in{ assert(event_handler); }
    body{ event_handler.connect(&eventListener); }
    private void eventListener(Event e){ slot(e); }
}

class Foo{
    mixin EventListener!((e){ ... });
}

Sadly, this code does not compile.
My understanding (and correct me if I'm wrong) is that this does 
not compile because I cannot create such a delegate, as the scope 
that it would be in is not available at compile time (it would be 
objects instantiated from class Foo in this example).


One strategy that works is to not pass anything to the mixin 
template, and have the mixin template use a function that is 
presumed to exist. It is then the duty of the class writer to 
make sure a function of the correct name and signature exists if 
they use this mixin. However, I worry that this is poor/brittle 
design.


What is the best approach to achieve this?

Thanks, Mio


GC can collect object allocated in function, despite a pointer to the object living on?

2014-08-16 Thread

Hello, I've been interested in the D language for a few years
now, but only dabble. I have read TDPL and recently started
reading D Cookbook. On the side, I started to write a small game
in D, but ran into a problem. I would greatly appreciate any help!

My design is a simple state machine. I have a module game.states,
which includes:

interface IState{
    void handleEvents();
    void renderFrame();
}

IState *currentState;
string nextState;

void changeState(){
    if(nextState != "WaitState" && nextState != "ExitState"){
        auto newState = cast(IState) Object.factory("game.states." ~ nextState);
        import std.exception;
        enforce(newState);
        currentState = &newState;
        nextState = "WaitState";
    }
}

class TitleState : IState{
    void handleEvents(){
        ...
    }

    void renderFrame(){
        ...
    }
}
...

and the game loop is of the following form:

//init state
nextState = "TitleState";
changeState();

//game loop
while(nextState != "ExitState"){
    currentState.handleEvents();
    currentState.renderFrame();
    changeState();
}

However, it appears that the changeState function has a bug.
I believe the problem is that when changeState returns, newState
gets garbage collected, despite currentState pointing to it.
I come from a C++ background, so I am not used to garbage
collection.
Normally the changeState function would explicitly free the old
state, and allocate the new one.

Am I correct that newState is liable to be collected after
changeState returns?
Is there an easy fix?
Is my design fundamentally flawed within the context of garbage
collection?
If so, what kind of design would you recommend instead?


Re: GC can collect object allocated in function, despite a pointer to the object living on?

2014-08-16 Thread

Thank you for the help! I just removed the unnecessary
indirection and it is working great!
I was aware that interface and object variables are reference
types, but it slipped my mind. I'm too used to the C++ way of
things still :p
On Saturday, 16 August 2014 at 22:41:45 UTC, Sean Kelly wrote:

Interface and object variables are reference types--you don't
need the '*' to make them so.  By adding the extra layer of
indirection you're losing the only reference the GC can decipher
to the currentState instance.
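
Spelled out, the fix looks like this (a compilable sketch; app is 
a made-up module name and TitleState is trimmed down, but the 
changeState logic follows the original post):

module app;

import std.exception;

interface IState
{
    void handleEvents();
    void renderFrame();
}

// The fix: no '*' - an interface variable is already a reference the GC traces.
IState currentState;
string nextState;

class TitleState : IState
{
    void handleEvents() {}
    void renderFrame()  {}
}

void changeState()
{
    if (nextState != "WaitState" && nextState != "ExitState")
    {
        // "app" is this sketch's module name; the original used "game.states".
        auto newState = cast(IState) Object.factory("app." ~ nextState);
        enforce(newState);
        currentState = newState;   // a real reference, so the object stays alive
        nextState = "WaitState";
    }
}

void main()
{
    nextState = "TitleState";
    changeState();
    currentState.handleEvents();   // no crash: the state survives changeState()
    currentState.renderFrame();
}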


Re: GC can collect object allocated in function, despite a pointer to the object living on?

2014-08-16 Thread

I see now, makes sense. :)

On Saturday, 16 August 2014 at 22:43:21 UTC, Chris Cain wrote:

This is actually not garbage collection. &newState is making a 
pointer to a reference that is located on the stack (that is, 
when you return from that function you now have a pointer that 
may at any time become overwritten and made invalid).