Re: CTFE and BetterC compatibility

2022-04-27 Thread Claude via Digitalmars-d-learn
On Wednesday, 27 April 2022 at 14:34:27 UTC, rikki cattermole 
wrote:

This works:


Cool, thanks.

Unfortunately, with that implementation, I need to know the 
maximum size for the array. It works for that particular example, 
but in the context of an XML file analysis, it's a bit awkward.


Regarding my comment above, I tried using cork (stub) functions 
for the missing symbols: it also works! However, the linker does 
not optimize those functions out (I see the symbols in the 
executable binary)...
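For reference, a hedged sketch of what such a cork (stub) might look 
like; the exact druntime symbol signatures vary between compiler 
versions, so the one below is an assumption:

```d
// stubs.d: "cork" definitions for druntime symbols the linker
// reports as missing. Only sound if those code paths are truly
// CTFE-only and never executed at run time.
extern(C) void _d_arraybounds(immutable(char)* file, uint line)
{
    // intentionally empty: should never run
}
```

Compiling with `-ffunction-sections -fdata-sections` and linking with 
`-Wl,--gc-sections` may then let the linker discard the unreferenced 
stubs from the final binary.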


Re: CTFE and BetterC compatibility

2022-04-27 Thread Claude via Digitalmars-d-learn
On Wednesday, 27 April 2022 at 14:27:43 UTC, Stanislav Blinov 
wrote:
This is a long-standing pain point with BetterC (see 
https://issues.dlang.org/show_bug.cgi?id=19268).


That's what I was afraid of... Thanks for the link to the 
bug-report.


On Wednesday, 27 April 2022 at 14:27:43 UTC, Stanislav Blinov 
wrote:
When not using BetterC, but not linking against druntime 
either, you have to provide your own implementation for those 
functions. This is e.g. so you can replace druntime with your 
own version.


Yeah... The problem is that there will be a lot of those functions 
to define (for a whole XML parser). I suppose I can use cork 
(stub) functions with empty bodies?


I will check if the linker optimizes them out...


CTFE and BetterC compatibility

2022-04-27 Thread Claude via Digitalmars-d-learn

Hello,

I want to make a SAX XML parser in D that I could use both at 
run-time and at compile-time.


Also when I use it at compile-time, I would like to use BetterC 
so I don't have to link D-runtime.


But I have some compilation problems. I use GDC (GCC 9.4.0).

Here's a reduced sample code:

```
struct Data
{
int[] digits;
}

int parseDigit(char c) pure
{
return c - '0';
}

Data parse(string str) pure
{
Data data;

while (str.length != 0)
{
// Skip spaces
while (str[0] == ' ')
str = str[1 .. $];

// Parse single digit integer
data.digits ~= parseDigit(str[0]);

// Consume digit
str = str[1 .. $];
}

return data;
}

enum Data parsedData = parse("5 4 2 6 9");

extern(C) int main()
{
pragma(msg, "First digit=", parsedData.digits[0]);
return 0;
}
```

If I compile and link against D-runtime, it works:
```
$ gcc test.d -lgdruntime -o test
First digit=5
```

If I compile with BetterC (no D-runtime for GDC), I get a 
compilation error about RTTI:

```
$ gcc test.d -fno-druntime -o test
test.d: In function ‘parse’:
test.d:25:21: error: ‘object.TypeInfo’ cannot be used with 
-fno-rtti

   25 | data.digits ~= parseDigit(str[0]);
  | ^
```

If I compile without the BetterC switch, compilation actually 
works, but I get some linker errors:

```
$ gcc test.d -o test
First digit=5
/tmp/ccuPwjdv.o : In function 
« _D5test5parseFNaAyaZS5test4Data » :

test.d:(.text+0x137) : undefined reference to « _d_arraybounds »
test.d:(.text+0x183) : undefined reference to « _d_arraybounds »

etc...
```

The operation requiring the D-runtime is appending the array, but 
it should **only** be done at compile-time.


I don't understand why it requires linking against the D-runtime 
when the runtime is only needed at compile time (and the 
compilation and CTFE interpretation work, as we can see in the 
last example).
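For what it's worth, the maximum-size approach mentioned elsewhere in 
the thread can be sketched like this; the capacity of 32 is an 
arbitrary assumption:

```d
struct Data
{
    int[32] digits;   // fixed capacity: no GC, no TypeInfo needed
    size_t count;
}

Data parse(string str) pure
{
    Data data;

    while (str.length != 0)
    {
        // Skip spaces (guard against trailing spaces)
        while (str.length != 0 && str[0] == ' ')
            str = str[1 .. $];
        if (str.length == 0)
            break;

        // Parse single digit integer into the fixed buffer
        data.digits[data.count++] = str[0] - '0';

        // Consume digit
        str = str[1 .. $];
    }

    return data;
}
```

Since this never appends to a GC-managed array, it should compile 
under -fno-druntime, at the cost of the awkward fixed maximum.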


Is there a way to force the compiler to not emit any object code 
for those functions?


Or am I missing something?

Regards,

Claude


Re: Problem with GC - linking C++ & D (with gdc)

2022-04-26 Thread Claude via Digitalmars-d-learn

On Tuesday, 26 April 2022 at 12:49:21 UTC, Alain De Vos wrote:

PS :
I use
```
ldc2 --gcc=cc ,
cc -v : clang version 11.0.1
```


We only have gcc in our toolchain (we target an ARM-based 
embedded system).


---

I also encountered problems while I was trying to use CTFE-only 
functions (using BetterC so I don't have to link 
Phobos/D-runtime).


However, if those functions use the GC for instance (like 
appending to a dynamic array), it requires me to link the 
D-runtime, even though I only use them at compile-time. So I'm a 
bit confused... I'll try to get more information and reduce a 
code sample.


Re: Problem with GC - linking C++ & D (with gdc)

2022-04-26 Thread Claude via Digitalmars-d-learn

On Tuesday, 26 April 2022 at 10:29:39 UTC, Iain Buclaw wrote:

On Tuesday, 26 April 2022 at 10:23:15 UTC, Claude wrote:

Hello,



Hello,

<%--SNIP--%>



Does anyone have any idea what's going on?

(if I just compile a single D file with "int main() { int* a = 
new int(42); return *a; }", it works as intended.)


The `new` keyword requests the druntime GC to allocate memory, 
however you haven't initialized the D run-time in your program.


main.cpp
```D
extern "C" int rt_init();
extern "C" const int* ct_parse();

int main(int argc, char ** argv)
{
rt_init();
return *ct_parse();
}
```


Ok, thanks!
I should have suspected something like this.
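For completeness, a possible D-side variant using druntime's public 
API (core.runtime.Runtime), so the C++ side needs no extra extern 
declarations; the wrapper names are hypothetical:

```d
// parser.d: lifecycle wrappers exposed next to the parser entry point
import core.runtime : Runtime;

extern(C) int parser_init() { return Runtime.initialize() ? 0 : -1; }
extern(C) int parser_term() { return Runtime.terminate()  ? 0 : -1; }

extern(C) const(int)* ct_parse()
{
    int* a = new int(42);   // GC allocation is fine once initialized
    return a;
}
```

The C++ main would then call parser_init() before ct_parse(), and 
parser_term() before exiting.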


Re: Problem with GC - linking C++ & D (with gdc)

2022-04-26 Thread Claude via Digitalmars-d-learn

On Tuesday, 26 April 2022 at 10:23:15 UTC, Claude wrote:

It seg-faults...


Just to make it clear, it seg-faults at run-time (not at 
compilation or link time) when I launch the executable "test".


Problem with GC - linking C++ & D (with gdc)

2022-04-26 Thread Claude via Digitalmars-d-learn

Hello,

I'm working on a C++ project requiring an XML parser. I decided 
to make it in D so I could easily parse at run-time or 
compile-time as I wish.


As our project uses a gcc tool-chain, I naturally use GDC (GCC 
9.4.0).


But I have a few problems with D, linking with it, trying to use 
better-C and CTFE, etc.


Here's a reduced sample of one of my problems:

parser.d
```
extern(C) int* ct_parse()
{
int* a = new int(42);
return a;
}
```

main.cpp
```
extern "C" const int* ct_parse();

int main(int argc, char ** argv)
{
return *ct_parse();
}
```

Compiling/linking using the following command-lines:
```
gcc -c parser.d -o parser.o
gcc -std=c++17 -c main.cpp -o main.o
gcc main.o parser.o -lstdc++ -lgphobos -lgdruntime -o test
```

It seg-faults...

Here's the output of gdb:
```
Program received signal SIGSEGV, Segmentation fault.
0x7777858a in gc_qalloc () from 
/usr/lib/x86_64-linux-gnu/libgdruntime.so.76

```

Does anyone have any idea what's going on?

(if I just compile a single D file with "int main() { int* a = 
new int(42); return *a; }", it works as intended.)


Re: DIP1028 - Rationale for accepting as is

2020-05-27 Thread Claude via Digitalmars-d-announce

On Wednesday, 27 May 2020 at 13:42:08 UTC, Andrej Mitrovic wrote:
Is the actual problem those `@trusted:` declarations at the top 
of C headers?


There could be a simple solution to that:

Ban `@trusted:` and `@trusted { }` which apply to multiple 
symbols. Only allow `@trusted` to apply to a single symbol. For


IMO, it makes things worse. Because the careless programmer will 
slap @trusted to every declaration (maybe with a script or 
find/replace macro of his editor). So now, we don't know if the 
annotation is greenwashing or careful examination of the 
definition.


At least with "@trusted:" and "@trusted { }", the greenwashing is 
obvious.


Re: Rationale for accepting DIP 1028 as is

2020-05-27 Thread Claude via Digitalmars-d-announce

On Wednesday, 27 May 2020 at 10:51:54 UTC, Walter Bright wrote:

On 5/27/2020 3:01 AM, Timon Gehr wrote:
I've addressed exactly this a dozen times or more, to you and 
others. Repeating myself has become pointless.


It's fine to disagree with me. Argue that point. But don't say 
I didn't address it.


I'm trying to understand the logic of "@safe by default for 
extern declaration".


So I setup a simplified real-life example that I had in my mind...

Let's say we have a project consisting of:
- an ASM file containing several function definitions (assembly 
language scares people away, so it's a good example).
- a .di file containing the extern(C) declarations of the former 
ASM functions.

- and a .d file using those functions with @safe code.

No function is annotated.

Before DIP-1028, it compiles and links without annotations.

If extern(C) declarations are @system by default, it will not 
compile anymore, and a careless programmer will slap a "@trusted:" 
at the top of the ".di" file (greenwashing), and it'll stay there.


On the other hand, a careful programmer will want to annotate 
@trusted ONLY on functions that he actually trusts AND make the 
project compile anyway (because he cannot afford to leave his 
project broken). So what does he do to make it compile? Should 
he slap a "@trusted:" at the beginning of the file just like the 
careless programmer, and maybe explicitly annotate the individual 
declarations that are actually trusted? It's weird...


However, if extern(C) declarations are @safe by default, he does 
not need to slap a "@trusted:" at the top of the ".di" file; he 
just needs to annotate individual declarations with @trusted to 
advertise (to the QA, or his colleagues, or his future self) that 
he successfully reviewed the function implementation, AND it will 
still compile.
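As an illustration, the careful-programmer scenario under 
@safe-by-default might look like this; the file and function names 
are hypothetical:

```d
// asmfuncs.di: declarations for the ASM definitions
extern(C):

// reviewed: verified that it writes exactly n bytes into buf
@trusted void asm_fill(ubyte* buf, size_t n);

// not reviewed yet: no annotation, so under DIP 1028 it compiles
// as @safe by default and the project still builds
int asm_checksum(const(ubyte)* p, size_t n);
```

The reviewed declarations carry an explicit @trusted, and the 
unreviewed ones do not block compilation.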



So that example would go in favor of Walter's point. So is it a 
good example? Does it pinpoint the issue?


Because again, as the others said, it's still controversial, it's 
hard to sell, it's convoluted... I dunno...


Re: BindBC Updates: new loader function, SDL_net, streamlined SDL_* version indentifiers

2020-05-14 Thread Claude via Digitalmars-d-announce

On Wednesday, 13 May 2020 at 14:39:13 UTC, Mike Parker wrote:
I've recently implemented some improvements centered on 
bindbc-sdl.


As a user of BindBC (and formerly of Derelict), I really enjoy 
using those binding libraries. It's great work, thanks.

Re: D as a C Replacement

2020-02-06 Thread Claude via Digitalmars-d-announce

On Wednesday, 5 February 2020 at 11:50:47 UTC, IGotD- wrote:

[...]
The language is used as an academic sandbox for testing 
stuff by their creators. There's no direction whatsoever.


Ignoring the lack of tools, documentation, etc



I must say that it is summarized very well. Especially that it 
is focused on implementing the latest cool feature instead of 
stability.


Yeah... Sure... and after all that constructive, accurate and 
subtle criticism, the guy says: "I love D, i really do [... but 
blablabla]". That's funny, in some sort of convoluted way.


Anyway, I quite agree with the article. I'm currently playing 
with D and some Vulkan demos, and translating some C++ tutorial 
code to D is very easy and gives a much more readable syntax.


i.e. simple stuff like:
std::vector<VkBuffer> uniformBuffers;
VS
VkBuffer[] uniformBuffers;


Re: Deep nesting vs early returns

2018-10-05 Thread Claude via Digitalmars-d

On Thursday, 4 October 2018 at 06:43:02 UTC, Gopan wrote:

Any advices?


In C, I will lay out my functions like this (it's especially 
good for initializer functions):


int init(struct my_handle *handle, ...)
{
  if (handle == NULL)
    return -EINVAL;  // direct return for parameter safety check

  if (other_arg < MY_MIN || other_arg > MY_MAX)
    return -EINVAL;  // same here

  handle->data = calloc(1, SIZE);
  if (handle->data == NULL)
    return -ENOMEM;  // return when memory allocation fails,
                     // I assume my program won't recover anyway

  // create some system resource
  handle->fd = fopen("bla.txt", "w");
  if (handle->fd == NULL)
    goto error;  // goto error, because I have some cleanup to do.
                 // I will use goto's from now on every time I have
                 // some action that returns an error

  ...

  return 0; // Success

error:
  fclose(handle->fd);  // maybe check first that fd is valid...
  free(handle->data);
  return -EFAIL;
}


My basic rules are:
1- Argument safety checks before anything else; return 
straightaway.
2- If memory allocation fails, return (or assert), no cleanup, 
unless you expect to recover from it, in which case treat it as 
a resource allocation as described in the next point.
3- If some resource allocation fails, escape using a goto 
statement to a final label. I call it "error" for simple cases, 
otherwise "error_closepipe", "error_epoll", etc., depending on 
the resource I'm cleaning up.
4- Error handling comes after the successful "return 0;" which 
finalizes the nominal execution flow.
5- Maybe use some "int ret" to carry an error code at the end, 
and maybe log it, for easier debugging.
6- Don't use a return statement in a loop. Break from it: there 
could be some cleanup to do after the loop.
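In D, the same layout is usually expressed with scope guards instead 
of goto. Here is a hedged sketch: it uses exceptions rather than 
error codes, because scope(failure) fires on a throw, not on an 
early return, and MyHandle and the sizes are illustrative:

```d
import core.stdc.stdio : FILE, fopen, fclose;
import core.stdc.stdlib : calloc, free;
import std.exception : enforce;

struct MyHandle
{
    int* data;
    FILE* fd;
}

void init(MyHandle* handle)
{
    enforce(handle !is null, "invalid argument");    // rule 1

    handle.data = cast(int*) calloc(1, 64);
    enforce(handle.data !is null, "out of memory");  // rule 2
    scope(failure) free(handle.data);  // runs only if a later step throws

    handle.fd = fopen("bla.txt", "w");
    enforce(handle.fd !is null, "cannot open file"); // rule 3
    scope(failure) fclose(handle.fd);

    // ... more resource acquisition, each followed by its own
    // scope(failure), in acquisition order ...
}
```

The guards run in reverse order of declaration, which matches the 
manual cleanup order of the C error label.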


As a side-note, one instruction should be one operation; I don't like:

if ((ret = handle->doSomeStuff(args)) < 0)
  goto error;

I prefer:

ret = handle->doSomeStuff(args);
if (ret < 0)
  goto error;

It's especially annoying to read (and thus maintain) if some 
other conditions are tested in the if-statement.


Re: fearless v0.0.1 - shared made easy (and @safe)

2018-09-19 Thread Claude via Digitalmars-d-announce

On Tuesday, 18 September 2018 at 17:20:26 UTC, Atila Neves wrote:
I was envious of std::sync::Mutex from Rust and thought: can I 
use DIP1000 to make this work in D and be @safe? Turns out, yes.


Beautiful! The only current downside, if I'm not wrong, is that 
an application using the library has to be compiled with 
-dip1000?




Re: Mobile is the new PC and AArch64 is the new x64

2018-09-10 Thread Claude via Digitalmars-d

On Monday, 10 September 2018 at 13:43:46 UTC, Joakim wrote:
Despite all this, D may never do very well on mobile or 
AArch64, even though I think it's well-suited for that market. 
But at the very least, you should be looking at mobile and 
AArch64, as they're taking over the computing market.


Coming from ARM system programming for embedded systems, I'm also 
looking into AArch64. Having done some x86 assembly, ARM assembly 
was bliss, and AArch64 looks even better!


I also wish D will do well for embedded systems. Most of the 
system developers I know program in C, and do very little C++ 
because it becomes unmaintainable (unless you enforce strict 
coding rules, like using it as "C with classes" with very little 
template code). C is straightforward and gets the job done (e.g. 
Vulkan, as a quite recent API, was written in C, not C++). So D 
could do really well there!


That's why I think "BetterC" is a good strategic move (in the 
long run) even though I understand people coming from a 
Java/Python background might not get it at all.


If we have a BetterConst (I'm referring to Jonathan Davis 
article: http://jmdavisprog.com/articles/why-const-sucks.html ), 
I think D would be even greater.


Re: iopipe v0.0.4 - RingBuffers!

2018-05-14 Thread Claude via Digitalmars-d-announce
On Thursday, 10 May 2018 at 23:22:02 UTC, Steven Schveighoffer 
wrote:
However, I am struggling to find a use case for this that 
showcases why you would want to use it. While it does work, and 
works beautifully, it doesn't show any measurable difference 
vs. the array allocated buffer that copies data when it needs 
to extend.


I can think of a good use-case:
- Audio streaming (in an embedded environment)!

If you have something like a Bluetooth audio source, and ALSA or 
any audio hardware API as an audio sink to the speakers, you can 
use the ring-buffer as a FIFO between the two.


The Bluetooth source has its own pace (and you cannot control it) 
and has a variable bit-rate, whereas the sink has a constant 
bit-rate, so you have to have a buffer between them. And you want 
to reduce the CPU cost as much as possible due to embedded-system 
constraints (or even real-time constraints, especially for audio).
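A minimal sketch of such a FIFO between the two paces (single 
producer, single consumer; for a real cross-thread version the 
indices would need core.atomic, and the capacity is an arbitrary 
assumption):

```d
struct AudioFifo(T, size_t N)
{
    private T[N] buf;
    private size_t head;   // write index (Bluetooth source)
    private size_t tail;   // read index (audio sink)

    // producer: returns false when full, so the variable-rate
    // source can back off
    bool push(T sample)
    {
        immutable next = (head + 1) % N;
        if (next == tail)
            return false;
        buf[head] = sample;
        head = next;
        return true;
    }

    // consumer: returns false when empty, so the constant-rate
    // sink can insert silence
    bool pop(ref T sample)
    {
        if (tail == head)
            return false;
        sample = buf[tail];
        tail = (tail + 1) % N;
        return true;
    }
}
```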




Re: Time to move logger from experimental to std ?

2017-12-01 Thread Claude via Digitalmars-d

On Thursday, 30 November 2017 at 09:39:20 UTC, Basile B. wrote:

On Wednesday, 29 November 2017 at 21:14:57 UTC, Claude wrote:
On Wednesday, 29 November 2017 at 14:32:54 UTC, Basile B. 
wrote:

Did I miss anything?


Sorry but yes, i think so, the handle is indirectly accessible 
with "FileLogger.file.fileno".


Oops! Sorry for the noise...


Re: Time to move logger from experimental to std ?

2017-11-29 Thread Claude via Digitalmars-d

On Wednesday, 29 November 2017 at 14:32:54 UTC, Basile B. wrote:
Hello, most of the changes made during the current year to the 
std.experimental.logger package are related to the cosmetic 
style. Isn't this a sign showing that the experimentation is 
achieved ?


I tried deriving the FileLogger class to implement my own log 
formatting. I overrode beginLogMsg, logMsgPart and finishLogMsg, 
but I could not do anything with it because the handle "file_" is 
private (so I cannot access it within the derived class).


So I ended up rewriting my own FileLogger deriving directly from 
Logger and copying most of the Phobos version code, but that's 
not convenient.


Did I miss anything?


Re: DerelictSDL2 3.1.0-alpha.1

2017-09-28 Thread Claude via Digitalmars-d-announce

On Thursday, 28 September 2017 at 09:26:19 UTC, Claude wrote:

However, I did not find any SDL2-2.0.6 package for Ubuntu... :(


I built it, it works fine.


Re: DerelictSDL2 3.1.0-alpha.1

2017-09-28 Thread Claude via Digitalmars-d-announce

On Saturday, 23 September 2017 at 12:16:46 UTC, Mike Parker wrote:
SDL 2.0.6 was just released [1], so I've updated DerelictSDL2 
[2] to support it. It's available in DerelictSDL2 
3.1.0-alpha.1. I've tested that the loader works, but beyond 
that I've done nothing with it.


I also fixed some minor issues in the 3.0 branch, the latest 
release of which is 3.0.0-beta.7.


The 3.0 series supports SDL 2.0.0 - SDL 2.0.5, and the 3.1 
series supports SDL 2.0.0 - 2.0.6. The latter is currently not 
in the master branch (it's in the 3.1 branch), nor is it in the 
documentation. I'll move 3.1 development to master and update 
the docs once I pull it out of alpha.


[1] https://discourse.libsdl.org/t/sdl-2-0-6-released/23109
[2] https://github.com/DerelictOrg/DerelictSDL2


Great!

I made it work on Windows 10, with derelict-gl3 2.0.0-beta.4.

Rendering and mouse/keyboard input works with my little engine 
(https://github.com/claudemr/orb).


However, I did not find any SDL2-2.0.6 package for Ubuntu... :(


Re: D as a Better C

2017-08-31 Thread Claude via Digitalmars-d-announce
I think "betterC" can be a good tool to use D on embedded 
systems: it keeps dependencies to a minimum, has a low ROM 
footprint and good C interoperability.


I'll try to find some time to play with it.


Re: Is it acceptable to not parse unittest blocks when unittests are disabled ?

2017-03-30 Thread Claude via Digitalmars-d
On Wednesday, 29 March 2017 at 19:43:52 UTC, Vladimir Panteleev 
wrote:
On Wednesday, 29 March 2017 at 19:32:50 UTC, Vladimir Panteleev 
wrote:

On Wednesday, 29 March 2017 at 11:16:28 UTC, deadalnix wrote:
I was wondering. When unittests aren't going to run, it may 
be desirable to skip parsing altogether, just lexing and 
counting braces until the matching closing brace is found.


Sorry, is this not already the case?


https://github.com/dlang/dmd/pull/4704


To quote Walter Bright from that PR, unittest can contain invalid 
code ONLY if you never compile with -unittest (obviously), -D or 
-H.


It looks consistent to me: just don't parse it in release mode.


Re: Amper audio player for GNU/Linux and X11

2017-03-17 Thread Claude via Digitalmars-d-announce

On Friday, 17 March 2017 at 06:33:38 UTC, ketmar wrote:
* pure D audio code, no external decoder libraries required 
(and no SDL);

* supports FLAC, Vorbis, Opus, MP3 playback;
* various hardware sampling rates with transparent resampling 
of source audio;
* multiband equalizer (the code is there, but UI is not ready 
yet);


So you ported the FLAC, Opus and MP3 decoders to D?

That's huge. :)


Re: Questionnaire

2017-02-10 Thread Claude via Digitalmars-d-announce

1. Why your company uses  D?


My company does not use D. If I had the time, I really think I 
could integrate D into our build system, probably forcing it a 
bit: "Oh and by the way, that new library I wrote happens to be 
written in D..." (We have Vala in our build system; how much 
worse could it be?).


I use it for personal projects.


2. Does your company uses C/C++, Java, Scala, Go, Rust?


We use C/C++/assembly for system stuff. And Java for Android 
applications. We run Linux or Android on ARM embedded systems.



3. If yes, what the reasons to do not use D instead?


Nobody knows about D. Most system developers here use C; half of 
them don't like C++ and scorn Java. And most of them don't know 
about D, apart from my close colleagues, who probably hate it 
without even having used it, just because I always bring it up in 
any unrelated conversation at lunch.



2. Have you use one of the following Mir projects in production:


No, but it could be very useful for DSP routines. I hope Mir (and 
D) will have the success it deserves.


4. Have you use one of the following Tamedia projects in your 
production:


No.


5. What D misses to be commercially successful languages?


I don't know, I'm not a sales-person at all.
I also like D because it's got that "made by developers for 
developers" thing. I'm an idealist: I'd prefer D to be successful 
because of its sheer intrinsic value as a programming language, 
rather than because we throw big money at it.


6. Why many topnotch system projects use C programming language 
nowadays?


For historical reasons. And because of its simplicity (and 
tooling, etc.), and its "system" trait.
I don't buy the "C compiles bugs" argument. Every language in the 
world produces bugs [1].


I noticed that the hardest and most insidious bugs could always 
be avoided if the software was more carefully designed upfront, 
especially for real-time or concurrent software.


I use C a lot, it's my favorite language with D, though I'm not a 
proselyte. I use C++ only as "C with class".


[1] http://jonathanwhiting.com/writing/blog/games_in_c/


Re: memcpy() comparison: C, Rust, and D

2017-02-02 Thread Claude via Digitalmars-d
On Wednesday, 1 February 2017 at 21:16:30 UTC, Walter Bright 
wrote:
6. Valgrind isn't available on all platforms, like Windows, 
embedded systems, phones (?), etc.


You can use valgrind on embedded systems as long as they run a 
GNU/Linux OS. I've used valgrind successfully many times on the 
ARM architecture.
But I don't know if it works with Android, and I doubt it works 
on bare metal indeed.


Re: CTFE Status

2017-01-24 Thread Claude via Digitalmars-d

On Tuesday, 24 January 2017 at 11:01:46 UTC, Stefan Koch wrote:

Green means it passes all tests on auto-tester.


Yes, I understood. I was playing with the idea of 
eco-friendliness as well... :)


Re: CTFE Status

2017-01-24 Thread Claude via Digitalmars-d

On Tuesday, 24 January 2017 at 09:52:13 UTC, Stefan Koch wrote:

NEW CTFE IS GREEN ON 64 BIT!
GREEN!


At first I was a bit surprised by that "green" thing, but somehow 
if the new CTFE engine consumes less CPU, it means it consumes 
less power, and therefore less carbon dioxide is released into 
the atmosphere.

So indeed, we can assert that new-ctfe is eco-friendly!

Good job! :)

I really enjoy reading this thread every day. It's kind of like a 
Dungeons & Dragons novel where the hero slashes his way through 
monsters, bugs and features, up to the Holy Grail: the New-CTFE 
Engine implementation!


This is awesome work! Thanks.


Re: GNU License warning:

2017-01-13 Thread Claude via Digitalmars-d

On Friday, 13 January 2017 at 15:15:14 UTC, Ignacious wrote:

On Friday, 13 January 2017 at 12:01:22 UTC, bachmeier wrote:
This is not the proper place to blog about software license 
preferences or to make unsubstantiated accusations against an 
organization you don't like. There are other sites for that.


So, what is up with all the wanna be Nazi's running around 
today? Did Hitler come out of retirement??


Retirement?? I thought he committed suicide...


Re: Conditional Compilation Multiple Versions

2017-01-09 Thread Claude via Digitalmars-d-learn

Druntime uses this for its translation of POSIX header files:

https://github.com/dlang/druntime/blob/master/src/core/sys/posix/config.d

An example:

https://github.com/dlang/druntime/blob/master/src/core/sys/posix/sys/resource.d#L96


Ok, I see. Thanks!
(I've gotta try reggae someday) :)




Re: Conditional Compilation Multiple Versions

2017-01-06 Thread Claude via Digitalmars-d-learn

On Friday, 6 January 2017 at 13:27:06 UTC, Mike Parker wrote:

version(Windows)
enum bool WindowsSupported = true;
else
enum bool WindowsSupported = false;


Well, yes, that was a bad example. I thought about changing it 
before sending my post, but I couldn't find any other meaningful 
alternative.
My point was that you can re-define WindowsSupported as a version 
even if it is already defined, but not as an enum. And sometimes, 
you cannot simply use the else statement without creating another 
indented block (which seems a bit awkward).


Yes, it works quite well for most use cases and should 
generally be preferred. I disagree that it scales, though. At 
some point (a point that is highly project-dependent), it 
breaks down, requiring either very large modules or duplicated 
versions across multiple modules.


Yes, in that case, you would probably break it down into several 
specialized config modules. I meant it forces you not to put 
version(Windows) directly into your code, but rather 
version(ThatFeatureSupportedByWindowsAmongstOtherOSs).


My position is that I will always choose version blocks first, 
but if I find myself in a situation where I have to choose 
between duplicating version statements (e.g. version(A) 
{version=AorB; version=AorC;}) across multiple modules and 
restructuring my code to accommodate versioning, I much prefer 
to use the enum alternative as an escape hatch.


Ok, that's interesting.
Do you have code samples where you do that? I'm just curious.


Re: Conditional Compilation Multiple Versions

2017-01-06 Thread Claude via Digitalmars-d-learn

On Thursday, 20 October 2016 at 09:58:07 UTC, Claude wrote:
I'm digging up that thread, as I want to do some multiple 
conditional compilation as well.


Well, I'm digging up that thread again, but this time to post 
some positive experience feedback, as I've found answers to my 
own questions, and I thought I could share them.


I wanted to convert some C preprocessor code to D: thousands of 
conditional compilation macros (#ifdef, #if defined()) used in a 
program to determine the capabilities of a platform (number of 
CPU cores, SIMD availability, etc). So it had to check compiler 
types and versions, combined with the target architecture, the 
OS, the endianness and so on.


So the C implementation is a stream of:
#if defined(MYOS) || defined(ARCHITECTURE) && 
defined(__weirdstuff)

# define SPECIFIC FEATURE
#else
# blabla
...

And I thought I would have to use some || and && operators in my 
D code as well.


So I did. I used the trick from Mike Parker and anonymous (see 
above in the thread) of declaring "enum bool"s to be checked with 
"static if"s later to implement specific features.


So I had a stream of:

version (Win32)
  enum bool WindowsSupported = true;
else
  enum bool WindowsSupported = false;

version (Win64)
  enum bool WindowsSupported = true; //Ooops
else
  enum bool WindowsSupported = false; //Ooops

It turned out to be not so readable (even when using a "string 
mixin" to make the code tighter), and I cannot define an enum 
twice without using "static if", which was a deal-breaker. Also, 
the small number of D compilers to cover (only 4: DMD, GDC, LDC 
and SDC), as well as the few OS version identifiers, made the 
code a lot tighter than the C version.


So I just dropped the enum definition thing and just used 
"version" as it was designed to be used:


version (Win32)
  version = WindowsSupported;
else version (Win64)
  version = WindowsSupported;
else etc...

So to my older question:

* Is there an "idiomatic" or "elegant" way of doing it? Should 
we use Mike Parker's solution, or the "template Version(string 
name)" solution (which basically just circumvents "version"'s 
specific limitations)?


That little experience showed that using version as it is 
designed currently is enough to elegantly cover my needs. And it 
seemed to scale well.
Also I think it may force developers to gather all 
version-specific stuff into one specific module and define their 
own version identifiers to list features from the compiler, OS 
and target-architecture version identifiers; which is a good 
coding practice anyway.


So:

module mylib.platform;

version (ThisOs)
 version = ThatFeature;
else
 version = blabla;
etc...

And:

module mylib.feature;

void doFeature()
{
version (ThatFeature)
  blabla;
}

But again, that's just my feedback from one single experience 
(even though I think that kind of code is quite common in C/C++ 
cross-platform libraries).


So I'm still curious as to why Walter designed "version" that 
particular way, and whether anyone has bumped into "version" 
(quasi-)limitations and what they think about it!




Re: What are you planning, D related, for 2017 ?

2017-01-03 Thread Claude via Digitalmars-d

1/ Carry on doing my 3D engine pet project, slowly but steadily.
2/ Small tool to automatically back up some files to a USB drive 
upon insertion (with a GUI).
3/ An audio/video streaming framework (a light gstreamer). I want 
to make it fast, with low memory footprint, suitable for embedded 
system, and especially make all the synchronization part easy for 
a potential plugin developer (no deadlock, no crash because an 
object is destroyed too early).

4/ Do some D stuff for ARM architecture...


Re: OT: Tiobe Index - December Headline: What is happening to good old language C?

2016-12-08 Thread Claude via Digitalmars-d
It's strange to see "assembly language" as an entry: the target 
is not specified, so I suppose it includes them all, as it is 
more a way of programming. It would be interesting to see which 
targets (x86, ARM?) are the most used.


Re: ESA's Schiaparelli Mars probe crashed because of integer overflow

2016-11-25 Thread Claude via Digitalmars-d
On Friday, 25 November 2016 at 07:14:45 UTC, Patrick Schluter 
wrote:
Hey, sounds suspicously similar to Ariane 5 explosion. Does ESA 
not learn from its errors or am I only reading too much in it 
(probably)?


Well, from the little information we have, I suppose we can only 
be reading too much into it.


So, I like to think it's just due to an integer overflow. Not 
from a software-engineering perspective, but more from a Marxist 
approach: one misses a simple test on an integer, and you make a 
rocket-ship worth billions of good money (that could be used for 
education, medical care or whatever) explode into tiny cold 
little pieces, 54 million km from here.


What an ironic and subversive bug, the engineer who did that 
should be immensely proud of himself. :)


Re: OT: for (;;) {} vs while (true) {}

2016-11-25 Thread Claude via Digitalmars-d

Sorry, I sent my post before finishing it, so...

It's in the same vein as using:

if (cond)
{
  singleStatement;
}

instead of:

if (cond)
  singleStatement;

Because you can more easily insert statements within the block 
(without having to navigate to different lines to insert the 
brackets).




Re: OT: for (;;) {} vs while (true) {}

2016-11-25 Thread Claude via Digitalmars-d

On Friday, 25 November 2016 at 11:10:44 UTC, Timon Gehr wrote:

On 25.11.2016 11:33, Claude wrote:

...

Between "for(;;)", "while(true)" and "do while(true)", I would 
use the

"while (true) { }" for pure readability and semantic reasons.
...


What semantic reasons?


In the general sense:

- While true (is always true), I loop.

is more meaningful and conceptually easier than the empty for 
statement:


- For "empty initialization statement" until "I don't know (so 
forever by default)" and "not iterating", I loop.


I reckon "for(;;)" form is used for debug reasons (so you can 
easily
insert conditions to transform an infinite loop into a finite 
one).


You can just as easily edit the while condition. I use it 
because "unconditional loop" is less silly than "loop until 
true is false".


I was just trying to explain why one would use for(;;) instead of 
while(true): you've got only one line to edit.


It's in the same vein as using:

if (cond)
{

}


Re: OT: for (;;) {} vs while (true) {}

2016-11-25 Thread Claude via Digitalmars-d
On Thursday, 24 November 2016 at 22:09:22 UTC, Dennis Ritchie 
wrote:

On Thursday, 24 November 2016 at 22:04:00 UTC, LiNbO3 wrote:
As you can see [1] the `while (true)` is lowered into `for 
(;true;)` so it's all about what construct pleases you the 
most.


[1] 
https://github.com/dlang/dmd/blob/cd451ceae40d04f7371e46df1c955fd914f3085f/src/statementsem.d#L357


OK, thanks.

The next question is:
What principles guided when choosing between `for (;;) { ... }` 
and `while (true) { ... }` ?


For example, there are two options:
https://github.com/dlang/phobos/blob/master/std/algorithm/sorting.d


Between "for(;;)", "while(true)" and "do while(true)", I would 
use the "while (true) { }" for pure readability and semantic 
reasons.


I reckon "for(;;)" form is used for debug reasons (so you can 
easily insert conditions to transform an infinite loop into a 
finite one).
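Since "while (true)" is lowered by the compiler to "for (; true; )", the two forms behave identically at run time; here is a quick illustration (my own sketch, not from the thread):

```d
import std.stdio : writeln;

void main()
{
    // Count to 3 with each form; both loops rely on an explicit
    // break, exactly as a real infinite loop would.
    int i = 0;
    for (;;) { if (++i == 3) break; }

    int j = 0;
    while (true) { if (++j == 3) break; }

    writeln(i == j); // the two loops did the same work
}
```

So the choice really is purely stylistic (readability vs. ease of editing the condition).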


Re: Conditional Compilation Multiple Versions

2016-10-20 Thread Claude via Digitalmars-d-learn

On Saturday, 13 June 2015 at 12:21:50 UTC, ketmar wrote:

On Fri, 12 Jun 2015 20:41:59 -0400, bitwise wrote:


Is there a way to compile for multiple conditions?

Tried all these:

version(One | Two){ }
version(One || Two){ }
version(One && Two){ }
version(One) |  version(Two){ }
version(One) || version(Two){ }
version(One) && version(Two){ }

   Bit


nope. Walter is against that, so we'll not have it, despite the 
triviality of the patch.


I'm digging up that thread, as I want to do some multiple 
conditional compilation a well.


I have a couple of questions:
* Why is Walter against that? There must be some good reasons.
* Is there an "idiomatic" or "elegant" way of doing it? Should we 
use Mike Parker's solution, or the "template Version(string 
name)" solution (which basically just circumvents the specific 
limitation of "version")?


Here' the kind of stuff I'd like to translate from C:

#if defined(_MSC_VER) && !defined(__INTEL_COMPILER)
#define YEP_MICROSOFT_COMPILER
#elif defined(__GNUC__) && !defined(__clang__) && 
!defined(__INTEL_COMPILER) && !defined(__CUDA_ARCH__)

#define YEP_GNU_COMPILER
#elif defined(__INTEL_COMPILER)
...

#if defined(_M_IX86) || defined(i386) || defined(__i386) || 
defined(__i386__) || defined(_X86_) || defined(__X86__) || 
defined(__I86__) || defined(__INTEL__) || defined(__THW_INTEL__)

#define YEP_X86_CPU
#define YEP_X86_ABI
#elif defined(_M_X64) || defined(_M_AMD64) || defined(__amd64__) 
|| defined(__amd64) || defined(__x86_64__) || defined(__x86_64)

...
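For what it's worth, the usual D workaround (a sketch of my own, not a built-in language feature) is to lower each version identifier into a compile-time boolean once, and then combine the booleans freely with static if:

```d
// Map version identifiers to compile-time booleans (done once)...
version (X86)    enum isX86 = true;
else             enum isX86 = false;

version (X86_64) enum isX86_64 = true;
else             enum isX86_64 = false;

// ...then combine them with ordinary logical operators, as the
// analogue of: #if defined(_M_IX86) || ... -> #define YEP_X86_CPU
static if (isX86 || isX86_64)
    enum yepX86Cpu = true;
else
    enum yepX86Cpu = false;
```

It's more verbose than C's `defined(...) || defined(...)`, but the boolean enums only need to be declared once per identifier.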


Re: Meta-programming: iterating over a container with different types

2016-10-20 Thread Claude via Digitalmars-d-learn

On Friday, 23 September 2016 at 12:55:42 UTC, deed wrote:

// Maybe you can try using std.variant?



Thanks for your answer.
However I cannot use variants, as I have to store the components 
natively in a void[] array (for cache coherency reasons).


So I found a way to solve that problem: delegate callbacks.
There may be more elegant solutions but well, it works.

Basically I register some kind of accessor delegate of the form:

void accessor(Entity e, Component* c)
{
  // Do stuff, like save the component struct for that entity in 
a file

}

And it is stored in the entity class in an array of delegates:

  void delegate(Entity e, void* c);


Here's a very basic implementation:

class Entity
{
public:
  void register(Component)(Component val);
  void unregister(Component)();
  Component getComponent(Component)();

  alias CompDg = void delegate(Entity e, void* c);

  void accessor(Component)(void delegate(Entity e, Component* c) dg) @property
  {
auto compId = getComponentId!Component;
mCompDg[compId] = cast(CompDg)dg;
  }


  // Iterating over the components
  void iterate()
  {
// For every possible components
foreach (compId; 0..mMaxNbComponents)
  if (isRegistered(compId))
if (mCompDg[compId] !is null)
  mCompDg[compId](this, getComponentStoragePtr(compId));
  }

private:
  void* getComponentStoragePtr(uint compId);
  bool isRegistered(uint compId);

  void[]   mComponentStorage;  // Flat contiguous storage of all 
components

  CompDg[] mCompDg;
  // ...
}

unittest
{
  // registering
  auto e = new Entity;
  e.register!int(42);
  e.register!string("loire");
  e.register!float(3.14);

  assert(e.getComponent!float() == 3.14); // that is OK

  e.accessor!int    = (Entity et, int i)    { writefln("%d", i); };
  e.accessor!string = (Entity et, string s) { writefln("%s", s); };
  e.accessor!float  = (Entity et, float f)  { writefln("%f", f); };


  // Print the values
  e.iterate();
}


Re: Linking D code into existing C programs

2016-09-27 Thread Claude via Digitalmars-d
On Tuesday, 27 September 2016 at 08:12:58 UTC, Walter Bright 
wrote:

On 9/27/2016 12:03 AM, Jacob Carlborg wrote:

You're not linking druntime.


That's one issue. The other one is druntime needs to be 
initialized, and calling a random D function won't do that.


We've got to consider that when it is statically linked, and also 
dynamically as well.


Also, if it's statically linked, will the linker pull in only the 
parts of druntime that are actually used by the D library?


For instance, if the D library does not use exceptions, ideally 
it should not link any related code in druntime (it's important, 
especially in embedded software).
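With the GNU toolchain, one way to encourage this (an untested sketch on my part; the flags are standard GCC/binutils ones, and the file names are illustrative) is section-based garbage collection:

```shell
# Compile with each function and data item in its own section
# (gdc shown; other compilers have related options on some platforms).
gdc -c -ffunction-sections -fdata-sections mylib.d -o mylib.o

# Ask the linker to drop unreferenced sections, including unused
# runtime code pulled in from a static archive.
gcc main.o mylib.o libdruntime.a -Wl,--gc-sections -o app

# Inspect which symbols actually survived:
nm --defined-only app
```

Whether this strips everything unused (e.g. the exception-handling machinery) depends on how intertwined the runtime's internal references are.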


Re: [OT] Punctuation of "Params:" and "Returns:"

2016-09-23 Thread Claude via Digitalmars-d
On Thursday, 22 September 2016 at 18:26:15 UTC, Andrei 
Alexandrescu wrote:

Parameters:
r | The range subject to partitioning.

Returns:
Something awesome.


This is incorrect because one is not supposed to punctuate 
sentence fragments as full sentences. Next attempt:


I also prefer that version.

And I think it makes sense even linguistically.

Sentences in English could be formalized as:
  <subject> <verb> <complement> [when/where/why/...].

But sometimes only fragments of sentences are used, when the 
missing part is already implied.

For instance, for the imperative form, we may write

  Alice said to Bob: "Do this!".

"Do this!" is a perfectly valid english sentence, and the subject 
is implied: Bob (and not Alice). Expanded, it would give:


  Alice wants Bob to do this.
Or:
  Alice wants "Bob does this".


So, for comments, it is also ok to do the same.

i.e.

/**
 * Get a coefficient from the 2D matrix. (subject implied: 
function getCoef)

 *
 * Parameters:
 *   r | Row index. (subject implied: parameter r)
 *   c | Column index. (subject implied: parameter c)
 *
 * Returns:
 *   Coefficient fetched at the specified row and column. 
(subject implied: Return value)

 */
float getCoef(int r, int c)




Meta-programming: iterating over a container with different types

2016-09-23 Thread Claude via Digitalmars-d-learn
It's more a general meta-programming question than D-specific 
stuff.


For an entity-component engine, I am trying to do some run-time 
composition: registering a certain type (component) to a 
structure (entity).


I would like to know how I can iterate an entity and get the 
different type instances registered to it.


Here is a simple example to clarify:

class Entity
{
  void register(Component)(Component val);
  void unregister(Component)();
  Component getComponent(Component)();

  //iterating over the components (?!??)
  void opApply(blabla);
}

unittest
{
  // registering
  auto e = new Entity;
  e.register!int(42);
  e.register!string("loire");
  e.register!float(3.14);

  assert(e.getComponent!float() == 3.14); // that is OK

  // the code below is wrong, but how can I make that work??
  foreach (c; e)
  {
writeln(c); // it would display magically 42, "loire" and 3.14
// and c would have the correct type at each step
  }
}


Re: [OT] Music to Program Compilers To

2016-09-23 Thread Claude via Digitalmars-d

There are a lot of metal fans in here. :)

When I code, I listen most of time psychedelic 70's stuff:
- Pink Floyd
- Led Zeppelin
- Lynyrd Skynyrd
- Queen
- ACDC
- Black Sabbath
- Rammstein
- Robert Wyatt
- Renaud

Most of the time, I prefer the early stuff... When band members don't 
die choking on their own vomit or in an airplane crash, they 
tend to release uninspired music as they grow older.


So more early stuff (radio Moscow):
https://www.youtube.com/watch?v=LSOqHrYXWSY


Re: LDC with ARM backend

2016-08-09 Thread Claude via Digitalmars-d-learn

On Monday, 1 August 2016 at 06:21:48 UTC, Kai Nacke wrote:

Thanks! That's really awesome!

Did you manage to build more complex applications? EABI is a 
bit different from the hardfloat ABI and there may be still 
bugs lurking in LDC...


Unfortunately no, I didn't have the time.

I was interested in building audio applications in D, but I do 
not use much float arithmetic on embedded systems (I prefer 
integer/fixed-point over it). Anyway I have some pieces of DSP 
algorithms I could try out in float (FFT, biquads, FIR etc).


I could also try to run the phobos test suite on the board I use, 
if there is an easy way to do it (I'm pretty new to all this).


On Tuesday, 2 August 2016 at 04:19:15 UTC, Joakim wrote:
Sorry, I didn't see this thread till now, or I could have saved 
you some time by telling you not to apply the llvm patch on 
non-Android linux.  Note that you don't have to compile llvm 
yourself at all, as long as the system llvm has the ARM backend 
built in, as it often does.


Ah ok. I am totally new to llvm. I did it the hard way. :)


Re: LDC with ARM backend

2016-07-21 Thread Claude via Digitalmars-d-learn

On Thursday, 21 July 2016 at 10:30:55 UTC, Andrea Fontana wrote:

On Thursday, 21 July 2016 at 09:59:53 UTC, Claude wrote:
I can build a "Hello world" program on ARM GNU/Linux, with 
druntime and phobos.

I'll write a doc page about that.


It's a good idea :)


Done:

https://wiki.dlang.org/LDC_cross-compilation_for_ARM_GNU/Linux

I based it totally on Kai's previous page for LDC on Android.

It lacks the build for druntime/phobos unit-tests.


Re: LDC with ARM backend

2016-07-21 Thread Claude via Digitalmars-d-learn

On Wednesday, 20 July 2016 at 16:10:48 UTC, Claude wrote:

R_ARM_TLS_IE32 used with non-TLS symbol ??


Oh, that was actually quite obvious... If I revert the first 
Android patch on the LLVM sources and rebuild, it works!


I can build a "Hello world" program on ARM GNU/Linux, with 
druntime and phobos.

I'll write a doc page about that.


Re: LDC with ARM backend

2016-07-20 Thread Claude via Digitalmars-d-learn
So I'm trying to build druntime correctly. I fixed some 
problems here and there, but I still cannot link with 
libdruntime-ldc.a:



/opt/arm-2009q1/bin/arm-none-linux-gnueabi-gcc loire.o 
lib/libdruntime-ldc.a -o loire



I get many errors like:

/opt/arm-2009q1/bin/../lib/gcc/arm-none-linux-gnueabi/4.3.3/../../../../arm-none-linux-gnueabi/bin/ld:
 
lib/libdruntime-ldc.a(libunwind.o)(.text._D3ldc2eh6common61__T21eh_personality_commonTS3ldc2eh9libunwind13NativeContextZ21eh_personality_commonUKS3ldc2eh9libunwind13NativeContextZ3acbMFNbNcNiNfZPS3ldc2eh6common18ActiveCleanupBlock[_D3ldc2eh6common61__T21eh_personality_commonTS3ldc2eh9libunwind13NativeContextZ21eh_personality_commonUKS3ldc2eh9libunwind13NativeContextZ3acbMFNbNcNiNfZPS3ldc2eh6common18ActiveCleanupBlock]+0x38):
 R_ARM_TLS_IE32 used with non-TLS symbol 
_D3ldc2eh6common21innermostCleanupBlockPS3ldc2eh6common18ActiveCleanupBlock




R_ARM_TLS_IE32 used with non-TLS symbol ??


Re: LDC with ARM backend

2016-07-20 Thread Claude via Digitalmars-d-learn

I think my cross-compile LDC is fine.

I tried to build this D program:

/// loire.d
int main()
{
return 42;
}



However, the run-time is not (neither is Phobos); most of the 
linker issues come from druntime. So...


I wrote my own druntime. Here's the code:

/// dummyruntime.d
// from rt/sections_elf_shared.d, probably don't need it right 
now...

extern(C) void _d_dso_registry(void* data)
{
}

// from rt/dmain2.d, just call my D main(), ignore args...
private alias extern(C) int function(char[][] args) MainFunc;

extern (C) int _d_run_main(int argc, char **argv, MainFunc 
mainFunc)

{
return mainFunc(null);
}



I built everything:

# Compilation
./bin/ldc2 -mtriple=arm-none-linux-gnueabi -c loire.d
./bin/ldc2 -mtriple=arm-none-linux-gnueabi -c dummyruntime.d
# Link
/opt/arm-2009q1/bin/arm-none-linux-gnueabi-gcc loire.o 
dummyruntime.o -o loire



And I ran it successfully on my ARM target:

$> loire
$> echo $?
42



So now I know I have a proper LDC cross-compiler! :)

I'm just missing a proper druntime and phobos for GNU/Linux ARM.


Re: LDC with ARM backend

2016-07-19 Thread Claude via Digitalmars-d-learn

On Friday, 15 July 2016 at 15:24:36 UTC, Kai Nacke wrote:
There is a reason why we do not distribute a binary version of 
LDC with all LLVM targets enabled. LDC still uses the real 
format of the host. This is different on ARM (80bit on 
Linux/x86 vs. 64bit on Linux/ARM). Do not expect that 
applications using real type work correctly.
(The Windows version of LDC uses 64bit reals. The binary build 
has the ARM target enabled.)


Regards,
Kai


Hello Kai,

Thanks for your answer.

From the link https://wiki.dlang.org/Build_LDC_for_Android , I 
did exactly the same steps described in section "Compile LLVM" 
(patch applied).


In the section "Build ldc for Android/ARM", I did much the same. 
I applied the patch ldc_1.0.0_android_arm, but changed 
runtime/CMakeLists.txt; instead of using the Android-specific stuff, 
I did:



Line 15:
set(D_FLAGS   -w;-mtriple=arm-none-linux-gnueabi  
  CACHE STRING  "Runtime build flags, separated by ;")


Line 505:
#
# Set up build targets.
#
set(RT_CFLAGS "-g")
set(CMAKE_SYSTEM_NAME Linux)
set(CMAKE_C_COMPILER 
/opt/arm-2009q1/bin/arm-none-linux-gnueabi-gcc)
set(CMAKE_CXX_COMPILER 
/opt/arm-2009q1/bin/arm-none-linux-gnueabi-c++)




On the command line, I aliased DMD to /usr/bin/dmd and ran cmake 
as described...


Afterwards, I ran make for ldc2, phobos2-ldc and druntime-ldc, but 
I did not apply the patches on phobos and the runtime. It looked like 
the patch introduced some static compilation towards Android, so I 
thought it would not apply to my needs.


So here' what I get if I do a "ldc2 -version":


LDC - the LLVM D compiler (1.0.0):
  based on DMD v2.070.2 and LLVM 3.8.1
  built with DMD64 D Compiler v2.071.1
  Default target: x86_64-unknown-linux-gnu
  Host CPU: westmere
  http://dlang.org - http://wiki.dlang.org/LDC

  Registered Targets:
arm - ARM
armeb   - ARM (big endian)
thumb   - Thumb
thumbeb - Thumb (big endian)



I can strictly compile a "hello world" program:
./bin/ldc2 -mtriple=arm-none-linux-gnueabi test.d

I get the expected "test.o"

But I don't know how to link it. I don't have "clang". I tried to 
link it with the gcc from the GNU ARM toolchain with 
libdruntime-ldc.a, libldc.a and libphobos2-ldc.a, but it fails 
miserably: many undefined symbols (pthread, and some other 
OS-related stuff).


Re: new cpuid is ready for comments

2016-07-15 Thread Claude via Digitalmars-d-announce

On Friday, 15 July 2016 at 15:05:53 UTC, Ilya Yaroshenko wrote:

On Friday, 15 July 2016 at 12:10:22 UTC, Claude wrote:

[...]


Yes! Finally we need the final code for LDC, it support ARM 
assembler.

http://wiki.dlang.org/LDC_inline_assembly_expressions


[...]


No, I have not. Thank you for the link!


I uploaded the code I used from Yeppp there:
https://github.com/claudemr/cputest

It's a bit of a mess as it is, but it works, and it looks like 
there is a thorough test of what features ARM hardware may 
provide.


Re: LDC with ARM backend

2016-07-15 Thread Claude via Digitalmars-d-learn

On Friday, 15 July 2016 at 15:02:15 UTC, Radu wrote:

Hi,
LDC on Linux ARM is fairly complete. I think it is a fully 
supported platform (all tests are passing). Check in 
https://wiki.dlang.org/Compilers the LDC column.


This is the closest to a tutorial for cross-compiling builds: 
https://wiki.dlang.org/Build_LDC_for_Android


Great, I didn't see it.

However I don't use Android on my ARM target; I have an 
arm-none-linux-gnueabi toolchain.


I think I have to change the Android patch, keep the "80-bit 
float" stuff, and modify the build scripts somehow to use the GNU 
version.


LDC with ARM backend

2016-07-15 Thread Claude via Digitalmars-d-learn

Hello,

I would like to cross-compile a D program from a x86 machine to 
an ARM target.


I work on GNU/Linux Ubuntu 64-bit.
I have an ARM gcc toolchain, which I can use to make programs on 
an ARM Cortex-A9 architecture running a Linux kernel 3.4.11+.


I managed to build and install LLVM 3.8.1 with LDC 1.1-alpha1, 
which works fine to build and run native programs.


I read some documentation here:
http://wiki.dlang.org/Minimal_semihosted_ARM_Cortex-M_%22Hello_World%22

... but it seems to target bare-metal programming, whereas I 
already have GNU/Linux running on my ARM target and want to use 
it. It does not explain how to get an LDC with an ARM backend.


So I'm a bit confused about the current state of LDC+ARM. 
For example, is the run-time fully ported to ARM/Linux?


What would be the steps to have an LDC cross-compiling to ARM?

Thanks


Re: new cpuid is ready for comments

2016-07-15 Thread Claude via Digitalmars-d-announce

On Monday, 11 July 2016 at 16:30:44 UTC, Ilya Yaroshenko wrote:

ARM contributors are wanted!


What exactly do you need for ARM architecture?
I have an ARM target and I have tried to run a library[1] to get 
some CPU info.


I hacked in the source files to just build and link the CPU info 
code. I used an arm-gcc toolchain (I don't know how to 
cross-compile using ldc... yet). And it's built on a native Linux 
OS.


And it seems to work.
Here's the output I have after running the code:
https://gist.github.com/claudemr/98b5a4bb83e8d967b31a3044e4d81c0f

Most of it is C code. There is some ARM assembly code, some of 
which is inlined, and some is in a ".S" file to test specific 
instructions.


Is it what you're looking for?

[1] It's called "Yeppp", and looks like what you want to do with 
MIR: http://www.yeppp.info/

Have you come across it?


Re: new cpuid is ready for comments

2016-07-15 Thread Claude via Digitalmars-d-announce

Intel Core i5:
https://gist.github.com/claudemr/aa99d03360dccc65d7967651011dc8ca


Re: Dynamic arrays, emplace and GC

2016-07-05 Thread Claude via Digitalmars-d-learn

On Tuesday, 5 July 2016 at 12:43:14 UTC, ketmar wrote:

On Tuesday, 5 July 2016 at 10:04:05 UTC, Claude wrote:

So here's my question: Is it normal???


yes. `ubyte` arrays by definition cannot hold pointers, so GC 
doesn't bother to scan 'em.


Ah OK. I tried using a void[size] static array and it seems to work 
without having to use GC.addRange().
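A minimal sketch of the two options, as I understand them (my own illustration; the sizes and buffers are placeholders):

```d
import core.memory : GC;

void main()
{
    // Option 1: ubyte[] storage is allocated with the NO_SCAN
    // attribute, so the GC ignores pointers stored inside it;
    // GC.addRange makes the GC scan that memory anyway.
    auto raw = new ubyte[](size_t.sizeof * 4);
    GC.addRange(raw.ptr, raw.length);
    scope (exit) GC.removeRange(raw.ptr);

    // Option 2: void[] storage is conservatively scanned by
    // default, so no addRange call is needed.
    auto scanned = new void[](size_t.sizeof * 4);

    assert(raw.length == scanned.length);
}
```

The void[] route avoids the bookkeeping, at the cost of more conservative scanning (and potentially more false pointers).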


Dynamic arrays, emplace and GC

2016-07-05 Thread Claude via Digitalmars-d-learn

Hello,

I've been working on some kind of allocator using a dynamic array 
as a memory pool. I used emplace to allocate class instances 
within that array, and I was surprised to see I had to use 
GC.addRange() to keep the GC from destroying stuff referenced in that 
array.


Here's a chunk of code[1]:

struct Pool(T)
{
public:
T alloc(Args...)(Args args)
{
mData.length++;
import core.memory : GC;
//GC.addRange(mData[$ - 1].data.ptr, mData[$ - 
1].data.length);

import std.conv : emplace;
auto t = emplace!T(mData[$ - 1].data, args);
return t;
}

private:
struct Storage
{
ubyte[__traits(classInstanceSize, T)] data;
}

Storage[] mData;
}

class Foo
{
this(int a)
{
aa = a;
}
~this()
{
import std.stdio; writefln("DTOR");
aa = 0;
}
int aa;
}

class Blob
{
this(int b)
{
foo = new Foo(b);
}

Foo foo;
}

void main()
{
Pool!Blob pool;

Blob blob;
foreach(a; 0 .. 1)
{
blob = pool.alloc(6);
}
while(true){}
import std.stdio; writefln("END");
}

Basically Blob instances are allocated in the array using 
emplace. And Blob creates references to Foo. If I comment out 
GC.addRange(), I see that Foo destructor is called by the GC[2]. 
If I leave it uncommented, the GC leaves the array alone.


So here's my question: Is it normal???
I thought that allocating memory in a dynamic array using 
"mData.length++;" was GC-compliant (unlike 
core.stdc.stdlib.malloc()), and that I did not have to explicitly use 
GC.addRange().



[1] I left out alignment management code. It's not the issue here.
[2] I used the helpful destructor tracker function of p0nce 
there: https://p0nce.github.io/d-idioms/#GC-proof-resource-class


Re: static if enhancement

2016-06-27 Thread Claude via Digitalmars-d

On Monday, 27 June 2016 at 11:05:49 UTC, cym13 wrote:
What's unintuitive about it (real question)? It would make it 
behave more like a standard if and early returns are very 
common, well understood and good practice:


void func(int* somepointer) {
if (somepointer == null)
return;
[rest of the code]
}


From this perspective, you are right. But mixing "return" or 
"throw" which belong to the "run-time world", and "static if" 
which is "compile-time" feels wrong somehow to me.


But maybe I'm biased by my C experience. It's like mixing 
"return" and "#if".


Some other negative responses in this thread may also give a 
better explanation of what I mean.


The other thing is that introducing it will not break any code 
(a priori). But if the change is made, and it turns out not to 
pay off or leads to some abuse (particularly regarding readability), 
going backward would introduce code breakage.


Also, I agree with QAston comments about errors on unreachable 
code.


Maybe "static return" is a good compromise?...


Re: static if enhancement

2016-06-27 Thread Claude via Digitalmars-d

On Saturday, 25 June 2016 at 11:27:01 UTC, cym13 wrote:

We are talking about early returns (checking for something and
returning as soon as possible) which are a well-known and 
efficient
way to reduce indentation levels and increase modularity. You 
can't
come and say "What? You want it to work? Man, you should have 
thought
your code better!": the very reason this subject is discussed 
is to

allow people to deal with indentation levels!


I didn't want to sound like that. But my post was unclear. 
Though, in the example, it looks nice, and I understand one would 
want such a feature. I think it could be abused in some other 
cases and make the code less readable.


I had in mind some cross-platform libraries written in C with #if, 
#elif and #endif all over the place (used with compiler 
switches). And I reckon the current "static if" is a good tool 
that fits well with the rest of the language to properly mark 
different sections of code and have different implementations. 
The fact that it adds another indentation level could be seen as an 
opportunity to better modularize code (that's what I meant).


So I find that special case (having code after a "static if() 
{return;}" treated like in the "else" block) a bit unintuitive, 
and could be prone to bad practice and confusion.


Re: static if enhancement

2016-06-25 Thread Claude via Digitalmars-d
On Friday, 24 June 2016 at 15:24:48 UTC, Andrei Alexandrescu 
wrote:
Does anyone else find this annoying? 
https://issues.dlang.org/show_bug.cgi?id=16201 -- Andrei


My 2 cents. I don't find that annoying at all. It's perfectly 
normal IMHO.


It may introduce an additional indentation level for the second 
part, but I reckon it is normal to have those 2 instruction 
blocks at the same level. If we were to introduce a new keyword 
for "static if else", why not do the same for the run-time "if 
else" (to be coherent)?


And if some code has too many indentation levels, then it 
probably means it should be better modularized, hence I'd suggest 
splitting it into several sub-functions; it will be more 
readable/maintainable.


Re: Interest in Paris area D meetups?

2016-06-23 Thread Claude via Digitalmars-d

On Monday, 13 June 2016 at 21:34:45 UTC, Guillaume Chatelet wrote:

Sounds good to me.
How about next Wednesday (15th) at "Bière et Malt" (4 rue 
Poissonnière

in the 2nd district) at say 19:00?


In case someone else wants to join. This has been postponed to 
next week. Wednesday 22nd same place, same time.


We were 3 and that was a great meeting.

The last time I met people I didn't know from the Internet was 
probably 10 years ago, when I met up with a girl from a dating 
website. And you know how it works: you chat away pretending 
you're not there to seduce her (well, for the first 30 minutes) and 
suddenly the magic happens ... or not at all.


Well, that was EXACTLY the same. We met up at the bar, drank a 
beer and talked, pretending we didn't know about D, and suddenly: 
"What? Did you just mention the D programming language?? What 
were the odds that 3 people would meet randomly in a big old town like 
Paris and find out they fell in love with the same programming 
language? Oh my God...".


...

We plan to meet up roughly once a month, so let's stay tuned... :)


Re: Interest in Paris area D meetups?

2016-06-12 Thread Claude via Digitalmars-d
On Thursday, 9 June 2016 at 17:35:32 UTC, Guillaume Chatelet 
wrote:

On Thursday, 9 June 2016 at 16:27:41 UTC, Claude wrote:
On Thursday, 9 June 2016 at 09:11:05 UTC, Guillaume Chatelet 
wrote:

Sounds good to me.
How about next Wednesday (15th) at "Bière et Malt" (4 rue 
Poissonnière

in the 2nd district) at say 19:00?


Ok, great!


FYI my email address: chatelet.guillaume at gmail


Fine, did you receive my email?


Re: Interest in Paris area D meetups?

2016-06-09 Thread Claude via Digitalmars-d
On Thursday, 9 June 2016 at 09:11:05 UTC, Guillaume Chatelet 
wrote:

Sounds good to me.
How about next Wednesday (15th) at "Bière et Malt" (4 rue 
Poissonnière

in the 2nd district) at say 19:00?


Ok, great!


Re: Interest in Paris area D meetups?

2016-06-08 Thread Claude via Digitalmars-d
On Wednesday, 1 June 2016 at 20:33:40 UTC, Guillaume Chatelet 
wrote:

On Wednesday, 1 June 2016 at 19:25:13 UTC, Claude wrote:
On Wednesday, 18 May 2016 at 15:05:21 UTC, Guillaume Chatelet 
wrote:

I got inspired by Steven's thread :)
Anyone in Paris interested in D meetups?


Sorry for the later reply, but yes, I'd be interested by a 
meetup in Paris. Anyone else?


Two sounds like a good start ^__^
We can start with a beer somewhere :)


Yeah great! I'm up for a beer.
I work near Gare de l'Est. We can have a drink after work 
(I'm available around 19h) and share what we think about 
that evil auto-decode.

Is it ok for you ?


Re: Interest in Paris area D meetups?

2016-06-01 Thread Claude via Digitalmars-d
On Wednesday, 18 May 2016 at 15:05:21 UTC, Guillaume Chatelet 
wrote:

I got inspired by Steven's thread :)
Anyone in Paris interested in D meetups?


Sorry for the later reply, but yes, I'd be interested by a meetup 
in Paris. Anyone else?


Re: SIGUSR2 from GC interrupts application system calls on Linux

2016-05-27 Thread Claude via Digitalmars-d

On Thursday, 26 May 2016 at 20:10:57 UTC, ikod wrote:

Will it not hang in the loop if you send it SIGINT?
Looks like not, but is strange for me.


Yes, I had the same feeling the first time I came across that.

I remember why we had to use that loop in C: when we were using 
gdb to do some debugging, it would interrupt the sys call, which 
was not desired.


We use that in production code, I've never had any problem with 
it.


Re: SIGUSR2 from GC interrupts application system calls on Linux

2016-05-26 Thread Claude via Digitalmars-d

On Thursday, 26 May 2016 at 18:44:22 UTC, ikod wrote:
Is there any recommended workaround for this problem? Is this a 
bug?


I don't think it's a bug. Even without a GC, on GNU/Linux OS, I 
would enclose the receive function (or any blocking system 
function like poll etc) in a do-while loop testing specifically 
for EINTR. I had the same kind of problem in C.


import core.stdc.errno : errno, EINTR;

int r;

do {
    // Retry the blocking call if it was interrupted by a signal.
    r = s.receive(buffer);
} while (r < 0 && errno == EINTR);


Re: Killing the comma operator

2016-05-12 Thread Claude via Digitalmars-d

int x;
while (scanf("%d", &x), x != 0) // until user inputs 0
{
    // do something with x
}

Does anybody think that this is a useful case of comma operator?


I think it's not a safe way to write code. The only time I use 
comma in C (outside for loops) is when I write some clever macros 
to do some boilerplate. Comma-operator may have some use in C, 
but I reckon it makes no sense in D and adds some confusion.


So I join the majority: KILL IT!!

(WITH ACID...)


Re: Pointer top 16 bits use

2016-05-09 Thread Claude via Digitalmars-d

On Saturday, 7 May 2016 at 06:08:01 UTC, Nicholas Wilson wrote:
In Dicebot's DConf talk he mentioned that it is possible to use 
the top 16 bits for tagging pointers and the associated risks.


Regarding the GC not seeing a pointer if the top 16 bits are 
used, whats to stop us from changing the GC to (or adding an 
option to) ignore those bits when checking?


I imagine that it would cause a lot more false pointers, but 
when we get precise GC that problem would go away.


I would not rely on it *at all* actually, even now. I do not 
think it would make D portable to every system (let's not only 
consider x86 architecture).
We don't know how a system would choose to map physical memory 
and hardware devices to virtual memory.


Re: Walter's Famous German Language Essentials Guide

2016-05-03 Thread Claude via Digitalmars-d
LOL. Well, every language has its quirks - especially with the 
commonly used words (they probably get munged the most over 
time, because they get used the most), but I've found that 
French is far more consistent than English - especially when 
get a grammar book that actually explains things rather than 
just telling you what to do. English suffers from having a lot 
of different sources for its various words. It's consistent in 
a lot of ways, but it's a huge mess in others - ...


Several years ago, I read "Frankenstein" by Mary Shelley (in 
English), and I was surprised to see that the English used in 
that novel had a lot of French-sounding words (like "to 
continue", "to traverse", "to detest", "the commencement", etc.), 
which are now seldom used even in literature. There were very few 
verb constructions like "get up", "come on", "carry out", etc...


... though I for one think that the fact that English has no 
gender like languages such as French and German is a huge win.


Yes, I think the difficulty in English is mostly pronunciation, 
and irregular verbs (which many languages actually enjoy: French, 
German, Spanish...).


Re: Walter's Famous German Language Essentials Guide

2016-05-02 Thread Claude via Digitalmars-d
The same goes with French. e.g. body parts which one would 
think would be obviously masculine are feminine (and vice 
versa).


Funny, it's actually true. I've never figured that out... :)

In French, there are 2 special cases about gender: "orgue" 
(organ) and "amour" (love) are masculine in the singular and 
feminine in the plural.

Re: Clarification about compilation model and the mapping of package names to directory.

2016-04-29 Thread Claude via Digitalmars-d
"supplying import module paths manually" ? How would that even 
work? Suppose you want to compile main.d separately. You'd need 
to supply to the compiler an option for each module with 
non-standard path, such as `xxx.foo=foo.d;xxx.bar=bar.d`, etc..


No, the command-line option could be much simpler:
dmd -c myimplementation.d -Imyinterface.d

Just like when you compile and link both myimplementation.d and 
myinterface.d (where it resolves import locations on its own), 
except you tell the compiler it just needs to look for 
declarations within myinterface.d, without actually compiling it.




Re: Clarification about compilation model and the mapping of package names to directory.

2016-04-29 Thread Claude via Digitalmars-d

On Friday, 29 April 2016 at 14:50:34 UTC, Dicebot wrote:

On 04/29/2016 05:32 PM, Bruno Medeiros wrote:
Imagine you have a module "main.d" with the line `import 
xxx.foo;`, and you have a file "foo.d" with the module 
declaration `module xxx.foo;`. Now imagine the files are laid 
out like this:


src/main.d
src/foo.d

Is this valid, should these files compile? Currently DMD will 
fail if each file is compiled individually (because it expects 
`xxx.foo` to be in "src/xxx/foo.d"), *but* it will succeed if 
all files are supplied to DMD in the same command line.


AFAIK yes, this is valid and shall compile (and there are quite 
some projects relying on it). There is simply a missing feature 
for separate compilation case to allow supplying import module 
paths manually.


That means that, using the layout above, there is no way to 
compile "main.d" individually, or a file named "bar.d" 
containing "import xxx.foo;"??


What if I want to make a library out of bar.d without fully 
compiling foo.d?


I understand there must be some analysis of foo.d to get its 
interface (basically generating the ".di" version of it), but it 
should not have to compile its implementation.
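For what it's worth, a sketch of a possible workaround with today's dmd, 
using the real -H/-Hf switch to generate the interface file by hand (the 
layout and file names are the ones from the example above; untested, so 
treat it as an assumption):

```shell
# Sketch (assumption): generate the .di interface of src/foo.d by hand,
# place it where `import xxx.foo;` is expected, then compile main.d alone.
mkdir -p src/xxx
dmd -c -o- -Hf src/xxx/foo.di src/foo.d   # declarations only, no object file
dmd -c -Isrc src/main.d                   # resolves xxx.foo via src/xxx/foo.di
```

This still runs full semantic analysis on foo.d once, but after that 
main.d (or bar.d) can be compiled separately against the .di alone.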


Re: Walter's Famous German Language Essentials Guide

2016-04-28 Thread Claude via Digitalmars-d

On Wednesday, 27 April 2016 at 02:57:47 UTC, Walter Bright wrote:
To prepare for a week in Berlin, a few German phrases is all 
you'll need to fit in, get around, and have a great time:


1. Ein Bier bitte!
2. Noch ein Bier bitte!
3. Wo ist der WC!


4. Ich bin ein Berliner!

That may get you free beers, if you're an American citizen and 
you manage to build a time machine to get back to 1963 (I suggest 
using bidirectional ranges to return to the present, or a glass 
of fresh water).


Re: Procedural drawing using ndslice

2016-02-12 Thread Claude via Digitalmars-d-learn

Thanks for your replies, John and Ali. I wasn't sure I was clear.

I'm going to try to see if I can fit Ali concept (totally lazy, 
which is what I was looking for) within ndslices, so that I can 
also use it in 3D and apply window() function to the result and 
mess around with it.


Procedural drawing using ndslice

2016-02-11 Thread Claude via Digitalmars-d-learn

Hello,

I come from the C world and try to do some procedural terrain 
generation, and I thought ndslice would help me to make things 
look clean, but I'm very new to those semantics and I need help.


Here's my problem: I have a C-style rough implementation of a 
function drawing a disk into a 2D buffer. Here it is:



import std.math;
import std.stdio;

void draw(ref float[16][16] buf, int x0, int y0, int x1, int y1)
{
    float xc = cast(float)(x0 + x1) / 2;
    float yc = cast(float)(y0 + y1) / 2;
    float xr = cast(float)(x1 - x0) / 2;
    float yr = cast(float)(y1 - y0) / 2;

    float disk(size_t x, size_t y)
    {
        float xx, yy;
        xx = (x - xc) / xr;
        yy = (y - yc) / yr;
        return 1.0 - sqrt(xx * xx + yy * yy);
    }

    for (int y = 0; y < 16; y++)
    {
        for (int x = 0; x < 16; x++)
        {
            buf[x][y] = disk(x, y);
            writef(" % 3.1f", buf[x][y]);
        }
        writeln("");
    }
}

void main()
{
    float[16][16] buf;

    draw(buf, 2, 2, 10, 10);
}


The final buffer contains values where positive floats are the 
inside of the disk, negative are outside, and 0's represents the 
perimeter of the disk.


I would like to simplify the code of draw() to make it look more 
something like:


Slice!(stuff) draw(int x0, int y0, int x1, int y1)
{
    float disk(size_t x, size_t y)
    {
        // ...same as above
    }

    return Slice!stuff.something!disk.somethingElseMaybe;
}

Is it possible?

Do I need to back the slice with an array, or could the slice 
be used lazily and modified as I want using some other drawing 
functions?


auto diskNoiseSlice = diskSlice.something!AddNoiseFunction;

... until I do a:

auto buf = mySlice.array;

... where the buffer would be allocated in memory and filled with 
the values according to all the drawing primitives I used on the 
slice.
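Not an answer from the thread, but a minimal sketch of the fully lazy 
idea using plain Phobos ranges (no ndslice); the disk formula is the one 
from draw() above, while the names (diskLazy, the threshold step in main) 
are made up for illustration:

```d
import std.algorithm : map;
import std.math : sqrt;
import std.range : iota;
import std.stdio : writefln;

// Lazily evaluated disk of size w x h: nothing is stored; each value is
// computed on demand when the range is iterated (or .array is called).
auto diskLazy(int x0, int y0, int x1, int y1, size_t w, size_t h)
{
    immutable float xc = (x0 + x1) / 2.0f;
    immutable float yc = (y0 + y1) / 2.0f;
    immutable float xr = (x1 - x0) / 2.0f;
    immutable float yr = (y1 - y0) / 2.0f;

    // Each row is itself a lazy range of floats.
    return iota(h).map!(y =>
        iota(w).map!(x =>
            1.0f - sqrt(((x - xc) / xr) ^^ 2 + ((y - yc) / yr) ^^ 2)));
}

void main()
{
    auto disk = diskLazy(2, 2, 10, 10, 16, 16);

    // Further lazy transforms can be chained, e.g. thresholding:
    auto mask = disk.map!(row => row.map!(v => v >= 0 ? 1 : 0));
    foreach (row; mask)
        writefln("%(%d %)", row);
}
```

The same composition style carries over to ndslice, where an 
array-backed or index-mapped slice can replace the nested iota/map.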


Re: What are you planning for 2016?

2016-01-06 Thread Claude via Digitalmars-d

I'll do more work on my OpenGL 3D engine/game in D.

Later, I'd like to either:
1/ write an RT streaming framework (kind of like GStreamer, but 
without the GLib non-sense).


or:
2/ write a bare-metal OS in D and assembly targeting an ARMv5 
compatible architecture (on Raspberry Pi maybe). Something very 
simple, just for the proof of concept.


I'd really like to see D more on the embedded/system language 
side. I think it has some very good potential, but I'm not sure 
it's quite there yet. I want to check that out for myself.


At work, we target ARM with GNU/Linux; I don't know if I can 
write a full process in D and integrate a D compiler into our 
build chain easily. Why not? Some other guy already managed to do 
that with Vala... :-S


Re: D Cannot Be Used for Real Time / Low Latency Systems? - Of course it can!

2015-12-18 Thread Claude via Digitalmars-d
Bottom line is, if you are competent enough, you can be 
successful with D, just like you would be if you were using 
C/C++. D's superior compile-time metaprogramming allows you to 
express zero-cost abstractions, giving you the edge that makes 
things more enjoyable.


There are several open-source kernels written in D that are 
proof of the above:

https://github.com/Vild/PowerNex
https://github.com/Bloodmanovski/Trinix
https://github.com/xomboverlord/xomb
Adam Ruppe has a chapter about bare-metal programming with D in 
his Cookbook.


I do think D may be a perfectly valid real-time language. And 
indeed, I believe that the GC is not an issue (you can disable 
it, question solved).


However, is D a proper Embedded System language? I'm not sure 
it's quite there yet.
Plain C rules the world of embedded systems. All the RTES 
programmers I've met are reluctant to even use C++.


If you cannot program a 16-bit PIC in D, then D will not replace 
C at all (but is it meant to?).


The open-source kernels above are targeted at PC architecture.
I know some work has been done to make bare-metal OSes targeting 
ARM. I don't know the state of those projects, and I'd love to 
make my own once I have time (based on a Raspberry Pi, for 
instance).


To validate D as a proper Real-Time Embedded System language, one 
would have to write a bare-metal OS on an ARMv5 chip (for 
example): write the Interrupt Service Routines in assembly 
calling D functions, program the MMU and the various hardware 
blocks (UART, INTC, USB, etc.).
And the API of such an OS would benefit from the expressive 
power of D (templates, traits, UDAs, etc.) and not just be a 
C-style clone, with efficiency similar to C (at least the same 
CPU load).


Re: Segfault while compiling simple program

2015-12-16 Thread Claude via Digitalmars-d-learn

I tested it on linux (64-bit distro), and it segfaults as well:

-

$ echo "struct S { ushort a, b; ubyte c, d; } struct T { ushort 
e; S s; }" > test.d


$ dmd -v test.d
binary    dmd
version   v2.069.0
config    /etc/dmd.conf
parse     test
importall test
import    object (/usr/include/dmd/druntime/import/object.d)
semantic  test
Segmentation fault

$ uname -r
3.13.0-37-generic

$ cat /etc/issue
Linux Mint 17.1 Rebecca \n \l

$ dmd --version
DMD64 D Compiler v2.069.0
Copyright (c) 1999-2015 by Digital Mars written by Walter Bright

-

It doesn't crash if compiled in 32-bit:

-

$ dmd -v -m32 test.d
binary    dmd
version   v2.069.0
config    /etc/dmd.conf
parse     test
importall test
import    object (/usr/include/dmd/druntime/import/object.d)
semantic  test
semantic2 test
semantic3 test
code      test
gcc test.o -o test -m32 -L/usr/lib/i386-linux-gnu -Xlinker 
--export-dynamic -Xlinker -Bstatic -lphobos2 -Xlinker -Bdynamic 
-lpthread -lm -lrt -ldl
/usr/lib/gcc/x86_64-linux-gnu/4.8/../../../../lib32/crt1.o: In 
function `_start':

(.text+0x18): undefined reference to `main'
collect2: error: ld returned 1 exit status
--- errorlevel 1

-

Using ubyte[2] or swapping the fields also solves the issue, as 
mentioned above.


I also reproduce the issue using DMD v2.069.2.

So it may be good to add that information to the bug report.




Re: Define methods using templates

2015-01-08 Thread Claude via Digitalmars-d-learn
I just saw this post, which is essentially the same question as 
Basile Burg's. I hope that a college (in France?) is teaching D 
and that this is a homework assignment. Cool stuff! :)


Maybe using templates to create properties is a bit overkill in 
this example, but I could not solve what I thought would be a 
very simple and straightforward template use case (I'm 
originally an embedded RT systems C/asm developer).


I'm doing this for a personal project, a 3D engine. As I know 
little about C++/Java or other OO languages, I thought I would do 
it directly in D, which seems very promising to me (but 
unfortunately not taught in France, as far as I know).


Define methods using templates

2014-12-30 Thread Claude via Digitalmars-d-learn
Hello, I'm trying to use templates to define several methods 
(property setters) within a class to avoid some code duplication.

Here is an attempt:

class Camera
{
private:
    Vector4 m_pos;
    float m_fov, m_ratio, m_near, m_far;
    bool m_matrixCalculated;

public:
    void SetProperty(Tin, alias Field)(ref Tin param) @property pure @safe
    {
        Field = param;
        m_matrixCalculated = false;
    }

    alias pos   = SetProperty!(float[], m_pos);
    alias pos   = SetProperty!(Vector4, m_pos);
    alias ratio = SetProperty!(float,   m_ratio);
    alias near  = SetProperty!(float,   m_near);
    alias far   = SetProperty!(float,   m_far);
}

I get this kind of compilation error:
Error: template instance SetProperty!(float[], m_pos) cannot use 
local 'm_pos' as parameter to non-global template 
SetProperty(Tin, alias Field)(ref Tin param)


I don't understand why that error occurs.

And I cannot find any elegant solution (even with mixins) to 
declare a template and then instantiate it in a single line to 
define the methods I want.


Does any of you have an idea?

Thanks


Re: Define methods using templates

2014-12-30 Thread Claude via Digitalmars-d-learn

Thanks Steven and Daniel for your explanations.


mixin template opAssign(alias Field) {
    void opAssign(Tin)(auto ref Tin param) @property pure @safe
    {
        Field = param;
        m_matrixCalculated = false;
    }
}

mixin opAssign!(m_pos) pos;


I tested both the string mixin and opAssign implementations, 
and they work like a charm.
I would have never thought of using both @property and 
opAssign, but it looks like a secure way of doing it, as 
compilation fails nicely if I type a wrong field name.


src/camera.d(58): Error: undefined identifier m_os, did you mean 
variable m_pos?
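For later readers, here is a self-contained sketch of the opAssign 
variant reported working in this thread; Vector4 is stubbed out, the 
getter side is omitted, and the field/property names are the ones from 
the Camera example above:

```d
// Minimal stub standing in for the real vector type.
struct Vector4 { float[4] v; }

class Camera
{
private:
    Vector4 m_pos;
    float m_fov, m_ratio, m_near, m_far;
    bool m_matrixCalculated;

    // Named template mixin: each instantiation contributes an opAssign
    // under the mixin's name, so `cam.ratio = x` resolves to it.
    mixin template opAssign(alias Field)
    {
        void opAssign(Tin)(auto ref Tin param) @property pure @safe
        {
            Field = param;
            m_matrixCalculated = false;
        }
    }

public:
    mixin opAssign!(m_pos)   pos;
    mixin opAssign!(m_ratio) ratio;
    mixin opAssign!(m_near)  near;
    mixin opAssign!(m_far)   far;
}

void main()
{
    auto cam = new Camera;
    cam.ratio = 16.0f / 9.0f;  // goes through opAssign!(m_ratio)
    cam.near  = 0.1f;
    cam.pos   = Vector4([0f, 0f, 0f, 1f]);
}
```

A typo such as `mixin opAssign!(m_os) os;` fails at compile time with 
the "undefined identifier" error quoted above, which is what makes the 
pattern safe.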