Re: C++ launched its community survey, too

2018-02-28 Thread TheFlyingFiddle via Digitalmars-d

On Wednesday, 28 February 2018 at 20:01:34 UTC, H. S. Teoh wrote:

Just to give some background: at work I spend most of my time 
maintaining legacy systems, adding small features or replacing 
subcomponents. So most of what I do is reading code and making 
minor changes (unless it's buggy code, in which case you get to 
rewrite :))


The idea of sigils is actually not a bad one.  It does, after 
all, have basis in natural languages. But the way it was 
implemented in Perl is, shall we say, rather quirky, leading to 
all sorts of unexpected interactions with other things and 
generally become a cognitive burden in large projects, where a 
disproportionate amount of time is spent fighting with syntax 
rather than getting things done.  (E.g., is it @$x[$y] or $x->y 
or ${x}{y} or ${x->$y}[$z] or something else?)


Perl is a maintenance nightmare. It just takes so much time to 
figure out what the code actually does, particularly if it has 
been changed by multiple people over the years.
That's the main reason Perl is at the top of my most disliked 
languages.



Yeah, that's 10 trains worth of WATs right there. :-D

And the whole thing about == vs === is just one huge WAT.  It 
"makes sense" if you understand why it's necessary, but it just 
begs the question, why, oh why, wasn't the language designed 
properly in the first place so that such monstrosities aren't 
necessary?!


Another personal favorite.
function foo($myArray) {
   return $myArray['test'];
}

$myString = "hello, world";
$test = foo($myString);

echo $test; // prints 'h', because you know, 'test' auto converts to 0.


Now PHP does have many WATs, but it's still simpler to read than 
Perl, so it has an edge over Perl for me at least.


Compared to them, programming in C++ or Java for that matter 
is like a dream.


C++ is hardly any better, actually:

https://bartoszmilewski.com/2013/09/19/edward-chands/


Yeah... C++ is interesting in that way. Mostly what I have seen 
is that for any given project they have a strict policy of how to 
do memory management and error handling. Also it's not really a 
pleasure reading C++ templates :D.


Java... well, Java is a walled garden, a somewhat nice (if 
verbose) one that's somewhat detached from reality, but 
forcefully anchored to it by big commercial interests.  As a 
language it's not too bad; the core language is pretty nicely 
designed -- in an idealistic, ivory tower sort of sense.


But in practice, it's more of a Write Once, Debug Everywhere 
deal.  The verbosity and IDE dependence sucks.  The OO 
fanaticism also sucks (singleton classes IMO is a big code 
smell, esp. when it's really just syntactic lip service to the 
OO religion for what's essentially global functions).  The lack 
of real generics is total suck, and a showstopper for me. Even 
the half-hearted attempt at generics that they shoehorned into 
it later doesn't fully save it from being sucky.  The only 
saving grace of Java is the extensive library support -- you 
can find just about anything you might imagine as a library, 
which saves you from dealing with the suckier parts of the 
language. Most of the time.


The awful and nice part of Java is that since you're forced to do 
things a certain way, things will actually be done that way -- in 
this case OOP and all of that. It's nice when you read the code, 
awful when you actually have to write it.



Only thing missing is the ability to do arbitrary system calls 
during compilation :D


AKA compile-my-source-code-and-get-a-trojan-installed-on-your-system. :-D
If this (mis)feature ever gets merged into DMD, give me a call right 
away -- I have a lot of source code to sell you. For free. :-D


Why must you ruin my perfect plan of getting a free botnet! :D




Re: C++ launched its community survey, too

2018-02-28 Thread TheFlyingFiddle via Digitalmars-d

On Wednesday, 28 February 2018 at 10:15:13 UTC, Zoadian wrote:
On Wednesday, 28 February 2018 at 00:53:16 UTC, psychoticRabbit 
wrote:
It should have gone to the Java developers - cause they 
deserved it.


C++ is the worst thing to have ever come out of computer 
science!


yes c++ is not the greatest language (thats why i use D). but 
java is the worst language i've ever used.


My least preferred language of all time would be Perl, with PHP 
(5.3) coming in at a close second :)


Perl is just... I get it, you can write somewhat nicer bash 
scripts in the language. But people... they just go with it and 
build these huge crazy systems that somehow 10+ years later 
become my problem. The designer of Perl was clearly insane. Let's 
store everything in fun global "invisible" variables, let 
functions not specify their arguments... and let's have the first 
character of a variable ($, @, %) define how it behaves, with fun 
auto conversions everywhere :D.


PHP is better, but there is some really weird stuff in it. Off the 
top of my head: the auto type conversion system. This works, by 
design...


//PHP
$a = 5;
$b = $a * "10 trains";
echo $b; //$b is now 50... Fun and interesting stuff right there

Compared to them, programming in C++ or Java for that matter is 
like a dream. But when I can, I always use D, mainly because 
unlike every other language it has static introspection, CTFE and 
mixins. A whole new world of expressive power is gained by this. 
The only thing missing is the ability to do arbitrary system 
calls during compilation :D
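To give a flavour of what I mean, here is a tiny toy example of my own 
(not from any particular library): a CTFE function builds a string from 
the fields it finds via introspection, and a mixin pastes the generated 
code into the module.

import std.traits : FieldNameTuple;

// Build getter functions for every field of T, at compile time.
string makeGetters(T)()
{
    string code;
    foreach (name; FieldNameTuple!T)
        code ~= "auto get_" ~ name ~ "(" ~ T.stringof ~ " t) { return t." ~ name ~ "; }\n";
    return code;
}

struct Point { int x, y; }
mixin(makeGetters!Point());   // generates get_x and get_y

void main()
{
    assert(get_x(Point(1, 2)) == 1);
    assert(get_y(Point(1, 2)) == 2);
}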





Re: Function template declaration mystery...

2018-02-28 Thread TheFlyingFiddle via Digitalmars-d-learn
On Wednesday, 28 February 2018 at 17:47:22 UTC, Robert M. Münch 
wrote:

Hi, I'm lost reading some code:

A a;

auto do(alias f, A)(auto ref A _a){
alias fun = unaryFun!f;
return ...
...
}

How is this alias stuff working? I mean what's the type of f? 
Is it an anonymous function which then gets checked to be 
unary? How is it recognized in the code using the function 
template?



This function can be called with code like this:

a.do((myType) {...myCode...});
do(a, (myType) {...myCode...});

What puzzles me here is that the template function only has 
one parameter (_a), but I somehow can get my myCode into it. The 
code looks like a parameter to me. So why isn't it like:


auto do(alias f, A)(auto ref A _a, ??? myCode){...

I'm a bit confused.


Testing this with:

auto foo(alias f, A)(auto ref A a) { return f(a); }

I can call foo either like this:

foo!(x => x + x)(1);
or
1.foo!(x => x + x);

but these will give errors

foo(1, x => x + x); //Error
1.foo(x => x + x); // Error

I don't see how you can get that kind of behavior...
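To spell out what the alias parameter does (a minimal sketch with my own 
names, not code from the original post): the callable travels as a 
compile-time template argument, so the function really does have only one 
runtime parameter.

auto apply(alias f, A)(auto ref A a) { return f(a); }

void main()
{
    assert(apply!(x => x * 2)(3) == 6);   // lambda passed as a template argument
    assert(3.apply!(x => x * 2) == 6);    // same call via UFCS
}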




Re: Cast a 2d static array to a 1d static array. T[s][r] -> T[s*r]

2018-02-27 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 27 February 2018 at 22:17:25 UTC, Jonathan wrote:

On Tuesday, 27 February 2018 at 22:13:05 UTC, Jonathan wrote:
Is it possible to cast a 2d static length array to a 1d static 
length array?


E.g.
int[2][2] a = [[1,2],[3,4]];
int[4]b = cast(int[4])a;

Is not the byte data in memory exactly the same?


*( [pos,size].ptr .cst!(void*) .cst!(int[4]*) )
(using dub `cst` library) or
*( cast(int[4]*)(cast(void*)([pos,size].ptr)) )

Okay, this works but is this the best way?!


This should work
int[4] b = *(cast(int[4]*)a.ptr);
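A quick sanity check of that line (my own test, relying on the fact that 
static arrays are stored contiguously):

void main()
{
    int[2][2] a = [[1, 2], [3, 4]];
    int[4] b = *(cast(int[4]*) a.ptr);   // reinterpret the contiguous storage
    assert(b == [1, 2, 3, 4]);
}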


Re: implicit construction operator

2018-02-27 Thread TheFlyingFiddle via Digitalmars-d

On Monday, 26 February 2018 at 23:33:48 UTC, H. S. Teoh wrote:
Not really a big deal (and auto kind of ruins it) but it would 
make stuff consistent between user types and built in ones.


Not sure what you mean here.  In a user type, if opBinary!"/" 
returns an int, then you still have the same problem.


Yes, you're right, I did not really think that through. Also, if 
you overload opAssign you get implicit conversions in assignment 
expressions...











Re: implicit construction operator

2018-02-26 Thread TheFlyingFiddle via Digitalmars-d

On Monday, 26 February 2018 at 21:30:09 UTC, aliak wrote:

On Monday, 26 February 2018 at 19:32:44 UTC, ketmar wrote:

WebFreak001 wrote:
And if that's also a no no, how about char -> int. Or int -> 
float? Is ok?


Maybe there're some valid arguments, to disallow it 
*completely* though?


Cheers


I would be very happy if char -> int and int -> float was not 
implicit.


This has happened to me enough times that I remember it:

float a = some_int / some_other_int; // oops, I don't think this was what I wanted

contrast with:

float a = some_int.to!float / some_other_int.to!float; // ugly, but at least it's clear


Not really a big deal (and auto kind of ruins it) but it would 
make stuff consistent between user types and built in ones.


Re: Destructor called twice.

2018-02-26 Thread TheFlyingFiddle via Digitalmars-d-learn

On Sunday, 25 February 2018 at 21:35:33 UTC, ketmar wrote:

add postblit debug prints, and you will see.


I get that it will call the postblit since it creates a temporary.

What I expected though was that.

auto s = S(0).foo(1);

Would become something like:

S s; s.__ctor(0).foo(1);

But maybe this would not be consistent behavior?

I'm wondering why it creates the temporary in the first place.




Re: Aliasing member's members

2018-02-26 Thread TheFlyingFiddle via Digitalmars-d-learn

On Monday, 26 February 2018 at 20:50:35 UTC, Kayomn wrote:
I've been experimenting with D's Better C mode, and I have a 
question regarding something that I started thinking about 
after watching one of Jonathon Blow's talks on data-oriented 
programming - more specifically the aspect of fake "inheritance"


I have the following code. My question is if it's possible to 
use alias in a similar way to Jonathon's own language Jai and 
its using keyword, referencing the internal Vector2 as 
Player.pos instead of Player.entity.position:


import core.stdc.stdio : printf;

uint idCounter = 0;

struct Vector2 {
double x,y;
}

struct Entity {
uint id;
Vector2 position;
}

struct Player {
Entity entity;

alias pos = Entity.position;
}

Player createPlayer(Vector2 position) {
Player player;
player.entity.id = idCounter++;
player.entity.position = position;

return player;
}

int main(string[] args) {
Player player = createPlayer(Vector2(50.0,50.0));

printf(
"[Player]\nid: %d\nPosition: %lf x %lf\n",
player.entity.id,
player.pos.x,
player.pos.y
);

return 0;
}


Don't think you can alias member variables directly.

You could do this though:

struct Player {
    Entity entity;

    ref auto pos() inout { return entity.position; }
}

Which will give you most of what you want, although if you want 
to take the address of pos you have to call it first:

auto addr = &player.pos();
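A small usage sketch (reusing the Entity and Vector2 types from the question):

unittest
{
    Player player;
    player.pos.x = 50.0;                      // writes through the ref returned by pos()
    assert(player.entity.position.x == 50.0);

    auto addr = &player.pos();                // call first, then take the address of the result
    assert(addr is &player.entity.position);
}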





Destructor called twice.

2018-02-25 Thread TheFlyingFiddle via Digitalmars-d-learn
When writing some code to setup properties in a chain function 
manner I ran into some unexpected behavior with destructors.


Example:

struct S {
int a, b;

ref S foo(int b) {
this.b = b;
return this;
}

this(int ab) {
this.a = this.b = ab;
printf("ctor a=%d, b=%d\n", a, b);
}

~this() {
printf("dtor a=%d b=%d\n", a, b);
}


}

void main()
{
auto s0 = S(0).foo(1);
auto s1 = S(1).foo(2).foo(3).foo(4);
auto s2 = S(2);
s2.foo(5).foo(6).foo(7);
}

//Output is
ctor 0
dtor 0 1
ctor 1
dtor 1 4
ctor a=2, b=2
dtor a=2 b=7
dtor 1 4
dtor 0 1


For s0,s1 the destructor is called twice but s2 works as I would 
expect.


Taking a look with dmd -vcg-ast provided this:
void main()
{
S s0 = ((S __slS3 = S(, );) , __slS3).this(0).foo(1);
try
{
	S s1 = ((S __slS4 = S(, );) , 
__slS4).this(1).foo(2).foo(3).foo(4);

try
{
S s2 = s2 = S , s2.this(2);
try
{
s2.foo(5).foo(6).foo(7);
}
finally
s2.~this();
}
finally
s1.~this();
}
finally
s0.~this();
return 0;
}

The two extra dtor calls are not visible here but I guess they 
are caused by the temporary variables that are created and then 
go out of scope directly. Am I doing something wrong or is this a 
bug?






Re: Double link list

2018-02-24 Thread TheFlyingFiddle via Digitalmars-d-learn

On Saturday, 24 February 2018 at 09:48:13 UTC, Joel wrote:
I'm trying some code for practice, but it isn't working 
properly - it prints just one number when printing in reverse.




I think the problem is here:

void popFront() { head = head.next;  if (head !is null) 
head.prev = null; }
void popBack() { tail = tail.prev; if (tail !is null) tail.next 
= null; }


Head and tail will point to the same nodes, and you are setting 
head.prev to null, which makes tail.prev null once you iterate 
backwards far enough. Similarly, if you started iterating 
backwards and then iterated forwards it would not work either.
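A rough sketch of one way around it (assuming a Node type with 
value/prev/next, which is my guess at your layout): keep the iteration 
cursors in the range and leave the node links alone.

struct Node(T) { T value; Node!T* prev, next; }

struct ListRange(T)
{
    Node!T* first, last;   // iteration cursors; the list's own head/tail stay untouched

    bool empty() const { return first is null; }
    ref T front() { return first.value; }
    ref T back()  { return last.value; }
    ListRange!T save() { return this; }

    void popFront()
    {
        if (first is last) first = last = null;   // just consumed the final element
        else first = first.next;
    }

    void popBack()
    {
        if (first is last) first = last = null;
        else last = last.prev;
    }
}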






Re: array/Array: "hard" bounds checking

2018-02-22 Thread TheFlyingFiddle via Digitalmars-d-learn

On Thursday, 22 February 2018 at 12:50:43 UTC, ag0aep6g wrote:

On 02/22/2018 10:39 AM, bauss wrote:
On Thursday, 22 February 2018 at 05:22:19 UTC, TheFlyingFiddle 
wrote:


Eg:

uint a = 3;
int b = -1;

assert(a > b); //No idea what should happen here.


This is what happens:

assert(cast(int)a > b);


Nope. It's `assert(a > cast(uint)b);`.


These two posts kind of proved my point :D. And that is why you 
should never mix signed and unsigned integers. A good thing is 
that dscanner's static analysis will warn you about this stuff (in 
simple cases at least).


Re: array/Array: "hard" bounds checking

2018-02-21 Thread TheFlyingFiddle via Digitalmars-d-learn

On Thursday, 22 February 2018 at 00:34:59 UTC, kdevel wrote:
Is there a D equivalent of the C++ at method? I would like to 
reformulate


repro2.d
---
void main ()
{
   import std.stdio;
   import std.container;
   import std.range;
   auto z = Array!char();
   z.reserve(0xC000_0000);
   z.capacity.writeln;
   z.length.writeln;
   for (uint u = 0; u < 0xC000_0000; ++u)
      z.insert = 'Y';
   int i = -1073741825;
   i.writeln;
   z[i] = 'Q';
   z[i].writeln;
}
---

$ dmd -O -m32 repro2.d
$ ./repro2
3221225472
0
-1073741825
Q

such that it fails like the 64 bit version:

$ dmd -O -m64 repro2.d
$ ./repro2

3221225472
0
-1073741825
core.exception.RangeError@.../dmd2/linux/bin64/../../src/phobos/std/container/array.d(650):
 Range violation

??:? _d_arrayboundsp [0x440d22]
.../dmd2/linux/bin64/../../src/phobos/std/container/array.d:650 
inout pure nothrow ref @nogc @safe inout(char) 
std.container.array.Array!(char).Array.opIndex(ulong) [0x43bb0f]

repro2.d:14 _Dmain [0x43afff]


Well, in a 32-bit program the value 0xBFFF_FFFF (-1073741825) is 
clearly inside the array. The Array class uses a size_t 
internally for storing the length/capacity; that is uint in a 
32-bit program and ulong in a 64-bit program. In the 64-bit case 
the value 0xFFFF_FFFF_BFFF_FFFF (-1073741825) is larger than 
0xC000_0000, so it will be out of bounds.

If you want every negative integer to be out of bounds, the capacity 
cannot be larger than 0x7FFF_FFFF in 32-bit programs.
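To make the 32-bit case concrete (my own illustration of the conversion 
described above):

void main()
{
    // On a 32-bit build size_t is uint, so the negative index is just reinterpreted.
    int i = -1073741825;
    uint u = i;                  // implicit int -> uint conversion, no error
    assert(u == 0xBFFF_FFFF);    // 3221225471, i.e. inside a capacity of 0xC000_0000
}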


But this behavior is strange. Well the really strange/bad part is 
that it's allowed by the compiler in the first place. I would be 
very happy if a user was forced to make an explicit cast for int 
<-> uint conversions. Like we have to do for long -> int 
conversions. Also signed/unsigned comparisons should be strictly 
outlawed by the compiler.


Eg:

uint a = 3;
int b = -1;

assert(a > b); //No idea what should happen here.






Re: Policy-based design in D

2017-02-14 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 14 February 2017 at 06:48:33 UTC, TheGag96 wrote:
Tonight I stumbled upon Andrei's concept of policy-based design 
(https://en.wikipedia.org/wiki/Policy-based_design) and tried 
to implement their example in D with the lack of multiple 
inheritance in mind.


https://dpaste.dzfl.pl/adc05892344f (btw, any reason why 
certificate validation on dpaste fails right now?)


The implementation isn't perfect, as I'm not sure how to check 
members of mixin templates so that you  could verify whether 
print() and message() are actually where they should be. How 
would you do that? Is there any use for this kind of thing in 
D, and if so, what would it be? I've hardly dabbled in OOP 
patterns, but the abstraction seems kinda interesting.


Something like this can be used to check if the mixin has a 
specific member:


template hasMixinMember(alias mixin_, string member) {
  enum hasMixinMember = __traits(compiles, () {
      mixin mixin_ mix;
      static assert(__traits(hasMember, mix, member));
  });
}

struct HelloWorld(alias OutputPolicy, alias LanguagePolicy)
  if(hasMixinMember!(OutputPolicy, "print") &&
 hasMixinMember!(LanguagePolicy, "message"))
{
  mixin OutputPolicy;
  mixin LanguagePolicy;

  void run() {
print(message());
  }
}

Note: This method could fail if you do some compile-time 
reflection black magic inside the mixins.




Could also do this:

struct HelloWorld(alias OutputPolicy, alias LanguagePolicy)
{
  mixin OutputPolicy output;
  mixin LanguagePolicy lang;

  void run() {
output.print(lang.message());
  }
}

If "output" / "lang" does not contain a particular member you 
will get a compile time error at the usage point (although it's 
not the best message).




Re: Yield from function?

2017-01-30 Thread TheFlyingFiddle via Digitalmars-d-learn

On Monday, 30 January 2017 at 11:03:52 UTC, Profile Anaysis wrote:
I need to yield from a complex recursive function to allow 
visualizing what it is doing.


e.g., if it is a tree searching algorithm, I'd like to yield 
for each node so that the current state can be shown visually.


I realize that there are several ways to do this, but a yield 
version without additional threads would be optimal. I don't 
need concurrency or speed, just something simple.


If you don't want to use fibers then an alternative to yields is 
to use callbacks during iteration.


Example:
struct Tree(T)
{
   T value;
   Tree!(T)[] children;
}

void iterDepth(T)(Tree!(T) tree, void delegate(Tree!T) cb)
{
cb(tree);
foreach(child; tree.children)
{
iterDepth(child, cb);
}
}

unittest
{
auto tree = ... //Make the tree somehow
iterDepth(tree, (node)
{
writeln(node.value);
});
}

Callbacks have their own set of problems, but they're one of the 
simpler ways to do this.















Re: Is it possible to "cache" results of compile-time executions between compiles?

2017-01-24 Thread TheFlyingFiddle via Digitalmars-d-learn
On Tuesday, 24 January 2017 at 21:41:12 UTC, Profile Anaysis 
wrote:
On Tuesday, 24 January 2017 at 21:36:50 UTC, Profile Anaysis 
wrote:

...


Maybe with all this talk of the new CTFE engine being 
developed, a similar mechanism can be used optionally? This 
could help with debugging also.


In debug mode, the ctfe mixins are written to disk with a hash, 
if they are not a string themselves. (Could be done with all 
ctfe's, I suppose, but not sure about performance and 
consistency.)


Then debuggers can use the outputted ctfe's for proper analysis, 
line breaking, etc...


Would be nice to have something like this in dmd. Would be even 
better if it could work on templates as well. No more stepping 
through functions filled with static if :).


I think it would be possible to make something like this work too:

template Generator(T...)
{
   string Generator()
   {
  //Complex ctfe with T... that generates strings and nothing 
else.

   }
}

If it's possible to quickly detect changes to all of the T 
arguments, one could cache things in this form too.


For example:

mixin Cache!("some_file_name", Generator, size_t, int, 2,
   "hello", MyStruct(3),
   MyType);

The problem would be detecting changes in the arguments. As long 
as one is able to get a unique hash from each input element it 
should work fine, I think. I guess it would be required to reflect 
over the members of the structs/classes to look up attributes and 
such. If the generation stage is time consuming this might be 
worth it... but it's not gonna be "almost free" like it is for 
DSLs. Basic types should be simple/trivial to detect changes to, 
however (not including functions/delegates -- I don't see how 
those could work without writing mixin code targeted at caching).







Re: Is it possible to "cache" results of compile-time executions between compiles?

2017-01-24 Thread TheFlyingFiddle via Digitalmars-d-learn
On Tuesday, 24 January 2017 at 21:36:50 UTC, Profile Anaysis 
wrote:
On Tuesday, 24 January 2017 at 16:49:03 UTC, TheFlyingFiddle 
wrote:
On Tuesday, 24 January 2017 at 16:41:13 UTC, TheFlyingFiddle 
wrote:

Everything turned out so much better than expected :)
Added bonus is that mixin output can be viewed in the 
generated files :D


Could you post your solution?

I suggest we get a real caching module like above that has the 
extra feature of hashing the mixin strings.


This way the caching mechanism can validate if the mixin 
strings have changed. Put the hash in a comment in the output 
file that used to test if the input string has the same hash. 
If it does, simply use the output file, else, regenerate.


Adds some overhead but keeps things consistent.


(Since I'm not sure what Cache!() is, I'm assuming it doesn't 
do this)


This is the solution I threw together:
// module mixin_cache.d
mixin template Cache(alias GenSource, string from, string path)
{
import core.internal.hash;
import std.conv : to;

//Hash to keep track of changes
enum h = hashOf(from);
enum p = path ~ h.to!string ~ ".txt";

//Check if the file exists else suppress errors
//The -J flag needs to be set on dmd else
//this always fails
static if(__traits(compiles, import(p)))
{
//Tell the wrapper that we loaded the file p
//_importing is a magic string
pragma(msg, "_importing");
pragma(msg, p);
mixin(import(p));
}
else
{
//We don't have a cached file so generate it
private enum src = GenSource!(from);
static if(__traits(compiles, () { mixin(src); }))
{
//_exported_start_ tells the wrapper to begin
//outputting the generated source into file p
pragma(msg, "_exported_start_");
pragma(msg, p);
pragma(msg, src);
pragma(msg, "_exported_end_");
}

mixin(src);
}
}

To make this work I wrap dmd in a D script like this
(ignoring some details, as what I've got is not really tested yet):

// dmd_with_mixin_cache.d

void main(string[] args)
{
   auto dmdargs = ... //Fix args etc.
   auto dmd = pipeProcess(dmdargs, Redirect.stderr);

   foreach(line; dmd.stderr.byLine(KeepTerminator.yes))
   {
   if(line.startsWith("_exported_start_")) {
  //Parse file and store source in a file
  //Keep going until _exported_end_
   } else if(line.startsWith("_importing")) {
  //A user imported a file. (don't delete it!)
   } else {
 //Other output from dmd like errors / other pragma(msg, 
...)

   }
   }

   //Files not imported / exported could be stale
   //delete them. Unless we got a compile error from dmd
   //Then don't delete anything.
}

The Cache template and the small wrapper around dmd were all 
that was needed.


Usage is something like this:

template Generate(string source)
{
string Generate()
{
   //Do some complex ctfe here
   //can't wait for the new ctfe engine!
   foreach(i; 0 .. 100_000)
   { }
   return source;
}
}

mixin Cache!(Generate, "Some interesting DSL or similar", __MODULE__);

//some_filename is not really needed but can be nice when browsing the
//mixin code to see where it came from (approximately, anyways)

This is it. If you want I can post a full solution sometime later 
this week but I want to clean up what I have first.





Re: Is it possible to "cache" results of compile-time executions between compiles?

2017-01-24 Thread TheFlyingFiddle via Digitalmars-d-learn
On Tuesday, 24 January 2017 at 16:41:13 UTC, TheFlyingFiddle 
wrote:

Everything turned out so much better than expected :)
Added bonus is that mixin output can be viewed in the generated 
files :D


Re: Is it possible to "cache" results of compile-time executions between compiles?

2017-01-24 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 24 January 2017 at 12:19:33 UTC, ketmar wrote:
On Tuesday, 24 January 2017 at 12:14:05 UTC, TheFlyingFiddle 
wrote:

unittest
{
   enum s = import("myfile");
}

Is there something similar to this for outputting files at 
compile-time?


no. this is by design, so it won't be fixed. sorry. you may use 
build script that will create the code first, and you can dump 
`pragma(msg, …);` to file, but that's all.


Thanks again. Wrapping dmd with a script worked wonders!

Now I'm able to do this:
From the old: (a)
unittest
{
   mixin Sql!(...);
   mixin Sql!(...);
   ...
   mixin Sql!(...);
}

To the new:  (b)
unittest
{
   mixin Cache!(Sql, ...);
   mixin Cache!(Sql, ...);
   ...
   mixin Cache!(Sql, ...);
}

For (a) the build times are always in the 10-30s range.
For (b) the build times are in the 10-30s range the first time and 
sub-second afterwards.

Each query just adds a few ms to the build time now!

Additionally even if dmd crashes with an out of memory exception 
(which still happens with the current ctfe engine) most of the 
queries will have already been built and dmd can be restarted. 
After the restart the built queries are loaded via caching and 
dmd can finish working on the leftovers.


Everything turned out so much better than expected :)



Re: Is it possible to "cache" results of compile-time executions between compiles?

2017-01-24 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 24 January 2017 at 12:19:33 UTC, ketmar wrote:
On Tuesday, 24 January 2017 at 12:14:05 UTC, TheFlyingFiddle 
wrote:

unittest
{
   enum s = import("myfile");
}

Is there something similar to this for outputting files at 
compile-time?


no. this is by design, so it won't be fixed. sorry. you may use 
build script that will create the code first, and you can dump 
`pragma(msg, …);` to file, but that's all.


Yeah, I guess allowing the compiler to produce arbitrary files 
would be problematic from a security standpoint.


Thanks for the pragma idea! Wrapping the build in a script is a 
satisfactory solution for me.


Re: Is it possible to "cache" results of compile-time executions between compiles?

2017-01-24 Thread TheFlyingFiddle via Digitalmars-d-learn
On Tuesday, 24 January 2017 at 11:19:58 UTC, TheFlyingFiddle 
wrote:

Does D have any facilities that could make this possible?


It seems that there is a feature I was unaware of/forgot called 
Import Expressions.


unittest
{
   enum s = import("myfile");
}

Is there something similar to this for outputting files at 
compile-time? Or do I need to keep the source around and write it 
at run-time?


Re: Why is [0] @safer than array.ptr?

2017-01-24 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 24 January 2017 at 11:28:17 UTC, Atila Neves wrote:

void main() {
foo;
}

void foo() @safe {
int[] array;
auto ptr = array.ptr;
}


foo.d(7): Deprecation: array.ptr cannot be used in @safe code, 
use &array[0] instead



&array[0] is incredibly ugly and feels like an unnecessary 
hack, and I'm wondering why it's @safe.


Atila


Just a speculative guess.

@safe unittest
{
   int[] array;

   auto ptr  = array.ptr;   //could be null
   auto ptr2 = &array[0];   //Does a bounds check?
   auto ptr3 = &array[5];   //Should do a bounds check.
}


Is it possible to "cache" results of compile-time executions between compiles?

2017-01-24 Thread TheFlyingFiddle via Digitalmars-d-learn

Context:
I am currently writing a small library that compiles sql strings 
at compile-time and generates query objects.


Something like this:

unittest
{
mixin Sql!(q{
   select feed.url, feed.title
   from users
   join user_feeds as feed
   on users.id = feed.user
   where user.id = {user}
}, DBInterface) UserFeeds;

UserFeeds.Query query;
query.user = 1; //some user

auto con   = //Open db connection
auto feeds = con.execute(query);
foreach(f; feeds)
{
writef("Feed: %s\n\tTitle: %s\n", f.url, f.title);
}
}

The parsing step does some amount of work: it validates the SQL 
query, makes sure that the tables "user" and "user_feed" exist in 
DBInterface, and creates a query having "user" as input and a 
result type containing url and title with appropriate types.


Now the compile times for parsing a small number of queries are 
marginal. However, as the queries become more numerous and 
complex, compile times start to become a problem.


Currently I "solve"tm long compile times by having the 
DBInterface and queries be compiled into a separate lib and 
linking to that lib from the main application.


Having a dedicated db lib fixes compile times for the main 
application. But I am wondering if it's possible to work at a 
lower level of granularity: somehow "cache" the queries between 
compiles, without resorting to each query being compiled into 
its own lib/object file.


Does D have any facilities that could make this possible?





Re: Problems with stored procedure using the mysql-native library.

2017-01-18 Thread TheFlyingFiddle via Digitalmars-d-learn
On Wednesday, 18 January 2017 at 19:40:12 UTC, TheFlyingFiddle 
wrote:

Hi

I am having problems using stored procedures that return 
results.


Update: I am using the SvrCapFlags.MULTI_RESULTS flag when 
initiating the connection and have also tried using the 
SvrCapFlags.MULTI_STATEMENTS flag.


Unfortunately these flags do not solve the problem.




Problems with stored procedure using the mysql-native library.

2017-01-18 Thread TheFlyingFiddle via Digitalmars-d-learn

Hi

I am having problems using stored procedures that return results.

Example procedure:
CREATE PROCEDURE GetUsers()
  SELECT * FROM users;

When I use this procedure from the MySQL command line everything 
works fine. However, when I use it from the mysql-native library 
I get into problems.


int main()
{
   auto con = new Connection(...); //
   auto cmd = Command(con);
   cmd.execProcedure("GetUsers");
}

This returns the error:
MySQL error: PROCEDURE db.getusers can't return a result set in 
the given context.


Other SQL queries work fine over the connection and procedure 
calls that do not return anything also works.


How would I go about solving this problem?



Re: test if the alias of a template is a literal

2016-10-27 Thread TheFlyingFiddle via Digitalmars-d-learn
On Thursday, 27 October 2016 at 14:45:22 UTC, Gianni Pisetta 
wrote:
On Thursday, 27 October 2016 at 14:34:38 UTC, TheFlyingFiddle 
wrote:
On Thursday, 27 October 2016 at 14:04:23 UTC, Gianni Pisetta 
wrote:

Hi all,
but at the moment isStringLiteral will return true even with 
variables of type string. So I searched for a method to check 
if an alias is a literal value, but found nothing. Anyone 
have any clue on how this can be done?


Thanks,
Gianni Pisetta


Not really understanding your problem. Could you include an 
example use that is problematic?


Yea, sorry I missed that.
A really stupid example would be

string var;

alias Sequence = Optimize!( "The", " ", "value", " ", "of", " 
", "var is ", var );


static assert( is( Sequence == AliasSeq!( "The value of var is 
", var ) ) );


writeln( Sequence );

given that you include the code snippet in the first post.

Thanks, Gianni


I think this fixes the problem:

template isStringLiteral(T...) if (T.length == 1) {
    static if (is(typeof(T[0]) == string))
    {
        enum bool isStringLiteral = !__traits(compiles, &T[0]);
    }
    else
    {
        enum bool isStringLiteral = false;
    }
}

Literals do not have an address but variables do.

However note that:
string var;
static assert(!is(AliasSeq!(var) == AliasSeq!(var)));

It still works at run-time though:
string var = " hello ";
alias Seq = Optimize!("This", " is", " a", " variable! ", var);
//pragma(msg, Seq) //Fails to compile at var
//static assert(is(Seq == AliasSeq!("This is a variable!", 
var))); //Also fails

writeln(Seq); //Still works



Re: Meta-programming detecting anonymous unions inside structs.

2016-10-21 Thread TheFlyingFiddle via Digitalmars-d-learn
On Friday, 21 October 2016 at 08:18:58 UTC, rikki cattermole 
wrote:

On 21/10/2016 9:13 PM, TheFlyingFiddle wrote:
On Friday, 21 October 2016 at 07:56:27 UTC, rikki cattermole 
wrote:

You're gonna have to use UDA's for that.


Yes, to do the serialization you're right.

But my use case for this is error reporting. Basically, any 
struct that contains unions without serialization instructions 
cannot be serialized, and I want to make such structures errors.

So when I try to serialize the example struct Foo, it should 
assert with something along the lines of:

"Don't know how to serialize overlapping fields: Foo.integer, 
Foo.floating and Foo.array."


I suppose you could use .offsetof to determine this.


This is what I was looking for. Thanks!
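For the record, a rough sketch of the offsetof approach 
(hasOverlappingFields is my own helper name; it reuses the Foo struct from 
the original post):

import std.traits : FieldNameTuple;

// True if any two fields of T share the same offset, which is what an
// anonymous union produces.
bool hasOverlappingFields(T)()
{
    size_t[] offsets;
    foreach (name; FieldNameTuple!T)
        offsets ~= __traits(getMember, T, name).offsetof;

    foreach (i, a; offsets)
        foreach (b; offsets[i + 1 .. $])
            if (a == b)
                return true;
    return false;
}

static assert(hasOverlappingFields!Foo);   // integer/floating/array overlap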


Meta-programming detecting anonymous unions inside structs.

2016-10-21 Thread TheFlyingFiddle via Digitalmars-d-learn
I am trying to port a serialization library I wrote in Lua some 
time ago. I've ran into a problem relating to types with 
anonymous unions inside.


Given this code:
enum Kind
{
   none = 0,
   array,
   integer,
   floating,
}

struct Foo
{
   Kind type;
   union
   {
   ulong  integer;
   double floating;
   void[] array;
   }
   int nonUnionField;
   //...
}

How can I tell that "integer", "floating" and "array" are part of 
the union while "nonUnionField" is not?


Thanks in advance.


Re: How to do "inheritance" in D structs

2016-10-11 Thread TheFlyingFiddle via Digitalmars-d-learn
On Wednesday, 12 October 2016 at 02:18:47 UTC, TheFlyingFiddle 
wrote:

void foo(ref ABase base)
{
base.ival = 32;
}

This should be:
void foo(ref Base1 base)
{
base.ival = 32;
}



Re: How to do "inheritance" in D structs

2016-10-11 Thread TheFlyingFiddle via Digitalmars-d-learn

On Wednesday, 12 October 2016 at 01:22:04 UTC, lobo wrote:

Hi,

I'm coming from C++ and wondered if the pattern below has an 
equivalent in D using structs. I could just use classes and 
leave it up to the caller to use scoped! as well but I'm not 
sure how that will play out when others start using my lib.


Thanks,
lobo


module A;

class Base1 {
int ival = 42;
}
class Base2 {
int ival = 84;
}

module B;

class S(ABase) : ABase {
string sval = "hello";
}

module C;

import A;
import B;

void main() {
auto s= scoped!(S!Base1); // scoped!(S!Base2)
}


You could use "alias this" to simulate that type of inheritance.

module A;
struct Base1
{
int ival = 42;
}

module B;

struct Base2
{
int ival = 84;
}

module C;
import A, B;

struct S(Base) if(is(Base == struct))
{
Base base;
alias base this;
string sval = "Hello ";
}

void foo(ref ABase base)
{
base.ival = 32;
}

void main()
{
S!Base1 a;
S!Base2 b;
writeln(a.sval, a.ival);
writeln(b.sval, b.ival);
foo(a);
writeln(a.sval, a.ival);
}


Re: Working with ranges: mismatched function return type inference

2016-10-11 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 11 October 2016 at 15:46:20 UTC, orip wrote:

On Tuesday, 11 October 2016 at 13:06:37 UTC, pineapple wrote:
Rewrite `return chain(ints[0..5], ints[8..$]);` as `return 
ints[0..5] ~ ints[8..$];`


The `chain` function doesn't return an array, it returns a 
lazily-evaluated sequence of an entirely different type from 
`int[]`.


Of course it does! I would like the function to return an 
"input range of int", no matter which one specifically. Is this 
possible?


It is, but you will have to use an interface / class to achieve 
this behavior (or use some sort of polymorphic struct). Something 
like this will do the trick:


import std.range;
import std.stdio;

interface IInputRange(T)
{
bool empty();
T front();
void popFront();
}

final class InputRange(Range) if(isInputRange!Range)
: IInputRange!(ElementType!Range)
{
Range r;
this(Range r)
{
this.r = r;
}

bool empty() { return r.empty; }
ElementType!Range front() { return r.front; }
void popFront() { r.popFront; }
}

auto inputRange(Range)(Range r)
{
return new InputRange!Range(r);
}

IInputRange!int foo(int[] ints)
{
import std.range;
if(ints.length > 10) {
return inputRange(chain(ints[0 .. 5], ints[8 .. $]));
} else {
return inputRange(ints);
}
}

void main()
{
auto ir  = foo([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
auto ir2 = foo([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
writeln(ir);
writeln(ir2);
}





Re: bug, or is this also intended?

2016-10-04 Thread TheFlyingFiddle via Digitalmars-d-learn

On Monday, 3 October 2016 at 11:40:00 UTC, deed wrote:

Unexpected auto-concatenation of string elements:

string[] arr = ["a", "b" "c"];// ["a", "bc"], length==2
int[] arr2   = [[1], [2] [3]];// Error: array index 3 is 
out of bounds [2][0 .. 1]
  // Error: array index 3 is 
out of bounds [0..1]


dmd 2.071.2-b2


It comes from C.

In C you can write stuff like:

char* foo = "Foo is good but... "
"... bar is better!";

E.g. static string concatenation for multi-line strings/macros etc.

I think it was implemented this way to provide better support for 
converting C codebases.


Re: How to make rsplit (like in Python) in D

2016-10-01 Thread TheFlyingFiddle via Digitalmars-d-learn

On Saturday, 1 October 2016 at 16:45:11 UTC, Uranuz wrote:
How to make rsplit (like in Python) in D without need for extra 
allocation using standard library? And why there is no 
algorithms (or parameter in existing algorithms) to process 
range from the back. Is `back` and `popBack` somehow worse than 
`front` and `popFront`.


I've tried to write somethig that would work without 
allocation, but failed.

I have searching in forum. Found this thread:
https://forum.dlang.org/post/bug-1030...@http.d.puremagic.com%2Fissues%2F

I tried to use `findSplitBefore` with `retro`, but it doesn't 
compile:


import std.stdio;
import std.algorithm;
import std.range;
import std.string;

void main()
{
string str = "Human.Engineer.Programmer.DProgrammer";

writeln( findSplitBefore(retro(str), ".")[0].retro );
}

Compilation output:
/d153/f534.d(10): Error: template std.range.retro cannot deduce 
function from argument types !()(Result), candidates are:
/opt/compilers/dmd2/include/std/range/package.d(198):
std.range.retro(Range)(Range r) if 
(isBidirectionalRange!(Unqual!Range))



Why I have to write such strange things to do enough 
wide-spread operation. I using Python at the job and there is 
very much cases when I use rsplit. So it's very strange to me 
that D library has a lot of `advanced` algorithms that are not 
very commonly used, but there is no rsplit.


Maybe I missing something, so please give me some advice)


There are two reasons why this does not compile. The first has to 
do with how retro() (and indeed most function in std.range) work 
with utf-8 strings (eg the string type). When working on strings 
as ranges, the ranges internally change the type of ".front" from 
'char' into 'dchar'. This is done to ensure that algorithms 
working on strings do not violate utf-8.


See related thread: 
http://forum.dlang.org/post/mailman.384.1389668512.15871.digitalmars-d-le...@puremagic.com


The second reason has to do with findSplitBefore called with 
bidirectional ranges.


Now to why your code does not compile.

retro takes as input parameter a bidirectional or random access 
range. Because of how strings are handled, the string type 
"string" is not characterized as a random access range but 
instead as a bidirectional range. So the output from retro is a 
bidirectional range.


Calling findSplitBefore with a bidirectional range unfortunately 
does not return results that are also bidirectional ranges. 
Instead findSplitBefore returns forward ranges when bidirectional 
ranges are given as input. I am not entirely sure why.


The chain of calls has the following types.

string is (bidirectional range) -> retro(str) is (bidirectional 
range) -> findSplitBefore is (forward range)


Now the last call to retro is called with a forward range and 
retro needs bidirectional or random access ranges as input.


If you change the line:

string str = "Human.Engineer.Programmer.DProgrammer";

into:
dstring str = "Human.Engineer.Programmer.DProgrammer";

Then the code will compile.

The reason for this is:

dstring is (random access range) -> retro(str) is (random access 
range) -> findSplitBefore is (random access range)


Now the last call to retro gets a random access range as input 
and everyone is happy.
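So the snippet from the question, with just that one change, should compile 
(it follows directly from the chain above):

import std.algorithm : findSplitBefore;
import std.range : retro;
import std.stdio : writeln;

void main()
{
    dstring str = "Human.Engineer.Programmer.DProgrammer";
    writeln(findSplitBefore(retro(str), ".")[0].retro);   // the last component, "DProgrammer"
}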









What does the -betterC switch in dmd do?

2015-11-12 Thread TheFlyingFiddle via Digitalmars-d-learn
The description in dmd help says: omit generating some runtime 
information and helper functions.


What runtime information are we talking about here?  My 
understanding is that it's basically an experimental feature but 
when (if) completed what subset of the language would still be 
usable?


Re: String interpolation

2015-11-10 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 10 November 2015 at 10:41:52 UTC, tired_eyes wrote:
On Tuesday, 10 November 2015 at 10:33:30 UTC, Tobias Pankrath 
wrote:

Ruby:
a = 1
b = 4
puts "The number #{a} is less than #{b}"

PHP:
$a = 1;
$b = 4;
echo "The number $a is less than $b";

D:
???


int a = 1, b = 4;
writefln("The number %s is less than %s", a, b);

You can't do it the ruby / perl / php way in D. It could be 
possible if we had AST macros in the language but as it stands 
now it's not possible to do that.


the closest you could get is something like this:

string s = aliasFormat!("The number $a is less than $b", a, b);

or

aliasWrite!("The number $a is less than $b", a, b);

Not really recommended though as these would end up creating lots 
of template bloat.
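For the curious, here is a rough sketch of how far string mixins can take 
you. The interp helper below is my own toy, not anything in Phobos, and it 
only handles the simple $name case:

import std.ascii : isAlphaNum;

// Builds, at compile time, a call like: text("The number ", a, " is less than ", b)
string interp(string s)()
{
    string code = `text("`;
    bool inVar = false;
    foreach (c; s)
    {
        if (c == '$') { code ~= `", `; inVar = true; }
        else if (inVar && !(c == '_' || isAlphaNum(c))) { code ~= `, "` ~ c; inVar = false; }
        else code ~= c;
    }
    code ~= inVar ? `)` : `")`;
    return code;
}

unittest
{
    import std.conv : text;
    int a = 1, b = 4;
    auto s = mixin(interp!"The number $a is less than $b");
    assert(s == "The number 1 is less than 4");
}

The mixin() at the call site is still visible, so it is not real 
interpolation, but it avoids passing every variable by hand.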









Re: Associative arrays

2015-11-09 Thread TheFlyingFiddle via Digitalmars-d-learn

On Monday, 9 November 2015 at 04:52:37 UTC, rsw0x wrote:
On Monday, 9 November 2015 at 04:29:30 UTC, Rikki Cattermole 
wrote:
Fwiw, EMSI provides high quality containers backed by 
std.experimental.allocator.

https://github.com/economicmodeling/containers


I have a question regarding the implementation of the 
economicmodeling hashmap. Why must buckets be a power of two? Is 
it to be able to use the: hash & (buckets.length - 1) for index 
calculations or is there some other reason?
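For reference, the trick mentioned above relies on a general property of 
powers of two (nothing specific to the EMSI code):

size_t bucketIndex(size_t hash, size_t buckets)
{
    assert(buckets != 0 && (buckets & (buckets - 1)) == 0);   // power of two
    return hash & (buckets - 1);   // same result as hash % buckets, but a single AND
}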


Re: Associative arrays

2015-11-09 Thread TheFlyingFiddle via Digitalmars-d-learn

On Monday, 9 November 2015 at 04:52:37 UTC, rsw0x wrote:
On Monday, 9 November 2015 at 04:29:30 UTC, Rikki Cattermole 
wrote:

On 09/11/15 4:57 PM, TheFlyingFiddle wrote:

[...]


Nope.


[...]


As far as I'm aware, you are stuck using e.g. structs to 
emulate AA behavior.
I have a VERY basic implementation here: 
https://github.com/rikkimax/alphaPhobos/blob/master/source/std/experimental/internal/containers/map.d

Feel free to steal.


Fwiw, EMSI provides high quality containers backed by 
std.experimental.allocator.

https://github.com/economicmodeling/containers


Thanks for the suggestions. I also made a hashmap using 
allocators some time ago that I use in place of the built-in 
hashmap for most of my purposes. The syntax of a custom hash map 
is somewhat lacking in comparison to the built-in one, however, 
and I was hoping that I could either make the built-in one work 
with allocators or replace it with my own implementation.


In addition to this I am building a pointer-patching binary 
serializer, and I hoped that I could make it work with the 
built-in AA without requiring too many GC allocations.


The economicmodeling one seems interesting; I'll try it out and 
see if it's better than the one I am currently using.





Re: Is D so powerfull ??

2015-11-08 Thread TheFlyingFiddle via Digitalmars-d

On Sunday, 8 November 2015 at 10:22:44 UTC, FreeSlave wrote:

On Saturday, 7 November 2015 at 14:49:05 UTC, ZombineDev wrote:

basically you don't have technical reasons not to use D :D


What about the lack of proper support for dynamic libraries on 
Windows and OSX? I mean, GC merging is still not implemented, 
right?


Pretty sure GC merging is done via the GC proxy in DLLs/shared 
libraries. But there are lots of other problems. Mainly that 
Phobos cannot be dynamically loaded (or is this fixed?). Last time 
I tried, throwing exceptions did not work either. Also the export 
problems on Windows have not really been fixed (see 
http://wiki.dlang.org/DIP45). Still, I have used DLLs successfully 
for reloading plugins; it can be done, but it's really a pain to 
use as it stands now.


Re: D for TensorFlow-like library

2015-11-08 Thread TheFlyingFiddle via Digitalmars-d

On Sunday, 8 November 2015 at 17:47:33 UTC, Muktabh wrote:
We cannot make D bindings to it because it is a closed source 
project by Google and only a spec like mapreduce will be 
released, so I thought maybe I might try and come up with an 
open source implementation. I was just curious if D would be a 
good choice language for a library like this instead of C++ 
which is used by Google.


Well, if you are going to write it yourself I see no reason why D 
would be any worse a language than C++. You can get the same 
speed, interface with the GPU in pretty much the same way, etc. 
You could probably do a lot at compile time to simplify writing 
kernels in D. From my point of view D is simpler than C++ as well 
(no headers, sane metaprogramming, etc.), so that should help the 
implementation.


It does seem to be a huge undertaking, however, since TensorFlow 
seems to be a very complex library. But if you feel confident in 
this domain then I would say go for it. It would be very cool to 
have something like this in D.


Re: Preferred behavior of take() with ranges (value vs reference range)

2015-11-08 Thread TheFlyingFiddle via Digitalmars-d-learn

On Monday, 9 November 2015 at 02:14:58 UTC, Jon D wrote:
Here's an example of the behavior differences below. It uses 
refRange, but same behavior occurs if the range is created as a 
class rather than a struct.

--Jon


This is an artifact of struct-based ranges being value types. 
When you use take, the range gets copied into another structure 
that is also a range but limits the number of elements you take 
from it.


Basically: take looks something like this: (simplified)

struct Take(Range)
{
   size_t count;
   Range  range;

   @property ElementType!Range front() { return range.front; }
   @property bool empty() { return count == 0 || range.empty; }
   void popFront() { count--; range.popFront; }
}

Code like this:

auto fib1 = ...

// Here fib1 gets copied into first5.
auto first5 = Take(5, fib1);

So later, when you perform actions on first5, you no longer act 
on fib1 but on the copied range inside of first5. Hence you don't 
see any consumption of fib1's elements.


However, when you use a refRange / a class, the Take range will 
hold a reference / pointer to the actual range. So now you're no 
longer working on a copy of the range but on the range itself, 
and as you reuse the same range you will see that consumption has 
occurred.
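A minimal way to see both behaviours side by side (my own example, not the 
code from the original post):

import std.range : recurrence, refRange, take;

void main()
{
    auto fib = recurrence!((a, n) => a[n - 1] + a[n - 2])(1, 1);

    foreach (_; fib.take(3)) {}    // take copies the struct, so fib is untouched
    assert(fib.front == 1);

    auto byRef = refRange(&fib);   // wraps a pointer to fib
    foreach (_; byRef.take(3)) {}  // now the pops go through to fib itself
    assert(fib.front == 3);        // 1, 1, 2 have been consumed
}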


If you want a more in-depth explanation, there were two talks at 
DConf this year that (in part) discussed this topic. 
(https://www.youtube.com/watch?v=A8Btr8TPJ8c, 
https://www.youtube.com/watch?v=QdMdH7WX2ew=PLEDeq48KhndP-mlE-0Bfb_qPIMA4RrrKo=14)


Associative arrays

2015-11-08 Thread TheFlyingFiddle via Digitalmars-d-learn
I have a few questions about the pseudo built in associative 
arrays.


1. Is it possible to have the built in associative array use a 
custom allocator from std.experimental.allocator to service it's 
allocation needs?


2. A while ago I read on the newsgroup a while back that there 
was a plan to make it possible for a user to swap out the 
standard associative array implementation by modifying druntime 
and or implementing some linker functions. Have this been done 
yet? And if so what must I do to swap the implementation?


Re: Metaprogramming in D - From a beginner's perspective

2015-11-08 Thread TheFlyingFiddle via Digitalmars-d

On Sunday, 8 November 2015 at 21:03:44 UTC, maik klein wrote:
Here is the blog post 
https://maikklein.github.io/2015/08/11/Metaprogramming-D/


And the discussion on reddit: 
https://www.reddit.com/r/programming/comments/3s1qrt/metaprogramming_in_d_from_a_beginners_perspective/


Interesting read!

The Engine class reminds me of a similar thing I did a while back 
when I was creating a simple entity system.


Re: Align a variable on the stack.

2015-11-06 Thread TheFlyingFiddle via Digitalmars-d-learn

On Friday, 6 November 2015 at 11:38:29 UTC, Marc Schütz wrote:

On Friday, 6 November 2015 at 11:37:22 UTC, Marc Schütz wrote:
Ok, benchA and benchB have the same assembler code generated. 
However, I _can_ reproduce the slowdown albeit on average only 
20%-40%, not a factor of 10.


Forgot to add that this is on Linux x86_64, so that probably 
explains the difference.




It turns out that it's always the first tested function that's 
slower. You can test this by switching benchA and benchB in 
the call to benchmark(). I suspect the reason is that the OS 
is paging in the code the first time, and we're actually 
seeing the cost of the page fault. If you run a second round of 
benchmarks after the first one, that one shows more or less 
the same performance for both functions.


I tested swapping the functions around on Windows x86 and I still 
get the same slowdown with the default initializer. Both 
functions still run at basically the same speed on Windows x64. 
Interestingly enough, the slowdown disappears if I add another 
float variable to the structs. This causes the assembly to change 
to using different instructions, so I guess that is why. Also, it 
only seems to affect small structs with floats in them. If I 
change the members to int, both versions run at the same speed on 
x86 as well.


Re: Align a variable on the stack.

2015-11-05 Thread TheFlyingFiddle via Digitalmars-d-learn
On Thursday, 5 November 2015 at 21:24:03 UTC, TheFlyingFiddle 
wrote:
On Thursday, 5 November 2015 at 21:22:18 UTC, TheFlyingFiddle 
wrote:
On Thursday, 5 November 2015 at 11:14:50 UTC, Marc Schütz 
wrote:

~10x slowdown...


I forgot to mention this but I am using DMD 2.069.0-rc2 for x86 
windows.


I reduced it further:

struct A { float x, y, z ,w; }
struct B
{
   float x=float.nan;
   float y=float.nan;
   float z=float.nan;
   float w=float.nan;
}

void initVal(T)(ref T t, ref float k) { pragma(inline, false); }

void benchA()
{
   foreach(float f; 0 .. 1000_000)
   {
  A val = A.init;
  initVal(val, f);
   }
}

void benchB()
{
   foreach(float f; 0 .. 1000_000)
   {
  B val = B.init;
  initVal(val, f);
   }
}

int main(string[] argv)
{
   import std.datetime;
   import std.stdio;

   auto res = benchmark!(benchA, benchB)(1);
   writeln("Default:  ", res[0]);
   writeln("Explicit: ", res[1]);

   readln;
   return 0;
}

Also, I am using dmd -release -boundscheck=off -inline

The pragma(inline, false) is there to prevent it from removing 
the assignment in the loop.


Re: Align a variable on the stack.

2015-11-05 Thread TheFlyingFiddle via Digitalmars-d-learn

On Thursday, 5 November 2015 at 11:14:50 UTC, Marc Schütz wrote:
On Thursday, 5 November 2015 at 03:52:47 UTC, TheFlyingFiddle 
wrote:
Can you publish two compilable and runnable versions of the 
code that exhibit the difference? Then we can have a look at 
the generated assembly. If there's really different code being 
generated depending on whether the .init value is explicitly 
set to float.nan or not, then this suggests there is a bug in 
DMD.


I created a simple example here:

struct A { float x, y, z ,w; }
struct B
{
   float x=float.nan;
   float y=float.nan;
   float z=float.nan;
   float w=float.nan;
}

void initVal(T)(ref T t, ref float k)
{
pragma(inline, false);
t.x = k;
t.y = k * 2;
t.z = k / 2;
t.w = k^^3;
}


__gshared A[] a;
void benchA()
{
A val;
foreach(float f; 0 .. 1000_000)
{
val = A.init;
initVal(val, f);
a ~= val;
}
}

__gshared B[] b;
void benchB()
{
B val;
foreach(float f; 0 .. 1000_000)
{
val = B.init;
initVal(val, f);
b ~= val;
}
}


int main(string[] argv)
{
import std.datetime;
import std.stdio;

auto res = benchmark!(benchA, benchB)(1);
writeln("Default: ", res[0]);
writeln("Explicit: ", res[1]);

return 0;
}

output:

Default:  TickDuration(1637842)
Explicit: TickDuration(167088)

~10x slowdown...




Re: Align a variable on the stack.

2015-11-05 Thread TheFlyingFiddle via Digitalmars-d-learn
On Thursday, 5 November 2015 at 21:22:18 UTC, TheFlyingFiddle 
wrote:

On Thursday, 5 November 2015 at 11:14:50 UTC, Marc Schütz wrote:
~10x slowdown...


I forgot to mention this but I am using DMD 2.069.0-rc2 for x86 
windows.





Re: Unittest in a library

2015-11-05 Thread TheFlyingFiddle via Digitalmars-d-learn

On Friday, 6 November 2015 at 03:59:07 UTC, Charles wrote:
Is it possible to have unittest blocks if I'm compiling a 
library?


I've tried having this:

test.d:

class Classy {
unittest { assert(0, "failed test"); }
}


and then build it with `dmd test.d -lib -unittest` and it 
doesn't fail the unittest.


You can test the unittests by using the -main switch.
http://dlang.org/dmd-linux.html#switch-main
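Something along these lines (from memory, so double-check the exact invocation):

dmd -unittest -main test.d
./test    # the injected empty main() lets the unittests run before it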


Re: Align a variable on the stack.

2015-11-05 Thread TheFlyingFiddle via Digitalmars-d-learn

On Friday, 6 November 2015 at 00:43:49 UTC, rsw0x wrote:
On Thursday, 5 November 2015 at 23:37:45 UTC, TheFlyingFiddle 
wrote:
On Thursday, 5 November 2015 at 21:24:03 UTC, TheFlyingFiddle 
wrote:

[...]


I reduced it further:

[...]


these run at the exact same speed for me and produce identical 
assembly output from a quick glance

dmd 2.069, -O -release -inline


Are you running on windows?

I tested on windows x64 and there I also get the exact same speed 
for both functions.


Re: Align a variable on the stack.

2015-11-04 Thread TheFlyingFiddle via Digitalmars-d-learn
On Wednesday, 4 November 2015 at 01:14:31 UTC, Nicholas Wilson 
wrote:

Note that there are two different alignments:
- one to control padding between instances on the stack (arrays)
- one to control padding between members of a struct

align(64) //arrays
struct foo
{
  align(16) short baz; //between members
  align (1) float quux;
}

Your 2.5x speedup is due to aligned vs. unaligned loads and 
stores, which for SIMD-type stuff has a really big effect. 
Basically misaligned stuff is really slow. IIRC there was a 
blog/paper about someone on a uC spending a vast amount of time 
in ONE misaligned integer assignment causing traps and getting 
the kernel involved. Not quite as bad on x86, but still worth 
doing.


As to a less jacky solution I'm not sure there is one.


Thanks for the reply. I did some more checking around and I found 
that it was not really an alignment problem but was caused by 
using the default init value of my type.


My starting type.
align(64) struct Phys
{
   float x, y, z, w;
   //More stuff.
} //Was 64 bytes in size at the time.

The above worked fine, it was fast and all. But after a while I 
wanted the data in a different format, so I started decoding 
positions and the other variables into separate arrays.


Something like this:
align(16) struct Pos { float x, y, z, w; }

This, counter to my limited knowledge of how CPUs work, was much 
slower. Doing the same thing lots of times, touching less memory, 
with fewer branches, should in theory at least be faster, right? 
So after I ruled out bottlenecks in the parser I assumed there 
was an alignment problem, so I did my Aligner hack. This caused 
the code to run faster, so I assumed that was the cause... Naive! 
(There was a typo in the code I submitted to begin with: I used 
a = Align!(T).init and not a.value = T.init.)


The performance hit was actually caused by the line t = T.init, 
no matter if it was aligned or not. I solved the problem by 
changing the struct to look like this:

align(16) struct Pos
{
float x = float.nan;
float y = float.nan;
float z = float.nan;
float w = float.nan;
}

Basically T.init gets explicit values. But... this should be the 
same Pos.init as the default Pos.init, so I really fail to 
understand how this could fix the problem. I guessed the compiler 
generates slightly different code if I do it this way, and that 
this slightly different code avoids some bottleneck in the CPU. 
But when I took a look at the assembly of the function I could 
not find any difference in the generated code...


I don't really know where to go from here to figure out the 
underlying cause. Does anyone have any suggestions?









Align a variable on the stack.

2015-11-03 Thread TheFlyingFiddle via Digitalmars-d-learn

Is there a built in way to do this in dmd?

Basically I want to do this:

auto decode(T)(...)
{
   while(...)
   {
  T t = T.init; //I want this aligned to 64 bytes.
   }
}


Currently I am using:

align(64) struct Aligner(T)
{
   T value;
}

auto decode(T)(...)
{
   Aligner!T t = void;
   while(...)
   {
  t.value = T.init;
   }
}

But is there a less hacky way? From the documentation of align it 
seems I cannot use it for this kind of stuff. Also, I don't want 
to have to use align(64) on my T struct type, since for my use 
case I am decoding arrays of T.


The reason that I want to do this in the first place is that if 
the variable is aligned I get about a 2.5x speedup (I don't 
really know why... I found it by accident).









Re: Is it possible to filter variadics?

2015-11-03 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 3 November 2015 at 23:41:10 UTC, maik klein wrote:

Is it possible to filter variadics for example if I would call

void printSumIntFloats(Ts...)(Ts ts){...}

printSumIntFloats(1,1.0f,2,2.0f);

I want to print the sum of all integers and the sum of all 
floats.



//Pseudo code
void printSumIntFloats(Ts...)(Ts ts){
auto sumOfInts = ts
  .filter!(isInteger)
  .reduce(a => a + b);
writeln(sumOfInts);
...
}

Is something like this possible?


It is possible. I don't think that reduce works on tuples, but you could do something like this:


import std.stdio, std.traits, std.meta;
void printSumIntFloats(Ts...)(Ts ts)
{
   alias integers = Filter!(isIntegral, Ts);
   alias floats   = Filter!(isFloatingPoint, Ts);
   alias int_t    = CommonType!integers;
   alias float_t  = CommonType!floats;

   int_t intres = 0;
   float_t floatres = 0;
   foreach(i, arg; ts)
   {
      static if(isIntegral!(Ts[i]))
         intres += arg;
      else
         floatres += arg;
   }
   writeln(intres);
   writeln(floatres);
}
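A quick usage sketch of the above (assuming at least one integer and one float argument, since CommonType of an empty list is void):

void main()
{
   printSumIntFloats(1, 1.0f, 2, 2.0f);   //prints 3 then 3
}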










Re: foreach loop

2015-11-03 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 3 November 2015 at 15:29:31 UTC, Namal wrote:
Well, I tried it that way, but my count stays 0, same as if I do it in an int function with a return, though I clearly have some false elements in the arr.


You could also use count: 
http://dlang.org/phobos/std_algorithm_searching.html#count


return arr.count!(x => !x);
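A small self-contained sketch of what I mean (assuming arr is a bool[]):

import std.algorithm : count;
import std.stdio;

void main()
{
   bool[] arr = [true, false, true, false, false];
   writeln(arr.count!(x => !x));   //prints 3
}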



Re: Efficiency of immutable vs mutable

2015-11-02 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 3 November 2015 at 03:16:07 UTC, Andrew wrote:
I've written a short D program that involves many lookups into 
a static array. When I make the array immutable the program 
runs faster.  This must mean that immutable is more than a 
restriction on access, it must affect the compiler output. But 
why and how?


Thanks
Andrew


I'm going to speculate a bit here since you did not post any code.

Say you have this code:

static char[4] lookup = ['a', 't', 'g', 'c'];

This lookup table will be in thread-local storage (TLS). TLS is a way to have global variables that are not shared between threads; that is, every thread has its own copy of the variable. TLS variables are not as fast to access as true global variables, however, since the accessing code has to do some additional lookup based on the thread to gain access to the correct copy.


If you change the above code to this:
static immutable char[4] lookup = ['a', 't', 'g', 'c'];

Then you get an immutable array. Since this array cannot change and is the same on all threads, there is no reason to have separate storage for each thread. Thus the above code will create a true global variable that is shared between the threads. So what you are likely seeing is that a variable changed from being TLS to being a shared global. Global access is faster than TLS access, so this is probably what made your code run faster.


If you want a shared global that is not immutable you can do this:
__gshared char[4] lookup = ['a', 't', 'g', 'c'];

Be warned, though: these kinds of globals are not thread safe, so mutation and access should be synchronized if you're using more than one thread.
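If you want to measure the difference yourself, a rough hand-rolled sketch along these lines should do (my own sketch; timings and even the ranking will vary by platform, compiler and flags):

import core.time : MonoTime;
import std.stdio;

char[4] tlsLookup               = ['a', 't', 'g', 'c'];   //TLS by default
immutable char[4] immutLookup   = ['a', 't', 'g', 'c'];   //true global
__gshared char[4] gsharedLookup = ['a', 't', 'g', 'c'];   //true global, mutable

void main()
{
   enum N = 50_000_000;
   char sink;

   auto t0 = MonoTime.currTime;
   foreach (i; 0 .. N) sink = tlsLookup[i & 3];
   auto t1 = MonoTime.currTime;
   foreach (i; 0 .. N) sink = immutLookup[i & 3];
   auto t2 = MonoTime.currTime;
   foreach (i; 0 .. N) sink = gsharedLookup[i & 3];
   auto t3 = MonoTime.currTime;

   writeln("tls:       ", t1 - t0);
   writeln("immutable: ", t2 - t1);
   writeln("__gshared: ", t3 - t2);
   writeln(sink);   //keep the loops from being optimised away entirely
}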










Re: Unionize range types

2015-11-02 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 3 November 2015 at 01:55:27 UTC, Freddy wrote:

Is there any way I can Unionize range Types?
---
auto primeFactors(T)(T t, T div = 2)
{
if (t % div == 0)
{
return t.only.chain(primeFactors(t / div, div));
}
if (div > t)
{
return [];
}
else
{
return primeFactors(t, div + 1);
}
}
---


Simplest way would be to go with a polymorphic approach.
import std.range;

interface IInputRange(T)
{
T front();
bool empty();
void popFront();
}

class InputRange(Range) : IInputRange!(ElementType!Range)
{
   alias T = ElementType!Range;
   Range inner;
   this(Range r) { inner = r; }
   T front() { return inner.front; }
   bool empty() { return inner.empty; }
   void popFront() { inner.popFront(); }
}

Simply wrap the ranges in the InputRange!Range class template and return an IInputRange!T. This will require allocation though, and is not really the way I would tackle the problem.


Or you could use Algebraic from std.variant which is probably 
more what you were looking for. But it's kind of awkward to use 
in the example you posted since it's not really clear what the 
specific range types are.


I would suggest creating a new range for prime factors instead of trying to mix several ranges together. Something like this should do the trick (if I am understanding the problem correctly):


import std.traits : isIntegral;

auto primeFactors(T)(T t) if(isIntegral!T)
{
   //Assumes t >= 1. empty is a flag that popFront sets once the last
   //factor has been produced, so the final factor is not dropped.
   struct Factors
   {
      T value;
      T front;
      bool empty;

      void popFront()
      {
         if(value == 1)
         {
            empty = true;
            return;
         }
         while(value % front != 0)
            front = front + 1;
         value /= front;
      }
   }

   auto f = Factors(t, 2, t <= 1);
   f.popFront();
   return f;
}
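A quick usage sketch (12 should give 2, 2, 3):

void main()
{
   import std.stdio;
   writeln(primeFactors(12));   //[2, 2, 3]
}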









Re: Attributes on parameters in functions.

2015-10-31 Thread TheFlyingFiddle via Digitalmars-d
On Saturday, 31 October 2015 at 12:45:13 UTC, Jacob Carlborg 
wrote:

On 2015-10-30 22:28, TheFlyingFiddle wrote:

In other languages that have Attributes (Java and C# at least)

I can do stuff like this: (Java)

//com.bar.java
interface Bar { /*stuff*/ }
//com.foo.java
class Foo
{
Foo(@Bar int a)
{
   //some stuff
}
}


I don't seem to be able to do this in D. That is I cannot do 
this:


https://github.com/D-Programming-Language/dmd/pull/4783


This looks very promising. I hope something like this gets pulled eventually.


Attributes on parameters in functions.

2015-10-30 Thread TheFlyingFiddle via Digitalmars-d

In other languages that have Attributes (Java and C# at least)

I can do stuff like this: (Java)

//com.bar.java
interface Bar { /*stuff*/ }
//com.foo.java
class Foo
{
   Foo(@Bar int a)
   {
  //some stuff
   }
}


I don't seem to be able to do this in D. That is I cannot do this:

enum Bar;
void foo(@Bar int a) { /* stuff */ }
//Error: basic type expected, not @

Is there a reason that attributes cannot be used on parameters in 
functions?


My current use case for this is that I would like to use UDAs for pattern matching.


auto value = Tuple!(int, "x", int, "y")(1,1);
value.match!(
 (@(0,1) int a, @(0,1) int b) => ...,
 (@isPrime int a, int b) => ...,
 (@(x => (x % 2)) int a, int b) => ...,
 (@Bind!"y" int a) => ...);

Basically match to the first lambda if "x" == 0 | 1 and "y" == 0 
| 1, to the second if a "x" is prime third if "x" is odd, else 
explicitly bind "y" to a.


I know I can technically do Match! or M! instead of the @ symbol to achieve the same effect, which is probably what I will end up doing, but it would be nice if we could use attributes on function parameters.




Re: Static constructors in structs.

2015-10-30 Thread TheFlyingFiddle via Digitalmars-d-learn

On Friday, 30 October 2015 at 21:29:22 UTC, BBasile wrote:
__gshared is mostly usefull on fields (eg public uint a) 
because it prevents a data to be put on the TLS, which in 
certain case reduces the perfs up to 30%. The byte code using a 
global variable that's not __gshared can be incredibly slower !


I have gotten used to using __gshared on fields so much that I just naturally assumed that shared static this() { } would be equivalent to __gshared static this() { } :P. I find they can be very useful for data that is initialized in static constructors and then never changes again (I will have to fix my shared constructors now :S, guess I have been lucky not running into race problems before).


Static constructors in structs.

2015-10-30 Thread TheFlyingFiddle via Digitalmars-d-learn

Is this intended to work?

struct A
{
   __gshared static this()
   {
  //Add some reflection info to some global stuff.
  addReflectionInfo!(typeof(this));
   }
}

I just noticed this works in 2.069; is this intended? I mean, I love it! It makes it possible to do lots of useful mixins for runtime reflection, for example. Just wondering if I can start writing code this way or if it's a regression and is going away. I think this did not work a year ago when I tried doing something like this.


How can I distinguish an enum constant from an actual enum at compile time?

2015-10-30 Thread TheFlyingFiddle via Digitalmars-d-learn

I want to be able to do something like this:

enum a = 32;
enum b { q, w, e, r, t, y }

CtType ctype  = getCtType!(a); // -> Would become 
CtType.enumConstant

CtType ctype1 = getCtType!(b); // -> Would become CtType.enum_



Re: How can I distinguish an enum constant from an actual enum at compile time?

2015-10-30 Thread TheFlyingFiddle via Digitalmars-d-learn

On Friday, 30 October 2015 at 11:46:43 UTC, TheFlyingFiddle wrote:

I want to be able to do something like this:

enum a = 32;
enum b { q, w, e, r, t, y }

CtType ctype  = getCtType!(a); // -> Would become 
CtType.enumConstant

CtType ctype1 = getCtType!(b); // -> Would become CtType.enum_


Never mind, I found out how:

pragma(msg, is(b == enum)); //True
pragma(msg, is(a == enum)); //False.


Re: Static constructors in structs.

2015-10-30 Thread TheFlyingFiddle via Digitalmars-d-learn

On Friday, 30 October 2015 at 20:58:37 UTC, anonymous wrote:

On 30.10.2015 21:23, TheFlyingFiddle wrote:

Is this intended to work?

struct A
{
__gshared static this()
{
   //Add some reflection info to some global stuff.
   addReflectionInfo!(typeof(this));
}
}

I just noticed this works in 2.069, is this intended?


static constructors are supposed to work, yes.

The description is on the class page: 
http://dlang.org/class.html#static-constructor


__gshared doesn't do anything there, though. Use `shared static 
this` instead, if you want the constructor to run only once per 
process, and not once per thread.


I was under the impression that __gshared did the same thing for static constructors. Thanks.


Re: Static constructors in structs.

2015-10-30 Thread TheFlyingFiddle via Digitalmars-d-learn

On Friday, 30 October 2015 at 20:59:46 UTC, Adam D. Ruppe wrote:
On Friday, 30 October 2015 at 20:23:45 UTC, TheFlyingFiddle 
wrote:
But yeah, the struct feature table http://dlang.org/struct.html 
shows them as checked.


I gotta say the language documentation is shaping up nicely.


Re: Option types and pattern matching.

2015-10-27 Thread TheFlyingFiddle via Digitalmars-d

On Tuesday, 27 October 2015 at 07:55:46 UTC, Kagamin wrote:
On Monday, 26 October 2015 at 16:42:27 UTC, TheFlyingFiddle 
wrote:
If you instead use pattern matching as in your example you 
have much better context information that can actually help 
you do something in the case a value is not there.


Probably possible:

Some!T get(T)(Option!T item) {
Some!T r;
//Static guarantee of handling value not present
item match {
None() => {
throw new Exception("empty!");
}
Some(t) => {
r=t;
}
}

return r;
}

Then:
Option!File file;
Some!File s = file.get();


Sure, that would work, but I don't see how it's different from an enforce, since you don't have access to the context where get is invoked, so you can't really do anything with it.


Contrived Example:

void foo()
{
   Option!File worldFile = getAFile("world.json");
   auto world  = parseJSON(worldFile.get());
   Option!File mapFile = getAFile(world["map"]);
   auto map= parseJSON(mapFile.get());
   //Other stuff.
}

Let's say we get a NoObjectException; this tells us that either the world.json file did not exist or the map file did not exist, but get does not have access to that context, so we wouldn't be able to tell which. This next example would fix that.


void foo()
{
   Option!File worldFile = getAFile("world.json");
   enforce(worldFile.hasValue, "Error while loading file: 
world.json");

   auto world = parseJSON(worldFile.get());
   Option!File mapFile = getAFile(world["map"]);
   enforce(mapFile.hasValue, "Error while loading file: " ~ 
world["map"]);

   auto map = parseJSON(mapFile.get());
   //Other stuff
}

Now we know which file failed to load. But we bypassed the 
NoObjectException to do it.


I would prefer this style instead.
void foo()
{
  Option!File worldFile = getAFile("world.json");
  match worldFile {
 Some(value) => {
 auto world  = parseJSON(value);
 Option!File mapFile = getAFile(world["map"]);
 match mapFile {
Some(mapf) => {
   auto map = parseJSON(mapf);
   //Do something here.
},
None => enforce(false, "Failed to load: " ~ 
world["map"]);

 }
 },
 None => enforce(false, "Failed to load: world.json");
   }
}

The reason that I prefer that is not that I like the syntax 
really. It's just that if the only way to get a value is to 
pattern match on it then you are forced to consider the case 
where the value was not there.



Guess a D version without language support would look something 
like:

void foo()
{
   auto worldFile = getAFile("world.json");
   worldFile.match!(
      (File worldf) {
         auto world = parseJSON(worldf);
         auto mapFile = getAFile(world["map"]);
         mapFile.match!(
            (File mapf)
            {
               auto map = parseJSON(mapf);
               //Do stuff;
            },
            (None) => enforce(false, "Failed to load: " ~ world["map"])
         );
      },
      (None) => enforce(false, "Failed to load: world.json")
   );
}

The example here is very contrived. Here we just throw exceptions if a file could not load, and if that is all we do we should just wrap getAFile instead, but I hope you get my point.







Re: What's in a empty class?

2015-10-27 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 27 October 2015 at 21:28:31 UTC, Adam D. Ruppe wrote:
On Tuesday, 27 October 2015 at 21:23:45 UTC, TheFlyingFiddle 
wrote:

I can account for the first thing, a vtable. But that should only cover 4 bytes. What's in the other 4 bytes?


The monitor used for `synchronized`.

(yes, this is something a lot of people want to remove as it is 
rarely all that useful yet you pay the price in all D class 
objects)


I see. Thanks.


Re: Option types and pattern matching.

2015-10-27 Thread TheFlyingFiddle via Digitalmars-d

On Tuesday, 27 October 2015 at 17:48:04 UTC, Meta wrote:
On Tuesday, 27 October 2015 at 15:06:07 UTC, TheFlyingFiddle 
wrote:

This can arguably already be done cleaner in D.

if (auto worldFile = getAFile("world.json"))
{
auto world = parseJSON(worldFile);
if (auto mapFile = getAFile(world["map"))
{
//...
}
else enforce(false, "Failed to load: " ~ world["map"]);
}
else enforce(false, "Failed to load: world.json");

Or even:

auto worldFile = enforce(getAFile("world.json"), "Failed to 
load: world.json");

auto world = parseJSON(worldFile);
auto mapFile = enforce(getAFile(world["map"]), "Failed to load: 
" ~ world["map"]);

//...

From what I've seen in the Rust community, they try to avoid 
using match as it's very syntactically heavy. They have all 
kinds of idioms to avoid doing matches on Option, such as the 
try! macro, unwrap_or, unwrap_or_else, etc.


That being said, pattern matching has been one of my 
most-wanted D features for years.


Yes this is much cleaner. But it does not really force a user to 
consider the empty case.


I mean this would still compile.
auto worldFile = getAFile("world.json");
auto world = parseJSON(worldFile);
auto mapFile   = getAFile(world["map"]);
auto map   = parseJSON(mapFile);

What I was after was a way to ensure at compile time that all accesses to the value in the optional type are considered. From my use of Maybe in Haskell I know this gets annoying quickly, so I don't know if it's a good thing. But at least you would know that in all places optionals are accessed, a handler for the empty case would be present.


What's in a empty class?

2015-10-27 Thread TheFlyingFiddle via Digitalmars-d-learn

With this code:

class A { }
pragma(msg, __traits(classInstanceSize, A));

I get the output 8 (32-bit).
I can account for the first thing, a vtable. But that should only cover 4 bytes. What's in the other 4 bytes?


Re: Option types and pattern matching.

2015-10-26 Thread TheFlyingFiddle via Digitalmars-d

On Monday, 26 October 2015 at 11:40:09 UTC, Edmund Smith wrote:
On Sunday, 25 October 2015 at 06:22:51 UTC, TheFlyingFiddle 
wrote:
You could also emulate constant matching using default 
parameters (albeit with the restriction that they must be after 
any non-default/constant parameters), since the defaults form 
part of the function's type. I tried making something like this 
earlier this summer and it'd check that a given value was first 
equal to the default parameter and match if so, or match if 
there was no default parameter but the types matched.


e.g.
//template ma(tch/g)ic

unittest
{
Algebraic!(string, int, double, MyStruct) v = 5;
string s = v.match!(
(string s = "") => "Empty string!",
(string s) => s,
(int i = 7) => "Lucky number 7",
(int i = 0) => "Nil",
(int i) => i.to!string,
(double d) => d.to!string,
(MyStruct m = MyStruct(15)) => "Special MyStruct value",
(MyStruct m) => m.name, //
() => "ooer");
writeln(s);
}

It's a bit ugly overloading language features like this, but it 
makes the syntax a little prettier.

This does look nicer indeed.


Why not just use a value as an extra argument:
v.match!(
   7, (int i) => "Lucky number 7"
);
I like this; you could go further with this to allow any number of constants.

v.match!(
5, 7,   i => "Was: " ~ i.to!string,
(int i)   => "Was this: " ~ i.to!string);

Or for ranges.
v.match!(
MatchR!(1, 10), i => "Was: " ~ i.to!string, //Matches 1 .. 10
(int i)  => "Was this: " ~ i.to!string);

I'd really like to see proper pattern matching as a 
language-level feature however; for all the emulating it we can 
do in D, it's not very pretty or friendly and optimising it is 
harder since the language has no concept of pattern matching.

One could probably get something like this:

int i = 5;
string s = i.match!(
   5, 7, n => "Five or seven",
   MatchR!(10, 100), n => "Was between ten and a hundred",
   (n) => "Was: " ~ n.to!string);

to fold into something like this:

auto match(T...)(int i)
{
   switch(i)
   {
      case 5: case 7:      return T[2](i);
      case 10: .. case 99: return T[4](i);
      default:             return T[5](i);
   }
}

int i = 5;
string s = match!(/* lambdas and whatnot */)(i);

With some template/CTFE and string mixin magic.

Inlining, constant folding, etc. could probably just reduce it to the equivalent of:

int i = 5;
string s = "Five or seven";

(if there is really good constant folding :P)

It might however generate lots of useless symbols in the resulting code, making the code size larger.

Things like Option (and other ADTs) are lovely, but really need 
good pattern matching to become worthwhile IMO (e.g. Java 
Optional has a get() method that throws on empty, which 
undermines the main reason to use optional -
Another thing that has always bothered me with Optional in 
Java in addition to this is that the optional value itself might 
be null. So to write robust code you first have to check against 
null on the option value :P.


Scala's Option is really nice on the other hand since you 
can/should pattern match).
I don't really see a point in an optional type if you can access the underlying value without first checking if it's there.






Re: Option types and pattern matching.

2015-10-26 Thread TheFlyingFiddle via Digitalmars-d

On Monday, 26 October 2015 at 15:58:38 UTC, Edmund Smith wrote:
On Monday, 26 October 2015 at 14:13:20 UTC, TheFlyingFiddle 
wrote:

On Monday, 26 October 2015 at 11:40:09 UTC, Edmund Smith wrote:
Scala's Option is really nice on the other hand since you 
can/should pattern match).
Don't really see a point in an optional type if can access the 
underlying

value without first checking if it's there.


The key difference with (exhaustive) pattern matching is that 
it *is* the check that the value is there. Pattern matching 
enforces the existence of an on-nothing clause for Optional, 
on-error for Error, on-Leaf and on-Branch for Bintrees, etc.
And even with nice higher-order functions, plain pattern 
matching is quite valuable for finely controlled error/resource 
handling, and I often see it in Rust code as well as Scala (and 
I've seen it used in Haskell occasionally too). A brief, 
contrived example use-case:


What I meant is that I don't really see the point in optionals 
that look something like this:


struct Optional(T)
{
   T value;
   bool empty;

   ref T get()
   {
      import std.exception : enforce;
      enforce(!empty, "Value not present!");
      return value;
   }

   //Stuff...
}

Optional!(int[]) doSomething(...);
void process(int[]);

void foo()
{
Optional!(int[]) result = doSomething(...);
process(result.get());
}

I mean, sure, you get a null check in foo instead of in process, but this style of writing code does not really give you much of an advantage, since you can't really handle the errors much better than you could a null exception.


If you instead use pattern matching as in your example you have 
much better context information that can actually help you do 
something in the case a value is not there.


Re: Option types and pattern matching.

2015-10-25 Thread TheFlyingFiddle via Digitalmars-d

On Sunday, 25 October 2015 at 05:45:15 UTC, Nerve wrote:
On Sunday, 25 October 2015 at 05:05:47 UTC, Rikki Cattermole 
wrote:
Since I have no idea what the difference between Some(_), None 
and default. I'll assume it's already doable.


_ represents all existing values not matched. In this case, 
Some(_) represents any integer value that is not 7. None 
specifically matches the case where no value has been returned. 
We are, in most languages, also able to unwrap the value:


match x {
Some(7) => "Lucky number 7!",
Some(n) => "Not a lucky number: " ~ n,
None => "No value found"
}


You can do something very similar to that. With slightly 
different syntax.


import std.traits;
import std.conv;
import std.variant;
import std.stdio;
struct CMatch(T...) if(T.length == 1)
{
   alias U = typeof(T[0]);
   static bool match(Variant v)
   {
  if(auto p = v.peek!U)
 return *p == T[0];
  return false;
   }
}

auto ref match(Handlers...)(Variant v)
{
   foreach(handler; Handlers)
   {
  alias P = Parameters!handler;
  static if(P.length == 1)
  {
 static if(isInstanceOf!(CMatch, P[0]))
 {
if(P[0].match(v))
   return handler(P[0].init);
 }
 else
 {
if(auto p = v.peek!(P[0]))
   return handler(*p);
 }
  }
  else
  {
 return handler();
  }
   }

   assert(false, "No matching pattern");
}

unittest
{
Variant v = 5;
string s = v.match!(
(CMatch!7) => "Lucky number seven",
(int n)=> "Not a lucky number: " ~ n.to!string,
() => "No value found!");

   writeln(s);
}



Re: Option types and pattern matching.

2015-10-25 Thread TheFlyingFiddle via Digitalmars-d

On Sunday, 25 October 2015 at 14:43:25 UTC, Nerve wrote:
On Sunday, 25 October 2015 at 06:22:51 UTC, TheFlyingFiddle 
wrote:
That is actually freaking incredible. It evaluates to a value, 
unwraps values, matches against the None case...I guess the 
only thing it doesn't do is have compiler-enforced matching on 
all cases. Unless I'm just slow this morning and not thinking 
of other features a pattern match should have.


With some changes to the match function one could enforce that a 
default handler is always present so that all cases are handled 
or error on compilation if it's not.


Something like: (naive way)

auto ref match(Handlers...)(Variant v)
{
   //Default handler must be present and be the last handler.
   static assert(Parameters!(Handlers[$ - 1]).length == 0,
                 "Matches must have a default handler.");
   //...rest of match as before...
}

now

//Would be a compile error.
v.match!((int n) => n.to!string);

//Would work.
v.match!((int n) => n.to!string,
         () => "empty");

Additionally one could check that all return types share a common 
implicit conversion type. And cast to that type in the match.


//Returns would be converted to long before being returned.
v.match!((int n)  => n, //Returns int
 (long n) => n, //Returns long
 ()   => 0);

Or if they don't share a common implicit conversion type return a 
Variant result.


Also the handlers could be sorted so that the more general 
handlers are tested later.


//Currently
v.match!((int n) => n,
 (CMatch!7) => 0,
 () => 0);

This would not really work, since (int n) is tested first, so CMatch!7 would never get called even if the value was 7. But if we sort the incoming Handlers with CMatch instances at the front, then the above would work as the user intended. This would also allow the empty/default case to be in any order.


For even more error checking one could make sure that no CMatch value intersects with another. That way, if there are for example two cases with CMatch!7, an assert error would be emitted.


So:
v.match!((CMatch!7) => "The value 7",
 (CMatch!7) => "A seven value",
 () => "empty");

Would error with something like "duplicate value in match"

Other extensions one could make to the pattern matching:

1. Allow more than one value in CMatch, so CMatch!(5, 7) would mean either 5 or 7.
2. Rust has a range syntax; this could be kind of nice. Maybe RMatch!(1, 10) for that.
3. Add a predicate match that takes a lambda.

//Predicate match.
struct PMatch(alias lambda)
{
   alias T = Parameters!(lambda)[0];
   T value;
   alias value this;

   //Note: to actually hand the matched value to the handler, the match
   //driver above would construct a PMatch, call this on it, and pass the
   //instance to the handler instead of P[0].init.
   bool match(Variant v)
   {
      if(auto p = v.peek!T)
      {
         if(lambda(*p))
         {
            value = *p;
            return true;
         }
      }
      return false;
   }
}

struct RMatch(T...) if(T.length == 2)
{
   alias C = CommonType!(typeof(T[0]), typeof(T[1]));
   C value;
   alias value this;

   bool match(Variant v)
   {
      if(auto p = v.peek!C)
      {
         if(*p >= T[0] && *p < T[1])
         {
            value = *p;
            return true;
         }
      }
      return false;
   }
}

v.match!(
   (RMatch!(1, 10) n) => "Was (1 .. 10): " ~ n.to!string,
   (PMatch!((int x) => x % 2 == 0) n) => "Was even: " ~ n.to!string,
   (PMatch!((int x) => x % 2 == 1) n) => "Was odd:  " ~ n.to!string,
   () => "not an integer");

The PMatch syntax is not the most fun... It can be reduced slightly if you're not using a Variant but a Maybe!T type or a regular old type, too.


The pattern matching can have more static checks, and the syntax can look somewhat better, if we are matching on a Maybe!T type or a regular type instead of a Variant. We could for example make sure that all CMatch/RMatch values have the correct type and (in some limited cases) ensure that all cases are covered without the need for a default handler.


All in all I think that something like this would be a fairly comprehensive library pattern-matching solution, catching many types of programming errors at compile time. It could be fast as well if all the constants and ranges are converted into switch statements (via string mixin magic).


This problem has gained my interest and I plan on implementing 
this sometime this week. I'll post a link to the source when it's 
done if anyone is interested in it.













Re: Option types and pattern matching.

2015-10-25 Thread TheFlyingFiddle via Digitalmars-d
On Sunday, 25 October 2015 at 18:23:42 UTC, Dmitry Olshansky 
wrote:
I humbly believe that D may just add special re-write rule to 
the switch statement in order to allow user-defined switchable 
types. This goes along nicely with the trend - e.g. foreach 
statement works with anything having static range interfaces or 
opApply.


I don't think I understand this, could you elaborate?


Re: Array of templated classes or structs

2015-10-24 Thread TheFlyingFiddle via Digitalmars-d-learn

On Saturday, 24 October 2015 at 15:57:09 UTC, Dandyvica wrote:

Hi guys,

Apart from deriving from the same class and declaring an array of that root class, is there a way to create an array of templates?

This seems not possible since templates are compile-time generated, but just to be sure. For example, it seems logical to get an array of complex numbers, but Complex needs to be declared with a type.


Thanks for any hint.
Structs or classes that are templated will create new types each 
time they are instantiated.


struct S(T) { /*stuff*/ }
static assert(!is(S!int == S!double));

So you can create arrays of:
S!int[] a; or
S!double[] b;

But you can't really create arrays of S!(int or double)[].

However, it can sort of be done by using a variant (tagged union).

import std.variant;
alias SIntOrDouble = Algebraic!(S!int, S!double);

SIntOrDouble[] array;
array ~= S!int(...);
array ~= S!double(...);


Now the array holds two items an S!int for the first item and an 
S!double for the second.


You can use it like this.
foreach(ref elem; array)
{
   if(auto p = elem.peek!(S!int))
   {
  //Do stuff with an S!int item
   }
   else if(auto p = elem.peek!(S!double))
   {
  //Do stuff with an S!double.
   }
}

Or like this:
foreach(ref elem; array)
{
   elem.visit!(
   (S!int i) => /*something with ints*/,
   (S!double d) => /*something with doubles*/
   );
}

Take a look at std.variant if you are interested.

A drawback to the Algebraic is that you must know all the 
different template instantiations that you will be using. If you 
don't know this I suggest you use a variant instead.


The line:
SIntOrDouble[] array;
changes to
Variant[] array;

With this you can hold anything in the array. This is both an advantage and a drawback: the advantage is that you can just add more template instantiations to the program as it evolves, but you lose static typing information, so the compiler will not be able to help you anymore. For example, this would be valid:


Variant[] array;
array ~= S!int(...);
array ~= S!double(...);
array ~= S!long(...);
array ~= "I am a string!";

And this is probably not what you want.








Re: Kinds of containers

2015-10-24 Thread TheFlyingFiddle via Digitalmars-d
On Saturday, 24 October 2015 at 09:22:37 UTC, Jacob Carlborg 
wrote:
Can these be implemented by the user just declaring a regular container as immutable? The implementation will recognize if it's declared as immutable and adapt.


How can a type know its qualifier?

struct Container(T)
{
   // access qualifier somehow, do stuff with it
}

alias SM  = Container!int;
alias SIM = immutable Container!int;

It was my understanding that Container!int only gets instantiated one time here? Or does the compiler instantiate both (immutable Container!int) and (Container!int)? Or were you thinking of something else?





Re: Array of templated classes or structs

2015-10-24 Thread TheFlyingFiddle via Digitalmars-d-learn
On Saturday, 24 October 2015 at 19:00:57 UTC, TheFlyingFiddle 
wrote:
One thing about Variant is that if the struct you are trying to insert is larger than (void delegate()).sizeof, it will allocate the wrapped type on the GC heap.


This is not a concern if you want to have class templates, as they are on the heap anyway and have a fixed size.


Re: Array of templated classes or structs

2015-10-24 Thread TheFlyingFiddle via Digitalmars-d-learn
On Saturday, 24 October 2015 at 18:40:02 UTC, TheFlyingFiddle 
wrote:
To complete TemplateStruct simply forward the remaing members 
of the
variant. Or use something like proxy!T in std.typecons. Or use 
an alias this v.
(I don't really recommend alias this it has all kinds of 
problems)


One thing about Variant is that if the struct you are trying to insert is larger than (void delegate()).sizeof, it will allocate the wrapped type on the GC heap. This might be detrimental to performance. So to help with this you could add an extra parameter on the TemplateStruct to sort of handle this.


struct TemplateStruct(alias template_,
                      size_t size = (void delegate()).sizeof)
{
   VariantN!(size) v;
   //Rest is the same.
}

Pick a good size for the template you want to make arrays of, and it will lessen the stress on the GC heap.


For example:
struct vec4(T)
{
   T[4] data;
   //stuff
}

alias Vector4 = TemplateStruct!(vec4, vec4!(double).sizeof);
Vector4[] array;

Additionally, you might want to look into (http://forum.dlang.org/thread/jiucsrcvkfdzwinqp...@forum.dlang.org) if you're interested in some cool stuff that can be done to call methods on such variant structs.




Re: Array of templated classes or structs

2015-10-24 Thread TheFlyingFiddle via Digitalmars-d-learn
On Saturday, 24 October 2015 at 18:29:08 UTC, TheFlyingFiddle 
wrote:

Variant[] array;
array ~= S!int(...);
array ~= S!double(...);
array ~= S!long(...);
array ~= "I am a string!";

And this is probably not what you want.


You can do this if you want to ensure that items stored in the variant are of a specific template struct/class.

import std.traits, std.variant;
struct TemplateStruct(alias template_)
{
   private Variant v;

   void opAssign(TemplateStruct!template_ other)
   {
      this.v = other.v;
   }

   void opAssign(T)(T t) if(isInstanceOf!(template_, T))
   {
      this.v = t;
   }

   T* peek(T)() { return v.peek!T; }
   auto visit(Handlers...)() { return v.visit!Handlers; }
   //More variant stuff here.
}


This should work: (untested)
TemplateStruct!(S)[] array;
array ~= S!int(...);
array ~= S!long(...);
array ~= S!double(...);
array ~= "I am a string!"; //This line should issue a compiler 
error.


To complete TemplateStruct, simply forward the remaining members of the variant. Or use something like Proxy!T in std.typecons. Or use an alias this v. (I don't really recommend alias this; it has all kinds of problems.)






Re: D serialization temporary fixup?

2015-10-22 Thread TheFlyingFiddle via Digitalmars-d-learn
On Thursday, 22 October 2015 at 16:15:23 UTC, Shriramana Sharma 
wrote:
I wanted a D equivalent to: 
http://doc.qt.io/qt-5/qdatastream.html 
https://docs.python.org/3/library/pickle.html


and saw that one is under construction: 
http://wiki.dlang.org/Review/std.serialization


But till it's finalized, I'd just like to have a quick but 
reliable way to store real and int data types into a binary 
data file and read therefrom. Is there such a solution? The 
size of the data is fixed, but especially since I have real 
values, I'd like to not write to limited fixed decimal text 
format.


If you're only interested in POD data, something like this should do the trick.


module pod_encoding;
import std.traits;
import std.stdio;

void encode(T)(string s, T[] t)
   if(!hasIndirections!T && (is(T == struct) || isNumeric!T))
{
   File(s, "wb").rawWrite(t);
}

T[] decode(T)(string s)
   if(!hasIndirections!T && (is(T == struct) || isNumeric!T))
{
   File f = File(s, "rb");
   assert(f.size % T.sizeof == 0,
          "File does not contain an array of " ~ T.stringof ~ ".");

   auto size = cast(size_t)(f.size / T.sizeof);
   auto data = new T[size];
   data = f.rawRead(data);
   return data;
}
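A quick round-trip sketch of how I'd use it (the file name is just an example):

unittest
{
   double[] samples = [1.5, 2.25, 3.125];
   encode("samples.bin", samples);
   assert(decode!double("samples.bin") == samples);
}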


Re: Ternary if and ~ does not work quite well

2015-10-12 Thread TheFlyingFiddle via Digitalmars-d-learn

On Monday, 12 October 2015 at 05:19:40 UTC, Andre wrote:

Hi,
writeln("foo "~ true ? "bar" : "baz");
André


"foo" ~ true

How does this compile? All i can see is a user trying to append a 
boolean to a string which is obvously a type error. Or are they 
converted to ints and then ~ would be a complement operator? In 
that case.. horror.
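My best guess at why it compiles (a sketch of my own, not something from the docs): ~ binds tighter than ?:, and a bool converts implicitly to char, so the original parses as ("foo " ~ true) ? "bar" : "baz".

import std.stdio;

void main()
{
   writeln("foo " ~ true ? "bar" : "baz");    //parses as ("foo " ~ true) ? ... and prints "bar"
   writeln("foo " ~ (true ? "bar" : "baz"));  //prints "foo bar"
}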


Re: Degenerate Regex Case

2015-04-24 Thread TheFlyingFiddle via Digitalmars-d-learn

On Friday, 24 April 2015 at 18:28:16 UTC, Guillaume wrote:
Hello, I'm trying to make a regex comparison with D, based off 
of this article: https://swtch.com/~rsc/regexp/regexp1.html


I've written my code like so:

import std.stdio, std.regex;

void main(string[] argv) {

    string m = argv[1];
    auto p = ctRegex!("a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?aaa");

    if (match(m, p)) {
        writeln("match");
    } else {
        writeln("no match");
    }

}

And the compiler goes into swap. Doing it at runtime is no 
better. I was under the impression that this particular regex 
was used for showcasing the Thompson NFA which D claims to be 
using.


The regex a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?a?aaa can be simplified to a{30,60} (if I counted correctly).


The regex a{30,60} works fine.

[Speculation]
I don't have a good understanding of how D's regex engine work 
but I am guessing that it does not do any simplification of the 
regex input causing it to generate larger engines for each 
additional ? symbol. Thus needing more memory. Eventually as in 
this case the compiler runs out of memory.







Re: D Unittest shortcomings with DLLs

2015-03-04 Thread TheFlyingFiddle via Digitalmars-d

On Tuesday, 3 March 2015 at 17:49:07 UTC, Benjamin Thaut wrote:
Any suggestions how to fix this issue? I'm also open for 
implementation hints.


Kind Regards
Benjamin Thaut


Running unittests that access private symbols from the other side of the DLL boundary sounds like a very hard problem to solve. On one hand you want to be able to access the symbols in order to run the unittests, and on the other hand they should be private to clients of the API. The question is: do we even want to run unittests that test private implementation on the other side of the DLL boundary at all? These unittests test functionality the users of the DLL are unable to access. As such they are not actually testing whether symbols that should be exported are exported; unittests that test the public interface of a module, however, do. I think that any tool that is created should only care about the unittests which test the public interface of the module, since this is the interface users of the DLL will interact with.


Given the above, a tool will have to be able to identify if a 
unittest tests a public interface of a module and only extract 
the unittests that do. Since these unittests only test the public 
interface they should be able to be compiled if the programmer 
has remembered to export all the public interfaces.


You still have a problem if the unittest is actually testing the public interface but makes use of some private helper function in the process, since it would not be identified by the tool as a public-interface-testing unittest. This would not matter if unittests that test the public interface were written in a separate testing module. In my opinion, these kinds of unittests should always be written in another module to ensure that they don't unintentionally make use of private symbols.


Finding out if a unittest only accesses public symbols could be done by analyzing the AST of the method, either inside the compiler or via one of the third-party D parsers currently in use.


I hope that this reply was somewhat helpful to you.


Re: How can I do that in @nogc?

2015-02-25 Thread TheFlyingFiddle via Digitalmars-d-learn

On Wednesday, 25 February 2015 at 19:32:50 UTC, Namespace wrote:
How can I specify that 'func' is @nogc? Or can define the 
function otherwise?


An alternative solution would be to use a function templated on an alias.

import std.traits;
void glCheck(alias func)(string file = __FILE__,
 size_t line = __LINE__) @nogc
if(isCallable!func)
{
func();
glCheckError(file, line);
}


void foo() { new int(5); } //Uses GC
void bar() @nogc { /* ... */ } //Does not use GC

unittest
{
//Calling is a little different
glCheck!foo; //Does not compile not @nogc
glCheck!bar; //Works like a charm.
}


//If you wanted to take arguments to func
//it can be done like this.
void glCheck(alias func,
 string file = __FILE__,
 size_t line = __LINE__,
 Args...)(auto ref Args args) @nogc
 if(isCallable!func)
{
func(args);
glCheckError(file, line);
}

void buz(string a, uint b, float c) @nogc { /* ... */ }

unittest
{
   //Calling looks like this.
   glCheck!buz("foobar", 0xBAADF00D, 42.0f);
}



Re: Map one tuple to another Tuple of different type

2014-07-21 Thread TheFlyingFiddle via Digitalmars-d-learn

On Monday, 21 July 2014 at 15:04:14 UTC, TheFlyingFiddle wrote:

//Outputs 1 to 10 at compile-time.

Edit: 0 to 9



Re: How to define and use a custom comparison function

2014-06-17 Thread TheFlyingFiddle via Digitalmars-d-learn

On Tuesday, 17 June 2014 at 07:53:51 UTC, monarch_dodra wrote:

On Tuesday, 17 June 2014 at 04:32:20 UTC, Jakob Ovrum wrote:

On Monday, 16 June 2014 at 20:49:29 UTC, monarch_dodra wrote:

MyCompare cmp(SortOrder.ASC, 10);


This syntax is not valid D.

It should be:

   auto cmp = MyCompare(SortOrder.ASC, 10);


Well, technically, the *syntax* is valid. If MyCompare contains a constructor, it's legit code to boot. It's part of the uniform initialization syntax, and it's what allows things like:

BigInt b = 5;
  or
BigInt b(5);

That said, yeah, the MyCompare I posted did not contain a constructor. So my code was wrong, guilty as charged.


Since when is that syntax valid? Is there somewhere it is 
documented?





Re: Library design

2014-06-12 Thread TheFlyingFiddle via Digitalmars-d-learn

On Friday, 13 June 2014 at 04:11:38 UTC, Rutger wrote:
I'm trying to create a minimal tweening library in D based on 
the

commonly used easing equations by Robert Penner
(http://www.robertpenner.com/easing/).
One of the goals with the design of the library is that any
numeric type should be tweenable.(The user of the library
shouldn't have to do any casting of their own etc) Now how do I
go about and design a data structure that can take either 
floats,

ints or doubles, store them and modify them?

This is what I hacked together earlier today:


abstract class TweenWrapper{

}


class Tween(T) : TweenWrapper{

T owner;

string[] members;

/** Duration in milliseconds */
int duration;

/** Elapsed time in milliseconds */
int elapsedTime;

bool isComplete;

/** Type of easing */
EasingType easingType;

}

TweenWrapper is just what it sounds like, a wrapper so I don't
have to specify any type for the container holding the Tween
objects(DList!TweenWrapper).

__traits(getMember, owner, members[0]) =
valueReturnedFromEasingFunction;
was how I planned to use this class... but you know, compile time only.


Let me know if this isn't enough to go on.
Is what I'm asking even possible(the easy way) in D?


TL;DR
Help me make D Dynamic!



From what I can tell you want something similar to this.

import std.algorithm : min;
import std.traits : isNumeric;

interface ITween
{
   void update(int elapsedTime);
}

class Tween(T, string member) : ITween
{
   //Expands to
   //|alias memberType = typeof(T.memberName);|
   mixin("alias memberType = typeof("
         ~ T.stringof ~ "." ~ member ~ ");");

   memberType from;
   memberType to;
   EasingType type;

   int duration;
   int elapsedTime;

   T owner;

   @property bool isComplete()
   {
      return elapsedTime >= duration;
   }

   void update(int time)
   {
      elapsedTime = min(elapsedTime + time, duration);
      double amount = elapsedTime / cast(double)duration;
      auto tweened = ease(type, from, to, amount);
      __traits(getMember, owner, member) = tweened;
   }
}

Where ease is a method that will look something like this:

T ease(T)(EasingType type, T from, T to, double amount)
   if(isNumeric!T)
{
   double result;
   if(type == EasingType.linear)
      result = linearEase(amount);
   else
      assert(0, "Not yet implemented");

   return cast(T)((to - from) * result + from);
}

double linearEase(double amount)
{
   return amount;
}

This will work for all number types.
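A tiny usage sketch of the ease helper on its own (the one-member EasingType enum here is just a stand-in for whatever enum the real library defines):

enum EasingType { linear }

void main()
{
   import std.stdio;
   writeln(ease(EasingType.linear, 0, 100, 0.25));     //25
   writeln(ease(EasingType.linear, 0.0f, 1.0f, 0.5));  //0.5
}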

Hope this was helpful.














Re: Operator/concept interoperability

2014-06-03 Thread TheFlyingFiddle via Digitalmars-d

On Tuesday, 3 June 2014 at 19:55:39 UTC, Mason McGill wrote:
I have a numerical/multimedia library that defines the concept 
of an n-dimensional function sampled on a grid, and operations 
on such grids. `InputGrid`s (analogous to `InputRange`s) can be 
dense or sparse multidimensional arrays, as well the results of 
lazy operations on other grids and/or functions 
(map/reduce/zip/broadcast/repeat/sample/etc.).


UFCS has been extremely beneficial to my API, enabling things 
like this:


  DenseGrid!(float, 2) x = zeros(5, 5);
  auto y = x.map!exp.reduce!max;

without actually defining `map` inside `DenseGrid` or `reduce` 
inside `MapResult`. `map` and `reduce` are defined once, at 
module scope, and work with any `InputGrid`.


As this is numerical code, it would be great to be able to do 
this with operators, as is possible in C++, Julia, and F#:


  auto opUnary(string op, Grid)(Grid g) if (isInputGrid!Grid)
{ /* Enable unary operations for *any* `InputGrid`. */ }

  DenseGrid!(float, 2) x = zeros(5, 5);
  auto y = -x;

This is currently not supported, which means users of my 
library get functions like `map` and `reduce` that work out of 
the box for any grids they define, but they need to do extra 
work to use convenient operator syntax for NumPy-style 
elementwise operations.


Based on my limited knowledge of DMD internals, I take it this 
behavior is the result of an intentional design decision rather 
than a forced technical one. Can anyone explain the reasoning 
behind it?


Also, does anyone else have an opinion for/against allowing the 
definition of operators that operate on concepts?


Thanks for your time,
-MM


Based on my limited knowledge of DMD internals, I take it this 
behavior is the result of an intentional design decision rather 
than a forced technical one. Can anyone explain the reasoning 
behind it?


Well, one reason for this is that, unlike with methods, it is hard to resolve ambiguity between different operator overloads that have been defined in different modules.

Example: 2D vectors
//vector.d
struct Vector
{
   float x, y;
}

//cartesian.d
Vector opBinary(string op : "+")(ref Vector lhs, ref Vector rhs)
{
   //Code for adding two cartesian vectors.
}

//polar.d
Vector opBinary(string op : "+")(ref Vector lhs, ref Vector rhs)
{
   //Code for adding two polar vectors.
}

//main.d
import cartesian, polar, vector;
void main()
{
   auto a = Vector(2, 5);
   auto b = Vector(4, 10);

   auto c = a + b; //Which overload should we pick here?

   //This ambiguity could potentially be resolved like this:
   auto d = polar.opBinary!"+"(a, b);
   //But... this defeats the whole purpose of operators.
}


Side note:

You can achieve what you want to do with template mixins.

Example:

//Something more meaningful here.
enum isInputGrid(T) = true;

mixin template InputGridOperators()
{

   static if(isInputGrid!(typeof(this)))
   auto opUnary(string s)()
   {
  //Unary implementation
   }

   static if(isInputGrid!(typeof(this)))
   auto opBinary(string s, T)(ref T rhs) if(isInputGrid!(T))
   {

   }

   //etc.
}

struct DenseGrid(T, size_t N)
{
   mixin InputGridOperators!();
   //Implementation of dense grid
}

While this implementation is not as clean as global operator
overloading it works today and it makes it very simple to add
operators to new types of grids.






















Re: std.benchmark

2014-05-28 Thread TheFlyingFiddle via Digitalmars-d

On Wednesday, 28 May 2014 at 16:33:13 UTC, Russel Winder via
Digitalmars-d wrote:
On Wed, 2014-05-28 at 15:54 +, Joakim via Digitalmars-d 
wrote:

[…]
A google search turns up a long review thread a couple years 
ago:


http://forum.dlang.org/thread/mailman.73.1347916419.5162.digitalmar...@puremagic.com


A somewhat depressing thread.

Looks like it was remanded for further fixes, it shows as a 
work in progress in the review queue:


http://wiki.dlang.org/Review_Queue


Given there has been no activity in over 2 years, I guess it is deemed a dead duck in its current form/place.

Instead of wallowing in the backwaters of non-Phobosity, it 
should
become a module accessible independently via a DVCS (presumably 
Git),
and Dub? (though I will be using SCons) such that people can 
amend,

extend and send in pull requests.


If I am not mistaken, parts of std.benchmark moved into std.datetime. After this, work on std.benchmark stopped.


Re: Cost of .dup vs. instantiation

2014-05-28 Thread TheFlyingFiddle via Digitalmars-d-learn

On Wednesday, 28 May 2014 at 14:36:25 UTC, Chris wrote:
I use Appender to fill an array. The Appender is a class 
variable and is not instantiated with each function call to 
save instantiation. However, the return value or the function 
must be dup'ed, like so:


Appender!(MyType[]) append;
public auto doSomething() {
  scope (exit) { // clear append }
  // ... do something
  append ~= item;
  return (append.data).dup
}

My question is whether I save anything with Appender as a class 
variable here. I have to .dup the return value (+ clear the 
Appender). If I had a new Appender with each function call, it 
might be just as good.


public auto doSomething() {
  Appender!(MyType[]) append;
  // 
  return append.data.
}

Right or wrong?


When it comes to optimizations it's hard to say. Benchmarking is better than relying on advice/opinions from the internet in any case.


That being said, I doubt that the instantiation cost of the Appender is relevant. (Btw, the Appender is not a class variable! It is a struct with reference semantics.) Reusing an Appender is more for those cases where you want to reuse the underlying memory of the Appender itself.
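A minimal sketch of the two variants I have in mind (MyType and the single item are placeholders); the point of keeping the Appender around is that clear() lets it reuse its buffer:

import std.array : Appender, appender;

struct MyType { int x; }

Appender!(MyType[]) append;

MyType[] doSomethingReused()
{
   scope (exit) append.clear();   //keep the allocated buffer for the next call
   append ~= MyType(1);
   return append.data.dup;        //dup, since the buffer will be overwritten later
}

MyType[] doSomethingFresh()
{
   auto local = appender!(MyType[]);
   local ~= MyType(1);
   return local.data;             //no dup needed, nothing else reuses this buffer
}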




Re: core.sync.rwmutex example

2014-05-10 Thread TheFlyingFiddle via Digitalmars-d-learn
On Friday, 9 May 2014 at 23:12:44 UTC, Charles Hixson via 
Digitalmars-d-learn wrote:


But I'm worried about the receiving end.  It needs, somehow, to 
ensure that the message it receives is the appropriate message, 
and that other messages don't get dropped while it's waiting 
for the answer...or, perhaps worse, substituted for the 
expected answer.  If I can depend on msg[0] of auto msg = 
receiveOnly!(Tid, bool) that will allow me to check that the 
message was received from the proper source


If you are worried that other messages with the same signature will be sent from sources other than the expected one, you could make use of message tagging. Simply wrap the boolean result in a struct with a descriptive name.


struct SharedHashMapSetCB { bool flag; }
void set(string s, ulong id)
{
   tbl[s] = id;
   //std.concurrency.send takes the receiver's Tid as its first argument;
   //requesterTid here stands for whatever Tid the asking thread passed along.
   send(requesterTid, SharedHashMapSetCB(true));
}

//On the receiving end
auto msg = receiveOnly!SharedHashMapSetCB();

But doesn't this design lock the entire hash-table while the update is in progress?  Is there a better way?

I think a shared-memory hash map is better for your use case. Working with message passing is preferably done asynchronously. Blocking calls (send followed by receive) are likely to be slower than simply waiting on a semaphore.