Re: trait to get the body code of a function?

2018-07-29 Thread Mr.Bingo via Digitalmars-d-learn
So, does anyone want to take up the challenge of writing such a 
function that can safely get the function body? I guess a 
CTFE'able D parser would be required, basically building on what 
I've given above. I've seen a few lexers around but haven't 
messed with them.


This will at least fill in a gap.


Re: How to best implement a DSL?

2018-07-29 Thread Mr.Bingo via Digitalmars-d-learn

On Saturday, 28 July 2018 at 14:59:31 UTC, Robert M. Münch wrote:
Hi, I'm looking for ideas/comments/experiences on how to best 
implement a DSL in D.


What I would like to do is something like this:

... my D code ...

my-dsl {
... my multi-line DSL code ...
trade 100 shares(x) when (time < 20:00) and timingisright()
}


... my D code ...


Some things that circle in my head:
* Can the D parser somehow be misused for a DSL? So I can skip 
all the generic features for types etc.?


* I could use a PEG grammar for parsing the DSL, but this leads 
to quite some overhead for a tiny DSL.


* For static DSL code I would like to use CTFE to convert it 
into D code

* Does this require a CTFE-compatible PEG parser toolkit?
	* Could I somehow call an external program during compilation 
which gets the DSL block as input and returns D code?


* For dynamic DSL code I think I need to create something like 
an interpreter
	* How can I reference D variables from DSL code? Is there a 
lookup mechanism or do I have to create a dictionary?

* Is it possible to populate such a dictionary via CTFE?


Why not simply turn your DSL into D code? If it is pre-existing 
you'll need a simple mapping.


trade 100 shares(x) when (time < 20:00) and timingisright()

could be written any number of ways:

trade(100.shares(x).when("time < 20.0", "&", "timingisright()"))

Simply construct your grammar around D types and use UFCS, 
members, and ranges and then either use it directly or map to it.


If the grammar is relatively small but simply has lots of 
variations then it should be rather easy (e.g., if you just have 
trade, shares, and when then it would be very easy and the work 
would be in the implementation).
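
For illustration, here is a minimal sketch of that mapping. The 
names Order, shares, trade, and timingIsRight are made up for 
this example; the point is just that UFCS and a lazy parameter 
get the call site quite close to the DSL line quoted above.

import std.stdio, std.datetime;

struct Order { int amount; string symbol; }

// 100.shares(x) reads like the DSL but is an ordinary UFCS call
Order shares(int amount, string symbol) { return Order(amount, symbol); }

bool timingIsRight() { return true; }

// `lazy` delays evaluating the condition until trade checks it
void trade(Order o, lazy bool condition)
{
    if (condition)
        writefln("trading %s shares of %s", o.amount, o.symbol);
}

void main()
{
    auto x = "ACME";
    // trade 100 shares(x) when (time < 20:00) and timingisright()
    100.shares(x).trade(Clock.currTime.hour < 20 && timingIsRight());
}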


Re: trait to get the body code of a function?

2018-07-26 Thread Mr.Bingo via Digitalmars-d-learn

On Thursday, 26 July 2018 at 13:27:09 UTC, Alex wrote:

On Thursday, 26 July 2018 at 11:54:39 UTC, Mr.Bingo wrote:

The string itself could be useful however... Whatever OP has 
in mind with this string...



Having a code block is useful in many ways simply because not 
having it is the most limiting case. If one doesn't have it 
and requires it then it is impossible for them to do what they 
want. If one does have it and has no need, then no loss. Lots 
of people like to err on the side of "caution" which is really 
the side of limitations and frustrations and laziness. Of 
course, sometimes things should be limited, but I think this 
is probably not one of those cases (I see no harm in allowing 
meta code to get a function body and, say, create another 
function based off it with a slight change).


For example, one could mutate algorithms and run genetic 
algorithms on them to see how functions can evolve. This might 
lead to genetic ways to assemble programs. In any case, one 
can't do it since one can't get at a function's body.


For example, maybe one wants a way to debug a mixin:

mixin("int x;"); <- can't debug

void foo()
{
   int x;
}

mixin(funcBody!foo) <- can debug since we defined foo outside 
of the mixin and the compiler sees it naturally.


I'm with you in this part.

I would argue, precisely because of the "type safety" you 
mentioned, that D should not allow one to get the function 
body. The type safety is not achievable because of

https://en.wikipedia.org/wiki/Lambda_calculus#Undecidability_of_equivalence


What the hell does that have to do with anything? I don't know 
what you are talking about when it comes to "type safety" and 
equivalence. What I am talking about is being able to parse 
the function body in a type safe way rather than having to 
hack strings or write a D parser oneself.


Instead of

"int x;"

one has >

or whatever the parser parses into (some type of AST).


I'll try to reformulate this.

Take the example given at
https://dlang.org/spec/function.html#closures
Paragraph 2.

Now, examine the contents of functions abc and def.

Does the functionality differ? For sure not.
Does the compiler know it? That is not guaranteed.

The article I cited and the literature behind it state that, in 
general, equivalence of function contents cannot be decided. 
However, to state that one function is equal to another 
function, exactly this is needed. And I thought that is what 
you meant by "type safety"...



But this doesn't matter... If the idea is to return a string then 
it doesn't matter, just return the requested function body. If it 
is to return an AST then it just gives (in whatever suitable form) 
the AST for the requested function...


It's really not complicated:

Suppose one could get the function body as a string and suppose 
one had a D parser. Just use the D parser on the string and 
return the AST. D already has the D parser in it and also already 
has the string in it. It's just a matter of someone exposing that 
to the programmer to use. There is no issue anywhere because the 
function cannot change while compiling(hence there can be no sync 
issues).


I mean, if we want to see the function body we can just open up a 
text editor... If we want to know the syntactical structure we 
just parse the code with our brain. All that is being asked is to 
have the compiler give us the string (easy) and parse it for us 
into some easy-to-use structure (some work but not hard; in fact, 
it might be easy).


There are no function pointers to deal with, no compiling, etc.


Again, we can already do this stuff by hand by using an import 
expression to read the file containing the function we want... 
and using a D parser to find the function proper, then grabbing 
the function body and returning it as a string. It is not hard 
to do but does require passing -J with the source path.



Here is a simple dumb way to get the function body. The problem 
with "library" solutions is that they are not robust.



import std.stdio;



int foo(int x)
{
return x;
}

auto GetFunctionBody(alias B)()
{
    import std.traits, std.meta, std.typecons, std.algorithm,
           std.string, std.range;

    enum ft = typeof(B).stringof;
    enum rt = ReturnType!(B).stringof;
    enum fn = fullyQualifiedName!(B);
    enum n = split(fn, ".")[$-1];
    enum mn = join(split(fn, ".")[0..$-1])~".d";
    enum fs = rt~" "~n~ft[rt.length..$];

    enum file = import(mn);

    pragma(msg, n, ", ", ft, ", ", rt, ", ", fn, ", ", mn, ", ", fs);

    auto res = "";
    auto found = false;
    for(int i = 0; i < file.length - fs.length; i++)
    {
        if (file[i..i+fs.length] == fs)
            found = true;

        if (found)
            res ~= file[i];
        if (file[i] == '}')
            found = false;
    }

    return res;
}


int main()
{
    // build with the source directory on the string import path, e.g. dmd -J. app.d
    writeln(GetFunctionBody!foo);
    return 0;
}

Re: trait to get the body code of a function?

2018-07-26 Thread Mr.Bingo via Digitalmars-d-learn

On Thursday, 26 July 2018 at 10:20:04 UTC, Alex wrote:

On Thursday, 26 July 2018 at 07:32:19 UTC, Mr.Bingo wrote:
If all you need is the string you can write a template 
function that imports the file and searches for the function 
and returns it's body.


It's not very robust but it can work for some cases. D really 
should allow one to get the function body in D, possibly in a 
type-safe way (such that each line is parsed properly and returns 
type info about what the line contains).


I would argue, precisely because of the "type safety" you 
mentioned, that D should not allow one to get the function body. 
The type safety is not achievable because of

https://en.wikipedia.org/wiki/Lambda_calculus#Undecidability_of_equivalence


What the hell does that have to do with anything? I don't know 
what you are talking about when it comes to "type safety" and 
equivalence. What I am talking about is being able to parse the 
function body in a type safe way rather than having to hack 
strings or write a D parser oneself.


Instead of

"int x;"

one has >

or whatever the parser parses into (some type of AST).

The string itself could be useful however... Whatever OP has in 
mind with this string...



Having a code block is useful in many ways simply because not 
having it is the most limiting case. If one doesn't have it and 
requires it then it is impossible for them to do what they want. 
If one does have it and has no need, then no loss. Lots of people 
like to err on the side of "caution" which is really the side of 
limitations and frustrations and laziness. Of course, sometimes 
things should be limited, but I think this is probably not one of 
those cases (I see no harm in allowing meta code to get a function 
body and, say, create another function based off it with a slight 
change).


For example, one could mutate algorithms and run genetic 
algorithms on them to see how functions can evolve. This might 
lead to genetic ways to assemble programs. In any case, one can't 
do it since one can't get at a function's body.


For example, maybe one wants a way to debug a mixin:

mixin("int x;"); <- can't debug

void foo()
{
   int x;
}

mixin(funcBody!foo) <- can debug since we defined foo outside of 
the mixin and the compiler sees it naturally.






Re: trait to get the body code of a function?

2018-07-26 Thread Mr.Bingo via Digitalmars-d-learn

On Tuesday, 24 July 2018 at 04:43:33 UTC, Guillaume Lathoud wrote:

Hello,

__traits and std.traits already offer access to function 
information like input parameters, e.g. in std.traits: 
ParameterIdentifierTuple ParameterStorageClassTuple


Even if that might sound strange, is there a compile time 
access to the body of the function, e.g. as a code string, or 
at least to the filename where it was declared?


I have not found a direct "string" access, but I found these:

https://dlang.org/phobos/std_traits.html#moduleName

https://dlang.org/phobos/std_traits.html#packageName

...and I am not sure how to proceed from here to get the code 
as a string that declared the function. Context: I have some 
compile-time ideas in mind (code transformation).


Best regards,
Guillaume Lathoud


If all you need is the string you can write a template function 
that imports the file and searches for the function and returns 
its body.


It's not very robust but it can work for some cases. D really 
should allow one to get the function body in D, possibly in a 
type-safe way (such that each line is parsed properly and returns 
type info about what the line contains).


Re: Cleanup class after method?

2018-07-04 Thread Mr.Bingo via Digitalmars-d-learn

On Wednesday, 4 July 2018 at 15:47:25 UTC, JN wrote:

Imagine I have a very short-lived class:

void print(File f)
{
PrinterManager pm = new PrinterManager();
pm.print(f);
}

My understanding is that PrinterManager will be GC allocated, 
and when it goes out of scope, the GC will possibly clean it up 
at some point in the future. But I know that this class won't 
be used anywhere, I want to clean it up right now so that GC 
doesn't waste time later. In C++ it'd be handled by RAII, pm 
would be a unique_ptr. How to do it in D?


https://dlang.org/phobos/std_typecons.html#scoped
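
For reference, a minimal sketch of how scoped applies to the 
example above (a string stands in for the File parameter to keep 
it self-contained):

import std.stdio, std.typecons;

class PrinterManager
{
    void print(string f) { writeln("printing ", f); }
    ~this() { writeln("PrinterManager destroyed"); }
}

void print(string f)
{
    // scoped!T constructs the instance in place and runs its destructor
    // deterministically when pm goes out of scope; no GC cleanup needed
    auto pm = scoped!PrinterManager();
    pm.print(f);
} // destructor runs here

void main()
{
    print("report.txt");
}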


Re: Recursive Algebraic

2018-06-30 Thread Mr.Bingo via Digitalmars-d-learn
The problem is that it seems that when one has a parameterized 
type, you must completely specify the parameters when using 
Algebraic,


Algebraic!(T, Vector!int, Vector!(double, 3), Vector!(double, 3), 
...)[] data;


to be able to encapsulate an Algebraic on Vector (as a collection 
of all fixed point evaluations).


What I'd rather do is something akin to

Algebraic!(T, Vector!(T, N))[] data;

or, hell, even just

Algebraic!(T, Vector)[] data;

Since A!B bears no relationship to A!C except their name, this is 
not necessarily a good idea, but neither is having to explicitly 
express all kinds.


I imagine there is some trick using inheritance,

Maybe

class X;
class A(int N) : X;

then

Algebraic!(T, X)[] data;

only works if Algebraic handles inheritance and checks with 
handlers for specificity.


The problem with D's type system is that it does not allow one to 
express sets of types, which is a special type in and of itself, 
except in some specific cases such as using is(,...).



Here is a snippet of code

https://dpaste.dzfl.pl/8ff1cd3d7d46

that shows that an Algebraic does not handle inheritance.


if Y : X then we'd expect

Algebraic!(T, Y) : Algebraic!(T, X)

and for there to be no problem.

For this not to be a problem two things need to be known:

1. To be able to get the type of X and Y
2. To be able to determine if Y : X.

D provides the ability to determine if a type inherits from 
another at runtime using classinfo, which every D class carries 
along with its inheritance relationships, solving problem 2.


Problem 1 is to be able to get the type of an object in a way that 
can be related to D's pre-existing methods for comparing type 
info. Variant stores the typeid, so Problem 1 is solved as well!
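
As a small sketch of that runtime check (X, Y, Z, and derivesFrom 
are illustrative names; the walk over the base chain ignores 
interfaces):

import std.stdio;

class X {}
class Y : X {}
class Z {}

// Explicit version of what a dynamic cast does: walk the base-class
// chain recorded in the object's classinfo / TypeInfo_Class.
bool derivesFrom(Base)(Object o)
    if (is(Base == class))
{
    for (auto ci = cast(TypeInfo_Class) typeid(o); ci !is null; ci = ci.base)
        if (ci is typeid(Base))
            return true;
    return false;
}

void main()
{
    Object o = new Y;              // static type Object, dynamic type Y
    writeln(derivesFrom!X(o));     // true: Y derives from X
    writeln(derivesFrom!Z(o));     // false
    writeln((cast(X) o) !is null); // the built-in dynamic cast agrees
}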


Therefore, we can safely say that Algebraic can easily deal with 
inheritance. I am not familiar enough with the D internals to 
provide a fix. Anyone familiar with Algebraic should be able to 
write a few lines of code to allow for inheritance checking (or 
even complex patterns using a lambda to determine if a type is in 
the Algebraic). This becomes a function on the typeids and 
classinfos which is determined by the user.


Algebraic!((x) {
    if (x.type == typeid(y))
        return x.get!X;
    return defaultX(x);
}, int)

Where the above algebraic allows any object that has the same 
typeid as some object y, or some object that has the same type as 
defaultX(x) (which could be anything), or an int.


One could even do strange things like

Algebraic!((x) {
    if (rand1(0.5))
        return x;
    return 1;
}, int)


which, remembering that this lambda is run at runtime to 
determine if an object stored in the variant is "in the 
algebraic", will allow any object to pass 50% of the time (and 
potentially crash when some code gets an object it can't handle).


For example, if the above algebraic was meant to accept only 
integral types then it would be problematic when x was a 
non-integral type (though maybe floating-point types would work). 
This assumes some handler was called, which wouldn't happen 
because no handler exists to handle the non-integral types.


Hence things like visit would have to allow, if the lambda was 
specified, a way to fall through.


a.visit!((int i) => 4*i, (Object o) => 2);

Which, 50% of the time would return 2 and the other half of the 
time it would return 4*i.


Allowing functions to specify Algebraic relationships gives far 
more power. D seems to already have everything available to do 
it. For example, the original problem of inheritance 
relationships can easily be expressed:




Algebraic!((x) {
    if (doesDerive!Y(x))
        return cast(Y) x;
    if (doesDerive!Z(x))
        return new Q(x);
    return null;
}, int)


The algebraic handles an object derived from Y, Q, and int.

a.visit!((int i) => 4*i, (Y y) => 2, (Q q) => q.value);













Recursive Algebraic

2018-06-30 Thread Mr.Bingo via Digitalmars-d-learn
I'm trying to create a vector of vectors (more general than 
vectors or matrices).


The idea is that a Vector can contain another Vector or another 
type. Vector can be specified to be fixed in length or dynamic 
for efficiency. Vector!(T, N) creates a vector of leaf type T and 
length N. If N = size_t.max then the vector internally uses a 
dynamic array.


Vector is essentially a tree. Norm, dot, and many operators on 
Vector are computed recursively. The norm of a vector of vectors 
is the norm of the vector of the norms of its subvectors. This 
definition is valid when Vector is a simple vector but also works 
for vectors of vectors. Other operators can be similarly defined.
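
For what it's worth, here is a minimal working sketch of that 
recursive norm using std.variant's This placeholder with a 
dynamic array (Node and norm are illustrative names, separate 
from the Vector code further down):

import std.variant, std.algorithm, std.math, std.stdio;

// Each element is either a double leaf or another (dynamic) vector.
alias Node = Algebraic!(double, This[]);

// Euclidean norm, computed recursively: the norm of a nested vector
// is the norm of the vector of its children's norms.
double norm(Node n)
{
    return n.visit!(
        (double x)  => abs(x),
        (Node[] xs) => sqrt(xs.map!(c => norm(c) ^^ 2).sum)
    );
}

void main()
{
    auto v = Node([Node(3.0), Node([Node(4.0), Node(0.0)])]);
    writeln(norm(v)); // 5
}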



I believe the problem is that Algebraic only accepts This of the 
fixed N. So as of now one can only have vectors of vectors with 
size 1 or fixed N.


I also get an error

overload for type 'VariantN!(4u, int, This*)*' hasn't been 
specified".


It would be nice if the compiler would construct the missing 
handler and give the signature so I could know what I'm doing 
wrong, I'm using (This* i) => ; but it does not recognize that.





import std.typecons, std.range, std.array, std.algorithm,
       std.variant, std.stdio, std.traits;


// N could be a small value if it is stored inside the vector, as it
// could be stored as part of the flags
struct Vector(T, size_t N = size_t.max)
{
    Flags flags; // Contains direction flag (row or column), normalized, etc.

    import std.range, std.typecons, std.meta, std.algorithm, std.conv, std.math;

    static if (N == size_t.max)     // size_t.max makes N infinite/dynamic
    {
        mixin("Algebraic!(T, This*)[] data;");
        @property size_t Length() { return data.length; }
    }
    else
    {
        mixin("Algebraic!(T, This*)[N] data;");
        static @property size_t Length() { return N; }
    }
    alias data this;

    @property double Norm(size_t n = 2)
    {
        return iota(0, n).map!((a)
        {
            return data[a].visit!(
                (T i)     { return 1; },
                (This* i) { return 2000; }
            );
        }).sum();
    }

    auto opDispatch(string s, Args...)(Args v)
    {
        enum index = to!int(s[1..$]);

        static if (N == size_t.max)
            while (index >= data.length) data ~= T.init;

        static if (Args.length == 0)
            return data[index];
        else static if (Args.length == 1)
        {
            data[index] = v[0];
        }
    }
}

void main()
{
    import std.math, std.variant;

    Vector!(int, 5) v;
    v.x1 = v;
    writeln(v.x1);
    v.x2 = 4;
    v.x3 = 5;
    writeln(v.x3);
    writeln(v.Norm);

    getchar();
}


Re: Why tuples are not ranges?

2018-06-28 Thread Mr.Bingo via Digitalmars-d-learn

On Thursday, 28 June 2018 at 18:03:09 UTC, Ali Çehreli wrote:

On 06/28/2018 10:00 AM, Mr.Bingo wrote:

> But is this going to be optimized?

Not our job! :o)

> That is, a tuple is a range!

Similar to the array-slice distinction, tuple is a container, 
needing its range.


> It is clearly easy to see if a tuple is empty, to get the
front,

Ok.

> and to
> pop the front and return a new tuple with n - 1 elements,
which is
> really just the tuple(a sliced tuple, say) with the first
member hidden.

That would be special for tuples because popFront does not 
return a new range (and definitely not with a new type) but 
mutates the existing one.


Here is a quick and dirty library solution:

// TODO: Template constraints
auto rangified(T)(T t) {
import std.traits : CommonType;
import std.conv : to;

alias ElementType = CommonType!(T.tupleof);

struct Range {
size_t i;

bool empty() {
return i >= t.length;
}

ElementType front() {
final switch (i) {
static foreach (j; 0 .. t.length) {
case j:
return t[j].to!ElementType;
}
}
}

void popFront() {
++i;
}

enum length = t.length;

// TODO: save(), opIndex(), etc.
}

return Range();
}

unittest {
import std.typecons : tuple;
import std.stdio : writefln;
import std.range : ElementType;

auto t = tuple(5, 3.5, false);
auto r = t.rangified;

writefln("%s elements of '%s': %(%s, %)",
 r.length, ElementType!(typeof(r)).stringof, r);
}

void main() {
}

Prints

3 elements of 'double': 5, 3.5, 0

Ali


Thanks, why not add the ability to pass through ranges and arrays 
and add it to phobos?


Re: Why tuples are not ranges?

2018-06-28 Thread Mr.Bingo via Digitalmars-d-learn

On Thursday, 28 June 2018 at 16:02:59 UTC, Alex wrote:

On Thursday, 28 June 2018 at 14:35:33 UTC, Mr.Bingo wrote:

Seems like it would unify things quite a bit.


Yeah... this is because you can't popFront on a tuple, as the 
number of entries is fixed. You can, however, popFront on every 
range.


But as Timoses wrote you can easily make a range out of a 
tuple, for example by slicing:


´´´
import std.typecons, std.range, std.array, std.algorithm, 
std.stdio;


void main()
{
auto t = tuple(3,4,5,6);
auto m = [t[]];
writeln(m.map!(a => 3*a).sum());
}
´´´


But is this going to be optimized?

BTW, surely the tuple has a popFront! Just pop the front element 
and return a new tuple.


That is, a tuple is a range! Just because it doesn't implement 
the proper functions doesn't mean it can't do so.


It is clearly easy to see if a tuple is empty, to get the front, 
and to pop the front and return a new tuple with n - 1 elements, 
which is really just the tuple (a sliced tuple, say) with the 
first member hidden.


Since a tuple is fixed at compile time you can only "virtually" 
pop the elements, but that doesn't mean that a tuple is not a 
range.


So, maybe for it to work the compiler needs to be able to slice 
tuples efficiently (not convert them to dynamic arrays).


This is moot if the compiler can realize that it can do most of 
the work at compile time but I'm not so sure it can.


I mean, if you think about it, the memory layout of a tuple is 
sequential types:


T1
T2
...

So, to popFront a tuple is just changing the starting offset.
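
A minimal sketch of that idea as library code, assuming 
std.typecons.Tuple (popping cannot mutate in place because the 
length is part of the type, so it has to return a new, smaller 
tuple; tuplePopFront is a made-up name):

import std.typecons, std.stdio;

// "popFront" for a tuple: re-pack everything after the first member
// into a new Tuple with n - 1 fields.
auto tuplePopFront(T...)(Tuple!T t)
    if (T.length > 0)
{
    return tuple(t.expand[1 .. $]);
}

void main()
{
    auto t = tuple(3, 4.5, "six");
    writeln(t[0]);               // the "front"
    auto rest = t.tuplePopFront; // a new Tuple!(double, string)
    writeln(rest);               // Tuple!(double, string)(4.5, "six")
}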



Why tuples are not ranges?

2018-06-28 Thread Mr.Bingo via Digitalmars-d-learn

Seems like it would unify things quite a bit.

import std.typecons, std.range, std.array, std.algorithm, 
std.stdio;


void main()
{
auto t = tuple(3,4,5,6);
//auto t = [3,4,5,6];

writeln(t.map!(a => 3*a).sum());


}


Re: Code failing unknown reason out of memory, also recursive types

2018-06-25 Thread Mr.Bingo via Digitalmars-d-learn

On Monday, 25 June 2018 at 14:41:28 UTC, rikki cattermole wrote:
Let me get this straight, you decided to max out your memory 
address space /twice over/ before you hit run time, and think 
that this would be a good idea?


Well, that case was supposed to allocate a dynamic array instead 
of a tuple. Somehow it got reverted. It works when allocating the 
dynamic array.


How about the compiler predicts how big a variable is going to be 
and, if it exceeds memory, gives an error pointing at the 
declaration instead of a plain out-of-memory error? If it had 
given me a line number I would have seen the problem immediately.


Re: overload .

2018-06-25 Thread Mr.Bingo via Digitalmars-d-learn

On Monday, 25 June 2018 at 13:58:54 UTC, aliak wrote:

On Monday, 25 June 2018 at 13:37:01 UTC, Mr.Bingo wrote:


One can overload assignment and dispatch so that something like

A.x = ... is valid when x is not a typical member but gets 
resolved by the above functions.


Therefore, I can create a member for assignment. How can I 
create a member for getting the value?


A.x = 3; // Seems to get translated in to A.opDispatch!("x")(3)

works but

foo(A.x); // fails and the compiler says x does not exist


I need something consistent with opDot. I am trying to create 
"virtual" (not as in virtual function) fields and I can only get 
assignment but not the accessor.


A.x is translated into A.opDispatch!"x" with no args. So I 
guess you can overload, or you can static if on a template 
parameter sequence:


import std.stdio;
struct S {
auto opDispatch(string name, Args...)(Args args) {
static if (!Args.length) {
return 3;
} else {
// set something
}
}
}
void main()
{
S s;
s.x = 3;
writeln(s.x);
}

Cheers,
- Ali


Ok, for some reason using two different templates failed but 
combining them into one passes:


auto opDispatch(string name, T)(T a)
auto opDispatch(string name)()

Maybe it is a bug in the compiler that it only checks one 
opDispatch?
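
For reference, a self-contained sketch of the combined form 
driving "virtual" fields off a backing associative array (the 
struct and field names are made up for illustration):

import std.stdio;

struct Virtual
{
    private double[string] store;

    // s.x = 3;  lowers to  s.opDispatch!"x"(3)   (Args.length == 1, setter)
    // s.x;      lowers to  s.opDispatch!"x"()    (Args.length == 0, getter)
    auto opDispatch(string name, Args...)(Args args)
    {
        static if (Args.length == 0)
            return store.get(name, double.init); // accessor
        else static if (Args.length == 1)
            return store[name] = args[0];        // assignment
        else
            static assert(0, "unexpected number of arguments for ." ~ name);
    }
}

void main()
{
    Virtual s;
    s.x = 3;
    s.y = 4.5;
    writeln(s.x + s.y); // 7.5
}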






Code failing unknown reason out of memory, also recursive types

2018-06-25 Thread Mr.Bingo via Digitalmars-d-learn

import std.stdio;


union Vector(T, size_t N = size_t.max)
{
    import std.range, std.typecons, std.meta, std.algorithm, std.conv, std.math;

    static if (N == size_t.max)     // size_t.max makes N infinite/dynamic
    {
        mixin("Tuple!("~"T,".repeat(N).join()~") data;");
        @property size_t Length() { return rect.length; }

        @property double norm(size_t n = 2)()
        {
            return (iota(0, data.length).map!(a => data[a].pow(n))).pow(1/cast(double)n);
        }
    }
    else
    {
        mixin("Tuple!("~"T,".repeat(N).join()~") data;");
        @property size_t Length() { return N; }

        @property double norm(size_t n = 2)()
        {
            mixin("return ("~(iota(0, N).map!(a => "data["~to!string(a)~"].pow(n)").join("+"))~").pow(1/cast(double)n);");
        }
    }

    auto opDispatch(string s, Args...)(Args v)
        if (s.length > 1 && s[0] == 'x')
    {
        static if (N == size_t.max)
            if (data.length < to!int(s[1..$]))
                for(int i = 0; i < to!int(s[1..$]) - data.length; i++) data ~= 0;

        static if (Args.length == 0)
            mixin(`return data[`~s[1..$]~`];`);
        else static if (Args.length == 1)
            mixin(`data[`~s[1..$]~`] = v[0];`);
    }

    alias data this;
}

void main()
{
    import std.math, std.variant;

    Vector!(Algebraic!(Vector!int, int)) v;
    //v.x1 = 3;
    //v.x2 = 4;
    //v.x3 = 5;
    //writeln(v.x3);
    //writeln(v.norm);
}

Trying to create a vector of vectors where any entry can be 
another vector of vectors or an int.





overload .

2018-06-25 Thread Mr.Bingo via Digitalmars-d-learn



One can overload assignment and dispatch so that something like

A.x = ... is valid when x is not a typical member but gets 
resolved by the above functions.


Therefore, I can create a member for assignment. How can I create 
a member for getting the value?


A.x = 3; // Seems to get translated in to A.opDispatch!("x")(3)

works but

foo(A.x); // fails and the compiler says x does not exist


I need something consistent with opDot. I am trying to create 
"virtual" (not as in virtual function) fields and I can only get 
assignment but not the accessor.






Re: Determine if CTFE or RT

2018-06-25 Thread Mr.Bingo via Digitalmars-d-learn

On Monday, 25 June 2018 at 10:49:26 UTC, Simen Kjærås wrote:
On Monday, 25 June 2018 at 09:36:45 UTC, Martin Tschierschke 
wrote:
I am not sure that I understood it right, but there is a way 
to detect the status of a parameter:


My question was different, but I wished to get a ctRegex! or 
regex used depending on the expression:


 import std.regex : replaceAll, ctRegex, regex;

 auto reg(alias var)()
 {
    static if (__traits(compiles, { enum ctfeFmt = var; }))
    {
        // "Promotion" to compile time value
        enum ctfeReg = var;
        pragma(msg, "ctRegex used");
        return ctRegex!ctfeReg;
    }
    else
    {
        pragma(msg, "regex used");
        return regex(var);
    }
 }
So now I can always use reg!("") and let the compiler 
decide.


To speed up compilation I made an additional switch so that when 
using DMD (for development) the runtime version is always used.

The trick is to use the alias var in the declaration and check 
whether it can be assigned to an enum.
The only thing is that you now always use the !() compile-time 
parameter syntax to call the function, even when in the end it 
is translated to a runtime call.


reg!("") and not reg("...").


Now try reg!("prefix" ~ var) or reg!(func(var)). This works in 
some limited cases, but falls apart when you try something more 
involved. It can sorta be coerced into working by passing 
lambdas:



template ctfe(T...) if (T.length == 1) {
import std.traits : isCallable;
static if (isCallable!(T[0])) {
static if (is(typeof({enum a = T[0]();}))) {
enum ctfe = T[0]();
} else {
alias ctfe = T[0];
}
} else {
static if (is(typeof({enum a = T[0];}))) {
enum ctfe = T[0];
} else {
alias ctfe = T[0];
}
}
}

string fun(string s) { return s; }

unittest {
auto a = ctfe!"a";
string b = "a";
auto c = ctfe!"b";
auto d = ctfe!("a" ~ b); // Error: variable b cannot be 
read at compile time

auto e = ctfe!(() => "a" ~ b);
auto f = ctfe!(fun(b)); // Error: variable b cannot be read 
at compile time

auto g = ctfe!(() => fun(b));
}

--
  Simen


This doesn't work; the delegate only hides the error until you 
call it.


auto also does not detect enum. Ideally it should be a manifest 
constant if precomputed... this allows chaining of optimizations.


auto x = 3;
auto y = foo(x);


the compiler would realize x is effectively an enum int and then 
it could also precompute foo(x).


Since it converts to a runtime type immediately it prevents any 
optimizations and template tricks.







Re: Determine if CTFE or RT

2018-06-25 Thread Mr.Bingo via Digitalmars-d-learn

On Monday, 25 June 2018 at 07:02:24 UTC, Jonathan M Davis wrote:
On Monday, June 25, 2018 05:47:30 Mr.Bingo via 
Digitalmars-d-learn wrote:
The problem then, if D can't arbitrarily use ctfe, means that 
there should be a way to force ctfe optionally!


If you want to use CTFE, then give an enum the value of the 
expression you want calculated. If you want to do it in place, 
then use a template such as


template ctfe(alias exp)
{
enum ctfe = exp;
}

so that you get stuff like func(ctfe!(foo(42))). I would be 
extremely surprised if the compiler is ever changed to just try 
CTFE just in case it will work as an optimization. That would 
make it harder for the programmer to understand what's going 
on, and it would balloon compilation times. If you want to 
write up a DIP on the topic and argue for rules on how CTFE 
could and should function with the compiler deciding to try 
CTFE on some basis rather than it only being done when it must 
be done, then you're free to do so.


https://github.com/dlang/DIPs

But I expect that you will be sorely disappointed if you ever 
expect the compiler to start doing CTFE as an optimization. 
It's trivial to trigger it explicitly on your own, and 
compilation time is valued far too much to waste it on 
attempting CTFE when in the vast majority of cases, it's going 
to fail. And it's worked quite well thus far to have it work 
only cases when it's actually needed - especially with how easy 
it is to make arbitrary code run during CTFE simply by doing 
something like using an enum.


- Jonathan M Davis


You still don't get it!

It is not trivial! It is impossible to trigger it! You are 
focused far too much on the optimization side when it is only an 
application that takes advantage of the ability for rtfe to 
become ctfe when told, if it is possible.


I don't know how to make this any simpler, sorry... I guess we'll 
end it here.


Re: Determine if CTFE or RT

2018-06-24 Thread Mr.Bingo via Digitalmars-d-learn

On Monday, 25 June 2018 at 05:14:31 UTC, Jonathan M Davis wrote:
On Monday, June 25, 2018 05:03:26 Mr.Bingo via 
Digitalmars-d-learn wrote:
The compiler should be easily able to figure out that foo(3) 
can be precomputed(ctfe'ed) and do so. It can already do this, 
as you say, by forcing enum on it. Why can't the compiler 
figure it out directly?


The big problem with that is that determining whether the 
calculation can be done at compile time or not means solving 
the halting problem. In general, the only feasible way to do it 
would be to attempt it for every function call and then give up 
when something didn't work during CTFE, which would balloon 
compilation times and likely cause the compiler to run out of 
memory on a regular basis given how it currently manages memory 
and how CTFE tends to use a lot of memory.


It was decided ages ago that the best approach to the problem 
was to use CTFE only when CTFE was actually required. If an 
expression is used in a context where its value must be known 
at compile time, then it's evaluated at compile time; 
otherwise, it isn't. The compiler never attempts CTFE as an 
optimization or because it thinks that it might be possible to 
evaluate the value at compile time. As things stand, it should 
be pretty trivial to be able to look at an expression and 
determine whether it's evaluated at compile time or not based 
on how it's used.


If you're looking to have the compiler figure out when to do 
CTFE based on the fact that an expression could theoretically 
be evaluated at compile time, or because you want the compiler 
to optimize using CTFE, then you're going to be disappointed, 
because that's never how CTFE has worked, and I'd be _very_ 
surprised if it ever worked any differently.


- Jonathan M Davis



The docs say that CTFE is used only when explicitly required; I 
was under the impression that it would attempt to optimize 
functions if they could be computed at compile time. The halting 
problem has nothing to do with this. The CTFE engine already 
complains when one recurses too deep; it is not difficult to have 
a timeout that cancels the computation within some user-definable 
time limit... and since a failure can simply fall through to the 
runtime path, it is not a big deal.


The problem then, if D can't arbitrarily use CTFE, is that there 
should be a way to optionally request CTFE!


This means that the compiler will force CTFE if the input values 
are known, just like normal, but if they are not known then it 
just treats the call as non-CTFE.


so, instead of

enum x = foo(y); // invalid or valid depending on if y is known 
at CT


we have

cast(enum)foo(y)

which, hypothetically of course, would attempt to precompute 
foo(y) as in the first case, but if it is not precomputable then 
it just calls it at runtime.


So, the above code becomes

static if (ctfeable(y))
{
    enum x = foo(y);
    return x;     // compile-time version ("precomputed")
}
else
    return foo(y); // runtime version


The semantics above would be defined for something like 
cast(enum) (the name might be confusing, but anything that works 
better could be used).


hence
auto x = cast(enum)foo(y);

will result in x being an enum if y is an enum (since chaining 
can then occur), else a runtime variable with the return value 
of foo calculated at runtime.



So, this has nothing to do with the halting problem. I'm not 
asking for the compiler to do the impossible, I'm asking for a 
notation that combines two different syntaxes into a composite 
pattern so one can benefit from the other. Don't make it harder 
than it seems, it's a pretty easy concept.


It might even be possible to do this in a template:

import std.stdio;


auto IsCTFE()
{
if (__ctfe) return true;
return false;
}

template AutoEnum(alias s)
{
static if (IsCTFE)
{
pragma(msg, "CTFE!");
alias AutoEnum = s;
}
else
{
pragma(msg, "RTFE!");
alias AutoEnum = s;
}
}


double foo(int x) { return 3.42322*x; }

void main()
{
int x = 3;
writeln(AutoEnum!(foo(x)));
}

So, the problem is that if we replace x with 3 the above code is 
executed at compile time (a precomputation).


But the way it is it won't allow it to be computed even at 
runtime! There is no reason that the compiler can't just replace 
the code with `writeln(foo(x));` for the runtime case... yet it 
errors *RATHER THAN* defaulting to runtime evaluation!




The simple fix is to allow for the alternative to not try to ctfe 
and just compute the value at runtime.






Re: Determine if CTFE or RT

2018-06-24 Thread Mr.Bingo via Digitalmars-d-learn

On Sunday, 24 June 2018 at 20:03:19 UTC, arturg wrote:

On Sunday, 24 June 2018 at 19:10:36 UTC, Mr.Bingo wrote:

On Sunday, 24 June 2018 at 18:21:09 UTC, rjframe wrote:

On Sun, 24 Jun 2018 14:43:09 +, Mr.Bingo wrote:

let is(CTFE == x) mean that x is a compile time constant. CTFE(x) 
converts x to this compile time constant. Passing any compile 
time constant essentially turns the variable into a compile time 
constant (effectively turns it into a template with a template 
parameter)




You can use __ctfe:

static if (__ctfe) {
// compile-time execution
} else {
// run-time execution
}


This does not work:


import std.stdio;

auto foo(int i)
{
if (__ctfe)
{
return 1;
} else {
return 2;
}
}

void main()
{
writeln(foo(3));
}


should print 1 but prints 2.


you have to call foo with ctfe
enum n = foo(3);
writeln(n);


This defeats the whole purpose. The whole point is for the 
compiler to automatically compute foo(3) since it can be 
computed. Now, with your code, there is no way to simplify code 
like


foo(3) + foo(8);



auto foo(int i)
{
   if (__ctfe && i == 3)
   {
return 1;
   } else {
return 2;
   }
}


Now, this would precompute foo(3), unbeknownst to the caller of 
foo, but then this requires, using your method, writing the code 
like


enum x = foo(3);
x + foo(8);

We would have to know which values foo would return as 
compile-time values and which not.


foo(x) + foo(y)

could not work simultaneously for both compile time and run time 
variables.



e.g.,
enum x = 4;
enum y = 4;

foo(x) + foo(y)


would not precompute the values even though x and y are enums; 
we have to do



enum x = 4;
enum y = 4;
enum xx = foo(x);
enum yy = foo(y);
xx + yy;


and if we changed x or y to a run time variable we'd then have to 
rewrite the expressions since your technique will then fail.


int x = 4;
enum y = 4;
enum xx = foo(x); // Invalid
enum yy = foo(y);
xx + yy;


The compiler should be easily able to figure out that foo(3) can 
be precomputed(ctfe'ed) and do so. It can already do this, as you 
say, by forcing enum on it. Why can't the compiler figure it out 
directly?


The problem here, in case it wasn't clear, is that one can't ask 
for foo to be "precomputed" IF it can be done, but IF NOT then 
simply fall back to the full runtime computation. Because one 
does not necessarily know when, where, and how foo will be 
precomputed (or it may even have completely different behavior 
for CTFE vs RT), one can't use two different methods that have 
the same syntax.








Re: Determine if CTFE or RT

2018-06-24 Thread Mr.Bingo via Digitalmars-d-learn

On Sunday, 24 June 2018 at 18:21:09 UTC, rjframe wrote:

On Sun, 24 Jun 2018 14:43:09 +, Mr.Bingo wrote:

let is(CTFE == x) mean that x is a compile time constant. CTFE(x) 
converts x to this compile time constant. Passing any compile 
time constant essentially turns the variable into a compile time 
constant (effectively turns it into a template with a template 
parameter)




You can use __ctfe:

static if (__ctfe) {
// compile-time execution
} else {
// run-time execution
}


This does not work:


import std.stdio;

auto foo(int i)
{
if (__ctfe)
{
return 1;
} else {
return 2;
}
}

void main()
{
writeln(foo(3));
}


should print 1 but prints 2.


Determine if CTFE or RT

2018-06-24 Thread Mr.Bingo via Digitalmars-d-learn
let is(CTFE == x) mean that x is a compile time constant. CTFE(x) 
converts x to this compile time constant. Passing any compile 
time constant essentially turns the variable into a compile time 
constant (effectively turns it into a template with a template 
parameter)


void foo(size_t i)
{
static if (is(CTFE == i))
{
 pragma(msg, CTFE(i));
} else
{
writeln(i);
}
}

which, when called with a compile time constant, acts effectively 
like


void foo(size_t i)()
{
pragma(msg, i);
}

so

foo(1), being CTFE'able, triggers the pragma, while foo(i) for a 
runtime-varying i triggers the writeln.
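
For contrast, here is a sketch of what can be expressed today: 
the compile-time/runtime split has to be made explicit at the 
call site with two overloads (the names are illustrative):

import std.stdio;

void foo()(size_t i)     // runtime value, passed as a function argument
{
    writeln(i);
}

void foo(size_t i)()     // compile-time value, passed as a template argument
{
    pragma(msg, i);
}

size_t fromSomewhere() { return 42; }

void main()
{
    foo(fromSomewhere()); // runtime path: prints 42 when run
    foo!1();              // compile-time path: the pragma prints during compilation
}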





Re: how to determine of a module or any other symbol is visible?

2018-06-18 Thread Mr.Bingo via Digitalmars-d-learn

On Monday, 18 June 2018 at 09:10:59 UTC, rikki cattermole wrote:

On 18/06/2018 9:03 PM, Mr.Bingo wrote:
In the code I posted before about CRC, sometimes I get a 
visibility error for some modules. I would like to be able to 
filter those out using traits. Is there any way to determine 
if a module is visible/reachable in the current scope?


The modules that are causing the problems are imported from 
other code that imports them privately. The iteration code 
still finds the module and tries to access it but this then 
gives a visibility error/warning.


Quite often when working with CTFE, the easiest thing to do is 
to check whether whatever you're doing compiles. Nice and simple!


This doesn't work with deprecation warnings.
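
For reference, the check rikki is describing usually looks 
something like this sketch (isAccessible is a made-up name); as 
noted above, it does not help when the access merely triggers a 
deprecation warning rather than an error:

// Keep only the members that can actually be accessed from this scope.
enum isAccessible(alias parent, string name) =
    __traits(compiles, __traits(getMember, parent, name));

unittest
{
    import std.traits;
    static foreach (name; __traits(allMembers, std.traits))
        static if (isAccessible!(std.traits, name))
        {
            // safe to inspect __traits(getMember, std.traits, name) here
        }
}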


how to determine of a module or any other symbol is visible?

2018-06-18 Thread Mr.Bingo via Digitalmars-d-learn
In the code I posted before about CRC, sometimes I get a 
visibility error for some modules. I would like to be able to 
filter those out using traits. Is there any way to determine if a 
module is visible/reachable in the current scope?


The modules that are causing the problems are imported from other 
code that imports them privately. The iteration code still finds 
the module and tries to access it but this then gives a 
visibility error/warning.





Re: cyclic redundancy

2018-06-18 Thread Mr.Bingo via Digitalmars-d-learn

I got tired of waiting for a solution and rolled my own:



static this()
{
    import std.meta, std.stdio;

    // Find all ___This functions linked to this module
    auto Iterate()()
    {
        string[string] s;

        void Iterate2(alias m, int depth = 0)()
        {
            static if (depth < 4)
            {
                static foreach (symbol_name; __traits(allMembers, m))
                {
                    static if (symbol_name == "object") { }
                    else static if (symbol_name == m.stringof[7..$]~"___This")
                    {
                        s[m.stringof[7..$]] = m.stringof[7..$]~"___This";
                    }
                    else static if (isModule!(__traits(getMember, m, symbol_name)))
                    {
                        mixin("Iterate2!("~symbol_name~", depth + 1);");
                    }
                }
            }
        }

        mixin("Iterate2!("~.stringof[7..$]~");");

        return s;
    }

    // Call all
    enum fs = Iterate;
    static foreach (k, f; fs)
        mixin(k~"."~f~"();");
}


This code simply finds all ___This static functions 
linked to the module this static constructor is called from and 
executes them.


This doesn't solve the original problem but lets one execute the 
static functions to simulate static this. One could 
hypothetically even add attributes to allow for ordering (which 
is what the original cyclic redundancy check is supposed to solve 
but makes things worse since it is an ignorant algorithm).


This type of method might be more feasible, a sort of static 
main(). It does not check for static this of aggregates though 
but gets around the CRC's blanket fatal error.





cyclic redundancy

2018-06-18 Thread Mr.Bingo via Digitalmars-d-learn
I have static this scattered throughout. Some are module-level 
static this and some are struct and class static this.



In a test case I have reduced it to a struct that uses a static 
this and I get a cycle... even though, of course, the static this 
does nothing except internal things.


It is very annoying to have to hack around these cycles. They can 
be bypassed for testing, but production requires passing a 
runtime argument, which is useless for distribution to users.


The cyclic testing is so ignorant that it effectively makes using 
any static this with any type of complex importing impossible, 
even though no cycles actually exist.
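
For anyone hitting this, a minimal reproduction of the cycle the 
runtime complains about is just two modules that import each 
other and both define a static this, even if the constructors 
never touch each other (module names are illustrative; the 
runtime argument alluded to above is presumably --DRT-oncycle):

// a.d
module a;
import b;
static this() { /* only touches module a */ }

// b.d
module b;
import a;
static this() { /* only touches module b */ }

// Linking both modules into one program aborts at startup with a
// cycle-detection error, because the runtime only sees the import
// graph, not what the constructors actually do.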



Something new has to be done.

I propose that some way be made to disable a static this from 
being included in the testing:


@NoCyclicRedundancyCheck static this()
{

}

I'm not sure if attributes will persist at runtime though. Any 
method is better than what we have now, which essentially 
prevents all static this usage, because one can't guarantee that 
the future won't create a cycle and then the program will break 
with no easy fix.