Re: DIP 1012--Attributes--Preliminary Review Round 1

2017-07-27 Thread Iakh via Digitalmars-d

On Thursday, 27 July 2017 at 14:44:23 UTC, Mike Parker wrote:

Destroy!


Extend the rationale: it could mention application to templates and use 
with CTFE.


"inferred" is not consistent. As I understand inferred applies to 
templates only. And default value is so called 
inferred_or_system. So it is inferred for templates and system 
for everything else. So whole safety group is:

 - safe
 - system
 - trusted
 - inferred_or_safe / soft_safe
 - inferred_or_system / soft_system
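
For reference, a minimal sketch (my own illustration of today's behavior, 
not part of the DIP) of what "inferred for templates, system for everything 
else" means in practice:

--
@system void sysOnly() {}                // explicitly @system

void plain() { sysOnly(); }              // no annotation: defaults to @system
void generic()() { int x = 1; x += x; }  // template: attributes are inferred, here @safe

@safe void user()
{
    generic();   // OK: inference made generic!() @safe
    // plain();  // error: @system function is not callable from a @safe function
}
--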


Re: [your code here] Pure RPN calculator

2017-07-26 Thread Iakh via Digitalmars-d

On Wednesday, 26 July 2017 at 09:46:45 UTC, Timon Gehr wrote:

 readln.split.fold!((stack,op){
     switch(op){
         static foreach(c;"+-*/") case [c]:
             return stack[0..$-2]~mixin("stack[$-2] "~c~" stack[$-1]");
         default: return stack~op.to!real;
     }
 })((real[]).init).writeln;


What does "case [c]:" mean?



Re: If Statement with Declaration

2017-07-20 Thread Iakh via Digitalmars-d

On Wednesday, 19 July 2017 at 15:41:18 UTC, Jack Stouffer wrote:

On Wednesday, 19 July 2017 at 13:30:56 UTC, sontung wrote:

Thoughts on this sort of feature?


To be frank, I don't think that helping the programmer reduce 
the line count in their program by one line is worth further 
complicating the language.


It is not about reducing the number of lines. It is about binding
related things in one statement.
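
For context, a small sketch (mine, not from the thread) of what the 
language already allows versus the proposed form:

--
int* lookup() { return null; }

void f()
{
    if (auto p = lookup())  // p is scoped to the if; the test is p's own truthiness
    {
        // use *p
    }
    // if (int i = someFunc(); i >= 0) { }  // the proposed two-part form,
    //                                      // not in the language at the time of this thread
}
--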


Re: If Statement with Declaration

2017-07-20 Thread Iakh via Digitalmars-d

On Wednesday, 19 July 2017 at 15:31:08 UTC, ag0aep6g wrote:

On 07/19/2017 03:30 PM, sontung wrote:

So I was thinking of some sort of syntax like this:

 if(int i = someFunc(); i >= 0)
 {
 // use i
 }
Thoughts on this sort of feature?


I'd prefer a new variant of `with`:


with (int i = someFunc()) if (i >= 0)
{
// use i
}


It's slightly more verbose, but the meaning is clearer 
(arguable). It extends automatically to other control 
structures like `switch`.


I wouldn't have this new `with (declaration)` have the magic 
lookup rules of the existing `with (expression)`. It would be a 
simpler tool that I'd probably use more than the existing 
`with`.


I like "with" variant. Very mach like haskells "where". I believe 
it

would be cool with expressions. Even can emulate named function
arguments

with (const skip_comments = false, const skip_empty_line = true)
auto diff_result = diff(textA, textB, skip_comments, 
skip_empty_line);


TypestateLite for safe copy/destroy of RC Slice

2017-06-11 Thread Iakh via Digitalmars-d

Just another idea.

So as I understand 
https://github.com/dlang/DIPs/blob/master/DIPs/DIP1000.md#owning-containers
there is a problem with the safe assignment of slices. But the problem 
is relevant only if some element is referenced by a scope ref.


1. Let's introduce a type state "referred".
```
// safe code
RCSlice!int a = ...; // a is not referred
RCSlice!int c = ...;
{
    scope ref e = a[0]; // a is referred
    ...; // a is referred
    auto b = a; // b is a copy and not referred
    b = c; // in fact this is safe: assignment to b will not destroy "e"
}
...; // a is not referred
```
2. A function attribute @disable_if(arg): this attribute would disable a 
function (in the same way @disable does; see the sketch after this list) 
depending on its arg.

opAssign(RefCountedSlice rhs) @disable_if(referred) @trusted

This way opAssign can be trusted. For simplicity, only allow @disable_if 
to depend on the "this" state.

3. Forbid creating refs/pointers to a variable with a type state (it may 
only be passed as "this"). This would simplify type-state tracking, but 
it could be hard to define what a variable with a type state is.
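
For context, a minimal sketch of the existing @disable feature the proposed 
@disable_if builds on (the names here are mine):

--
struct S
{
    @disable this(this);              // copying S is a compile-time error
    @disable void opAssign(S rhs) {}  // so is assignment
}

void main()
{
    S a, b;
    // b = a;      // error: opAssign is annotated with @disable
    // auto c = a; // error: S is not copyable because postblit is @disable
}
--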


So what do you think about all this?


Re: DIP10005: Dependency-Carrying Declarations is now available for community feedback

2016-12-14 Thread Iakh via Digitalmars-d
On Tuesday, 13 December 2016 at 22:33:24 UTC, Andrei Alexandrescu 
wrote:

Destroy.

https://github.com/dlang/DIPs/pull/51/files


Andrei


How about something like Haskell's "where", to allow an arbitrary set of 
pre-declarations scoped to a single declaration:


with
{
import ...;
alias Result = ...;
}
Result func(blah) {...}


Re: Getters/setters generator

2016-12-10 Thread Iakh via Digitalmars-d-announce

On Friday, 9 December 2016 at 16:30:55 UTC, Eugene Wissner wrote:

On Friday, 9 December 2016 at 12:37:58 UTC, Iakh wrote:


Is there a possibility to remove affixes in the generated accessor 
names?


No, there is no way to manipulate the accessor names. What 
affixes do you mean?


You can remove the suffix "_" so "name_" becomes "name". But I would like
to see generated accessors "name" for a field "m_name".


Re: Getters/setters generator

2016-12-09 Thread Iakh via Digitalmars-d-announce

mixin template GenerateFieldAccessorMethods()
{
static enum GenerateFieldAccessorMethods()
{
string result = "";
return result;
}
}

Strange syntax


Re: Getters/setters generator

2016-12-09 Thread Iakh via Digitalmars-d-announce

On Friday, 9 December 2016 at 10:27:05 UTC, Eugene Wissner wrote:

Hello,

we've just open sourced a small module ("accessors") that helps 
to generate getters and setters automatically:

https://github.com/funkwerk/accessors
http://code.dlang.org/packages/accessors

It takes advantage of the UDAs and mixins. A simple example 
would be:


import accessors;

class WithAccessors
{
@Read @Write
private int num_;

mixin(GenerateFieldAccessors);
}

It would generate 2 methods "num": one to set num_ and one to 
get its value. Of course you can generate only @Read without 
@Write and vice versa. There are some more features; you can 
find the full documentation in the README.
The "GenerateFieldAccessors" mixin should be added into each 
class/struct that wants to use auto-generated accessors.


Is there a possibility to remove affixes in the generated accessor 
names?


Re: CTFE Status

2016-11-08 Thread Iakh via Digitalmars-d
On Tuesday, 8 November 2016 at 16:40:31 UTC, Nick Sabalausky 
wrote:

On 11/05/2016 11:48 AM, Marc Schütz wrote:
On Saturday, 5 November 2016 at 01:21:48 UTC, Stefan Koch 
wrote:


I recently lost 3 days of work because of my git-skills.


Unless you haven't committed your work yet, almost everything in Git can 
be undone. Make a copy of your entire project directory (including .git) 
and then have a look at `git reflog` around the time the disaster 
happened. It will show you commit IDs that you can check out.


Yea, but unless you're a git-fu master, sometimes figuring out 
how to fix whatever got messed up can lose you 3 days of work ;)


I really want to make a saner CLI front-end for git, but that 
would require learning more about git than I really ever want 
to know :(


http://gitless.com/


Re: Templates do maybe not need to be that slow (no promises)

2016-09-09 Thread Iakh via Digitalmars-d

On Friday, 9 September 2016 at 15:28:55 UTC, Stefan Koch wrote:

On Friday, 9 September 2016 at 15:08:26 UTC, Iakh wrote:

On Friday, 9 September 2016 at 07:56:04 UTC, Stefan Koch wrote:

I was thinking of adding an "opaque" attribute for template arguments, 
to force the template to forget some information about the type. 
E.g. if you use

class A(opaque T) {...}

you can only use pointers/references to T.

Probably the compiler could determine by itself whether a type is used 
as opaque or not.


you could use void* in this case and would not need a template 
at all.


And what if you want type-safe code?
With opaque it would be more like Java generics.
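
To illustrate (a sketch of mine, not from the thread): with void* the 
element type is erased entirely, while a template keeps type safety even 
if its body only ever touches pointers to T, which is roughly what opaque 
would guarantee while still emitting one instance:

--
struct VoidList
{
    void*[] items;
    void push(void* p) { items ~= p; }   // any pointer converts to void*: type info is lost
}

struct List(T)
{
    T*[] items;
    void push(T* p) { items ~= p; }      // only pointers to T are accepted
}

void main()
{
    int x = 1; double y = 2.0;
    VoidList a; a.push(&x); a.push(&y);  // compiles, nothing keeps the types apart
    List!int b; b.push(&x);
    // b.push(&y);                       // error: double* does not convert to int*
}
--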


Re: Templates do maybe not need to be that slow (no promises)

2016-09-09 Thread Iakh via Digitalmars-d

On Friday, 9 September 2016 at 07:56:04 UTC, Stefan Koch wrote:

I was thinking of adding an "opaque" attribute for template arguments,
to force the template to forget some information about the type.
E.g. if you use

class A(opaque T) {...}

you can only use pointers/references to T.

Probably the compiler could determine by itself whether a type is used
as opaque or not.




Re: Avoid GC with closures

2016-05-30 Thread Iakh via Digitalmars-d

On Sunday, 29 May 2016 at 11:16:57 UTC, Dicebot wrote:

On 05/28/2016 09:58 PM, Iakh wrote:
Yeah. It doesn't capture any context. But once it does it 
would be an error.


Custom allocators are not very suitable for things like 
closures because of undefined lifetime. Even if it was allowed 
to replace allocator, you would be limited to either GC or RC 
based one anyway to keep things @safe.


Yes. It's better to pass something like a memory-management strategy,
e.g. an RC pointer or a unique pointer, instead of just an allocator.


Re: Avoid GC with closures

2016-05-28 Thread Iakh via Digitalmars-d

On Thursday, 26 May 2016 at 21:10:30 UTC, bpr wrote:

On Thursday, 26 May 2016 at 18:53:35 UTC, Iakh wrote:
Functions with lambdas cannot be @nogc as far as they 
allocates closures.


Counterexample:

//  Note that this is NOT a good way to do numerical quadrature!

double integrate(scope double delegate(double x) @nogc f,
                 double lo, double hi, size_t n) @nogc {
  double result = 0.0;
  double dx = (hi - lo) / n;
  double dx2 = dx * 0.5;
  for (size_t i = 1; i <= n; i++) {
    result += f(lo + i * dx2) * dx;
  }
  return result;
}

double integrate(scope double delegate(double, double) @nogc f,
                 double x0, double x1,
                 double y0, double y1,
                 size_t nX, size_t nY) @nogc {
  return integrate((y) => integrate((x) => f(x,y), x0, x1, nX), y0, y1, nY);
}

Functions with @nogc downward funarg lambdas (delegates) can be 
@nogc.




Didn't know about "scope". It solves the problem partially.


I can't parse the rest of your post, maybe I misunderstand you.


The problem is that delegates allocate with the GC. And the proposed
solution is to tell the compiler what to use instead of the GC heap.

It is not only about scoped delegates. It is mostly about the memory
allocated for the closure.
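
A minimal sketch (my reading of current DMD behavior, not from the thread) 
of the difference scope makes here:

--
// Passing a capturing lambda to a scope delegate parameter needs no GC closure,
// so the caller can stay @nogc.
void takesScope(scope int delegate() @nogc dg) @nogc { dg(); }

void caller() @nogc
{
    int local = 42;
    takesScope(() => local);  // OK: the closure can live on the stack
}

// Letting the delegate escape forces a GC-allocated closure, so this one
// cannot be marked @nogc.
int delegate() makeCounter()
{
    int count;
    return () => ++count;
}
--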


Re: Avoid GC with closures

2016-05-28 Thread Iakh via Digitalmars-d

On Friday, 27 May 2016 at 10:34:38 UTC, Kagamin wrote:

On Thursday, 26 May 2016 at 18:53:35 UTC, Iakh wrote:

void g() @nogc
{
catch scope(void);
int[N] arr = [/*...*/];
arr[].sort!((a, b) => a > b);
}


This compiles just fine and doesn't allocate:
void g() @nogc
{
int[2] arr = [5,4];
arr[].sort!((a, b) => a > b);
}


Yeah. It doesn't capture any context. But once it does it
would be an error.


Avoid GC with closures

2016-05-26 Thread Iakh via Digitalmars-d
Functions with lambdas cannot be @nogc as long as they allocate closures.
And the way lambdas work is completely different from the C++ way. In D,
by using a lambda we define how some part of the "stack" frame is
allocated, so in some respect closure allocation is a property of the
function. So we need a way to tell the compiler how to handle this part
of the frame.
For example:

void g() @nogc
{
    catch scope(void);
    int[N] arr = [/*...*/];
    arr[].sort!((a, b) => a > b);
}

Here "catch scope(void);" sets the allocator for all lambdas. If it is
void, the closure will be allocated on the stack. Once we have
allocators, we will be able to pass them as closure handlers.


Re: Post-mixin-expansion source output

2016-05-08 Thread Iakh via Digitalmars-d

On Sunday, 8 May 2016 at 10:24:12 UTC, Mithun Hunsur wrote:

Hi all,

I was discussing this with Stefan Koch and a few other people 
at DConf - would it be possible to have a compiler switch for 
outputting the complete source file after the mixins have been 
expanded, similar to gcc -E?


I think it would be better to have a mixin-dump file with all mixins
printed into it, and references set to this file, so the debug info
could point to some "real" source. But I haven't done any investigation
into how to implement it.


[inout][templates][opaque] Ignorance is strength

2016-04-25 Thread Iakh via Digitalmars-d
The recent discussion about inout caused some thoughts on this, and they 
brought me to opaque types. We can put a type into a template, but there 
is no direct way to limit the information about this type inside the 
template.

There is no way to say that the template uses only part of the information 
about the type, doesn't use it at all, or uses it only in some places.

The idea is to say that the parameter type T is only a "forward 
declaration". So let's say we have a keyword "opaque" for this purpose. 
If type T is opaque, then only its address/ref can be used, therefore 
many instances of a template based on an opaque T would generate the 
same code.


struct Array(opaque T)
if (isPOD!T)
{
    this()(size_t length) // A template inside the template can use full type info about T
    {
        elemSize = T.sizeof;
        //...
    }

    void emplaceBack(Args...)(Args args)
    {
        // Allocate memory for one more element at the end and call the emplace ctor.
        // Full info about T is available here because emplaceBack is a template.
    }

    ref T opIndex(size_t i) @trusted
    {
        return *cast(T*)elements[i * elemSize];
        // Reinterpret cast and dereferencing are allowed in a return statement.

        // It's illegal to create instances (there is no info about the ctor).
        // You could use a kind of factory method.
    }

    size_t elemSize;
    size_t length;
    size_t capacity;
    void* elements;
    static int someField; // Error: it is illegal to use static fields or variables with "opaque" template params
}


In Array!T, the method opIndex will generate the same code for any type.
But if some overloaded function is called with a ref/pointer to the
opaque T within the template, it will emit different code for different
instantiations:



void read(ref int a) { ... }
void read(ref double a) { ... }
void g(opaque T)(ref T a)
{
read(a);
}

int a;
g(a);

double b;
g(b);


So let's move a step forward in our ignorance about types. 
"opaque(Smth) T" means that inside the template T is lowered to 
Smth:



ref T f(opaque(void) T)(ref T a); // Inside the function T is like a void.
// But there are problems with refs; maybe opaque(void) isn't correct.

ref T f(opaque(const) T)(ref T a); // Inside the function T is like a const.

ref T f(opaque(const int) T)(ref T a); // Inside the function T is a const int.
// Very close to "inout": T could be const, mutable, or immutable int.

ref T f(opaque(IForwardRange) T)(ref T a); // Inside the function T is the interface.



The return value will be cast back to the original T. There is a problem
with "ref void", though.



Sample with opaque(interface):

struct KindOfGeneric(opaque(IForwardRange) R)
{
    R get() // in fact it is "IForwardRange get()" with magic at the call site
    {
        pragma(msg, R); // IForwardRange
        return r;
    }

    R r; // Will be treated like IForwardRange r;
}

KindOfGeneric!(SomeRange) a = someFunc();
auto r = a.get(); // would generate: auto r = cast(SomeRange)a.get();



Sample with opaque(const int):

// This is template-like code but it will generate one instance
ref T refToMaxInt(opaque(const int) T)(ref T a, ref T b)
{
pragma(msg, T); // const int
return (a > b) ? a : b;
}



Re: Any usable SIMD implementation?

2016-03-31 Thread Iakh via Digitalmars-d

On Thursday, 31 March 2016 at 08:23:45 UTC, Martin Nowak wrote:
I'm currently working on a templated arrayop implementation (using RPN 
to encode ASTs).
So far things worked out great, but now I got stuck b/c apparently none 
of the D compilers has a working SIMD implementation (maybe GDC has but 
it's very difficult to work w/ the 2.066 frontend).

https://github.com/MartinNowak/druntime/blob/arrayOps/src/core/internal/arrayop.d
 https://github.com/MartinNowak/dmd/blob/arrayOps/src/arrayop.d

I don't want to do anything fancy, just unaligned loads, 
stores, and integral mul/div. Is this really the current state 
of SIMD or am I missing sth.?


-Martin


Unfortunately mine (https://github.com/Iakh/simd) is far from
production code. For now I'm trying to figure out an interface common
to all archs/compilers, and it's more about SIMD comparison operations.

You could do loads, stores, and mul with the default D SIMD support,
but not integer div.
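
A minimal sketch (assuming an x86_64 build where core.simd vector types 
are available) of what the built-in vector operations cover:

--
import core.simd;

void addInto(int4[] dst, const int4[] src)
{
    foreach (i, ref v; dst)
    {
        v += src[i];     // element-wise integer add: supported
        // v /= src[i];  // element-wise integer divide: rejected by the compiler
    }
}
--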


Re: Improve reability of GC on win64

2016-03-24 Thread Iakh via Digitalmars-d

On Thursday, 24 March 2016 at 19:30:46 UTC, Temtaime wrote:

And what's a problem with unions by the way ? By specs, 
currently it's forbidden to have union with pointers and value 
types.


It's forbidden in @safe code.
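
A minimal example (mine) of the @safe restriction being referenced:

--
union U
{
    int* p;
    size_t v;   // overlaps the pointer
}

void f() @safe
{
    U u;
    // auto x = u.p;  // error: accessing a pointer field that overlaps
    //                // other fields is not allowed in @safe code
}
--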


Re: How to list all version identifiers?

2016-03-15 Thread Iakh via Digitalmars-d-learn

On Sunday, 13 March 2016 at 20:16:36 UTC, Basile B. wrote:

On Sunday, 13 March 2016 at 16:28:50 UTC, Iakh wrote:

On Sunday, 13 March 2016 at 15:50:47 UTC, Basile B. wrote:
trivial answer, let's say you have dcd-server running in the 
background:


dcd-client -c8 <<< "version("


Thanks. Will try.


But it was a joke actually. It works but this is not very 
straightforward. And it needs a bit of post processing since 
the output you'll get is normally made for the completion menu 
of D editors.


Looks like it shows all the version identifiers listed in
https://dlang.org/spec/version.html#version
and that's not what I want. I need just the ones actually active under a
given compiler/circumstances.


Re: How to list all version identifiers?

2016-03-13 Thread Iakh via Digitalmars-d-learn

On Sunday, 13 March 2016 at 20:16:36 UTC, Basile B. wrote:

On Sunday, 13 March 2016 at 16:28:50 UTC, Iakh wrote:

On Sunday, 13 March 2016 at 15:50:47 UTC, Basile B. wrote:
trivial answer, let's say you have dcd-server running in the 
background:


dcd-client -c8 <<< "version("


Thanks. Will try.


But it was a joke actually. It works but this is not very 
straightforward. And it needs a bit of post processing since 
the output you'll get is normally made for the completion menu 
of D editors.


Maybe it is what I'm searching for, if there is a way to specify the
compiler/parser.


Re: How to list all version identifiers?

2016-03-13 Thread Iakh via Digitalmars-d-learn

On Sunday, 13 March 2016 at 15:50:47 UTC, Basile B. wrote:
trivial answer, let's say you have dcd-server running in the 
background:


dcd-client -c8 <<< "version("


Thanks. Will try.


How to list all version identifiers?

2016-03-13 Thread Iakh via Digitalmars-d-learn

There is a trick for gcc:
gcc -dM -E - < /dev/null

It shows all the default #defines.

Is there a way to show all version identifiers for D?
For all compilers, or any one you know.
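
A present-day partial workaround (a sketch of mine, using static foreach, 
which postdates this thread): probe a hand-picked list of predefined 
identifiers and print the active ones at compile time.

--
enum candidates = ["Windows", "linux", "OSX", "Posix", "X86", "X86_64",
                   "DigitalMars", "GNU", "LDC", "D_SIMD", "assert", "unittest"];

static foreach (id; candidates)
    mixin("version (" ~ id ~ ") pragma(msg, \"" ~ id ~ "\");");
--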


Re: Constructor - the both look the same

2016-03-12 Thread Iakh via Digitalmars-d-learn

On Saturday, 12 March 2016 at 07:43:59 UTC, Joel wrote:

Why does it come up with this?

source/setup.d(40,16): Error: constructor 
inputjex.InputJex.this (Vector2!float pos, int fontSize, 
InputType type = cast(InputType)0) is not callable using 
argument types (Vector2!float, int, InputType)

dmd failed with exit code 1.


Can you show your code?


Re: unit-threaded v0.6.5 - Type-parametrized tests

2016-03-10 Thread Iakh via Digitalmars-d-announce

On Wednesday, 9 March 2016 at 18:01:49 UTC, Atila Neves wrote:

@Types!(int, byte)
void testInit(T)() {
assert(T.init == 0);
}

Atila


It is not clear that this UDA is about unit testing.


Re: Static inheritance (proof of concept)

2016-03-02 Thread Iakh via Digitalmars-d

On Monday, 29 February 2016 at 13:31:11 UTC, Atila Neves wrote:

http://forum.dlang.org/post/tgnxocozkurfvmxqo...@forum.dlang.org

Atila


If you mean this:
http://forum.dlang.org/post/eejiauievypbfifky...@forum.dlang.org


On Wednesday, 29 July 2015 at 06:05:37 UTC, Kagamin wrote:

On Tuesday, 28 July 2015 at 13:10:43 UTC, Atila Neves wrote:
I guess, but not easily. I've written template mixins to do 
that before and it was awkward.


What was awkward?


Writing a generic solution that would work for multiple 
constraints without code repetition.


Atila


It could be done without type-name repetition:

mixin checkConcept!isInputRange;
mixin checkConcept!(isOutputRange!(PlaceHolderForThisT, int));

The last one is even more general: it could handle checkers with an odd 
order of parameters.


Re: Static inheritance (proof of concept)

2016-02-29 Thread Iakh via Digitalmars-d

On Monday, 29 February 2016 at 13:31:11 UTC, Atila Neves wrote:


I'm not familiar with GitHub, so I can't see why it was rejected
apart from the auto-testing tools (it looks like it's just outdated).


http://forum.dlang.org/post/tgnxocozkurfvmxqo...@forum.dlang.org

Atila


I didn't get the point.
But I could propose one more way to implement it :) Just with template 
mixins. If the concept takes just one template argument (as isInputRange 
does), it can be passed just by name, but if there are several args you 
have to fully instantiate it:

struct Range
{
mixin checkConcept!isInputRange;
mixin checkConcept!(isOutputRange!(Range, int));

int front()
{
return 0;
}
}

This way the template mixin could take the source line and file to 
generate a better report (see the sketch below). The first template arg 
of isOutputRange could be checked to see whether it is the same as the 
wrapping type, to prevent copy-paste mistakes.
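
A small sketch (names are mine) of how a template mixin can pick up the 
instantiation site through default arguments, which is the part needed 
for better reports:

--
mixin template reportSite(string file = __FILE__, size_t line = __LINE__)
{
    // Prints the file and line of the mixin statement at compile time.
    pragma(msg, "concept checked at " ~ file ~ ":" ~ line.stringof);
}

struct Range
{
    mixin reportSite;
}
--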


Re: Static inheritance (proof of concept)

2016-02-29 Thread Iakh via Digitalmars-d

On Monday, 29 February 2016 at 11:13:18 UTC, Atila Neves wrote:

On Saturday, 27 February 2016 at 13:35:30 UTC, Iakh wrote:
There was discussion and proposal to extend D with static 
inheritance:

http://forum.dlang.org/thread/jwzxngccuwwizyivp...@forum.dlang.org

But it could be done just with mixins, UDAs and some rewriting 
of

predicates.


It can, yes:

https://github.com/D-Programming-Language/phobos/pull/3677

That went nowhere, hence the DIP.

Atila


I'm not familiar with GitHub, so I can't see why it was rejected
apart from the auto-testing tools (it looks like it's just outdated).


Re: Static inheritance (proof of concept)

2016-02-27 Thread Iakh via Digitalmars-d

On Saturday, 27 February 2016 at 20:51:22 UTC, Chris Wright wrote:

On Sat, 27 Feb 2016 19:45:58 +, Iakh wrote:

It is hard to pass all params to the mixin such as file, line 
and additional params for concept:


Maybe we need file and line number __traits.


Nice idea.
I am also voting for a function formatCompilerErrorMessage(text, file, line);


Re: Static inheritance (proof of concept)

2016-02-27 Thread Iakh via Digitalmars-d

On Saturday, 27 February 2016 at 18:14:25 UTC, Chris Wright wrote:

On Sat, 27 Feb 2016 13:35:30 +, Iakh wrote:

Looks good. I'd prefer to have just the mixin or just the 
attribute -- the latter being tricky just now.

Agree.


It'd be just as easy to make it:

struct Range {
  mixin ensureConcept!isInputRange;
}


It is hard to pass all params to the mixin, such as file, line, and 
additional params for the concept:


@concept!(isOutputRange, int)() // isSomething!(T, a, b, c, d, ...)
struct Range
{
    mixin checkConcepts; // the mixin takes the current line and file as default
                         // params to do error checking, so there is no way to
                         // use variadic args (is there?)
}



Static inheritance (proof of concept)

2016-02-27 Thread Iakh via Digitalmars-d
There was a discussion and a proposal to extend D with static 
inheritance:

http://forum.dlang.org/thread/jwzxngccuwwizyivp...@forum.dlang.org

But it could be done just with mixins, UDAs, and some rewriting of
predicates.

Example:

@concept!isInputRange()
struct Range
{
mixin checkConcepts;

int front()
{
return 0;
}
}

concept-check ~master: building configuration "application"...
source/app.d(73,0): Error: can't test for empty
source/app.d(73,0): Error: can't invoke popFront()
source/app.d(67,5): Error: static assert  "Range is not a 
concept!(isInputRange)"


The checker function prototype:
bool checkConcept(Flag!"verbose" verbose = No.verbose,
                  string file = "", size_t line = 0)()

If verbose == false, this function can be used in a template constraint 
just to check whether the template parameter matches the conditions, 
without printing errors.


full code:
http://dpaste.dzfl.pl/5056ad68c7b4


Re: DIP 84: Static Inheritance

2016-02-25 Thread Iakh via Digitalmars-d

On Thursday, 25 February 2016 at 09:11:58 UTC, Atila Neves wrote:

On Thursday, 25 February 2016 at 01:57:37 UTC, Iakh wrote:

On Friday, 30 October 2015 at 14:39:47 UTC, Atila Neves wrote:

[...]


It could be better to extend UDAs with checking and diagnostic 
functions:

@IsInputRange
struct myRange {...

Also, some attrs are not applicable to all things; an extended UDA 
could handle that.


Scanning for UDAs for a whole project isn't trivial and even 
worse optional.


Atila


I meant extending UDAs to match your proposal. But the rules to build 
failFunc look too sophisticated in both cases.

A simpler version could look like this:

// Predicate:
enum bool checkConstraint(bool verbose) = /*Whatever you want*/

struct Struct
{
    mixin checkConstraint!(isOutputRange, int); // int represents the tail template args
}

mixin checkConstraint!(...) adds this code:

static if (!isOutputRange!(Struct, int).checkConstraint!(No.verbose))
{
    static assert(isOutputRange!(Struct, int).checkConstraint!(Yes.verbose));
}


Re: DIP 84: Static Inheritance

2016-02-24 Thread Iakh via Digitalmars-d

On Friday, 30 October 2015 at 14:39:47 UTC, Atila Neves wrote:
From the discussion here: 
http://forum.dlang.org/post/tgnxocozkurfvmxqo...@forum.dlang.org, 
I thought a library solution would do to fix the issue of getting decent 
error messages when a type fails to satisfy a template constraint that it 
was meant to, such as `isInputRange`. So I submitted a PR 
(https://github.com/D-Programming-Language/phobos/pull/3677); it's been 
there ever since and doesn't seem like it'll go anywhere from the 
discussion (http://forum.dlang.org/post/qvofihzmappftdiwd...@forum.dlang.org).


So the only other way is a DIP (http://wiki.dlang.org/DIP84) 
for language and compiler support for static inheritance. It's 
backwards-compatible and IMHO worth looking at.


Please let me know what you think.

Atila


It could be better to extend UDAs with checking and diagnostic 
functions:

@IsInputRange
struct myRange {...

Also, some attrs are not applicable to all things; an extended UDA 
could handle that.


Re: Head Const

2016-02-19 Thread Iakh via Digitalmars-d
On Thursday, 18 February 2016 at 22:46:04 UTC, Walter Bright 
wrote:

On 2/18/2016 10:22 AM, Timon Gehr wrote:
He wanted to embed a mutable reference count literally within 
a const object.

Not a headconst object.


I know. I pointed out how it could be done in a way to achieve 
the same effect.


BTW, shared_ptr<> uses a pointer to the ref count.


Could D's RefCounted!T be split into data (possibly immutable) 
and mutable metadata?


struct MutablePart
{
int rc;
T cache;
Mutex mutex;
}

class A
{
   int data;
   // Consider mutable(T)* works like const(T)*
   private mutable(MutablePart)* metadata;
   this() { metadata = new MutablePart; }
}

immutable A a;
a.metadata.rc++;

This way an instance of A can be in ROM while the ref counter is mutable.

If A is const (and therefore possibly immutable and shared), the situation 
is as bad as in the case of storing metadata in an allocator: the field 
A.metadata should be shared in some cases and thread-local in others. In 
fact, D would have to add a possibly_shared type constructor to represent 
the type of the metadata for const data.


To make a mutable field friendly with pure, the mutable field should be:
1) unique (non-unique mutable state would have to be treated like a global);
2) private;
3) allowed to be updated only with an expression like:
   this.updateCache(mutable.cache);
where:
   updateCache() is const and pure;
   updateCache depends only on "this";
   the mutable parameter is marked as out.
So for two identical objects the cache will always be the same.
It is impossible to do incremental caching or even to reuse resources.

4) accessible only by a property like this:
   ref const(Data) getData() pure

Of course, the metadata of an immutable object should be shared.

So what other things does a mutable field like this break:
mutable(MutablePart)* metadata;
?


Re: An important pull request: accessing shared affix for immutable data

2016-02-13 Thread Iakh via Digitalmars-d
On Friday, 12 February 2016 at 19:12:48 UTC, Andrei Alexandrescu 
wrote:

https://github.com/D-Programming-Language/phobos/pull/3991

A short while ago Dicebot discussed the notion of using the 
allocator to store the reference count of objects (and 
generally metadata). The allocator seems to be a good place 
because in a way it's a source of "ground truth" - no matter 
how data is qualified, it originated as untyped mutable bytes 
from the allocator.

[...]

Destroy!

Andrei


So you can use metadata only with global allocators,
as long as you don't need to save a ref to the allocator.

Why can't we use an attribute @intransitive on ref fields to prevent
immutable from being transitively applied to the referred content?
Add some restrictions (only private, wrap its use with @trusted) and
it would be enough to implement ref counting with immutable.


Re: An important pull request: accessing shared affix for immutable data

2016-02-13 Thread Iakh via Digitalmars-d
On Saturday, 13 February 2016 at 21:12:10 UTC, Andrei 
Alexandrescu wrote:

On 02/13/2016 03:07 PM, Iakh wrote:

So you can use metadata only with global allocators,
as long as you don't need to save a ref to the allocator.


Well you can use other allocators if you save them so you have 
them available for deallocation. -- Andrei


Yep, you caught me. And if you don't have "@intransitive const" or 
C++'s mutable, you can't save a ref to the allocator within the object:

struct RCExternal
{
    //alias allocator = theGlobalAllocator;
    auto allocator = /* ... */; // assign mutable to immutable

    private void[] data;

    ~this() { allocator.decRef(data.ptr); }
}


Re: Safe cast of arrays

2016-02-10 Thread Iakh via Digitalmars-d
On Wednesday, 10 February 2016 at 20:14:29 UTC, Chris Wright 
wrote:
@safe protects you from segmentation faults and reading and 
writing outside an allocated segment of memory. With array 
casts, @safety is assured


Yes, @safe protects from direct casts to/from reference types, but there
is still a trick with the T[] -> void[] -> T2[] cast:

import std.stdio;

int[] f(void[] a) @safe pure
{
return cast(int[])a;
}

struct S
{
int* a;
}

void main() @safe
{
S[] a = new S[4];

immutable b = a.f();
writeln(b);
a[0].a = new int;
writeln(b);
//b[0] = 0;
//writeln(*a[0].a);

}

So no safety in this world.


Re: Safe cast of arrays

2016-02-10 Thread Iakh via Digitalmars-d

On Wednesday, 10 February 2016 at 08:49:21 UTC, w0rp wrote:

On Tuesday, 9 February 2016 at 21:20:53 UTC, Iakh wrote:

https://dlang.org/spec/function.html#function-safety
The current definition of safety doesn't mention casts of arrays.


I think this should be addressed, as if you can't cast between 
pointer types, you shouldn't be allowed to cast between slice 
types either. Because slices are just a pointer plus a length. 
Another way to demonstrate the problem is like this.


If we address it in the same fashion as is done for other things:

 4. Cannot access unions that have pointers or references
overlapping with other types.

"Reinterpret cast" is allowed using unions, but only for some
types. So array casting could be allowed for the same set of types,
e.g. cast(T)arrB is allowed if the code below compiles:
() @safe {
  union
  {
    T a;
    typeof(arrB[0]) b;
  }
}
Is the condition sufficient?
Does this change need a DIP?


Re: Safe cast away from immutable

2016-02-09 Thread Iakh via Digitalmars-d
On Tuesday, 9 February 2016 at 16:32:07 UTC, Steven Schveighoffer 
wrote:
I think the rules at the moment are that the compiler allows 
implicit conversion if the return value could not have come 
from any of the parameters. But what it may not consider is if 
the parameters could be return values themselves via a 
reference!


AFAIK it just traverses the members and searches for an exact match with 
the return type. Moreover, there are different checks for "pure" and for 
a "uniquely owned result".

import std.stdio;

int[] f(void[] a) @safe pure
{
return cast(int[])a;
}

void main() @safe
{
int[] a = new int[4];

immutable b = a.f();
writeln(b);
a[0] = 1;
writeln(b);
}

This works until you change void[] to int[].
And the error message is about casting to immutable, not about purity.

This is terrible, because if there is code like:

int[] f(SophisticatedClass a){...}
immutable a = f(new SophisticatedClass);

that was working for the last 100 years, and then somebody adds a member
of type int[] into SophisticatedClass.field.field.field,
you get something like this:
Error: cannot implicitly convert expression (f(a)) of type int[] 
to immutable(int[])


It's hard to prove that the result is unique. So maybe don't try to?


Safe cast of arrays

2016-02-09 Thread Iakh via Digitalmars-d

https://dlang.org/spec/function.html#function-safety
The current definition of safety doesn't mention casts of arrays.
E.g. this code is allowed by DMD:

int[] f(void[] a) @safe pure
{
return cast(int[])a;
}

But the same void* to int* cast is forbidden.
So we need some rules for dynamic array casting,
e.g. allow only cast(void[]), as was done for pointers.

And the definition of safety should be changed.


Re: Safe cast away from immutable

2016-02-08 Thread Iakh via Digitalmars-d
On Monday, 8 February 2016 at 20:43:23 UTC, Jonathan M Davis 
wrote:




in a bug report should be sufficient to show the bug, even 
without the rest of what you're doing.


In general, it should be impossible for member functions to be 
considered strongly pure unless they're marked as immutable, 
though the compiler could certainly be improved to determine 
that no escaping of the return value or anything referencing it 
occurs within the function and that thus it can get away with 
treating the return value as if it's the only reference to that 
data, but that would likely be farther into the realm of code 
flow analysis than the compiler typically does.


It does, a bit. If S.arr is int[], the program fails to compile.
Is all params being const (but not immutable) not enough for
the function to be pure?



Regardless, your example definitely shows a bug. Please report 
it. Thanks.


Done
https://issues.dlang.org/show_bug.cgi?id=15660



Safe cast away from immutable

2016-02-08 Thread Iakh via Digitalmars-d

import std.stdio;

struct S
{
void[] arr;

auto f() pure @safe
{
int[] a = new int[4];
arr = a;
return a;
}
}
void main() @safe
{
S s;
immutable a = s.f();
int[] b = (cast(int[])s.arr);
writeln(a);
b[0] = 1;
writeln(a);
}

http://dpaste.dzfl.pl/13751913d2ff


Re: Safe cast away from immutable

2016-02-08 Thread Iakh via Digitalmars-d

On Monday, 8 February 2016 at 19:33:54 UTC, Jesse Phillips wrote:


I'm pretty sure this is not safe. Works, but not safe. You


So is it a bug?


Re: Safe cast away from immutable

2016-02-08 Thread Iakh via Digitalmars-d
On Monday, 8 February 2016 at 21:48:30 UTC, Jonathan M Davis 
wrote:



that right now, but clearly, what it currently has is buggy,


Yeah. Looks like it just traverses the params' AST and searches for
an exact match with the return type.

The code with (void, int) replaced by (class A, class B : A)
behaves the same way as the original:

import std.stdio;

class A
{
int i;
}

class B : A
{
}

struct S
{
A a;

auto f() pure @safe
{
B b = new B;
a = b;
return b;
}
}

void main() @safe
{
S s;
immutable a = s.f();
A b = s.a;
writeln(a.i);
b.i = 1;
writeln(a.i);
}


[idea] Mutable pointee/ RCString

2016-02-07 Thread Iakh via Digitalmars-d

Is it hard to make pointee data mutable?
E.g. if we have:
--
struct RCString
{
private char[] data;
private @mutable int* counter;
}
--
So for the optimiser (in the case of immutable) this looks like:
--
struct RCString
{
private char[] data;
private @mutable void* counter; // pointer to garbage
}
--


Re: voldemort stack traces (and bloat)

2016-02-07 Thread Iakh via Digitalmars-d
On Sunday, 7 February 2016 at 05:18:39 UTC, Steven Schveighoffer 
wrote:
4   testexpansion   0x00010fb5dbec pure 
@safe void 
testexpansion.s!(testexpansion.s!(testexpansion.s!(testexpansion.s!(testexpansion.s!


Why "bad" foo is void?


Is there a better way we should be doing this? I'm wondering if


Yeah, it would be nice to auto-replace it with 
testexpansion.S!(...)(...).Result.foo

or even with ...Result.foo


Re: [idea] Mutable pointee/ RCString

2016-02-07 Thread Iakh via Digitalmars-d

On Sunday, 7 February 2016 at 14:00:24 UTC, Iakh wrote:

Explanations:
Since "immutable" is transitive:
--
immutable RCString str;
*str.counter++; // Impossible/error/undefined behavior (with a const cast)
--
The language defines immutable to allow some optimizations based on the 
true constness of str, its fields, and the variables pointed to by its 
fields. But if the pointer were treated by the optimizer (not by the GC) 
as a void* (or a size_t), the pointee's "true constness" would not matter.

The only drawback is that an immutable function that reads a @mutable 
field can't be pure, because it reads a "global variable".


std.simd vision

2016-01-23 Thread Iakh via Digitalmars-d

I'm thinking of implementing std.simd in a fashion different from
https://github.com/TurkeyMan/simd
since that one looks dead and too sophisticated.

My proposal is:
- std.simd - processor-independent intrinsics and some high-level stuff.
  You only depend on the different sizes of SIMD vectors.
- std.simd.x86 - provides unified (compiler-independent) x86/x86_64 intrinsic
  calls. Each SIMD version (sse, sse2, ...) has its own namespace (struct) with
  intrinsic wrappers. Each wrapper is straightforward.
- std.simd.arm - provides unified ARM-based SIMD.

Demo:
https://github.com/Iakh/simd
At the end of the file
https://github.com/Iakh/simd/blob/master/std/simd/x86.d
is a long unittest with an SSE2 memchr implementation.


Re: D ASM. Program fails

2016-01-22 Thread Iakh via Digitalmars-d-learn

On Friday, 22 January 2016 at 17:27:35 UTC, userABCabc123 wrote:

int pmovmskb(byte16 v)
{
asm
{
naked;
push RBP;
mov RBP, RSP;
sub RSP, 0x10;
movdqa dword ptr[RBP-0x10], XMM0;
movdqa XMM0, dword ptr[RBP-0x10];
pmovmskb EAX, XMM0;
mov RSP, RBP;
pop RBP;
ret;
}
}


Thanks. It works.
But a shorter version works too:

asm
{
naked;
push RBP;
mov RBP, RSP;
//sub RSP, 0x10;
//movdqa dword ptr[RBP-0x10], XMM0;
//movdqa XMM0, dword ptr[RBP-0x10];
pmovmskb EAX, XMM0;
mov RSP, RBP;
pop RBP;
ret;
}

Looks like the SIMD parameter is passed in a SIMD register.



Re: D ASM. Program fails

2016-01-22 Thread Iakh via Digitalmars-d-learn

On Friday, 22 January 2016 at 12:18:53 UTC, anonymous wrote:


int pmovmskb(byte16 v)
{
int r;
asm
{
movdqa XMM0, v;
pmovmskb EAX, XMM0;
mov r, EAX;
}
return r;
}


This code returns 0 for any input v


Removed the `inout` because it doesn't make sense. You may be 
looking for `ref`.

yeah




Re: D ASM. Program fails

2016-01-22 Thread Iakh via Digitalmars-d-learn

On Friday, 22 January 2016 at 20:41:23 UTC, anonymous wrote:

On 22.01.2016 21:34, Iakh wrote:

This code returns 0 for any input v


It seems to return 5 here: http://dpaste.dzfl.pl/85fb8e5c4b6b


Yeah. Sorry. My bad.


Re: So... add maxCount and maxPos?

2016-01-21 Thread Iakh via Digitalmars-d
On Thursday, 21 January 2016 at 14:04:58 UTC, Andrei Alexandrescu 
wrote:

On 01/21/2016 08:42 AM, Era Scarecrow wrote:
I'd almost say lowestCount and highestCount would almost be 
better, but

i am not sure.


minCount is already a given. -- Andrei


countMost!less and posMost!less


D ASM. Program fails

2016-01-21 Thread Iakh via Digitalmars-d-learn

This code compiles, but the program exits with code -11.
What's wrong?

import std.stdio;
import core.simd;

int pmovmskb(inout byte16 v)
{
asm
{
movdqa XMM0, v;
pmovmskb EAX, XMM0;
ret;
}
}
void main()
{
byte16 a = [-1, 0, -1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0];
auto i = pmovmskb(a);
}

Program exited with code -11

DMD64 D Compiler v2.069


Re: Benchmark memchar (with GCC builtins)

2015-11-02 Thread Iakh via Digitalmars-d
On Friday, 30 October 2015 at 21:33:25 UTC, Andrei Alexandrescu 
wrote:
Could you please take a look at GCC's generated code and 
implementation of memchr? -- Andrei


So I did. I rewrote the code to do the main work in cache-line-sized 
chunks, which is what the glibc version does.
The main loop looks like this:

-
do
{
    // ptr16 is aligned 64
    ubyte16 r1 = __builtin_ia32_pcmpeqb128(ptr16[0], niddles);
    ubyte16 r2 = __builtin_ia32_pcmpeqb128(ptr16[1], niddles);
    ubyte16 r3 = __builtin_ia32_pcmpeqb128(ptr16[2], niddles);
    ubyte16 r4 = __builtin_ia32_pcmpeqb128(ptr16[3], niddles);

    r3 = __builtin_ia32_pmaxub128(r1, r3);
    r4 = __builtin_ia32_pmaxub128(r2, r4);
    r4 = __builtin_ia32_pmaxub128(r3, r4);
    mask = __builtin_ia32_pmovmskb128(r4);

    if (mask != 0)
    {
        mask = __builtin_ia32_pmovmskb128(r1);
        mixin(CheckMask); // Check and return value

        ++ptr16; num -= 16;
        mask = __builtin_ia32_pmovmskb128(r2);
        mixin(CheckMask);

        ++ptr16; num -= 16;
        r3 = __builtin_ia32_pcmpeqb128(*ptr16, niddles);
        mask = __builtin_ia32_pmovmskb128(r3);
        mixin(CheckMask);

        ++ptr16; num -= 16;
        r4 = __builtin_ia32_pcmpeqb128(*ptr16, niddles);
        mask = __builtin_ia32_pmovmskb128(r4);
        mixin(CheckMask);
    }

    num -= 64;
    ptr16 += 4;
}
while (num > 0);
-

and my best result:

-
Naive:      21.46  TickDuration(132842482)
SIMD:       1.161  TickDuration(7188211)
(was)SIMD:  3.04   TickDuration(18920182)
C:          1      TickDuration(6189222)



Re: Benchmark memchar (with GCC builtins)

2015-10-31 Thread Iakh via Digitalmars-d

On Saturday, 31 October 2015 at 08:37:23 UTC, rsw0x wrote:

I got it to 1.5 the running time of C using SSE2 but couldn't


Can you share your solution?


Re: Benchmark memchar (with GCC builtins)

2015-10-31 Thread Iakh via Digitalmars-d
On Friday, 30 October 2015 at 21:33:25 UTC, Andrei Alexandrescu 
wrote:
Could you please take a look at GCC's generated code and 
implementation of memchr? -- Andrei


A copy-and-paste from glibc's memchr (runGLibC) gave the results 
below.

-
Naive:     21.4  TickDuration(132485705)
SIMD:      3.17  TickDuration(19629892)
SIMDM:     2.49  TickDuration(15420462)
C:         1     TickDuration(6195504)
runGLibC:  4.32  TickDuration(26782585)
SIMDU:     1.8   TickDuration(11128618)

The ASM shows memchr is really called; there is neither compiler magic 
nor a local memchr implementation.
The aligned versions of memchr use aligned loads from memory and the 
unaligned one uses unaligned loads. So at this point the optimisation 
is done well.


Benchmark memchar (with GCC builtins)

2015-10-30 Thread Iakh via Digitalmars-d

I continue to play with SIMD. I was trying to use std.simd, but it has 
lots of things still to be implemented. I also gave up on core.simd.__simd 
due to problems with the PMOVMSKB instruction (it is not implemented).


Today I was playing with memchr for GDC:
memchr: http://www.cplusplus.com/reference/cstring/memchr/
My implementations with a benchmark:
http://dpaste.dzfl.pl/4c46c0cf340c

Benchmark results:
-
Naive:  21.9  TickDuration(136456491)
SIMD:   3.04  TickDuration(18920182)
SIMDM:  2.44  TickDuration(15232176)
SIMDU:  1.8   TickDuration(11210454)
C:      1     TickDuration(6233963)

The middle column is the duration relative to the C implementation 
(core.stdc.string).


memchrSIMD splits the input into three parts: an unaligned beginning, 
an unaligned end, and an aligned middle.


memchrSIMDM, instead of pmovmskb, uses this code:
--
if (Mask mask = *cast(Mask*)(result.array.ptr))
{
    return ptr + bsf(mask) / BitsInByte;
}
else if (Mask mask = *cast(Mask*)(result.array.ptr + Mask.sizeof))
{
    return ptr + bsf(mask) / BitsInByte + cast(int)Mask.sizeof;
}
--

memchrSIMDU (unaligned) applies the SIMD instructions starting from the 
first array element.


SIMD part of function:
--
ubyte16 niddles;
niddles.ptr[0..16] = value;
ubyte16 result;
ubyte16 arr;

for (; ptr < alignedEnd; ptr += 16)
{
arr.ptr[0..16] = ptr[0..16];
result = __builtin_ia32_pcmpeqb128(arr, niddles);
int i = __builtin_ia32_pmovmskb128(result);
if (i != 0)
{
return ptr + bsf(i);
}
}
--


Re: Benchmark memchar (with GCC builtins)

2015-10-30 Thread Iakh via Digitalmars-d
On Friday, 30 October 2015 at 21:33:25 UTC, Andrei Alexandrescu 
wrote:
Could you please take a look at GCC's generated code and 
implementation of memchr? -- Andrei


glibc uses something like pseudo-SIMD with ordinary x86 
instructions (XOR magic, etc.).

A deep comparison I left for next time :)


Re: Playing SIMD

2015-10-26 Thread Iakh via Digitalmars-d

On Monday, 26 October 2015 at 09:49:00 UTC, Iakh wrote:

On Monday, 26 October 2015 at 00:00:45 UTC, anonymous wrote:

runBinary calls naiveIndexOf. You're not testing binaryIndexOf.


You are right.
This is the fixed example:
http://dpaste.dzfl.pl/f7a54b789a21

and results at dpaste.dzfl.pl:
-
SIMD:   TickDuration(151000)
Binary: TickDuration(255000)
Naive:  TickDuration(459000)

So the SIMD version is ~1.68x faster than binary.


At home with the default dub config, "dub run --build=release":
-
SIMD:    TickDuration(350644)
Binary:  TickDuration(434014)
Naive:   TickDuration(657548)


~1.24 times faster than binary and
~1.87 times faster than naive


Re: Playing SIMD

2015-10-26 Thread Iakh via Digitalmars-d

On Monday, 26 October 2015 at 00:00:45 UTC, anonymous wrote:

runBinary calls naiveIndexOf. You're not testing binaryIndexOf.


You are right.
This is the fixed example:
http://dpaste.dzfl.pl/f7a54b789a21

and results at dpaste.dzfl.pl:
-
SIMD:   TickDuration(151000)
Binary: TickDuration(255000)
Naive:  TickDuration(459000)

So the SIMD version is ~1.68x faster than binary.


Re: Playing SIMD

2015-10-26 Thread Iakh via Digitalmars-d

On Monday, 26 October 2015 at 12:35:39 UTC, Don wrote:


You need to be very careful with doing benchmarks on tiny test 
cases, they can be very misleading.


Can you recommend some reading about benchmarking?


Re: Playing SIMD

2015-10-26 Thread Iakh via Digitalmars-d
On Monday, 26 October 2015 at 11:47:56 UTC, Andrei Alexandrescu 
wrote:

On 10/26/2015 05:48 AM, Iakh wrote:

On Monday, 26 October 2015 at 00:00:45 UTC, anonymous wrote:
runBinary calls naiveIndexOf. You're not testing 
binaryIndexOf.


You are right.
This is the fixed example:
http://dpaste.dzfl.pl/f7a54b789a21

and results at dpaste.dzfl.pl:
-
SIMD:   TickDuration(151000)
Binary: TickDuration(255000)
Naive:  TickDuration(459000)

So the SIMD version is ~1.68x faster than binary.


That's a healthy margin. It may get eroded by startup/finish 
codes that need to get to the first aligned chunk and handle 
the misaligned data at the end, etc. But it's a solid proof of 
concept. -- Andrei


(Binary) searching in a large sorted contiguous array, e.g. 1 MB of 
bytes:

1 MB = 2 ^^ 20 bytes, so a binary search is 20 levels.
Imagine 4 binary levels pass in 4 ticks, so 20 levels take 20 ticks. With 
SIMD the last 4 levels would be done in 2 or 3 ticks, giving 18-19 ticks 
for the 20 levels. So the overall improvement is 5-10%. Furthermore, 
think of the cache and memory page misses on the first levels.


IMO SIMD is needed for unsorted data (strings?) or for trees.


Re: Playing SIMD

2015-10-26 Thread Iakh via Digitalmars-d

On Monday, 26 October 2015 at 15:03:12 UTC, Iakh wrote:

(Binary) searching in a large sorted contiguous array, e.g. 1 MB 
of bytes:

1 MB = 2 ^^ 20 bytes, so a binary search is 20 levels.
Imagine 4 binary levels pass in 4 ticks, so 20 levels take 20 ticks. 
With SIMD the last 4 levels would be done in 2 or 3 ticks, giving 
18-19 ticks for the 20 levels. So the overall improvement is 5-10%. 
Furthermore, think of the cache and memory page misses on the first 
levels.


IMO SIMD is needed for unsorted data (strings?) or for trees.


But yeah... Who needs 1_000_000 sorted values in the 0..256 range? :D


Re: Playing SIMD

2015-10-25 Thread Iakh via Digitalmars-d
On Sunday, 25 October 2015 at 22:17:58 UTC, Matthias Bentrup 
wrote:

On Sunday, 25 October 2015 at 19:37:32 UTC, Iakh wrote:

Is it optimal and how do you implement this stuf?



I think it's better to use PMOVMSKB to avoid storing the 
PCMPEQB result in memory and you need only one BSF instruction.


Yeah, but PMOVMSKB is not implemented in core.simd.

A bit more comprehensive discussion here:
http://forum.dlang.org/post/20150923115833.054fdb09@marco-toshiba


Re: Playing SIMD

2015-10-25 Thread Iakh via Digitalmars-d
On Sunday, 25 October 2015 at 21:13:56 UTC, Andrei Alexandrescu 
wrote:

[...]
This is compelling but needs a bit of work to integrate. Care 
to work on a PR that makes std.algorithm use it? Thanks! -- 
Andrei


First of all I need to do some investigation about PRs and 
std.algorithm. But in general, challenge accepted.


Re: Adapting Tree Structures for Processing with SIMD,Instructions

2015-10-25 Thread Iakh via Digitalmars-d
On Wednesday, 23 September 2015 at 09:58:39 UTC, Marco Leise 
wrote:

Am Tue, 22 Sep 2015 16:36:40 +
schrieb Iakh :

[...]


Implementatation of SIMD find algorithm:
http://forum.dlang.org/post/hwjbyqnovwbyibjus...@forum.dlang.org


Playing SIMD

2015-10-25 Thread Iakh via Digitalmars-d
Here is my implementation of SIMD find. The function returns the index 
of a ubyte in a static 16-byte array with unique values.


--
immutable size_t ArraySize = 16;
int simdIndexOf(ubyte niddle, ref const ubyte[ArraySize] haystack)
{
    ubyte16 arr;
    arr.array = haystack[];
    ubyte16 niddles;
    niddles.array[] = niddle;
    ubyte16 result;
    result = __simd_sto(XMM.PCMPEQB, arr, niddles);
    alias Mask = ulong;
    static assert(2 * Mask.sizeof == result.sizeof);
    immutable BitsInByte = 8;

    if (Mask mask = *cast(Mask*)(result.array.ptr))
    {
        return bsf(mask) / BitsInByte;
    }
    else if (Mask mask = *cast(Mask*)(result.array.ptr + Mask.sizeof))
    {
        return bsf(mask) / BitsInByte + cast(int)Mask.sizeof;
    }
    else
    {
        return -1;
    }
}
--

Is it optimal, and how do you implement this stuff?

Full example with a comparison of algorithms (SIMD, naive, binary 
search):

http://dpaste.dzfl.pl/f3b8989841e3

Benchmark results on dpaste.dzfl.pl:
SIMD:    TickDuration(157000)
Binary:  TickDuration(472000)
Naive:   TickDuration(437000)

At home with the default dub config, "dub run --build=release":
SIMD:    TickDuration(241566)
Binary:  TickDuration(450515)
Naive:   TickDuration(450371)


Re: Adapting Tree Structures for Processing with SIMD,Instructions

2015-09-23 Thread Iakh via Digitalmars-d
On Wednesday, 23 September 2015 at 09:58:39 UTC, Marco Leise 
wrote:

Am Tue, 22 Sep 2015 16:36:40 +
schrieb Iakh :

[...]


thanks for the workaround(s)



Re: Adapting Tree Structures for Processing with SIMD,Instructions

2015-09-23 Thread Iakh via Digitalmars-d
On Tuesday, 22 September 2015 at 20:10:36 UTC, David Nadlinger 
wrote:

On Tuesday, 22 September 2015 at 19:45:33 UTC, Iakh wrote:

Your solution is platform dependent, isn't it?


Platform-dependent in what way? Yes, the intrinsic for PMOVMSKB 
is obviously x86-only.



The core.simd XMM enum has this opcode commented out:
//PMOVMSKB = 0x660FD7


__simd is similarly DMD-only.

 – David


Yes, I meant compiler dependent.


__simd is similarly DMD-only.

 – David


Sad, I didn't know that :( I thought core.simd was a built-in language 
feature, as it is described at http://dlang.org/simd.html


 - Iakh


Re: Adapting Tree Structures for Processing with SIMD,Instructions

2015-09-22 Thread Iakh via Digitalmars-d
On Tuesday, 22 September 2015 at 17:46:32 UTC, David Nadlinger 
wrote:

On Tuesday, 22 September 2015 at 16:36:42 UTC, Iakh wrote:
_mm_movemask_epi8, a cornerstone of the topic, is currently not 
implemented/not supported in D :(

AFAIK it has an irregular result format.


From ldc.gccbuiltins_x86:

int __builtin_ia32_pmovmskb128(byte16);
int __builtin_ia32_pmovmskb256(byte32);

What am I missing?

 — David


Your solution is platform dependent, isn't it?

The core.simd XMM enum has this opcode commented out:
//PMOVMSKB = 0x660FD7

https://github.com/D-Programming-Language/druntime/blob/master/src/core/simd.d
line 241

PMOVMSKB is the opcode of the instruction, and there is no instruction 
generator for this opcode like this one:
pure nothrow @nogc @safe void16 __simd(XMM opcode, void16 op1, void16 op2);


Re: Adapting Tree Structures for Processing with SIMD,Instructions

2015-09-22 Thread Iakh via Digitalmars-d
On Tuesday, 22 September 2015 at 13:06:39 UTC, Andrei 
Alexandrescu wrote:
A paper I found interesting: 
http://openproceedings.org/EDBT/2014/paper_107.pdf -- Andrei


_mm_movemask_epi8, a cornerstone of the topic, is currently not 
implemented/not supported in D :(

AFAIK it has an irregular result format.


Sparse array based on Adaptive Radix Tree implementation

2015-08-21 Thread Iakh via Digitalmars-d

Hi!
Here is my pre-alpha implementation of a sparse array
based on an adaptive radix tree (ART):
https://github.com/Iakh/d-art-containers

It is based on this article:
http://www-db.in.tum.de/~leis/papers/ART.pdf

And here is general information about radix trees:
https://en.wikipedia.org/wiki/Radix_tree

A radix tree uses nodes with Radix children; for this
implementation Radix == 256 (byte capacity). Each byte
of a key for the container corresponds to one level of
the tree (the tree height is equal to the size of the key).
Also, a radix tree doesn't store the keys of the data
explicitly: a key can be restored from the path to the leaf.

ART uses several optimizations, a few of which are
currently implemented (see the sketch below):
 - Adaptive node size: there are Node4, Node16,
Node48, and Node256, with different strategies to store
elements (Node4 and Node256 are implemented).
 - Collapsing nodes: nodes with a single child are not
explicitly created; only their keys are stored.
(implemented)
 - SIMD optimisations for fast access in nodes
other than Node256. (not implemented)

Also implemented is a sort of range over all elements.
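
A simplified sketch (mine, not the repository's actual layout) of the two 
node kinds mentioned above:

--
// Node4 keeps up to four key bytes and matching children; Node256 indexes
// children directly by the key byte.
struct Node4
{
    ubyte count;
    ubyte[4] keys;
    Node*[4] children;

    Node* find(ubyte key)
    {
        foreach (i; 0 .. count)
            if (keys[i] == key)
                return children[i];
        return null;
    }
}

struct Node256
{
    Node*[256] children;

    Node* find(ubyte key) { return children[key]; }
}

struct Node { /* in the real tree: a tagged union of Node4, Node16, Node48, Node256 */ }
--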

Is the container needed in Phobos?