Re: DIP80: phobos additions

2015-06-25 Thread Tofu Ninja via Digitalmars-d

On Thursday, 25 June 2015 at 01:32:22 UTC, Timon Gehr wrote:

[...]


Here's what I came up with... I love D so much <3

module util.binOpProxy;

import std.algorithm : joiner, map;
import std.array : array;
struct __typeproxy(T, string s)
{
    enum op = s;
    T payload;
    auto opUnary(string newop)()
    {
        return __typeproxy!(T, newop ~ op)(payload);
    }
}

/**
 * Example:
 * struct test
 * {
 *     mixin(binOpProxy!("~", "*"));
 *
 *     void opBinary(string op : "+~~", T)(T rhs)
 *     {
 *         writeln("hello!");
 *     }
 *
 *     void opBinary(string op : "+~+-~*--+++*", T)(T rhs)
 *     {
 *         writeln("world");
 *     }
 *
 *     void opBinary(string op, T)(T rhs)
 *     {
 *         writeln("default");
 *     }
 * }
 */
enum binOpProxy(proxies...) = `
import ` ~ __MODULE__ ~ ` : __typeproxy;
auto opBinary(string op, D : __typeproxy!(T, T_op), T, string T_op)(D rhs)
{
    return opBinary!(op ~ D.op)(rhs.payload);
}
` ~ [proxies].map!((string a) => `
auto opUnary(string op : "` ~ a ~ `")()
{
    return __typeproxy!(typeof(this), op)(this);
}
`).joiner.array;
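For context, an untested sketch of how the mixin above might be exercised, following the doc comment; the struct and operators are the ones from the example, and the behavior comments are expectations rather than verified output:

```d
// Hypothetical usage of util.binOpProxy, assuming the module above compiles as posted.
import std.stdio;
import util.binOpProxy;

struct test
{
    // Generates the unary "~" and "*" proxy overloads plus the forwarding opBinary.
    mixin(binOpProxy!("~", "*"));

    void opBinary(string op : "+~~", T)(T rhs) { writeln("hello!"); }
    void opBinary(string op, T)(T rhs) { writeln("default"); }
}

void main()
{
    test a, b;
    a +~~ b; // `~~b` builds a __typeproxy carrying "~~"; `+` should dispatch on "+~~"
    a + b;   // no proxy involved, should fall through to the unconstrained overload
}
```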




Re: DIP80: phobos additions

2015-06-24 Thread Tofu Ninja via Digitalmars-d

On Wednesday, 24 June 2015 at 19:04:38 UTC, Wyatt wrote:

On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:


I actually thought about it more, and D does have a bunch of 
binary operators that no one uses. You can make all sorts of 
weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,

--, ++, ^^+, in++, |-, %~, etc...

import std.stdio;

void main(string[] args){
    test a;
    test b;
    a +* b;
}
struct test{
    private struct testAlpha{
        test payload;
    }
    testAlpha opUnary(string s : "*")(){
        return testAlpha(this);
    }
    void opBinary(string op : "+")(test rhs){
        writeln("+");
    }
    void opBinary(string op : "+")(testAlpha rhs){
        writeln("+*");
    }
}


Oh right, meant to respond to this.  I'll admit it took me a 
few to really get why that works-- it's fairly clever and 
moderately terrifying.  (I showed it to a friend and he opined 
it may violate the grammar.)


But playing with it a bit...well, it's very cumbersome having 
to do these overload gymnastics.  It eats away at your opUnary 
space because of the need for private proxy types, and each one 
needs an opBinary defined to support it explicitly.  It also 
means you can't make overloads for mismatched types or builtin 
types (at least, I couldn't figure out how in the few minutes I 
spent poking it over lunch).


-Wyatt


I am thinking of writing a mixin that will set up the proxy for 
you so that you can just write:


struct test
{
    mixin binOpProxy!("*");
    void opBinary(string op : "+*", T)(T rhs){
        writeln("+*");
    }
}

The hard part will be to get it to work with arbitrarily long 
unary proxies. E.g.:

mixin binOpProxy!("~-~");
void opBinary(string op : "+~-~", T)(T rhs){
    writeln("+~-~");
}


Re: DIP80: phobos additions

2015-06-24 Thread Timon Gehr via Digitalmars-d

On 06/24/2015 11:41 PM, Tofu Ninja wrote:

On Wednesday, 24 June 2015 at 19:04:38 UTC, Wyatt wrote:

On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:


I actually thought about it more, and D does have a bunch of binary
operators that no one uses. You can make all sorts of weird
operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,
--, ++, ^^+, in++, |-, %~, etc...

import std.stdio;

void main(string[] args){
    test a;
    test b;
    a +* b;
}
struct test{
    private struct testAlpha{
        test payload;
    }
    testAlpha opUnary(string s : "*")(){
        return testAlpha(this);
    }
    void opBinary(string op : "+")(test rhs){
        writeln("+");
    }
    void opBinary(string op : "+")(testAlpha rhs){
        writeln("+*");
    }
}


Oh right, meant to respond to this.  I'll admit it took me a few to
really get why that works-- it's fairly clever and moderately
terrifying.  (I showed it to a friend and he opined it may violate the
grammar.)

But playing with it a bit...well, it's very cumbersome having to do
these overload gymnastics.  It eats away at your opUnary space because
of the need for private proxy types, and each one needs an opBinary
defined to support it explicitly.  It also means you can't make
overloads for mismatched types or builtin types (at least, I couldn't
figure out how in the few minutes I spent poking it over lunch).

-Wyatt


I am thinking of writing a mixin that will set up the proxy for you so
that you can just write:

struct test
{
    mixin binOpProxy!("*");
    void opBinary(string op : "+*", T)(T rhs){
        writeln("+*");
    }
}

The hard part will be to get it to work with arbitrarily long unary
proxies. E.g.:

mixin binOpProxy!("~-~");
void opBinary(string op : "+~-~", T)(T rhs){
    writeln("+~-~");
}


Obviously you will run into issues with precedence soon, but this should 
do it:


import std.stdio;
struct Test{
    mixin(binOpProxy("+~+-~*--+++*"));
    void opBinary(string op : "+~+-~*--+++*", T)(T rhs){
        writeln("+~+-~*--+++*");
    }
}

void main(){
    Test a,b;
    a +~+-~*--+++* b;
}

import std.string, std.algorithm, std.range;
int operatorSuffixLength(string s){
    int count(dchar c){ return 2 - s.retro.countUntil!(d => c != d) % 2; }
    if(s.endsWith("++")) return count('+');
    if(s.endsWith("--")) return count('-');
    return 1;
}
struct TheProxy(T, string s){
    T unwrap;
    this(T unwrap){ this.unwrap = unwrap; }
    static if(s.length){
        alias NextType = TheProxy!(T, s[0 .. $ - operatorSuffixLength(s)]);
        alias FullType = NextType.FullType;
        mixin(`
        auto opUnary(string op : "` ~ s[$ - operatorSuffixLength(s) .. $] ~ `")(){
            return NextType(unwrap);
        }`);
    }else{
        alias FullType = typeof(this);
    }
}

string binOpProxy(string s)in{
    assert(s.length >= 1 + operatorSuffixLength(s));
    assert(!s.startsWith("++"));
    assert(!s.startsWith("--"));
    foreach(dchar c; s)
        assert("+-*~".canFind(c));
}body{
    int len = operatorSuffixLength(s);
    return `
    auto opUnary(string op : "` ~ s[$ - len .. $] ~ `")(){
        return TheProxy!(typeof(this), "` ~ s[1 .. $ - len] ~ `")(this);
    }
    auto opBinary(string op : "` ~ s[0 .. 1] ~ `")(TheProxy!(typeof(this), "` ~ s[1 .. $ - 1] ~ `").FullType t){
        return opBinary!"` ~ s ~ `"(t.unwrap);
    }
    `;
}




Re: DIP80: phobos additions

2015-06-24 Thread Wyatt via Digitalmars-d

On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:


I actually thought about it more, and D does have a bunch of 
binary operators that no one uses. You can make all sorts of 
weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,

--, ++, ^^+, in++, |-, %~, etc...

import std.stdio;

void main(string[] args){
    test a;
    test b;
    a +* b;
}
struct test{
    private struct testAlpha{
        test payload;
    }
    testAlpha opUnary(string s : "*")(){
        return testAlpha(this);
    }
    void opBinary(string op : "+")(test rhs){
        writeln("+");
    }
    void opBinary(string op : "+")(testAlpha rhs){
        writeln("+*");
    }
}


Oh right, meant to respond to this.  I'll admit it took me a few 
to really get why that works-- it's fairly clever and moderately 
terrifying.  (I showed it to a friend and he opined it may 
violate the grammar.)


But playing with it a bit...well, it's very cumbersome having to 
do these overload gymnastics.  It eats away at your opUnary space 
because of the need for private proxy types, and each one needs 
an opBinary defined to support it explicitly.  It also means you 
can't make overloads for mismatched types or builtin types (at 
least, I couldn't figure out how in the few minutes I spent 
poking it over lunch).


-Wyatt


Re: DIP80: phobos additions

2015-06-23 Thread ponce via Digitalmars-d
On Wednesday, 10 June 2015 at 15:44:40 UTC, Andrei Alexandrescu 
wrote:

On 6/10/15 1:53 AM, ponce wrote:

On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
It's a shame ucent/cent never got implemented. But couldn't 
they be
added to Phobos? I often need a 128-bit type with better 
precision

than float and double.


FWIW:
https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/math/wideint.d


Yes, arbitrary fixed-size integrals would be good to have in 
Phobos. Who's the author of that code? Can we get something 
going here? -- Andrei


Sorry for the delay. I wrote this code a while ago.
I will relicense it any way that is needed (if needed).
I currently lack the time to polish it more (adding custom 
literals would be the one thing to do).




Re: DIP80: phobos additions

2015-06-23 Thread Dominikus Dittes Scherkl via Digitalmars-d

On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:


I actually thought about it more, and D does have a bunch of 
binary operators that no one uses. You can make all sorts of 
weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,

--, ++, ^^+, in++, |-, %~, etc...



+* is an especially bad idea, as I would read that as a + (*b), 
which is quite usual in C.


But in general very cool. I love ~~ and |- the most :-)


Re: DIP80: phobos additions

2015-06-23 Thread Tofu Ninja via Digitalmars-d
On Tuesday, 23 June 2015 at 16:33:29 UTC, Dominikus Dittes 
Scherkl wrote:

On Wednesday, 17 June 2015 at 09:28:00 UTC, Tofu Ninja wrote:


I actually thought about it more, and D does have a bunch of 
binary operators that no one uses. You can make all sorts of 
weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~,

--, ++, ^^+, in++, |-, %~, etc...



+* is an especially bad idea, as I would read that as a + (*b), 
which is quite usual in C.


But in general very cool. I love ~~ and |- the most :-)


Yeah, |- does seem like an interesting one; not sure what it would 
mean though, I get the impression it's a wall or something. Also, 
you can basically combine any binOp and any number of unaryOps to 
create an arbitrary number of custom binOps. ~+*+*+*+ could be 
valid! You could probably make something like brainfuck in D's 
unary operators.
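To make the parsing concrete: `a |- b` is grammatically `a | (-b)`, so the proxy trick from earlier in the thread covers `|-` too. An untested sketch (all names illustrative):

```d
import std.stdio;

struct Wall
{
    // Proxy produced when unary '-' is applied to a Wall.
    struct Neg { Wall payload; }

    Neg opUnary(string op : "-")() { return Neg(this); }

    // `a |- b` parses as `a | (-b)`, so '|' receiving a Neg proxy
    // plays the role of the made-up binary operator "|-".
    void opBinary(string op : "|")(Neg rhs) { writeln("|-"); }
}

void main()
{
    Wall a, b;
    a |- b; // should print "|-"
}
```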


Re: DIP80: phobos additions

2015-06-21 Thread Steven Schveighoffer via Digitalmars-d

On 6/19/15 9:50 PM, Joakim wrote:


Then there's always this:

http://www.theverge.com/2015/6/19/8811425/heinz-ketchup-qr-code-porn-site-fundorado


Not the fault of the QR code of course, just an expired domain name, but
still funny. :)


Oh man. Note to marketing department -- all QR codes must point to 
ourcompany.com, you can redirect from there!!!


-Steve


Re: DIP80: phobos additions

2015-06-19 Thread Joakim via Digitalmars-d
On Sunday, 14 June 2015 at 01:57:37 UTC, Steven Schveighoffer 
wrote:

On 6/13/15 11:46 AM, Nick Sabalausky wrote:

On 06/08/2015 03:55 AM, ezneh wrote:


- Create / read QR codes, maybe ? It seems we see more and 
more QR Codes

here and there, so it could potentially be worth it


I see them everywhere, but does anyone ever actually use them? 
Usually
it's just an obvious link to some company's 
marketing/advertising. It's

basically just like the old CueCat, if anyone remembers it:
https://en.wikipedia.org/wiki/CueCat

Only time I've ever seen *anyone* actually using a QR code is 
when *I* use a "display QR link for this page" FF plugin to send 
the webpage I'm looking at to my phone.

Maybe I'm just not seeing it, but I suspect QR is more something 
that companies *want* people to care about, rather than something 
anyone actually uses.



A rather cool usage of QR code I saw was a sticker on a device 
that was a link to the PDF of the manual.


Then there's always this:

http://www.theverge.com/2015/6/19/8811425/heinz-ketchup-qr-code-porn-site-fundorado

Not the fault of the QR code of course, just an expired domain 
name, but still funny. :)


Re: DIP80: phobos additions

2015-06-17 Thread Tofu Ninja via Digitalmars-d

On Friday, 12 June 2015 at 01:55:15 UTC, Wyatt wrote:
From the outset, my thought was to strictly define the set of 
(eight or so?) symbols for this.  If memory serves, it was 
right around the time Walter rejected wholesale user-defined 
operators because of exactly the problem you mention. 
(Compounded by Unicode-- what the hell is 2  8 supposed to 
be!?)  I strongly suspect you don't need many simultaneous 
extra operators on a type to cover most cases.


-Wyatt


I actually thought about it more, and D does have a bunch of 
binary operators that no one uses. You can make all sorts of 
weird operators like +*, *~, +++, ---, *--, /++, ~~, ~-, -~, 
--, ++, ^^+, in++, |-, %~, etc...


import std.stdio;

void main(string[] args){
    test a;
    test b;
    a +* b;
}
struct test{
    private struct testAlpha{
        test payload;
    }
    testAlpha opUnary(string s : "*")(){
        return testAlpha(this);
    }
    void opBinary(string op : "+")(test rhs){
        writeln("+");
    }
    void opBinary(string op : "+")(testAlpha rhs){
        writeln("+*");
    }
}


Re: DIP80: phobos additions

2015-06-15 Thread Ilya Yaroshenko via Digitalmars-d
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
Phobos is awesome, the libs of Go, Python and Rust only have 
better marketing.
As discussed at DConf, Phobos needs to become big and blow the 
rest out of the sky.


http://wiki.dlang.org/DIP80

let's get OT, please discuss


N-dimensional slices is ready for comments!
Announce  
http://forum.dlang.org/thread/rilfmeaqkailgpxoz...@forum.dlang.org


Ilya


Re: DIP80: phobos additions

2015-06-15 Thread Ilya Yaroshenko via Digitalmars-d

On Monday, 15 June 2015 at 08:12:17 UTC, anonymous wrote:
I understand 'optimize default implementation to the speed of 
high quality BLAS for _any_/large matrix size'. Great if it is 
done, but imo there is no real pressure to do it and it probably 
needs a lot of experts' time.


+1



Re: DIP80: phobos additions

2015-06-15 Thread via Digitalmars-d

On Monday, 15 June 2015 at 08:12:17 UTC, anonymous wrote:
sorry, I should read more carefully. I understand 'optimize 
default implementation to the speed of high quality BLAS for 
_any_/large matrix size'. Great if it is done, but imo there is 
no real pressure to do it and it probably needs a lot of 
experts' time.


Benchmarking when existing BLAS is actually faster than 'naive 
brute force' sounds very good and reasonable.


Yes. Well, I think there are some different expectations to what 
a standard library should include. In my view BLAS is primarily 
an API that matters because people have existing code bases, 
therefore it is common to have good implementations for it. I 
don't really see any reason for why new programs should target it.


I think it is a good idea to stay higher level. Provide simple 
implementations that the optimizer can deal with. Then have a 
benchmarking program that runs on different configurations 
(os+hardware) to measure when the non-D libraries perform better, 
and use those when they are faster.


So I don't think phobos should provide BLAS as such. That's what 
I would do, anyway.


Re: DIP80: phobos additions

2015-06-15 Thread John Colvin via Digitalmars-d

On Monday, 15 June 2015 at 13:44:53 UTC, Dennis Ritchie wrote:

On Monday, 15 June 2015 at 10:00:43 UTC, Ilya Yaroshenko wrote:

N-dimensional slices is ready for comments!


It seems to me that the properties of the matrix require `row` 
and `col` like this:


import std.stdio, std.experimental.range.ndslice, std.range : iota;

void main() {
    auto matrix = 100.iota.sliced(3, 4, 5);

    writeln(matrix[0]);
    // [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]

    // writeln(matrix[0].row); // 4
    // writeln(matrix[0].col); // 5
}

P.S. I'm not exactly sure that these properties should work 
exactly as in my code :)


try .length!0 and .length!1 or .shape[0] and .shape[1]


Re: DIP80: phobos additions

2015-06-15 Thread Dennis Ritchie via Digitalmars-d

On Monday, 15 June 2015 at 10:00:43 UTC, Ilya Yaroshenko wrote:

N-dimensional slices is ready for comments!


It seems to me that the properties of the matrix require `row` 
and `col` like this:


import std.stdio, std.experimental.range.ndslice, std.range : iota;

void main() {
    auto matrix = 100.iota.sliced(3, 4, 5);

    writeln(matrix[0]);
    // [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]

    // writeln(matrix[0].row); // 4
    // writeln(matrix[0].col); // 5
}

P.S. I'm not exactly sure that these properties should work 
exactly as in my code :)


Re: DIP80: phobos additions

2015-06-15 Thread Ilya Yaroshenko via Digitalmars-d

On Monday, 15 June 2015 at 13:55:16 UTC, John Colvin wrote:

On Monday, 15 June 2015 at 13:44:53 UTC, Dennis Ritchie wrote:

On Monday, 15 June 2015 at 10:00:43 UTC, Ilya Yaroshenko wrote:

N-dimensional slices is ready for comments!


It seems to me that the properties of the matrix require `row` 
and `col` like this:


import std.stdio, std.experimental.range.ndslice, std.range : iota;

void main() {
    auto matrix = 100.iota.sliced(3, 4, 5);

    writeln(matrix[0]);
    // [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]

    // writeln(matrix[0].row); // 4
    // writeln(matrix[0].col); // 5
}

P.S. I'm not exactly sure that these properties should work 
exactly as in my code :)


try .length!0 and .length!1 or .shape[0] and .shape[1]


Nitpick: shape contains lengths and strides: .shape.lengths[0] 
and .shape.lengths[1]


Re: DIP80: phobos additions

2015-06-15 Thread Dennis Ritchie via Digitalmars-d

On Monday, 15 June 2015 at 14:32:20 UTC, Ilya Yaroshenko wrote:
I am not sure that we need something like `height`/`row` and 
`width`/`col` for nd-slices. These kinds of names can be used after 
casting to the future `std.container.matrix`.


Here is something similar implemented:
https://github.com/k3kaimu/carbon/blob/master/source/carbon/linear.d#L52-L56

I want something like `rows` and `cols` in the future:
https://github.com/k3kaimu/carbon/blob/master/source/carbon/linear.d#L156-L157

Waiting for `static foreach`. This design really helps a lot to 
implement multidimensional slices.


Re: DIP80: phobos additions

2015-06-15 Thread Ilya Yaroshenko via Digitalmars-d

On Monday, 15 June 2015 at 13:44:53 UTC, Dennis Ritchie wrote:

On Monday, 15 June 2015 at 10:00:43 UTC, Ilya Yaroshenko wrote:

N-dimensional slices is ready for comments!


It seems to me that the properties of the matrix require `row` 
and `col` like this:


import std.stdio, std.experimental.range.ndslice, std.range : iota;

void main() {
    auto matrix = 100.iota.sliced(3, 4, 5);

    writeln(matrix[0]);
    // [[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]

    // writeln(matrix[0].row); // 4
    // writeln(matrix[0].col); // 5
}

P.S. I'm not exactly sure that these properties should work 
exactly as in my code :)


This works:

unittest {
    import std.stdio, std.experimental.range.ndslice;
    import std.range : iota;

    auto matrix = 100.iota.sliced(3, 4, 5);

    writeln(matrix[0]);
    writeln(matrix[0].length);   // 4
    writeln(matrix[0].length!0); // 4
    writeln(matrix[0].length!1); // 5
    writeln(matrix.length!2);    // 5
}

Prints:

//[[0, 1, 2, 3, 4], [5, 6, 7, 8, 9], [10, 11, 12, 13, 14], [15, 16, 17, 18, 19]]

//4
//4
//5
//5

I am not sure that we need something like `height`/`row` and 
`width`/`col` for nd-slices. These kinds of names can be used after 
casting to the future `std.container.matrix`.


Re: DIP80: phobos additions

2015-06-15 Thread anonymous via Digitalmars-d
On Sunday, 14 June 2015 at 21:50:02 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 14 June 2015 at 21:31:53 UTC, anonymous wrote:
2. Then write similar code with hardware optimized BLAS and 
benchmark where the overhead between pure C/LLVM and BLAS 
calls balance out to even.
may there are more important / beneficial things to work on - 
assuming total time of contributors is fix and used for other 
D stuff:)


Sure, but that is what I'd do if I had the time. Get a baseline 
for what kind of NxN sizes D can reasonably be expected to deal 
with in a naive brute force manner.


Then consider pushing anything beyond that over to something 
more specialized.


*shrugs*




sorry, I should read more carefully. I understand 'optimize default 
implementation to the speed of high quality BLAS for _any_/large 
matrix size'. Great if it is done, but imo there is no real 
pressure to do it and it probably needs a lot of experts' time.


Benchmarking when existing BLAS is actually faster than 'naive 
brute force' sounds very good and reasonable.


Re: DIP80: phobos additions

2015-06-15 Thread via Digitalmars-d
On Sunday, 14 June 2015 at 21:50:02 UTC, Ola Fosheim Grøstad 
wrote:
Sure, but that is what I'd do if I had the time. Get a baseline 
for what kind of NxN sizes D can reasonably be expected to deal 
with in a naive brute force manner.


In case it isn't obvious: a potential advantage of a simple 
algorithm that does naive brute force is that the backend might 
stand a better chance optimizing it, at least when you have a 
matrix that is known at compile time.




Re: DIP80: phobos additions

2015-06-14 Thread Ilya Yaroshenko via Digitalmars-d
On Sunday, 14 June 2015 at 12:01:47 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 14 June 2015 at 11:43:46 UTC, Ilya Yaroshenko wrote:
I really don't understand what you mean by the "generic" 
keyword.


Do you want one matrix type that includes all cases???
I hope you do not.


Yes, that is what generic programming is about. The type should 
signify the semantics, not exact representation.


Then you alias common types float4x4 etc.


std.range has a lot of types + D arrays.
The power is in the unified API (structural type system).

For matrices this API is very simple: operations like m1[] += m2, 
transposed, etc.
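The `m1[] += m2` style already exists for plain D arrays via array-wise operations, which is presumably what such a matrix API would build on; a tiny self-contained sketch:

```d
import std.stdio;

void main()
{
    double[] m1 = [1, 2, 3, 4];
    double[] m2 = [10, 20, 30, 40];

    m1[] += m2[]; // element-wise add, no explicit loop in user code
    m1[] *= 2.0;  // element-wise scale
    writeln(m1);  // [22, 44, 66, 88]
}
```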


Ilya


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 11:43:46 UTC, Ilya Yaroshenko wrote:
I really don't understand what you mean by the "generic" 
keyword.


Do you want one matrix type that includes all cases???
I hope you do not.


Yes, that is what generic programming is about. The type should 
signify the semantics, not exact representation.


Then you alias common types float4x4 etc.

It does take a lot of abstraction design work. I've done some of 
it in C++ for sliced views over memory and arrays and I'd say you 
need many iterations to get it right.


If not, yes it should be generic like all other Phobos. But we 
will have one module for 3D/4D geometric and 3D/4D 
matrix/vector multiplications, another module for general 
matrices (std.container.matrix) and another module with generic 
BLAS (std.numeric.blas) for general purpose matrices. After all 
of that we can think about scripting-like m0 = m1*v*m2 
features.


All I can say is that  I have a strong incentive to avoid using 
Phobos features if D does not automatically utilize the best 
OS/CPU vendor provided libraries in a portable manner and with 
easy-to-read high level abstractions.


D's strength compared to C++/Rust is that D can evolve to be 
easier to use than those languages. C++/Rust are hard to use by 
nature. But usability takes a lot of API design effort, so it 
won't come easy.


D's strength compared to Go is that it can better take advantage 
of hardware and provide better library abstractions, Go appears 
to deliberately avoid it. They probably want to stay nimble with 
very limited hardware-interfacing so that you can easily move it 
around in the cloud.


Re: DIP80: phobos additions

2015-06-14 Thread Ilya Yaroshenko via Digitalmars-d
On Sunday, 14 June 2015 at 12:52:52 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 14 June 2015 at 12:18:39 UTC, Ilya Yaroshenko wrote:

std.range has a lot of types + D arrays.
The power in unified API (structural type system).


Yeah, I agree that templates in C++/D more or less make those 
type systems structural-like, even though C++ is using nominal 
typing.


I've also found that although the combinatorial explosion is a 
possibility, most applications I write have a types.h file 
that defines the subset I want to use for that application. So 
the combinatorial explosion is not such a big deal after all.


But one needs to be patient and add lots of static_asserts… 
since the template type system is weak.


For matrices this API is very simple: operations like m1[] += 
m2, transposed, etc.


I think it is a bit more complicated than that. You also need 
to think about alignment, padding, strides, convolutions, 
identity matrices, invertible matrices, windows on a stream, 
higher order matrices etc…


Alignment, strides (windows on a stream - I understand it like 
Sliding Windows) are not a problem.


Convolutions, identity matrices, invertible matrices are stuff I 
don't want to see in Phobos. They are about "MathD", not about a 
(big) standard library.


For higher order slices see 
https://github.com/D-Programming-Language/phobos/pull/3397


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d
I think there might be a disconnection in this thread. D only, or 
D frontend?


There are hardware vendor and commercial libraries that are 
heavily optimized for particular hardware configurations. There 
is no way a D-only solution can beat those. As an example Apple 
provides various implementations for their own machines, so an 
old program on a new machine can run faster than a static D-only 
library solution.


What D can provide is a unifying abstraction, but to get there 
one needs to analyze what exists. Like Apple's Accelerate 
framework:


https://developer.apple.com/library/prerelease/ios/documentation/Accelerate/Reference/AccelerateFWRef/index.html#//apple_ref/doc/uid/TP40009465

That goes beyond BLAS. We also need to look at vDSP etc. You'll 
find similar things for Microsoft/Intel/AMD/ARM etc…


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 12:18:39 UTC, Ilya Yaroshenko wrote:

std.range has a lot of types + D arrays.
The power in unified API (structural type system).


Yeah, I agree that templates in C++/D more or less make those 
type systems structural-like, even though C++ is using nominal 
typing.


I've also found that although the combinatorial explosion is a 
possibility, most applications I write have a types.h file that 
defines the subset I want to use for that application. So the 
combinatorial explosion is not such a big deal after all.


But one needs to be patient and add lots of static_asserts… since 
the template type system is weak.


For matrices this API is very simple: operations like m1[] += 
m2, transposed, etc.


I think it is a bit more complicated than that. You also need to 
think about alignment, padding, strides, convolutions, identity 
matrices, invertible matrices, windows on a stream, higher order 
matrices etc…


Re: DIP80: phobos additions

2015-06-14 Thread Ilya Yaroshenko via Digitalmars-d
On Sunday, 14 June 2015 at 14:02:59 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 14 June 2015 at 13:48:23 UTC, Ilya Yaroshenko wrote:
Alignment, strides (windows on a stream - I understand it like 
Sliding Windows) are not a problem.


It isn't a problem if you use the best possible abstraction 
from the start. It is a problem if you don't focus on it from 
the start.


I am sorry for this trolling:
Lisp is the best abstraction, though.

Sometimes I find very cool abstract libraries with a relatively 
small number of users.
For example, many programmers don't want to use Boost only because 
its abstractions make them crazy.


Convolutions, identity matrices, invertible matrices are stuff 
I don't want to see in Phobos. They are about "MathD", not 
about a (big) standard library.


I don't see how you can get good performance without special 
casing identity matrices, transposed matrices and so on. You 
surely need to support matrix inversion, Gauss-Jordan 
elimination (or the equivalent) etc?


For daily scientific purposes - yes.
For R/Matlab-like mathematical libraries - yes.
For real world applications - no. An engineer can achieve the best 
performance without special cases by lowering the abstraction. 
Simplicity and transparency (how it works) are more important in 
this case.


Re: DIP80: phobos additions

2015-06-14 Thread Ilya Yaroshenko via Digitalmars-d
On Sunday, 14 June 2015 at 09:07:19 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 14 June 2015 at 08:14:21 UTC, weaselcat wrote:
nobody uses general purpose linear matrix libraries for 
games/graphics for a reason,


The reason is that C++ didn't provide anything. As a result 
each framework provides its own and you get N different 
libraries that are incompatible.


There is no good reason for making small-matrix libraries 
incompatible with the rest of eco-system given the templating 
system you have in D. What you need is a library that supports 
multiple representations and can do the conversions.


Of course, you'll do better if you also have 
term-rewriting/AST-macros.


The reason is that general purpose matrices are allocated on the 
heap, but small graphics matrices are plain structs. `opCast(T)` 
should be enough.
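A hedged sketch of the `opCast(T)` bridge suggested here: a heap-backed general matrix converting to a plain fixed-size struct. All names (`Matrix`, `Mat2`) are illustrative, not an actual Phobos or library API:

```d
// Fixed-size, stack-allocated 2×2 matrix of the kind graphics code uses.
struct Mat2 { float[2][2] a; }

// Heap-backed general-purpose matrix, row-major.
struct Matrix
{
    float[] data;
    size_t rows, cols;

    // cast(Mat2) bridges from the general type to the plain struct.
    T opCast(T : Mat2)() const
    in { assert(rows == 2 && cols == 2); }
    body
    {
        Mat2 m;
        foreach (i; 0 .. 2)
            foreach (j; 0 .. 2)
                m.a[i][j] = data[i*cols + j];
        return m;
    }
}

unittest
{
    auto g = Matrix([1, 2, 3, 4], 2, 2);
    auto s = cast(Mat2) g;
    assert(s.a[0][1] == 2 && s.a[1][0] == 3);
}
```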


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 08:14:21 UTC, weaselcat wrote:
nobody uses general purpose linear matrix libraries for 
games/graphics for a reason,


The reason is that C++ didn't provide anything. As a result each 
framework provides its own and you get N different libraries 
that are incompatible.


There is no good reason for making small-matrix libraries 
incompatible with the rest of eco-system given the templating 
system you have in D. What you need is a library that supports 
multiple representations and can do the conversions.


Of course, you'll do better if you also have 
term-rewriting/AST-macros.


Re: DIP80: phobos additions

2015-06-14 Thread Ilya Yaroshenko via Digitalmars-d
On Sunday, 14 June 2015 at 10:43:24 UTC, Ola Fosheim Grøstad 
wrote:
I think there might be a disconnection in this thread. D only, 
or D frontend?


There are hardware vendor and commercial libraries that are 
heavily optimized for particular hardware configurations. There 
is no way a D-only solution can beat those. As an example Apple 
provides various implementations for their own machines, so an 
old program on a new machine can run faster than a static 
D-only library solution.


What D can provide is a unifying abstraction, but to get there 
one needs to analyze what exists. Like Apple's Accelerate 
framework:


https://developer.apple.com/library/prerelease/ios/documentation/Accelerate/Reference/AccelerateFWRef/index.html#//apple_ref/doc/uid/TP40009465

That goes beyond BLAS. We also need to look at vDSP etc. You'll 
find similar things for Microsoft/Intel/AMD/ARM etc…


+1


Re: DIP80: phobos additions

2015-06-14 Thread Ilya Yaroshenko via Digitalmars-d
On Sunday, 14 June 2015 at 09:25:25 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 14 June 2015 at 09:19:19 UTC, Ilya Yaroshenko wrote:
The reason is that general purpose matrices are allocated on the 
heap, but small graphics matrices are plain structs.


No, the reason is that LA-libraries are C-libraries that also 
deal with variable sized matrices.


A good generic API can support both. You cannot create a good 
generic API in C. You can in D.


We need D's own BLAS implementation to do it. Sigh, DBLAS will be 
the largest part of std.


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 13:48:23 UTC, Ilya Yaroshenko wrote:
Alignment, strides (windows on a stream - I understand it like 
Sliding Windows) are not a problem.


It isn't a problem if you use the best possible abstraction from 
the start. It is a problem if you don't focus on it from the 
start.


Convolutions, identity matrices, invertible matrices are stuff I 
don't want to see in Phobos. They are about "MathD", not about a 
(big) standard library.


I don't see how you can get good performance without special 
casing identity matrices, transposed matrices and so on. You 
surely need to support matrix inversion, Gauss-Jordan elimination 
(or the equivalent) etc?




Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 09:19:19 UTC, Ilya Yaroshenko wrote:
The reason is that general-purpose matrices are allocated on the 
heap, but small graphics matrices are plain structs.


No, the reason is that LA-libraries are C-libraries that also 
deal with variable sized matrices.


A good generic API can support both. You cannot create a good 
generic API in C. You can in D.


Re: DIP80: phobos additions

2015-06-14 Thread Ilya Yaroshenko via Digitalmars-d
On Sunday, 14 June 2015 at 10:15:08 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 14 June 2015 at 09:59:22 UTC, Ilya Yaroshenko wrote:

We need D's own BLAS implementation to do it.


Why can't you use version for those that want to use a BLAS 
library for the implementation?


Those who want replications of LAPACK/LINPACK APIs can use 
separate bindings? And those who want to use BLAS directly 
would not use phobos anyway, but a direct binding so they can 
switch implementation?


I think a good generic higher level linear algebra library for 
D should aim to be equally useful for 2D Graphics, 3D/4D GPU 
graphics, CAD solid modelling, robotics, 3D raytracing, higher 
dimensional fractals, physics sims, image processing, signal 
processing, scientific computing (which is pretty wide) and 
more.


The Phobos API should be user-level, not library-level like 
BLAS. IMO. You really want an API that looks like this in Phobos?


http://www.netlib.org/blas/

BLAS/LAPACK/LINPACK all originate in Fortran with a particular 
scientific tradition in mind, so I think one should rethink how 
D goes about this. Fortran has very primitive abstraction 
mechanisms. This stuff is stuck in the 80s…


I really don't understand what you mean by the word generic.

Do you want one matrix type that includes all cases?
I hope you do not.

If not, yes, it should be generic like the rest of Phobos. But we 
will have one module for 3D/4D geometry and 3D/4D matrix/vector 
multiplications, another module for general matrices 
(std.container.matrix), and another module with a generic BLAS 
(std.numeric.blas) for general-purpose matrices. After all of 
that we can think about script-like m0 = m1*v*m2 features.
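For illustration, the m0 = m1*v*m2 style can already be prototyped with plain operator overloading; a minimal sketch (the `Mat2` type is hypothetical, and v is simplified to a scalar here):

```d
struct Mat2
{
    double[2][2] a;

    // matrix * matrix
    Mat2 opBinary(string op : "*")(Mat2 rhs) const
    {
        Mat2 r;
        foreach (i; 0 .. 2)
            foreach (j; 0 .. 2)
            {
                r.a[i][j] = 0;
                foreach (k; 0 .. 2)
                    r.a[i][j] += a[i][k] * rhs.a[k][j];
            }
        return r;
    }

    // matrix * scalar
    Mat2 opBinary(string op : "*")(double s) const
    {
        Mat2 r;
        foreach (i; 0 .. 2)
            foreach (j; 0 .. 2)
                r.a[i][j] = a[i][j] * s;
        return r;
    }
}

void main()
{
    auto m1 = Mat2([[1.0, 0.0], [0.0, 1.0]]);
    auto m2 = Mat2([[2.0, 0.0], [0.0, 2.0]]);
    double v = 3.0;
    auto m0 = m1 * v * m2;   // scalar and matrix products chain naturally
    assert(m0.a[0][0] == 6 && m0.a[0][1] == 0);
}
```

A real vector operand would just be a third overload; the point is only that the expression syntax needs no new language features.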


I think LAPACK would not be implemented in Phobos, but we can use 
SciD instead.


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 09:59:22 UTC, Ilya Yaroshenko wrote:

We need D's own BLAS implementation to do it.


Why can't you use version for those that want to use a BLAS 
library for the implementation?


Those who want replications of LAPACK/LINPACK APIs can use 
separate bindings? And those who want to use BLAS directly would 
not use phobos anyway, but a direct binding so they can switch 
implementation?


I think a good generic higher level linear algebra library for D 
should aim to be equally useful for 2D Graphics, 3D/4D GPU 
graphics, CAD solid modelling, robotics, 3D raytracing, higher 
dimensional fractals, physics sims, image processing, signal 
processing, scientific computing (which is pretty wide) and more.


The Phobos API should be user-level, not library-level like BLAS. 
IMO. You really want an API that looks like this in Phobos?


http://www.netlib.org/blas/

BLAS/LAPACK/LINPACK all originate in Fortran with a particular 
scientific tradition in mind, so I think one should rethink how D 
goes about this. Fortran has very primitive abstraction 
mechanisms. This stuff is stuck in the 80s…


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 02:56:04 UTC, jmh530 wrote:
On Saturday, 13 June 2015 at 11:18:54 UTC, Ola Fosheim Grøstad 
wrote:


I think linear algebra should have the same syntax for small 
and large matrices and switch representation behind the scenes.


Switching representations behind the scenes? Sounds complicated.


You don't have much of a choice if you want it to perform. You 
have to take into consideration:


1. hardware factors such as SIMD and alignment

2. what is known at compile time and what is only known at runtime

3. common usage patterns (what elements are usually 0, 1 or a 
value)


4. when does it pay off to encode the matrix modifications and 
layout as meta information (like transpose and scalar 
multiplication or addition)


And sometimes you might want to compute the inverse matrix when 
doing the transforms, rather than as a separate step for 
performance reasons.
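As a minimal sketch of point 4, a transpose can be recorded as meta information in the type instead of copying data (all names below are hypothetical):

```d
// Lazy wrapper: records "transposed" in the type, no data is moved.
struct Transposed(M)
{
    M* m;
    auto opIndex(size_t i, size_t j) const { return (*m)[j, i]; }
}

struct Mat
{
    double[2][2] a;
    double opIndex(size_t i, size_t j) const { return a[i][j]; }
}

auto transposed(ref Mat m) { return Transposed!Mat(&m); }

void main()
{
    auto m = Mat([[1.0, 2.0], [3.0, 4.0]]);
    auto t = transposed(m);     // O(1): indices are swapped on access
    assert(t[0, 1] == 3.0);
}
```

A library can then special-case `Transposed!M` operands in its multiply routines and pick the right stride order without ever materializing the transpose.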


I would think that if you were designing it from the ground up, 
you would have one general matrix math library. Then a graphics 
library could be built on top of that functionality. That way, 
as improvements are made to the matrix math functionality, the 
graphics library would benefit too.


Yes, but nobody wants to use a matrix library that does not 
perform close to the hardware limitations, so the representation 
should be special cased to fit the hardware for common matrix 
layouts.


Re: DIP80: phobos additions

2015-06-14 Thread weaselcat via Digitalmars-d

On Saturday, 13 June 2015 at 10:35:55 UTC, Tofu Ninja wrote:

On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:
The tiny subset of numerical linear algebra that is relevant 
for graphics (mostly very basic operations, 2,3 or 4 
dimensions) is not at all representative of the whole. The 
algorithms are different and the APIs are often necessarily 
different.


Even just considering scale, no one sane calls in to BLAS to 
multiply a 3*3 matrix by a 3 element vector, simultaneously no 
one sane *doesn't* call in to BLAS or an equivalent to 
multiply two 500*500 matrices.


I think there is a conflict of interest with what people want. 
There seem to be people like me who only want or need simple 
matrices like glm to do basic geometric/graphics related stuff. 
Then there is the group of people who want large 500x500 
matrices to do weird crazy maths stuff. Maybe they should be 
kept separate? In which case then we are really talking about 
adding two different things. Maybe have a std.math.matrix and a 
std.blas?


+1

nobody uses general purpose linear matrix libraries for 
games/graphics for a reason, many game math libraries take 
shortcuts everywhere and are extensively optimized(e.g, for cache 
lines) for the general purpose vec3/mat4 types.


many optimisations that benefit massive matrices become 
performance detriments for tiny graphics-oriented matrices. This 
is just shoehorning, plain and simple.


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 15:15:38 UTC, Ilya Yaroshenko wrote:


A naive basic matrix library is simple to write, I don't need 
standard library support for that + I get it to work the way I 
want by using SIMD registers directly... = I probably would 
not use it if I could implement it in less than 10 hours.


A naive std.algorithm and std.range are easy to write too.


I wouldn't know. People have different needs. Builtin 
for-each-loops, threads and SIMD support are more important to me 
than iterators (ranges).


But the problem with linear algebra is that you might want to do 
SIMD optimized versions where you calculate 4 equations at the 
time, do reshuffling etc. So a library solution has to provide 
substantial benefits.
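The "4 equations at a time" idea maps directly onto D's core.simd vector types; a trivial sketch (guarded by the D_SIMD version, since not every target supports it):

```d
import core.simd;

void main()
{
    version (D_SIMD)
    {
        // One vector operation evaluates four lanes at once.
        float4 a = [1, 2, 3, 4];
        float4 b = [10, 20, 30, 40];
        float4 c = a + b;
        assert(c.array == [11.0f, 22, 33, 44]);
    }
}
```

The hard part a library has to add on top of this is exactly what the post describes: laying data out so four independent equations occupy the four lanes, plus the reshuffling between layouts.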





Re: DIP80: phobos additions

2015-06-14 Thread Ilya Yaroshenko via Digitalmars-d
On Sunday, 14 June 2015 at 18:05:33 UTC, Ola Fosheim Grøstad 
wrote:

On Sunday, 14 June 2015 at 15:15:38 UTC, Ilya Yaroshenko wrote:


A naive basic matrix library is simple to write, I don't need 
standard library support for that + I get it to work the way 
I want by using SIMD registers directly... = I probably 
would not use it if I could implement it in less than 10 
hours.


A naive std.algorithm and std.range are easy to write too.


I wouldn't know. People have different needs. Builtin 
for-each-loops, threads and SIMD support are more important to 
me than iterators (ranges).


But the problem with linear algebra is that you might want to 
do SIMD optimized versions where you calculate 4 equations at 
the time, do reshuffling etc. So a library solution has to 
provide substantial benefits.


Yes, but it would be hard to create a SIMD-optimised version.

What do you think about this chain of steps?

1. Create generalised (only a type template and maybe flags) BLAS 
algorithms (probably slow) with a CBLAS-like API.
2. Allow users to use existing CBLAS libraries inside the 
generalised BLAS.

3. Start to improve the generalised BLAS with SIMD instructions.
4. And then continue the discussion about the types of matrices 
we want...
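Steps 1 and 2 of that plan could look roughly like this: a templated gemm with a CBLAS-like row-major signature, plus a `version` branch where an external CBLAS binding could be forwarded to (the `UseCBLAS` flag and the binding are assumptions, not an existing API):

```d
import std.stdio;

version (UseCBLAS)
{
    // Hypothetical: an extern(C) cblas_dgemm binding would go here
    // and gemm!double would forward to it.
}

// Step 1: generalised (templated) gemm, row-major.
// C = alpha*A*B + beta*C, where A is m x k, B is k x n, C is m x n.
void gemm(T)(size_t m, size_t n, size_t k,
             T alpha, const(T)[] a, const(T)[] b,
             T beta, T[] c)
{
    foreach (i; 0 .. m)
        foreach (j; 0 .. n)
        {
            T acc = 0;
            foreach (p; 0 .. k)
                acc += a[i * k + p] * b[p * n + j];
            c[i * n + j] = alpha * acc + beta * c[i * n + j];
        }
}

void main()
{
    // 2x2 identity times [[1,2],[3,4]]
    double[] a = [1, 0, 0, 1];
    double[] b = [1, 2, 3, 4];
    double[] c = [0, 0, 0, 0];
    gemm!double(2, 2, 2, 1.0, a, b, 0.0, c);
    writeln(c); // [1, 2, 3, 4]
}
```

The naive triple loop is the "probably slow" placeholder; step 3 would then replace its inner kernel with blocked, SIMD-friendly code without changing the signature.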



Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 14:25:11 UTC, Ilya Yaroshenko wrote:

I am sorry for this trolling:
Lisp is the best abstraction, though.


Even if it were, it does not provide the meta info and alignment 
type constraints that make it possible to hardware/SIMD-optimize 
it behind the scenes.
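In D the alignment constraint, at least, can be stated directly in the type; a minimal sketch (the `Vec4` name is hypothetical):

```d
// Force 16-byte alignment so 128-bit SIMD loads/stores are legal.
align(16) struct Vec4
{
    float[4] v;
}

static assert(Vec4.alignof == 16);

void main()
{
    Vec4 a;
    assert(cast(size_t)&a % 16 == 0); // instances honour the alignment
}
```

A library that guarantees this in its public types can then use aligned SIMD moves unconditionally behind the scenes.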


For example, many programmers don't want to use Boost only 
because its abstractions make them crazy.


Yes, C++ templates are a hard nut to crack. If D had added 
excellent pattern matching to its metaprogramming repertoire, 
then I think this would be enough to put D in a different league.


Application programmers should not have to deal with lots of type 
parameters; they can use the simplified version (aliases). That's 
what I do in my C++ libs, using templated aliasing to make a 
complicated type composition easy to use while still getting the 
benefits of generic pattern matching and generic programming.
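The same aliasing approach translates directly to D; a small sketch (all type names hypothetical):

```d
// A fully generic type with many knobs...
struct Matrix(T, size_t Rows, size_t Cols, bool ColumnMajor = false)
{
    T[Rows * Cols] data;
}

// ...hidden behind friendly aliases for application programmers.
alias Mat4  = Matrix!(float, 4, 4);
alias Mat3d = Matrix!(double, 3, 3);

// Templated alias: one parameter exposed, the rest fixed.
alias SquareMat(T, size_t N) = Matrix!(T, N, N);

void main()
{
    Mat4 m;
    SquareMat!(double, 3) s;
    static assert(is(SquareMat!(double, 3) == Mat3d));
}
```

Library code pattern-matches on the full `Matrix!(T, R, C, CM)` shape; user code only ever spells `Mat4`.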



Convolutions, identity matrices, invertible matrices are stuff

For daily scientific purposes - yes.
For an R/Matlab-like mathematical library - yes.
For real-world applications - no. An engineer can achieve the 
best performance without special cases by lowering the 
abstraction level. Simplicity and transparency (how it works) 
are more important in this case.


Getting platform optimized versions of frequently used heavy 
operations is the primary reason for why I would use a builtin 
library over rolling my own. Especially if the compiler has 
builtin high-level optimizations for the algebra.


A naive basic matrix library is simple to write, I don't need 
standard library support for that + I get it to work the way I 
want by using SIMD registers directly... = I probably would not 
use it if I could implement it in less than 10 hours.


Re: DIP80: phobos additions

2015-06-14 Thread weaselcat via Digitalmars-d
On Sunday, 14 June 2015 at 14:46:36 UTC, Ola Fosheim Grøstad 
wrote:
Yes, C++ templates are a hard nut to crack. If D had added 
excellent pattern matching to its metaprogramming repertoire, 
then I think this would be enough to put D in a different league.




https://github.com/solodon4/Mach7


Re: DIP80: phobos additions

2015-06-14 Thread Ilya Yaroshenko via Digitalmars-d


A naive basic matrix library is simple to write, I don't need 
standard library support for that + I get it to work the way I 
want by using SIMD registers directly... = I probably would 
not use it if I could implement it in less than 10 hours.


A naive std.algorithm and std.range are easy to write too.


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 18:49:21 UTC, Ilya Yaroshenko wrote:

Yes, but it would be hard to create a SIMD-optimised version.


Then again clang is getting better at this stuff.


What do you think about this chain of steps?

1. Create generalised (only a type template and maybe flags) BLAS 
algorithms (probably slow) with a CBLAS-like API.
2. Allow users to use existing CBLAS libraries inside the 
generalised BLAS.

3. Start to improve the generalised BLAS with SIMD instructions.
4. And then continue the discussion about the types of matrices 
we want...


Hmm… I don't know. In general I think the best thing to do is to 
develop libraries with a project and then turn it into something 
more abstract.


If I had more time I think I would have made the assumption that 
we could make LDC produce whatever next version of clang can do 
with pragmas/GCC-extensions and used that assumption for building 
some prototypes. So I would:


1. Prototype typical constructs in C, compile them with the next 
version of llvm/clang (with e.g. 4x loop unrolling, trying 
different optimization/vectorizing options), then look at the 
output in LLVM IR and assembly mnemonic code.


2. Then write similar code with hardware optimized BLAS and 
benchmark where the overhead between pure C/LLVM and BLAS calls 
balance out to even.


Then you have a rough idea of what the limitations of the current 
infrastructure looks like, and can start modelling the template 
types in D?


I'm not sure that you should use SIMD directly, but align the 
memory for it. Like, on iOS you end up using LLVM subsets because 
of the new bitcode requirements. Ditto for PNACL.


Just a thought, but that's what I would I do.



Re: DIP80: phobos additions

2015-06-14 Thread anonymous via Digitalmars-d
1. Create generalised (only a type template and maybe flags) 
BLAS algorithms (probably slow) with a CBLAS-like API.
See [1] (the Matmul benchmark): Julia Native is probably backed 
by Intel MKL or OpenBLAS. The D version was optimized by Martin 
Nowak [2] and is still _much_ slower.


2. Allow users to use existing CBLAS libraries inside the 
generalised BLAS.
I think a good interface is more important than speed of default 
implementation (at least for e.g large matrix multiplication). 
Just use existing code for speed...

Goto's papers about his BLAS: [3][4]
Having something competitive in D would be great, but probably a 
lot of work. Without a good D interface, dstep + an 
openBLAS/Atlas header will not look that bad. Note I am not 
talking about small matrices/graphics.



3. Start to improve the generalised BLAS with SIMD instructions.
Nice, but not really important. A good interface to an existing 
high-quality BLAS seems more important to me than a fast D 
linear algebra implementation + a CBLAS-like interface.


4. And then continue the discussion about the types of matrices 
we want...



+1

2. Then write similar code with hardware optimized BLAS and 
benchmark where the overhead between pure C/LLVM and BLAS calls 
balance out to even.
maybe there are more important / beneficial things to work on - 
assuming the total time of contributors is fixed and used for 
other D stuff :)


[1] https://github.com/kostya/benchmarks
[2] https://github.com/kostya/benchmarks/pull/6
[3] http://www.cs.utexas.edu/users/flame/pubs/GotoTOMS2.pdf
[4] 
http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d

On Sunday, 14 June 2015 at 21:31:53 UTC, anonymous wrote:
2. Then write similar code with hardware optimized BLAS and 
benchmark where the overhead between pure C/LLVM and BLAS 
calls balance out to even.
maybe there are more important / beneficial things to work on - 
assuming the total time of contributors is fixed and used for 
other D stuff :)


Sure, but that is what I'd do if I had the time. Get a baseline 
for what kind of NxN sizes D can reasonably be expected to deal 
with in a naive brute force manner.


Then consider pushing anything beyond that over to something more 
specialized.


*shrugs*


Re: DIP80: phobos additions

2015-06-14 Thread via Digitalmars-d
Another thing worth noting is that I believe Intel has put some 
effort into next gen (?) LLVM/Clang for autovectorizing into 
AVX2. It might be worth looking into as it uses a mask that 
allows the CPU to skip computations that would lead to no change, 
but I think it is only available on last gen Intel CPUs.


Also worth keeping in mind is that future versions of LLVM will 
have to deal with GCC extensions and perhaps also Clang pragmas. 
So maybe take a look at:


http://clang.llvm.org/docs/LanguageExtensions.html#vectors-and-extended-vectors

and

http://clang.llvm.org/docs/LanguageExtensions.html#extensions-for-loop-hint-optimizations

?



Re: DIP80: phobos additions

2015-06-13 Thread ketmar via Digitalmars-d
On Sat, 13 Jun 2015 21:57:42 -0400, Steven Schveighoffer wrote:

 A rather cool usage of QR code I saw was a sticker on a device that was
 a link to the PDF of the manual.

it's k001, but i'll take a printed URL for it in any time. the old good 
URL that i can read with my eyes.



Re: DIP80: phobos additions

2015-06-13 Thread jmh530 via Digitalmars-d
On Saturday, 13 June 2015 at 11:18:54 UTC, Ola Fosheim Grøstad 
wrote:


I think linear algebra should have the same syntax for small 
and large matrices and switch representation behind the scenes.


Switching representations behind the scenes? Sounds complicated.

I would think that if you were designing it from the ground up, 
you would have one general matrix math library. Then a graphics 
library could be built on top of that functionality. That way, as 
improvements are made to the matrix math functionality, the 
graphics library would benefit too.


However, given that there already is a well developed math 
graphics library, I'm not sure what's optimal. I can see the 
argument for implementing gl3n in the standard library (as a 
specialized math graphics option) on its own if there is demand 
for it.


Re: DIP80: phobos additions

2015-06-13 Thread Steven Schveighoffer via Digitalmars-d

On 6/13/15 11:46 AM, Nick Sabalausky wrote:

On 06/08/2015 03:55 AM, ezneh wrote:


- Create / read QR codes, maybe ? It seems we see more and more QR Codes
here and there, so it could potentially be worth it


I see them everywhere, but does anyone ever actually use them? Usually
it's just an obvious link to some company's marketing/advertising. It's
basically just like the old CueCat, if anyone remembers it:
https://en.wikipedia.org/wiki/CueCat

Only time I've ever seen *anyone* actually using a QR code is when *I*
use a display QR link for this page FF plugin to send the webpage I'm
looking at to my phone.

Maybe I'm just not seeing it, but I suspect QR is more something 
that companies *want* people to care about, rather than something 
anyone actually uses.



A rather cool usage of QR code I saw was a sticker on a device that was 
a link to the PDF of the manual.


-Steve


Re: DIP80: phobos additions

2015-06-13 Thread Timon Gehr via Digitalmars-d

On 06/13/2015 12:35 PM, Tofu Ninja wrote:

On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:

The tiny subset of numerical linear algebra that is relevant for
graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at
all representative of the whole. The algorithms are different and the
APIs are often necessarily different.

Even just considering scale, no one sane calls in to BLAS to multiply
a 3*3 matrix by a 3 element vector, simultaneously no one sane
*doesn't* call in to BLAS or an equivalent to multiply two 500*500
matrices.


I think there is a conflict of interest with what people want. There
seem to be people like me who only want or need simple matrices like glm
to do basic geometric/graphics related stuff. Then there is the group of
people who want large 500x500 matrices to do weird crazy maths stuff.


(It's neither weird nor crazy.)


Maybe they should be kept separate?


I think there's no point to that. Just have dynamically sized and fixed 
sized versions. Why should they be incompatible? It's the same concept.
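One way to keep the two compatible, as the post suggests, is a sentinel "dynamic" dimension so one template covers both plain-struct small matrices and heap-backed large ones; a rough sketch (names hypothetical, in the spirit of Eigen's `Dynamic`):

```d
enum size_t dynamic = 0; // sentinel: size chosen at runtime

struct Matrix(T, size_t R = dynamic, size_t C = dynamic)
{
    static if (R == dynamic || C == dynamic)
    {
        T[] data;          // heap storage, runtime shape
        size_t rows, cols;
    }
    else
    {
        T[R * C] data;     // plain struct, no allocation
        enum rows = R, cols = C;
    }

    ref T opIndex(size_t i, size_t j) { return data[i * cols + j]; }
}

void main()
{
    Matrix!(float, 3, 3) small;          // fixed: a value type
    small[0, 0] = 1;

    Matrix!double big;                   // dynamic: heap-backed
    big.rows = 500; big.cols = 500;
    big.data = new double[](500 * 500);
    big[499, 499] = 42;

    assert(small[0, 0] == 1 && big[499, 499] == 42);
}
```

Algorithms written against `rows`, `cols` and `opIndex` then work on both, while the fixed case keeps its no-allocation struct layout.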




Re: DIP80: phobos additions

2015-06-13 Thread weaselcat via Digitalmars-d

On Saturday, 13 June 2015 at 16:53:22 UTC, Nick Sabalausky wrote:

On 06/07/2015 02:27 PM, Robert burner Schadek wrote:
Phobos is awesome, the libs of go, python and rust only have 
better marketing.
As discussed on dconf, phobos needs to become big and blow the 
rest out of the sky.

http://wiki.dlang.org/DIP80

lets get OT, please discuss


What are the problems with std.json?


slow


Re: DIP80: phobos additions

2015-06-13 Thread Nick Sabalausky via Digitalmars-d

On 06/08/2015 03:55 AM, ezneh wrote:


- Create / read QR codes, maybe ? It seems we see more and more QR Codes
here and there, so it could potentially be worth it


I see them everywhere, but does anyone ever actually use them? Usually 
it's just an obvious link to some company's marketing/advertising. It's 
basically just like the old CueCat, if anyone remembers it: 
https://en.wikipedia.org/wiki/CueCat


Only time I've ever seen *anyone* actually using a QR code is when *I* 
use a display QR link for this page FF plugin to send the webpage I'm 
looking at to my phone.


Maybe I'm just not seeing it, but I suspect QR is more something 
that companies *want* people to care about, rather than something 
anyone actually uses.




Re: DIP80: phobos additions

2015-06-13 Thread ketmar via Digitalmars-d
On Sat, 13 Jun 2015 11:46:41 -0400, Nick Sabalausky wrote:

 Maybe I'm just not seeing it, but I suspect QR is more something that
 companies *want* people to care about, rather than something anyone
 actually uses.

same for me.



Re: DIP80: phobos additions

2015-06-13 Thread Nick Sabalausky via Digitalmars-d

On 06/07/2015 02:27 PM, Robert burner Schadek wrote:

Phobos is awesome, the libs of go, python and rust only have better
marketing.
As discussed on dconf, phobos needs to become big and blow the rest out
of the sky.

http://wiki.dlang.org/DIP80

lets get OT, please discuss


What are the problems with std.json?


Re: DIP80: phobos additions

2015-06-13 Thread rsw0x via Digitalmars-d
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
Phobos is awesome, the libs of go, python and rust only have 
better marketing.
As discussed on dconf, phobos needs to become big and blow the 
rest out of the sky.


http://wiki.dlang.org/DIP80

lets get OT, please discuss


std.container.concurrent.*


Re: DIP80: phobos additions

2015-06-13 Thread John Colvin via Digitalmars-d

On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:

On Friday, 12 June 2015 at 17:10:08 UTC, jmh530 wrote:
On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole 
wrote:


Humm, work on getting gl3n into phobos or work on my ODBC 
driver manager. Tough choice.


I can only speak for myself. I'm sure there's a lot of value 
in solid ODBC support. I use SQL some, but I use matrix math 
more.


I'm not that familiar with gl3n, but it looks like it's meant 
for the math used in OpenGL. My knowledge of OpenGL is 
limited. I had some cursory interest in the developments of 
Vulkan earlier in March, but without much of a background in 
OpenGL I didn't follow everything they were talking about. I 
don't think many other languages include OpenGL support in 
their standard libraries (though I imagine game developers 
would welcome it).


Matrix math is matrix math; it being for OpenGL makes no real 
difference.


The tiny subset of numerical linear algebra that is relevant for 
graphics (mostly very basic operations, 2,3 or 4 dimensions) is 
not at all representative of the whole. The algorithms are 
different and the APIs are often necessarily different.


Even just considering scale, no one sane calls in to BLAS to 
multiply a 3*3 matrix by a 3 element vector, simultaneously no 
one sane *doesn't* call in to BLAS or an equivalent to multiply 
two 500*500 matrices.


Re: DIP80: phobos additions

2015-06-13 Thread via Digitalmars-d

On Saturday, 13 June 2015 at 11:05:19 UTC, John Colvin wrote:
Linear algebra for graphics is the specialised case, not the 
other way around. As a possible name for something like gl3n in 
phobos, I like std.math.geometry


A geometry library is different, it should be type safe when it 
comes to units, lengths, distances, areas...


I think linear algebra should have the same syntax for small and 
large matrices and switch representation behind the scenes.


The challenge is to figure out what kind of memory layouts you 
need to support in order to interact with existing 
frameworks/hardware with no conversion.


Re: DIP80: phobos additions

2015-06-13 Thread Rikki Cattermole via Digitalmars-d

On 13/06/2015 10:35 p.m., Tofu Ninja wrote:

On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:

The tiny subset of numerical linear algebra that is relevant for
graphics (mostly very basic operations, 2,3 or 4 dimensions) is not at
all representative of the whole. The algorithms are different and the
APIs are often necessarily different.

Even just considering scale, no one sane calls in to BLAS to multiply
a 3*3 matrix by a 3 element vector, simultaneously no one sane
*doesn't* call in to BLAS or an equivalent to multiply two 500*500
matrices.


I think there is a conflict of interest with what people want. There
seem to be people like me who only want or need simple matrices like glm
to do basic geometric/graphics related stuff. Then there is the group of
people who want large 500x500 matrices to do weird crazy maths stuff.
Maybe they should be kept separate? In which case then we are really
talking about adding two different things. Maybe have a std.math.matrix
and a std.blas?


IMO a simple matrix is fine for a standard library. A more 
complex, highly specialized math library - yeah, no. Not enough 
gain for such complex code.


Whereas matrix/vector support for e.g. OpenGL - now that will 
have high visibility to game devs.


Re: DIP80: phobos additions

2015-06-13 Thread Tofu Ninja via Digitalmars-d

On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:
The tiny subset of numerical linear algebra that is relevant 
for graphics (mostly very basic operations, 2,3 or 4 
dimensions) is not at all representative of the whole. The 
algorithms are different and the APIs are often necessarily 
different.


Even just considering scale, no one sane calls in to BLAS to 
multiply a 3*3 matrix by a 3 element vector, simultaneously no 
one sane *doesn't* call in to BLAS or an equivalent to multiply 
two 500*500 matrices.


I think there is a conflict of interest with what people want. 
There seem to be people like me who only want or need simple 
matrices like glm to do basic geometric/graphics related stuff. 
Then there is the group of people who want large 500x500 matrices 
to do weird crazy maths stuff. Maybe they should be kept 
separate? In which case then we are really talking about adding 
two different things. Maybe have a std.math.matrix and a std.blas?


Re: DIP80: phobos additions

2015-06-13 Thread John Colvin via Digitalmars-d

On Saturday, 13 June 2015 at 10:35:55 UTC, Tofu Ninja wrote:

On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:
The tiny subset of numerical linear algebra that is relevant 
for graphics (mostly very basic operations, 2,3 or 4 
dimensions) is not at all representative of the whole. The 
algorithms are different and the APIs are often necessarily 
different.


Even just considering scale, no one sane calls in to BLAS to 
multiply a 3*3 matrix by a 3 element vector, simultaneously no 
one sane *doesn't* call in to BLAS or an equivalent to 
multiply two 500*500 matrices.


I think there is a conflict of interest with what people want. 
There seem to be people like me who only want or need simple 
matrices like glm to do basic geometric/graphics related stuff. 
Then there is the group of people who want large 500x500 
matrices to do weird crazy maths stuff. Maybe they should be 
kept separate? In which case then we are really talking about 
adding two different things. Maybe have a std.math.matrix and a 
std.blas?


Yes, that's what I was trying to point out. Anyway, gl3n or 
similar would be great to have in phobos, I've used it quite a 
bit and think it's great, but it should be very clear that it's 
not a general purpose matrix/linear algebra toolkit. It's a 
specialised set of types and operations specifically for 
low-dimensional geometry, with an emphasis on common graphics 
idioms.


Re: DIP80: phobos additions

2015-06-13 Thread John Colvin via Digitalmars-d

On Saturday, 13 June 2015 at 10:37:39 UTC, Rikki Cattermole wrote:

On 13/06/2015 10:35 p.m., Tofu Ninja wrote:

On Saturday, 13 June 2015 at 08:45:20 UTC, John Colvin wrote:

[...]


I think there is a conflict of interest with what people want. 
There
seem to be people like me who only want or need simple 
matrices like glm
to do basic geometric/graphics related stuff. Then there is 
the group of
people who want large 500x500 matrices to do weird crazy maths 
stuff.
Maybe they should be kept separate? In which case then we are 
really
talking about adding two different things. Maybe have a 
std.math.matrix

and a std.blas?


IMO a simple matrix is fine for a standard library. A more 
complex, highly specialized math library - yeah, no. Not 
enough gain for such complex code.


Whereas matrix/vector support for e.g. OpenGL - now that will 
have high visibility to game devs.


Linear algebra for graphics is the specialised case, not the 
other way around. As a possible name for something like gl3n in 
phobos, I like std.math.geometry


Re: DIP80: phobos additions

2015-06-13 Thread Dennis Ritchie via Digitalmars-d

Good start:
http://code.dlang.org/packages/dip80-ndslice
https://github.com/9il/dip80-ndslice/blob/master/source/std/experimental/range/ndslice.d

I miss the function `sliced` in Phobos.
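For readers unfamiliar with it: `sliced` wraps a flat array in an n-dimensional view. A minimal 2D re-implementation to illustrate the idea (this is not the dip80-ndslice API, just a sketch):

```d
import std.array : array;
import std.range : iota;

// Minimal 2D view over a flat buffer, in the spirit of `sliced`.
struct Slice2D(T)
{
    T[] data;
    size_t rows, cols;
    ref T opIndex(size_t i, size_t j) { return data[i * cols + j]; }
}

auto sliced(T)(T[] data, size_t rows, size_t cols)
{
    assert(data.length == rows * cols);
    return Slice2D!T(data, rows, cols);
}

void main()
{
    auto s = iota(12).array.sliced(3, 4); // 3x4 view over [0 .. 12)
    assert(s[0, 0] == 0);
    assert(s[2, 3] == 11);
}
```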


Re: DIP80: phobos additions

2015-06-12 Thread Manu via Digitalmars-d
On 12 June 2015 at 15:22, Ilya Yaroshenko via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:

 On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d
 digitalmars-d@puremagic.com wrote:

 On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:


 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
 digitalmars-d@puremagic.com wrote:



 I believe that Phobos must support some common methods of linear
 algebra
 and general mathematics. I have no desire to join D with Fortran
 libraries
 :)




 D definitely needs BLAS API support for matrix multiplication. Best
 BLAS
 libraries are written in assembler like openBLAS. Otherwise D will have
 last
 position in corresponding math benchmarks.



 A complication for linear algebra (or other mathsy things in general)
 is the inability to detect and implement compound operations.
 We don't declare mathematical operators to be algebraic operations,
 which I think is a lost opportunity.
 If we defined the operations along with their properties
 (commutativity, transitivity, invertibility, etc), then the compiler
 could potentially do an algebraic simplification on expressions before
 performing codegen and optimisation.
 There are a lot of situations where the optimiser can't simplify
 expressions because it runs into an arbitrary function call, and I've
 never seen an optimiser that understands exp/log/roots, etc, to the
 point where it can reduce those expressions properly. To compete with
 maths benchmarks, we need some means to simplify expressions properly.



 Simplified expressions would [NOT] help because
 1. On the matrix (high) level, optimisation can be done very well by the
 programmer (algorithms with matrices, in terms of the count of matrix
 multiplications, are small).


 Perhaps you've never worked with incompetent programmers (in my
 experience, 50% of the professional workforce).
 Programmers, on average, don't know maths. They literally have no idea
 how to simplify an algebraic expression.
 I think there are about 3-4 (being generous!) people in my office (of
 30-40) that could do it properly, and without spending heaps of time
 on it.

 2. Low level optimisation requires specific CPU/Cache optimisation.
 Modern
 implementations are optimised for all cache levels. See work by KAZUSHIGE
 GOTO
 http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf


 Low-level optimisation is a sliding scale, not a binary position.
 Reaching 'optimal' state definitely requires careful consideration of
 all the details you refer to, but there are a lot of improvements that
 can be gained from quickly written code without full low-level
 optimisation. A lot of basic low-level optimisations (like just using
 appropriate opcodes, or eliding redundant operations; ie, squares
 followed by sqrt) can't be applied without first simplifying
 expressions.


 OK, generally you are talking about something we can name MathD. I
 understand the reasons. However I am strictly against algebraic operations
 (or eliding redundant operations for floating points) for basic routines in
 system programming language.

That's nice... I'm all for it :)

Perhaps if there were some distinction between a base type and an
algebraic type?
I wonder if it would be possible to express an algebraic expression
like a lazy range, and then capture the expression at the end and
simplify it with some fancy template...
I'd call that an abomination, but it might be possible. Hopefully
nobody in their right mind would ever use that ;)
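[A minimal sketch of that lazy-expression idea, for the curious: the operators build a tree instead of computing, and evaluation walks it later. The `Var`/`Expr`/`eval` names are purely illustrative, not a proposal — a smarter `eval` could rewrite patterns like sqrt(x)*sqrt(x) before doing any arithmetic.]

```d
// Hypothetical expression node: records an operation instead of executing it.
struct Expr(string op, L, R)
{
    L lhs;
    R rhs;
    auto opBinary(string newop, T)(T other)
    {
        return Expr!(newop, typeof(this), T)(this, other);
    }
}

struct Var
{
    double value;
    auto opBinary(string op, T)(T other)
    {
        return Expr!(op, Var, T)(this, other);
    }
}

// eval walks the captured tree; an algebraic simplifier would hook in here.
double eval(Var v) { return v.value; }
double eval(string op, L, R)(Expr!(op, L, R) e)
{
    static if (op == "+") return eval(e.lhs) + eval(e.rhs);
    else static if (op == "*") return eval(e.lhs) * eval(e.rhs);
    else static assert(0, "unhandled op " ~ op);
}

void main()
{
    auto x = Var(3), y = Var(4);
    auto tree = x * x + y * y;   // builds an Expr tree, computes nothing yet
    assert(eval(tree) == 25);
}
```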

 Even float/double internal conversion to real
 in math expressions is a huge headache when math algorithms are implemented
 (see first two comments at
 https://github.com/D-Programming-Language/phobos/pull/2991 ). In system PL
 sqrt(x)^2 should compile as is.

Yeah... unless you -fast-math, in which case I want the compiler to do
whatever it can.
Incidentally, I don't think I've ever run into a case in practice
where precision was lost by doing _less_ operations.

 Such optimisations can be implemented over the basic routines (pow, sqrt,
 gemv, gemm, etc). We can use approach similar to D compile time regexp.

Not really. The main trouble is that many of these patterns only
emerge when inlining is performed.
It would be particularly awkward to express such expressions in some
DSL that spanned across conventional API boundaries.


Re: DIP80: phobos additions

2015-06-12 Thread jmh530 via Digitalmars-d

On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:

Matrix math is matrix math, it being for ogl makes no real 
difference.


I think it’s a little more complicated than that. BLAS and LAPACK 
(or variants on them) are low-level matrix math libraries that 
many higher-level libraries call. Few people actually use BLAS 
directly. So, clearly, not every matrix math library is the same. 
What differentiates BLAS from Armadillo is that you can be far 
more productive in Armadillo because the syntax is friendly (and 
quite similar to Matlab and others).


There’s a reason why people use glm in C++. It’s probably the 
most productive way to do matrix math with OpenGL. However, it 
may not be the most productive way to do more general matrix 
math. That’s why I hear about people using Armadillo, Eigen, and 
Blaze, but I’ve never heard anyone recommend using glm. Syntax 
matters.


Re: DIP80: phobos additions

2015-06-12 Thread Rikki Cattermole via Digitalmars-d

On 13/06/2015 7:45 a.m., jmh530 wrote:

On Friday, 12 June 2015 at 17:56:53 UTC, Tofu Ninja wrote:


Matrix math is matrix math, it being for ogl makes no real difference.


I think it’s a little more complicated than that. BLAS and LAPACK (or
variants on them) are low-level matrix math libraries that many
higher-level libraries call. Few people actually use BLAS directly. So,
clearly, not every matrix math library is the same. What differentiates
BLAS from Armadillo is that you can be far more productive in Armadillo
because the syntax is friendly (and quite similar to Matlab and others).

There’s a reason why people use glm in C++. It’s probably the most
productive way to do matrix math with OpenGL. However, it may not be the
most productive way to do more general matrix math. That’s why I hear
about people using Armadillo, Eigen, and Blaze, but I’ve never heard
anyone recommend using glm. Syntax matters.


The reason I am considering gl3n is because it is old solid code. It's 
proven itself. It'll make the review process relatively easy.

But hey, if we want to do it right, we'll never get any implementation in.


Re: DIP80: phobos additions

2015-06-12 Thread Ilya Yaroshenko via Digitalmars-d

On Friday, 12 June 2015 at 11:00:20 UTC, Manu wrote:


Low-level optimisation is a sliding scale, not a binary position.
Reaching 'optimal' state definitely requires careful consideration of
all the details you refer to, but there are a lot of improvements that
can be gained from quickly written code without full low-level
optimisation. A lot of basic low-level optimisations (like just using
appropriate opcodes, or eliding redundant operations; ie, squares
followed by sqrt) can't be applied without first simplifying
expressions.



OK, generally you are talking about something we can name MathD. I
understand the reasons. However I am strictly against algebraic
operations (or eliding redundant operations for floating points) for
basic routines in system programming language.


That's nice... I'm all for it :)

Perhaps if there were some distinction between a base type and an
algebraic type?
I wonder if it would be possible to express an algebraic expression
like a lazy range, and then capture the expression at the end and
simplify it with some fancy template...
I'd call that an abomination, but it might be possible. Hopefully
nobody in their right mind would ever use that ;)


... for example we can optimise matrix chain multiplication 
https://en.wikipedia.org/wiki/Matrix_chain_multiplication


//calls `this(MatrixExp!double chain)`
Matrix!double = m1*m2*m3*m4;
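[For reference, the textbook dynamic-programming solution behind that wiki link fits in a few lines of D. The `chainOrder` name and plain-array interface are just for illustration, not any proposed API:]

```d
// Minimum scalar multiplications needed to evaluate a chain of matrices
// with dimensions dims[0] x dims[1], dims[1] x dims[2], ...
size_t chainOrder(const size_t[] dims)
{
    immutable n = dims.length - 1;        // number of matrices in the chain
    auto cost = new size_t[][](n, n);     // cost[i][j]: best cost for chain i..j
    foreach (len; 2 .. n + 1)             // chain length being solved
        foreach (i; 0 .. n - len + 1)
        {
            immutable j = i + len - 1;
            cost[i][j] = size_t.max;
            foreach (k; i .. j)           // split point: (i..k)(k+1..j)
            {
                immutable q = cost[i][k] + cost[k + 1][j]
                    + dims[i] * dims[k + 1] * dims[j + 1];
                if (q < cost[i][j]) cost[i][j] = q;
            }
        }
    return cost[0][n - 1];
}

void main()
{
    // (10x30)(30x5)(5x60): ((AB)C) costs 4500, (A(BC)) costs 27000
    assert(chainOrder([10, 30, 5, 60]) == 4500);
}
```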



Even float/double internal conversion to real in math expressions is a
huge headache when math algorithms are implemented (see first two
comments at
https://github.com/D-Programming-Language/phobos/pull/2991 ). In
system PL sqrt(x)^2 should compile as is.


Yeah... unless you -fast-math, in which case I want the compiler to do
whatever it can.
Incidentally, I don't think I've ever run into a case in practice
where precision was lost by doing _less_ operations.


Mathematical functions require a concrete order of operations
http://www.netlib.org/cephes/  (std.mathspecial and a bit of 
std.math/std.numeric are based on cephes).


Such optimisations can be implemented over the basic routines (pow,
sqrt, gemv, gemm, etc). We can use approach similar to D compile time
regexp.


Not really. The main trouble is that many of these patterns only
emerge when inlining is performed.
It would be particularly awkward to express such expressions in some
DSL that spanned across conventional API boundaries.


If I am not wrong in both LLVM and GCC `fast-math` attribute can 
be defined for functions. This feature can be implemented in D.


Re: DIP80: phobos additions

2015-06-12 Thread Wyatt via Digitalmars-d

On Friday, 12 June 2015 at 03:18:31 UTC, Tofu Ninja wrote:


What would the new order of operations be for these new 
operators?


Hadn't honestly thought that far.  Like I said, it was more of a 
nascent idea than a coherent proposal (probably with a DIP and 
many more words).  It's an interesting question, though.


I think the approach taken by F# and OCaml may hit at the right 
notes, though: precedence and fixity are determined by the base 
operator.  In my head, extra operators would be represented in 
code by some annotation or affix on a built-in operator... say, 
braces around it or something (e.g. [*] or {+}, though this is 
just an example that sets a baseline for visibility).


-Wyatt



Re: DIP80: phobos additions

2015-06-12 Thread jmh530 via Digitalmars-d

On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole wrote:

Humm, work on getting gl3n into phobos or work on my ODBC 
driver manager. Tough choice.


I can only speak for myself. I'm sure there's a lot of value in 
solid ODBC support. I use SQL some, but I use matrix math more.


I'm not that familiar with gl3n, but it looks like it's meant for 
the math used in OpenGL. My knowledge of OpenGL is limited. I had 
some cursory interest in the developments of Vulkan earlier in 
March, but without much of a background in OpenGL I didn't follow 
everything they were talking about. I don't think many other 
languages include OpenGL support in their standard libraries 
(though I imagine game developers would welcome it).


Re: DIP80: phobos additions

2015-06-12 Thread Tofu Ninja via Digitalmars-d

On Friday, 12 June 2015 at 17:10:08 UTC, jmh530 wrote:

On Friday, 12 June 2015 at 03:35:31 UTC, Rikki Cattermole wrote:

Humm, work on getting gl3n into phobos or work on my ODBC 
driver manager. Tough choice.


I can only speak for myself. I'm sure there's a lot of value in 
solid ODBC support. I use SQL some, but I use matrix math more.


I'm not that familiar with gl3n, but it looks like it's meant 
for the math used in OpenGL. My knowledge of OpenGL is limited. 
I had some cursory interest in the developments of Vulkan 
earlier in March, but without much of a background in OpenGL I 
didn't follow everything they were talking about. I don't think 
many other languages include OpenGL support in their standard 
libraries (though I imagine game developers would welcome it).


Matrix math is matrix math, it being for ogl makes no real 
difference.


Also if you are waiting to learn vulkan but have not done any 
other graphics, don't, learn ogl now, vulkan will be harder.


Re: DIP80: phobos additions

2015-06-11 Thread Andrei Alexandrescu via Digitalmars-d

On 6/11/15 5:17 AM, Steven Schveighoffer wrote:

On 6/11/15 4:15 AM, Marc Schütz schue...@gmx.net wrote:

On Wednesday, 10 June 2015 at 20:31:52 UTC, Steven Schveighoffer wrote:

OK, thanks for the explanation. I'd do it the other way around:
Flag!"threadlocal", since we should be safe by default.


`RefCounted!T` is also thread-local by default, only
`shared(RefCounted!T)` needs to use atomic operations.


I may have misunderstood Andrei. We can't just use a flag to fix this
problem, all allocations are in danger of races (even thread-local
ones). But maybe he meant *after* we fix the GC we could add a flag? I'm
not sure.


Yes, we definitely need to fix the GC. -- Andrei


Re: DIP80: phobos additions

2015-06-11 Thread Steven Schveighoffer via Digitalmars-d
On 6/11/15 4:15 AM, Marc Schütz schue...@gmx.net wrote:

On Wednesday, 10 June 2015 at 20:31:52 UTC, Steven Schveighoffer wrote:

OK, thanks for the explanation. I'd do it the other way around:
Flag!"threadlocal", since we should be safe by default.


`RefCounted!T` is also thread-local by default, only
`shared(RefCounted!T)` needs to use atomic operations.


I may have misunderstood Andrei. We can't just use a flag to fix this 
problem, all allocations are in danger of races (even thread-local 
ones). But maybe he meant *after* we fix the GC we could add a flag? I'm 
not sure.


A flag at this point would be a band-aid fix, allowing one to optimize 
if one knows that his code never puts RefCounted instances on the heap. 
Hard to prove...


-Steve


Re: DIP80: phobos additions

2015-06-11 Thread jmh530 via Digitalmars-d

On Tuesday, 9 June 2015 at 03:26:25 UTC, Ilya Yaroshenko wrote:


There are
https://github.com/9il/simple_matrix and
https://github.com/9il/cblas .
I will try to rework them for Phobos.

Any ideas and suggestions?



A well-supported matrix math library would definitely lead to me 
using D more. I would definitely applaud any work being done on 
this subject, but I still feel there are some enhancements (most 
seemingly minor) that would really make a matrix math library 
easy/fun to use.


Most of what I discuss below is just syntactical sugar for some 
stuff that could be accomplished with loops or std.algorithm, but 
having it built-in would make practical use of a matrix math 
library much easier. I think Armadillo implements some of these 
as member functions, whereas other languages like R and Matlab 
have them more built-in.


Disclaimer: I don't consider myself a D expert, so I could be 
horribly wrong on some of this stuff.


1) There is no support for assignment to arrays based on the 
values of another array.

int[] A = [-1, 1, 5];
int[] B = [1, 2];
int[] C = A[B];

You would have to use int[] C = A[1..3];. In this simple example, 
it’s not really a big deal, but if I have a function that returns 
B, then I can’t just throw B in there. I would have to loop 
through B and assign it to C. So the type of assignment is 
possible, but if you’re frequently doing this type of array 
manipulation, then the number of loops you need starts increasing.


2) Along the same lines, there is no support for replacing the B 
above with an array of bools like

bool[] B = [false, true, true];
or
auto B = A.map!(a => a > 0);

Again, it is doable with a loop, but this form of logical 
indexing is a pretty common idiom for people who use Matlab or R 
quite a bit.


3) In addition to being able to index by a range of values or 
bools, you would want to be able to make assignments based on 
this. So something like

A[B] = c;

This is a very common operation in R or Matlab.

4) Along the lines of #2, as an alternative to map, there is no 
support for array comparison operators. Something like

int[3] B;
B[] = A[] + 5;

works, but

bool[3] B;
B[] = A[] > 0;

doesn’t (I’m also not sure why I can’t just write auto B[] = A[] 
+ 5;, but that’s neither here nor there). Moreover, it seems like 
only the mathematical operators work in this way. Mathematical 
functions from std.math, like exp, don’t seem to work. You have 
to use map (or a loop) with exp to get the result. I don’t have 
an issue with map, per se, but it seems inconsistent when some 
things work but not others.


5) You can only assign scalars to slices of arrays. There doesn’t 
seem to be an ability to assign an array to a slice. For 
instance, in #1, I couldn’t write A[0..1] = B; or A[0, 1] = B; 
instead of what I had written for C.


6) std.range and std.algorithm seem to have much better support 
for one dimensional containers than if you want to treat a 
container as two-dimensional. If you have a two-dimensional array 
and want to use map on every element, then there’s no issue. 
However, if you want to apply a function to each column or row, 
then you’d have to use a for loop (not even foreach).


This seems to be a more difficult problem to solve than the 
others. I’m not sure what the best approach is, but it makes 
sense to look at other languages/libraries. In R, you have apply, 
which can operate on any dimensional array.  Matlab has arrayfun. 
Numpy has apply_along_axis. Armadillo has .each_col and .each_row 
(one other thing about Armadillo is that you can switch between 
what underlying matrix math library is being used, like OpenBlas 
vs. Intel MKL).
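[Several of the points above — especially #1 through #3 — do have workable, if wordier, spellings in today's Phobos. A sketch using `std.range.indexed` and friends:]

```d
import std.algorithm : filter, map;
import std.array : array;
import std.range : indexed, zip;

void main()
{
    int[] A = [-1, 1, 5];
    size_t[] B = [1, 2];

    // #1: index by an array of indices (the A[B] of R/Matlab)
    auto C = A.indexed(B).array;
    assert(C == [1, 5]);

    // #2: logical indexing via a bool mask
    bool[] mask = [false, true, true];
    auto sel = zip(A, mask).filter!(t => t[1]).map!(t => t[0]).array;
    assert(sel == [1, 5]);

    // #3: masked assignment, A[mask] = c
    foreach (i, keep; mask)
        if (keep) A[i] = 0;
    assert(A == [-1, 0, 0]);
}
```

None of this is as terse as the R/Matlab idiom, which is the point being made above, but it is loop-free and works today.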


Re: DIP80: phobos additions

2015-06-11 Thread Wyatt via Digitalmars-d

On Thursday, 11 June 2015 at 21:30:22 UTC, jmh530 wrote:


Most of what I discuss below is just syntactical sugar for some 
stuff that could be accomplished with loops or std.algorithm,


Your post reminds me of two things I've considered attempting in 
the past:
1) a set of operators that have no meaning unless an overload is 
specifically provided (for dot product, dyadic transpose, etc.) 
and
2) a library implementing features of array-oriented languages to 
the extent it's possible (APL functions, rank awareness, trivial 
reshaping, aggregate lifting, et al).


Syntax sugar can be important.

-Wyatt


Re: DIP80: phobos additions

2015-06-11 Thread Ilya Yaroshenko via Digitalmars-d

On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:

On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d
digitalmars-d@puremagic.com wrote:

On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:


On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
digitalmars-d@puremagic.com wrote:



I believe that Phobos must support some common methods of 
linear algebra
and general mathematics. I have no desire to join D with 
Fortran

libraries
:)




D definitely needs BLAS API support for matrix 
multiplication. Best BLAS
libraries are written in assembler like openBLAS. Otherwise 
D will have

last
position in corresponding math benchmarks.



A complication for linear algebra (or other mathsy things in 
general)

is the inability to detect and implement compound operations.
We don't declare mathematical operators to be algebraic 
operations,

which I think is a lost opportunity.
If we defined the properties along with their properties
(commutativity, transitivity, invertibility, etc), then the 
compiler
could potentially do an algebraic simplification on 
expressions before

performing codegen and optimisation.
There are a lot of situations where the optimiser can't 
simplify
expressions because it runs into an arbitrary function call, 
and I've
never seen an optimiser that understands exp/log/roots, etc, 
to the
point where it can reduce those expressions properly. To 
compete with
maths benchmarks, we need some means to simplify expressions 
properly.



 Simplified expressions would [NOT] help because
 1. On matrix (high) level optimisation can be done very well by
 programmer (algorithms with matrices in terms of count of matrix
 multiplications are small).


 Perhaps you've never worked with incompetent programmers (in my
 experience, 50% of the professional workforce).
 Programmers, on average, don't know maths. They literally have no idea
 how to simplify an algebraic expression.
 I think there are about 3-4 (being generous!) people in my office (of
 30-40) that could do it properly, and without spending heaps of time
 on it.

 2. Low level optimisation requires specific CPU/Cache optimisation.
 Modern implementations are optimised for all cache levels. See work by
 KAZUSHIGE GOTO
 http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf


 Low-level optimisation is a sliding scale, not a binary position.
 Reaching 'optimal' state definitely requires careful consideration of
 all the details you refer to, but there are a lot of improvements that
 can be gained from quickly written code without full low-level
 optimisation. A lot of basic low-level optimisations (like just using
 appropriate opcodes, or eliding redundant operations; ie, squares
 followed by sqrt) can't be applied without first simplifying
 expressions.


OK, generally you are talking about something we can name MathD. 
I understand the reasons. However I am strictly against algebraic 
operations (or eliding redundant operations for floating points) 
for basic routines in system programming language. Even 
float/double internal conversion to real in math expressions is a 
huge headache when math algorithms are implemented (see first two 
comments at 
https://github.com/D-Programming-Language/phobos/pull/2991 ). In 
system PL sqrt(x)^2 should compile as is.


Such optimisations can be implemented over the basic routines 
(pow, sqrt, gemv, gemm, etc). We can use approach similar to D 
compile time regexp.


Best,
Ilya


Re: DIP80: phobos additions

2015-06-11 Thread Manu via Digitalmars-d
On 10 June 2015 at 02:40, Ilya Yaroshenko via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:

 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
 digitalmars-d@puremagic.com wrote:


 I believe that Phobos must support some common methods of linear algebra
 and general mathematics. I have no desire to join D with Fortran
 libraries
 :)



 D definitely needs BLAS API support for matrix multiplication. Best BLAS
 libraries are written in assembler like openBLAS. Otherwise D will have
 last
 position in corresponding math benchmarks.


 A complication for linear algebra (or other mathsy things in general)
 is the inability to detect and implement compound operations.
 We don't declare mathematical operators to be algebraic operations,
 which I think is a lost opportunity.
 If we defined the properties along with their properties
 (commutativity, transitivity, invertibility, etc), then the compiler
 could potentially do an algebraic simplification on expressions before
 performing codegen and optimisation.
 There are a lot of situations where the optimiser can't simplify
 expressions because it runs into an arbitrary function call, and I've
 never seen an optimiser that understands exp/log/roots, etc, to the
 point where it can reduce those expressions properly. To compete with
 maths benchmarks, we need some means to simplify expressions properly.


 Simplified expressions would [NOT] help because
 1. On matrix (high) level optimisation can be done very well by programmer
 (algorithms with matrices in terms of count of matrix multiplications are
 small).

Perhaps you've never worked with incompetent programmers (in my
experience, 50% of the professional workforce).
Programmers, on average, don't know maths. They literally have no idea
how to simplify an algebraic expression.
I think there are about 3-4 (being generous!) people in my office (of
30-40) that could do it properly, and without spending heaps of time
on it.

 2. Low level optimisation requires specific CPU/Cache optimisation. Modern
 implementations are optimised for all cache levels. See work by KAZUSHIGE
 GOTO
 http://www.cs.utexas.edu/users/pingali/CS378/2008sp/papers/gotoPaper.pdf

Low-level optimisation is a sliding scale, not a binary position.
Reaching 'optimal' state definitely requires careful consideration of
all the details you refer to, but there are a lot of improvements that
can be gained from quickly written code without full low-level
optimisation. A lot of basic low-level optimisations (like just using
appropriate opcodes, or eliding redundant operations; ie, squares
followed by sqrt) can't be applied without first simplifying
expressions.


Re: DIP80: phobos additions

2015-06-11 Thread Dennis Ritchie via Digitalmars-d

On Friday, 12 June 2015 at 00:51:04 UTC, Manu wrote:

Perhaps you've never worked with incompetent programmers (in my
experience, 50% of the professional workforce).
Programmers, on average, don't know maths. They literally have no idea
how to simplify an algebraic expression.
I think there are about 3-4 (being generous!) people in my office (of
30-40) that could do it properly, and without spending heaps of time
on it.


But you don't think you need to look up to programmers who are 
not able to quickly simplify an algebraic expression? :)


For example, I'm a little addicted to sports programming. And I 
could really use matrix and other math in the standard library.


Re: DIP80: phobos additions

2015-06-11 Thread Tofu Ninja via Digitalmars-d

On Friday, 12 June 2015 at 01:55:15 UTC, Wyatt wrote:
From the outset, my thought was to strictly define the set of 
(eight or so?) symbols for this.  If memory serves, it was 
right around the time Walter's rejected wholesale user-defined 
operators because of exactly the problem you mention. 
(Compounded by Unicode-- what the hell is 2  8 supposed to 
be!?)  I strongly suspect you don't need many simultaneous 
extra operators on a type to cover most cases.


-Wyatt


What would the new order of operations be for these new operators?


Re: DIP80: phobos additions

2015-06-11 Thread Rikki Cattermole via Digitalmars-d

On 12/06/2015 9:30 a.m., jmh530 wrote:

On Tuesday, 9 June 2015 at 03:26:25 UTC, Ilya Yaroshenko wrote:


There are
https://github.com/9il/simple_matrix and
https://github.com/9il/cblas .
I will try to rework them for Phobos.

Any ideas and suggestions?



A well-supported matrix math library would definitely lead to me using D
more. I would definitely applaud any work being done on this subject,
but I still feel there are some enhancements (most seemingly minor) that
would really make a matrix math library easy/fun to use.

Most of what I discuss below is just syntactical sugar for some stuff
that could be accomplished with loops or std.algorithm, but having it
built-in would make practical use of a matrix math library much easier.
I think Armadillo implements some of these as member functions, whereas
other languages like R and Matlab have them more built-in.

Disclaimer: I don't consider myself a D expert, so I could be horribly
wrong on some of this stuff.

1) There is no support for assignment to arrays based on the values of
another array.
int[] A = [-1, 1, 5];
int[] B = [1, 2];
int[] C = A[B];

You would have to use int[] C = A[1..3];. In this simple example, it’s
not really a big deal, but if I have a function that returns B, then I
can’t just throw B in there. I would have to loop through B and assign
it to C. So the type of assignment is possible, but if you’re frequently
doing this type of array manipulation, then the number of loops you need
starts increasing.

2) Along the same lines, there is no support for replacing the B above
with an array of bools like
bool[] B = [false, true, true];
or
auto B = A.map!(a => a > 0);

Again, it is doable with a loop, but this form of logical indexing is a
pretty common idiom for people who use Matlab or R quite a bit.

3) In addition to being able to index by a range of values or bools, you
would want to be able to make assignments based on this. So something like
A[B] = c;

This is a very common operation in R or Matlab.

4) Along the lines of #2, as an alternative to map, there is no support
for array comparison operators. Something like
int[3] B;
B[] = A[] + 5;

works, but

bool[3] B;
B[] = A[] > 0;

doesn’t (I’m also not sure why I can’t just write auto B[] = A[] + 5;,
but that’s neither here nor there). Moreover, it seems like only the
mathematical operators work in this way. Mathematical functions from
std.math, like exp, don’t seem to work. You have to use map (or a loop)
with exp to get the result. I don’t have an issue with map, per se, but
it seems inconsistent when some things work but not others.

5) You can only assign scalars to slices of arrays. There doesn’t seem
to be an ability to assign an array to a slice. For instance, in #1, I
couldn’t write A[0..1] = B; or A[0, 1] = B; instead of what I had
written for C.

6) std.range and std.algorithm seem to have much better support for one
dimensional containers than if you want to treat a container as
two-dimensional. If you have a two-dimensional array and want to use map
on every element, then there’s no issue. However, if you want to apply a
function to each column or row, then you’d have to use a for loop (not
even foreach).

This seems to be a more difficult problem to solve than the others. I’m
not sure what the best approach is, but it makes sense to look at other
languages/libraries. In R, you have apply, which can operate on any
dimensional array.  Matlab has arrayfun. Numpy has apply_along_axis.
Armadillo has .each_col and .each_row (one other thing about Armadillo
is that you can switch between what underlying matrix math library is
being used, like OpenBlas vs. Intel MKL).



Humm, work on getting gl3n into phobos or work on my ODBC driver 
manager. Tough choice.


Re: DIP80: phobos additions

2015-06-11 Thread jmh530 via Digitalmars-d

On Thursday, 11 June 2015 at 22:36:28 UTC, Wyatt wrote:

1) a set of operators that have no meaning unless an overload 
is specifically provided (for dot product, dyadic transpose, 
etc.) and



I see your point, but I think it might be a bit risky if you 
allow too much freedom for overloading operators. For instance, 
what if two people implement separate packages for matrix 
multiplication, one adopts the syntax of R (%*%) and one adopts 
the new Python syntax (@). It may lead to some confusion.


Re: DIP80: phobos additions

2015-06-11 Thread Manu via Digitalmars-d
On 10 June 2015 at 03:04, John Colvin via Digitalmars-d
digitalmars-d@puremagic.com wrote:
 On Tuesday, 9 June 2015 at 16:45:33 UTC, Manu wrote:

 On 10 June 2015 at 02:32, John Colvin via Digitalmars-d
 digitalmars-d@puremagic.com wrote:

 On Tuesday, 9 June 2015 at 16:18:06 UTC, Manu wrote:


 On 10 June 2015 at 01:26, Ilya Yaroshenko via Digitalmars-d
 digitalmars-d@puremagic.com wrote:

 [...]



 A complication for linear algebra (or other mathsy things in general)
 is the inability to detect and implement compound operations.
 We don't declare mathematical operators to be algebraic operations,
 which I think is a lost opportunity.
 If we defined the properties along with their properties
 (commutativity, transitivity, invertibility, etc), then the compiler
 could potentially do an algebraic simplification on expressions before
 performing codegen and optimisation.
 There are a lot of situations where the optimiser can't simplify
 expressions because it runs into an arbitrary function call, and I've
 never seen an optimiser that understands exp/log/roots, etc, to the
 point where it can reduce those expressions properly. To compete with
 maths benchmarks, we need some means to simplify expressions properly.



 Optimising floating point is a massive pain because of precision concerns
 and IEEE-754 conformance. Just because something is analytically the same
 doesn't mean you want the optimiser to go ahead and make the switch for you.


 We have flags to control this sort of thing (fast-math, strict ieee, etc).
 I will worry about my precision, I just want the optimiser to do its
 job and do the very best it possibly can. In the case of linear
 algebra, the optimiser generally fails and I must manually simplify
 expressions as much as possible.


 If the compiler is free to rewrite by analytical rules then I will worry
 about my precision is equivalent to either I don't care about my
 precision or I have checked the codegen. A simple rearrangement of an
 expression can easily turn a perfectly good result in to complete garbage.
 It would be great if compilers were even better at fast-math mode, but an
 awful lot of applications can't use it.

This is fine, those applications would continue not to use it.
Personally, I've never written code in 20 years where I didn't want fast-math.


Re: DIP80: phobos additions

2015-06-11 Thread Wyatt via Digitalmars-d

On Friday, 12 June 2015 at 00:11:16 UTC, jmh530 wrote:

On Thursday, 11 June 2015 at 22:36:28 UTC, Wyatt wrote:

1) a set of operators that have no meaning unless an overload 
is specifically provided (for dot product, dyadic transpose, 
etc.) and



I see your point, but I think it might be a bit risky if you 
allow too much freedom for overloading operators. For instance, 
what if two people implement separate packages for matrix 
multiplication, one adopts the syntax of R (%*%) and one adopts 
the new Python syntax (@). It may lead to some confusion.


From the outset, my thought was to strictly define the set of 
(eight or so?) symbols for this.  If memory serves, it was right 
around the time Walter rejected wholesale user-defined 
operators because of exactly the problem you mention. (Compounded 
by Unicode-- what the hell is 2  8 supposed to be!?)  I 
strongly suspect you don't need many simultaneous extra operators 
on a type to cover most cases.


-Wyatt


Re: DIP80: phobos additions

2015-06-11 Thread via Digitalmars-d
On Wednesday, 10 June 2015 at 20:31:52 UTC, Steven Schveighoffer 
wrote:
OK, thanks for the explanation. I'd do it the other way around: 
Flag!"threadlocal", since we should be safe by default.


`RefCounted!T` is also thread-local by default, only 
`shared(RefCounted!T)` needs to use atomic operations.


Re: DIP80: phobos additions

2015-06-10 Thread Steven Schveighoffer via Digitalmars-d

On 6/9/15 5:46 PM, Andrei Alexandrescu wrote:

On 6/9/15 1:53 PM, Steven Schveighoffer wrote:

On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:

On 6/9/15 11:42 AM, Dennis Ritchie wrote:

And finally `std.bigint` offers good (but not outstanding)
performance.


BigInt should use reference counting. Its current approach to allocating
new memory for everything is a liability. Could someone file a report
for this please. -- Andrei


Slightly OT, but this reminds me.

RefCounted is not viable when using the GC, because any references on
the heap may race against stack-based references.


How do you mean that?


If you add an instance of RefCounted to a GC-destructed type (either in 
an array, or as a member of a class), there is the potential that the GC 
will run the dtor of the RefCounted item in a different thread, opening 
up the possibility of races.



Can we make RefCounted use atomicInc and atomicDec? It will hurt
performance a bit, but the current state is not good.

I spoke with Erik about this, as he was planning on using RefCounted,
but didn't know about the hairy issues with the GC.

If we get to a point where we can have a thread-local GC, we can remove
the implementation detail of using atomic operations when possible.


The obvious solution that comes to mind is adding a Flag!"interlocked".


Can you explain it further? It's not obvious to me.

-Steve


Re: DIP80: phobos additions

2015-06-10 Thread ixid via Digitalmars-d

On Tuesday, 9 June 2015 at 16:14:24 UTC, Dennis Ritchie wrote:

On Tuesday, 9 June 2015 at 15:26:43 UTC, Ilya Yaroshenko wrote:
D definitely needs BLAS API support for matrix multiplication. 
Best BLAS libraries are written in assembler like openBLAS. 
Otherwise D will have last position in corresponding math 
benchmarks.


Yes, programs in D are clearly lagging behind those of the 
Wolfram Mathematica programmers :)

https://projecteuler.net/language=D
https://projecteuler.net/language=Mathematica

To solve these problems you need something like BLAS. Perhaps 
BLAS is the more practical way to enrich D's techniques for 
working with matrices.


I suspect this is more about who the Mathematica and D users are 
as Project Euler is mostly mathematical rather than code 
optimization. More of the Mathematica users would have strong 
maths backgrounds. I haven't felt held back by D at all, it's 
only been my own lack of ability. I'm in 2nd place atm for D 
users.


Re: DIP80: phobos additions

2015-06-10 Thread ketmar via Digitalmars-d
On Wed, 10 Jun 2015 09:12:15 +, John Chapman wrote:

 On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
 It's a shame ucent/cent never got implemented. But couldn't they be
 added to Phobos? I often need a 128-bit type with better precision than
 float and double.
 
 Other things I often have a need for:
 
 Weak references

+inf for including that into Phobos. current implementations are hacks 
that may stop working when internals change, but if it's in 
Phobos, it will always be up-to-date.



Re: DIP80: phobos additions

2015-06-10 Thread John Chapman via Digitalmars-d
On Wednesday, 10 June 2015 at 09:30:37 UTC, Robert burner Schadek 
wrote:

On Wednesday, 10 June 2015 at 09:12:17 UTC, John Chapman wrote:


Logging


std.experimental.logger!?


Perfect, he said sheepishly.


Re: DIP80: phobos additions

2015-06-10 Thread ixid via Digitalmars-d

On Wednesday, 10 June 2015 at 08:50:31 UTC, Dennis Ritchie wrote:

On Wednesday, 10 June 2015 at 08:39:12 UTC, ixid wrote:
I suspect this is more about who the Mathematica and D users 
are as Project Euler is mostly mathematical rather than code 
optimization.


That's why I say that even though BigInt in D is not optimized 
very well, it helps me solve a wide range of tasks that don't 
require high performance. So I want BLAS or something similar to 
be in D. Something is better than nothing!


You rarely need to use BigInt for heavy lifting though, often 
it's just summing, not that I would argue against optimization. I 
think speed is absolutely vital and one of the most powerful 
things we could do to promote D would be to run the best 
benchmarks site for all language comers and make sure D does very 
well. Every time there's a benchmark contest it seems to unearth 
D performance issues that can be greatly improved upon.


I'm sure you will beat me pretty quickly, as I said my maths 
isn't very good but it might motivate me to solve some more! =)


Re: DIP80: phobos additions

2015-06-10 Thread John Chapman via Digitalmars-d
It's a shame ucent/cent never got implemented. But couldn't they 
be added to Phobos? I often need a 128-bit type with better 
precision than float and double.


Re: DIP80: phobos additions

2015-06-10 Thread via Digitalmars-d

On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
It's a shame ucent/cent never got implemented. But couldn't 
they be added to Phobos? I often need a 128-bit type with 
better precision than float and double.


I think the next release of LDC will support it, at least on some 
platforms...


Re: DIP80: phobos additions

2015-06-10 Thread Robert burner Schadek via Digitalmars-d

On Wednesday, 10 June 2015 at 09:12:17 UTC, John Chapman wrote:


Logging


std.experimental.logger!?


Re: DIP80: phobos additions

2015-06-10 Thread Dennis Ritchie via Digitalmars-d

On Wednesday, 10 June 2015 at 08:39:12 UTC, ixid wrote:
I suspect this is more about who the Mathematica and D users 
are as Project Euler is mostly mathematical rather than code 
optimization. More of the Mathematica users would have strong 
maths backgrounds. I haven't felt held back by D at all, it's 
only been my own lack of ability. I'm in 2nd place atm for D 
users.


OK, if D at least gets BLAS, I will try to overtake you :)


Re: DIP80: phobos additions

2015-06-10 Thread John Chapman via Digitalmars-d

On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
It's a shame ucent/cent never got implemented. But couldn't 
they be added to Phobos? I often need a 128-bit type with 
better precision than float and double.


Other things I often have a need for:

Weak references
Queues, stacks, sets
Logging
Custom date/time formatting
Locale-aware number/currency formatting
HMAC (for OAuth)
URI parsing
Sending email (SMTP)
Continuations for std.parallelism.Task
Database connectivity (sounds like this is on the cards)
HTTP listener


Re: DIP80: phobos additions

2015-06-10 Thread ponce via Digitalmars-d

On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:
It's a shame ucent/cent never got implemented. But couldn't 
they be added to Phobos? I often need a 128-bit type with 
better precision than float and double.


FWIW: 
https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/math/wideint.d
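The core of any such wide-integer type is carry propagation between machine words. A toy sketch of that idea (this is *not* gfm's actual wideint API; the `UInt128` name and layout are invented for illustration):

```d
// Toy 128-bit unsigned integer: just enough to show the carry
// handling a wideint-style type has to do on addition.
struct UInt128
{
    ulong hi, lo;

    UInt128 opBinary(string op : "+")(UInt128 rhs) const
    {
        UInt128 r;
        r.lo = lo + rhs.lo;
        // a carry out of the low word occurred iff the sum wrapped around
        immutable ulong carry = (r.lo < lo) ? 1 : 0;
        r.hi = hi + rhs.hi + carry;
        return r;
    }
}

unittest
{
    auto a = UInt128(0, ulong.max); // hi = 0, lo = 2^64 - 1
    auto b = UInt128(0, 1);
    auto c = a + b;                 // carries into the high word
    assert(c.hi == 1 && c.lo == 0);
}
```

Subtraction, multiplication and division follow the same word-by-word pattern, which is why a library type can cover any fixed width.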


Re: DIP80: phobos additions

2015-06-10 Thread Dennis Ritchie via Digitalmars-d

On Wednesday, 10 June 2015 at 08:39:12 UTC, ixid wrote:
I suspect this is more about who the Mathematica and D users 
are as Project Euler is mostly mathematical rather than code 
optimization.


That's why I say that even though BigInt in D is not optimized 
very well, it helps me solve a wide range of tasks that don't 
require high performance. So I want BLAS or something similar to 
be in D. Something is better than nothing!


Re: DIP80: phobos additions

2015-06-10 Thread via Digitalmars-d

On Wednesday, 10 June 2015 at 09:12:17 UTC, John Chapman wrote:

HMAC (for OAuth)


https://github.com/D-Programming-Language/phobos/pull/3233

Unfortunately it triggers a module cycle bug on FreeBSD that I 
can't figure out, so it hasn't been merged yet.


Re: DIP80: phobos additions

2015-06-10 Thread Andrei Alexandrescu via Digitalmars-d

On 6/10/15 1:53 AM, ponce wrote:

On Wednesday, 10 June 2015 at 07:56:46 UTC, John Chapman wrote:

It's a shame ucent/cent never got implemented. But couldn't they be
added to Phobos? I often need a 128-bit type with better precision
than float and double.


FWIW:
https://github.com/d-gamedev-team/gfm/blob/master/math/gfm/math/wideint.d


Yes, arbitrary fixed-size integrals would be good to have in Phobos. 
Who's the author of that code? Can we get something going here? -- Andrei


Re: DIP80: phobos additions

2015-06-10 Thread Steven Schveighoffer via Digitalmars-d

On 6/10/15 11:49 AM, Andrei Alexandrescu wrote:

On 6/10/15 3:52 AM, Steven Schveighoffer wrote:

On 6/9/15 5:46 PM, Andrei Alexandrescu wrote:

On 6/9/15 1:53 PM, Steven Schveighoffer wrote:

On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:

On 6/9/15 11:42 AM, Dennis Ritchie wrote:

And finally `std.bigint` offers good (but not outstanding)
performance.


BigInt should use reference counting. Its current approach to
allocating
new memory for everything is a liability. Could someone file a report
for this please. -- Andrei


Slightly OT, but this reminds me.

RefCounted is not viable when using the GC, because any references on
the heap may race against stack-based references.


How do you mean that?


If you add an instance of RefCounted to a GC-destructed type (either in
an array, or as a member of a class), there is the potential that the GC
will run the dtor of the RefCounted item in a different thread, opening
up the possibility of races.


That's a problem with the GC. Collected memory must be deallocated in
the thread that allocated it. It's not really that complicated to
implement, either - the collection process puts the memory to deallocate
in a per-thread freelist; then when each thread wakes up and tries to
allocate things, it first allocates from the freelist.


I agree it's a problem with the GC, but not that it's a simple fix. It's 
not just a freelist -- the dtor needs to be run in the thread also. But 
the amount of affected code (i.e. any code that uses GC) makes this a 
very high risk change, whereas changing RefCounted is a 2-line change 
that is easy to prove/review. I will make the RefCounted atomic PR if 
you can accept that.



Can we make RefCounted use atomicInc and atomicDec? It will hurt
performance a bit, but the current state is not good.

I spoke with Erik about this, as he was planning on using RefCounted,
but didn't know about the hairy issues with the GC.

If we get to a point where we can have a thread-local GC, we can remove
the implementation detail of using atomic operations when possible.


The obvious solution that comes to mind is adding a Flag!"interlocked".


Can you explain it further? It's not obvious to me.


The RefCounted type could have a flag as a template parameter.


OK, thanks for the explanation. I'd do it the other way around: 
Flag!"threadlocal", since we should be safe by default.


-Steve
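To make the Flag idea concrete, here is a stripped-down sketch of what such a template parameter could select. This is hypothetical and nothing like Phobos's actual RefCounted internals; the names `RefCountedSketch`, `Impl`, and `refCount` are invented:

```d
import core.atomic : atomicOp;
import std.typecons : Flag, No, Yes;

// Sketch only: the Flag template parameter selects plain
// vs. interlocked reference-count updates at compile time.
struct RefCountedSketch(T, Flag!"threadLocal" threadLocal = Yes.threadLocal)
{
    private static struct Impl { T payload; size_t count; }
    private Impl* impl;

    this(T value) { impl = new Impl(value, 1); }

    this(this)
    {
        if (impl is null) return;
        static if (threadLocal)
            ++impl.count;                                        // cheap
        else
            atomicOp!"+="(*cast(shared size_t*) &impl.count, 1); // interlocked
    }

    ~this()
    {
        if (impl is null) return;
        static if (threadLocal)
            --impl.count;
        else
            atomicOp!"-="(*cast(shared size_t*) &impl.count, 1);
        // a real implementation would destroy the payload at count zero
    }

    size_t refCount() const { return impl is null ? 0 : impl.count; }
}

unittest
{
    auto a = RefCountedSketch!int(42);
    assert(a.refCount == 1);
    {
        auto b = a;              // postblit bumps the count
        assert(a.refCount == 2);
    }                            // b's dtor drops it again
    assert(a.refCount == 1);
}
```

With this shape, `RefCountedSketch!(int, No.threadLocal)` pays for the atomic operations only when asked to, which is the trade-off being debated above.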


Re: DIP80: phobos additions

2015-06-10 Thread Dennis Ritchie via Digitalmars-d

On Wednesday, 10 June 2015 at 09:43:47 UTC, ixid wrote:
You rarely need to use BigInt for heavy lifting though, often 
it's just summing, not that I would argue against optimization. 
I think speed is absolutely vital and one of the most powerful 
things we could do to promote D would be to run the best 
benchmarks site for all language comers and make sure D does 
very well. Every time there's a benchmark contest it seems to 
unearth D performance issues that can be greatly improved upon.


Yes, it is. Many people are trying to find performance problems 
in D. And sometimes they succeed.


I'm sure you will beat me pretty quickly, as I said my maths 
isn't very good but it might motivate me to solve some more! =)


No, I won't be able to start beating you until next year, 
because, unfortunately, I will not have access to a computer for 
a full year. We can say that this is something like a long 
vacation :)


Re: DIP80: phobos additions

2015-06-10 Thread Andrei Alexandrescu via Digitalmars-d

On 6/10/15 3:52 AM, Steven Schveighoffer wrote:

On 6/9/15 5:46 PM, Andrei Alexandrescu wrote:

On 6/9/15 1:53 PM, Steven Schveighoffer wrote:

On 6/9/15 2:59 PM, Andrei Alexandrescu wrote:

On 6/9/15 11:42 AM, Dennis Ritchie wrote:

And finally `std.bigint` offers good (but not outstanding)
performance.


BigInt should use reference counting. Its current approach to
allocating
new memory for everything is a liability. Could someone file a report
for this please. -- Andrei


Slightly OT, but this reminds me.

RefCounted is not viable when using the GC, because any references on
the heap may race against stack-based references.


How do you mean that?


If you add an instance of RefCounted to a GC-destructed type (either in
an array, or as a member of a class), there is the potential that the GC
will run the dtor of the RefCounted item in a different thread, opening
up the possibility of races.


That's a problem with the GC. Collected memory must be deallocated in 
the thread that allocated it. It's not really that complicated to 
implement, either - the collection process puts the memory to deallocate 
in a per-thread freelist; then when each thread wakes up and tries to 
allocate things, it first allocates from the freelist.



Can we make RefCounted use atomicInc and atomicDec? It will hurt
performance a bit, but the current state is not good.

I spoke with Erik about this, as he was planning on using RefCounted,
but didn't know about the hairy issues with the GC.

If we get to a point where we can have a thread-local GC, we can remove
the implementation detail of using atomic operations when possible.


The obvious solution that comes to mind is adding a Flag!interlocked.


Can you explain it further? It's not obvious to me.


The RefCounted type could have a flag as a template parameter.


Andrei
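A minimal model of the per-thread deferral being proposed here (all names are invented; this bears no resemblance to druntime's real GC internals):

```d
// Hypothetical model: the collector never runs another thread's
// destructors directly. It parks dead blocks on the owning thread's
// list, and that thread drains the list the next time it allocates.
struct DeadBlock
{
    void delegate() finalizer;
}

DeadBlock[] pending;   // module-level variables are thread-local in D

// Collector side (may run in any thread): defer, don't finalize.
void deferToOwner(ref DeadBlock[] ownerPending, void delegate() finalizer)
{
    ownerPending ~= DeadBlock(finalizer);
}

// Owning-thread side: called before servicing an allocation.
void drainPending()
{
    foreach (block; pending)
        block.finalizer();   // the dtor now runs in the owner's thread
    pending.length = 0;
}

unittest
{
    bool ran;
    deferToOwner(pending, () { ran = true; });
    assert(!ran);     // nothing was destroyed at collection time
    drainPending();   // the owner drains on its next allocation
    assert(ran);
}
```

As Steve notes above, the hard part is not the list itself but guaranteeing the dtor call is moved along with the memory, which is why this touches far more code than a two-line RefCounted change.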


Re: DIP80: phobos additions

2015-06-09 Thread John Colvin via Digitalmars-d
On Tuesday, 9 June 2015 at 06:59:07 UTC, Andrei Alexandrescu 
wrote:

On 6/8/15 8:26 PM, Ilya Yaroshenko wrote:
On Sunday, 7 June 2015 at 18:27:16 UTC, Robert burner Schadek 
wrote:
Phobos is awesome, the libs of go, python and rust only have 
better marketing.
As discussed on dconf, phobos needs to become big and blow 
the rest out of the sky.

http://wiki.dlang.org/DIP80

lets get OT, please discuss


There are
https://github.com/9il/simple_matrix and
https://github.com/9il/cblas .
I will try to rework them for Phobos.

Any ideas and suggestions?

Some notes about portability:
   1. OS X has the Accelerate framework built in.
   2. Linux has BLAS by default, or it can be easily installed. 
However, the default BLAS is very slow. OpenBLAS is preferred.
   3. Looks like there is no simple way to have BLAS support 
on Windows.


Should we provide BLAS library with DMD for Windows and maybe 
Linux?


I think licensing matters would make this difficult. What I do 
think we can do is:


(a) Provide standard data layouts in std.array for the typical 
shapes supported by linear algebra libs: row major, column 
major, alongside with striding primitives.


I don't think this is quite the right approach. Multidimensional 
arrays and matrices are about accessing and iterating over data, 
not about the data structures themselves. The standard layouts 
are common special cases.


(b) Provide signatures for C and Fortran libraries so people 
who have them can use them easily with D.


(c) Provide high-level wrappers on top of those functions.


Andrei


That is how e.g. numpy works and it's OK, but D can do better.

Ilya, I'm very interested in discussing this further with you. I 
have a reasonable idea and implementation of how I would want the 
generic n-dimensional types in D to work, but you seem to have 
more experience with BLAS and LAPACK than me* and of course 
interfacing with them is critical.


*I rarely interact with them directly.
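For concreteness, the two standard dense layouts from (a) each reduce to a single index formula; a small sketch (helper names are made up):

```d
// The two classic dense layouts: the same (i, j) element lives at a
// different flat offset depending on the storage order.
size_t rowMajorIndex(size_t i, size_t j, size_t cols)    { return i * cols + j; }
size_t columnMajorIndex(size_t i, size_t j, size_t rows) { return j * rows + i; }

unittest
{
    // A 2 x 3 matrix stored both ways:
    //   row major:    [a00 a01 a02 a10 a11 a12]
    //   column major: [a00 a10 a01 a11 a02 a12]
    assert(rowMajorIndex(1, 0, 3) == 3);    // a10 in row-major storage
    assert(columnMajorIndex(1, 0, 2) == 1); // a10 in column-major storage
}
```

Row major is what C expects and column major is what Fortran-based BLAS/LAPACK expect, which is why (b)'s bindings have to care about the distinction.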


Re: DIP80: phobos additions

2015-06-09 Thread Ilya Yaroshenko via Digitalmars-d



size_t anyNumber;
auto ar = new int[3 * 8 * 9 + anyNumber];
auto slice = Slice[0..3, 4..8, 1..9];
assert(ar.canBeSlicedWith(slice)); // checks that ar.length >= 3 * 8 * 9

auto tensor = ar.sliced(slice);
tensor[0, 1, 2] = 4;

auto matrix = tensor[0..$, 1, 0..$];
assert(matrix[0, 2] == 4);
assert(matrix[0, 2] is tensor[0, 1, 2]);
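The sketch above can be modelled with plain strides over a flat array; a hypothetical, minimal version of what a `sliced`-style view might compute under the hood (the `View3` type and its fields are invented for illustration):

```d
// Hypothetical model of a sliced view: a flat array plus one stride
// per dimension gives O(1) element addressing without copying.
struct View3
{
    int[] data;
    size_t s0, s1, s2;   // strides for each dimension

    ref int opIndex(size_t i, size_t j, size_t k)
    {
        return data[i * s0 + j * s1 + k * s2];
    }
}

unittest
{
    auto ar = new int[3 * 8 * 9];
    // a dense 3 x 8 x 9 tensor over ar: strides are 72, 9, 1
    auto tensor = View3(ar, 8 * 9, 9, 1);
    tensor[0, 1, 2] = 4;
    assert(ar[0 * 72 + 1 * 9 + 2] == 4); // writes through to the flat array
}
```

Dropping a dimension (the `tensor[0..$, 1, 0..$]` step) is then just a matter of fixing one index and keeping the remaining strides, which is why such views compose cheaply.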

