Re: Casting MapResult

2015-06-23 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 23 June 2015 at 10:50:51 UTC, John Colvin wrote:

If I remember correctly, core.simd should work with every 
compiler on every supported OS. What did you try that didn't 
work?


I figured out the issue! You have to compile using the -m64 flag 
to get it to work on Windows (this works with both dmd and rdmd). 
The 32-bit Windows target does not support SIMD. I don't think I 
had ever noticed that it wasn't using a 64-bit compilation.
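
For reference, that just means passing -m64 on the command line 
(app.d is a placeholder file name):

dmd -m64 app.d
rdmd -m64 app.d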


I was a little disheartened to get an error running one of the 
first pieces of code on this page: http://dlang.org/simd.html

The second line below fails with some casting issue:
int4 v = 7;
v = 3 * v;   // multiply each element in v by 3

Outside of that, I can see one issue. I was overloading a version 
of exp that takes a real and returns a real, but I see no support 
for a real SIMD type, perhaps because the CPUs don't support it. 
So I could pretty much only overload the float or double versions.
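
For illustration (my own sketch, not from the original post), these 
are the kinds of fixed-width vector types core.simd exposes on 
targets that support them; nothing is wider than double, which is 
why only the float and double overloads of exp are candidates:

import core.simd;

float4  vf;  // four floats per vector
double2 vd;  // two doubles per vector
// no real (80-bit) vector type exists, so a real overload
// can't be vectorised this way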


On std.parallelism, I noticed that I could only loop through the 
static arrays with foreach when I appended them with []. I still 
get mixed up on that syntax. The good thing about static arrays is 
that I can determine the length at compile time. I'm not positive, 
but I think I might be able to set it up so that I have different 
functions: a non-parallel one below some length and a parallel one 
above it. This is good because the parallel one may not be able to 
use all the function attributes of the non-parallel one.


I haven't been able to get anything like that to work for a 
dynamic array version, since the length is not known at compile 
time; it ends up as just one big function.
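
In case it helps, here is a rough, untested sketch of the 
compile-time dispatch idea for static arrays (the names and the 
cutoff value are made up; the real threshold would need 
benchmarking):

import std.math : exp;
import std.parallelism : parallel;

enum parallelCutoff = 10_000; // hypothetical threshold

float[n] expStatic(size_t n)(float[n] x)
{
    float[n] result;
    static if (n < parallelCutoff)
    {
        // small arrays: plain loop, which can keep more function attributes
        foreach (i, a; x)
            result[i] = exp(a);
    }
    else
    {
        // large arrays: parallel foreach over a slice of the static array
        foreach (i, ref a; parallel(x[]))
            result[i] = exp(a);
    }
    return result;
}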


Re: Casting MapResult

2015-06-23 Thread John Colvin via Digitalmars-d-learn

On Tuesday, 23 June 2015 at 01:27:21 UTC, jmh530 wrote:

On Tuesday, 16 June 2015 at 16:37:35 UTC, John Colvin wrote:
If you want really fast exponentiation of an array though, you 
want to use SIMD. Something like http://www.yeppp.info would 
be easy to use from D.


I've been looking into SIMD a little. It turns out that 
core.simd only works for DMD on Linux machines.


If I remember correctly, core.simd should work with every 
compiler on every supported OS. What did you try that didn't work?


Re: Casting MapResult

2015-06-22 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 16 June 2015 at 16:37:35 UTC, John Colvin wrote:
If you want really fast exponentiation of an array though, you 
want to use SIMD. Something like http://www.yeppp.info would be 
easy to use from D.


I've been looking into SIMD a little. It turns out that core.simd 
only works for DMD on Linux machines. Not sure about the other 
compilers, but I was stuck on it for a little while. I read a 
little on SIMD as I had no real understanding of it before you 
mentioned it. At least now I understand why all the types in 
core.simd are so small. My initial reaction was that there's no way 
I would want to write code just for float[4], but now I realize 
that's the whole point.


Anyway, I might try to put something together on my other machine 
one of these days, but I was able to make a little bit more 
progress with D's std.parallelism. The foreach loops work great, 
even on Windows, with little extra work required.


That being said, I'm not seeing any speed-up from parallel map. I 
put some code below doing some variations on std.algorithm.map 
and taskPool.map. The more memory allocation there is (through 
.array), the longer everything takes. Keeping things as ranges 
seems to be much faster.


The most interesting result to me was that taskPool.map was 
slower than std.algorithm.map in each case. Maybe it's a difference 
between being semi-eager versus lazy. The code below doesn't show 
it, but it seems like a parallel foreach loop is faster than 
std.algorithm.map or taskPool.map when doing everything with 
arrays (a sketch of that variant follows the benchmark code).




import std.datetime;
import std.parallelism;
import std.algorithm;   // needed for std.algorithm.map below
import std.conv : to;
import std.math : exp;
import std.stdio : writeln;
import std.array : array;
import std.range : iota;

enum real x_size = 100_000;

void f0()
{
    auto y = std.algorithm.map!(a => exp(a))(iota(x_size));
}

void f1()
{
    auto y = taskPool.map!exp(iota(x_size));
}

void f2()
{
    auto y = std.algorithm.map!(a => exp(a))(iota(x_size)).array;
}

void f3()
{
    auto y = taskPool.map!exp(iota(x_size)).array;
}

void f4()
{
    auto y = std.algorithm.map!(a => exp(a))(iota(x_size).array);
}

void f5()
{
    auto y = taskPool.map!exp(iota(x_size).array);
}

void f6()
{
    auto y = std.algorithm.map!(a => exp(a))(iota(x_size).array).array;
}

void f7()
{
    auto y = taskPool.map!exp(iota(x_size).array).array;
}

void main() {
    auto r = benchmark!(f0, f1, f2, f3, f4, f5, f6, f7)(100);
    auto f0Result = to!Duration(r[0]);
    auto f1Result = to!Duration(r[1]);
    auto f2Result = to!Duration(r[2]);
    auto f3Result = to!Duration(r[3]);
    auto f4Result = to!Duration(r[4]);
    auto f5Result = to!Duration(r[5]);
    auto f6Result = to!Duration(r[6]);
    auto f7Result = to!Duration(r[7]);
    writeln(f0Result);  // prints ~ 17us on my machine
    writeln(f1Result);  // prints ~ 4.3ms on my machine
    writeln(f2Result);  // prints ~ 1.7s on my machine
    writeln(f3Result);  // prints ~ 3.5s on my machine
    writeln(f4Result);  // prints ~ 471ms on my machine
    writeln(f5Result);  // prints ~ 473ms on my machine
    writeln(f6Result);  // prints ~ 1.9s on my machine
    writeln(f7Result);  // prints ~ 3.9s on my machine
}
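
For what it's worth, the parallel-foreach-with-arrays variant 
mentioned above might look something like this (my own sketch, not 
part of the original benchmark):

void f8()
{
    auto x = iota(x_size).array;
    auto y = new real[](x.length);
    // parallel foreach writes each result into a preallocated slot
    foreach (i, ref a; taskPool.parallel(x))
        y[i] = exp(a);
}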


Re: Casting MapResult

2015-06-16 Thread John Colvin via Digitalmars-d-learn

On Tuesday, 16 June 2015 at 13:06:58 UTC, jmh530 wrote:

On Monday, 15 June 2015 at 22:40:31 UTC, Baz wrote:


Right, my bad. This one would work:

---
float[] test(float[] x) {
    auto result = x.dup;
    result.each!((ref a) => (a = exp(a)));
    return result;
}
---


That works. Thanks.

I did some benchmarking and found that map tended to be faster 
than each. For some large arrays, it was exceptionally faster. 
Perhaps it has to do with the extra copying in the each version?


I also did an alternative to each using foreach and they were 
exactly the same speed.


Range-based code is very dependent on aggressive optimisation to 
get good performance. DMD does a pretty bad/patchy job of this; 
LDC and GDC will normally give you more consistently* fast code.


*consistent as in different implementations performing very 
similarly instead of seeing big differences like you have here.


Re: Casting MapResult

2015-06-16 Thread jmh530 via Digitalmars-d-learn

On Monday, 15 June 2015 at 22:40:31 UTC, Baz wrote:


Right, my bad. This one would work:

---
float[] test(float[] x) {
    auto result = x.dup;
    result.each!((ref a) => (a = exp(a)));
    return result;
}
---


That works. Thanks.

I did some benchmarking and found that map tended to be faster 
than each. For some large arrays, it was exceptionally faster. 
Perhaps it has to do with the extra copying in the each version?


I also did an alternative to each using foreach and they were 
exactly the same speed.


Re: Casting MapResult

2015-06-16 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 16 June 2015 at 13:15:05 UTC, John Colvin wrote:



*consistent as in different implementations performing very 
similarly instead of seeing big differences like you have here.


That's a good point. I tried numpy's exp (which uses C at a low 
level, I think) and found it takes about a fifth as long. I went 
searching for numpy's implementation, but could only find a C 
header containing the function prototype.


I only have dmd on my work computer and it probably would be a 
hassle to get the others working right now.


Re: Casting MapResult

2015-06-16 Thread John Colvin via Digitalmars-d-learn

On Tuesday, 16 June 2015 at 14:43:17 UTC, jmh530 wrote:

On Tuesday, 16 June 2015 at 13:15:05 UTC, John Colvin wrote:



*consistent as in different implementations performing very 
similarly instead of seeing big differences like you have here.


That's a good point. I tried numpy's exp (which uses C at a low 
level, I think) and found it takes about a fifth as long. I 
went searching for numpy's implementation, but could only find 
a C header containing the function prototype.


I only have dmd on my work computer and it probably would be a 
hassle to get the others working right now.


Have you tried using core.stdc.math.exp instead of std.math.exp? 
It's probably faster, although not necessarily quite as accurate.
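
For example, a minimal sketch (cexp is a made-up name; 
core.stdc.math's exp takes and returns double):

import core.stdc.math : exp;
import std.algorithm : map;
import std.array : array;

double[] cexp(double[] x)
{
    return x.map!(a => exp(a)).array;
}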


If you want really fast exponentiation of an array though, you 
want to use SIMD. Something like http://www.yeppp.info would be 
easy to use from D.


Re: Casting MapResult

2015-06-16 Thread John Colvin via Digitalmars-d-learn

On Tuesday, 16 June 2015 at 14:43:17 UTC, jmh530 wrote:

On Tuesday, 16 June 2015 at 13:15:05 UTC, John Colvin wrote:



*consistent as in different implementations performing very 
similarly instead of seeing big differences like you have here.


That's a good point. I tried numpy's exp (which uses C at a low 
level, I think) and found it takes about a fifth as long. I 
went searching for numpy's implementation, but could only find 
a C header containing the function prototype.


I only have dmd on my work computer and it probably would be a 
hassle to get the others working right now.


What OS are you on? See http://wiki.dlang.org/Compilers


Re: Casting MapResult

2015-06-16 Thread jmh530 via Digitalmars-d-learn

On Tuesday, 16 June 2015 at 16:38:55 UTC, John Colvin wrote:


What OS are you on? See http://wiki.dlang.org/Compilers


I'm on Windows 7 at work, and I have both Win7 and linux at home. 
I figure I can try it on linux at home. Sometimes the work 
computer is a bit funky with installing things, so I didn't want 
to bother.


On Tuesday, 16 June 2015 at 16:37:35 UTC, John Colvin wrote:


If you want really fast exponentiation of an array though, you 
want to use SIMD. Something like http://www.yeppp.info would be 
easy to use from D.


I wasn't familiar with yeppp. Thanks. I'll probably keep things 
in native D for now, but it's good to know there are other 
options.


I compared the results with Julia and R while I was at it. The D 
code was quite a bit faster than them. It's just that numpy is 
doing something that gets better performance. After some 
investigation, it's possible that my version of numpy is using 
SSE, which is a form of SIMD from Intel. It doesn't seem to be 
easy to check this; the one method I found on stackoverflow 
doesn't work for me...


It looks like D has some support for simd, but only for a limited 
subset of matrices.


Re: Casting MapResult

2015-06-16 Thread jmh530 via Digitalmars-d-learn

Err...vectors not matrices.


Re: Casting MapResult

2015-06-15 Thread jmh530 via Digitalmars-d-learn

Thank you all for the very fast answers. It looks like that works.


Re: Casting MapResult

2015-06-15 Thread ketmar via Digitalmars-d-learn
On Mon, 15 Jun 2015 15:10:20 +, jmh530 wrote:

you shouldn't cast it like that. use `std.array.array` to get the actual 
array. like this:

  import std.array;

  auto y = x.map!(a => exp(a)).array;

the thing is that `map` returns a so-called lazy range. lazy ranges 
try to not do any work until they are explicitly asked. i.e.

  y = x.map!(a => exp(a))

doesn't do any real processing yet, it only prepares everything for it. 
and only when you call `y.front` does `map` process one element. 
only one, as it has no need to process the next until you call `popFront`.
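
for example (illustrative, not from the original message):

  auto y = x.map!(a => exp(a)); // no work done yet
  auto first = y.front;         // computes exp(x[0]), and only that
  y.popFront();                 // the next `y.front` will compute exp(x[1])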

tl;dr: you can't simply cast that lazy range back to an array, you have to 
use `std.array.array` to get the array from it.



Casting MapResult

2015-06-15 Thread jmh530 via Digitalmars-d-learn

I wrote a simple function to apply map to a float dynamic array

auto exp(float[] x) {
    auto y = x.map!(a => exp(a));
    return y;
}

However, the type of the result is MapResult!(__lambda2, 
float[]). It seems like some of the things that I might do to a 
float[], I can't do to this type, like adding them together. So I 
tried to adjust this by adding in a cast to float[], as in


float[] exp(float[] x) {
    auto y = x.map!(a => exp(a));
    cast(float[]) y;
    return y;
}

But I get an error that I can't convert MapResult!(__lambda2, 
float[]) to float[].


So I suppose I have two questions: 1) am I screwing up the cast, 
or is there no way to convert the MapResult to float[], 2) should 
I just not bother with map (I wrote an alternate, longer, version 
that doesn't use map but returns float[] properly).


Re: Casting MapResult

2015-06-15 Thread Justin Whear via Digitalmars-d-learn
On Mon, 15 Jun 2015 15:10:20 +, jmh530 wrote:

 So I suppose I have two questions: 1) am I screwing up the cast, or is
 there no way to convert the MapResult to float[], 2) should I just not
 bother with map (I wrote an alternate, longer, version that doesn't use
 map but returns float[] properly).

MapResult is a wrapper around your original range that performs the 
mapping operation lazily.  If you want to eagerly evaluate and get 
back to an array, use the std.array.array function:

import std.array : array;
auto y = x.map!(a => exp(a)).array;

Or if you have already allocated an array of the appropriate size you can 
use std.algorithm.copy:

import std.algorithm : copy;
float[] y = new float[](appropriate_length);
x.map!(a => exp(a)).copy(y);


Re: Casting MapResult

2015-06-15 Thread Adam D. Ruppe via Digitalmars-d-learn

On Monday, 15 June 2015 at 15:10:24 UTC, jmh530 wrote:
So I suppose I have two questions: 1) am I screwing up the 
cast, or is there no way to convert the MapResult to float[]


Don't cast it, just slap a .array on the end after importing 
std.range. Like so:


import std.algorithm;
import std.math : exp;  // assuming std.math's exp is the one being mapped
import std.range; // add this line somewhere
float[] exp2(float[] x) {
    auto y = x.map!(a => exp(a));
    return y.array; // this line changed to make the array
}


The reason is that map returns a lazy generator instead of an 
array directly. It only evaluates on demand.


To get it to evaluate and save into an array, the .array function 
is called.


Tip though: don't call .array if you don't have to. Chaining 
calls to map and such, even foreach(item; some_map_result), can be 
done without actually building the array and can give more 
efficiency.
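
For instance, a small sketch of that (my own illustration, not from 
the original post):

import std.algorithm : map;
import std.math : exp, log;
import std.stdio : writeln;

void demo(float[] x)
{
    // two chained maps; no intermediate array is ever built
    auto r = x.map!(a => exp(a)).map!(a => log(a));
    foreach (item; r)   // elements are computed one at a time, on demand
        writeln(item);
}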


Re: Casting MapResult

2015-06-15 Thread anonymous via Digitalmars-d-learn

On Monday, 15 June 2015 at 15:10:24 UTC, jmh530 wrote:

float[] exp(float[] x) {
    auto y = x.map!(a => exp(a));
    cast(float[]) y;
    return y;
}

But I get an error that I can't convert MapResult!(__lambda2, 
float[]) to float[].


So I suppose I have two questions: 1) am I screwing up the 
cast, or is there no way to convert the MapResult to float[], 
2) should I just not bother with map (I wrote an alternate, 
longer, version that doesn't use map but returns float[] 
properly).


First off: Don't cast unless you know exactly what you're doing. 
It's easy to stumble into undefined behaviour land with casts.


To answer the question: You can convert from MapResult to 
float[], but not with a cast. Instead, use std.array.array:

import std.array: array;
return x.map!(std.math.exp).array;


Re: Casting MapResult

2015-06-15 Thread wobbles via Digitalmars-d-learn

On Monday, 15 June 2015 at 15:10:24 UTC, jmh530 wrote:

snip
float[] exp(float[] x) {
    auto y = x.map!(a => exp(a));
    cast(float[]) y;
    return y;
}




Also, I don't think your functions will work. 
You're recursively calling exp in your map, but with a 'float' 
instead of 'float[]'.


Re: Casting MapResult

2015-06-15 Thread Ali Çehreli via Digitalmars-d-learn

On 06/15/2015 08:21 AM, Adam D. Ruppe wrote:

 don't call .array if you don't have to, chaining calls to
 map and such, even foreach(item; some_map_result) can be done without
 actually building the array and can give more efficiency.

To add, the OP can use 'sum' or 'reduce' for adding them together:

  http://dlang.org/phobos/std_algorithm_iteration.html

import std.stdio;
import std.algorithm;
import std.math;

void main()
{
    float[] arr = [ 1.5, 2.5 ];
    auto y = arr.map!exp;
    writeln(y.sum);    // same as sum(y)
}

An equivalent of the last line:

writeln(reduce!((result, a) => result + a)(y));

Ali



Re: Casting MapResult

2015-06-15 Thread ketmar via Digitalmars-d-learn
On Mon, 15 Jun 2015 17:07:55 +, jmh530 wrote:

 I have a little bit of a follow up.
 
 After making the recommended changes, the function seems to work with
 both static and dynamic arrays. I then noticed that all of the examples
 for functions that pass arrays in http://dlang.org/function.html use the
 dynamic array notation like my function above. Does this matter?

it doesn't, but i'd use `[]` anyway for code readability. static arrays 
can be converted to slices by the compiler when it needs to, but by using 
explicit `[]` it's easy to see whether a function expects a slice or a real 
static array right at the call site, without looking at the function signature.
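
for example (illustrative; takesSlice is a hypothetical 
void takesSlice(float[]) helper):

  float[4] s = [1, 2, 3, 4];
  takesSlice(s[]); // explicit slice: obviously passing a float[]
  takesSlice(s);   // also compiles, but less obvious at the call site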



Re: Casting MapResult

2015-06-15 Thread Baz via Digitalmars-d-learn

On Monday, 15 June 2015 at 15:10:24 UTC, jmh530 wrote:

I wrote a simple function to apply map to a float dynamic array

auto exp(float[] x) {
    auto y = x.map!(a => exp(a));
    return y;
}

However, the type of the result is MapResult!(__lambda2, 
float[]). It seems like some of the things that I might do to a 
float[], I can't do to this type, like adding them together. So 
I tried to adjust this by adding in a cast to float[], as in


float[] exp(float[] x) {
    auto y = x.map!(a => exp(a));
    cast(float[]) y;
    return y;
}

But I get an error that I can't convert MapResult!(__lambda2, 
float[]) to float[].


So I suppose I have two questions: 1) am I screwing up the 
cast, or is there no way to convert the MapResult to float[], 
2) should I just not bother with map (I wrote an alternate, 
longer, version that doesn't use map but returns float[] 
properly).


In addition to the other answers you can use 
std.algorithm.iteration.each():


---
float[] _exp(float[] x) {
    auto result = x.dup;
    result.each!(a => exp(a));
    return result;
}
---


Re: Casting MapResult

2015-06-15 Thread via Digitalmars-d-learn

On Monday, 15 June 2015 at 16:16:00 UTC, Ali Çehreli wrote:

On 06/15/2015 08:21 AM, Adam D. Ruppe wrote:

 don't call .array if you don't have to, chaining calls to
 map and such, even foreach(item; some_map_result) can be done
without
 actually building the array and can give more efficiency.

To add, the OP can use 'sum' or 'reduce' for adding them 
together:


  http://dlang.org/phobos/std_algorithm_iteration.html

import std.stdio;
import std.algorithm;
import std.math;

void main()
{
float[] arr = [ 1.5, 2.5 ];
auto y = arr.map!exp;
writeln(y.sum);// same as sum(y)
}

An equivalent of the last line:

writeln(reduce!((result, a) => result + a)(y));

Ali


`sum` is better for floating-point ranges, because it uses 
pair-wise or Kahan summation if possible, in order to preserve 
precision.


Re: Casting MapResult

2015-06-15 Thread Ali Çehreli via Digitalmars-d-learn
On 06/15/2015 09:39 AM, Marc Schütz schue...@gmx.net wrote:



writeln(y.sum);// same as sum(y)
}

An equivalent of the last line:

writeln(reduce!((result, a) => result + a)(y));



`sum` is better for floating-point ranges, because it uses pair-wise or
Kahan summation if possible, in order to preserve precision.


Good point. I had mentioned that elsewhere after learning about it 
recently: the sum of the elements of a range should be calculated by 
std.algorithm.sum, which uses special algorithms to achieve more 
accurate calculations for floating point types. :)


  http://ddili.org/ders/d.en/fibers.html#ix_fibers.recursion

Ali



Re: Casting MapResult

2015-06-15 Thread jmh530 via Digitalmars-d-learn

I have a little bit of a follow up.

After making the recommended changes, the function seems to work 
with both static and dynamic arrays. I then noticed that all of 
the examples for functions that pass arrays in 
http://dlang.org/function.html use the dynamic array notation 
like my function above. Does this matter?


Re: Casting MapResult

2015-06-15 Thread Baz via Digitalmars-d-learn

On Monday, 15 June 2015 at 19:22:31 UTC, jmh530 wrote:

On Monday, 15 June 2015 at 19:04:32 UTC, Baz wrote:
In addition to the other answers you can use 
std.algorithm.iteration.each():


---
float[] _exp(float[] x) {
    auto result = x.dup;
    result.each!(a => exp(a));
    return result;
}
---


Am I right that the difference is that map is lazy and each is 
greedy? Does that have any significant performance effects?


i think that the OP wants greedy. That's why he had to fight with 
map results.





Re: Casting MapResult

2015-06-15 Thread Baz via Digitalmars-d-learn

On Monday, 15 June 2015 at 19:30:08 UTC, Baz wrote:

On Monday, 15 June 2015 at 19:22:31 UTC, jmh530 wrote:

On Monday, 15 June 2015 at 19:04:32 UTC, Baz wrote:
In addition to the other answers you can use 
std.algorithm.iteration.each():


---
float[] _exp(float[] x) {
    auto result = x.dup;
    result.each!(a => exp(a));
    return result;
}
---


Am I right that the difference is that map is lazy and each is 
greedy? Does that have any significant performance effects?


i think that the OP wants greedy. That's why he had to fight 
with map results.


Ah sorry, it's you the OP, I just got it. So you wanted greedy, 
didn't you?





Re: Casting MapResult

2015-06-15 Thread jmh530 via Digitalmars-d-learn
I suppose I would want whichever has the best performance. 
Without testing, I'm not sure which one would be better. 
Thoughts?


I had been fighting with the map results because I didn't 
realize there was an easy way to get just the array.


I'm actually not having much luck with your original function 
(and I tried some variations on it). It just kept outputting the 
original array without applying the function. I tried it in main 
also (without being in a function) without much luck either.


Re: Casting MapResult

2015-06-15 Thread jmh530 via Digitalmars-d-learn

On Monday, 15 June 2015 at 19:32:12 UTC, Baz wrote:

On Monday, 15 June 2015 at 19:30:08 UTC, Baz wrote:

On Monday, 15 June 2015 at 19:22:31 UTC, jmh530 wrote:

On Monday, 15 June 2015 at 19:04:32 UTC, Baz wrote:
In addition to the other answers you can use 
std.algorithm.iteration.each():


---
float[] _exp(float[] x) {
    auto result = x.dup;
    result.each!(a => exp(a));
    return result;
}
---


Am I right that the difference is that map is lazy and each 
is greedy? Does that have any significant performance effects?


i think that the OP wants greedy. That's why he had to fight 
with map results.


Ah sorry, it's you the OP, I just got it. So you wanted greedy, 
didn't you?


I suppose I would want whichever has the best performance. 
Without testing, I'm not sure which one would be better. Thoughts?


I had been fighting with the map results because I didn't realize 
there was an easy way to get just the array.


Re: Casting MapResult

2015-06-15 Thread jmh530 via Digitalmars-d-learn

On Monday, 15 June 2015 at 19:04:32 UTC, Baz wrote:
In addition to the other answers you can use 
std.algorithm.iteration.each():


---
float[] _exp(float[] x) {
    auto result = x.dup;
    result.each!(a => exp(a));
    return result;
}
---


Am I right that the difference is that map is lazy and each is 
greedy? Does that have any significant performance effects?


Re: Casting MapResult

2015-06-15 Thread Ali Çehreli via Digitalmars-d-learn

On 06/15/2015 12:44 PM, jmh530 wrote:

 On Monday, 15 June 2015 at 19:32:12 UTC, Baz wrote:

 Ah sorry, it's you the OP, I just got it. So you wanted greedy, didn't 
you?


 I suppose I would want whichever has the best performance. Without
 testing, I'm not sure which one would be better. Thoughts?

There are different levels of laziness regarding performance. :)

1) map and most algorithms are fully lazy. They don't do anything until 
elements are actually used.


2) Some algorithms that keep their state in struct objects do some work 
in their constructors to prepare the first element for use.


3) 'each' is fully eager. It walks through all elements of the range and 
does something with each of those.


4) 'array' is fully eager but it also creates an array to store all the 
elements in.


There are also semi-eager algorithms like 'asyncBuf' and 'map' in 
std.parallelism that consume the input range in waves.
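
A small illustration of the difference between (1), (3), and (4) 
above (my own example, not from the original post):

import std.algorithm : each, map;
import std.array : array;

void main()
{
    int calls;
    auto lazyRange = [1, 2, 3].map!((a) { ++calls; return a * 2; });
    assert(calls == 0);             // (1) map did no work yet
    auto first = lazyRange.front;   // one element computed on demand
    assert(calls == 1);

    [1, 2, 3].each!((a) { ++calls; });  // (3) each walks everything right away
    assert(calls == 4);

    auto all = [1, 2, 3].map!(a => a * 2).array;  // (4) eager and allocates
    assert(all == [2, 4, 6]);
}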


Ali



Re: Casting MapResult

2015-06-15 Thread Baz via Digitalmars-d-learn

On Monday, 15 June 2015 at 20:10:30 UTC, jmh530 wrote:
I suppose I would want whichever has the best performance. 
Without testing, I'm not sure which one would be better. 
Thoughts?


I had been fighting with the map results because I didn't 
realize there was an easy way to get just the array.


I'm actually not having much luck with your original function 
(and I tried some variations on it). It just kept outputting 
the original array without applying the function. I tried it in 
main also (without being in a function) without much luck 
either.


Right, my bad. This one would work:

---
float[] test(float[] x) {
    auto result = x.dup;
    result.each!((ref a) => (a = exp(a)));
    return result;
}
---