There is a new Pharo build available!
The status of the build #159 was: FAILURE.
The Pull Request #3340 was integrated: "Fixes #3302 for Pharo 7."
Pull request url: https://github.com/pharo-project/pharo/pull/3340
Issue Url: https://pharo.fogbugz.com/f/cases/Pharo
Build Url:
Not a problem. I greatly respect other people's time, priorities,
and personal lives.
Just for the record, I am using 64-bit Pharo on a fast i7 laptop with
16 GB RAM, running Xubuntu 18.04 64-bit.
I do not remember any problems loading. And within the small amount of
experimenting that I
Hi Jimmie,
I didn't take time yesterday to analyze your specific example because it
was quite late, but here are some remarks:
1) First, I recommend using 64-bit Pharo, because number crunching and
Float operations will be faster (not FloatArray, though).
2) It would be nice to use a profiler to
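The profiling remark above was cut off. As a minimal sketch of what profiling number-crunching code in Pharo can look like (the workload and sizes below are illustrative, not from the original thread; this assumes the stock TimeProfiler class shipped with Pharo):

```smalltalk
"Open Pharo's time profiler on a Float-heavy block to see where time goes.
The block itself is a made-up workload for illustration only."
TimeProfiler spyOn: [
	| a |
	a := (1 to: 1000000) collect: [:i | i asFloat sqrt].
	a inject: 0.0 into: [:sum :each | sum + each] ]
```

Alternatively, `[ ... ] bench` and `[ ... ] timeToRun` (used later in this thread) give quick throughput and wall-clock numbers without the full profile.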
Hi Serge,
This is good news; having TensorFlow bindings is also a must!
I get this in Smallapack with pure, unaccelerated CPU BLAS (no MKL, no
ATLAS, just the plain and dumb netlib code):
| a b |
a := LapackDGEMatrix randNormal: #(1000 1000).
b := LapackDGEMatrix randNormal: #(1000 1000).
[a * b]
There is a new Pharo build available!
The status of the build #281 was: SUCCESS.
The Pull Request #3365 was integrated:
"3362-RBClassModelFactory-rbClass-should-be-pushed-to-instance-side"
Pull request url: https://github.com/pharo-project/pharo/pull/3365
Issue Url:
There is a new Pharo build available!
The status of the build #283 was: SUCCESS.
The Pull Request #3378 was integrated:
"3377-RBRefactoryChangeManager-changeFactory-should-be-nil"
Pull request url: https://github.com/pharo-project/pharo/pull/3378
Issue Url:
There is a new Pharo build available!
The status of the build #282 was: SUCCESS.
The Pull Request #3380 was integrated:
"3379-Hiedra-Migrate-example-to-current-Spec"
Pull request url: https://github.com/pharo-project/pharo/pull/3380
Issue Url:
I have updated Smallapack to version 1.6.1 so as to accelerate sum.
| a b c |
a := LapackSGEMatrix randNormal: #(1 1).
b := a as: FloatArray.
c := a asAbstractMatrix.
{a sum. b sum. c sum.}.
{[a sum] bench. [b sum] bench. [c sum] bench.}.
'27,500 per second. 36.3 microseconds per run.'
On Tue, May 21, 2019 at 18:55, Nicolas Cellier <
nicolas.cellier.aka.n...@gmail.com> wrote:
> I have updated Smallapack to version 1.6.1 so as to accelerate sum.
>
> | a b c |
> a := LapackSGEMatrix randNormal: #(1 1).
> b := a as: FloatArray.
> c := a asAbstractMatrix.
> {a sum. b sum. c
And my less performant 2.7 GHz Intel Core i5 MBP with Apple's accelerated
VecLib is way faster than naive netlib BLAS (I guess it's multi-threaded):
| a b |
a := LapackDGEMatrix randNormal: #(1000 1000).
b := LapackDGEMatrix randNormal: #(1000 1000).
[a * b] timeToRun.
45
| a b |
a :=