Yes, I'm using the function distribute(). This is the hotspot of my code (C
= A*B)
C_local = pmap(fetch, {@spawnat p localpart(dA)*localpart(dB)
for p in procs(dA)})
Is this the right way to proceed? Done this way, the multiplication is very
slow (I'm using 4 workers).
Many thanks
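A note on the pattern being asked about: @spawnat already places each block product on a worker, so plain fetch calls suffice; wrapping them in pmap only re-dispatches the fetches. Below is a self-contained sketch using just the Distributed standard library — the explicit column/row split stands in for distribute/localpart (which come from DistributedArrays), and the matrix sizes are the 300x300 case mentioned later in the thread.

```julia
using Distributed
addprocs(2)  # workers for the block products

# Split C = A*B into additive outer-product block contributions:
# A[:, 1:150] * B[1:150, :]  +  A[:, 151:300] * B[151:300, :]
A = rand(300, 300); B = rand(300, 300)
blocks = [(A[:, r], B[r, :]) for r in (1:150, 151:300)]

# @spawnat runs each product on a worker; fetch alone retrieves
# the results -- no pmap needed.
futures = [(@spawnat :any a * b) for (a, b) in blocks]
C = sum(fetch.(futures))   # equals A*B up to floating-point rounding
```

Note that this decomposition sums the partial products explicitly; the per-worker localpart(dA)*localpart(dB) products in the original snippet only form C = A*B if dA and dB happen to be partitioned compatibly.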
Hi guys,
I wanted to look into this as well. I think the main issue is the speed of
the objective function: running @time on it showed a large amount of memory
allocation, and checking the types revealed that extracting x and y from
data gives them type Any.
So I
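The diagnosis above can be sketched in toy form. The names data, x, and y here are placeholders for whatever the objective function actually uses; the point is that pulling values out of an untyped container leaves them typed Any, and annotating the extraction restores inference.

```julia
# An untyped container: the compiler only knows the elements are Any.
data = Any[collect(1.0:1000.0), collect(1.0:1000.0)]

function objective_untyped(data)
    x = data[1]              # inferred as Any -> dynamic dispatch below
    y = data[2]
    s = 0.0
    for i in 1:length(x)
        s += x[i] * y[i]     # boxes/allocates on every iteration
    end
    return s
end

function objective_typed(data)
    x = data[1]::Vector{Float64}   # assertion pins the concrete type
    y = data[2]::Vector{Float64}
    s = 0.0
    for i in 1:length(x)
        s += x[i] * y[i]           # type-stable loop
    end
    return s
end
```

Timing the two versions with @time shows the allocation difference described above.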
You're right, it is creating a 1x2 array in this case, but it doesn't affect
execution time either way. Another surprising observation: when line 2 is
changed from `a = [.5 2]` to `a = 1.*[.5 2]`, execution time slows down by
500%.
I'll report it on github.
On Saturday, June 21, 2014
Out of curiosity, what is wrong with Scala's approach to this problem
(where only concrete types can have constructors)? From the thread you
linked, it seems the main complaint is that doing so would create a
stronger coupling between the abstract and concrete types than desired.
Hi all,
I'm a Julia newbie. While learning Julia, I wrote a Julia version of
rougier's 100 numpy
exercises (http://www.loria.fr/~rougier/teaching/numpy.100/index.html):
https://github.com/chezou/julia-100-exercises
I'd appreciate suggestions for more idiomatic Julia, or pointers to
anything that's wrong.
Best
I am starting to write about Julia for the followup book of Seven Languages
in Seven Weeks. We're having a great time so far. I like it far more than I
thought I would. We're interviewing the creators of other languages in the
book, and I would like to interview one of the Julia creators, but I
Hi Bruce,
This is exciting. Glad that you are liking Julia, and we are sure this will
bring more users into the fold. We will send you a separate email directly
with any information you may want from us for your book.
-viral
On Sunday, June 22, 2014 7:05:29 PM UTC+5:30, Bruce Tate wrote:
I
The communication is probably happening in other parts of the code. How
large a problem are you trying? Can you post the full code in a gist or a
git repository? I will try it out. This is a good example to have in our
manual as well, and I just haven't got around to it.
-viral
On Sunday,
If x1, ..., x6 or coeff are Float64 arrays, then the initialization
u1 = 0; u2 = 0; u3 = 0; u4 = 0; u5 = 0; u6 = 0
is problematic as soon as you get to
for k=1:nVar
u1 += x1[i + ni*( k-1 + nk* (t-1))]*coeff[k]
u2 += x2[i + ni*( k-1 + nk* (t-1))]*coeff[k]
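The issue Tim is pointing at can be seen with a single accumulator (toy function and inputs invented for illustration): initializing with the integer 0 makes u1 change type on its first +=, whereas 0.0 keeps the loop type-stable.

```julia
function accum_int_init(x, coeff)
    u1 = 0                 # Int zero: u1 becomes Float64 mid-loop
    for k in 1:length(coeff)
        u1 += x[k] * coeff[k]
    end
    return u1
end

function accum_float_init(x, coeff)
    u1 = 0.0               # Float64 zero: type-stable throughout
    for k in 1:length(coeff)
        u1 += x[k] * coeff[k]
    end
    return u1
end
```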
code_typed shows that when the array is written as `a = [.5, 2]`, the type
of a is not successfully inferred within the function.
Dahua
On Sunday, June 22, 2014 8:08:08 AM UTC-5, a. kramer wrote:
You're right, it is creating a 1x2 array in this case but it doesn't
affect execution time in either case.
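A way to see this for yourself, on an invented toy function (the 2014-era call was code_typed(f, ()); in current Julia the signature is passed as a Tuple type):

```julia
# Inspect what the compiler inferred for a zero-argument function.
f() = begin
    a = [0.5 2]          # 1x2 Matrix{Float64}
    a[1] + a[2]
end
ct = code_typed(f, Tuple{})   # inferred IR paired with the return type
println(last(only(ct)))       # Float64 when inference succeeds;
                              # Any here signals an inference failure
```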
Hi Tim
is this a concern even though I declare u1::Float64 = 0; at the beginning
of the function, in ll2?
t.
On Sunday, 22 June 2014 15:57:53 UTC+1, Tim Holy wrote:
If x1, ..., x6 or coeff are Float64 arrays, then the initialization
u1 = 0; u2 = 0; u3 = 0; u4 = 0; u5 = 0; u6 = 0
I didn't look at ll2. But that one seems OK.
I didn't read the whole thread; are you timing just the execution of the
objective function, or of the whole optimization? You can't easily interpret
the latter.
--Tim
On Sunday, June 22, 2014 09:13:49 AM Thibaut Lamadon wrote:
Hi Tim
is this a
Thank you for the explanations. Reading your papers is on my LOTTD. Some pub
for you here http://stats.stackexchange.com/a/104290/8402 ;)
I'm trying 4 procs and 300x300 dense matrices. I'm not used to git, so I
put the code here:
function cannon_par(a,b) # for square matrices; nworkers() must be a perfect square
s = size(a,1)
nblocks = nworkers() # number of procs
size_B = int(sqrt(nblocks)) # blocks per row/column of the process grid
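For reference, here is a serial blocked multiply of the kind Cannon's algorithm parallelizes — a hypothetical helper, not part of cannon_par, with nb playing the role of size_B above:

```julia
# C = A*B computed block by block on an nb x nb block grid;
# assumes nb evenly divides size(a, 1), as cannon_par does.
function blocked_mul(a, b, nb)
    s = size(a, 1)
    bs = div(s, nb)                       # block edge length
    c = zeros(eltype(a), s, s)
    for I in 1:nb, J in 1:nb, K in 1:nb   # C[I,J] += A[I,K] * B[K,J]
        ri = (I-1)*bs+1 : I*bs
        rj = (J-1)*bs+1 : J*bs
        rk = (K-1)*bs+1 : K*bs
        c[ri, rj] += a[ri, rk] * b[rk, rj]
    end
    return c
end
```

Cannon's algorithm performs the same K-loop, but each worker holds one (I, J) block and the A/B blocks circulate between workers instead of being indexed from shared arrays — which is where the communication cost lives.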
I just had a quick look. Here are some ideas for a few exercises.
You can use comprehensions in some exercises, e.g.
Checkerboard pattern:
Float64[(i+j)%2 for i=1:8, j=1:8]
10x10 matrix with row values ranging from 0 to 9:
Float64[j for i=0:9, j=0:9]
Same with Apprentice.4:
[(x,y) for x in linspace(0,1,10), y in linspace(0,1,10)]
It seems that what is called meshgrid() isn't included in Julia because
it's almost never really needed.
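A small illustration of that point, with an invented function: a comprehension evaluates the function over the grid directly, so no coordinate matrices are ever built (linspace was the 2014 spelling of today's range call):

```julia
xs = range(0, 1, length = 10)   # 2014: linspace(0, 1, 10)
ys = range(0, 1, length = 10)
# No meshgrid needed: iterate both ranges in one comprehension.
z = [x^2 + y^2 for x in xs, y in ys]   # 10x10 Matrix{Float64}
```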
Good work on these exercises, although I fear that the questions, being
designed for numpy, may not accurately reflect typical julia
Hi people,
Just to let you know that I have installed the new version of Julia (the
one on GitHub), which has the new implementation of cumsum, and the
performance improvement is absolutely amazing!
Running @profile on my function with the former version of cumsum, the
resulting
Installation in OpenSuSE 13.1 running in VMware Workstation fails with the
following message:
/bin/sh: line 2: patch: command not found
make[2]: *** [dsfmt-2.2/config.status] Error 127
make[1]: *** [julia-debug] Error 2
make: *** [debug] Error 2
Installs OK on Scientific
did this code ever find its way into DualNumbers.jl? I anticipate it's
going to be quite helpful.
-Thom
On Fri, Jun 6, 2014 at 10:32 AM, Thomas Covert thom.cov...@gmail.com
wrote:
Haven't been able to try it since I'm currently travelling. I bet it will
turn out to be useful though.
Maybe. Did someone create a pull request?
— John
On Jun 22, 2014, at 5:22 PM, Thomas Covert thom.cov...@gmail.com wrote:
did this code ever find its way into DualNumbers.jl? I anticipate it's
going to be quite helpful.
-Thom
On Fri, Jun 6, 2014 at 10:32 AM, Thomas Covert
There was a PR that was prematurely merged; it's still being discussed:
https://github.com/JuliaDiff/DualNumbers.jl/pull/11
In the meantime, a generic Cholesky factorization has been proposed (
https://github.com/JuliaLang/julia/pull/7236) which would solve the
original issue but not necessarily