Fair point. I timed this in a loop and it seems to be about an order of
magnitude slower, which (considering that you're redefining the function
every time it runs) is actually surprisingly good – it seems that doing so
only takes a microsecond.
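A rough sketch of how that per-redefinition cost could be measured (names here are hypothetical; `@eval` re-evaluates the definition in module scope on every pass):

```julia
# Hypothetical micro-benchmark: how long does redefining a
# function actually take?
function redef_cost(n)
    t = @elapsed for i in 1:n
        @eval f(x) = x + $i   # redefine f on every iteration
    end
    return t / n              # average seconds per redefinition
end

avg = redef_cost(1_000)
```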
On Saturday, 17 May 2014 21:20:34 UTC+1, Tim
Although it's worth pointing out the overhead is only present when using
the scope workaround; if you're in an inner loop and the 1μs overhead is
non-negligible (it seems unlikely that this would actually be a bottleneck,
but who knows) you could just use my original macro. Overall I'm not
Thanks all, those look like neat solutions.
--
Carlos
On Fri, May 16, 2014 at 11:36 PM, Stefan Karpinski <ste...@karpinski.org> wrote:
When you write fill!(arr, ChannVals()) you are asking to fill arr with
the one value that is the result of evaluating
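ChannVals' actual definition isn't shown in the thread, but a stand-in mutable type illustrates the aliasing that fill! produces (current Julia syntax):

```julia
# Stand-in for the ChannVals type from the thread (assumed mutable)
mutable struct ChannVals
    x::Int
end

arr = Vector{ChannVals}(undef, 3)
fill!(arr, ChannVals(0))   # every slot references the *same* object
arr[1].x = 42              # ...so this mutates "all" of them

# Constructing one value per element avoids the aliasing:
arr2 = [ChannVals(0) for _ in 1:3]
arr2[1].x = 42
```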
This is probably related to openblas, but it seems that tanh() is not
multi-threaded, which prevents a considerable speed improvement.
For example, MATLAB does multi-thread it and gets something around 3x
speed-up over the single-threaded version.
For example,
x = rand(10,200);
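The snippet above is cut off; a minimal version of the comparison in current Julia syntax (the dot-broadcast form did not exist in 0.3, where `tanh(x)` on a matrix dispatched through vectorize_1arg) might look like:

```julia
x = rand(10, 200)
@time y = tanh.(x)   # elementwise tanh; single-threaded via openlibm
```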
forgot to add versioninfo():
julia> versioninfo()
Julia Version 0.3.0-prerelease+2921
Commit ea70e4d* (2014-05-07 17:56 UTC)
Platform Info:
System: Linux (x86_64-linux-gnu)
CPU: Intel(R) Xeon(R) CPU X5690 @ 3.47GHz
WORD_SIZE: 64
BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH
now that I think about it, maybe openblas has nothing to do here, since
@which tanh(y) leads to a call to vectorize_1arg().
If that's the case, wouldn't it be advantageous to have a
vectorize_1arg_openmp() function (defined in C/C++) that works for
element-wise operations on scalar arrays,
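What such a vectorize_1arg_openmp would do can now be sketched in pure Julia with Threads.@threads (which did not exist at the time of this thread; parapply from PR #6741 was the precursor):

```julia
# Threaded elementwise map: a sketch of what a parallel
# vectorize_1arg could look like with today's threading support.
function threaded_map!(f, out::AbstractArray, x::AbstractArray)
    Threads.@threads for i in eachindex(out, x)
        @inbounds out[i] = f(x[i])
    end
    return out
end

x = rand(1_000)
y = threaded_map!(tanh, similar(x), x)
```

With `JULIA_NUM_THREADS` set above 1, the loop chunks are distributed across threads; with one thread it degrades gracefully to a serial loop.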
On Saturday, 17 May 2014 at 20:56 -0700, z...@rescale.com wrote:
Sorry I missed a line when transcribing the code snippet in my
message. The while loop should read:
Please provide the complete code and check it's working before posting.
After replacing the loop in the rest of the code, I got
Hi Carlos,
I am working on something that will allow multithreading Julia
functions (https://github.com/JuliaLang/julia/pull/6741). Implementing
vectorize_1arg_openmp is actually a lot less trivial than it sounds, as the
Julia runtime is not thread safe (yet)
Your example is great. I first got a
Hi Tobias, I saw your pull request and have been following it closely, nice
work ;)
Though, in the case of element-wise matrix operations, like tanh, there is
no need for extra allocations, since the buffer should be allocated only
once.
From your first code snippet, is julia smart enough to
On Saturday, May 17, 2014 02:04:49 PM Andrew Dabrowski wrote:
But is it wrong to use it the way I suggested?
Not wrong, but whether it buys you anything depends on what comes next. The
sizehint takes effect after the call to f(oldarray) returns, and it doesn't
affect f at all. However, if the
Hi all,
After some twiddling and debugging I can finally announce the first alpha
version of (what I suspect to be the first) hydrodynamics code written in
julia:
https://github.com/natj/hydro
There are still quite a lot of things to do like parallelization but
This is really cool! Thanks for sharing it, especially with the video.
You might be interested in packaging this up
(http://docs.julialang.org/en/latest/manual/packages/#package-development);
it doesn't have to be officially registered or anything, but just putting
everything into a module would
btw, the code you just sent works as is with your pull request branch?
--
Carlos
On Sun, May 18, 2014 at 1:04 PM, Carlos Becker <carlosbec...@gmail.com> wrote:
Hi Tobias, I saw your pull request and have been following it closely,
nice work ;)
Though,
Sure, the function is Base.parapply though. I had explicitly imported it.
In the case of vectorize_1arg it would be great to automatically
parallelize comprehensions. If someone could tell me where the actual
looping happens, this would be great. I have not found that yet. Seems to
be
It starts to get interesting if you want to have a 2D online display where
data is colorized and you want to apply filter operations that can easily
done on the GPU. Interesting but IMHO basic 3D graphics is more important
at this stage.
On Sunday, 18 May 2014 16:02:10 UTC+2, Andreas wrote:
Sounds great!
I just gave it a try, and with 16 threads I get 0.07sec which is impressive.
That is when I tried it in isolated code. When put together with other
julia code I have, it segfaults. Have you experienced this as well?
On 18 May 2014 at 16:05, Tobias Knopp <tobias.kn...@googlemail.com> wrote:
The computation of `tanh` is done in openlibm, not openblas, and it is not
multithreaded. Probably, MATLAB uses Intel's Vectorized Mathematical
Functions (VML) in MKL. If you have MKL you can do that yourself. It makes
a big difference as you also saw in MATLAB. With openlibm I get
julia> @time y
Well, it's probably not bad performance for a CPU-based rendering system.
But with my crappy Intel GPU on fullscreen(1920x1200) and anti-aliased, I
can draw 10,000,000 3D lines in ~ 0.02 seconds, which makes 1 second for
100,000 lines look really slow.
Even 100,000,000 still offers near real time
Dear Community,
Sorry if this question was already asked, I'm a real julia novice! I
installed the version julia-0.2.1-win64.exe and tried the first steps in
Windows 8.1 environment with Julia but I got the following error message
(DOS box: C:\WINDOWS\system32\cmd.exe):
Fatal: error throw and
And I am pretty excited that it seems to scale so well at your setup. I
have only 2 cores so could not see if it scales to more cores.
On Sunday, 18 May 2014 16:40:18 UTC+2, Tobias Knopp wrote:
Well, when I started I got segfaults all the time :-)
Could you please send me a minimal code
I've included the entire code as attachments per Milan's request.
Thanks!
enc
On Sunday, May 18, 2014 3:28:14 AM UTC-7, Milan Bouchet-Valat wrote:
On Saturday, 17 May 2014 at 20:56 -0700, za...@rescale.com wrote:
Sorry I missed a line when transcribing the code snippet in
Yeah, I intend to wrap this into a package once I get all the testing and
major refactors done. First I also have to come up with a proper name.
Sadly there seem to be no famous researchers named Julia in the field
whose last name I could have stolen for this. I was thinking Toro or Bull
On my retina display it often uses less than half of the terminal window
width.
Are you expanding the window after starting Julia? We’re using the same
machinery as Base Julia to determine the window width, which doesn’t adjust
dynamically.
— John
On May 17, 2014, at 3:04 PM, Rob J.
High quality and low complexity vector graphics.
By high quality I mean clear, nice and customizable.
By low complexity I mean direct generation of SVG/PDF primitives: opengl2ps is
horrible for that purpose, for instance opengl spheres are approximated by
thousands of triangles instead of
Many thanks Cameron, I'll try that setup. Did I understand that you use
brew to compile julia?
On Friday, May 16, 2014 4:21:19 PM UTC+2, Ethan Anderes wrote:
+1 for Cameron. I use the same workflow.
Just downloaded it today again to try it out and the binary has the same
startup times as from source now. The version of darwin in the binary is
still 12.5.0 vs 13.1.0 from source. I have no idea if that's an issue or
not but the startup time is fine now, thanks Elliot.
Dom
On Saturday, May
Funny, in a similar machine (but running Windows) I get the opposite
Matlab 2012a (32 bits)
tic; inv(K); toc
Elapsed time is 3.837033 seconds.
julia> tic(); inv(K); toc()
elapsed time: 1.157727675 seconds
1.157727675
julia> versioninfo()
Julia Version 0.3.0-prerelease+3081
Commit eb4bfcc*
Any particular reason that when s is a Set, sizehint( s, n ) returns
s.dict rather than s?
Thanks. What confused me is whether it's advantageous to prepare the new
variable for the initial value, or whether that's done automatically. I.e.
is
newarray = f( oldarray )
OK or is it better to do
newarray = similar( oldarray )
followed by adding the elements of f( oldarray ) 1 by 1?
Seems like the Windows and Mac versions of Julia call different blas/lapack
routines. Might that be the cause? Is it possible for me to ask julia to
use a different blas/lapack?
On Sunday, May 18, 2014, J Luis jmfl...@gmail.com wrote:
Funny, in a similar machine (but running Windows) I get
If you know the size in advance, it's always better to use that information.
Use a comprehension, or allocate it at the final size and use a loop. sizehint
is really for things like push!. Try timing different approaches, and you'll
discover that sizehint helps some but does not make push! as
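The advice above, in current syntax (sizehint was later renamed sizehint!):

```julia
n = 10_000
# Known size: a comprehension allocates once at the final size.
a = [i^2 for i in 1:n]

# push! with sizehint!: capacity is reserved up front, so the
# vector never reallocates, but each push! still does per-element
# bookkeeping -- typically slower than the comprehension.
b = Int[]
sizehint!(b, n)
for i in 1:n
    push!(b, i^2)
end
```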
Because not so many may have Matlab installed on Linux,
here are my votes:
Matlab: 1.59 sec
Julia: 0.92 sec
julia> versioninfo()
Julia Version 0.3.0-prerelease+2985
Commit 645c696 (2014-05-10 17:14 UTC)
Platform Info:
System: Linux (x86_64-linux-gnu)
CPU: Intel(R) Core(TM) i3-3217U CPU @
Really nice work! Would be awesome (and non-trivial :D) to parallelize it,
would be a pretty cool Julia demonstration.
On Sunday, May 18, 2014 7:41:43 AM UTC-4, Joonas Nättilä wrote:
Hi all,
After some twiddling and debugging I can finally announce the first alpha
version of (what I
Probably because nobody noticed/cared before you did. It seems likely that this
could happen by accident.
See: https://github.com/JuliaLang/julia/blob/master/base/set.jl#L29
Great to see that Tobias' PR rocks ;)
I am still getting a weird segfault, and cannot reproduce it when put to
simpler code.
I will keep working on it, and post it as soon as I nail it.
Tobias: any pointer towards possible incompatibilities of the current state
of the PR?
thanks.
I just noticed, I haven't mentioned anywhere, that this will be OpenGL
plotting only.
There are just limited options for rendering shapes, but there are
definitely some more tricks to render shapes, than by approximating them
with a gazillion triangles.
One is, to draw pixels on a quad in the
Yes, that's an oversight. It should return s instead.
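The fix amounts to returning the collection itself after resizing its backing Dict; a toy version (type and function names hypothetical, to avoid redefining Base methods):

```julia
# Toy Set wrapper mirroring Base's Dict-backed Set
struct MySet{T}
    dict::Dict{T,Nothing}
end

function my_sizehint!(s::MySet, n::Integer)
    sizehint!(s.dict, n)   # grow the backing Dict...
    return s               # ...but return the Set itself, as suggested
end

s = MySet(Dict{Int,Nothing}())
```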
On Sun, May 18, 2014 at 5:30 PM, Ivar Nesje iva...@gmail.com wrote:
Probably because nobody noticed/cared before you did. It seems likely that
this could happen by accident.
See:
There are instructions in the Julia README and on Intel's website for
running Julia with MKL:
https://github.com/JuliaLang/julia#intel-math-kernel-libraries
https://software.intel.com/en-us/articles/julia-with-intel-mkl-for-improved-performance
-- Leah
On Sun, May 18, 2014 at 3:59 PM, Thomas
I actually said *not* to implement direct jpeg export. I thought you were
targeting stuff like 3D graphs, in which case jpeg export generally does more
harm than good, because most people don't know which image format to
choose and use what they are most used to seeing: jpeg; even though
Awesome, glad it's working.
The Darwin version is because the binaries distributed online are built on
an OSX 10.8 machine, so it gets baked in with a different Darwin version.
-E
On Sun, May 18, 2014 at 11:06 AM, Dom Luna dluna...@gmail.com wrote:
Just downloaded it today again to try it out
Hi Jon,
No -- I pull julia via git on github, and compile by hand every few days.
I've symlinked ~/bin/julia to the directory that I compile julia into, so
julia is in my path.
On 10.9, the native option is Clang, which works fine. I've been able
to dodge gcc (gnu) for all system dependencies
iterating over a dictionary yields (key, value) tuples:
julia> proportionmap([1,1,2,2,3])
[2=>0.4,3=>0.2,1=>0.4]
julia> for (k,v) in proportionmap([1,1,2,2,3])
           println(k)
       end
2
3
1
julia>
—James
On Sunday, May 18, 2014 7:26:39 PM UTC-5, Jason Solack wrote:
Hello everyone!
I'm
perfect, thank you very much!
On Sunday, May 18, 2014 10:10:47 PM UTC-4, James Porter wrote:
iterating over a dictionary yields (key, value) tuples:
julia> proportionmap([1,1,2,2,3])
[2=>0.4,3=>0.2,1=>0.4]
julia> for (k,v) in proportionmap([1,1,2,2,3])
           println(k)
       end
2
3
1
Actually, as far as I can tell now, one should really have 0.3 to use
DataFrames after all. Sorry for posting the question.
On Friday, May 16, 2014 8:47:57 PM UTC-7, Travis Porco wrote:
This from Julia 0.2.1 on MacOSX 10.7.5, using the DataFrames package,
just now updated before the