For the toy example you are right, a parametric function would be better -
what I'm actually trying to do is define several `searchsortedfirst`
functions, where the lt function is different.
This is because I have a vector of one immutable composite type of several
values I wish to sort and
Also, as Mauro said, instead of manually creating the functors, packages
like FastAnonymous.jl make it easier for you.
On Wednesday, June 17, 2015 at 10:11:22 PM UTC+2, Kristoffer Carlsson wrote:
You could also use something called functors which basically are types
that overload the call
Dear all,
I have just finished giving a 2-day Julia workshop in Paris for scientists
who already had some knowledge
of scientific computing. The materials (Jupyter notebooks) are available at
https://github.com/dpsanders/hands_on_julia/
This is somewhat different from most previous materials,
Hi, I want to create a macro with which I can create a function with a
custom bit of code:
In the repl I can do a toy example:
name = :hi
vectype = Vector{Int}
quote
    function ($name)(v::$vectype)
        println("hi")
    end
end
However if I try to put this in a macro and use
My gut reaction is that you don't want to use a macro here. Can you use a
parametric definition:
f{T}(v::T) = do something useful with the T
or can you just use multiple dispatch:
f{T<:FloatingPoint}(v::Vector{T}) = something for floats
f{T<:Integer}(v::Vector{T}) = something for ints
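A concrete sketch of that dispatch route (0.3-era syntax; the method bodies here are invented for illustration):

```julia
# Dispatch picks the method from the vector's element type.
f{T<:FloatingPoint}(v::Vector{T}) = "float path: $(sum(v))"
f{T<:Integer}(v::Vector{T})       = "int path: $(sum(v))"

f([1.5, 2.5])   # -> "float path: 4.0"
f([1, 2, 3])    # -> "int path: 6"
```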
On Wed, 2015-06-17 at 21:23, Ben Ward axolotlfan9...@gmail.com wrote:
For the toy example you are right, a parametric function would be better -
what I'm actually trying to do is define several `searchsortedfirst`
functions, where the lt function is different.
This is because I have a vector
You could also use something called functors which basically are types that
overload the call function. When you pass these as argument the compiler
can specialize the function on the type of the functor and thus inline the
call. See here for example for them being used effectively for
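A minimal sketch of the functor pattern being described (0.4-style `call` overloading; the type name and comparison are invented):

```julia
# A fieldless type that acts as a comparison function.
immutable ByAbs end

# Overloading call (Julia 0.4) makes instances callable like functions.
Base.call(::ByAbs, a, b) = abs(a) < abs(b)

# Because sort! specializes on the concrete type ByAbs, the comparison
# can be inlined, unlike a 0.3 anonymous function.
v = [3, -1, 2]
sort!(v, lt=ByAbs())   # sorts by absolute value: [-1, 2, 3]
```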
I think I'm moving in the right direction. I downloaded several packages
that IJulia depends on and put them in the ~/.julia/v0.3 directory along
with the IJulia package itself. Before I was just sticking them in the
~/.julia directory, and I don't think Julia was seeing the packages.
When
Well, if https is in fact accessible then the best bet is to troubleshoot
git directly first. After configuring the `insteadOf` git setting (per the
README) try something simple like `git clone
https://github.com/JuliaLang/julia`. There are a lot of guides on the
internet for troubleshooting this
That's great that it's fixed in 0.4, but even in 0.3.X I would still label
it inconsistent behavior, or perhaps even a bug. Why should this happen:
julia> x.2 == x*.2
true
julia> x0.2 == x*0.2
ERROR: x0 not defined
julia> x2 == x*2
ERROR: x2 not defined
It seems consistent
I guess for Base sort! passing a functor overriding call() as lt is
possible, if it accepts the three arguments passed to lt in lt(o, x, v[j-1])
On Wednesday, June 17, 2015 at 9:11:22 PM UTC+1, Kristoffer Carlsson wrote:
You could also use something called functors which basically are types
i guess i had less confidence that types come out right with vcat than i do
with tuple. but now i come to justify why, i can't, so yes, sorry for
ignoring that earlier.
On Wednesday, 17 June 2015 19:18:32 UTC-3, Jameson wrote:
In 0.4, you can now construct vector types directly:
After much discussion this was changed in 0.4.
On Jun 17, 2015, at 6:33 PM, Phil Tomson philtom...@gmail.com wrote:
Maybe this is expected, but it was a bit of a surprise to me:
julia> function foo()
       red::Uint8 = 0x33
       blue::Uint8 = 0x36
       (red-blue)
Maybe this is expected, but it was a bit of a surprise to me:
julia> function foo()
       red::Uint8 = 0x33
       blue::Uint8 = 0x36
       (red-blue)
       end
julia> foo()
0xfffd
julia> typeof(foo())
Uint64
The fact that it overflowed wasn't surprising, but
This has been changed on 0.4.
https://github.com/JuliaLang/julia/issues/3759
-Jacob
On Wed, Jun 17, 2015 at 4:33 PM, Phil Tomson philtom...@gmail.com wrote:
Maybe this is expected, but it was a bit of a surprise to me:
julia> function foo()
red::Uint8 = 0x33
Sadly, this function is pretty close to the real workload. I do n-body
simulations of planetary systems. In this problem, the value of n is
small, but the simulation runs for a very long time. So this nested for
loop will be called maybe 10 billion times in the course of one simulation,
and that's
yea, i figured the same thing since I am on the same system using https
through my browser, but for some reason that I don't understand, Julia
won't add/update packages, even when git is configured to use https
On Wednesday, June 17, 2015 at 3:04:26 PM UTC-4, Isaiah wrote:
Are you using the
In 0.4, you can now construct vector types directly:
Vector{T}(dims...)
making construction of some arrays a bit clearer.
But I think the array equivalent of `tuple` is `vcat`.
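Roughly, under 0.4 (sketch):

```julia
v = Vector{Int}(3)     # uninitialized 3-element Array{Int,1}
fill!(v, 0)            # contents are garbage until assigned

# vcat, by contrast, builds the array from its arguments,
# which makes it the closest analogue of tuple:
vcat(1, 2, 3)          # 3-element Array{Int64,1}: [1, 2, 3]
tuple(1, 2, 3)         # (1, 2, 3)
```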
On Wed, Jun 17, 2015 at 5:20 PM andrew cooke and...@acooke.org wrote:
thanks for all the replies.
i really wanted
On Thursday, June 18, 2015 at 3:05:26 AM UTC+10, Kevin Squire wrote:
`open(cmd, "w")` gives back a tuple. Try using
f, p = open(`gnuplot`, "w")
write(f, "plot sin(x)")
There was a bit of discussion when this change was made (I couldn't find
it with a quick search),
Thanks! `readandwrite` looks exactly like what I need. I'll try it out.
I don't mind too much that this (and related) commands return a tuple,
although I'd prefer the alternative discussed in issue 9659. What I'd much
prefer is for them to be documented under Running external programs, and
not
Can you arrange the problem so that you send each CPU a few seconds of
work? The overhead would become negligible.
-- mb
On Wed, Jun 17, 2015 at 4:38 PM, Daniel Carrera dcarr...@gmail.com wrote:
Sadly, this function is pretty close to the real workload. I do n-body
simulations of planetary
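That batching advice might be sketched as follows (the chunk size and `expensive_step` are hypothetical stand-ins for the real workload):

```julia
# Chunked parallel loop: each worker receives a block of iterations,
# so the messaging overhead is paid once per chunk, not once per i.
N = 10_000
chunk = 2_500
@sync @parallel for lo = 1:chunk:N
    for i = lo:min(lo + chunk - 1, N)
        expensive_step(i)   # hypothetical per-iteration work
    end
end
```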
thanks for all the replies.
i really wanted something that took *only* the contents of the array, while
getindex has the type too. so i've defined my own array function:
array(T) = (builder(x...) = T[x...])
which i can use as, say, array(Any).
(the final use case is that this is given to a
Okay, at home I have a mac and so I just downloaded the packages that
`Pkg.add("IJulia")` installed on my home machine, because I wasn't sure
what packages IJulia depended on, and I didn't want to sift through the
source code. So I've now deleted the Homebrew directory.
`using WinRPM` does
Hi,
Daniel Carrera dcarr...@gmail.com writes:
I already have simulation software that works well enough for this. I
just wanted to experiment with Julia to see if this could be made
parallel. An irritating problem with all the codes that solve
planetary systems is that they are all serial --
Hi,
For some strange reason the nightly build does not update/upgrade
beyond the old commit (317a4d1). Any idea why ubuntu does not pull the
updated PPA builds?
$ julia -e 'versioninfo()'
Julia Version 0.4.0-dev+5149
Commit 317a4d1 (2015-06-01 18:58 UTC)
Platform Info:
System: Linux
Hi Jim,
A couple of points:
1) Maybe I'm missing something, but you appear to be calculating the same
inverse twice on every iteration of your loop. That is,
inv(Om_e[(j-1)*nvar+1:j*nvar,:]) gets called twice.
2) As Mauro points out, memory allocation is currently triggered when
slicing into
Actually, looking at it again with fresh eyes - I can define a few new
classes inheriting from Ordering, and then define my own lt(o, x, v[j-1]) for
each. It looks like these predefined orderings and lt methods in the
sorting api - ForwardOrdering, ReverseOrdering and so on - are fast as their
lt()
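A rough sketch of that approach against the Base sorting API (the type name and the field being compared are invented):

```julia
import Base.Order: Ordering, lt

# An ordering that compares records by a chosen component.
immutable ByWeight <: Ordering end
lt(::ByWeight, a, b) = a[2] < b[2]   # compare the second tuple element

v = [(:a, 1.0), (:b, 2.5), (:c, 4.0)]        # already sorted by weight
searchsortedfirst(v, (:x, 2.0), ByWeight())  # -> 2
```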
Changing this would be breaking--syntax that currently works (even if you
don't expect it to) wouldn't work anymore. If someone is actually using
this syntax, then we'd break their code on a release which is billed as a
minor maintenance release. That's not going to work.
There may be a
Dear Mauro:
Thanks for the advice. I did not declare const nvar=6 , but will and let
you know the result.
Yes, I have read the performance section of the manual. Part of the
problem is I still think in MATLAB coding rules.
Jim Nason
On Wednesday, June 17, 2015 at 3:00:03 AM UTC-4, Mauro
Great stuff, seems like I've still got a bunch of research to do,
especially regarding ZMQ.
Jack, I'm really interested in your approach of using your julia process as
a separate dockerfile. Could you explain what you meant by using a ZMQ
server when you needed more communication?
On
It certainly does -- thanks a lot!
On Monday, June 15, 2015 at 4:37:35 PM UTC+2, Huda Nassar wrote:
julia> f = open("test2.txt", "w")
IOStream(<file test2.txt>)
julia> @printf(f, "%0.2f", 1/3)
julia> close(f)
This should do the job
On Monday, June 15, 2015 at 9:50:17 AM UTC-4, Robert DJ wrote:
Hi,
Not sure it will help your specific use case, but see
http://docs.julialang.org/en/release-0.3/manual/calling-c-and-fortran-code/#thread-safety
and an example in
https://github.com/JuliaGPU/CUDArt.jl/blob/master/src/stream.jl
--Tim
On Wednesday, June 17, 2015 12:29:18 AM
Have you tried macroexpanding the expression? Doing so yields
julia> macroexpand(:( for i = 1:N
@sync @parallel for j = (i + 1):N
tmp[j] = i * j
end
end ))
:(for i = 1:N # line 2:
Thanks. That does help. I'm still having problems making it all work, but I
think I need to prepare a minimalist example that illustrates what I want
to do. I'll post a new thread with an example later.
Cheers,
Daniel.
On Tuesday, 16 June 2015 22:28:22 UTC+2, Avik Sengupta wrote:
The
Actually, it seems that @sync is also responsible for setting the variable
#6#v equal to the return object of the call to Base.pfor and then returning
#6#v after calling Base.sync_end().
On Wednesday, June 17, 2015 at 8:22:08 AM UTC-4, David Gold wrote:
Have you tried macroexpanding the
The (global) constants are nvar = 6, number of variables (Int64) and plags
= 6, number of lags in the vector autoregression (Int64), which are defined
previously in the programs.
did you actually declare those const:
const nvar=6
?
If not, this will impact your loop.
function
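The impact can be sketched like this (a non-const global defeats type inference inside the loop; timings are illustrative, not measured):

```julia
nvar_plain = 6          # non-const global: its type may change, so loops
                        # that read it cannot be specialized
const nvar_const = 6    # const global: the compiler knows it is an Int

f_plain() = (s = 0; for i = 1:10^6 s += nvar_plain end; s)
f_const() = (s = 0; for i = 1:10^6 s += nvar_const end; s)

# @time f_plain()   # allocates on every iteration
# @time f_const()   # allocation-free and far faster
```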
I haven't used @everywhere in combination with begin..end blocks, I usually
pair @sync with @parallel - see an example here
https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_6_Bellman.jl,
where I've parallelized the entire nested loop ranging from lines 25 to 47.
Hello,
I have been having a lot of trouble figuring out how to write parallel code
in Julia. Consider this toy example of a serial program:
N = 5
for i = 1:N
for j = (i + 1):N
println(i * j)
end
end
Now, suppose that i * j is an expensive operation, so I
Unfortunately I can't run it (in Julia 0.3.6 the same):
               _
   _       _ _(_)_     |  A fresh approach to technical computing
  (_)     | (_) (_)    |  Documentation: http://docs.julialang.org
   _ _   _| |_  __ _   |  Type "help()" for help.
  | | | | | | |/ _` |  |
  | | |_| | | | (_| |  |
It's not clear to me from your post what you are trying to do - do you want
to plot each of the 10 rows of vec as a line with a different color?
You might want to read this discussion
https://github.com/dcjones/Gadfly.jl/issues/526 suggesting that it is
much easier to do the kind of plot you
On Wednesday, 17 June 2015 10:28:37 UTC+2, Nils Gudat wrote:
I haven't used @everywhere in combination with begin..end blocks, I
usually pair @sync with @parallel - see an example here
https://github.com/nilshg/LearningModels/blob/master/NHL/NHL_6_Bellman.jl,
where I've parallelized the
Did I hear AstroJulia? :)
Best,
Charles
On 16 June 2015 at 14:38, Daniel Carrera dcarr...@gmail.com wrote:
I would love to see the paper when it comes out. I cannot use your code
directly because I need to do a direct NBody rather than a tree code (I am
modelling planetary systems). But
The lack of a formal grammar (and the ability to have one - it's not just
that one hasn't been written yet, I'm not sure that one *could* be written
- the grammar is basically defined only by the parser itself) is one of the
harder problems with Julia.
I had enough problems with a simple
Indeed
julia> getindex(Any, (1,2,3))
1-element Array{Any,1}:
 (1,2,3)
Basically, any code with X[y...] gets lowered to getindex, and then the
corresponding method takes over to do the right thing. Makes the lowering
relatively simple, i imagine.
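A quick illustration of that lowering (sketch):

```julia
args = (1, 2, 3)

# X[y...] splats before lowering, so these two agree:
Any[args...]             # 3-element Array{Any,1}: [1, 2, 3]
getindex(Any, args...)   # same thing

# Passing the tuple unsplatted gives a 1-element array instead:
getindex(Any, args)      # Array{Any,1} containing the tuple (1,2,3)
```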
On Wednesday, 17 June 2015
You can also use chomp() which is specific to newlines (strip() removes all
whitespace)
Base.chomp(string)
Remove a trailing newline from a string
julia> a = "asd\n"
"asd\n"
julia> chomp(a)
"asd"
julia> strip(a)
"asd"
On Wednesday, June 17, 2015 at 4:41:45 AM UTC-5, René Donner wrote:
Hi,
you
Hi everyone,
My adventures with parallel programming with Julia continue. Here is a
different issue from other threads: My parallel function is 8300x slower
than my serial function even though I am running on 4 processes on a
multi-core machine.
julia> nprocs()
4
I have Julia 0.3.8. Here is
Oh, I think the call() thing is just me being confused. That's *only* a
mechanism to allow non-functions to look like functions? I guess my
misunderstanding is more about how apply is defined (it mentions call),
which really isn't important to me right now, so feel free to ignore that
part
So again, this is an informal description... In particular, my nomenclature
is not precise...
So basically, an @parallel is a construct which will take the work to be
done in each iteration of a for loop, and will farm them out to available
remote processors, all at once. This will happen
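A minimal sketch of that farming-out, here with a reduction (the worker count is arbitrary):

```julia
addprocs(3)   # some workers to farm the iterations out to

# @parallel splits 1:100 into ranges, one per worker; the (+)
# reducer combines the partial sums back on the master process.
total = @parallel (+) for i = 1:100
    i^2   # the body runs remotely, one range-slice per worker
end
# total == 338350
```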
Wait #6#v is the name of a variable? How is that possible?
On 17 June 2015 at 14:25, David Gold david.gol...@gmail.com wrote:
Actually, it seems that @sync is also responsible for setting the variable
#6#v equal to the return object of the call to Base.pfor and then returning
#6#v after
I think it just uses getindex (a bit of a hack...):
julia> @which Int[3]
getindex(T::Union(DataType,UnionType,TypeConstructor),vals...) at array.jl:119
On Wed, 2015-06-17 at 14:50, andrew cooke and...@acooke.org wrote:
Oh, I think the call() thing is just me being confused. That's *only* a
Today I spent a lot of time finding a bug in my code. It turned out that I
mistakenly wrote sum(X,2) as sum(X.2). No error was reported,
and Julia regarded X.2 as X*0.2. The comma , is quite close to the dot .
on the keyboard and looks quite similar in some fonts. As there is no
Great, looking forward to the next version!
My Julia is 0.3.9 so far.
On Wednesday, June 17, 2015 at 3:14:24 PM UTC+2, Seth wrote:
On Wednesday, June 17, 2015 at 8:04:11 AM UTC-5, Jerry Xiong wrote:
Today I spent a lot of time finding a bug in my code. It turned out that I
mistakenly wrote
Thanks. I didn't know about macroexpand(). To me macros often feel like
black magic.
On 17 June 2015 at 14:22, David Gold david.gol...@gmail.com wrote:
Have you tried macroexpanding the expression? Doing so yields
julia> macroexpand(:( for i = 1:N
@sync @parallel for
The `getindex` explanation is not quite correct. Note, for instance, that
Avik's example returns a 1-element array containing the tuple (1, 2, 3),
not the 3-element array [1, 2, 3].
As far as *callable* objects are concerned, I think you probably want
`vcat` (or 'hcat', depending on your
If I want to pass the function that constructs an array of Any, given some
values, to another function, what do I use?
Here's an example that might make things clearer:
julia> f(x...) = Any[x...]
f (generic function with 1 method)
julia> apply(f, 1,2,3)
3-element Array{Any,1}:
 1
 2
 3
julia
However, it does look like this use pattern resolves to 'getindex':
julia> dump(:( Int[1, 2, 3, 4] ))
Expr
  head: Symbol ref
  args: Array(Any,(5,))
    1: Symbol Int
    2: Int64 1
    3: Int64 2
    4: Int64 3
    5: Int64 4
  typ: Any
julia> getindex(Int, 1, 2, 3, 4)
Note that in most plotting packages, `plot(vec)` is interpreted as 1000
functions of 10 elements each. That is, when given a matrix as argument,
they plot the columns.
-- mb
On Tue, Jun 16, 2015 at 8:10 PM, Nelson Mok laishun@gmail.com wrote:
Hi,
Please comment, how to plot the matrix
I need to call WaitForMultipleObjects (the Windows equivalent of the select
call on Linux) from Julia using ccall. As this is a blocking function, I
would like to call it within another coroutine (task).
The problem is that Tasks in Julia only function effectively if all
the blocking calls
^ True, I assumed in my post above that the rows were the object of
interest here, just because it seems more natural to plot 10 data series of
1000 points than 1000 data series with 10 observations each...
On Wednesday, June 17, 2015 at 3:35:18 PM UTC+1, Miguel Bazdresch wrote:
Note that in
You're copying a lot of data between processes. Check out SharedArrays. But I
still fear that if each job is tiny, you may not get as much benefit without
further restructuring.
I trust that your real workload will take more than 1ms. Otherwise, it's
very unlikely that your experiments in
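A SharedArray version might be sketched as follows (with `expensive_step` standing in for the real per-element work):

```julia
# A SharedArray lives in shared memory, so workers on the same machine
# read and write it directly instead of shipping copies in messages.
a = SharedArray(Float64, 1000)
@sync @parallel for i = 1:length(a)
    a[i] = expensive_step(i)   # hypothetical per-element work
end
```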
... and we're always looking for new contributors to JuliaAstro
http://juliaastro.github.io/ for commonly-used functionality!
- Kyle
On Wed, Jun 17, 2015 at 4:56 AM, Tim Holy tim.h...@gmail.com wrote:
On Wednesday, June 17, 2015 12:00:02 PM Charles Novaes de Santana wrote:
Did I hear
But it's so colorful!!
http://imgur.com/F7PqMMZ
On Wednesday, June 17, 2015 at 11:09:26 AM UTC-4, Nils Gudat wrote:
^ True, I assumed in my post above that the rows were the object of
interest here, just because it seems more natural to plot 10 data series of
1000 points than 1000 data
Gah. I'm sorry. I can't reproduce my original results! I don't know why,
but the same tests I ran two days ago are not giving me the same timing. I
need to go back to the drawing board here.
On Wednesday, June 17, 2015 at 11:37:52 AM UTC-5, Josh Langsfeld wrote:
For me, map is 100x slower:
`open(cmd, "w")` gives back a tuple. Try using
f, p = open(`gnuplot`, "w")
write(f, "plot sin(x)")
There was a bit of discussion when this change was made (I couldn't find it
with a quick search), about this returning a tuple--it's a little
unintuitive, and could be `fixed` in a few different ways
I would have expected the comprehension to be faster. Is this in global
scope? If so you may want to try the speed comparison again where each of
these occur in a function body and only depend on function arguments.
On Tue, Jun 16, 2015 at 10:12 AM, Seth catch...@bromberger.com wrote:
I have
Soo, in summary (and I do apologize for spamming this thread; I don't
usually drink coffee, and when I do it's remarkable):
1) Mauro and Avik seem to be right about the common use case Any[1, 2, 3,
4] lowering to getindex -- my apologies for hastily suggesting otherwise
2) I suspect I was wrong
The speedups are both via the REPL (global scope?) and inside a module. I
did a code_native on both - results are
here: https://gist.github.com/sbromberger/b5656189bcece492ffd9.
On Wednesday, June 17, 2015 at 9:56:22 AM UTC-5, Stefan Karpinski wrote:
I would have expected the comprehension
For me, map is 100x slower:
julia> function f1(g,a)
           [Pair(x,g) for x in a]
       end
f1 (generic function with 1 method)
julia> function f2(g,a)
           map(x->Pair(x,g), a)
       end
f2 (generic function with 1 method)
julia> @time f1(2,ones(1_000_000));
25.158 milliseconds (28491
Note that inside a module is also global scope as each module has its
own global scope. Best move it into a function. M
On Wed, 2015-06-17 at 17:22, Seth catch...@bromberger.com wrote:
The speedups are both via the REPL (global scope?) and inside a module. I
did a code_native on both -
We've put Julia and our package in a docker container and built a Ruby on
Rails front end that spins up containers when needed. When we've needed to
do a little more communication with the julia instance, we've run a ZMQ
server in Julia and fed in the required data. (after poking open the ports
Hello,
Gaston.jl is a plotting package based on gnuplot. Gnuplot is a command-line
tool, so I send commands to it via a pipe. I open the pipe (on Linux) with
a ccall to popen, and write gnuplot commands to the pipe using a ccall to
fputs.
This works fine, but I'm trying to see if Julia's native
Sorry - it's part of a function:
in_edges(g::AbstractGraph, v::Int) = [Edge(x,v) for x in badj(g,v)]
vs
in_edges(g::AbstractGraph, v::Int) = map(x->Edge(x,v), badj(g,v))
On Wednesday, June 17, 2015 at 10:51:22 AM UTC-5, Mauro wrote:
Note that inside a module is also global scope as each