Ah, thanks for that question. I was testing from IJulia. Those tests were
not showing a boost from Steven Johnson's hash function, perhaps because of
the sequence in which I executed the code.
Rerunning as a script from command line using the Base.hash trick +
SubArrays yields run times
data = matread("data.mat")
data
["data" => 2884x4 Array{Float64,2}:
 0.0  1.0  2.0  3.0
 4.0  5.0  5.0  4.0
 ...
 2.0  1.0  3.0  3.0]
sum(data) gives the same as before :/
How do I extract just the data?
Paul
On Wednesday, March 5, 2014 4:52:19 AM UTC+1, Keith Campbell wrote:
Using Symbols seems to help, i.e.:
Symbols are interned, making comparison and hashing very fast, but they are
not garbage collected (https://github.com/JuliaLang/julia/issues/5880).
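For example, a small sketch of what interning buys you:

s1 = Symbol("abc")     # spelled symbol("abc") in 2014-era Julia
s2 = :abc
s1 === s2              # true: both names refer to the same interned object
d = Dict{Symbol,Int}() # Symbol keys therefore hash and compare in O(1)
d[:abc] = 1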
Depending on your use case, it may be
Hi all,
I collected some of my convenience functions for 3D plots using OpenGL into
a repository. You can make surface plots from matrices and parametric
surfaces from functions. The plots are not rendered to the screen but into
an Image object from the Images package, which displays directly in an
Hello everyone!
I'm just a few steps from actually starting the Julia documentation
translation into Spanish. I have been studying the tools I'm going to need
and use (Sphinx, Lokalize, Pology, dictionaries, spell-checkers,
glossaries, git, GitHub, etc.) and I would like to have some advice
Sorry for bumping a month-old thread, but I was searching for this:
more time is spent building and precompiling the system image (which
cannot be sped up with -j N) than actually compiling and linking its C
sources.
Can you explain why building the system image can't be parallelized? Could
Theoretically, yes. The tedious-but-possible approach is to have the
following things, plus independent pre-compilation of packages; then the
minimal base can be compiled first and all the internal packages
compiled independently and linked. Big wins would come from precompiling
linalg and
Are you using a binary or a source install? I’ve had a few issues with the
changed libRmath library name when working with older binary releases.
— John
On Mar 5, 2014, at 3:10 AM, James W watso...@gmail.com wrote:
Hey everyone,
Here's my problem:
I type:
using Distributions
Hello everyone, I have some code that loops quite a bit. Before the loop
begins, I set up a template array of zeros:
sT = zeros(Float64, 1000)
Then within my loop I set an array s to the template:
s = sT
I'm then doing a bunch of work with the s array... at the top of the loop
I'd like to zero
Like C or Python, but unlike Matlab or R, when you write s = sT, s is a
reference to the same array as sT, so if you change one, you change both –
since they are the same array. If you want to make a copy, you can
explicitly make a copy: s = copy(sT). If the matrix you are copying is all
zeros,
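To make the difference concrete, a quick sketch:

sT = zeros(Float64, 1000)
s = sT            # s and sT are now the same array
s[1] = 3.0
sT[1]             # 3.0 -- the change shows up under both names
s = copy(sT)      # an independent array with the same contents
fill!(s, 0.0)     # re-zero s in place; sT is untouched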
Note that matread returns a dictionary of all variables contained within
the file data.mat, with the saved variable names as the keys (since there
can be more than one variable saved within a matfile). What I think you're
really after is something more like this:
julia> vars =
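Something along these lines (assuming the file and the saved variable are
both named "data", as in the original post):

using MAT
vars = matread("data.mat")   # Dict mapping saved variable names to values
data = vars["data"]          # pull out just the 2884x4 matrix
sum(data)                    # now sums the numbers, not the Dict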
Seems like that link is broken - works for me if I use raw.github.com/...
and then save the file.
Nice work, though! Is there any way you could get this to work with
Three.js?
On Wednesday, 5 March 2014 14:22:28 UTC, Fabian Gans wrote:
Hi all,
I collected some of my convenience functions
it seemed to me (when i looked at the code) that it would be trivial to add to
Memoize.jl by adding a type parameter to the macro and reading from the
cache into a typed variable.
but since i spilt coke all over my netbook yesterday my julia programming
is on hiatus :o(
andrew
On Wednesday, 5
It already accepts a type as an optional parameter.
On Wednesday, March 5, 2014 7:41:42 PM UTC+1, andrew cooke wrote:
it seemed to me (when i looked at the code) that it would be trivial to add
to Memoize.jl by adding a type parameter to the macro and reading from the
cache into a typed
I'm trying to check out how Julia handles parallel for loops and I'm
getting some errors.
I'm not sure if it's because I'm working with a single array that I can't
append to in parallel, so I thought I'd ask.
nprocs()
addprocs(6)
tic()
onPass = 1
a = Array(Int64, 0)
@parallel for i =
Is j = 100 a typo (did you mean 1:100)?
On Wednesday, March 5, 2014 2:41:51 PM UTC-5, Jason Solack wrote:
I'm trying to check out how Julia handles parallel for loops and I'm
getting some errors.
I'm not sure if it's because I'm working with a single array that I can't
append to
Hi Jason,
I don't really understand what you are trying to do but the problem is that
the array a is not read-only in this context.
From the manual: "Using 'outside' variables in parallel loops is perfectly
reasonable if the variables are read-only."
See the parallel section of the manual:
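As an alternative to pushing into a shared array, here is a sketch of the
reduction form (the test in the body is just a stand-in; in modern Julia
this needs `using Distributed` and the macro is spelled @distributed):

addprocs(6)
total = @parallel (+) for i = 1:100
    i % 2 == 0 ? 1 : 0   # each worker returns partial sums, combined with (+)
end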
If you have e.g. a function that always returns Int, then you can specify
the associative type as Dict{Any,Int}, which allows type inference to
determine that anything pulled out of it is an Int, so the return type of
the memoized function can be inferred:
julia> using Memoize
julia> @memoize
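A stand-in for the truncated example (the exact placement of the cache type
in the macro call is an assumption; `f` is just a toy function):

using Memoize
@memoize Dict{Any,Int} function f(n)
    println("computing f($n)")   # only runs on a cache miss
    n + 1
end
f(1)   # computes and caches
f(1)   # served from the Dict{Any,Int} cache; return type inferred as Int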
Dear community,
I apologize if this is very trivial, but I haven't been able to figure this
out on my own.
I have been looking in the Julia documentation - especially the Text I/O
section - for a way of appending data to a file. I have a for loop for
which I would like to append some info to
ah, ok, thanks. andrew
On Wednesday, 5 March 2014 19:03:49 UTC-3, Simon Kornblith wrote:
If you have e.g. a function that always returns Int, then you can specify
the associative type as Dict{Any,Int}, which allows type inference to
determine that anything pulled out of it is an Int, so
You need to open the file in append mode. See here:
http://docs.julialang.org/en/latest/stdlib/base/#Base.open
open(file, "a") do x   # "a" opens the file in append mode
    writecsv(x, data)
end
Then write to it. I believe writecsv and writedlm work fine on open
files, but I can't confirm right now; otherwise, you'll need to use
another question here made me realise i don't understand how return types
are handled in julia.
after all, return types are not specified in functions (are they?). so how
does the system know that get() for Dict{A,B} returns type B?
i guess there has to be whole program type inference on
Hi guys,
Finally, today I was able to package Julia for openSUSE 13.1 (32- and
64-bit versions).
I need testers to verify that everything is OK, especially the 32-bit
version.
The repository link is:
https://build.opensuse.org/package/show/home:Ronis_BR/julia
I would appreciate any help to improve
oh, i think i get it.
you're not solving, you're just propagating.
so you need the specified types to infer the return. and that's local to
the function, so it scales.
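for example, a small sketch of what inference sees:

d = Dict{String,Int}()   # the B in Dict{A,B} is Int
v = get(d, "x", 0)       # so inference concludes v::Int locally, with no
                         # whole-program analysis needed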
ignore me :o)
cheers,
andrew
On Wednesday, 5 March 2014 19:40:30 UTC-3, andrew cooke wrote:
another question here made me
Good names are more than functionality.
I don't think of the Frobenius norm as a vector norm, even though
I'm fully aware that it is.
Note that the 2-norm of the singular values gives the same answer,
but the infinity norm of the singular values is the matrix 2-norm.
Not that big a deal, but
On Wed, Mar 5, 2014 at 6:00 PM, Alan Edelman mit.edel...@gmail.com wrote:
Note that the 2-norm of the singular values gives the same answer
but the infinity norm of the singular values is the matrix 2-norm.
Can you expand on this argument? I'm not getting the point...
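In code, the two identities being quoted are (2014-era Base; in current
Julia these functions live in the LinearAlgebra stdlib):

A = rand(4, 3)
s = svdvals(A)    # vector of singular values of A
norm(s, 2)        # equals the Frobenius norm of A
norm(s, Inf)      # equals the operator 2-norm: the largest singular value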
then how is it a jit? i just checked wikipedia and the definition there is
interpreter + compiler. which would give you stats from the interpreter?
(not trying to be confrontational, just not understanding...!)
thanks,
andrew
On Wednesday, 5 March 2014 20:07:12 UTC-3, Tim Holy wrote:
One could argue that Julia is “statically compiled at run time”. See this talk
by http://vimeo.com/84661077 for a discussion of that viewpoint, which I like.
— John
On Mar 5, 2014, at 3:20 PM, andrew cooke and...@acooke.org wrote:
then how is it a jit? i just checked wikipedia and the
huh. today i learned. thanks. talk running now...
On Wednesday, 5 March 2014 20:25:06 UTC-3, John Myles White wrote:
One could argue that Julia is “statically compiled at run time”. See this
talk by http://vimeo.com/84661077 for a discussion of that viewpoint,
which I like.
— John
Stefan’s discussion of “static compilation” starts around minute 45 or so.
— John
On Mar 5, 2014, at 3:30 PM, andrew cooke and...@acooke.org wrote:
huh. today i learned. thanks. talk running now...
On Wednesday, 5 March 2014 20:25:06 UTC-3, John Myles White wrote:
One could argue
good talk; i should have watched it before now.
On Wednesday, 5 March 2014 20:31:44 UTC-3, John Myles White wrote:
Stefan’s discussion of “static compilation” starts around minute 45 or so.
— John
On Mar 5, 2014, at 3:30 PM, andrew cooke and...@acooke.org wrote:
huh.
It's pretty easy to write a simple radix-2 FFT for power-of-two sizes (e.g.
the Wikipedia article on the Cooley-Tukey FFT has pseudocode). For
arbitrary precision arithmetic, there's not much point in fancier
algorithms.
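A minimal sketch of that recursion (input length must be a power of two;
nothing from FFTW is used):

function fft_radix2(x)
    # recursive decimation-in-time Cooley-Tukey, per the Wikipedia pseudocode
    n = length(x)
    n == 1 && return [complex(x[1])]
    even = fft_radix2(x[1:2:end])   # FFT of even-indexed samples
    odd  = fft_radix2(x[2:2:end])   # FFT of odd-indexed samples
    w = exp(-2 * one(real(eltype(even))) * im * pi / n)  # root of unity in the input's precision
    t = [w^k * odd[k + 1] for k = 0:div(n, 2) - 1]       # twiddled odd half
    return [even .+ t; even .- t]
end

x = map(big, [1.0, 2.0, 3.0, 4.0])   # BigFloat input stays in extended precision
fft_radix2(x)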
(FFTW's benchmark/testing code actually contains an
I'm getting the following message. I definitely have an internet
connection, but it can't download/extract?
Any tip would be greatly appreciated. Thank you.
Pkg.build("Homebrew")
INFO: Building Homebrew
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
The `dist` argument specifies the number of parts each dimension has to be
split into.
So, try:
a_d = distribute(a, workers(), [length(workers()), 1]);
prod(dist) must be == length(pids), so distributing one row per worker
does not make sense unless you have few rows and a huge number of
To expand a little bit: what you need here is to tell Julia that some
variable is a Ptr{MyImmutable} (returned by a ccall). Then Julia will know
the proper memory layout if you `unsafe_load` to get values from the
pointee struct.
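A minimal illustration of that pattern (the struct fields, the library name,
and the C function are all made up for the example):

immutable MyImmutable   # 2014-era keyword; written `struct` in modern Julia
    a::Cint
    b::Cdouble
end

# hypothetical C function returning a pointer to such a struct
p = ccall((:get_thing, "libfoo"), Ptr{MyImmutable}, ())
val = unsafe_load(p)    # Julia reads the pointee using the declared layout
val.a, val.b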
here is a very quick sketch based on one of the sundown examples.
Thanks Jacob, that seems to be exactly what I needed! :-)