Thanks Michele, I discovered [(Int64=>Int64)[]] would also do it,
except the returned object contains 1 element:
julia> [(Int64=>Int64)[]]
1-element Array{Dict{Int64,Int64},1}:
 Dict{Int64,Int64}()
On Tue Nov 11 2014 at 3:39:06 PM Michele Zaffalon
michele.zaffa...@gmail.com wrote:
I didn't look at your code, but it sounds like you are doing row-wise
operations. However, sparse matrices in Julia (and in Matlab too, I
think) are much faster for column-wise access:
XX[:,i] is fast
XX[i,:] is slow
If you have to do both, then you can consider doing the column-wise pass
first, then
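A minimal sketch of the column-major access pattern Michele describes (written in current Julia, where sparse support lives in the SparseArrays stdlib; on 0.3 it was in Base):

```julia
using SparseArrays

X = sprand(1000, 1000, 0.01)   # random sparse matrix, stored in CSC format

# Column slice: the nonzeros of a column are stored contiguously, so this is cheap.
col = X[:, 10]

# Row slice: every column must be searched for row 10, so this is much slower.
row = X[10, :]
```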
I should know better by now than to put time estimates on things like this.
The good news is, all the bottles have been built, so please don't hesitate
to test things out and report any issues you may run into. As a small
aside, I have also bottled Julia proper, so the time required to go from
I should also point out that support for OSX Lion (10.7) is gradually being
phased out. I am following the same deprecation schedule as Homebrew
proper, as Homebrew.jl relies quite a bit on the bottles provided by the
Homebrew maintainers.
-E
On Tue, Nov 11, 2014 at 1:58 AM, Elliot Saba
On Monday, 10 November 2014 at 19:06 -0800, Todd Leo wrote:
I did, actually, try expanding vectorized operations into explicit
for loops, and computing vector multiplication / vector norm through BLAS
interfaces. The explicit loops did allocate less memory, but took
much more time. Meanwhile,
On Monday, 10 November 2014 at 22:30 -0800, Michael Louwrens wrote:
I was looking at the Devectorize package and was wondering: why not
have an operator that calls elementwise operations?
Several people indeed consider this a very good idea, but so
far an agreement on what the good
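For illustration, elementwise application without an explicit loop can already be written with `broadcast`, which works on both 0.3 and current Julia:

```julia
x = [1.0, 2.0, 3.0]
y = [4.0, 5.0, 6.0]

# broadcast applies the function elementwise, expanding dimensions as needed
z = broadcast(*, x, y)   # [4.0, 10.0, 18.0]
```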
Thanks!
I had read that issue last week and I had completely forgotten about it.
Will give it a reread now
Fantastic, I got it working. Thanks for all the help.
Simon.
On Tuesday, 11 November 2014 04:56:14 UTC, Pontus Stenetorp wrote:
On 11 November 2014 10:49, Tony Kelman to...@kelman.net
wrote:
I don't want to steal Pontus Stenetorp's thunder since he did all the
work, but
Thanks for your response. Because the operations are on sparse matrices I'm
pretty sure the arithmetic is already more optimized than something I would
write:
https://github.com/JuliaLang/julia/blob/master/base/sparse/sparsematrix.jl#L530
I did actually spend some time searching for sparse
Thanks. The instances of XX[i, :] that appear in my post here are just
pseudo-code. In the actual implementation only column-wise slices are used.
On Tuesday, November 11, 2014 3:49:00 AM UTC-5, Mauro wrote:
I didn't look at your code, but it sounds like you are doing row-wise
operations.
I should have posted the output from @time in the initial post. The version
of the code using sparse matrices is reporting 87% gc time. I thought I
could fix the problem in the second version of the code that uses Sets
instead of sparse matrices. Indeed, v2 only reports about 20% gc time, but
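As a sketch of where such gc time typically comes from (function names here are made up for illustration): vectorized updates allocate fresh temporaries on every iteration, while an in-place loop reuses one buffer.

```julia
using Random: rand!

# Allocates two fresh vectors per iteration (rand(100) and the sum),
# so much of the run is spent producing and collecting garbage.
function accumulate_alloc(n)
    total = zeros(100)
    for _ in 1:n
        total = total + rand(100)
    end
    return total
end

# Reuses one preallocated buffer and updates in place: no per-iteration garbage.
function accumulate_inplace(n)
    total = zeros(100)
    tmp = zeros(100)
    for _ in 1:n
        rand!(tmp)
        for i in eachindex(total)
            total[i] += tmp[i]
        end
    end
    return total
end
```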
Hi
I've run into some array issues that I suspect are quite easy, but I
haven't found answers on Uncle Google or in the docs.
In the example below I would like to do the following (more elegantly):
- Create the array of arrays A, where the sizes of the inner arrays are
determined by the input
Is this the proper thread to send in minor typos I found? Page 15:
If x and y are not both real or not both complex, then g(x,y) is an
error.
I'm pretty sure that should be f(x,y) instead.
Also, on the same page it is said that single static dispatch isn't done in
practice, but Go takes a
Also (still page 15) section 4.2: Number/Function “11 misses the closing
quotes
(I hope it's obvious I find this an enjoyable read in general, just trying
to help out with the editing)
On Tuesday, 11 November 2014 at 04:52 -0800, Joshua Tokle wrote:
Thanks for your response. Because the operations are on sparse
matrices I'm pretty sure the arithmetic is already more optimized than
something I would write:
I think I see: you're suggesting a single loop that performs the
a*b*(c+d+e) operation element-wise, and this will prevent the allocation of
intermediate results. Is that right?
Yes, the sparsity pattern will differ between the vectors I'm operating on,
but maybe I can adapt the code from
GLPlot master should work on 0.4!
(GLAbstraction, ModernGL, Reactive, GLWindow all need to be on master as
well... If I get a few voices saying that master works for them, I might tag
the newer versions)
You can actually, unofficially, display OBJ files, so mostly everything is
in place to display
On Tuesday, 11 November 2014 at 09:54 -0500, Joshua Tokle wrote:
I think I see: you're suggesting a single loop that performs the
a*b*(c+d+e) operation element-wise, and this will prevent the
allocation of intermediate results. Is that right?
Yes.
Yes, the sparsity pattern will differ between
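A minimal sketch of that single fused loop (dense vectors for simplicity; a sparse version would additionally have to merge the index sets):

```julia
# Computes a .* b .* (c .+ d .+ e) in one pass, with no intermediate vectors.
function fused(a, b, c, d, e)
    out = similar(a)
    for i in eachindex(a)
        out[i] = a[i] * b[i] * (c[i] + d[i] + e[i])
    end
    return out
end

fused([1.0], [2.0], [1.0], [1.0], [1.0])   # [6.0]
```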
Reworked some of the construction to be more succinct:
function example(dims...)
    A = [zeros((2^l*[dims...])...) for l = 0:2]  # inner sizes scale as 2^l * dims
    @show A
    J = [1 3; 2 4]
    I = kron(J, ones(Int, 2, 2))  # expand each entry of J into a 2x2 block
    K = rand(4)
    B = K[I]                      # index K elementwise by the matrix I
    @show B
    C = zeros(dims...)
    @show C
    nothing
end
What are the differences/advantages of the following:
```
# 1
abstract MyType
# 2
type MyType end
```
Is there any reason to prefer one over the other?
Yep, it's working again. Thanks!
On Monday, November 10, 2014 5:41:16 PM UTC, Steven G. Johnson wrote:
Should be fixed now, sorry.
There also seem to be some typos in the final code block in Section 6.3.
In the definition of `function stochastic`, in the line computing the
diagonal, I found it necessary to replace `2 sqrt` by `2 * sqrt`. Also, in
the serial `for` loop used in In[73], it was necessary to replace `hist(`
Seems like you answered your own questions?
Have a look at:
http://docs.julialang.org/en/release-0.3/manual/types/
Abstract:
- no instances
- can be used as parents of other types
type/immutable:
- can create instances of them
Both can be used for dispatch in functions.
Best try it out.
On Tue, 2014-11-11 at 17:13, Spencer Lyon
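A small sketch of the distinction (written in current Julia syntax, `abstract type`/`struct`; the thread's 0.3 syntax was `abstract MyType` and `type MyType end`; the names are invented for illustration):

```julia
abstract type Shape end        # abstract: no instances, exists for dispatch/hierarchy

struct Circle <: Shape         # concrete: instances can be created
    r::Float64
end

area(c::Circle) = pi * c.r^2       # dispatch on the concrete type
describe(s::Shape) = "some shape"  # dispatch on the abstract type
```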
In particular, check out the section in the manual about singletons,
where there exists only one instance of a concrete type. I.e.
type MyType end
const singleton = MyType()
const singleton2 = MyType()
singleton === singleton2 # == true
Not exactly sure how that plays into the difference
julia version 0.3.3
julia> rdstdout, wrstdout = redirect_stdout()
(Pipe(open, 0 bytes waiting),Pipe(open, 0 bytes waiting))
julia> print("hello")
julia> s = readavailable(rdstdout)
"hello"
Using an IOBuffer() is more idiomatic, though. Look at the source of the
filter function for
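A sketch of the IOBuffer idiom (current Julia spelling; on 0.3 the drain step was `takebuf_string(buf)`):

```julia
buf = IOBuffer()
print(buf, "hello")      # write to the buffer instead of stdout
s = String(take!(buf))   # drain the buffer into a String

# sprint wraps the same pattern in one call:
s2 = sprint(print, "hello")
```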
When developing Meshes.jl, I would dump a PLY (which, unlike STL, preserves
topology) and view it with meshlab http://meshlab.sourceforge.net/.
If you pull from master you should be able to just use threejs in IJulia:
https://baconscript.github.io/Meshes.jl/
On Tue, Nov 11, 2014 at 3:13 PM, Tracy Wadleigh tracy.wadle...@gmail.com
wrote:
When developing Meshes.jl, I would dump a PLY (which, unlike STL,
preserves topology) and view it
When I have a tuple type of the form
julia> dt = (Int...,)
(Int64...,)
julia> eltype(dt)
Any
How do I extract Int64 from a tuple of that form?
I was hoping I could borrow ideas from reflection.jl:
eltype{T}(x::(T...)) = T
which works for values, like eltype( (1,2,3,4,5) ) -> Int, but I
I think I got it (not sure how kosher it is):
julia> function eltype( x::(DataType,) )
           fst = x[1]
           if fst.name.name == :Vararg
               return fst.parameters[1]
           else
               return fst
           end
       end
julia> eltype( (Int..., ) )
Int64
On
PythonTeX (https://github.com/gpoore/pythontex) now supports Julia with the
package option
usefamily=jl
or
usefamily={julia,jl}
I am still flailing around trying to get both the expression that is
evaluated and its value shown in the compiled LaTeX document. Does anyone
have experience
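A minimal document sketch, assuming the environment and command names follow PythonTeX's usual `<family>code`/`\<family>{}` pattern for a family named `jl` (check the PythonTeX manual for the exact names):

```latex
\documentclass{article}
\usepackage[usefamily=jl]{pythontex}
\begin{document}
% jlcode runs Julia code without typesetting it
\begin{jlcode}
x = 2 + 2
\end{jlcode}
% \jl{...} evaluates an expression and typesets its value
The value of x is \jl{x}.
\end{document}
```

Compiling is the usual PythonTeX three-step: pdflatex, then the pythontex script, then pdflatex again.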
Is there a technical reason Homebrew is phasing out 10.7 (as I understand
it, 10.6 is a bigger departure to build for than 10.7-or-newer), or did they
just decide to put a cap on building bottles for at most 3 versions
simultaneously?
On Tuesday, November 11, 2014 2:07:13 AM UTC-8, Elliot Saba
Milan,
A useful trick for linking to lines of code on github is to hit the y
key, and github will reload the page you're on but with the specific sha in
the url - like so
https://github.com/JuliaLang/julia/blob/d0a951ccb3a7ebae7909665f4445a019f2ee54a1/base/sparse/sparsematrix.jl#L530
That way
Thanks, now the 2-norm calculation for sparse matrices in cosine() can be
optimized using your sumsq() function. But is the dot-product part of
cosine() for sparse matrices optimizable?
On Tuesday, November 11, 2014 6:57:44 PM UTC+8, Milan Bouchet-Valat wrote:
On Monday, 10 November 2014 at 19:06
At this point the most effective way to write decently-performing sparse
matrix code in Julia is to familiarize yourself with the compressed sparse
column format
(http://en.wikipedia.org/wiki/Sparse_matrix#Compressed_sparse_column_.28CSC_or_CCS.29)
and work directly on the colptr, rowval, and
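A sketch of walking those CSC fields directly (current Julia, where SparseMatrixCSC lives in the SparseArrays stdlib; the field names are the same as on 0.3):

```julia
using SparseArrays

# 3x2 matrix with nonzeros at (1,1)=10, (3,1)=30, (2,2)=20
A = sparse([1, 3, 2], [1, 1, 2], [10.0, 30.0, 20.0], 3, 2)

# colptr[j]:(colptr[j+1]-1) is the range of stored entries of column j
j = 1
for k in A.colptr[j]:(A.colptr[j+1] - 1)
    i = A.rowval[k]    # row index of the k-th stored entry
    v = A.nzval[k]     # its value
    println("A[$i,$j] = $v")
end
```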
On Tuesday, 11 November 2014 at 18:15 -0800, Tony Kelman wrote:
Milan,
A useful trick for linking to lines of code on github is to hit the
y key, and github will reload the page you're on but with the
specific sha in the url - like so
On Tuesday, 11 November 2014 at 19:29 -0800, Todd Leo wrote:
Thanks, now the 2-norm calculation for sparse matrices of cosine()
can be optimized by your sumsq() function. But is dot product part of
cosine() for sparse matrices optimizable?
I think so. Since the product of a zero and a nonzero
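A sketch of that idea: with each sparse vector given as sorted (index, value) pairs, a two-pointer merge touches only the stored entries, and only indices present in both vectors contribute.

```julia
# Dot product of two sparse vectors given as (sorted indices, values).
function sparse_dot(ia, va, ib, vb)
    s = zero(eltype(va))
    p, q = 1, 1
    while p <= length(ia) && q <= length(ib)
        if ia[p] == ib[q]          # both nonzero here: contributes
            s += va[p] * vb[q]
            p += 1; q += 1
        elseif ia[p] < ib[q]       # b is zero at index ia[p]: skip
            p += 1
        else                       # a is zero at index ib[q]: skip
            q += 1
        end
    end
    return s
end
```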