Slightly OT, but since I won't talk about it myself I don't feel this will harm
the current thread ...
I don't know if it can be of any help/use/interest for any of you but some
people (some at Intel) are actively working on SIMD use with LLVM:
https://ispc.github.io/index.html
But I really
Hi,
I was experimenting with @simd and was a bit surprised about some results
on different implementations of a plain summing function:
julia> f(a)
399921.25f0
julia> g(a)
399916.2f0
julia> sum(a)
399922.25f0
julia> sum_kbn(a)
399920.66f0
julia> @printf("%.6f", g(a))
399916.187500
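The spread above is expected: Float32 addition is not associative, so the result depends on accumulation order. A minimal sketch of the effect, with hypothetical random data rather than the thread's array:

```julia
# Strict left-to-right Float32 accumulation, as a naive loop would do.
function naive_sum(a::Vector{Float32})
    s = zero(Float32)
    for x in a
        s += x
    end
    return s
end

a = rand(Float32, 1_000_000)
naive     = naive_sum(a)        # order-dependent rounding error
builtin   = sum(a)              # Base may use pairwise summation internally
reference = sum(Float64, a)     # accumulate in Float64 as a reference

# The Float32-flavored results typically differ in the last digits.
println(naive, "  ", builtin, "  ", Float32(reference))
```

@simd gives the compiler even more freedom to reorder the additions, which is why it can shift the result again.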
I don't
But this is not the same thing. AFAIK, this is not possible yet as a Base
class. But I'm far from knowing everything.
I've seen discussions about providing a way to inherit default methods from
a field of a composite type but I can't find them. One problem of the
method is type preservation.
Interesting. Then I'll restate things and try to answer correctly to your
comments. Sorry, that will be a long post again :/ but I really need that
to show that the CLT partly fails in this case.
*1- Restating things: the function is run a single time*
I know this is not what you are
I may have missed something but wouldn't
immutable t
x
y
end
type u
x
y
end
work?
julia> myvar = t(1,2)
julia> myvar.x = 5
ERROR: type t is immutable
julia> v = u(t(1,2), t(3,4))
u(t(1,2),t(3,4))
julia> v.x
t(1,2)
julia> v.x = t(5,6)
t(5,6)
julia> v.x.x = 42
ERROR:
I wonder if we should provide access to DSFMT's random array generation,
so that one can use an array generator. The requirements are that one must
generate at least 384 random numbers at a time, and the array length must
be even.
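For reference, the user-facing way to benefit from an array-generation fast path is to fill a preallocated array; whether this routes to DSFMT's array API internally is an implementation detail I'm assuming here:

```julia
using Random   # rand! lives in the Random stdlib in current Julia

# Length >= 384 and even, matching DSFMT's stated constraints.
A = Vector{Float64}(undef, 1024)
rand!(A)       # fills A in place, avoiding a fresh allocation per call
```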
We should not allow this
A few days ago, Ján Dolinský posted some microbenchmark
https://groups.google.com/d/topic/julia-users/n3LfteWJAd4/discussions
about Julia being slower than Octave for squaring quite large matrices. At
that time, I wanted to answer with the traditional *microbenchmarks are
evil*™ type of
@time(begin
    for n = 1:10
        d1 = Float64[ a1[a,b,i,j] .* b1[a,i,j] .* c1[b,i,j] for a = 1:10, b = 1:10, i = 1:100, j = 1:100 ]
    end
end)
For the sake of completeness,
begin
...
end
blocks are not local.
I thought let blocks would but it appears they don't.
For the behavior of === you'll want to google Henry Baker EGAL.
Essentially, two values are === if and only if there is no program that can
distinguish them. There's no way to distinguish two "instances" of 1, since
integers are immutable. I put "instances" in quotes because it's not even
really
You're right that microbenchmarks often do not reflect real-world
performance, but in defense of using the sample mean for benchmarking, it's
a good estimator of the population mean (provided the distribution has
finite variance), and if you perform an operation n times, it will take
In general, it's important to account for uncertainty. This is the biggest
failing of Benchmark.jl. If I were to rewrite that package today, I would
place much more emphasis on reporting confidence intervals and I might not
even provide point estimates at all.
Amen
Why are you using metaprogramming stuff instead of a dict or an array?
It's not what creates the problem, but at least with a dict, it should be
easier for you to keep everything perfectly clear in your head.
So instead of generating symbols on the fly to @eval them, create a dict with
p2 and
BTW, if what you want to achieve is the concatenation of a few arrays, you may
want to do it from the start instead of putting them in different variables
first to concatenate them afterwards.
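A sketch of both points, with hypothetical names (p2, p3, ...) standing in for the generated symbols:

```julia
# Instead of @eval-ing symbols like p2, p3, ..., key a Dict by name:
parts = Dict{Symbol,Vector{Float64}}()
for i in 2:4
    parts[Symbol("p", i)] = rand(3)   # whatever each pN was meant to hold
end
parts[:p2]                            # plain lookup, no metaprogramming

# And if concatenation is the end goal, skip the named variables entirely:
big = vcat((parts[Symbol("p", i)] for i in 2:4)...)
```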
The binding of the argument to the function can never change, though the
values of that binding might.
It would be more correct to say that a method cannot change the binding of its
arguments. You can change bindings; you just can't do it inside of a method
because of scoping rules. It's just
Wouldn't it be enough to put it in a local scope (let block or in a function?).
For more information, you can ask or look at the Performance tips part of the
manual.
Hi Alex,
As John said, a dict would be far better.
If you feel the need to access the bound names through a template like this,
then you'd better build a dict from the start or just before using your trick.
If your elements are well known, you could even directly use an array: if you
need
By the way, if your actions on these variables are independent and in-place,
the array method would have the extra advantage that it can be made parallel at
no cost, and that it would let you access the updated values through the array
but also through their original names.
Hi, this is matplotlib-related (the python lib behind PyPlot). You have to
change the figsize or the dpi either this way:
http://stackoverflow.com/questions/17230797/how-to-set-the-matplotlib-figure-default-size-in-ipython-notebook
Or that way:
I'm a little surprised that you have found the performance implications
unconvincing in discussions where the OGs advocate devectorization.
A 10x speed-up on a 10 ms calculation is generally considered unconvincing. An
unknown gain on unprofiled code in an undefined context is definitely
(I was also thinking about element-wise operations)
I'm not familiar with lazy evaluation (I've not used any language implementing
it). But I was wondering...
Why not have a 'calculate_now' function to let the programmer choose when a
result is guaranteed to be calculated? Otherwise, resort to lazy
representations.
There could be some
I can't test right now. Did you set up passwordless ssh?
I still can't check, but I'm wondering if Julia is always trying to resolve what
it is given in a machinefile.
Did you try with 127.0.0.1 (assuming you have ssh set up on that machine)?
Oh! I misread what you wrote (I should be sleeping already).
I suggest you try to connect to the remote node to check whether Julia is
started and trying to connect to the master node (with ps to spot julia, and
netstat or ss to spot the socket usage).
Maybe the worker can't connect back to the
To be more precise...
SSH is only used to launch Julia on the worker nodes. Afterwards, Julia sets up
a socket on both nodes and passes messages through that. If the sockets can't
be bound, because of a restrictive firewall for instance, Julia would try for
60 seconds and then time out.
At
One nit: can you really support your assertion that C++ and Fortran are the
two major languages of scientific computing? In my world, Matlab is used by
about 20x (maybe 50x) as many people as C++. I suspect there's a major divide
depending on field. Science is a big place.
It's just as you
From my understanding, Julia being row major, it makes little sense to split
your arrays vertically.
If your algorithm makes use of a complete row instead of a complete column,
you'd better either implement an alternative row-major array object or (far far
more simply), work with the transpose
Sorry, I obviously meant *column* major (as fortran, not C) which makes
horizontal splitting sensible as the data is naturally contiguous in memory.
While vertical splitting would require at least two extra copies of the data.
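Concretely, in column-major storage a column slice is contiguous while a row slice is strided, which is why horizontal (by-column) splitting is the cheap direction:

```julia
# Column-major layout: reshape fills column by column.
M = reshape(1:12, 3, 4)   # columns are [1,2,3], [4,5,6], [7,8,9], [10,11,12]
col = M[:, 2]             # contiguous in memory: cheap to copy or split off
row = M[2, :]             # strided: touches one element per column
```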
I didn't exactly want to launch a discussion about security in general. I did
not even intend to *actually* mean that your sysadmin just wants to look at what
you are doing (that thing called irony ...).
Given more time, seeing your message, I would have come back and cleared your
worries much
I think that one of the main reasons it is so difficult to find is that inv
should generally be avoided.
As Mauro wrote, if you can, rephrase your problem so that you can use x = A\b.
I agree. What I meant is that since inv is avoided, it's not used nor well
referenced in search engines or on this list. :)
If you want to gain more performance, first identify hot spots by profiling
your code. This is fairly easy in Julia:
http://julia.readthedocs.org/en/latest/stdlib/profile/
Making your code less readable to gain a 0.01% speed increase in your whole
program isn't worth the pain.
Once
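A minimal profiling session might look like this (Profile ships with Julia; in recent versions it needs an explicit `using Profile`):

```julia
using Profile

# A deliberately busy function standing in for real work.
function hotspot()
    s = 0.0
    for i in 1:10^6
        s += sin(i)
    end
    return s
end

hotspot()            # run once so compilation is not what gets profiled
@profile hotspot()
Profile.print()      # per-line sample counts point at the hot spots
```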
Your progress is impressive! Thanks for your continuous work.
Hi, I'm starting to replace some of my Python with Julia for actual work
(post-processing actually). My files are on a remote cluster with an HTTP
proxy between us. With Python, I have to Popen ssh/sftp because Paramiko
has a bug with proxies in Python 3.
AFAIK, Julia is leveraging SSH for
Julia shells out for all ssh functionality.
Oh, interesting. That would explain why I couldn't find any SSH.jl with
libssh bindings nor libssh references in Julia's sources.
Supporting libgit in Base would bring in libssh, so maybe this should be
reconsidered in the future.
Good to
Just so you know, there are HDF5 bindings in almost every language I know,
C++ included.
Don't take what I said as a suggestion though: it's perfectly fine as you did
it. But others might read this thread and say "HDF5? Why not, indeed."
Hi Abe,
Looks like you just reimplemented SubString.
julia> x = "Hi there!"
"Hi there!"
julia> SubString(x, 2, 4)
"i t"
julia> typeof(ans)
SubString{ASCIIString} (constructor with 1 method)
Which is totally understandable, as there seems to be almost zero
documentation about them. Would
One thought, it might be nice to have something a little more automatic.
Specifically, I was thinking if there was an 'ImmutableString' class, you
would always know that you should make a view rather than a copy. It would
also be in keeping with the general design where specifying a type
to make it work right now and hopefully what I posted previously in the
future. :)
Oh my! Does that even make sense?!
I meant to say:
s= SubString(randstring(size), 1)
can be used right as what you described as your ImmutableStrings.
While, with very little work in the source code, you
Oh ok, I understand now, your ImmutableString are already SubString:
s= SubString(randstring(size), 1)
to make it work right now and hopefully what I posted previously in the
future. :)
Many things are not right.
L = 1:1000
stride = 5
function dowork(i)
    if i > 1000:    # we are not in Python: no colon after the condition
        sleep(5)
    else
        break       # you are assuming your code will be inlined, but it won't
    end             # an extra end is missing here (to close the function)
That specific piece seems weird, I'm not
i = 1
@sync @parallel for i in i:i+stride
    dowork(i)
    i += 1
end
Also, don't assume this kind of trick would work in general (i =
i:i+stride). Here this is working because you declared i in the global
scope (before entering the loop). But if you didn't, you would get:
julia
What would *also* be pretty useful is allowing different compression filters in
HDF5.jl.
HDF5's compression capabilities are not limited to Deflate. Blosc, for one, has
allowed fast and efficient compression/decompression of data in my case.
Your program would have to be changed to save data in
Being a Linux user, I find this behavior weird, which explains why I'm mainly
using packages distributed by my distribution of choice.
Python is not really a reference there, since its developers still consider
distribution an open problem. Two of the main concerns regarding install-time
scripts are
I'm just wondering: is it that hard to put a floating point somewhere in the
expression to get all inputs promoted to floating points?
Which is asking for troubles...
I always thought that non exported objects could not be imported.
In recent tests, the best summary statistic was the geometric mean, because
the elapsed-time distribution was log-normal.
If each iteration is not too short (to avoid timing inaccuracies) and not too
long (to avoid spending a day benchmarking), you'd better measure each elapsed
time individually
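A sketch of that protocol: time each run individually, then summarize with a geometric mean, which is the natural location estimate for roughly log-normal data (function names here are made up for the example):

```julia
# Collect one elapsed time (in seconds) per run of f.
function timed_runs(f, n)
    times = Float64[]
    for _ in 1:n
        t0 = time_ns()
        f()
        push!(times, (time_ns() - t0) / 1e9)
    end
    return times
end

# Geometric mean: exp of the mean of the logs.
geomean(xs) = exp(sum(log, xs) / length(xs))

ts = timed_runs(() -> sum(rand(10^5)), 20)
geomean(ts)   # summary statistic for roughly log-normal timings
```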
*Arrays*
function f1(a, b, c)
    return (a.^2)*sum(a).*c.+exp(-b)
end
function f2(a, b, c=3)
    return (a.^2).*sum(a).*c.+exp(-b)
end
f3(a, b) = f1(a, b, 3)
f4 = (a, b) -> f1(a, b, 3)
a = rand(1000);
b = rand(1000);
julia> @time for i=1:1 f1(a, b, 3) end
elapsed time: 0.813792743 seconds
Hi everyone.
I was thinking for some time about dimensionful arrays or variables in
general. (I really *really* *really* don't know whether the idea is sane.
Actually it tends to create more problems than it solves for what I've
tried.)
Dimensionful variables would be a variable which refer to
(Repost, the first one was not clear)
Hi everyone.
I was thinking for some time about dimensionful arrays or variables in
general. (I really *really* *really* don't know whether the idea is sane.
Actually it tends to create more problems than it solves for what I've
tried.)
Dimensionful
Both, I reposted something clearer. I was not aware of these efforts, I'm
gonna look at that. Thanks.
On Friday, May 30, 2014 at 12:57:26 PM UTC+2, Tim Holy wrote:
I'm not sure if you're interested in this as an example of inheritance, or
if
you're interested in this specific application. If
It is behaving very well and is far smarter than what I was thinking of:
this is really impressive!
The only drawback I see is that for now, units are 100 times slower than
raw calculations on floats and 15 times slower than calculations on an
Array{5000}. I really hope this will improve over
On Friday, May 30, 2014 at 3:46:55 PM UTC+2, Tim Holy wrote:
Are you sure you don't have a type problem somewhere?
Nope, same timings as yours. Your computer and mine should be quite
similar.
But try this:
import SIUnits: Meter
raw = rand(5000);
units = raw*Meter;
sum(raw);
sum(units);
But I removed the first one within 5 minutes! I'm sorry about that, and even
more because the post is long.
Shouldn't it behave the opposite way?
Array{:FloatingPoint}
representing an array composed of elements which are subtypes of
FloatingPoint and
Array{FloatingPoint}
representing an array composed of elements of the exact same type which is
a subtype of FloatingPoint?
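For what it's worth, Julia kept the invariant reading: Array{T} matches only arrays whose element type is exactly T. (FloatingPoint was later renamed AbstractFloat, and the `<:` form below is newer syntax than this thread; both are assumptions about the modern language, not the 2014 one.)

```julia
a = Float64[1.0, 2.0]

isa(a, Array{Float64})           # true: exact element type
isa(a, Array{AbstractFloat})     # false: type parameters are invariant
isa(a, Array{<:AbstractFloat})   # true: explicitly covariant match

# An Array{AbstractFloat} is a different thing: boxed, possibly mixed floats.
b = AbstractFloat[1.0, 2.0f0]
isa(b, Array{AbstractFloat})     # true
```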
Before implementing this
Are there some real-life examples out there showing such a use? If so, couldn't
they be replaced by types or collections of objects?
Are such arrays still contiguous memory segments?
As others said, this is not consistent from a Julia user's perspective (while it
could be from Julia developers'
On Sunday, May 25, 2014 at 8:29:02 AM UTC+2, James Crist wrote:
I've been struggling with this for a while, and haven't found a way to do
it that I'm happy with. I'm sure there is one though. Basically, I want to
declare a function that works on an array of floats. Doesn't matter what
kind
On Thursday, May 22, 2014 at 3:23:53 PM UTC+2, sam cooper wrote:
Hi,
I have an inner loop function which uses a 'constant' tuple:
fhandle = (expdat,obsv,expdev,t)
with types
fhandle::(Matrix{Float64},Vector{Int64},Float64,Vector{Int64})
Currently I am passing fhandle to the function each
On Thursday, May 22, 2014 at 4:12:12 PM UTC+2, Tobias Knopp wrote:
Sure I just made a very quick benchmark and it should not be taken too
seriously. I just thought we should not speculate too much on what Matlab
does but *better measure it*.
That's how I understood it. (But I can't deter myself
On Wednesday, May 21, 2014 at 2:55:34 PM UTC+2, Tim Holy wrote:
I agree that vectorizing is very nice in some circumstances, and it will be
great if Julia can get to the point of not introducing so many temporaries.
(Like Tobi, I am not convinced this is an easy problem to solve.)
But
Just to add one thing. Julia should definitely not use fast for loops as
a selling point. If one has access to unboxed values, it's done. No hard
work. This is a selling point in comparison to Matlab, Python and R, but
not in general.
A very smart automatic devectorizer is much much much
While I cannot not agree with this ;), I'd like to state that:
1) High-level functions might leverage clever algorithms faster than plain
loops (best example coming to mind: dot).
2) Vectorized code is always easier to understand, write, fix and maintain
because the intent is clear from the
I don't want to be pushy, so this is the last time I talk about it, but VTK
*is* fast and already provides volume rendering among other things.
An OpenGL binding would be great, but if I'm given the choice between a
fully featured, highly optimized, highly parallel lib and doing it all
High quality and low complexity vector graphics.
By high quality I mean clear, nice and customizable.
By low complexity I mean direct generation of SVG/PDF primitives: opengl2ps is
horrible for that purpose; for instance, OpenGL spheres are approximated by
thousands of triangles instead of
I actually said *not* to implement direct JPEG export. I thought you were
targeting stuff like 3D graphs, in which case JPEG export generally does more
harm than good, because most people don't know which image format to
choose and use what they are most used to seeing: JPEG; even though
Hi, my mind might be completely fried by intensive use of numpy arrays, but
I find slicing in Julia quite odd and I'd like to understand why. I'm not
saying Julia *should* behave like numpy but I find numpy's behaviour really
convenient. Let me explain...
Let's say I created a random 2D array in
I don't understand this bit: "Numpy's notation has the advantage to make
the most expensive operations harder to write, it also easily reminds us
that arrays in numpy are row-major." How does this make the more expensive
operation harder to write?
Just by writing this (for a 2D array a):
for i