Hey,
This warning has been around for a while. It's nothing to be concerned
about, and hopefully it will go away soon.
On Monday, August 29, 2016 at 10:26:29 PM UTC-7, Dennis Eckmeier wrote:
>
> Hi,
>
> Getting started with Julia, I installed Julia, and then Juno on Atom. I
>
Tuples are immutable by design. This is why they are fast, but also why you
can't push into them.
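A minimal sketch of the usual workaround (current Julia syntax; the `iseven` condition stands in for whatever condition the question had in mind): accumulate into a growable vector, then convert to a tuple once at the end.

```julia
# Collect matching items into a vector, then splat into a tuple at the end.
N = 10
v = Int[]                # growable vector; tuples can't grow
for i in 1:N
    if iseven(i)         # stand-in for the actual condition
        push!(v, i)
    end
end
t = (v...,)              # one immutable tuple, built once
```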
On Monday, August 29, 2016 at 10:26:29 PM UTC-7, Alexei Serdiuk wrote:
>
> Hi,
>
> I need to choose items from a range (1:N) according to some condition and
> to push them into a tuple.
>
> I
But the core issue isn't 'eye' specific, it's "what should the default type
for functions that create matrices be?" Christoph's comments do not
deviate from this question.
The answer to this question affects 'rand', 'zeros', 'ones', 'linspace',
etc. just as much as eye. ArrayFire means
I should point out that the linalg tests are expected to fail for now,
since we are awaiting a new openblas release, which is known to fix these
issues.
-viral
On Friday, August 19, 2016 at 10:26:38 AM UTC+5:30, Viral Shah wrote:
>
> I have uploaded Julia-0.5 on Power8 binaries here. These are
We are getting this onto our buildbots thanks to Elliot and Tony. It would
be great if folks can give these binaries a try.
https://s3.amazonaws.com/julianightlies/bin/linux/ppc64le/0.5/julia-0.5.0-3005940a21-linuxppc64.tar.gz
Do run Base.runtests(). The processor warning is known, and report
I've been working on porting a script I wrote in python to julia and have
been having some issues with the script freezing.
So pretty much all this script does is generate a random IP address, check
whether it's valid (the Python version will give HTTP error codes), and
then log the results
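The original script isn't shown, but the generate-and-validate part it describes might look like the following sketch (current Julia; the function names `random_ip` and `is_valid_ip` are my own, and the HTTP-checking part is omitted).

```julia
using Sockets  # stdlib; provides IPv4 and parse(IPv4, ...)

# Generate a random dotted-quad IPv4 string.
random_ip() = join(rand(0:255, 4), ".")

# "Valid" here only means it parses as an IPv4 address;
# the described script additionally made HTTP requests.
function is_valid_ip(s::AbstractString)
    try
        parse(IPv4, s)
        return true
    catch
        return false
    end
end
```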
Hi,
Getting started with Julia, I installed Julia, and then Juno on Atom. I got
Julia to start, but I get the following warning:
WARNING: Method definition
require(Symbol) in module Base at loading.jl:317 overwritten in module Main
at C:\Users\\.julia\v0.5\Requires\src\require.jl:12.
WARNING:
Hi,
I need to choose items from a range (1:N) according to some condition and
to push them into a tuple.
I understand how to add items into an array:
t = Int[]   # note: Array() with no element type is an error; give it one
for i = 1:N
    if condition
        push!(t, i)
    end
end
Is there a way to push them into a tuple?
Thanks.
I agree with everyone who's saying you should look at Plots.jl, but if for
now you must use Winston, you might have luck with the version I forked and
hacked to make run on 0.5:
Pkg.clone("git@github.com:MetServiceDev/Tk.jl.git")
Pkg.clone("git@github.com:MetServiceDev/Winston.jl.git")
I
I was running Julia to run my MPC code. I needed to upgrade and hence i
deleted the folder i cloned from git hub. Now I have two problems:
1) Installing julia by sudo apt-get install julia, I get the following
message:
Reading package lists... Done
Building dependency tree
Reading state
Sorry for the combative tone Christoph. I thought it was necessary in order
to not deviate too much from the core issue. Thank you for your
participation and for raising your personal opinions about the topic.
-Júlio
Personal opinion again: I think it is not good to underestimate the
importance of teaching. Mathematics students in particular tend to stick
with the language they learn first. It is part of why Matlab is so
successful in the applied mathematics community.
P.S.: Not sure why such a combative
But eye and linspace are not the only culprits: there's rand, zeros, etc. so
unless everything is a special type, ArrayFire will still need constructors like
rand(AFArray{Float64},5,5)
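The type-as-first-argument pattern is just multiple dispatch on a type parameter. A hedged sketch of how a backend could hook into such a constructor, using plain `Array` as the "backend" (the name `myrand` is hypothetical, current Julia syntax):

```julia
# Dispatch on the requested array type; a package like ArrayFire.jl
# could add its own method for AFArray the same way.
myrand(::Type{Array{T}}, dims::Integer...) where {T} = rand(T, dims...)

A = myrand(Array{Float64}, 5, 5)
```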
Sent from my iPhone
> On 30 Aug 2016, at 13:02, Júlio Hoffimann wrote:
>
> So
So maybe add a dimension or create a type that makes more sense for the
application?
-Júlio
`I` doesn't have a dimension, so you can't do collect(I)
Sent from my iPhone
> On 30 Aug 2016, at 12:53, Júlio Hoffimann wrote:
>
> Why is it so important to have all this machinery around linspace and eye?
> collect is more than enough in my opinion and all the
Why is it so important to have all this machinery around linspace and eye?
collect is more than enough in my opinion and all the proposals for keeping
both versions and pass a type as a parameter are diluting the main issue
here: the need for a smart mechanism that handles things efficiently and
I’d also propose adding
linspace(Vector{Float64},0,100)
linspace(Range,0,100)
linspace(LinSpace{Float64},0,100)
as constructors that return the corresponding type. That way Christoph can use
the linspace(Vector{Float64},0,100) version in his teaching.
> On 30 Aug
I just think the moment you start mixing default behaviors, your teaching
becomes exponentially harder. I found range objects not that hard to teach,
just "it uses an abstract version of an array to not really build the
array, but know what the value would be every time you want one. If you
Hi Tim, I changed my package directory on OSX to be in Google Drive, but
*using* the package on the REPL calls it from the default location,
/.julia/v0.4... How do I change *using* to call the package from Google
Drive? Thanks, Kevin
On Tuesday, September 30, 2014 at 7:14:41 AM UTC-3, Tim Holy
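One way to do this (a sketch, not the only mechanism; the path below is an assumption for illustration) is to push the custom directory onto `LOAD_PATH`, e.g. from your startup file, so `using` searches it:

```julia
# Make `using` search a custom package directory in addition to the default.
push!(LOAD_PATH, joinpath(homedir(), "Google Drive", "julia-packages"))
```

On 0.4-era Julia, the `JULIA_PKGDIR` environment variable also controlled where `Pkg` itself keeps packages.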
Thanks, I implemented a weighted distance across n neighbouring points to
solve my problem for the moment.
Using a triangulation would probably be a better solution, though.
On Tuesday, 30 August 2016 00:49:13 UTC+10, Tim Holy wrote:
>
> For unstructured grids, it depends a lot on what you want.
I confess I'm not quite sure what the right answer is here. It would seem
overkill to have both `I` and something that's the same thing, except sized.
OTOH, I see the attraction.
Maybe if it were part of a general mechanism, e.g., SizedArrayOperator{O,N}
(for performing an operation `O` on
Just noticed that you're allocating memory on each iteration. If you have the
patience to write out all those matrix operations explicitly, it should help.
Alternatively, perhaps try ParallelAccelerator.
Best,
--Tim
On Monday, August 29, 2016 10:49:40 AM CDT Marius Millea wrote:
> Thanks, just
Christoph,
Can you elaborate on why you want to have both? Also, why you wanted the
non-lazy linspace? If I understood correctly, you're talking about
returning a array with allocated memory versus a range object, isn't the
latter always preferred? I don't understand.
-Júlio
I think the ArrayFire.jl (https://github.com/JuliaComputing/ArrayFire.jl)
syntax for matrices (e.g. rand(AFArray{Float64},100,100)) should be adopted
in Base to allow multiple eye commands that return different types:
eye(Diagonal{Float64},10)
eye(SparseMatrix,10) # default to Float?
On Mon, Aug 29, 2016 at 3:59 PM, Jared Crean wrote:
> Here is an oddity:
>
> julia> s
> Set([2,3,1])
>
> julia> in(s, 2)
> false
>
> julia> in(2, s)
> true
>
> I would have thought the first use of in would be an error because asking
> if a set is contained in a number is not
On Monday, August 29, 2016 at 4:26:44 PM UTC+2, Daniel Carrera wrote:
>
> On 29 August 2016 at 16:07, Chris Rackauckas > wrote:
>
>> That's exactly the reason why it's a good idea. The backends aren't
>> swappable, but the code is. And for the most part that means you can
It also works in 0.4.6 in the REPL
Jared,
You might be interested in what I consider the most useful Julia function
that's not actually in
Base:
http://stackoverflow.com/questions/29661315/vectorized-in-function-in-julia
vectorin(2, s)
0-dimensional Array{Bool,0}:
true
vectorin(s, 2)
3-element Array{Bool,1}:
true
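In current Julia, a one-line version of this falls out of broadcasting: `in.(xs, Ref(s))`, where `Ref` keeps the set from being broadcast over. (In 2016 one would have used `map` instead; this sketch uses today's syntax.)

```julia
s = Set([2, 3, 1])

# Broadcast-based "vectorized in": which elements of xs are in s?
vectorin(xs, s) = in.(xs, Ref(s))   # Ref treats s as a scalar

vectorin([1, 5, 2], s)
```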
To give my two cents:
I think this is an inconsistency in the design of the standard library.
* while we have both I and eye available for the Identity matrix,
* linspace was replaced with a lazy data-structure.
Personally I would like to keep both I and eye; but I also would have liked
to
Here is an oddity:
julia> s
Set([2,3,1])
julia> in(s, 2)
false
julia> in(2, s)
true
I would have thought the first use of in would be an error because asking if
a set is contained in a number is not defined. Is there some other
interpretation of the operation?
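The reason it isn't an error: `in(x, itr)` iterates its second argument, and numbers are iterable in Julia (as length-1 collections containing themselves), so `in(s, 2)` quietly asks "does the collection `2` contain the set `s`?" and answers `false`. A small demonstration:

```julia
s = Set([2, 3, 1])

a = in(2, s)   # is 2 an element of s?
b = in(s, 2)   # numbers iterate as themselves, so this compares s == 2
c = 2 in s     # the infix form reads in the right order
```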
On Monday, August 29, 2016 at
This should be reported as an issue - probably to the DataFrames.jl
package. Can you get it to happen in Julia 0.4 outside of IJulia?
On Monday, August 29, 2016 at 12:30:19 PM UTC-7, Rock Pereira wrote:
>
> I switched to 0.5.0 rc3
> Everything is OK.
>
I switched to 0.5.0 rc3
Everything is OK.
Ah, yes. That's it.
Thanks,
Jared Crean
On Monday, August 29, 2016 at 3:11:02 PM UTC-4, Erik Schnetter wrote:
>
> Jared
>
> Are you looking for the function `in`?
>
> -erik
>
> On Mon, Aug 29, 2016 at 3:06 PM, Jared Crean > wrote:
>
>> I'm looking for a data
Hi Tim,
Thanks for the update. After pulling the master I was able to detect some
blobs depending on what I used for the sigmas. However, I realized the
problem of actually tracking the areas of the blobs between frames was
quite a bit more complicated than I originally thought, and I've
Jared
Are you looking for the function `in`?
-erik
On Mon, Aug 29, 2016 at 3:06 PM, Jared Crean wrote:
> I'm looking for a data structure that allows O(1) querying if a value is
> contained in the data structure, and reasonably fast construction of the
> data structure
I'm looking for a data structure that allows O(1) querying if a value is
contained in the data structure, and reasonably fast construction of the
data structure given that the initial size is unknown (although this
criterion is not that strict). I was looking at the Set in Base, but I
can't
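`Set` does fit that description: hashing gives O(1) average-case membership tests, and it grows as you insert without a size up front. A minimal sketch:

```julia
# Set gives O(1) average membership tests and grows on demand.
seen = Set{Int}()
for x in (3, 1, 4, 1, 5)
    push!(seen, x)       # duplicates are silently ignored
end
```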
That's a good showing for Julia for the larger matrices. However, for
smaller matrices there's a large constant overhead. Is it including
startup/compilation time? Did they not "run it twice"?
On Monday, August 29, 2016 at 8:57:32 AM UTC-7, Páll Haraldsson wrote:
>
>
> I have no relation to this..
>
>
Dict can be slow. Try
d_cl = Vector{typeof(inv_cl)}(np)  # note: `Array{Array,1}` on its own is a type, not an array
for i = 1:np
    d_cl[i] = copy(inv_cl)
end
I can't say if that leads to good threads since nthreads() gives me only 1
today.
Thanks, I did notice that, but regardless this shouldn't affect the scaling
with NCPUs, and in fact as you say, it doesn't change performance at all.
On Monday, August 29, 2016 at 7:27:44 PM UTC+2, Diego Javier Zea wrote:
>
> Looks like the type of *d_cl* isn't inferred correctly. *d_cl =
Thanks, just tried wrapping the for loop inside a function and it seems to
make the @threads version slightly slower and serial version slightly
faster, so I'm even further from the speedup I was hoping for! Reading
through that Issue and linked ones, I guess I may not be the only one
seeing
Tim,
Would it make sense to have "I" as an object that acts like UniformScaling
and doesn't require any memory allocation, but is only transformed into
sparse matrix via the [] operator? Maybe something similar to arrays ->
subarrays?
-Júlio
Looks like the type of *d_cl* isn't inferred correctly. *d_cl = Dict(i =>
ones(3,3,nl) for i=1:np)::Dict{Int64,Array{Float64,3}}* helps with that,
but I didn't see a change in performance. Best
Julia Version 0.4.6
Commit 2e358ce (2016-06-19 17:16 UTC)
Windows 10
using RDatasets
RDatasets.datasets("rpart")
UndefVarError: displaysize not defined
in writemime at
C:\Users\Rock\.julia\v0.4\DataFrames\src\abstractdataframe\io.jl:181
in sprint at iostream.jl:206
in display_dict at
Very quickly (train to catch!): try this
https://github.com/JuliaLang/julia/issues/17395#issuecomment-241911387
and see if it helps.
--Tim
On Monday, August 29, 2016 9:22:09 AM CDT Marius Millea wrote:
> I've parallelized some code with @threads, but instead of a factor NCPUs
> speed
Further: there is nothing about the command line. If you
include("script.jl") in julia, the first time it fails. Immediately, run
it a second time and it works.
Go figure.
This is still not really as it should be and looking for any further help.
On Monday, August 29, 2016 at 9:45:02 AM
Deeper view of the problem. Either code in Julia packages or OS X itself
can't resolve symlinks properly.
If you look at the conflict message, the conflict is between libtk.dylib
and /Library/Frameworks/Tk.framework/Versions/8.5/Tk.
Here is the absurdity: libtk.dylib is a symlink to
The reason is because `I` doesn't have a size, so you can't `full(I)`. I
think the most reasonable thing would be for `I` to have an optional size
(and then act like an identity operator of that size, so error on the wrong
size multiplications, etc.), and then have sparse(I), Diagonal(I), and
I'd seen that reference you suggested and I don't really want to change all
the dependencies.
Something in the latest mods of ImageView or Images broke things. All I've
done is to reinstall Julia 0.4.6 and all packages. All of the
dependencies: Python, TK, tcl have been left as is.
If I
But if it does not make sense from your point of view to give a full matrix, why
does it make more sense to return a diagonal matrix? It is still lots of
heap allocated memory. So you replace a large performance trap with a
smaller one. What has been gained? We would still have to teach people the
I've parallelized some code with @threads, but instead of a factor NCPUs
speed improvement (for me, 8), I'm seeing rather a bit under a factor 2. I
suppose the answer may be that my bottleneck isn't computation, rather
memory access. But during running the code, I see my CPU usage go to 100%
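A hedged sketch of the shape this usually takes (the loop body here is a stand-in; real speedup still depends on memory bandwidth and on allocations inside the loop): put the `@threads` loop inside a function so it doesn't run in global scope, and make the iterations independent.

```julia
# Wrapping the threaded loop in a function avoids the global-scope
# performance trap; iterations write to disjoint slots, so no sharing.
function square_all!(out, xs)
    Threads.@threads for i in eachindex(xs)
        out[i] = xs[i]^2
    end
    return out
end

xs = collect(1.0:8.0)
square_all!(similar(xs), xs)
```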
Ah, I got it now. Thanks.
segunda-feira, 29 de Agosto de 2016 às 17:05:58 UTC+1, Tim Holy escreveu:
>
> To rephrase what Steven and Tony said, for some things you won't need a
> macro.
> For example, `unsafe_wrap` didn't exist on Julia 0.4, but Compat contains
> an
> implementation of
On Monday, August 29, 2016 10:40:10 AM CDT Júlio Hoffimann wrote:
> Why would one want dense identity matrix?
Because you might want it to serve as an initialization for an iterative
optimization routine (e.g., ICA) that updates the solution in place, and which
assumes a dense matrix?
We could
The confusion is pretty clear. Someone suggested code like
m = m + lambda*eye(m)
when eye(m) shouldn't be used there. And if lambda is a vector of
eigenvalues, eye(m) still shouldn't be used there and instead the Diagonal
should be used. So if you shouldn't be using this command, why should it
To rephrase what Steven and Tony said, for some things you won't need a macro.
For example, `unsafe_wrap` didn't exist on Julia 0.4, but Compat contains an
implementation of `unsafe_wrap` for use on Julia 0.4. It's just a plain-old
function call, so you don't need `@compat`---just use it in
As Julio asks, why should the default be a dense identity matrix? Who
actually wants to use that? I agree `I` should be in more visible in the
docs and in most cases it's the better option. But if you actually want a
matrix as an array instead of just an operator (this is reasonable, because
I have no relation to this..
https://github.com/remore/julializer
[may not work to transpile Ruby on Rails to Julia - yet, there is an old
package RoR that allows it to work with Julia though.]
Interesting benchmarks here ("virtual_module" is transpiled, but "Julia
0.4.6 not, only to
Or perhaps make the package Deconvolution.jl, and have wiener be the first (and
currently only) method?
Best,
--Tim
On Sunday, August 28, 2016 12:44:44 PM CDT Mosè Giordano wrote:
> Hi all,
>
> I wrote a very simple implementation of the Wiener deconvolution
>
Ok, but then how do I quiet the tons of deprecation messages that show up?
segunda-feira, 29 de Agosto de 2016 às 15:57:34 UTC+1, Tony Kelman escreveu:
>
> You generally only need to call the @compat macro when you're trying to
> use some new syntax that didn't parse correctly on older versions
Why would one want dense identity matrix?
What is the confusion? Use eye(n) if you want a dense identity matrix. Use
I if you want something that acts like an identity element. Use
Diagonal(ones(n)) if you want a diagonal identity matrix. I see no reason
at all why eye should be changed.
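For reference, the three flavors side by side (shown in current Julia syntax, where `eye` has since been removed and `I` lives in the LinearAlgebra stdlib; in 0.5 `eye(3)` produced the dense version directly):

```julia
using LinearAlgebra  # `I` and `Diagonal` in current Julia

A = rand(3, 3)

dense = Matrix{Float64}(I, 3, 3)  # dense identity matrix (what eye(3) gave)
diag  = Diagonal(ones(3))         # stores only the diagonal
ok    = (I * A == A)              # UniformScaling: no storage at all
```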
On Monday, August 29, 2016 at 4:32:34 PM UTC+2,
You generally only need to call the @compat macro when you're trying to use
some new syntax that didn't parse correctly on older versions of Julia. If
it parses correctly, Compat usually implements it with normal functions and
methods, no need for a syntax-rewriting macro.
On Monday, August
If you can't find another home, I'd be happy to have it in Images, but to me
DSP seems like the (slightly) better choice.
That said, I also don't think it's terrible to have small packages (they are
easier to document and faster for newcomers to come to grips with), so it
could also stay a
For unstructured grids, it depends a lot on what you want. I'm a fan of
piecewise linear polyhedral interpolation, but there are many other choices:
https://en.wikipedia.org/wiki/Multivariate_interpolation#Irregular_grid_.28scattered_data.29
One of the (many) Voronoi/Delaunay packages should
On 29 August 2016 at 16:07, Chris Rackauckas wrote:
> That's exactly the reason why it's a good idea. The backends aren't
> swappable, but the code is. And for the most part that means you can just
> avoid the cons of any backend instead of having to fight against them. You
>
But you don't want a sparse matrix. It would not be an efficient way to
actually use it since sparse matrices have a bit of overhead due to their
table structure. Even better would be a Diagonal since it's just an array
with dispatches to act like a diagonal matrix. But best would be to use the
It is more than sparse: it acts as a scalar at first; maybe the operator []
modifies the type to sparse?
-Júlio
You mean a sparse matrix?
Andreas, is there a way to get the best of both worlds? Let's say eye() is
deprecated, can we somehow set off-diagonal terms in a type that is smart
like UniformScaling and supports indexing with operator []?
-Júlio
The important points of this thread:
- There hasn't been a tag in a year for Winston, but there have been
fixes, including from Jeff and Stefan. Check out master and try again
- You might consider other packages, because there's lots of options out
there
- Chris REALLY wants to
Isn't `I` better here?
On Monday, August 29, 2016 at 6:49:41 AM UTC-7, Evan Fields wrote:
>
>
>
> On Monday, August 29, 2016 at 9:39:19 AM UTC-4, Júlio Hoffimann wrote:
>>
>> I'd like to understand the existence of eye() in Julia, it is still not
>> clear to me. Is it because one wants type
This isn't the thread for this, but I'll bite.
That's exactly the reason why it's a good idea. The backends aren't
swappable, but the code is. And for the most part that means you can just
avoid the cons of any backend instead of having to fight against them. You
could be making all of your
Evan, this is exactly where you should use I, i.e.
m = m + λ*I
The reason is that eye(m) will first allocate a dense matrix of size(m,1)^2
elements. Then * will do size(m,1)^2 multiplications of lambda and allocate
a new size(m,1)^2 matrix for the result. Finally, size(m,1)^2 additions
will be
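A small sketch of the two approaches (current syntax; `Matrix(1.0I, n, n)` plays the role of the old `eye(n)`): the dense route allocates and scans n² entries, while `λ*I` stays a lazy scalar-like object and `+` only has to touch the diagonal.

```julia
using LinearAlgebra

m = rand(100, 100)
λ = 0.1

# Dense route: allocates a 100×100 identity, then a 100×100 product.
m_dense = m + λ * Matrix(1.0I, 100, 100)

# UniformScaling route: no intermediate matrices; adds λ to the diagonal.
m_lazy = m + λ * I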
I currently use PyPlot. It has a lot going for it, as it is the most mature
plotting package for Julia right now (thanks to Matplotlib). I don't use
Plots.jl right now because I am happier using PyPlot directly. I like the
API better, and I have more control because I have a way to issue any
On Monday, August 29, 2016 at 9:39:19 AM UTC-4, Júlio Hoffimann wrote:
>
> I'd like to understand the existence of eye() in Julia, it is still not
> clear to me. Is it because one wants type stability when updating a matrix
> iteratively? Is this possibly a limitation from the design of the
Plots.jl is a good idea, but the backends are not really swappable. You can
get a fairly different plot if you swap the backend.
On Saturday, 27 August 2016 02:19:45 UTC+2, Chris Rackauckas wrote:
>
> You should really check out Plots.jl. It's a plotting metapackage which
> lets you use the
Julia is developing over time. Originally, eye was probably implemented to
mimic Matlab. Later we realized that the type system allowed us to define
the much nicer UniformScaling which has the special case
const I = UniformScaling(1)
which is almost always better to use unless you plan to modify
We could deprecate eye. Then the users would get a warning directing them
to use `I` instead.
On Mon, Aug 29, 2016 at 6:29 AM, Júlio Hoffimann
wrote:
> I still think that having a "global variable" named "I" is not robust.
> I've read so many scripts in matlab that do
I'd like to understand the existence of eye() in Julia, it is still not
clear to me. Is it because one wants type stability when updating a matrix
iteratively? Is this possibly a limitation from the design of the language?
-Júlio
I still think that having a "global variable" named "I" is not robust. I've
read so many scripts in matlab that do I = eye(n). This approach is not
gonna work.
-Júlio
If we could somehow make `I` more visible, wouldn't you think that
B = I*A
is better than
B = eye(1)*A
?
Small side note: the best we can hope for is probably performance similar
to B = copy(A) because it wouldn't be okay to alias A and B when B has been
constructed from *.
On Mon, Aug
Hi Andreas,
As a user I would like to write
B = eye(1) * A
and have the performance of
B = A
90% of the users won't be aware of this 1-character variable "I" defined in
Base nor use it. Also, I can guarantee that "I" is much easier to overwrite
than a well known function name.
-Júlio
>
> No, it is:
>
> t = unsafe_wrap(Array, Gb.data, h.size)
>
> as in the deprecation warning.
>
Thanks (I'd figured it out too meanwhile)
> (You don't need @compat just for function calls. You only need @compat
> for things where the syntax changes in a more complicated way.)
>
Hmm,
Hi Tom,
Mose: what version of julia are you on? Anonymous functions and closures
> are much faster on 0.5... In fact there should be no performance penalty vs
> regular functions, which allows you to rethink your paradigm.
>
It was Julia 0.4.6, but I get similar results also with Julia
No. We are only exposing `cond` but as you can see
in https://github.com/JuliaLang/julia/blob/master/base/linalg/lu.jl#L235 we
are actually getting `rcond` from LAPACK and then calling `inv`. I can see
the usefulness of working with a number in [0,1] instead of [1,inf) but it
seems superfluous
You can also overwrite eye
Could you elaborate on the "90% of the users won't be aware of these
internal details in their day-to-day coding" part? If we ignore the name
for a while, why is `I` not what you want here? It is as efficient as it
can possibly be.
On Sunday, August 28, 2016 at
Hello colleague,
On Saturday, August 27, 2016 at 12:04:22 AM UTC+2, K leo wrote:
>
> so that it works with version 0.5.
could you please list/report what errors you get? Do you use Tk or Gtk? I
looked a little bit around yesterday (with Gtk) and it looks like plotting
does work, but not
Deniz thanks for the package. I plan on reviewing it this week to decide if
it's a good fit for JuliaML. We're in the market for fast and well typed
backprop.
Mose: what version of julia are you on? Anonymous functions and closures
are much faster on 0.5... In fact there should be no performance
Hi Mosè,
Thanks for the wonderful feedback!
AutoGrad has some overhead for recording and differentiating primitive
operators. However I think there is room for improvement - the current
design has elements of the original Python package, anonymous functions,
closures, etc. that are probably not