Very cool! Is there a page where these changes are described more? How has
the abstract type hierarchy been changed?
Also, I heard there was supposed to be a complex root isolation method?
On Wednesday, July 27, 2016 at 9:07:44 AM UTC+12, Bill Hart wrote:
>
> Hi all,
>
> We are pleased to
What exactly are you trying to do? What's a "julia sample"?
On Sunday, July 31, 2016 at 6:17:13 AM UTC+12, jq...@tibco.com wrote:
>
> Hi
>
> I'm trying to build a Julia sample with Visual Studio but get a link error
> (it works on Linux). Which library should it link to?
>
> unresolved external symbol
:06 PM UTC+2, Alireza Nejati wrote:
>>
>> Simplification and equality testing are *exact* operations as they work
>> by distinctly specifying the roots of a minimal polynomial. Two algebraic
>> numbers are distinct if their minimal polynomials are distinct. If their
implementation. If you find a bug please let me know on the github
repo.
On Thursday, July 14, 2016 at 2:16:42 AM UTC+12, Fredrik Johansson wrote:
>
>
>
> On Wednesday, July 13, 2016 at 2:21:18 AM UTC+2, Alireza Nejati wrote:
>>
>> Ever wanted to do exact arithmetic, geome
I see. Will try it, thanks for the tip
On Thursday, July 14, 2016 at 12:49:19 AM UTC+12, Tommy Hofmann wrote:
>
> You have the same problem with QQ. Use FlintQQ instead of QQ.
>
> On Wednesday, July 13, 2016 at 1:31:23 PM UTC+2, Alireza Nejati wrote:
>>
>> Tommy: Thanks!
Indeed!
Hecke.jl also has some similar abilities.
On Wednesday, July 13, 2016 at 12:29:28 PM UTC+12, Jeffrey Sarnoff wrote:
>
> (another good use of Nemo!)
>
> On Tuesday, July 12, 2016 at 8:21:18 PM UTC-4, Alireza Nejati wrote:
>>
>> Ever wanted to do exact arithmetic, g
https://github.com/anj1/ThinPlateSplines.jl
Indeed you're right; I wrote that example in a hurry. It's been updated
(I've taken a lot more care in the actual code - documentation was never my
forte).
What kind of vehicle routing algorithms do you use? I'd be interested to
know the uses people can find for stuff like this. I'm personally
https://github.com/anj1/AffineSpaces.jl
If you're doing computational geometry but are tired of copy-pasting
fragile stackoverflow answers to do simple things like point-line distance
and so on (or choosing between that and some huge and bloated computational
geometry library like CGAL), this
John Lambert: Unless you can run julia in sagemath cloud, I fail to see how
it is relevant to the discussion at hand. Sheehan clearly wants to run
julia, not sage.
> From reading through some of the TensorFlow docs, it seems to currently
only run on one machine. This is where MXNet has an advantage (and
MXNet.jl) as it can run across multiple machines/gpus
I think it's fair to assume that Google will soon release a distributed
version.
> problem is,
Hello,
I'd like to join the JuliaML group. My github account name is anj1.
Regards,
Al Nejati
Both! :)
If anyone draws up an initial implementation (or pathway to implementation,
even), I'd gladly contribute. I think it's highly strategically important
to have a julia interface to TensorFlow.
Randy: To answer your question, I'd reckon that the two major gaps in julia
that TensorFlow could fill are:
1. Lack of automatic differentiation on arbitrary graph structures.
2. Lack of ability to map computations across cpus and clusters.
Funny enough, I was thinking about (1) for the past
I'm with Tomas here - if anything this thread is testimony to the fact that
both '=' and 'in' should be left in.
Most things in 0.3 will still work in 0.4, except with a handy deprecation
warning which will tell you exactly what to fix. Here are some tips:
- Use Compat.jl. It makes life a LOT easier. It lets you write a single
version of your code for 0.4 (or, more generally, the latest version of
julia)
Nice!
Any plans on merging this functionality with that of Mocha.jl?
Michele's solution is preferred here, but you can also do it like this:
string(lista[3])[1]
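For concreteness, a minimal sketch (the `lista` contents here are hypothetical, just to make the snippet self-contained):

```julia
lista = ["foo", "bar", 300]   # hypothetical example data
c = string(lista[3])[1]       # convert the element to a string, take its first character
# c == '3'
```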
Also look at: https://github.com/dcjones/Gadfly.jl/blob/master/src/theme.jl
(line_style)
The design philosophy of Gadfly seems to be that you should think about the
data and let the software worry about how to present it.
That said, it is possible to change things like fonts, line thicknesses and
dash styles, and legend placement through
themes: http://gadflyjl.org/themes.html
Hi all,
I was wondering if this is a julian use of the @generated macro:
type Functor{Symbol} end
# A simple general product-sum operator;
# returns a[1]⊙b[1] ⊕ a[2]⊙b[2] ⊕ ...
@generated function dot{⊕,⊙,T}(::Type{Functor{⊕}}, ::Type{Functor{⊙}},
a::Array{T}, b::Array{T})
return quote
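For context, here is a complete 0.4-era sketch of the idea (my reconstruction, not the original post's exact body; `pdot` is a hypothetical name to avoid clashing with Base):

```julia
# Functor lifts an operator's name (a Symbol) into the type domain, so the
# generated body can splice the actual operators in at compile time.
immutable Functor{F} end

@generated function pdot{⊕,⊙,T}(::Type{Functor{⊕}}, ::Type{Functor{⊙}},
                                a::Array{T}, b::Array{T})
    # ⊕ and ⊙ are Symbols here; splicing them into the quote turns
    # them back into calls to the functions they name.
    quote
        acc = ($⊙)(a[1], b[1])
        for i in 2:length(a)
            acc = ($⊕)(acc, ($⊙)(a[i], b[i]))
        end
        acc
    end
end

# pdot(Functor{:+}, Functor{:*}, [1,2,3], [4,5,6]) should then behave like dot()
```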
I missed a word in there. Meant to say, "runs at the same speed as
hard-coding the * and + functions". Anyway, I wanted to know if there's a
better way of doing something similar without using the @generated macro.
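For what it's worth, one non-@generated alternative is to pass the operators as ordinary functions (a sketch; whether it matches the hard-coded speed depends on how well your Julia version specializes on function arguments):

```julia
# General product-sum with the operators passed as plain function arguments
pdot2(f, g, a, b) = reduce(f, map(g, a, b))

pdot2(+, *, [1,2,3], [4,5,6])  # -> 32, same as dot([1,2,3], [4,5,6])
```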
I've been coding in julia so much lately that I actually think my brain
might be forgetting the other languages I used to know!
On Monday, October 26, 2015 at 4:30:26 PM UTC+13, Yakir Gagnon wrote:
>
> Hi Julia community and developers,
> I'm a postdoc researching color vision, biological
I didn't know there was already a discussion going on this. Thanks for the
links.
My goal here isn't to replace Dot{} but rather to figure out what the most
julian way of doing this would be. Thanks again though.
There is no difference, as far as I know.
'=' seems to be used more for explicit ranges (i = 1:5) and 'in' seems to
be used more for variables (i in mylist). But using 'in' for everything is
ok too.
The '=' is there for familiarity with matlab. Remember that julia's syntax
was in part
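For the record, the two spellings produce identical loops; a quick sketch:

```julia
# '=' and 'in' are interchangeable in for loops and comprehensions
a = [i for i = 1:5]
b = [i for i in 1:5]
# a == b == [1,2,3,4,5]
```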
There were some issues with the gradient_descent method which have now been
solved; thanks to Sam Lendel https://github.com/lendle for pointing them
out.
On Wednesday, July 23, 2014 8:15:56 PM UTC+12, Alireza Nejati wrote:
For about two weeks now, Zac Cranko, Pasquale Minervini, and I
For about two weeks now, Zac Cranko, Pasquale Minervini, and I (Alireza
Nejati a.k.a. anj1) have been working on a new package for neural networks
in julia: NeuralNets.jl https://github.com/anj1/NeuralNets.jl.
The goal is to create a clean, modular implementation of neural networks
that can
John, just to give some explanation: push! is there as an efficient append
operation - one that ideally takes amortized O(1) time because it extends the
vector in place rather than copying everything to a new vector. (In practice,
though, an individual push! can take longer than that because of the
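To illustrate the amortized behaviour (a toy sketch):

```julia
# Repeated push! grows the vector in place; Julia over-allocates the
# underlying buffer, so most pushes don't copy anything.
v = Int[]
sizehint!(v, 1000)   # optional: pre-reserve to avoid reallocations entirely
                     # (spelled sizehint, without the !, in older versions)
for i in 1:1000
    push!(v, i)
end
# length(v) == 1000, v[end] == 1000
```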
FWIW, I did a huge upgrade on my system and now that error goes away, but I
still don't have autocomplete. Anyway, it's not that important, everything
else is usable and nice.
On Sunday, June 29, 2014 9:46:21 PM UTC+12, Mike Innes wrote:
Hey all,
I've released the latest version of the
As someone who is a relative newcomer preparing packages for submission to
METADATA, I'm also inclined to agree with the above posts. When I first
started using Julia I was under no illusions that what's in METADATA may
not necessarily be sanctioned by the core julia developers and caveat
I'm on 64-bit linux by the way
On Sunday, June 29, 2014 9:46:21 PM UTC+12, Mike Innes wrote:
Hey all,
I've released the latest version of the Julia environment
https://github.com/one-more-minute/Jupiter-LT I'm building. There are a
whole bunch of improvements but the main ones are:
-
Nice work, looks much better than my previous Sublime setup.
One issue I'm having is that autocomplete doesn't seem to be working, and
I'm repeatedly getting:
[8725:0630/141807:ERROR:vsync_provider.cc(70)] glXGetSyncValuesOML should
not return TRUE with a media stream counter of 0.
And It
Another issue: I'm not sure if this is Jupiter-specific, but it overrides
my tab settings. It changes all the tabs in my files to 2 spaces, which is
horrendous. I tried changing lt.objs.editor/tab-settings in all 3 of
default behaviors, user behaviors, and jupiter behaviors, but no dice.
On
Actually that's not a bad idea; someone should start a Julia-specific
exercise repo.
About Expert.4, I'm not sure how you're running it, but matlist and veclist
should obviously be lists of matrices and vectors, respectively.
matlist = Matrix[rand(4,4), rand(4,4)]
veclist = Vector[rand(4),
Actually, a slight modification. The way I wrote it, it will compute the
product of all matrices with all vectors (pxp mults), which is not what you
want. You just want each matrix to multiply its respective vector (p
mults). The solution to that is:
p = length(matlist)
reduce(+,
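Spelled out in full (my guess at the truncated tail, with `matlist` and `veclist` set up as above):

```julia
# One product per (matrix, vector) pair, then sum the p results
matlist = Matrix[rand(4,4), rand(4,4)]
veclist = Vector[rand(4), rand(4)]
p = length(matlist)
result = reduce(+, [matlist[i]*veclist[i] for i in 1:p])
# result is a length-4 vector
```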
Same with Apprentice.4 :
[(x,y) for x in linspace(0,1,10), y in linspace(0,1,10)]
meshgrid() isn't included in Julia because it's almost never really needed.
Good work on these exercises, although I fear that the questions, being
designed for numpy, may not accurately reflect typical julia
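In other words, a comprehension over two ranges already gives you the grid (using a plain step range here instead of linspace, just to keep the sketch version-agnostic):

```julia
# A 2D comprehension over two ranges produces the full grid directly
grid = [(x, y) for x in 0:0.25:1, y in 0:0.25:1]
# grid is a 5x5 array of (x, y) tuples -- no meshgrid needed
```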
The easiest way is probably to run julia in a chroot jail:
http://docs.oracle.com/cd/E37670_01/E36387/html/ol_cj_sec.html
Note that this method only prevents unintentional mistakes from harming your
system; it won't defend against a determined hacker. For that, you'd need to
look at more
This is specific to the REPL display. If you try:
import Base.Multimedia.displays
then use display(displays[1], fm) instead, you will not get the new lines.
(This might vary depending on setup; on my setup displays[1] is the text
display and displays[2] is the REPL). It's obviously not a good
circshift
On Thursday, June 19, 2014 10:20:57 PM UTC+12, Paweł Biernat wrote:
Is there a function that cycles the list as follows?
cycle([1,2,3,4,5,6],2) -> [5,6,1,2,3,4]
cycle([1,2,3,4,5,6],-2) -> [3,4,5,6,1,2]
cycle([1,2,3,4,5,6],0) -> [1,2,3,4,5,6]
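circshift does exactly this; a positive shift rotates elements toward higher indices:

```julia
circshift([1,2,3,4,5,6], 2)   # -> [5,6,1,2,3,4]
circshift([1,2,3,4,5,6], -2)  # -> [3,4,5,6,1,2]
circshift([1,2,3,4,5,6], 0)   # -> [1,2,3,4,5,6]
```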
But for fastest transcendental function performance, I assume that one
must use the micro-coded versions built into the processor's FPU--Is that
what the fast libm implementations do?
Not at all. Libm's version of log() is about twice as fast as the CPU's own
log function, at least on a
Dahua: On my setup, most of the time is spent in the log function.
On Tuesday, June 17, 2014 3:52:07 AM UTC+12, Florian Oswald wrote:
Dear all,
I thought you might find this paper interesting:
http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
It takes a standard model from
It's my impression that to do this sort of stuff you should use Julia's
built-in process creation/communication facilities. Have a look at this
page: http://docs.julialang.org/en/release-0.1/manual/parallel-computing/
On Monday, June 16, 2014 10:57:28 AM UTC+12, Aerlinger wrote:
I'm writing a
Kevin: Thanks, yeah I didn't pay any attention to the version
On Monday, June 16, 2014 10:57:28 AM UTC+12, Aerlinger wrote:
I'm writing a package to allow a Julia program to asynchronously listen
and respond to file change events on disk, but I've hit a bit of a
stumbling block. I need a