Thanks for putting this together! One more thing about authors, on pages
like for example this one
http://www.juliabloggers.com/using-asciiplots-jl/
there should be the same attribution as on the front page.
On Tuesday, June 17, 2014 1:19:48 AM UTC+1, Randy Zwitch wrote:
Ok, there is now more
Hi Dahua,
I cannot find Base.maxabs (i.e. Julia says Base.maxabs not defined)
I'm here:
julia> versioninfo()
Julia Version 0.3.0-prerelease+2703
Commit 942ae42* (2014-04-22 18:57 UTC)
Platform Info:
System: Darwin (x86_64-apple-darwin12.5.0)
CPU: Intel(R) Core(TM) i5-2435M CPU @ 2.40GHz
On Monday, 16 June 2014 03:33:32 UTC+2, Jacob Quinn wrote:
it has nice discoverability properties (tab-completion)
Oh that's an interesting one. Never consciously thought of the interaction
between naming conventions and autocomplete functionality before.
isn't generally too awkward
It seems Base.maxabs was added (by Dahua) as late as May 30
-
https://github.com/JuliaLang/julia/commit/78bbf10c125a124bc8a1a25e8aaaea1cbc6e0ebc
If you update your Julia to the latest master, you'll have it =)
// T
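For anyone stuck on an older build in the meantime, the function is easy to shim; a minimal sketch (the name `maxabs_compat` is mine, to avoid clashing with the real `Base.maxabs`):

```julia
# Minimal stand-in for Base.maxabs on builds that predate it:
# the maximum of the elementwise absolute values.
maxabs_compat(x) = maximum(abs, x)

maxabs_compat([-3, 1, 2])  # 3
```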
On Tuesday, June 17, 2014 10:20:05 AM UTC+2, Florian Oswald wrote:
Hi Dahua,
hi tim - True!
(why on earth would I do that?)
defining it outside reproduces the speed gain. thanks!
On 16 June 2014 18:30, Tim Holy tim.h...@gmail.com wrote:
From the sound of it, one possibility is that you made it a private
function
inside the computeTuned function. That creates the
Is there a way to use multiple axes in PyPlot, such as [this matplotlib
example](http://matplotlib.org/examples/api/two_scales.html)?
I've tried various approaches to get the axes objects I need, but I can't
manage to get them as anything but general PyObjects, which of course don't
have the
On Tue, 2014-06-17 at 09:29, j.l.vanderz...@gmail.com wrote:
On Monday, 16 June 2014 03:33:32 UTC+2, Jacob Quinn wrote:
it has nice discoverability properties (tab-completion)
Oh that's an interesting one. Never consciously thought of the interaction
between naming conventions and
Hello colleague,
I'm doing Gtk(2)+Cairo animations (in Python, I admit) in a different
context, and I have done it using the GTK main loop and GLib timers; but I
had to put some effort into making the animation fast, so as not to
interfere with the GTK main loop for too long.
In the Gtk.jl afaiu
Le lundi 16 juin 2014 à 14:59 -0700, Jesus Villaverde a écrit :
Also, defining
mylog(x::Float64) = ccall((:log, "libm"), Float64, (Float64,), x)
made quite a bit of difference for me, from 1.92 to around 1.55. If I
also add @inbounds, I go down to 1.45, making Julia only twice as
slow as
I have:
type parms
r::Float64
K::Float64
end
k=Array(parms,20)
for i =1:20 k[i]=parms(1.1,2.2) end
addprocs(1)
nprocs()  # returns 2
@parallel for i=1:20 k[i].r=2.0 end
gives error:
julia> @parallel for i=1:20 k[i].r=2.0 end
fatal error on 2:
ERROR: parms not defined
in deserialize at
I'll second that, great community and some very very helpful people that
put a lot of effort into this.
Thanks
I am thinking about a blog specializing in numerical math, optimization,
and maybe simulation, in Julia. I will not set up a blog of my own, but
I would like to contribute to such a blog. Is there something like a
shared/joined blog, or are there some of you interested in this area
and in
You’ll need to evaluate the type definition on all processes, using the
macro
@everywhere type parms
...
end
*after* adding the worker process. If I do that, I can run your code
without error. (However, k seems to be unchanged - you might have to use a
DArray (distributed array) in
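In current syntax (`struct` where 0.3 wrote `type`), the suggested order of operations looks roughly like this; computing the results with `pmap` sidesteps the worker-mutation problem entirely. A sketch, not the poster's exact code:

```julia
using Distributed

addprocs(1)

# Define the type on *all* processes, after the workers exist.
@everywhere struct Parms
    r::Float64
    K::Float64
end

k = [Parms(1.1, 2.2) for i in 1:20]

# Instead of mutating k from workers, compute results and collect them:
b = pmap(p -> p.r * p.K, k)
```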
Great, solve first problem, thanks.
using DArray though gives
julia> k=DArray(parms,20)
exception on 2: ERROR: no method parms((UnitRange{Int64},))
in anonymous at multi.jl:840
in run_work_thunk at multi.jl:613
in run_work_thunk at multi.jl:622
in anonymous at task.jl:6
ERROR: assertion
also this works but does not change values in b
@parallel for i=1:20 b[i]=k[i].r*k[i].K end
I tried making b=DArray{Float64,1} or b=dones(20,1) but still values in b
are not updated
do I need to use spawn/fetch or pmap or something like this?
Sorry, not fluent in parallel programming yet, but
Thank you everyone for the fast replies!
After looking at ImageView and the sources, here's the solution I came up
with:
w = Gtk.@Window() |>
(body=Gtk.@Box(:v) |>
(canvas=Gtk.@Canvas(600, 600)) |>
showall
function redraw_canvas(canvas)
ctx = getgc(canvas)
h = height(canvas)
w =
absolutely agree!
Also Douglas: very happy to hear that you are porting the lme4 package to
Julia.
On Tuesday, 17 June 2014 11:43:22 UTC+1, Jon Norberg wrote:
I'll second that, great community and some very very helpful people that
put a lot of effort into this.
Thanks
pmap is probably useful here.
Just to make sure, you *have* read the manual section on parallel
programming http://docs.julialang.org/en/latest/manual/parallel-computing/,
right? There's a lot of good stuff there =)
//T
On Tuesday, June 17, 2014 1:30:36 PM UTC+2, Jon Norberg wrote:
also
*Is there an implementation of the Remez algorithm in Julia, or is someone
working on this?*
Sometimes it is important to have a (polynomial) *minmax approximation* to
a curve or function (on a finite interval), i.e., an approximating
polynomial of a certain maximum degree such that the
I'm able to catch the InterruptException with the code below when running
in the REPL, but it doesn't seem to get thrown when running the code in a
script.
while true
    try sleep(1)
        println("running...")
    catch err
        println("error: $err")
    end
end
On Monday, 16 June 2014
I wanted to do this for DSP.jl as this is used for filter design, but all
opensource implementations I could find to use as a reference just wrapped
the same old piece of Fortran code or a low-level translation of it to C
(this is the case in Scipy). As I am not terribly familiar with the
For outer, I think that it would be clearer to say that it repeats (or
clones?) the whole array along the specified dimensions. For inner, I think
it's ok.
Looking at the tests for repeat, I think we could use this as an example:
As an illustrative example, let's consider array A:
A =
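To illustrate the "repeats the whole array along the specified dimensions" reading of `outer`, a small self-contained example (keyword form of `repeat`):

```julia
A = [1 2; 3 4]

# outer tiles the *whole* array: here, twice along dimension 1.
B = repeat(A, outer=[2, 1])
# B is:
# 1  2
# 3  4
# 1  2
# 3  4
```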
Hi Johan,
Thanks for posting that example, that really helped to speed things up.
Pre-allocating the array and storing the computation directly into that
array is what I was hoping the array comprehension could do, but I can't
trick it into inserting the array directly into a 2D array
I meant
My apologies, I think the link got mangled last time. Here it is again:
http://www.juliabloggers.com/feed/
On Monday, June 16, 2014 9:18:45 PM UTC-4, K leo wrote:
Is there something wrong with the feed?
http://www.juliabloggers.com/feed/
I think this is just a caching issue, the attribution should be on all
pages.
On Tuesday, June 17, 2014 3:44:47 AM UTC-4, Mauro wrote:
Thanks for putting this together! One more thing about authors, on pages
like for example this one
http://www.juliabloggers.com/using-asciiplots-jl/
there
Hi Abe,
the idea of the wait condition is that the program would be immediately
closed if run in script mode (from the shell). In the REPL one usually
wants the program to return so that the REPL is still active.
Cheers,
Tobi
Am Dienstag, 17. Juni 2014 13:46:32 UTC+2 schrieb Abe Schneider:
Hi Pr. Villaverde, just wanted to say that it was your paper that made me
try Julia. I must say that I am very happy with the switch! Will you
continue using Julia for your research?
Ah Sorry, over 20 years of coding in Matlab :(
Yes, you are right, once I change that line, the type definition is
irrelevant. We should change the paper and the code ASAP
On Tuesday, June 17, 2014 12:03:29 AM UTC-4, Peter Simon wrote:
By a process of elimination, I determined
Randy, would it be possible to integrate the page in julialang.org (under
the blog section)?
If not, it would probably be good to add a link there + maybe remove the
duplicated posts.
Cheers,
Tobi
Am Dienstag, 17. Juni 2014 14:39:46 UTC+2 schrieb Randy Zwitch:
I think this is just a caching
Not your fault at all. We need to make this kind of thing easier to discover.
Eg with
https://github.com/astrieanna/TypeCheck.jl
On Jun 17, 2014, at 8:35 AM, Jesus Villaverde vonbismarck1...@gmail.com
wrote:
Ah Sorry, over 20 years of coding in Matlab :(
Yes, you are right,
On Jun 17, 2014, at 7:51 AM, Florian Oswald florian.osw...@gmail.com wrote:
Also Douglas: very happy to hear that you are porting the lme4 package to
Julia.
Doug has been on a quiet, long-term mission to accomplish this exact goal,
building all the necessary bits and pieces. In fact, I
João, I found the same code
https://github.com/scipy/scipy/blob/master/scipy/signal/sigtoolsmodule.c#L229-L531
a few days ago, hoping it would give me a good starting point for a Julia
implementation. But I don't understand it at all. A blind port wouldn't be
feasible either considering all those
I think so! Matlab is just too slow for many things and a bit old in some
dimensions. I often use C++, but for a lot of stuff it is just too
cumbersome.
On Tuesday, June 17, 2014 8:50:02 AM UTC-4, Bruno Rodrigues wrote:
Hi Pr. Villaverde, just wanted to say that it was your paper that made me
Thanks! I'll learn those tools. In any case, paper updated online, github
page with new commit. This is really great. Nice example of aggregation of
information. Economists love that :)
On Tuesday, June 17, 2014 9:11:08 AM UTC-4, Stefan Karpinski wrote:
Not your fault at all. We need to make
On Tuesday, June 17, 2014 04:46:31 AM Abe Schneider wrote:
I haven't looked yet into what is required to do double-buffering (or if
it's enabled by default). I also copied the 'wait(Condition())' from the
docs, though it's not clear to me what the condition is (if I close the
window, the
I’d like to add the following example too, of using *both* inner and outer,
to show off the flexibility of repeat:
julia> repeat(A, inner=[1,2], outer=[2,1])
4x4 Array{Int64,2}:
1 1 2 2
3 3 4 4
1 1 2 2
3 3 4 4
Until I had tried that in the REPL myself, I didn’t really trust
I did add the julialang.org/blog feed to Julia Bloggers already. The
attribution is a bit messed up because they are re-directing their feed
using Feedburner, then Feedburner re-directs to the actual URL; I'll try to
figure out how to get the attribution to point directly to the blog post.
I just tried roots in the Polynomial package
here's what happened
@time roots(Poly([randn(100)]));
LAPACKException(99)
while loading In[10], in expression starting on line 44
in geevx! at linalg/lapack.jl:1225
in eigfact! at linalg/factorization.jl:531
in eigfact at
Your matrices are kinda small so it might not make much difference, but it
would be interesting to see whether using the Tridiagonal type could speed
things up at all.
On Tuesday, June 17, 2014 6:25:24 AM UTC-7, Jesus Villaverde wrote:
Thanks! I'll learn those tools. In any case, paper
Do any of the more initiated have an idea why Numba performs better for
this application, as both it and Julia use LLVM? I'm just asking out of
pure curiosity.
Cameron
On Tue, Jun 17, 2014 at 10:11 AM, Tony Kelman t...@kelman.net wrote:
Your matrices are kinda small so it might not make much
I'm able to register a callback function using signal in libc, see the code
below.
SIGINT=2
function catch_function(x)
println("caught signal $x")
exit(0)::Nothing
end
catch_function_c = cfunction(catch_function, None, (Int64,))
ccall((:signal, "libc"), Void, (Int64, Ptr{Void}), SIGINT,
This code is not valid, since getgc does not always have a valid drawing
context to return. Instead you need to provide Canvas with a callback
function via a call to redraw in which you do all the work, then just call
draw(canvas) in your timer callback to force an update to the view.
The implementation in https://github.com/Keno/Polynomials.jl looks a bit
better-scaled (not taking reciprocals of the eigenvalues of the companion
matrix), though it still might be better off on the original Wilkinson
example if the companion matrix were transposed.
Doesn't look like Julia has
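For reference, the companion-matrix approach that `roots` is built on can be sketched in a few lines. This is my own illustrative version (coefficients constant-term first, without the reciprocal trick discussed above), not the package's implementation:

```julia
using LinearAlgebra

# Roots of c[1] + c[2]*x + ... + c[end]*x^n via eigenvalues
# of the companion matrix of the monic polynomial.
function poly_roots(c::Vector{Float64})
    n = length(c) - 1
    C = zeros(n, n)
    for i in 2:n
        C[i, i-1] = 1.0          # subdiagonal of ones
    end
    C[:, n] = -c[1:n] ./ c[n+1]  # last column from normalized coefficients
    return eigvals(C)
end

poly_roots([2.0, -3.0, 1.0])  # roots of x^2 - 3x + 2, i.e. 1 and 2
```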
I see both Polynomial and Polynomials in METADATA - is Polynomials a
replacement for Polynomial?
On Tuesday, June 17, 2014 10:46:55 AM UTC-4, Tony Kelman wrote:
The implementation in https://github.com/Keno/Polynomials.jl looks a bit
better-scaled (not taking reciprocals of the eigenvalues
On Tuesday, June 17, 2014 09:06:20 AM Stefan Karpinski wrote:
I can't possibly second the recognition of the amazing quality of Dahua and
Tim's code enough. The fact that there are various pieces of very high
quality numerical code written by them – or Steven Johnson – is one of the
critical
I can't make roots fail with the example you gave.
It is right that there isn't yet eigfact methods for the Hessenberg type
and the LAPACK routines are not wrapped. The last part wouldn't be
difficult, but we need to think about the Hessenberg type. Right now it is
a Factorization and it stores
I set that up back in 2012 and I know squat about RSS or setting up feeds,
so if anyone has any good ideas about alternate setups, I'm all ears. I
have no attachment to feedburner.
On Tue, Jun 17, 2014 at 9:33 AM, Randy Zwitch randy.zwi...@fuqua.duke.edu
wrote:
I did add the julialang.org/blog
That is very unlikely to be reliable, but it's cool that it works. I think
that we probably should change SIGINT from raising a normal error to
triggering some kind of interrupt handling mechanism (which can in turn
raise an error by default).
On Tue, Jun 17, 2014 at 10:41 AM, Stephen Chisholm
I like the idea of an interrupt handling mechanism. What do you see that
would make the signal/libc approach unreliable?
On Tuesday, 17 June 2014 12:18:11 UTC-3, Stefan Karpinski wrote:
That is very unlikely to be reliable, but it's cool that it works. I think
that we probably should change
On Tue, Jun 17, 2014 at 10:53 AM, Iain Dunning iaindunn...@gmail.com
wrote:
I see both Polynomial and Polynomials in METADATA - is Polynomials a
replacement for Polynomial?
Yes, Polynomials is the newer version with good indexing order – i.e. p[0]
is the constant term. We should probably get
Maybe we should use a couple of examples. I agree that Tomas’s example is an
important one to include to make people understand things.
I’d also suggest including one example that produces a higher-dimensional
output than the input. This is one of the big differences between repeat and
repmat.
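A concrete case of `repeat` producing higher-dimensional output than its input, which is exactly what `repmat` cannot do:

```julia
v = [1, 2]                              # 1-D input
M = repeat(v, inner=[2], outer=[1, 3])  # 2-D output
# M is 4x3; each column is [1, 1, 2, 2]
```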
There's very few things you can safely do in a signal handler. Calling a
julia function can potentially lead to code generation, GC, etc., all of
which is bad news in a signal handler. That's why we need a first-class
mechanism for this: install a Julia function as a handler and the system
Note that Remez algorithm can be used to find optimal (minimax/Chebyshev)
rational functions (ratios of polynomials), not just polynomials, and it
would be good to support this case as well.
Of course, you can do pretty well for many functions just by sampling at a
lot of points, in which case
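The "sample at a lot of points" route can be pushed surprisingly far with plain Chebyshev interpolation, which is already near-minimax. A self-contained sketch (a Vandermonde solve in the monomial basis, chosen for clarity at small degree rather than numerical robustness):

```julia
f = cos
n = 8
# Chebyshev nodes on [-1, 1]
nodes = [cos((2k - 1) * pi / (2n)) for k in 1:n]

# Interpolating polynomial of degree n-1 through the nodes.
V = [x^j for x in nodes, j in 0:n-1]
c = V \ f.(nodes)
p(x) = sum(c[j+1] * x^j for j in 0:n-1)

# Sup-norm error on a dense grid: close to the true minimax error.
grid = range(-1, 1, length=1001)
err = maximum(abs(f(x) - p(x)) for x in grid)
```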
Hi, this is a continuation of what I by mistake posted in devs-list
https://groups.google.com/forum/?fromgroups=#!topic/julia-dev/GojOx4nI-xo
that now stroke me again (in another place)
function imcdCanvasPutImage(_canvas, _image, _x, _y, _w, _h, _xmin, _xmax,
_ymin, _ymax)
Is imImage an abstract type? That would account for both behaviors.
Julia's type parameters are invariant, which means that Array{Int,1} is
not a subtype of Array{Real,1}. The documentation talks about this in the
parametric section: Parametric Composite Types
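The invariance point is easy to check directly in the REPL (the `Vector{<:Real}` form is the later syntax for the covariant match):

```julia
# Invariant type parameters: Vector{Int} is NOT a Vector{Real}.
@assert !(Vector{Int} <: Vector{Real})

# What is usually wanted instead: "a Vector of some subtype of Real".
@assert Vector{Int} <: Vector{<:Real}

# So a method taking x::Vector{Real} won't accept [1, 2, 3];
# write f(x::Vector{<:Real}) (or, in the old syntax, f{T<:Real}(x::Vector{T})).
```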
It's up to you. If you remove Feedburner, you remove whatever stats you
might be getting. But you'll get a better looking attribution URL on Julia
Bloggers. So you'd get:
http://julialang.org/blog/2013/05/graphical-user-interfaces-part1/
instead of
That is right. The Remez algorithm finds the minimax polynomial independent
of any given prior discretization of the function.
For discrete points you can apply LP or an iteratively reweighted least
squares approach, which for this problem converges quickly and is quite
accurate. Implementing it
Looks like you're not actually calling typeof in the first statement.
On Tue, Jun 17, 2014 at 11:55 AM, J Luis jmfl...@gmail.com wrote:
Hi, this is a continuation of what I by mistake posted in devs-list
https://groups.google.com/forum/?fromgroups=#!topic/julia-dev/GojOx4nI-xo
that now
I've just done measurements of algorithm inner loop times in my machine by
changing the code has shown in this commit
https://github.com/cdsousa/Comparison-Programming-Languages-Economics/commit/4f6198ad24adc146c268a1c2eeac14d5ae0f300c
.
I've found out something... see for yourself:
using
That's a good point regarding code gen and garbage collection, etc.. I
should be able to work with this for now but definitely look forward to a
safer first-class mechanism. Is there an issue opened for this on github?
On Tuesday, 17 June 2014 12:36:02 UTC-3, Stefan Karpinski wrote:
There's
That definitely smells like a GC issue. Python doesn't have this particular
problem since it uses reference counting.
On Tue, Jun 17, 2014 at 12:21 PM, Cristóvão Duarte Sousa cris...@gmail.com
wrote:
I've just done measurements of algorithm inner loop times in my machine by
changing the code
Sounds like we need to rerun these benchmarks after the new GC branch gets
updated.
-- John
On Jun 17, 2014, at 9:31 AM, Stefan Karpinski ste...@karpinski.org wrote:
That definitely smells like a GC issue. Python doesn't have this particular
problem since it uses reference counting.
*imType* is a composite type
Keno is right. But now I do and still ...
function imcdCanvasPutImage(_canvas, _image, _x, _y, _w, _h, _xmin, _xmax,
_ymin, _ymax)
println("_image: ", _image)
println("typeof(_image): ", typeof(_image))
println("isa(_image, Ptr{imImage}): ", isa(_image,
@Tim: Awesome, exactly what I was looking for. Thank you.
@Jameson: Just to check, do you mean something like:
function redraw_canvas(canvas)
draw(canvas)
end
draw(canvas) do widget
# ...
end
If so, I'll re-post my code with the update. It may be useful to someone
else to see the entire
Are you talking about the incremental GC
https://github.com/JuliaLang/julia/pull/5227?
It happens that, since I'm making some experiments with a (pseudo-)realtime
simulation with Julia, I also have that branch compiled.
In my realtime experiment, at the activation of a Timer with a period of
As pointed out by Dahua, there is a lot of unnecessary memory allocation.
This can be reduced significantly by replacing the lines
maxDifference = maximum(abs(mValueFunctionNew-mValueFunction))
mValueFunction= mValueFunctionNew
mValueFunctionNew =
...but the Numba version doesn't use tricks like that.
The uniform metric can also be calculated with a small loop. I think that
requiring dependencies is against the purpose of the exercise.
2014-06-17 18:56 GMT+02:00 Peter Simon psimon0...@gmail.com:
As pointed out by Dahua, there is a lot
I'm not sure Google loves a content farm that creates no new material
but just republishes what others have published previously.
kl. 17:38:24 UTC+2 tirsdag 17. juni 2014 skrev Randy Zwitch følgende:
It's up to you. If you remove Feedburner, you remove whatever stats you
might be
Yes. Although I think the draw...do function is actually redraw...do (this
is actually a shared interface with Tk.jl, although I recommend Gtk :)
Sent from my phone.
On Tuesday, June 17, 2014, Abe Schneider abe.schnei...@gmail.com wrote:
@Tim: Awesome, exactly what I was looking for. Thank
You're right. Replacing the NumericExtensions function calls with a small
loop
maxDifference = 0.0
for k = 1:length(mValueFunction)
maxDifference = max(maxDifference, abs(mValueFunction[k]-
mValueFunctionNew[k]))
end
makes no significant difference in
Thanks Peter. I made that devectorizing change after Dahua suggested it. It
made a massive difference!
On Tuesday, 17 June 2014, Peter Simon psimon0...@gmail.com wrote:
You're right. Replacing the NumericExtensions function calls with a small
loop
maxDifference = 0.0
for k
I don't think we should be concerned about google. We've got plenty of google
mojo already and we're not trying to sell anything here.
On Jun 17, 2014, at 1:18 PM, Ivar Nesje iva...@gmail.com wrote:
I'm not sure Google loves a content farm, that creates no new material, but
just
I submitted three pull requests to the original repo that get rid of three
different array allocations in loops and that make things a fair bit faster
altogether:
https://github.com/jesusfv/Comparison-Programming-Languages-Economics/pulls
I think it would also make sense to run these
Does anyone have an explanation for why a type variable (e.g. Uint64) acts
differently when accessed in an array (e.g. [Uint64]) than by itself?
Here's some sample code to show what I mean:
import Base.convert
function convert{T1<:Integer, T2<:String}(totype::Type{T1}, value::T2)
Sorry, Florian and David, for not seeing that you were way ahead of me.
On the subject of the log function: I tried implementing mylog() as
defined by Andreas on Julia running on CentOS and the result was a
significant slowdown! (Yes, I defined the mylog function outside of main,
at the
Another interesting result from the paper is how much faster Visual C++ 2010
generated code is than gcc, on Windows. For their example, the gcc runtime is
2.29× the runtime of the MS-compiled version. The difference might be even
larger with Visual C++ 2013 because that is when MS added an
There are some remaining issues but compilation with MSVC is almost
possible. I did some initial work and Tony Kelman made lots of progress
in https://github.com/JuliaLang/julia/pull/6230. But there have not been
any speed comparisons as far as I know. Note that Julia uses JIT
compilation and
I got pretty far on that a few months ago,
see https://github.com/JuliaLang/julia/pull/6230
and https://github.com/JuliaLang/julia/issues/6349
A couple of tiny changes aren't in master at the moment, but I was able to
get libjulia compiled and julia.exe starting system image bootstrap. It hit
I was more thinking that this might make a difference for some of the
dependencies, like openblas? But I’m not even sure that can be compiled at all
using MS compilers…
From: julia-users@googlegroups.com [mailto:julia-users@googlegroups.com] On
Behalf Of Tobias Knopp
Sent: Tuesday, June 17,
Hi, Doug,
Thanks for the nice words. Contributing to Julia and its ecosystems has
been one of the most rewarding activities that I have ever experienced. I also
learned a lot from the discussions with fellow contributors. It feels
wonderful to see your efforts are impacting the world in a
We're diverging from the topic of the thread, but anyway...
No, MSVC OpenBLAS will probably never happen, you'd have to CMake-ify the
whole thing and probably translate all of the assembly to Intel syntax. And
skip the Fortran, or use Intel's compiler. I don't think they have the
resources to
I think one has to distinguish between the Julia core dependencies and the
runtime dependencies. The latter (like OpenBLAS) don't tell us much about how fast
Julia is. The libm issue discussed in this thread is of such a nature.
Am Dienstag, 17. Juni 2014 22:03:51 UTC+2 schrieb Tony Kelman:
We're
Is there a way to keep packages that have been cloned using Pkg.clone up to
date?
I currently manually go to the package directory and do git pull but I
thought there might be a command from within Julia.
Thanks,
Tobi
I'm pretty sure `Pkg.update()` does this.
-E
On Tue, Jun 17, 2014 at 1:30 PM, Tobias Knopp tobias.kn...@googlemail.com
wrote:
Is there a way to keep packages that have been cloned using Pkg.clone up
to date?
I currently manually go to the package directory and do git pull but I
thought
At least, on 0.3. Perhaps not on 0.2.1
-E
On Tue, Jun 17, 2014 at 1:55 PM, Elliot Saba staticfl...@gmail.com wrote:
I'm pretty sure `Pkg.update()` does this.
-E
On Tue, Jun 17, 2014 at 1:30 PM, Tobias Knopp tobias.kn...@googlemail.com
wrote:
Is there a way to keep packages that have
Ok, I thought this would not be the case. But one issue might be that I had
modified files in some packages, and that cannot work then, of course.
Am Dienstag, 17. Juni 2014 22:56:29 UTC+2 schrieb Elliot Saba:
At least, on 0.3. Perhaps not on 0.2.1
-E
On Tue, Jun 17, 2014 at 1:55 PM, Elliot
I am presenting a workshop on Julia programming for faculty and students in
my department. I assume those attending have a background in R so some of
the discussion has a things are done that way in R and this way in Julia
flavor.
Notebooks (well, one notebook as of today but more to come)
I use g2reader (http://www.g2reader.com) and can't subscribe to this.
Don't know why. It complains about:
Entered url doesn't contain valid feed or doesn't link to feed. It is
also possible feed contains no items.
On 06/17/2014 08:38 PM, Randy Zwitch wrote:
My apologies, I think the link got
I ran the code on 0.3.0. It did not improve things (in fact, there was a
3-5% deterioration)
On Tuesday, June 17, 2014 1:57:47 PM UTC-4, David Anthoff wrote:
I submitted three pull requests to the original repo that get rid of three
different array allocations in loops and that make things
But for fastest transcendental function performance, I assume that one
must use the micro-coded versions built into the processor's FPU--Is that
what the fast libm implementations do?
Not at all. Libm's version of log() is about twice as fast as the CPU's own
log function, at least on a
Okay, what works for me using your suggestion (except for me [with a 1 day
old Julia and package] it's 'draw' and not 'redraw'), I have:
function update(canvas, scene::Scene)
# update scene
# ...
# redraw
draw(canvas)
end
function init(canvas, scene::Scene)
update_timer =
Perhaps we should first profile the codes, and see which part constitutes
the bottleneck?
Dahua
On Tuesday, June 17, 2014 5:23:24 PM UTC-5, Alireza Nejati wrote:
But for fastest transcendental function performance, I assume that one
must use the micro-coded versions built into the
Dahua: On my setup, most of the time is spent in the log function.
On Tuesday, June 17, 2014 3:52:07 AM UTC+12, Florian Oswald wrote:
Dear all,
I thought you might find this paper interesting:
http://economics.sas.upenn.edu/~jesusfv/comparison_languages.pdf
It takes a standard model from
This is a dumb question I am sure, but I found no obvious answer in the
standard docs.
How does one take chunks of an array? In my particular case I also need
them to overlap. What I need:
list = [1,2,3,4,5]
chunks(list, 3, 1)
# => [[1,2,3], [2,3,4], [3,4,5]]
Although I am sure
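If pulling in a package is overkill, an overlapping-windows helper is a one-liner; a sketch (the name `chunks` and its signature are taken from the question):

```julia
# Overlapping chunks of length n, advancing by `step` each time.
chunks(xs, n, step) = [xs[i:i+n-1] for i in 1:step:length(xs)-n+1]

chunks([1, 2, 3, 4, 5], 3, 1)  # [[1,2,3], [2,3,4], [3,4,5]]
```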
On Monday, June 16, 2014 4:44:11 PM UTC-4, Stefan Karpinski wrote:
Generic functions are the reason this issue is less pressing in Julia.
Instead of Ngrams.report and Words.report or ngramsreport and wordsreport,
you can have report(x::Ngrams, ...) and report(x::Words, ...) – Ngrams,
how about this?
```
julia> a = [1,2,3,10,20,30,100,200,300]
julia> r = reshape(a,3,3)
3x3 Array{Int64,2}:
 1  10  100
 2  20  200
 3  30  300
julia> r[:,1]
3-element Array{Int64,1}:
 1
 2
 3
```
(It's not clear to me how your answer matches the OP's request...?)
The partition function in https://github.com/JuliaLang/Iterators.jl will do
this:
julia> using Iterators
julia> list = [1,2,3,4,5]
5-element Array{Int64,1}:
 1
 2
 3
 4
 5
julia> for p in partition(list, 3, 1)
Hope I understand your question :)
I think this may be related to the pwd.
I create a bin directory and touch corpus,
and then in corpus I write
```
println("the pwd is:")
println(pwd())
println("the code/clj.lj is:")
println(joinpath(pwd(), "../"))
```
Then I run the file by ` julia bin/corpus `
Actually, this was fixed in the latest version of the package, but it
wasn't tagged, so I did that. The following works with the latest tagged
version:
julia> collect(partition(list, 3, 1))
3-element Array{(Int64,Int64,Int64),1}:
(1,2,3)
(2,3,4)
(3,4,5)
On Tue, Jun 17, 2014 at 7:34 PM,
I have an expression of an M-element Array{Float64, 1}, and I have N points
I would like to evaluate this expression on and store the results into an
NxM Array{Float64,2}. I can get the output I want by passing the
expression to be evaluated (expr), the parametric variable (exprt), and a