I do not know the answer, but you may want to look at the Amazon cluster
generator (cfncluster). Unlike StarCluster, it is under constant
development, and it may be easier to use.
https://github.com/awslabs/cfncluster
On Friday, September 9, 2016 at 7:21:33 PM UTC-5, Alexandros Fakos wrote:
>
> Hi,
>
> I
In my experience it is helpful to have a Fortran version to compare speeds
against when learning Julia.
That would be easy for your problem.
It is easy to write functional Julia code that looks okay but is very
slow. The best way for me to catch that was to have
a basis of comparison.
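A minimal sketch of the kind of timing comparison I mean. The kernel here is made up for illustration; the point is that the same loop is trivial to write in Fortran, so the two timings can be compared directly:

```julia
# Time a simple kernel in Julia so the result can be compared against
# the same loop written in Fortran. The axpy! kernel is illustrative.
function axpy!(y, a, x)
    @inbounds for i in eachindex(x)
        y[i] += a * x[i]
    end
    return y
end

n = 10^6
x = rand(n); y = zeros(n)
axpy!(y, 2.0, x)          # warm up: the first call includes compilation
@time axpy!(y, 2.0, x)    # the second call measures steady-state speed
```

If the Julia timing is far from the Fortran one, that usually points at a type-stability or allocation problem in the Julia version.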
On Monday, S
I ran into this problem before with punctuation.
https://github.com/JuliaLang/julia/commit/b418a03529a9afec07c5aa032a9124b03cef912e#diff-91ec6806d45dd62d07012f6a018b151f
Maybe this should be addressed again.
On Wednesday, September 23, 2015 at 5:32:57 PM UTC-5, Alex Copeland wrote:
>
>
>
> Hi,
Thanks for the detailed answers. They were very helpful.
This sort of thing, where you have data -> calculated abstraction ->
recovered data, is so
frequent that having a generic way of building this kind of tool will be
hugely valuable.
Recovering data by text is useful, but there are times wher
I was thinking of the second option. It would be great to be able to
hot-wire the Gadfly part and read
in plots with pointers to previously rendered images. I do not understand
how difficult it would be
to mimic Gadfly output from other sources. How tricky was that part of
this work? I would be
+100 This has loads of possibilities. Great work.
I am guessing it would be possible, but how difficult would it
be to read in a JPG and explore it in a similar fashion?
On Monday, September 14, 2015 at 10:15:04 AM UTC-5, Tim Holy wrote:
>
> I'm pleased to announce the availability of the Immer
This looks very impressive. I assume it works with multi-dimensional
functions f(x, y, z)?
It also looks very fast. What are its limitations? Where would you
still use analytic derivatives?
That is great news. Well done.
This is great progress.
Similarly, is there a way to benchmark different versions of the
code?
Automating this will be very helpful.
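One lightweight way to do this by hand, as a sketch: tag each timing with the current git commit and append it to a log, so runs on different versions of the code can be compared later. The `log_benchmark` name and the `bench.log` file are my own inventions, and relying on git being available is an assumption:

```julia
# Sketch: record a timing together with the current git commit so that
# runs on different versions of the code can be compared afterwards.
function log_benchmark(f, args...; logfile="bench.log")
    f(args...)                       # warm up (compilation)
    t = @elapsed f(args...)
    sha = try
        readchomp(`git rev-parse --short HEAD`)
    catch
        "unknown"                    # not in a git repo, or git missing
    end
    open(logfile, "a") do io
        println(io, "$sha\t$t")
    end
    return t
end

t = log_benchmark(sum, rand(10^6))
```

Diffing the log between commits then shows regressions; a real tool would also record the Julia version and machine.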
On Thursday, July 30, 2015 at 7:20:06 AM UTC-5, Tony Kelman wrote:
>
> Hey folks, an announcement for package authors and users who care about
> testing:
>
> We've had suppo
I implemented a program in both Fortran and Julia for a time comparison
when learning the language.
This was very helpful for finding problems in how I was learning Julia.
Maybe I did not read carefully enough,
but I would compile the Fortran with the Intel compilers (not MKL) instead
of gcc as another
What you are doing makes sense. Starting from multiple starting points is
important.
I am curious why you don't just run 20 different 1-processor jobs
instead of bothering with the parallelism?
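If you do stay with a single parallel program, mapping the independent starting points over workers with `pmap` is one sketch of the idea. Here `local_search` is a placeholder for whatever single-start optimizer is actually used, and the toy objective is made up:

```julia
# Sketch: distribute independent optimization runs over worker processes.
# local_search is a stand-in for a real single-start optimizer.
using Distributed
addprocs(4)

@everywhere function local_search(x0)
    # placeholder objective: just evaluate sum of squares at the start
    return sum(abs2, x0)
end

starts = [rand(3) for _ in 1:20]        # 20 independent starting points
results = pmap(local_search, starts)    # one run per starting point
best = minimum(results)
```

Because the runs share nothing, separate 1-processor jobs give the same result; `pmap` mainly saves you the job-submission bookkeeping.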
On Saturday, July 26, 2014 11:22:07 AM UTC-5, Iain Dunning wrote:
>
> The idea is to call the
1. Download the source and unzip it.
2. Everything is under julia/doc.
3. Run make for the details.
On Tuesday, July 22, 2014 10:56:56 PM UTC-5, K leo wrote:
>
>
>
+1
On Friday, July 18, 2014 3:40:14 PM UTC-5, Viral Shah wrote:
>
> I think that most new users are unlikely to know about apropos. Perhaps we
> should put it in the julia banner.
>
> We can say something like:
> Type "help()" for function usage or "apropos()" to search the
> documentation.
>
Julia is not as performant with anonymous functions and list
comprehensions.
The compiler has a harder time with the optimization step. This is not a
surprise and is
known to the language designers.
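A small illustration of what I mean, comparing a hand-written loop against the same reduction written with an anonymous function. The function names are my own; the size of the gap depends heavily on the Julia version (on recent versions the two compile to similar code):

```julia
# A plain loop versus the same reduction via an anonymous function.
# On older Julia versions the anonymous-function form was much slower.
function sum_squares_loop(x)
    s = 0.0
    for v in x
        s += v * v
    end
    return s
end

sum_squares_anon(x) = mapreduce(v -> v * v, +, x)

x = rand(10^5)
sum_squares_loop(x); sum_squares_anon(x)   # warm up (compilation)
@time sum_squares_loop(x)
@time sum_squares_anon(x)
```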
On Saturday, July 5, 2014 8:01:46 PM UTC-5, james.dill...@gmail.com