Answered my own question after a few more hours of sweat and tears. I had
misunderstood the documentation, and what I said previously was correct.
The cluster manager must maintain the stdout IO stream and pass it to the
WorkerConfig.io field.
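For anyone who lands here later, a minimal sketch of what that looks like in a custom manager's `launch` method (the `MyManager` type and the `ssh` command are illustrative, not from the original post; the API shown is the v0.4-era `ClusterManager` interface, so check your version's docs):

```julia
import Base: launch, manage

type MyManager <: ClusterManager     # v0.4 syntax; `struct` in modern Julia
    host::AbstractString
end

function launch(manager::MyManager, params::Dict, launched::Array, c::Condition)
    cmd = detach(`ssh $(manager.host) $(params[:exename]) --worker`)
    io, pobj = open(cmd, "r")        # hold on to the worker's stdout stream
    wconfig = WorkerConfig()
    wconfig.io = io                  # the master reads the handshake (and later
    wconfig.process = pobj           # forwards the worker's output) from this stream
    push!(launched, wconfig)
    notify(c)
end

# A manage method is also required by the interface; a no-op suffices here.
manage(::MyManager, ::Integer, ::WorkerConfig, ::Symbol) = nothing
```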
Is it the case that the cluster manager must continue to redirect stdout to
the master after the master/slave handshake has been completed?
I noticed this line in the documentation:
- The cluster manager captures the stdout of each worker and makes it
available to the master process
I've launched workers on remote servers using my own cluster manager. It
appears to be configured correctly: the workers launch, and I can execute
`remotecall` on them. But when I try to run a remote `println` command I get
a broken pipe. `stdout` doesn't seem to forward to the master as I would
expect.
I want to set up a specific Julia environment on a cluster for other people
to use. I have particular packages, including some that are dev branches,
and even some that draw from my own fork of a branch to support a custom
feature for this environment. So the package setup is non-trivial and I
I meant that the MPI package doesn't create standard Julia workers, and
hence doesn't allow you to use things like `remotecall(...)` and other
Julia-specific parallel constructs. By using the MPI package you are
restricted to using the constructs supplied in the MPI package (or so it
appears).
Nice idea Erik, I appreciate it!
Though if I'm not mistaken, this locks me into MPI as the only
transport mechanism, and I want to use Julia's `remotecall` and other built-in
functionality rather than just MPI constructs.
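For context, these are the built-in constructs in question (shown with the v0.4 argument order, where the worker id comes first; later versions changed it to `remotecall(f, id, args...)`):

```julia
addprocs(2)                   # two local workers, but any ClusterManager works
r = remotecall(2, +, 1, 2)    # run 1 + 2 on worker 2; returns a remote reference
fetch(r)                      # blocks until the result, 3, is available
```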
I see that the latest documentation for Julia shows a `--bind-to` option
Hi Mauro, thanks for the response!
I get the logic behind an uninitialized array, but shouldn't such an array
return an iterable with 0 elements rather than 1 element of garbage? Why would
it be initialized to one random element and not zero?
I would expect an empty constructor to do something reasonable.
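For what it's worth, the behavior falls out of how array lengths are computed: length is the product of the dimension sizes, and the product over zero dimensions is 1, so a 0-dimensional array always holds exactly one element. A version-independent illustration (`fill` sidesteps the constructor syntax, which changed after 0.4):

```julia
x = fill(0)     # a 0-dimensional array, like Array{Int64}() but initialized
ndims(x)        # 0 dimensions...
length(x)       # ...but length 1: the empty product of dimension sizes is 1
v = Int64[]     # an actually-empty container is a 0-length vector
length(v)       # 0
```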
The example below demonstrates `remotecall_fetch` hanging when the remote
worker is under heavy CPU load.
It seems like the listener thread on the remote machine doesn't have high
enough priority to ensure that the remote call interrupts the busy while
loop.
This is of course disastrous.
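A minimal reproduction of the symptom (my own sketch, not the original poster's example; v0.4 argument order) would look something like:

```julia
addprocs(1)                        # one local worker, id 2
@spawnat 2 (while true end)        # saturate the worker with a busy loop
remotecall_fetch(2, myid)          # can hang: the worker's event loop is
                                   # starved and never services the request
```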
Is there hidden logic behind this command that I don't see, or did I bump
into a genuine bug here? v0.4.5
julia> x = Array{Int64}()
0-dimensional Array{Int64,0}:
47917330527848
julia> for i in x; println(i); end
47917330527848
julia> println(x)
47917330527848
julia> print(typeof(x))
I've been reading various discussions on here about launching a Julia
cluster.
It's hard to tell how current they are, and the documentation is a bit
lacking on how to launch clusters under environments such as Torque. I'm
using Julia 0.4.5 (i.e., no cookies)
The primary approach that is
turn
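For Torque specifically, the ClusterManagers.jl package is a common starting point (a sketch; I haven't verified the call against 0.4.5, so treat the details as an assumption):

```julia
using ClusterManagers   # provides managers for PBS/Torque, SGE, and others
addprocs_pbs(8)         # submits 8 worker jobs through qsub
```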
>> `a`. If we gave a "must write a return" error, many would just
>> reflexively add
>>
>> function set1(a)
>> return a[:] = 1
>> end
>>
>> which doesn't clarify anything. It seems like a documentation problem
>> to me. If the mea
The last line of a function is its implicit return value in Julia.
Can someone tell me why this is a great idea?
I'm realizing a serious downside to it in reading someone else's code. It's
not immediately clear to me if a use of a particular function is using the
last statement as a return value.
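To make the ambiguity concrete (a minimal illustration of the implicit-return rule):

```julia
function set1(a)
    a[:] = 1     # looks like a pure side effect, but it is the last expression
end

set1(zeros(3))   # returns 1 (an assignment evaluates to its right-hand side),
                 # which a reader can't tell apart from an intended return value
```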
My argument here is really for consistency.
The beauty of a REPL, for a new user, is being able to hack through a few
lines of code, see how they work in practice, and then use them
confidently. It really speeds learning a new package, understanding a
language feature, or just coding something
at you want... Writing to a png, inline plotting in IJulia,
> etc, and it already displays automatically when returned to the repl.
>
> On Sunday, May 22, 2016, David Parks <davidp...@gmail.com >
> wrote:
>
>> function test(); for i=1:2; plot(); end; end; test()
function test(); for i=1:2; plot(); end; end; test()
# Another favorite gotcha: plot in a loop mysteriously and silently acted
# differently than plot outside of a loop
Hey there Tom, great to hear from you!
Yeah, after the earlier conversation on Gitter I posted here since this
sounded like more of a Julia issue than Plots specifically. I hadn't
noticed your response there.
My main experience with a REPL is Matlab, so I come with that bias. I would
argue
The following examples will fail to open a plot window in the REPL (notably
without warning or error, making it devilishly hard to troubleshoot).
using Plots  # This is common to all plotting platforms as far as I know
function test(); plot(); print(); end
test()
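The usual workaround is an explicit `display` call, since automatic display only happens for values returned to the REPL prompt, not for values produced inside a function body (sketch assumes Plots.jl is installed):

```julia
using Plots

function test()
    p = plot(rand(10))   # inside a function, the plot is built but never shown
    display(p)           # explicitly push it to the active display (window/inline)
end
test()
```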
One minor correction: it's not looking in my home directory, it's looking
in my current directory for the script file. The point remains: it does not
use LOAD_PATH to find the script file I want to run. So to run a script I
need to specify the full path to the julia script file on the command line.
On Wednesday, May 18, 2016 at 6:46:12 PM UTC-7, Tony Kelman wrote:
>
> juliarc doesn't get loaded when you run Julia in script mode unless you
> add the -F (--startup-file=yes) flag.
>
>
> On Wednesday, May 18, 2016 at 6:34:20 PM UTC-7, David Parks wrote:
>>
>> When run
When running a script from the command line, Julia seems to search for
the file only in the `HOME` directory.
It doesn't appear to search the `LOAD_PATH` directories.
Therefore any script not in `HOME` (which is probably every script) would
need to be referenced by its full path.
Am I missing something?
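In other words (my own summary of the behavior, not from the thread): the script argument is treated as a plain path, so the fix is to pass a path that resolves from wherever you invoke `julia` (the `analysis.jl` name is hypothetical):

```shell
julia analysis.jl              # found only if analysis.jl is in the current directory
julia /full/path/analysis.jl   # otherwise spell out the path; LOAD_PATH is not consulted
```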
I'm a few weeks into Julia and excited and motivated to learn and be as
efficient as possible. I'm sure I'm not alone. I know my way around now,
but am I as efficient as I can be?
What haven't I tried? What haven't I seen? What haven't I asked?
For those of you who have been around longer,
I'm just getting started in this area myself, so this is not personal
experience (yet), but for an example of what's been done you might want to
look through the Mocha code which implements a deep learning library on the
GPU. It's probably a great example
On Thursday, April 28, 2016 at 1:13:56
Never mind, I ended up solving the problem, though not quite sure where the
issue was.
After numerous install/reinstall attempts I installed the Atom package in
the julia command line successfully, and then opened Atom successfully. It
showed INFO: Precompiling module Atom... and succeeded.
And to be complete, I completely deleted Atom and re-installed it, and the
uber-juno package. First use of the console goes through the setup, then
hits a roadblock at "Precompiling module Atom"
INFO: Precompiling module Atom...
Please submit a bug report with steps to reproduce this fault,
I'm setting up the Juno IDE and Julia for the first time. Windows 10.
Julia runs from the command line ok.
When I start Atom it starts installing packages in Julia and runs into
errors.
I've opened up Julia's command prompt, run `Pkg.rm("Atom")`, shut down Atom,
and restarted it.
Again I saw errors