Re: [julia-users] How to determine which functions to overload, or, who is at the bottom of the function chain?

2016-10-17 Thread Patrick Belliveau
I would add the general comment that in julia 0.5 you can use Gallium to 
step into a call to a base function and explore what's actually being 
called. For the .< example, from the julia prompt:

using Gallium
@enter 0.4 .< 0.5

@enter 0.4 .< 0.5 
In operators.jl:159 
158 .!=(x::Number,y::Number) = x != y 
159 .<( x::Real,y::Real) = x < y 
160 .<=(x::Real,y::Real) = x <= y 
161 const .≤ = .<= 
 
About to run: (<)(0.4,0.5)

For your problem, checking the documentation seems like a better place to 
start than firing up the debugger but it's another good tool to have in the 
toolbox.
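As a quick illustration of the bottom-of-the-chain methods discussed below in this thread, here is a hedged sketch with a hypothetical wrapper type (use `immutable` instead of `struct` on Julia 0.4/0.5):

```julia
# Hypothetical type, purely for illustration.
struct Celsius
    deg::Float64
end

# Overload the bottom-of-the-chain methods...
Base.isless(a::Celsius, b::Celsius) = isless(a.deg, b.deg)
Base.show(io::IO, t::Celsius) = print(io, t.deg, " C")

# ...and the derived behaviour follows automatically:
@assert Celsius(10.0) < Celsius(20.0)       # < falls back to isless
@assert string(Celsius(10.0)) == "10.0 C"   # string -> print -> show
```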

Patrick

On Sunday, October 16, 2016 at 4:02:21 PM UTC-7, Colin Bowers wrote:
>
> This was a very helpful answer. Thank you very much for responding.
>
> Cheers,
>
> Colin
>
> On 16 October 2016 at 20:23, Milan Bouchet-Valat wrote:
>
>> On Saturday, 15 October 2016 at 20:36 -0700, colint...@gmail.com wrote:
>> > Hi all,
>> >
>> > Twice now I've thought I had overloaded the appropriate functions for
>> > a new type, only to observe apparent inconsistencies in the way the
>> > new type behaves. Of course, there were no inconsistencies. Instead,
>> > the observed behaviour stemmed from overloading a function that is
>> > not at the bottom of the function chain. The two examples where I
>> > stuffed up were:
>> >
>> > 1) overloading Base.< instead of overloading Base.isless, and
>> In this case, the help is quite explicit:
>> help?> <
>> search: < <= << <: .< .<= .<<
>>
>>   <(x, y)
>>
>>   Less-than comparison operator. New numeric types should implement this
>>   function for two arguments of the new type. Because of the behavior of
>>   floating-point NaN values, < implements a partial order. Types with a
>>   canonical partial order should implement <, and types with a canonical 
>> total
>>   order should implement isless.
>>
>> > 2) overloading Base.string(x) instead of overloading Base.show(io,
>> > x).
>> This one is a bit trickier, since the printing code is complex, and not
>> completely stabilized yet. Though the help still gives some hints:
>>
>> help?> string
>> search: string String stringmime Cstring Cwstring RevString RepString
>> readstring
>>
>>   string(xs...)
>>
>>   Create a string from any values using the print function.
>>
>> So the more fundamental function to override is print(). The help for
>> print() says it falls back to show() if there's no print() method for a
>> given type. So if you don't have a special need for print(), override
>> show().
>>
>> > My question is this: What is the community's best solution/resource
>> > for knowing which functions are at the bottom of the chain and thus
>> > are the ones that need to be overloaded for a new type?
>> In general, look at the help for a function. If there's no answer
>> (which is most likely a gap in the documentation that should be
>> reported), look for it in the manual. The latter can always be useful,
>> even if the help already gives a reply.
>>
>> But documentation is perfectible, so do not hesitate to ask questions
>> and suggest enhancements (ideally via pull requests when you have found
>> out how it works).
>>
>>
>> Regards
>>
>>
>> > Cheers and thanks in advance to all responders,
>> >
>> > Colin
>>
>
>

[julia-users] Populating RemoteChannel on julia 0.5

2016-09-26 Thread Patrick Belliveau
Hi all,
   I'm updating some parallel Julia code to support 0.5. I see that 
the RemoteRef type is no longer supported. It seems I need to replace these 
with a mix of Future and RemoteChannel. The use of Future is clear enough 
but I'm unclear on the best way to use RemoteChannels, in the following 
context: Suppose I want to initialize an instance of some complicated 
composite type MyBigType on a remote worker. Later on I want to modify that 
instance, use it for some computations and continue storing it for another 
later use. On julia 0.4 I can do

  # R is a RemoteRef{Channel{Any}}
  R = remotecall(pid, constructMyBigType, arguments)

R can then be a reusable storage location, i.e. one can do things like
  Rval = take!(R)
  Rval = computeSomething(Rval) # modifies some fields of Rval, leaves others untouched
  put!(R, Rval)

That doesn't work on 0.5, since the RemoteRef type is gone. So far, my 
solution is to first send all arguments needed by the constructor function 
to the remote worker, then define a helper function that calls the 
constructor then puts it in a channel. The helper function is

@everywhere begin 
  function remoteTypeConstructor(arguments)
     obj = constructMyBigType(arguments)
     chan = Channel{MyBigType}(1)
     put!(chan, obj)
     return chan  # RemoteChannel expects the constructor to return a channel
  end
end


Then from the master Julia process I use the following two lines of code to 
define the remote channel, which is a reference to the MyBigType instance 
stored on worker pid. 
sendToWorker(arguments,pid)

R = RemoteChannel(()->remoteTypeConstructor(arguments), pid)

Once I've defined R I can work with it using take!() and put!() just like 
in 0.4. The type constructor function here could possibly be 
computationally intensive and the resulting object could occupy a lot of 
memory so I'd like to have the constructor function run on the remote 
worker. The approach I've just described seems to work, but it's pretty 
awkward and could become a pain to manage if I have to write one of these 
helper functions for every new type of data I want to compute and store 
using a RemoteChannel. Is there a better way to do what I'm trying to do?
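One way to avoid a helper per type is a generic constructor wrapper. The following is a hypothetical sketch (`remote_construct` is my name, not an API; on Julia 1.x you need `using Distributed`, which was part of Base in 0.5):

```julia
using Distributed  # part of Base in Julia 0.5; needed explicitly on 1.x

# Hypothetical generic helper: build any object on worker `pid` and park it
# in a one-slot channel, so take!/put! work as in the 0.4 RemoteRef pattern.
function remote_construct(f, pid, args...)
    RemoteChannel(pid) do
        chan = Channel{Any}(1)  # Channel{T}(1) if the result type is known
        put!(chan, f(args...))
        chan
    end
end

rc = remote_construct(+, 1, 1, 2)  # pid 1 here; use a worker pid in practice
x = take!(rc)                      # 3
put!(rc, x + 10)                   # the slot is reusable
```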

Thanks very much, Patrick





[julia-users] Re: Juno workspace variable display.

2016-09-12 Thread Patrick Belliveau
Works for me too. Thanks Uwe! I'll put in a feature request to have it 
added to the menu. Juno's getting really good.

Patrick

On Monday, September 12, 2016 at 2:46:31 PM UTC-7, Patrick Belliveau wrote:
>
> Hi all,
>   In his JuliaCon 2016 talk 
> <https://www.youtube.com/watch?v=yDwUL3aRSRc> on Juno's new graphical 
> debugging capabilities, Mike Innes also showed off a workspace pane in Juno 
> that displays currently defined variable values from an interactive Julia 
> session. My impression from the video is that this feature should be 
> available in the latest version of Juno but I can't get it to show up. As 
> far as I can tell, the feature is not included in my version of Juno. Am I 
> missing something or has this functionality not been released yet? I'm on 
> linux, running 
>
> Julia 0.5.0-rc4+0
> atom 1.9.9
> master branches of Atom.jl,CodeTools.jl,Juno.jl checked out and up to date
> ink 0.5.1
> julia-client 0.5.2
> language-julia 0.6
> uber-juno 0.1.1
>
> Thanks, Patrick
>
> P.S. I've just started using Juno and in general I'm really liking it, 
> especially the debugging gui features. Great work Juno team!
>


[julia-users] Juno workspace variable display.

2016-09-12 Thread Patrick Belliveau
Hi all,
  In his JuliaCon 2016 talk 
<https://www.youtube.com/watch?v=yDwUL3aRSRc> on Juno's new graphical 
debugging capabilities, Mike Innes also showed off a workspace pane in Juno 
that displays currently defined variable values from an interactive Julia 
session. My impression from the video is that this feature should be 
available in the latest version of Juno but I can't get it to show up. As 
far as I can tell, the feature is not included in my version of Juno. Am I 
missing something or has this functionality not been released yet? I'm on 
linux, running 

Julia 0.5.0-rc4+0
atom 1.9.9
master branches of Atom.jl,CodeTools.jl,Juno.jl checked out and up to date
ink 0.5.1
julia-client 0.5.2
language-julia 0.6
uber-juno 0.1.1

Thanks, Patrick

P.S. I've just started using Juno and in general I'm really liking it, 
especially the debugging gui features. Great work Juno team!


[julia-users] Re: "WARNING: replacing module" when invoking function with @everywhere - but module not available otherwise

2016-09-09 Thread Patrick Belliveau
Hi,
 Running your code effectively executes

@everywhere using HttpServer

This is known to generate those method redefinition warnings. The behaviour 
of using in a parallel environment is a known unresolved bug. It seems like 
the best syntax to use right now is

import HttpServer #Executed only on master process
@everywhere using HttpServer

For a more detailed discussion and links to the relevant issues on GitHub, 
see this thread.

On Friday, September 9, 2016 at 10:04:53 AM UTC-7, Adrian Salceanu wrote:
>
> Hi, 
>
> I'm fumbling around with a little script with the end goal of running 
> HttpServer handlers on multiple ports, in parallel, with each handler on a 
> separate worker. 
>
> The code looks like this: 
>
> # parallel_http.jl
> using HttpServer
>
>
> function serve(port::Int)
>   http = HttpHandler() do req::Request, res::Response
>   Dates.now() |> string
>   end
>
>
>   server = Server(http)
>   run(server, port)
> end
>
>
> function run_in_parallel()
>   servers = Vector{RemoteRef}()
>   for w in workers()
> println("About to start server on $(8000 + w)")
> push!(servers, @spawn serve(8000 + w))
>   end
>
>
>   servers
> end
>
>
> And in the REPL, running with julia -p 2: 
>
> julia> @everywhere include("parallel_http.jl")
> WARNING: replacing module HttpServer
> WARNING: Method definition write(Base.IO, HttpCommon.Response) in module 
> HttpServer at /Users/adrian/.julia/v0.4/HttpServer/src/HttpServer.jl:178 
> overwritten in module HttpServer at /Users/adrian/.julia/v0.4/HttpServer/
> src/HttpServer.jl:178.
> WARNING: replacing module HttpServer
> WARNING: Method definition write(Base.IO, HttpCommon.Response) in module 
> HttpServer at /Users/adrian/.julia/v0.4/HttpServer/src/HttpServer.jl:178 
> overwritten in module HttpServer at /Users/adrian/.julia/v0.4/HttpServer/
> src/HttpServer.jl:178.
>
>
> julia> servers = run_in_parallel()
> About to start server on 8002
> About to start server on 8003
> 2-element Array{RemoteRef{T<:AbstractChannel},1}:
>   From worker 3: Listening on 0.0.0.0:8003...
>  From worker 2: Listening on 0.0.0.0:8002...
> RemoteRef{Channel{Any}}(2,1,17)
>  RemoteRef{Channel{Any}}(3,1,18)
>
>
> The WARNING seems to imply that I'm doing something wrong - but if I don't 
> run "using" on each worker, to avoid the warning, the module is not 
> available to the worker. Am i missing something? Is there a better way to 
> do this? Thanks! 
>
>

[julia-users] Re: caching the pivot ordering for a sparse cholesky factorization?

2016-07-08 Thread Patrick Belliveau
Hi,
  I'm not sure whether it's possible to access the reordering using the 
built-in Julia interface to the SuiteSparse library. If you have access to 
the Pardiso solver (or MKL, which includes a version of Pardiso) you can do 
so using the Pardiso.jl package. At some point there was an interface to 
the MUMPS package that was aiming to expose the full MUMPS API (thus 
allowing access to reordering/analysis phase of the solver) but I couldn't 
find it using a quick google search. The two MUMPS julia interfaces I know 
of off hand (https://github.com/JuliaSparse/MUMPS.jl and 
https://github.com/JuliaSmoothOptimizers/MUMPS.jl) don't allow you to save 
the results of the analysis phase but if you know a little Fortran (for the 
JuliaSparse interface) or C (for the JuliaSmoothOptimizers version) you 
could add that functionality and make a PR.

Also, fwiw, in my experience with both MUMPS and Pardiso on moderate 
problem sizes (10^5, maybe 5 × 10^5 unknowns) the analysis phase is quite 
quick and for some reason there's significant overhead associated with 
calling the analysis and factorization phases of the solvers separately. 
Enough overhead that it's been faster to make a new combined 
analyze/factorize call for each new matrix rather than performing the 
analysis once and then just calling factorize a bunch of times. That 
experience comes from symmetric indefinite matrices. 
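For reference, the separate-phase pattern with Pardiso.jl looks roughly like the sketch below (phase constants and setter names as given in the Pardiso.jl README; untested here, and it needs MKL or a Pardiso license, so treat every call as an assumption to verify against the package docs):

```julia
using Pardiso, SparseArrays, LinearAlgebra  # stdlib modules on Julia 1.x

ps = MKLPardisoSolver()
set_matrixtype!(ps, Pardiso.REAL_SYM_INDEF)

# Tiny symmetric indefinite example. Note: MKL's symmetric formats expect
# only the upper triangle of A; see the Pardiso.jl documentation.
A = sparse([2.0 1.0; 1.0 -2.0])
b = [1.0, 0.0]
x = similar(b)

set_phase!(ps, Pardiso.ANALYSIS)          # reordering/symbolic analysis, done once
pardiso(ps, x, A, b)

for shift in (0.0, 0.5)                   # same sparsity pattern each time
    Ak = A + shift*I
    set_phase!(ps, Pardiso.NUM_FACT)      # numeric factorization only
    pardiso(ps, x, Ak, b)
    set_phase!(ps, Pardiso.SOLVE_ITERATIVE_REFINE)
    pardiso(ps, x, Ak, b)
end
```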

Patrick

On Friday, July 8, 2016 at 1:40:45 AM UTC-7, Gabriel Goh wrote:
>
> Hey,
>
> I have a sequence of sparse matrix factorizations I need to do, each one a 
> different matrix but with the same sparsity structure. Is there a way I can 
> save the AMD (or any other) ordering that sparsesuite returns, it does not 
> need to be recomputed each time?
>
> Thanks a lot for the help!
>
> Gabe
>


Re: [julia-users] Linux Build Error

2016-06-21 Thread Patrick Belliveau
Hmm,
   I'm sorry, I'm not sure what's going on there. It's possible 
that this is a bug in the build system or that I'm wrong and you do in fact 
need gcc 4.7 when building with intel. I thought I'd done a purely intel 
build before but turns out I used the intel fortran compiler and gcc so I 
can't confirm that pure intel works.

On Tuesday, June 21, 2016 at 2:50:47 PM UTC-7, AB wrote:
>
> Thanks again for your comment.  
>
> I followed those instructions, but I am not sure I did everything right 
> because it seems like it still is looking for GCC during the install. 
> 'make' terminated with this set of errors.  (On the 9th line it appears to 
> be looking for gcc.)
>
> checking for C compiler default output file name... a.out
> checking whether the C compiler works... yes
> checking whether we are cross compiling... no
> checking for suffix of executables...
> checking for suffix of object files... o
> checking whether we are using the GNU C compiler... yes
> checking whether icc  accepts -g... yes
> checking for icc  option to accept ISO C89... none needed
> checking whether we are using the GNU C++ compiler... yes
> checking whether icpc  accepts -g... yes
> checking how to run the C preprocessor... icc  -E
> checking whether GCC or Clang is our host compiler... gcc
> checking build system type... x86_64-redhat-linux-gnu
> checking host system type... x86_64-redhat-linux-gnu
> checking target system type... x86_64-redhat-linux-gnu
> checking type of operating system we're going to host on... Linux
> checking type of operating system we're going to target... Linux
> checking target architecture... x86_64
> checking whether GCC is new enough... no
> configure: error:
> The selected GCC C++ compiler is not new enough to build LLVM. Please 
> upgrade
> to GCC 4.7. You may pass --disable-compiler-version-checks to configure to
> bypass these sanity checks.
> make[1]: *** [build/llvm-3.7.1/build_Release/config.status] Error 1
> make: *** [julia-deps] Error 2
>
> At other points in the install it seemed like it was using ICC, not GCC. 
>  For example, the second to last line here:
>
> checking for a BSD-compatible install... /usr/bin/install -c
> checking whether build environment is sane... yes
> checking for a thread-safe mkdir -p... /bin/mkdir -p
> checking for gawk... gawk
> checking whether make sets $(MAKE)... yes
> checking whether make supports nested variables... yes
> checking for style of include used by make... GNU
> checking for gcc... icc
> checking whether the C compiler works... yes
>
> The Make.user file contains the lines:
>
> USEICC = 1
> USEIFC = 1
> USE_INTEL_MKL = 1
> USE_INTEL_MKL_FFT = 1
> USE_INTEL_LIBM = 1
>
>
> I also ran this: 
>
> source /opt/apps/intel/15/composer_xe_2015.2.164/bin/compilervars.sh 
> intel64 
>
>
> This was from a fresh clone.  Is there something else I should try?
>
> Thanks again!
>
> AB
>
>
> On Tuesday, June 21, 2016 at 4:10:48 PM UTC-5, Patrick Belliveau wrote:
>>
>> Yep, using that version of gcc definitely won't work. However, if you're 
>> using the intel compilers then you don't need gcc. If you haven't seen it, 
>> instructions for using the intel compilers are here 
>> <https://github.com/JuliaLang/julia#intel-compilers-and-math-kernel-library-mkl>
>> .
>>
>> Cheers, Patrick
>>
>> On Tuesday, June 21, 2016 at 1:57:52 PM UTC-7, AB wrote:
>>>
>>> Thanks for the feedback.  I have been told that the version of gcc 
>>> available on this system is a bit old.  
>>>
>>> gcc --version returns: 
>>>
>>> gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16)
>>>
>>> I think version 4.7 is required for julia.  The administrators of the 
>>> cluster would like me to use the Intel compilers, which I am trying to 
>>> figure out, but I will ask if I can update gcc too.
>>>
>>> Thanks again!
>>>
>>> AB 
>>>
>>>
>>>
>>> On Tuesday, June 21, 2016 at 12:01:21 PM UTC-5, Patrick Belliveau wrote:
>>>>
>>>> It's not clear to me what's going on here but as a first attempt at 
>>>> troubleshooting, are you sure that you have up to date installations of 
>>>> all 
>>>> the required build tools and external libraries required to build Julia 
>>>> from source? In particular, you should check that make is using recent 
>>>> enough versions of gcc and g++. See the Julia github page for required 
>>>> versions. Even if you have a satisfactory version of gcc installed on your 
>>>> system, make may be using an older default version if you haven't specified 
>>>> your C compiler in Make.user.

Re: [julia-users] Linux Build Error

2016-06-21 Thread Patrick Belliveau
Yep, using that version of gcc definitely won't work. However, if you're 
using the intel compilers then you don't need gcc. If you haven't seen it, 
instructions for using the intel compilers are here 
<https://github.com/JuliaLang/julia#intel-compilers-and-math-kernel-library-mkl>
.

Cheers, Patrick

On Tuesday, June 21, 2016 at 1:57:52 PM UTC-7, AB wrote:
>
> Thanks for the feedback.  I have been told that the version of gcc 
> available on this system is a bit old.  
>
> gcc --version returns: 
>
> gcc (GCC) 4.4.7 20120313 (Red Hat 4.4.7-16)
>
> I think version 4.7 is required for julia.  The administrators of the 
> cluster would like me to use the Intel compilers, which I am trying to 
> figure out, but I will ask if I can update gcc too.
>
> Thanks again!
>
> AB 
>
>
>
> On Tuesday, June 21, 2016 at 12:01:21 PM UTC-5, Patrick Belliveau wrote:
>>
>> It's not clear to me what's going on here but as a first attempt at 
>> troubleshooting, are you sure that you have up to date installations of all 
>> the required build tools and external libraries required to build Julia 
>> from source? In particular, you should check that make is using recent 
>> enough versions of gcc and g++. See the Julia github page for required 
>> versions. Even if you have a satisfactory version of gcc installed on your 
>> system, make may be using an older default version if you haven't specified 
>> your c compiler in Make.user. You can do that by adding the following two 
>> lines to Make.user:
>> CC=/path_to_gcc_binary
>> CXX=/path_to_g++_binary
>>
>>
>> On Monday, June 20, 2016 at 5:26:19 PM UTC-7, AB wrote:
>>>
>>> Sorry!  I thought that was the relevant part.  I just ran make again. It 
>>> returned this:
>>>
>>> In file included from src/s_fmax.c:32:
>>> src/fpmath.h:105: error: duplicate member ‘manl’
>>> In file included from src/math_private.h:26,
>>>  from src/s_fmax.c:33:
>>> src/math_private_openbsd.h:54: error: conflicting types for 
>>> ‘ieee_quad_shape_type’
>>> src/math_private_openbsd.h:35: note: previous declaration of 
>>> ‘ieee_quad_shape_type’ was here
>>> src/math_private_openbsd.h:141: error: conflicting types for 
>>> ‘ieee_extended_shape_type’
>>> src/math_private_openbsd.h:123: note: previous declaration of 
>>> ‘ieee_extended_shape_type’ was here
>>> In file included from src/s_fmax.c:33:
>>> src/math_private.h:78: error: conflicting types for 
>>> ‘ieee_double_shape_type’
>>> src/math_private.h:60: note: previous declaration of 
>>> ‘ieee_double_shape_type’ was here
>>> make[2]: *** [src/s_fmax.c.o] Error 1
>>> make[1]: *** 
>>> [build/openlibm-e2fc5dd2f86f1e1dc47e8fa153b6a7b776d53ab5/libopenlibm.so] 
>>> Error 2
>>> make: *** [julia-deps] Error 2
>>>
>>> On Monday, June 20, 2016 at 7:05:38 PM UTC-5, Yichao Yu wrote:
>>>>
>>>> On Mon, Jun 20, 2016 at 7:50 PM, AB <aus.be...@gmail.com> wrote: 
>>>> > Hello - 
>>>> > 
>>>> > I'm trying to install Julia on a machine at my university. 
>>>> > 
>>>> > When I run "make" the process terminates with the following errors: 
>>>> > 
>>>> > make[2]: *** [src/s_fmax.c.o] Error 1 
>>>> > make[1]: *** 
>>>> > 
>>>> [build/openlibm-e2fc5dd2f86f1e1dc47e8fa153b6a7b776d53ab5/libopenlibm.so] 
>>>> > Error 2 
>>>> > make: *** [julia-deps] Error 2 
>>>>
>>>> FWIW, you need to copy in the actual error for anyone to help. The 
>>>> output you pasted is merely `make` complaining that something went 
>>>> wrong, but not the compile command that actually went wrong. 
>>>>
>>>> > 
>>>> > Is this a problem with a dependency? 
>>>> > 
>>>> > The version information of the machine is: 
>>>> > 
>>>> > 
>>>> LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
>>>>  
>>>>
>>>> > 
>>>> > and: 
>>>> > 
>>>> > Linux login2.stampede.tacc.utexas.edu 2.6.32-431.17.1.el6.x86_64 #1 
>>>> SMP Wed 
>>>> > May 7 23:32:49 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux 
>>>> > 
>>>> > I appreciate any help! 
>>>> > 
>>>> > Thanks, 
>>>> > 
>>>> > ABB 
>>>>
>>>

Re: [julia-users] Linux Build Error

2016-06-21 Thread Patrick Belliveau
It's not clear to me what's going on here but as a first attempt at 
troubleshooting, are you sure that you have up to date installations of all 
the required build tools and external libraries required to build Julia 
from source? In particular, you should check that make is using recent 
enough versions of gcc and g++. See the Julia github page for required 
versions. Even if you have a satisfactory version of gcc installed on your 
system, make may be using an older default version if you haven't specified 
your c compiler in Make.user. You can do that by adding the following two 
lines to Make.user:
CC=/path_to_gcc_binary
CXX=/path_to_g++_binary


On Monday, June 20, 2016 at 5:26:19 PM UTC-7, AB wrote:
>
> Sorry!  I thought that was the relevant part.  I just ran make again. It 
> returned this:
>
> In file included from src/s_fmax.c:32:
> src/fpmath.h:105: error: duplicate member ‘manl’
> In file included from src/math_private.h:26,
>  from src/s_fmax.c:33:
> src/math_private_openbsd.h:54: error: conflicting types for 
> ‘ieee_quad_shape_type’
> src/math_private_openbsd.h:35: note: previous declaration of 
> ‘ieee_quad_shape_type’ was here
> src/math_private_openbsd.h:141: error: conflicting types for 
> ‘ieee_extended_shape_type’
> src/math_private_openbsd.h:123: note: previous declaration of 
> ‘ieee_extended_shape_type’ was here
> In file included from src/s_fmax.c:33:
> src/math_private.h:78: error: conflicting types for 
> ‘ieee_double_shape_type’
> src/math_private.h:60: note: previous declaration of 
> ‘ieee_double_shape_type’ was here
> make[2]: *** [src/s_fmax.c.o] Error 1
> make[1]: *** 
> [build/openlibm-e2fc5dd2f86f1e1dc47e8fa153b6a7b776d53ab5/libopenlibm.so] 
> Error 2
> make: *** [julia-deps] Error 2
>
> On Monday, June 20, 2016 at 7:05:38 PM UTC-5, Yichao Yu wrote:
>>
>> On Mon, Jun 20, 2016 at 7:50 PM, AB  wrote: 
>> > Hello - 
>> > 
>> > I'm trying to install Julia on a machine at my university. 
>> > 
>> > When I run "make" the process terminates with the following errors: 
>> > 
>> > make[2]: *** [src/s_fmax.c.o] Error 1 
>> > make[1]: *** 
>> > 
>> [build/openlibm-e2fc5dd2f86f1e1dc47e8fa153b6a7b776d53ab5/libopenlibm.so] 
>> > Error 2 
>> > make: *** [julia-deps] Error 2 
>>
>> FWIW, you need to copy in the actual error for anyone to help. The 
>> output you pasted is merely `make` complaining that something went 
>> wrong, but not the compile command that actually went wrong. 
>>
>> > 
>> > Is this a problem with a dependency? 
>> > 
>> > The version information of the machine is: 
>> > 
>> > 
>> LSB_VERSION=base-4.0-amd64:base-4.0-noarch:core-4.0-amd64:core-4.0-noarch:graphics-4.0-amd64:graphics-4.0-noarch:printing-4.0-amd64:printing-4.0-noarch
>>  
>>
>> > 
>> > and: 
>> > 
>> > Linux login2.stampede.tacc.utexas.edu 2.6.32-431.17.1.el6.x86_64 #1 
>> SMP Wed 
>> > May 7 23:32:49 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux 
>> > 
>> > I appreciate any help! 
>> > 
>> > Thanks, 
>> > 
>> > ABB 
>>
>

[julia-users] Re: Pivoting when inverting a sparse matrix

2016-06-13 Thread Patrick Belliveau
Julia doesn't seem to have a built-in sparse direct solver specifically for 
symmetric indefinite matrices. However, as Tony alluded to, there are Julia 
packages that interface with the MUMPS and Pardiso libraries, which do pivoting 
for symmetric indefinite matrices while taking advantage of symmetry. Using 
Pardiso requires a paid license or MKL, but MUMPS is free and open source.

[julia-users] Unable to launch remote workers

2016-05-31 Thread Patrick Belliveau
Hi all,
   I normally launch multiple julia workers across multiple nodes 
of my research group's cluster from the linux shell prompt using 

julia --machinefile <machine_file>

where each line of the machine file contains the host name of a cluster 
node accessible via
   
   ssh <hostname>

This worked like a charm until the cluster was recently physically moved. It 
runs CentOS 5.11. There weren't any upgrades to the cluster after the 
move, but I'm guessing some security setting may have changed. Now when I 
execute julia --machinefile, I get the error

   Master process (id 1) could not connect within 60.0 seconds.

I've checked that I can ssh to all the hosts in the machine file and launch 
julia locally from there. I also tried adding remote workers via 
addprocs(). This throws a different error:

   ERROR: connect: host is unreachable (EHOSTUNREACH)

My julia versioninfo output is:

Julia Version 0.4.6-pre+7
Commit 273b487* (2016-03-28 14:46 UTC)
Platform Info:
  System: Linux (x86_64-unknown-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU   E5410  @ 2.33GHz
  WORD_SIZE: 64
  BLAS: libmkl_rt
  LAPACK: libmkl_rt
  LIBM: libimf
  LLVM: libLLVM-3.3

I'm not sure if this is purely a cluster sys admin problem or if I might 
have to change something in Julia to work with the cluster. If anyone has 
any tips or has run into this issue before, help would be greatly 
appreciated.

Thanks, Patrick


[julia-users] Re: ANN / Request for testers: Pardiso.jl

2016-05-25 Thread Patrick Belliveau
Hi Kristoffer,
 First, thanks for this package, it's very useful. 
After installing the BaseTestNext package I was able to build Pardiso.jl 
and all tests pass for me. I agree with Jared that BaseTestNext should be 
included as a dependency. I tested on a small cluster running CentOS 5.11. 
Julia versioninfo() output is:

Julia Version 0.4.6-pre+7
Commit 273b487* (2016-03-28 14:46 UTC)
Platform Info:
  System: Linux (x86_64-unknown-linux-gnu)
  CPU: Intel(R) Xeon(R) CPU   E5410  @ 2.33GHz
  WORD_SIZE: 64
  BLAS: libmkl_rt
  LAPACK: libmkl_rt
  LIBM: libimf
  LLVM: libLLVM-3.3

I only tested MKL pardiso because I don't have access to pardiso 5.0. My 
own code that uses Pardiso.jl ran successfully after updating for the 
function name changes.
Cheers, Patrick Belliveau

On Tuesday, May 24, 2016 at 1:33:18 AM UTC-7, Kristoffer Carlsson wrote:
>
> Hello everyone,
>
> I recently took a bit of time to clean up my wrapper to the linear solver 
> library Pardiso that exist in MKL and as a standalone project (
> http://www.pardiso-project.org/). It should now hopefully work on both 
> UNIX and Windows systems and have a decent interface. 
>
> The Pardiso library is commercial software but MKL has academic and 
> community licenses while Project Pardiso has academic licenses so they are 
> quite available.
>
> I would like to tag a new version of it but before I do that it would be 
> nice if someone else could try it out and see if things work. Since this is 
> a wrapper to a binary library a lot of things can go wrong and a bit more 
> battle testing would be very useful. I have run it with passing tests on 
> two unix machines and one windows but this is of course a very small 
> configuration space. If anyone is interested, it would be helpful if you 
> could look at the installation instructions and see if a 
> Pkg.test("Pardiso") works. Any comments about the interface is also 
> appreciated.
>
> Note that the master version is needed:
>
> Pkg.add("Pardiso")
> Pkg.checkout("Pardiso")
>
> Here is a benchmark solving a (positive definite) system from a 
> discretized heat problem comparing it to Julias default which in this case 
> is CHOLMOD:
>
> julia> nnz(A)
> 668656
>
> julia> @time factorize(A) \ x
> # 3.442693 seconds
>
> julia> ps = MKLPardisoSolver()
>
> julia> @time solve(ps, A, x)
> # 1.495566 seconds
>
> Thanks!
>
> // Kristoffer
>
>

[julia-users] Re: plotting symbols in pyplot

2016-05-13 Thread Patrick Belliveau
You might be looking for something more sophisticated but a very simple 
example of this is:

julia> using PyPlot
julia> x = collect(0:0.1:5);
julia> y = 2*x;
julia> y2 = y + 0.2*randn(length(x));
julia> plot(x,y,"b-")
julia> plot(x,y2,"r--")
julia> xlabel("the x-axis")
julia> ylabel("the y-axis")

That will create a single plot with a solid blue line showing y = 2x and a 
dashed red line showing y2 = 2x + noise. It's useful to remember that 
PyPlot.jl is just a wrapper of Python's matplotlib.pyplot. My usual 
procedure when I'm trying to figure out how to plot something with 
PyPlot.jl is to figure out (through a combo of the matplotlib manual and 
stack overflow posts) how to do what I want to do in Python and then use 
the PyPlot.jl readme to translate into julia.
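For the symbols part of the question: the third argument to plot is a matplotlib-style format string combining a colour, a marker, and optionally a line style. A minimal sketch (the `.+`/`.-` broadcasts are 1.x syntax; plain `+`/`-` worked on 0.4-era Julia):

```julia
using PyPlot

x = collect(0:0.5:5)
plot(x, 2x, "b-")           # solid blue line
plot(x, 2x .+ 0.3, "go")    # green circles
plot(x, 2x .- 0.3, "k^")    # black triangles
plot(x, 2x .+ 0.6, "rs--")  # red squares joined by a dashed line
legend(["line", "circles", "triangles", "squares"])
```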

Patrick


On Friday, May 13, 2016 at 12:28:01 AM UTC-7, Ferran Mazzanti wrote:
>
> Dear all,
>
> I'm sure this is a simple question but can't find out the right 
> information (manuals?) for Pyplot interfacing in Julia...
> I want to compare several (similar) figures in the same plot, and some of 
> them should be plotted with lines (solid, dashed...) and 
> colors, while other should be plotted with different symbols (and colors). 
> How could I do that in Julia?
> Does anybody has a simple exemple of this that I can take a look at?
>
> Thanks in advance,
>
> Ferran.
>


[julia-users] Calling fortran subroutines with derived type input arguments

2015-07-23 Thread Patrick Belliveau
Hello,
I'm a big fan of the MUMPS linear solver. I've used the 
dpo/MUMPS.jl and JuliaSparse/MUMPS.jl packages to call MUMPS from julia. 
Those packages call glue routines (written in C and Fortran respectively) 
to call the actual MUMPS library functions. I've been considering writing a 
boilerplate-free interface that calls the MUMPS libraries directly from 
julia but my preliminary investigation indicates that this would be at best 
extremely tedious and error-prone and at worst not possible with my 
current version of julia (0.4.0-dev+6142). The MUMPS routines that one 
would call from Fortran code are subroutines that take a single input 
argument. The input argument is a derived type with over 100 fields. Some 
of those fields are arrays of fixed size. If I understand the doc page on 
calling C and Fortran code, these arrays would have to be expanded 
manually. I've posted a minimal example of this on github:

https://github.com/focus-shift/fortran-derived-types-from-julia

Is it likely that julia will make it possible in the near future to pass 
composite types with array fields to C or Fortran without manually writing 
out each entry? Secondly, some elements of the MUMPS derived type are 
pointers to arrays, e.g.

DOUBLE PRECISION, DIMENSION(:), POINTER :: RHS_SPARSE.

I don't really understand how Fortran stores pointers to arrays internally 
so I'm not sure how one might go about passing a pointer to a Julia array 
properly into Fortran, when Fortran is expecting a pointer. Is this 
possible?
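A side note on the fixed-size array fields: in more recent Julia versions (not the 0.4 era this post describes, where the manual expansion above was needed), an `NTuple` field gives the same flat layout as a fixed-size array inside a sequence-layout Fortran derived type. A hypothetical mirror (field names are illustrative, not the actual MUMPS struct):

```julia
# Fortran side (with SEQUENCE, so the layout is defined):
#   type config
#     sequence
#     integer :: n
#     integer :: icntl(3)
#     double precision :: cntl(2)
#   end type
struct Config               # `immutable` in 0.4-era syntax
    n::Cint
    icntl::NTuple{3,Cint}
    cntl::NTuple{2,Cdouble}
end

# The layout is flat: 4 bytes + 3*4 bytes of Cint, then 2*8 bytes of
# Cdouble starting at an 8-byte-aligned offset = 32 bytes, no padding here.
@assert sizeof(Config) == 32
```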

Thanks, Patrick


[julia-users] Re: collect(a:b) vs. [a:b;] in v0.4

2015-07-10 Thread Patrick Belliveau
Thanks Josh! That makes a lot of sense. I didn't see that github discussion 
when I was googling this.
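For anyone landing here later, the equivalences described below can be checked directly (this also runs on current Julia):

```julia
a, b = 3, 5
# collect, trailing-semicolon concatenation, convert, and vcat all agree:
@assert collect(a:b) == [a:b;] == convert(Vector{Int}, a:b) == vcat(a:b) == [3, 4, 5]

# [c; d] concatenates two vectors; the semicolon tells the parser to call vcat
c, d = [1, 2], [3, 4]
@assert [c; d] == vcat(c, d) == [1, 2, 3, 4]
```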

Patrick

On Friday, July 10, 2015 at 8:10:14 AM UTC-7, Josh Langsfeld wrote:

 You can see the discussion about changing the deprecation suggestion here: 
 https://github.com/JuliaLang/julia/pull/11369

 I think people liked it because it's a bit of an abuse of terminology to 
 call the [a:b;] form 'concatenation' when there is only one object being 
 put into the array. The 'collect' form is semantically closer to converting 
 the range into an array rather than concatenating it. It is just a 
 suggestion though and I believe [a:b;] will continue to work fine for the 
 foreseeable future. Another option that works is 'convert(Vector{Int}, 
 1:5)'. All three call the 'vcat' function under the hood.

 [c;d] should never have given a deprecation warning because that was 
 always the proper syntax for vertical concatenation of two array-like 
 objects. It was only the meaning of the comma and wrapping a single object 
 in brackets that was changed. At least one semicolon is needed so that the 
 parser knows to call vcat rather than vector creation. Any further 
 superfluous ones are just ignored by the parser.

 On Thursday, July 9, 2015 at 11:19:53 PM UTC-4, Patrick Belliveau wrote:

 Hi,
   I'm running Julia Version 0.4.0-dev+5852. I ran some old code 
 yesterday that used the [a:b] syntax (where a and b are integers). Executing

 array = [a:b]

 gives the deprecation warning 

 WARNING: [a] concatenation is deprecated; use collect(a) instead. 

 Neither array = [3:5;], nor array = collect(3:5) produce any warnings and 
 they seem to give the same output. The deprecation warning would seem to 
 suggest that the collect(a:b) syntax is preferable to [a:b;]. I'm wondering 
 why that's the case? On a related note, for vectors c and d, array = [c;d] 
 doesn't give a deprecation warning suggesting the array = [c;d;] syntax, 
 which I believe it did on older 0.4.0-dev builds. What is the significance 
 of the trailing semicolon?

 Thanks very much, Patrick




[julia-users] collect(a:b) vs. [a:b;] in v0.4

2015-07-09 Thread Patrick Belliveau
Hi,
  I'm running Julia Version 0.4.0-dev+5852. I ran some old code 
yesterday that used the [a:b] syntax (where a and b are integers). Executing

array = [a:b]

gives the deprecation warning 

WARNING: [a] concatenation is deprecated; use collect(a) instead. 

Neither array = [3:5;], nor array = collect(3:5) produce any warnings and 
they seem to give the same output. The deprecation warning would seem to 
suggest that the collect(a:b) syntax is preferable to [a:b;]. I'm wondering 
why that's the case? On a related note, for vectors c and d, array = [c;d] 
doesn't give a deprecation warning suggesting the array = [c;d;] syntax, 
which I believe it did on older 0.4.0-dev builds. What is the significance 
of the trailing semicolon?

Thanks very much, Patrick