[julia-users] Is GLSO useful?

2016-03-19 Thread Jeff Waller
It's a pretty old paper and seems interesting, but I'm surprised it's not
implemented anywhere. It's called GLSO. Has anyone heard of it? Has it been
found not useful and discarded, or improved upon? Maybe it goes by another
name?


[julia-users] Re: julia on ARM for a drone

2016-02-17 Thread Jeff Waller
You know, it's kind of a feeling right now, plus one specific thing and one
planned thing.

The grand idea: I want to imbue a drone with AI, which in general requires an
implementation and a community, and Julia is a good pick. Latency is a
possible problem. Right now the hardware is not super fast, but maybe not for
long. Not an entirely new idea, but a better approach.

Currently, I'm using Kalman filters for better GPS estimation, and in the
future, maybe constrained optimization for determining the best location to
navigate to for the best video angle.
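To make the Kalman filtering idea concrete, here is a minimal 1-D sketch that
smooths noisy position fixes. This is illustrative only — the filter on a real
drone fuses velocity and multiple sensors, and the noise parameters q and r
here are assumed values, not ones from my setup:

```julia
# Minimal 1-D Kalman filter sketch: smooth noisy GPS position fixes.
# q = process noise variance, r = measurement noise variance (assumed values).
function kalman_1d(zs; q=0.01, r=1.0)
    x, p = zs[1], 1.0            # initialize state from the first fix
    for z in zs[2:end]
        p += q                   # predict: uncertainty grows between fixes
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # correct toward the new measurement
        p *= (1 - k)             # shrink uncertainty after the update
    end
    return x
end

kalman_1d([10.2, 9.8, 10.1, 10.0, 9.9])   # estimate near the true position 10.0
```

The same predict/correct loop extends to the multidimensional case with
matrices in place of the scalars.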


[julia-users] Re: julia on ARM for a drone

2016-02-15 Thread Jeff Waller
There are 2 things that appear to be direct result of this new OS/processor.

On startup:

WARNING: unable to determine host cpu name

And also, apparently Julia sets the git certificate path based on the OS, but
in the case of this OS and others like it, the OS will not be recognized.

Fails:

julia> Pkg.update()
INFO: Updating METADATA...
WARNING: fetch: GitError(Code:ECERTIFICATE, Class:SSL, The SSL certificate is invalid)


But not if:

julia> LibGit2.set_ssl_cert_locations("/etc/ssl/certs/ca-certificates.crt")
0
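A sketch of making that override stick across sessions — assuming a 0.4-era
~/.juliarc.jl startup file and that Debian-style cert path, both of which you
should adjust for your system:

```julia
# Run at startup (e.g. from the .juliarc.jl startup file) so package
# operations find the system CA bundle. The cert path is an assumption;
# adjust it for your system.
import LibGit2   # a stdlib in current Julia; part of Base in 0.4
LibGit2.set_ssl_cert_locations("/etc/ssl/certs/ca-certificates.crt")
```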



Also, anything that requires JIT compilation is slow, so having everything
pre-compiled, if possible, is desirable.





[julia-users] Re: julia on ARM for a drone

2016-02-15 Thread Jeff Waller


On Monday, February 15, 2016 at 5:55:19 AM UTC-5, Viral Shah wrote:
>
> Making sure I understand correctly, were you using the distribution on 
> julialang.org/downloads? If you had to do a few things to get it to work, 
> can you file issues? I personally would love to see more such applications 
> and that was the motivation to have an ARM port in the first place.
>

Sure can.  I'll answer some of this now.

The first thing to note is that a lot of the workaround was a result of the
arbitrary nature of this distro rather than ARM per se. At the same time,
expect that ARM will more likely be associated with previously unseen OSes,
since the OS is likely to be tailored to the processor and the processor is
tailored to the task/device/application. In this case, the processor was
likely chosen for its ability to work at low wattage while still delivering
enough CPU to do the task.  Here it is:


3dr_solo:~$ cat /proc/cpuinfo 

processor   : 0

model name  : ARMv7 Processor rev 10 (v7l)

BogoMIPS: 1988.29

Features: swp half thumb fastmult vfp edsp neon vfpv3 tls 

CPU implementer : 0x41

CPU architecture: 7

CPU variant : 0x2

CPU part: 0xc09

CPU revision: 10
Hardware: Freescale i.MX6 Quad/DualLite (Device Tree)

Revision: 

Serial  : 








[julia-users] Re: I think I'm obtaining the nightly ubuntu PPA incorrectly...

2016-02-07 Thread Jeff Waller
This is a downside of having a project that spans 3 Travis languages (c++,
node, julia) -- currently the language is set to c++.  Ironically, the C++
provided by language c++ is the wrong one on Linux (gcc 4.6 is too old).  I
think I copied that script back in the day, but have since modified it to
allow additional permutations.  Isn't there an issue, though, with seeing
what a project backed by a language julia instance does -- julia is
pre-installed?


[julia-users] Re: I think I'm obtaining the nightly ubuntu PPA incorrectly...

2016-02-06 Thread Jeff Waller


On Saturday, February 6, 2016 at 8:12:11 AM UTC-5, Tony Kelman wrote:
>
> Don't think the problem is on your end. That's probably just how long it's 
> been since the ppa nightlies were updated successfully. Elliot has been too 
> busy to keep them updated regularly, and has asked elsewhere if anyone 
> would be willing to take over maintenance from him.
>
> I recommend using the generic linux nightlies instead of the 
> ubuntu-specific ppa.
>

I thought something like that might be the case.  But I think it's the ARM
build that's failing which is holding things up, as the amd64 and x86 builds
look successful.

I'll help out if I can, though I might have even less time to do it.

In the meantime, are there some cut-n-paste instructions for (generic linux,
possibly) travis somewhere?


[julia-users] Re: julia on ARM for a drone

2016-02-06 Thread Jeff Waller
An update:

It turns out that the default Linux client compiled for ARM does work with a
few modifications.  It is indeed slow to JIT-compile (on the order of
seconds), but is fast after that.  See randomMatrix.txt in the gist for exact
measures; for example, rand(100,100) takes 1 second the first time and then
0.0005 seconds subsequently -- that's quite a speedup.  Likewise, svd of that
100x100 random matrix is 3 seconds and then 0.05 seconds afterwards.
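The pattern for measuring that warm-up cost is simple enough to sketch; the
first timed call includes JIT compilation, the second reuses the compiled
code (the function and sizes here are illustrative, not the gist's exact
benchmark):

```julia
# First call pays the JIT compilation cost; the second call reuses
# already-compiled code and is typically much faster.
g(n) = sum(rand(n, n))

t_first  = @elapsed g(100)   # includes compile time for g
t_second = @elapsed g(100)   # compiled already
(t_first, t_second)
```

Calling `precompile(g, (Int,))` ahead of time is another way to move that
compile cost out of the timed path.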

Also, of course, I tried using node-julia and that was difficult, but I
managed to get that working too.  Firstly, I had to add -march=armv7-a to the
compile step for use of C++ std::thread to be successful; apparently this is
a known bug of openembedded, and not an issue with Julia.  However, I also
had to add -lgfortran and -lopenblas to the link step to satisfy the loader,
which is reminiscent of the error I had when I first tried to get things
working on Linux in 2014.  This is not the same thing but feels possibly
related.  I also had to create a symbolic link for gfortran, as only the
specialized version (libgfortran.so.3) existed: ln -s libgfortran.so.3
libgfortran.so; that should probably be fixed.
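The symlink fix, sketched in a scratch directory so it's safe to try — the
real target would be the system lib directory, and the paths here are
stand-ins:

```shell
# Demonstrate the unversioned-soname fix: the linker asks for libgfortran.so,
# but only libgfortran.so.3 exists, so add a symlink pointing at it.
mkdir -p /tmp/libfix && cd /tmp/libfix
touch libgfortran.so.3               # stand-in for the installed versioned library
ln -sf libgfortran.so.3 libgfortran.so
readlink libgfortran.so              # shows the link target
```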

All but 3 of the node-julia regressions worked once I increased the default
timeout.  You can see the relative speed differences between an OS/X laptop
and the drone in the gist as well.  Some of the timings are comparable, some
are not.

I believe the large difference in time is due to not only the processor 
being slower, but the (flash) filesystem as well.
Exec-ing processes on this drone takes a very large amount of time 
(relative to normal laptops/desktops).  The memory
is limited too (512 MB), so you really have to be careful about resources.



[julia-users] I think I'm obtaining the nightly ubuntu PPA incorrectly...

2016-02-05 Thread Jeff Waller
For travis

sudo add-apt-repository ppa:staticfloat/julianightlies

I end up with one from November


1.81s$ julia -e 'versioninfo()'

Julia Version 0.5.0-dev+1491

Commit 41fb1ba (2015-11-27 16:54 UTC)

Platform Info:

System: Linux (x86_64-linux-gnu)

CPU: Intel(R) Xeon(R) CPU @ 2.30GHz

WORD_SIZE: 64

BLAS: libopenblas (NO_LAPACK NO_LAPACKE DYNAMIC_ARCH NO_AFFINITY Haswell)

LAPACK: liblapack.so.3

LIBM: libopenlibm

LLVM: libLLVM-3.3



[julia-users] julia on ARM for a drone

2016-02-01 Thread Jeff Waller
Does the update last month and the discussion about ARM nightlies mean that
ARM is generally supported now?
I have a specific purpose in mind that doubles down on the difficulty,
however.  The install method need not be by RPM, but though the processor is
an ARMv7 -- more specifically a Cortex-A9 -- it's not related to Redhat or
Raspberry; rather, the OS is Yocto Linux, which is an interesting project --
it's a distro built from the result of a cross-compilation framework paired
with a bunch of metadata describing the processor.  The idea is that if one
can describe the processor, then the distro can be tailored to it.

Trying to get julia compiled within the openembedded framework might be a
good project itself, but my specific purpose is that the distro is used by
this drone, and I feel that (would like that) Julia could well play a part in
the control software.

You know, any sort of tips would help; hey, I'll just try one of the
nightlies and see what happens.


[julia-users] Re: Can Julia function be serialized and sent by network?

2015-08-10 Thread Jeff Waller
 

 My question is: does Julia's serialization produce completely 
 self-containing code that can be run on workers? In other words, is it 
 possible to send serialized function over network to another host / Julia 
 process and applied there without any additional information from the first 
 process? 

 I made some tests on a single machine, and when I defined a function without 
 `@everywhere`, the worker failed with a message "function myfunc not defined 
 on process 1". With `@everywhere`, my code worked, but will it work on 
 multiple hosts with essentially independent Julia processes? 

 
According to Jey here 
https://groups.google.com/forum/#!searchin/julia-users/jey/julia-users/bolLGcSCrs0/fGGVLgNhI2YJ,
 
Base.serialize does what we want; it's contained in serialize.jl 
https://github.com/JuliaLang/julia/blob/master/base/serialize.jl
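A minimal round-trip sketch of that machinery (note: in current Julia these
functions live in the Serialization stdlib, while in the 0.4 era they were in
Base). Serialization captures functions by name, which is why workers still
need the definition available, e.g. via @everywhere:

```julia
using Serialization   # in 0.4-era Julia, serialize/deserialize were in Base

# Round-trip a value through an in-memory buffer, as would happen over a
# socket between two Julia processes.
io = IOBuffer()
serialize(io, Dict("answer" => 42))
seekstart(io)
deserialize(io)
```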



[julia-users] Re: Errors while trying to use cxx and embed Julia together

2015-07-30 Thread Jeff Waller

Specifically I think this will not work because of the following: 
 

 double ArrayMaker::ArrayMak(int iNum, float fNum) {

 jl_init(JULIA_INIT_DIR);


 jl_atexit_hook();
 return sol;
 }
 }


What this is doing is starting up a 2nd Julia engine inside of the original
Julia engine, and that can only work if Julia has no globals that must be
initialized exactly once and all mutually exclusive sections are protected by
a mutex (i.e., it is thread safe) -- and currently it is not.

In other words, you're going beyond the question of whether Julia can be
embedded in C++ and C++ simultaneously embedded in Julia; the answer to that
is yes.  You're asking whether Julia can be embedded within itself, and I
think the answer is no.

But from what you're describing as the problem you're attempting to solve,
you don't really need to run Julia inside Julia.  You just need to call C++
from Julia.

What happens if you just simply leave out the calls to jl_init and 
jl_atexit_hook?



[julia-users] Re: PyPlot; how to get rid of that annoying flashed-tk-window artifact?

2015-07-22 Thread Jeff Waller



 matplotlib 1.1.1


 I would recommend trying a newer Matplotlib, and on OS/X I strongly 
 recommend using the Anaconda Python distro.  Anything else is a big 
 headache to maintain in the long run. 


It's ok, no need for that.  I was wrong anyway; it was 1.4.3 all along.  I
did find a way to keep this from happening.

import PyPlot
PyPlot.switch_backend("Agg");
PyPlot.pygui(true);
PyPlot.ioff();
PyPlot.svg(true); 

and then use

fig = PyPlot.gcf();
display(fig)

but that does not consume the figure, so you must perform PyPlot.gcf() 
afterwards, or just before the new figure.


[julia-users] Re: PyPlot; how to get rid of that annoying flashed-tk-window artifact?

2015-07-21 Thread Jeff Waller


 What OS are you on, and what Python distro?  I haven't noticed this... 


OS/X Mavericks, Python 2.7.5 (Apple default) 

matplotlib 1.1.1

Julia Version 0.4.0-dev+6075

Commit 4b757af* (2015-07-19 00:53 UTC)

Platform Info:

  System: Darwin (x86_64-apple-darwin13.4.0)

  CPU: Intel(R) Core(TM) i7-4850HQ CPU @ 2.30GHz

  WORD_SIZE: 64

  BLAS: libopenblas (USE64BITINT DYNAMIC_ARCH NO_AFFINITY Haswell)

  LAPACK: libopenblas

  LIBM: libopenlibm

  LLVM: libLLVM-3.3


[julia-users] PyPlot; how to get rid of that annoying flashed-tk-window artifact?

2015-07-20 Thread Jeff Waller
Whenever PyPlot renders something, I'm getting a brief flashed window; I
think it's a result of the backend trying to render in a window directly, but
I can't find a way to disable it.

Running inside IJulia I have

PyPlot.pygui(false)

PyPlot.backend 

"TkAgg"



[julia-users] Re: Errors while trying to use cxx and embed Julia together

2015-07-19 Thread Jeff Waller
So I built and installed Cxx and then attempted to use it with embedded
Julia, and was successful.  The tricky part is pretty tricky, especially for
someone not familiar with how the whole thing is laid out.  I've put most of
what I did in this gist:
https://gist.github.com/waTeim/ec622a0630f220e6b3c3

Step 1.  Build julia and run make install.
Step 2.  While still in the top-level source directory, install Cxx as per 
the Cxx webpage.
Step 3.  Take the installed directory structure and put it in an installed 
location.
Step 4.  There are additional include files that Cxx needs access to, after 
it has been built, whenever C++ source needs to be compiled; they can be found
in the following directories relative to the source tree root:

usr/include/{clang,llvm,llvm-c}

usr/lib/clang/

Cxx attempts to access the files in these directories when it gets imported,
and although addIncludeDir is supposed to help here, I could not make it work
like that, so I ended up copying all of this to the installed directory (see
the gist).

Step 5.  In addition, for embedding, the relative paths are off, so a couple
of symbolic links need to be added.

Step 6.  Access Cxx from the embedding framework; I have some show and tell
here for that.

Consider the following file (index.js); this is taken from the Cxx example
page, simplified to merely good ol' hello world:


var julia = require('node-julia');
 
julia.eval('using Cxx');
julia.eval('cxx"""#include <iostream>"""');
julia.eval('cxx"""void mycppfunction() { std::cout << "hello world" << std::endl; }"""');
julia.eval('julia_function() = @cxx mycppfunction()');
 
 
julia.exec('julia_function');


and when run:

bizarro% node index.js 

hello world

So that's JITed C++ within JITed Julia within JITed JavaScript, which I think
is a first.  It's pretty slow because of all of the compiling going on, and
there's a bunch of eval where there could be something else, but I think
pre-compilation will fix that.

Pretty exciting.




[julia-users] Re: Errors while trying to use cxx and embed Julia together

2015-07-19 Thread Jeff Waller



 Symbolic links to link which files though?
 Sorry if the questions I raise are already answered by the gist; I just 
 failed to understand it properly, so that's why I come back with these 
 questions now.
  


I've added a listing of the entire directory structure 
https://gist.github.com/waTeim/ec622a0630f220e6b3c3#file-directorylayout 

Here are the symbolic links.  I believe that with some simple changes to Cxx
the need for these will disappear, but for now this is where it searches.

/usr/local/julia/lib

bizarro% ls -l

total 16

drwxr-xr-x   4 jeffw  staff   136 Jul 18 18:26 clang

lrwxr-xr-x   1 root   staff    10 Jul 19 03:16 include -> ../include

drwxr-xr-x  49 jeffw  staff  1666 Jul 18 20:11 julia

lrwxr-xr-x   1 root   staff     1 Jul 19 03:16 lib -> .


[julia-users] Re: Errors while trying to use cxx and embed Julia together

2015-07-18 Thread Jeff Waller


On Saturday, July 18, 2015 at 5:24:33 AM UTC-4, Kostas Tavlaridis-Gyparakis 
wrote:

 Hello again and thanks a lot for the suggestions.
 Thing is, I went back to the source version of Julia where Cxx is already 
 built and
 runs properly, so there wasn't much for me to do, I run in terminal also 
 the command of:

 addHeaderDir(/home/kostav/julia/src/)

 Then proceeded again with make install and run the version of julia from 
 this folder but
 still not being able to use or install Cxx. So, maybe I didn't understand 
 properly what you
 were suggesting for me to do in the source version of Julia...
 Or maybe there is something else I should try..?


Well, it's guesswork on my part; the exact failure would be helpful.
However, I'm going to try to do this as well.  Not exactly the thing you're
doing, but related; you're more familiar with Cxx, and I'm more familiar with
embedding.  Let's see what can be done. 


[julia-users] Re: Errors while trying to use cxx and embed Julia together

2015-07-17 Thread Jeff Waller
Yea, I think what you're seeing is that Cxx needs to use the source tree --
for example, Make.inc only exists in the source tree -- while embedding
assumes the installed tree.

How to reconcile this the best way I'm not sure yet.  I myself found Cxx
interesting enough to try, but the build failed a couple of times and I
haven't had a chance to pick it back up.  But here's what I assume could be
done.

Don't attempt to do them at the same time but first Cxx (using source)
and then embed second (using install).

Build Cxx following the Cxx instructions.

Then install (this is the tricky part and only something I can guess at
right now).  You might find that a number of things necessary for Cxx to
function are not installed by default, or it will all go smoothly.  I think,
though, that Cxx is going to need clang in some way, and that is definitely
NOT installed by default.  I'm not sure how much Julia must/can provide and
how much the Cxx package can/must provide.

Then build embed against installed stuff.

Report errors, they may require multiple iterations.


Re: [julia-users] Re: Embedding Julia with C++

2015-07-14 Thread Jeff Waller


It returns: -Wl,-rpath,/home/kostav/julia/usr/lib -ljulia


Ok, next test: what happens when you paste that into the almost-working step
above (the one that works except that when you run it, it can't find
libjulia)?  Make sure that -ljulia and -lLLVM3.7svn come after the
-Wl,-rpath directives.

Related: what's in the directory /home/kostav/julia/usr/lib?
 



 Did you do a make install of Julia itself and are running out of that
 installed directory, or are you running out of where it compiled or possibly
 simply copied the directories? That script assumes the former and I think
 maybe you're doing the latter.

 I downloaded the source code in a folder (home/julia) and ran make and make 
 install of Julia inside this folder.


This should have resulted in a directory with a hex number in the name, didn't it?
 


 /home/kostav/julia/bin/julia /home/kostav/julia/contrib/julia-config.jl --ldlibs

 I actually don't have any bin folder inside my julia folder, so I can't do 
 any of the suggested modifications.
 What could I do different in this case?


Where is it?  Use that instead.
 











[julia-users] Re: Errors while trying to use cxx and embed Julia together

2015-07-14 Thread Jeff Waller
Some notes

I think a gist might be better, and then we can annotate and stuff in place.
Google Groups is just a mailing list and is bad for extended debugging info;
don't worry about it too much, just a suggestion.

1)

 jl_value_t *ret = jl_call2(func, argument, argument2);
 sol = jl_unbox_float64(ret);

This is ok for this example and testing, given that you are very confident it
won't fail, but in general you always want to follow this up with

jl_value_t *ex = jl_exception_occurred();

because something could go wrong, and then all bets are off.

2)
CFLAGS = -fPIC -Wall -g -Wl,-rpath,/home/kostav/julia/usr/lib 

-Wl,-rpath,/home/kostav/julia/usr/lib  ---  Strictly speaking these are
flags for the linker; they might be being applied incorrectly here.

3)

#include(/home/kostav/julia/src/julia.h)
cxxinclude(ArrayMaker.h)
maker = @cxxnew ArrayMaker()

Ooh, I don't know about that.  Because you are starting julia and then using
Cxx to create ArrayMaker, which starts embedded Julia, you have Julia running
inside Julia; I feel that's a little too crazy for Julia in its current state
to handle.  Perhaps that explains the never-returns behavior.

4)

Did you see my comments at the end of the previous thread?  I see you're
using Cxx here, and I think that has some requirements on using the source
tree, since it needs access to a bunch of LLVM libs not normally distributed,
and probably some other things as well.

I wonder if those things could be copied into the distribution tree manually
for this, and it would end up helping?


Re: [julia-users] Re: Embedding Julia with C++

2015-07-14 Thread Jeff Waller


 Your path for includes doesn't have julia/src, which is where julia.h 
 lives.


This is in part true, but is not the whole story.  Yes, it is true that
julia.h is found in src in the source directory tree, but the file julia.h,
as well as the julia executable, libjulia, etc., get copied into the
installed directory structure when make install is executed.  It's that
structure that should be used (and optionally lifted/copied out of the build
and put somewhere else), and in that tree the src directory does not exist.
Nor does it exist in any packaged version of Julia, so it would be better to
assume that structure, as this whole approach can eventually be applied to
releases, nightly builds, etc.

I think what is going on is that even though Kostas is performing a make
install, he's continuing to use the source directory structure instead of the
installed directory structure, which is causing confusion -- for both
julia-config and me.

 jl_init_with_image("/home/kostav/julia/usr/lib/julia", "sys.so");

This is unfortunate if required; it should not be required.

It's good that something is working now; however, can we try one more thing?

Perform make install again, and note the directory (it will be something like
julia-<hex>, where <hex> is some hexadecimal number) that all of this gets
installed to as described above, and copy that entire tree elsewhere (using
cp -a); you can change the name if you want, it's not necessary to preserve
the name of the directory.  Once that is done, change the PATH and/or use
symbolic links, if that's more convenient, so that julia is the julia in THAT
directory, not the one in the source tree.

Then revert back to the cut-and-paste Makefile version rather than this
modified version, but keep the old one around somewhere just in case it
doesn't work out, so you don't have to re-create it from scratch.  That way,
if this attempt ultimately fails, you can at least have something going with
what you have now by switching PATH back to what it is now.


Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller


 This is the problem, or part of it anyway. That file lives under 
 `julia/contrib` in a source build.


Hmm, this should be bundled into all 0.4.  Uh oh.


Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller
Ok, this is turning into kind of a debugging session, but:

/home/kostav/julia/contrib/julia-config.jl --cflags

Just run it on the command line.  What's the output?

What version of 0.4?  Is it current as of a few days ago?  In the version
I'm using, all is well.  I will obtain/compile the latest.


Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller


 CFLAGS   += $(shell $/home/kostav/julia/contrib/julia-config.jl --cflags)


take out the 2nd $

CFLAGS   += $(shell /home/kostav/julia/contrib/julia-config.jl --cflags) 

what results?


Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller


On Monday, July 13, 2015 at 11:36:34 AM UTC-4, Kostas Tavlaridis-Gyparakis 
wrote:

 Ok, my current Julia version (that I installed via running the source 
 code) is: 
 *Version 0.4.0-dev+5841 (2015-07-07 14:58 UTC)*So, what should I do 
 different so that -I flag gets the proper value?


Could you run /home/kostav/julia/contrib/julia-config.jl  --ldlibs and tell 
me what it returns?

Did you do a make install of Julia itself and are running out of that
installed directory or are you running out of where it compiled or possibly
simply copied the directories? That script assumes the former and I think
maybe you're doing the latter.

If so you can still use it, but you have to specify which julia so 
something like this

/home/kostav/julia/bin/julia  /home/kostav/julia/contrib/julia-config.jl 
 --ldlibs

is there any difference?  To get something working, cut-paste the output to 
the
semi-working step above

to use in a Makefile modify to 

$(shell /home/kostav/julia/bin/julia 
/home/kostav/julia/contrib/julia-config.jl --cflags) 

etc


[julia-users] Re: MongoDB and Julia

2015-07-13 Thread Jeff Waller


On Monday, July 13, 2015 at 3:27:49 AM UTC-4, Kevin Liu wrote:

 Any help would be greatly appreciated. I am even debating over the idea of 
 contributing to the development of this package because I believe so much 
 in the language and need to use MongoDB. 


I think this is why it's untestable:

https://travis-ci.org/pzion/Mongo.jl/jobs/54034564

Lytol/Mongo.jl looks abandoned.  It has a bunch of issues created over the
past 2 years, and the last update was in 2013.  The pzion repo is a fork
which was updated 4 months ago; maybe it's abandoned too and you'll have to
fork.  But it's at least worth contacting him.




Re: [julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller


On Monday, July 13, 2015 at 10:54:57 AM UTC-4, Kostas Tavlaridis-Gyparakis 
wrote:

 Any ideas on how to fix the compile error: julia.h: No such file or 
 directory, #include <julia.h>, compilation terminated?


Yea, of course.  This is a result of the -I flag having the wrong value.

There was a span of time when this command (julia-config.jl) was not working,
as a result of the new use of sys.so instead of sys.ji, but that has since
been fixed; that's why I'm asking what version it was.  It would be working
in the newest version, but I am verifying that now.

The cause of libjulia.so not found is that the link step is missing
-Wl,-rpath, which julia-config gives you; that's why I keep coming back to
it.


[julia-users] Re: Embedding Julia with C++

2015-07-13 Thread Jeff Waller
It's not clear to me which version you are using.  Depending on the version,
it is referred to in the URL you linked...

I'll just cut to the chase: use 0.4 and julia-config.jl as described in the
doc, create a Makefile by just cutting and pasting the example, and augment
with your source.  All but one of your errors is a result of the wrong
compile/link flags.

The last error is that main() is either not being compiled or linked; that's
just straight-up C programming and has nothing to do with Julia.

As far as eclipse goes, I'm confident it's possible; I can't imagine eclipse
not supporting compilation using Makefiles, but even if it doesn't, you can
still automate things.  Just get something working first and you can
embellish later.

TL;DR

0.4, julia-config.jl, cut-paste Makefile, add your source, done
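That TL;DR expands to roughly the following Makefile skeleton.  This is a
sketch: the julia and julia-config.jl paths are assumptions for a locally
built tree, and the embed.c target name is made up -- point both at your own
install and source:

```make
# Paths are assumptions for a locally built 0.4 tree; adjust to your install.
JULIA     ?= $(HOME)/julia/usr/bin/julia
JULIA_CFG ?= $(HOME)/julia/contrib/julia-config.jl

CFLAGS  += $(shell $(JULIA) $(JULIA_CFG) --cflags)
LDFLAGS += $(shell $(JULIA) $(JULIA_CFG) --ldflags)
LDLIBS  += $(shell $(JULIA) $(JULIA_CFG) --ldlibs)

embed: embed.c            # your source goes here
	$(CC) $(CFLAGS) $(LDFLAGS) -o $@ $< $(LDLIBS)
```

The point is that julia-config.jl emits the -I, -L, -l, and -Wl,-rpath flags
for your particular install, so none of them get hand-maintained.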


[julia-users] Deep Dreams via Mocha?

2015-07-10 Thread Jeff Waller
So the deep dreams setup is available and the notebook is very simple; here
are the relevant components that it uses:
from cStringIO import StringIO
import numpy as np
import scipy.ndimage as nd
import PIL.Image
#from PIL import Image
from IPython.display import clear_output, Image, display
from google.protobuf import text_format

import caffe

I'm wondering what prevents a Julia variant of this.  The scipy/numpy stuff
Julia has, and Mocha uses Caffe.  But can the Caffe files be used directly?
Perhaps google.protobuf support would have to be created.  The rest of the
things seem like imaging stuff.


[julia-users] ANN node-julia 1.2.0

2015-07-09 Thread Jeff Waller
A couple of new features with this version.

Windows support (finally)

There were a couple of things preventing this.

First off, Julia is compiled with gcc on Windows, but node-gyp needs MSVC;
that had to be overcome.  The good news was that the library libjulia.dll can
be used by the Microsoft compiler/linker so long as an implib (libjulia.lib)
is created first; this can be generated from libjulia.dll.  The same thing
has to occur with libopenlibm.dll, because julia uses libm functionality not
available in Microsoft's libm.  These openlibm symbols had to be taken
directly from openlibm.dll because, though it's linked into julia.exe, it's
not (can't be?) linked into libjulia.dll.  Any embedding program on Windows
would suffer from this; see #11419
https://github.com/JuliaLang/julia/issues/11419 for instance.

Second, this really needs to all happen automatically.  The previous version
assumed these Microsoft libraries were already in place, but that's really
putting too much of a burden on someone who wants to just do `npm install`.
This version takes care of that.



Shared Buffers

Hey, remember the question in this announce
https://groups.google.com/forum/#!msg/julia-users/xSSrQRThSJw/tZlkQFBmtT0J --
whether it's possible for Julia and Javascript to share the same underlying
memory buffer for arrays?  Well, it is possible, and this version implements
that.

Yes, it is more efficient, especially if the array is used multiple times, as
before it would have to be copied back and forth; and in addition there are
some cute tricks.  

For instance node has problems with large arrays if it has to manage them

> x = new Int32Array(536870911)

RangeError: Invalid array buffer length

   at new ArrayBuffer (native)

   at new Int32Array (native)

   at repl:1:5

   at REPLServer.defaultEval (repl.js:132:27)

   at bound (domain.js:254:14)

   at REPLServer.runBound [as eval] (domain.js:267:12)

   at REPLServer.anonymous (repl.js:279:12)

   at REPLServer.emit (events.js:107:17)

   at REPLServer.Interface._onLine (readline.js:214:10)

   at REPLServer.Interface._line (readline.js:553:8)

...

> x = new Int32Array(268435455)

FATAL ERROR: invalid array length Allocation failed - process out of memory


But v8 will allow indexing of arrays up to 2^31 -1 if only those could 
somehow be created...

 bizarro% node

> julia = require('node-julia')

> var x = julia.eval('Array(Int32,2^31 -1)')

undefined

> x.length

2147483647

> x[2147483646] = 1;  // does not crash


There are some other updates as well, but those are the highlights.



Re: [julia-users] calling julia functions in C++

2015-06-30 Thread Jeff Waller


On Tuesday, June 30, 2015 at 1:52:46 PM UTC-4, Tom Breloff wrote:

 I remember a post recently about improved performance when packing many 
 function arguments into an immutable for function calling.  Is this 
 (jl_call1 vs jl_call) the reason?


No, that's just for convenience.  

Also, if possible, try to avoid jl_eval_string, as it's comparably slow
relative to its counterparts.

 


Re: [julia-users] readandwrite: how can I read a line as soon as it's written by the process?

2015-06-28 Thread Jeff Waller


On Saturday, June 27, 2015 at 5:03:53 PM UTC-4, ele...@gmail.com wrote:

 Is your od program flushing its output?  Maybe its stuck in its internal 
 buffers, not in the pipe.


Nah man, if it's od, it's not going to do that; the only thing simpler is
cat.  I suppose you could try cat.

What happens when you do something like this?

julia> fromStream,toStream,p = readandwrite(`cat`)
(Pipe(open, 0 bytes waiting),Pipe(open, 0 bytes waiting),Process(`cat`, ProcessRunning))

julia> toStream.line_buffered
true

julia> toStream.line_buffered = false
false

julia> fromStream.line_buffered = false
false

julia> write(toStream,"x")
1

julia> readavailable(fromStream)
1-element Array{UInt8,1}:
 0x78

[julia-users] Re: Embedding Julia with C++

2015-06-23 Thread Jeff Waller
Embedded Julia is of particular interest to me. To answer your question, 
everything in Julia is available via embedded Julia.

I would very much discourage use of version 0.3.2; avoid it if you can.  I 
think that particular version has the uv.h problem, which is fixed in later 
versions.  Can you gain root on this host?  If so, you can get 0.3.9 via PPA. 
 Better yet, if you can get hold of one of the nightly builds, 0.4.x comes 
with julia_config.jl, which figures out all of the right compile flags 
automatically; you just cut and paste them into a Makefile.  Even without a 
Makefile, you can run it to learn the necessary compile-time flags.



[julia-users] Re: Backend deployment for website

2015-06-16 Thread Jeff Waller


On Tuesday, June 16, 2015 at 11:35:40 AM UTC-4, Matthew Krick wrote:

 I've read everything I could on deployment options, but my head is still a 
 mess when it comes to all the choices, especially with how fast julia is 
 moving! I have a website on a node.js server; when the user inputs a list 
 of points, I want to solve a traveling salesman problem (running time 
 between 2 and 10 minutes, multiple users). Can someone offer some advice on 
 what's worked for them or any pros/cons to each option? Least cost is 
 preferable to performance.


Well I can offer some information about the thing I know the most about 
(node-julia).
Ironically, I'm also working on visualizing the TSP,  but more single user 
oriented currently.

In this case, a reasonable approach seems to be routing the user request 
(via express) to your Julia code, processing it asynchronously, and 
long-polling for the result.  That part at least is straightforward.  The 
tricky part is the long running time and the process queue; once Julia 
becomes multithreaded, this becomes much less of an issue, but until then 
there are tasks, which are complicated by libuv use.  But if there is 
little likelihood of 2 users running things simultaneously, that's much less 
of an issue.  Interesting problem; willing to help out if needed. 
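The route-then-poll flow described above can be sketched as a minimal in-process job queue. Everything here is illustrative: the `JobQueue` name and its API are made up, and the async job body stands in for the real work, which in practice would be a promisified node-julia call running the TSP solve.

```javascript
// Minimal single-worker job queue: jobs run one at a time (mirroring a
// single embedded Julia evaluator), and clients poll for results by id.
class JobQueue {
  constructor() {
    this.jobs = new Map();   // id -> { status, result }
    this.pending = [];       // queued { id, fn } work items
    this.running = false;
    this.nextId = 1;
  }

  // Accept work and return immediately with an id the client can poll.
  submit(fn) {
    const id = this.nextId++;
    this.jobs.set(id, { status: 'queued', result: undefined });
    this.pending.push({ id, fn });
    this._drain();
    return id;
  }

  // What a long-polling HTTP handler would consult on each request.
  poll(id) {
    return this.jobs.get(id);
  }

  async _drain() {
    if (this.running) return;
    this.running = true;
    while (this.pending.length > 0) {
      const { id, fn } = this.pending.shift();
      this.jobs.get(id).status = 'running';
      // In a real server this would be the expensive solve, e.g. a
      // promisified julia.exec(...) call; here fn is any async function.
      const result = await fn();
      Object.assign(this.jobs.get(id), { status: 'done', result });
    }
    this.running = false;
  }
}
```

A long-polling express route would then map GET requests for a job id onto `poll(id)` until the status turns 'done'.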



Re: [julia-users] Re: Julia Summer of Code

2015-05-22 Thread Jeff Waller


On Thursday, May 21, 2015 at 4:55:16 PM UTC-4, Jey Kottalam wrote:

 Hi Jeff, 

  they relied on a 3rd party to containerize a Python program for 
 transmission 

 That is due to the peculiarities of Python's serialization module, rather 
 than anything intrinsic to creating a Spark binding. (E.g. Python's pickle 
 format doesn't have support for serializing code and closures, so some 
 extra code was required.) This isn't an issue in Julia since 
 Base.serialize() already has the needed functionality. An initial 
 implementation of a Spark binding done in the same style as PySpark is 
 available at http://github.com/jey/Spock.jl 

 -Jey 


Hey awesome.  Initial you say, what's missing? 


[julia-users] Re: How to create own format with two digits 3=03, 10=10, 99=99 (only range 00:99)

2015-05-20 Thread Jeff Waller
How close is this to what you need?

[@sprintf("%02d", i) for i = 1:99]



[julia-users] Re: Julia Summer of Code

2015-05-19 Thread Jeff Waller
Is this the part where I say Julia-Spark again?  

I think this is pretty doable in time.  It will likely be more or less a 
port of PySpark (https://github.com/apache/spark/tree/master/python/pyspark), 
since Julia and Python are similar in capability.  I think I counted about 
6K lines (including comments).

According to the pyspark presentation 
(https://www.youtube.com/watch?v=xc7Lc8RA8wE), they relied on a 3rd party 
to containerize a Python program for transmission -- I think I'm 
remembering this right.  That might be a problem to overcome.


Re: [julia-users] Using Julia program as computation backend

2015-05-19 Thread Jeff Waller


On Monday, May 18, 2015 at 10:55:10 AM UTC-4, Roman Kravchik wrote:

 Thanks for the link.
 But it seems to be like CGI-style operation - eval julia expression and 
 i'm not sure it's a perspective direction.


Hmm, if I'm understanding this correctly, then no, it's not CGI-like; 
there is no exec-ing of a process per call.


[julia-users] ANN node-julia 1.1.0

2015-05-15 Thread Jeff Waller
This version supports 0.4.x using svec, as well as earlier 0.4.x and of 
course 0.3.x.  Here are the docs if interested: http://node-julia.readme.io/

It's been a pretty long time, and the svec change was breaking, so not all 
of the feature requests made it into this release, like the cool one 
https://github.com/waTeim/node-julia/issues/4, but that's next for sure.

Various bug fixes.

A couple of new features:

* supporting the result of julia import as an object similar to node.js 
require (Issue 6: https://github.com/waTeim/node-julia/issues/6)
* supporting struct types with Javascript accessor functions (Issue 7: 
https://github.com/waTeim/node-julia/issues/7)

Allows for pretty cool syntax, I think (cut-n-pasted from the docs):

var julia = require('node-julia');
var JuMP = julia.import('JuMP');
var m = julia.eval('m = JuMP.Model()');
var x = julia.eval('JuMP.@defVar(m,0 <= x <= 2)');
var y = julia.eval('JuMP.@defVar(m,0 <= y <= 30)');

julia.eval('JuMP.@setObjective(m,Max, 5x + 3*y)');
julia.eval('JuMP.@addConstraint(m,1x + 5y <= 3.0)');

var status = JuMP.solve(m);

console.log('Objective value: ',JuMP.getObjectiveValue(m));
console.log('X value: ',JuMP.getValue(x));
console.log('Y value: ',JuMP.getValue(y));




[julia-users] Re: Problem with if else in Julia

2015-05-15 Thread Jeff Waller
Well, I think the problem here is that though M[:,1]' * M[:,1] has only 1 
value, it's still a (1x1) matrix, and not a scalar.
What happens when you change to this?

a = (M[:,1]' * M[:,1])[1]

Related: is automatically converting a 1x1 matrix to a scalar, or defining 
comparison between 1x1 matrices and scalars, reasonable?

On Friday, May 15, 2015 at 5:14:43 PM UTC-4, Lytu wrote:

 When i do:
 M=rand(5,5)
 a=M[:,1]' * M[:,1]
 if a<0
 println("Less than 0")
 else
 println("more")
 end

 I have an error: isless has no method matching 
 isless(::Array{Float64,2}, ::Int32) in < at operators.jl:32

 Can anyone tell me please how to do this? Thank you



[julia-users] Re: Spark and Julia

2015-04-04 Thread Jeff Waller


On Saturday, April 4, 2015 at 2:22:38 AM UTC-4, Viral Shah wrote:

 I am changing the subject of this thread from GSOC to Spark. I was just 
 looking around and found this:

 https://github.com/d9w/Spark.jl


Hey, wow, that's interesting, is this an attempt to reimplement Spark or 
create a binding? 
 

  

The real question is with all the various systems out there, what is the 
 level of abstraction that julia should work with. Julia's DataFrames is one 
 level of abstraction, which could also transparently map to csv files 
 (rather than doing readtable), or a database table, or an HBase table. Why 
 would Spark users want Julia, and why would Julia users want Spark? I guess 
 if we can nail this down - the rest of the integration is probably easy to 
 figure out.

 
As a potential user, I will try to answer in a few parts

There are currently 3 official language bindings (Java, Scala, Python) and 
some unofficial ones as well
and R in the works; one thing that users would want is whatever the others 
get but in the language they
desire with an abstraction similar to the other language bindings so that 
examples in other languages
could be readily translated to theirs.

Whatever the abstraction turns out to be, there are at least 3 big things 
that Spark offers: simplification, speed, and lazy evaluation.  The 
abstraction should not make those cumbersome.

For me, the advantage of Julia is the syntax, the speed, and the connection 
to all of the Julia packages
and because of that the community of Julia package authors.  The advantage 
of Spark is the machinery
of Spark, access to mlib and likewise the community of Spark users.

How about an example?  This is simply from the Spark examples -- good old 
K-means.  Assuming the Python binding (since Julia and Python are probably 
most alike), how would we expect this to look in Julia?

from pyspark.mllib.clustering import KMeans
from numpy import array
from math import sqrt

# Load and parse the data
data = sc.textFile("data/mllib/kmeans_data.txt")
parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))

# Build the model (cluster the data)
clusters = KMeans.train(parsedData, 2, maxIterations=10,
                        runs=10, initializationMode="random")

# Evaluate clustering by computing Within Set Sum of Squared Errors
def error(point):
    center = clusters.centers[clusters.predict(point)]
    return sqrt(sum([x**2 for x in (point - center)]))

WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
print("Within Set Sum of Squared Error = " + str(WSSSE))






[julia-users] Re: URGENT: Google Summer of Code

2015-03-27 Thread Jeff Waller
Well, with very little time left (15 minutes?), I'd like to reiterate the 
idea of Julia-Spark.  Simply put, the basic idea is to allow big data to 
flow into Julia via Spark.


[julia-users] Re: Some simple use cases for multi-threading

2015-03-15 Thread Jeff Waller
I'll like for sure  the ability for Julia to support event driven tasks 
within the same
process, in particular (of course) for web-originated requests and the 
originating
thread never waits. Could the threading model allow auto-scale? Is it 
possible to
automatically decompose aggregate ops from to MATLAB form without
intermediaries? Something like auto-parallel sort would be cool.


Re: [julia-users] Intel Xeon Phi support?

2015-03-10 Thread Jeff Waller


On Tuesday, March 10, 2015 at 1:39:42 PM UTC-4, Stefan Karpinski wrote:

 I'm not sure what that would mean – CPUs don't ship with software. Julia 
 will support Knight's Landing, however, although it probably won't do so 
 until version 0.5.

 On Tue, Mar 10, 2015 at 1:36 PM, Karel Zapfe kza...@gmail.com wrote:

 Hello:

 Is it true then, that Knight's Landing will have Julia out-of-the-box? I 
 was checking the page of Intel, but found nothing to the respect. At my 
 laboratory we had some extra money, and were considering on getting one, 
 but the point is that none of us is really good at using fortran+mpi or 
 c+mpi, so with Julia most of us non-programers-but-researchers could have 
 hope of really using it. 


Knight's Landing is not supposed to be available until Q2, which strictly 
speaking, I guess, is just a couple of weeks away, but I would have 
expected to see some big announcement.  Maybe it won't really be readily 
available with (non-reference) motherboards until summer?  Do you expect 
dev 0.5 by, say, July (kinda like last year)? 


[julia-users] Re: How to embed Julia into Visual Studoi 2013?

2015-03-10 Thread Jeff Waller


On Tuesday, March 10, 2015 at 12:25:20 PM UTC-4, Yuhui ZHI wrote:

 Hello everyone,

 I have a question: how to embed Julia into VS2013?

 I have read this: 
 https://github.com/JuliaLang/julia/blob/release-0.3/doc/manual/embedding.rst

 But I still have no idea.

 I am using vs2013 in the system of Windows10.
 I have created a project and now I want to use Julia in it. So I want to 
 know how to embed Julia here.



Julia is compiled using gcc and MSYS2 on Windows, and though support for 
compilation using VC++ is kinda there, it's early.  Tony would be able to 
explain way better, though.

However, I've been doing some reading, and there might be a way to link to 
libjulia and thus embed Julia in a Visual Studio C/C++ program: 
http://stackoverflow.com/questions/2096519/from-mingw-static-library-a-to-visual-studio-static-library-lib. 
All of the exported symbols of libjulia should be C symbols, so you don't 
have to worry about gcc and VC++ name-mangling differently, which makes 
things easier.  libjulia is a .dll, but if you need a .lib, there's a 
technique for that too: 
https://adrianhenke.wordpress.com/2008/12/05/create-lib-file-from-dll/

I would be very interested in making libjulia accessable to VS.






Re: [julia-users] Intel Xeon Phi support?

2015-03-10 Thread Jeff Waller


On Tuesday, March 10, 2015 at 2:07:34 PM UTC-4, Viral Shah wrote:

 LLVM support for KNL is already in place. So yes, it will come quickly, 
 but in a released version of Julia, that is certainly no earlier than 0.5. 
 It is also quite likely that we need good multi-threading support to ensure 
 a good experience for KNL - which is also happening simultaneously. 

 I am personally quite excited about the socketable KNL, and the 
 possibilities with Julia. 

 -viral 


You and me both.  Though the Tesla will probably continue to enjoy some 
edge for neural networks, that's not the only thing going.  Something 
balanced that marries deep learning and traditional stuff is going to be a 
cool place to be. 


[julia-users] Re: Share Library and Julia

2015-02-23 Thread Jeff Waller


 What can be the problem?
 ​

 I think this:

jeffw@dub1:~$ echo _Z6CP_lliPc | c++filt
CP_lli(char*)

jeffw@dub1:~$ echo _Z6CP_lliPh | c++filt
CP_lli(unsigned char*)




[julia-users] Re: api - web scraping

2015-02-22 Thread Jeff Waller


On Saturday, February 21, 2015 at 6:49:10 AM UTC-5, pip7...@gmail.com wrote:

 Any web scraping library or modules for the Julia language.
 If so, any links  - examples.
 Regards


Hey!  I wonder if you could use noodle http://noodlejs.com/ and/or cheerio 
https://github.com/cheeriojs/cheerio and then feed Julia via my thing 
node-julia https://github.com/waTeim/node-julia.

Would be fun for me, happy to help.


[julia-users] Re: Grouping - Dataframes

2015-02-22 Thread Jeff Waller


On Sunday, February 22, 2015 at 4:07:48 PM UTC-5, Philip Sivyer wrote:

 Hi
 I get 
 :Col1 anonymous function)) when I try *(df,:Col1,x->sum(x[:Col2]))*
 Any ideas.
 Will take a look at the link in a minute
 Regards


You forgot the `by`. 


[julia-users] Re: Grouping - Dataframes

2015-02-22 Thread Jeff Waller
If the DataFrame is represented by the variable df, how about


*by(df,:Col1,x->sum(x[:Col2]))*




Re: [julia-users] what path to set in jl_init

2015-01-31 Thread Jeff Waller
Instead of jl_init, I use the following:


jl_init_with_image((char*)install_directory.c_str(),(char*)"sys.ji");


where install_directory is the directory of libjulia (e.g. 
/usr/local/julia/lib/julia), or whatever corresponds to how you link using 
-Wl,-rpath


Re: [julia-users] Re: ANN node-julia 1.0.0

2015-01-18 Thread Jeff Waller


On Sunday, January 18, 2015 at 7:11:01 PM UTC-5, Eric Forgy wrote:

 I am probably confused, but in the link, they are talking about running 
 Node in Nashorn and it even points to a list of Node modules they are 
 currently able to run. 

 https://avatar-js.java.net


That's Oracle being tricksy.  That framework uses a different JavaScript 
interpreter (Nashorn) built on top of a Java JVM.  A module is supported so 
long as it is 100% Javascript, but it looks like it's ECMAScript 5, not 6; 
modules that make use of v8 natives cannot be.  Here's a blog from 
StrongLoop: 
http://strongloop.com/strongblog/how-to-run-node-js-on-the-jvm-with-avatar-js-and-loopback/; 
those guys are trustworthy.


 I was hoping node-julia could be added to the list. I guess not?


Unfortunately, it needs v8.  
 


 By the way, as I was reading up on Nashorn, I learned that it is intended 
 to be more general than just Javascript in Java. It is supposed to be an 
 architecture for scripting languages in general to run on JVM, i.e. an 
 LLVM for JVM, which begs the question if it now starts to make sense 
 thinking about compiling Julia directly to bytecode for JVM? The Javascript 
 performance seems pretty good. I think that would be a big boost to Julia 
 if you're able to get Java developers on board.


I think Nashorn represents Oracle's fear of a future where most Java 
programmers become Javascript programmers.  But for right now anyway, I 
don't know of an alternative to Hadoop.  Would love to learn about it.


[julia-users] Re: ANN node-julia 1.0.0

2015-01-18 Thread Jeff Waller


On Sunday, January 18, 2015 at 5:47:51 AM UTC-5, Eric Forgy wrote:

 Hi Jeff,

 I really like this idea and look forward to giving node-julia (and Julia 
 for that matter) a spin.

 As I explain here 
 https://groups.google.com/forum/#!topic/julia-users/umHiBwVLQ4g, I'm 
 building a web app with a Spring MVC backend and d3-based front end and 
 trying to figure out how to squeeze Julia in between the two somehow.

 I'm learning all this as I go and I just stumbled onto Nashorn 
 https://blogs.oracle.com/java-platform-group/entry/nashorn_the_rhino_in_the.
  
 The linked article talks about running Node applications on Nashorn through 
 Project 
 Avatar https://avatar.java.net/. If I understand, this combination 
 should allow me to run node-julia side-by-side with my Java code. Is that 
 correct? That would be awesome. Can you foresee any difficulties?


Hi Eric,

I like your d3 idea.  My friends advise that projects based on d3, like 
bokeh (http://bokeh.pydata.org/en/latest/docs/gallery.html), go a long way 
and then get bogged down in details, so heads up; but it's too cool not to 
try.

Unfortunately, incorporating Julia via node-julia will not work when using 
Nashorn and Avatar.js, as they are projects to replace node and v8, not to 
work with them.  Nor will node-webkit or atom-shell if you want to go the 
browser-as-desktop-app route, but d3 in a browser still will, of course.


Re: [julia-users] ANN node-julia 1.0.0

2015-01-17 Thread Jeff Waller


On Saturday, January 17, 2015 at 10:53:36 AM UTC-5, Test This wrote:

 Some basic questions: 

 We know that Node is blocking on cpu intensive tasks. If I use the async 
 option, are the calculations run separately. That is does it allow the mode 
 process to continue with its event loop. 


The answer is yes, and the reason is that though node is singly threaded, 
native modules can be multi-threaded.  The main thread accepts work, queues 
it along with the callback that will receive the result, and returns 
immediately; meanwhile, a couple of other threads dequeue the work, 
evaluate it, and enqueue the result; finally, the main thread is signaled 
using uv_async_send.
 

 If the answer is yes, what happens when julia is busy with a previous 
 calculation and a new one is passed to it? 


Currently there's good news and bad news.  The good news is that node-julia 
async calls will not block, and so node will not block; the bad news is 
that the evaluator running in a separate thread waits for the result before 
moving on to the next thing.  
 


 Thanks so much for creating this.


Oh man, you're welcome, most definitely.
 


Re: [julia-users] ANN node-julia 1.0.0

2015-01-17 Thread Jeff Waller


On Saturday, January 17, 2015 at 10:02:21 PM UTC-5, Test This wrote:

 This is awesome. That would provide an easy way to scale a web-application 
 at least to some extent by outsourcing the CPU intensive calculation out of 
 Node. 


And you get access to BLAS, Mocha (GPU even), JuMP, Optim, Stats.  Good for 
fast access to website oriented machine learning stuff;  regressions, 
recommender systems, etc.
 


 Currently, Julia takes a long time to start up. Does the Julia process 
 start when include("myjuliafile.jl") is called? If so, can one call this 
 line somewhere when the server starts. 


Because it's an embed, Julia exists/executes as part of the node process in 
another thread.  The engine starts up the first time a call to eval, exec, 
or Script is made.  For first-time stuff, especially loading big packages, 
there's going to be some lag because of the action of the JIT, but I 
haven't seen lag otherwise.
 

 Otherwise, the start up time will increase the wait for the web clients. 


There should not be re-JITing per connection, so I expect this not to be a 
problem.  Feedback? 


Re: [julia-users] ANN node-julia 1.0.0

2015-01-17 Thread Jeff Waller


On Saturday, January 17, 2015 at 10:59:13 PM UTC-5, Tony Kelman wrote:

 Might be interesting to compare the performance here vs 
 https://github.com/amd/furious.js


OK, I'm up for this; it's a lot of stuff, so I hope the cut-and-paste 
install method works.  The unit test page 
(https://amd.github.io/furious.js/unittest.html) looks like it's not all 
there, and they haven't updated lately (since August); what's up?


Re: [julia-users] ANN node-julia 1.0.0

2015-01-16 Thread Jeff Waller


On Friday, January 16, 2015 at 4:26:30 PM UTC-5, Stefan Karpinski wrote:

 This is super cool. I wonder if it wouldn't be possible allow Julia to 
 operate on JavaScript typed arrays in-place?


Hmm, maybe!  With some caveats; first the good news.  

Here's where the Javascript array buffer is obtained: 
https://github.com/waTeim/node-julia/blob/master/src/NativeArray.h#L59
Here's the relevant part of the copy from Javascript to C++ (it's ptr to 
ptr): https://github.com/waTeim/node-julia/blob/master/src/request.cpp#L225
And here it is again from C++ to Julia, a buffer handoff: 
https://github.com/waTeim/node-julia/blob/master/src/rvalue.cpp#L175

Caveats:

* this is all happening in separate threads
* v8 has its own memory management; however, ArrayBuffers can be neutered 
(https://github.com/joyent/node/blob/master/deps/v8/include/v8.h#L2761)
* node heap size is 2G, but maybe neutering means a buffer no longer takes 
part in that limit
* Javascript typed arrays are 1D only
* there's an implicit row-major to column-major transformation going on

So multidimensional stuff might be a pain, but vector transfer of 
ownership?  Yeah, probably.
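As a sketch of that last caveat, the row-major to column-major shuffle that has to happen somewhere between a JavaScript typed array and a Julia matrix looks like this (a hand-rolled illustration, not node-julia's actual code):

```javascript
// Copy a flat row-major 2-D array (the natural JS layout) into
// column-major order (the layout Julia expects).
function toColumnMajor(flat, rows, cols) {
  const out = new Float64Array(rows * cols);
  for (let r = 0; r < rows; r++) {
    for (let c = 0; c < cols; c++) {
      out[c * rows + r] = flat[r * cols + c];
    }
  }
  return out;
}

// A 2x2 matrix [[1, 2], [3, 4]] stored row-major...
const rowMajor = Float64Array.from([1, 2, 3, 4]);
// ...reordered to column-major.
const colMajor = toColumnMajor(rowMajor, 2, 2);
```

Handing over a 1-D vector avoids this copy entirely, which is why vector transfer of ownership is the easy case.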


[julia-users] Re: FE solver in Julia fast and faster

2015-01-10 Thread Jeff Waller
Hmm, not knowing a lot about this beyond the keywords, I have a question: 
is it possible to make it 3x faster yet, and/or what hardware was that 
benchmark obtained on?


Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Jeff Waller
Hmm, Atom eh?  I read that you're communicating via TCP, but I wonder if 
there is some sort of abstraction possible; it need not be process to 
process.  I have not thought it through, but feel something is there.


Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Jeff Waller
Oh man, I think there might be a way!  

Inspired by this because you know Atom is essentially node + chromium, I 
tried

git clone node-julia
and then

bizarro% cd node-julia/

bizarro% HOME=~/.atom-shell-gyp node-gyp rebuild --target=0.19.5 --arch=x64 
--dist-url=https://gh-contractor-zcbenz.s3.amazonaws.com/atom-shell/dist

that 0.19.5 value is critical and I ended up just trying the versions at 
random

linked node-julia in:

bizarro% pwd
/Applications/Atom.app/Contents/Resources/app/node_modules
bizarro% ls -l node-julia
lrwxr-xr-x  1 jeffw  staff  32 Jan  6 18:10 node-julia -> /Users/jeffw/src/atom/node-julia


and then finally, within the Javascript REPL in Atom:

> var julia = require('node-julia');
undefined
> julia.exec('rand',200);
Array[200]


and then (bonus)

> julia.eval('using Gadfly')
JRef {getHIndex:function}__proto__: JRef
> julia.eval('plot(rand(10)');


That last part didn't work of course, but it didn't crash, and maybe with a 
little more...  A Julia engine within Atom.  Would that be useful?  I'm not 
sure what you guys are wanting to do, but maybe some collaboration?

-Jeff


Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Jeff Waller


 I'd be interested in getting a Julia engine in Atom, but I would not be so 
 interested in Julia for visualization when, unless I'm mistaken, at that 
 point you can use d3 directly. That would be cool if true. Is it? Can we 
 get the Julia.eval to return a javascript array? Getting Julia and 
 javascript working side by side in the same console would be pretty awesome.


Hi Eric,

I haven't done much with D3 myself, but I work with a number of people who 
do; I'll ask what, if any, limitations there are.  If all else fails, 
there's always https://www.npmjs.com/package/d3, so long as the Javascript 
engine can be fed.

Now as for JavaScript arrays: yeah, that's what Julia arrays and tuples are 
mapped to.  There are some subtleties, which are documented here: 
http://node-julia.readme.io/v0.2.3/docs/datatype-mapping.  For arrays of 
primitive unboxed types, I'm planning in the next version on changing the 
datatype mapping to use JavaScript typed arrays, as they are faster by at 
least an order of magnitude.  The syntax and use would be essentially the 
same, though.

Yea, pretty awesome!


Re: [julia-users] Re: [ANN] Blink.jl – Web-based GUIs for Julia

2015-01-06 Thread Jeff Waller


On Tuesday, January 6, 2015 8:24:23 PM UTC-5, Mike Innes wrote:

 That's very cool. You should definitely package this up if you can. The 
 JS-on-top approach might actually make it easier to package up a Julia app, 
 at least in the short term. (Also, if you don't want to call julia.eval 
 every time, it should be easy to hook up the Julia instance to Juno and use 
 it as a repl).


julia.eval(string) is essentially what happens when someone types string, 
and easy, you say?  Yes, definitely!  I read that Atom has an app database 
and a package manager (apm), but low-level nodejs stuff needs to interact 
more directly with Atom-shell, and it might be difficult to use the 
Atom-supplied extension framework.  I'll certainly follow up.
 


 The Blink.jl model turns out to work quite well for us – since it's 
 basically a thin layer over a Julia server + browser window, it should be 
 easy to serve Blink.jl apps both locally and over the internet, which will 
 open up some interesting possibilities. It does hurt ease-of-use a little 
 though, so I'd be happy to see alternative approaches crop up.


Cool!  I read the source, and it seems to boil down to the @js macro, which 
printlns a JSON object over a socket; would it be as simple as instead 
sending to some sort of IO buffer?
 
-Jeff


[julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Jeff Waller


 Perhaps this is a good reason to change behaviour such that e is no longer 
 a constant: it has always seemed bit odd to use a valuable latin singleton 
 in this way. We could use a unicode script e (U+212F) instead, as suggested 
 by wikipedia:

 http://en.wikipedia.org/wiki/Numerals_in_Unicode#Characters_for_mathematical_constants

 s


Hmm, I don't know about that.  True, it is 2015, but how would that look in 
vi and git diff, or in an xterm?  How would you type it?  How many 
keystrokes?  


Re: [julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-05 Thread Jeff Waller
The cause for this thread is mainly a lexical analyzer bug for hex 
notation.  Except for the error in #9617, I'm fine with the current 
behavior and syntax, even with the semi e-ambiguity: if you want the 
scientific notation literal, use no spaces.  This is only ambiguous because 
Julia permits a number literal N to precede an identifier I as a shortcut 
for N*I, which is different from many languages and part of Julia's charm. 
I'd be sorry to see it go.

[0-9]+(\.[0-9]+)?e(\+|-)?[0-9]+   scientific notation literal

2e+1   is 2x10^1
2e + 1 is 2*e + 1
2e+ 1  is a syntax error, because to the lexical analyzer 2e+ is an error 
without at least 1 trailing digit (no spaces)

Typing 2e+1 (without the space) and expecting it to mean 2*e + 1 is way 
over-emphasizing the need to not type a space.  All of the other language 
style guides are consistent about this being bad style.

Finally, consider this:

julia> 2e-1e
0.5436563656918091

This is parsed as (2*10^-1)*e = 0.2e, which I assert is the right thing to do.
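The lexing rule above can be mimicked with a small regular expression (purely illustrative; this is not Julia's actual lexer):

```javascript
// Is the string exactly one scientific-notation literal?  Mirrors the
// [0-9]+(\.[0-9]+)?e(+|-)?[0-9]+ rule described above: at least one
// digit is required immediately after the optional sign.
const sciLiteral = /^[0-9]+(\.[0-9]+)?e[+-]?[0-9]+$/;

sciLiteral.test('2e+1');    // true: a single literal meaning 2x10^1
sciLiteral.test('2e + 1');  // false: "2e" alone is not a complete literal
sciLiteral.test('2e+ 1');   // false: "2e+" lacks its trailing digit
sciLiteral.test('2.5e-10'); // true
```

The failing cases are exactly the ones the lexer hands back to the N*I shortcut or rejects as errors.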


[julia-users] Re: Why does the following give an error: p = 1; 2p+1 ??

2015-01-04 Thread Jeff Waller
Also, in 0.4.x, this:

julia> 2p+
(type-error double number #f)
unexpected error: #0 (read-number #io stream #f #f)
ERROR: syntax: malformed expression




[julia-users] Re: checking a version number?

2014-12-31 Thread Jeff Waller


On Wednesday, December 31, 2014 7:03:34 AM UTC-5, Andreas Lobinger wrote:

 What is the recommended way to get a julia major/minor version number (i 
 need to check  v.0.4)? Just parse Base.VERSION?


Within Julia:

julia> VERSION
v"0.4.0-dev+2340"

julia> VERSION.minor
4

Also, currently in dev, these are nice for embedding but have not been 
back-ported yet.

DLLEXPORT extern int jl_ver_major(void);
DLLEXPORT extern int jl_ver_minor(void);
DLLEXPORT extern int jl_ver_patch(void);
DLLEXPORT extern int jl_ver_is_release(void);
DLLEXPORT extern const char* jl_ver_string(void);





[julia-users] Re: Declaring variables in julia

2014-12-31 Thread Jeff Waller


On Wednesday, December 31, 2014 7:57:25 AM UTC-5, sadhanapriya...@vit.ac.in 
wrote:

 hi


 Please let me know how to declare float, double, long int, unsigned long 
 and unsigned char to some variable in julia( Please mention syntax)
   

I've found the following works for me.

As others have mentioned, that reference page looks good, but for the C 
equivalents you need to be a little careful in 2 cases.  float will always 
be Float32 and double will always be Float64, but long is not necessarily 
64 bits 
(http://stackoverflow.com/questions/7279504/long-and-long-long-bit-length), 
so unsigned long may map to either UInt64 or UInt32 depending on the 
platform.  Also, in 0.3.x it's Uint and in 0.4.x it's UInt (little i vs. 
big I).  Finally, the char type in Julia is a wide char (32 bits), so the 
equivalent type, as far as bits go, to C unsigned char is UInt8 on 0.4.x or 
Uint8 on 0.3.x.


[julia-users] Re: Community Support for Julia on Travis CI

2014-12-18 Thread Jeff Waller
Perhaps related:

Warning: error initializing module GMP:
ErrorException("The dynamically loaded GMP library (version 5.1.3 with 
__gmp_bits_per_limb == 64) does not correspond to the compile time version 
(version 6.0.0 with __gmp_bits_per_limb == 64). Please rebuild Julia.")

I have not investigated this a whole bunch yet, but I might have installed 
gmp locally for another reason/project, and there is a conflict now.  Other 
installs (Travis) might have had the same?  6.0.0 has newishness about it.


Re: [julia-users] Re: How can I sort Dict efficiently?

2014-12-08 Thread Jeff Waller
This can be done in O(N); avoid sorting, as it will be O(N log N).

Here's one of many questions on how: 
http://stackoverflow.com/questions/7272534/finding-the-first-n-largest-elements-in-an-array
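A sketch of the O(N) idea behind that link: quickselect partitions around a pivot until the split point lands at k, so the k largest values end up at the front without a full sort. The function name and details here are mine, not from the linked answer:

```javascript
// Average-case O(N) selection of the k largest values via quickselect.
function topK(arr, k) {
  const a = arr.slice();             // don't clobber the caller's array
  let lo = 0, hi = a.length - 1;
  while (lo < hi) {
    const pivot = a[hi];
    let store = lo;
    for (let i = lo; i < hi; i++) {  // move values larger than pivot left
      if (a[i] > pivot) {
        [a[i], a[store]] = [a[store], a[i]];
        store++;
      }
    }
    [a[store], a[hi]] = [a[hi], a[store]];  // pivot to its final slot
    if (store === k) break;
    if (store < k) lo = store + 1; else hi = store - 1;
  }
  return a.slice(0, k);              // the k largest, unsorted
}

// topK([5, 1, 9, 3, 7], 2) yields the two largest values, 9 and 7,
// in no guaranteed order.
```

The same partition-until-k idea applies directly to a Dict's values after collecting them into an array.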


Re: [julia-users] Re: What is Julia bad for - currently? Practically (or theoretically) - any other language more powerful?

2014-12-05 Thread Jeff Waller


On Friday, December 5, 2014 5:47:15 AM UTC-5, Páll Haraldsson wrote:



 On Friday, December 5, 2014 1:11:12 AM UTC, ivo welch wrote:


 there are no good web development language environments (server, browser) 
 IMHO, and julia will not fit this bill, either.


 You are forgetting PHP :)

 All joking aside, you got me intrigued. If there is no good language, then 
 what is it you want? Why is web-programming special? And wouldn't Julia be 
 as least as good a fit (for server side) as any other language?

 For client side, I was thinking of writing a separate post on that. Julia 
 could work..

 Is it a problem that you have separate languages on either side? Then 
 Julia or any language wouldn't be perfect (until Julia works client side, I 
 don't think JavaScript is better for server-side (than Julia) or best for 
 non-web..).

 As for server side I had this crazy idea that Julia and PHP could be 
 combined.. I'm not sure what the best way is to get people to stop using 
 PHP and rewriting from scratch isn't good..


Nah, man, Javascript!  But of course I'm going to say that.

The main problems with PHP

It's not cool.
It's perceived as slow.

The language elements, or lack of them, can be argued over, but those 
things eventually get fixed so long as people remain interested.  I think 
there is value in having the same language on both sides, not for technical 
reasons, but because it mostly promotes freedom and the free flow of ideas; 
at the same time, only Sauron was able to fashion the One to Rule Them All, 
and he was a villain.  It's not all just a popularity contest either, but 
that's part of it.  It's no good to have a perfect but unpopular language. 
 And anyway, who wants just one language for everything?  Too dull!


Re: [julia-users] Re: What is Julia bad for - currently? Practically (or theoretically) - any other language more powerful?

2014-12-05 Thread Jeff Waller


On Friday, December 5, 2014 12:43:20 PM UTC-5, Steven G. Johnson wrote:

 On Friday, December 5, 2014 9:57:42 AM UTC-5, Sebastian Nowozin wrote:

 I find Julia great, but for the technical computing goal, my biggest 
 grime with Julia (0.3.3 and 0.4.0-dev) at the moment is the lack of simple 
 OpenMP-style parallelism.


 See the discussion at:
   https://github.com/JuliaLang/julia/issues/1790
 and the considerable work in progress on multithreading in Julia:
   https://github.com/JuliaLang/julia/tree/threads

 There is DArray and pmap, but they have large communication overheads for 
 shared-memory parallelism,


 This is a somewhat orthogonal issue.  You can have multiple processes and 
 still use a shared address space for data structures.  See:


 http://julia.readthedocs.org/en/latest/manual/parallel-computing/#shared-arrays-experimental
  

 The real difference is the programming model, not so much the 
 communications cost.


I think you're right that the interesting thing for a language is the 
model, but at the same time
for problems that are too big to reside on 1 machine, you can't ignore the 
communications.

I feel the grail here is to do map-reduce with the bare minimum of language 
elements; making hosts first-class is too much.

To draw a two-levels-removed analogy: what language elements guarantee that 
@simd will vectorize anything that could possibly be vectorized, and what 
will it take to make @simd completely unnecessary?  In the same way, what 
will it take to make a problem automatically decomposable across hosts in a 
reasonable way?  Assuming everything can fit on one machine is too limiting, 
but it's so convenient.

It seems like Julia as a language is, among other things, predicated on LLVM 
being able to figure out how to vectorize, and on introducing the minimum 
elements for LLVM to do its thing; in this case typing, immutability, and 
JIT.  From Jacob's graph, it looks like that was a pretty good idea.

How about multi-host parallelism?
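For reference, a minimal example of the `@simd` annotation discussed above (modern Julia syntax; the function name is my own): the macro asserts that loop iterations are independent, so LLVM is free to vectorize.

```julia
# @simd promises the compiler that iterations may be reordered and
# fused, which lets LLVM emit vector instructions; @inbounds removes
# the bounds checks that would otherwise block vectorization.
function sumsq(x::Vector{Float64})
    s = 0.0
    @inbounds @simd for i in eachindex(x)
        s += x[i] * x[i]
    end
    return s
end
```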


Re: [julia-users] Re: Why doesn't @sprintf evaluate its format string?

2014-12-03 Thread Jeff Waller
I think this could be done by, instead of expanding into specific 
(optimized) code dedicated to each argument, expanding into a 
tree of if statements built from a bunch of Expr(:call, :typeof, more 
args) and ? : nodes (whatever the parse tree is for that); essentially a 
function call turned inline.  Is there support for that (inlining 
functions) already?
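A rough sketch of the idea (names and the dispatch choices are my own illustration, not the actual printf machinery): instead of generating specialized code per argument at macro-expansion time, branch on the runtime type.

```julia
# A runtime "tree of ifs" on the argument's type, standing in for
# compile-time specialization: each branch handles one class of
# format argument.
function format_arg(x)
    if x isa Integer
        string(x)              # integer path
    elseif x isa AbstractFloat
        string(Float64(x))     # float path
    else
        repr(x)                # fallback
    end
end
```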


Re: [julia-users] Re: Why doesn't @sprintf evaluate its format string?

2014-12-03 Thread Jeff Waller


Unfortunately the number of types or arguments one can encounter is 
 unbounded. Are you talking about the format specifiers or the arguments?


Yea, the format specifiers; poor choice of words, my bad.


[julia-users] Re: Lua Jit out performed Julia for my stock prediction engine use case

2014-11-30 Thread Jeff Waller
Pepsi challenge time?!

Do you have a link to your data?


Re: [julia-users] Different type columns in Matrix

2014-11-29 Thread Jeff Waller
I'd add one more.

Or two arrays or a composite with 2 arrays.

After all, whatever is supposed to take advantage of everything being in one 
array is going to hit some subtle error when the types change out from under 
it moving from one column to the next.  Or you will use Any and it will be 
horribly slow.  Or you will never apply the same operation to all columns at 
once, either by design or as a practical matter to avoid the problems above, 
in which case why are you using one array in the first place?
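A minimal sketch of the composite-with-arrays alternative (the type and field names are my own illustration):

```julia
# Typed columns instead of a Matrix{Any}: each field keeps a concrete
# element type, so operations on it stay fast and type-stable.
struct IndexedValues
    i::Vector{Int}        # first index column
    j::Vector{Int}        # second index column
    v::Vector{Float64}    # value column
end

data = IndexedValues([1, 2], [3, 4], [0.5, 2.5])
```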


Re: [julia-users] Re: Text editor for coding Julia: Atom vs Light Table vs Bracket

2014-11-29 Thread Jeff Waller


const r_sun = 695500.0KiloMeter; export r_sun
 const r_jupiter =  69173.0KiloMeter; export r_jupiter
 const r_saturn  =  57316.0KiloMeter; export r_saturn 




I have a semi-related question.  Why this way?  Why not read these values 
from a database at startup, or at least from an HDF5 or JSON file?  Do you 
find yourself cutting-and-pasting constants all over the place?


Re: [julia-users] Different type columns in Matrix

2014-11-29 Thread Jeff Waller



 This 3 column are neccessary for later analysis in my work, first and 
 second column index and 3rd values. I have to use three columns same time.


Ok, right, so whatever the exact expression is, it's going to have the form

f(A[:,1:2])

where f does some indexing, followed sometime later by

g(A[:,3])

where g does something with the values.

But you don't have

h(A[:,1:3])

because there's no builtin that does both indexing and evaluation 
simultaneously.  I'm using f, g, and h just as shorthand to describe what's 
going on, not to imply they are actually called f, g, h, or even that 
they're formally functions.  So if that's the case, then really, why not use 
A and B?  Convenience?  Encapsulation?  Sure, that's valid, but I'm 
suggesting that those two things are even better served by using composites, 
because then you have the type system working for you instead of against you.


[julia-users] Re: julia vs cython benchmark

2014-11-26 Thread Jeff Waller
There is one thing I see as a potential issue.

The outer loop i is incrementing the first index, and Julia stores things 
in column-major order, so any speed gain from the CPU cache is potentially 
lost, since you are using elements that are not contiguous in the inner loops.
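To illustrate the point (my own example, not code from the thread): with Julia's column-major layout, the first index should vary fastest, i.e. in the innermost loop.

```julia
# A[i, j] and A[i+1, j] are adjacent in memory (column-major), so
# iterating i in the inner loop walks contiguous memory and stays
# cache-friendly; swapping the loops strides across columns instead.
function colmajor_sum(A::Matrix{Float64})
    s = 0.0
    for j in 1:size(A, 2)        # outer loop: columns
        for i in 1:size(A, 1)    # inner loop: contiguous elements
            s += A[i, j]
        end
    end
    return s
end
```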


Re: [julia-users] Security problem with unitialized memory

2014-11-25 Thread Jeff Waller


On Monday, November 24, 2014 10:54:36 PM UTC-5, Simon Kornblith wrote:

 In general, arrays cannot be assumed to be 16-byte aligned because it's 
 always possible to create one that isn't using pointer_to_array. However, 
 from Intel's AVX introduction 
 https://software.intel.com/en-us/articles/introduction-to-intel-advanced-vector-extensions
 :

 Intel® AVX has relaxed some memory alignment requirements, so now Intel 
 AVX by default allows unaligned access; however, this access may come at a 
 performance slowdown, so the old rule of designing your data to be memory 
 aligned is still good practice (16-byte aligned for 128-bit access and 
 32-byte aligned for 256-bit access).


And BTW 512 bit AVX registers are coming next year.  
http://en.wikipedia.org/wiki/AVX-512
 


Re: [julia-users] calling libraries of c++ or fortran90 code in Julia

2014-11-21 Thread Jeff Waller



bizarro% echo __ZZL7initGPUP9resamplerPKjS2_jE12__FUNCTION__01 | c++filt
initGPU(resampler*, unsigned int const*, unsigned int const*, unsigned int)::__FUNCTION__

I'm not entirely sure what that even means (virtual table?), but you know 
what?  I bet you compiled this with clang, because when I do that same thing 
on Linux, it's like wha?  Because of that, and exceptions, and virtual 
table layout, and the standard calling for no standard ABI, I agree; can't 
you wrap all the things you want to export with extern "C"?


Re: [julia-users] calling libraries of c++ or fortran90 code in Julia

2014-11-21 Thread Jeff Waller
Oh yea, I cut-n-pasted the wrong line; well, I'll just fix that:

echo __ZL7initGPUP9resamplerPKjS2_j | c++filt
initGPU(resampler*, unsigned int const*, unsigned int const*, unsigned int)

Different symbol, same comment.

But if it has to be this way, hmm... FUNC LOCAL DEFAULT: that function 
isn't declared static, is it?


Re: [julia-users] SymTridiagonal

2014-11-17 Thread Jeff Waller


As a user, ones(n,1) and ones(n) both return me a vector, and it is 
 confusing to find that ones(n,1) !=  ones(n)


I was where you are now a few months ago.  It's a learning curve thing, I 
think, because now I don't make that mistake anymore, or I'm like, oh yea, of 
course, and change it 2 seconds later.  But to a new user it can be 
uninviting, and it's not easily solved with just more documentation.

The question to me is: what is the tradeoff?  For a semi-experienced user 
like me, it looks like Julia is trying to pick a spot of convenience while 
retaining access to optimization.  The convenience part is the REPL, no 
requirement for variable type declarations, no function return type 
declarations; and on the other hand, types and optional variable type 
declarations to allow the JIT to better optimize.  Your experience sits 
right where those two conflicting things are fighting it out right now, and 
this wall-of-text doesn't help you out any.

I think this might be helped by having (optionally) more verbose error 
messages.
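The distinction in question, as a concrete snippet (my own example):

```julia
v = ones(3)       # Vector{Float64}: one dimension, size (3,)
m = ones(3, 1)    # Matrix{Float64}: two dimensions, size (3, 1)

v == m            # false: arrays of different dimensionality never compare equal

vec(m) == v       # true once the trailing singleton dimension is dropped
```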


Re: [julia-users] Re: @DSL Domain code

2014-11-16 Thread Jeff Waller
  It's just something inside me that rebels against having code inside 
strings.

Yea, I don't like it either.  It feels half-done.

New token? @@


Re: [julia-users] Intel Xeon Phi support?

2014-11-07 Thread Jeff Waller


On Thursday, November 6, 2014 1:14:51 PM UTC-5, Viral Shah wrote:

 We had ordered a couple, but they are really difficult to get working. 
 There is a fair bit of compiler work that is required to get it to work - 
 so it is safe to assume that this is not coming anytime soon. However, the 
 Knight's Landing should work out of the box with Julia whenever it comes 
 and we will most likely have robust multi-threading support by then to 
 leverage it.


Aww!
 


 Out of curiosity, what would you like to run on the Xeon Phi? It may be a 
 good multi-threading benchmark for us in general.


Something that requires 1TFlop, or maybe 1000 things that take 1 GFlop?

Hmm, how about realtime photogrammetry?
 


 -viral

 On Thursday, November 6, 2014 9:35:57 PM UTC+5:30, John Drummond wrote:

 Did you have any success?
 There's an offer of the cards for 200usd at the moment


That's like 1/10th the price?




[julia-users] Re: Failure of installing Julia 0.4 (latest master) on Mac OS X 10.10

2014-10-29 Thread Jeff Waller
Also having problems after a fetch...


lots of link errors followed by
  _ztrsen_, referenced from
_zneupd_ in zneupd.o
_zunm2r_, referenced from:
 _zneupd_ in zneupd.o

ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
make[4]: *** [libarpack.la] Error 1
make[3]: *** [all-recursive] Error 1
make[2]: *** [arpack-ng-3.1.5/.libs/libarpack.dylib] Error 2
make[1]: *** [julia-release] Error 2
make: *** [release] Error 2


was fixed by rm -rf libarpack-ng and SuiteSparse, and then make.

Previous to that, I had to remove and rebuild llvm.

I think what happens is that sometimes, if you let deps grow too far out of 
date, they don't work with each other anymore (makes sense) and the build 
process gets wedged; it can't even figure out what to do.

But who wants to rebuild deps entirely every time?  No one.

Seems like you need a way to indicate inter-dep dependencies, so that a 
change to one particular dep will cascade a rebuild to all the other 
dependencies necessary, but no more.

It may be in this case that if any one of

arpack-ng
SuiteSparse
objconv

changes major or minor numbers, they all have to be rebuilt.

Anyone have a dependency graph in mind?




[julia-users] Re: Multi-OS (Linux + Mac) testing in Travis

2014-10-10 Thread Jeff Waller
Wow, this is exactly what I need. As I just got Travis functional last 
night for the first time @ 3 (you should see the crazy binding.gyp file), I 
feel the universe is reaching out to me.  Thanks, Tony, thanks, universe.


Re: [julia-users] Re: Article on `@simd`

2014-09-23 Thread Jeff Waller
Could this theoretical thing be approached incrementally?  Meaning, here's a 
project, and here are some intermediate results and now it's 1.5x faster, and 
now here's something better and it's 2.7x, all the while the goal is apparent 
but difficult.

Or would it kind of be all-or-nothing?


[julia-users] Re: How to convert a cell type to array type?

2014-09-21 Thread Jeff Waller
Perhaps this?

cell2mat(A) = hvcat(size(A), reshape(A, *(size(A)...))...)
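A variant I believe works in current Julia (my own sketch, under the assumption that `hvcat` consumes blocks row-by-row while splatting a matrix iterates column-major, hence the `permutedims`):

```julia
# Assemble a matrix-of-matrices into one flat matrix.
# hvcat(n, blocks...) lays out n blocks per row, consuming blocks in
# row-major order; permutedims reorders A's column-major splat to match.
cell2mat(A::AbstractMatrix) = hvcat(size(A, 2), permutedims(A)...)

# 2x2 grid of 1x1 blocks: [1] [2] on the first row, [3] [4] on the second
blocks = reshape([fill(1, 1, 1), fill(3, 1, 1), fill(2, 1, 1), fill(4, 1, 1)], 2, 2)
cell2mat(blocks)
```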


[julia-users] ANN: node-julia

2014-09-18 Thread Jeff Waller
It's usable enough now, so it's time to announce this: node-julia 
https://github.com/waTeim/node-julia is a portal from node.js to Julia. 
Its purpose is partly to allow the node-HTTP-JavaScript guys access to 
the excellence that is Julia, and vice versa, the Julia guys access to the 
HTTP excellence that is node.js, but more fundamentally to allow these 
two groups to help and interact with each other.  Well, at least that's my 
hope.

It's definitely a work in progress, see the README 
https://github.com/waTeim/node-julia/blob/master/README.md for the 
limitations, and of course my previous message here 
https://groups.google.com/forum/#!topic/julia-users/D17iErZEtfg about the 
problem I'm currently dealing with on Linux.

If you are familiar with node.js already it can be obtained via npm here 
https://www.npmjs.org/package/node-julia


Re: [julia-users] More with embedding with dynamic libraries (Linux)

2014-09-17 Thread Jeff Waller
Oh, I knew I could count on you guys.  Thanks for the reply; of course my 
natural response is: what is RTLD_GLOBAL?  Don't worry too much about that, 
I can probably look it up.  I found a not-desirable workaround by setting 
LD_PRELOAD to load libjulia.so first, but that causes a 2nd problem, see here: 
https://github.com/waTeim/node-julia/blob/master/README.md#linux.

I'm up for whatever it takes to resolve this.  I am hopeful I will simply 
have to set RTLD_GLOBAL on libjulia

... whatever that is.


[julia-users] More with embedding with dynamic libraries (Linux)

2014-09-15 Thread Jeff Waller
Hi all,

Remember  this problem with dlopen and openblas 
https://groups.google.com/forum/#!topic/julia-users/A8HwlmldVTM?  That 
problem was resolved by modifying how embedded Julia loads (perhaps only on 
OS/X).Well now there is a similar problem on Linux.  Same software.  The 
exact problem is different because the dynamic linkers are different.  It 
appears to me from LD_DEBUG that symbols in libjulia.so are not being 
resolved correctly when sys.so is initialized.  Much of the functionality 
is missing, but for example tuple() still works...   The relevant debug 
information is:

18788: find library=libjulia.so [0]; searching
18788:  search path=/usr/local/julia-aa5ffc6ac6/lib/julia/tls/x86_64:
/usr/local/julia-aa5ffc6ac6/lib/julia/tls:/usr/local/julia-aa5ffc6ac6/lib/
julia/x86_64:/usr/local/julia-aa5ffc6ac6/lib/julia  (RPATH from 
file /home/jeffw/src/nj-test2/node_modules/node-julia/build/Release/nj.node)

18788:   trying file=/usr/local/julia-aa5ffc6ac6/lib/julia/tls/x86_64/
libjulia.so
18788:   trying file=/usr/local/julia-aa5ffc6ac6/lib/julia/tls/libjulia.
so
18788:   trying file=/usr/local/julia-aa5ffc6ac6/lib/julia/x86_64/
libjulia.so
18788:   trying file=/usr/local/julia-aa5ffc6ac6/lib/julia/libjulia.so

Followed by:
18788:  calling init: /usr/local/julia-aa5ffc6ac6/lib/julia/libjulia.so
Which returns with no errors

Then when calling init on my .so, the following happens

18788: /usr/local/julia-aa5ffc6ac6/lib/../lib/julia/sys.so: error: 
symbol lookup error: undefined symbol: jl_diverror_exception (fatal)
18788:  node: error: symbol lookup error: undefined symbol: jl_ast_rettype (
fatal)
18788:   node: error: symbol lookup error: undefined symbol: jl_prepare_ast 
(fatal)
18788:   node: error: symbol lookup error: undefined symbol: 
jl_alloc_array_1d (fatal)
18788:node: error: symbol lookup error: undefined symbol: 
jl_alloc_array_1d (fatal)
18788:node: error: symbol lookup error: undefined symbol: 
jl_alloc_array_1d (fatal)
18788:node: error: symbol lookup error: undefined symbol: 
jl_alloc_array_1d (fatal)

followed by about 50 more unresolved symbols...

But those symbols are in libjulia.so, 

The effect of this is later on is the following exception

Error: Julia ccall: could not find function jl_prepare_ast


Here's the output of ldd on the .so library.  On Linux, @rpath is $ORIGIN, 
and I don't see that here, so maybe that's the culprit, or maybe a red 
herring:

linux-vdso.so.1 =>  (0x7fff765d9000)
libjulia.so => /usr/local/julia-aa5ffc6ac6/lib/julia/libjulia.so (0x7f89c4cd1000)
libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x7f89c49c3000)
libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x7f89c47ac000)
libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x7f89c458f000)
libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x7f89c41cf000)
librt.so.1 => /lib/x86_64-linux-gnu/librt.so.1 (0x7f89c3fc6000)
libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x7f89c3dc2000)
libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x7f89c3bab000)
libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x7f89c38ae000)
/lib64/ld-linux-x86-64.so.2 (0x7f89c5fb1000)




Re: [julia-users] How come (x, y) isn't legal syntax?

2014-09-10 Thread Jeff Waller
I would add that ()(1,2) works; I'm imagining that it changes the context 
and forces the parser to use the one legal interpretation.


[julia-users] Re: self dot product of all columns in a matrix: Julia vs. Octave

2014-09-08 Thread Jeff Waller


On Monday, September 8, 2014 7:37:52 AM UTC-4, Mohammed El-Beltagy wrote:

 Octave seems to be doing a pretty good job for this type of calculation. 
 If you want to squeeze a bit more performance out of Julia, you can try to 
 use explicit loops (devectorize as in 
 http://julialang.org/blog/2013/09/fast-numeric/). Then you might also 
 remove bounds checking in a loop for faster performance. 

 Try this function:
 function doCalc(x::Array{Float64,2})
     XX = Array(Float64, 7000, 7000)
     for j = 1:7000, i = 1:7000
         @inbounds XX[i,j] = x[i,j]*x[i,j]
     end
     XX
 end

 Followed by 
 @time XX=doCalc(X);



I am totally not cool with this.  Just making things fast at whatever the 
cost is not good enough.  You can't tell someone, oh sure, Julia does 
support that extremely convenient syntax, but because it's 6 times slower 
(and it is), you need to essentially write C.  There's no reason that Julia 
should be any less fast than Octave for this, except for bugs, which will 
eventually be fixed.  I think it's perfectly ok to say devectorize if you 
want to do even better, but it must be at least equal.

Here's the implementation of .^:

.^(x::StridedArray, y::Number) = reshape([ x[i] ^ y for i=1:length(x) ], size(x))

How is that any less vectorized?

Yes, R, Matlab, and Octave do conflate vectorization for speed and style, and 
that's a mistake.  Julia tries not to do that; that's better.  But do you 
expect people to throw away the Matlab syntax?  No way.

for me:

julia> x = rand(7000,7000);

julia> @time x.^2;
elapsed time: 1.340131226 seconds (392000256 bytes allocated)

And in Octave

octave> x = rand(7000);
octave> tic; y = x.^2; toc
Elapsed time is 0.201613 seconds.

Comprehensions might be the culprit, or simply the use of the generic x^y. 
Here's an alternate implementation I threw together:

julia> function .^{T}(x::Array{T},y::Number)
           z = *(size(x)...)
           new_x = Array(T,z);
           tmp_x = reshape(x,z);
           for i in 1:z
               @inbounds if y == 2
                   new_x[i] = tmp_x[i]*tmp_x[i]
               else
                   new_x[i] = tmp_x[i]^y
               end
           end
           reshape(new_x,size(x))
       end
.^ (generic function with 24 methods)

julia> @time x.^2;
elapsed time: 0.248816016 seconds (392000360 bytes allocated, 4.73% gc time)

Close.




[julia-users] Re: self dot product of all columns in a matrix: Julia vs. Octave

2014-09-08 Thread Jeff Waller


On Monday, September 8, 2014 12:37:16 PM UTC-4, Tony Kelman wrote:

 Calm down. It's very easy to match Octave here, by writing the standard 
 library in C. Julia chose not to go that route. Of course everyone would 
 like the concise vectorized syntax to be as fast as the for-loop syntax. If 
 it were easy to do that *without* writing the standard library in C, it 
 would've been done already. In the meantime it isn't easy and hasn't been 
 done yet, so when someone asks how to write code that's faster than Octave, 
 this is the answer.


Haha, why do I come across as combative??? HUH?  Oh wait, that's kind of 
combative.  But wait: from the above, I think it's been illustrated that a 
C-based rewrite is not necessary, but the current form is no good.
 


 There have been various LLVM bugs regarding automatic optimizations of 
 converting small integer powers to multiplications. if y == 2 code really 
 shouldn't be necessary here, LLVM is smarter than that but sometimes fickle 
 and buggy under the demands Julia puts on it.


Apparently.  For some additional info, check this out: if you change the 
Octave example to this:

 tic;y=x.^2.1;toc
Elapsed time is 1.80085 seconds.

Now it's slower than Julia.  Like you said, you can't trust LLVM on 
everything, so the question is what to do about it?
 


 And in this case the answer is clearly a reduction for which Julia has a 
 very good set of tools to express efficiently. If your output is much less 
 data than your input, who cares how fast you can calculate temporaries that 
 you don't need to save?


I think this is missing the point and concentrating on optimizing the wrong 
thing, or possibly I'm missing your point.  Here's mine:

Everyone coming from Octave/Matlab will expect, and should expect, that X.^2 
is usable and just as fast.



Re: [julia-users] Re: self dot product of all columns in a matrix: Julia vs. Octave

2014-09-08 Thread Jeff Waller
I feel that x^2 must be among the things that are convenient and fast; it's 
one of those language elements that people simply depend on.  If it's 
inconvenient, then people are going to experience that over and over.  If 
it's not fast enough, people are going to experience slowness over and over.

Like Simon says, it's already a thing: 
https://github.com/JuliaLang/julia/issues/2741.  Maybe use of something 
like VML is an option or necessary, or maybe extending inference.jl, or 
maybe it eventually might not be necessary: 
http://llvm.org/docs/Vectorizers.html.

Julia is a fundamental improvement, but give Matlab and Octave their due, 
that syntax is great.  When I say essentially C, to me it means

Option A, use built-in infix:

X.^y

versus Option B, go write a function:

pow(x,y)
   for i = 1 ...
  for j =  
   ...
   etc

Option A is just too good, it has to be supported. 
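(As a side note on where this later landed, assuming Julia 1.x rather than the 0.3-era Julia of this thread: dotted operations now fuse into a single loop, so the Option A syntax no longer pays per-operation temporaries.)

```julia
# Modern Julia: x .^ 2 lowers to broadcast(^, x, 2), and chained
# dotted calls fuse into one pass over the data.
x = [1.0, 2.0, 3.0]
y = x .^ 2
z = @. 2x^2 + 1   # the whole expression fuses into a single loop
```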

What to do?  Is this a fundamental flaw?  No, I don't think so.  Is this a 
one-time-only thing?  It feels like no; this is one of many things that 
will occasionally occur.  Is it possible to make this less of a hassle? 
Like I think Tony is saying, can/should there be a branch of immediate 
optimizations?  Stuff that would eventually be done in a better way but 
needs to be completed now.  It's a branch, so things can be collected and 
retracted more coherently.



Re: [julia-users] trouble building julia on mac

2014-08-13 Thread Jeff Waller
Likewise no problems compiling using an older version of clang (Xcode 
5.0.2), but I do see that flag

bizarro% clang --version

Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)

Target: x86_64-apple-darwin13.3.0

Thread model: posix

It looks like it's in there.  There is a test to check (see below), but the 
compiler doesn't fail, it just warns, so the test is perhaps concluding that 
yep, it's supported, which is technically true.

bizarro% find . -type f -exec grep malign-double {} /dev/null \;

...

./deps/fftw-3.3.3-double/config.log:clang: warning: argument unused during 
compilation: '-malign-double'

./deps/fftw-3.3.3-double/config.log:configure:14242: clang -stdlib=libc++ 
-mmacosx-version-min=10.7 -m64 -c -O3 -fomit-frame-pointer -mtune=native 
-malign-double -fstrict-aliasing -fno-schedule-insns -ffast-math 


...


./deps/fftw-3.3.3-single/configure:$as_echo_n "checking whether C compiler accepts -malign-double... " >&6; }

./deps/fftw-3.3.3-single/configure:  CFLAGS="-malign-double"

./deps/fftw-3.3.3-single/configure: CFLAGS="$CFLAGS -malign-double"

./deps/fftw-3.3.3-single/configure: # -malign-double for x86 systems

./deps/fftw-3.3.3-single/configure:  { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts -malign-double" >&5

./deps/fftw-3.3.3-single/configure:$as_echo_n "checking whether C compiler accepts -malign-double... " >&6; }

./deps/fftw-3.3.3-single/configure:  CFLAGS="-malign-double"

./deps/fftw-3.3.3-single/configure: CFLAGS="$CFLAGS -malign-double"

./deps/fftw-3.3.3-single/configure: { $as_echo "$as_me:${as_lineno-$LINENO}: checking whether C compiler accepts -malign-double" >&5

./deps/fftw-3.3.3-single/configure:$as_echo_n "checking whether C compiler accepts -malign-double... " >&6; }

./deps/fftw-3.3.3-single/configure:  CFLAGS="-malign-double"

./deps/fftw-3.3.3-single/configure: CFLAGS="$CFLAGS -malign-double"

...

Binary file ./usr/lib/libfftw3.3.dylib matches

Binary file ./usr/lib/libfftw3.a matches

Binary file ./usr/lib/libfftw3f.3.dylib matches

Binary file ./usr/lib/libfftw3f.a matches


[julia-users] Unable to dlopen openblas and dSFMT by default when using embedded Julia

2014-08-12 Thread Jeff Waller
This is on OS/X

I do have a workaround, but I'd like to get to the bottom of it.  I don't 
see anywhere in the code where either of these 2 libraries are loaded 
explicitly --- maybe it's in loading sys.ji?  My workaround can be to set 
an env variable, but it would be better to understand and address the 
problem directly.

If DYLD_LIBRARY_PATH is set to point to the Julia library directory, then 
all works, but if this environment variable is not set, then the following 
occurs  



Warning: error initializing module LinAlg:

ErrorException("error compiling __init__: error compiling check_blas: error 
compiling blas_vendor: could not load module libopenblas: 
dlopen(libopenblas.dylib, 1): image not found")

Entropy pool not available to seed RNG; using ad-hoc entropy sources.

Warning: error initializing module Random:

ErrorException("could not load module libdSFMT: dlopen(libdSFMT.dylib, 1): 
image not found")
===


Those additional errors, I'm sure, are just side effects cascading from 
failing to load those libraries.  The engine still does function, but the 
linear algebra stuff is unavailable.  I've tried various combinations of 
jl_init and jl_init_with_image to no effect.

-Jeff





Re: [julia-users] Unable to dlopen openblas and dSFMT by default when using embedded Julia

2014-08-12 Thread Jeff Waller


On Tuesday, August 12, 2014 12:40:02 PM UTC-4, Elliot Saba wrote:

 Since Julia's libraries are in a non-standard directory, (usually 
 prefix/lib/julia, where prefix is the root of your julia installation) 
 we rely on a dynamic linker feature known as the RPATH to find these 
 libraries at runtime.

 You can inspect the rpath of the julia executable with otool:

 $ otool -l julia | grep -A 2 LC_RPATH
   cmd LC_RPATH
   cmdsize 48
  path @executable_path/../lib/julia (offset 12)
 --
   cmd LC_RPATH
   cmdsize 40
  path @executable_path/../lib (offset 12)


Nice, this is useful.  I knew about rpath and am using it, but not the 
option and the grep to pull it out of a compiled binary for verification. 
I've verified that rpath is part of the object code created and is the 
right value (or at least corresponds to a value of DYLD_LIBRARY_PATH that 
makes things work).

 

 This is encoded into the binary calling julia by putting in extra linker 
 commands when compiling julia.  Things like 
 -Wl,-rpath,@executable_path/../lib

 I haven't played around with whether encoding RPATHS into libraries will 
 work, so you either have to try one of the following:

 * Change the RPATH of the executable you're embedding julia into


I think this is the critical problem.  I did try that, BUT.  I'm not 
creating an executable but another dynamically loaded library -- a module 
-- which is loaded by another system at runtime.  I have no control of that 
other system, only the module.
 

 * Change the RPATH of libjulia itself


I have no control over that; this is for others.  This has to work with any 
deployment, but I am able to control how my thing is compiled.  I can 
insert any flags, constants, or run any sort configuration gathering script 
I want as a pre-compilation step (within reason of course).
 

 * Tack things onto DYLD_FALLBACK_LIBRARY_PATH (Try to stay away from 
 DYLD_LIBRARY_PATH, it's better to mess with the FALLBACK variant 
 http://stackoverflow.com/questions/3146274/is-it-ok-to-use-dyld-library-path-on-mac-os-x-and-whats-the-dynamic-library-s
  
 when you can)


Well this looks like what I will have to do for the time being at least.  I 
agree this is the least desirable. 
 

 * Move the libraries into an easier-to-find place.  Either a place that is 
 already on your executable's RPATH, or a location that is searched by 
 default by the linker.  The default locations are the default list of 
 DYLD_FALLBACK_LIBRARY_PATH entries, listed in this man page 
 https://developer.apple.com/library/mac/documentation/Darwin/Reference/ManPages/man1/dyld.1.html,
  
 and are $(HOME)/lib:/usr/local/lib:/lib:/usr/lib.
 -E


Likewise as above, I have no control over that.
 



 On Tue, Aug 12, 2014 at 3:12 AM, Jeff Waller trut...@gmail.com 
 javascript: wrote:

 This is on OS/X

 I do have a workaround, but I'd like to get to the bottom of it.  I don't 
 see anywhere in the code where either of these 2 libraries are loaded 
 explicitly --- maybe it's in loading sys.ji?  My workaround can be to set 
 an env variable, but it would be better to understand and address the 
 problem directly.

 If DYLD_LIBRARY_PATH is set to point to the Julia library directory, then 
 all works, but if this environment variable is not set, then the following 
 occurs  

 

 Warning: error initializing module LinAlg:

 ErrorException(error compiling __init__: error compiling check_blas: 
 error compiling blas_vendor: could not load module libopenblas: 
 dlopen(libopenblas.dylib, 1): image not found)

 Entropy pool not available to seed RNG; using ad-hoc entropy sources.

 Warning: error initializing module Random:

 ErrorException(could not load module libdSFMT: dlopen(libdSFMT.dylib, 
 1): image not found)
 ===


 Those additional errors I'm sure are just side effects cascading form 
 failing to load those libraries.  The engine still does function, but the 
 linear algebra stuff is unavailable.  I've tried various combinations of 
 jl_init and jl_init_with_image to no effect.

 -Jeff






Re: [julia-users] Unable to dlopen openblas and dSFMT by default when using embedded Julia

2014-08-12 Thread Jeff Waller


On Tuesday, August 12, 2014 2:36:03 PM UTC-4, Elliot Saba wrote:

 Alright, I worked up a little proof-of-concept demo here 
 https://github.com/staticfloat/dltest/tree/master, but the long and the 
 short of it is that you should be able to alter the rpath of YOUR library 
 to include whatever paths you need, (probably relative to @loader_path so 
 that your library just needs to know the relative path to openblas and 
 friends) and you'll then be able to dlopen these guys to your heart's 
 content.

 If you have any questions about this, please feel free to ask, but it 
 should be as simple as adding a -Wl,-rpath,@loader_path/../lib or whatever 
 to the linker options for your library.
 -E


Hmm, ok.  What needs to dlopen openblas, etc.?  dlopen returns a value that 
needs to be saved, does it not?  If dlopen is called on a library that is 
already loaded, it doesn't error, but does it succeed in the way that Julia 
expects it to?  Or are you saying that openblas, etc. just needs to be 
loaded somehow and then it all works out; my library can invoke dlopen 
brute-force, with maybe an arbitrary location, on these things?

To be honest, I don't quite understand why this is even happening, except 
possibly as a side effect of resolving symbols at link time.  After all, 
libopenblas and libdSFMT are in the exact same directory as libjulia, and 
libjulia does get loaded no problem.

Hmm, @loader_path is a value, eh?  I don't see that documented.

-Jeff


Re: [julia-users] Unable to dlopen openblas and dSFMT by default when using embedded Julia

2014-08-12 Thread Jeff Waller
In fact, this does indeed just work.

I did need to specify rpath, but didn't need to do anything else; this one 
value of rpath during the link step was enough to resolve both libjulia and 
libopenblas, etc.

e.g. -Wl,-rpath,/usr/local/julia/lib/julia



Also.

Version 0.4.0-dev+92 (2014-08-12 22:52 UTC)  --- congratz!


