[julia-users] Re: ANN: Julia v0.4.3 released

2016-01-24 Thread hustf
The Julia + Juno IDE Windows 64-bit bundles are still at version 0.4.2.



On Thursday, January 14, 2016 at 19:27:48 UTC+1, Tony Kelman wrote:
>
> Hello all! The latest bugfix release of the Julia 0.4.x line has been 
> released. Binaries are available from the usual place, and as is typical 
> with such things, please report all issues to either the issue tracker, 
> or email the julia-users 
> list. (If you reply to this message on julia-users, please do not cc 
> julia-news which is intended to be low-volume.)
>
> This is a bugfix release; see this commit log for the list 
> of bugs fixed between 0.4.2 and 0.4.3. Bugfix backports to the 0.4.x line 
> will be continuing with a target of one point release per month. If you are 
> a package author and want to rely on functionality that did not work in 
> earlier 0.4.x releases but does work in 0.4.3 in your package, please be 
> sure to change the minimum julia version in your REQUIRE file to 0.4.3 
> accordingly. If you're not sure about this, you can test your package 
> specifically against older 0.4.x releases on Travis and/or locally.
>
> These are recommended upgrades for anyone using previous releases, and 
> should act as drop-in replacements. If you find any regressions relative to 
> previous releases, please let us know.
>
> -Tony
>
>
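
For package authors, the REQUIRE bump Tony describes is a one-line change; a package that needs 0.4.3-only functionality might start its REQUIRE file with (version shown as an example of the convention):

```
julia 0.4.3
```

Travis can then be pointed at older 0.4.x binaries to confirm the bound is actually needed.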

Re: [julia-users] Running multiple scripts in parallel Julia sessions

2016-01-24 Thread Tim Holy
pmap?

--Tim

On Saturday, January 23, 2016 07:29:07 PM Ritchie Lee wrote:
> Do you mean using @spawn, success, or just from the command prompt?
> 
> The scripts are long-running experiments where I am also tracking things
> like CPU time.  I would like them queued and run 4 at a time, i.e., if 1
> finishes ahead of others, then the next one will start running on the free
> processor.  Is there a way to schedule into separate processes?
> 
> On Saturday, January 23, 2016 at 12:00:35 PM UTC-8, Stefan Karpinski wrote:
> > Any reason not to run them all as separate processes?
> > 
> > On Fri, Jan 22, 2016 at 11:08 PM, Ritchie Lee wrote:
> >> Let's say I have 10 julia scripts, scripts = ["script1.jl", "script2.jl",
> >> ..., "script10.jl"] and I would like to run them in parallel in separate
> >> Julia sessions, but 4 at a time (since I only have 4 cores on my
> >> machine).
> >> Is there any way to do this programmatically?
> >> 
> >> I tried doing this:
> >> 
> >> addprocs(4)
> >> pmap(include, scripts)
> >> 
> >> or
> >> 
> >> addprocs(4)
> >> @parallel for s in scripts
> >> include(s)
> >> end
> >> 
> >> However, this seems to reuse the same session, so all the global consts
> >> in the script file are colliding.  I would like to make sure that the
> >> namespaces are completely separate.
> >> 
> >> Thanks!
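
A sketch of Tim's pmap suggestion combined with Stefan's separate-processes idea, in current Julia syntax (on 0.4, `addprocs`/`pmap` are available without `using Distributed`). The script names here are throwaway stand-ins, since the real scripts aren't shown:

```julia
using Distributed

addprocs(4)                      # four concurrent slots, one per core

# Stand-ins for the ten real scripts; each defines a global const.
scripts = ["demo_script$i.jl" for i in 1:10]
for s in scripts
    write(s, "const RESULT = 42\nprintln(RESULT)\n")
end

# pmap hands the next script to whichever worker frees up first; running
# `julia` as an external process gives every script its own fresh session.
ok = pmap(s -> success(`$(Base.julia_cmd()) --startup-file=no $s`), scripts)

foreach(rm, scripts)             # clean up the stand-in files
```

Because each script runs in its own `julia` process, the global `const`s cannot collide, unlike `pmap(include, scripts)`, which evaluates everything inside the four long-lived workers.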



[julia-users] Re: Cannot pull with rebase when I try to use Pkg.update()

2016-01-24 Thread Christopher Alexander
This is basically identical to a very recent issue 
here: https://groups.google.com/forum/#!topic/julia-users/3Hpll3j48Rg

That issue also involved the FixedEffectModels package, so maybe it's an 
issue with that?

Chris
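
If the stray METADATA modifications are unwanted (as they usually are after a botched update), the standard fix is to discard them and rerun `Pkg.update()`. A sketch of the git side, demonstrated on a scratch repository so it is safe to paste; for the real case the same `git checkout -- .` would be run inside `~/.julia/v0.4/METADATA`:

```shell
# Build a scratch repo that mimics the situation: a committed file
# plus an unstaged modification.
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name you
echo "julia 0.4.0" > requires
git add requires
git commit -qm "initial"
echo "julia 0.4.3" > requires   # the "unstaged changes" git complains about

# The fix: throw the local edits away, after which pull --rebase
# (and hence Pkg.update()) can proceed.
git checkout -- .
cat requires                    # back to the committed contents
```

After that, `Pkg.update()` should fast-forward cleanly.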

On Friday, January 22, 2016 at 7:30:34 PM UTC-5, Diogo Falcão wrote:
>
> I am trying to use Julia on Windows for the first time, and when I try to 
> run Pkg.update(), I get this error:
>
> error: Cannot pull with rebase: You have unstaged changes.
>
>
> When I run "git status" in the METADATA folder, I got:
>
>
> On branch metadata-v2
> Your branch is behind 'origin/metadata-v2' by 524 commits, and can be 
> fast-forwarded.
>   (use "git pull" to update your local branch)
> Changes not staged for commit:
>   (use "git add ..." to update what will be committed)
>   (use "git checkout -- ..." to discard changes in working directory)
>
> modified:   FixedEffectModels/versions/0.1.0/requires
> modified:   FixedEffectModels/versions/0.1.1/requires
> modified:   FixedEffectModels/versions/0.2.0/requires
> modified:   FixedEffectModels/versions/0.2.1/requires
> modified:   FixedEffectModels/versions/0.2.2/requires
>
> no changes added to commit (use "git add" and/or "git commit -a")
>
>
> How can I solve this? Thanks
>
> --
> This message may contain confidential and/or privileged information. If 
> you are not the addressee or authorized to receive this for the 
> addressee, please, you must not use, copy, disclose, change, take any 
> action based on this message or any information herein. Personal opinions 
> of the sender do not necessarily reflect the view of Neurotech, which is 
> only divulged by authorized persons. Please consider the environment before 
> printing this email.
>


Re: [julia-users] JLD save function taking hours to finish

2016-01-24 Thread Tim Holy
Do you have cycles in the objects you're trying to save? (like A->B->A) I'm 
not sure JLD handles cycles. In which case breaking the cycle with a custom 
serializer will also solve the problem. (More ambitiously, one could also 
solve the general cycle problem.)

Best,
--Tim
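
The cycle-breaking fix can be sketched generically (hypothetical type names, JLD assumed installed, untested; current `struct` syntax — on 0.4 these would be `type`/`immutable` and the hooks are the same `JLD.writeas`/`JLD.readas` shown in Tim's tile-tree example): store a surrogate without the back-pointers, and rebuild them on load.

```julia
using JLD

# Hypothetical cyclic structure: each Child points back at its Parent
# (A -> B -> A).
mutable struct Parent
    children::Vector{Any}
end
struct Child
    parent::Parent
    data::Int
end

# Surrogate with the cycle removed: only the per-child payloads survive.
struct ParentSerializer
    data::Vector{Int}
end

# On save, drop the back-pointers...
JLD.writeas(p::Parent) = ParentSerializer(Int[c.data for c in p.children])

# ...and on load, rebuild them.
function JLD.readas(s::ParentSerializer)
    p = Parent(Any[])
    for d in s.data
        push!(p.children, Child(p, d))
    end
    p
end
```

With the hooks in place, `save`/`load` never see the cycle at all.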

On Sunday, January 24, 2016 12:31:33 PM Pedro Silva wrote:
> I did see that other post, but I really thought that this could be a
> different problem. The save function is running for the past 20 hours
> without terminating. I am inexperienced with serializers but I will see
> what I can make from the code you posted. Thank you very much.
> 
> On Sunday, January 24, 2016 at 10:29:25 AM UTC-8, Tim Holy wrote:
> > Similar question here, asked just a couple of days ago (please do search
> > the archives first):
> > https://groups.google.com/d/msg/julia-users/VInJ4M-yNUY/Z6N8wCCfAwAJ
> >
> > Someone should just add a serializer to the relevant random
> > forest/decision tree packages. These aren't hard to write, and there's an
> > example in the linked docs.
> > [...]
[julia-users] Why can't function passing be unrolled?

2016-01-24 Thread Bryan Rivera
Why can't this:

function functionWithPaths(state::State, a::Int, b::Int, onPathA)
    if a > b
        onPathA(a, b)
        state.value = a
    else
        state.value = a + b
    end
end


function test()

   array = zeros(Int, 100)

   onPathA(a, b) = array[a] = b

   functionWithPaths(State(0), 10, 20, onPathA)

end


Be unrolled to this:

function unrolled_functionWithPaths(state::State, a::Int, b::Int, array) # `array` injected
    if a > b
        array[a] = b # injected function body
        state.value = a
    else
        state.value = a + b
    end
end


I tried to keep it simple here - this is not the exact use case, but it is 
exactly what I want.

However, I tried using anonymous functions, both Julia's and FastAnonymous', 
and they both have issues in my more complex use case.

The best route, as far as I can tell, is to create a macro.  I tried a few 
ways and got stuck.  

Maybe someone can create it with less effort?  I have the feeling this 
would be *very* useful.



Re: [julia-users] Why can't function passing be unrolled?

2016-01-24 Thread Yichao Yu
On Sun, Jan 24, 2016 at 7:57 PM, Bryan Rivera  wrote:
> Why can't this:
>
> function functionWithPaths(state::State, a::Int, b::Int, onPathA)
>  if( a > b)
>  onPathA(a, b)
>
>  state.value = a
>  else
>  state.value = a + b
>  end
> end
>
>
> function test()
>
>array = zeros(Int, 100)
>
>onPathA(a, b) = array[a] = b
>
>functionWithPaths(State(0), 10, 20, onPathA)
>
> end
>
>
> Be unrolled to this:
>
> function unrolled_functionWithPaths(state::State, a::Int, b::Int, array) #
> `array` injected
>if( a > b)
>array[a] = b # Injected function
>
>state.value = a
>else
>state.value = a + b
>end
> end
>
>
> I tried to keep it simple here - this is not the exact use case, but it is
> exactly what I want.
>
> However, I tried using anonymous functions, both Julia's and FastAnonymous'
> , and they both have issues in my more complex use case.
>
> The best route, as far as I can tell, is to create a macro.  I tried a few
> ways and got stuck.
>
> Maybe someone can create it with less effort?  I have the feeling this would
> be *very* useful.
>

You can do this with a functor (callable type), which I think someone
has already shown you how to do in a reply to one of your previous
posts.
Plain closures should be inlinable once Jeff's closure rewrite
is merged.
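
For readers finding this thread later, a sketch of the functor route in current syntax (on 0.4 the call overload is spelled `Base.call(f::ArraySetter, a, b) = ...` and `mutable struct` is `type`); the `ArraySetter` name is made up for the example:

```julia
mutable struct State
    value::Int
end

# The "callback" is a concrete type, so functionWithPaths specializes on it
# and the call can be inlined, unlike a boxed anonymous function.
struct ArraySetter{A<:AbstractArray}
    array::A
end
(f::ArraySetter)(a, b) = (f.array[a] = b)

function functionWithPaths(state::State, a::Int, b::Int, onPathA)
    if a > b
        onPathA(a, b)
        state.value = a
    else
        state.value = a + b
    end
end

array = zeros(Int, 100)
state = State(0)
functionWithPaths(state, 20, 10, ArraySetter(array))  # sets array[20] = 10
```

The effect is the same as the hand-unrolled version: the compiler knows the concrete type of `onPathA` and can inject its body.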


Re: [julia-users] Why can't function passing be unrolled?

2016-01-24 Thread Stefan Karpinski
The branch has not been merged yet.

On Sunday, January 24, 2016, Bryan Rivera  wrote:

> Yea, the callback also has its issues.
>
> Will those new lambdas be in 0.5?  I thought they were already available
> in 0.5dev.
>
>


Re: [julia-users] Why can't function passing be unrolled?

2016-01-24 Thread Bryan Rivera
Yea, the callback also has its issues.

Will those new lambdas be in 0.5?  I thought they were already available in 
0.5dev.



Re: [julia-users] Why can't function passing be unrolled?

2016-01-24 Thread Bryan Rivera
I can hardly wait.  

Is it just me, or is this something long overdue for Julia? (don't get me 
wrong, I know time is precious)  




[julia-users] problems reading a file

2016-01-24 Thread Alberto
Hi,

I ran into a problem reading the file "channel 1 20120815 015005 1.ddf" 
attached to this message.
The file is the output from a measurement equipment and basically contains 
some text but has extension '.ddf'.
I tried to read the content of the file with the following lines of code:

f = open(filename,"r")
readall(filename)

But I get the following error when Julia attempts to read the Unicode 
character °:

Error showing value of type UTF8String:
ERROR: UnicodeError: invalid character index
in next at ./unicode/utf8.jl:65
 in print_escaped at strings/io.jl:114
 in print_quoted at strings/io.jl:131
 in show at strings/io.jl:48
 in anonymous at show.jl:1294
 in with_output_limit at ./show.jl:1271
 in showlimited at show.jl:1293
 in writemime at replutil.jl:4
 in display at REPL.jl:114
 in display at REPL.jl:117
 [inlined code] from multimedia.jl:151
 in display at multimedia.jl:162
 in print_response at REPL.jl:134
 in print_response at REPL.jl:121
 in anonymous at REPL.jl:624
 in run_interface at ./LineEdit.jl:1610
 in run_frontend at ./REPL.jl:863
 in run_repl at ./REPL.jl:167
 in _start at ./client.jl:420


I did some tests to understand more this problem:
1. I copied the content of the original .ddf file to a new file with the 
same name and extension, and I ran the julia code f = open(filename,"r"); 
readall(filename). It worked! 
2. I removed the unicode character ° from the original .ddf file and I ran 
the julia code f = open(filename,"r"); readall(filename)  successfully.
3. I tried to read the original .ddf file using python with f = 
open(filename,"r"); f.read() and it worked.

It seems like Julia can read files containing Unicode characters, but for 
some reason it cannot read the specific files that are generated by this 
measurement equipment. The same problem doesn't happen in Python.

Does anybody have some ideas regarding this problem?
Many thanks,

Alberto


channel 1 20120815 015005 1.ddf
Description: Binary data


[julia-users] Re: problems reading a file

2016-01-24 Thread Andreas Lobinger
Hello colleague,

I think I once had a similar problem.
https://github.com/JuliaLang/julia/issues/12764

Wishing a happy day,
 Andreas



Re: [julia-users] Re: How to run a detached command and return execution to the parent script?

2016-01-24 Thread Adrian Salceanu
Cheers! 

On Saturday, January 23, 2016 at 19:01:24 UTC+1, Stefan Karpinski wrote:
>
> Great! I'm glad you got it sorted out.
>
> On Fri, Jan 22, 2016 at 6:24 PM, Adrian Salceanu wrote:
>
> >> No no, it's perfectly fine, it was my fault. What I hadn't realized is 
> >> that if I start the server async then my script finishes immediately, 
> >> which also terminates the server. It was my responsibility to keep the 
> >> whole app alive. 
>>
>> It works like a charm! 
>>
>>
> >> On Saturday, January 23, 2016 at 00:06:13 UTC+1, Stefan Karpinski wrote:
>>>
> >>> The shell works with processes; Julia has tasks, which are not the same 
> >>> thing...
>>>
> >>> On Fri, Jan 22, 2016 at 5:49 PM, Adrian Salceanu wrote:
>>>
 The problem seems to be that HttpServer cannot run @async - it exits 
 immediately. 

 ===

 using HttpServer

 http = HttpHandler() do req::Request, res::Response
 Response( ismatch(r"^/hello/", req.resource) ? exit(2) : 404 )
 end

 server = Server( http )
 run( server, 8001 )  # <--- this works but blocks
 @async run( server, 8001 ) # <--- this exits immediately

 ===

 It's not necessarily a problem that HttpServer blocks. But what drives 
 me nuts is: if I run 
 $ julia app.jl & 
 in the shell, it works perfectly. The process is placed in the 
 background, the server happily listens to the assigned port, etc. 

 Why can't I run the same command from within another julia process and 
 get the same effect? 


 On Friday, January 22, 2016 at 22:40:56 UTC+1, Stefan Karpinski wrote:
>
> @spawn runs a command on a (random) worker process. If you want to do 
> "background" work in the current process, you can use @async:
>
> julia> t = @async (sleep(5); rand())
> Task (runnable) @0x000112d746a0
>
> julia> wait(t)
> 0.14543742643271207
>
>
> On Fri, Jan 22, 2016 at 4:33 PM, Adrian Salceanu wrote:
>
>> Oh! The ruby analogy made me think about actually spawning the 
>> detached command! Which produced the desired effect! 
>>
>> julia> @spawn run(detach(`ping www.google.com`))
>>
>>
>>
> >> On Friday, January 22, 2016 at 22:29:27 UTC+1, Adrian Salceanu wrote:
>>>
>>> I guess what I'm looking for is the equivalent of Ruby's 
>>> Process#spawn
>>>
>>> In REPL: 
>>>
>>> >> pid = Process.spawn("ping www.google.com", :out => '/dev/null')
>>> 83210
>>> >> <-- the process is running in the 
>>> background and control has been returned to the REPL
>>>
>>>
> >>> On Friday, January 22, 2016 at 22:06:01 UTC+1, Adrian Salceanu wrote:

 Hi, 

 I'm hammering at a web app and I'm trying to setup functionality to 
 monitor the file system for changes and restart/reload the server 
 automatically so the changes are picked up (I'm using Mux which uses 
 HttpServer). 

 The approach I have in mind is: 

 1. have a startup script which is run from the command line, 
 something like: 
 $ julia -L startup.jl

 2. the startup script launches the web app, which starts the web 
 server. My intention was to run 
 $ julia -L app.jl 
 as a command inside startup.jl, detached, and have the startup.jl 
 script get back control, with app.jl running detached in the 
 background. 

 3. once startup.jl gets back control, it begins monitoring the file 
 system and when changes are detected, kills the app and relaunches it. 

 That was the theory. Now, I might be missing something but I can't 
 find a way to detach the command I'm running and get control back to 
 the 
 startup script. And I tried a lot of things! 

 ===

 I'm providing a simpler example using "ping", which also runs 
 indefinitely, similar to the web server. 

 julia> run(detach(`ping "www.google.com"`)) # the command is 
 detached and continues to run after the julia REPL is closed, but at 
 this 
 point the REPL does not get control, there's no cursor available in 
 the REPL
 PING www.google.com (173.194.45.82): 56 data bytes
 64 bytes from 173.194.45.82: icmp_seq=0 ttl=54 time=30.138 ms
 64 bytes from 173.194.45.82: icmp_seq=1 ttl=54 time=30.417 ms
 ... more output ...
 64 bytes from 173.194.45.82: icmp_seq=7 ttl=54 time=30.486 ms
 64 bytes from 173.194.45.82: icmp_seq=8 ttl=54 time=30.173 ms
 ^CERROR: InterruptException:   
   < here I press Ctrl+C and only now the 
 REPL 
 gets back the 
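
Wrapping up the pattern from this thread in current syntax (`run(cmd; wait=false)` replaces the 0.4-era `spawn`/`@spawn run(detach(...))` dance): start the child detached so it can outlive the parent, and keep the `Process` handle so the file-system watcher can kill and relaunch it. `sleep 30` stands in for `julia -L app.jl` here:

```julia
# Start a long-running command without blocking the script.
cmd = detach(`sleep 30`)          # stand-in for `julia -L app.jl`
p = run(cmd; wait=false)          # control returns immediately
@assert !process_exited(p)        # the child is alive in the background

# ...monitor the file system here; on a change, restart the app:
kill(p)
wait(p)
```

The parent gets its prompt back right away, exactly the behavior `julia app.jl &` gives in the shell.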

[julia-users] Re: problems reading a file

2016-01-24 Thread Tomas Lycken
Try converting from binary to text with UTF8String instead of ASCIIString - 
then all characters should display correctly. 

//T 
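
A sketch of that suggestion on a scratch file containing the offending `°` (current syntax; on 0.4 the last step would be `UTF8String(bytes)` after `readbytes`):

```julia
fname = tempname()
write(fname, "temperature: 21.5 °C\n")  # a file with a multi-byte character

bytes = read(fname)          # raw bytes, no decoding decisions made yet
text  = String(bytes)        # decode explicitly as UTF-8
println(text)
rm(fname)
```

Reading the raw bytes first and decoding explicitly sidesteps any encoding guess made while displaying the string.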

Re: [julia-users] Why can't function passing be unrolled?

2016-01-24 Thread Kevin Squire
There are a lot of exciting possibilities still in store for Julia, but as
you alluded, only so many qualified person-hours to go around.

It was good to see a pull request from you--who knows, maybe you'll be the
one to contribute the next big thing.

Cheers,
   Kevin

On Sunday, January 24, 2016, Bryan Rivera  wrote:

> I can hardly wait.
>
> Is it just me, or is this something long overdue for Julia? (don't get me
> wrong, I know time is precious)
>
>
>


Re: [julia-users] randperm run time is slow

2016-01-24 Thread Dan
A good description of random permutation generation is given on Wikipedia. 
It's the Fisher-Yates algorithm and its variants.
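
For reference, the textbook form of that algorithm is tiny, and writing it from the description rather than from R's GPL source keeps it license-clean. A sketch (function names made up here):

```julia
# Clean-room Fisher-Yates: walk i from n down to 2, swapping element i
# with a uniformly chosen element from 1:i.
function fisher_yates!(a::AbstractVector)
    for i in length(a):-1:2
        j = rand(1:i)
        a[i], a[j] = a[j], a[i]
    end
    return a
end

randperm_fy(n::Integer) = fisher_yates!(collect(1:n))
```

Note this draws integer indices directly with `rand(1:i)`, avoiding the float-uniformity caveat Dan raises about the R approach.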

On Saturday, January 23, 2016 at 4:43:36 PM UTC+2, Kevin Squire wrote:
>
> Hi Dan, 
>
> It would also be good if you deleted that post (or edited it and removed 
> the code), for the same reason Viral mentioned: if someone reads the post 
> and then creates a pull request for changing the Julia implementation, it 
> could be argued that that implementation was derived from GPL code, which 
> makes it GPL.  A clean-room implementation (such as creating the patch from 
> a higher level description of the code) is okay.
>
> (Viral, it would probably be good for you to remove the code from your 
> post as well.  I did so in this post below.)
>
> Cheers,
>Kevin
>
> On Saturday, January 23, 2016 at 6:30:07 AM UTC-8, Viral Shah wrote:
>>
>> Unfortunately, this makes the implementation GPL. If you can describe the 
>> algorithm in an issue on github, someone can do a cleanroom implementation.
>>
>> -viral
>>
>> On Saturday, January 23, 2016 at 3:17:19 PM UTC+5:30, Dan wrote:
>>>
>>> Another way to go about it, is to look at R's implementation of a random 
>>> permutation and recreate it in Julia, hoping for similar performance. 
>>> Having done so, the resulting Julia code is:
>>>
>>> 
>>> A size 1,000,000 permutation was generated x2 faster with this method.
>>> But, this method uses uniform floating random numbers, which might not 
>>> be uniform on integers for large enough numbers. In general, it should be 
>>> possible to optimize a more correct implementation. But if R envy is a 
>>> driver, R performance is recovered with a pure Julia implementation (the R 
>>> implementation is in C).
>>>  
>>> On Saturday, January 23, 2016 at 7:26:49 AM UTC+2, Rafael Fourquet wrote:

 > Let's capture this as a Julia performance issue on github, 
 > if we can't figure out an easy way to speed this up right away. 

 I think I remember having identified a potentially sub-optimal 
 implementation of this function few weeks back (perhaps no more than 
 what Tim suggested) and had planned to investigate further (when time 
 permits...) 

>>>

Re: [julia-users] randperm run time is slow

2016-01-24 Thread Dan
As requested, I deleted the post. But algorithms for creating permutations 
are standard and very much in the public domain (what does The Art of 
Computer Programming say?). If someone really invests a little effort into 
it, a more formally correct algorithm can be implemented.

On Saturday, January 23, 2016 at 4:43:36 PM UTC+2, Kevin Squire wrote:
> [...]

Re: [julia-users] JLD save function taking hours to finish

2016-01-24 Thread Tim Holy
Similar question here, asked just a couple of days ago (please do search the 
archives first):
https://groups.google.com/d/msg/julia-users/VInJ4M-yNUY/Z6N8wCCfAwAJ

Someone should just add a serializer to the relevant random forest/decision 
tree packages. These aren't hard to write, and there's an example in the 
linked docs.

For reference, here's a more complicated example: in my own lab's code, we use 
"tile trees" to represent sums over little pieces of images. They combine 
QuadTrees/OctTrees (depending on spatial dimensionality) with spatio-temporal 
factorizations. The main point being that these might seem like fairly 
complicated data structures, yet the serializer and deserializer can each be 
written in ~10 lines of code, and gave me an orders-of-magnitude performance 
improvement when saving/loading.

For reference, I've pasted the code below: it's not self-contained, but it 
should give you the idea.

Best,
--Tim

# This contains info needed to reconstruct the BoxTree, but does not store the
# BoxTree itself
type TileTreeSerializer{TT<:Tile}
tiles::Vector{TT}
ids::Vector{Int}
ntiles::Int
dims::Dims
Ts::Type
Tel::Type
K::Int
W::Tuple
end
TileTrees.tiletype{TT}(::Type{TileTreeSerializer{TT}}) = TT
TileTrees.tiletype{TT}(::TileTreeSerializer{TT}) = TT

function JLD.readas(serdata::TileTreeSerializer)
bt = boxtree(serdata.Ts, serdata.Tel, serdata.K, serdata.W, 
dimspans(serdata.dims[1:end-1]))
TT = tiletype(serdata)
tiles = Array(TT, serdata.ntiles)
for i = 1:length(serdata.tiles)
id = serdata.ids[i]
tile = serdata.tiles[i]
tiles[id] = tile
roi = boxroi(tile.spans, id)
push!(bt, roi)
end
ttree = TileTree(tiles, bt, serdata.dims)
end

function JLD.writeas(ttree::TileTree)
tiles = Array(tiletype(ttree), 0)
ids = Int[]
for (id, tile) in ttree
push!(tiles, tile)
push!(ids, id)
end
BT = boxtreetype(ttree)
ST = splittype(BT)
TileTreeSerializer{tiletype(ttree)}(
tiles,
ids,
length(ttree.tiles),
ttree.dims,
ST,
eltype(BT),
splitk(BT),
(splitwidth(BT)...))
end


On Sunday, January 24, 2016 02:15:50 AM Pedro Silva wrote:
> I've been training a lot of random forests in a really big dataset and while
> saving my transformations of the data in JLD files has been a breeze saving
> the Models and their respective details is not going smoothly. I'm
> experimenting with different sizes of trees and different number of
> parameters per tree, so I have 10 forests total and since they take about 1
> hour to train each I'd like to save them every 7 iterations in case I have
> to shut down a machine. My code for the process is the following:
> 
> using HDF5, JLD, DataFrames, Distributions, DecisionTree, MLBase, StatsBase
> 
> ...
> 
> num_of_trees = collect(10:10:100);
> num_of_features = collect(20:5:50);
> Models = Array{DecisionTree.Ensemble}(length(num_of_trees), length(num_of_features));
> Predictions = Array{Array{Float64,1}}(length(num_of_trees), length(num_of_features));
> RMSEs = Array{Float64}(length(num_of_trees), length(num_of_features));
> train = rand(Bernoulli(0.8), size(Y)) .== 1;
> 
> for i in 1:length(num_of_trees)
>   for j in 1:length(num_of_features)
>     Models[i,j] = build_forest(Y[train], DataSTD[train,:], num_of_features[j], num_of_trees[i]);
>     Predictions[i,j] = apply_forest(Models[i,j], DataSTD[!train,:]);
>     RMSEs[i,j] = root_mean_squared_error(Y[!train], Predictions[i,j]);
>     println("\n", Models[i,j])
>     println("Features: ", num_of_features[j])
>     println("RMSE: ", RMSEs[i,j])
>     display(confusion_matrix_regression(Y[!train], Predictions[i,j], 10))
>   end
>   save("Models_run1.jld", "Models", Models, "Features", num_of_features,
>        "Predictions", Predictions, "RMSEs", RMSEs, "Bernoulli", train);
> end
> 
> Finishing the internal for loop takes around 7 hours, which is not a
> surprise, but the save function runs for hours as well. The file keeps
> slowly increasing in size, so I think something is happening but I'm not
> sure what. I'm still unable to get to a second iteration of my outer loop
> after 3 hours of the intern loop has finished. I plan to leave it running
> over night to see whether it fails or finishes. Any idea on why this is
> happening?



[julia-users] Re: optim stopping without reaching convergence

2016-01-24 Thread grandemundo82
Thanks, but that's not the issue. For some reason, the number of iterations 
is really high to begin with.

On Friday, January 22, 2016 at 7:01:56 PM UTC-6, Kristoffer Carlsson wrote:
>
> Look at the source 
> https://github.com/JuliaOpt/Optim.jl/blob/master/src/nelder_mead.jl
> The number of iterations is not necessarily the same as the number of 
> objective function evaluations.



Re: [julia-users] Simultaneous audio playback / recording.

2016-01-24 Thread CrocoDuck O'Ducks
Ok cool people! I got the first prototype working:

# sinegen.jl

#=
  Just a simple generator of sine waves. It even records stuff!
=#

using WAV

# User defined stuff:

# Define Sample Frequency:
Fs = 96000# Hz
# Define Frequency of Wave:
f = 440 # Hz
# Define Duration:
T = 10 #s

# Define Audio File variables:
nbits = 32
# Format:
fformat = "wav"

# Define Audio Driver and AudioDevice for playback
adriver = "alsa"
adev = "hw:1"

# Execute:

# Time window:
t = (1/Fs) * collect(0:(T*Fs - 1)) #s
# Signal:
y = sin(2π*f*t)

# File Name:
fname = "Sine $f.$fformat"
# Save the sine wave:
wavwrite(y,Fs,nbits,fname)

# Set Driver and device for sox:
ENV["AUDIODRIVER"] = adriver
ENV["AUDIODEV"] = adev

# And now, play it with sox from command line.
# run(`play -b $nbits -r $Fs $fname`)

# To record for a length of time T:
# run(`rec -b $nbits -r $Fs Output.wav trim 0 $T `)

# To play and record simultaneously:
run(`play -b $nbits -r $Fs $fname` & `rec -b $nbits -r $Fs Output.wav trim 0 $T`)

It is using sox as suggested above. Do you have any particular suggestion 
about calling these two commands simultaneously in the last line?
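
On the simultaneous call: `run(cmd1 & cmd2)` as in the last line already runs both commands under one pipeline object. An alternative worth trying (untested against real audio hardware) is to keep the two processes independent, which makes it easier to react to their exits separately; `echo` stands in for sox here so the sketch runs anywhere:

```julia
play_cmd = `echo play -b 32 -r 96000 sine.wav`             # stand-in for `play ...`
rec_cmd  = `echo rec -b 32 -r 96000 Output.wav trim 0 10`  # stand-in for `rec ...`

# Launch playback on a background task, record in the foreground,
# then wait for playback to wind down.
t = @async run(play_cmd)
run(rec_cmd)
wait(t)
```

With separate handles you could, for example, kill the recorder if playback fails partway through.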


On Wednesday, 20 January 2016 00:15:45 UTC, CrocoDuck O'Ducks wrote:
>
> Wow! Sounds amazing! Can't wait for that!
>
> On Monday, 18 January 2016 23:42:30 UTC, Spencer Russell wrote:
>>
> >> AudioIO is going to be deprecated soon in favor of a family of 
>> packages that are each a bit more focused and simpler to interface with. 
>> They’re not quite release-ready but have been making a lot of progress 
>> lately, and I wanted folks to know what’s coming before you sink a bunch of 
>> time into working with the AudioIO implementation. Currently there’s a 
>> mostly working JACK library, and I’ll probably port over the PortAudio 
>> support from AudioIO after that.
>>
>> -s
>>
> >> On Jan 17, 2016, at 1:30 PM, CrocoDuck O'Ducks wrote:
>>
>> Hi there!
>>
>> I have a number of MATLAB scripts that I use for electro-acoustic 
>> measurements (mainly impulse responses) and I would like to port them to 
>> JULIA. I also wrote the data acquisition code in MATLAB. It works by streaming 
>> the test signal to the soundcard outputs while recording from the soundcard 
>> inputs. I would like to implement that as well. I was looking at AudioIO, 
>> but there are a few issues:
>>
>>
>>- I cannot find documentation/examples on how to record from 
>>soundcard input.
>>- I cannot find documentation/examples on selecting the audio device 
>>to use.
>>- I cannot find documentation/examples on setting sampling variables 
>>(it is in my best interest to use the highest sample rate available).
>>
>> Are these things possible with JULIA? I think I can use aplay and 
>> arecord through JULIA (I am on Linux), but I would like to have all the 
>> code contained within JULIA.
>>
>> Many thanks, I hope you can point me in the right direction.
>>
>>
>>

Re: [julia-users] CurveFit Pakage: non-linear fitting code

2016-01-24 Thread Alexandre Gomiero de Oliveira
Hi, João,

The "y = a1 * sin(x*a2 - a3)" model is a harmonic function, so the 
coefficients returned from the function call depend heavily on the parameter 
a0 ("initial guess for each fitting parameter") that you pass as the third 
parameter of the nonlinear_fit function (with maxiter=200_000):

a0=[1.5, 1.5, 1.0]  
coefficients: [0.2616335317043578, 1.1471991302529982,0.7048665905560775]

a0=[100.,100.,100.]  
coefficients: [-0.4077952060368059, 90.52328921205392, 96.75331155303707]

a0=[1.2, 0.5, 0.5]  
coefficients: [1.192007321713507, 0.49426296880933257, 0.19863645732313934]

I think the results you're getting are harmonics, as shown in the graph 
(posted on StackOverflow):

http://i.stack.imgur.com/9WZSJ.png

Where:

- blue line:  
f1(xx)=0.2616335317043578*sin(xx*1.1471991302529982-0.7048665905560775)

- yellow line:
f2(xx)=1.192007321713507*sin(xx*0.49426296880933257-0.19863645732313934)

- pink line:
f3(xx)=-0.4077952060368059*sin(xx*90.52328921205392-96.75331155303707)

- blue dots are your initial data.

The graph was generated with Gadfly:

plot(layer(x=x,y=y,Geom.point),layer([f1,f2,f3],0.0, 15.0,Geom.line))

Since nonlinear_fit uses a Newton-like algorithm to find the coefficients 
and the model is harmonic (so the least-squares objective has many local 
minima over the data's interval), the coefficients depend on the initial 
parameters.
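
A self-contained illustration of that sensitivity (not CurveFit's actual code): Newton's method applied to sin has a root near every multiple of π, so the answer you converge to is decided by where you start, which is the same mechanism that makes the fitted coefficients above depend on a0.

```julia
# Plain Newton iteration; g is the function, gp its derivative.
function newton(g, gp, a; iters = 50)
    for _ in 1:iters
        a -= g(a) / gp(a)
    end
    return a
end

newton(sin, cos, 3.0)   # starting near π  converges to π
newton(sin, cos, 6.0)   # starting near 2π converges to 2π
```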

The code:

using CurveFit

x = 
[0.0,0.2,0.4,1.0,1.6,1.8,2.0,2.6,2.8,3.0,3.8,4.8,5.0,5.2,6.0,6.2,7.4,7.6,7.8,8.6,8.8,9.0,9.2,9.4,10.0,10.6,10.8,11.2,11.6,11.8,12.2,12.4];
y = [-0.183, -0.131, 0.027, 0.3, 0.579, 0.853, 0.935, 1.133, 1.269, 1.102, 
1.092, 1.143, 0.811, 0.91, 0.417, 0.46, -0.516, -0.334, -0.504, -0.946, 
-0.916, -0.975, -1.099, -1.113, -1.297, -1.234, -0.954, -1.122, -0.609, 
-0.593, -0.403, -0.51];

xy=[x y]

meps=1e-4

fun(x::Vector{Float64}, a::Vector{Float64})=x[2] - a[1] * sin(a[2] * x[1] - 
a[3])

a0=[1.2, 0.5, 0.5];

nonlinear_fit(xy, fun, a0, meps, 200_000)
# returns: 
([1.192007321713507,0.49426296880933257,0.19863645732313934],false,20)

HTH



On Saturday, January 23, 2016 at 9:42:08 PM UTC-2, jmarcell...@ufpi.edu.br 
wrote:

Hello tshort,

I published this post on Stack Overflow; the solution presented has errors in 
the coefficients.



Em sábado, 23 de janeiro de 2016 14:08:07 UTC-2, tshort escreveu:
One link:
http://stackoverflow.com/questions/34840875/julia-using-curvefit-package-non-linear
On Jan 23, 2016 10:13 AM,  wrote:
I wish someone would publish a nonlinear fitting code using package CurveFit
using this data, how we can use CurveFit?
x = [0.0 0.2 0.4 1.0 1.6 1.8 2.0 2.6 2.8 3.0 3.8 4.8 5.0 5.2 6.0 6.2 7.4 
7.6 7.8 8.6 8.8 9.0 9.2 9.4 10.0 10.6 10.8 11.2 11.6 11.8 12.2 12.4];

y = [-0.183 -0.131 0.027 0.3 0.579 0.853 0.935 1.133 1.269 1.102 1.092 
1.143 0.811 0.91 0.417 0.46 -0.516 -0.334 -0.504 -0.946 -0.916 -0.975 
-1.099 -1.113 -1.297 -1.234 -0.954 -1.122 -0.609 -0.593 -0.403 -0.51];


[julia-users] Re: CurveFit Package. why the code below does not work?

2016-01-24 Thread Alexandre Gomiero de Oliveira
The 'fun' function has to return a residual value 'r' of type Float64, 
evaluated once per data row, as follows:

r = y - fun(x, coefs)

so your function y=a1*sin(x*a2-a3) will be defined as:

fun(x,a) = x[2]-a[1]*sin(a[2]*x[1] - a[3])

Where:

  x[2] is a value of 'y' vector  
  x[1] is a value of 'x' vector  
  a[...] is the set of parameters
  
The 'fun' function has to return a single Float64, so it operates on 
scalars and the operators cannot be the elementwise 'dot' versions (.*).
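
A self-contained check of that contract (no CurveFit needed): the residual function receives one [x, y] row and the parameter vector, and returns a single Float64.

```julia
# Residual of the model y = a1*sin(a2*x - a3) for one data row.
fun(x, a) = x[2] - a[1]*sin(a[2]*x[1] - a[3])

row = [0.2, -0.131]      # one [x, y] pair from the data above
a0  = [1.2, 0.5, 0.5]    # initial guess for the three parameters
r   = fun(row, a0)       # a single scalar residual
```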

When calling the nonlinear_fit function, the first parameter must be an Nx2 
array, with the first column containing the N values of 'x' and the second 
containing the N values of 'y', so you must concatenate the two vectors 'x' 
and 'y' into a two-column array:

xy = [x y]

and finally, call the function:

coefs, converged, iter = CurveFit.nonlinear_fit(xy , fun , a0 , eps, 
maxiter) 

The complete code:

using CurveFit

x = 
[0.0,0.2,0.4,1.0,1.6,1.8,2.0,2.6,2.8,3.0,3.8,4.8,5.0,5.2,6.0,6.2,7.4,7.6,7.8,8.6,8.8,9.0,9.2,9.4,10.0,10.6,10.8,11.2,11.6,11.8,12.2,12.4];
y = [-0.183, -0.131, 0.027, 0.3, 0.579, 0.853, 0.935, 1.133, 1.269, 1.102, 
1.092, 1.143, 0.811, 0.91, 0.417, 0.46, -0.516, -0.334, -0.504, -0.946, 
-0.916, -0.975, -1.099, -1.113, -1.297, -1.234, -0.954, -1.122, -0.609, 
-0.593, -0.403, -0.51];
xy=[x y]
meps=1e-4

fun(x::Vector{Float64}, a::Vector{Float64})=x[2] - a[1] * sin(a[2] * x[1] - 
a[3])

a0=[1.2, 0.5, 0.5];

nonlinear_fit(xy, fun, a0, meps, 200_000)
# returns: 
([1.192007321713507,0.49426296880933257,0.19863645732313934],false,20)

posted on another thread, as well: 
https://groups.google.com/forum/#!searchin/julia-users/curvefit/julia-users/ZlIujG4fI_c/pNWzHd40AAAJ

HTH

On Tuesday, January 19, 2016 at 12:40:37 AM UTC-2, jmarcell...@ufpi.edu.br 
wrote:
>
>
>
> Code: 
>
> x = [0.0 0.2 0.4 1.0 1.6 1.8 2.0 2.6 2.8 3.0 3.8 4.8 5.0 5.2 6.0 6.2 7.4 
> 7.6 7.8 8.6 8.8 9.0 9.2 9.4 10.0 10.6 10.8 11.2 11.6 11.8 12.2 12.4];
> y = [-0.183 -0.131 0.027 0.3 0.579 0.853 0.935 1.133 1.269 1.102 1.092 
> 1.143 0.811 0.91 0.417 0.46 -0.516 -0.334 -0.504 -0.946 -0.916 -0.975 
> -1.099 -1.113 -1.297 -1.234 -0.954 -1.122 -0.609 -0.593 -0.403 -0.51];
>
>
> x0 = vec(x)
> y0 = vec(y)
>
> a = [1.5 1.5 1.0]
> eps = 0.0001
> maxiter= 200.0
>
> fun(xm,a) = a[1].*sin(a[2].*xm - a[3]);
>
> xy=hcat(x0,y0);
>
> coefs,converged,iter = CurveFit.nonlinear_fit(xy,fun,a,eps,maxiter);
>
>

[julia-users] JLD save function taking hours to finish

2016-01-24 Thread Pedro Silva
I've been training a lot of random forests on a really big dataset. While 
saving my transformations of the data in JLD files has been a breeze, saving the 
models and their respective details is not going smoothly. I'm experimenting 
with different sizes of trees and different numbers of parameters per tree, so I 
have 10 forests total, and since they take about 1 hour each to train I'd like 
to save them every 7 iterations in case I have to shut down a machine. My code 
for the process is the following:

using HDF5, JLD, DataFrames, Distributions, DecisionTree, MLBase, StatsBase

...

num_of_trees = collect(10:10:100);
num_of_features = collect(20:5:50);
Models = 
Array{DecisionTree.Ensemble}(length(num_of_trees),length(num_of_features));
Predictions = 
Array{Array{Float64,1}}(length(num_of_trees),length(num_of_features));
RMSEs = Array{Float64}(length(num_of_trees),length(num_of_features));
train = rand(Bernoulli(0.8), size(Y)) .== 1;

for i in 1:length(num_of_trees)
for j in 1:length(num_of_features)
Models[i,j] = 
build_forest(Y[train],DataSTD[train,:],num_of_features[j],num_of_trees[i]);
Predictions[i,j] = apply_forest(Models[i,j], DataSTD[!train,:]);
RMSEs[i,j] = root_mean_squared_error(Y[!train], 
Predictions[i,j]);
println("\n", Models[i,j])
println("Features: ",num_of_features[j])
println("RMSE: ",RMSEs[i,j])

display(confusion_matrix_regression(Y[!train],Predictions[i,j],10))
end
save("Models_run1.jld", "Models", Models, "Features", num_of_features, 
"Predictions", Predictions, "RMSEs", RMSEs, "Bernoulli", train);
end

Finishing the inner for loop takes around 7 hours, which is not a surprise, 
but the save function runs for hours as well. The file keeps slowly increasing 
in size, so I think something is happening, but I'm not sure what. Three hours 
after the inner loop finished I still haven't reached the second iteration of 
the outer loop. I plan to leave it running overnight to see whether it fails 
or finishes. Any idea why this is happening?


[julia-users] Running multiple scripts in parallel Julia sessions

2016-01-24 Thread Kenta Sato
I think GNU parallel is the best tool for that purpose. 
http://www.gnu.org/software/parallel/

You can pass the -j option to control the maximum number of jobs run at a time.

Re: [julia-users] Running multiple scripts in parallel Julia sessions

2016-01-24 Thread Tim Holy
You could write your own pmap-like function (see its source code) that starts 
a new worker and then shuts it down again when done.

Or, just wrap all your scripts inside modules?

module Script1

# code that was in script1.jl

end
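
The same isolation can be done programmatically, loading each script into its own module so top-level `const`s don't collide. A hedged sketch: `Base.include(mod, path)` is the Julia 1.x form (0.4 needed an `eval` into the module instead), and the throwaway scripts here just stand in for the real ones.

```julia
# Create two throwaway scripts whose top-level consts would collide,
# then load each into its own freshly created module.
dir = mktempdir()
write(joinpath(dir, "script1.jl"), "const x = 1\n")
write(joinpath(dir, "script2.jl"), "const x = 2\n")

mods = Module[]
for (i, s) in enumerate(["script1.jl", "script2.jl"])
    m = Module(Symbol("Script", i))     # fresh namespace per script
    Base.include(m, joinpath(dir, s))   # Julia 1.x signature
    push!(mods, m)
end

mods[1].x, mods[2].x   # no collision: (1, 2)
```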

--Tim

On Sunday, January 24, 2016 01:13:21 PM Ritchie Lee wrote:
> The scripts contain a lot of global consts and other things.  pmap seems to
> mix the namespaces for all scripts executed on the same processor.
> 
> On Sunday, January 24, 2016 at 12:06:05 AM UTC-8, Tim Holy wrote:
> > pmap?
> > 
> > --Tim
> > 
> > On Saturday, January 23, 2016 07:29:07 PM Ritchie Lee wrote:
> > > Do you mean using @spawn, success, or just from the command prompt?
> > > 
> > > The scripts are long-running experiments where I am also tracking things
> > > like CPU time.  I would like them queued and run 4 at a time, i.e., if 1
> > > finishes ahead of others, then the next one will start running on the
> > 
> > free
> > 
> > > processor.  Is there a way to schedule into separate processes?
> > > 
> > > On Saturday, January 23, 2016 at 12:00:35 PM UTC-8, Stefan Karpinski
> > 
> > wrote:
> > > > Any reason not to run them all as separate processes?
> > > > 
> > > > On Fri, Jan 22, 2016 at 11:08 PM, Ritchie Lee  > > > 
> > > > > wrote:
> > > >> Let's say I have 10 julia scripts, scripts = ["script1.jl",
> > 
> > "script2.jl",
> > 
> > > >> , "script10.jl"] and I would like to run them in parallel in
> > 
> > separate
> > 
> > > >> Julia sessions, but 4 at a time (since I only have 4 cores on my
> > > >> machine).
> > > >> Is there any way to do this programmatically?
> > > >> 
> > > >> I tried doing this:
> > > >> 
> > > >> addprocs(4)
> > > >> pmap(include, scripts)
> > > >> 
> > > >> or
> > > >> 
> > > >> addprocs(4)
> > > >> @parallel for s in scripts
> > > >> include(s)
> > > >> end
> > > >> 
> > > >> However, this seems to reuse the same session, so all the global
> > 
> > consts
> > 
> > > >> in the script file are colliding.  I would like to make sure that the
> > > >> namespaces are completely separate.
> > > >> 
> > > >> Thanks!



Re: [julia-users] Running multiple scripts in parallel Julia sessions

2016-01-24 Thread Ritchie Lee
The scripts contain a lot of global consts and other things.  pmap seems to 
mix the namespaces for all scripts executed on the same processor.
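
A hedged sketch of the fully-separate-sessions route (not from the thread): treat each script as an external `julia` process and use a 4-slot Channel as a counting semaphore, so at most four run at once. Placeholder `echo` jobs stand in for `julia scriptN.jl` so the sketch is runnable anywhere.

```julia
jobs = [`echo job$i` for i in 1:10]   # swap in `julia scriptN.jl`
done = Ref(0)                         # count completed jobs

sem = Channel{Int}(4)                 # 4 tokens = 4 concurrent jobs
for _ in 1:4
    put!(sem, 0)
end

@sync for cmd in jobs
    @async begin
        take!(sem)                    # acquire a slot
        try
            run(cmd)                  # separate OS process per job
            done[] += 1
        finally
            put!(sem, 0)              # release the slot
        end
    end
end

done[]   # -> 10
```

Because every job is its own OS process, the scripts' global consts can never collide, unlike pmap(include, ...) which reuses worker sessions.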


On Sunday, January 24, 2016 at 12:06:05 AM UTC-8, Tim Holy wrote:
>
> pmap? 
>
> --Tim 
>
> On Saturday, January 23, 2016 07:29:07 PM Ritchie Lee wrote: 
> > Do you mean using @spawn, success, or just from the command prompt? 
> > 
> > The scripts are long-running experiments where I am also tracking things 
> > like CPU time.  I would like them queued and run 4 at a time, i.e., if 1 
> > finishes ahead of others, then the next one will start running on the 
> free 
> > processor.  Is there a way to schedule into separate processes? 
> > 
> > On Saturday, January 23, 2016 at 12:00:35 PM UTC-8, Stefan Karpinski 
> wrote: 
> > > Any reason not to run them all as separate processes? 
> > > 
> > > On Fri, Jan 22, 2016 at 11:08 PM, Ritchie Lee  > > 
> > > > wrote: 
> > >> Let's say I have 10 julia scripts, scripts = ["script1.jl", 
> "script2.jl", 
> > >> , "script10.jl"] and I would like to run them in parallel in 
> separate 
> > >> Julia sessions, but 4 at a time (since I only have 4 cores on my 
> > >> machine). 
> > >> Is there any way to do this programmatically? 
> > >> 
> > >> I tried doing this: 
> > >> 
> > >> addprocs(4) 
> > >> pmap(include, scripts) 
> > >> 
> > >> or 
> > >> 
> > >> addprocs(4) 
> > >> @parallel for s in scripts 
> > >> include(s) 
> > >> end 
> > >> 
> > >> However, this seems to reuse the same session, so all the global 
> consts 
> > >> in the script file are colliding.  I would like to make sure that the 
> > >> namespaces are completely separate. 
> > >> 
> > >> Thanks! 
>
>

[julia-users] Re: optim stopping without reaching convergence

2016-01-24 Thread Iain Dunning
Nelder-Mead is a pretty brute-force algorithm, so maybe it's just struggling 
with your 10-dimensional objective function.
It's using 10 evaluations (one for each dimension) in the initial setup, 
then doing 2000 updates to the simplex.

On Sunday, January 24, 2016 at 1:45:48 PM UTC-5, grande...@gmail.com wrote:
>
> Thanks, but that's not the issue; for some reason the number of 
> iterations is really high to begin with.
>
> On Friday, January 22, 2016 at 7:01:56 PM UTC-6, Kristoffer Carlsson wrote:
>>
>> Look at the source 
>> https://github.com/JuliaOpt/Optim.jl/blob/master/src/nelder_mead.jl
>> The number of iterations is not necessarily the same as the number of 
>> objective function evaluations
>
>

[julia-users] Re: problems reading a file

2016-01-24 Thread Alberto
Thank you very much Andreas for the quick and accurate reply.
Indeed the issue of the suggested link is pretty much the same as mine.
The problem is that the file that I'm trying to read is a binary file.
As suggested in the link I tried readbytes.

f = open(filename)
f_bytes = readbytes(f)

This works and returns a binary representation of the file. 
Then, in order to get back a string I used 

f_text = ASCIIString(f_bytes)

The variable f_text contains the text of the file.
When I print the string f_text the Unicode character ° is not displayed 
correctly, but the remaining text is correct, and this is enough for me at 
the moment.

Thank you again,
Alberto


Il giorno domenica 24 gennaio 2016 15:03:03 UTC+1, Andreas Lobinger ha 
scritto:
>
> Hello colleague,
>
> i think i once had a similar problem.
> https://github.com/JuliaLang/julia/issues/12764
>
> Wishing a happy day,
>  Andreas
>
>