Yes indeed, expanding sigComponents fixes the problem. Thanks.
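A minimal sketch of a guard for the bug resolved above: pmap pairs its argument collections element-wise, so a short collection (like the unexpanded sigComponents) silently limits how many tasks run. The helper name here is hypothetical:

```julia
# Check that every argument collection passed to pmap has the same
# length; a false result means pmap would only schedule as many tasks
# as the shortest collection provides.
equal_lengths(colls...) = all(c -> length(c) == length(colls[1]), colls)

equal_lengths([1,2,3], [4,5,6], [7,8,9])  # true: safe to pmap
equal_lengths([1,2,3], [4])               # false: only one task would run
```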

On Sunday, February 23, 2014 5:19:22 AM UTC-5, Amit Murthy wrote:
>
> Is it possible that any one of capitals, {marketHistory for i in
> 1:trials}, depths, ALRs, {vec(variances) for i in 1:trials},
> {weights[:,i] for i in 1:trials}, sigComponents is of length 1?
>
> Testing with "-p 3" and a busywait function: 
>
> @everywhere function bw(s...)
>     t1 = time()
>     while (time() - t1) < s[1] end
> end
>
> pmap(bw, [10,20,30], [4,5,6]) results in 3 workers first taking up 100%
> of 3 cores, then 2, and finally 1 for the final 10 seconds.
>
> pmap(bw, [10,20,30], [4]) will result in just one worker taking up one 
> core for 10 seconds.
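The behavior being probed here can be seen locally, without any workers: pmap pairs its argument collections the way zip does, stopping at the shortest collection. A minimal sketch:

```julia
# zip stops at the shortest collection; this is why pmap given
# [10,20,30] and [4] schedules only a single bw(10, 4) task.
tasks = collect(zip([10,20,30], [4]))
length(tasks)  # 1
tasks[1]       # (10, 4)
```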
>
> On Sun, Feb 23, 2014 at 3:25 PM, Micah McClimans
> <[email protected]> wrote:
>
>> Sure.
>>
>> Invoked with julia -p 8
>>
>> fid=open("/home/main/data/juliafiles/Julia/machinefile.txt") 
>>
>> rc=readlines(fid)
>>
>> m={match(r"(\r|\n)",rcd) for rcd in rc}
>>
>> machines={rc[ma][1:(m[ma].offset-1)] for ma in 1:length(m)}
>>
>> ...
>> addprocs(machines;dir="/home/main/data/programfiles/julia/usr/bin")
>> ...
>> @everywhere progfile="/home/main/data/juliafiles/Julia/WLPPInt.jl"
>> @everywhere marketHistoryFile="/home/main/data/juliafiles/Julia/daily.mat"
>> @everywhere using MAT
>>
>> @everywhere include(progfile)
>> @everywhere Daily = matread(marketHistoryFile)
>> @everywhere marketHistory = NYSE["NYSE_Smoothed_Closes"]
>> ...
>> results = pmap(runWLPPIntTest, capitals, {marketHistory for i in 1:trials},
>> depths, ALRs, {vec(variances) for i in 1:trials},
>> {weights[:,i] for i in 1:trials}, sigComponents)
>>
>> I'm not sure whether this is enough to be useful, though, or what else
>> would be.
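A side note on the machinefile parsing in the snippet above: the regex-and-offset slicing can be replaced with chomp, which drops a trailing newline. A minimal sketch; the path argument is hypothetical:

```julia
# Read one hostname per line, stripping the trailing newline with chomp
# instead of locating it with a regex and slicing by offset. In the
# 0.2-era Julia of this thread, readlines keeps line terminators, so each
# entry needs it; recent Julia strips them already, making chomp a no-op.
read_machines(path) = [chomp(line) for line in open(readlines, path)]
```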
>>
>> On Sunday, February 23, 2014 3:58:04 AM UTC-5, Amit Murthy wrote:
>>
>>> Is it possible to share the relevant portions of the call here? 
>>>
>>>
>>> On Sun, Feb 23, 2014 at 11:44 AM, Micah McClimans 
>>> <[email protected]> wrote:
>>>
>>>> Thank you, it turns out my problem was coming from an @everywhere 
>>>> macro, not from pmap. 
>>>>
>>>> However (and I hope it is not bad practice to continue in this same
>>>> thread), I'm now seeing that pmap is not using all of the workers
>>>> available to the process; in fact it is using only one, despite having
>>>> 8 local and 8 remote workers available. What sort of problem could be
>>>> causing this behavior?
>>>>
>>>>
>>>> On Saturday, February 22, 2014 6:32:18 PM UTC-5, Stefan Karpinski wrote:
>>>>
>>>>> If there are other worker processes, pmap doesn't use the head node
>>>>> by default:
>>>>>
>>>>> julia> addprocs(2)
>>>>> 2-element Array{Any,1}:
>>>>>  2
>>>>>  3
>>>>>
>>>>> julia> pmap(x->myid(), 1:10)
>>>>> 10-element Array{Any,1}:
>>>>>  2
>>>>>  3
>>>>>  3
>>>>>  2
>>>>>  2
>>>>>  3
>>>>>  2
>>>>>  3
>>>>>  2
>>>>>  3
>>>>>  
>>>>>
>>>>> On Sat, Feb 22, 2014 at 5:50 PM, Micah McClimans 
>>>>> <[email protected]> wrote:
>>>>>
>>>>>> I am working on distributing a compute-intensive task over a cluster
>>>>>> in Julia, using the pmap function. However, for several reasons I
>>>>>> would like to avoid having the master node used in the computation.
>>>>>> Is there a way to accomplish this with a built-in keyword, or will I
>>>>>> need to rewrite pmap?
>>>>>>
>>>>>
>>>>>
>>>
>
