Hi Andrey/Niphlod,

Is there a way I can connect servers via SQLite?

Regards,

Manjinder
On Tuesday, 4 March 2014 20:52:31 UTC-8, Andrey K wrote:
>
> Thanks Niphlod, as usual a very detailed and great answer. Thank you a lot! 
> After your answer I checked the web and found several tools that do 
> cluster management specifically: StarCluster and Elasticluster. I am really 
> keen to try the latter; it looks good specifically for GCE and EC2 work. 
> However, now I know better how I can utilize the w2p scheduler. After 
> figuring out how Elasticluster works, I might blend the work of the w2p 
> scheduler and EC.
>
> Thanks again! Really appreciate your help!
>
> On Monday, March 3, 2014 11:33:17 PM UTC+3, Niphlod wrote:
>>
>>
>>
>> On Monday, March 3, 2014 1:10:08 PM UTC+1, Andrey K wrote:
>>>
>>> Wow, what an answer! Niphlod, thanks a lot for such detailed info with 
>>> examples - now it is crystal clear to me. Great help, really 
>>> appreciate it!!!
>>>
>>> Your answer helped me clarify the future architecture for my app. Before, 
>>> I thought I would use Amazon's internal tools for task distribution; now I 
>>> think I can use the w2p scheduler, at least for the first stage or maybe 
>>> permanently.
>>>
>>> I have several additional questions, if you allow me. Hope it helps 
>>> other members of the w2p club.
>>> The plan is to start Amazon servers (with web2py preinstalled) 
>>> programmatically when I need them, with the purpose of running the w2p 
>>> scheduler on them.
>>> Could you give me your point of view on the following questions that I 
>>> need to address in order to build such a service:
>>> 1) Can I set up and cancel workers under web2py programmatically, 
>>> equivalent to 'python web2py.py -K myapp:fast,myapp:fast,myapp:fast'?
>>>
>>
>> you can put them to sleep, terminate or kill them (read the book or use 
>> w2p_scheduler_tests to get comfortable with the terms) but there's no 
>> "included" way to start them on demand. That job is left to various pieces 
>> of software built from the ground up to manage external processes... 
>> upstart, systemd, circus, gaffer, supervisord, foreman, etc. are all good 
>> matches, but each has a particular design in mind and is totally outside 
>> the scope of web2py. Coordinating processes among a set of servers simply 
>> needs a more complicated solution than web2py itself.
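>>
>> To give an idea of the sleep/terminate/kill part: it's just an update on 
>> the scheduler_worker table. A rough, untested sketch (field and status 
>> names as in the book - check them against your web2py version), to be run 
>> inside the web2py environment (e.g. python web2py.py -S myapp -M):
>>
>>     def sleep_all_workers():
>>         # DISABLED = sleep, TERMINATE = finish the current task then exit,
>>         # KILL = stop immediately
>>         db(db.scheduler_worker.id > 0).update(status='DISABLED')
>>         db.commit()
>>
>>     def terminate_worker(worker_name):
>>         # worker_name is typically 'hostname#pid'
>>         db(db.scheduler_worker.worker_name == worker_name).update(
>>             status='TERMINATE')
>>         db.commit()
>>
>> Starting a worker from code, on the other hand, boils down to what a 
>> process manager does for you anyway, i.e. spawning the same command line 
>> with the subprocess module, e.g. 
>> subprocess.Popen(['python', 'web2py.py', '-K', 'myapp:fast']).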
>>  
>>
>>> 2) What is the best way to monitor the load of the server so I can decide 
>>> whether to start a new worker or a new server, depending on the resources 
>>> left?
>>>
>>
>> depends on what you mean by load. Just looking at your question, I see 
>> that you never had to manage such an architecture :-P ... usually you 
>> don't want to monitor the load "of the server" to ADD additional 
>> workers... you want to monitor the load "of the server" to KILL additional 
>> workers, or ADD servers to process the jobs while watching the load "of 
>> the infrastructure". Again, usually - because basically every app has its 
>> own priorities - you'd want to set an estimate (KPI) on how much the queue 
>> can grow before jobs are actually processed, and if the queue is growing 
>> faster than items are being processed, start either a new worker or a new 
>> virtual machine. 
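>>
>> As a rough, untested sketch of that check (QUEUE_KPI is a made-up 
>> threshold, and the "scale up" action is whatever your tooling does - start 
>> a worker, boot a VM), again run inside the web2py environment where the 
>> scheduler tables are defined:
>>
>>     import datetime
>>
>>     QUEUE_KPI = 100  # how many queued tasks you tolerate before scaling
>>
>>     def need_to_scale_up():
>>         # use utcnow() instead if the scheduler runs with utc_time=True
>>         now = datetime.datetime.now()
>>         queued = db(db.scheduler_task.status == 'QUEUED').count()
>>         overdue = db((db.scheduler_task.status == 'QUEUED') &
>>                      (db.scheduler_task.next_run_time < now)).count()
>>         # too many tasks waiting, or tasks sitting past their scheduled
>>         # time: add a worker (or a VM) instead of piling up more load
>>         return queued > QUEUE_KPI or overdue > 0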
>>  
>>
>>> 3) Is it possible to set up a folder on a dedicated server for web2py 
>>> file uploads and make it accessible to all web2py instances (= job 
>>> workers)?
>>>
>> linux has all kinds of support for that: either an smb share or an nfs 
>> share is the simplest thing to do. A Ceph cluster is probably more 
>> complicated, but again we're outside of the scope of web2py 
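>>
>> Once the share is mounted at the same path on every instance (the path, 
>> table and field names below are just examples), web2py only needs to be 
>> told where to store the uploads:
>>
>>     db.define_table('document',
>>         Field('attachment', 'upload', uploadfolder='/mnt/shared/uploads'))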
>>
>

