Re: [web2py] Re: [web2py-dev] Asynchronous Application Sync

2012-07-17 Thread José Luis Redrejo Rodríguez
I had to do something similar a couple of years ago (between
several waste-water plants and the control center) and ended up using an
approach similar to what nick name said:
- In the control center I used MySQL
- In the waste-water plants I used one SQLite database per day
(initializing the database every day at 00:00 and backing up the
previous file in another directory)
- Every record in the plants had a datetime stamp
- The plants just sent the SQLite files gzipped (and split into small
pieces, because my connection was really bad); the control center
received the pieces, joined them, unzipped the SQLite files and imported
their data into MySQL, using the plant + datetime as key to avoid
duplicate rows (see the sketch below).
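
For illustration only -- the file names, the readings table with its
UNIQUE (plant_id, recorded_at) key, and the pymysql driver are all
assumptions, not a description of my actual setup:

import glob
import gzip
import shutil
import sqlite3

import pymysql  # any DB-API MySQL driver would do here

def reassemble(prefix, out_path):
    # Join the split parts back together, then gunzip into one .sqlite file.
    gz_path = out_path + '.gz'
    with open(gz_path, 'wb') as joined:
        for part in sorted(glob.glob(prefix + '.part*')):
            with open(part, 'rb') as chunk:
                shutil.copyfileobj(chunk, joined)
    with gzip.open(gz_path, 'rb') as src, open(out_path, 'wb') as dst:
        shutil.copyfileobj(src, dst)

def import_into_mysql(sqlite_path, plant_id, conn):
    # Copy the day's rows into MySQL; the UNIQUE key on (plant_id, recorded_at)
    # plus INSERT IGNORE makes a re-import of the same file harmless.
    src = sqlite3.connect(sqlite_path)
    rows = src.execute('SELECT recorded_at, sensor, value FROM readings').fetchall()
    src.close()
    with conn.cursor() as cur:
        cur.executemany(
            'INSERT IGNORE INTO readings (plant_id, recorded_at, sensor, value) '
            'VALUES (%s, %s, %s, %s)',
            [(plant_id,) + row for row in rows])
    conn.commit()

reassemble('incoming/plant7_2012-07-16.sqlite.gz',
           'incoming/plant7_2012-07-16.sqlite')
conn = pymysql.connect(host='localhost', user='center',
                       password='secret', db='center')
import_into_mysql('incoming/plant7_2012-07-16.sqlite', plant_id=7, conn=conn)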


Regards.
José L.


Re: [web2py] Re: [web2py-dev] Asynchronous Application Sync

2012-07-17 Thread Bruno Rocha
I would set up a Dropbox shared folder on both machines (there is a
dropbox.py daemon for headless Linux servers).

Then create a script that copies the .sqlite file into the Dropbox folder.

On the city machine, a scheduled task reads the records from SQLite and
imports the new ones directly into MySQL.

You are going to need a 'signal' field on every record. I am using 'N' for
new, 'U' for updated and 'D' for deactivated.

In this scenario it is not a good idea to delete any data, so use an
is_active boolean to mark records as deactivated instead.

In the other direction, you can have the city machine exporting into the
SQLite database the records that should go to the forest.

You also need a sqlite-status table, or a status file, read on both
sides before any transaction occurs (to avoid race conditions).

The best solution would be a MySQL master-slave setup, but if you have
connection issues you can go with a home-made queue.
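
A rough sketch of that city-side task (the product table, the field names
and the connection strings are only placeholders for whatever the app
really uses):

from pydal import DAL, Field  # inside web2py you would use the app's own db

dropbox_db = DAL('sqlite://storage.sqlite', folder='/home/user/Dropbox/sync')
city_db = DAL('mysql://user:password@localhost/city')

for db in (dropbox_db, city_db):
    db.define_table('product',
        Field('uuid', length=64, unique=True),
        Field('name'),
        Field('is_active', 'boolean', default=True),
        Field('signal', length=1, default='N'))  # 'N' new, 'U' updated, 'D' deactivated

for row in dropbox_db(dropbox_db.product.signal != '').select():
    target = city_db(city_db.product.uuid == row.uuid)
    if row.signal == 'N' and target.isempty():
        city_db.product.insert(uuid=row.uuid, name=row.name, is_active=True)
    elif row.signal == 'U':
        target.update(name=row.name)
    elif row.signal == 'D':
        target.update(is_active=False)  # never delete, just deactivate
    row.update_record(signal='')  # mark the row as already processed

dropbox_db.commit()
city_db.commit()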

http://zerp.ly/rochacbruno


[web2py] Re: [web2py-dev] Asynchronous Application Sync

2012-07-12 Thread Alfonso de la Guarda
Hi,

Well, I need to work with this, so I will try to develop some features
to support it!


Regards,


Alfonso de la Guarda
Twitter: @alfonsodg
Redes sociales: alfonsodg
   Telef. 991935157
1024D/B23B24A4
5469 ED92 75A3 BBDB FD6B  58A5 54A1 851D B23B 24A4


[web2py] Re: [web2py-dev] Asynchronous Application Sync

2012-07-12 Thread nick name
On Wednesday, July 11, 2012 6:26:00 PM UTC-4, Massimo Di Pierro wrote:

 I am planning to improve this functionality but it would help to know if 
 it works for you as it is and what problems you encounter with it. 


I originally used the export-to-CSV approach, but a few months ago I switched
to just shipping the SQLite files (actually the whole databases directory,
including the .table files). That handles types, blobs, fractional seconds in
the database, etc., without any conversion. It is also faster when processing
the files at the other end, especially if you have indices and a non-trivial
import requirement. It should be opened with auto_import=True on the
receiving end, of course.

(You'd still need to export to a new .sqlite database, or use sqlite's
backup command, to make sure you get the database in a consistent state --
unless you know that the database is in a fully committed state when you
send it.)
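
Concretely, the snapshot-then-auto_import combination could look roughly like
this (the paths are made up, and the Python sqlite3 backup API shown here is
just one way to do it; the sqlite3 CLI's .backup command works as well):

import sqlite3
from pydal import DAL

# Sending side: take an online backup so the shipped file is consistent.
src = sqlite3.connect('applications/myapp/databases/storage.sqlite')
dst = sqlite3.connect('/tmp/outgoing/storage.sqlite')
with dst:
    src.backup(dst)  # same effect as the sqlite3 shell's ".backup" command
src.close()
dst.close()

# Receiving side: ship the snapshot together with the *.table files and let
# the DAL rebuild the model definitions from them.
db = DAL('sqlite://storage.sqlite', folder='/tmp/incoming', auto_import=True)
print(db.tables)  # table definitions recovered from the .table files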

If the connection is not reliable, the classic solution is a queuing system
like MSMQ / MQSeries / RabbitMQ (which is often non-trivial to manage), but
you could just export (CSV, .sqlite, whatever) to a Dropbox-or-similar
synced directory (e.g. SparkleShare lets you own the repository and not
rely on dropbox.com servers) and import it on the server side when the
file has changed. Much, much simpler, and it works just as well for one-way
communication that does not require the lowest possible latency.
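
The "import when the file has changed" part can be as simple as a cron job
comparing modification times (the file names here are hypothetical):

import os

SYNCED = '/home/user/Dropbox/sync/storage.sqlite'
STAMP = '/var/lib/myapp/last_import'

def changed_since_last_import():
    # Compare the synced file's mtime against the time of the last import.
    last = os.path.getmtime(STAMP) if os.path.exists(STAMP) else 0.0
    return os.path.getmtime(SYNCED) > last

if changed_since_last_import():
    # ... run the actual import here ...
    open(STAMP, 'w').close()  # touch the stamp so unchanged files are skipped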


[web2py] Re: [web2py-dev] Asynchronous Application Sync

2012-07-11 Thread Massimo DiPierro
There are two issues: 1) the protocol for transferring data; 2) exporting from
and importing into the database.

RabbitMQ etc. only address 1, and you do not need any of them: web2py already
has a web server and many RPC systems you can use.
The real issue is 2. If your tables have a uuid field, db.export_to_csv_file
and db.import_from_csv_file should do what you ask.
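
For example (the 'thing' table is made up; on the real receiving side the
same import call would run against the main database instead of the one
that exported the file):

import uuid
from pydal import DAL, Field  # the DAL bundled with web2py exposes the same calls

db = DAL('sqlite://storage.sqlite')
db.define_table('thing',
    Field('uuid', length=64, default=lambda: str(uuid.uuid4()), unique=True),
    Field('name'))

# Sending side: dump every table to a single CSV stream.
with open('dump.csv', 'w', newline='') as out:
    db.export_to_csv_file(out)

# Receiving side: rows whose uuid already exists are updated, not duplicated.
with open('dump.csv', 'r', newline='') as src:
    db.import_from_csv_file(src, unique='uuid')
db.commit()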

I am planning to improve this functionality but it would help to know if it 
works for you as it is and what problems you encounter with it.



On Jul 11, 2012, at 12:06 PM, Alfonso de la Guarda wrote:

 Hi,
 
 
 I have a web2py app in 2 places:
 - City location
 - Rainforest location
 
 The app is the same in both places, but the city app is the main one.
 
 I need to synchronize the information entered at the jungle location to
 the city, but not bi-directionally, because of the low bandwidth. In
 addition, it should happen automatically as soon as connectivity is
 available (i.e. queue management / messaging).
 
 Has anyone had experiences like this with web2py specifically? (I can
 surely work with RabbitMQ, HornetQ, etc., but is there an approach within
 web2py?)
 
 
 Regards,
 
 
 Alfonso de la Guarda
 Twitter: @alfonsodg
 Redes sociales: alfonsodg
   Telef. 991935157
 1024D/B23B24A4
 5469 ED92 75A3 BBDB FD6B  58A5 54A1 851D B23B 24A4
 