[sqlite] making a meal of text substitution
Can someone please help me to understand why my substitution of the single
line that creates the table 'std' isn't working? Any help much appreciated!

#==
proc msg {vlu} {tk_messageBox -message "$vlu"}

::oo::class create cDbase {
    variable db tmp
    constructor {} {
        package require sqlite3
        set tmp {}
        set db {}
    }
    method db_open {fPth} {
        sqlite3 db $fPth  ;# :memory:
    }
    method tbl_open {nm flds} {
        db eval {CREATE TABLE std(
            id integer primary key,
            epic text, itm text, yr integer, vlu real)}

        # why isn't this a replacement for the line above???
        #==
        #msg "nm=$nm\nflds=$flds"
        #set script {CREATE TABLE [$nm]( id integer primary key, [$flds] )}
        #db eval $script
        #==

        db eval {INSERT INTO std VALUES(1,'mcro','revs',2003, 10)}
        db eval {INSERT INTO std VALUES(2,'mcro','revs',2004, 11)}
        set x [db eval {SELECT vlu FROM std WHERE itm='revs' ORDER BY id}]
        tk_messageBox -message $x
        db close
    }
    method add {} { }
    method destructor {} { }
}

cDbase create d
d db_open :memory:
d tbl_open std "epic text, itm text, yr integer, vlu real"
d destroy
cDbase destroy
exit

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
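[Editor's note: in Tcl, braces suppress variable substitution, so the `$nm` and `$flds` inside `{...}` are never expanded before the script reaches SQLite. The underlying issue generalizes beyond Tcl: SQL parameter binding cannot supply identifiers (table or column names), so dynamic DDL has to be assembled as a string before execution. A minimal sketch of the same table in Python's sqlite3, where the `make_table` helper and its quoting style are illustrative assumptions, not from the original post:]

```python
import sqlite3

def make_table(conn, name, cols):
    # Identifiers cannot be bound with '?' placeholders, so the DDL
    # string is assembled first; quoting the name guards odd characters.
    conn.execute('CREATE TABLE "%s" (id INTEGER PRIMARY KEY, %s)' % (name, cols))

conn = sqlite3.connect(":memory:")
make_table(conn, "std", "epic TEXT, itm TEXT, yr INTEGER, vlu REAL")
conn.execute("INSERT INTO std VALUES (1, 'mcro', 'revs', 2003, 10)")
conn.execute("INSERT INTO std VALUES (2, 'mcro', 'revs', 2004, 11)")

# REAL affinity converts the inserted integers to floats.
vals = [r[0] for r in conn.execute(
    "SELECT vlu FROM std WHERE itm = 'revs' ORDER BY id")]
print(vals)  # -> [10.0, 11.0]
```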
[sqlite] Best practice for connections and cursors
I have created a Python module, imported by several files, that simply opens
a connection to an SQLite file and defines several methods which each open a
cursor before they either select or insert data. Because the module owns the
connection, wherever I import it I can call a commit against that connection.

It seems I've made a proper mess. One of the modules causes a 5-second delay
at import (big indicator there), and one of the modules calls a method that
yields data while calling other methods as it iterates. Each of these
methods opens its own cursor. One of them, during some processing, calls
another method which opens a cursor and creates a temp table, and this
corrupts the top-level cursor and causes it to yield a shorter count.

If I open a debugger just as the top-level method begins to yield, I can
pull all the expected records. It seems to be one of the nested methods that
leverages the singleton connection to the SQLite db: once it opens its own
cursor and creates a temp table, things go south.

A bit vague, I know, but does anyone see the obvious mistake? I assumed that
having the module set up a singleton connection was a perfectly viable way
to accomplish this?

Thanks!
jlc
[sqlite] Transaction Handling
Is it possible to use SQLite3's transaction capability based on the actual
return of an individual query's execution within the transaction? Example:

    BEGIN
    SELECT   (reading the database; I get data in my read callback
              function, and based on the data's status I may initiate
              another statement, like the UPDATE below)
    UPDATE
    COMMIT

Note: this is implemented by C wrapper functions, each handling an
individual operation such as SELECT, UPDATE, DROP or INSERT.

Cheers-
Techi
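[Editor's note: yes — nothing in SQLite forces a transaction to commit; between BEGIN and COMMIT the application is free to inspect SELECT results and decide whether to issue further statements. A sketch in Python's sqlite3 with `isolation_level=None` so BEGIN/COMMIT are issued explicitly; the `acct` table and the balance threshold are invented for illustration:]

```python
import sqlite3

# isolation_level=None disables the module's implicit transaction
# handling, so BEGIN/COMMIT below are exactly what SQLite sees.
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE acct (id INTEGER PRIMARY KEY, bal REAL)")
conn.execute("INSERT INTO acct VALUES (1, 50.0)")

# One transaction: read, decide based on the result, conditionally write.
conn.execute("BEGIN")
(bal,) = conn.execute("SELECT bal FROM acct WHERE id = 1").fetchone()
if bal < 100:  # decision driven by the SELECT's return
    conn.execute("UPDATE acct SET bal = bal + 25 WHERE id = 1")
conn.execute("COMMIT")

final = conn.execute("SELECT bal FROM acct WHERE id = 1").fetchone()[0]
print(final)  # -> 75.0
```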
Re: [sqlite] Query on database back-up:
On 07/31/2013 07:33 PM, Simon Slavin wrote:

> On 31 Jul 2013, at 12:56pm, techi eth wrote:
>
>> I have followed the link below for details. It says: "If another thread
>> or process writes to the source database while this function is
>> sleeping, then SQLite detects this and usually restarts the backup
>> process when sqlite3_backup_step() is next called."
>>
>> - Does that mean it will start the backup from the start, or resume
>>   from the last page copied, while making sure all updates are written
>>   to the backup file?
>
> Every time a change is made, the backup has to restart from the
> beginning again. So if, for example, it would take 300 seconds to back
> up the entire file, the backup process will not complete until there has
> been a 300-second period when no changes were made to the file.
>
>> - Is there any issue if the process holding the database handle does
>>   both operations (writing and backup)?
>
> No. This is not a problem. Though, of course, if anything does use that
> handle to write to the database, the backup will be (automatically)
> restarted.

Actually, this is an exception to that rule. If a connection being used as
the source in a backup operation writes to the database, the backup does
not have to be restarted. Instead, the backup db is modified at the same
time to keep it up to date with the source db.

Dan.
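[Editor's note: the backup API discussed in this thread is exposed in several bindings. As a sketch, Python's sqlite3.Connection.backup() (Python 3.7+) wraps sqlite3_backup_init/step/finish and shows the incremental page-copying behavior; the table and page counts are illustrative:]

```python
import sqlite3

src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE t (x)")
src.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
src.commit()

dst = sqlite3.connect(":memory:")

def progress(status, remaining, total):
    # Called after each sqlite3_backup_step(); remaining shrinks toward
    # zero unless the source is modified between steps.
    print("copied", total - remaining, "of", total, "pages")

# pages=5 copies five pages per step instead of everything at once.
src.backup(dst, pages=5, progress=progress)

count = dst.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(count)  # -> 1000
```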
Re: [sqlite] Query on database back-up:
Thanks for the answer. Let me put my query point by point, comparing with
the historic way of copying a database described at the start of that link.

1. "Any database clients wishing to write to the database file while a
   backup is being created must wait until the shared lock is
   relinquished." Example 1 (Loading and Saving In-Memory Databases) does
   not address this shortcoming. Example 2 does address it, but at the
   cost of more time, because the backup restarts from scratch in the case
   of INSERT, ALTER, DROP or UPDATE; I understand a SELECT will not add
   time. Additionally, what happens if the other process is not able to
   complete its operation within the timeout duration?

2. "It cannot be used to copy data to or from in-memory databases." I
   haven't fully understood this, since with cp I am able to copy any file
   on a Linux system.

3. "If a power failure or operating system failure occurs while copying
   the database file, the backup database may be corrupted following
   system recovery." Avoiding this is a very useful benefit. What is the
   resulting outcome of a power failure or OS failure if we use the backup
   API?

On 7/31/13, Simon Slavin wrote:
> On 31 Jul 2013, at 12:56pm, techi eth wrote:
>> I have followed the link below for details. ...
>
> Every time a change is made the backup has to restart from the beginning
> again. So if, for example, it would take 300 seconds to backup the
> entire file, the backup process will not complete until there has been a
> 300-second period when no changes were made to the file.
>
>> - Is there any issue if the process acquiring the database handle does
>>   both operations (Writing & Backup)?
>
> No. This is not a problem. Though, of course, if anything does use that
> handle to write to the database the backup will be (automatically)
> restarted.
>
>> http://www.sqlite.org/backup.html
>
> Simon.
Re: [sqlite] Comparing two tables column by column
On 30.07.2013 14:49, fnoyan...@yahoo.com wrote:

Yes, it is slow actually, but I could not do it with SQL. I wish I could do
more with SQL than with code, which would give a good performance
improvement. From the overall application's point of view, I may consider
using threads to read from the database, so that performance is improved a
bit.

> On 30/07/2013, at 4:26 PM, "Stadin, Benjamin" wrote:
>
> That can work, if performance is of no concern. Otherwise it will become
> miserably slow.
>
> From: Fehmi Noyan ISI
> Reply-To: Fehmi Noyan ISI
> Date: Tuesday, 30 July 2013 01:54
> To: Benjamin Stadin, General Discussion of SQLite Database
> Subject: Re: [sqlite] Comparing two tables column by column
>
> The approach I am using to compare tableA_old and tableA_new is:
>
>     typedef struct container_t {
>         // a structure to pass parameters into callbacks
>     } container;
>
>     static int callback_2(void *ctx, int argc, char **argv, char **col) {
>         // compare two values in the resulting table; if they are
>         // different, write them into a results file
>         return 0;
>     }
>
>     static int callback_1(void *ctx, int argc, char **argv, char **col) {
>         for (/* each column */) {
>             char sql[256];
>             sprintf(sql, "select %s from tableA_old where pkey=%s",
>                     columnName, primarykey);
>             sqlite3_exec(db, sql, callback_2, ctx, NULL);
>         }
>         return 0;
>     }
>
>     int main() {
>         // import CSV file into the DB; for each pair of files (each
>         // file is a table, and files have the same column names in the
>         // same order) -- actually, these are records for different dates
>         char sql[256] = "select * from tableA_new;";
>         container c;
>         sqlite3_exec(db, sql, callback_1, (void *)&c, NULL);
>     }
>
> From: "Stadin, Benjamin"
> To: Fehmi Noyan ISI; General Discussion of SQLite Database
> Sent: Tuesday, July 30, 2013 9:00 AM
> Subject: Re: [sqlite] Comparing two tables column by column
>
> If you like Ruby, I have another idea to get you going (maybe without
> needing to write much code):
>
> - Use a function registered with SQLite to create MD5 or SHA1 keys for
>   rows in the table. Here is a Ruby snippet that registers a SHA1
>   function:
>   http://www.copiousfreetime.org/articles/2009/01/10/writing-sql-functions-in-ruby.html
> - Join both tables by your primary key and select those rows where the
>   SHA keys don't match.
> - Go through each column programmatically in your Ruby code and simply
>   compare column values with each other.
>
> If you then need performance, and can get by some means a trigger for
> your newly inserted records, add a sha1 field to all of your tables and
> precompute and index the sha1 for each table.
>
> Benjamin Stadin
>
> On 30.07.13 01:07, "Fehmi Noyan ISI" wrote:
>
> The EXCEPT query gave the different rows in the tables, but what I am
> after is the different values for existing records. The column names are
> exactly the same; however, the number of rows may differ (with most of
> the records being the same).
>
> From: Simon Slavin
> To: General Discussion of SQLite Database
> Sent: Monday, July 29, 2013 9:10 PM
> Subject: Re: [sqlite] Comparing two tables column by column
>
> On 29 Jul 2013, at 12:36pm, Fabian Klebert wrote:
>
>> Wouldn't
>>     SELECT * FROM table1 EXCEPT SELECT * FROM table2
>> solve this problem? I think it does for the example provided. Not sure
>> if it would work in a real-world environment.
>
> There are two elements: making sure the same rows are present, and
> making sure the contents of the rows match. I would probably use EXCEPT
> commands to find entries in one database which weren't in the other,
> then use the INTERSECT command to check the fields in the rows which
> were in both. But that's just at first glance.
>
> Simon.

a) Store your table backups ordered by a primary key (not rowid, but an
   integer primary key).
b) Do a whole-table select ordered by the same primary key for the current
   table.
c) Merge both tables.

This will be faster because you won't need to trace through the BTREE
(B+TREE) for each recordset.
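[Editor's note: the EXCEPT approach and the per-row comparison the poster wants can both be done in SQL. A sketch in Python's sqlite3 — the table layout (`pkey`, columns `a` and `b`) and sample values are invented for illustration. EXCEPT finds rows of the new table absent from the old; the join then narrows that to keys present in both whose values differ:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE tableA_old (pkey INTEGER PRIMARY KEY, a, b);
    CREATE TABLE tableA_new (pkey INTEGER PRIMARY KEY, a, b);
    INSERT INTO tableA_old VALUES (1, 'x', 10), (2, 'y', 20);
    INSERT INTO tableA_new VALUES (1, 'x', 11), (2, 'y', 20), (3, 'z', 30);
""")

# Rows in new that are not (identically) in old: changed or added.
changed = conn.execute("""
    SELECT * FROM tableA_new
    EXCEPT
    SELECT * FROM tableA_old
    ORDER BY pkey
""").fetchall()
print(changed)   # -> [(1, 'x', 11), (3, 'z', 30)]

# Keys present in both tables but with differing column values
# (IS NOT treats NULLs as comparable, unlike <>).
modified = conn.execute("""
    SELECT n.* FROM tableA_new n JOIN tableA_old o USING (pkey)
    WHERE n.a IS NOT o.a OR n.b IS NOT o.b
    ORDER BY n.pkey
""").fetchall()
print(modified)  # -> [(1, 'x', 11)]
```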
Re: [sqlite] Query on database back-up:
On 31 Jul 2013, at 12:56pm, techi eth wrote:

> I have followed the link below for details. It says: "If another thread
> or process writes to the source database while this function is
> sleeping, then SQLite detects this and usually restarts the backup
> process when sqlite3_backup_step() is next called."
>
> - Does that mean it will start the backup from the start, or resume from
>   the last page copied, while making sure all updates are written to the
>   backup file?

Every time a change is made the backup has to restart from the beginning
again. So if, for example, it would take 300 seconds to backup the entire
file, the backup process will not complete until there has been a
300-second period when no changes were made to the file.

> - Is there any issue if the process holding the database handle does
>   both operations (Writing & Backup)?

No. This is not a problem. Though, of course, if anything does use that
handle to write to the database the backup will be (automatically)
restarted.

> http://www.sqlite.org/backup.html

Simon.
[sqlite] Query on database back-up:
Hi,

Case: I copied the first five pages from a database using the online
backup API, and during the sleep time another process wrote to the
database, touching pages that were among the five already copied. In that
case, at the end of the backup, can I be sure I will have my database
backed up with the latest changes (including changes made during the
backup)? As of now I keep the database on flash memory.

I have followed the link below for details. It says: "If another thread or
process writes to the source database while this function is sleeping,
then SQLite detects this and usually restarts the backup process when
sqlite3_backup_step() is next called."

- Does that mean it will start the backup from the start, or resume from
  the last page copied, while making sure all updates are written to the
  backup file?

- Is there any issue if the process holding the database handle does both
  operations (writing and backup)?

http://www.sqlite.org/backup.html

Cheers
Techi