Re: [sqlite] Sqlite 3.4.1 write performance difference between linux kernel 2.6.33 and 3.4.6

2012-11-29 Thread Kevin Liao
2012/11/30 Simon Slavin :
>
> On 29 Nov 2012, at 5:47pm, Kevin Liao  wrote:
>
>> Usually it takes only 11-13
>> ms to execute the update commands. Recently I upgraded the kernel to 3.4.6 and
>> found a problem: it now takes about 43-51 ms to finish the update command.
>
> How are you measuring the amount of time it takes ?
>
> Can you try issuing the same command using the sqlite3 shell tool ?  You can 
> use the '.timer ON' command and it will time how long a command takes.  Try 
this using both kernels and see if the newer kernel slows down the sqlite3 
> shell tool too.
>
> Simon.

I use the following function before and after issuing the sqlite update
command to measure the time.

#include <sys/time.h>

static unsigned int MyGetTimerMS(void)
{
    struct timeval tv;

    /* Wall-clock time in milliseconds, truncated to unsigned int. */
    gettimeofday(&tv, NULL);
    return ((tv.tv_sec * 1000) + (tv.tv_usec / 1000));
}
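
For what it's worth, a monotonic clock avoids wall-clock adjustments (e.g.
NTP) skewing the measurement; a minimal sketch, assuming POSIX clock_gettime
is available (link with -lrt on older glibc):

#include <time.h>

/* Milliseconds from a monotonic clock; immune to wall-clock changes. */
static unsigned long long MyGetMonotonicMS(void)
{
    struct timespec ts;

    clock_gettime(CLOCK_MONOTONIC, &ts);
    return ((unsigned long long)ts.tv_sec * 1000) + (ts.tv_nsec / 1000000);
}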

2012/11/30 Black, Michael (IS) :
> The Linux kernel defaulted to writeback for a while, until 2.6.36, when the
> default returned to ordered.
> So you're seeing the ordered behavior now, which is the safest mode.
> http://forum.linode.com/viewtopic.php?t=7815
>
> Here is how to switch to writeback if that's what you want... it is a more
> dangerous option.
> http://ubuntuforums.org/showthread.php?t=107856
>

I have checked my system and found that it is already set to writeback mode.
However, from the information you provided, I finally found the root cause:
the barrier setting. Barriers are turned on by default for EXT3 on kernel
3.4.6 but are turned off on kernel 2.6.33. After turning off barriers on
kernel 3.4.6, the write performance is back to the same level as on kernel
2.6.33. Now at least I can tell my boss what is going on and let him decide
whether to turn off barriers. Thanks a lot for all the replies.
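
For anyone who wants to double-check the mount options in code rather than by
eye, a minimal sketch that scans /proc/mounts for ext3 entries (note that
/proc/mounts may omit options left at their defaults, so a missing "barrier="
is not conclusive):

#include <stdio.h>
#include <string.h>

/* Print the mount line of every ext3 filesystem so the barrier
 * setting (barrier=0/1, where listed) can be inspected. */
int main(void)
{
    char line[512];
    FILE *f = fopen("/proc/mounts", "r");

    if (f == NULL)
        return 1;
    while (fgets(line, sizeof(line), f) != NULL) {
        if (strstr(line, "ext3") != NULL)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}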

Kevin
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite 3.4.1 write performance difference between linux kernel 2.6.33 and 3.4.6

2012-11-29 Thread Black, Michael (IS)
The Linux kernel defaulted to writeback for a while, until 2.6.36, when the 
default returned to ordered.
So you're seeing the ordered behavior now, which is the safest mode.
http://forum.linode.com/viewtopic.php?t=7815

Here is how to switch to writeback if that's what you want... it is a more 
dangerous option.
http://ubuntuforums.org/showthread.php?t=107856

Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Kevin Liao [kevin...@gmail.com]
Sent: Thursday, November 29, 2012 11:47 AM
To: sqlite-users@sqlite.org
Subject: EXT :[sqlite] Sqlite 3.4.1 write performance difference between linux 
kernel 2.6.33 and 3.4.6

I have a simple program that issues an sqlite update command every few seconds.
The platform is Linux with kernel 2.6.33 and the sqlite version is 3.4.1.
The db file is on an EXT3-formatted partition. Usually it takes only 11-13
ms to execute the update commands. Recently I upgraded the kernel to 3.4.6 and
found a problem: it now takes about 43-51 ms to finish the update command.
That is, write performance is almost four times slower than on kernel 2.6.33.

The following is the source code of the function I use to update sqlite.
Am I doing anything wrong, or has anyone seen a similar problem? Thanks a lot.

int my_db_update_progress(int value)
{
    sqlite3 *db;
    char *zSQL = NULL;
    int ret = 0;
    int changed = 0;

    zSQL = sqlite3_mprintf("UPDATE MY_TASK SET progress = %d WHERE "
                           "task_pid = %d;", value, getpid());
    ret = sqlite3_open("/etc/mydb.db", &db);
    if (ret != SQLITE_OK) {
        sqlite3_close(db);  /* a handle is returned even when open fails */
        sqlite3_free(zSQL);
        return -1;
    }
    sqlite3_busy_timeout(db, 2);  /* busy handler waits up to 2 ms */
    ret = sqlite3_exec(db, zSQL, NULL, NULL, NULL);
    if (sqlite3_total_changes(db))
        changed = 1;
    sqlite3_close(db);
    sqlite3_free(zSQL);
    if (ret != SQLITE_OK || !changed)
        return -1;
    return 0;
}

Regards,
Kevin
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite 3.4.1 write performance difference between linux kernel 2.6.33 and 3.4.6

2012-11-29 Thread Simon Slavin

On 29 Nov 2012, at 5:47pm, Kevin Liao  wrote:

> Usually it takes only 11-13
> ms to execute the update commands. Recently I upgraded the kernel to 3.4.6 and
> found a problem: it now takes about 43-51 ms to finish the update command.

How are you measuring the amount of time it takes ?

Can you try issuing the same command using the sqlite3 shell tool ?  You can 
use the '.timer ON' command and it will time how long a command takes.  Try 
this using both kernels and see if the newer kernel slows down the sqlite3 
shell tool too.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Sqlite 3.4.1 write performance difference between linux kernel 2.6.33 and 3.4.6

2012-11-29 Thread Kevin Liao
I have a simple program that issues an sqlite update command every few seconds.
The platform is Linux with kernel 2.6.33 and the sqlite version is 3.4.1.
The db file is on an EXT3-formatted partition. Usually it takes only 11-13
ms to execute the update commands. Recently I upgraded the kernel to 3.4.6 and
found a problem: it now takes about 43-51 ms to finish the update command.
That is, write performance is almost four times slower than on kernel 2.6.33.

The following is the source code of the function I use to update sqlite.
Am I doing anything wrong, or has anyone seen a similar problem? Thanks a lot.

int my_db_update_progress(int value)
{
    sqlite3 *db;
    char *zSQL = NULL;
    int ret = 0;
    int changed = 0;

    zSQL = sqlite3_mprintf("UPDATE MY_TASK SET progress = %d WHERE "
                           "task_pid = %d;", value, getpid());
    ret = sqlite3_open("/etc/mydb.db", &db);
    if (ret != SQLITE_OK) {
        sqlite3_close(db);  /* a handle is returned even when open fails */
        sqlite3_free(zSQL);
        return -1;
    }
    sqlite3_busy_timeout(db, 2);  /* busy handler waits up to 2 ms */
    ret = sqlite3_exec(db, zSQL, NULL, NULL, NULL);
    if (sqlite3_total_changes(db))
        changed = 1;
    sqlite3_close(db);
    sqlite3_free(zSQL);
    if (ret != SQLITE_OK || !changed)
        return -1;
    return 0;
}

Regards,
Kevin
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Persistence of -wal and -shm

2012-11-29 Thread Simon Slavin

On 29 Nov 2012, at 5:18pm, Richard Hipp  wrote:

> On Thu, Nov 29, 2012 at 12:04 PM, Simon Slavin  wrote:
> 
>> Should these journal files exist even when nothing is open or has crashed
>> ?  I thought that properly-closed connections deleted journal files for
>> databases in WAL mode.
> 
> If everybody does sqlite3_close() then those files are deleted on the last
> call to sqlite3_close().  If they are lingering, that indicates that
> somebody didn't do their sqlite3_close().

That's what I was afraid of.  Okay, that makes it certain.  I'll delete them 
this time, and see if some pop up again.

>> Is there some official method to delete them ?  I tried VACUUM on one and
>> when the connection was closed the journal files were deleted, but I don't
>> want to do that if their existence definitely indicates that my code is not
>> closing those files correctly.
> 
> Simply running:  "sqlite3 whatever.db .tables" should be sufficient to
> delete left-over -shm and -wal files.  If that doesn't work, it indicates
> that some other process still has the database open.

Will try that tomorrow.  Thanks.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Persistence of -wal and -shm

2012-11-29 Thread Dan Kennedy

On 11/30/2012 12:04 AM, Simon Slavin wrote:

I have a folder with 17 independent databases in it, each of them opened
for reading and writing occasionally.  Two of them have both -wal and
-shm files, even though they shouldn't have been opened for read or
write for days, and the last opening of each one closed connections
properly without crashing.

Should these journal files exist even when nothing is open or has
crashed ?  I thought that properly-closed connections deleted journal
files for databases in WAL mode.

Is there some official method to delete them ?  I tried VACUUM on one
and when the connection was closed the journal files were deleted,
but I don't want to do that if their existence definitely indicates
that my code is not closing those files correctly.


You should probably assume that.

Usually *-wal and *-shm files are deleted when the number of
connections to a database drops from one to zero.

I think there is a race condition there though - if the last two
connections disconnect more or less at the same time the files
might not be removed. Seems unlikely that you would strike that
twice, but who knows.

To delete them, just run any query on the database and then disconnect.
i.e.

  sqlite3 databasefile.db "SELECT * FROM sqlite_master;" > /dev/null
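
The same trick from C is just open, query, close; a minimal sketch (error
handling abbreviated, path hypothetical):

#include <sqlite3.h>

/* Open the database, run a trivial query, and disconnect.  If this is
 * the last connection, SQLite removes the -wal and -shm files on close. */
static int clean_wal_files(const char *path)
{
    sqlite3 *db;

    if (sqlite3_open(path, &db) != SQLITE_OK) {
        sqlite3_close(db);
        return -1;
    }
    sqlite3_exec(db, "SELECT * FROM sqlite_master;", NULL, NULL, NULL);
    return sqlite3_close(db);
}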




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Persistence of -wal and -shm

2012-11-29 Thread Richard Hipp
On Thu, Nov 29, 2012 at 12:04 PM, Simon Slavin  wrote:

> I have a folder with 17 independent databases in it, each of them opened for
> reading and writing occasionally.  Two of them have both -wal and -shm
> files, even though they shouldn't have been opened for read or write for
> days, and the last opening of each one closed connections properly without
> crashing.
>
> Should these journal files exist even when nothing is open or has crashed
> ?  I thought that properly-closed connections deleted journal files for
> databases in WAL mode.
>

If everybody does sqlite3_close() then those files are deleted on the last
call to sqlite3_close().  If they are lingering, that indicates that
somebody didn't do their sqlite3_close().


>
> Is there some official method to delete them ?  I tried VACUUM on one and
> when the connection was closed the journal files were deleted, but I don't
> want to do that if their existence definitely indicates that my code is not
> closing those files correctly.
>

Simply running:  "sqlite3 whatever.db .tables" should be sufficient to
delete left-over -shm and -wal files.  If that doesn't work, it indicates
that some other process still has the database open.



>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Persistence of -wal and -shm

2012-11-29 Thread Simon Slavin
I have a folder with 17 independent databases in it, each of them opened for 
reading and writing occasionally.  Two of them have both -wal and -shm files, 
even though they shouldn't have been opened for read or write for days, and the 
last opening of each one closed connections properly without crashing.

Should these journal files exist even when nothing is open or has crashed ?  I 
thought that properly-closed connections deleted journal files for databases in 
WAL mode.

Is there some official method to delete them ?  I tried VACUUM on one and when 
the connection was closed the journal files were deleted, but I don't want to 
do that if their existence definitely indicates that my code is not closing 
those files correctly.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Converting in-memory sqlite database to char array

2012-11-29 Thread Jay A. Kreibich
On Thu, Nov 29, 2012 at 02:39:49PM +, Black, Michael (IS) scratched on the 
wall:
> I thought a backup was using a snapshot and locking the database?

  No... the source DB remains available.  That's largely the point of
  the API.  In fact, the full name is the "Online Backup API." The fact
  that it can also be used to copy in-memory DBs is more of a side
  benefit, even if it was a much needed benefit.

  http://www.sqlite.org/c3ref/backup_finish.html

SQLite holds a write transaction open on the destination database
file for the duration of the backup operation. The source database
is read-locked only while it is being read; it is not locked
continuously for the entire backup operation. Thus, the backup may
be performed on a live source database without preventing other
database connections from reading or writing to the source database
while the backup is underway.

  If the source database is modified by the same connection doing the
  backup, the page updates are written to both DBs.  If the source DB
  is modified by any other connection, the backup automatically restarts.

  Because it is easy to imagine a case when the backup gets caught in a
  restart loop, some people choose to make the backup a more atomic
  operation by having the backup "step" function copy all the pages in
  one go.  In that case it is likely that the majority of pages are
  written out in-order, but I wouldn't want to bank on that.
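
  A minimal sketch of that single-pass style, assuming a file destination
  (names hypothetical, error handling abbreviated):

#include <sqlite3.h>

/* Copy src into dest_path in a single backup step.  Passing -1 to
 * sqlite3_backup_step() copies all remaining pages in one go, so the
 * copy cannot be caught in a restart loop by outside writers. */
static int backup_in_one_step(sqlite3 *src, const char *dest_path)
{
    sqlite3 *dest;
    sqlite3_backup *bak;
    int rc = sqlite3_open(dest_path, &dest);

    if (rc == SQLITE_OK) {
        bak = sqlite3_backup_init(dest, "main", src, "main");
        if (bak != NULL) {
            sqlite3_backup_step(bak, -1);   /* -1 == copy every page */
            sqlite3_backup_finish(bak);
        }
        rc = sqlite3_errcode(dest);
    }
    sqlite3_close(dest);
    return rc;
}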

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Converting in-memory sqlite database to char array

2012-11-29 Thread Black, Michael (IS)
I thought a backup was using a snapshot and locking the database?

I hadn't considered random access, though, which I'd wager it does on write.

Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Jay A. Kreibich [j...@kreibi.ch]
Sent: Thursday, November 29, 2012 8:37 AM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] Converting in-memory sqlite database to char array

On Thu, Nov 29, 2012 at 02:05:02PM +, Black, Michael (IS) scratched on the 
wall:
> And if you want to improve latency you can use fifo's on Unix or
> anonymous pipes on Windows and run a thread to send your data
> while it's writing since those methods are synchronous.

  I would not assume the backup API writes the file front to back,
  especially if the database is modified while the backup is taking
  place.

  A custom VFS that just "writes" the file to a big chunk of memory
  makes the most sense.

   -j

--
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Converting in-memory sqlite database to char array

2012-11-29 Thread Jay A. Kreibich
On Thu, Nov 29, 2012 at 02:05:02PM +, Black, Michael (IS) scratched on the 
wall:
> And if you want to improve latency you can use fifo's on Unix or
> anonymous pipes on Windows and run a thread to send your data
> while it's writing since those methods are synchronous.

  I would not assume the backup API writes the file front to back,
  especially if the database is modified while the backup is taking
  place.

  A custom VFS that just "writes" the file to a big chunk of memory
  makes the most sense.

   -j

-- 
Jay A. Kreibich < J A Y  @  K R E I B I.C H >

"Intelligence is like underwear: it is important that you have it,
 but showing it to the wrong people has the tendency to make them
 feel uncomfortable." -- Angela Johnson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Converting in-memory sqlite database to char array

2012-11-29 Thread Map Scape
Writing to a file is indeed the easy way, but I would rather not involve
the filesystem if possible. Still, I guess that is the way I will choose
in the end. I also had a look at the VFS layer, but an easier method would
of course have been better.


On Thu, Nov 29, 2012 at 2:53 PM, Eric Minbiole  wrote:

> As a first (simple) approach, I might use the standard backup API to back
> up to a temp file, then stream that file byte by byte over the
> communication protocol.
>
> I'm sure there may be other more direct-to-memory approaches, perhaps using
> a custom VFS.  However, this approach should be simple and easy, and would
> not require any special serialization library-- just standard file I/O.
>
>
> On Thu, Nov 29, 2012 at 8:19 AM, Map Scape  wrote:
>
> > Hi all,
> >
> > I have an in-memory sqlite database which I want to convert to a simple
> > char array, to send over a communication protocol. I want to do this
> > preferably without using any serialization library.
> >
> > Basically I want to do what backup api calls does, but instead of copying
> > database to another database, I will be copying it to a char
> > array/string/stream (whatever you may call it).
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Please help. Read only process being blocked by writer process.

2012-11-29 Thread Howard Chu

Alejandro Martínez wrote:

Thanks Richard, that makes perfect sense.

Thanks Howard, but I don't know what you are talking about, so I will
google "copy-on-write".


See the papers and presentations here:
   http://www.symas.com/mdb

Source code for the SQLite port is here:
   http://gitorious.org/mdb


Григорий Григоренко, interesting! I'll consider this approach if at some
point I'm able to go "scorched earth" and start this from scratch, but at
this point I would have to change too much stuff.

I will go with the WAL solution for now. I'm just worried a buggy process
could hang while having a prepared statement open and cause the wal file to
grow forever, then causing errors in other processes.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Converting in-memory sqlite database to char array

2012-11-29 Thread Black, Michael (IS)
And if you want to improve latency you can use FIFOs on Unix or anonymous 
pipes on Windows, and run a thread to send your data while it's being 
written, since those methods are synchronous.
See man popen (you open the write end in one thread and the read end in 
another):
http://msdn.microsoft.com/en-us/library/windows/desktop/aa365141%28v=vs.85%29.aspx
On Windows you get two handles; pass the read handle to your other thread.

Remember to send a final packet saying how many bytes you sent, so the 
client knows it got what it was supposed to.  Otherwise you're sure to get 
a truncated db some time and fail on the client side.
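
A minimal sketch of the trailer idea (the FILE* stands in for the pipe; a
real sender would serialize the length explicitly rather than printing it):

#include <stdio.h>

/* Stream a file to out, then append the byte count as a trailer so the
 * receiver can verify it got the whole database. */
static int stream_file_with_trailer(const char *path, FILE *out)
{
    char buf[4096];
    size_t n;
    unsigned long long total = 0;
    FILE *in = fopen(path, "rb");

    if (in == NULL)
        return -1;
    while ((n = fread(buf, 1, sizeof(buf), in)) > 0) {
        fwrite(buf, 1, n, out);
        total += n;
    }
    fclose(in);
    fprintf(out, "LENGTH:%llu\n", total);  /* illustrative text trailer */
    return 0;
}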


Michael D. Black
Senior Scientist
Advanced Analytics Directorate
Advanced GEOINT Solutions Operating Unit
Northrop Grumman Information Systems


From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Eric Minbiole [eminbi...@gmail.com]
Sent: Thursday, November 29, 2012 7:53 AM
To: General Discussion of SQLite Database
Subject: EXT :Re: [sqlite] Converting in-memory sqlite database to char array

As a first (simple) approach, I might use the standard backup API to back
up to a temp file, then stream that file byte by byte over the
communication protocol.

I'm sure there may be other more direct-to-memory approaches, perhaps using
a custom VFS.  However, this approach should be simple and easy, and would
not require any special serialization library-- just standard file I/O.


On Thu, Nov 29, 2012 at 8:19 AM, Map Scape  wrote:

> Hi all,
>
> I have an in-memory sqlite database which I want to convert to a simple
> char array, to send over a communication protocol. I want to do this
> preferably without using any serialization library.
>
> > Basically I want to do what the backup API does, but instead of copying the
> > database to another database, I will be copying it to a char
> array/string/stream (whatever you may call it).
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Converting in-memory sqlite database to char array

2012-11-29 Thread Eric Minbiole
As a first (simple) approach, I might use the standard backup API to back
up to a temp file, then stream that file byte by byte over the
communication protocol.

I'm sure there may be other more direct-to-memory approaches, perhaps using
a custom VFS.  However, this approach should be simple and easy, and would
not require any special serialization library-- just standard file I/O.


On Thu, Nov 29, 2012 at 8:19 AM, Map Scape  wrote:

> Hi all,
>
> I have an in-memory sqlite database which I want to convert to a simple
> char array, to send over a communication protocol. I want to do this
> preferably without using any serialization library.
>
> Basically I want to do what the backup API does, but instead of copying the
> database to another database, I will be copying it to a char
> array/string/stream (whatever you may call it).
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Converting in-memory sqlite database to char array

2012-11-29 Thread Map Scape
Hi all,

I have an in-memory sqlite database which I want to convert to a simple
char array, to send over a communication protocol. I want to do this
preferably without using any serialization library.

Basically I want to do what the backup API does, but instead of copying the
database to another database, I will be copying it to a char
array/string/stream (whatever you may call it).
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Sqlite and TCL on linux

2012-11-29 Thread brijesh_philips
Thank you for the reply.
I have downloaded the source tree (including the tests) from sqlite:
SHA1 Hash:  ba8d08b67021a32fda069c18b7eb93523e6f0d1f
Date:   2012-11-27 21:56:28

I tried the steps you mentioned, and while executing the tests
(./testfixture ./test/all.test) it throws the error below; I am not sure
why that is.

philips@philips-desktop:~/Desktop/Brijesh/SQLite-Source tree$ ./testfixture
./test/all.test 
rtree1.test-closeallfiles... Ok
rtree1.test-sharedcachesetting... Ok
Time: rtree1.test 14 ms
Memory used:  now 12  max  68096  max-size  64000
Allocation count: now  1  max 38
Page-cache used:  now  0  max  0  max-size   1024
Page-cache overflow:  now  0  max   1036
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  0  max-size  0
rtree2.test-closeallfiles... Ok
rtree2.test-sharedcachesetting... Ok
Time: rtree2.test 13 ms
Memory used:  now 12  max  68096  max-size  64000
Allocation count: now  1  max 38
Page-cache used:  now  0  max  0  max-size   1024
Page-cache overflow:  now  0  max   1036
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  0  max-size  0
rtree3.test-closeallfiles... Ok
rtree3.test-sharedcachesetting... Ok
Time: rtree3.test 17 ms
Memory used:  now 12  max  68096  max-size  64000
Allocation count: now  1  max 38
Page-cache used:  now  0  max  0  max-size   1024
Page-cache overflow:  now  0  max   1036
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  0  max-size  0
rtree4.test-closeallfiles... Ok
rtree4.test-sharedcachesetting... Ok
Time: rtree4.test 13 ms
Memory used:  now 12  max  68096  max-size  64000
Allocation count: now  1  max 38
Page-cache used:  now  0  max  0  max-size   1024
Page-cache overflow:  now  0  max   1036
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  0  max-size  0
rtree5.test-closeallfiles... Ok
rtree5.test-sharedcachesetting... Ok
Time: rtree5.test 12 ms
Memory used:  now 12  max  68096  max-size  64000
Allocation count: now  1  max 38
Page-cache used:  now  0  max  0  max-size   1024
Page-cache overflow:  now  0  max   1036
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  0  max-size  0
rtree6.test-closeallfiles... Ok
rtree6.test-sharedcachesetting... Ok
Time: rtree6.test 20 ms
Memory used:  now 12  max  68104  max-size  64000
Allocation count: now  1  max 38
Page-cache used:  now  0  max  0  max-size   1024
Page-cache overflow:  now  0  max   1036
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  0  max-size  0
rtree7.test-closeallfiles... Ok
rtree7.test-sharedcachesetting... Ok
Time: rtree7.test 15 ms
Memory used:  now 12  max  68104  max-size  64000
Allocation count: now  1  max 38
Page-cache used:  now  0  max  0  max-size   1024
Page-cache overflow:  now  0  max   1036
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  0  max-size  0
rtree8.test-closeallfiles... Ok
rtree8.test-sharedcachesetting... Ok
Time: rtree8.test 14 ms
Memory used:  now 12  max  68104  max-size  64000
Allocation count: now  1  max 38
Page-cache used:  now  0  max  0  max-size   1024
Page-cache overflow:  now  0  max   1036
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  0  max-size  0
rtree9.test-closeallfiles... Ok
rtree9.test-sharedcachesetting... Ok
Time: rtree9.test 16 ms
Memory used:  now 12  max  68104  max-size  64000
Allocation count: now  1  max 38
Page-cache used:  now  0  max  0  max-size   1024
Page-cache overflow:  now  0  max   1036
Scratch memory used:  now  0  max  0
Scratch overflow: now  0  max  0  max-size  0
rtreeA.test-closeallfiles... Ok
rtreeA.test-sharedcachesetting... Ok
Time: rtreeA.test 13 ms
Memory used:  now 12  max  68104  max-size  64000
Allocation count: now  1  max 38
Page-cache used:  now  0  max  0  max-size   1024
Page-cache overflow:  

[sqlite] Building custom amalgamation

2012-11-29 Thread Derek Wang
Hi All,

I followed this link: http://www.sqlite.org/howtocompile.html and read that
running > make sqlite3.c should yield 3 files, sqlite3.c, sqlite3.h, and
tclsqlite3.c.

In my build folder, I found sqlite3.c and sqlite3.h, but there's only a
tclsqlite.o.  It also created a tsrc folder where I can find a file called
tclsqlite.c.  Is this file supposed to be the same as tclsqlite3.c?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite-users Digest, Vol 59, Issue 28

2012-11-29 Thread Paul Corke
On 29 November 2012 10:04, Imanuel wrote:

> SELECT c FROM mytable WHERE b IS NOT c
> 
> Is this what you mean?

Or maybe she wants:

SELECT c FROM mytable WHERE c NOT IN (SELECT IFNULL(b,0) FROM mytable);

as in:


 sqlite> CREATE TABLE mytable (b number, c number);
 sqlite> INSERT INTO mytable VALUES(897313,897313);
 sqlite> INSERT INTO mytable VALUES(854809,868763);
 sqlite> INSERT INTO mytable VALUES(854808,854809);
 sqlite> INSERT INTO mytable VALUES(NULL,854808);

 sqlite> SELECT * FROM mytable;
 897313|897313
 854809|868763
 854808|854809
 |854808

 sqlite> SELECT c FROM mytable WHERE c NOT IN (SELECT IFNULL(b,0)
FROM mytable);
 868763


Regards,

Paul.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Please help. Read only process being blocked by writer process.

2012-11-29 Thread Alejandro Martínez
Thanks Richard, that makes perfect sense.

Thanks Howard, but I don't know what you are talking about, so I will
google "copy-on-write".

Григорий Григоренко, interesting! I'll consider this approach if at some
point I'm able to go "scorched earth" and start this from scratch, but at
this point I would have to change too much stuff.

I will go with the WAL solution for now. I'm just worried a buggy process
could hang while having a prepared statement open and cause the wal file to
grow forever, then causing errors in other processes.


On Wed, Nov 28, 2012 at 3:02 PM, Григорий Григоренко wrote:

> Hi,
>
>
> CREATE TABLE rev(current); INSERT INTO rev VALUES(0);
> CREATE TABLE data(..., revision);
>
> Readers:
> SELECT * FROM data JOIN rev ON revision = current WHERE ... ;
> // or "SELECT current FROM rev" into a var and passing its value in "SELECT *
> FROM data WHERE revision=?"
>
> Writer:
>
> // insert new records, old records remains
>
> SELECT current + 1 FROM rev; // into some variable
>
> BEGIN;
> INSERT INTO data(..., revision) VALUES (..., current );
> INSERT INTO data(..., revision) VALUES (..., current );
> ...
> ... repeat 1000 times
> ...
> COMMIT;
>
> BEGIN;
> INSERT INTO data(..., revision) VALUES (..., current );
> INSERT INTO data(..., revision) VALUES (..., current );
> ...
> ... repeat 1000 times
> ...
> COMMIT;
>
> // repeat inserting until all records are inserted; 1000 is a batch size
> taken by a wild guess)
>
> // now switch readers to new records
> BEGIN; UPDATE rev SET current = current + 1; COMMIT; // no need for an
> explicit begin/commit; just to show that this runs in its own transaction
>
> // now delete old records again incrementally
>
> // repeat this block until records stop deleting from table
> BEGIN;
> SELECT Max(rowid) - 1000 FROM data; // into variable MaxId
> DELETE FROM data WHERE revision = current - 1 AND rowid > MaxId;
> COMMIT;
>
> // done, there are only new records in a table, repeat above steps to
> insert new bunch of records
>
>
>
>
> Regards,
> GG
>
>
> Wed 28 Nov 2012 09:47:50 from Alejandro Martínez:
>
> > I have one process that each 30 minutes refills several tables in this
> > manner:
> >
> > sqlite3_open_v2(CACHEDB_PATH, _conn, SQLITE_OPEN_CREATE |
> > SQLITE_OPEN_READWRITE, NULL)
> >
> > - For each table:
> > begin deferred transaction; delete from [table];
> > insert into table ...
> > insert into table ...
> > insert into table ...
> > insert into table ...
> > [up to 180.000 rows]
> > commit;
> >
> > and sometimes the commit fails, so it is retried. (why would it fail? it's
> > the only writer)
> >
> > And then I have many other processes that open that sqlite database read
> > only: sqlite3_open_v2(_dbfile, _conn, SQLITE_OPEN_READONLY, NULL)
> > and sqlite3_busy_timeout(sqcache_conn, 5000).
> >
> > These processes create very simple prepared statements to query those
> > tables.
> >
> > And the big problem I'm having is that when I step these prepared
> > statements, they lock for 5 seconds and then fail.
> >
> > And I put that busy timeout just for completeness, because I wasn't
> > expecting any locking for a read-only query.
> >
> > I really need these queries not to lock or fail.
> >
> > What am I doing wrong?
> > Any suggestions?
> >
> > Thank you,
> > Alejandro
> >
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] EXISTS syntax (was: sqlite-users Digest, Vol 59, Issue 28)

2012-11-29 Thread Clemens Ladisch
YAN HONG YE wrote:
> b       c
> 897313  897313
> 854809  868763
> 854808  854809
> 854980  854808
> 854974  854980
> 854973  854974
> 854780  854973
> 854778  854780
> 854769  854778
> 854768  854769
> 854738  854768
> 854691  854738
> 854690  854691
> 853709  854690
> 856707  853709
> 856697  856707
> 856695  856697
> 927681  856695
> 927696  927681
> 927695  927696
> 927694  927695
> 927693  927694
> 927692  927693
> 927691  927692
> 927690  927691
>         927690
>         910986
>         910985
> I have a table like this; how do I get the c values for which no b = c exists?
>
> my command is:
> select c from mytable where not exists( b=c);

The expression after EXISTS must be a subquery:

  SELECT c FROM mytable AS m1
  WHERE NOT EXISTS (SELECT b FROM mytable AS m2 WHERE m2.b = m1.c);


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite-users Digest, Vol 59, Issue 28

2012-11-29 Thread Imanuel
SELECT c FROM mytable WHERE b IS NOT c

Is this what you mean?

Imanuel



Am 29.11.2012 07:45, schrieb YAN HONG YE:
> b       c
> 897313  897313
> 854809  868763
> 854808  854809
> 854980  854808
> 854974  854980
> 854973  854974
> 854780  854973
> 854778  854780
> 854769  854778
> 854768  854769
> 854738  854768
> 854691  854738
> 854690  854691
> 853709  854690
> 856707  853709
> 856697  856707
> 856695  856697
> 927681  856695
> 927696  927681
> 927695  927696
> 927694  927695
> 927693  927694
> 927692  927693
> 927691  927692
> 927690  927691
>         927690
>         910986
>         910985
> I have a table like this; how do I get the c values for which no b = c exists?
> 
> my command is:
> select c from mytable where not exists( b=c);
> but it is not correct.
> How do I get the correct result?
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] ANN: SQLite Maestro 12.11 released

2012-11-29 Thread SQL Maestro Group

Hi!

SQL Maestro Group announces the release of SQLite Maestro 12.11, a complete
Windows GUI solution for SQLite database management. The new version is
immediately available at
http://www.sqlmaestro.com/products/sqlite/maestro/

Top 10 new features
===================

1. Enhanced full text search table management.
2. Support for XML and GUID data types.
3. Report Designer.
4. New Start Page with quick access to recently connected databases.
5. Improved SQL Editor.
6. Inline images, highlighted keys and multi-column lookups in data grids.
7. Data import from CSV files containing multi-line strings.
8. Updated SQL Script Editor and Script Runner tools.
9. Colored profiles in Database Explorer.
10. New modern progress windows.

Full press-release (with explaining screenshots) is available at:
http://www.sqlmaestro.com/news/company/sqlite_maestro_12_11_released/

Background information:

SQL Maestro Group offers complete database admin and management tools for
MySQL, Oracle, MS SQL Server, PostgreSQL, SQLite, DB2, Firebird, SQL
Anywhere and MaxDB providing the highest performance, scalability and
reliability to meet the requirements of today's database applications.

Sincerely yours,
The SQL Maestro Group Team
http://www.sqlmaestro.com


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users