Re: [sqlite] How to detect "database disk image malformed" in advance?

2013-11-08 Thread Mario M. Westphal
> If you have a contrary example, please send me a
> copy of the database file via private email.

I take it from your reply that integrity_check should indeed reveal the
problems in the database file which cause the "disk image malformed" return
code, so my logic is OK as it is.

Unfortunately, the database is at a customer's site and is about 4 or 5 GB
in size. I doubt I can get this DB to you somehow.

I asked the user to send me the log files my application automatically
writes. Now the problem looks slightly different.

The diagnosis with the "OK" result was on 11/7, but the error happened on
11/8. So there was most likely a lot of database activity in between,
although no disk problems or other issues. Assuming integrity_check would
have revealed the problem but did not, the database seems to have become
corrupted between these two events. I'll ask him to run the diagnosis again
and check whether integrity_check reports something.

I test for the "OK" result, and if something different is returned, I write
the first 100 rows into a log file. Maybe we will see something there.
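In code, that check-and-log step looks roughly like this (a Python sketch, not my actual C++ implementation; `log_integrity` and the paths are made-up names):

```python
import sqlite3

def log_integrity(db_path, log_path, max_rows=100):
    """Run PRAGMA integrity_check and, if the result is anything other
    than 'ok', append up to max_rows of its messages to a log file."""
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "PRAGMA integrity_check(%d)" % max_rows).fetchall()
    finally:
        con.close()
    if rows == [("ok",)]:
        return True            # database passed the check
    with open(log_path, "a") as log:
        for (message,) in rows:
            log.write(message + "\n")
    return False
```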

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] How to detect "database disk image malformed" in advance?

2013-11-08 Thread Simon Slavin

On 8 Nov 2013, at 3:06pm, Mario M. Westphal  wrote:

> The first phase in my analysis checks the data in the database for logical
> problems, business logic issues etc.
> After this phase has been completed, my diagnosis function runs:
> 
> 1. PRAGMA integrity_check(100)
> 2. VACUUM
> 3. ANALYZE

I would move just the integrity_check to before your business-logic check.  And 
if that stage notices any problems, your program should abort there, since 
doing anything else to the database at that point will only mess it up further.

If you want the ultimate in cleanness and neatness, ANALYZE should be before 
VACUUM, but it won't make that much difference.
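That ordering, sketched in Python (`diagnose` is a hypothetical helper for illustration, not code from this thread):

```python
import sqlite3

def diagnose(db_path):
    """Suggested order: integrity_check first and abort on any problem;
    business-logic checks next; ANALYZE before VACUUM at the end."""
    con = sqlite3.connect(db_path, isolation_level=None)  # autocommit
    try:
        result = con.execute("PRAGMA integrity_check(100)").fetchall()
        if result != [("ok",)]:
            # Abort here: further writes could damage the file more.
            return result
        # ... application-level / business-logic checks go here ...
        con.execute("ANALYZE")  # collect statistics first
        con.execute("VACUUM")   # then rebuild and compact the file
        return result
    finally:
        con.close()
```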

Simon.


Re: [sqlite] How to detect "database disk image malformed" in advance?

2013-11-08 Thread Richard Hipp
On Fri, Nov 8, 2013 at 10:06 AM, Mario M. Westphal  wrote:

>
> 1. PRAGMA integrity_check(100)
> 2. VACUUM
> 3. ANALYZE
>
> My idea was that these routines should reveal problems reading from or
> writing to the database file.
>
> Today I had a case where my diagnosis considered a database as OK (all
> SQLite functions used in the diagnosis returned SQLITE_OK), but the
> database
> still runs into the dreaded "disk image malformed" problem shortly
> afterwards :-(
>

You should never get an SQLITE_CORRUPT error after querying a database file
that is unchanged since a successful PRAGMA integrity_check.  If you have a
contrary example, please send me a copy of the database file via private
email.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Row Level Locking as in InnoDB

2013-11-08 Thread Howard Chu

Raheel Gupta wrote:

Hi,


You use BDB SQL or BDB KV ?

I built BDB 6.0.20 with --enable-sql_compat
It made a libsqlite3.so in the .libs folder which I linked with my QT C++
Application.


You must try it with SQLightning too, https://gitorious.org/mdb/sqlightning

I tried to build it, but it says lmdb.h missing. Will check it as well.


You must copy (or symlink) the lmdb source code into the SQLightning source 
directory.



On Fri, Nov 8, 2013 at 4:12 PM, Aris Setyawan  wrote:


This is the BDB SQL doc I found.


http://docs.oracle.com/cd/E17076_02/html/bdb-sql/dbfeatures.html#bulkloading

If you insert record in bulk, you can use PRAGMA TXN_BULK for optimization.



I tested TXN_BULK; it is still pretty slow, nearly 2.4 times slower. The
space wastage is also pretty high: I set a page size of 64K, and the
DBNAME.db-journal folder has too many files.
DBNAME.db is 448MB
DBNAME.db-journal is 161MB.

That is after inserting 100,000 rows of 4K = 400MB.

In SQLite 3.7.17 I get only 6.69% wastage of space.
I wish BDB were faster, so that I had a real alternative.


BDB is inherently a very slow codebase. Look at the performance difference
between BDB and SQLite3 here: http://symas.com/mdb/microbench/#sec1. On random
writes, BDB KV is only 8% faster than SQLite. If you add the SQL parsing
overhead on top of that, that 8% performance margin is erased. If you use
batched random writes, SQLite is already 2% faster than BDB KV, so BDB SQL can
only be slower than SQLite.


Whatever other differences there may be, there is no performance benefit to 
using BDB as a backend for SQLite. In most cases there is a performance loss.


--
  -- Howard Chu
  CTO, Symas Corp.   http://www.symas.com
  Director, Highland Sun http://highlandsun.com/hyc/
  Chief Architect, OpenLDAP  http://www.openldap.org/project/


[sqlite] How to detect "database disk image malformed" in advance?

2013-11-08 Thread Mario M. Westphal

I have implemented diagnostic routines which allow my users to check the
database for problems.
If something bad happens to a database (which may not be the fault of SQLite
at all, e.g. a network problem or disk problem), I want to detect this as
early as possible in order to inform the user. This allows them to restore
their backups before too much data is lost.

The first phase in my analysis checks the data in the database for logical
problems, business logic issues etc.
After this phase has been completed, my diagnosis function runs:

1. PRAGMA integrity_check(100)
2. VACUUM
3. ANALYZE

My idea was that these routines should reveal problems reading from or
writing to the database file.

Today I had a case where my diagnosis considered a database as OK (all
SQLite functions used in the diagnosis returned SQLITE_OK), but the database
still runs into the dreaded "disk image malformed" problem shortly
afterwards :-(

My question: Is there an API function etc. which performs a reliable test of
all database pages and other structures? Something that will detect in
advance that "disk image malformed" will be returned when some areas of the
database are updated?




Re: [sqlite] Where does SQLite write out files?

2013-11-08 Thread Ben
Don't try to redirect the SQLite temporary files; in the long run, you'll be in
for more maintenance. Instead, work within the sandbox limitations.

Are you using SQLite as a file format or looking to edit arbitrary SQLite files?

If you're making a custom file format, put the SQLite database inside a package 
(search for "nsdocument package" if you're unfamiliar with this). This makes 
sandboxing almost irrelevant since your app opens the root directory of the 
package and then your app has pretty much free access to anything inside that 
directory. This is by far the easiest approach if you can use it.

If you need access to arbitrary SQLite files, then you'll need to read up on
NSFilePresenter and Related Items. I think this is mentioned in one or two of
this year's WWDC videos. Try http://asciiwwdc.com for searchable
transcriptions of the WWDC videos. This approach is more annoying and still
has some bugs, but it works.

- Ben


On 7 Nov 2013, at 17:42, L. Wood  wrote:

> What directories can SQLite possibly write files to?
> 
> Modern Mac OS X programs run in "sandbox mode". This is a requirement to 
> publish apps on Apple's Mac App Store. "Sandboxing" means that it is 
> impossible for an app to write files into locations outside a given 
> authorized "sandbox".
> 
> For instance, a proper way to get a safe temporary directory is Apple's own C 
> API, NSTemporaryDirectory().
> 
> How would you adapt SQLite to this environment?
> 
> 
> Here are my own thoughts so far (based on the webpage 
> http://sqlite.org/tempfiles.html):
> 
> * I'm aware of the files that SQLite can write to the *same* directory as 
> that of the actual database file, and I have ways to deal with that (by 
> putting the database file into a so-called package, so all the files will be 
> together as a unit).
> 
> * If I set the global variable sqlite3_temp_directory 
> (http://sqlite.org/c3ref/temp_directory.html) to the value returned by 
> Apple's NSTemporaryDirectory(), I'm hoping to be fine. Will I?
> 
> Unfortunately, the webpage also says:
> 
> "The manner in which SQLite uses temporary files is not considered part of 
> the contract that SQLite makes with applications. The information in this 
> document is a correct description of how SQLite operates at the time that 
> this document was written or last updated. But there is no guarantee that 
> future versions of SQLite will use temporary files in the same way. New kinds 
> of temporary files might be employed and some of the current temporary file 
> uses might be discontinued in future releases of SQLite."
> 
> This renders any efforts in this regard ill-defined. Are there any solid ways
> out? I'm using SQLite 3.8.1; are the rules on that webpage even true for this
> version? That is mentioned nowhere.



[sqlite] Recursive vs nested triggers

2013-11-08 Thread Carlson, Jay
All,

I saw in the documentation that recursive triggers are supported and enabled by 
default as of 3.7.0, and as there was no mention of nested triggers I assumed 
(wrongly, it would seem) that in sqlite3 the concept of recursive triggers 
included nested triggers as well.

In the following script, I have set up a few triggers for different purposes.
Because I don't want the triggers to cascade to one another, I set PRAGMA
recursive_triggers = 0. As shown in the log, however, this isn't working as I
wanted: after the DELETE operation, the delete trigger performs an UPDATE
operation, which in turn causes an UPDATE trigger to run once for each row
that was updated by the delete trigger.

How can I control this such that one trigger's statement won't cause another
trigger to run?

-Regards,
-Jay

--

PRAGMA recursive_triggers = 0;

CREATE TABLE test (val INTEGER NOT NULL);
CREATE TABLE log (timestamp, message);

CREATE TRIGGER trig_update_before_insert BEFORE INSERT ON test
BEGIN
INSERT INTO log VALUES (datetime('NOW'), 'Trigger: trig_update_before_insert, 
new.val='||new.val);
UPDATE test SET val = val + 1 WHERE val >= new.val;
END;

CREATE TRIGGER IF NOT EXISTS trig_update_after_delete AFTER DELETE ON test
BEGIN
INSERT INTO log VALUES (datetime('NOW'), 'Trigger: trig_update_after_delete, 
old.val='||old.val);
UPDATE test SET val = val - 1 WHERE val > old.val;
END;

CREATE TRIGGER IF NOT EXISTS trig_update_before_update_lower BEFORE UPDATE ON 
test WHEN new.val > old.val
BEGIN
INSERT INTO log VALUES (datetime('NOW'), 'Trigger ran: 
trig_update_before_update_lower, new.val='||new.val||', old.val='||old.val);
UPDATE test SET val = val - 1 WHERE val > old.val AND val <= new.val;
END;

CREATE TRIGGER IF NOT EXISTS trig_update_before_update_raise BEFORE UPDATE ON 
test WHEN new.val < old.val
BEGIN
INSERT INTO log VALUES (datetime('NOW'), 'Trigger ran: 
trig_update_before_update_raise, new.val='||new.val||', old.val='||old.val);
UPDATE test SET val = val + 1 WHERE val >= new.val AND val < old.val;
END;

INSERT INTO test VALUES (1);
INSERT INTO test VALUES (2);
INSERT INTO test VALUES (3);
DELETE FROM test WHERE val = 1;

SELECT * FROM test ORDER BY val;
SELECT * FROM log;

~/mipsbin/priority_continuity_testing # ./sqlite378_xip < test.sql
1
2
2013-11-08 00:03:42|Trigger: trig_update_before_insert, new.val=1
2013-11-08 00:03:42|Trigger: trig_update_before_insert, new.val=2
2013-11-08 00:03:42|Trigger: trig_update_before_insert, new.val=3
2013-11-08 00:03:42|Trigger: trig_update_after_delete, old.val=1
2013-11-08 00:03:42|Trigger ran: trig_update_before_update_raise, new.val=1, 
old.val=2
2013-11-08 00:03:42|Trigger ran: trig_update_before_update_raise, new.val=2, 
old.val=3
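A minimal reproduction of the behavior in the log (a Python sketch with simplified trigger names, not the script above): recursive_triggers = 0 stops a trigger from being invoked recursively, but it does not stop one trigger's statement from firing a *different* trigger, which is the nesting shown here.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    PRAGMA recursive_triggers = 0;
    CREATE TABLE test (val INTEGER NOT NULL);
    CREATE TABLE log (message);
    CREATE TRIGGER trg_del AFTER DELETE ON test
    BEGIN
        -- this UPDATE fires trg_upd even with recursive_triggers off
        UPDATE test SET val = val - 1 WHERE val > old.val;
    END;
    CREATE TRIGGER trg_upd BEFORE UPDATE ON test
    BEGIN
        INSERT INTO log VALUES ('update trigger fired');
    END;
    INSERT INTO test VALUES (1);
    INSERT INTO test VALUES (2);
    DELETE FROM test WHERE val = 1;
""")
fired = con.execute("SELECT count(*) FROM log").fetchone()[0]
print(fired)  # 1: trg_upd ran for the row touched by trg_del's UPDATE
```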


[sqlite] Why does SQLite refuse to use available indexes when adding a JOIN?

2013-11-08 Thread Ofer Sadgat
Hello, I have been trying to solve this problem for a while with no luck, so
this is the first time I am asking for help here. I apologize if this is not
the correct place to ask such questions; I would appreciate it if you would
forward this to the appropriate location or let me know where to send it.

Why does SQLite refuse to use available indexes when adding a JOIN?

To illustrate what I mean, look at the following statements to create the 
database:

CREATE TABLE foo(id TEXT);
CREATE INDEX `foo.index` ON foo(id);
CREATE TABLE bar(id TEXT);
CREATE INDEX `bar.index` ON bar(id);
CREATE VIEW baz AS SELECT id FROM foo UNION ALL SELECT id FROM bar;
CREATE TABLE bam(id TEXT, value TEXT);
INSERT INTO foo VALUES('123');
INSERT INTO foo VALUES('1123');
INSERT INTO foo VALUES('2123');
INSERT INTO foo VALUES('3123');
INSERT INTO bar VALUES('44123');
INSERT INTO bar VALUES('441123');
INSERT INTO bar VALUES('442123');
INSERT INTO bar VALUES('443123');

This is the query that I try to run:
EXPLAIN QUERY PLAN SELECT * FROM baz LEFT JOIN bam ON baz.id=bam.id WHERE 
baz.id IN ('123', '234');


Here is the result of that query:
SCAN TABLE foo (~100 rows)
SCAN TABLE bar (~100 rows)
COMPOUND SUBQUERIES 2 AND 3 (UNION ALL)
SCAN SUBQUERY 1 (~200 rows)
EXECUTE LIST SUBQUERY 4
SEARCH TABLE bam USING AUTOMATIC COVERING INDEX (id=?) (~7 rows)

As you can see, instead of using the indexes on foo and bar to evaluate the
WHERE clause, it does a full table scan of both.

If you want to see this / play with it, you can at SQL Fiddle: 
http://sqlfiddle.com/#!7/32af2/14 (use WebSQL)

If I do the query: EXPLAIN QUERY PLAN SELECT * FROM baz WHERE baz.id IN ('123', 
'234'); it does use the index to scan foo and bar.

On the other hand, the query EXPLAIN QUERY PLAN SELECT * FROM (SELECT * FROM
baz WHERE baz.id IN ('123', '234')) AS t LEFT JOIN bam ON t.id=bam.id; does
not use the index.
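For reference, the report is easy to reproduce with a short script (a Python sketch using the schema from the question; the exact plan text varies with the SQLite version linked into the interpreter, and newer versions may push the predicate into the view differently):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE foo(id TEXT);
    CREATE INDEX `foo.index` ON foo(id);
    CREATE TABLE bar(id TEXT);
    CREATE INDEX `bar.index` ON bar(id);
    CREATE VIEW baz AS SELECT id FROM foo UNION ALL SELECT id FROM bar;
    CREATE TABLE bam(id TEXT, value TEXT);
""")
plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT * FROM baz LEFT JOIN bam ON baz.id = bam.id "
    "WHERE baz.id IN ('123', '234')").fetchall()
for row in plan:
    print(row)  # one row per plan step; look for SCAN vs SEARCH
```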

The context for this is that I have an unknown number of databases which are 
sharded and then joined together via this view. This allows me to write queries 
which don't have to worry about the attached databases and the view is simply 
regenerated every time a new database needs to be added. For this reason, I 
cannot manually flatten the query (because I do not know a priori which 
databases will be attached). 

Therefore, is there any way I can write a query against this view that joins
another table and still uses the available indexes?

Thank you for your time,
Ofer Sadgat




Re: [sqlite] Row Level Locking as in InnoDB

2013-11-08 Thread Raheel Gupta
>
> This is the BDB SQL doc I found.
>
>
> http://docs.oracle.com/cd/E17076_02/html/bdb-sql/dbfeatures.html#bulkloading
>
> If you insert record in bulk, you can use PRAGMA TXN_BULK for optimization.
>
I tested TXN_BULK; it is still pretty slow, nearly 2.4 times slower. The
space wastage is also pretty high: I set a page size of 64K, and the
DBNAME.db-journal folder has too many files.
DBNAME.db is 448MB
DBNAME.db-journal is 161MB.

That is after inserting 100,000 rows of 4K = 400MB.

In SQLite 3.7.17 I get only 6.69% wastage of space.
I wish BDB were faster, so that I had a real alternative.


Re: [sqlite] Row Level Locking as in InnoDB

2013-11-08 Thread Raheel Gupta
Hi,

>> You use BDB SQL or BDB KV ?
I built BDB 6.0.20 with --enable-sql_compat
It made a libsqlite3.so in the .libs folder which I linked with my QT C++
Application.

>> You must try it with SQLightning too, https://gitorious.org/mdb/sqlightning

I tried to build it, but it says lmdb.h missing. Will check it as well.



On Fri, Nov 8, 2013 at 4:12 PM, Aris Setyawan  wrote:

> This is the BDB SQL doc I found.
>
>
> http://docs.oracle.com/cd/E17076_02/html/bdb-sql/dbfeatures.html#bulkloading
>
> If you insert record in bulk, you can use PRAGMA TXN_BULK for optimization.
>


Re: [sqlite] sqlite db getting corrupt on power outage scenarios

2013-11-08 Thread Simon Slavin

On 8 Nov 2013, at 6:41am, Mayank Kumar (mayankum)  wrote:

> [MK] I am using a flash drive running on linux kernel 2.6.27.10(SMP). The 
> corruption is sometimes seen on sqlite db on the flash drive and sometimes 
> seen on sqlite db in nvram. Network file system is not involved.

I have used SQLite extensively on a Macintosh, storing the data on a Flash
drive and pulling the drive without warning to simulate hardware and power
problems.  I have tested with Flash drives formatted in FAT format and other
Flash drives formatted in HFS+ (a Macintosh-native format).  While I have lost
new transactions by doing this, unless the format structure of the drive was
damaged I have never ended up with a database which fails

PRAGMA integrity_check

Note that it is important not to manually delete journal files after plugging
the drive back in but before starting a SQLite app.

The one problem with this is, as noted, when the format structure of the drive
is damaged, i.e. the drive fails disk/folder/file-structure checking software
like chkdsk or Disk Utility.  In those situations, SQLite can attempt to
rescue its database file but end up causing other problems, because it is
writing to a corrupt workspace.  That cannot be solved inside SQLite.

> [MK] In my case the db is corrupted but there is also a journal file present.
> When I try to write a new transaction to the corrupted db, the new
> transaction is successful and at the same time the journal file gets deleted.
> The resultant db is still corrupted. Can we assume from this behavior that
> the corruption happened before the last failed transaction for which there is
> a journal file, assuming that a journal file is created per transaction?

I'm not familiar enough with the inner workings of SQLite to give a definitive 
answer, but your description of what happens is the same as what I see and ...



"The default journaling mode is DELETE, which is the default behavior of 
deleting the rollback journal file at the end of each transaction"

which suggests that you're right if you haven't changed your journal mode.  You 
may be able to use the PRAGMA I listed earlier to learn more about your setup.
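Checking the journaling mode takes one line (a Python sketch; `probe.db` is just a scratch file name, and the default for a file database is "delete" unless the application changed it):

```python
import os
import sqlite3
import tempfile

# Open a throwaway file database and ask SQLite which journal mode
# it would use; in-memory databases always report "memory" instead.
path = os.path.join(tempfile.mkdtemp(), "probe.db")
con = sqlite3.connect(path)
mode = con.execute("PRAGMA journal_mode").fetchone()[0]
print(mode)  # "delete" on an unmodified default configuration
con.close()
```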

Simon.


Re: [sqlite] Row Level Locking as in InnoDB

2013-11-08 Thread Aris Setyawan
This is the BDB SQL doc I found.

http://docs.oracle.com/cd/E17076_02/html/bdb-sql/dbfeatures.html#bulkloading

If you insert record in bulk, you can use PRAGMA TXN_BULK for optimization.





Re: [sqlite] Row Level Locking as in InnoDB

2013-11-08 Thread Aris Setyawan
You use BDB SQL or BDB KV ?

You must try it with SQLightning too, https://gitorious.org/mdb/sqlightning




Re: [sqlite] Row Level Locking as in InnoDB

2013-11-08 Thread Aris Setyawan
> For a single threaded application BDB is very bad after I tested.
> It takes nearly 2.5 times the amount of time and CPU to do a transaction of
> 40MB Data. E.g. if SQLite did the 40MB data transaction (10,000 rows of 4K)
> in 1 second, BDB was taking 2.5 seconds and more CPU as well. I did this in
> QT C++. Overall the BDB SQL interface is slower than SQLite for inserts.
> That is what I found.

Have you consulted the BDB forum about this?
BDB doesn't have SQL parsing overhead, so it will be faster in general.



Re: [sqlite] Row Level Locking as in InnoDB

2013-11-08 Thread Raheel Gupta
>> No. It's not even feature-frozen yet, as far as we know. And whenever it
>> is, it's incredibly unlikely to have row level locking.

Please add row-level locking if possible.


On Fri, Nov 8, 2013 at 12:03 AM, Simon Slavin  wrote:

>
> On 7 Nov 2013, at 6:31pm, Raheel Gupta  wrote:
>
> > Any idea when will SQLite4 be released as stable ?
>
> No.  It's not even feature-frozen yet, as far as we know.  And whenever it
> is, it's incredibly unlikely to have row level locking.
>
> Simon.


Re: [sqlite] Row Level Locking as in InnoDB

2013-11-08 Thread Raheel Gupta
>> If you have many processor cores [and big RAM], then I recommend
>> BDB SQL over SQLite, because you can have many processes or threads to
>> write to a database concurrently.

For a single threaded application BDB is very bad after I tested.
It takes nearly 2.5 times the amount of time and CPU to do a transaction of
40MB Data. E.g. if SQLite did the 40MB data transaction (10,000 rows of 4K)
in 1 second, BDB was taking 2.5 seconds and more CPU as well. I did this in
QT C++. Overall the BDB SQL interface is slower than SQLite for inserts.
That is what I found.
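The workload described can be sketched as follows (Python rather than the QT C++ used in the test, so the absolute numbers will not match; table and file names are made up):

```python
import os
import sqlite3
import tempfile
import time

# One transaction inserting 10,000 rows of 4 KB each (~40 MB total),
# mirroring the benchmark shape described in the thread.
path = os.path.join(tempfile.mkdtemp(), "bench.db")
con = sqlite3.connect(path)
con.execute("CREATE TABLE t (payload BLOB)")
blob = os.urandom(4096)  # one 4 KB payload reused for every row

start = time.perf_counter()
with con:  # a single transaction for all 10,000 inserts
    con.executemany("INSERT INTO t VALUES (?)",
                    ((blob,) for _ in range(10000)))
elapsed = time.perf_counter() - start
print("%.2fs for 10,000 x 4KB inserts" % elapsed)
```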



On Fri, Nov 8, 2013 at 12:03 AM, Simon Slavin  wrote:

>
> On 7 Nov 2013, at 6:31pm, Raheel Gupta  wrote:
>
> > Any idea when will SQLite4 be released as stable ?
>
> No.  It's not even feature-frozen yet, as far as we know.  And whenever it
> is, it's incredibly unlikely to have row level locking.
>
> Simon.