Re: [sqlite] SQL Logic error or missing database

2013-04-07 Thread Pavel Ivanov
Do you know that VS2012 has known optimizer bugs?

Pavel
On Apr 6, 2013 5:01 AM, "ibrahim"  wrote:

> On 05.04.2013 17:01, Dan Kennedy wrote:
>
>> On 04/05/2013 09:08 PM, Rob Collie wrote:
>>
>>> Yeap, I'm on Visual Studio 2012. I've created a console app:
>>>
>>>
>>>   sqlite3 *oDatabase;
>>>   int returnValue;
>>>   returnValue = sqlite3_open_v2("file://C:/Newfolder/testing.db",
>>> &oDatabase, SQLITE_OPEN_CREATE, NULL);
>>>   if (returnValue != SQLITE_OK )
>>>   {
>>>//sqlite3_close(oDatabase);
>>>return returnValue ;
>>>   }
>>>   int anyKey;
>>>   return 0;
>>>
>>> It returns 21. Checking the other project, the open actually does return
>>> 21
>>> too.
>>>
>>
>> This one is returning SQLITE_MISUSE because the SQLITE_OPEN_READWRITE
>> flag is not being passed. It seems quite odd that the other code
>> would do the same though.
>>
>>
>>
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
> Thanks Dan, I just copied and pasted his code.
>
> correction :
>
> returnValue = sqlite3_open_v2 ("C:\\Newfolder\\testing.db", &oDatabase,
> SQLITE_OPEN_CREATE | SQLITE_OPEN_READWRITE, NULL) ;
>
> and to make the test even simpler:
>
> returnValue = sqlite3_open ("C:\\Newfolder\\testing.xyz", &oDatabase) ;
>
> Try a different file extension; the extension could be a problem on some systems.
>
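The flag discussion above maps onto the URI modes of Python's sqlite3 module, which may be an easier way to experiment with it: mode=rwc corresponds to SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, and mode=ro to SQLITE_OPEN_READONLY. The paths below are invented for illustration.

```python
import os
import sqlite3
import tempfile

# Hypothetical throwaway path, standing in for C:\Newfolder\testing.db.
path = os.path.join(tempfile.mkdtemp(), "testing.db")

# mode=rwc is the SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE combination:
conn = sqlite3.connect(f"file:{path}?mode=rwc", uri=True)
conn.execute("CREATE TABLE t(x)")
conn.close()

# A mode that cannot create the file fails on a missing database, much as
# an incomplete flag combination fails in the C API:
missing = os.path.join(tempfile.mkdtemp(), "absent.db")
try:
    sqlite3.connect(f"file:{missing}?mode=ro", uri=True)
    opened = True
except sqlite3.OperationalError:
    opened = False
# opened is False; the rwc-created database exists on disk
```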


Re: [sqlite] How to achieve fastest possible write performance for a strange and limited case

2013-03-29 Thread Pavel Ivanov
On Fri, Mar 29, 2013 at 12:17 PM, Jeff Archer
 wrote:
> I have previously made an apparently bad assumption about this so now I
> would like to go back to the beginning of the problem and ask the most
> basic question first without any preconceived ideas.
>
> This use case is from an image processing application.  I have a large
> amount of intermediate data (way exceeds physical memory on my 24GB
> machine).  So, I need to store it temporarily on disk until getting to next
> phase of processing.  I am planning to use a large SSD dedicated to holding
> this temporary data.  I do not need any recoverability in case of hardware,
> power or other failure.   Each item to be stored is 9 DWORDs, 4 doubles and
> 2 variable sized BLOBS which are images.
>
> I could write directly to a file myself.  But I would need to provide some
> minimal indexing, some amount of housekeeping to manage variable
> sized BLOBS and some minimal synchronization so that multiple instances of
> the same application could operate simultaneously on a single set of data.
>
> So, then I thought that SQLite could manage these things nicely for me so
> that I don't have to write and debug indexing and housekeeping code that
> already exists in SQLite.
>
> So, question is:  What is the way to get the fastest possible performance
> from SQLite when I am willing to give up all recoverability guarantees?

Use
pragma journal_mode = off;
pragma synchronous = off;
pragma locking_mode = exclusive;

In addition to that you may issue a BEGIN statement at the beginning of
the application and never COMMIT.
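A sketch of those settings from Python's sqlite3 module. The table shape and row count are invented for illustration; in a real run of the suggested scheme you would skip the final COMMIT entirely.

```python
import os
import sqlite3
import tempfile

db = os.path.join(tempfile.mkdtemp(), "scratch.db")  # throwaway path
conn = sqlite3.connect(db, isolation_level=None)     # autocommit: we issue BEGIN ourselves

# Give up all recoverability guarantees in exchange for write speed:
conn.execute("PRAGMA journal_mode = off")
conn.execute("PRAGMA synchronous = off")
conn.execute("PRAGMA locking_mode = exclusive")

conn.execute("CREATE TABLE items (a INTEGER, b REAL, img BLOB)")
conn.execute("BEGIN")  # one transaction for the whole run
for i in range(1000):
    conn.execute("INSERT INTO items VALUES (?, ?, ?)", (i, i * 0.5, b"\x00" * 16))
conn.execute("COMMIT")  # only so the inserted rows are visible below

count = conn.execute("SELECT COUNT(*) FROM items").fetchone()[0]
# count == 1000
```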


Pavel


Re: [sqlite] Efficient way to store counters

2013-03-13 Thread Pavel Ivanov
On Wed, Mar 13, 2013 at 11:48 AM, David King  wrote:
>> BTW, in case you don't do that yet your best performance will be if
>> you prepare your UPDATE and INSERT statements only once and then do
>> bind + step + reset in that 100k times loop.
>
>
> In principle I agree, but since the temporary-table version is blindingly 
> fast up to the update-the-disk portion it's definitely not a bottleneck at
> this point

I was talking about your initial implementation where you did this 100k times:

> update_counter(k1, k2, count=count+1, expires=now+count*1day)
> if rows_updated != 1: insert_counter(k1, k2, count=1, expires=now+1day)

Not about your final version with one INSERT OR REPLACE. Was your
statement about the same thing? If yes I didn't understand what you
meant.


Pavel


Re: [sqlite] Efficient way to store counters

2013-03-13 Thread Pavel Ivanov
On Tue, Mar 12, 2013 at 11:03 PM, David King  wrote:
>> > At first I was just doing something like this pseducode:
>> > update_counter(k1, k2, count=count+1, expires=now+count*1day)
>> > if rows_updated != 1: insert_counter(k1, k2, count=1, expires=now+1day)
>>
>> Assuming these 2 statements constitute each of the 10k-100k steps you
>> mentioned above and all of these steps are wrapped up in BEGIN-COMMIT
>> block this is probably the most efficient way of doing this. The only
>> improvement could be if you are doing creates more often than updates.
>> Then you can switch and do INSERT first and then UPDATE if necessary.
>> It could gain you a little time.
>
>
> Yeah. I even tried keeping track of how many hits/misses I had and 
> re-ordering the attempted INSERT/UPDATE as appropriate. A batch of 100k of 
> these is done in a single transaction
>
>> > but was having serious performance problems that seem to be confined to
>> > those lines. So I converted it to INSERT OR REPLACE which had no
>> > noticeable impact on performance.
>> Actually my understanding would suggest that INSERT OR REPLACE should
>> execute slower than UPDATE + INSERT (or INSERT + UPDATE).
>
> […]
>> > Convinced the problem was in my code, I decided to offload as much as 
>> > possible to sqlite. Now my code looks like:
>>
>> This should be much-much slower than UPDATE + INSERT.
>
>
> That's unfortunate because the overall performance was about the same ±10% 
> between all three approaches :(
>
>> First of all in the statement above you don't gain benefit from
>> uniqueness and replace about 10k rows twice.
>
>
> Are you sure? The SELECT in the INSERT OR UPDATE selects "FROM 
> trans_counters_v AS c", the grouped temporary view. So it should only see any 
> given key pair once before it starts doing any inserting at all
>
>> Second with such low
>> repeatability you don't gain much from doing it with such complicated
>> INSERT. And about journal size: imagine that you've got "lucky" and
>> all those 94k rows are each in its own page in the counters table.
>> SQLite will have to save each of those pages in the journal which will
>> give journal size of about 94k * 4096 ~ 400M.
>
>
> I hadn't thought about it that way, that's true. And it's probably wildly 
> seeking all over the disk to do it. The reads are probably fine because the 
> machine has plenty of RAM to devote to page cache, it's the random writes 
> that are killing it.
>
>> I don't think there's anything better than what you did initially.
>
> As for the fundamental approach, I figured as much. The rearrangement into 
> the giant INSERT OR REPLACE was just to prove to myself that the problem 
> wasn't elsewhere in my code
>
> For optimising it on the sqlite front, I've played with page sizes, 
> journaling modes, and changing the transaction batch size without much luck. 
> I don't have strong consistency requirements for e.g. power failures or OS 
> crashes but I do need an application crash to not take it out so I can't just 
> go without the journal altogether (which does help the problem, but isn't 
> huge).

BTW, in case you don't do that yet your best performance will be if
you prepare your UPDATE and INSERT statements only once and then do
bind + step + reset in that 100k times loop.


Pavel


Re: [sqlite] Efficient way to store counters

2013-03-13 Thread Pavel Ivanov
>> First of all in the statement above you don't gain benefit from
>> uniqueness and replace about 10k rows twice.
>
> Are you sure? The SELECT in the INSERT OR UPDATE selects "FROM 
> trans_counters_v AS c", the grouped temporary view. So it should only see any 
> given key pair once before it starts doing any inserting at all

Sorry, you are right. I missed the GROUP BY part...


Pavel


On Tue, Mar 12, 2013 at 11:03 PM, David King  wrote:
>> > At first I was just doing something like this pseducode:
>> > update_counter(k1, k2, count=count+1, expires=now+count*1day)
>> > if rows_updated != 1: insert_counter(k1, k2, count=1, expires=now+1day)
>>
>> Assuming these 2 statements constitute each of the 10k-100k steps you
>> mentioned above and all of these steps are wrapped up in BEGIN-COMMIT
>> block this is probably the most efficient way of doing this. The only
>> improvement could be if you are doing creates more often than updates.
>> Then you can switch and do INSERT first and then UPDATE if necessary.
>> It could gain you a little time.
>
>
> Yeah. I even tried keeping track of how many hits/misses I had and 
> re-ordering the attempted INSERT/UPDATE as appropriate. A batch of 100k of 
> these is done in a single transaction
>
>> > but was having serious performance problems that seem to be confined to
>> > those lines. So I converted it to INSERT OR REPLACE which had no
>> > noticeable impact on performance.
>> Actually my understanding would suggest that INSERT OR REPLACE should
>> execute slower than UPDATE + INSERT (or INSERT + UPDATE).
>
> […]
>> > Convinced the problem was in my code, I decided to offload as much as 
>> > possible to sqlite. Now my code looks like:
>>
>> This should be much-much slower than UPDATE + INSERT.
>
>
> That's unfortunate because the overall performance was about the same ±10% 
> between all three approaches :(
>
>> First of all in the statement above you don't gain benefit from
>> uniqueness and replace about 10k rows twice.
>
>
> Are you sure? The SELECT in the INSERT OR UPDATE selects "FROM 
> trans_counters_v AS c", the grouped temporary view. So it should only see any 
> given key pair once before it starts doing any inserting at all
>
>> Second with such low
>> repeatability you don't gain much from doing it with such complicated
>> INSERT. And about journal size: imagine that you've got "lucky" and
>> all those 94k rows are each in its own page in the counters table.
>> SQLite will have to save each of those pages in the journal which will
>> give journal size of about 94k * 4096 ~ 400M.
>
>
> I hadn't thought about it that way, that's true. And it's probably wildly 
> seeking all over the disk to do it. The reads are probably fine because the 
> machine has plenty of RAM to devote to page cache, it's the random writes 
> that are killing it.
>
>> I don't think there's anything better than what you did initially.
>
> As for the fundamental approach, I figured as much. The rearrangement into 
> the giant INSERT OR REPLACE was just to prove to myself that the problem 
> wasn't elsewhere in my code
>
> For optimising it on the sqlite front, I've played with page sizes, 
> journaling modes, and changing the transaction batch size without much luck. 
> I don't have strong consistency requirements for e.g. power failures or OS 
> crashes but I do need an application crash to not take it out so I can't just 
> go without the journal altogether (which does help the problem, but isn't 
> huge).
>
>


Re: [sqlite] Efficient way to store counters

2013-03-12 Thread Pavel Ivanov
On Tue, Mar 12, 2013 at 8:29 PM, David King  wrote:
> I'm trying to find an efficient way to store simple incrementing integers but 
> I'm having trouble finding an efficient way to do it
>
> My database looks like:
>
> CREATE TABLE counters (
>   k1, k2,
>   count, -- how many we've seen
>   expires,
>   PRIMARY KEY (k1, k2)
> );
> CREATE INDEX counters_expires_idx ON counters(expires);
>
> It is about 1.9gb and contains ~22 million of these rows. A given transaction 
> updates or creates between 10k and 100k of them.
>
> At first I was just doing something like this pseducode:
>
> update_counter(k1, k2, count=count+1, expires=now+count*1day)
> if rows_updated != 1: insert_counter(k1, k2, count=1, expires=now+1day)

Assuming these 2 statements constitute each of the 10k-100k steps you
mentioned above and all of these steps are wrapped up in BEGIN-COMMIT
block this is probably the most efficient way of doing this. The only
improvement could be if you are doing creates more often than updates.
Then you can switch and do INSERT first and then UPDATE if necessary.
It could gain you a little time.
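A minimal sketch of that UPDATE-then-INSERT pattern in Python. The table shape and key distribution are invented; reusing identical SQL text every iteration lets the sqlite3 module reuse its cached prepared statement, which is the Python analogue of prepare once, then bind + step + reset.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE counters (k1, k2, count, expires, PRIMARY KEY (k1, k2))")

now, day = 0, 86400  # stand-ins for real timestamps
events = [("a", i % 100) for i in range(10_000)]  # synthetic key pairs

with conn:  # one BEGIN-COMMIT around the whole batch
    for k1, k2 in events:
        # RHS column references see the pre-update value, so (count + 1)
        # is the freshly incremented counter:
        cur = conn.execute(
            "UPDATE counters SET count = count + 1, "
            "expires = ? + (count + 1) * ? WHERE k1 = ? AND k2 = ?",
            (now, day, k1, k2))
        if cur.rowcount != 1:  # no row updated -> first sighting, insert it
            conn.execute("INSERT INTO counters VALUES (?, ?, 1, ?)",
                         (k1, k2, now + day))

rows = conn.execute("SELECT COUNT(*) FROM counters").fetchone()[0]
total = conn.execute("SELECT SUM(count) FROM counters").fetchone()[0]
# rows == 100, total == 10000
```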

> but was having serious performance problems that seem to be confined to
> those lines. So I converted it to INSERT OR REPLACE which had no noticeable
> impact on performance.

Actually my understanding would suggest that INSERT OR REPLACE should
execute slower than UPDATE + INSERT (or INSERT + UPDATE).

> Convinced the problem was in my code, I decided to offload as much as 
> possible to sqlite. Now my code looks like:
>
> === cut here =
>
> PRAGMA synchronous=OFF;
> PRAGMA temp_store=MEMORY;
>
>
>
> CREATE TEMPORARY TABLE trans_counters(k1, k2);
>
> -- (add all of the items to that temporary table)
>
> CREATE TEMPORARY VIEW trans_counters_v AS
> SELECT k1 AS k1,
> k2 AS k2,
> COUNT(*) AS count
> FROM trans_counters
> GROUP BY k1, k2;
>
>
>
> INSERT OR REPLACE INTO counters
> SELECT c.k1 AS k1,
> c.k2 AS k2,
> COALESCE((SELECT count FROM counters WHERE k1 = c.k1 AND k2 = c.k2),
> 0)+c.count AS count,
> (COALESCE((SELECT count FROM counters WHERE k1 = c.k1 AND k2 = c.k2),
> 0)+c.count)*24*60*60+? AS expires
> FROM trans_counters_v AS c

This should be much-much slower than UPDATE + INSERT.

> === cut here =
>
> Now the code that inserts all of the rows into the memory table executes 
> nearly instantly, but the big INSERT takes 15+ minutes. Meanwhile the journal 
> (in either rollback or wal mode) balloons to over 300mb in size. The 
> temporary table itself is only about 1.8mb of data (102,603 rows, 94,064 
> unique) so where is all of the journal coming from?

First of all in the statement above you don't gain benefit from
uniqueness and replace about 10k rows twice. Second with such low
repeatability you don't gain much from doing it with such complicated
INSERT. And about journal size: imagine that you've got "lucky" and
all those 94k rows are each in its own page in the counters table.
SQLite will have to save each of those pages in the journal which will
give journal size of about 94k * 4096 ~ 400M.
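The estimate checks out with quick arithmetic (4096 bytes is the assumed default page size):

```python
# Worst case from the post: every touched row lives on its own page, so each
# dirtied 4096-byte page is copied into the rollback journal.
rows = 94_064
page_size = 4096
journal_mb = rows * page_size / (1024 * 1024)
# journal_mb is roughly 367, i.e. the "~ 400M" ballpark above
```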

> The process takes nearly 0 CPU during this time, the disk becomes very active 
> (but low throughput, reading and writing maybe 200k/s judging by the rate of 
> growth of the journal), and sampling the process with OS X's Activity Monitor 
> while it's busy outputs:
>
> 100% 2869 _pysqlite_query_execute (in _sqlite3.so) + 1886 [0x101945e5e]
> 100% 2869 pysqlite_step (in _sqlite3.so) + 47 [0x10194893f]
> 100% 2869 sqlite3_step (in libsqlite3.dylib) + 1883 [0x7fff8d95ca5b]
> 100% 2869 sqlite3VdbeExec (in libsqlite3.dylib) + 3327 [0x7fff8d95e3af]
> 100% 2869 sqlite3BtreeMovetoUnpacked (in libsqlite3.dylib) + 761 
> [0x7fff8d97ab89]
> 100% 2869 moveToChild (in libsqlite3.dylib) + 146 [0x7fff8d96c872]
> 100% 2869 sqlite3PagerAcquire (in libsqlite3.dylib) + 194 [0x7fff8d93dc22]
> 100% 2869 sqlite3PcacheFetch (in libsqlite3.dylib) + 475 [0x7fff8d93e02b]
> 100% 2869 pagerStress (in libsqlite3.dylib) + 670 [0x7fff8d9c407e]
> 100% 2869 pager_write_pagelist (in libsqlite3.dylib) + 149 [0x7fff8d999a35]
> 100% 2869 unixWrite (in libsqlite3.dylib) + 83 [0x7fff8d98bd73]
> 100% 2869 pwrite (in libsystem_kernel.dylib) + 10 [0x7fff8130bab6]
>
>
>
> That is, 2869 of 2869 samples, 100% of the time, was spent in sqlite3_step 
> writing the data to disk. Further samples look basically the same with an 
> occasional read-path taking up to ~10% of the time.
>
> VACUUM ANALYZE doesn't look to have any effect. I'm running sqlite 3.7.7 on 
> Mac OS X 10.7.5 via the Python sqlite3 module
>
> So I feel like something about what I'm doing is fundamentally flawed given 
> something about sqlite's performance model. All I want is a count of the 
> number of times that I've seen each pair (k1, k2), is there a better way to 
> do this without storing them all individually and grouping them later? (This 
> would be prohibitively large.)

I don't think there's anything better than what you did initially.

Re: [sqlite] inner vs. outer join inconsistency

2013-03-04 Thread Pavel Ivanov
On Sun, Mar 3, 2013 at 11:48 AM, Tom Matrix  wrote:
> Hello,
>
> I’ve encountered a problem, which is hardly reproducible on arbitrary
> databases, therefore I attached one.
> The problem is that the inner join omits the result on large datasets. But
> again, ’large’ does not necessarily refer simply to the amount of data; I
> couldn’t reproduce it on different (arbitrary) databases with larger
> datasets, but I could on another database with the same structure.
>
> To be more specific:
>
> The following query reports 18900080 rows (after some computation time):
>
> SELECT COUNT(*)
> FROM entryintervals ei1
> JOIN interval2label il1 ON ei1.id = il1.entry_intervalid
> JOIN labels la1 ON il1.labelid = la1.id
> JOIN labelgroups lg1    ON la1.groupid = lg1.id
> JOIN entries en1        ON ei1.entryid = en1.id
> JOIN modules mo1        ON en1.moduleid = mo1.id
> JOIN measurements me1   ON en1.measurementid = me1.id
> JOIN entries en2        ON en1.measurementid = en2.measurementid
> JOIN modules mo2        ON en2.moduleid = mo2.id
> JOIN entryintervals ei2 ON en2.id = ei2.entryid
> JOIN interval2label il2 ON ei2.id = il2.entry_intervalid
> LEFT JOIN labels la2    ON il2.labelid = la2.id
>
> However, the following one reports 0 rows immediately:
>
> SELECT COUNT(*)
> FROM entryintervals ei1
> JOIN interval2label il1 ON ei1.id = il1.entry_intervalid
> JOIN labels la1 ON il1.labelid = la1.id
> JOIN labelgroups lg1    ON la1.groupid = lg1.id
> JOIN entries en1        ON ei1.entryid = en1.id
> JOIN modules mo1        ON en1.moduleid = mo1.id
> JOIN measurements me1   ON en1.measurementid = me1.id
> JOIN entries en2        ON en1.measurementid = en2.measurementid
> JOIN modules mo2        ON en2.moduleid = mo2.id
> JOIN entryintervals ei2 ON en2.id = ei2.entryid
> JOIN interval2label il2 ON ei2.id = il2.entry_intervalid
> JOIN labels la2 ON il2.labelid = la2.id
>
> This is not what I expected. So I checked, that for example
>
> SELECT il2.labelid
> FROM entryintervals ei1
> JOIN interval2label il1 ON ei1.id = il1.entry_intervalid
> JOIN labels la1 ON il1.labelid = la1.id
> JOIN labelgroups lg1    ON la1.groupid = lg1.id
> JOIN entries en1        ON ei1.entryid = en1.id
> JOIN modules mo1        ON en1.moduleid = mo1.id
> JOIN measurements me1   ON en1.measurementid = me1.id
> JOIN entries en2        ON en1.measurementid = en2.measurementid
> JOIN modules mo2        ON en2.moduleid = mo2.id
> JOIN entryintervals ei2 ON en2.id = ei2.entryid
> JOIN interval2label il2 ON ei2.id = il2.entry_intervalid
> LEFT JOIN labels la2    ON il2.labelid = la2.id
> LIMIT 1
>
> returns the id 47. The empty set, returned by the query containing
> inner joins only, implies that no row exists with id 47 in the
> ’labels’ table. This is, however, not true:
>
> SELECT * FROM labels WHERE id = 47
>
> returns the appropriate row...

First of all, attachments are stripped from this list, so nobody
saw your database.
Second, your results above may not be definitive if different
datatypes and affinities come into play. So to check things out you
should add la2.id to the above SELECT field list. If it is NULL then
the results of the query with the inner join are correct. And to check
why they are not what you expect, you can run the following queries:

SELECT id, typeof(id) FROM labels WHERE id = 47;
SELECT labelid, typeof(labelid) FROM interval2label WHERE labelid = 47;

And you could show us the schema of these two tables so that we can
explain the results to you.
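The kind of mismatch being described can be sketched with simplified stand-in tables. Note that both compared columns are declared without a type here, so no affinity conversion rescues the comparison; with the real schema the details may differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical, simplified versions of the poster's tables. labelid and id
# are declared without types, so values keep their storage class as-is.
conn.execute("CREATE TABLE labels (id, name)")
conn.execute("CREATE TABLE interval2label (entry_intervalid, labelid)")
conn.execute("INSERT INTO labels VALUES (47, 'some label')")  # INTEGER 47
conn.execute("INSERT INTO interval2label VALUES (1, '47')")   # TEXT '47'

inner = conn.execute("SELECT COUNT(*) FROM interval2label il "
                     "JOIN labels la ON il.labelid = la.id").fetchone()[0]
outer = conn.execute("SELECT COUNT(*) FROM interval2label il "
                     "LEFT JOIN labels la ON il.labelid = la.id").fetchone()[0]
by_id = conn.execute("SELECT COUNT(*) FROM labels WHERE id = 47").fetchone()[0]
kind = conn.execute("SELECT typeof(labelid) FROM interval2label").fetchone()[0]
# inner == 0 (TEXT '47' never equals INTEGER 47), outer == 1, by_id == 1,
# kind == 'text': the inner-vs-outer discrepancy, revealed by typeof()
```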


Pavel

> I’m using the latest (SQLite version 3.7.15.2 2013-01-09 11:53:05)
> version on Windows.
> Previously, I used SQLite version 3.7.13 2012-06-11 02:05:22 with
> Cygwin, which gave similar result, except that it was enough to have
> less joins:
>
> SELECT COUNT(*)
> FROM entryintervals ei1
> JOIN interval2label il1 ON ei1.id = il1.entry_intervalid
> JOIN labels la1 ON il1.labelid = la1.id
> JOIN labelgroups lg1ON la1.groupid = lg1.id
> JOIN entries en1ON ei1.entryid = en1.id
> JOIN modules mo1ON en1.moduleid = mo1.id
> JOIN measurements me1   ON en1.measurementid = me1.id
> JOIN entries en2ON en1.measurementid = en2.measurementid
> JOIN modules mo2ON en2.moduleid = mo2.id
> [LEFT] JOIN entryintervals ei2 ON en2.id = ei2.entryid
>
> Have you ever experienced such behavior?
> Please take a look at it, as it might be a rare but serious low-level
> problem.
>
> Thanks,
> Tamás
>
>


Re: [sqlite] update the same record in multiple segments at once

2013-02-28 Thread Pavel Ivanov
On Thu, Feb 28, 2013 at 10:50 PM, Gert Van Assche  wrote:
> All, I don't know if this is possible.
>
> A segment in table "Files" contains a record "Content" with a field like
> this "XXX      "
> Needs to become "XXX 123  456  78 90"
>
> Based on a "Translation" table were I have
>
> |123
> |456
> |78
> |90
>
> If I do
>
> UPDATE [Files] SET [Content] = replace(Content, (SELECT [Source] FROM
> [Translations]), (SELECT [Translation] FROM [Translations]));
>
> only the first match is replaced and I get: "XXX 123    
> "
>
> Is there a way to "translate" ,  and  as well?

It is so much easier and faster to do this in a programming language.
SQL is not suitable for things like that.
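A sketch of doing it in application code with Python's sqlite3. The placeholder tokens t1..t4 are invented, since the original angle-bracket tags were stripped from the archived message.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Files (Content)")
conn.execute("CREATE TABLE Translations (Source, Translation)")
conn.execute("INSERT INTO Files VALUES ('XXX t1 t2 t3 t4')")  # hypothetical tokens
conn.executemany("INSERT INTO Translations VALUES (?, ?)",
                 [("t1", "123"), ("t2", "456"), ("t3", "78"), ("t4", "90")])

# Apply every translation pair to every row, then write the result back.
pairs = conn.execute("SELECT Source, Translation FROM Translations").fetchall()
for rowid, content in conn.execute("SELECT rowid, Content FROM Files").fetchall():
    for src, dst in pairs:
        content = content.replace(src, dst)
    conn.execute("UPDATE Files SET Content = ? WHERE rowid = ?", (content, rowid))

result = conn.execute("SELECT Content FROM Files").fetchone()[0]
# result == 'XXX 123 456 78 90'
```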


Pavel


Re: [sqlite] SQLite equivalent to Oracle's MERGE INTO

2013-02-26 Thread Pavel Ivanov
On Tue, Feb 26, 2013 at 2:29 PM, Igor Tandetnik  wrote:
> On 2/26/2013 5:13 PM, anydacdev anydacdev wrote:
>>
>> I am struggling with SQLite's support for Oracle's DUAL table.
>>
>> The updated statement, now including Oracle's DUAL looks like.
>>
>> MERGE INTO x TGT
>> USING (SELECT 'A_NAME' as name, 'A_KEY' as key FROM DUAL) SRC
>
>
> As far as I can tell, you are using DUAL as a table with no data, just

Igor, a little correction here: it's not a table with no data, it's a
table with exactly one row. If it were a table with no data then SELECT
.. FROM DUAL would have returned nothing. ;-)

But anyway you suggested the correct replacement for that in SQLite.
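Illustrating the FROM-less replacement:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# SQLite allows SELECT with no FROM clause at all; like Oracle's one-row
# DUAL table, it produces exactly one row.
rows = conn.execute('SELECT \'A_NAME\' AS name, \'A_KEY\' AS "key"').fetchall()
row = rows[0]
count = len(rows)
# row == ('A_NAME', 'A_KEY'), count == 1
```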


Pavel

> something to put into a FROM clause to keep the parser happy. In this case,
> you would be glad to know that SQLite doesn't require FROM clause at all -
> you can simply write
>
>
> SELECT 'A_NAME' as name, 'A_KEY' as key
>
> --
> Igor Tandetnik
>
>


Re: [sqlite] Memory DB - Query does not return all records after Update

2013-02-26 Thread Pavel Ivanov
> Example, If I insert 10 records and loop a query I receive
> 10, 10, 10, 10,  results
> But, if I change one of the records during the loop I get
> 10, 10, update record, 9, 10, 

This sounds like gibberish. Could you please post your database schema and
exact queries you issue in the exact order? And tell us how their
results are different from what you expect.


Pavel


On Mon, Feb 25, 2013 at 1:47 PM, mike.akers  wrote:
> I am using an in-memory database with a singleton containing a single
> connection. I have multiple threads (3) using this same connection, but I
> have ensured that only 1 thread uses the connection at a time.
>
> If I do an UPDATE on a record then immediately perform a SELECT, the query
> will return all rows that match the WHERE clause except for the record that
> was being updated. The sqlite3_prepare_v2 and sqlite3_step both return
> SQLITE_OK and do not return a SQLITE_LOCKED or SQLITE_BUSY so I would assume
> that my write was completed.
>
> I would like to be able to either
> 1. read the old data (as the quieres happen frequently a couple old data
> queries are fine) I have tried read_uncommited with no noticable difference.
> or 2. Have the database return LOCKED or BUSY if the write has not completed
> so that the query will return all of the results.
>
> Example, If I insert 10 records and loop a query I receive
> 10, 10, 10, 10,  results
> But, if I change one of the records during the loop I get
> 10, 10, update record, 9, 10, 
>
> Any advice on what I am missing?
>
> Thanks in advance,
> Mike
>
>
>
> --
> View this message in context: 
> http://sqlite.1065341.n5.nabble.com/Memory-DB-Query-does-not-return-all-records-after-Update-tp67267.html
> Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] Question about aggregate returning empty row

2013-02-22 Thread Pavel Ivanov
That's the SQL standard -- a query with aggregate functions (and no
GROUP BY clause) always returns at least one row.
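The behaviour from the question, reproduced with Python's sqlite3 (schema copied from the post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE A (id1 INTEGER UNIQUE PRIMARY KEY, id2 INTEGER)")

plain = conn.execute("SELECT id1, id2 FROM A").fetchall()
agg = conn.execute("SELECT min(id1), id2 FROM A").fetchall()
grouped = conn.execute("SELECT min(id1), id2 FROM A GROUP BY id2").fetchall()
# plain == []               -- no rows
# agg == [(None, None)]     -- one all-NULL row, the '|' seen in the shell
# grouped == []             -- with GROUP BY, an empty table yields no rows
```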

Pavel

On Fri, Feb 22, 2013 at 12:06 PM, Pierre Chatelier  
wrote:
> Hello,
>
> [tested under 3.6.12 and 3.7.15.2]
>
> I have a question regarding the use of aggregate functions.
>
> Let's imagine the following db :
>>create table A (id1 INTEGER UNIQUE PRIMARY KEY, id2 INTEGER);
> It is empty.
>
> The following query :
>>select id1,id2 from A;
> returns nothing, there is no row.
>
> However, the following query, using the aggregate min() :
>>select min(id1),id2 from A;
> returns an empty line (displays '|' in the shell).
>
> Using avg(), max()... will do the same.
>
> With the C interface, SQLITE_ROW is returned, and I must test 
> sqlite_column_type() against SQLITE_NULL to check that in fact, there is no 
> result.
>
> Is this expected behaviour ?
>
> Regards,
>
> Pierre Chatelier


Re: [sqlite] SQLite error near "16": syntax error

2013-02-19 Thread Pavel Ivanov
On Tue, Feb 19, 2013 at 1:39 AM, mikkelzuuu  wrote:
> Hey guys, I'm getting this error (see title)
> I'm using C# by the way.
>
> string StrQuery = @"INSERT INTO Test VALUES (" +
> dataGridView1.Rows[i].Cells["Column1"].Value + ", " +
> dataGridView1.Rows[i].Cells["Column2"].Value + ", " +
> dataGridView1.Rows[i].Cells["Column3"].Value + ", " +
> dataGridView1.Rows[i].Cells["Column4"].Value + ", " +
> dataGridView1.Rows[i].Cells["Column5"].Value + ");";
>
> That would then be my query. I can't see a 16 anywhere in my query code.

This is a great example of sql injection (you can google "sql
injection" to understand what's that) and the reason why you shouldn't
construct your sql queries like that. :)
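A sketch of the difference in Python; the table and cell values are invented, and the same placeholder idea applies to parameterized commands in C#.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Test (c1, c2)")

name, number = "O'Brien", 16  # hypothetical grid cell values

# Building SQL by string concatenation breaks as soon as a value contains
# SQL syntax -- the same failure mode as the 'near "16"' error above:
try:
    conn.execute("INSERT INTO Test VALUES ('" + name + "', " + str(number) + ")")
    failed = False
except sqlite3.OperationalError:
    failed = True

# Parameter placeholders pass values out-of-band, so nothing needs quoting
# and injection is impossible:
conn.execute("INSERT INTO Test VALUES (?, ?)", (name, number))
stored = conn.execute("SELECT c1, c2 FROM Test").fetchone()
# failed is True; stored == ("O'Brien", 16)
```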


Pavel


Re: [sqlite] Strange eviction or bypass of Linux page cache

2013-02-07 Thread Pavel Ivanov
> Anyone out there know how to correct this undesirable behaviour?
>
> PS. This only happens over NFS - local DB files behave as expected and fill 
> the OS page cache.

Don't write your database to NFS. I'd guess that your problem is that
the NFS driver for some reason thinks that the file was changed on the
server (could be as easy as rounding of the file modification time) and
thus re-reads it from the NFS server. And it has nothing to do with
SQLite.


Pavel


On Thu, Feb 7, 2013 at 3:27 AM, James Vanns  wrote:
> (Sorry if this gets posted twice - our damn mail server rewrites outgoing 
> mails so I had to unsubscribe and re-subscribe under a different Email 
> address)
>
> Hello list. I'd like to ask someone with more SQLite experience than me a 
> simple question. First, some background;
>
> Distribution: Scientific Linux 6.3
> Kernel: 2.6.32-279.9.1.el6.x86_64
> SQLite version: 3.6.20
>
> We have a single process that, given some data, does some processing and 
> writes it all to a single SQLite DB file. This is a write-once process. When 
> this task is finished, the file itself is marked as read only (0444).
>
> This file exists on an NFS share for multiple users to read - nothing further 
> is ever written to it. The problem we're seeing is that when this DB file is 
> read from (over NFS) none of the pages are cached (despite ~12GB free for 
> page cache use) or at least immediately evicted. This is quite detrimental to 
> performance because our resulting data files (SQLite DB files) are between 
> 100 to 400 MB in size. We *want* it to be cached - the whole thing. The page 
> cache would do this nicely for us and allow multiple processes on the same 
> machine to share that data without any complication.
>
> I understand that SQLite implements its own internal page cache, but why, on
> a standard desktop machine, will it not use the OS page cache? Is there any
> way of forcing it or bypassing the internal page cache in favour of the job
> that Linux already does? I cannot find any reference to O_DIRECT or madvise()
> or fadvise() etc. in the code. The following PRAGMAs don't help either;
>
> PRAGMA writable_schema = OFF
> PRAGMA journal_mode = OFF
> PRAGMA synchronous = OFF
>
> PRAGMA cache_size = -
>
> Obviously that last one works - but only for a single process and for the 
> lifetime of that process. We want the pages to reside in RAM afterwards.
>
> Anyone out there know how to correct this undesirable behaviour?
>
> Regards,
>
> Jim Vanns
>
> PS. This only happens over NFS - local DB files behave as expected and fill 
> the OS page cache.
>
> --
> Jim Vanns
> Senior Software Developer
> Framestore


Re: [sqlite] Deletion slow?

2013-02-07 Thread Pavel Ivanov
> Don't know if the index is updated after each row delete or after the whole 
> delete transaction is committed. For the first you can try:
>
> time sqlite3 trip.db "PRAGMA automatic_index= FALSE; delete from trip where 
> key<=1400;PRAGMA automatic_index= TRUE; reindex trip"

PRAGMA automatic_index has nothing to do with updating index after
deleting each row. And it won't have any impact on this delete
statement because it can't use automatic index.


Pavel


On Thu, Feb 7, 2013 at 2:13 AM, Eduardo Morras  wrote:
> On Tue, 5 Feb 2013 12:54:13 +
> Jason Gauthier  wrote:
>
>> Hey Everyone,
>>
>>  I am a fairly new user of sqlite, but not particularly new to SQL 
>> principles.  I am developing an application that will run on a low end 
>> system.
>> Not quite embedded, but not quite a PC.  In my application, I do frequent 
>> table deletes.  My results have been poor, and I am looking for someone to 
>> tell me "I'm doing it wrong", or maybe "that's the best you're going to
>> get", etc.
>
>> Any thoughts on why this may be so slow, or what I can do to improve it?
>
> Don't know if the index is updated after each row delete or after the whole 
> delete transaction is committed. For the first you can try:
>
> time sqlite3 trip.db "PRAGMA automatic_index= FALSE; delete from trip where 
> key<=1400;PRAGMA automatic_index= TRUE; reindex trip"
>
> If you delete a high percentage of the table rows, it's faster to select the
> data you want to keep into a new table, drop the original table, and "alter
> table temp rename to trip". If you use a memory temp table and a power loss
> happens, your data is lost, so use a non-temp table.
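
The copy-and-rename pattern described above can be sketched with Python's bundled sqlite3 module (the `trip` schema and the 1400 cutoff follow the thread; the row contents are invented):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE trip(key INTEGER PRIMARY KEY, data TEXT)")
con.executemany("INSERT INTO trip VALUES (?, ?)",
                [(i, "row %d" % i) for i in range(1, 2001)])
con.commit()

# Copy the rows to keep, drop the original table, rename the copy back.
con.executescript("""
    BEGIN;
    CREATE TABLE trip_new(key INTEGER PRIMARY KEY, data TEXT);
    INSERT INTO trip_new SELECT key, data FROM trip WHERE key > 1400;
    DROP TABLE trip;
    ALTER TABLE trip_new RENAME TO trip;
    COMMIT;
""")

remaining = con.execute("SELECT COUNT(*) FROM trip").fetchone()[0]
print(remaining)  # only rows with key > 1400 survive
```

This rewrites only the surviving rows once, instead of deleting and rebalancing index pages for the majority of the table.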
>
> The trick of adding a new column to mark rows as dirty will not work because
> you are using an SD card; the cost of marking those rows as deleted/dirty is
> greater than deleting them.
>
>
>>
>> Thanks,
>>
>> Jason
>>
>
>
> ---   ---
> Eduardo Morras 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] I suspect not - but is the database separator configurable?

2013-02-01 Thread Pavel Ivanov
On Fri, Feb 1, 2013 at 6:42 AM, message adams  wrote:
> Greetings;
>
> I've recently started using sqlite within Python, to help unit-test my
> applications.
>
> My applications actually run against sybase, but I'd love to use a
> connection to an in-memory sqlite to carry out my testing.
> As part of the unit-test, I'd pass the sqlite connection into my source
> code, hoping it would be none the wiser.
>
> The only real problem I see is that Sybase uses a double-period separator
> between the database and table, e.g.
> select * from database..table

No, it's not a double-period separator. It's actually
database.table_owner.table, but table_owner can be omitted, in which
case it defaults to 'dbo'.

> ... whereas sqlite
> select * from database.table
>
>
> I assume the sqlite database separator is not configurable, but was
> wondering if anybody's resolved a similar issue?

SQLite doesn't have a notion of users, thus it doesn't and won't
support the "double period" separator. SQLite even has a different
notion of the databases that can go before a table name. So to make
queries work in both SQLite and Sybase, don't use the database name at
all: make it "select * from table".


Pavel
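
The difference in what the dot means can be seen with SQLite's ATTACH, sketched here with Python's bundled sqlite3 module (the `aux` schema name and table `t` are arbitrary):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# The name before the dot in SQLite is a schema (attached database) name,
# not a table owner as in Sybase's database..table notation.
con.execute("ATTACH DATABASE ':memory:' AS aux")
con.execute("CREATE TABLE aux.t(v)")
con.execute("INSERT INTO aux.t VALUES (1)")

# A single-period qualified name works; an unqualified name also resolves,
# as long as it is unambiguous across the attached schemas.
qualified = con.execute("SELECT v FROM aux.t").fetchone()[0]
bare = con.execute("SELECT v FROM t").fetchone()[0]
print(qualified, bare)
```

Unqualified names are therefore the portable choice across the two engines.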
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [sqlite-dev] Can I safely use the pragma synchronization = OFF?

2013-01-27 Thread Pavel Ivanov
OK. I picked this one:
http://www.freebsd.org/doc/en/articles/gjournal-desktop/article.html.
It says:

A journaling file system uses a log to record all transactions that
take place in the file system, and preserves its integrity in the
event of a system crash or power failure. Although it is still
possible to lose unsaved changes to files, journaling almost
completely eliminates the possibility of file system corruption caused
by an unclean shutdown.

So with UFS you have a guarantee that the file system won't become corrupt.
But there's absolutely no durability guarantee ("it is possible to lose
unsaved changes"), and I don't see a guarantee that the SQLite file format
won't become corrupt (the FS may be intact while the file's contents are
bogus). While I agree the latter is arguable and could be preserved,
durability is a big reason to use pragma synchronous = normal. Sure, if
you don't care about it you may not use that; you may as well use WAL
journal mode (which, as far as I remember, can also lose some recently
changed data with pragma synchronous = normal). But still, your claim
that UFS with full journaling is a complete replacement for pragma
synchronous = normal is false.


Pavel

On Sun, Jan 27, 2013 at 5:20 PM, Shuki Sasson <gur.mons...@gmail.com> wrote:
> Pick up any book about UFS and read about the journal...
>
> Shuki
>
> On Sun, Jan 27, 2013 at 7:56 PM, Pavel Ivanov <paiva...@gmail.com> wrote:
>
>> > So in any file system that supports journaling, fwrite is blocked until
>> > all metadata and data changes are made to the buffer cache and the
>> > journal is updated with the changes.
>>
>> Please give us some links to where you got all this info, along with
>> benchmarks. Because what you are trying to convince us of is that with a
>> journaling FS, write() doesn't return until the journal record is
>> guaranteed to physically make it to disk. First of all, I don't see
>> what the benefit of that is compared to writing directly to disk without
>> a write-back cache. And second, do you realize that in this case
>> you can't make more than 30-50 journal records per second? Do you
>> really believe that for good OS performance it's enough to make fewer
>> than 30 calls to write() per second (on any file, not on each file)? I
>> won't believe that until I see data and benchmarks from reliable
>> sources.
>>
>>
>> Pavel
>>
>>
>> On Sun, Jan 27, 2013 at 8:53 AM, Shuki Sasson <gur.mons...@gmail.com>
>> wrote:
>> > Hi Pavel, thanks a lot for your answer. Assuming xWrite is using fwrite,
>> > here is what is going on in the File System:
>> > In a legacy UNIX File System (UFS) the journaling protects only the
>> > metadata (inode structure, directory blocks, indirect blocks, etc.) but
>> > not the data itself.
>> > In more modern File Systems (usually ones that are enterprise-based, like
>> > EMC OneFS on the Isilon product) both data and metadata are journaled.
>> >
>> > How does journaling work?
>> > The File System has a cache of the File System blocks it deals with (both
>> > metadata and data). When changes are made to a cached block, they are
>> > made in memory only, and the set of changes is saved to the journal
>> > persistently. When the persistent journal is on disk, saving both data
>> > and metadata changes takes too long, so only metadata changes are
>> > journaled. If the journal is placed on NVRAM then it is fast enough to
>> > save both data and metadata changes to the journal.
>> > So in any file system that supports journaling, fwrite is blocked until
>> > all metadata and data changes are made to the buffer cache and the
>> > journal is updated with the changes.
>> > The only question then is whether the File System keeps a journal of both
>> > metadata and data. If your system has a file system that supports
>> > journaling of both metadata and data blocks, then you are theoretically
>> > (if there are no bugs in the FS) guaranteed against data loss in case of
>> > system panic or loss of power.
>> > So in short, a fully journaled File System gives you the safety of
>> > synchronized = FULL (or even better) without the huge performance penalty
>> > associated with fsync (or fdatasync).
>> >
>> > Additional Explanation: Why is it cheaper to save the changes to the log
>> > rather than the whole cached buffer (block)?
>> > Explanation: Each FileSystem block is 8K in size; some of the changes
>> > cover areas of the block that are smaller in size, and only these
>> > changes are recorded.
>> >

Re: [sqlite] [sqlite-dev] Can I safely use the pragma synchronization = OFF?

2013-01-27 Thread Pavel Ivanov
> So in any file system that supports journaling, fwrite is blocked until all
> metadata and data changes are made to the buffer cache and the journal is
> updated with the changes.

Please give us some links to where you got all this info, along with
benchmarks. Because what you are trying to convince us of is that with a
journaling FS, write() doesn't return until the journal record is
guaranteed to physically make it to disk. First of all, I don't see
what the benefit of that is compared to writing directly to disk without
a write-back cache. And second, do you realize that in this case
you can't make more than 30-50 journal records per second? Do you
really believe that for good OS performance it's enough to make fewer
than 30 calls to write() per second (on any file, not on each file)? I
won't believe that until I see data and benchmarks from reliable
sources.


Pavel


On Sun, Jan 27, 2013 at 8:53 AM, Shuki Sasson <gur.mons...@gmail.com> wrote:
> Hi Pavel, thanks a lot for your answer. Assuming xWrite is using fwrite,
> here is what is going on in the File System:
> In a legacy UNIX File System (UFS) the journaling protects only the
> metadata (inode structure, directory blocks, indirect blocks, etc.) but not
> the data itself.
> In more modern File Systems (usually ones that are enterprise-based, like
> EMC OneFS on the Isilon product) both data and metadata are journaled.
>
> How does journaling work?
> The File System has a cache of the File System blocks it deals with (both
> metadata and data). When changes are made to a cached block, they are made
> in memory only, and the set of changes is saved to the journal
> persistently. When the persistent journal is on disk, saving both data
> and metadata changes takes too long, so only metadata changes are
> journaled. If the journal is placed on NVRAM then it is fast enough to
> save both data and metadata changes to the journal.
> So in any file system that supports journaling, fwrite is blocked until all
> metadata and data changes are made to the buffer cache and the journal is
> updated with the changes.
> The only question then is whether the File System keeps a journal of both
> metadata and data. If your system has a file system that supports journaling
> of both metadata and data blocks, then you are theoretically (if there are
> no bugs in the FS) guaranteed against data loss in case of system panic or
> loss of power.
> So in short, a fully journaled File System gives you the safety of
> synchronized = FULL (or even better) without the huge performance penalty
> associated with fsync (or fdatasync).
>
> Additional Explanation: Why is it cheaper to save the changes to the log
> rather than the whole cached buffer (block)?
> Explanation: Each FileSystem block is 8K in size; some of the changes
> cover areas of the block that are smaller in size, and only these changes
> are recorded.
> What happens if a change to the File System involves multiple changes to
> data blocks as well as metadata blocks, like when an fwrite operation
> increases the file size and induces the addition of an indirect metadata
> block?
> Answer: The journal is organized in transactions, each of which is
> atomic, so all the buffered cache changes for such an operation are put into
> the transaction. Only fully completed transactions are replayed when the
> system is recovering from a panic or power loss.
>
> In short, in most file systems like UFS, using synchronization = NORMAL
> makes a lot of sense, as data blocks are not protected by the journal;
> however, with more robust File Systems that have a full journal for metadata
> as well as data, it makes all the sense in the world to run with
> synchronization = OFF and gain the additional performance benefits.
>
> Let me know if I missed something, and I hope this makes things clearer.
> Shuki
>
>
>
>
> On Sat, Jan 26, 2013 at 10:31 PM, Pavel Ivanov <paiva...@gmail.com> wrote:
>
>> On Sat, Jan 26, 2013 at 6:50 PM, Shuki Sasson <gur.mons...@gmail.com>
>> wrote:
>> >
>> > Hi all, I read the documentation about the synchronization pragma.
>> > It got to do with how often xSync method is called.
>> > With synchronization = FULL xSync is called after each and every change
>> to
>> > the DataBase file as far as I understand...
>> >
>> > Observing the VFS interface used by the SQLITE:
>> >
>> > typedef struct sqlite3_io_methods sqlite3_io_methods;
>> > struct sqlite3_io_methods {
>> >   int iVersion;
>> >   int (*xClose)(sqlite3_file*);
>> >   int (*xRead)(sqlite3_file*, void*, int iAmt, sqlite3_int64 iOfst);
>> >   *int (*xWrite)(sqlite3_file*, const void*, int iAmt, sqlite3_int64
>> iOfst);*
>> >   int (*xT

Re: [sqlite] Can I safely use the pragma synchronization = OFF?

2013-01-26 Thread Pavel Ivanov
On Sat, Jan 26, 2013 at 6:50 PM, Shuki Sasson  wrote:
>
> Hi all, I read the documentation about the synchronization pragma.
> It has to do with how often the xSync method is called.
> With synchronization = FULL xSync is called after each and every change to
> the DataBase file as far as I understand...
>
> Observing the VFS interface used by the SQLITE:
>
> typedef struct sqlite3_io_methods sqlite3_io_methods;
> struct sqlite3_io_methods {
>   int iVersion;
>   int (*xClose)(sqlite3_file*);
>   int (*xRead)(sqlite3_file*, void*, int iAmt, sqlite3_int64 iOfst);
>   *int (*xWrite)(sqlite3_file*, const void*, int iAmt, sqlite3_int64 iOfst);*
>   int (*xTruncate)(sqlite3_file*, sqlite3_int64 size);
>  * int (*xSync)(sqlite3_file*, int flags);*
>
> *
> *
>
> I see both xWrite and xSync...
>
> Does this mean that xWrite initiates a FS write to the file?

Yes, in the sense that a subsequent read (without a power cut on the
machine) will return the written data.

>
> Does that mean that xSync makes sure that the FS-buffered changes are
> synced to disk?

Yes.

> I guess it is calling fsync in case of LINUX /FreeBSD am I right?

fdatasync() I think.

> If the above is correct and SQLITE operates over a modern reliable FS
> that has journaling with each write, then despite the fact that the
> write buffer caches are not fully synced, they are protected by the FS
> journal, which fully records all the changes to the file and is
> going to be replayed when the FS is mounted after a system crash.
>
> If my understanding is correct, then assuming the FS journaling is
> bulletproof, I can safely operate with synchronization = OFF with
> SQLITE and still be fully protected by the FS journal in case of a
> system crash, right?

I really doubt journaling filesystems work like that. Yes, your file
will be restored using the journal if the journal records made it to disk.
But the FS just can't physically write every record of the journal to disk
at the moment that record is created. If it did, your computer
would be really slow. And since the FS doesn't do that, fdatasync still
makes sense if you want to guarantee that once COMMIT execution has
finished, it's safe to cut the power or crash.

> Meaning synchronization = NORMAL doesn't buy me anything; in fact it
> severely slows down the database operations.
>
> Am I missing something here?

Please re-check documentation on how journaling FS work.


Pavel
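
The per-connection setting under discussion can be sketched with Python's bundled sqlite3 module (file name arbitrary); note that a clean close is still durable even with synchronous = OFF, since the pragma only matters when power is lost or the OS crashes before its caches are flushed:

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
con = sqlite3.connect(path)

# synchronous: 0 = OFF, 1 = NORMAL, 2 = FULL (the rollback-journal default).
con.execute("PRAGMA synchronous = OFF")
level = con.execute("PRAGMA synchronous").fetchone()[0]

con.execute("CREATE TABLE t(x)")
with con:                       # commit the insert
    con.execute("INSERT INTO t VALUES (42)")
con.close()                     # clean close: data reaches disk anyway

# Reopen to confirm the committed row is there after a normal shutdown.
con2 = sqlite3.connect(path)
value = con2.execute("SELECT x FROM t").fetchone()[0]
print(level, value)
```

What the sketch cannot show is the failure case the thread is about: with synchronous = OFF, a power cut right after COMMIT may lose or corrupt the data that a sync would have preserved.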
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Problem whit sqlite_prepare_v2() in 3.7.5 version and later

2013-01-09 Thread Pavel Ivanov
On Wed, Jan 9, 2013 at 6:06 AM,  wrote:

>
> >
> >  Mensaje original 
> > De: Clemens Ladisch 
> > Para:  sqlite-users@sqlite.org
> > Fecha:  Wed, 09 Jan 2013 14:25:31 +0100
> > Asunto:  Re: [sqlite] Problem whit sqlite_prepare_v2() in 3.7.5 version
> andlater
> >
> >
> >
> >a...@zator.com wrote:
> >>int resp = sqlite3_prepare_v2 (dbPtr, "SELECT ky,id,lev,pwd FROM
> dbOrig.usr", -1, , 0);
> >
> > if (resp != SQLITE_OK)
> > printf("error message: %s\n", sqlite3_errmsg(dbPtr));
> >
> >
> >Regards,
> >Clemens
>
> error message: no such table: dbOri.usr


Did you type the error message by hand (please don't do that), or did
you indeed forget the "g" at the end of the db name?


>
>
sqlite> .tables
> AgEfHolder AgPdIDtBlockcoco   ftsm_segments
> AgEfIDt AgVtHolder FreqUseemail  ftsm_stat
> AgEfKlv AgVtIDt Usr   ftsm   lnk
> AgEfemer   AgVtPreatm ftsm_content   prm
> AgPdHolder AgVtos blb ftsm_segdir
>
>
> Regards
> Adolfo J. Millan
>
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite - Dropping TEMP table doesn't free allocated memory

2012-12-28 Thread Pavel Ivanov
You may be hitting the memory fragmentation issue. Try to run your
application with different memory managers (Hoard is my favorite -
http://www.hoard.org/) and see if the memory consumption is the same.

Also, if you close all connections to your database (and to other SQLite
databases too), does the amount of used memory go down?


Pavel

On Fri, Dec 28, 2012 at 7:04 AM, Krzysztof  wrote:
> Hi,
>
> I'm using PRAGMA temp_store=2 so all temp tables are created in memory. I
> have a temp table with 1 000 000 000 records. Memory used by my test
> application grows to ~80 MB. If I delete all records from this table or
> drop the table, my application still uses 80 MB of memory. I have also
> tried PRAGMA shrink_memory. Why doesn't sqlite free the memory?
>
> Regards
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug in Cygwin SQLite: temporary table creation fails

2012-12-26 Thread Pavel Ivanov
On Wed, Dec 26, 2012 at 9:55 PM, Daniel Colascione <dan...@dancol.org> wrote:
> On 12/26/12 9:54 PM, Pavel Ivanov wrote:
>> I believe you can get this functionality now by compiling the SQLite code
>> with the Cygwin compiler, not a Win32 one (and not downloading the DLL
>> from the sqlite.org website).
>
> The latest sources I checked still have the bug.

How do you compile it?


Pavel
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Bug in Cygwin SQLite: temporary table creation fails

2012-12-26 Thread Pavel Ivanov
I believe you can get this functionality now by compiling the SQLite code
with the Cygwin compiler, not a Win32 one (and not downloading the DLL
from the sqlite.org website).

Pavel

On Wed, Dec 26, 2012 at 9:19 PM, Daniel Colascione  wrote:
> Creating temporary tables fails in Cygwin SQLite:
>
> sqlite> CREATE TEMPORARY TABLE foo (bar INT);
> Error: unable to open database file
>
> The reason this operation fails is that SQLite cannot create a
> temporary file in "C:\Windows", which GetTempPathW reports as the
> system temporary directory. GetTempPathW returns the system temporary
> directory when none of the environment variables "TMP", "TEMP", and
> "USERPROFILE" is set. In a typical Cygwin environment, these
> environment variables are unset from the point of view of Windows
> APIs: Cygwin maintains an internal, Cygwin-private environment block
> and synchronizes it with the Windows environment block lazily. A
> normal Cygwin process that uses SQLite has no reason to initialize its
> Windows environment block.
>
> Programs run under strace or the debugger, however, do receive a
> Windows environment block, so temporary table creation appears to work
> in programs run using these tools. A user with administrative rights
> can also create temporary tables, since he will be able to write to
> the Windows directory.
>
> In a Cygwin environment, SQLite should avoid GetTempPathW and instead
> use unixGetTempname to find the temporary directory, translating paths
> as appropriate.
>
>
>
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trigger blocks a single transaction?

2012-12-23 Thread Pavel Ivanov
Okay, sorry, I didn't understand your initial email correctly. So the
answer to your question is yes, a trigger is executed as a single
transaction, and if the first statement fails then the others do not execute.
The only exception is when you have ON CONFLICT IGNORE.
Do you observe different behavior? Can you illustrate it with a test script?

Pavel
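
A test script along these lines, using Python's bundled sqlite3 module (the `t`/`v` schema is invented), shows the statement-level atomicity of a trigger program:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t(x);
    INSERT INTO t VALUES (1);
    CREATE VIEW v AS SELECT x FROM t;
    CREATE TRIGGER v_upd INSTEAD OF UPDATE ON v BEGIN
        UPDATE t SET x = new.x;                   -- runs first...
        SELECT RAISE(ABORT, 'second step fails'); -- ...then aborts
    END;
""")

try:
    con.execute("UPDATE v SET x = 99")
    failed = False
except sqlite3.Error:
    failed = True

# The whole trigger program is part of one atomic statement, so the
# UPDATE inside it is rolled back together with the RAISE.
value = con.execute("SELECT x FROM t").fetchone()[0]
print(failed, value)
```

If the RAISE is removed, the same UPDATE on the view succeeds and `t` changes, confirming that it was the abort, not the trigger mechanism, that undid the first statement.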

On Sun, Dec 23, 2012 at 4:04 PM, Alem Biscan <biscana...@gmail.com> wrote:
> Hi,
>
> No, I do not execute BEGIN/COMMIT. It is a view's INSTEAD OF UPDATE trigger.
> I am doing a regular update to the view from C#. Another thing is that the
> view doesn't return any rows-affected value. Well, it makes sense somehow:
> it cannot know how many of the view's visible rows were affected. It lowers
> the coolness of views and INSTEAD OF triggers.
>
> Thanx
>
> On Mon, Dec 24, 2012 at 12:15 AM, Pavel Ivanov <paiva...@gmail.com> wrote:
>
>> Do you execute all updates as one call to sqlite_exec? Or as separate
>> calls to sqlite_exec or sqlite_prepare/sqlite_step? If separate then
>> do you check return value from the calls? And do you execute
>> BEGIN/COMMIT somewhere?
>>
>> Pavel
>>
>> On Sun, Dec 23, 2012 at 2:37 PM, Alem Biscan <biscana...@gmail.com> wrote:
>> > Hello,
>> >
>> > I am wondering: if I have a block of DML statements in a trigger (INSTEAD
>> > OF), is it a single transaction? If something fails, does everything fail?
>> >
>> > SELECT
>> > CASE
>> > WHEN
>> > NOT EXISTS ( SELECT klas_sifra FROM klasifikacija kl , tipklas tk WHERE
>> kl
>> > . tkla_id = tk . tkla_id AND kl . klas_sifra = new . grupa_klas_sifra AND
>> > tk . tkla_naziv = 'GRUPA' )
>> > THEN
>> > RAISE ( ROLLBACK , ' * Ne postoji grupa pod šifrom ' )
>> > WHEN
>> > NOT EXISTS ( SELECT klas_sifra FROM klasifikacija kl , tipklas tk WHERE
>> kl
>> > . tkla_id = tk . tkla_id AND kl . klas_sifra = new . grupa_klas_sifra AND
>> > tk . tkla_naziv = 'TIP ARTIKLA' )
>> > THEN
>> > RAISE ( ROLLBACK , ' * Ne postoji tip artikla pod šifrom ' )
>> > END;
>> >
>> > UPDATE katklas kkl
>> >SET kkl.kkl_kat_sifra  = new.kat_sifra
>> >   ,kkl.kkl_klas_sifra = new.grupa_klas_sifra
>> >  WHERE kkl.kkl_kat_sifra  = old.kat_sifra
>> >AND kkl.kkl_klas_sifra =  old.grupa_klas_sifra;
>> >
>> > UPDATE katklas kkl
>> > SET kkl.kkl_kat_sifra  = new.kat_sifra
>> >   ,kkl.kkl_klas_sifra = new.grupa_klas_sifra
>> > WHERE kkl.kkl_kat_sifra  = old.kat_sifra
>> > AND kkl.kkl_klas_sifra =  old.tip_klas_sifra;
>> >
>> > UPDATE katalog kat
>> >SET kat.kat_sifra   = new.kat_sifra
>> >   ,kat.kat_barcode = new.kat_barcode
>> >   ,kat.kat_naziv   = new.kat_naziv
>> >   ,kat.kat_jmj = new.kat_jmj
>> >   ,kat.kat_car = new.kat_car
>> >   ,kat.kat_mpc = new.kat_mpc
>> >   ,kat.kat_porez   = new.kat_porez
>> > WHERE kat.kat_sifra = old.kat_sifra;
>> >
>> > If the first update fails, is it considered a failure for the whole thing?
>>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Trigger blocks a single transaction?

2012-12-23 Thread Pavel Ivanov
Do you execute all updates as one call to sqlite_exec? Or as separate
calls to sqlite_exec or sqlite_prepare/sqlite_step? If separate, then
do you check the return values from the calls? And do you execute
BEGIN/COMMIT somewhere?

Pavel

On Sun, Dec 23, 2012 at 2:37 PM, Alem Biscan  wrote:
> Hello,
>
> I am wondering: if I have a block of DML statements in a trigger (INSTEAD
> OF), is it a single transaction? If something fails, does everything fail?
>
> SELECT
> CASE
> WHEN
> NOT EXISTS ( SELECT klas_sifra FROM klasifikacija kl , tipklas tk WHERE kl
> . tkla_id = tk . tkla_id AND kl . klas_sifra = new . grupa_klas_sifra AND
> tk . tkla_naziv = 'GRUPA' )
> THEN
> RAISE ( ROLLBACK , ' * Ne postoji grupa pod šifrom ' )
> WHEN
> NOT EXISTS ( SELECT klas_sifra FROM klasifikacija kl , tipklas tk WHERE kl
> . tkla_id = tk . tkla_id AND kl . klas_sifra = new . grupa_klas_sifra AND
> tk . tkla_naziv = 'TIP ARTIKLA' )
> THEN
> RAISE ( ROLLBACK , ' * Ne postoji tip artikla pod šifrom ' )
> END;
>
> UPDATE katklas kkl
>SET kkl.kkl_kat_sifra  = new.kat_sifra
>   ,kkl.kkl_klas_sifra = new.grupa_klas_sifra
>  WHERE kkl.kkl_kat_sifra  = old.kat_sifra
>AND kkl.kkl_klas_sifra =  old.grupa_klas_sifra;
>
> UPDATE katklas kkl
> SET kkl.kkl_kat_sifra  = new.kat_sifra
>   ,kkl.kkl_klas_sifra = new.grupa_klas_sifra
> WHERE kkl.kkl_kat_sifra  = old.kat_sifra
> AND kkl.kkl_klas_sifra =  old.tip_klas_sifra;
>
> UPDATE katalog kat
>SET kat.kat_sifra   = new.kat_sifra
>   ,kat.kat_barcode = new.kat_barcode
>   ,kat.kat_naziv   = new.kat_naziv
>   ,kat.kat_jmj = new.kat_jmj
>   ,kat.kat_car = new.kat_car
>   ,kat.kat_mpc = new.kat_mpc
>   ,kat.kat_porez   = new.kat_porez
> WHERE kat.kat_sifra = old.kat_sifra;
>
> If the first update fails, is it considered a failure for the whole thing?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database locked after read

2012-12-01 Thread Pavel Ivanov
> I have found that when using SQLite under Windows if you execute an update
> statement and then a select statement, the database is locked until the
> select statement is finished.  How can I stop this from happening?

Note that you don't have to execute UPDATE before the SELECT to
reproduce that. SELECT places shared lock on the database. UPDATE
cannot proceed while any shared locks are active on the database. To
avoid such situation you can switch your database to the WAL journal
mode (http://www.sqlite.org/wal.html).


Pavel


On Sat, Dec 1, 2012 at 2:02 PM, Scott Ferrett
 wrote:
> I have found that when using SQLite under Windows if you execute an update
> statement and then a select statement, the database is locked until the
> select statement is finished.  How can I stop this from happening?
>
> Here is a simple test program that simulates this situation.  If you run the
> program you will get the message "Update executed without problems".
> Without closing this message you run the program again you will get "Table
> is locked??".
>
> void DisplayMessage(_TCHAR* message)
> {
>   MessageBox(NULL, message, "Test Lock", 0);
> }
>
> void Check(int status, char* message)
> {
>   if (status != 0 && status != SQLITE_ROW && status != SQLITE_DONE)
> DisplayMessage(message);
> }
> int _tmain(int argc, _TCHAR* argv[])
> {
>   sqlite3* connection;
>   bool needCreate = sqlite3_open_v2("test.sqlite", &connection,
> SQLITE_OPEN_READWRITE, NULL) != SQLITE_OK;
>   if (!needCreate || sqlite3_open_v2("test.sqlite", &connection,
> SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL) == SQLITE_OK)
>   {
> if (needCreate)
> {
>   Check(sqlite3_exec(connection, "CREATE TABLE TESTFILE (id LONG, data
> CHAR(30))", NULL, NULL, NULL), "Create Table");
>   Check(sqlite3_exec(connection, "CREATE TABLE TESTFILE2 (id LONG, data
> CHAR(30))", NULL, NULL, NULL), "Create Table 2");
>   Check(sqlite3_exec(connection, "INSERT INTO TESTFILE VALUES (1, 'A')",
> NULL, NULL, NULL), "Insert 1");
>   Check(sqlite3_exec(connection, "INSERT INTO TESTFILE VALUES (2, 'B')",
> NULL, NULL, NULL), "Insert 2");
>   Check(sqlite3_exec(connection, "INSERT INTO TESTFILE2 VALUES (1,
> 'A')", NULL, NULL, NULL), "Insert 1");
>   Check(sqlite3_exec(connection, "INSERT INTO TESTFILE2 VALUES (2,
> 'B')", NULL, NULL, NULL), "Insert 2");
> }
> sqlite3_stmt* statement1;
> Check(sqlite3_prepare_v2(connection, "UPDATE TESTFILE SET data='A' WHERE
> id=1", -1, &statement1, NULL), "Update");
> int status = sqlite3_step(statement1);
> sqlite3_finalize(statement1);
> sqlite3_stmt* statement2;
> Check(sqlite3_prepare_v2(connection, "SELECT ID,data FROM TESTFILE2",
> -1, &statement2, NULL), "SELECT");
> Check(sqlite3_step(statement2), "stepping Select");
> if (status == SQLITE_DONE)
>   DisplayMessage("Update executed without problems"); // This simulates
> a select statement that takes a long time to execute
> else if (status == SQLITE_BUSY)
>   DisplayMessage("Table is locked??");
> else
>   DisplayMessage("There was a problem");
> sqlite3_finalize(statement2);
> sqlite3_close(connection);
>   }
>   return 0;
> }
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Confusion on 'foreign key mismatch' errors

2012-11-11 Thread Pavel Ivanov
> # http://answers.oreilly.com/topic/1955-how-to-use-full-text-search-in-sqlite/
> Unlike traditional tables, the ROWID of an FTS table is stable through
> a vacuum (VACUUM in Appendix C), so it can be reliably referenced
> through a foreign key.

I'm not sure who wrote that but this page
http://www.sqlite.org/lang_createtable.html states explicitly (at the
very end): "The parent key of a foreign key constraint is not allowed
to use the rowid".


Pavel
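
The rule can be demonstrated with Python's bundled sqlite3 module (table names invented): a foreign key that resolves to an undeclared rowid fails with exactly this "foreign key mismatch" error, while an INTEGER PRIMARY KEY parent works:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")

# Parent with only the implicit rowid: the foreign key has nothing legal
# to resolve to, so the first DML on the child raises "foreign key mismatch".
con.executescript("""
    CREATE TABLE parent_implicit(txt);
    CREATE TABLE child_bad(pid INTEGER REFERENCES parent_implicit);
""")
con.execute("INSERT INTO parent_implicit VALUES ('c')")
try:
    con.execute("INSERT INTO child_bad VALUES (1)")
    mismatch = False
except sqlite3.Error as e:
    mismatch = "mismatch" in str(e)

# An INTEGER PRIMARY KEY is a *named* alias of the rowid, which a foreign
# key may legally reference.
con.executescript("""
    CREATE TABLE parent_ok(id INTEGER PRIMARY KEY, txt);
    CREATE TABLE child_ok(pid INTEGER REFERENCES parent_ok);
""")
con.execute("INSERT INTO parent_ok VALUES (1, 'c')")
con.execute("INSERT INTO child_ok VALUES (1)")
ok = con.execute("SELECT COUNT(*) FROM child_ok").fetchone()[0]
print(mismatch, ok)
```

Since an FTS table cannot declare an INTEGER PRIMARY KEY, one workaround (an assumption, not from the thread) is to keep a plain table with an INTEGER PRIMARY KEY as the foreign-key parent and store the searchable text in a parallel FTS table keyed by the same id.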


On Sun, Nov 11, 2012 at 9:49 PM, Darren Spruell  wrote:
>
> I'm stuck on some errors related to my use of foreign key constraints
> in my application. The following illustrates:
>
> $ sqlite3
> SQLite version 3.7.9 --SOURCE-ID--
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> PRAGMA foreign_keys = ON;
> sqlite> CREATE VIRTUAL TABLE comment USING fts4();
> sqlite> CREATE TABLE ip (
>...> id INTEGER PRIMARY KEY, date_added DEFAULT CURRENT_DATE,
>...> ip VARCHAR(15) NOT NULL UNIQUE, comment INTEGER NOT NULL,
>...> FOREIGN KEY(comment) REFERENCES comment);
> sqlite> CREATE INDEX ipcommentindex ON ip(comment);
> sqlite> INSERT INTO comment VALUES ('this is a comment.');
> sqlite> SELECT rowid,content FROM comment;
> 1|this is a comment.
> sqlite> SELECT last_insert_rowid();
> 1
> sqlite> INSERT INTO ip (ip,comment) VALUES ('10.0.1.1',last_insert_rowid());
> Error: foreign key mismatch
> sqlite> INSERT INTO ip (ip,comment) VALUES ('10.0.1.1',1);
> Error: foreign key mismatch
>
>
> Thinking I understand the requirements for foreign key constraints, I
> don't know why my INSERT into ip table returns 'foreign key mismatch'
> error.
>
> When I leave foreign key constraints off, things work as I would expect:
>
>
> sqlite> CREATE VIRTUAL TABLE comment USING fts4();
> sqlite> CREATE TABLE ip (
>...> id INTEGER PRIMARY KEY, date_added DEFAULT CURRENT_DATE,
>...> ip VARCHAR(15) NOT NULL UNIQUE, comment INTEGER NOT NULL,
>...> FOREIGN KEY(comment) REFERENCES comment);
> sqlite> CREATE INDEX ipcommentindex ON ip(comment);
> sqlite> INSERT INTO comment VALUES ('this is a comment.');
> sqlite> SELECT rowid,content FROM comment;
> 1|this is a comment.
> sqlite> SELECT last_insert_rowid();
> 1
> sqlite> INSERT INTO ip (ip,comment) VALUES ('10.0.1.1',last_insert_rowid());
> sqlite> SELECT * FROM ip;
> 1|2012-11-12|10.0.1.1|1
>
>
> I suspect the issue might stem from trying to use the rowid in the
> comment table as the parent key in the foreign key on the ip table,
> reading:
>
> # http://www.sqlite.org/foreignkeys.html
> The parent key is the column or set of columns in the parent table
> that the foreign key constraint refers to. This is normally, but not
> always, the primary key of the parent table. The parent key must be a
> named column or columns in the parent table, not the rowid.
>
> ...but then this kind of reads like it's supported:
>
> # http://answers.oreilly.com/topic/1955-how-to-use-full-text-search-in-sqlite/
> Unlike traditional tables, the ROWID of an FTS table is stable through
> a vacuum (VACUUM in Appendix C), so it can be reliably referenced
> through a foreign key.
>
>
> I'm struggling to find a clear way to achieve a foreign key
> constrained relation between these tables if a.) the FTS table can't
> define an INTEGER PRIMARY KEY column to function as the parent key for
> a child table, or b.) child tables can't reference the rowid on an FTS
> parent table as the parent key.
>
> Clue bat appreciated.
>
> --
> Darren Spruell
> phatbuck...@gmail.com
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Performance of sparse sqlite tables

2012-11-04 Thread Pavel Ivanov
I'd say generally speaking your way of storing data has no significant
downsides. There's just one "but": if each row in your table stores pretty
significant amount of data (blobs, long text fields or just lots of
different fields) you'd better not make your ItemID INTEGER PRIMARY KEY.
Because SQLite stores all rows in the table in the order of rowids. So
every time you update your ItemID SQLite would have to move the whole row
to a new place. So for the case of big rows I'd suggest making some other
column the INTEGER PRIMARY KEY and adding a unique constraint to your ItemID.
It won't hurt your search speed (it could actually make it faster) and will
make updates faster, although it will come with a larger database file size.

Pavel
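A sketch of the suggested layout (all names invented), using Python's sqlite3 module; the EXPLAIN QUERY PLAN check just confirms that the UNIQUE constraint on ItemID still gives indexed point lookups:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- Rows are stored in rowid order, so the composite ItemID is only a
-- UNIQUE key here; updating it never forces SQLite to move the whole row.
CREATE TABLE items (
    id INTEGER PRIMARY KEY,        -- rowid alias, assigned by SQLite
    ItemID INTEGER NOT NULL UNIQUE,
    payload BLOB
);
""")

def make_item_id(id0, id1, id2):
    # The poster's packing scheme: three independent IDs in one 64-bit key.
    return (id0 << 32) + (id1 << 16) + id2

conn.execute("INSERT INTO items(ItemID, payload) VALUES (?, ?)",
             (make_item_id(1, 2, 3), b"data"))

# The UNIQUE constraint's implicit index serves point lookups on ItemID.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT payload FROM items WHERE ItemID = ?",
    (make_item_id(1, 2, 3),)).fetchall()
print(plan[-1][-1])   # e.g. 'SEARCH items USING INDEX sqlite_autoindex_items_1 (ItemID=?)'
```

The exact plan text varies by SQLite version, but it should always report an index search rather than a full table scan.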


On Sun, Nov 4, 2012 at 9:26 AM, Dominguez Bonini, David <
david.doming...@ikusi.com> wrote:

> Hi,
>
> I have an application where a table is filled with elements whose primary
> key is specified at insertion, and is actually a combination of several
> independent IDs. Example:  ItemID = (id0 << 32) + (id1 << 16) + (id2).
> The range covered by each ID guarantees that their sum will never exceed
> the 64 bit maximum size of an sqlite primary key. The advantage of this
> approach is that a function performing a SELECT statement can pre-compute
> the id that needs to be retrieved from the database. This is what I call a
> sparse table, meaning that the table will never have more than X items, but
> the primary key range is actually much bigger than X. Sorry if my
> definitions are not standard, SQL is not my native language :)
>
> This scheme is used because IDs are usually inserted all at once in a
> single batch, and then they have to be regularly updated over a very long
> time. So, there are relatively few INSERTS and a LOT of UPDATES and SELECTS.
>
> I'm wondering if the advantage in search speed obtained by this ID
> assignment scheme may somehow be offset by other factors like additional
> memory usage, less efficient inserts, etc. Can anyone offer
> counterarguments, or recommend a better scheme?
>
> Regards
>
> David
>


Re: [sqlite] sqlite3_column_name() contains quotes for views

2012-10-29 Thread Pavel Ivanov
This problem was fixed here http://www.sqlite.org/src/info/5526e0aa3c.
It will appear in the next version of SQLite.

Pavel
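Until a build containing that fix is available, explicit AS aliases pin the reported column names on any version. A small sketch with Python's sqlite3 module (cursor.description standing in for sqlite3_column_name()):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE namesReal (
    id INTEGER PRIMARY KEY AUTOINCREMENT,
    name TEXT NOT NULL UNIQUE ON CONFLICT IGNORE
);
INSERT INTO namesReal(name) VALUES ('Linus'), ('Bill');
CREATE VIEW names AS SELECT * FROM namesReal;
""")

# Explicit AS aliases guarantee the reported names even on builds where
# selecting quoted columns from a view echoed the quotes back.
cur = conn.execute('SELECT "id" AS id, "name" AS name FROM names')
print([d[0] for d in cur.description])   # ['id', 'name']
```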

On Mon, Oct 29, 2012 at 5:41 AM, NSRT Mail account.
 wrote:
> In the example, I just realized something that makes matters worse.
>
> sqlite> SELECT "id" AS "id", "name" AS "name" FROM "names";
> id|name
> 1|Linus
> 2|Bill
> 3|Steve
> 4|Richard
> 5|Ninjas
>
> Despite using a view, using AS seems to remove the quotes.
>
>
>
>
> 
>  From: NSRT Mail account. 
> To: "sqlite-users@sqlite.org" 
> Sent: Monday, October 29, 2012 2:33 PM
> Subject: [sqlite] sqlite3_column_name() contains quotes for views
>
> I believe I ran into a bug with SQLite, and would like to ensure the problem 
> is not on my end.
>
> I created a simple table along with a view of it:
> SQLite version 3.7.14.1 2012-10-04 19:37:12
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> .dump
> PRAGMA foreign_keys=OFF;
> BEGIN TRANSACTION;
> CREATE TABLE "namesReal" (
>   "id" INTEGER PRIMARY KEY AUTOINCREMENT,
>   "name" TEXT NOT NULL UNIQUE ON CONFLICT IGNORE
> );
> INSERT INTO "namesReal" VALUES(1,'Linus');
> INSERT INTO "namesReal" VALUES(2,'Bill');
> INSERT INTO "namesReal" VALUES(3,'Steve');
> INSERT INTO "namesReal" VALUES(4,'Richard');
> INSERT INTO "namesReal" VALUES(5,'Ninjas');
> DELETE FROM sqlite_sequence;
> INSERT INTO "sqlite_sequence" VALUES('namesReal',10);
> CREATE VIEW "names" AS SELECT * FROM "namesReal";
> COMMIT;
> -
> At this point selecting from names or namesReal should generate the same data:
> sqlite> .header on
> sqlite> SELECT "id", "name" FROM "namesReal";
> id|name
> 1|Linus
> 2|Bill
> 3|Steve
> 4|Richard
> 5|Ninjas
> -
> The data above is good, the column names, as well as the row values. But look 
> what happens when selecting from the view:
> sqlite> SELECT "id", "name" FROM "names";
> "id"|"name"
> 1|Linus
> 2|Bill
> 3|Steve
> 4|Richard
> 5|Ninjas
> -
> The quotes are being included in the column names unlike the prior case. 
> However when selecting via wildcard, this happens:
> sqlite> SELECT * FROM "names";
> id|name
> 1|Linus
> 2|Bill
> 3|Steve
> 4|Richard
> 5|Ninjas
> -
>
> It appears from these examples that SQLite is (mistakenly?) including the 
> quoting around column names as used by the query for views. Unless I'm 
> mistaken, column names are quoted in SQL in order to prevent 
> conflicts with reserved words.
>
> When trying to query this database with the API, sqlite3_column_name() 
> includes the quotes around the column name in the second select statement, 
> but not in the first or third. So it seems the issue is with that function, 
> and not some quirk of the command line client.
>
>
> Is this a bug? Or am I doing something wrong?


Re: [sqlite] sqlite versus MySQL automatic defragmentation on SSDs?

2012-10-28 Thread Pavel Ivanov
Thank you. This is what I wanted to hear. And as you already saw from
responses, fragmentation is far from your main problem. I'd like to
point to one particular issue:

> However, we're starting to see problems.  There is so much activity on some
> servers that there is never a chance for our checkpointing thread to do its
> thing, so our WAL file often balloons up to 30GB or more.  This makes query
> performance plummet.

Looking at this problem alone I'd say SQLite is not the right tool for
you. At least at the scale you are working now. And I don't know all
your arguments but I hope you are arguing not just because you are a
fan of SQLite and don't want to move away from it.

Pavel
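For what it's worth, the ballooning-WAL symptom described in this thread can be bounded by forcing a checkpoint whenever the write load pauses. A hedged sketch (file names invented, and assuming no long-lived reader is pinning an old snapshot) using the TRUNCATE checkpoint mode:

```python
import os
import sqlite3
import tempfile

# With no long-lived readers, a TRUNCATE checkpoint copies every WAL frame
# back into the main database file and resets the WAL to zero bytes.
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)   # autocommit
conn.execute("PRAGMA journal_mode=wal")
conn.execute("CREATE TABLE t(x)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(100)])

wal = path + "-wal"
grew = os.path.getsize(wal) > 0                      # WAL accumulated frames

busy, log_frames, ckpt_frames = conn.execute(
    "PRAGMA wal_checkpoint(TRUNCATE)").fetchone()
print(grew, busy, os.path.getsize(wal))
```

If a reader does hold an old snapshot, the checkpoint stops early (the `busy` column of the pragma's result row comes back nonzero), which is exactly the failure mode described in the quoted message.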


On Sun, Oct 28, 2012 at 7:48 AM, David Barrett <dbarr...@expensify.com> wrote:
> Wow, I didn't realize this was such a controversial question.
>
> I'm a huge sqlite fan.  Expensify is built on sqlite.  We have a 40GB
> database, replicated using our custom distributed transaction layer across
> 5 servers in three different datacenters.[1]  It's been powering all of
> Expensify (including our direct deposit reimbursement engine and credit
> card import layer -- both of which contain incredibly sensitive
> information, with mistakes causing millions of dollars to move in the wrong
> direction).  On the back of sqlite, we've grown to over a million users,
> processing millions of dollars in expense reports every day.
>
> However, we're starting to see problems.  There is so much activity on some
> servers that there is never a chance for our checkpointing thread to do its
> thing, so our WAL file often balloons up to 30GB or more.  This makes query
> performance plummet.  We regularly checkpoint manually, and often vacuum, all in
> an effort to keep queries moving quick.  We also do things to trick out our
> indexes in order to ensure proper disk ordering, pay particular attention
> to block and cache amounts, etc.  This isn't premature optimization for the
> sake of having fun, these are in response to real performance problems
> affecting our product.
>
> In light of that, there is a contingent pushing to drop sqlite in favor of
> MySQL.  There are a wide range of reasons -- it has its own replication,
> better write concurrency, clustered indexes, and better edge-case data
> integrity (because we use 2 DBs and WAL, ATTACH doesn't provide atomic
> commit advantages).  And for each I have a corresponding answer -- MySQL's
> replication isn't as good as ours, concurrency doesn't matter because we
> serialize writes and have a single threaded server anyway, clustered
> indexes would be nice but we can get close enough with custom ROWIDs, and
> the extremely rare situation where there's a cross-database integrity
> problem, we can detect and recover from any of the other slaves.  And I
> also add in that sqlite can never crash because it's built into the server;
> its performance is fantastic because it runs in the same process; in years
> of operation we've never once seen it corrupt data; it's so easy to use;
> etc etc.
>
> But there's an argument I've heard come up to which I don't have a
> response: MySQL handles fragmentation better, and by extension would give
> us better performance on the same hardware.   I'd like to know more about
> it, which is why I've asked.  Thanks!
>
> -david
>
> [1] http://permalink.gmane.org/gmane.comp.db.sqlite.general/71868
>
>
> On Sun, Oct 28, 2012 at 2:12 PM, Pavel Ivanov <paiva...@gmail.com> wrote:
>
>> OK. Curiosity is a good thing in certain situations. But could you
>> kindly tell me what will you do with this information (assuming it's
>> possible to obtain it of course)?
>>
>> Pavel
>>
>> On Sat, Oct 27, 2012 at 11:54 PM, David Barrett <dbarr...@expensify.com>
>> wrote:
>> > I completely understand the wide and varied differences.  I'm just *also*
>> > interested in this very specific issue.
>> >
>> > -david
>> >
>> > On Sun, Oct 28, 2012 at 12:38 PM, Pavel Ivanov <paiva...@gmail.com>
>> wrote:
>> >
>> >> > That said, I'd still welcome any quick summary of the differences
>> between
>> >> > sqlite and mysql when it comes to fragmentation.
>> >>
>> >> This is far from main differences between sqlite and mysql that you
>> >> should consider if you want to choose between them unless of course
>> >> your question is just about an academic interest. As you are talking
>> >> about employees I guess you are not in some purely academic exercise.
>> >> In this case think more about in-process code vs pumping through
>> >> TCP/IP stack, designed mostly to be accessible from machine-local

Re: [sqlite] sqlite versus MySQL automatic defragmentation on SSDs?

2012-10-28 Thread Pavel Ivanov
OK. Curiosity is a good thing in certain situations. But could you
kindly tell me what will you do with this information (assuming it's
possible to obtain it of course)?

Pavel

On Sat, Oct 27, 2012 at 11:54 PM, David Barrett <dbarr...@expensify.com> wrote:
> I completely understand the wide and varied differences.  I'm just *also*
> interested in this very specific issue.
>
> -david
>
> On Sun, Oct 28, 2012 at 12:38 PM, Pavel Ivanov <paiva...@gmail.com> wrote:
>
>> > That said, I'd still welcome any quick summary of the differences between
>> > sqlite and mysql when it comes to fragmentation.
>>
>> This is far from main differences between sqlite and mysql that you
>> should consider if you want to choose between them unless of course
>> your question is just about an academic interest. As you are talking
>> about employees I guess you are not in some purely academic exercise.
>> In this case think more about in-process code vs pumping through
>> TCP/IP stack, designed mostly to be accessible from machine-local
>> processes only vs accessible to anyone on the network, plain access to
>> everything vs versatile and complicated authorization and
>> authentication mechanisms, and so on and so forth. Database format is
>> never a part of the decision which DBMS you want to use.
>>
>> Pavel
>>
>>
>> On Sat, Oct 27, 2012 at 9:32 PM, David Barrett <dbarr...@expensify.com>
>> wrote:
>> > Thanks, this is really helpful!  (And I lecture my employees about the
>> > evils of premature optimization all the time.  In fact, I'll lecture
>> anyone
>> > in earshot, so frequently that it's the butt of jokes.)
>> >
>> > That said, I'd still welcome any quick summary of the differences between
>> > sqlite and mysql when it comes to fragmentation.  I often get in debates
>> > about sqlite versus other databases, and I'm always eager to be informed.
>> >  Thanks!
>> >
>> > -david
>> >
>> > On Sat, Oct 27, 2012 at 11:42 PM, Simon Slavin <slav...@bigfraud.org>
>> wrote:
>> >
>> >>
>> >> On 27 Oct 2012, at 11:38am, David Barrett <dbarr...@expensify.com>
>> wrote:
>> >>
>> >> > I'm trying to learn more about MySQL versus sqlite when it comes to
>> >> > vacuuming and fragmentation, especially as it relates to SSD storage.
>> >>
>> >>
>> >> Rather than answer your questions point-by-point, I'm going to give you
>> >> the current state of play.  Your understanding of how the various DBMSes
>> >> work is right, but your excellent question "in a world with SSDs and a
>> >> tremendous amount of RAM, does vacuuming matter nearly as much as on a
>> >> spinning disk with constrained RAM?" cuts to the heart of all your other
>> >> points.  The following involves a little simplification and handwaving
>> >> because otherwise it would be two chapters long and you'd have to do
>> >> homework.
>> >>
>> >> Fragmentation made a big difference to apps running on Windows, but very
>> >> little on any other platform.  This is because Windows does something
>> >> called 'read-ahead caching' which assumes that if you read block B,
>> you're
>> >> soon going to want to read block B+1, so at quite a low level it
>> helpfully
>> >> pre-reads it for you.  Other operating systems don't make this
>> assumption.
>> >>  This is why Windows users talk about defragmentation so much, but Unix
>> >> users don't care about it.
>> >>
>> >> SSDs negate the whole point of defragmentation.  On a rotational disk
>> it's
>> >> faster to read blocks B, B+1, B+2, B+4, B+5 than it is to read five
>> random
>> >> blocks from the disk, because the read heads are already positioned in
>> the
>> >> right place, and the disk is going to rotate to show those five blocks
>> in
>> >> order.  SSDs are just like RAM: they're Random Access Memory.  Reading
>> any
>> >> five blocks in any order takes roughly the same time.  So nobody cares
>> >> about fragmentation on an SSD.  Read whatever blocks you want in
>> whatever
>> >> order you want.
>> >>
>> >> As to clever management of disk block alignment with respect to rows and
>> >> columns, this is rarely worth attention these days.  The amount of
>> >> programming and debugging time it takes to get this right

Re: [sqlite] sqlite versus MySQL automatic defragmentation on SSDs?

2012-10-27 Thread Pavel Ivanov
> That said, I'd still welcome any quick summary of the differences between
> sqlite and mysql when it comes to fragmentation.

This is far from main differences between sqlite and mysql that you
should consider if you want to choose between them unless of course
your question is just about an academic interest. As you are talking
about employees I guess you are not in some purely academic exercise.
In this case think more about in-process code vs pumping through
TCP/IP stack, designed mostly to be accessible from machine-local
processes only vs accessible to anyone on the network, plain access to
everything vs versatile and complicated authorization and
authentication mechanisms, and so on and so forth. Database format is
never a part of the decision which DBMS you want to use.

Pavel


On Sat, Oct 27, 2012 at 9:32 PM, David Barrett  wrote:
> Thanks, this is really helpful!  (And I lecture my employees about the
> evils of premature optimization all the time.  In fact, I'll lecture anyone
> in earshot, so frequently that it's the butt of jokes.)
>
> That said, I'd still welcome any quick summary of the differences between
> sqlite and mysql when it comes to fragmentation.  I often get in debates
> about sqlite versus other databases, and I'm always eager to be informed.
>  Thanks!
>
> -david
>
> On Sat, Oct 27, 2012 at 11:42 PM, Simon Slavin  wrote:
>
>>
>> On 27 Oct 2012, at 11:38am, David Barrett  wrote:
>>
>> > I'm trying to learn more about MySQL versus sqlite when it comes to
>> > vacuuming and fragmentation, especially as it relates to SSD storage.
>>
>>
>> Rather than answer your questions point-by-point, I'm going to give you
>> the current state of play.  Your understanding of how the various DBMSes
>> work is right, but your excellent question "in a world with SSDs and a
>> tremendous amount of RAM, does vacuuming matter nearly as much as on a
>> spinning disk with constrained RAM?" cuts to the heart of all your other
>> points.  The following involves a little simplification and handwaving
>> because otherwise it would be two chapters long and you'd have to do
>> homework.
>>
>> Fragmentation made a big difference to apps running on Windows, but very
>> little on any other platform.  This is because Windows does something
>> called 'read-ahead caching' which assumes that if you read block B, you're
>> soon going to want to read block B+1, so at quite a low level it helpfully
>> pre-reads it for you.  Other operating systems don't make this assumption.
>>  This is why Windows users talk about defragmentation so much, but Unix
>> users don't care about it.
>>
>> SSDs negate the whole point of defragmentation.  On a rotational disk it's
>> faster to read blocks B, B+1, B+2, B+4, B+5 than it is to read five random
>> blocks from the disk, because the read heads are already positioned in the
>> right place, and the disk is going to rotate to show those five blocks in
>> order.  SSDs are just like RAM: they're Random Access Memory.  Reading any
>> five blocks in any order takes roughly the same time.  So nobody cares
>> about fragmentation on an SSD.  Read whatever blocks you want in whatever
>> order you want.
>>
>> As to clever management of disk block alignment with respect to rows and
>> columns, this is rarely worth attention these days.  The amount of
>> programming and debugging time it takes to get this right, and the amount
>> of extra processing and disk access you need to do, give you less return on
>> investment than if you spent the same money on buying a faster hard disk.
>>  It's "premature optimization" (look up the term) except for two cases:
>> overnight runs and realtime 3D graphics.  If your overnight run takes more
>> than one night, you have a problem.  If you're programming realtime 3D
>> graphics and they're jerky, your users won't enjoy your simulation.  But
>> you wouldn't be using a SQL engine for 3D graphics anyway.
>>
>> The matters you mentioned were all worth attention back in the 1980s when
>> storage and bandwidth were expensive.  As you pointed out near the end of
>> your post, these things matter less now.
>>
>> Simon.
>>


Re: [sqlite] transaction commit is successful despite I/O error

2012-10-25 Thread Pavel Ivanov
I guess this can stand as one more argument for not using SQLite over
NFS. Another argument could be found here:
http://www.sqlite.org/lockingv3.html#how_to_corrupt.

Pavel


On Wed, Oct 24, 2012 at 2:01 PM, Efim Dyadkin  wrote:
> I was probably not quite specific. So I would like to rephrase the problem 
> and give more details.
>
> I run a default configuration of Sqlite on Linux and I don't use WAL or 
> anything that changes transaction behavior. A database is located on a remote 
> file server and accessed via NFS. There is a single process only that 
> accesses the database. The problem is when auto-mounter is stopped (on a 
> machine where the process is running) in the middle of database transaction, 
> commit returns successfully though it leaves a hot journal on disk because it 
> is not visible to unlink() (ENOENT)!!!
>
> Note, when auto-mounter is down, you still can operate on a database file and 
> its journal if they were opened before the stop.
>
> The problem can be reproduced very easily with an Sqlite command shell:
> 1) Run the shell
> 2) Begin exclusive transaction
> 3) Insert a few records
> 4) Stop auto-mounter
> 5) Optionally insert a few more records (stopped auto-mounter does not affect 
> these inserts because database and journal are opened)
> 6) Commit (no errors!!!)
> 7) Quit the shell
>
> If you then restart auto-mounter and check the database directory you will 
> find a hot journal! If you reopen the database the journal will be played 
> back and recently committed changes will be gone!
>
> Thanks for your feedbacks,
> Efim
>
>
>
> 
>  This e-mail, including any attached files, may contain confidential and 
> privileged information for the sole use of the intended recipient. Any 
> review, use, distribution, or disclosure by others is strictly prohibited. If 
> you are not the intended recipient (or authorized to receive information for 
> the intended recipient), please contact the sender by reply e-mail and delete 
> all copies of this message.


Re: [sqlite] SQLite could not open db in WinRT issue

2012-10-24 Thread Pavel Ivanov
SQLite expects all paths to be in UTF-8 (non-standard for Win but the
same for all platforms). Is your path in UTF-8 or in some other system
encoding?

Pavel
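A quick way to see that a correctly encoded non-ASCII path is fine by itself: in the sketch below the directory name mimics the reported profile path, and on a typical Unix setup Python hands SQLite a proper UTF-8 path, so the open succeeds. Problems appear only when a path in some other system encoding is passed where UTF-8 is expected.

```python
import os
import sqlite3
import tempfile

# Directory name mimics the non-ASCII Windows profile path in the report.
root = tempfile.mkdtemp()
folder = os.path.join(root, "Václav")
os.mkdir(folder)
path = os.path.join(folder, "users.db")

conn = sqlite3.connect(path)
conn.execute("CREATE TABLE Users (Id INTEGER PRIMARY KEY, LastName TEXT)")
conn.commit()
print(os.path.exists(path))   # True
```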

On Tue, Oct 23, 2012 at 6:28 AM, Václav Jirovský
 wrote:
> Hello,
>
> I tried to use SQLite 3.7.14.1 with sqlite-net 1.0.5 (both official from
> visual studio gallery or NuGet with no modifications) for WinRT.
>
> here is my code to get error:
>
> [Table("Users")]
> public class User
> {
> [PrimaryKey, Unique]
> public int Id { get; set; }
> public string LastName { get; set; }
> public string FirstName { get; set; }
> public int Age { get; set; }
> }
>
> .
>
>  string DbRootPath = ApplicationData.Current.LocalFolder.Path;
>  string path = Path.Combine(DbRootPath, "users.db");
>  var db = new SQLiteAsyncConnection(path);
>
> and when I call
>
>   await db.CreateTableAsync<User>();
>
> I receive SQLiteException: Could not open database file:
> C:\Users\Václav\AppData\Local\Packages\xxx\LocalState\users.db
> (CannotOpen)
>
> I think it's because of the Unicode char (á) in my path, but from a WinRT
> app I am allowed to write to folders in this user path only. In Czech this
> char is typical in names, and most people's usernames are their names.
>
> Has anyone seen this?
>
> Thanks,
>
> Vaclav
> --


Re: [sqlite] Question about foreign key

2012-10-21 Thread Pavel Ivanov
On Sat, Oct 20, 2012 at 10:41 PM, Igor Korot  wrote:
> Hi, ALL,
> According to http://www.sqlite.org/foreignkeys.html the FOREIGN KEY
> support is disabled by default.
> In order to enable it I need to compile SQLITE with 2 defines undefined.

Which are also undefined by default. Any SQLITE_OMIT_* define is undefined
by default.

> I downloaded the file sqlite-amalgamation-3071400.zip, unpacked it,
> added .c and .h file to my project
> and inspected them.
>
> SQLITE_OMIT_FOREIGN_KEY can not be found in both .h files and I don't
> see the #define of this constant
> anywhere in the .c file.
>
> Is foreign key documentation outdated?

Nope, everything is right.

> Also one minor question: do I need both .h files or just one will suffice?

It would be nice to know the names of those two .h files. But
generally speaking sqlite3.h should be enough.


Pavel
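One more runtime detail worth noting here: even in a build where none of the omit-defines are set, enforcement stays off until PRAGMA foreign_keys is issued on each connection. A minimal sketch (invented schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child (pid INTEGER REFERENCES parent(id));
""")

conn.execute("INSERT INTO child VALUES (99)")       # accepted: enforcement off
conn.commit()   # the pragma is a no-op inside an open transaction

conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO child VALUES (100)")  # now rejected
    result = "allowed"
except sqlite3.IntegrityError:
    result = "rejected"
print(result)   # rejected
```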
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A question about prepared statements

2012-10-19 Thread Pavel Ivanov
>> Var Alice := Db.SelectRecord('SELECT * FROM client WHERE client_name = ?', [ 
>>  'alice' ]);
>>
>> 1. If I drop record for "alice" from db and then access column data in 
>> prepared stmt will it work OK?
>
> If the select statement wasn't reset or finalized, you won't be able to 
> delete a record from the database (or, in WAL journaling mode, you would be 
> able to, but the reader would still see original data).

Well, if he uses the same connection used to prepare the statement,
then he will be able to delete the record even in normal journaling mode.
What happens in this case is undefined. It might be that column data
is still accessible, it might be that you get some garbage, it might
be an access violation.

>> 2. Can I drop client table while having such prepared stmt not closed?

I think table client cannot be dropped altogether until all statements
using it are reset/finalized.


Pavel


On Fri, Oct 19, 2012 at 12:49 PM, Igor Tandetnik  wrote:
> Григорий Григоренко  wrote:
>> I am using prepared stmts to cache selected record for later use. I have a 
>> wrapper class, that has methods like AsString(name),
>> AsFloat(name) etc to extract data for a column name.
>>
>> I prepare, bind and then do Sqlite3_step() to get record. Later I use 
>> sqlite3_column_XXX() to access data.
>>
>> My concern is - do open selective prepared stmts depend on something in db 
>> or lock something in db?
>
> Yes, until you call sqlite3_reset or sqlite3_finalize. For as long as the 
> statement is "active" (meaning, sqlite3_reset or sqlite3_finalize has not 
> been called after the most recent sqlite3_step call), it holds a read 
> transaction open.
>
>> Var Alice := Db.SelectRecord('SELECT * FROM client WHERE client_name = ?', [ 
>>  'alice' ]);
>>
>> 1. If I drop record for "alice" from db and then access column data in 
>> prepared stmt will it work OK?
>
> If the select statement wasn't reset or finalized, you won't be able to 
> delete a record from the database (or, in WAL journaling mode, you would be 
> able to, but the reader would still see original data).
>
>> 2. Can I drop client table while having such prepared stmt not closed?
>
> What do you mean by "closed" here?
>
>> 3. If I have a lot of such stmts (100 or more) will it somehow affect 
>> performance of SQLite or waste lots of memory?
>
> No, not really. There is a reasonably small data structure associated with a 
> prepared statement, on the order of a few hundred bytes perhaps.
>
>> Are there some kind of cursors for each selecting prepared stmt?
>
> In a sense. After a call to sqlite3_step and before a call of sqlite3_reset 
> or sqlite3_finalize, you might think of a statement handle as a form of a 
> cursor. Each subsequent sqlite3_step call advances this cursor forward by one 
> row.
>
>> 4. If there is a cursor, maybe there is a way to disconnect a cursor from 
>> stmt keeping the single record still available?
>
> No, short of making a copy of every column's value.
> --
> Igor Tandetnik
>


Re: [sqlite] Begin immediate transaction -> SQLITE_BUSY (database is locked)

2012-10-18 Thread Pavel Ivanov
On Thu, Oct 18, 2012 at 6:32 AM, Daniel Polski  wrote:
>> The SELECT statement, including the _prepare() stage and all the _step()s
>> until you've reached the last row, and then the _finalize(), is all one
>> process.  They're all part of the statement and you can assume that the
>> database is still locked until you do a _finalize().
>>
>> If you are using the results of a SELECT to figure out a bunch of other
>> instructions like INSERT or DELETE you can do it two ways:
>>
>> A) Quickly save the results of stepping into an array.  Once the SELECT is
>> finalized, look through the array and figure out what you want to do about
>> it.
>>
>> B) Generate the INSERT/DELETE commands while stepping but keep them
>> somewhere, either as a data array or by accumulating the text of the SQL
>> commands in a string, rather than executing them immediately.  Once you're
>> finished stepping, execute the commands.  (You may choose to use _exec to
>> execute them all in one go.)
>>
>>
>
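Option A above can be sketched as follows, with Python's sqlite3 module standing in for the C API (table name from the thread, data invented); fetchall() plays the role of stepping everything into an array before the statement is finished:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE test_table_2 (id INTEGER PRIMARY KEY, flag INTEGER);
INSERT INTO test_table_2 VALUES (1, 0), (2, 1), (3, 1);
""")

# Option A: step the whole SELECT into an array first...
rows = conn.execute("SELECT id, flag FROM test_table_2").fetchall()

# ...then, with the statement finished, run the writes it implied.
doomed = [(rid,) for rid, flag in rows if flag == 1]
conn.executemany("DELETE FROM test_table_2 WHERE id = ?", doomed)
conn.commit()

remaining = conn.execute("SELECT COUNT(*) FROM test_table_2").fetchone()[0]
print(remaining)   # 1
```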
> Thank you for the clarification, but I still misunderstand the documentation
> in some way.
>
> In the documentation about WAL mode it says:
> "Writers merely append new content to the end of the WAL file. Because
> writers do nothing that would interfere with the actions of readers, writers
> and readers can run at the same time. However, since there is only one WAL
> file, there can only be one writer at a time."
>
> Maybe the magic words I don't fully understand are what's written later:
> "A checkpoint operation takes content from the WAL file and transfers it
> back into the original database file. A checkpoint can run concurrently with
> readers, however the checkpoint must stop when it reaches a page in the WAL
> that is past the read mark of any current reader. The checkpoint has to stop
> at that point because otherwise it might overwrite part of the database file
> that the reader is actively using. The checkpoint remembers (in the
> wal-index) how far it got and will resume transferring content from the WAL
> to the database from where it left off on the next invocation."
>
> I logically do understand that there can't be 2 writers updating the
> database at the same time, but I don't understand why the second insert
> statement in the example below won't work without finalizing the SELECT
> query?
>
>
> sqlite3* conn_1;
> sqlite3* conn_2;
> //(..opening db files and so on)
>
> sqlite3_stmt* pVM_1;
> sqlite3_stmt* pVM_2;
>
> sqlite3_prepare(conn_1, "SELECT * FROM test_table_2;", -1, &pVM_1, NULL);
> //sets WAL end mark for pVM_1?
> sqlite3_prepare(conn_2, "SELECT * FROM test_table_2;", -1, &pVM_2, NULL);
> //sets WAL end mark for pVM_2?
>
> nRet = sqlite3_step(pVM_1); //stepping if < WAL end mark set for pVM_1, set
> new read mark?
> nRet = sqlite3_step(pVM_2); //stepping if < WAL end mark set for pVM_2, set
> new read mark?
>
> //statements below will add content to the end of the WAL file?
> nRet = sqlite3_exec(conn_1, "BEGIN IMMEDIATE TRANSACTION;", 0, 0, NULL);
> nRet = sqlite3_exec(conn_1, "INSERT INTO test_table_1 VALUES(1, 1);", 0, 0,
> NULL);
> nRet = sqlite3_exec(conn_1, "COMMIT;", 0, 0, NULL);
>
> nRet = sqlite3_step(pVM_1); //stepping if < WAL end mark set for pVM_1, set
> new read mark?
> nRet = sqlite3_step(pVM_2); //stepping if < WAL end mark set for pVM_2, set
> new read mark?
>
> //sqlite3_finalize(pVM_1);
> //sqlite3_finalize(pVM_2);
>
> //The execution below will fail with SQLITE_BUSY if the SELECT statement
> pVM_2 isn't finalized
> //(Why won't it append new data in the end of the WAL file just like the
> successful insert above?)
> nRet = sqlite3_exec(conn_2, "BEGIN IMMEDIATE TRANSACTION;", 0, 0, NULL);
> nRet = sqlite3_exec(conn_2, "INSERT INTO test_table_1 VALUES(1, 1);", 0, 0,
> NULL);
> nRet = sqlite3_exec(conn_2, "COMMIT;", 0, 0, NULL);

When an INSERT is executed it doesn't simply append a new row to the WAL
file; it has to add the new row to some database page, possibly changing
some other pages along the way too. Those changed pages are what gets
written into the WAL file. Because of this, when an INSERT statement is
executed it must run against the latest version of the database pages.
Otherwise it could create an alternative version of the database that
can't be merged with the version created by other INSERT statements. So
when you execute an INSERT statement, or start an IMMEDIATE transaction,
or convert an existing read-only transaction into a writing transaction,
SQLite has to make sure that the transaction sees the latest version of
the database. But when you start a read-only transaction, SQLite ensures
that all the way through it sees a consistent database state, and that is
the state of the database at the beginning of the transaction. So when
you start executing the SELECT statement you lock conn_2 into the
database state that existed at that moment. Then you make changes on
conn_1, so the database state moves on. Then you want to execute changes
on conn_2, but it can't, because it's locked into a database state which
is no longer the latest one. And it returns SQLITE_BUSY because of that.
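The behaviour described here — a connection pinned to an older snapshot cannot be promoted to a writer until its read statement is finished — can be reproduced with Python's built-in sqlite3 module. This is only a sketch, not the original poster's C test program; the file and table names are invented:

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
c1 = sqlite3.connect(path, isolation_level=None)   # autocommit; no implicit BEGIN
c1.execute("PRAGMA journal_mode=wal").fetchall()   # drain the PRAGMA's result row
c1.execute("CREATE TABLE t(x INTEGER)")
c1.executemany("INSERT INTO t VALUES (?)", [(1,), (2,)])

c2 = sqlite3.connect(path, timeout=0.5, isolation_level=None)
cur = c2.execute("SELECT x FROM t")     # unfinished statement: c2 now holds a read
row = cur.fetchone()                    # transaction pinned to the current snapshot

c1.execute("INSERT INTO t VALUES (3)")  # allowed: in WAL, readers don't block the writer

err = None
try:
    c2.execute("INSERT INTO t VALUES (4)")  # c2's snapshot is no longer the latest,
except sqlite3.OperationalError as e:       # so it cannot be promoted to a writer:
    err = str(e)                            # SQLite reports SQLITE_BUSY

cur.fetchall()                          # finish the SELECT, releasing the read txn
c2.execute("INSERT INTO t VALUES (4)")  # the same write now succeeds
```

Finishing (or resetting) the SELECT is exactly the fix recommended in the thread below: once the read transaction is released, the connection can begin a fresh transaction on the latest snapshot.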

Re: [sqlite] Begin immediate transaction -> SQLITE_BUSY (database is locked)

2012-10-17 Thread Pavel Ivanov
The problem is that you are starting a read-only transaction by
executing the SELECT and then trying to convert it into a writing
transaction by executing BEGIN IMMEDIATE. If SQLITE_BUSY is returned in
that situation, you have to finish the transaction and start it again.
In your code the solution is easy: finalize the SELECT statement before
executing BEGIN IMMEDIATE.

Pavel

On Wed, Oct 17, 2012 at 6:12 AM, Daniel Polski  wrote:
> Hello again,
> Attached is a test application which replicates the problem.
>
> I expected the transactions to block each other exactly like they do in the
> beginning (one connection successfully begins and the other receives
> SQLITE_BUSY), but I didn't expect the blocked connection to never get
> unlocked in the end. What's holding the lock on the database so that the
> "begin" can't proceed?
>
> Sample console output:
>
> sqlite3_libversion: 3.7.13
> sqlite3_sourceid: 2012-06-11 02:05:22
> f5b5a13f7394dc143aa136f1d4faba6839eaa6dc
> sqlite3_libversion_number: 3007013
> sqlite3_threadsafe: 1
> Creating thread 0
> Creating thread 1
>
> conn addr   status   query
> ---------   ------   -----
> 0x6a6278    Success  PRAGMA journal_mode=wal;
> 0x6a6278    Success  BEGIN IMMEDIATE TRANSACTION;
> 0x6a6278    Success  INSERT INTO test_table_1 (test1, test2) VALUES (1, 2);
> 0x6a6278    Success  COMMIT;
> 0x6a6278    Success  BEGIN IMMEDIATE TRANSACTION;
> 0x6a6278    Success  INSERT INTO test_table_1 (test1, test2) VALUES (1, 2);
> 0x6a6278    Success  COMMIT;
> 0x6a6278    Success  BEGIN IMMEDIATE TRANSACTION;
> 0x6a6278    Success  INSERT INTO test_table_1 (test1, test2) VALUES (1, 2);
> 0x6bd678    Success  PRAGMA journal_mode=wal;
> 0x6bd678    Failed   BEGIN IMMEDIATE TRANSACTION; (return code: 5)
> 0x6a6278    Success  COMMIT;
> 0x6a6278    Success  BEGIN IMMEDIATE TRANSACTION;
> 0x6a6278    Success  INSERT INTO test_table_1 (test1, test2) VALUES (1, 2);
> 0x6a6278    Success  COMMIT;
> 0x6a6278    Success  BEGIN IMMEDIATE TRANSACTION;
> 0x6a6278    Success  INSERT INTO test_table_1 (test1, test2) VALUES (1, 2);
> 0x6a6278    Success  COMMIT;
> 0x6a6278    Success  BEGIN IMMEDIATE TRANSACTION;
> 0x6a6278    Success  INSERT INTO test_table_1 (test1, test2) VALUES (1, 2);
> 0x6a6278    Success  COMMIT;
> 0x6a6278    Success  BEGIN IMMEDIATE TRANSACTION;
> 0x6a6278    Success  INSERT INTO test_table_1 (test1, test2) VALUES (1, 2);
> 0x6a6278    Success  COMMIT;
> 0x6a6278    Success  BEGIN IMMEDIATE TRANSACTION;
> 0x6a6278    Success  INSERT INTO test_table_1 (test1, test2) VALUES (1, 2);
> 0x6a6278    Success  COMMIT;
> 0x6a6278    Success  BEGIN IMMEDIATE TRANSACTION;
> 0x6a6278    Success  INSERT INTO test_table_1 (test1, test2) VALUES (1, 2);
> 0x6a6278    Success  COMMIT;
> 0x6a6278    Success  BEGIN IMMEDIATE TRANSACTION;
> 0x6a6278    Success  INSERT INTO test_table_1 (test1, test2) VALUES (1, 2);
> 0x6a6278    Success  COMMIT;
> 0x6bd678    Failed   BEGIN IMMEDIATE TRANSACTION; (return code: 5)
> 0x6bd678    Failed   BEGIN IMMEDIATE TRANSACTION; (return code: 5)
> (... And so on)
>
>
> Thank you in advance,
> Daniel
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite VM questions

2012-10-14 Thread Pavel Ivanov
On Sun, Oct 14, 2012 at 12:07 PM, Ryan Johnson
<ryan.john...@cs.utoronto.ca> wrote:
> On 14/10/2012 2:26 PM, Pavel Ivanov wrote:
>>>
>>> What kind of interpreter does the query executor use? How important is
>>> the interpreter's speed to SQLite's speed?
>>
>> SQLite doesn't have an interpreter, it has a parser. I guess this makes
>> the rest of your email inapplicable.
>
> Umm... yes it does. http://www.sqlite.org/vdbe.html

Maybe there's some conflict of terminology here. But as I understand
it, converting an SQL query into a set of opcodes representing all
operations needed to execute the query, and then executing those
opcodes, is not interpreting, it's parsing. Interpreting is more
commonly associated with full-blown execution languages like Python,
Perl, JavaScript or something like that. Those languages indeed require
technologies like JIT. But they are not applicable to SQL. Maybe to a
PL/SQL-like language, but that doesn't exist in SQLite.
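The opcode programs under discussion can be inspected directly: prefixing a query with EXPLAIN makes SQLite return the VDBE program instead of running it. A quick sketch using Python's built-in sqlite3 module (the exact opcode list varies between SQLite versions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# EXPLAIN yields one row per VDBE instruction:
# (addr, opcode, p1, p2, p3, p4, p5, comment)
program = conn.execute("EXPLAIN SELECT 1").fetchall()
ops = [row[1] for row in program]
print(ops)  # typically includes 'Integer', 'ResultRow' and 'Halt'
```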

Pavel


Re: [sqlite] SQLite VM questions

2012-10-14 Thread Pavel Ivanov
> What kind of interpreter does the query executor use? How important is the
> interpreter's speed to SQLite's speed?

SQLite doesn't have an interpreter, it has a parser. I guess this makes the
rest of your email inapplicable.


Pavel


On Sun, Oct 14, 2012 at 4:38 AM, Elefterios Stamatogiannakis
 wrote:
> I have some questions for those that know the innards of SQLite.
>
> What kind of interpreter does the query executor use? How important is the
> interpreter's speed to SQLite's speed?
>
> Concerning above questions, i've found a great article about a portable
> interpreter implementation that produces a close to JITed performance,
> interpreter:
>
> http://www.emulators.com/docs/nx25_nostradamus.htm
>
> Another idea for producing a portable JIT (without an assembly backend) is
> what QEMU does, by "chaining" precompiled functions. Arguably QEMU's way is
> more heavy/complex than using an interpreter, but maybe it wouldn't bloat
> SQLite that much, and SQLite would remain portable across platforms.
>
> I'm asking above questions, because i believe that due to SQLite running on
> billions of devices it needs to be as efficient as possible. Due to the
> number of deployments, it may "burn" GWatts of power across all these
> devices (i haven't done the calculation).
>
> Thanks,
>
> lefteris.


Re: [sqlite] light weight write barriers

2012-10-12 Thread Pavel Ivanov
Well, an article on write barriers published in May 2007 can't
contradict the statement that barriers don't exist these days. :)

Pavel

On Fri, Oct 12, 2012 at 5:38 AM, Black, Michael (IS)
 wrote:
> There isn't?  Somebody sure wasted their time on this article then...
> http://www.linux-magazine.com/w3/issue/78/Write_Barriers.pdf
>
> Michael D. Black
> Senior Scientist
> Advanced Analytics Directorate
> Advanced GEOINT Solutions Operating Unit
> Northrop Grumman Information Systems
>
> 
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
> behalf of Christoph Hellwig [h...@infradead.org]
> Sent: Thursday, October 11, 2012 12:41 PM
> To: ? Yang Su Li
> Cc: linux-fsde...@vger.kernel.org; General Discussion of SQLite Database; 
> linux-ker...@vger.kernel.org; d...@hwaci.com
> Subject: EXT :Re: [sqlite] light weight write barriers
>
> On Thu, Oct 11, 2012 at 11:32:27AM -0500, ? Yang Su Li wrote:
>> I am not quite sure whether I should ask this question here, but in terms
>> of light weight barrier/fsync, could anyone tell me why the device
>> driver / OS provide the barrier interface other than some other
>> abstractions anyway? I am sorry if this sounds like a stupid questions
>> or it has been discussed before
>
> It does not.  Except for the legacy mount option naming there is no such
> thing as a barrier in Linux these days.
>


Re: [sqlite] Ordering of fields in a join

2012-10-11 Thread Pavel Ivanov
On Thu, Oct 11, 2012 at 7:09 AM, Hamish Symington
 wrote:
>> Note that I was mistaken earlier when I said that "X=Y" and "Y=X" should
>> result in the same answer.  If X and Y have different default collating
>> sequences, then X=Y does not mean the same thing as Y=X and so the answers
>> can be different.
> Right, I understand. Perhaps a brief note on the docs for the query optimizer 
> to remind people of this might be worth it - it seems slightly 
> counterintuitive, from a 'normal' user's point of view.

Usage of the appropriate collating function is not an optimizer
feature; it's a property of SQLite's data types. And it's well
documented here: http://www.sqlite.org/datatype3.html (see section
6.1). Whether the optimizer chooses to use an index or not, depending
on the collating function, is also well documented.
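The X=Y vs Y=X asymmetry mentioned earlier in the thread follows from the documented rule that, for a binary comparison, the collating sequence of the left operand takes precedence. A small sketch with Python's built-in sqlite3 module (table and column names invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t(a TEXT COLLATE NOCASE, b TEXT)")  # b defaults to BINARY
conn.execute("INSERT INTO t VALUES ('abc', 'ABC')")

# Left operand's collation wins: a = b compares using NOCASE, so the row matches...
n1 = conn.execute("SELECT count(*) FROM t WHERE a = b").fetchone()[0]  # 1
# ...while b = a compares using BINARY, and the same row no longer matches.
n2 = conn.execute("SELECT count(*) FROM t WHERE b = a").fetchone()[0]  # 0
```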

Pavel


Re: [sqlite] how to open a tickect with an issue?

2012-10-01 Thread Pavel Ivanov
See http://www.sqlite.org/src/wiki?name=Bug+Reports .

Pavel

On Mon, Oct 1, 2012 at 3:10 AM, Álvaro García  wrote:
> I Have a problem with Entity framework and SQLite and I would like to open a
> ticket to expose my problem, but I don't see the way to open the ticket.
>
> could someone help me?
>
>
>
> Thanks.


Re: [sqlite] How much disk space is required to checkpoint?

2012-09-26 Thread Pavel Ivanov
On Wed, Sep 26, 2012 at 5:55 AM, Simon Slavin  wrote:
>
> On 26 Sep 2012, at 5:42am, Dan Kennedy  wrote:
>
>> So when SQLite requires a temporary
>> file, it normally creates and opens a file with a randomized name
>> in /tmp then calls unlink() on it immediately - before it starts
>> using the file.
>
> Goodness me.  Okay, I wasn't aware that SQLite used this trick.  Thanks for 
> the correction.  That certainly does tie in with what the OP reported.
>
> There are problems with doing this under some versions of Unix/Linux, and 
> especially variants like Android.

I wonder what problems do you have in mind?

> For instance, another user/app can make a file with the same name.

If it doesn't happen in between file creation and unlinking then
there's absolutely no problem with this -- that app will see a
different file, not the one that SQLite sees. And with name
randomization and the extremely short time between creation and
unlinking, there are virtually no chances of any problems of this type.

> Or the system may crash while the file is still open.

What's the problem with this? File will be deleted by the file system
after reboot. To the contrary this scenario will give you problems if
you don't do unlinking.
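The create-then-unlink idiom being debated here is easy to demonstrate on POSIX systems: once the name is unlinked, the file continues to exist for any process that still holds it open, and the kernel reclaims the space when the last descriptor is closed (including after a crash). A POSIX-only sketch, not SQLite's actual implementation:

```python
import os, tempfile

fd, path = tempfile.mkstemp()    # create a randomly named temp file
os.unlink(path)                  # remove the name immediately...
assert not os.path.exists(path)  # ...no other process can open it by name now

os.write(fd, b"scratch data")    # ...but the open descriptor still works
os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 12)           # reads back b"scratch data"
os.close(fd)                     # storage is reclaimed here (or at process exit)
```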

> Is there any chance that the use of this trick can be discontinued ?

Hopefully you have some compelling reason for suggesting that.


Pavel


Re: [sqlite] SQL:2003 -- Window Functions

2012-09-19 Thread Pavel Ivanov
On Wed, Sep 19, 2012 at 7:10 PM, Mohd Radzi Ibrahim  wrote:
> On Thu, Sep 20, 2012 at 12:58 AM, Igor Tandetnik wrote:
>
>> On 9/19/2012 12:51 PM, joe.fis...@tanguaylab.com wrote:
>>
>>> Too bad SQLite doesn't yet support SQL Window Functions.
>>>
>>> Are there any SQLite Extension Libraries that support "SQL:2003 type
>>> Window Functions"?
>>> I specifically need LEAD and LAG to calculate an event integer timestamp
>>> delta between consecutive rows.
>>> I've played with some self-join code but that's proving to be complicated.
>>>
>>
>> The easiest approach is to maintain the window in your application code,
>> as you iterate over a simple SELECT statement.
>> --
>> Igor Tandetnik
>>
>>
>>
> Could it not be done with inner select of ROWID-n and ROWID+n to get the
> LEAD and LAG row ?
>
> select
>  logtime as timeNow,
>  (select logtime from logtable where rowid=a.rowid-1) as timeBefore,
>  (select logtime from logtable where rowid=a.rowid+1) as timeAfter
> from logtime;

This will work only if logtable has consecutive rowids, which is
almost never the case.
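A rowid-independent way to emulate LAG is to correlate on the ordering column itself rather than on rowid. A sketch using Python's built-in sqlite3 module and an invented logtable (note that SQLite 3.25 and later also provide real LAG/LEAD window functions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logtable(logtime INTEGER)")
# Deliberately leave gaps in rowid, as happens after deletes:
conn.executemany("INSERT INTO logtable(rowid, logtime) VALUES (?, ?)",
                 [(1, 100), (2, 130), (5, 145), (9, 200)])

# The rowid-1 trick loses the previous row wherever rowids are not consecutive:
naive = conn.execute("""
    SELECT logtime, (SELECT logtime FROM logtable WHERE rowid = a.rowid - 1)
    FROM logtable a ORDER BY a.rowid""").fetchall()
# -> [(100, None), (130, 100), (145, None), (200, None)]

# Correlating on logtime itself yields the delta regardless of rowid gaps:
deltas = conn.execute("""
    SELECT logtime,
           logtime - (SELECT max(b.logtime) FROM logtable b
                      WHERE b.logtime < a.logtime) AS delta
    FROM logtable a ORDER BY a.logtime""").fetchall()
# -> [(100, None), (130, 30), (145, 15), (200, 55)]
```

The correlated MAX subquery is O(log n) per row with an index on logtime, which is usually acceptable for timestamp-delta reports.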

Pavel


Re: [sqlite] Getting query results as new records are added

2012-09-14 Thread Pavel Ivanov
On Fri, Sep 14, 2012 at 8:57 AM, Igor Tandetnik <itandet...@mvps.org> wrote:
> On 9/14/2012 10:36 AM, Pavel Ivanov wrote:
>>
>> On Fri, Sep 14, 2012 at 5:58 AM, Igor Tandetnik <itandet...@mvps.org>
>> wrote:
>>>
>>> Wait a minute. If "statement-level read consistency" only applies to two
>>> queries in different transactions, what bearing, if any, does it have on
>>> your example of "open, fetch, update, fetch"? There, everything happens in
>>> the same transaction. I'm thoroughly confused.
>>
>>
>> I'm sorry, I made that example before I carefully re-read Petite's
>> email and understood what you are concerned about.
>
>
> I'm not actually concerned about anything. It is difficult for me to
> possibly care less about the intricacies of Oracle's transaction management.
> It's just that Petite asked whether SQLite supports something called
> "statement-level read consistency", and I've never heard of such a beast,
> and was confused by the formal definition thereof that Petite cited.
>
> Let me see if I understand it. Oracle (optionally?) provides transaction
> isolation levels weaker than "serializable", whereby transaction A may see
> changes made by another transaction B, committed after A started.

This weaker isolation level is (or at least was when I worked with
Oracle databases) set by default. "Serializable" was optional, as it
imposes more problems for DB administrators and requires more careful
implementation on the programmer's side.

> However,
> it guarantees that any individual statement within A, during its lifetime,
> will see data either as it existed before B committed, or as it exists
> afterward, but never a mix of the two. Whereas two identical statements
> executed one after another may very well observe different data. This is
> what's called "statement-level read consistency".
>
> Since SQLite only provides serializable transaction isolation level, no
> statement in transaction A will ever see any changes made by transaction B.
> This is a strictly stronger guarantee than "statement-level read
> consistency", rendering moot the question of whether SQLite supports the
> same.
>
> In addition to, and independently of, all of the above, Oracle provides a
> similar guarantee (whose name was not mentioned in this thread) that a
> statement A within a transaction will see all changes by another statement B
> within the same transaction if B completes before A starts; and will not see
> any changes made by yet third statement C, still within the same
> transaction, if C completes after A starts (even if C starts while A was in
> progress). SQLite only provides half of that guarantee: statement A will see
> the changes made by B; it may see none, some, or all changes made by C, in
> an unpredictable way.
>
> Thus, in SQLite it is best to avoid modifying the same data that an
> outstanding SELECT statement is iterating over, as the outcome of such a
> modification is unpredictable.
>
>
> Does this sound right to you?

Yes, that's right.

Pavel


Re: [sqlite] Getting query results as new records are added

2012-09-14 Thread Pavel Ivanov
On Fri, Sep 14, 2012 at 5:58 AM, Igor Tandetnik <itandet...@mvps.org> wrote:
> Pavel Ivanov <paiva...@gmail.com> wrote:
>> There's another way how it could make sense - if you read that excerpt
>> in context. ;-) The "statement-level read consistency" definition
>> cited by Petite is the transaction isolation level. I.e. it defines
>> what each statement executed in one transaction can see related to the
>> changes done in another transaction.
>
> Well, in this case, if you only ever execute one statement per transaction in 
> SQLite, then it does support "statement-level read consistency." Though I 
> fail to see how it materially differs from "serializable". Am I missing 
> something obvious?

For one statement per transaction, "statement-level consistency" and
"serializable" do not differ, you are right. But IIRC Oracle doesn't
have an auto-commit mode of operation, and multi-statement transactions
are more common, especially when we talk about stored procedures,
triggers and things like that. And if you ever call "begin
transaction" or execute multi-statement transactions, then serializable
will make sure that your transaction sees only changes committed
before "begin"; with "statement-level" you can see changes committed
after "begin". The worst problem with this: under "statement-level
consistency" two identical SELECT queries executed inside the same
transaction can return different results.

>> This isolation level doesn't
>> define visibility of changes done in the same transaction. These
>> visibility rules are defined in different place in the documentation
>> and don't depend on transaction isolation level.
>>
>> So returning to my example "statement-level read consistency" means
>> that cursor can see myfield=2 if update is done in another transaction
>> before cursor is opened but after begin is executed.
>
> Wait a minute. If "statement-level read consistency" only applies to two 
> queries in different transactions, what bearing, if any, does it have on your 
> example of "open, fetch, update, fetch"? There, everything happens in the 
> same transaction. I'm thoroughly confused.

I'm sorry, I made that example before I carefully re-read Petite's
email and understood what you are concerned about. Plus I was taking
your example as a base for mine. :)
But for statements inside one transaction Oracle uses the same rules
as in statement-level consistency too. Although it's probably named a
little bit differently and its definition doesn't include "committed
data" (it would be "finished statements" or something like that).

Pavel


Re: [sqlite] Getting query results as new records are added

2012-09-14 Thread Pavel Ivanov
On Thu, Sep 13, 2012 at 9:29 PM, Igor Tandetnik <itandet...@mvps.org> wrote:
> Pavel Ivanov <paiva...@gmail.com> wrote:
>> On Thu, Sep 13, 2012 at 3:36 PM, Igor Tandetnik <itandet...@mvps.org> wrote:
>>> On 9/13/2012 4:39 PM, Petite Abeille wrote:
>>>>
>>>>
>>>> On Sep 13, 2012, at 10:24 PM, Igor Tandetnik <itandet...@mvps.org> wrote:
>>>>
>>>>>> In other words, a select should only ever see what existed at the
>>>>>> beginning of the query, and that is that.
>>>>>
>>>>>
>>>>> Again, the concept of isolation level applies to transactions, not to
>>>>> individual queries.
>>>>
>>>>
>>>> Let me rephrase that, by paraphrasing some other DB doc [1]:
>>>>
>>>> "statement-level read consistency. This guarantees that all the data
>>>> returned by a single query comes from a single point in time—the time that
>>>> the query began. Therefore, a query never sees dirty data or any of the
>>>> changes made by transactions that commit during query execution. As query
>>>> execution proceeds, only data committed before the query began is visible 
>>>> to
>>>> the query. The query does not see changes committed after statement
>>>> execution begins."
>>>>
>>>> So… SQLite always provide statement-level read consistency, yes?
>>>
>>>
>>> In SQLite, a query cannot see changes made by other transactions (whether
>>> committed or otherwise) that weren't already committed by the time the
>>> current transaction started. But it may observe changes made by the current
>>> transaction - the transaction the query is running in. If that's what the
>>> definition means by "dirty data", then no, SQLite doesn't provide
>>> statement-level read consistency.
>>>
>>> But I can't help but wonder. Consider this scenario:
>>>
>>> begin;
>>> update mytable set myfield=1;
>>> commit;
>>>
>>> begin;
>>> update mytable set myfield=2;
>>> select myfield from mytable;
>>> commit;
>>>
>>> Are you saying that in Oracle, the SELECT statement would observe myfield=1?
>>> That seems to follow from the definition you quote, but I can't imagine how
>>> such behavior could be useful. In SQLite, the SELECT statement would
>>> certainly see myfield=2.
>>
>> No, Oracle will return myfield=2 in this case.
>
> So then it doesn't support statement-level read consistency, as defined in 
> its own documentation? The claim was that, among other things, "as query 
> execution proceeds, only data committed before the query began is visible to 
> the query." But setting myfield to 2 has not yet been committed by the time 
> the query began.
>
> The only way it makes sense is if the word "commit" means something other 
> than "commit a transaction" - something like "a data manipulation statement 
> successfully runs to completion".

There's another way how it could make sense - if you read that excerpt
in context. ;-) The "statement-level read consistency" definition
cited by Petite is the transaction isolation level. I.e. it defines
what each statement executed in one transaction can see related to the
changes done in another transaction. This isolation level doesn't
define visibility of changes done in the same transaction. These
visibility rules are defined in different place in the documentation
and don't depend on transaction isolation level.

So returning to my example "statement-level read consistency" means
that cursor can see myfield=2 if update is done in another transaction
before cursor is opened but after begin is executed.


Pavel


Re: [sqlite] Getting query results as new records are added

2012-09-13 Thread Pavel Ivanov
On Thu, Sep 13, 2012 at 3:36 PM, Igor Tandetnik  wrote:
> On 9/13/2012 4:39 PM, Petite Abeille wrote:
>>
>>
>> On Sep 13, 2012, at 10:24 PM, Igor Tandetnik  wrote:
>>
>>>> In other words, a select should only ever see what existed at the
>>>> beginning of the query, and that is that.
>>>
>>>
>>> Again, the concept of isolation level applies to transactions, not to
>>> individual queries.
>>
>>
>> Let me rephrase that, by paraphrasing some other DB doc [1]:
>>
>> "statement-level read consistency. This guarantees that all the data
>> returned by a single query comes from a single point in time—the time that
>> the query began. Therefore, a query never sees dirty data or any of the
>> changes made by transactions that commit during query execution. As query
>> execution proceeds, only data committed before the query began is visible to
>> the query. The query does not see changes committed after statement
>> execution begins."
>>
>> So… SQLite always provide statement-level read consistency, yes?
>
>
> In SQLite, a query cannot see changes made by other transactions (whether
> committed or otherwise) that weren't already committed by the time the
> current transaction started. But it may observe changes made by the current
> transaction - the transaction the query is running in. If that's what the
> definition means by "dirty data", then no, SQLite doesn't provide
> statement-level read consistency.
>
> But I can't help but wonder. Consider this scenario:
>
> begin;
> update mytable set myfield=1;
> commit;
>
> begin;
> update mytable set myfield=2;
> select myfield from mytable;
> commit;
>
> Are you saying that in Oracle, the SELECT statement would observe myfield=1?
> That seems to follow from the definition you quote, but I can't imagine how
> such behavior could be useful. In SQLite, the SELECT statement would
> certainly see myfield=2.

No, Oracle will return myfield=2 in this case. Oracle's statement
isolation level (which is not supported by SQLite) is for cases like
this:

begin;
open cursor c as select myfield from mytable;
fetch next row from c;
fetch next row from c;
update mytable set myfield=2 where ...;
fetch next row from c;
fetch next row from c;
commit;

In this case cursor will never observe myfield=2 and this behavior is
well defined, documented and guaranteed.
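SQLite's half of this scenario is easy to check: within one transaction, a SELECT does see the transaction's own earlier, uncommitted UPDATE. A sketch with Python's built-in sqlite3 module, mirroring the myfield example quoted above:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manage txns explicitly
conn.execute("CREATE TABLE mytable(myfield INTEGER)")
conn.execute("INSERT INTO mytable VALUES (0)")

conn.execute("BEGIN")
conn.execute("UPDATE mytable SET myfield = 1")
conn.execute("COMMIT")

conn.execute("BEGIN")
conn.execute("UPDATE mytable SET myfield = 2")
val = conn.execute("SELECT myfield FROM mytable").fetchone()[0]
conn.execute("COMMIT")
print(val)  # 2: the not-yet-committed UPDATE is visible within its own transaction
```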


Pavel


Re: [sqlite] Possible bug - journal_mode DELETE/TRUNCATE

2012-09-09 Thread Pavel Ivanov
Note: attachments are stripped out of this list. So if you want for
anybody else to see your zip file you need to put it on some website
and post link here.

About the problem you have: I wonder how you can be sure that there
should be 50 rows in the database and not 49? If you are resetting the
OS before it has a chance to properly commit everything then it's okay
for the last transaction to be missing. But if you are sure that you
are resetting the OS after everything has settled, then maybe you are
not committing your last transaction properly? Or maybe there's some
bug in your virtualization layer and you need to try the same thing on
real hardware?


Pavel


On Sun, Sep 9, 2012 at 8:50 AM, Daniel Frimerman
 wrote:
> I am fairly new to sqlite and as a result of not reading the manual and not
> doing some performance testing, I got punished somewhat.  I did not
> anticipate that on journal_mode=DELETE and synchronous=FULL, I would get no
> more than 5 inserts (in auto-commit mode) per second.  It crippled a
> certain batch operation on a live system.  That's water under the bridge;
> it's the testing afterwards and a potential minor problem that I found is
> what I am now interested in.
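The ~5-inserts-per-second figure Daniel mentions is the classic symptom of autocommit: with journal_mode=DELETE and synchronous=FULL, every INSERT is its own durable transaction and pays the full journal-plus-fsync cost. Wrapping the batch in one transaction pays that cost once. A sketch with Python's built-in sqlite3 module (file name invented):

```python
import os, sqlite3, tempfile

path = os.path.join(tempfile.mkdtemp(), "batch.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit unless we BEGIN
conn.execute("PRAGMA journal_mode=DELETE")
conn.execute("PRAGMA synchronous=FULL")
conn.execute("CREATE TABLE t(x INTEGER)")

# Slow: each INSERT is a separate transaction with its own journal + fsync cycle.
for i in range(100):
    conn.execute("INSERT INTO t VALUES (?)", (i,))

# Fast: one transaction, one commit, one durability point for all 100 rows.
conn.execute("BEGIN")
for i in range(100):
    conn.execute("INSERT INTO t VALUES (?)", (i,))
conn.execute("COMMIT")

count = conn.execute("SELECT count(*) FROM t").fetchone()[0]  # 200
```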
>
> I tested all journal mode settings for sqlite, as well as the synchronous
> setting.  Some things that I discovered were not so obvious from reading
> the docs, such as the WAL journal mode combined with NORMAL synchronous
> setting, which is nowhere near as "durable" as NORMAL setting for other
> journal modes. I.e. NORMAL mode for DELETE journal in 99% of cases saves
> all inserted data - reproducing the slightest of chances that consistency
> is compromised was rather hard.  This is reflected in performance testing:
> NORMAL is only slightly faster than FULL mode for non-WAL journal settings
> (btw, journal_mode=OFF was never tested in any of my tests). But, I
> understood that WAL+NORMAL mode is equivalent to FULL+non-WAL mode
> where consistency/corruption is concerned.  That is, the database cannot
> get corrupted in WAL+NORMAL.  The gain in speed for WAL+NORMAL trades off
> durability and in my tests I easily reproduced that.
>
> Okay, that was not really related to the possible bug I found.  I've
> attached a ZIP file containing some batch files that create a table, insert
> some rows, at which point you hard-reset the OS, log back in and check if
> the number of rows in the DB matches what you inserted. Although the
> non-WAL journal modes are somewhat similar, the little problem that I've
> come to find only happens on DELETE/TRUNCATE, but not on PERSIST or WAL.
> The problem is basically as follows: in DELETE and TRUNCATE journal mode
> combined with NORMAL/FULL synchronous mode, there is always 1 row missing
> during my simulated power-cut.
>
> I used VirtualBox 4.1.22 and Windows XP Pro (SP3) and sqlite3 3.7.14
> (command line as well as through my testing application). In VirtualBox,
> under storage settings for the VM, I used IDE Controller (afaik it's single
> threaded), turned off host I/O cache.  Inside the guest, write-cache should
> be enabled in device manager under policies for the default disk controller.
>
> To test this, set your VM as above, copy the files from the attached ZIP
> file, also download the latest sqlite3 command line shell.  Restart the
> guest once to ensure your files are flushed out before you start resetting
> the guest :)
> Execute the following batch file: EXEC_DATA__DELETE_FULL.cmd, wait 2-3
> seconds (or less) then hit HOST+R to hard reset the OS.  When you reboot,
> run READ_DATA.cmd, you'll see 49 rows, but there should be 50.
> You can try the same with EXEC_DATA__DELETE_NORMAL.cmd,
> EXEC_DATA__TRUNCATE_FULL.cmd, EXEC_DATA__TRUNCATE_NORMAL.cmd
> 50 rows if you try with EXEC_DATA__PERSIST_FULL.cmd and
> EXEC_DATA__PERSIST_NORMAL.cmd and EXEC_DATA__WAL_FULL.cmd
>
> What's with that?
>
> Kind regards, Dan
>


Re: [sqlite] sqlite3DbMallocRaw problem

2012-09-09 Thread Pavel Ivanov
If pBuf points to invalid memory, it's possible that you are using an
already closed connection or an already finalized statement. Or maybe
your application corrupts SQLite's memory. Try running your program
under a memory debugger.

Pavel


On Sat, Sep 8, 2012 at 10:11 AM, David Levinson  wrote:
> All,
>
> I wanted to revise my last email regarding the failure within 
> sqlite3DbMallocRaw() ... The point of failure is  "db->lookaside.pFree = 
> pBuf->pNext;" and it appears that pBuf points to an invalid memory location 
> since Windows reports a memory access violation.  Does anyone know how to 
> correct this?
>
> LookasideSlot *pBuf;
> if( db->mallocFailed ){
>   return 0;
> }
> if( db->lookaside.bEnabled ){
>   if( n>db->lookaside.sz ){
> db->lookaside.anStat[1]++;
>   }else if( (pBuf = db->lookaside.pFree)==0 ){
> db->lookaside.anStat[2]++;
>   }else{
> db->lookaside.pFree = pBuf->pNext;
>
> should I recompile with the SQLITE_OMIT_LOOKASIDE enabled?
>
> Thanks,
>
> Dave.
>
>
>


Re: [sqlite] bus error on sparc machines

2012-09-09 Thread Pavel Ivanov
> However, bus errors are often the result of something in your own code, or of 
> not checking the result codes returned by all your previous SQLite3 calls.

Simon,

You are confusing bus errors with segmentation faults. They are
different things and bus errors inside SQLite can never be caused by
the code using SQLite.


Pavel


On Sun, Sep 9, 2012 at 9:34 AM, Simon Slavin  wrote:
>
> On 9 Sep 2012, at 12:49pm, Tal Tabakman  wrote:
>
>> "CREATE
>> INDEX IDX_OBJ on entries (obj_id, path);;"
>
> Just on the offchance, please try it with just one semi-colon, and again with 
> no semi-colons.  I don't think that's the problem, but it might show up 
> something different.
>
> However, bus errors are often the result of something in your own code, or of 
> not checking the result codes returned by all your previous SQLite3 calls.  
> Are you checking the integers returned by all the SQLite3 calls and making 
> sure they're SQLITE_OK ?
>
> Simon.


Re: [sqlite] [64bits server] The underlying provider failed on Open

2012-09-04 Thread Pavel Ivanov
Maybe you have 32-bit SQLite.Interop.dll and don't have 64-bit
SQLite.Interop.dll?

Pavel


On Tue, Sep 4, 2012 at 2:49 PM, Joe Mistachkin  wrote:
>
>>
>> After few tries, I've finally downloaded the 64 bits zip (
>> sqlite-netFx40-static-binary-bundle-x64-2010-1.0.81.0.zip) and deployed
>> the 2 SQLite dlls. I've come to this error:
>>
>>The underlying provider failed on Open.
>>at
>> System.Data.EntityClient.EntityConnection.OpenStoreConnectionIf(Boolean
>> openCondition, DbConnection storeConnectionToOpen, DbConnection
>>
>
> This exception appears to be coming from the Entity Framework.  Do you have
> the inner exception being thrown by System.Data.SQLite?  Are there other
> "SQLite.Interop.dll" files in your PATH, by chance?
>
> Also see:
>
> http://www.mail-archive.com/sqlite-users%40sqlite.org/msg70309.html
>
> --
> Joe Mistachkin
>


Re: [sqlite] [sqlite-announce] SQLite Version 3.7.14

2012-09-04 Thread Pavel Ivanov
There's a documentation bug on the Release Notes page:

"Enhancements to PRAGMA integrity_check and PRAGMA quick_check so that
they can optionally check just a single attached database >install< of
all attached databases."

Should be "instead" instead of "install".


Pavel


On Tue, Sep 4, 2012 at 4:41 AM, D. Richard Hipp  wrote:
> SQLite version 3.7.14 is now available on the SQLite website
>
> http://www.sqlite.org/
>
> Version 3.7.14 is a regularly scheduled maintenance release containing a few 
> performance enhancements and minor code refinements.  Support for OS/2 has 
> been dropped, but as we were unable to locate any active uses of SQLite on 
> OS/2, we do not think this will be a serious hardship on anyone.  Upgrading 
> from prior releases is optional.  Details regarding the changes in version 
> 3.7.14 can be found at
>
>http://www.sqlite.org/releaselog/3_7_14.html
>
> As always, please send email to sqlite-users@sqlite.org or directly to me if 
> you encounter any difficulties with this new release.  Thanks.
>
> D. Richard Hipp
> d...@hwaci.com
>
>
>
> ___
> sqlite-announce mailing list
> sqlite-annou...@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-announce


Re: [sqlite] some questions about sqlite

2012-09-01 Thread Pavel Ivanov
> thanks a lot,
> but about the second question,I write java code,and find if the connection 
> exceeds 17990,then will throws exception

This probably happened because your OS restricts the number of open file
descriptors or something like that.

Pavel


On Sat, Sep 1, 2012 at 7:52 PM,   wrote:
> thanks a lot,
> but about the second question,I write java code,and find if the connection 
> exceeds 17990,then will throws exception
> SQLite.Exception: unknown error in open
> at SQLite.Database._open4(Native Method)
> at SQLite.Database.open(Database.java:37)
> at SQLite.MyDay.main(MyDay.java:20)
> Exception in thread "main" java.lang.OutOfMemoryError: unable to get SQLite 
> handle
> at SQLite.Database._open4(Native Method)
> at SQLite.Database.open(Database.java:37)
> at SQLite.MyDay.main(MyDay.java:20)
>
> the source java code:
> public static void main(String[] args) {
> // TODO Auto-generated method stub
> int count=0;
>
> while(count<=17990){
> Database db = new Database();
>
> try {
> // open connection,but not close
> db.open("c:\\my1.db", 0666);
> count++;
> db.interrupt();
> db.busy_timeout(1000);
> db.busy_handler(null);
>
> } catch (Exception e) {
> e.printStackTrace();
> }
> }
>
> System.out.println(count);
> }
>
>
>> -Original E-mail-
>> From: "J Decker" 
>> Sent Time: 2012-9-2 10:37:04
>> To: "General Discussion of SQLite Database" 
>> Cc:
>> Subject: Re: [sqlite] some questions about sqlite
>>
>> On Sat, Sep 1, 2012 at 7:32 PM, Simon Slavin  wrote:
>> >
>> > On 2 Sep 2012, at 3:13am, shuif...@mail.ustc.edu.cn wrote:
>> >
>> >> 1、Do the sqlite support job like oracle or sqlserver,such as
>> >>variable job1 number;
>> >>begin
>> >>sys.dbms_job.submit(job => :job,
>> >>what => 'prc_g_test;',
>> >> next_date => to_date('22-10-2008 10:06:41', 
>> >> 'dd-mm-yyyy hh24:mi:ss'),
>> >> interval => 
>> >> 'sysdate+1/1440'); -- 1440 minutes per day, i.e. run the test procedure once a minute
>> >> commit;
>> >>end;
>> >
>> > No.
>> >
>> >> 2、how many user connections the sqlite can handler
>> >
>> > There is no limit in SQLite for connections.
>>
>> It is not advised to have multiple processes connect to same file. ..
>> but in-process probably no limit of connections.
>>
>> >
>> > Simon.


Re: [sqlite] Pragma Synchronous=OFF is not working

2012-08-28 Thread Pavel Ivanov
If all your inserts are in one transaction then pragma synchronous =
OFF won't affect your transaction speed too much. To understand
whether this pragma works or not you should measure how long it takes
to execute COMMIT (just this one statement). With synchronous = OFF
COMMIT will be executed much faster.
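A rough way to see the effect Pavel describes is to time only the COMMIT statement. A sketch using Python's stdlib bindings (absolute numbers depend entirely on the disk, so only the relative difference between the two modes is meaningful):

```python
import os
import sqlite3
import tempfile
import time

def timed_commit(sync_mode):
    """Insert 1000 rows in one transaction and time only the COMMIT."""
    path = os.path.join(tempfile.mkdtemp(), "t.db")
    conn = sqlite3.connect(path, isolation_level=None)  # manual transactions
    conn.execute("PRAGMA synchronous = %s" % sync_mode)
    conn.execute("CREATE TABLE t(x)")
    conn.execute("BEGIN")
    conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])
    start = time.perf_counter()
    conn.execute("COMMIT")            # the statement worth measuring
    elapsed = time.perf_counter() - start
    conn.close()
    return elapsed

t_off = timed_commit("OFF")    # no fsync when committing
t_full = timed_commit("FULL")  # fsync before COMMIT returns
```

On spinning disks the FULL commit is typically dominated by the fsync, which is exactly the cost `synchronous=OFF` removes.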

Pavel


On Mon, Aug 27, 2012 at 7:02 AM, tripathi.kritesh
 wrote:
> Hi ,
>
> I am executing all below mentioned pragma before start the (BEGIN --COMMIT)
> transaction in sqlite version (3.6.4)
>
> sqlite3_exec(mDb, "PRAGMA synchronous=OFF", NULL, NULL, &errMsg);
>  sqlite3_exec(mDb, "PRAGMA count_changes=OFF", NULL, NULL, &errMsg);
>  sqlite3_exec(mDb, "PRAGMA journal_mode=MEMORY", NULL, NULL, &errMsg);
>  sqlite3_exec(mDb, "PRAGMA temp_store=MEMORY", NULL, NULL, &errMsg);
>
>
> In transaction , I am inserting the values in the table but I dnt know the
> specific reason why does not pragma effecting  the insert speed .. I am
> getting the same speed even i use the pragma or not . please help
>
> Is these pragma effect take place in transaction ?
>
> Cheers
>  kritesh
>
>
>
>
> --
> View this message in context: 
> http://sqlite.1065341.n5.nabble.com/Pragma-Synchronous-OFF-is-not-working-tp63904.html
> Sent from the SQLite mailing list archive at Nabble.com.


Re: [sqlite] [BUG] sqlite3_exec "BEGIN; ROLLBACK" corrupts statement already running

2012-08-23 Thread Pavel Ivanov
This is a documented change. See http://www.sqlite.org/releaselog/3_7_11.html:

"Pending statements no longer block ROLLBACK. Instead, the pending
statement will return SQLITE_ABORT upon next access after the
ROLLBACK."

There was even some explanation of reasons for that somewhere on the list.
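The behavior change can be probed with a short script. This is a sketch using Python's stdlib bindings; exactly which exception surfaces depends on the wrapper and the linked SQLite version, so the script just records what happens at each step:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE foo(x)")
conn.executemany("INSERT INTO foo VALUES (?)", [(1,), (2,), (3,)])

cur = conn.execute("SELECT * FROM foo")   # prepared statement, now pending
first = cur.fetchone()

# On SQLite >= 3.7.11 the ROLLBACK goes through and the pending statement
# only notices on its next step; on older versions ROLLBACK itself fails
# with SQLITE_BUSY.
try:
    conn.executescript("BEGIN; ROLLBACK;")
    rollback_outcome = "ok"
except sqlite3.Error as e:
    rollback_outcome = "failed: %s" % e

try:
    rest = cur.fetchall()
    step_outcome = "survived with %d more rows" % len(rest)
except sqlite3.Error as e:
    step_outcome = "aborted: %s" % e
```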


Pavel


On Thu, Aug 23, 2012 at 9:03 PM, Joey Adams  wrote:
> Consider the following operations (full test program attached):
>
> stmt <- prepare conn "SELECT * FROM foo"
> Row <- step stmt
> exec conn "BEGIN; ROLLBACK"
> Row <- step stmt
>
> Namely, we prepare a statement with sqlite3_prepare_v2, call
> sqlite3_step (giving us SQLITE_ROW).  While the statement is busy, we
> jump in and do this:
>
> rc = sqlite3_exec(conn, "BEGIN; ROLLBACK", NULL, NULL, NULL);
>
> On SQLite 3.6.22, this sqlite3_exec call returns SQLITE_BUSY, and the
> subsequent sqlite3_step returns SQLITE_ROW.
>
> On SQLite 3.7.13, this sqlite3_exec call returns SQLITE_OK, but the
> subsequent sqlite3_step returns SQLITE_ABORT.
>
> The latter result looks bogus to me.
>
> #define SQLITE_ABORT4   /* Callback routine requested an abort */
>
> We're not doing anything with a callback routine.  And according to
> the documentation for ROLLBACK:
>
> The ROLLBACK will fail with an error code SQLITE_BUSY if there are
> any pending queries.
>
> Does this include queries started before the transaction begin?  Should it?
>
> When the sequence above is performed, I would expect one of two
> results (preferably the first):
>
>  * The exec "BEGIN; ROLLBACK" has no effect on the prepared statement
> that has already started.
>
>  * The ROLLBACK fails with SQLITE_BUSY (the 3.6.22 behavior).
>
> The 3.7.13 behavior definitely looks wrong.  Is this a bug, or is it
> undefined behavior to BEGIN or ROLLBACK while a prepared statement is
> running on the same connection?
>
> Thanks,
> -Joey
>


Re: [sqlite] SQLite - Database disk size difference when tables copied

2012-08-22 Thread Pavel Ivanov
> When I implement the same mechanism for saving the
> memory database back to disk, the size of disk file is 4x of the original
> disk file size.

What is "original disk file size" here? Is it an empty database,
database with some data, database with exactly the same data you are
restoring? If the latter then do you restore right over the existing
data? If yes how?
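Where the wrapper exposes SQLite's online backup API (Python's stdlib does, as `Connection.backup` since Python 3.7), saving an in-memory database to disk is a page-for-page copy, and unlike the ATTACH-and-copy loop in the quoted message it preserves indexes. A sketch with made-up table names:

```python
import os
import sqlite3
import tempfile

mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE t(a, b)")
mem.execute("CREATE INDEX idx_t ON t(a)")   # a table-copy loop would drop this
mem.executemany("INSERT INTO t VALUES (?, ?)", [(i, i * i) for i in range(100)])
mem.commit()

path = os.path.join(tempfile.mkdtemp(), "test_disk.db")
disk = sqlite3.connect(path)
mem.backup(disk)    # sqlite3_backup_*: page-for-page copy, schema included
disk.close()

check = sqlite3.connect(path)
n = check.execute("SELECT count(*) FROM t").fetchone()[0]
idx = check.execute("SELECT name FROM sqlite_master WHERE type='index'").fetchone()
```

Because the backup writes the same pages the source holds, the resulting file size tracks the source database rather than growing with re-inserted rows.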

Pavel


On Wed, Aug 22, 2012 at 10:00 PM, Madhur Kashyap
 wrote:
> Hello,
>
> I am using Tcl Extension (TEA) of SQLite inside an EDA tool which supports
> TCL User Shell. I have a strange problem that SQLite version 3.7.10 is
> downgraded to 3.5.9 when I load the necessary .so files. Due to this change
> the backup and restore commands are probably causing a core dump in the EDA
> tool execution.
>
> I have to implement save and restore capability for the memory database
> used in the application. Restore I have implemented using traditional
> method to first attach the disk database and then copy all tables listed in
> sqlite_master table. When I implement the same mechanism for saving the
> memory database back to disk, the size of disk file is 4x of the original
> disk file size.
>
> What is the reason for this size increase? What can I do to avoid this file
> size increase? A long pending question - Why the version is getting
> downgraded to 3.5.9 ? The Tcl version is 8.4.x within the EDA tool. Also I
> cannot find any SQLite libraries within the tool installation.
>
> dbcmd eval "ATTACH DATABASE 'test_disk.db' AS copyto"
> set tableNames [ dbcmd eval {SELECT name FROM sqlite_master WHERE type
> = 'table'} ]
> foreach tname $tableNames {
> dbcmd eval "CREATE TABLE copyto.$tname AS SELECT * FROM $tname"
> }
>
> --
> Regards
> Madhur Kashyap


Re: [sqlite] How to find table names referenced in the query, not in the result set

2012-08-20 Thread Pavel Ivanov
You can try to use authorizer
(http://www.sqlite.org/c3ref/set_authorizer.html) although it seems to
me SQLite won't inform you about access to table2 in the first query.

Pavel


On Mon, Aug 20, 2012 at 2:18 AM, Bishwa Shrestha
 wrote:
> Hi,
>
> I am creating an sqlite3 interface to an existing data model. For that
> purpose sqlite3 tables are dynamically loaded. I'm using
> /sqlite3_column_count/ and /sqlite3_column_table_name/ to load data into
> relevant tables only.
>
> Now, I am having problems with queries such as:
>
> SELECT a.column1 FROM table1 AS a, table2 AS b;
>
> however this works:
> SELECT a.column1, b.column1 FROM table1 AS a, table2 AS b;
>
> because /sqlite3_column_table_name /only returns columns in the result set.
>
> Is there a way to get the table names that were referenced in the query
> itself?
>
> Thanks in advance,
> bishwa


Re: [sqlite] Multi-Thread Reads to SQLite Database

2012-08-13 Thread Pavel Ivanov
> "There are four levels to the shared-cache locking model: transaction
>  level locking, table level locking, cache level locking, and schema
>  level locking. They are described in the following three sub-sections."

This would be a bad way of documenting this because "cache level
locking" works on a completely different level than transaction, table
or schema level locking. The exclusive mutex is taken inside one call
to sqlite3_step() and released upon exit. But all other "locks" have a
"meta-level" notion and work between calls to sqlite3_*() functions.


Pavel


On Mon, Aug 13, 2012 at 6:45 AM, Boris Kolpackov
 wrote:
> Hi Richard,
>
> Richard Hipp  writes:
>
>> In shared-cache mode, SQLite acquires a mutex on the shared cache, thus
>> serializing access.
>
> Oh, that's quite a surprise. I would have never said that something
> like this is going on just by reading the documentation page on shared
> cache:
>
> http://www.sqlite.org/sharedcache.html
>
> Specifically, quoting Section 2.2, "Table Level Locking":
>
> "When two or more connections use a shared-cache, locks are used to
>  serialize concurrent access attempts on a per-table basis. Tables
>  support two types of locks, "read-locks" and "write-locks". Locks
>  are granted to connections - at any one time, each database connection
>  has either a read-lock, write-lock or no lock on each database table.
>
>  At any one time, a single table may have any number of active read-locks
>  or a single active write lock. To read data [from] a table, a connection
>  must first obtain a read-lock. To write to a table, a connection must
>  obtain a write-lock on that table."
>
> From this I at the time concluded that there could be multiple connections
> reading from a table concurrently. But it appears I was wrong.
>
> Perhaps this page should be updated to reflect this. Specifically, the
> last paragraph in Section 2.0:
>
> "There are three levels to the shared-cache locking model, transaction
>  level locking, table level locking and schema level locking. They are
>  described in the following three sub-sections."
>
> Should probably read something along these lines:
>
> "There are four levels to the shared-cache locking model: transaction
>  level locking, table level locking, cache level locking, and schema
>  level locking. They are described in the following three sub-sections."
>
> Then a new section should be inserted between 2.2 and 2.3 that reads
> along these lines:
>
> "2.3 Cache Level Locking
>
>  When a connection needs to access (read or write) data from a table,
>  it must acquire an exclusive mutex on the shared cache. In particular,
>  this means that in the shared cache mode actual access to the database
>  pages, regardless of whether for reading or writing, is always serialized."
>
> If you find it suitable, feel free to use this to update the documentation.
>
> Boris
> --
> Boris Kolpackov, Code Synthesis        http://codesynthesis.com/~boris/blog
> Compiler-based ORM system for C++  http://codesynthesis.com/products/odb
> Open-source XML data binding for C++   http://codesynthesis.com/products/xsd
> XML data binding for embedded systems  http://codesynthesis.com/products/xsde
>


Re: [sqlite] Multi-Thread Reads to SQLite Database

2012-08-13 Thread Pavel Ivanov
Another idea for you:

> In our test, on the other hand, each thread queries its own set of
> rows from the table. So, based on your explanation, here each thread
> should end up with its own set of pages (more or less). However, even
> in this case, I still observe a near sequential performance.

Even though each thread works basically with its own set of pages they
all use the same cache and thus are subject to the combined limit on
total number of pages in the cache. And to keep that combined limit
working they need to maintain a combined LRU list of pages to know
which pages should be evicted from cache first in case that's needed.


Pavel


On Mon, Aug 13, 2012 at 5:51 AM, Boris Kolpackov
 wrote:
> Hi Richard,
>
> Richard Hipp  writes:
>
>> In shared-cache mode, the page cache is shared across threads. That means
>> that each thread must acquire a mutex on the page cache in order to read
>> it. Which means that access to the page cache is serialized.
>
> I just ran our concurrency test in different configurations and I
> observer a similar behavior. That is, in the shared-cache mode,
> read-only transactions on the same table are executed pretty much
> sequentially.
>
> Also, your explanation doesn't feel quite satisfactory to me. In his
> original email, Eric mentioned that his table contains just 50 rows.
> Surely all this data would be loaded into the cache the first time
> it is requested and then accessed concurrently by all the threads.
> The only way I can see how the sequential performance could be
> explained here is if the cache mutex did not distinguish between
> readers and writers (which would seem to be a fairly natural thing
> to do).
>
> In our test, on the other hand, each thread queries its own set of
> rows from the table. So, based on your explanation, here each thread
> should end up with its own set of pages (more or less). However, even
> in this case, I still observe a near sequential performance.
>
> Any idea what else might be going on here?
>
> Boris
> --
> Boris Kolpackov, Code Synthesis        http://codesynthesis.com/~boris/blog
> Compiler-based ORM system for C++  http://codesynthesis.com/products/odb
> Open-source XML data binding for C++   http://codesynthesis.com/products/xsd
> XML data binding for embedded systems  http://codesynthesis.com/products/xsde
>


Re: [sqlite] Restore SQLite DB from WAL

2012-07-23 Thread Pavel Ivanov
> In each case the same thing happens, the messages briefly display, then
> disappear

I think your sms-controlling app has synced the whole sms database
with some server. When you start your phone it shows you local data
but then it sees that server has latest data (maybe using modification
date on the database, maybe some synchronization token stored in the
database) and restores everything from there.

So maybe you need to look for some setting saying "allow to sync
everything with server" and "allow to sync everything back from
server".
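For readers puzzling over the -wal and -shm files discussed in this thread, the checkpoint lifecycle can be sketched in a few lines. This is illustrative only, using Python's stdlib bindings on a throwaway database, not the Android one in question:

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path, isolation_level=None)
mode = conn.execute("PRAGMA journal_mode=WAL").fetchone()[0]

conn.execute("CREATE TABLE sms(body)")
conn.execute("INSERT INTO sms VALUES ('hello')")
wal_exists = os.path.exists(path + "-wal")   # committed pages live here first

# A checkpoint copies committed pages from the -wal file back into the
# main database file; only after that is the -wal content redundant.
conn.execute("PRAGMA wal_checkpoint(FULL)")
conn.close()

rows = sqlite3.connect(path).execute("SELECT body FROM sms").fetchall()
```

This is why deleting a -wal file is only safe before any partial checkpoint has run: until checkpointed, the -wal file holds the newest committed data.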


Pavel


On Mon, Jul 23, 2012 at 3:51 PM,   wrote:
> So close, yet so far! This is really getting on my wick...
>
>>> Example scenario where vast swathes of content have vanished from
>>> mmssms.db thanks to an unintentional mass-DELETE FROM through the naughty
>>> Android SMS app:
>>>
>>> ls -al /data/data/com.android.providers.telephony/databases/mm*
>>> -rw-rw 1 root root 60416 Jul 17 20:16 mmssms.db
>>> -rw-rw 1 radio radio 32768 Jul 17 16:18 mmssms.db-shm
>>> -rw-rw 1 radio radio 628832 Jun 30 19:23 mmssms.db-wal
>> The WAL file does not contain the old data, it contains the new data. If
>> the WAL file has not been checkpointed, you should be able to simply delete
>> the WAL file and the old data will still be in the original database file.
>> A "checkpoint" operation is the act of moving the new database from the WAL
>> file back into the original database file, presumably overwriting the old
>> data (depending on your settings and other factors).
>> But, if the WAL file has been partially checkpointed, and that checkpoint
>> was interrupted by a crash or power failure, deleting the WAL file will
>> corrupt your database. So you should probably run the experiment on a
>> backup. :-)
>
> In my test, I started the phone in single-user/recovery mode, deleted
> mmssms.db-wal and restarted. The deleted messages showed very briefly,
> then vanished.
> I then restored mmssms.db to its original (just after deletion) state
> and deleted both mmssms.db-wal and mmssms.db-shm. Same result.
> Not to be deterred, I then restored the mmssms.db, deleted the
> mmssms.db-shm and created a zero-length file. In all cases the owner was
> set to radio:radio (in the initial example below mmssms.db was incorrectly
> set to root.)
> In each case the same thing happens, the messages briefly display, then
> disappear, mmssms.db, then mmssms.db-wal and mmssms.db-shm with the much
> sought-after deleted messages, so (if I'm not mistaken) at least some of
> the info is retained in mmssms.db itself, albeit with instructions to
> purge.
>
> Is there any way to merge the lost data stored in mmssms.db-wal /
> mmssms.db-shm back into mmssms.db? For example, if the DB/WAL stores a
> bunch of DELETE FROM statements could they be changed to INSERT INTO?
>
> Mandy


Re: [sqlite] C++ - Creating Table

2012-07-23 Thread Pavel Ivanov
As you don't have stmt defined anywhere, this is apparently not the
actual program you run. Seeing the full source code would be more
helpful. If the whole source is too big, try to reduce it to a small
program that reproduces the problem. It's possible that while attempting
to reduce the program source you'll find the culprit yourself.

My bet would be you have some multi-threading issues.
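A minimal reduction of the reported flow (open, then CREATE TABLE, then inspect sqlite_master), rendered here with Python's stdlib bindings as a neutral harness; if this works, the bug is in the surrounding C++ plumbing rather than in SQLite itself:

```python
import os
import sqlite3
import tempfile

# Same flow as the C++ code, with the wrapper raising on any non-OK
# return code instead of printing it.
path = os.path.join(tempfile.mkdtemp(), "001Database.sql")
conn = sqlite3.connect(path)
conn.execute(
    "CREATE TABLE name(n_id INTEGER PRIMARY KEY, title TEXT, "
    "fname TEXT, mname TEXT, lname TEXT)"
)
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'")]
conn.close()
```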


Pavel


On Mon, Jul 23, 2012 at 2:54 PM, Arbol One  wrote:
> Using SQLite version 3.7.8 amalgamation, under Win7 with MinGW, I compile
> the bellow program, but for some strange reason I am getting a runtime error
> when creating the table. I hope that one of you would be able to tell me
> what I am doing wrong.
>
>
>
> TIA
>
> ===
>
> class mySQLite3Class {
>
> private:
>
> //SQLite3
>
> sqlite3* db; //SQLite3
>
> Glib::ustring dbName; // Database name
>
> Glib::ustring apstr; // All Purpose String
>
>
>
> Glib::ustring sql_param_tblName; // Database table name parameters
>
> Glib::ustring stmtName;  // SQL statement name
>
> public:
>
> void createDB();
>
> void create_tblName();
>
> mySQLite3Class(const Glib::ustring& s){ createDatabase(s);}
>
> };
>
> void mySQLite3Class::createDatabase(const Glib::ustring& s) {
>
> rc = sqlite3_open_v2(s.c_str(),
>
>  &db, SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE,
>
>  NULL);
>
> if(rc != SQLITE_OK) {
>
> std::cout << rc << std::endl;
>
> }
>
> }
>
> void mySQLite3Class::createTable(const Glib::ustring& s){
>
>  rc = sqlite3_prepare_v2(db, s.c_str(), s.length(), &stmt, NULL );
>
> if(rc != SQLITE_OK) {
>
>std::cout << rc << std::endl;// error = 1
>
>std::cout << sqlite3_errmsg(db)
>
><< std::endl; // er-msg = library routine called out
> of sequence
>
> }
>
> rc = sqlite3_step(stmt);
>
> if(rc != SQLITE_DONE) {
>
> std::cout << rc << std::endl;
>
> }
>
> sqlite3_finalize(stmt);
>
> }
>
> myClass{
>
> private:
>
> mySQLite3Class* myDB;
>
> Glib::ustring sql_param_tblName;
>
> Glib::ustring dbName;
>
> public:
>
> myClass();
>
> }
>
> myClass::myClass(){
>
> dbName = "001Database.sql";
>
> sql_param_tblName = "CREATE TABLE name(n_id INTEGER PRIMARY KEY, title
> TEXT, fname TEXT, mname TEXT, lname TEXT)";
>
> myDB = new mySQLite3Class(dbName);
>
> myDB->createTable(sql_param_tblName); ==> // problem
>
> }
>


Re: [sqlite] Current Time in WHEN clause.

2012-07-22 Thread Pavel Ivanov
Keith,

No SQL format can guarantee anything. And changing SQL format won't change
its execution plan - the SQL optimizer will decide that for you. And I don't
know where you took your information from, but I can guarantee you that
with your query SQLite will execute strftime many times and the OP's problem
will persist.

Pavel
On Jul 22, 2012 2:24 PM, "Keith Medcalf"  wrote:

> select a.name, a.assignedTo, a.validFrom
>   from aliases a,
>(select name, max(validfrom) as mvf
>   from aliases,
>(select strftime('%s', 'now') as now) as c
>  where validFrom <= now
>   group by name) as b
>  where a.name=b.name and a.validfrom=b.mvf;
>
> should also work.  I changed the syntax slightly:
> from a join b join c on a.x=b.y and b.z=c.p is just a different spelling
> of from a, b, c where a.x=b.y and b.z=c.p
> designed to make it clear (COBOL style) which clauses are join columns
> (ON) and which are row selectors (WHERE).  It should have zero effect on
> the actually query plan.
>
> Effectively, creates a temp table with a single row containing a single
> value now, which is used as a selector against the rows of aliases.  This
> format should guarantee that the strftime function is only ever executed
> once.
>
> sqlite> explain query plan select a.name, a.assignedTo, a.validFrom
>...>   from aliases a,
>...>(select name, max(validfrom) as mvf
>...>   from aliases,
>...>(select strftime('%s', 'now') as now) as c
>...>  where validFrom <= now
>...>   group by name) as b
>...>  where a.name=b.name and a.validfrom=b.mvf;
> SELECT item[0] = {0:0}
>item[1] = {0:1}
>item[2] = {0:2}
> FROM {0,*} = aliases (AS a)
>  {1,*} = SELECT agg_flag
> item[0] = AGG{2:0}
> item[1] = AGG_FUNCTION:max(AGG{2:2})
>  FROM {2,*} = aliases
>   {3,*} = SELECT FUNCTION:strftime(item[0] = '%s'
>item[1] = 'now')
>   END (tabname=sqlite_subquery_53E850_) (AS c)
>  WHERE LE({2:2},{3:0})
>  GROUPBY {2:0}
>  END (tabname=sqlite_subquery_53E4D0_) (AS b)
> WHERE AND(EQ({0:0},{1:0}),EQ({0:2},{1:1}))
> END
> sele  order  from  deta
> ----  -----  ----  ----
> 1 0  1 SCAN SUBQUERY 2 AS c (~1 rows)
> 1 1  0 SCAN TABLE aliases (~33 rows)
> 1 0  0 USE TEMP B-TREE FOR GROUP BY
> 0 0  1 SCAN SUBQUERY 1 AS b (~100 rows)
> 0 1  0 SEARCH TABLE aliases AS a USING AUTOMATIC
> COVERING INDEX (name=? AND validFrom=?) (~7 rows)
> sqlite>
>
> ---
> ()  ascii ribbon campaign against html e-mail
> /\  www.asciiribbon.org
>
>
>
>


Re: [sqlite] Current Time in WHEN clause.

2012-07-22 Thread Pavel Ivanov
> I'm trying to convince myself that the subquery in latest_aliases only 
> executes once, and also that alias_info only runs latest_aliases once. 
> However, I'm not doing very well at convincing myself. Can someone confirm 
> this is true, or suggest a nice solution to get the current time function out 
> of the when clause.

AFAIK, that's not true, i.e. subquery will be executed several times.
You can execute "select strftime('%s', 'now')" first and then put its
result into your query. But that won't work with view, of course.
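Pavel's suggestion, evaluating the time once and then binding it, can be sketched like this (Python stdlib bindings; the data mirrors Kevin's example, so with the clock well past 2012 the latest alias wins):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE aliases(name TEXT NOT NULL, assignedTo TEXT, "
             "validFrom INTEGER NOT NULL)")
conn.executemany("INSERT INTO aliases VALUES (?, ?, ?)",
                 [("env1", "svc1", 1342967110),
                  ("env1", "svc2", 1342967185)])

# Evaluate "now" exactly once, outside SQL, then bind it so every row
# comparison in the query sees the same instant.
now = int(time.time())
row = conn.execute(
    "SELECT a.name, a.assignedTo FROM aliases AS a "
    "JOIN (SELECT name, max(validFrom) AS mvf FROM aliases "
    "      WHERE validFrom <= ? GROUP BY name) AS b "
    "ON a.name = b.name AND a.validFrom = b.mvf",
    (now,)).fetchone()
```

Binding the timestamp removes any chance of rows being evaluated against different "now" values within one query, which was the rare bug Kevin worried about.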

Pavel


On Sun, Jul 22, 2012 at 11:17 AM, Kevin Martin  wrote:
> Hi,
>
> I have the following schema:
>
>> CREATE TABLE aliases (name text not null, assignedTo text, validFrom integer 
>> not null);
>> CREATE TABLE services (name text primary key, url text not null, type text 
>> not null);
>> CREATE VIEW latest_aliases as select a.name, a.assignedTo, a.validFrom from 
>> aliases as a join (select name, max(validfrom) as mvf from aliases where 
>> validFrom <= strftime('%s', 'now') group by name) as b on a.name=b.name and 
>> a.validFrom=b.mvf;
>> CREATE VIEW alias_info as select a.name, s.url, s.type from latest_aliases 
>> as a join services as s on a.assignedTo = s.name;
>
> The aliases table maps names to services at different times. So for example 
> with the following data:
>
>> sqlite> select * from services;
>> svc1|http://somewhere|type1
>> svc2|http://somewhere.else|type1
>
>> sqlite> select *,datetime(validFrom, 'unixepoch') from aliases;
>> env1|svc1|1342967110|2012-07-22 14:25:10
>> env1|svc2|1342967185|2012-07-22 14:26:25
>
> I want env1 to be mapped to svc1 after 14:25:10, until 14:26:25 after which 
> point I want it to be mapped to svc2.
>
> This is done with the latest_aliases view, alias_info just joins 
> latest_aliases to the services table to get the connection info.
>
> However, I'm quite concerned about executing strftime('%s', 'now') inside the 
> subquery, specifically does it execute multiple times? I don't want this to 
> happen because it could cause a rare bug when the list of services returned 
> is partly pre and partly post an update.
>
> I'm trying to convince myself that the subquery in latest_aliases only 
> executes once, and also that alias_info only runs latest_aliases once. 
> However, I'm not doing very well at convincing myself. Can someone confirm 
> this is true, or suggest a nice solution to get the current time function out 
> of the when clause.
>
> Thanks,
> Kevin Martin.


Re: [sqlite] matchinfo example does not work as documented

2012-07-21 Thread Pavel Ivanov
On Sat, Jul 21, 2012 at 7:35 PM, AJ ONeal  wrote:
>>
>> Read the documentation carefully:
>> http://www.sqlite.org/fts3.html#matchinfo. Right the first paragraph:
>>
>> The matchinfo function returns a blob value. If it is used within a
>> query that does not use the full-text index (a "query by rowid" or
>> "linear scan"), then the blob is zero bytes in size. Otherwise, the
>> blob consists of zero or more 32-bit unsigned integers in machine
>> byte-order.
>>
>> What part of this paragraph makes you believe that if you print the
>> result of matchinfo as text you will see something meaningful?
>>
>
> The part where it shows output in the comments of the example that,
> according to common conventions used in documentation, would indicate it is
> the output of the function (which it is, just not the user-viewable output).

Where did you see that? Could you cite it? All I see is

-- ... If each block of 4 bytes in the blob is interpreted
-- as an unsigned integer in machine byte-order, the values will be:
--
-- 3 2  1 3 2  0 1 1  1 2 2  0 1 1  0 0 0  1 1 1

So it clearly says: you have to interpret it; it's not like you just
print it as a string.

> Plenty of languages (javascript, ruby, python, etc, etc, etc) pretty-print
> native objects when they are to be represented as text.
>
> What about that paragraph indicates that the sqlite3 cli doesn't know how
> to pretty-print understand its own native types?

sqlite3 cli understands its native type, which is BLOB. But how should
it pretty-print it? A BLOB can contain absolutely any information, and
it's not the shell's job to parse the SQL to try to understand what the
blob can actually contain.
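Interpreting the blob is a few lines in any host language. A Python sketch of the unpack step (the blob here is synthetic, standing in for real matchinfo() output):

```python
import struct

# matchinfo() hands back raw bytes; "interpreting" them means unpacking
# native-byte-order unsigned 32-bit integers, as the FTS docs describe.
blob = struct.pack("=3I", 3, 2, 1)   # stand-in for a matchinfo() blob
values = struct.unpack("=%dI" % (len(blob) // 4), blob)
```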


Pavel


Re: [sqlite] matchinfo example does not work as documented

2012-07-21 Thread Pavel Ivanov
On Sat, Jul 21, 2012 at 4:06 PM, AJ ONeal  wrote:
>>
>> > Now I see a number which matches my expectations:
>> > SELECT hex(matchinfo(t1)) FROM t1 WHERE t1 MATCH 'default transaction
>> > "these semantics"';
>> >
>> >
>> 030002000100030002000100010001000200020001000100010001000100
>> >
>> > Will you update the documentation with this information?
>> >
>>
>> What information do you propose to add to the documentation?
>
>
> That in order to see the data the hex() function must be used.
>
> I imagine that most users who are following this documentation for the
> first time are using the sqlite3 binary and following along by copying and
> pasting the examples.

Read the documentation carefully:
http://www.sqlite.org/fts3.html#matchinfo. Right the first paragraph:

The matchinfo function returns a blob value. If it is used within a
query that does not use the full-text index (a "query by rowid" or
"linear scan"), then the blob is zero bytes in size. Otherwise, the
blob consists of zero or more 32-bit unsigned integers in machine
byte-order.

What part of this paragraph makes you believe that if you print the
result of matchinfo as text you will see something meaningful?


Pavel


Re: [sqlite] How to compile and load the example fts4 rank function?

2012-07-21 Thread Pavel Ivanov
On Sat, Jul 21, 2012 at 3:36 AM, AJ ONeal  wrote:
> I naively tried
>
> wget
> https://raw.github.com/gist/3154964/d570955d45580c095c99de6eb0c378395d4b076d/sqlite3-fts4-rank.c
> gcc -c sqlite3-fts4-rank.c -o sqlite3-fts4-rank.o
>
> sqlite3
> .load sqlite3-fts4-rank.o
>
> But that didn't work.
>
> Can I get a link to the docs on this? I don't think I was using the right
> search terms to find it.

You cannot load an object file; you should load a shared library (*.so on
Linux). Build one with something like

    gcc -shared -fPIC sqlite3-fts4-rank.c -o sqlite3-fts4-rank.so

and then load it from the shell with ".load ./sqlite3-fts4-rank.so".

Pavel


Re: [sqlite] Getting occasional crashes with sqlite3_prepare_v2 and sqlite3_step in iOS app

2012-07-19 Thread Pavel Ivanov
On Thu, Jul 19, 2012 at 2:07 PM, Rick Maddy  wrote:
> Didn't mean to imply that failing to check the return value resulted in 
> memory corruption. I was wondering if it was possible that one of the many 
> calls to sqlite3_bind_* in my code may actually be causing some memory 
> corruption. I can envision some possible buffer overflows associated with 
> those calls.

SQLite is written in such a way that no call to sqlite3_bind_* can
cause memory corruption per se. It can return an error if you try to
bind a wrong parameter, and your app can corrupt memory if, alongside
the call to sqlite3_bind_*, it changes some internal structures. But calls
to sqlite3_* functions can cause corruption only if you try to work
with an already-closed connection or an already-finalized statement.


Pavel


Re: [sqlite] EXT :Re: Getting occasional crashes with sqlite3_prepare_v2 and sqlite3_step in iOS app

2012-07-19 Thread Pavel Ivanov
There's no easy method of detecting memory corruption from inside your
own application, especially in memory that your application doesn't
control. You can use memory debuggers for that purpose. If you were on
Linux I would recommend using valgrind, but I don't know whether there
are any similar tools on iOS.

Pavel

On Thu, Jul 19, 2012 at 1:17 PM, Rick Maddy  wrote:
> But that would flag valid state changes just as much as it would flag 
> corrupted memory.
>
> Rick
>
>
>
> On Jul 19, 2012, at 11:15 AM, Black, Michael (IS) wrote:
>
>> It could easiliy be expanded to look at the whole structure...
>>
>>
>>
>> Or just do an XOR checksum on the bytes in sizeof(sqlite3) and compare that.
>>
>>
>>
>> Michael D. Black
>


Re: [sqlite] Getting occasional crashes with sqlite3_prepare_v2 and sqlite3_step in iOS app

2012-07-19 Thread Pavel Ivanov
> For quite some time now I've been getting reports of crashes in my iOS app. 
> Specifically these are caused by crashes in sqlite3_prepare_v2 and 
> sqlite_step. The associated code works fine most of the time. So I'm looking 
> for thoughts on how to find and fix the problem since there seems to be no 
> pattern to when the rare crashes actually happen.

All the above means that your application corrupts memory somewhere
and it's not related to SQLite in any way. SQLite here is just an
unfortunate victim tripping over your memory corruption.

I don't know though if there are any tools on iOS for memory debugging.


Pavel


On Thu, Jul 19, 2012 at 12:09 PM, Rick Maddy  wrote:
> For quite some time now I've been getting reports of crashes in my iOS app. 
> Specifically these are caused by crashes in sqlite3_prepare_v2 and 
> sqlite_step. The associated code works fine most of the time. So I'm looking 
> for thoughts on how to find and fix the problem since there seems to be no 
> pattern to when the rare crashes actually happen.
>
> Here is some background information:
>
> All of the issues are from devices running iOS 5.1.1 which has sqlite version 
> 3.7.7. I open the database on app startup with the following call:
>
> if (sqlite3_open_v2([dbPath UTF8String], &dbRef,
> SQLITE_OPEN_READWRITE | SQLITE_OPEN_FULLMUTEX, NULL) == SQLITE_OK) {
> }
>
> I use the same 'dbRef' for every query performed in the app regardless of the 
> thread the query is used on.
>
> The output of 'sqlite3_threadsafe()' is 2 (SQLITE_CONFIG_SERIALIZED I 
> believe). So the database should be properly setup to work in a 
> multi-threaded environment.
>
> Every call to 'sqlite3_prepare_v2' that I use basically looks like this:
>
> sqlite3_stmt *query = nil;
> NSString *sql = @"some valid SQL";
> sqlite3_prepare_v2(dbRef, [sql UTF8String], -1, &query, nil);
>
>
> Here are a few examples of the stack traces from the crash reports:
>
> Sample 1:
>
> Exception Type:  SIGSEGV
> Exception Codes: SEGV_ACCERR at 0x1a
> Crashed Thread:  0
>
> Thread 0 Crashed:
> 0   libsqlite3.dylib0x34c783cc 0x34c43000 + 218060
> 1   libsqlite3.dylib0x34c5d3bd 0x34c43000 + 107453
> 2   libsqlite3.dylib0x34c4d5bb 0x34c43000 + 42427
> 3   libsqlite3.dylib0x34c4ab7b 0x34c43000 + 31611
> 4   libsqlite3.dylib0x34c4a367 0x34c43000 + 29543
> 5   libsqlite3.dylib0x34c49e95 0x34c43000 + 28309
> 6   libsqlite3.dylib0x34c49beb 0x34c43000 + 27627
> 7   libsqlite3.dylib0x34c80f97 sqlite3_prepare_v2 + 27
>
> ===
>
> Sample 2:
>
> Exception Type:  SIGSEGV
> Exception Codes: SEGV_ACCERR at 0x7
> Crashed Thread:  0
>
> Thread 0 Crashed:
> 0   libsqlite3.dylib0x34f38410 0x34f1e000 + 107536
> 1   libsqlite3.dylib0x34f2819f 0x34f1e000 + 41375
> 2   libsqlite3.dylib0x34f25b7b 0x34f1e000 + 31611
> 3   libsqlite3.dylib0x34f25367 0x34f1e000 + 29543
> 4   libsqlite3.dylib0x34f24e95 0x34f1e000 + 28309
> 5   libsqlite3.dylib0x34f24beb 0x34f1e000 + 27627
> 6   libsqlite3.dylib0x34f5bf97 sqlite3_prepare_v2 + 27
>
> ===
>
> Sample 3:
>
> Exception Type:  SIGSEGV
> Exception Codes: SEGV_ACCERR at 0x2
> Crashed Thread:  7
>
> Thread 7 Crashed:
> 0   libsqlite3.dylib0x34cf848c 0x34cc8000 + 197772
> 1   libsqlite3.dylib0x34cfd8f9 0x34cc8000 + 219385
> 2   libsqlite3.dylib0x34cf84ed 0x34cc8000 + 197869
> 3   libsqlite3.dylib0x34cf0bdd 0x34cc8000 + 166877
> 4   libsqlite3.dylib0x34cef6c9 sqlite3_step + 2105
>
> ===
>
> Thanks for any insights into these issues.
>
> Rick
>
>
>


Re: [sqlite] Selecting NULL

2012-07-11 Thread Pavel Ivanov
On Wed, Jul 11, 2012 at 2:00 PM, deltagam...@gmx.net
 wrote:
> If mydetails contains a value, and I delete this value with "SQLite Database
> Browser"
> it seems not to be a "real NULL", has someone an explanation for this
> behaviour ?

It depends on your meaning of the word "delete". But maybe when you clear
the value in SQLite Database Browser it writes an empty string?

Pavel


Re: [sqlite] Selecting NULL

2012-07-11 Thread Pavel Ivanov
> // here maybe NULL is returned ?
> mydetails = (char*)sqlite3_column_text(stmt, 0 );

Check sqlite3_column_type() before calling sqlite3_column_text(). If
it returns SQLITE_NULL then you have NULL.

Pavel


On Wed, Jul 11, 2012 at 1:40 PM, deltagam...@gmx.net
 wrote:
> Hello,
>
> how to process if the select-statement selects a column with NULL  ?
>
>
> ==
> char *mydetails;
> char *sql;
> sqlite3_stmt *stmt;
> sqlite3 *db;
> const char dbname[] = "mysqlite.db";
>
>
> sql = "Select mydetails from mytable";
>
> rc = sqlite3_prepare(db, sql, strlen(sql), &stmt, NULL);
>
> rc = sqlite3_step(stmt);
>
>
> while(rc == SQLITE_ROW) {
>
> // here maybe NULL is returned ?
> mydetails = (char*)sqlite3_column_text(stmt, 0 );
> rc = sqlite3_step(stmt);
> }
> ===
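A minimal sketch of the NULL check above (Python's sqlite3 module reports SQL NULL as None, the higher-level analogue of checking sqlite3_column_type() for SQLITE_NULL; note also that an empty string is not NULL, which matches the "not a real NULL" behavior in the earlier message):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE mytable (mydetails TEXT)")
# One real NULL, one ordinary value, and one empty string.
conn.executemany("INSERT INTO mytable VALUES (?)",
                 [(None,), ("hello",), ("",)])

rows = []
for (mydetails,) in conn.execute("SELECT mydetails FROM mytable ORDER BY rowid"):
    # SQL NULL arrives as None; guard before treating the value as text.
    rows.append("<null>" if mydetails is None else mydetails)
print(rows)
```

The empty string in the last row is distinct from NULL, which is what you see when a tool writes '' instead of actually clearing the value.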


Re: [sqlite] Inserting from another table...

2012-07-06 Thread Pavel Ivanov
> The query does not seem to do anything, the table is empty after I run the
> query.

This can only mean that there was some error executing the insert. It's
written in a way that just cannot leave an empty table for any reason
except an error. You have all columns NOT NULL, so maybe dblookup doesn't
have a value for some column and thus the whole insert fails.

> One question...  In the PP_VIEWER_SETTINGS, I have columns of different
> types, but in DBLookup all the values are varchar's.  Since sqlite doesn't
> actually deal with column types, I figured this would not be an issue.
> Might it be?  Should I be casting them to the correct type first?

You don't have to. Your types are declared in such a way that SQLite will
automatically try to convert all varchar values to integers and
doubles where necessary. If SQLite isn't able to convert, it will
insert the text value as provided. But if you put in explicit casts,
SQLite will insert 0 whenever a cast isn't successful.


Pavel
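A small sketch of both behaviors, using a hypothetical one-column table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (n SMALLINT NOT NULL)")  # SMALLINT -> INTEGER affinity

# Convertible text is converted on insert; unconvertible text is kept as text.
conn.execute("INSERT INTO t VALUES ('42')")
conn.execute("INSERT INTO t VALUES ('abc')")
stored = [row[0] for row in conn.execute("SELECT n FROM t ORDER BY rowid")]
print(stored)  # '42' comes back as an integer, 'abc' stays text

# An explicit CAST of an unconvertible string yields 0 instead.
casted = conn.execute("SELECT CAST('abc' AS INTEGER)").fetchone()[0]
print(casted)
```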


On Fri, Jul 6, 2012 at 2:45 PM, Sam Carleton <scarle...@miltonstreet.com> wrote:
> Pavel,
>
> The goal is to get them all into one row, correct.
>
> The query does not seem to do anything, the table is empty after I run the
> query.
>
> One question...  In the PP_VIEWER_SETTINGS, I have columns of different
> types, but in DBLookup all the values are varchar's.  Since sqlite doesn't
> actually deal with column types, I figured this would not be an issue.
> Might it be?  Should I be casting them to the correct type first?
>
> Sam
>
> On Thu, Jul 5, 2012 at 11:24 PM, Pavel Ivanov <paiva...@gmail.com> wrote:
>
>> The insert statement below should insert one row into table
>> PP_VIEWER_SETTINGS. Does it do that? Is it what you called "does not
>> work"? To insert several rows you need to write a huge join of
>> dblookup to itself, so your insert statement should look like this:
>>
>> insert into PP_VIEWER_SETTINGS (...)
>> select a.ItemValue, b.ItemValue, c.ItemValue, ...
>> from dblookup a, dblookup b, dblookup c, ...
>> where a.Category = "KvsSettings"
>> and a.ItemName = "Company"
>> and b.Category = "KvsSettings"
>> and b.ItemName = "DspNextPrevious"
>> and c.Category = "KvsSettings"
>> and c.ItemName = "EnableCarts"
>> ...
>> and a.? = b.?
>> and a.? = c.?
>> ...
>> ;
>>
>> Question marks here is the field which value should identify what row
>> particular ItemName should go to.
>>
>>
>> Pavel
>>
>>
>> On Thu, Jul 5, 2012 at 11:03 PM, Sam Carleton
>> <scarle...@miltonstreet.com> wrote:
>> > I am working on converting my system table from one form to another.  The
>> > old form was one row per value with a category/key/value (DBLookup) , the
>> > new form is a separate column for each value (PP_VIEWER_SETTINGS).  I am
>> > trying to create an insert statement to run when the new table is
>> created,
>> > but when I run it, it does not work nor do I get any errors in SQLite
>> > manager:
>> >
>> > /* The OLD one */
>> > CREATE TABLE DBLookup (
>> > Category VARCHAR(32) NOT NULL,
>> > ItemName VARCHAR(128) NOT NULL,
>> > ItemValue VARCHAR(3000) NOT NULL,
>> > PRIMARY KEY(Category, ItemName))
>> >
>> > /* The NEW one */
>> > CREATE TABLE PP_VIEWER_SETTINGS
>> > (
>> >   VIEWER_SETTINGS_ID  INTEGER PRIMARY KEY
>> > AUTOINCREMENT,
>> >   COMPANY_NAMEVARCHAR(   260) NOT NULL,
>> >   DSPNEXTPREVIOUSSMALLINT NOT NULL,
>> >   ENABLE_CARTS   SMALLINT NOT NULL,
>> >   ENABLE_DEBUGINFO   SMALLINT NOT NULL,
>> >   ENABLE_FAVORITES   SMALLINT NOT NULL,
>> >   ENABLE_RIGHTCLICK  SMALLINT NOT NULL,
>> >   ENABLE_SLIDESHOW   SMALLINT NOT NULL,
>> >   ENABLE_TIMEOUT SMALLINT NOT NULL,
>> >   EXIT_KVS   SMALLINT NOT NULL,
>> >   EXIT_PASSWORD   VARCHAR(20) NOT NULL,
>> >   IS_CART_FAVORITES  SMALLINT NOT NULL,
>> >   IS_LOGIN_REQUIRED  SMALLINT NOT NULL,
>> >   IMAGE_SIZE  INTEGER NOT NULL,
>> >   PHONE_NUM_FORMATVARCHAR(20) NOT NULL,
>> >   THEME_ID   

Re: [sqlite] Bug regarding HAVING aggregation and typecast comparisons

2012-07-05 Thread Pavel Ivanov
> sqlite> SELECT test_base.id, SUM(test_join.value) FROM test_base LEFT
> JOIN test_join ON test_join.base_id = test_base.id GROUP BY
> test_base.id HAVING SUM(test_join.value)='0';
> sqlite>
>
> The last statement generates no results.  There is no mention of a
> special case for HAVING so I would assume that the engine should also
> typecast the string into a corresponding numerical value.

No, your assumption is incorrect. SUM() is not a database column -
it's a function, so it doesn't have any affinity. The value '0' doesn't
have affinity either. Thus SQLite doesn't convert those values and
compares them as-is, and a numeric value is never equal to a string.


Pavel


On Thu, Jul 5, 2012 at 11:17 PM, Benjamin Feng  wrote:
> sqlite> CREATE TABLE test_base(id INTEGER NOT NULL PRIMARY KEY);
> sqlite> CREATE TABLE test_join(id INTEGER NOT NULL PRIMARY KEY,
> base_id INTEGER NOT NULL REFERENCES test_base(id), value NUMERIC NOT
> NULL);
> sqlite> INSERT INTO test_base VALUES(1);
> sqlite> INSERT INTO test_join VALUES(1, 1, 0);
>
> sqlite> SELECT test_base.id, test_join.value FROM test_base LEFT JOIN
> test_join ON test_join.base_id = test_base.id WHERE test_join.value=0;
> 1|0
> sqlite> SELECT test_base.id, test_join.value FROM test_base LEFT JOIN
> test_join ON test_join.base_id = test_base.id WHERE
> test_join.value='0';
> 1|0
> sqlite> SELECT test_base.id, SUM(test_join.value) FROM test_base LEFT
> JOIN test_join ON test_join.base_id = test_base.id GROUP BY
> test_base.id HAVING SUM(test_join.value)=0;
> 1|0
> sqlite> SELECT test_base.id, SUM(test_join.value) FROM test_base LEFT
> JOIN test_join ON test_join.base_id = test_base.id GROUP BY
> test_base.id HAVING SUM(test_join.value)='0';
> sqlite>
>
> The last statement generates no results.  There is no mention of a
> special case for HAVING so I would assume that the engine should also
> typecast the string into a corresponding numerical value.  This is
> failing on all numerical types of `value` (including INTEGER).


Re: [sqlite] Database sharing across processes

2012-07-05 Thread Pavel Ivanov
On Thu, Jul 5, 2012 at 10:54 AM, Jonathan Haws
 wrote:
> I am fairly new to database development and I am working on an embedded 
> system where we are utilizing SQLite to manage some files and other 
> information that is being shared between processes.  What I am doing is I 
> have the SQLite amalgamation source code that I am compiling into each binary 
> executable and each executable is opening the same database file on disk.
>
> My question is this: is this the appropriate way to go about this?

Yes.

> Is there a better way to accomplish this task?

This is good enough, considering the amount of information you gave.

> If I continue down this path, are there are particular settings that I need 
> to set?

No, you don't need any settings.

> I have read through a lot of the documentation and it seems like I may want 
> to put the database in shared-cache mode, however that also seems to only 
> apply to threads within a single process.  Is that correct?

Correct. No need to use shared-cache mode if you use single-threaded processes.

> I am also thinking that I may want to make use of the sqlite_unlock_notify() 
> call to ensure that if I try to write to the database and it fails to get a 
> lock, it will pend until it is available.  However, I thought that a query 
> would pend until it gets a lock anyway.  Is that not the case?

sqlite3_unlock_notify() works only with shared-cache mode within one
process; it doesn't work for inter-process locking. And by default a
query won't wait until a lock becomes available. If you use the function
sqlite3_busy_timeout() you can obtain behavior close to what you want.


Pavel
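A sketch of the busy-timeout setting as seen from a higher-level binding (Python here; in C the equivalent is sqlite3_busy_timeout(db, 5000). The file name is made up):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "shared.db")  # hypothetical shared file

# Each process sets a busy timeout on its own connection, so a query that
# hits a lock held by another process is retried for up to 5000 ms instead
# of failing immediately with SQLITE_BUSY.
writer = sqlite3.connect(path)
writer.execute("PRAGMA busy_timeout=5000")
writer.execute("CREATE TABLE files (name TEXT)")

reader = sqlite3.connect(path)
reader.execute("PRAGMA busy_timeout=5000")
timeout_ms = reader.execute("PRAGMA busy_timeout").fetchone()[0]
print(timeout_ms)
```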


Re: [sqlite] Inserting from another table...

2012-07-05 Thread Pavel Ivanov
The insert statement below should insert one row into table
PP_VIEWER_SETTINGS. Does it do that? Is it what you called "does not
work"? To insert several rows you need to write a huge join of
dblookup to itself, so your insert statement should look like this:

insert into PP_VIEWER_SETTINGS (...)
select a.ItemValue, b.ItemValue, c.ItemValue, ...
from dblookup a, dblookup b, dblookup c, ...
where a.Category = "KvsSettings"
and a.ItemName = "Company"
and b.Category = "KvsSettings"
and b.ItemName = "DspNextPrevious"
and c.Category = "KvsSettings"
and c.ItemName = "EnableCarts"
...
and a.? = b.?
and a.? = c.?
...
;

The question marks here stand for the field whose value identifies which
row a particular ItemName should go to.


Pavel
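A reduced sketch of that self-join with two settings (values are made up; with only one settings row per category no extra join key is needed, which is what the question marks would otherwise supply):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE dblookup (
    Category TEXT NOT NULL, ItemName TEXT NOT NULL, ItemValue TEXT NOT NULL,
    PRIMARY KEY (Category, ItemName))""")
conn.executemany("INSERT INTO dblookup VALUES ('KvsSettings', ?, ?)",
                 [("Company", "Acme"), ("EnableCarts", "1")])

# Each alias of dblookup pins one ItemName, turning rows into columns.
row = conn.execute("""
    SELECT a.ItemValue, b.ItemValue
    FROM dblookup a, dblookup b
    WHERE a.Category='KvsSettings' AND a.ItemName='Company'
      AND b.Category='KvsSettings' AND b.ItemName='EnableCarts'
""").fetchone()
print(row)
```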


On Thu, Jul 5, 2012 at 11:03 PM, Sam Carleton
 wrote:
> I am working on converting my system table from one form to another.  The
> old form was one row per value with a category/key/value (DBLookup) , the
> new form is a separate column for each value (PP_VIEWER_SETTINGS).  I am
> trying to create an insert statement to run when the new table is created,
> but when I run it, it does not work nor do I get any errors in SQLite
> manager:
>
> /* The OLD one */
> CREATE TABLE DBLookup (
> Category VARCHAR(32) NOT NULL,
> ItemName VARCHAR(128) NOT NULL,
> ItemValue VARCHAR(3000) NOT NULL,
> PRIMARY KEY(Category, ItemName))
>
> /* The NEW one */
> CREATE TABLE PP_VIEWER_SETTINGS
> (
>   VIEWER_SETTINGS_ID  INTEGER PRIMARY KEY
> AUTOINCREMENT,
>   COMPANY_NAMEVARCHAR(   260) NOT NULL,
>   DSPNEXTPREVIOUSSMALLINT NOT NULL,
>   ENABLE_CARTS   SMALLINT NOT NULL,
>   ENABLE_DEBUGINFO   SMALLINT NOT NULL,
>   ENABLE_FAVORITES   SMALLINT NOT NULL,
>   ENABLE_RIGHTCLICK  SMALLINT NOT NULL,
>   ENABLE_SLIDESHOW   SMALLINT NOT NULL,
>   ENABLE_TIMEOUT SMALLINT NOT NULL,
>   EXIT_KVS   SMALLINT NOT NULL,
>   EXIT_PASSWORD   VARCHAR(20) NOT NULL,
>   IS_CART_FAVORITES  SMALLINT NOT NULL,
>   IS_LOGIN_REQUIRED  SMALLINT NOT NULL,
>   IMAGE_SIZE  INTEGER NOT NULL,
>   PHONE_NUM_FORMATVARCHAR(20) NOT NULL,
>   THEME_IDINTEGER NOT NULL,
>   THUMBNAIL_SIZE SMALLINT NOT NULL,
>   TICKER_MSG  VARCHAR(   260) NOT NULL,
>   TO_AFTER   SMALLINT NOT NULL,
>   TO_STARTS  SMALLINT NOT NULL,
>   TO_TRANSITION_SECS SMALLINT NOT NULL,
>   SS_COUNT   SMALLINT NOT NULL,
>   SS_DEFAULT_IS_IN_SLIDESHOW SMALLINT NOT NULL,
>   SS_DISPLAY_SECONDS DOUBLE PRECISION NOT NULL,
>   SS_ZOOM_FACTOR DOUBLE PRECISION NOT NULL,
>   USERLAN VARCHAR(   260) NOT NULL
> );
>
> /* The insert script */
>
> insert into PP_VIEWER_SETTINGS
> ( COMPANY_NAME, DSPNEXTPREVIOUS, ENABLE_CARTS, ENABLE_DEBUGINFO,
> ENABLE_FAVORITES, ENABLE_RIGHTCLICK, ENABLE_SLIDESHOW,
>   ENABLE_TIMEOUT, EXIT_KVS, EXIT_PASSWORD, IS_CART_FAVORITES,
> IS_LOGIN_REQUIRED, IMAGE_SIZE, PHONE_NUM_FORMAT, THEME_ID,
>   THUMBNAIL_SIZE, TICKER_MSG, TO_AFTER, TO_STARTS, TO_TRANSITION_SECS,
> SS_COUNT, SS_DEFAULT_IS_IN_SLIDESHOW,
>   SS_DISPLAY_SECONDS, SS_ZOOM_FACTOR, USERLAN)
>   values (
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = "Company"),
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = "DspNextPrevious"),
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = "EnableCarts"),
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = "EnableDebugInfo"),
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = "EnableFavorites"),
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = "EnableRightClick"),
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = "EnableSlideShow"),
> 1,
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = "ExitKvs"),
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = "ExitPassword"),
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = "IsCartFavorites"),
> (select ItemValue from dblookup where Category = "KvsSettings" and
> ItemName = 

Re: [sqlite] Multiple connections to in-memory database

2012-07-05 Thread Pavel Ivanov
So this feature won't work for you. From my first message:

> But this possibility was
> introduced in SQLite 3.7.13. So your asp.net provider should be
> compiled with the latest version of SQLite, otherwise it won't work.


Pavel


On Thu, Jul 5, 2012 at 8:56 AM, T Ü <shocking_blue2...@yahoo.com> wrote:
> It returns 3.6.23.1
>
> ________
> From: Pavel Ivanov <paiva...@gmail.com>
> To: T Ü <shocking_blue2...@yahoo.com>
> Cc: General Discussion of SQLite Database <sqlite-users@sqlite.org>
> Sent: Thursday, July 5, 2012 3:40 PM
>
> Subject: Re: [sqlite] Multiple connections to in-memory database
>
> On Thu, Jul 5, 2012 at 8:37 AM, T Ü <shocking_blue2...@yahoo.com> wrote:
>> By trying I found out that SQLiteConnection("Data
>> Source=:memory:;cache=shared"); worked.
>> In a single aspx.page at cs code, first I open an in-memory database
>> connection
>>SQLiteConnection conn = new SQLiteConnection ( "Data
>> Source=:memory:;cache=shared" );
>>conn.Open();
>> than create table and insert some data
>> then without closing that connection open another connection in the
>> sameway,
>> but when I try to select the rows of the table that I created in the
>> previous table, I get no such table error.
>> What am I doing wrong???
>
> Please execute "SELECT sqlite_version()" in your cs code and tell us
> the result of it.
>
> Pavel
>
>


Re: [sqlite] Multiple connections to in-memory database

2012-07-05 Thread Pavel Ivanov
On Thu, Jul 5, 2012 at 8:37 AM, T Ü  wrote:
> By trying I found out that SQLiteConnection("Data
> Source=:memory:;cache=shared"); worked.
> In a single aspx.page at cs code, first I open an in-memory database
> connection
> SQLiteConnection conn = new SQLiteConnection ( "Data
> Source=:memory:;cache=shared" );
> conn.Open();
> than create table and insert some data
> then without closing that connection open another connection in the sameway,
> but when I try to select the rows of the table that I created in the
> previous table, I get no such table error.
> What am I doing wrong???

Please execute "SELECT sqlite_version()" in your cs code and tell us
the result of it.

Pavel


Re: [sqlite] Multiple connections to in-memory database

2012-07-05 Thread Pavel Ivanov
On Thu, Jul 5, 2012 at 7:46 AM, T Ü  wrote:
> I have an asp.net application.
> I open a sqlite in-memory connection with SQLiteConnection conn = new 
> SQLiteConnection ( "Data Source=:memory:" ); command.
> I read that by using cache=shared parameter, I can make that in-memory 
> database reachable from other connections.
>
> 1.What is the way of applying cache=shared parameter in this type of 
> connection? SQLiteConnection conn = new SQLiteConnection ( "Data 
> Source=:memory:?cache=shared" ); is not working?

I think you should write SQLiteConnection("Data
Source=file::memory:?cache=shared"). But this possibility was
introduced in SQLite 3.7.13. So your asp.net provider should be
compiled with the latest version of SQLite, otherwise it won't work.

> 2.What is the way of creating a new connection for accessing the previously 
> opened in-memory database?

You should create new connection the same way as previously opened
one, i.e. SQLiteConnection("Data Source=file::memory:?cache=shared").


Pavel
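A sketch of that usage pattern through a binding that accepts URI filenames (Python's sqlite3 here; it requires an underlying SQLite new enough for shared-cache in-memory URIs, per the version note above):

```python
import sqlite3

uri = "file::memory:?cache=shared"

conn1 = sqlite3.connect(uri, uri=True)
conn1.execute("CREATE TABLE t (x INTEGER)")
conn1.execute("INSERT INTO t VALUES (1)")
conn1.commit()  # release the write lock so other connections can read

# A second connection with the same URI attaches to the same in-memory
# database, as long as at least one connection to it stays open.
conn2 = sqlite3.connect(uri, uri=True)
seen = conn2.execute("SELECT x FROM t").fetchone()
print(seen)
```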


Re: [sqlite] Update trigger fires for all rows even if sqlite3_changes returns 0

2012-07-03 Thread Pavel Ivanov
On Tue, Jul 3, 2012 at 10:55 AM, Jay A. Kreibich  wrote:
> On Tue, Jul 03, 2012 at 02:43:29PM +0200, Paul van Helden scratched on the 
> wall:
>
>> >   The statement "UPDATE table SET column=NULL" updates every row in the
>> >   table.  The fact that some rows may already have a NULL in that
>> >   column is not important.
>> >
>> > Well, it is important to me, the word "change" means before != after :-)
>
>   You can argue about the naming of the _change() function all you
>   want.  It is a non-standard extension and the function operates as
>   documented.  If you want to call it poorly named, go ahead.  That
>   doesn't change what it does.
>
>   There is, however, little argument that the trigger is doing exactly
>   what one would expect.  You are applying an update operation to every
>   row, and the trigger is firing for every row.

BTW, I think you can add "WHEN NEW.column IS NOT OLD.column" to the
trigger, and it will fire only for rows where the column value has
really changed (beware: "IS NOT" with an arbitrary right-hand side works
only on SQLite 3.6.19 and above).

Pavel
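A sketch of the suggested WHEN clause on a hypothetical table: the UPDATE touches both rows, but only the row whose value actually changes fires the trigger.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, col);
    INSERT INTO t VALUES (1, NULL);
    INSERT INTO t VALUES (2, 'x');
    CREATE TABLE log (id INTEGER);
    CREATE TRIGGER t_upd AFTER UPDATE ON t
    WHEN NEW.col IS NOT OLD.col
    BEGIN
        INSERT INTO log VALUES (NEW.id);
    END;
""")

# Row 1 was already NULL (NULL IS NOT NULL is false), so the trigger
# fires only for row 2 ('x' IS NOT NULL is true).
conn.execute("UPDATE t SET col=NULL")
fired = conn.execute("SELECT id FROM log").fetchall()
print(fired)
```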


Re: [sqlite] Update trigger fires for all rows even if sqlite3_changes returns 0

2012-07-03 Thread Pavel Ivanov
On Tue, Jul 3, 2012 at 9:21 AM, Paul van Helden  wrote:
>> So rather than holding your breath for Oracle to change I'd recommend you
>> do it the portable way.
>>
> I'm not waiting for anything. My last question was simple: which is
> better? Since MySQL does it the "correct way" perhaps we can just think
> about this for sqlite4?

That's definitely not a correct way. It could be "intuitive" for those
who don't know SQL well, but for anybody else it's counter-intuitive
and I would be really disappointed if SQLite implemented that.

So the answer to your last question (as Michael already said): better
to write in SQL what you really want to do and not expect the SQL
engine to guess it for you. If you find writing a complex WHERE clause
too complicated then don't use the sqlite3_changes() function.


Pavel


Re: [sqlite] max size of a TEXT field

2012-07-02 Thread Pavel Ivanov
On Mon, Jul 2, 2012 at 4:02 PM, deltagam...@gmx.net  wrote:
> Hello,
>
> I couldnt find in the documentation what the max size of TEXT field/column
> is.

http://www.sqlite.org/limits.html#max_length

By default it's 10^9 bytes and cannot be more than 2^31 - 1.


Pavel


Re: [sqlite] Error indicated through SQLITE_CONFIG_LOG callback but sqlite3_step() does not fail

2012-06-29 Thread Pavel Ivanov
On Fri, Jun 29, 2012 at 4:55 PM, Jeff Archer  wrote:
>>Kevin Benson kevin.m.benson at gmail.com Fri Jun 29 16:32:55 EDT 2012
>>
>>The mention of SQLITE_SCHEMA error and sqlite3VdbeExec() sounds like maybe
>>you're fighting an expired statement???
>
> Not likely.  I do Prepare, Bind, Step, Finalize using a wrapper function.
> The database file was created only moments before.
>
> My biggest question is: Why after this error being indicate through the
> SQLITE_CONFIG_LOG callback would the sqlite3_step() succeeded?

Because SQLite successfully re-prepared the statement behind the
scenes and thus was able to finish the sqlite3_step() call
successfully.


Pavel


Re: [sqlite] Not sure how to interrupt this

2012-06-28 Thread Pavel Ivanov
I think SQLITE_SCHEMA can happen even if you run things like CREATE
TABLE or ALTER TABLE on the same connection. Also ATTACH DATABASE and
DETACH DATABASE should invalidate all statements prepared before that.

Pavel


On Thu, Jun 28, 2012 at 9:58 AM, Marc L. Allen
 wrote:
> Oh.. you're positing a second party.  Ok.. now I'm interested to see if there 
> was one.
>
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Igor Tandetnik
> Sent: Thursday, June 28, 2012 9:55 AM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Not sure how to interrupt this
>
> Marc L. Allen  wrote:
>> How could the schema have changed?
>
> Someone ran CREATE TABLE or VACUUM or similar on the database (possibly via a 
> different connection).
> --
> Igor Tandetnik
>


Re: [sqlite] Load SQLite from InputStream in Java as ReadOnly

2012-06-28 Thread Pavel Ivanov
On Thu, Jun 28, 2012 at 9:54 AM, Hector Guilarte  wrote:
> I don't think it would.
>
> Connection conn = DriverManager.getConnection("jdbc:sqlite:sample.db",
> config.toProperties());
>
> is telling to open a connection to the file "sample.db" which is located in
> the same folder as the application is executing, but is going for a *file*.
> However, I'm going to check what configuration properties can be passed to
> the method to see if one says something about a socket, an inputStream or
> something like that.

SQLite works only with files, nothing else. You could have some luck
feeding other types of data to SQLite if JDBC supports the notion of
SQLite's VFS (Virtual File System), or if you use some other language
that supports it (e.g. C/C++). But even then you will have a really hard
time trying to feed a socket or InputStream to it, because they are
stream-like and support only sequential reading, while SQLite needs
random access to the data. How, for example, would you execute the
operation "read 1024 bytes at offset 10240" on a stream? The only way
you can do that is to read everything from the stream and then give
SQLite what it needs. But then you may as well save everything you read
from the stream into a temporary file and open that temporary file in
SQLite.

Pavel
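A sketch of that read-everything-then-open approach (Python here; the BytesIO object stands in for the socket/InputStream, and the database contents are made up):

```python
import io
import os
import sqlite3
import tempfile

workdir = tempfile.mkdtemp()

# Build a source database whose bytes will "arrive" over a stream.
src = os.path.join(workdir, "src.db")
conn = sqlite3.connect(src)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.execute("INSERT INTO t VALUES (42)")
conn.commit()
conn.close()
with open(src, "rb") as f:
    stream = io.BytesIO(f.read())  # stands in for the incoming stream

# Drain the stream into a temporary file, then open that file read-only.
copy = os.path.join(workdir, "copy.db")
with open(copy, "wb") as f:
    f.write(stream.read())
ro = sqlite3.connect("file:%s?mode=ro" % copy, uri=True)
val = ro.execute("SELECT x FROM t").fetchone()
print(val)
```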


Re: [sqlite] I consider this a bug. Anyone else?

2012-06-26 Thread Pavel Ivanov
On Tue, Jun 26, 2012 at 8:48 PM, Stephen Chrzanowski
 wrote:
> All;
>
> When you compile the amalgamation with your source code, doesn't the
> compiler reject/not include/remove/doesn't consider the code that could
> have been generated from the actual final build?  In other words, if you
> just have one function being used out of 10,000 functions, the final binary
> would only include the code for that single function and not the 10,000?

Compilers definitely can't do that, because they generate obj-files, and
functions that are not used in one obj-file may be used in another one.
Linkers probably could do it (although there are lots of limitations
there), but AFAIK they only optimize out whole obj-files, not individual
functions. That is, unless you turn on Whole Program Optimization, which
as you understand is not turned on by default because it usually
requires lots of computational resources.

Pavel


> OP;
>
> The big question is how much time you want to invest to meet your standards.
>
> Seeing that this is a Windows application, not something for a handheld
> with limited resources, if the above is true, and you "pride" yourself on
> making your binaries as small as possible, personally, I'd think the
> 40-80-160 hours investment time to figure out how to get rid of the pure
> "bloat" the DLL gives a plausible exercise into investigating this, even if
> its on the side.
>
> On top of that, I'm the kind of developer who, if I have to require
> assistance from an external source, I'll leave it external from my main
> binary, that way if something does change, like FF in your case, then my
> application just needs to update a single file, not the entire application.
>
> As much as I can appreciate that your program is under 400k (Which I remind
> you is still smaller than the capacity of a 5 1/4 inch floppy disk, single
> sided), no user on the planet is going to be concerned that your
> application "blew up" to 900kb (Which now fits on a double sided 5 1/4 inch
> floppy).  I don't like bloat as much as the next guy, but when you're
> talking applications the size of a meg, on what seems to be a windows
> platform, NO ONE is going to notice a performance hit between a 400kb
> application and a 900kb application. Not to mention bloatware, to me, is
> more about how fast the system responds.  If it takes more than 15 seconds
> to show your initial UI, or you see elements of your UI just "showing up"
> randomly, that would be bloat, even if its a 1 meg file.  If your
> application is in line with a "hello world" app, is gigabytes in size, but
> everything seems to respond nice and quickly, excellent application, not
> bloat, well worth the download/install time, as an end-user view.
>
> Bottom line, how important is [staying close to 400kb and removing external
> dependencies] vs [having two physical files, in which, one can be updated
> on the fly (Pending no application use)]?  In your testing, if the new DLL
> breaks, simply fix, and push out a new version.  Otherwise, just have your
> app download the DLL from you or the machine hosting SQLite.
>
> The other thing I just thought of, the fact that this library is available
> AT ALL for better-than-free makes me jump for joy and hit my head on the
> ceiling.  Its a little black box that "works".  That 500kb DLL is something
> I didn't have to write, something that I can review and update if I want.
> Its something I didn't have to invent, can easily plug into new
> applications I write, small footprint, etc, etc.  Personally, could be
> 10meg in size, and I'd STILL use it.
>
>
> On Tue, Jun 26, 2012 at 5:39 PM, E. Timothy Uy  wrote:
>
>> " 40-80 hours digging deep into how System.Data.SQLite.dll is built "
>>
>> Lol, I actually did this.
>>
>> On Tue, Jun 26, 2012 at 11:00 AM, Andrew Rondeau
>> wrote:
>>
>> > The answer of "just add sqlite.c to your project" is great when you're
>> > making something in C. The entire world does not use C, nor is C (or
>> > C++) always the best option for a particular project.
>> >
>> > Timothy Uy's offer actually makes a lot of sense if you're using
>> > something other then C or C++.
>> >
>> > For example, I just can't ship a .Net .exe that uses x-copy
>> > deployment, and runs on Linux/Mac (via Mono) unless I invest about
>> > 40-80 hours digging deep into how System.Data.SQLite.dll is built.
>> >
>> > On Tue, Jun 26, 2012 at 9:10 AM, Simon Slavin 
>> > wrote:
>> > >
>> > > On 26 Jun 2012, at 4:55pm, bi...@aol.com wrote:
>> > >
>> > >> Thank you everyone who took the time to comment on my Windows DLL
>> >  question.
>> > >> I'm also glad I'm not the only one who sees the problem with not
>>  having
>> > >> the version in the resource block. This really would have helped when
>> >  Chrome
>> > >> and Firefox updated to a new version of SQLite and all my code stopped
>> > >> working.
>> > >
>> > > This is the reason you will 

Re: [sqlite] Selecting from a GROUP Select

2012-06-26 Thread Pavel Ivanov
On Tue, Jun 26, 2012 at 3:02 PM, Igor Tandetnik  wrote:
> On 6/26/2012 1:19 PM, Peter Haworth wrote:
>>
>> I still have the issue that, in order to be selected,  the rows in the
>> groups containing two entries must have a different value in a specific
>> column.
>
>
> I'm not quite sure I understand the condition, but perhaps you are looking
> for something like this:
>
>
> SELECT * from TableA WHERE z in (
>  SELECT z FROM TableA GROUP BY z
>  HAVING count(*)=2 and min(otherColumn) != max(otherColumn)
> );

Maybe this can be simplified?


SELECT * from TableA WHERE z in (
 SELECT z FROM TableA GROUP BY z
 HAVING count(distinct otherColumn)=2
);


Pavel


Re: [sqlite] Selecting from a GROUP Select

2012-06-26 Thread Pavel Ivanov
I'm sorry, I didn't understand what you are trying to say. Please elaborate.

> I still have the issue that, in order to be selected,  the rows in the
> groups containing two entries must have a different value in a specific
> column.

What column are you talking about here? And in what query?


Pavel


On Tue, Jun 26, 2012 at 1:19 PM, Peter Haworth <p...@lcsql.com> wrote:
> Thank you Pavel, that works great.
>
> I still have the issue that, in order to be selected,  the rows in the
> groups containing two entries must have a different value in a specific
> column.
>
>
> The rows returned by the SELECT can be uniquely identified so I'm guessing
> this will involve linking TableA to itself and checking the necessary
> columns for different values.  Does that sound a reasonable way to approach
> this?
>
> Pete
> lcSQL Software <http://www.lcsql.com>
>
>
>
> On Sat, Jun 23, 2012 at 9:00 AM, <sqlite-users-requ...@sqlite.org> wrote:
>
>> Message: 5
>> Date: Fri, 22 Jun 2012 19:17:39 -0400
>> From: Pavel Ivanov <paiva...@gmail.com>
>> To: General Discussion of SQLite Database <sqlite-users@sqlite.org>
>> Subject: Re: [sqlite] Selecting from a GROUP Select
>> Message-ID:
>>        <cag1a4rtd4itkhmp6srmn81lof35ckndb464rh1wjuax-8si...@mail.gmail.com
>> >
>> Content-Type: text/plain; charset=ISO-8859-1
>>
>> I guess you need something like
>>
>> SELECT * from TableA
>> WHERE z in (
>> SELECT z FROM TableA GROUP BY z HAVING count(*)=2
>> );
>>
>>
>> Pavel
>>
>>
>> On Fri, Jun 22, 2012 at 6:43 PM, Peter Haworth <p...@lcsql.com> wrote:
>> > I have a situation where I need to select entries based on the count of
>> the
>> > number of entries in a GROUP = 2, and also that the value of a specific
>> > column must be different for each row in the group. I then need to select
>> > all the individual rows from the qualifying groups.
>>


Re: [sqlite] access from 2 different programms to same sqlite3-db

2012-06-26 Thread Pavel Ivanov
> Is it possible to check even if a db file exists and if necessary create it
> from within c++ ?
> So I would open the db file  sqlite3_open("ah.db", &db);   only if it really
> exists ..

Why do you need to create the file from C++? If the file doesn't exist,
SQLite will automatically create it for you in sqlite3_open().

Pavel


On Tue, Jun 26, 2012 at 12:27 PM, deltagam...@gmx.net
<deltagam...@gmx.net> wrote:
> Am 26.06.2012 18:00, schrieb Pavel Ivanov:
>
>> On Tue, Jun 26, 2012 at 11:50 AM, deltagam...@gmx.net
>> <deltagam...@gmx.net> wrote:
>>>
>>> Am 26.06.2012 17:08, schrieb Pavel Ivanov:
>>>
>>>> You are leaking stmt statement (re-preparing without finalizing) and
>>>> your call to sqlite3_close returns SQLITE_ERROR because of that, but
>>>> you don't even check that so you are leaking database connections as
>>>> well.
>>>>
>>>> Pavel
>>>>
>>>>
>>>> On Tue, Jun 26, 2012 at 11:01 AM, deltagam...@gmx.net
>>>> <deltagam...@gmx.net> wrote:
>>>>>
>>>>> Am 26.06.2012 16:49, schrieb Richard Hipp:
>>>>>
>>>>>> On Tue, Jun 26, 2012 at 10:46 AM, deltagam...@gmx.net
>>>>>> <deltagam...@gmx.net>wrote:
>>>>>>
>>>>>>> I have a c++ GUI application from where the db is read and the
>>>>>>> content
>>>>>>> is
>>>>>>> displayed in a Clistbox.
>>>>>>> Then I try to delete some rows from the sqlite3-db from the console.
>>>>>>> After rereading from within the GUI the deleted rows are still there.
>>>>>>>
>>>>>>> How is this possible ?
>>>>>>>
>>>>>> The GUI is holding a read transaction open.  Hence it sees a
>>>>>> consistent
>>>>>> snapshot of the database from the moment in time when the transaction
>>>>>> was
>>>>>> started.  Subsequent writes to the database are ignored by the GUI
>>>>>> until
>>>>>> it
>>>>>> closes its current transaction and starts a new one.
>>>>>>
>>>>>>
>>>>>>
>>>
> First, I would like to thank all for the great support, a special thanks to
> Pavel Ivanov and Richard Hipp
>
> I think with
> ==
>
>    char create_sql[] = "CREATE TABLE if not exists eventlog ("
>        "id INTEGER PRIMARY KEY,"
>        "eventdate DATETIME default current_timestamp,"
>        "eventtype TEXT,"
>        "counter INTEGER"
>        ")";
> rc = sqlite3_exec(db, create_sql, NULL, NULL, NULL);
> =
> it is a convenient way to check if a table exists.
>
> Is it possible to check even if a db file exists and if necessary create it
> from within c++ ?
> So I would open the db file  sqlite3_open("ah.db", &db);   only if it really
> exists ..
>
>
>


Re: [sqlite] access from 2 different programms to same sqlite3-db

2012-06-26 Thread Pavel Ivanov
On Tue, Jun 26, 2012 at 11:50 AM, deltagam...@gmx.net
<deltagam...@gmx.net> wrote:
> Am 26.06.2012 17:08, schrieb Pavel Ivanov:
>
>> You are leaking stmt statement (re-preparing without finalizing) and
>> your call to sqlite3_close returns SQLITE_ERROR because of that, but
>> you don't even check that so you are leaking database connections as
>> well.
>>
>> Pavel
>>
>>
>> On Tue, Jun 26, 2012 at 11:01 AM, deltagam...@gmx.net
>> <deltagam...@gmx.net> wrote:
>>>
>>> Am 26.06.2012 16:49, schrieb Richard Hipp:
>>>
>>>> On Tue, Jun 26, 2012 at 10:46 AM, deltagam...@gmx.net
>>>> <deltagam...@gmx.net>wrote:
>>>>
>>>>> I have a c++ GUI application from where the db is read and the content
>>>>> is
>>>>> displayed in a Clistbox.
>>>>> Then I try to delete some rows from the sqlite3-db from the console.
>>>>> After rereading from within the GUI the deleted rows are still there.
>>>>>
>>>>> How is this possible ?
>>>>>
>>>> The GUI is holding a read transaction open.  Hence it sees a consistent
>>>> snapshot of the database from the moment in time when the transaction
>>>> was
>>>> started.  Subsequent writes to the database are ignored by the GUI until
>>>> it
>>>> closes its current transaction and starts a new one.
>>>>
>>>>
>>>>
>>>
> 
> void InitialReadEventsData()
>
> {
>
>    // Remove all events from array
>    m_arrEvents.RemoveAll();
>
>    // write  events
>    Event newEvent;
>
> // sqlite3 reading ///
>
>        int rc, id, total_events;
>        char *sql, *sqltotal;
>        char *evdate, *evtype;
>        int evctr;
>
>        int the_event_ctr = 0;
>
>        CString datetime;
>        CString datepart;
>        CString timepart;
>
>
>        sqlite3 *db;
>        sqlite3_stmt *stmt;
>
>        sqlite3_open("ah.db", &db);
> /*
>
>    // check if table eventlog exists
>    char create_sql[] = "CREATE TABLE if not exists eventlog ("
>        "id INTEGER PRIMARY KEY,"
>        "eventdate DATETIME default current_timestamp,"
>        "eventtype TEXT,"
>        "counter INTEGER"
>        ")";
>
>    rc = sqlite3_exec(db, create_sql, NULL, NULL, NULL);
>
>
> */
>
>
>    // select count(*) from eventlog
>    sqltotal = "Select count(*) from eventlog";
>    rc = sqlite3_prepare(db, sqltotal, strlen(sqltotal), &stmt, NULL);
>    rc = sqlite3_step(stmt);
>    total_events = sqlite3_column_int(stmt, 0 );
>
>    std::cout << total_events << std::endl;
>
>
>
>    // select * from eventlog
>    sql = "Select id, eventdate, eventtype, counter FROM eventlog";
>    sqlite3_prepare(db, sql, strlen(sql), &stmt, NULL);
>
>    rc = sqlite3_step(stmt);
>
> while(rc == SQLITE_ROW) {
>    id = sqlite3_column_int(stmt, 0 );
>    //cid = sqlite3_column_int(stmt, 1 );
>    evdate = (char*)sqlite3_column_text(stmt, 1 );
>    evtype = (char*)sqlite3_column_text(stmt, 2 );
>    evctr = sqlite3_column_int(stmt, 3 );
>
>    datetime = evdate;
>
>    datepart = datetime.Mid(0,10);
>    timepart = datetime.Mid(11,5);
>
>    std::cout << datepart << "\t" << timepart << std::endl;
>
>    newEvent.m_nEvent = the_event_ctr;
>    newEvent.m_strLastEventDate = datepart ;
>    newEvent.m_strEventTime = timepart;
>    newEvent.m_strEventType = evtype;
>    newEvent.m_nCount = evctr;
>    WriteEvent(newEvent, the_event_ctr);
>
>
>    rc = sqlite3_step(stmt);
>
>    // increment eventcounter
>    the_event_ctr++;
>
> } // while
>
> rc = sqlite3_reset(stmt);
>
> rc = sqlite3_finalize(stmt);
> rc = sqlite3_close(db);
>
> // sqlite3 reading ///
>
> }
>
> =
>
> What am I missing now ? There is a rc = sqlite3_reset(stmt);  but the rc =
> sqlite3_close(db);  still returns error_code 5
> The sqlite3_exec is now comment. Do I have to "reset " and finalize this
> part normally too ? How is this done ?


When you prepare the "select * from eventlog" statement you do not
re-use the sqlite3_stmt object; you create a new one, losing the
pointer to the old statement. So you have to call sqlite3_finalize(stmt)
before calling sqlite3_prepare() at this point. And you don't need to
call sqlite3_reset() if you call sqlite3_finalize() right afterwards;
just use sqlite3_finalize().


Pavel


Re: [sqlite] access from 2 different programms to same sqlite3-db

2012-06-26 Thread Pavel Ivanov
You are leaking the stmt statement (re-preparing without finalizing),
and your call to sqlite3_close returns SQLITE_ERROR because of that.
But you don't even check that return value, so you are leaking database
connections as well.

Pavel


On Tue, Jun 26, 2012 at 11:01 AM, deltagam...@gmx.net
 wrote:
> Am 26.06.2012 16:49, schrieb Richard Hipp:
>
>> On Tue, Jun 26, 2012 at 10:46 AM, deltagam...@gmx.net
>> wrote:
>>
>>> I have a c++ GUI application from where the db is read and the content is
>>> displayed in a Clistbox.
>>> Then I try to delete some rows from the sqlite3-db from the console.
>>> After rereading from within the GUI the deleted rows are still there.
>>>
>>> How is this possible ?
>>>
>> The GUI is holding a read transaction open.  Hence it sees a consistent
>> snapshot of the database from the moment in time when the transaction was
>> started.  Subsequent writes to the database are ignored by the GUI until
>> it
>> closes its current transaction and starts a new one.
>>
>>
>>
>
> How can I close the transaction , and later open a new one ?
> BTW, transaction is still open although i use a sqlite3_close(db) ?
>
> Here is the code for reading from the db. By changing within the GUI from
> tab viewevents to another tab and back again to tab viewevents, the db is
> read again and should display all changes to the db which appeared during
> that time.
>
> ==
>
> int ReadViewEventsFormDBData()
> {
>    int nRetCode = ERROR_SUCCESS;
>
>    // Remove all events from array
>    m_arrEvents.RemoveAll();
>
>    // write  events
>    Event newEvent;
>
>    int rc, id, total_events;
>    char *sql, *sqltotal;
>    char *evdate, *evtype;
>    int evctr;
>
>    int the_event_ctr = 0;
>
>    CString datetime;
>    CString datepart;
>    CString timepart;
>
>    sqlite3 *db;
>    sqlite3_stmt *stmt;
>
>    sqlite3_open("ah.db", &db);
>
>    // check if table eventlog exists
>    char create_sql[] = "CREATE TABLE if not exists eventlog ("
>        "id INTEGER PRIMARY KEY,"
>        "eventdate DATETIME default current_timestamp,"
>        "eventtype TEXT,"
>        "counter INTEGER"
>        ")";
>
>    rc = sqlite3_exec(db, create_sql, NULL, NULL, NULL);
>
>    // select count(*) from eventlog
>    sqltotal = "Select count(*) from eventlog";
>    rc = sqlite3_prepare(db, sqltotal, strlen(sqltotal), &stmt, NULL);
>    rc = sqlite3_step(stmt);
>    total_events = sqlite3_column_int(stmt, 0 );
>
>    // select * from eventlog
>    sql = "Select id, eventdate, eventtype, counter FROM eventlog";
>    sqlite3_prepare(db, sql, strlen(sql), &stmt, NULL);
>
>
>    rc = sqlite3_step(stmt);
>
>    while(rc == SQLITE_ROW) {
>        id = sqlite3_column_int(stmt, 0 );
>        //cid = sqlite3_column_int(stmt, 1 );
>        evdate = (char*)sqlite3_column_text(stmt, 1 );
>        evtype = (char*)sqlite3_column_text(stmt, 2 );
>        evctr = sqlite3_column_int(stmt, 3 );
>
>        datetime = evdate;
>
>        datepart = datetime.Mid(0,10);
>        timepart = datetime.Mid(11,5);
>
>        std::cout << datepart << "\t" << timepart << std::endl;
>
>        newEvent.m_nEvent = the_event_ctr;
>        newEvent.m_strLastEventDate = datepart ;
>        newEvent.m_strEventTime = timepart;
>        newEvent.m_strEventType = evtype;
>        newEvent.m_nCount = evctr;
>
>        // add the new element to array
>        m_arrEvents.Add(newEvent);
>
>        rc = sqlite3_step(stmt);
>
>        // increment eventcounter
>        the_event_ctr++;
>
>    } // while
>
>    sqlite3_finalize(stmt);
>    sqlite3_close(db);
>    nRetCode = rc;
>
>    return nRetCode;
>
> } // ReadViewEventsFormDBData
>
> =
>
>


Re: [sqlite] How to know the memory usage of an in-memory database.

2012-06-25 Thread Pavel Ivanov
On Mon, Jun 25, 2012 at 10:05 PM, Bo Peng  wrote:
>> Are these multiple tables in a single database (file), or multiple databases 
>> (files)?  Multiple connections or a single connection?
>
> Right now there are multiple read-only processes to read the same
> file. If I go with any RAM-based solution, I will have to use a single
> process to read database and feed the data to calculation processes.
>
>> In any case, you set the cache size in pages by executing "PRAGMA 
>> cache_size=" after opening the database connection.
>>
>> You can test it without modifying your code by executing "PRAGMA 
>> default_cache_size=;" against the database file using the shell.
>>
>> The default sqlite page cache per database connection is 2000 pages.  So the 
>> maximum memory used by sqlite for the page cache is  page_size * cache_size. 
>>  You can query these with "pragma page_size" and query/set the cache size 
>> with "pragma cache_size".
>
> This is a great idea because ideally I can load all database to cache
> if there are enough RAM. However, I am wondering if extra-large cache
> might actually hurt the performance if cached pages have to be
> consecutive. I mean, if there are 100,000 pages and I specify a cache
> to hold 40,000 pages. When I get the first and last table, will sqlite
> load the first 40,000 pages, read the first table, read the last
> 40,000 pages, and load the last table? I guess (and hope) sqlite is
> cleverer than that...

Yes, SQLite is smarter than that. Even for a single table it loads into
the cache only those pages that are needed to execute the query, nothing
more.


Pavel


Re: [sqlite] Problem with including sqlite3.c into c++ project

2012-06-25 Thread Pavel Ivanov
On Mon, Jun 25, 2012 at 4:15 PM, deltagam...@gmx.net
 wrote:
> Hello,
>
> Im using MSVS 2010 for an c++ GUI project.
> After including sqlite3.h and sqlite3.c from the amalgamation-3071200
> and with the Project Properties--> C/C++  --> Precompiled Headers -->
> Precompiled Header --> Use (/Yu)
> I get the error
> sqlite3.c : fatal error C1853: 'Debug\Contegos_UI.pch' precompiled header
> file is from a previous version of the compiler, or the precompiled header
> is C++ and you are using it from C (or vice versa)
>
> If I change to Precompiled Header --> Create (/Yc)
> I get the error
> sqlite3.c(136660): error C2857: '#include' statement specified with the
> /YcStdAfx.h command-line option was not found in the source file
>
>
> How can I solve this problem ?

Change it to Precompiled Header --> Not Using Precompiled Headers,
because you won't be using the same headers to compile your application
and sqlite3.c.


Pavel


Re: [sqlite] An interesting (strange) issue with selects

2012-06-23 Thread Pavel Ivanov
On Sat, Jun 23, 2012 at 10:36 PM, Dennis Volodomanov <i...@psunrise.com> wrote:
> On 24/06/2012 12:29 PM, Pavel Ivanov wrote:
>>
>> AFAIK, checkpoints are application-specific, but SQLite prohibits
>> second writer until first one committed its transaction and released
>> database lock. So there can't be such thing as "two writers, both
>> writing to the same DB". If one writer writes, another one is locked
>> out and waits. And btw checkpoint cannot be completed if there are
>> some application with transactions that started before last commit to
>> the database was made. Although partial checkpoint is possible in such
>> situation.
>
> Doesn't this suggest, though, that if the first writer crashes during a
> checkpoint, the second writer will be forever locked out? Or is there some
> internal mechanism that takes care of that?

Yes, internally SQLite uses OS-level file locks. When a process crashes
or goes away for any other reason, the OS clears all locks it held, so
the other writer sees the database unlocked and is able to proceed.

Pavel


Re: [sqlite] An interesting (strange) issue with selects

2012-06-23 Thread Pavel Ivanov
On Sat, Jun 23, 2012 at 10:18 PM, Dennis Volodomanov  wrote:
> It does raise an interesting question though - how is this handled in SQLite
> internally? When there are two writers, both writing to the same DB (WAL
> mode) and one of them crashes before reaching a checkpoint, will the second
> writer pick up on that and checkpoint correctly? To put it simply - are
> checkpoints DB-specific or application-specific?

AFAIK, checkpoints are application-specific, but SQLite prohibits a
second writer until the first one has committed its transaction and
released the database lock. So there can't be such a thing as "two
writers, both writing to the same DB". If one writer writes, the other
is locked out and waits. And, by the way, a checkpoint cannot be
completed if there is an application with a transaction that started
before the last commit to the database was made, although a partial
checkpoint is possible in that situation.

Pavel


Re: [sqlite] An interesting (strange) issue with selects

2012-06-23 Thread Pavel Ivanov
On Sat, Jun 23, 2012 at 9:21 PM, Dennis Volodomanov  wrote:
> On 22/06/2012 9:48 AM, Dennis Volodomanov wrote:
>>
>> I'll see if the new compilation options still make this happen, but it
>> takes a couple of hours for each test due to data volume and I'd need to run
>> a few tests (unless it occurs right away of course). I'll post back.
>>
>
> This hasn't occurred yet, but I did manage to replicate this in another way
> (using the new compilation options) by stopping (in debug)/crashing the
> second writer process. It appears (albeit hard to say decisively) that the
> entry in the table that was written by the crashed process (and thus it's
> wal is not flushed) is the one that comes up in count, but can't be selected
> by the other process. Does this make sense or is it just a coincidence that
> I'm seeing?

Such a thing should never happen; if it does, SQLite has a serious bug.
Pavel


Re: [sqlite] fetching rows

2012-06-23 Thread Pavel Ivanov
No, SQLite doesn't auto-commit every 25k insertions. It auto-commits
after each INSERT statement (no matter how many rows that statement
inserts). If you wrap several INSERT statements in a transaction, they
will execute faster.

Pavel


On Sat, Jun 23, 2012 at 3:13 PM, Durga D <durga.d...@gmail.com> wrote:
> Thank you(Pavel) for the prompt response.
>
> Sqlite does auto commit for every 25k insertions. Do I need to change the
> number from 25k to x ( for ex: 100)?
>
> On Thu, Jun 21, 2012 at 7:54 AM, Pavel Ivanov <paiva...@gmail.com> wrote:
>
>> On Wed, Jun 20, 2012 at 11:33 PM, Durga D <durga.d...@gmail.com> wrote:
>> > Hi All,
>> >
>> >    I have to develop a sqlite application. Within the process, multiple
>> > threads are trying to access in write/read mode. Will sqlite supports
>> read
>> > and write at a time?
>> >
>> >    scenario:  1. x number of records (x related data) are going to insert
>> > in a transaction in ThreadA. still, not committed.
>> >                   2. In ThreadB (parallel thread), trying to read records
>> > (x related data), which are there in transaction.
>> >
>> >    By default, sqlite supports this scenario? or Do I need to enable any
>> > flags/macros?
>>
>> Yes, SQLite supports that. You can see some "problems" only if ThreadA
>> inserts a lot in one transaction, so that it doesn't fit into memory
>> cache.
>>
>>
>> Pavel


Re: [sqlite] Different backend possible?

2012-06-23 Thread Pavel Ivanov
On Sat, Jun 23, 2012 at 2:28 PM, Simon Slavin  wrote:
> On 23 Jun 2012, at 7:14pm, "Peter M. Friedrich" 
>  wrote:
>
>> do you think it's possible to create a different backend? I want to
>> develop a relational database system which uses tables in FITS-files
>> (for details about this format see
>> http://fits.gsfc.nasa.gov/fits_standard.html). I guess it would be a
>> good idea to use an approved database like SQLite with a new backend -
>> in this case a different BTree-implementation.
>
> Two ways to do it.  You can create your own virtual table mechanism
>
> 
>
> or you can implement your own virtual file system
>
> 
>
> A third alternative is to jettison the SQLite name entirely and just to take 
> some of the source code that makes up SQLite and use it for your own software 
> which accesses FITS files.  You don't need any license to use any of the 
> SQLite source code, just use it and (for preference) credit the SQLite team 
> for supplying it.

Another option is to do it the Berkeley DB way
(http://www.oracle.com/technetwork/products/berkeleydb/overview/index-085366.html).
They use SQLite's API and the full SQL engine from SQLite, and execute
it on their own B-tree implementation.


Pavel


Re: [sqlite] Selecting from a GROUP Select

2012-06-22 Thread Pavel Ivanov
I guess you need something like

SELECT * from TableA
WHERE z in (
SELECT z FROM TableA GROUP BY z HAVING count(*)=2
);


Pavel


On Fri, Jun 22, 2012 at 6:43 PM, Peter Haworth  wrote:
> I have a situation where I need to select entries based on the count of the
> number of entries in a GROUP = 2, and also that the value of a specific
> column must be different for each row in the group. I then need to select
> all the individual rows from the qualifying groups.
>
> Getting the groups with count=2 is pretty straightforward:
>
> SELECT * FROM TableA GROUP BY z HAVING count(*)=2;
>
> Is there a format that will allow me to have the same SELECT statement
> return the individual rows from each qualifying group that have a different
> value in a specific column or is this something I should do in my
> application logic?  The value of 'z' in the above example could be used to
> select the individual rows and there are other columns in the table that
> could be used in a JOIN statement to link the table to itself.
>
> Thanks,
>
> Pete
> lcSQL Software 


Re: [sqlite] SQLite database on the android phone

2012-06-22 Thread Pavel Ivanov
Ask this on an Android-related mailing list.

And if you include your populated database in the application package
before installation and don't see any data after installation, check
that you open your database with an absolute path; otherwise you could
be opening some other database file, not the one you think you're
opening.


Pavel


On Thu, Jun 21, 2012 at 9:42 AM, kabugi esbon  wrote:
> Dear Sir/Madam,
>
> I have developed an android application which have an sqlite database with
> some data which I had imported from a CSV file, and it's working well on the
> emulator virtual machine but when i install the application on the android
> phone it does not come with the data on the database. my request is to
> assist me on how to import the CSV file or SQL file which contains my data
> to my sqlite database when am installing the application on the phone.
>
> Thank you in advance.
>
> Esbon Kabugi

