Re: [sqlite] Left join help.

2013-04-26 Thread James K. Lowden
On Fri, 26 Apr 2013 14:06:23 +0200
Clemens Ladisch  wrote:

> When using CROSS where standard SQL allows it (for an actual cartesian
> product), the table ordering does not really matter for optimization
> purposes because the DB has to do two nested full table scans anyway.

Well, no, not really: it doesn't *have* to.  

Logically, 

from A join B 
on A.a = B.a

is exactly the same as 

from A cross join B
where A.a = B.a

and

from A, B
where A.a = B.a

and the DBMS is free to treat them the same.  In fact, many do.  :-)

--jkl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Opening an in-memory database in two connections, one read-write, one read-only.

2013-04-26 Thread Scott Hess
Someone over here was trying to use a shared-cache in-memory database with
two connections, one a read-write connection for purposes of populating
things, the other a read-only connection for purposes of letting a
(trusted) user make queries.  They were surprised to find out that they
could run write queries against the read-only handle (the insert query in
my example).  Also, if you flip the order of the opens so that the
read-only open happens first, then the create statement fails
with SQLITE_READONLY.

[Aside: I suggested that it might be more appropriate to use an authorizer,
rather than try to open an in-memory database read-only.]
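
For illustration, a rough sketch of that authorizer idea; the callback name and the
particular action codes checked here are illustrative choices, not part of the
original suggestion:

#include "sqlite3.h"

/* Deny any statement that would modify data on this connection. */
static int deny_writes(void *unused, int action,
                       const char *arg1, const char *arg2,
                       const char *dbname, const char *trigger){
  switch( action ){
    case SQLITE_INSERT:
    case SQLITE_UPDATE:
    case SQLITE_DELETE:
    case SQLITE_CREATE_TABLE:
    case SQLITE_DROP_TABLE:
      return SQLITE_DENY;   /* the statement fails with an authorization error */
    default:
      return SQLITE_OK;     /* everything else (reads etc.) is allowed */
  }
}

/* ... after opening the nominally read-only connection ro: */
/* sqlite3_set_authorizer(ro, deny_writes, NULL); */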

A couple minutes in the code makes me think that the sense of
"cache=shared" is implemented at the btree.c layer, so it is somewhat
unlikely that this is reasonable to actually support.  That said, it is
unexpected.  It seems like the library should throw something like
SQLITE_MISUSE when you request a shared-cache open incompatible with
previous opens.  WDYT?

The same basic problem also appears to happen for on-disk databases, which
makes me wonder if we're just doing something wrong.

Thanks,
scott

---
/* gcc -g -o sqlmem sqlmem.c sqlite3.c */

#include "sqlite3.h"
#include 

int main(int argc, char** argv) {
  sqlite3* rw;
  sqlite3* ro;
  int rc;
  char* err;

  rc = sqlite3_open_v2("file::memory:?cache=shared", ,
   SQLITE_OPEN_READWRITE | SQLITE_OPEN_URI, NULL);
  if (rc!=SQLITE_OK) {
fprintf(stderr, "rw rc==%d\n", rc);
return 1;
  }

  rc = sqlite3_open_v2("file::memory:?cache=shared", ,
   SQLITE_OPEN_READONLY | SQLITE_OPEN_URI, NULL);
  if (rc!=SQLITE_OK) {
fprintf(stderr, "ro rc==%d\n", rc);
return 1;
  }

  rc = sqlite3_exec(rw, "CREATE TABLE t (a TEXT)", NULL, NULL, );
  if (rc!=SQLITE_OK) {
fprintf(stderr, "rw create error: %d/%s\n", rc, err);
return 1;
  }

  rc = sqlite3_exec(ro, "SELECT 1", NULL, NULL, );
  if (rc!=SQLITE_OK) {
fprintf(stderr, "ro select error: %d/%s\n", rc, err);
return 1;
  }

  rc = sqlite3_exec(ro, "INSERT INTO t (a) VALUES ('a')", NULL, NULL, );
  if (rc!=SQLITE_OK) {
fprintf(stderr, "ro insert error: %d/%s\n", rc, err);
return 1;
  } else {
fprintf(stderr, "ro insert succeeeded?\n");
return 1;
  }
}
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Order of columns in group by statement affects query performance

2013-04-26 Thread James K. Lowden
On Fri, 26 Apr 2013 11:12:15 +0200
Hick Gunter  wrote:

> It is not unreasonable to assume that in a well designed SQL
> Statement the GROUP BY clause will be backed up by the necessary
> index and an identical ORDER BY clause

That is an entirely unreasonable assumption.  Order may or may not matter.
I've often written GROUP BY queries ordered by the aggregate, or to
select the maximum.

> Any index that covers all the GROUP BY fields is a "good" index
> because it allows aggregates to be computed "on the fly" as opposed
> to in a temporary table.

Agreed.  I hope it's clear now that "covers" is order-independent.  It
so happens, apparently, that the order in which the column names appear
in the GROUP BY determines whether or not the index is used.  That's
unfortunate, because it makes two equivalent queries perform very
differently.

--jkl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What pragma to use to get maximum speed (at expense of safety)?

2013-04-26 Thread David King
> The idea of temporary tables in-memory is nice, but I do not know how
> to apply it.
> I see in the documentation I can use the TEMP in CREATE TABLE, but I am not
> sure of the effect.
> Does it mean that the table is created in memory and it is lost in
> sqlite3_close?



There are a few things at work here. You can have your entire database in 
memory (it is lost on close) by giving sqlite3_open the filename :memory:.

Additionally and unrelatedly, you can create temporary tables (CREATE TEMP 
TABLE, as you say). These are tables that disappear on close. They are 
associated with the database connection, not the database file itself. They are 
usually stored in temporary files in a different on-disk location from your 
main database file.

With the pragma temp_store set to MEMORY, the temporary tables can be entirely 
in memory, even if the rest of your database isn't. Using this, you can mix 
on-disk tables (in your regular database) and in-memory tables (temporary 
tables, when temp_store=MEMORY). That lets you easily keep some intermediate 
stuff in memory and only flush to the on-disk tables when you're ready. Sometimes 
(depending on your usage patterns of course) this batching can help speed 
things up.
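
As a rough sketch of that mix, assuming a connection db to an on-disk database
("disk_table" and "scratch" are invented names for the example):

#include "sqlite3.h"

/* Keep scratch rows in an in-memory TEMP table, then flush them into the
   on-disk table in a single transaction. */
static void flush_example(sqlite3 *db){
  sqlite3_exec(db, "PRAGMA temp_store = MEMORY;", 0, 0, 0);
  sqlite3_exec(db, "CREATE TEMP TABLE IF NOT EXISTS scratch(v);", 0, 0, 0);

  /* ... many cheap inserts into scratch ... */

  sqlite3_exec(db,
      "BEGIN;"
      "INSERT INTO disk_table SELECT v FROM scratch;"
      "DELETE FROM scratch;"
      "COMMIT;", 0, 0, 0);
}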


> 
> On Fri, Apr 26, 2013 at 8:07 PM, David King (mailto:dk...@ketralnis.com) wrote:
> > auto_vacuum Turn off autovacuum and just run it yourself when you're idle
> > foreign_keys Turn off foreign keys checks (or just don't use foreign keys)
> > ignore_check_constraints Same
> > journal_mode OFF might actually be faster than MEMORY, but disables 
> > rollback support
> > locking_mode EXCLUSIVE can be mildly faster in some cases
> > 
> > 
> > secure_delete OFF
> > 
> > 
> > synchronous OFF as you said
> > 
> > 
> > cache_size beef up as you said. this won't always make everything faster 
> > though, since it can starve the other processes on your machine for memory 
> > even for rarely-used sqlite data when they could potentially make better 
> > use of the OS page cache.
> > temp_store set to MEMORY, this will help if you have queries that create 
> > temporary tables, even if you're not doing so yourself (e.g. unindexed 
> > group bys)
> > 
> > 
> > 
> > If you can, use an entirely :memory: database. This may not work for you, 
> > but if it does, great.
> > 
> > Since you're not using journal_mode = WAL this is moot for you, but if you 
> > were I'd say turn off wal_autocheckpoint and wal_checkpoint yourself when 
> > you're idle
> > 
> > For my somewhat-similar use case I find that writing intermediate changes 
> > to an in-memory table (insert into my_temp_table) and periodically flushing 
> > those to disk (insert into real_table select from my_temp_table; delete 
> > from my_temp_table) can help speed things up if a lot of index-updating is 
> > involved in the on-disk table.
> > 
> > Make sure you're doing all of your inserts in a transaction. inserting is 
> > pretty cheap, but committing a transaction is expensive, and if you're not 
> > in a transaction each insert is its own transaction
> > 
> > Make sure you're re-using your prepared statements
> > 
> > Play with page_size to get it right for your write patterns
> > 
> > Don't use a connection concurrently, it's doing internal locking anyway. If 
> > you must, use the shared page cache. If you're doing it from multiple 
> > processes, use WAL mode.
> > 
> > 
> > 
> > 
> > On Friday, 26 April, 2013 at 10:44, Paolo Bolzoni wrote:
> > 
> > > The subject pretty much says it all, I use sqlite3 as a way to save
> > > temporary results from a calculation.
> > > 
> > > In this context I do not care about safety of the data. If the program
> > > fails or there is a blackout I will just delete the sqlite3 file, 
> > > eventually
> > > fix the bug, and restart.
> > > 
> > > At the moment I use this pragmas:
> > > 
> > > PRAGMA synchronous = OFF;
> > > PRAGMA journal_mode = MEMORY;
> > > PRAGMA cache_size = -10240;
> > > 
> > > Is there anything other I can do to speed-up sqlite3 at expenses of
> > > safety?
> > > 
> > > Thanks,
> > > Paolo
> > > ___
> > > sqlite-users mailing list
> > > sqlite-users@sqlite.org (mailto:sqlite-users@sqlite.org)
> > > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > 
> > 
> > 
> > 
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org (mailto:sqlite-users@sqlite.org)
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org (mailto:sqlite-users@sqlite.org)
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users




Re: [sqlite] Programming API vs console

2013-04-26 Thread Simon Slavin

On 27 Apr 2013, at 3:29am, Igor Korot  wrote:

> sqlite> SELECT ownerid FROM owners WHERE ownername = 'Team 1' AND id = 1;
> 53

For testing, kill the sub-select in your INSERT command and just put a 53 in 
there.  See if that changes anything.

> sqlite>

Please retrieve the values returned by

last_insert_rowid()

or the C function

sqlite3_last_insert_rowid(sqlite3*)

before and after the insert and see if the insert changes the value.  After 
doing that ...
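
As a minimal sketch of that before/after check, reusing the m_handle, res and
error variables from the code elsewhere in this thread, and the literal INSERT
from the console log:

sqlite3_int64 before = sqlite3_last_insert_rowid( m_handle );
res = sqlite3_exec( m_handle,
    "INSERT INTO playersdrafted VALUES( 125, 1, 53, 38, 1, 'OF' );",
    0, 0, &error );
sqlite3_int64 after = sqlite3_last_insert_rowid( m_handle );
/* If after == before, the INSERT did not add a row on this connection. */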

Please substitute your INSERT command with the simplest possible INSERT command 
you can think of and see if that has the desired effect.  First try one which 
inserts fixed values.  If that works  properly ...

Try putting the sub-SELECT back in.  Then try putting in one of the parameters. 
 Then add more parameters.  See if you can spot the point at which the command 
stops working.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Programming API vs console

2013-04-26 Thread Igor Korot
Simon,

On Fri, Apr 26, 2013 at 5:49 PM, Simon Slavin  wrote:

>
> On 27 Apr 2013, at 12:27am, Igor Korot  wrote:
>
> > And here is the log from the console:
> >
> > SQLite version 3.7.14 2012-09-03 15:42:36
> > Enter ".help" for instructions
> > Enter SQL statements terminated with a ";"
> > sqlite> INSERT INTO playersdrafted VALUES( 125, 1, ( SELECT ownerid FROM
> > owners WHERE ownername = "Team 1" AND id = 1 ), 38, 1, "OF" );
> > sqlite> SELECT * FROM playersdrafted;
> > 125|1|53|38.0|1|OF
> >
> > I can give remote access to solve this mystery...
>
> Literal strings in SQLite should be in single quotes, not double quotes.
>  So first change that and see if that fixes it.
>

This code


    int res;
    query = wxString::Format( "INSERT INTO playersdrafted VALUES( %d, %d, "
        "( SELECT ownerid FROM owners WHERE ownername = \'%s\' AND id = %d ), "
        "%d, %d, \'%s\' );", player.GetPlayerId(), leagueId,
        const_cast( player ).GetOwner(), leagueId,
        player.GetAmountPaid(), player.GetDraftOrder(),
        const_cast( player ).GetDraftedPosition() );
    char *error;
    res = sqlite3_exec( m_handle, query, 0, 0, &error );
    if( res != SQLITE_OK )
    {
        wxMessageBox( wxString::Format( "Error inserting owners "
            "for the new league: %s", error ) );
        delete error;
    }
    else
    {
        res = sqlite3_prepare_v2( m_handle, "SELECT * FROM playersdrafted;",
                                  -1, &stmt, 0 );
        if( res == SQLITE_OK )
        {
            res = sqlite3_step( stmt );
            if( res == SQLITE_ROW )
            {
                int playerid = sqlite3_column_int( stmt, 0 );
                int leagueid = sqlite3_column_int( stmt, 1 );
            }
        }
    }


executes fine. No problem at all. Under the debugger I can even see the
proper values of playerid and leagueid.
But on the next run the record cannot be retrieved, as the table
does not contain it.
And querying the table from the sqlite3 console does not yield any results.


> If not ...
>
> show us the results of the command .schema playersdrafted
>
> show us the results of "SELECT * FROM playersdrafted;" /before/ you do the
> INSERT.
>

Here it is:

SQLite version 3.7.14 2012-09-03 15:42:36
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> .schema playersdrafted
CREATE TABLE playersdrafted(playerid integer, id ineteger, ownerid integer,
draftprice double, draftorder integer, draftposition char(2), foreign
key(playerid) references players(playerid),foreign key(id) references
leagues(id), foreign key (ownerid) references owners(ownerid));
CREATE INDEX id_playerid ON playersdrafted(playerid,id);
sqlite> SELECT * FROM playersdrafted;
sqlite>


>
> and also the results of
>
> SELECT ownerid FROM owners WHERE ownername = 'Team 1' AND id = 1
>

and here

sqlite> SELECT ownerid FROM owners WHERE ownername = 'Team 1' AND id = 1;
53
sqlite>

Thank you.


> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What pragma to use to get maximum speed (at expense of safety)?

2013-04-26 Thread Simon Slavin

On 27 Apr 2013, at 2:21am, Paolo Bolzoni  wrote:

> The idea of temporary tables in-memory is nice, but I do not know how
> to apply it.
> I see in the documentation I can use the TEMP in CREATE TABLE, but I am not
> sure of the effect.

Temporary tables are a different thing.  Those are just tables which are 
automatically deleted when you close the connection.  You can make those in 
memory or on disk.

> Does it mean that the table is created in memory and it is lost in
> sqlite3_close?

Yes.  Here's something about them:



The basic advantage is that they're extremely fast.  Unless your memory is full 
and being paged out to disk.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What pragma to use to get maximum speed (at expense of safety)?

2013-04-26 Thread Paolo Bolzoni
The idea of temporary tables in-memory is nice, but I do not know how
to apply it.
I see in the documentation I can use the TEMP in CREATE TABLE, but I am not
sure of the effect.
Does it mean that the table is created in memory and it is lost in
sqlite3_close?

On Fri, Apr 26, 2013 at 8:07 PM, David King  wrote:
> auto_vacuum Turn off autovacuum and just run it yourself when you're idle
> foreign_keys Turn off foreign keys checks (or just don't use foreign keys)
> ignore_check_constraints Same
> journal_mode OFF might actually be faster than MEMORY, but disables rollback 
> support
> locking_mode EXCLUSIVE can be mildly faster in some cases
>
>
> secure_delete OFF
>
>
> synchronous OFF as you said
>
>
> cache_size beef up as you said. this won't always make everything faster 
> though, since it can starve the other processes on your machine for memory 
> even for rarely-used sqlite data when they could potentially make better use 
> of the OS page cache.
> temp_store set to MEMORY, this will help if you have queries that create 
> temporary tables, even if you're not doing so yourself (e.g. unindexed group 
> bys)
>
>
>
> If you can, use an entirely :memory: database. This may not work for you, but 
> if it does, great.
>
> Since you're not using journal_mode = WAL this is moot for you, but if you 
> were I'd say turn off wal_autocheckpoint and wal_checkpoint yourself when 
> you're idle
>
> For my somewhat-similar use case I find that writing intermediate changes to 
> an in-memory table (insert into my_temp_table) and periodically flushing 
> those to disk (insert into real_table select from my_temp_table; delete from 
> my_temp_table) can help speed things up if a lot of index-updating is 
> involved in the on-disk table.
>
> Make sure you're doing all of your inserts in a transaction. inserting is 
> pretty cheap, but committing a transaction is expensive, and if you're not in 
> a transaction each insert is its own transaction
>
> Make sure you're re-using your prepared statements
>
> Play with page_size to get it right for your write patterns
>
> Don't use a connection concurrently, it's doing internal locking anyway. If 
> you must, use the shared page cache. If you're doing it from multiple 
> processes, use WAL mode.
>
>
>
>
> On Friday, 26 April, 2013 at 10:44, Paolo Bolzoni wrote:
>
>> The subject pretty much says it all, I use sqlite3 as a way to save
>> temporary results from a calculation.
>>
>> In this context I do not care about safety of the data. If the program
>> fails or there is a blackout I will just delete the sqlite3 file, eventually
>> fix the bug, and restart.
>>
>> At the moment I use this pragmas:
>>
>> PRAGMA synchronous = OFF;
>> PRAGMA journal_mode = MEMORY;
>> PRAGMA cache_size = -10240;
>>
>> Is there anything other I can do to speed-up sqlite3 at expenses of
>> safety?
>>
>> Thanks,
>> Paolo
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org (mailto:sqlite-users@sqlite.org)
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Programming API vs console

2013-04-26 Thread Simon Slavin

On 27 Apr 2013, at 12:27am, Igor Korot  wrote:

> And here is the log from the console:
> 
> SQLite version 3.7.14 2012-09-03 15:42:36
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> INSERT INTO playersdrafted VALUES( 125, 1, ( SELECT ownerid FROM
> owners WHERE ownername = "Team 1" AND id = 1 ), 38, 1, "OF" );
> sqlite> SELECT * FROM playersdrafted;
> 125|1|53|38.0|1|OF
> 
> I can give a remote access to solve this mistery...

Literal strings in SQLite should be in single quotes, not double quotes.  So 
first change that and see if that fixes it.

If not ...

show us the results of the command .schema playersdrafted

show us the results of "SELECT * FROM playersdrafted;" /before/ you do the 
INSERT.

and also the results of

SELECT ownerid FROM owners WHERE ownername = 'Team 1' AND id = 1

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transaction question

2013-04-26 Thread Igor Tandetnik

On 4/26/2013 7:18 PM, Igor Korot wrote:

If I'm inside transaction and want to insert a record, will I see right
after executing sqlite3_step() or I have to execute a "COMMIT" statement to
see it?


The connection that executed sqlite3_step() would see its own changes 
right away. Other connections would only see changes after they've been 
committed. This is "I" in ACID.
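
For example, with two connections to the same on-disk database file (a sketch
only; error checking omitted, and the exact behavior of the second connection
depends on journal mode and cache spilling):

sqlite3 *conn1, *conn2;
sqlite3_open("test.db", &conn1);
sqlite3_open("test.db", &conn2);

sqlite3_exec(conn1, "CREATE TABLE IF NOT EXISTS t(x);", 0, 0, 0);
sqlite3_exec(conn1, "BEGIN; INSERT INTO t VALUES(1);", 0, 0, 0);

/* conn1: SELECT count(*) FROM t  -> 1  (sees its own uncommitted row)      */
/* conn2: SELECT count(*) FROM t  -> 0  (or SQLITE_BUSY) until conn1 commits */

sqlite3_exec(conn1, "COMMIT;", 0, 0, 0);
/* conn2: SELECT count(*) FROM t  -> 1 */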

--
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Programming API vs console

2013-04-26 Thread Igor Korot
And here is the log from the console:

SQLite version 3.7.14 2012-09-03 15:42:36
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> INSERT INTO playersdrafted VALUES( 125, 1, ( SELECT ownerid FROM
owners WHERE ownername = "Team 1" AND id = 1 ), 38, 1, "OF" );
sqlite> SELECT * FROM playersdrafted;
125|1|53|38.0|1|OF

I can give a remote access to solve this mistery...

Thank you.

On Fri, Apr 26, 2013 at 12:31 AM, Igor Korot  wrote:

> Hi, ALL,
>
> On Thu, Apr 25, 2013 at 2:41 PM, Random Coder wrote:
>
>> On Tue, Apr 23, 2013 at 8:47 PM, Igor Korot  wrote:
>>
>> > query = wxString::Format(...);
>> > if( ( result = sqlite3_prepare_v2( m_handle, query, -1, , 0 ) ) !=
>> > SQLITE_OK )
>> >
>>
>> It's been a while since I've worked with wxWidgets, but when I did,
>> wxString didn't support an implicit conversion like you're using here.
>>
>> You need to do something like this for your sqlite_prepare_v2 call:
>>
>> sqlite3_prepare_v2(m_handle, (const char*)query.mb_str(wxConvUTF8), -1,
>> , 0);
>>
>> Though, I suppose if I'm right, this should have failed in some other way
>> much sooner.
>>
>
> Here is the table description:
>
> sqlite> .dump playersdrafted
> PRAGMA foreign_keys=OFF;
> BEGIN TRANSACTION;
> CREATE TABLE playersdrafted(playerid integer, id ineteger, ownerid
> integer, draftprice double, draftorder integer, draftposition char(2),
> foreign key(playerid) references players(playerid),foreign key(id)
> references leagues(id), foreign key (ownerid) references owners(ownerid));
> CREATE INDEX id_playerid ON playersdrafted(playerid,id);
> COMMIT;
>
> All primary keys exist by the time the INSERT INTO... happens.
>
> I even tried to do with sqlite3_exec(), but it is still failing
>
> Any idea?
>
> Thank you.
>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transaction question

2013-04-26 Thread Igor Korot
Hi, ALL,
If I'm inside transaction and want to insert a record, will I see right
after executing sqlite3_step() or I have to execute a "COMMIT" statement to
see it?

Thank you.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Version 3.7.17 Preview

2013-04-26 Thread Darren Duncan

On 2013.04.26 8:34 AM, Richard Hipp wrote:

Please visit http://www.sqlite.org/draft/releaselog/3_7_17.html to see the
proposed enhancements and changes in SQLite version 3.7.17.  Your comments,
criticisms and suggestions are welcomed and encouraged.


I think the extension mechanism is valuable, and a very appropriate way to bring 
in many kinds of extra functionality, such as regular expression support.  And 
I'm glad an extension for that is provided now, which gives much more power to 
type constraint definitions. -- Darren Duncan



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] What pragma to use to get maximum speed (at expense of safety)?

2013-04-26 Thread David King
auto_vacuum Turn off autovacuum and just run it yourself when you're idle 
foreign_keys Turn off foreign keys checks (or just don't use foreign keys)
ignore_check_constraints Same
journal_mode OFF might actually be faster than MEMORY, but disables rollback 
support
locking_mode EXCLUSIVE can be mildly faster in some cases


secure_delete OFF


synchronous OFF as you said


cache_size beef up as you said. this won't always make everything faster 
though, since it can starve the other processes on your machine for memory even 
for rarely-used sqlite data when they could potentially make better use of the 
OS page cache.
temp_store set to MEMORY, this will help if you have queries that create 
temporary tables, even if you're not doing so yourself (e.g. unindexed group 
bys)



If you can, use an entirely :memory: database. This may not work for you, but 
if it does, great.

Since you're not using journal_mode = WAL this is moot for you, but if you were 
I'd say turn off wal_autocheckpoint and wal_checkpoint yourself when you're idle

For my somewhat-similar use case I find that writing intermediate changes to an 
in-memory table (insert into my_temp_table) and periodically flushing those to 
disk (insert into real_table select from my_temp_table; delete from 
my_temp_table) can help speed things up if a lot of index-updating is involved 
in the on-disk table.

Make sure you're doing all of your inserts in a transaction. inserting is 
pretty cheap, but committing a transaction is expensive, and if you're not in a 
transaction each insert is its own transaction

Make sure you're re-using your prepared statements
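
A small sketch combining those two points, one transaction around the whole
batch and one prepared statement reused via bind/step/reset (the table name
"results" is made up):

#include "sqlite3.h"

static void bulk_insert(sqlite3 *db){
  sqlite3_stmt *ins;
  int i;

  sqlite3_exec(db, "BEGIN;", 0, 0, 0);
  sqlite3_prepare_v2(db, "INSERT INTO results(v) VALUES(?1);", -1, &ins, 0);
  for(i = 0; i < 100000; i++){
    sqlite3_bind_int(ins, 1, i);
    sqlite3_step(ins);   /* cheap: no commit happens here */
    sqlite3_reset(ins);  /* reuse the same compiled statement */
  }
  sqlite3_finalize(ins);
  sqlite3_exec(db, "COMMIT;", 0, 0, 0);  /* one commit for the whole batch */
}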

Play with page_size to get it right for your write patterns

Don't use a connection concurrently, it's doing internal locking anyway. If you 
must, use the shared page cache. If you're doing it from multiple processes, 
use WAL mode.




On Friday, 26 April, 2013 at 10:44, Paolo Bolzoni wrote:

> The subject pretty much says it all, I use sqlite3 as a way to save
> temporary results from a calculation.
> 
> In this context I do not care about safety of the data. If the program
> fails or there is a blackout I will just delete the sqlite3 file, eventually
> fix the bug, and restart.
> 
> At the moment I use this pragmas:
> 
> PRAGMA synchronous = OFF;
> PRAGMA journal_mode = MEMORY;
> PRAGMA cache_size = -10240;
> 
> Is there anything other I can do to speed-up sqlite3 at expenses of
> safety?
> 
> Thanks,
> Paolo
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org (mailto:sqlite-users@sqlite.org)
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Writing in a blob

2013-04-26 Thread Roland Hughes
It actually does make sense to add chunking to sqlite.  There would be
some computational overhead, but that all depends on the chunk size and
the cache size of the database.  It makes no sense to implement YAFS
(Yet Another File System) inside of SQLite.

While many here view SQLite only in terms of desktop applications, the
reality is it gets used in embedded systems for data gathering and image
processing.  Some of these systems gather data in real time, and others
poll devices periodically to obtain a "unit" of work.  The devices
generating the data have completely different operating systems and even
different Endianism than the data collection system.  These units of
work are eventually uploaded to yet another system (usually midrange or
mainframe) where they are processed into industrial strength database
systems in a much more granular fashion.  Chunking of data allows for
units of work to be arbitrary sizes.

I understand that many of you reading this may not grasp the application
so I will bastardize some real life stuff you might be able to picture.
Many of you probably run BOINC and participate in some noble research
project with the idle time of your computer.  (If you don't, you
should.)  No matter the project, they bust up massive amounts of data
into chunks.  Somewhere a table in a database identifies each chunk, the
date it was collected, processed, who processed it, a corresponding
results chunk, and some summary result information fields.  When your
BOINC client connects with the server it scans the database to identify
the next available chunk or chunks, assigns them to you, then sends the
chunks to your client for processing.  The database and the client do
not care about the content of the chunk, just its size and transmission
CRC.  For lack of a better description, the client plug-in for the
project is the only piece which knows about the content of the chunk and
how to process it.

It should be possible to add chunking to the database itself in such a
manner that any user who does not actually use blobs in their database
does not pay a computational penalty for the feature.

There is an ever increasing number of embedded systems which would like
to use a "linkable" database, but stumble when it comes to raw data
storage.

Just my 0.02.




On Fri, 2013-04-26 at 18:16 +0100, Simon Slavin wrote:

> On 26 Apr 2013, at 5:26pm, Stephen Chrzanowski  wrote:
> 
> > ALL THAT SAID, I doubt it'd get implemented
> 
> I'm also in this category.  In fact I hope it doesn't get implemented.  Yes, 
> technically it can be done.  But it's the sort of thing people assign as 
> Computer Science homework.
> 


-- 
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net

No U.S. troops have ever lost their lives defending our ethanol
reserves.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] What pragma to use to get maximum speed (at expense of safety)?

2013-04-26 Thread Paolo Bolzoni
The subject pretty much says it all, I use sqlite3 as a way to save
temporary results from a calculation.

In this context I do not care about safety of the data. If the program
fails or there is a blackout I will just delete the sqlite3 file, eventually
fix the bug, and restart.

At the moment I use this pragmas:

PRAGMA synchronous = OFF;
PRAGMA journal_mode = MEMORY;
PRAGMA cache_size = -10240;

Is there anything other I can do to speed-up sqlite3 at expenses of
safety?

Thanks,
Paolo
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Version 3.7.17 Preview

2013-04-26 Thread jose isaias cabrera

"Richard Hipp" wrote...


Please visit http://www.sqlite.org/draft/releaselog/3_7_17.html to see the
proposed enhancements and changes in SQLite version 3.7.17.  Your 
comments,

criticisms and suggestions are welcomed and encouraged.


Wow!  Lots of new features...  Sweet!


Snapshot amalgamation builds are available at
http://www.sqlite.org/draft/download.html - please consider downloading a
snapshot and trying it out in your application(s) and letting us know if
you encounter any problems.

No proposed release date for SQLite version 3.7.17 has been established 
yet.


--
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users 


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Writing in a blob

2013-04-26 Thread Simon Slavin

On 26 Apr 2013, at 5:26pm, Stephen Chrzanowski  wrote:

> ALL THAT SAID, I doubt it'd get implemented

I'm also in this category.  In fact I hope it doesn't get implemented.  Yes, 
technically it can be done.  But it's the sort of thing people assign as 
Computer Science homework.

If you're running SQLite you already have a file system.  I don't see any point 
in trying to squeeze a file system inside a DBMS inside a file system.  And 
SQLite, having been designed to be 'lite' in the first place, isn't a good 
place to do it.  That task stops only when you've implemented Posix.

If anyone is interested in pouring time into features which would be a good 
natural fit for SQLite4, here are two ideas:

A) The features currently built into FTS4 should be expressible in a more 
SQL-like way, and it should be possible implement searching many long text 
fields for combinations of kinds of text in a better way.  Devise storage 
format for something like

CREATE TOKENTABLE TABLE myTokenTable FOR myTable USING SEPARATORS (list here) 
AND STOPLIST (list here)

which uses the existing rowids in myTable to indicate which row of the 
underlying table a token appears in.  Work out semantics to allow for the 
updating of a token table after a transaction in which the underlying table is 
updated.  Work out semantics for searching the token table and the underlying 
table which are an elegant fit for SQLite3 syntax and English.  If you have 
time, write the code.

B) Work out a data structure which uses existing SQLite4 data types and can be 
used to store a table definition including all the things SQLite4 allows in the 
CREATE TABLE command.  In other words, if you were to store the information in 
the CREATE TABLE command in a structured way instead of as the commandstring, 
how would you do it ?  Your design should allow for easy implementation of some 
of the ALTER TABLE commands which SQLite3 doesn't support.  Pay great attention 
to being able to spot if a command should not be completed because it breaks the 
schema.  If you have time, provide code which allows for conversion between 
string and structured representations of the CREATE TABLE command, perhaps 
using the Lemon parser for commandstring-->structure.  You do not need to worry 
about ALTER TABLE commands running quickly: assume that if one is used it can 
take as long as needed to run.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Writing in a blob

2013-04-26 Thread Richard Hipp
On Fri, Apr 26, 2013 at 12:26 PM, Stephen Chrzanowski
wrote:

> Streaming a chunk of data as one huge "thing" is going to be
> faster in regards to writing and reading.
>

That depends.  See http://www.sqlite.org/intern-v-extern-blob.html


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Writing in a blob

2013-04-26 Thread Stephen Chrzanowski
+0.75 to Roland for splitting the data, and another +1 for reiterating the
true functionality of what LITE means.

Splitting the data into chunks will help with keeping a database small as
it can throw the raw data into pages that were previously marked as
deleted, thereby not increasing the database sizes.  I think raw blob data
is already stored as a separate entity within the DB, but, I'm only about
5% through reading the 4meg amalgamation source code.  (Yes, I'm reading
line by line, manually.  Fortunately I've written a "book" program that
reads text files and remembers what page I'm on. ;))

The loss of 0.25 comes with the contradiction of light versus increased
work load.  Streaming a chunk of data as one huge "thing" is going to be
faster in regards to writing and reading.  You write data first byte to
last, and only stop at the EOF or if an IO error comes up.  Read the data
in with the same thought process.  No real CPU processing is going to be
affected. Now if you start splitting the data, you're going to
add processing overhead to determine unused records, pages, and
what not.  Now on write the DBMS has to figure out what isn't used, see if
it has enough space in the page to write, and so on.  This takes away from
the speed part of Lite.  Reading will be less intensive; however, more
thought has to happen to properly read the data back in.

So now the question comes to which camp wins?  The streaming group, or the
partitioning group?  Both can, and still maintain backwards compatibility
for reading, however, databases written with the partitioning of the blobs,
older versions of SQLite3 won't know what is going on, so it'll break.

Assuming (since I'm not sure) that the streaming option is what is
implemented in the current implementation of SQLite3, that particular code
base stays in place.  When a database is created, a PRAGMA can be set PRIOR
to the first write of any BLOB data that would indicate how the database
writes the data.  Resetting this pragma would have no effect after the
first BLOB is written, even if all BLOB data is removed.  By default, the
old method can be used so older versions of SQLite can handle the data.
This could potentially be extended to a per-table field for future
endeavors.

So now we're looking at one extra check to see how the data should be
written and read, which takes away from speed (Albeit extremely small slice
of time) however, both camps win.

ALL THAT SAID, I doubt it'd get implemented due to



On Fri, Apr 26, 2013 at 11:30 AM, Roland Hughes  wrote:

> Speaking as an IT professional with 20+ years in the field, I would have
> to say adding any kind of "file system" support to SQLite would be a
> horrible thing.  Yes, I've used Oracle.  I've also used the only real
> product Oracle has, RDB on OpenVMS.  I've written books covering MySQL,
> PostgreSQL, and Object databases like POET.
>
> Lite is called lite for a reason.  Bare functionality with a lot of
> speed.
>
> The architects for this product need to take a lesson from the old DOS
> xBase systems.  Blobs should not be handled as one big unit.   They need
> to be given unique ID's and stored in fixed size chunks off in a hidden
> table much like a "memo" field was back in the day.  The "hidden" or
> child table supporting the blob column would have a key of ID +
> sequence.  The actual row should be ID, Sequence, BytesUsed, Chunk.
> The BytesUsed column allows you to keep exact byte sizes.  All Chunk data
> types should be a raw byte data type.  There should be multiple chunk
> column types:  chunk24, chunkM, and chunkXM for 1024 bytes, 1Meg, and
> 10Meg chunk column types.
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Writing in a blob

2013-04-26 Thread Paolo Bolzoni
Writing an FS with sqlite3 as the backend sounds like an interesting challenge.

But I would like to remind everyone that the question was about
writing an arbitrary-precision integer to the DB, given that
the library writes its representation to a FILE*.

In the end I wrote a little FILE* wrapper around blob_open
handles that supports only the modes "r" and "w", and applied it
to an existing blob or to a newly created zeroblob.
It seems to work fine.
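
A stripped-down sketch of that kind of wrapper, using the incremental blob API
with a tracked offset; the struct and the table/column names ("numbers",
"value") are placeholders, not the actual wrapper described above:

#include "sqlite3.h"

typedef struct {
  sqlite3_blob *blob;
  int offset;              /* current position, advanced by each write */
} BlobStream;

static int blob_stream_open(sqlite3 *db, sqlite3_int64 rowid,
                            int writable, BlobStream *bs){
  bs->offset = 0;
  return sqlite3_blob_open(db, "main", "numbers", "value",
                           rowid, writable, &bs->blob);
}

static int blob_stream_write(BlobStream *bs, const void *buf, int n){
  int rc = sqlite3_blob_write(bs->blob, buf, n, bs->offset);
  if( rc==SQLITE_OK ) bs->offset += n;   /* cannot grow past the blob's size */
  return rc;
}

static int blob_stream_close(BlobStream *bs){
  return sqlite3_blob_close(bs->blob);
}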

Paolo

On Fri, Apr 26, 2013 at 5:30 PM, Roland Hughes
 wrote:
> Speaking as an IT professional with 20+ years in the field, I would have
> to say adding any kind of "file system" support to SQLite would be a
> horrible thing.  Yes, I've used Oracle.  I've also used the only real
> product Oracle has, RDB on OpenVMS.  I've written books covering MySQL,
> PostgreSQL, and Object databases like POET.
>
> Lite is called lite for a reason.  Bare functionality with a lot of
> speed.
>
> The architects for this product need to take a lesson from the old DOS
> xBase systems.  Blobs should not be handled as one big unit.   They need
> to be given unique ID's and stored in fixed size chunks off in a hidden
> table much like a "memo" field was back in the day.  The "hidden" or
> child table supporting the blob column would have a key of ID +
> sequence.  The actual row should be ID, Sequence, BytesUsed, Chunk.
> The BytesUsed column allows you to keep exact byte sizes.  All Chunk data
> types should be a raw byte data type.  There should be multiple chunk
> column types:  chunk24, chunkM, and chunkXM for 1024 bytes, 1Meg, and
> 10Meg chunk column types.
>
>
> On Tue, 2013-04-23 at 09:50 +0200, Dominique Devienne wrote:
>
>> On Mon, Apr 22, 2013 at 2:10 PM, Simon Slavin  wrote:
>>
>> > On 22 Apr 2013, at 12:39pm, Paolo Bolzoni wrote:
>> > > But I noticed that sqlite3_blob_write cannot increase the size of the
>> > pointed
>> > > open blob. So I ask, there is a way to treat a blob as a stream so I can
>> > write
>> > > or read values in it with ease?
>> >
>> > Unfortunately the size (length) of the BLOB is very significant to the
>> > larger space-handling aspects of SQLite's file format.  Making a BLOB
>> > longer could force SQLite to move the data from page to page and do lots of
>> > other internal reorganisation.  So you can reserve extra space when you
>> > write the BLOB, and you can read whatever you want, but the documentation
>> > is accurate.
>> >
>>
>> I also really wish SQLite blobs would map directly to the usual FILE*
>> semantic, both in being able to grow a blob via writing (and truncate it
>> too), but also and more importantly not rewriting the whole row or blob
>> when modifying only a few bytes of the blob, but only affected pages.
>> Basically another level of indirection, where the row holds only a blob
>> locator (like in Oracle), and the blob value is in separate, not
>> necessarily contiguous pages, as described here for example:
>> http://jonathanlewis.wordpress.com/2013/03/22/lob-update/. That way only
>> modified blob pages would need to participate in the transaction. SQLite is
>> not MVCC like Oracle, but the ability to at least not overwrite the whole
>> blob when changing 1 byte would be great. (I'm assuming there isn't, but
>> I'm no SQLite expert). My $0.02. --DD
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
> --
> Roland Hughes, President
> Logikal Solutions
> (630)-205-1593
>
> http://www.theminimumyouneedtoknow.com
> http://www.infiniteexposure.net
>
> No U.S. troops have ever lost their lives defending our ethanol
> reserves.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Version 3.7.17 Preview

2013-04-26 Thread Richard Hipp
Please visit http://www.sqlite.org/draft/releaselog/3_7_17.html to see the
proposed enhancements and changes in SQLite version 3.7.17.  Your comments,
criticisms and suggestions are welcomed and encouraged.

Snapshot amalgamation builds are available at
http://www.sqlite.org/draft/download.html - please consider downloading a
snapshot and trying it out in your application(s) and letting us know if
you encounter any problems.

No proposed release date for SQLite version 3.7.17 has been established yet.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Writing in a blob

2013-04-26 Thread Roland Hughes
Speaking as an IT professional with 20+ years in the field, I would have
to say adding any kind of "file system" support to SQLite would be a
horrible thing.  Yes, I've used Oracle.  I've also used the only real
product Oracle has, RDB on OpenVMS.  I've written books covering MySQL,
PostgreSQL, and Object databases like POET.

Lite is called lite for a reason.  Bare functionality with a lot of
speed.

The architects for this product need to take a lesson from the old DOS
xBase systems.  Blobs should not be handled as one big unit.   They need
to be given unique ID's and stored in fixed size chunks off in a hidden
table much like a "memo" field was back in the day.  The "hidden" or
child table supporting the blob column would have a key of ID +
sequence.  The actual row should be ID, Sequence, BytesUsed, Chunk.
The BytesUsed column allows you to keep exact byte sizes.  All Chunk data
types should be a raw byte data type.  There should be multiple chunk
column types:  chunk24, chunkM, and chunkXM for 1024 bytes, 1Meg, and
10Meg chunk column types.
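
Nothing like this exists in SQLite itself, but as a sketch, an application can
approximate the proposed layout today with an ordinary side table keyed by
blob id plus sequence (names invented for illustration):

#include "sqlite3.h"

static int create_chunk_table(sqlite3 *db){
  return sqlite3_exec(db,
      "CREATE TABLE IF NOT EXISTS blob_chunk("
      "  blob_id    INTEGER NOT NULL,"   /* logical blob this chunk belongs to */
      "  seq        INTEGER NOT NULL,"   /* position of the chunk within the blob */
      "  bytes_used INTEGER NOT NULL,"   /* exact payload length in this chunk */
      "  chunk      BLOB NOT NULL,"      /* fixed-size raw storage */
      "  PRIMARY KEY(blob_id, seq)"
      ");", 0, 0, 0);
}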


On Tue, 2013-04-23 at 09:50 +0200, Dominique Devienne wrote:

> On Mon, Apr 22, 2013 at 2:10 PM, Simon Slavin  wrote:
> 
> > On 22 Apr 2013, at 12:39pm, Paolo Bolzoni wrote:
> > > But I noticed that sqlite3_blob_write cannot increase the size of the
> > pointed
> > > open blob. So I ask, there is a way to treat a blob as a stream so I can
> > write
> > > or read values in it with ease?
> >
> > Unfortunately the size (length) of the BLOB is very significant to the
> > larger space-handling aspects of SQLite's file format.  Making a BLOB
> > longer could force SQLite to move the data from page to page and do lots of
> > other internal reorganisation.  So you can reserve extra space when you
> > write the BLOB, and you can read whatever you want, but the documentation
> > is accurate.
> >
> 
> I also really wish SQLite blobs would map directly to the usual FILE*
> semantic, both in being able to grow a blob via writing (and truncate it
> too), but also and more importantly not rewriting the whole row or blob
> when modifying only a few bytes of the blob, but only affected pages.
> Basically another level of indirection, where the row holds only a blob
> locator (like in Oracle), and the blob value is in separate, not
> necessarily contiguous pages, as described here for example:
> http://jonathanlewis.wordpress.com/2013/03/22/lob-update/. That way only
> modified blob pages would need to participate in the transaction. SQLite is
> not MVCC like Oracle, but the ability to at least not overwrite the whole
> blob when changing 1 byte would be great. (I'm assuming there isn't, but
> I'm no SQLite expert). My $0.02. --DD
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


-- 
Roland Hughes, President
Logikal Solutions
(630)-205-1593

http://www.theminimumyouneedtoknow.com
http://www.infiniteexposure.net

No U.S. troops have ever lost their lives defending our ethanol
reserves.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Export Of BLOB Data

2013-04-26 Thread Clemens Ladisch
Nigel Verity wrote:
> I am on Linux, but need to export the data to somebody on Windows
> running MS Access 2010.
>
> Can anybody advise on how to handle the export of the BLOB data?

AFAIK Access does not have blob literals.  So the best way would be to
copy the SQLite file to the Windows PC, install an SQLite ODBC driver,
and link to the SQLite table from inside Access.


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Export Of BLOB Data

2013-04-26 Thread Nigel Verity



Hi

I have a table with a record definition broadly as follows:

ID (Integer Primary Key)
Field1 (Text)
Field2 (Text)
Field3 (BLOB)

The BLOB field contains document scans in JPG and PDF format, and some source 
documents in ODT and DOC formats.

I am on Linux, but need to export the data to somebody on Windows running MS 
Access 2010.

Can anybody advise on how to handle the export of the BLOB data? I'm assuming 
that CSV is a non-starter. My primary requirement is the use of Access as the 
target, but a generic approach for other targets would also be of great 
interest.

Thanks

Nige
  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sequential row numbers from query

2013-04-26 Thread Igor Tandetnik

On 4/26/2013 8:56 AM, hiteshambaliya wrote:

I want to get serial numbers for the selected records, from 1 to the number of rows.
Your solution is right, but when I want to sort by the 'Party Name' column,
the serial numbers still follow the autoid field instead of the sorted
party names, so I can't get sequential record numbers.

Please help me with what to do.


While you iterate over the rows with sqlite3_step (or the equivalent in 
your language of choice), simply keep incrementing a counter.
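
For example (a sketch; the table and column are the ones from the original
question in this thread):

#include <stdio.h>
#include "sqlite3.h"

static void print_numbered(sqlite3 *db){
  sqlite3_stmt *stmt;
  int rowno = 0;
  if( sqlite3_prepare_v2(db, "SELECT field FROM foo ORDER BY field DESC",
                         -1, &stmt, 0)!=SQLITE_OK ) return;
  while( sqlite3_step(stmt)==SQLITE_ROW ){
    /* the counter follows whatever ORDER BY the query uses */
    printf("%d  %s\n", ++rowno, (const char*)sqlite3_column_text(stmt, 0));
  }
  sqlite3_finalize(stmt);
}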

--
Igor Tandetnik

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sequential row numbers from query

2013-04-26 Thread hiteshambaliya
Hi,
I want to get serial numbers for the selected records, from 1 to the number of rows.
Your solution is right, but when I want to sort by the 'Party Name' column,
the serial numbers still follow the autoid field instead of the sorted
party names, so I can't get sequential record numbers.

Please help me with what to do.

 
Robert Citek-2 wrote
> How can I get a query to display sequential row number in a select
> statement?
> 
> I have a simple database similar to this:
> 
> $ sqlite3 db .dump
> BEGIN TRANSACTION;
> CREATE TABLE foo (field);
> INSERT INTO "foo" VALUES('a');
> INSERT INTO "foo" VALUES('b');
> INSERT INTO "foo" VALUES('c');
> COMMIT;
> 
> $ sqlite3 db 'select * from foo ; '
> a
> b
> c
> 
> I would like to do the equivalent of this in SQL:
> 
> $ sqlite3 db 'select * from foo order by field desc ; ' | cat -n
>  1  c
>  2  b
>  3  a
> 
> I have looked into rowid but it keeps the actual row id from the table
> and does not reorder the rowid based on the sort order.
> 
> Pointers to references appreciated.
> 
> Regards,
> - Robert
> ___
> sqlite-users mailing list

> sqlite-users@

> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users





--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/sequential-row-numbers-from-query-tp47370p68477.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Left join help.

2013-04-26 Thread Danilo Cicerone
Many thanks to you all, I really appraciate your helpfulness,
Danilo


2013/4/26 Clemens Ladisch 

> Hick Gunter wrote:
> > Actually "CROSS" just forces SQLite to use the tables in the order
> > specified.
>
> This is an SQLite optimization extension.  I used CROSS because this is
> the only explicit join type where standard SQL allows to omit the join
> expression.
>
> When using CROSS where standard SQL allows it (for an actual cartesian
> product), the table ordering does not really matter for optimization
> purposes because the DB has to do two nested full table scans anyway.
>
> > It is the LEFT join that creates the rows where there is no match on
> > the right hand side.
>
> It is the CROSS join that creates the rows where there is no appx
> record.  The LEFT join just does not filter out those.
>
>
> Regards,
> Clemens
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] reusing statements after they were interrupted

2013-04-26 Thread Richard Hipp
On Thu, Apr 25, 2013 at 5:55 PM, Jason Boehle  wrote:

> is it still safe to reuse this statement even though reset() returns
> an error?
>

Yes

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] reusing statements after they were interrupted

2013-04-26 Thread Jason Boehle
I know the docs on sqlite3_reset() say that it will return an error code if
the last call to step() on the statement returned an error.  My question
is...is it still safe to reuse this statement even though reset() returns
an error?

It appears that on iOS 6.1, if the last call to step() on the statement was
interrupted, I can still successfully reuse the statement even though
reset() returns an error.
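
The pattern in question, roughly (a sketch of the usage being asked about, not
a statement of what the library guarantees):

int rc = sqlite3_step(stmt);
if( rc!=SQLITE_ROW && rc!=SQLITE_DONE ){
  /* step() was interrupted (or otherwise failed); reset() reports the same
     error code, but the statement object itself remains usable. */
  sqlite3_reset(stmt);
  sqlite3_clear_bindings(stmt);   /* optional */
  /* ... re-bind parameters and step again later ... */
}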

Thanks!

Jason Boehle
jboe...@gmail.com
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Port SQLite to VxWorks 6.8

2013-04-26 Thread pmb
We (Peter and me) have been having problems with the following lines of code
from the sqlite3.c of the  SQLite 3.7.16.1.

*Problem 1: *
Undefined symbol "isDelete". Any idea what this variable does? Or where it
is defined?

static int fillInUnixFile( . . . ) 
{ 
. . . 
#if OS_VXWORKS 
  if( rc!=SQLITE_OK ){ 
if( h>=0 ) robust_close(pNew, h, __LINE__); 
h = -1; 
osUnlink(zFilename); 
isDelete = 0;
  } 
if( isDelete ) pNew->ctrlFlags |= UNIXFILE_DELETE; 
#endif 
. . . 
} 


*Problem 2:*
fchown is not defined in VxWorks, because it isn't a multi-user OS. Is it safe
to comment it out?

static int posixFchown(int fd, uid_t uid, gid_t gid){
  return geteuid() ? 0 : fchown(fd,uid,gid);
}
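
One possible approach, clearly an assumption rather than an official answer, is
to make the wrapper a no-op under OS_VXWORKS instead of commenting it out, so
ownership is simply never changed; whether that is acceptable depends on the
deployment:

static int posixFchown(int fd, uid_t uid, gid_t gid){
#if OS_VXWORKS
  /* No fchown() on this platform: never change ownership. */
  (void)fd; (void)uid; (void)gid;
  return 0;
#else
  return geteuid() ? 0 : fchown(fd,uid,gid);
#endif
}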



--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Port-SQLite-to-VxWorks-6-8-tp68430p68446.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Left join help.

2013-04-26 Thread Clemens Ladisch
Hick Gunter wrote:
> Actually "CROSS" just forces SQLite to use the tables in the order
> specified.

This is an SQLite optimization extension.  I used CROSS because this is
the only explicit join type where standard SQL allows to omit the join
expression.

When using CROSS where standard SQL allows it (for an actual cartesian
product), the table ordering does not really matter for optimization
purposes because the DB has to do two nested full table scans anyway.

> It is the LEFT join that creates the rows where there is no match on
> the right hand side.

It is the CROSS join that creates the rows where there is no appx
record.  The LEFT join just does not filter out those.


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Left join help.

2013-04-26 Thread Hick Gunter
Actually "CROSS" just forces SQLite to use the tables in the order specified. 
The result sets are identical, except for ordering and/or speed. It is the LEFT 
join that creates the rows where there is no match on the right hand side.

explain query plan select sub_descr,itx_descr,ifnull(app_load,0) from itx join 
subj left outer join appx on app_ref_itx=itx_id and app_ref_sub=sub_id;
sele  order  from  deta
----  -----  ----  ----
0 0  0 SCAN TABLE itx (~100 rows)
0 1  1 SCAN TABLE subj (~100 rows)
0 2  2 SEARCH TABLE appx USING INDEX 
sqlite_autoindex_appx_1 (app_ref_itx=? AND app_ref_sub=?) (~1 rows)
explain query plan SELECT sub_descr,
   ...>itx_descr,
   ...>IFNULL(app_load, 0)
   ...> FROM   subj
   ...> CROSS JOIN itx
   ...>  LEFT JOIN appx ON sub_id = app_ref_sub AND
   ...>itx_id = app_ref_itx
   ...> ;
sele  order  from  deta
----  -----  ----  ----
0 0  0 SCAN TABLE subj (~100 rows)
0 1  1 SCAN TABLE itx (~100 rows)
0 2  2 SEARCH TABLE appx USING INDEX 
sqlite_autoindex_appx_1 (app_ref_itx=? AND app_ref_sub=?) (~1 rows)

-Ursprüngliche Nachricht-
Von: Clemens Ladisch [mailto:clem...@ladisch.de]
Gesendet: Freitag, 26. April 2013 12:40
An: sqlite-users@sqlite.org
Betreff: Re: [sqlite] Left join help.

Danilo Cicerone wrote:
> table appx stores the item's (table itx) quantity load for each user (table
> subj). I'd like to know how many items each user has:
>
> Paul|Box|3
> Paul|Letter|0
> Paul|Pen|0
> John|Box|0
> John|Letter|4
> John|Pen|0
>
> I tried:
>
> select sub_descr, itx_descr, app_load from subj left outer join appx
> on sub_id = app_ref_sub left outer join itx on app_ref_itx = itx_id;
>
> but it returns only:
>
> Paul|Box|3
> John|Letter|4

Your first LEFT includes users that do not have any items.
Your second LEFT includes quantity loads for which no item exists.

What you actually want is all combinations of users and items.  This is a cross 
join:

SELECT sub_descr,
   itx_descr,
   IFNULL(app_load, 0)
FROM   subj
CROSS JOIN itx
 LEFT JOIN appx ON sub_id = app_ref_sub AND
   itx_id = app_ref_itx


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


--
 Gunter Hick
Software Engineer
Scientific Games International GmbH
Klitschgasse 2 – 4, A - 1130 Vienna, Austria
FN 157284 a, HG Wien
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This e-mail is confidential and may well also be legally privileged. If you 
have received it in error, you are on notice as to its status and accordingly 
please notify us immediately by reply e-mail and then delete this message from 
your system. Please do not copy it or use it for any purposes, or disclose its 
contents to any person as to do so could be a breach of confidence. Thank you 
for your cooperation.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Left join help.

2013-04-26 Thread Hick Gunter

http://www.sqlite.org/lang_select.html

(1) "If there is an ON clause specified, then the ON expression is evaluated 
for each row of the cartesian product as a boolean expression. All rows for 
which the expression evaluates to false are excluded from the dataset."


(2) " If the join-op is a "LEFT JOIN" or "LEFT OUTER JOIN", then after the ON 
or USING filtering clauses have been applied, an extra row is added to the 
output for each row in the original left-hand input dataset that corresponds to 
no rows at all in the composite dataset (if any). The added rows contain NULL 
values in the columns that would normally contain values copied from the 
right-hand input dataset."

(3) " When more than two tables are joined together as part of a FROM clause, 
the join operations are processed in order from left to right. In other words, 
the FROM clause (A join-op-1 B join-op-2 C) is computed as ((A join-op-1 B) 
join-op-2 C)."


([1,Paul], [2,John]) X ([1,1,3],[3,2,4])

=(1)=> ([1,Paul,1,1,3],[2,John,3,2,4])

(2) does not apply, as all LEFT rows have 1 corresponding result row

=(3)=>

([1,Paul,1,1,3],[2,John,3,2,4]) X ([1,Box],[2,Pen],[3,Letter])

=(1)=> ([1,Paul,1,1,3,1,Box],[2,John,3,2,4,3,Letter])

(2) does not apply, as all LEFT rows have 1 corresponding result row

From which you select [Paul,Box,3] and [John,Letter,4].



The only way you are going to get more than 2 rows out of the query is to first
join (no need for LEFT or OUTER) subj with itx; then you can go looking for the
appx row that matches the combination.

Solution 1 (subquery):

select sub_descr,itx_descr,ifnull((select app_load from appx where
app_ref_itx=itx_id and app_ref_sub=sub_id),0) load from itx join subj;

sub_descr  itx_descr  load
---------  ---------  ----
Paul       Box        3
John       Box        0
Paul       Pen        0
John       Pen        0
Paul       Letter     0
John       Letter     4



Solution 2 (join):

select sub_descr,itx_descr,ifnull(app_load,0) from itx join subj left outer
join appx on app_ref_itx=itx_id and app_ref_sub=sub_id;

sub_descr  itx_descr  ifnull(app_load,0)
---------  ---------  ------------------
Paul       Box        3
John       Box        0
Paul       Pen        0
John       Pen        0
Paul       Letter     0
John       Letter     4



Don't forget to add an ORDER BY clause if you want a specific order.



-Original Message-
From: Danilo Cicerone [mailto:cyds...@gmail.com]
Sent: Friday, April 26, 2013 11:40
To: SQLITE Forum
Subject: [sqlite] Left join help.



Hi to all,
I'm looking for a query on the following schema:

PRAGMA foreign_keys=ON;
BEGIN TRANSACTION;
CREATE TABLE subj
(
   sub_id INTEGER PRIMARY KEY, -- 00
   sub_descr TEXT DEFAULT NULL -- 01
);
INSERT INTO "subj" VALUES(1,'Paul');
INSERT INTO "subj" VALUES(2,'John');
CREATE TABLE itx
(
   itx_id INTEGER PRIMARY KEY, -- 00
   itx_descr TEXT DEFAULT NULL -- 01
);
INSERT INTO "itx" VALUES(1,'Box');
INSERT INTO "itx" VALUES(2,'Pen');
INSERT INTO "itx" VALUES(3,'Letter');
CREATE TABLE appx
(
   app_ref_itx INTEGER DEFAULT NULL -- 00
  REFERENCES itx(itx_id) ON UPDATE CASCADE ON DELETE RESTRICT,
   app_ref_sub INTEGER DEFAULT NULL -- 01
  REFERENCES subj(sub_id) ON UPDATE CASCADE ON DELETE RESTRICT,
   app_load INTEGER NOT NULL DEFAULT 0, -- 02
   UNIQUE(app_ref_itx, app_ref_sub)
);
INSERT INTO "appx" VALUES(1,1,3);
INSERT INTO "appx" VALUES(3,2,4);
COMMIT;

table appx stores each item's (table itx) quantity load for each user (table
subj). I'd like to know how many items each user has:

Paul|Box|3
Paul|Letter|0
Paul|Pen|0
John|Box|0
John|Letter|4
John|Pen|0

and the above is the result I am aiming for!

I tried:

select sub_descr, itx_descr, app_load from subj
left outer join appx on sub_id = app_ref_sub
left outer join itx on app_ref_itx = itx_id;

but it returns only:

Paul|Box|3
John|Letter|4

Thanks in advance, Danilo
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



Gunter Hick
Software Engineer
Scientific Games International GmbH
Klitschgasse 2 - 4, A - 1130 Vienna, Austria
FN 157284 a, HG Wien
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Left join help.

2013-04-26 Thread Clemens Ladisch
Danilo Cicerone wrote:
> table appx stores each item's (table itx) quantity load for each user (table
> subj). I'd like to know how many items each user has:
>
> Paul|Box|3
> Paul|Letter|0
> Paul|Pen|0
> John|Box|0
> John|Letter|4
> John|Pen|0
>
> I tried:
>
> select sub_descr, itx_descr, app_load from subj
> left outer join appx on sub_id = app_ref_sub
> left outer join itx on app_ref_itx = itx_id;
>
> but it returns only:
>
> Paul|Box|3
> John|Letter|4

Your first LEFT includes users that do not have any items.
Your second LEFT includes quantity loads for which no item exists.

What you actually want is all combinations of users and items.  This
is a cross join:

SELECT sub_descr,
   itx_descr,
   IFNULL(app_load, 0)
FROM   subj
CROSS JOIN itx
 LEFT JOIN appx ON sub_id = app_ref_sub AND
   itx_id = app_ref_itx


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] [SQLite.NET] DbProviderServices DB* methods implementation

2013-04-26 Thread Matthijs ter Woord
It's great to see SQLite have a decent Entity Framework provider. One area
that's missing is the DB* methods of DbProviderServices (used to check for
and create/update databases).
I want to implement that (I almost have it working already).

What's the best road to follow to get that implementation contributed?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Left join help.

2013-04-26 Thread Danilo Cicerone
Hi to all,
I'm looking for a query on the following schema:

PRAGMA foreign_keys=ON;
BEGIN TRANSACTION;
CREATE TABLE subj
(
   sub_id INTEGER PRIMARY KEY, -- 00
   sub_descr TEXT DEFAULT NULL -- 01
);
INSERT INTO "subj" VALUES(1,'Paul');
INSERT INTO "subj" VALUES(2,'John');
CREATE TABLE itx
(
   itx_id INTEGER PRIMARY KEY, -- 00
   itx_descr TEXT DEFAULT NULL -- 01
);
INSERT INTO "itx" VALUES(1,'Box');
INSERT INTO "itx" VALUES(2,'Pen');
INSERT INTO "itx" VALUES(3,'Letter');
CREATE TABLE appx
(
   app_ref_itx INTEGER DEFAULT NULL -- 00
  REFERENCES itx(itx_id) ON UPDATE CASCADE ON DELETE RESTRICT,
   app_ref_sub INTEGER DEFAULT NULL -- 01
  REFERENCES subj(sub_id) ON UPDATE CASCADE ON DELETE RESTRICT,
   app_load INTEGER NOT NULL DEFAULT 0, -- 02
   UNIQUE(app_ref_itx, app_ref_sub)
);
INSERT INTO "appx" VALUES(1,1,3);
INSERT INTO "appx" VALUES(3,2,4);
COMMIT;

table appx stores each item's (table itx) quantity load for each user (table
subj). I'd like to know how many items each user has:

Paul|Box|3
Paul|Letter|0
Paul|Pen|0
John|Box|0
John|Letter|4
John|Pen|0

and the above is the result I am aiming for!

I tried:

select sub_descr, itx_descr, app_load from subj
left outer join appx on sub_id = app_ref_sub
left outer join itx on app_ref_itx = itx_id;

but it returns only:

Paul|Box|3
John|Letter|4

Thanks in advance, Danilo
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Order of columns in group by statement affects query performance

2013-04-26 Thread Hick Gunter
Do you have any experience with SQLite virtual tables? I guess not.

There are 2 issues here:

1) The abstract problem of choosing an index for optimizing GROUP BY

2) The SQLite implementation (which I was referring to)

Ad 1)

Any index that covers all the GROUP BY fields is a "good" index because it 
allows aggregates to be computed "on the fly" as opposed to in a temporary 
table.

An ORDER BY clause and multiple indices may complicate the matter.
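
To make the "on the fly" point concrete, here is a toy sketch in C (an
illustration only, not SQLite's internal code, and with made-up sample rows):
when the input already arrives ordered by the grouping key, one running
accumulator per aggregate suffices, and each group can be emitted as soon as
the key changes.

/* Toy illustration: SUM() grouped by a text key, assuming the rows arrive
** already sorted by that key (as a covering index scan would deliver them).
** No temporary table of per-group totals is needed. */
#include <stdio.h>
#include <string.h>

struct Row { const char *zKey; int value; };

static void sumByKey(const struct Row *aRow, int nRow){
  int i, sum = 0;
  for(i=0; i<nRow; i++){
    sum += aRow[i].value;
    /* Emit the group as soon as the next row has a different key. */
    if( i==nRow-1 || strcmp(aRow[i].zKey, aRow[i+1].zKey)!=0 ){
      printf("%s|%d\n", aRow[i].zKey, sum);
      sum = 0;
    }
  }
}

int main(void){
  /* Made-up sample rows, already ordered by key. */
  static const struct Row aRow[] = { {"John", 4}, {"Paul", 3}, {"Paul", 2} };
  sumByKey(aRow, 3);
  return 0;
}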


Ad 2)

SQLite attempts to handle virtual tables the same as native tables (which is 
one of the main reasons we chose SQLite). The VT interface does not allow 
publication of indexes nor creation of native indexes on virtual tables. The 
aOrderBy table of the interface implies an ordered list of fields, therefore 
SQLite would have to call xBestIndex n! times to discover the least costly 
index to use.

It is not unreasonable to assume that in a well designed SQL Statement the 
GROUP BY clause will be backed up by the necessary index and an identical ORDER 
BY clause (at least unintentionally, by virtue of the programmer's laziness
resulting in a copy-paste of the field list).

Thus the aOrderBy array is used for both ORDER BY and GROUP BY in the VT
interface.



http://www.sqlite.org/vtab.html

2.3 The xBestIndex Method
SQLite uses the xBestIndex method of a virtual table module to determine the 
best way to access the virtual table. The xBestIndex method has a prototype 
like this:

  int (*xBestIndex)(sqlite3_vtab *pVTab, sqlite3_index_info*);

...

Before calling this method, the SQLite core initializes an instance of the 
sqlite3_index_info structure with information about the query that it is 
currently trying to process. This information derives mainly from the WHERE 
clause and **ORDER BY or GROUP BY** clauses of the query, but also from any ON 
or USING clauses if the query is a join.
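
To illustrate, a minimal sketch of an xBestIndex that offers to consume such
an ordering (illustration only, not from the SQLite sources; the single
ascending column 0 is just an assumed "natural order" of the virtual table):

#include "sqlite3.h"

/* Sketch: tell SQLite we can deliver rows in the requested order (which,
** per the documentation quoted above, may come from an ORDER BY or a
** GROUP BY clause) whenever that order is "ascending on column 0". */
static int exampleBestIndex(sqlite3_vtab *pVTab, sqlite3_index_info *pInfo){
  (void)pVTab;                       /* no table-specific state needed here */
  if( pInfo->nOrderBy==1
   && pInfo->aOrderBy[0].iColumn==0
   && pInfo->aOrderBy[0].desc==0 ){
    pInfo->orderByConsumed = 1;      /* no sorter / ephemeral table needed */
    pInfo->estimatedCost = 1000.0;
  }else{
    pInfo->orderByConsumed = 0;      /* SQLite will sort or group itself */
    pInfo->estimatedCost = 100000.0;
  }
  pInfo->idxNum = 0;                 /* passed through to xFilter unchanged */
  return SQLITE_OK;
}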


-Original Message-
From: James K. Lowden [mailto:jklow...@schemamania.org]
Sent: Thursday, April 25, 2013 16:34
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Order of columns in group by statement affects query
performance

On Thu, 25 Apr 2013 10:29:34 +0200
Hick Gunter  wrote:

> AFAIK SQLite treats GROUP BY the same way as ORDER BY (taken from
> hints in the virtual table description).

That might be so, in some limited sense.  It's obviously false in general 
because they mean different things and have different effects.

> If you have an index that covers the GROUP BY clause in field order,
> then aggregate functions need store only the current value; if not,
> then you need an ephemeral table to hold the aggregate values.

Nonsense.  The query parser sees GROUP BY A,B.  The optimizer sees an index 
ordered B,A.  By permuting the order of the columns in the GROUP BY clause, it 
finds a match for the index and uses it.

Yes, the problem is O(n^2), where n is the number of columns in the GROUP BY, 
but n is always small; even 7 columns could be checked in less than 50 
iterations.
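
For instance, the check is just a set-membership test over the column lists;
a toy sketch of that O(n^2) comparison (not SQLite's actual code, with column
numbers standing in for column names):

#include <stdio.h>

/* Does the set of GROUP BY columns, in any order, match the leading
** nGroupBy columns of an index? */
static int groupByMatchesIndexPrefix(const int *aGroupBy, int nGroupBy,
                                     const int *aIdxCol, int nIdxCol){
  int i, j;
  if( nGroupBy>nIdxCol ) return 0;
  for(i=0; i<nGroupBy; i++){
    int found = 0;
    for(j=0; j<nGroupBy; j++){       /* look only at the index prefix */
      if( aGroupBy[i]==aIdxCol[j] ){ found = 1; break; }
    }
    if( !found ) return 0;           /* a GROUP BY column is not in the prefix */
  }
  return 1;
}

int main(void){
  int aGroupBy[] = {0, 1};           /* GROUP BY A,B */
  int aIdxCol[]  = {1, 0};           /* index ordered B,A */
  printf("match=%d\n", groupByMatchesIndexPrefix(aGroupBy, 2, aIdxCol, 2));
  return 0;
}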

--jkl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


--
 Gunter Hick
Software Engineer
Scientific Games International GmbH
Klitschgasse 2 – 4, A - 1130 Vienna, Austria
FN 157284 a, HG Wien
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Programming API vs console

2013-04-26 Thread Igor Korot
Hi, ALL,

On Thu, Apr 25, 2013 at 2:41 PM, Random Coder wrote:

> On Tue, Apr 23, 2013 at 8:47 PM, Igor Korot  wrote:
>
> > query = wxString::Format(...);
> > if( ( result = sqlite3_prepare_v2( m_handle, query, -1, , 0 ) ) !=
> > SQLITE_OK )
> >
>
> It's been a while since I've worked with wxWidgets, but when I did,
> wxString didn't support an implicit conversion like you're using here.
>
> You need to do something like this for your sqlite_prepare_v2 call:
>
> sqlite3_prepare_v2(m_handle, (const char*)query.mb_str(wxConvUTF8), -1,
> , 0);
>
> Though, I suppose if I'm right, this should have failed in some other way
> much sooner.
>

Here is the table description:

sqlite> .dump playersdrafted
PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE playersdrafted(playerid integer, id ineteger, ownerid integer,
draftprice double, draftorder integer, draftposition char(2), foreign
key(playerid) references players(playerid),foreign key(id) references
leagues(id), foreign key (ownerid) references owners(ownerid));
CREATE INDEX id_playerid ON playersdrafted(playerid,id);
COMMIT;

All primary keys exist by the time the INSERT INTO... happens.

I even tried to do it with sqlite3_exec(), but it is still failing.
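
Printing the exact error text usually names the failing constraint or column;
a minimal sketch of that check (a hypothetical helper with placeholder values,
assuming m_handle is the already-open connection from the snippet above):

#include <stdio.h>
#include "sqlite3.h"

/* Run one INSERT through sqlite3_exec() and report SQLite's own error text. */
static int tryInsert(sqlite3 *m_handle){
  char *zErr = NULL;
  int rc = sqlite3_exec(m_handle,
      "INSERT INTO playersdrafted VALUES (1, 1, 1, 0.0, 1, 'QB')",  /* placeholder values */
      NULL, NULL, &zErr);
  if( rc!=SQLITE_OK ){
    fprintf(stderr, "insert failed: rc=%d msg=%s\n",
            rc, zErr ? zErr : sqlite3_errmsg(m_handle));
    sqlite3_free(zErr);
  }
  return rc;
}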

Any idea?

Thank you.

> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users