Re: [sqlite] Bugreport - slowdown in sqlite after the ANALYZE statement

2014-12-09 Thread Eduardo Morras
On Tue, 9 Dec 2014 10:38:34 -0500
"James K. Lowden"  wrote:

> On Tue, 09 Dec 2014 12:06:20 +0100
> Jan Staněk wrote:
> 
> >  INSERT INTO CoreCache (ModelID, ItemID)
> >  SELECT
> ... 
> >  ORDER BY Year
> 
> Why ORDER BY on INSERT?  Does it work better?  I would expect the
> unnecessary sort to be pure overhead.  

If you insert in correct index order, the index update phase is faster because 
it doesn't need to rebalance the b-tree as often after each insert.
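A quick way to see the effect, sketched with Python's sqlite3 module (the table, index, and row count are hypothetical; actual timings depend on data volume and the page cache):

```python
import random
import sqlite3
import time

def timed_insert(rows):
    """Insert rows into an indexed table and return elapsed seconds."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE t (v INTEGER)")
    db.execute("CREATE INDEX t_v ON t (v)")
    start = time.perf_counter()
    with db:  # one transaction for the whole batch
        db.executemany("INSERT INTO t (v) VALUES (?)", ((v,) for v in rows))
    return time.perf_counter() - start

data = list(range(100_000))
ordered = timed_insert(data)      # values arrive in index order
random.shuffle(data)
shuffled = timed_insert(data)     # values arrive in random order
print(f"ordered {ordered:.3f}s, shuffled {shuffled:.3f}s")
```

On most runs the ordered insert is measurably faster, for the reason given above: appends to the right edge of the index b-tree touch far fewer pages than scattered insertions.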


---   ---
Eduardo Morras 
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-09 Thread Dan Kennedy

On 12/10/2014 05:06 AM, Simon Slavin wrote:

On 9 Dec 2014, at 8:57pm, Nick  wrote:


Environment is Linux with multiple (c. 4-6) processes accessing a single sqlite database 
named "test.db".

Backup:
- New process started using cronjob to initiate application checkpoint until 
completion.
- rsync diff the file "test.db" to another drive/location (specifically ignoring the 
"-shm" and "-wal" file).
- exit process

Restore:
- rsync the file "test.db" from another drive/location.

Will not be trustworthy if the database is being written to during the rsync 
operations.




Strictly speaking the database file may not be well-formed even if there 
is no ongoing checkpoint. If:


  a) process A opens a read transaction,
  b) process B opens and commits a write transaction to the database,
  c) process C checkpoints the db,

then the db file considered without the *-wal file may be corrupt. The 
problem comes about because process C can only checkpoint frames up 
until the start of B's transaction. And there is an optimization that 
will prevent it from copying any earlier frames for which there exists a 
frame in B's transaction that corresponds to the same database page. So 
it effectively copies only a subset of the modifications made by earlier 
transactions into the db file - not necessarily creating a valid db file.


Dan.


A) Ensure all processes besides the backup process have the database closed 
while it is being copied. Establish some kind of semaphore so they can tell 
when it's safe to open the database again.

B) Use the SQLite Backup API which was invented to do what you want.





Simon.


Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-09 Thread Simon Slavin

On 10 Dec 2014, at 12:30am, Nick  wrote:

> That's interesting, Simon; I didn't expect the database not to be trustworthy.

The database will be trustworthy at any instant.  Your copy of it will be 
corrupt because the file will be changing while you are copying it.

> In WAL mode I thought the database file is only written to when 
> checkpointing. Have I misunderstood this journaling mode?

How do you intend to prevent your other processes from checkpointing while you 
take the backup ?  You can disable checkpointing for your own connection to the 
database but not for the connections other processes have.

> Again I may have misunderstood the docs around the Backup API, does it not 
> start again from the beginning copying pages if another process writes to the 
> database during the process? In practice could it successfully backup a 2GB 
> database that is being written to once a second?

Not if the writing never stopped.  But there's no way to take a copy of a file 
which is constantly being rewritten.  rsync can't do it either.  How can 
anything copy a file which is constantly being modified ?

You can BEGIN EXCLUSIVE and then END once your backup is finished.  That should 
prevent other processes writing to the file.  You will have to deal with what 
happens if your BEGIN EXCLUSIVE times out, and you will have to put long 
timeouts in your other processes so they can handle the file being locked long 
enough for the entire copy to be taken.  That's the only way I can think of to 
do it.  And yes, it will prevent writing to the database while it's being 
copied.
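A minimal sketch of that approach with Python's sqlite3 module, assuming a rollback-journal database (in WAL mode you would also have to checkpoint or copy the -wal file while holding the lock); the paths and timeout are hypothetical:

```python
import shutil
import sqlite3

def exclusive_copy(db_path, dest_path, timeout_s=60):
    """Copy the database file while holding an exclusive (write) lock.

    Raises sqlite3.OperationalError ("database is locked") if the lock
    cannot be obtained within timeout_s -- the case Simon warns about.
    """
    db = sqlite3.connect(db_path, timeout=timeout_s, isolation_level=None)
    try:
        db.execute("BEGIN EXCLUSIVE")    # blocks all other writers
        shutil.copyfile(db_path, dest_path)
        db.execute("COMMIT")             # let writers proceed again
    finally:
        db.close()
```

The other processes need generous busy timeouts of their own so their writes wait out the copy instead of failing.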

On the other hand, there's a different way to clone a database: log the changes.

When something issues an INSERT/DELETE/UPDATE command, execute the command but 
also append a copy of that command to a text file somewhere.  When you want to 
bring your backup copy up-to-date, take a copy of the log file, then execute 
all the commands in it to your out-of-date copy.

You need a method of zeroing out the log file, or knowing where you got to on 
your last backup.
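A toy sketch of the log-and-replay idea in Python (the class and file names are hypothetical, and a real implementation would need to serialize bound parameter values safely rather than with repr):

```python
import sqlite3

class LoggedDB:
    """Execute write statements and append them to a replay log."""
    def __init__(self, db_path, log_path):
        self.db = sqlite3.connect(db_path, isolation_level=None)
        self.log_path = log_path

    def write(self, sql, params=()):
        self.db.execute(sql, params)
        # Crudely expand parameters for the log; fine for numbers.
        for p in params:
            sql = sql.replace("?", repr(p), 1)
        with open(self.log_path, "a") as log:
            log.write(sql + ";\n")

def replay(log_path, replica_path):
    """Bring an out-of-date copy up to date by re-running the log."""
    replica = sqlite3.connect(replica_path)
    with open(log_path) as log, replica:
        replica.executescript(log.read())
```

As Simon notes, you still need a way to truncate the log (or remember the replay offset) after each successful catch-up.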

Simon.


Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-09 Thread Nick

On 9 Dec 2014, at 22:06, Simon Slavin wrote:

> 
> On 9 Dec 2014, at 8:57pm, Nick  wrote:
> 
>> Environment is Linux with multiple (c. 4-6) processes accessing a single 
>> sqlite database named "test.db".
>> 
>> Backup:
>> - New process started using cronjob to initiate application checkpoint until 
>> completion.
>> - rsync diff the file "test.db" to another drive/location (specifically 
>> ignoring the "-shm" and "-wal" file).
>> - exit process
>> 
>> Restore:
>> - rsync the file "test.db" from another drive/location.
> 
> Will not be trustworthy if the database is being written to during the rsync 
> operations.  Recommend either of the following:
> 
> A) Ensure all processes besides the backup process have the database closed 
> while it is being copied. Establish some kind of semaphore so they can tell 
> when it's safe to open the database again.
> 
> B) Use the SQLite Backup API which was invented to do what you want.
> 
> Simon.

That's interesting, Simon; I didn't expect the database not to be trustworthy. In 
WAL mode I thought the database file is only written to when checkpointing. 
Have I misunderstood this journaling mode?

Again I may have misunderstood the docs around the Backup API, does it not 
start again from the beginning copying pages if another process writes to the 
database during the process? In practice could it successfully backup a 2GB 
database that is being written to once a second?

Regards
Nick


Re: [sqlite] seeking advice

2014-12-09 Thread RSmith


On 2014/12/09 22:41, Rene Zaumseil wrote:

Hi there,

I have to store and retrieve up to 2000 parameters.
The parameters can have real and integer values.
The max. change rate is 100ms and the max. duration is up to some hours.

The simple solution would be to use plain binary files. It's fast but not
flexible.

So I came to sqlite. Before starting I would like to ask if someone could give
me some advice which way to go. Here are my current versions.

Version 1: One big table with time stamp and one column for each parameter
   - I can have up to SQLITE_MAX_COLUMNS=2000 but how can I insert all values
with only SQLITE_MAX_VARIABLE_NUMBER=999?
   - What about space consumption if NULL values are given?

Version 2: One table for each parameter with time stamp and value
   - Does this scale for up to 2000 parameters?
   - Can sqlite handle so many tables?

Version 3: One table with time stamp, parameter id and parameter value
   - Is it working when all values change?
   - Is retrieving values for one parameter fast?


Definitely No. 3 always. SQLite will retrieve a value from among 2000 items on an indexed column in under 1ms (as will any other 
localized DB). Even I/O lag will be masked by the cache at that size.


Assuming the parameter names do not change all the time and the time-stamp column is not indexed, writing will be similarly fast 
using a suitable journal mode. On this point, you might simply use an in-memory DB, since you do not care about data loss on power 
failure (use ":memory:" as the file-name), which will be even faster; but with an on-disk DB a snapshot of the current parameter 
set will remain if your application fails - which might assist debugging.


Also, when making that DB, be sure to use "WITHOUT ROWID" and specify the parameter-name as the primary key. Use NUMERIC affinity 
for the time-stamp and value columns.
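A sketch of such a table with Python's sqlite3 module (the table and parameter names are hypothetical):

```python
import sqlite3

db = sqlite3.connect(":memory:")  # or an on-disk file, per the trade-off above
db.execute("""
    CREATE TABLE params (
        name  TEXT PRIMARY KEY,   -- parameter name as the key
        ts    NUMERIC,            -- time stamp
        value NUMERIC             -- NUMERIC affinity keeps integers and reals
    ) WITHOUT ROWID
""")
with db:
    db.execute("INSERT INTO params VALUES ('pressure', 1.0, 1013)")
    db.execute("UPDATE params SET ts = ?, value = ? WHERE name = ?",
               (1.1, 1013.5, 'pressure'))
row = db.execute("SELECT value FROM params WHERE name = 'pressure'").fetchone()
print(row)  # (1013.5,)
```

WITHOUT ROWID stores the rows in the primary-key b-tree itself, so the name lookup needs no separate index.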


Cheers,
Ryan






[sqlite] Docs: typos in SQLite Result Codes

2014-12-09 Thread Zsbán Ambrus
There are some typos on the documentation page for SQLite Result codes
"http://sqlite.org/rescode.html".

Under the heading "(1038) SQLITE_CANTOPEN_CONVPATH", the paragraph
talks about the error code "SQLITE_CANTOPEN_SEEK".  That constant does
not exist.  The text probably means "SQLITE_CANTOPEN_CONVPATH"
instead.

Similarly, under the heading "(6410) SQLITE_IOERR_GETTEMPPATH" and
"() SQLITE_IOERR_CONVPATH", the text mentions "SQLITE_IOERR_SEEK"
which is an existing error code, but is probably not the one meant.

The meanings of several error codes are also not documented on this
page, such as those of SQLITE_ABORT, SQLITE_CANTOPEN, SQLITE_EMPTY,
SQLITE_FORMAT, SQLITE_IOERR, SQLITE_MISMATCH,  SQLITE_IOERR_READ,
SQLITE_IOERR_WRITE, etc.  This may not be a bug, because the page
claims only that the document "strives to" explain each error code.
The sqlite3.h header file has short comments next to each primary
error code at least.

-- Ambrus


Re: [sqlite] Online/Hot backup of WAL journalling mode database

2014-12-09 Thread Simon Slavin

On 9 Dec 2014, at 8:57pm, Nick  wrote:

> Environment is Linux with multiple (c. 4-6) processes accessing a single 
> sqlite database named "test.db".
> 
> Backup:
> - New process started using cronjob to initiate application checkpoint until 
> completion.
> - rsync diff the file "test.db" to another drive/location (specifically 
> ignoring the "-shm" and "-wal" file).
> - exit process
> 
> Restore:
> - rsync the file "test.db" from another drive/location.

Will not be trustworthy if the database is being written to during the rsync 
operations.  Recommend either of the following:

A) Ensure all processes besides the backup process have the database closed 
while it is being copied. Establish some kind of semaphore so they can tell 
when it's safe to open the database again.

B) Use the SQLite Backup API which was invented to do what you want.

Simon.


[sqlite] Online/Hot backup of WAL journalling mode database

2014-12-09 Thread Nick
Hi,

I'd like to check my understanding of Sqlite in WAL journalling mode. With 
automatic checkpointing turned off, would the following pseudo-code result in an 
online backup approach that allows robust restore of the database with data 
fresh up to the last checkpoint?

Environment is Linux with multiple (c. 4-6) processes accessing a single sqlite 
database named "test.db".

Backup:
- New process started using cronjob to initiate application checkpoint until 
completion.
- rsync diff the file "test.db" to another drive/location (specifically 
ignoring the "-shm" and "-wal" file).
- exit process

Restore:
- rsync the file "test.db" from another drive/location.

Regards
Nick





Re: [sqlite] seeking advice

2014-12-09 Thread Simon Slavin

On 9 Dec 2014, at 8:41pm, Rene Zaumseil  wrote:

> Version 3: One table with time stamp, parameter id and parameter value
>  - Is it working when all values change?
>  - Is retrieving values for one parameter fast?

That one.  Versions 1 & 2 will both, technically, work, but they're abuse of 
how SQL should be used and will result in horrible code.

The speed for retrieving all parameters will be bound by your programming 
language.  SQLite will do its side of the job very quickly.  And since columns 
have just affinity and not type, having some values INTEGER and other REAL will 
work fine.
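For example, with Python's sqlite3 module (hypothetical table), mixed INTEGER and REAL values stored in one NUMERIC column each keep their own storage class:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE p (id INTEGER, value NUMERIC)")  # hypothetical table
with db:
    db.executemany("INSERT INTO p VALUES (?, ?)", [(1, 42), (2, 3.14)])
# typeof() reports the storage class actually used for each row
types = db.execute("SELECT typeof(value) FROM p ORDER BY id").fetchall()
print(types)  # [('integer',), ('real',)]
```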

> I will write and read the data at the same time. But writing should have 
> priority.

Multithread ?  Multiprocess ?  Neither are needed, but those are the things you 
need to decide on next.

Also, do not forget to pick a journal mode and to set a busy_timeout.  Ah, I 
see you already mentioned journal_mode.  Good.

Simon.


[sqlite] seeking advice

2014-12-09 Thread Rene Zaumseil
Hi there,

I have to store and retrieve up to 2000 parameters.
The parameters can have real and integer values.
The max. change rate is 100ms and the max. duration is up to some hours.

The simple solution would be to use plain binary files. It's fast but not 
flexible.

So I came to sqlite. Before starting I would like to ask if someone could give 
me some advice which way to go. Here are my current versions.

Version 1: One big table with time stamp and one column for each parameter
  - I can have up to SQLITE_MAX_COLUMNS=2000 but how can I insert all values 
with only SQLITE_MAX_VARIABLE_NUMBER=999?
  - What about space consumption if NULL values are given?

Version 2: One table for each parameter with time stamp and value
  - Does this scale for up to 2000 parameters?
  - Can sqlite handle so many tables?

Version 3: One table with time stamp, parameter id and parameter value
  - Is it working when all values change?
  - Is retrieving values for one parameter fast?

I will write and read the data at the same time, but writing should have 
priority. I could live with a potential loss of data because of a power 
failure, so I will try the settings of the "synchronous" and 
"journal_mode" options. Anything else I should consider?
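For reference, those two settings as a sketch with Python's sqlite3 module (the file name is hypothetical):

```python
import sqlite3

db = sqlite3.connect("params.db")  # hypothetical file name
# WAL lets a reader run alongside the writer; synchronous=NORMAL trades a
# little durability on power failure (acceptable here) for faster commits.
mode = db.execute("PRAGMA journal_mode=WAL").fetchone()[0]
db.execute("PRAGMA synchronous=NORMAL")
print(mode)  # wal
```

journal_mode is persistent (stored in the file); synchronous must be set on every new connection.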


Thank you
rene


Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-09 Thread J T
I just thought of what may be a simpler solution.

I'm assuming that there is a certain limit to the length of the books 
(positions can be safely assumed to never exceed say, 100,000)

So what can be done is

update page set position=position + 10 where position>='3';
insert into page(book,position) values('1','3');
update page set position=position - 9 where position>10;

This will work around the unique constraint and seems simpler than dropping it 
every time you want to insert a page.
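The same trick, sketched with Python's sqlite3 module against a hypothetical page table (integer positions rather than the quoted literals above; subtracting 9 instead of 10 leaves the shifted pages one position later, making room for the new page 3):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE page (book INT, position INT, UNIQUE (book, position))")
db.executemany("INSERT INTO page VALUES (1, ?)", [(1,), (2,), (3,), (4,)])

with db:  # shift far past any real position, insert, then close the gap
    db.execute("UPDATE page SET position = position + 10 WHERE position >= 3")
    db.execute("INSERT INTO page (book, position) VALUES (1, 3)")
    db.execute("UPDATE page SET position = position - 9 WHERE position > 10")

positions = [p for (p,) in db.execute(
    "SELECT position FROM page ORDER BY position")]
print(positions)  # [1, 2, 3, 4, 5]
```

The +10 offset works because no intermediate row ever collides with an existing one, so the immediate uniqueness check passes at every step.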

-Original Message-
From: Gwendal Roué 
To: General Discussion of SQLite Database 
Sent: Mon, Dec 8, 2014 12:07 pm
Subject: Re: [sqlite] sqlite bugreport : unique index causes valid updates to 
fail



> On 8 Dec 2014, at 17:21, Simon Slavin wrote:
> 
> >> Why not an opt-in way to ask for deferred constraint checking? The key here 
> >> is only to allow perfectly legit requests to run. With all due respect to 
> >> the sqlite implementors and the wonderful design of sqlite.
> 
> SQL-99 includes a syntax for deferred checking.  We don't need to invent our 
own syntax with a PRAGMA. However, it is done when the constraint is defined 
rather than being something one can turn on or off.  So you would need to think 
out whether you wanted row- or transaction-based checking when you define each 
constraint in the first place.

Hi Simon,

This topic is fascinating. Googling for SQL-99 deferred checking, I stumbled 
upon this page, which shows how deferred index maintenance affects Oracle query 
plans and performance: 
https://alexanderanokhin.wordpress.com/deferred-index-maintenance/.

I now understand that the strategy for checking index constraints is tied to 
their maintenance.

The `UPDATE pages SET position = position + 1 WHERE book_id = 0 AND position >= 
1` query we are talking about indeed has to perform both. Such an 
innocuous-looking request, and it sends us right into the very guts of 
relational constraints :-)

Gwendal



Re: [sqlite] Artificially slow VACUUM by injecting a sleep() somewhere?

2014-12-09 Thread Hadley Wickham
> Also, SSD drives wear out fast.  We don't have good figures yet for 
> mass-produced drives (manufacturers introduce new models faster than the old 
> ones wear out, so it's hard to gather stats) but typical figures show a drive 
> failing in from 2,000 to 3,000 write cycles of each single block.  Your drive 
> does something called 'wear levelling' and it has a certain number of blocks 
> spare and will automatically swap them in when the first blocks fail, but 
> after that your drive is smoke.  And VACUUM /thrashes/ a drive, doing huge 
> amounts of reading and writing as it rebuilds tables and indexes.  You don't 
> want to do something like that on an SSD without a good reason.

The SSD endurance experiment suggests that you might not need to worry
too much about it:
http://techreport.com/review/27436/the-ssd-endurance-experiment-two-freaking-petabytes

Hadley

-- 
http://had.co.nz/


Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-09 Thread J T
I just thought of what may be a simpler solution.

I'm assuming that there is a certain limit to the length of the books 
(positions can be safely assumed to never exceed say, 100,000)

So what can be done is

update page set position=position + 10 where position>='3';
insert into page(book,position) values('1','3');
update page set position=position - 9 where position>10;

This will work around the unique constraint and seems simpler than dropping it 
every time you want to insert a page.


-Original Message-
From: James K. Lowden 
To: sqlite-users 
Sent: Tue, Dec 9, 2014 10:38 am
Subject: Re: [sqlite] sqlite bugreport : unique index causes valid updates to 
fail


On Mon, 08 Dec 2014 22:01:15 +0700
Dan Kennedy  wrote:

> On 12/08/2014 09:55 PM, Nico Williams wrote:
> > Ideally there would be something like DEFERRED foreign key checking
> > for uniqueness constraints...
> 
> You could hack SQLite to do enforce unique constraints the same way
> as FKs. When adding an entry to a UNIQUE index b-tree, you check for
> a duplicate. If one exists, increment a counter. Do the opposite when 
> removing entries - decrement the counter if there are two or more 
> duplicates of the entry you are removing. If your counter is greater 
> than zero at commit time, a UNIQUE constraint has failed.

It's not *deferred* constraint checking.  It's constraint checking.
Best to honor the transaction first.  

Rather than adding to the syntax, perhaps a pragma could cause updates
to happen in a transaction: 

1.  Create a temporary table to hold the after-image of the updated
rows.
2.  Begin transaction.
3.  Delete the rows from the target table.
4.  Insert the updated rows from the temporary table.
5.  Commit.
6.  Drop the temporary table.
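That recipe, sketched with Python's sqlite3 module on a one-column table (the temporary table name is hypothetical):

```python
import sqlite3

db = sqlite3.connect(":memory:", isolation_level=None)  # explicit transactions
db.execute("CREATE TABLE T (t INT PRIMARY KEY)")
db.executemany("INSERT INTO T VALUES (?)", [(1,), (2,)])
# A plain "UPDATE T SET t = t + 1" trips the immediate uniqueness check here.

# Build the after-image, then swap the rows inside one transaction.
db.execute("CREATE TEMP TABLE newrows AS SELECT t + 1 AS t FROM T")
db.execute("BEGIN")
db.execute("DELETE FROM T")                        # old rows out
db.execute("INSERT INTO T SELECT t FROM newrows")  # new rows in
db.execute("COMMIT")
db.execute("DROP TABLE newrows")

print(db.execute("SELECT t FROM T ORDER BY t").fetchall())  # [(2,), (3,)]
```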

Of course there are more efficient answers available deeper in the
update logic, affecting only the partcular columns at the time the
constraint is enforced.  I guess they all involve deleting the
prior set from the index and inserting the new one.  

--jkl


Re: [sqlite] Artificially slow VACUUM by injecting a sleep() somewhere?

2014-12-09 Thread Roger Binns
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

On 12/08/2014 05:36 PM, David Barrett wrote:
> *Re: "a simple way is to sleep in the progress callback"* -- Can
> you tell me more about this?  Are you referring to the callback
> provided to sqlite3_exec(), or something else?

https://sqlite.org/c3ref/progress_handler.html

Roger
-BEGIN PGP SIGNATURE-
Version: GnuPG v1

iEYEARECAAYFAlSHIZ4ACgkQmOOfHg372QResgCg1AXMQWpW0LnhKVc9k02TXRfN
P0wAoLdmiexWvkkiZOojFb7BSwZXF07X
=97eR
-END PGP SIGNATURE-


Re: [sqlite] How do I update multiple rows in a single sql statement

2014-12-09 Thread Igor Tandetnik

On 12/9/2014 10:38 AM, James K. Lowden wrote:

> If the subquery to the right of the SET clause produces
> more than one row, the statement fails.


Are you sure? Normally, a scalar subquery doesn't fail when the 
resultset contains more than one row - it just silently produces the 
value from the first row of the first column. I'm pretty sure that's how 
it works in SQLite (but am too lazy to check).



> With SQL Server's syntax, it succeeds with the target holding the "last" value, 
> whatever that was.


Succeeding with the target holding the "first" value doesn't sound like 
a significant difference.




Re: [sqlite] Bugreport - slowdown in sqlite after the ANALYZE statement

2014-12-09 Thread James K. Lowden
On Tue, 09 Dec 2014 12:06:20 +0100
Jan Staněk wrote:

>  INSERT INTO CoreCache (ModelID, ItemID)
>  SELECT
... 
>  ORDER BY Year

Why ORDER BY on INSERT?  Does it work better?  I would expect the
unnecessary sort to be pure overhead.  

--jkl


Re: [sqlite] How do I update multiple rows in a single sql statement

2014-12-09 Thread James K. Lowden
On Mon, 08 Dec 2014 20:57:00 -0500
Igor Tandetnik  wrote:

> Yes, there are workarounds (a view; or REPLACE INTO may sometimes be 
> pressed into service). But I, for one, kinda miss UPDATE ... FROM.

Be careful what you wish for.  :-)  

The only implementation of UPDATE...FROM that I know is on SQL Server
(Sybase & Microsoft).  If the join criteria are satisfied by more than
one row, each successive value is applied.  The result is
nondeterministic, produces no diagnostic, and cannot be prevented.  

The standard SQL syntax -- while verbose, granted -- at least gets the
right answer.  If the subquery to the right of the SET clause produces
more than one row, the statement fails.  With SQL Server's syntax, it
succeeds with the target holding the "last" value, whatever that was.  

--jkl


Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-09 Thread James K. Lowden
On Mon, 08 Dec 2014 15:48:41 +0200
RSmith  wrote:

> > UPDATE pages SET position = position + 1 WHERE book_id = 0 AND
> > position >= 1;
> 
> NOT a bug...  the moment you SET position to position +1 for the
> first iteration of the query, it tries to make that entry look like
> (0,2) and there is of course at this point in time already an entry
> like (0,2).

Yes, that's how SQLite works, or doesn't.  Whether or not it's a bug
depends on how you define the term.  

The issue has come up here before: contrary to the SQL standard, SQLite
does not support constraint enforcement with transaction semantics.
I've never heard of another SQL DBMS that behaves that way.  

sqlite> create table T (t int primary key);
sqlite> insert into T values (1), (2);
sqlite> update T set t = t+1;
Error: column t is not unique

As the OP discovered, the one recourse is to relieve the constraint
during the update.  Another is to update a temporary table, and then
delete & insert the rows in a transaction.  I would say "must implement
one's own transaction semantics" is, if not a bug, at least a
misfeature.  

--jkl


Re: [sqlite] sqlite bugreport : unique index causes valid updates to fail

2014-12-09 Thread James K. Lowden
On Mon, 08 Dec 2014 22:01:15 +0700
Dan Kennedy  wrote:

> On 12/08/2014 09:55 PM, Nico Williams wrote:
> > Ideally there would be something like DEFERRED foreign key checking
> > for uniqueness constraints...
> 
> You could hack SQLite to do enforce unique constraints the same way
> as FKs. When adding an entry to a UNIQUE index b-tree, you check for
> a duplicate. If one exists, increment a counter. Do the opposite when 
> removing entries - decrement the counter if there are two or more 
> duplicates of the entry you are removing. If your counter is greater 
> than zero at commit time, a UNIQUE constraint has failed.

It's not *deferred* constraint checking.  It's constraint checking.
Best to honor the transaction first.  

Rather than adding to the syntax, perhaps a pragma could cause updates
to happen in a transaction: 

1.  Create a temporary table to hold the after-image of the updated
rows.
2.  Begin transaction.
3.  Delete the rows from the target table.
4.  Insert the updated rows from the temporary table.
5.  Commit.
6.  Drop the temporary table.

Of course there are more efficient answers available deeper in the
update logic, affecting only the particular columns at the time the
constraint is enforced.  I guess they all involve deleting the
prior set from the index and inserting the new one.  

--jkl


Re: [sqlite] Drop Table Behavior

2014-12-09 Thread Clemens Ladisch
Lukas wrote:
> PRAGMA foreign_keys = ON;
>
> create table a ( id int primary key );
> create table b ( id int primary key );
> create table c ( id int primary key,
>  aid int,
>  bid int,
>  foreign key (aid) references a (id) on delete cascade,
>  foreign key (bid) references b (id) on delete cascade );
>
> insert into a values(1);
> insert into b values(1);
> insert into c values(1,1,1);
>
> drop table a;
> drop table b;
>
> Why is "drop table a" possible? It breaks the schema.

Because SQLite does not check whether it breaks the schema.  You could
recreate it, and everything would be fine.

> Why is "drop table b" causing the exception "Error: no such table:
> main.a" and what is the meaning of this message?

Before the table itself is dropped, all rows are deleted.  This requires
cascading the deletions to c, but table c is broken because of the
missing table a.


Regards,
Clemens


[sqlite] Drop Table Behavior

2014-12-09 Thread Lukas

Hello

We found some strange behavior we cannot explain. We execute the 
following script:


PRAGMA foreign_keys = ON;

create table a ( id int primary key );
create table b ( id int primary key );
create table c (id int primary key, aid int,  bid int, foreign key (aid) 
references a (id) on delete cascade, foreign key (bid) references b (id) 
on delete cascade );


insert into a values(1);
insert into b values(1);
insert into c values(1,1,1);

drop table a;
drop table b;
drop table c;

Questions:
Why is "drop table a" possible? It breaks the schema.
Why is "drop table b" causing the exception "Error: no such table: 
main.a" and what is the meaning of this message? Table b doesn't 
reference a directly.


Best regards,
Lukas



Re: [sqlite] Bugreport - slowdown in sqlite after the ANALYZE statement

2014-12-09 Thread Richard Hipp
Answered by adding a comment at
https://bugzilla.redhat.com/show_bug.cgi?id=1161844

On Tue, Dec 9, 2014 at 6:06 AM, Jan Staněk  wrote:

> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> Hi,
> some of the banshee users noticed a huge slowdown in its operation
> after upgrading to version 3.8.7 from 3.8.6. Here is the related log :
>
> [4 Debug 13:24:27.263] Executed in 12819ms
> DELETE FROM CoreCache WHERE ModelID = 9;
> INSERT INTO CoreCache (ModelID, ItemID) SELECT
> 9, CoreTracks.TrackID
> FROM (SELECT MIN(CoreTracks.TrackID) AS TrackID,
> CoreTracks.Year FROM CoreTracks GROUP BY CoreTracks.Year) AS CoreTracks
> WHERE CoreTracks.Year IN
> (SELECT CoreTracks.Year FROM CoreTracks, CoreCache
> WHERE CoreCache.ModelID = 371 AND
>   CoreCache.ItemID = CoreTracks.TrackID )
> ORDER BY Year
>
> Reverting to 3.8.6, gives back a fast answer :
>
> [4 Debug 13:21:05.433] Executed in 24ms
> DELETE FROM CoreCache WHERE ModelID = 9;
> INSERT INTO CoreCache (ModelID, ItemID) SELECT
> 9, CoreTracks.TrackID
> FROM (SELECT MIN(CoreTracks.TrackID) AS TrackID,
> CoreTracks.Year FROM CoreTracks GROUP BY CoreTracks.Year) AS CoreTracks
> WHERE CoreTracks.Year IN
> (SELECT CoreTracks.Year FROM CoreTracks, CoreCache
> WHERE CoreCache.ModelID = 371 AND
>   CoreCache.ItemID = CoreTracks.TrackID )
> ORDER BY Year
>
> The original bug reporter then went on and possibly isolated the bug.
> Details are at https://bugzilla.redhat.com/show_bug.cgi?id=1161844 .
>
> Thanks for your work,
> - --
> Jan Stanek - Red Hat Associate Developer Engineer - Databases Team
> -BEGIN PGP SIGNATURE-
> Version: GnuPG v2
>
> iEYEARECAAYFAlSG16wACgkQXbaA6cD3QD38pwCcDiofiIh5jo+E8P5B/DhxLzGF
> fGsAn1RJ8SjjEANSjUm4I1j+zReQfj0G
> =2QtO
> -END PGP SIGNATURE-



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Using binding in sqlite insert statements

2014-12-09 Thread Prakash Premkumar
Thanks a lot for your reply, Simon. It was returning SQLITE_DONE (but I 
had checked for SQLITE_OK in my code). It's fixed now.

Thank you
Thank you
Prakash

On Tue, Dec 9, 2014 at 5:02 PM, Simon Davies 
wrote:

> On 9 December 2014 at 10:56, Prakash Premkumar 
> wrote:
> > Hi,
> >
> > I'm trying to use the sqlite_bind function calls to pass values to the
> > insert queries in sql.
> >
> > Here's the code I'm trying to execute: http://pastebin.com/kMvAHsrk
> >
> > I get an error after the sqlite3_step() function call :
> > The error message is "Unknown error".
>
> Check expected return value:
> http://www.sqlite.org/c3ref/step.html
>
> >
> > Can you kindly help me fix this ?
> >
> > Thanks
> > Prakash
>
> Regards,
> Simon


Re: [sqlite] Using binding in sqlite insert statements

2014-12-09 Thread Simon Davies
On 9 December 2014 at 10:56, Prakash Premkumar  wrote:
> Hi,
>
> I'm trying to use the sqlite_bind function calls to pass values to the
> insert queries in sql.
>
> Here's the code I'm trying to execute: http://pastebin.com/kMvAHsrk
>
> I get an error after the sqlite3_step() function call :
> The error message is "Unknown error".

Check expected return value:
http://www.sqlite.org/c3ref/step.html

>
> Can you kindly help me fix this ?
>
> Thanks
> Prakash

Regards,
Simon
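For comparison, the same bound insert via Python's sqlite3 wrapper, which runs the step loop internally (the table and values are hypothetical); note that in the C API a completed INSERT returns SQLITE_DONE from sqlite3_step(), not SQLITE_OK:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, age INT)")  # hypothetical schema
with db:
    # '?' placeholders are bound positionally, as with sqlite3_bind_* in C
    db.execute("INSERT INTO users (name, age) VALUES (?, ?)", ("prakash", 30))
row = db.execute("SELECT name, age FROM users").fetchone()
print(row)  # ('prakash', 30)
```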


[sqlite] Bugreport - slowdown in sqlite after the ANALYZE statement

2014-12-09 Thread Jan Staněk
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Hi,
Some of the Banshee users noticed a huge slowdown in its operation
after upgrading to version 3.8.7 from 3.8.6. Here is the related log:

[4 Debug 13:24:27.263] Executed in 12819ms
DELETE FROM CoreCache WHERE ModelID = 9;
INSERT INTO CoreCache (ModelID, ItemID) SELECT
9, CoreTracks.TrackID
FROM (SELECT MIN(CoreTracks.TrackID) AS TrackID,
CoreTracks.Year FROM CoreTracks GROUP BY CoreTracks.Year) AS CoreTracks
WHERE CoreTracks.Year IN
(SELECT CoreTracks.Year FROM CoreTracks, CoreCache
WHERE CoreCache.ModelID = 371 AND
  CoreCache.ItemID = CoreTracks.TrackID )
ORDER BY Year

Reverting to 3.8.6 gives back a fast answer:

[4 Debug 13:21:05.433] Executed in 24ms
DELETE FROM CoreCache WHERE ModelID = 9;
INSERT INTO CoreCache (ModelID, ItemID) SELECT
9, CoreTracks.TrackID
FROM (SELECT MIN(CoreTracks.TrackID) AS TrackID,
CoreTracks.Year FROM CoreTracks GROUP BY CoreTracks.Year) AS CoreTracks
WHERE CoreTracks.Year IN
(SELECT CoreTracks.Year FROM CoreTracks, CoreCache
WHERE CoreCache.ModelID = 371 AND
  CoreCache.ItemID = CoreTracks.TrackID )
ORDER BY Year

The original bug reporter then went on and possibly isolated the bug.
Details are at https://bugzilla.redhat.com/show_bug.cgi?id=1161844 .

Thanks for your work,
-- 
Jan Stanek - Red Hat Associate Developer Engineer - Databases Team
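A minimal way to see what ANALYZE changes is to compare EXPLAIN QUERY PLAN output before and after it runs. The schema and data below are invented to match the shape of the reported query (the actual plans, and whether the regression appears, depend on which SQLite version Python is linked against):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CoreTracks (TrackID INTEGER PRIMARY KEY, Year INTEGER);
CREATE INDEX CoreTracksYearIndex ON CoreTracks(Year);
CREATE TABLE CoreCache (ModelID INTEGER, ItemID INTEGER);
""")
# Synthetic data: 1000 tracks spread over 30 years, all cached in model 371.
con.executemany("INSERT INTO CoreTracks (Year) VALUES (?)",
                [(1980 + i % 30,) for i in range(1000)])
con.executemany("INSERT INTO CoreCache VALUES (371, ?)",
                [(i,) for i in range(1, 1001)])
con.commit()

query = """
    SELECT 9, CoreTracks.TrackID
    FROM (SELECT MIN(CoreTracks.TrackID) AS TrackID, CoreTracks.Year
          FROM CoreTracks GROUP BY CoreTracks.Year) AS CoreTracks
    WHERE CoreTracks.Year IN
          (SELECT CoreTracks.Year FROM CoreTracks, CoreCache
           WHERE CoreCache.ModelID = 371
             AND CoreCache.ItemID = CoreTracks.TrackID)
    ORDER BY Year"""

plan_before = con.execute("EXPLAIN QUERY PLAN" + query).fetchall()
# ANALYZE gathers index statistics into sqlite_stat1; the planner then
# uses them, which is what changed the chosen plan between 3.8.6 and 3.8.7.
con.execute("ANALYZE")
plan_after = con.execute("EXPLAIN QUERY PLAN" + query).fetchall()
```

Diffing `plan_before` against `plan_after` on the affected versions is a quick way to confirm whether the statistics are what steer the planner into the slow plan.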


[sqlite] Using binding in sqlite insert statements

2014-12-09 Thread Prakash Premkumar
Hi,

I'm trying to use the sqlite_bind function calls to pass values to the
insert queries in sql.

Here's the code I'm trying to execute: http://pastebin.com/kMvAHsrk

I get an error after the sqlite3_step() function call :
The error message is "Unknown error".

Can you kindly help me fix this ?

Thanks
Prakash


Re: [sqlite] How do I update multiple rows in a single sql statement

2014-12-09 Thread Luuk

On 9-12-2014 02:31, Igor Tandetnik wrote:

On 12/8/2014 8:20 PM, Keith Medcalf wrote:


update temp_table
set id=(select id from some_table where c=42),
operation='UPDATE'
where exists (select 1
  from some_table s
  where s.a=temp_table.a and s.b=temp_table.b and s.c=42);

is the proper way of phrasing of a correlated subquery ...


Now the problem is that (select id from some_table where c=42) takes an
id from some row of some_table - not necessarily the row with matching a
and b.


OK, thanks for the extra input...

I hope it's enough for the OP.




Without some form of UPDATE...FROM (supported by some SQL engines, but
not SQLite), I can't think of a way to avoid repeating the whole
three-conjuncts condition twice - once in SET id=, and again in WHERE.
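Keith's correlated form, with the SET subquery correlated on a and b as Igor points out it must be, can be sketched end-to-end. The schema and rows below are invented for illustration; note that SQLite has since gained UPDATE...FROM in 3.33.0, but the doubly-correlated version works on any release:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE some_table (id INTEGER, a INTEGER, b INTEGER, c INTEGER);
CREATE TABLE temp_table (id INTEGER, a INTEGER, b INTEGER, operation TEXT);
INSERT INTO some_table VALUES (1, 10, 20, 42), (2, 11, 21, 42), (3, 10, 20, 99);
INSERT INTO temp_table VALUES (NULL, 10, 20, NULL), (NULL, 99, 99, NULL);
""")

# The three-conjunct condition is repeated: once in SET id= so each row
# picks up the id of *its own* matching row, and again in WHERE so rows
# with no match are left untouched.
con.execute("""
UPDATE temp_table
SET id = (SELECT s.id FROM some_table s
          WHERE s.a = temp_table.a AND s.b = temp_table.b AND s.c = 42),
    operation = 'UPDATE'
WHERE EXISTS (SELECT 1 FROM some_table s
              WHERE s.a = temp_table.a AND s.b = temp_table.b AND s.c = 42)
""")
rows = con.execute(
    "SELECT id, a, b, operation FROM temp_table ORDER BY a").fetchall()
```

The (10, 20) row picks up id 1 (not id 3, whose c is 99), and the (99, 99) row is untouched, which is exactly the behavior the uncorrelated SET subquery failed to guarantee.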




Re: [sqlite] Artificially slow VACUUM by injecting a sleep() somewhere?

2014-12-09 Thread RSmith


On 2014/12/09 03:36, David Barrett wrote:

Hi all, great questions:

*Re: Why VACUUM.*  We vacuum weekly.  This particular database is a
"rolling journal" -- we are constantly adding new rows to the end of the
table, and every week we truncate off the head of the journal to only keep
3M rows at the "tail".  Given that we're truncating the "head", without
vacuuming we'd be inserting the new rows at the "front" of the database
with the old rows at the "end" -- and then each truncation would leave the
database more and more fragmented.  Granted, this is on SSDs so the
fragmentation doesn't matter a *ton*, but it just adds up and gets worse
over time.  Anyway, agreed it's not the most important thing to do, but all
things being equal I'd like to do it if I can to keep things clean and
snappy.


Hi David - this is a completely unneeded step. SQLite maintains its internal structure itself, and it does so using the minimum
I/O; it doesn't do 'rolling' data usage. SQLite knows about people deleting and inserting - it will re-use freed space and not
waste it, so there is no need to try to accomplish that yourself. Only use VACUUM maybe after some months, when you are doing
all your proper DB maintenance - otherwise the performance gain will be negligible (and if you do experience some significant
performance gain after a vacuum in your use-case, please let us know, because something might be broken then).
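A quick way to check whether a VACUUM would actually reclaim anything is PRAGMA freelist_count: pages emptied by DELETE stay in the file and are re-used by later INSERTs, exactly as described above, and VACUUM merely hands them back to the OS. A small sketch with invented data mimicking the rolling-journal pattern:

```python
import os
import sqlite3
import tempfile

fd, path = tempfile.mkstemp(suffix=".db")
os.close(fd)

con = sqlite3.connect(path)
con.execute("CREATE TABLE journal (id INTEGER PRIMARY KEY, payload TEXT)")
con.executemany("INSERT INTO journal (payload) VALUES (?)",
                [("x" * 500,) for _ in range(5000)])
con.commit()

# Truncate the "head" of the journal, as in the weekly job described above.
con.execute("DELETE FROM journal WHERE id <= 2500")
con.commit()

# Freed pages sit on the freelist, ready for re-use by future INSERTs.
free_pages = con.execute("PRAGMA freelist_count").fetchone()[0]
total_pages = con.execute("PRAGMA page_count").fetchone()[0]

# VACUUM rebuilds the file; afterwards the freelist is empty.
con.execute("VACUUM")
free_after = con.execute("PRAGMA freelist_count").fetchone()[0]
```

If `freelist_count` stays near zero between your weekly truncations, the INSERTs are recycling the freed pages and the VACUUM is buying you little.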

