Re: [sqlite] Problem with rename table

2016-09-15 Thread Zsbán Ambrus
On Tue, Sep 6, 2016 at 3:37 PM, Radovan Antloga
 wrote:
> I can't find a solution how to fix my database after
> I have renamed table DOKUMENTI to DOKUMENTI2.
> Table DOKUMENTI had trigger dokumenti_trigger1
> and after renaming the table I can't execute any SQL. I forgot
> to drop the trigger first. So now I always get the error:
> malformed database schema (dokumenti_trigger1) -
> no such table main.dokumenti.

I wonder, in such a case, is it possible to temporarily disable
triggers with the sqlite3_db_config(db,
SQLITE_DBCONFIG_ENABLE_TRIGGER, 0, (int *)nullptr); call, then drop
that trigger, then re-enable triggers?  I haven't tried to see if this
works or not.
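For the broken schema itself, the recovery recipe usually suggested for this situation (untested here, and writable_schema edits are dangerous, so work on a backup copy first) is to delete the orphaned trigger row from sqlite_master directly:

```sql
-- Allow direct edits to the schema table (use with care).
PRAGMA writable_schema = ON;

-- Remove the trigger row that still references the old table name.
DELETE FROM sqlite_master
 WHERE type = 'trigger' AND name = 'dokumenti_trigger1';

PRAGMA writable_schema = OFF;
```

After that, close and reopen the database so the repaired schema is reloaded.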

-- Ambrus
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Random IOERR_SHORT_READ in sqlite3_prepare_v2

2016-09-15 Thread Richard Hipp
On 9/15/16, Martin Raiber  wrote:
>
> The program opens the database file with fd =
> open("/path/to/database/file", ...) and then closes it with close(fd)
> using the OS file API. The close() clears the POSIX file locks of the
> process on the database file (that is, all POSIX file locks of all open
> connections in the process).

Yep. That's why we say that Posix locks are broken by design.
https://www.sqlite.org/src/artifact/be9ca0f90?ln=968

It's pretty easy to tell which parts of Unix were developed by Dennis
Ritchie or Bill Joy and which parts came out of a committee.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Random IOERR_SHORT_READ in sqlite3_prepare_v2

2016-09-15 Thread Martin Raiber
On 16.09.2016 00:53 Simon Slavin wrote:
> On 15 Sep 2016, at 11:38pm, Martin Raiber  wrote:
>
>> There are two instances in the program where the sqlite database file is
>> opened and closed outside of sqlite3 (to back up the database file and to
>> sync it before checkpointing). This clears away the POSIX locks on the
>> database files. This does not cause problems unless another process
>> accesses the database file; for instance, it deletes the WAL file while
>> it is still in use.
> Do you mean that the process makes an SQLite call which deletes the WAL file, 
> or that it deletes the WAL file using a file-handling call ?
>
> It should be impossible for a SQLite call to delete a WAL file while it's in 
> use.  The only times I've seen this done are when a SQLite database was open 
> by two different computers, one accessing it on a local disk and the other 
> accessing it across a network using a SMB share.  This, naturally, messes up 
> multi-access filehandling.

The program opens the database file with fd =
open("/path/to/database/file", ...) and then closes it with close(fd)
using the OS file API. The close() clears the POSIX file locks of the
process on the database file (that is, all POSIX file locks of all open
connections in the process). On quitting, the sqlite3 command line tool is
then able to get an exclusive lock, checkpoints, and deletes the WAL file,
which later causes the I/O errors.
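The lock-dropping behaviour is easy to reproduce outside SQLite. A minimal sketch (POSIX systems only; the temp file and probe subprocess are illustrative):

```python
import fcntl
import os
import subprocess
import sys
import tempfile
import textwrap

path = tempfile.NamedTemporaryFile(delete=False).name

fd1 = os.open(path, os.O_RDWR)
fcntl.lockf(fd1, fcntl.LOCK_EX)  # take an exclusive POSIX record lock

# Open and close a *second* descriptor on the same file -- this is what
# the backup/sync code does. Per POSIX, close() drops ALL of this
# process's locks on the file, including the one taken through fd1.
fd2 = os.open(path, os.O_RDONLY)
os.close(fd2)

# A separate process can now grab the exclusive lock without blocking.
probe = textwrap.dedent(f"""
    import fcntl, os
    fd = os.open({path!r}, os.O_RDWR)
    fcntl.lockf(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)  # would raise if still locked
    print("lock acquired")
""")
out = subprocess.run([sys.executable, "-c", probe],
                     capture_output=True, text=True)
print(out.stdout.strip())  # lock acquired
os.unlink(path)
```

Without the os.close(fd2) line, the probe would fail with EAGAIN; that close is exactly the trap the POSIX lock design sets here.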




Re: [sqlite] Random IOERR_SHORT_READ in sqlite3_prepare_v2

2016-09-15 Thread Simon Slavin

On 15 Sep 2016, at 11:38pm, Martin Raiber  wrote:

> There are two instances in the program where the sqlite database file is
> opened and closed outside of sqlite3 (to back up the database file and to
> sync it before checkpointing). This clears away the POSIX locks on the
> database files. This does not cause problems unless another process
> accesses the database file; for instance, it deletes the WAL file while
> it is still in use.

Do you mean that the process makes an SQLite call which deletes the WAL file,
or that it deletes the WAL file using a file-handling call?

It should be impossible for a SQLite call to delete a WAL file while it's in 
use.  The only times I've seen this done are when a SQLite database was open by 
two different computers, one accessing it on a local disk and the other 
accessing it across a network using a SMB share.  This, naturally, messes up 
multi-access filehandling.

Simon.


Re: [sqlite] Random IOERR_SHORT_READ in sqlite3_prepare_v2

2016-09-15 Thread Martin Raiber
After getting some additional information, namely that the users are also
using the sqlite3 command line tool to read data from the database, I think
I found out how this issue occurs.

There are two instances in the program where the SQLite database file is
opened and closed outside of sqlite3 (to back up the database file and to
sync it before checkpointing). This clears away the POSIX locks on the
database files. This does not cause problems unless another process accesses
the database file; for instance, it deletes the WAL file while it is still
in use. This causes the error messages below.

The solution seems to be to not close the database file after opening it and
to reuse the file handle instead (on Linux).

On 14.09.2016 13:05 Martin Raiber wrote:
> Hi,
>
> there have been three reports by users using my software of SQLite
> 3.12.0 returning SQLITE_IOERR and logging a SQLITE_IOERR_SHORT_READ
> (522). Specifically:
>
> 2016-09-12 04:37:04: WARNING: SQLite: disk I/O error errorcode: 522
> 2016-09-12 04:37:04: ERROR: Error preparing Query [PRAGMA cache_size =
> -2048]: disk I/O error
>
> One instance was on FreeBSD where I thought it could be caused by ZFS.
> The other two instances are on Linux now. On FreeBSD the issue was
> "fixed" by repeating the prepare after it failed with an IO-error.
>
> One user has captured an strace. I cannot actually see the short read,
> though: https://forums.urbackup.org/t/urbackup-crashing/2402/8
>
> Environment:
>
> * Databases are in WAL journal mode
> * synchronous=NORMAL
> * wal_autocheckpoint is OFF. Checkpointing is done in a separate thread
> with PRAGMA wal_checkpoint(PASSIVE) and wal_checkpoint(TRUNCATE) if the
> WAL file is bigger than a certain size
>
> Thanks for any help!
>
> Regards,
> Martin





Re: [sqlite] Complicated join

2016-09-15 Thread Simon Slavin

On 15 Sep 2016, at 9:46pm, David Raymond  wrote:

> The idea is to find the closest date that matches that couldn't be matched to 
> another record.

[snip]

> Can this join be done in SQL?

I wouldn't even try to do it in any SQL engine.  It would be ridiculously 
difficult to debug.  Even "the closest date that matches that couldn't be 
matched to another record" by itself requires processing every row of a table 
using a metric you haven't defined.

If I did do it, I'd use multiple passes: one pass to work out the matching key
values for each table and store them in another column of the table, and a
final pass to do the LEFT JOIN query.

But your question is phrased not in terms of set operations SQL implements but 
in terms of a standard procedural programming language, so perhaps you should 
use one.  Sooner or later you're going to have to do some programming.

Simon.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Complicated join

2016-09-15 Thread David Raymond
Can it be done in SQL? Yes.

In any sort of pretty or efficient manner? Ehhh, maybe?

I came up with something that seems to work with the small sample cases I put 
together to cover your requirements, but it's got a couple of levels of CTEs 
with long "where not exists..." clauses etc., and I've probably missed 
something. If you could provide a sample set of INSERT statements to paste in, 
along with "here's what I hope to see at the end from this", that would 
help out.

Also, when you ask "Can this be done in SQL?" are you asking...
-in a single statement?
-in only SQL, but multiple statements are ok (such as using intermediate temp 
tables)?
-with an initial SQL query, but then the ability to muck about with the 
returned results in the language of your choice thereafter?
-something else?
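For rule (2) on its own, for example, one possible correlated-subquery shape (untested, written against the schema in the original post, and ignoring the TransDate rule entirely) might be:

```sql
SELECT e.*, m.OtherFields
  FROM E AS e
  LEFT JOIN M AS m
    ON m.CombinedKeyFields = e.CombinedKeyFields
   AND m.EvtNbr = COALESCE(
         -- exact event match when one exists...
         (SELECT m2.EvtNbr FROM M AS m2
           WHERE m2.CombinedKeyFields = e.CombinedKeyFields
             AND m2.EvtNbr = e.EvtNbr
           LIMIT 1),
         -- ...otherwise the lowest EvtNbr for the key
         (SELECT MIN(m3.EvtNbr) FROM M AS m3
           WHERE m3.CombinedKeyFields = e.CombinedKeyFields));
```

Rule (3) would need at least one more layer of the same kind on top of this, which is where those long "where not exists" CTEs come from.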

Thanks,

-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org] On 
Behalf Of David Bicking
Sent: Thursday, September 15, 2016 11:53 AM
To: SQLite Mailing List
Subject: [sqlite] Complicated join

I have two tables:

CREATE TABLE M ( CombinedKeyFields, EvtNbr, TransDate, OtherFields,
  PRIMARY KEY(CombinedKeyFields, EvtNbr, TransDate));
CREATE TABLE E ( CombinedKeyFields, EvtNbr, TransDate, OtherFields,
  PRIMARY KEY(CombinedKeyFields, EvtNbr, TransDate));

"CombinedKeyFields" is shorthand for a combination of about a half dozen
fields in the primary key. "TransDate" is an integer storing a proprietary
date sequence number, where an older date is always less than a newer date.

Now, I want to do E LEFT JOIN M:

(1) The CombinedKeyFields must always match in each table.

(2) Match using the EvtNbr, but if no match, use the lowest M.EvtNbr that
matches the CombinedKeyFields.

(3) Match using the TransDate, but if no exact match, match on the
M.TransDate that is less than the E.TransDate but greater than the prior
E.TransDate.

For M.TransDate = 94: E.TransDate = 96 will match to 94, but E.TransDate = 98
will have no match because 94 is less than the prior trans at 96. The idea is
to find the closest date that matches that couldn't be matched to another
record.

All this data is coming from upstream, so this is the data we have on hand.
The schema for this reporting package can still be changed, but I would have
to justify the change by saying the report can only be done with the
change...

Can this join be done in SQL?

If this were an inner join, I believe I could use CASE statements in the
WHERE clause, but I'm not sure that would work moving it to the JOIN... ON
clause.

Any advice or help is much appreciated.

Thanks,
David

Saying a prayer that the email gods won't wrap all these lines together into
an unreadable mess like the last time I asked for advice here...


Re: [sqlite] When is data committed on one connection seen on another?

2016-09-15 Thread R Smith


On 2016/09/14 8:29 PM, Alex Ward wrote:


We currently have 500 tables and 1500 triggers in the schema.   Perhaps that is 
why we didn't have much luck having one connection per thread or opening a 
connection per access.  Perhaps our schema needs a rework, would one table with 
a million rows be better than 500 tables with 2000 rows each?


YES. In 99% of cases it is far better to have x million rows in a single 
table than to try to use multiple tables. It is the job of the DB engine to 
efficiently handle large tables, and SQLite (like most other DB engines) 
does a brilliant job of it, because it is the alpha use-case and the single 
biggest focus of enhancement efforts during development.
There are cases where a dispersed schema might be superior, but they are 
few and obscure. 1 table = 1 set of indices, 1 set of triggers, 1 set of 
views.


Perhaps your case was different, but mostly when people do something like 
this multiple-table thing, it is because they try to think for the DB 
engine and assume the multiple-table way is better (maybe it just "feels" 
better), but upon testing, you will find the effort of running multiple 
tables significantly outweighs any gain in access speed. Also, a B-Tree 
index works far better on one large table than many B-Tree indices on many 
tables. The increase in time taken to hit a specific PK diminishes rapidly 
with table growth. (In simple binary terms, IIRC, it takes 5 lookup steps 
to hit a PK in just 30 rows, yet only about 20 look-up steps to hit a PK in 
1 million rows, 21 to hit it in 2 million rows, etc. - law of diminishing 
returns in action.)
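The diminishing-returns arithmetic is easy to check: a plain binary search over n sorted keys needs about ceil(log2(n)) probes (real B-tree pages fan out much wider than 2, so actual trees are shallower still):

```python
import math

# probes needed to locate one key among n sorted keys by binary search
depth = {n: math.ceil(math.log2(n)) for n in (30, 1_000_000, 2_000_000)}
print(depth)  # {30: 5, 1000000: 20, 2000000: 21}
```

Going from 30 rows to a million multiplies the data by roughly 33,000 but the per-lookup cost by only four.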




We were considering adding a view per table too; that would make it 3000 
elements in the schema. If 100 is considered a lot, are we attempting to use 
SQLite in a manner it is not suited for? We may need to consider another 
alternative if that's the case, although that would be a shame.


This alone should be enough of a reason to reconsider.


Sadly we are not comparing sqlite with another SQL DB. We are 
attempting to replace an in-house memory resident non-sql database 
with sqlite. So our comparison is between a) reading from memory and 
b) opening a connection and reading from sqlite. 


This is perfect, it's what SQLite was born to do. Just trust SQLite to do 
its thing and don't try to pre-empt how it will fare, prematurely optimizing 
with distributed schemata and the like. Take the simplest route first; if 
that turns out to really be too slow, /then/ ask what can be done to 
improve, given the data/schema specifics.


(And yes, we are not oblivious to the fact that you may have already 
invested insane amounts of time doing it the other way - don't throw away 
the code yet, just try the normal way as well!)


Good luck!
Ryan




[sqlite] Complicated join

2016-09-15 Thread David Bicking
I have two tables:

CREATE TABLE M ( CombinedKeyFields, EvtNbr, TransDate, OtherFields,
  PRIMARY KEY(CombinedKeyFields, EvtNbr, TransDate));
CREATE TABLE E ( CombinedKeyFields, EvtNbr, TransDate, OtherFields,
  PRIMARY KEY(CombinedKeyFields, EvtNbr, TransDate));

"CombinedKeyFields" is shorthand for a combination of about a half dozen
fields in the primary key. "TransDate" is an integer storing a proprietary
date sequence number, where an older date is always less than a newer date.

Now, I want to do E LEFT JOIN M:

(1) The CombinedKeyFields must always match in each table.

(2) Match using the EvtNbr, but if no match, use the lowest M.EvtNbr that
matches the CombinedKeyFields.

(3) Match using the TransDate, but if no exact match, match on the
M.TransDate that is less than the E.TransDate but greater than the prior
E.TransDate.

For M.TransDate = 94: E.TransDate = 96 will match to 94, but E.TransDate = 98
will have no match because 94 is less than the prior trans at 96. The idea is
to find the closest date that matches that couldn't be matched to another
record.

All this data is coming from upstream, so this is the data we have on hand.
The schema for this reporting package can still be changed, but I would have
to justify the change by saying the report can only be done with the
change...

Can this join be done in SQL?

If this were an inner join, I believe I could use CASE statements in the
WHERE clause, but I'm not sure that would work moving it to the JOIN... ON
clause.

Any advice or help is much appreciated.

Thanks,
David

Saying a prayer that the email gods won't wrap all these lines together into
an unreadable mess like the last time I asked for advice here...


Re: [sqlite] Ordering looks incorrect

2016-09-15 Thread David Crayford



On 14/09/2016 9:59 PM, R Smith wrote:
I think this is answered, but since you are a bit new to SQLite, let me be 
somewhat more informative...


SQLite is certainly a different experience from the enterprise-class 
databases I'm used to, but I like it. I'm still getting used to the duck 
typing aspect.
I've built it on a z/OS mainframe with very few changes and it works great 
in the z/OS UNIX environment. I'm wondering what it will take to get it to 
work in the native file system.


On 2016/09/14 3:19 PM, David Crayford wrote:
Of course. How do I do something similar to the DB2 decimal() function, 
which pads to the stated precision with trailing zeros? I tried 
round(col, 6) but it knocked off the trailing zero, giving 12.12345.


It's not so much the trailing zeroes you need, it's the leading 
spaces. But you should order by the original value, not by "column 2" 
because "column 2" contains RESULTS (or OUTPUT if you will), not 
original values, and you have stringified the results with your 
printf() statement, so they will now sort like strings.




Got it, the sort on original value was what I needed. And thank you to 
you chaps for pointing it out.


At least, this is how SQLite thinks of that statement - I'm unsure if 
this is in line (or not in line) with any standard. I think from a 
previous discussion we concluded that ordering, limiting and 
offsetting were all non-standard adaptions by various SQL 
implementations and so have no true conformance spec.
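The stringification effect is a one-screen demonstration (hypothetical values; only the printf()/ORDER BY interplay matters):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t(v REAL)")
con.executemany("INSERT INTO t VALUES (?)", [(9.5,), (12.12345,), (101.25,)])

# Ordering by the printf() result compares TEXT, so '101...' < '12...' < '9...'
as_text = [r[0] for r in con.execute(
    "SELECT printf('%.6f', v) AS s FROM t ORDER BY s")]

# Ordering by the original column keeps numeric order, trailing zeros intact
as_number = [r[0] for r in con.execute(
    "SELECT printf('%.6f', v) AS s FROM t ORDER BY v")]

print(as_text)    # ['101.250000', '12.123450', '9.500000']
print(as_number)  # ['9.500000', '12.123450', '101.250000']
```

The second query is the "sort on the original value" fix: format for display in the SELECT list, but ORDER BY the raw column.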





Re: [sqlite] When is data committed on one connection seen on another?

2016-09-15 Thread Hick Gunter
>> As already stated, and per my own experience, each thread should have its
>> own connection and do whatever it needs to do there, without interference
>> from other threads.
>
>I appreciate this point.  Early prototyping indicated that this might not be 
>possible for our system, which makes me a little nervous if that is what most 
>users end up doing.  We will definitely take another look at not sharing 
>connections.
>

Maybe consider creating a pool of "reader" threads that get created at 
application startup. Each one opens its very own dedicated connection at the 
beginning. Also have one "listener" thread that accepts requests and farms 
them out to the "readers".

That way you have control over the number of connections being created and the 
overhead of connecting is done once at application startup.


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at


