Re: [sqlite] Custom functions, variable parameters, and SELECT triggers

2015-01-30 Thread Tristan Van Berkom
On Sat, 2015-01-31 at 00:04 -0500, Igor Tandetnik wrote:
> On 1/30/2015 10:44 PM, David Barrett wrote:
> > Is it possible to create a trigger that calls a custom function and passes
> > in NEW.*?
> 
> Not literally NEW.* . You'll have to spell out individual columns as 
> parameters.
> 
> > 2) I'm *assuming* if you pass a "*" into that function, it'll just call
> > that function with however many columns are available.
> 
> Your assumption is incorrect. If I recall correctly, the syntax 
> "myFunc(*)" is accepted, and is equivalent to "myFunc()" - that is, no 
> parameters are passed. Pretty much the only reason to allow this syntax 
> is to accept "count(*)".
> 
> > 3) It seems that there is a way to create a custom trigger that has no
> > byproduct but to call SELECT.  The only reason I can imagine you'd want to
> > do that is to call a custom function.  But can I call that function with
> > all columns using a *?  (I can't quite figure this out from the docs alone.)
> 
> Well, you could have tested it, and discovered it didn't work. You don't 
> even need a custom function, you could have used a built-in one.
> 
> >  SELECT myFunc( NEW.* );
> 
> That would produce a syntax error.
> 
> > Are these assumptions correct, and should the above generally work?
> 
> No, and no.
> 
> > My
> > goal is to execute myFunc() every time there's an INSERT/UPDATE/DELETE on a
> > given table -- but I want myFunc() to be reusable and not need to know the
> > structure of the table it's being called on.
> 
> I'm not sure how the necessity of myFunc(NEW.*) syntax follows from 
> this. Why can't the trigger call myFunc(new.colA, new.colB)?
> 
> You can write a variadic custom function (one that can be called with an 
> arbitrary number of arguments), if that's what you are asking.

Additional note,

In order to generate queries on tables whose structure you don't know in
advance (I've found this particularly useful in dynamic schema upgrades),
you may find this handy:

  PRAGMA table_info ('table_name')

That should help you generate a query which passes all the columns of a
given table to myFunc().
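
For example, a minimal sketch with the Python sqlite3 binding (the table,
column, and function names are just placeholders) that builds such a trigger
from the table_info output:

  import sqlite3

  conn = sqlite3.connect(":memory:")  # illustrative in-memory database
  conn.execute("CREATE TABLE foo (colA INTEGER, colB TEXT)")

  # Stand-in for the real custom function; narg = -1 means "any number of args".
  conn.create_function("myFunc", -1, lambda *args: print("myFunc:", args))

  # PRAGMA table_info returns one row per column; row[1] is the column name.
  cols = [row[1] for row in conn.execute("PRAGMA table_info('foo')")]

  # NEW.* is not accepted in a trigger body, so list every column explicitly.
  args = ", ".join("NEW.%s" % c for c in cols)
  conn.execute("""
      CREATE TRIGGER myFuncOnFooInsert AFTER INSERT ON foo
      BEGIN
          SELECT myFunc(%s);
      END
  """ % args)

  conn.execute("INSERT INTO foo VALUES (1, 'hello')")  # fires the trigger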

Cheers,
-Tristan




[sqlite] expected performance of selecting large numbers of records

2015-01-30 Thread Godmar Back
I have a single SQLite 2-column table with a primary TEXT key and a value,
like so (id TEXT PRIMARY KEY, value TEXT). One process adds new records
periodically, perhaps 1-10 per minute. The database currently has 850,000
entries and is 7.3 GB on disk.

I also need to perform bulk queries, which appear to be very slow. On an
average desktop PC, for instance, a "SELECT COUNT(*)" takes over 5 minutes.

If I want to do a dump of the table, as in "SELECT value FROM ", I'll
quickly get "database is locked" errors. Googling revealed that those occur
because a long-running SELECT keeps a cursor, and thus a read lock, on the
entire database. I have since rewritten the query as multiple SELECT *
FROM ... LIMIT a, b statements, where b = 50 and a = 0, 50, 100, 150, and
so on.  However, it takes 20 hours to fully extract the table's 850,000
records, with only minimal per-record processing.
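
In rough terms, the extraction loop looks like this (a sketch using the
Python sqlite3 binding; the file and table names are placeholders):

  import sqlite3

  conn = sqlite3.connect("data.db")  # placeholder file name
  CHUNK = 50

  offset = 0
  while True:
      # Each LIMIT/OFFSET query re-scans the table from the beginning, so the
      # total cost grows roughly quadratically with the number of rows.
      rows = conn.execute(
          "SELECT id, value FROM records LIMIT ? OFFSET ?", (CHUNK, offset)
      ).fetchall()
      if not rows:
          break
      for record_id, value in rows:
          pass  # minimal per-record processing
      offset += CHUNK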

My question: is this performance expected, or am I doing something wrong?
Is there a quick way to count and/or dump an approximate snapshot of a
single 2-column table such as the one shown above, even while keeping the
database available for concurrent writes?

I'm using Ubuntu 12.10 with libsqlite3 3.7.9-2ubuntu1.1, via the Python
binding.

 - Godmar


Re: [sqlite] Custom functions, variable parameters, and SELECT triggers

2015-01-30 Thread Igor Tandetnik

On 1/30/2015 10:44 PM, David Barrett wrote:

> Is it possible to create a trigger that calls a custom function and passes
> in NEW.*?


Not literally NEW.* . You'll have to spell out individual columns as 
parameters.



> 2) I'm *assuming* if you pass a "*" into that function, it'll just call
> that function with however many columns are available.


Your assumption is incorrect. If I recall correctly, the syntax 
"myFunc(*)" is accepted, and is equivalent to "myFunc()" - that is, no 
parameters are passed. Pretty much the only reason to allow this syntax 
is to accept "count(*)".



> 3) It seems that there is a way to create a custom trigger that has no
> byproduct but to call SELECT.  The only reason I can imagine you'd want to
> do that is to call a custom function.  But can I call that function with
> all columns using a *?  (I can't quite figure this out from the docs alone.)


Well, you could have tested it, and discovered it didn't work. You don't 
even need a custom function, you could have used a built-in one.



>  SELECT myFunc( NEW.* );


That would produce a syntax error.


> Are these assumptions correct, and should the above generally work?


No, and no.


> My
> goal is to execute myFunc() every time there's an INSERT/UPDATE/DELETE on a
> given table -- but I want myFunc() to be reusable and not need to know the
> structure of the table it's being called on.


I'm not sure how the necessity of myFunc(NEW.*) syntax follows from 
this. Why can't the trigger call myFunc(new.colA, new.colB)?


You can write a variadic custom function (one that can be called with an 
arbitrary number of arguments), if that's what you are asking.
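
For instance, with the Python sqlite3 binding (a sketch; the names are
illustrative), registering the function with narg = -1 makes it variadic, and
the trigger simply lists whatever columns the table happens to have:

  import sqlite3

  def my_func(*args):
      # Receives however many arguments the trigger passes in.
      print("row changed:", args)

  conn = sqlite3.connect(":memory:")
  conn.create_function("myFunc", -1, my_func)  # -1 = any number of arguments

  conn.execute("CREATE TABLE foo (colA INTEGER, colB TEXT)")
  conn.execute("""
      CREATE TRIGGER fooAfterInsert AFTER INSERT ON foo
      BEGIN
          SELECT myFunc(new.colA, new.colB);
      END
  """)
  conn.execute("INSERT INTO foo VALUES (42, 'hello')")  # prints: row changed: (42, 'hello')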

--
Igor Tandetnik



[sqlite] Custom functions, variable parameters, and SELECT triggers

2015-01-30 Thread David Barrett
Is it possible to create a trigger that calls a custom function and passes
in NEW.*?  To break that question down:

1) I know it's possible to create custom functions that take a variable
number of parameters.

2) I'm *assuming* if you pass a "*" into that function, it'll just call
that function with however many columns are available.  For example, this
would call myFunc() with two parameters:

CREATE TABLE foo ( colA INTEGER, colB TEXT );
SELECT myFunc( * ) FROM foo;

3) It seems that there is a way to create a custom trigger that has no
byproduct but to call SELECT.  The only reason I can imagine you'd want to
do that is to call a custom function.  But can I call that function with
all columns using a *?  (I can't quite figure this out from the docs alone.)

CREATE TRIGGER myFuncOnFooInsert AFTER INSERT ON foo
BEGIN
SELECT myFunc( NEW.* );
END

Are these assumptions correct, and should the above generally work?  My
goal is to execute myFunc() every time there's an INSERT/UPDATE/DELETE on a
given table -- but I want myFunc() to be reusable and not need to know the
structure of the table it's being called on.  Thanks, as always, I
appreciate your help!

-david


Re: [sqlite] "database disk image is malformed" error occurs more

2015-01-30 Thread Mario M. Westphal
I estimate that over 90% of the users keep the database on local disks. I can 
tell from the log files.

Keeping the SQLite database on a network server really hurts performance. 
That's not what SQLite is designed for, quite apart from all the network-locking 
issues mentioned in various SQLite docs. I use MySQL or another RDBMS backend 
for such scenarios.



Re: [sqlite] "database disk image is malformed" error occurs more

2015-01-30 Thread Stephen Chrzanowski
On Fri, Jan 30, 2015 at 8:07 AM, Mario M. Westphal  wrote:

>
> When a user encounters the problem he/she restores the last working
> backup. I have a few users who faced this problem more than once. Here I
> always assumed some hardware glitch, a shaky USB connection, disk trouble,
> network problems (if the database is held on a Windows server or NAS),
> buggy SAMBA implementations or similar. Often, when asked, users
> ‘remembered’ a power failure, or some other problems with the disk or
> network. Case closed.
>

*Client/Server model*
It hasn't been mentioned by you yet, but if your software is acting in a
client/server model, ensure that your server is accessing the file LOCALLY
and not at a remote point.  Ensure that you're using the internal SQLite
threading tools and checking every single result for every single call in
the server software.  Do not ever let a remote client directly access the
database file.

*NAS - Network Attached Storage*

If multiple users are accessing a file that lives on a different
computer, it is remote storage, which means NAS.  Any computer with a
share available on a network *IS* to be considered a NAS by a remote
machine.  Drobo, FTP, Windows, Unix/Linux, CIFS/NFS, etc. - whatever the
protocol used, if what you're accessing isn't local to the computer, it
is a NAS.  Windows, Linux, and "other" network protocols, be it 'buggy'
SAMBA or a Windows file share, it doesn't matter: *ALL* are prone to
giving SQLite issues.  A single user on a single remote source should
be OK (but I wouldn't trust it), yet the SECOND you start throwing
multiple connections at a remote file, you're begging, pleading, and even
offering your first-born child to the computer gods asking for data
problems.  The problem is NOT with Windows, and the problem isn't going to
show up in your event logs anywhere; it lies with the file-sharing protocol
on the remote side, and even THAT machine won't make note of bad file
accesses or of when a file is accessed.  The remote system isn't properly
releasing the necessary lock information to your computer, and that is
where the problem happens.

Directly from https://www.sqlite.org/howtocorrupt.html

--
2.0 File locking problems

SQLite uses file locks on the database file, and on the write-ahead log or
WAL file, to coordinate access between concurrent processes. Without
coordination, two threads or processes might try to make incompatible
changes to a database file at the same time, resulting in database
corruption.

2.1 Filesystems with broken or missing lock implementations

SQLite depends on the underlying filesystem to do locking as the
documentation says it will. But some filesystems contain bugs in their
locking logic such that the locks do not always behave as advertised. *This
is especially true of network filesystems and NFS in particular.* If SQLite
is used on a filesystem where the locking primitives contain bugs, and if
two or more threads or processes try to access the same database at the
same time, then database corruption might result.

--
{Highlighted by me}

Write your software to detect where the file is being loaded from.  If your
software is written for Windows, it is simple to find out what kind of
drive you're accessing a file from, and even simpler to find out whether
you're accessing a file via a UNC path (\\system\share), just by looking
at the full file path your program is loading the file from.  I've
never coded anything under a 'Nix system except for scripts, but there
should be a way to find out whether the path you're accessing is remote by
looking at /etc/fstab (or equivalent) and tracking it from there.  The
moment your software sees that it is accessing something OTHER than a file
local to the computer (be it HDD/SSD/USB), warn the user of possible data
corruption, log that the attempt was made, and go from there on whatever
path you want to proceed: carry on with the system running as usual, deny
access to the file, or close out of the application entirely.
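
For example, a rough Windows-only sketch in Python (not from the original
post; it uses ctypes with the Win32 GetDriveTypeW call, and the path shown
is hypothetical):

  import ctypes
  import os

  DRIVE_REMOTE = 4  # GetDriveTypeW return value for network drives

  def is_remote_path(path):
      """Rough check for whether a file lives on remote storage."""
      path = os.path.abspath(path)
      if path.startswith("\\\\"):          # UNC path: \\server\share\...
          return True
      drive, _ = os.path.splitdrive(path)  # e.g. "C:"
      if not drive:
          return False
      return ctypes.windll.kernel32.GetDriveTypeW(drive + "\\") == DRIVE_REMOTE

  if is_remote_path(r"\\fileserver\share\app.db"):  # hypothetical path
      print("Warning: database is on remote storage; corruption risk.")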

Confirm, with ABSOLUTE CERTAINTY, that the database files being used are on
local storage devices. Only once you have validated that these files ARE
being accessed locally should you start digging into other kinds of
corruption problems.  Removing a machine from the process is going to make
things MUCH easier to diagnose.


Re: [sqlite] ROWID-based queries slow in FTS db

2015-01-30 Thread farkas andras
Thank you for the quick fix and the info, I will wait for 3.8.9 to trickle down 
into DBD::SQLite. AF
Dan Kennedy  wrote:
>On 01/30/2015 10:49 PM, Dominique Devienne wrote:
>> On Fri, Jan 30, 2015 at 4:38 PM, Dan Kennedy  wrote:
>>
>>> On 01/29/2015 02:29 AM, farkas andras wrote:
>>>
 [...] but searches based on ROWID are atrociously slow and hog massive
 amounts of memory [...]

>>> Looks like range constraints on rowids were only taken into account when
>>> there was also a MATCH term in the WHERE clause. Now fixed here:
>>>
>>>http://www.sqlite.org/src/info/85dc12625d300f
>>>
>>> The fix should be in 3.8.9.
>>
>> Just curious Dan. The tests added do not seem to check the query plans
>> despite the report being about a performance issue. I only skimmed them,
>> and I'm unfamiliar with TCL and the exact specifics of SQLite testing, so I
>> could well have missed them, but I do recall seeing other perf tests checking
>> execution plans, in addition to checking correctness. Did I miss them?
>
>Fair point. It would be better if there were tests to show that the 
>queries were being correctly optimized.
>
>But the change was fairly trivial, and I didn't think there was much
>chance that it would fail to optimize the queries correctly. Also, it's
>a pretty obscure optimization (one complaint in how many years?), so I
>figured it wasn't all that important. Finally it's fiddly to test in
>this case, as the EXPLAIN QUERY PLAN (or even EXPLAIN) output is not
>sufficient to figure out if it's working properly or not. So I just
>checked by hand that the optimization is working.
>
>On the other hand, that the change could contain some bug related to 
>integer overflow or some other boundary condition is a real risk. So the 
>tests focus on that.
>
>Dan.
>
>


Re: [sqlite] Best Practice for read-only access

2015-01-30 Thread Duquette, William H (393K)

On 1/29/15 5:48 PM, "Richard Hipp"  wrote:

>On 1/29/15, Duquette, William H (393K) 
>wrote:
>> Howdy!
>>
>> I've got an object that encapsulates access to an SQLite database,
>>i.e., all
>> writes to the database are done in terms of method calls to the object.
>> However, I want to give the application read-only access to the
>>database for
>> queries.  There are two obvious ways to do this:
>>
>> 1. I can define an "authorizer", and have it deny write access for
>>queries
>> coming from outside the object.
>>
>> 2. I can open two database handles on the one file, one of them
>>read-only,
>> and give the outside application access to the read-only database
>>handle.
>>
>> At present the application is single-threaded, so simultaneous access
>>isn't
>> an issue (and I'm using WAL mode anyway).
>>
>> I'm currently using #1; I enable the authorizer before queries from
>>outside,
>> and remove it afterward, each time.
>>
>> I kind of like #2--it's simpler--but I'm worried that it would increase
>> memory usage considerably.
>>
>
>I vote for #2.  Measure the memory usage if that is a concern.

Thanks, Richard!

Will
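
For what it's worth, with SQLite's URI filenames option #2 can be close to a
one-liner; a sketch using the Python sqlite3 binding for illustration (the
file name is a placeholder):

  import sqlite3

  # Read/write handle kept inside the wrapper object.
  rw = sqlite3.connect("app.db")
  rw.execute("CREATE TABLE IF NOT EXISTS t (x)")
  rw.commit()

  # Read-only handle handed to the rest of the application; any write through
  # it fails with "attempt to write a readonly database".
  ro = sqlite3.connect("file:app.db?mode=ro", uri=True)
  print(ro.execute("SELECT count(*) FROM t").fetchone())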


>
>-- 
>D. Richard Hipp
>d...@sqlite.org


Re: [sqlite] ROWID-based queries slow in FTS db

2015-01-30 Thread Dan Kennedy

On 01/30/2015 10:49 PM, Dominique Devienne wrote:

> On Fri, Jan 30, 2015 at 4:38 PM, Dan Kennedy  wrote:
>
>> On 01/29/2015 02:29 AM, farkas andras wrote:
>>
>>> [...] but searches based on ROWID are atrociously slow and hog massive
>>> amounts of memory [...]
>>
>> Looks like range constraints on rowids were only taken into account when
>> there was also a MATCH term in the WHERE clause. Now fixed here:
>>
>>    http://www.sqlite.org/src/info/85dc12625d300f
>>
>> The fix should be in 3.8.9.
>
> Just curious Dan. The tests added do not seem to check the query plans
> despite the report being about a performance issue. I only skimmed them,
> and I'm unfamiliar with TCL and the exact specifics of SQLite testing, so I
> could well have missed them, but I do recall seeing other perf tests checking
> execution plans, in addition to checking correctness. Did I miss them?


Fair point. It would be better if there were tests to show that the 
queries were being correctly optimized.


But the change was fairly trivial, and I didn't think there was much 
chance that it would fail to optimize the queries correctly. Also, it's 
a pretty obscure optimization (one complaint in how many years?), so I 
figured it wasn't all that important. Finally it's fiddly to test in 
this case, as the EXPLAIN QUERY PLAN (or even EXPLAIN) output is not 
sufficient to figure out if it's working properly or not. So I just 
checked by hand that the optimization is working.


On the other hand, that the change could contain some bug related to 
integer overflow or some other boundary condition is a real risk. So the 
tests focus on that.


Dan.




Re: [sqlite] ROWID-based queries slow in FTS db

2015-01-30 Thread Dominique Devienne
On Fri, Jan 30, 2015 at 4:38 PM, Dan Kennedy  wrote:

> On 01/29/2015 02:29 AM, farkas andras wrote:
>
>> [...] but searches based on ROWID are atrociously slow and hog massive
>> amounts of memory [...]
>>
>
> Looks like range constraints on rowids were only taken into account when
> there was also a MATCH term in the WHERE clause. Now fixed here:
>
>   http://www.sqlite.org/src/info/85dc12625d300f
>
> The fix should be in 3.8.9.


Just curious Dan. The tests added do not seem to check the query plans
despite the report being about a performance issue. I only skimmed them,
and I'm unfamiliar with TCL and the exact specifics of SQLite testing, so I
could well have missed them, but I do recall seeing other perf tests checking
execution plans, in addition to checking correctness. Did I miss them?
Thanks, --DD


Re: [sqlite] ROWID-based queries slow in FTS db

2015-01-30 Thread Dan Kennedy

On 01/29/2015 02:29 AM, farkas andras wrote:

> Hi all, I'm using FTS through DBD::SQLite (perl) to query large text
> databases (~10GB, ~10 million records). The regular FTS MATCH searches work
> fine (they usually run under a second), but searches based on ROWID are
> atrociously slow and hog massive amounts of memory. I'm trying to retrieve
> a couple of adjacent rows like so:
>
>   my $q_c = $dbh->prepare( "SELECT * FROM ftstable WHERE (ROWID BETWEEN 1000 AND 1040)" );
>   # my $q_c = $dbh->prepare( "SELECT * FROM ftstable LIMIT 1040 OFFSET 1000" ); # tried this too, it isn't any better
>   $q_c->execute();
>
> The execute takes several minutes and uses ~600 MB of memory. Now,
> http://www.sqlite.org/changes.html says:
>
>   3.8.1: FTS4 queries are better able to make use of docid<$limit
>   constraints to limit the amount of I/O required
>
> There's also this thread, indicating that rowid searches on FTS databases
> are optimized:
> http://sqlite.1065341.n5.nabble.com/FTS-full-text-query-vs-query-by-rowid-td77534.html
> I was using 3.7.x so I updated DBD::SQLite and that got me up to SQLite
> 3.8.7, but I see no significant improvement. EXPLAIN QUERY PLAN gives the
> same result as the linked thread: 0|0|0|SCAN TABLE tmdata VIRTUAL TABLE
> INDEX 393216.
> Maybe there is a better way to write the query? If not, is there any hope
> that this will get fixed?


Looks like range constraints on rowids were only taken into account when 
there was also a MATCH term in the WHERE clause. Now fixed here:


  http://www.sqlite.org/src/info/85dc12625d300f

The fix should be in 3.8.9.

Dan.




Re: [sqlite] "database disk image is malformed" error occurs more

2015-01-30 Thread Simon Slavin

On 30 Jan 2015, at 1:07pm, Mario M. Westphal  wrote:

> What worries me more are the incidents where users see this problem happen 
> several times, with a database kept on a local hard disk or SSD.

Just to make it clear, when corruption is reported, the corruption is not 
automatically fixed.  The database will still be corrupt, and as the app 
continues it may notice the same corruption again and report it again.  This is 
why I asked you whether you are noticing more corruption or are just continuing 
to use a corrupt database.

So yes, if the user continues to use the same database, they'll get more error 
messages.  And if they restore a backup it might be a good idea to check to see 
whether that backup is corrupt.  At least until you have tracked down the cause 
of your corruption and stopped it.

> that’s really hard to tell, because unless SQLite has to access a corrupted 
> section of the file during normal operation, or integrity_check() is run, a 
> damaged database may behave perfectly normally for a long time...


One column of one row of one table may get corrupted.  If that's the case then 
the database can be used for years without any problem being noticed.  
Theoretically "PRAGMA integrity_check" will notice it, however.
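
A periodic check is cheap to script; for instance, a sketch with the Python
sqlite3 binding against a hypothetical file name:

  import sqlite3

  conn = sqlite3.connect("suspect.db")  # hypothetical file name

  # Prints a single 'ok' row for a healthy database, or up to 100 rows
  # describing problems otherwise.
  for (message,) in conn.execute("PRAGMA integrity_check"):
      print(message)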

Simon.


Re: [sqlite] "database disk image is malformed" error occurs more

2015-01-30 Thread Mario M. Westphal
> Okay.  First, stop doing VACUUM after this.  You're not improving things and 
> you may be making things worse

 

Not important. If this error is encountered the database is marked and the user 
reminded on every open/close to replace it with a backup. The database is not 
supposed to be used after SQLite has reported it as corrupt.

 

When a user encounters the problem he/she restores the last working backup. I 
have a few users who faced this problem more than once. Here I always assumed 
some hardware glitch, a shaky USB connection, disk trouble, network problems 
(if the database is held on a Windows server or NAS), buggy SAMBA 
implementations or similar. Often, when asked, users ‘remembered’ a power 
failure, or some other problems with the disk or network. Case closed.

 

 

What worries me more are the incidents where users see this problem happen 
several times, with a database kept on a local hard disk or SSD. The Windows 
event log shows no reports about anything disk related. No power failure. No 
hard shut-down. No problems reading or writing data in other applications.

 

The database may be several months old or fresh. The error is sometimes 
encountered during a diagnosis run (with integrity_check) or a 
SELECT/INSERT/UPDATE suddenly returns the dreaded SQLITE_CORRUPT error code. 
This can happen for databases with 200 MB or databases with 10 GB. It 
apparently does not necessarily happen during times of high activity or bulk 
inserts. But that’s really hard to tell, because unless SQLite has to access a 
corrupted section of the file during normal operation, or integrity_check() is 
run, a damaged database may behave perfectly normally for a long time...

 

I have now implemented the ErrorCallback routine and future versions will log 
anything reported that way to the log file. Maybe this gives us some more data 
to work with. I assume that this function is safe to use in a scenario where 
multiple instances/connections of SQLite are in use in parallel? My application 
uses multiple threads, but each thread uses a separate instance of SQLite.

 

-- Mario

 



Re: [sqlite] "database disk image is malformed" error occurs more

2015-01-30 Thread RSmith


On 2015/01/30 14:45, Mario M. Westphal wrote:

> - The databases in question are stored on a local hard disk or SSD.
>
> - If a user stores his database on a NAS box or Windows server, it is
> accessed directly, via standard Windows file system routines.
>
> - From what I can tell, network-based databases are not more likely to
> corrupt than databases stored on built-in disks or SSDs or databases kept
> on disks or USB sticks connected via USB.


That is simply not true. The report-back on locking success for a local resource (albeit a removable drive) is, under normal 
circumstances, absolute and correct. For a network (remote) file source, that is just not the case in nearly all network setups.  If you can 
be sure that only one instance of your program accesses it over the network, and nothing else does, then it should not be harmed, but this is 
difficult to guarantee.


Users kill their processes and re-start programs, and SQLite connections then (unwittingly) find hot rollback journals and all 
kinds of other things that can fall into a long "busy" cycle, which may again prompt a process kill, etc.


It's easy to tell, though: when you get reports of corruption, ask for the file location information. A pattern should quickly emerge 
if this is a networking problem.



> - My software is updated every 2 to 4 weeks, and I always include and ship
> with the latest SQLite version.
>
> - There is a big variance in when users update, so some users may work with
> versions several months old, but typically not older than 2 months.
>
> - A user may access a database from multiple computers, but then only in
> read-only mode. Write access is only permitted when the database is opened
> exclusively.
>
> - I have used SQLite since about 2008, but the code base changes frequently.
> I maintain old databases (up to maybe one year old) and use them in
> regression tests before shipping.





Re: [sqlite] "database disk image is malformed" error occurs more

2015-01-30 Thread Stephan Beal
On Fri, Jan 30, 2015 at 1:45 PM, Mario M. Westphal  wrote:

> - From what I can tell, network-based databases are not more likely to
> corrupt than databases stored on built-in disks or SSDs or databases kept
> on disks or USB sticks connected via USB.
>

That's a big assumption. Network filesystems are historically _notorious_
for locking-related problems (the root of many corruption problems).

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


Re: [sqlite] "database disk image is malformed" error occurs more

2015-01-30 Thread Mario M. Westphal
- The databases in question are stored on a local hard disk or SSD.

- If a user stores his database on a NAS box or Windows server, it is accessed 
directly, via standard Windows file system routines.

- From what I can tell, network-based databases are not more likely to corrupt 
than databases stored on built-in disks or SSDs or databases kept on disks or 
USB sticks connected via USB.

- My software is updated every 2 to 4 weeks, and I always include and ship with 
the latest SQLite version.

- There is a big variance in when users update, so some users may work with 
versions several months old, but typically not older than 2 months.

- A user may access a database from multiple computers, but then only in 
read-only mode. Write access is only permitted when the database is opened 
exclusively.

- I have used SQLite since about 2008, but the code base changes frequently. I 
maintain old databases (up to maybe one year old) and use them in regression 
tests before shipping.

 

-- Mario



Re: [sqlite] Truncation of floating point numbers in SQLite?

2015-01-30 Thread Stephan Beal
On Fri, Jan 30, 2015 at 3:45 AM, Donald Shepherd 
wrote:

> I'm still not convinced whether it's the behaviour causing my problem, but
> it does look like negative zero is another special case:
>
> SQLite version 3.8.7.2 2014-11-18 20:57:56
> Enter ".help" for usage hints.
> sqlite> create table datatable2 (doublevalue real);
> sqlite> insert into datatable2 values(-0.0);
>

FWIW, "special" doubles like inf, nan, and -0 have no standardized C89 APIs
(they were added in C99), so it is unsurprising that sqlite (C89, aside
from its use of "long long") treats positive and negative zero the same.

http://en.wikipedia.org/wiki/C99#IEEE.C2.A0754_floating_point_support
http://stackoverflow.com/questions/9657993/negative-zero-in-c


According to this page:
http://stackoverflow.com/questions/5095968/does-float-have-a-negative-zero-0f

"the standard" (it's not clear if they mean C89 or C99) _requires_
"positive and negative zero to test as equal," an implication of which is
that it would be impossible to tell them apart in SQL implementations based
on that.
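
A quick way to see that equality behaviour end to end (a sketch with the
Python sqlite3 binding, not from the original post):

  import sqlite3

  print(-0.0 == 0.0)  # True: IEEE 754 requires the two zeros to compare equal

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE datatable2 (doublevalue REAL)")
  conn.execute("INSERT INTO datatable2 VALUES (-0.0)")

  # The stored value also compares equal to 0.0 in SQL, so '=' alone cannot
  # distinguish negative zero from positive zero.
  print(conn.execute("SELECT doublevalue = 0.0 FROM datatable2").fetchone())  # (1,)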

-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


Re: [sqlite] Truncation of floating point numbers in SQLite?

2015-01-30 Thread RSmith


On 2015/01/30 05:49, Donald Shepherd wrote:

Trying to retrieve a stored qNaN or sNaN returns a column type of NULL and a 
value of 0.


Thank you for letting us know.

Well, I suppose that is SQLite's way of responding to errors of this sort: returning NULL (as is the case with div0, for instance). 
Which makes some sense and is probably documented (though I did not check).  It seems that, to get truly untouched bitwise round-trips, 
you will need to store/retrieve blobs or strings in the 0xNNN format.


At first I thought SQLite (or any RDBMS) should really store whatever you give it and return it untouched, but that cannot be true for 
an RDBMS because it has to interpret the data; it isn't just a binary store. It has to answer questions like SELECT ("colA"+3), 
(7/"colB"), or sort by colA, or apply a collation to strings, etc. - all of which means it must care about what the value represents 
and cannot simply ignore it unless it is stored as an ignorant type (i.e. a blob).
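
As an illustration of the blob route, a sketch with the Python sqlite3
binding (names are illustrative): packing the raw IEEE 754 bits lets NaNs,
and negative zero, survive the round trip exactly:

  import math
  import sqlite3
  import struct

  conn = sqlite3.connect(":memory:")
  conn.execute("CREATE TABLE vals (bits BLOB)")

  # Store the 8-byte IEEE 754 bit pattern instead of a REAL value.
  nan_bits = struct.pack("<d", float("nan"))
  conn.execute("INSERT INTO vals VALUES (?)", (nan_bits,))

  (blob,) = conn.execute("SELECT bits FROM vals").fetchone()
  value = struct.unpack("<d", blob)[0]
  print(math.isnan(value))  # True: the bit pattern came back untouched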


