Re: [sqlite] unicode case insensitive

2014-10-25 Thread dd
I am summarizing options to support unicode case-insensitive:

1. Richard Hipp: icu ext
2. Aleksey Tulinov: https://bitbucket.org/alekseyt/nunicode#markdown-header-sqlite3-extension
3. Grey's suggestion: custom collation

 Please add to the list if I missed any options.

 I don't have experience with custom collations or the ICU extension. I need
to support Unicode case-insensitive matching (COLLATE NOCASE on a uniquely
indexed column), for all Unicode character sets, without storing a locale.

 I would like to know: what are the disadvantages/pitfalls of the ICU
extension?

Can somebody point out which option is suitable for my requirement?


On Fri, Oct 24, 2014 at 11:51 PM, James K. Lowden 
wrote:

> On Fri, 24 Oct 2014 21:44:50 +0400
> dd  wrote:
>
> > >>Convert everything to upper (or lower) case brute force.
> >Sorry. I am not clear. Can you please elaborate this.
>
> The standard function tolower(3) is locale-dependent. If your locale is
> set to match the data's single-byte encoding,
>
> tolower('Ö') == tolower('ö') .
>
> If you are using Unicode, you have towlower(3) as defined by C99.  If
> you're using utf-8, you'll want to call iconv(3) first to convert the
> string to an array of wint_t.
>
> Plug that into the xCompare function of a custom collation, and you
> have your own case-insensitive matching capability.
>
> --jkl
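
A minimal sketch of the xCompare approach described above, for readers who have
not written a custom collation before. It registers the collation for UTF-16
text so the iconv(3) step can be skipped, folds each code unit with towlower(3),
and uses the made-up name "wnocase"; it is an illustration of the idea, not
production code (towlower() is locale-dependent and only handles BMP characters).

    #include <sqlite3.h>
    #include <wctype.h>
    #include <stdint.h>

    /* xCompare: lower-case both UTF-16 strings code unit by code unit. */
    static int wnocase_compare(void *unused, int nA, const void *pA,
                               int nB, const void *pB)
    {
        const uint16_t *a = pA, *b = pB;
        int na = nA / 2, nb = nB / 2;          /* lengths arrive in bytes */
        int i, n = na < nb ? na : nb;
        (void)unused;
        for (i = 0; i < n; i++) {
            wint_t ca = towlower((wint_t)a[i]);
            wint_t cb = towlower((wint_t)b[i]);
            if (ca != cb) return ca < cb ? -1 : +1;
        }
        return na - nb;                        /* shorter string sorts first */
    }

    /* Register on an open connection; then declare columns COLLATE wnocase. */
    int register_wnocase(sqlite3 *db)
    {
        return sqlite3_create_collation(db, "wnocase", SQLITE_UTF16,
                                        NULL, wnocase_compare);
    }

With the collation registered, a column declared as t TEXT COLLATE wnocase (or a
UNIQUE index built with it) compares case-insensitively for every character that
towlower() knows how to fold.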


Re: [sqlite] unicode case insensitive

2014-10-25 Thread dd
typo:
*I am summarizing options to support unicode* case-insensitive*:

On Sat, Oct 25, 2014 at 10:34 AM, dd  wrote:

> I am summarizing options to support unicode case-sensitive:
>
> 1. Richard Hipp: icu ext
> 2. Aleksey Tulinov: https://bitbucket.org/alekseyt/nunicode#markdown-
> header-sqlite3-extension
> 3. Grey's suggestion: custom collation
>
>  Please add in options list if I missed.
>
>  I don't have experience in custom collations and icu extension. Needs
> to support Unicode case insensitive (collate nocase with uniquely indexed
> column) for database for all unicode charsets without storing locale.
>
>  I would like to know what are the disadvantages/pitfalls with icu
> extension?
>
> Can somebody point which option is suitable for my requirement.
>
>
> On Fri, Oct 24, 2014 at 11:51 PM, James K. Lowden <
> jklow...@schemamania.org> wrote:
>
>> On Fri, 24 Oct 2014 21:44:50 +0400
>> dd  wrote:
>>
>> > >>Convert everything to upper (or lower) case brute force.
>> >Sorry. I am not clear. Can you please elaborate this.
>>
>> The standard function tolower(3) is locale-dependent. If your locale is
>> set to match the data's single-byte encoding,
>>
>> tolower('Ö') == tolower('ö') .
>>
>> If you are using Unicode, you have towlower(3) as defined by C99.  If
>> you're using utf-8, you'll want to call iconv(3) first to convert the
>> string to an array of wint_t.
>>
>> Plug that into the xCompare function of a custom collation, and you
>> have your own case-insensitive matching capability.
>>
>> --jkl


Re: [sqlite] quasi-bug related to locking, and attached databases....

2014-10-25 Thread Stephan Beal
On Fri, Oct 24, 2014 at 11:16 PM, Simon Slavin  wrote:

> Which version of SQLite are you using ?
> What operating system are you using (including which version) ?
> What formats are the volumes those files are stored on ?
>

Simon, FYI: this is the "'main' db aliasing" problem i brought up a couple
months ago, which we solved by using your suggestion: re-attach the db
directly after opening it, so that (as Dave said) all DBs have well-known
internal names regardless of what order they get opened in.

Reproduced here with 3.8.6 on Linux/x64:

sqlite>  insert or replace into main.dest ( name, value ) values
('allow',(select value from aux.source where name = 'allow'));
Error: database is locked

now without the db names:

sqlite> insert or replace into dest ( name, value ) values ('allow',(select
value from source where name = 'allow'));



-- 
- stephan beal
http://wanderinghorse.net/home/stephan/
http://gplus.to/sgbeal
"Freedom is sloppy. But since tyranny's the only guaranteed byproduct of
those who insist on a perfect world, freedom will have to do." -- Bigby Wolf


[sqlite] How to enable icu for sqlite shell

2014-10-25 Thread dd
Hi,

  I need to verify Unicode stuff with the shell. Is it possible?

   Tried with .load icu; it threw "icu.so cannot open...". Where can I find
icu.so on sqlite.org?

Thanks.


Re: [sqlite] [regression] SQLite 3.8.7 causes Evolution to crash

2014-10-25 Thread Paul Menzel
Dear Richard,


On Wednesday, 22.10.2014, at 21:53 -0400, Richard Hipp wrote:
> On Wed, Oct 22, 2014 at 5:14 PM, Paul Menzel wrote:

> > after the upgrade of libsqlite3 from 3.8.6 to 3.8.7 Evolution crashes
> > with a segmentation fault.
> >
> > pool[6371]: segfault at 0 ip   (null) sp a67d26ec error 14
> >
> > Several people have reproduced this [1].
> 
> The problem *might* be an incomplete VFS implementation in Evolution.  I
> put a more detailed comment on the Bugzilla ticket.

thank you a lot for the analysis leading to a solution.

Was it just bad luck that such a change to use different code paths is
done for a bug fix release (3.8.6 to 3.8.7)? Or don’t you use semantic
versioning for SQLite?


Thanks,

Paul


[1] https://bugzilla.gnome.org/show_bug.cgi?id=738965




[sqlite] unique with icu

2014-10-25 Thread dd
Hi,

  ICU is enabled for SQLite. I didn't define any custom collations or LIKE
operator overrides.

CREATE TABLE test(id integer primary key autoincrement, t text collate
nocase, unique(t));

Case 1:  When I try to insert 'd' and 'D', SQLite throws a constraint
violation. (SUCCESS)
Case 2:  When I try to insert 'ö' and 'Ö', SQLite inserts both
successfully. (FAILED)

 I expected case 2 to throw a constraint violation as well, but it does not.
Am I missing anything here?

Thanks.


Re: [sqlite] [regression] SQLite 3.8.7 causes Evolution to crash

2014-10-25 Thread Richard Hipp
On Sat, Oct 25, 2014 at 6:15 AM, Paul Menzel <
paulepan...@users.sourceforge.net> wrote:

>
> thank you a lot for the analysis leading to a solution.
>
> Was it just bad luck that such a change to use different code paths is
> done for a bug fix release (3.8.6 to 3.8.7)? Or don’t you use semantic
> versioning for SQLite?
>
>
The 4th digit increments for bug-fix releases.  For example:  3.8.6 to
3.8.6.1.


-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Ross Altman
Hi guys,

I'm currently working with a pretty gigantic database (116 Gb at the
moment, and growing). Performing a simple SELECT routine with one filter
takes between 7 and 15 minutes, which is starting to become a problem. The
command I'm using is the following:

SELECT NVERTS FROM ToricCY WHERE H11=2;

Pretty basic. I just want the NVERTS column of the table ToricCY where
another column labeled H11 is 2, nothing fancy. Because of the huge size of
the database, I would expect it to take a while, but I was wondering if you
guys have any recommendations on how to speed it up (if possible).

Also, because this problem will only get worse as the database grows, I'm
looking for alternative approaches to storing this data. Does anyone have
experience working with databases this big?

Thanks!

Best,
Ross


Re: [sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Alessandro Marzocchi
Do you have an index on that column?
On 25 Oct 2014 14:03, "Ross Altman"  wrote:

> Hi guys,
>
> I'm currently working with a pretty gigantic database (116 Gb at the
> moment, and growing). Performing a simple SELECT routine with one filter
> takes between 7 and 15 minutes, which is starting to become a problem. The
> command I'm using is the following:
>
> SELECT NVERTS FROM ToricCY WHERE H11=2;
>
> Pretty basic. I just want the NVERTS column of the table ToricCY where
> another column labeled H11 is 2, nothing fancy. Because of the huge size of
> the database, I would expect it to take a while, but I was wondering if you
> guys have any recommendations on how to speed it up (if possible).
>
> Also, because this problem will only get worse as the database grows, I'm
> looking for alternative approaches to storing this data. Does anyone have
> experience working with databases this big?
>
> Thanks!
>
> Best,
> Ross


Re: [sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Stephen Chrzanowski
Make sure H11 has an index on it.  Also ensure that ToricCY doesn't have
blobs attached to it.  Throw the blobs into a different table and use a 1:1
relationship to link them, then only pull the blobs when needed.

SQLite apparently will load an entire row of data out of the database, even
if the query doesn't need the additional data, so if you have a huge blob,
it'll read in that entire blob then toss the blob to /dev/null

You might also want to try and run an analyze on the database as well.
That MIGHT take a while (At 116gig, yeah, I'd say so -- Make sure you have
PLENTY of drive space where the DB lives), so, if it is being used in a live
environment, you might need to take your package offline for maintenance.

At this point, throwing an index on the database WILL take some time as
well.  Moving the blobs from one table to another is also going to take a
long while as not only will you need to copy the data from one table to
another (So you have two copies of the database), but SQLite doesn't
support an ALTER TABLE ... DROP COLUMN type of statement, so you'll have to
reconstruct the table (Or use an IDE that'll do that work for you) which
means a copy of your required data from one table to a new table, then a
delete of the old table.
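
As a rough illustration of that 1:1 split (all table and column names below are
invented, not taken from the original schema), the side table holds nothing but
the key and the blob, so queries that never mention it never read the blob:

    #include <sqlite3.h>

    /* Hypothetical schema sketch for moving large payloads out of the
     * frequently-scanned table; "payload" is only read when joined in. */
    static int create_split_schema(sqlite3 *db)
    {
        static const char *kSql =
            "CREATE TABLE core("
            "  id     INTEGER PRIMARY KEY,"
            "  h11    INTEGER,"
            "  nverts TEXT"
            ");"
            "CREATE TABLE payload("
            "  id   INTEGER PRIMARY KEY REFERENCES core(id),"
            "  data BLOB"
            ");";
        return sqlite3_exec(db, kSql, 0, 0, 0);
    }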

On Fri, Oct 24, 2014 at 7:09 PM, Ross Altman  wrote:

> Hi guys,
>
> I'm currently working with a pretty gigantic database (116 Gb at the
> moment, and growing). Performing a simple SELECT routine with one filter
> takes between 7 and 15 minutes, which is starting to become a problem. The
> command I'm using is the following:
>
> SELECT NVERTS FROM ToricCY WHERE H11=2;
>
> Pretty basic. I just want the NVERTS column of the table ToricCY where
> another column labeled H11 is 2, nothing fancy. Because of the huge size of
> the database, I would expect it to take a while, but I was wondering if you
> guys have any recommendations on how to speed it up (if possible).
>
> Also, because this problem will only get worse as the database grows, I'm
> looking for alternative approaches to storing this data. Does anyone have
> experience working with databases this big?
>
> Thanks!
>
> Best,
> Ross


Re: [sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Teg
Hello Ross,

Is  H11  indexed?   15 minutes suggests to me that it's doing a linear
scan and you need an index.  I have databases that big and performance
is pretty decent most of the time.

I'm sure the boys here will ask you for a schema and probably an "explain"
on the query.

C

Friday, October 24, 2014, 7:09:05 PM, you wrote:

RA> Hi guys,

RA> I'm currently working with a pretty gigantic database (116 Gb at the
RA> moment, and growing). Performing a simple SELECT routine with one filter
RA> takes between 7 and 15 minutes, which is starting to become a problem. The
RA> command I'm using is the following:

RA> SELECT NVERTS FROM ToricCY WHERE H11=2;

RA> Pretty basic. I just want the NVERTS column of the table ToricCY where
RA> another column labeled H11 is 2, nothing fancy. Because of the huge size of
RA> the database, I would expect it to take a while, but I was wondering if you
RA> guys have any recommendations on how to speed it up (if possible).

RA> Also, because this problem will only get worse as the database grows, I'm
RA> looking for alternative approaches to storing this data. Does anyone have
RA> experience working with databases this big?

RA> Thanks!

RA> Best,
RA> Ross



-- 
Teg    mailto:t...@djii.com



Re: [sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Robert Hairgrove
On Fri, 2014-10-24 at 19:09 -0400, Ross Altman wrote:
> I'm currently working with a pretty gigantic database (116 Gb at the
> moment, and growing). Performing a simple SELECT routine with one filter
> takes between 7 and 15 minutes, which is starting to become a problem. The
> command I'm using is the following:
> 
> SELECT NVERTS FROM ToricCY WHERE H11=2;

Questions:

1) Are you querying a local database file, or pulling the data in over a
network connection?

2) Is there an index defined for the column H11?

> Pretty basic. I just want the NVERTS column of the table ToricCY where
> another column labeled H11 is 2, nothing fancy. Because of the huge size of
> the database, I would expect it to take a while, but I was wondering if you
> guys have any recommendations on how to speed it up (if possible).
> 
> Also, because this problem will only get worse as the database grows, I'm
> looking for alternative approaches to storing this data. Does anyone have
> experience working with databases this big?


116 GB seems to be at odds with a database system named "SQLite". :) But
most popular client/server RDBMS's should be able to handle it with no
problem. MySQL, for example, supports table partitions which can have an
enormous impact on performance. With SQLite, you can approximate
partitioning by splitting the database into several smaller ones and use
the ATTACH command to run queries over several databases at once.

Hope this helps!
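
A rough sketch of what that ATTACH-based splitting can look like from the C API
(the part1.db/part2.db file names are invented; table and column names come from
Ross's query, and the same statements work verbatim in the sqlite3 shell):

    #include <sqlite3.h>

    /* Query two database files as if they were one data set.  The row
     * callback is omitted for brevity; a real program would pass one or
     * use sqlite3_prepare_v2() and step through the results. */
    static int query_both_parts(sqlite3 *db)   /* db opened on part1.db */
    {
        static const char *kSql =
            "ATTACH DATABASE 'part2.db' AS p2;"
            "SELECT NVERTS FROM ToricCY    WHERE H11 = 2 "
            "UNION ALL "
            "SELECT NVERTS FROM p2.ToricCY WHERE H11 = 2;";
        return sqlite3_exec(db, kSql, 0, 0, 0);
    }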



Re: [sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Richard Hipp
On Sat, Oct 25, 2014 at 8:15 AM, Stephen Chrzanowski 
wrote:

>
> SQLite apparently will load an entire row of data out of the database, even
> if the query doesn't need the additional data, so if you have a huge blob,
> it'll read in that entire blob then toss the blob to /dev/null
>

SQLite only loads a prefix of each row from the file - the minimum prefix
required to resolve the query.  So if the BLOB is out on the end of the
row, it does not get loaded.

However, when updating a row, SQLite rewrites the entire row.  (It has to,
because of the use of variable-width encodings, since a change to any field
effects the location of all subsequent fields.)  So if you have a row with
both a BOOLEAN and a 1MB BLOB, you have to write 1MB in order to change the
value of the BOOLEAN.

-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Minor Documentation Issue - Virtual Tables

2014-10-25 Thread Stephen Chrzanowski
https://www.sqlite.org/vtab.html

I was trying to understand what Virtual Tables are, and how they'd benefit
me in a new app I'm building, but I noticed that the numbering system on
the page is incorrect, or, they slipped something into my coffee this AM.
(I should thank them if they did!)

The Implementation header should be 1.2, not 2.2.


Re: [sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Stephen Chrzanowski
Ahh.. Thanks Richard.  So if you were to have blobs live at the front of
the row, it'll have to read through that blob to get that byte in the next
field, correct?

On Sat, Oct 25, 2014 at 8:31 AM, Richard Hipp  wrote:

> On Sat, Oct 25, 2014 at 8:15 AM, Stephen Chrzanowski 
> wrote:
>
> >
> > SQLite apparently will load an entire row of data out of the database,
> even
> > if the query doesn't need the additional data, so if you have a huge
> blob,
> > it'll read in that entire blob then toss the blob to /dev/null
> >
>
> SQLite only loads a prefix of each row from the file - the minimum prefix
> required to resolve the query.  So if the BLOB is out on the end of the
> row, it does not get loaded.
>
> However, when updating a row, SQLite rewrites the entire row.  (It has to,
> because of the use of variable-width encodings, since a change to any field
> effects the location of all subsequent fields.)  So if you have a row with
> both a BOOLEAN and a 1MB BLOB, you have to write 1MB in order to change the
> value of the BOOLEAN.
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Richard Hipp
On Sat, Oct 25, 2014 at 9:14 AM, Stephen Chrzanowski 
wrote:

> Ahh.. Thanks Richard.  So if you were to have blobs live at the front of
> the row, it'll have to read through that blob to get that byte in the next
> field, correct?
>
>
Correct.
-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Please fix the EBCDIC support

2014-10-25 Thread mario@tiscali

Hello, I just found this thread and I consider it very interesting.

John's porting of SQLite to z/OS is a very nice option, and as he 
mentioned before, the modifications to get to it were minor, if any.
As far as I understand though, the result is a database which stores 
numbers as big-endian BUT strings in the local EBCDIC encoding of the 
hosting system. While this deviates from the statement that defines 
SQLite textual data as UTF-8 encoded, everything works fine (to the 
extent we tested it so far) if using a z/OS created DB on the same z/OS.


I believe this results from what is stated in 
http://www.sqlite.org/version3.html:


"SQLite is not particular about the text it receives and is more than 
happy to process text strings that are not normalized or even 
well-formed UTF-8 or UTF-16. Thus, programmers who want to store IS08859 
data can do so using the UTF-8 interfaces. As long as no attempts are 
made to use a UTF-16 collating sequence or SQL function, the byte 
sequence of the text will not be modified in any way."


Of course the incompatible text encoding makes the z/OS DB unusable on 
other platforms and vice-versa.


I personally agree with "k" that encoding text strings in the expected 
format is the user's responsibility, and even on z/OS at least 
sqlite3_prepare() and sqlite3_bind_text() should pass their parameters 
in UTF-8 format where needed (I am sure there are other APIs that need 
the same approach). By the way, this is easily doable using the iconv() 
family of services.


What I think though is that, even properly encoding API parameters in 
UTF-8 would not be enough to produce a portable SQLite DB.


To get there, in my opinion we would need to fix a set of text strings 
internally used by SQLite that on z/OS are EBCDIC encoded and that other 
platforms expect to be UTF-8. One example is the SQLITE_FILE_HEADER, 
which John has fixed using a configure 'trick'. Another, I think, 
affects the sqlite_master table, where in addition to the CREATE 
statement (which should be passed as UTF-8 by the caller), there is a type 
column, defined as text, which on z/OS is again EBCDIC encoded. Not sure 
this is the only one, but after moving a very simple z/OS-created DB to Linux 
and fixing the sqlite_master issue, I was able to use it as-is.


I think that it would be nice to have z/OS creating a structurally 
healthy DB. This would allow portability for users willing to embrace 
the EBCDIC<->UTF-8 conversion for user data, without requiring 
additional effort for users not interested in portability.


While I am not fluent in SQLite's internal structure, I am willing to help 
remove these flaws if directed / supported.


Thank you,
mario


Re: [sqlite] Regression with sqlite 3.8.7

2014-10-25 Thread Richard Hipp
On Thu, Oct 23, 2014 at 11:55 AM, Luigi Iemma  wrote:

> Hi,
>
> SELECT TdoIdoc,RdoCart,RdoQuat
>  FROM Tesdoc
>  INNER JOIN Rigdoc ON RdoIdoc BETWEEN TdoIdoc*1000 AND
> TdoIdoc*1000+999
>  WHERE TdoTipo=60 AND TdoAnno BETWEEN 2014 AND 2014
>  GROUP BY TdoIdoc
>
> When I run this query on 3.8.5 it takes 0.126 seconds,
> when I run this query on 3.8.7 it takes 17.37 seconds
>

Version 3.8.7 (also version 3.8.6) is using a different query plan.

Actually, 3.8.6 and 3.8.7 are making the better choice, based on the
information at hand.  The query planner just doesn't have sufficient
information to make a good choice in this case.  As Simon pointed out, you
can run ANALYZE to give the query planner more information.  Or you can
give it hints.  One possible hint is to add a likelihood() function around
the TdoTipo=60 constraint:

SELECT TdoIdoc,RdoCart,RdoQuat
 FROM Tesdoc, Rigdoc
WHERE RdoIdoc BETWEEN TdoIdoc*1000 AND TdoIdoc*1000+999
  AND likelihood(TdoTipo=60, 0.01)  <<< HERE
  AND TdoAnno BETWEEN 2014 AND 2014
 GROUP BY TdoIdoc;



-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] Inserting a row with all defaults set in table

2014-10-25 Thread Stephen Chrzanowski
I've got a table that has defaults set for all fields

CREATE TABLE [tEvents] (
  [EventID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
  [Airline] CHAR DEFAULT '',
  [TicketID] INTEGER DEFAULT 0,
  [Resolved] BOOL DEFAULT 0);


Seems to me it'd be a bit redundant to do an "insert into tEvents
(TicketID) values (0)" when the default is already set (And who says I may
not want to change the default later to -1 for whatever reason?).

The question is, how would I insert a blank row and rely on the defaults
I've got in the schema?  "insert into tEvents () values ()" fails, as does
removing the first pair of brackets, as well as removing all brackets.

I plan on getting the last_insert_rowid value after I do the insert to do
other things on my form, and at that point, I don't care what the other
fields are, as they'll be blank on the form anyway.


Re: [sqlite] Inserting a row with all defaults set in table

2014-10-25 Thread Richard Hipp
On Sat, Oct 25, 2014 at 9:49 AM, Stephen Chrzanowski 
wrote:

> I've got a table that has defaults set for all fields
>
> CREATE TABLE [tEvents] (
>   [EventID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
>   [Airline] CHAR DEFAULT '',
>   [TicketID] INTEGER DEFAULT 0,
>   [Resolved] BOOL DEFAULT 0);
>
>
> The question is, how would I insert a blank row and rely on the defaults
> I've got in the schema?
>

INSERT INTO tEvents DEFAULT VALUES;

-- 
D. Richard Hipp
d...@sqlite.org
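
Since the original question also mentions reading the new row id back
afterwards, a minimal sketch of the whole round trip from C (error handling
trimmed, assuming the tEvents table shown above) looks like:

    #include <sqlite3.h>
    #include <stdio.h>

    /* Insert a row of nothing but default values, then fetch the EventID
     * that the INSERT generated. */
    static sqlite3_int64 insert_default_event(sqlite3 *db)
    {
        if (sqlite3_exec(db, "INSERT INTO tEvents DEFAULT VALUES;",
                         0, 0, 0) != SQLITE_OK) {
            fprintf(stderr, "insert failed: %s\n", sqlite3_errmsg(db));
            return -1;
        }
        /* For an INTEGER PRIMARY KEY column this is the new EventID. */
        return sqlite3_last_insert_rowid(db);
    }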


Re: [sqlite] Question on locks

2014-10-25 Thread Ali Jawad
Thanks for the input. I did add PRAGMA busy_timeout=1500; before every query,
but I am still getting loads of "database is locked" errors. Please advise.

On Mon, Oct 20, 2014 at 3:44 AM, Keith Medcalf  wrote:

>
> Actually, you should set the timeout for each connection.  The computer
> does not matter.
>
> >-Original Message-
> >From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
> >boun...@sqlite.org] On Behalf Of Simon Slavin
> >Sent: Sunday, 19 October, 2014 18:35
> >To: General Discussion of SQLite Database
> >Subject: Re: [sqlite] Question on locks
> >
> >
> >On 20 Oct 2014, at 1:15am, Ali Jawad  wrote:
> >
> >> Thanks, I hope it works, I hate to use MySQL for this project. Pardon
> >my
> >> ignorance but the example at hand below is for C, any ideas on how to
> >set
> >> for Bash and PHP ? Or better yet globally for sqlite3
> >
> >There's a PRAGMA which sets the same thing:
> >
> >
> >
> >The time is in milliseconds and for testing purposes you want to set it
> >to at least 1 milliseconds.
> >
> >Execute the pragma after you open the connection /on all the computers
> >which are opening the database/.
> >
> >Simon.


Re: [sqlite] Inserting a row with all defaults set in table

2014-10-25 Thread Simon Davies
On 25 October 2014 14:49, Stephen Chrzanowski  wrote:
> I've got a table that has defaults set for all fields
>
> CREATE TABLE [tEvents] (
>   [EventID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
>   [Airline] CHAR DEFAULT '',
>   [TicketID] INTEGER DEFAULT 0,
>   [Resolved] BOOL DEFAULT 0);
>
>
> Seems to me it'd be a bit redundant to do an "insert into tEvents
> (TicketID) values (0)" when the default is already set (And who says I may
> not want to change the default later to -1 for whatever reason?).
>
> The question is, how would I insert a blank row and rely on the defaults
> I've got in the schema?  "insert into tEvents () values ()" fails, as does
> removing the first pair of brackets, as well as removing all brackets.

INSERT INTO tEvents( EventId ) VALUES( null );

Regards,
Simon


Re: [sqlite] Inserting a row with all defaults set in table

2014-10-25 Thread Stephen Chrzanowski
Perfect.  Thanks guys.

On Sat, Oct 25, 2014 at 10:03 AM, Simon Davies  wrote:

> On 25 October 2014 14:49, Stephen Chrzanowski  wrote:
> > I've got a table that has defaults set for all fields
> >
> > CREATE TABLE [tEvents] (
> >   [EventID] INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT,
> >   [Airline] CHAR DEFAULT '',
> >   [TicketID] INTEGER DEFAULT 0,
> >   [Resolved] BOOL DEFAULT 0);
> >
> >
> > Seems to me it'd be a bit redundant to do an "insert into tEvents
> > (TicketID) values (0)" when the default is already set (And who says I
> may
> > not want to change the default later to -1 for whatever reason?).
> >
> > The question is, how would I insert a blank row and rely on the defaults
> > I've got in the schema?  "insert into tEvents () values ()" fails, as
> does
> > removing the first pair of brackets, as well as removing all brackets.
>
> INSERT INTO tEvents( EventId ) VALUES( null );
>
> Regards,
> Simon


Re: [sqlite] Question on locks

2014-10-25 Thread Stephen Chrzanowski
You did that per connection?

On Sat, Oct 25, 2014 at 10:02 AM, Ali Jawad  wrote:

> Thanks for the input, I did add PRAGMA busy_timeout=1500; before any query
> but I am still getting loads of locked database errors, please advice
>


Re: [sqlite] Question on locks

2014-10-25 Thread Ali Jawad
Yes, please see the examples below (these are in for loops, but they are the only
occurrences of SQLite in the files in question):

bash script

sqlite3 websites.db "PRAGMA busy_timeout=1500;CREATE TABLE [$SITE] (DATE
INT ,EU INT , US INT);"

php script

$ret = $db->query("PRAGMA busy_timeout=1500;SELECT eu,us,date FROM [$site]
ORDER BY date(DATE) DESC  LIMIT 10");




On Sat, Oct 25, 2014 at 4:12 PM, Stephen Chrzanowski 
wrote:

> You did that per connection?
>
> On Sat, Oct 25, 2014 at 10:02 AM, Ali Jawad  wrote:
>
> > Thanks for the input, I did add PRAGMA busy_timeout=1500; before any
> query
> > but I am still getting loads of locked database errors, please advice
> >


[sqlite] Installing 3.8.7 gives 3.8.6

2014-10-25 Thread Arnaud Meuret

Hi everyone,

On a fresh CentOS 5, compiling SQLite using the [current tarball][1] for  
3.8.7, the version being installed ends up in `/usr/local/lib` as  
**3.8.6**:


bash-3.2# make install
make[1]: Entering directory `/mongrel2/sqlite-autoconf-3080700'
 /bin/mkdir -p '/usr/local/lib'
 /bin/sh ./libtool   --mode=install /usr/bin/install -c
libsqlite3.la '/usr/local/lib'
libtool: install: /usr/bin/install -c .libs/libsqlite3.so.0.8.6  
/usr/local/lib/libsqlite3.so.0.8.6
libtool: install: (cd /usr/local/lib && { ln -s -f libsqlite3.so.0.8.6  
libsqlite3.so.0 || { rm -f libsqlite3.so.0 && ln -s libsqlite3.so.0.8.6  
libsqlite3.so.0; }; })
libtool: install: (cd /usr/local/lib && { ln -s -f libsqlite3.so.0.8.6  
libsqlite3.so || { rm -f libsqlite3.so && ln -s libsqlite3.so.0.8.6  
libsqlite3.so; }; })
libtool: install: /usr/bin/install -c .libs/libsqlite3.lai  
/usr/local/lib/libsqlite3.la
libtool: install: /usr/bin/install -c .libs/libsqlite3.a  
/usr/local/lib/libsqlite3.a


 [1]: https://www.sqlite.org/2014/sqlite-autoconf-3080700.tar.gz


Is it normal ? I expected to find libsqlite3.so.0.8.7.


--A


Re: [sqlite] Installing 3.8.7 gives 3.8.6

2014-10-25 Thread Dan Kennedy

On 10/25/2014 09:36 PM, Arnaud Meuret wrote:

Hi everyone,

On a fresh CentOS 5, compiling SQLite using the [current tarball][1] 
for 3.8.7, the version being installed ends up in `/usr/local/lib` as 
**3.8.6**:


bash-3.2# make install
make[1]: Entering directory `/mongrel2/sqlite-autoconf-3080700'
 /bin/mkdir -p '/usr/local/lib'
 /bin/sh ./libtool   --mode=install /usr/bin/install -c 
libsqlite3.la '/usr/local/lib'
libtool: install: /usr/bin/install -c .libs/libsqlite3.so.0.8.6 
/usr/local/lib/libsqlite3.so.0.8.6
libtool: install: (cd /usr/local/lib && { ln -s -f 
libsqlite3.so.0.8.6 libsqlite3.so.0 || { rm -f libsqlite3.so.0 && ln 
-s libsqlite3.so.0.8.6 libsqlite3.so.0; }; })
libtool: install: (cd /usr/local/lib && { ln -s -f 
libsqlite3.so.0.8.6 libsqlite3.so || { rm -f libsqlite3.so && ln -s 
libsqlite3.so.0.8.6 libsqlite3.so; }; })
libtool: install: /usr/bin/install -c .libs/libsqlite3.lai 
/usr/local/lib/libsqlite3.la
libtool: install: /usr/bin/install -c .libs/libsqlite3.a 
/usr/local/lib/libsqlite3.a


 [1]: https://www.sqlite.org/2014/sqlite-autoconf-3080700.tar.gz


Is it normal ? I expected to find libsqlite3.so.0.8.7.


It's normal. I think that number is only supposed to change if the new 
release is not ABI compatible with the previous. It's a coincidence that 
"8.6" matches the tail of the previous release number. Earlier releases 
also used "libsqlite3.so.0.8.6".


Dan.




Re: [sqlite] How to enable icu for sqlite shell

2014-10-25 Thread dave
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of dd
> Sent: Saturday, October 25, 2014 3:59 AM
> To: General Discussion of SQLite Database
> Subject: [sqlite] How to enable icu for sqlite shell
> 
> 
> Hi,
> 
>   I need to verify unicode stuff with shell. Is it possible?
> 
>Tried with .load icu, thrown "icu.so cannot open...". 
> Where can I find
> icu.so on sqlite.org?

You very probably have to compile this yourself; SQLite tends to be a source
distribution, with some precompiled binaries of the shell provided as a
convenience.  I don't know if you have the capability to build on hand, but a
quick web search revealed this:
http://stackoverflow.com/questions/1818678/compiling-the-icu-sqlite-extension-statically-linked-to-icu
which might give you a leg-up on the process, in particular related to the
ICU extension.

-dave




Re: [sqlite] Question on locks

2014-10-25 Thread Simon Slavin
On 25 Oct 2014, at 3:31pm, Ali Jawad  wrote:

> bash script
> 
> sqlite3 websites.db "PRAGMA busy_timeout=1500;CREATE TABLE [$SITE] (DATE
> INT ,EU INT , US INT);"

Creating and destroying tables always involves a long lock.

> php script
> 
> $ret = $db->query("PRAGMA busy_timeout=1500;SELECT eu,us,date FROM [$site]
> ORDER BY date(DATE) DESC  LIMIT 10");

This doesn't work.  The query will process only the query command.  You want 
something more like

// do this just once, soon after creating the $db connection
$ret = $db->exec("PRAGMA busy_timeout=1500");

// do this when you need the result
$ret = $db->query("SELECT eu,us,date FROM [$site] ORDER BY date(DATE) DESC 
LIMIT 10");

By the way ... I notice you are creating a table with a variable name.  This is 
usually a bad sign.  It might make more sense to put your data into one table, 
and add a column which contains the $site .  Then you don't need to create a 
new table when you have data for a new site.

Simon.
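
For completeness, the PRAGMA maps onto sqlite3_busy_timeout() in the C API, so a
connection can be given its timeout right after it is opened; a small sketch
(error handling simplified):

    #include <sqlite3.h>

    /* Open a connection and give it a 1500 ms busy timeout before anything
     * else uses it.  Every separate connection needs its own call. */
    static int open_with_timeout(const char *path, sqlite3 **pDb)
    {
        int rc = sqlite3_open(path, pDb);
        if (rc != SQLITE_OK) return rc;
        return sqlite3_busy_timeout(*pDb, 1500);   /* milliseconds */
    }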


Re: [sqlite] Question on locks

2014-10-25 Thread Ali Jawad
Thanks Simon, the create process is a one-off. As for the table name, I used
this approach so as not to accumulate too much data in one table, and instead
split the data across multiple tables. From a design POV, is this a mistake in
SQLite? And will the pragma for PHP eliminate the locks?
On Oct 25, 2014 7:23 PM, "Simon Slavin"  wrote:

> On 25 Oct 2014, at 3:31pm, Ali Jawad  wrote:
>
> > bash script
> >
> > sqlite3 websites.db "PRAGMA busy_timeout=1500;CREATE TABLE [$SITE] (DATE
> > INT ,EU INT , US INT);"
>
> Creating and destroying tables always involves a long lock.
>
> > php script
> >
> > $ret = $db->query("PRAGMA busy_timeout=1500;SELECT eu,us,date FROM
> [$site]
> > ORDER BY date(DATE) DESC  LIMIT 10");
>
> This doesn't work.  The query will process only the query command.  You
> want something more like
>
> // do this just once, soon after creating the $db connection
> $ret = $db->exec("PRAGMA busy_timeout=1500");
>
> // do this when you need the result
> $ret = $db->query("SELECT eu,us,date FROM [$site] ORDER BY date(DATE) DESC
> LIMIT 10");
>
> By the way ... I notice you are creating a table with a variable name.
> This is usually a bad sign.  It might make more sense to put your data into
> one table, and add a column which contains the $site .  Then you don't need
> to create a new table when you have data for a new site.
>
> Simon.


Re: [sqlite] Question on locks

2014-10-25 Thread Simon Slavin

On 25 Oct 2014, at 7:16pm, Ali Jawad  wrote:

> Thanks Simon, the create process is a one off. As for the table name I did
> use this approach as to not accumulate too much data in one table and
> instead split the data in multiple tables. From a design POV  in sqlite is
> this a mistake. And will the pragma for php eliminate locks ?

You should definitely execute the PRAGMA as a separate command, not as part of 
your SELECT command.

I do not know for sure that, done as above, it will fix your lock.  I'm not 
sure why you are getting the locks.  But it is the next step for you to try, 
and if it doesn't fix them it will provide good diagnostic information.

Simon.


Re: [sqlite] Regression with sqlite 3.8.7

2014-10-25 Thread Luigi Iemma
thanks for answers

Luigi.

2014-10-25 15:46 GMT+02:00 Richard Hipp :

> On Thu, Oct 23, 2014 at 11:55 AM, Luigi Iemma 
> wrote:
>
> > Hi,
> >
> > SELECT TdoIdoc,RdoCart,RdoQuat
> >  FROM Tesdoc
> >  INNER JOIN Rigdoc ON RdoIdoc BETWEEN TdoIdoc*1000 AND
> > TdoIdoc*1000+999
> >  WHERE TdoTipo=60 AND TdoAnno BETWEEN 2014 AND 2014
> >  GROUP BY TdoIdoc
> >
> > When I run this query on 3.8.5 it takes 0.126 seconds,
> > when I run this query on 3.8.7 it takes 17.37 seconds
> >
>
> Version 3.8.7 (also version 3.8.6) is using a different query plan.
>
> Actually, 3.8.6 and 3.8.7 are making the better choice, based on the
> information at hand.  The query planner just doesn't have sufficient
> information to make a good choice in this case.  As Simon pointed out, you
> can run ANALYZE to give the query planner more information.  Or you can
> give it hints.  One possible hint is to add a "likelihood() function around
> the TdoTipo=60 constraint:
>
> SELECT TdoIdoc,RdoCart,RdoQuat
>  FROM Tesdoc, Rigdoc
> WHERE RdoIdoc BETWEEN TdoIdoc*1000 AND TdoIdoc*1000+999
>   AND likelihood(TdoTipo=60, 0.01)  <<< HERE
>   AND TdoAnno BETWEEN 2014 AND 2014
>  GROUP BY TdoIdoc;
>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] unique with icu

2014-10-25 Thread dd
Any inputs?

On Sat, Oct 25, 2014 at 3:09 PM, dd  wrote:

> Hi,
>
>   icu enabled for sqlite. I didn't do any custom collations/like operator.
>
> CREATE TABLE test(id integer primary key autoincrement, t text collate
> nocase, unique(t));
>
> Case 1:  When I try to insert 'd' and 'D', throwing constraint violation.
>  (SUCCESS)
> Case 2:  When I try to insert 'ö' and 'Ö', sqlite inserted both
> sucessfully. (FAILED)
>
>  I expect case 2 should throw constraint violation, but not.  Am I missing
> anything here?
>
> Thanks.
>


Re: [sqlite] unique with icu

2014-10-25 Thread Richard Hipp
On Sat, Oct 25, 2014 at 7:09 AM, dd  wrote:

> Hi,
>
>   icu enabled for sqlite. I didn't do any custom collations/like operator.
>
> CREATE TABLE test(id integer primary key autoincrement, t text collate
> nocase, unique(t));
>
> Case 1:  When I try to insert 'd' and 'D', throwing constraint violation.
>  (SUCCESS)
> Case 2:  When I try to insert 'ö' and 'Ö', sqlite inserted both
> sucessfully. (FAILED)
>
>  I expect case 2 should throw constraint violation, but not.  Am I missing
> anything here?
>
>
>
Please read https://www.sqlite.org/src/artifact/d9fbbad0c2f and especially
the part about ICU collating sequences.  "NOCASE" is still the standard
ASCII-only collating sequence, even if you enable ICU.



-- 
D. Richard Hipp
d...@sqlite.org
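
To make the distinction concrete: in a build compiled with SQLITE_ENABLE_ICU,
the overridden upper() and lower() scalar functions do full Unicode case mapping
even though NOCASE itself stays ASCII-only. A small sketch (string literals
assumed to be UTF-8, error handling omitted):

    #include <sqlite3.h>
    #include <stdio.h>

    /* Expected output with ICU enabled: upper('ö') = Ö, NOCASE equal = 0,
     * i.e. the case mapping is Unicode-aware but NOCASE is not. */
    static void show_icu_case_mapping(sqlite3 *db)
    {
        sqlite3_stmt *stmt;
        const char *sql = "SELECT upper('ö'), 'ö' = 'Ö' COLLATE NOCASE";
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, 0) == SQLITE_OK &&
            sqlite3_step(stmt) == SQLITE_ROW) {
            printf("upper('ö') = %s, NOCASE equal = %d\n",
                   (const char *)sqlite3_column_text(stmt, 0),
                   sqlite3_column_int(stmt, 1));
        }
        sqlite3_finalize(stmt);
    }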


Re: [sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Ross Altman
Thanks for all the responses. The small integer column H11 comes before the
large string column NVERTS, so doesn't that mean SQLite is only loading the
minimum required while filtering? If that's the case then I don't
understand why it's taking up to 15 minutes to load.

Also, yes this database is local, and not being read over a network.

Best,
Ross

On Sat, Oct 25, 2014 at 9:15 AM, Richard Hipp  wrote:

> On Sat, Oct 25, 2014 at 9:14 AM, Stephen Chrzanowski 
> wrote:
>
> > Ahh.. Thanks Richard.  So if you were to have blobs live at the front of
> > the row, it'll have to read through that blob to get that byte in the
> next
> > field, correct?
> >
> >
> Correct.
> --
> D. Richard Hipp
> d...@sqlite.org


Re: [sqlite] Installing 3.8.7 gives 3.8.6

2014-10-25 Thread Arnaud Meuret


Interesting, thanks.

It is notorious that whoever maintains the RedHat package curiously decided
a few years ago to break this and release "a 0.8.6" version that does not
include sqlite3_prepare_v2(), creating a lot of confusion. Can you confirm
that it is safe to nuke this crippled version and replace it with any
correct 0.8.6?


--A

On Sat, 25 Oct 2014 16:55:56 +0200, Dan Kennedy   
wrote:



On 10/25/2014 09:36 PM, Arnaud Meuret wrote:

Hi everyone,

On a fresh CentOS 5, compiling SQLite using the [current tarball][1]  
for 3.8.7, the version being installed ends up in `/usr/local/lib` as  
**3.8.6**:


bash-3.2# make install
make[1]: Entering directory `/mongrel2/sqlite-autoconf-3080700'
 /bin/mkdir -p '/usr/local/lib'
 /bin/sh ./libtool   --mode=install /usr/bin/install -c  
libsqlite3.la '/usr/local/lib'
libtool: install: /usr/bin/install -c .libs/libsqlite3.so.0.8.6  
/usr/local/lib/libsqlite3.so.0.8.6
libtool: install: (cd /usr/local/lib && { ln -s -f  
libsqlite3.so.0.8.6 libsqlite3.so.0 || { rm -f libsqlite3.so.0 && ln -s  
libsqlite3.so.0.8.6 libsqlite3.so.0; }; })
libtool: install: (cd /usr/local/lib && { ln -s -f  
libsqlite3.so.0.8.6 libsqlite3.so || { rm -f libsqlite3.so && ln -s  
libsqlite3.so.0.8.6 libsqlite3.so; }; })
libtool: install: /usr/bin/install -c .libs/libsqlite3.lai  
/usr/local/lib/libsqlite3.la
libtool: install: /usr/bin/install -c .libs/libsqlite3.a  
/usr/local/lib/libsqlite3.a


 [1]: https://www.sqlite.org/2014/sqlite-autoconf-3080700.tar.gz


Is it normal ? I expected to find libsqlite3.so.0.8.7.


It's normal. I think that number is only supposed to change if the new  
release is not ABI compatible with the previous. It's a coincidence that  
"8.6" matches the tail of the previous release number. Earlier releases  
also used "libsqlite3.so.0.8.6".


Dan.




Re: [sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Simon Slavin

On 25 Oct 2014, at 9:07pm, Ross Altman  wrote:

> Thanks for all the responses. The small integer column H11 comes before the
> large string column NVERTS, so doesn't that mean SQLite is only loading the
> minimum required while filtering? If that's the case then I don't
> understand why it's taking up to 15 minutes to load.

The searching is taking the time.  Because, as a number of other people have 
pointed out, there is no index on the H11 column, so SQLite has to look at 
every row in the table to see whether it qualifies for your SELECT.  Create an 
index by doing something like

CREATE INDEX ToricCY_H11 ON ToricCY (H11)

then do as many SELECTs as you want.

Simon.
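
A quick way to confirm the new index is actually being used is to look at
EXPLAIN QUERY PLAN: after the CREATE INDEX it should report a SEARCH using the
index rather than a full-table SCAN. A rough sketch from C (the same statement
works as-is in the shell):

    #include <sqlite3.h>
    #include <stdio.h>

    /* Print the plan for the slow query; the detail text is the last
     * column of each EXPLAIN QUERY PLAN row. */
    static void show_plan(sqlite3 *db)
    {
        sqlite3_stmt *stmt;
        const char *sql =
            "EXPLAIN QUERY PLAN SELECT NVERTS FROM ToricCY WHERE H11=2";
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, 0) != SQLITE_OK) return;
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            printf("%s\n", (const char *)sqlite3_column_text(stmt,
                               sqlite3_column_count(stmt) - 1));
        }
        sqlite3_finalize(stmt);
    }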


Re: [sqlite] Performing a SELECT on an enormous database...

2014-10-25 Thread Ross Altman
Oh, I see! Thanks for the clarification, Simon! I wasn't aware that there
was a command for indexing. That should definitely help a lot.

On Sat, Oct 25, 2014 at 4:32 PM, Simon Slavin  wrote:

>
> On 25 Oct 2014, at 9:07pm, Ross Altman  wrote:
>
> > Thanks for all the responses. The small integer column H11 comes before
> the
> > large string column NVERTS, so doesn't that mean SQLite is only loading
> the
> > minimum required while filtering? If that's the case then I don't
> > understand why it's taking up to 15 minutes to load.
>
> The searching is taking the time.  Because, as a number of other people
> have pointed out, there is no index on the H11 column, so SQLite has to
> look at every row in the table to see whether it qualifies for your
> SELECT.  Create an index by doing something like
>
> CREATE INDEX ToricCY_H11 ON ToricCY (H11)
>
> then do as many SELECTs as you want.
>
> Simon.


Re: [sqlite] unique with icu

2014-10-25 Thread Joseph R. Justice
On Sat, Oct 25, 2014 at 3:44 PM, Richard Hipp  wrote:

> On Sat, Oct 25, 2014 at 7:09 AM, dd  wrote:
>


> > Hi,
> >
> >   icu enabled for sqlite. I didn't do any custom collations/like
> operator.
> >
> > CREATE TABLE test(id integer primary key autoincrement, t text collate
> > nocase, unique(t));
> >
> > Case 1:  When I try to insert 'd' and 'D', throwing constraint violation.
> >  (SUCCESS)
> > Case 2:  When I try to insert 'ö' and 'Ö', sqlite inserted both
> > sucessfully. (FAILED)
> >
> >  I expect case 2 should throw constraint violation, but not.  Am I
> missing
> > anything here?
>
> Please read https://www.sqlite.org/src/artifact/d9fbbad0c2f and especially
> the part about ICU collating sequences.  "NOCASE" is still the standard
> ASCII-only collating sequence, even if you enable ICU.
>

I'm looking at that page now.  (I'm not especially interested in this issue
-- it's just a whim.)



When I look at the following text from section 1.1:

To utilise "general" case mapping, the upper() or lower() scalar
functions are invoked with one argument:

upper('ABC') -> 'abc'
lower('abc') -> 'ABC'

I wonder if the examples shown are reversed, and it should be
upper('abc') -> 'ABC', et al.



W.r.t. the original poster's question, given what DRH has said here, I
wonder if they might have to introduce a third column to the table, say
t_unique, where t_unique is defined to be the value of t after it has been
passed through the upper() and/or lower() functions (as they are defined by
1.1), and in the definition of the table instead of unique(t) have
unique(t_unique).  Obviously this would bloat the size of the table in any
real usage (since t and t_unique could potentially be quite long).  And I
recognize that such bloat may be unacceptable to the original poster.

Perhaps a "good enough" solution would be to define t_unique as being a
hash of the value of upper(t) (or lower(t)), and again have
unique(t_unique).  I recognize this could potentially lead to a false
positive (two unrelated strings hashing to the same value and therefore
colliding), but hopefully the chance of that occurring in real life would
be acceptably small.  Alternatively, define t_unique to be a concatenation
of two or three hash values of upper(t), where each hash value is generated
using a different hash function -- the chance of two unrelated strings
hashing to the same value with two or three unrelated hash functions should
be far, far smaller even than that of them hashing to the same value under
only one function.  (Pulling numbers out of thin air, if we say the chance
of two unrelated strings hashing to the same value with any reasonable hash
function is no more likely than 1x10^-10, then concatenating the value of
three unrelated hash values should result in a collision no more often at
worst than 1x10^-30 (assuming my math is correct), which is 20 orders of
magnitude (or more!) smaller than the chance using just one hash function.)
 Of course, here we are probably going to use far more CPU, since we will
be hashing strings at least once (and possibly two or three times,
depending on the chosen implementation) and this is not necessarily a cheap
thing to compute.

Alternatively, and this would require programming / API changes to SQLite,
perhaps a "nocase_icu" could be defined / created which, in the absence of
ICU being enabled for sqlite, works identically to how "nocase" works now,
but which when ICU is enabled works in the fashion expected by the original
poster as to how they thought plain "nocase" would work with ICU enabled.
 (I assume that changing how "nocase" works when ICU is enabled is
unacceptable since that would be a backwards-incompatible change, and it's
possible there are programs existing which depend on the current behavior.)



Hope this is of some use, interest.  Thanks for your time.



Joseph
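
A bare-bones sketch of the extra-column idea above, without the hashing
refinement: the application stores a case-folded copy alongside the original and
puts the UNIQUE constraint on the copy. It relies on the ICU-overridden lower()
for the folding; the names and schema are illustrative only.

    #include <sqlite3.h>

    /* Shadow-column approach: t keeps the original text, t_unique holds
     * lower(t) (Unicode-aware with ICU) and carries the UNIQUE constraint. */
    static int setup_and_insert(sqlite3 *db, const char *value)
    {
        sqlite3_stmt *stmt;
        int rc = sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS test("
            "  id       INTEGER PRIMARY KEY AUTOINCREMENT,"
            "  t        TEXT,"
            "  t_unique TEXT UNIQUE"
            ");", 0, 0, 0);
        if (rc != SQLITE_OK) return rc;

        rc = sqlite3_prepare_v2(db,
            "INSERT INTO test(t, t_unique) VALUES(?1, lower(?1));",
            -1, &stmt, 0);
        if (rc != SQLITE_OK) return rc;
        sqlite3_bind_text(stmt, 1, value, -1, SQLITE_TRANSIENT);
        rc = sqlite3_step(stmt);        /* SQLITE_CONSTRAINT on a clash */
        sqlite3_finalize(stmt);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }

With ICU enabled, inserting 'ö' and then 'Ö' makes the second insert fail with a
constraint violation, which is the behaviour asked about earlier in the thread.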


Re: [sqlite] quasi-bug related to locking, and attached databases....

2014-10-25 Thread dave
> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Stephan Beal
> Sent: Saturday, October 25, 2014 3:32 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] quasi-bug related to locking,and 
> attached databases
...
> 
> Simon, FYI: this is the "'main' db aliasing" problem i 
> brought up a couple
> months ago, which we solved by using your suggestion: re-attach the db
> directly after opening it, so that (as Dave said) all DBs 
> have well-known
> internal names regardless of what order they get opened in.
> 
> Reproduced here with 3.8.6 on Linux/x64:
> 
> sqlite>  insert or replace into main.dest ( name, value ) values
> ('allow',(select value from aux.source where name = 'allow'));
> Error: database is locked


Really, it's a fundamental problem irrespective of 'self attached
databases', it happens any time you attach a database more than once.  I
wish I realized that when I first reported it.  E.g.:

C:\Documents and Settings\person>sqlite3 db2.db
SQLite version 3.6.16
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table dest ( name text, value text );
sqlite> create table source ( name text, value text );
sqlite> insert into source ( name, value ) values ( 'allow', 'yes' );
sqlite> .exit

C:\Documents and Settings\person>sqlite3 db1.db
SQLite version 3.6.16
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> attach database 'db2.db' as dest;
sqlite> attach database 'db2.db' as src;
sqlite> .datavases
unknown command or invalid arguments:  "datavases". Enter ".help" for help
sqlite> .databases
seq  name file
---  ---
--
0main C:\Documents and Settings\person\db1.db
2dest C:\Documents and Settings\person\db2.db
3src  C:\Documents and Settings\person\db2.db
sqlite> insert or replace into dest.dest (name, value) values
('allow',(select value from src.source where name = 'allow' ));
SQL error: database is locked


I would imagine that a possible clean fix would be for the pager to have a
'lock count', locking only when it goes from 0 to 1.  If that is actually
workable, then all the rest of SQLite can blithely carry on with no
modification.

-dave




[sqlite] bug, or misuse? crash using sqlite3_mutex_alloc()...

2014-10-25 Thread dave
I am doing the following, which causes a crash:
 
  sqlite3_mutex_enter(sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER));
 
The crash occurs if this is the first SQLite call I make.  Looking at the
source, there is a line:
if( id<=SQLITE_MUTEX_RECURSIVE && sqlite3_initialize() ) return 0;
 
Since the mutex I am getting is a static one, sqlite3_initialize() is not
called, and so all the fxn ptrs for the mutex impl are NULL, and crashing
ensues.
 
I work around this by making an explicit call to sqlite3_initialize() prior
to the line I describe above, but I ask this question in case the 'id
side-steps init()' logic is done for some reason I don't understand.
Otherwise, it looks like a logic error to me.  Any advice appreciated.
 
tia,
-dave
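
For reference, the workaround reads roughly like this; sqlite3_initialize() is
documented as safe to call more than once, so the extra call is harmless on
builds where the crash never happens:

    #include <sqlite3.h>

    /* Make sure the library is initialised before a static mutex is the
     * very first SQLite call the process makes. */
    static sqlite3_mutex *master_mutex(void)
    {
        if (sqlite3_initialize() != SQLITE_OK) return 0;
        return sqlite3_mutex_alloc(SQLITE_MUTEX_STATIC_MASTER);
    }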
 


Re: [sqlite] FTS pagination

2014-10-25 Thread supermariobros
Well, they all give exactly the same output.

sqlite> EXPLAIN QUERY PLAN SELECT rowid  FROM activity_text_content WHERE
activity_text_content MATCH 'x' ORDER BY rowid ASC LIMIT 100; 
0|0|0|SCAN TABLE activity_text_content VIRTUAL TABLE INDEX 4:ASC (~0 rows)

sqlite> EXPLAIN QUERY PLAN SELECT rowid  FROM activity_text_content WHERE
activity_text_content MATCH 'x' AND rowid>1000 ORDER BY rowid ASC LIMIT 10; 
0|0|0|SCAN TABLE activity_text_content VIRTUAL TABLE INDEX 4:ASC (~0 rows)

sqlite> EXPLAIN QUERY PLAN SELECT rowid  FROM activity_text_content WHERE
activity_text_content MATCH 'x' LIMIT 100;
0|0|0|SCAN TABLE activity_text_content VIRTUAL TABLE INDEX 4: (~0 rows)


It almost looks like EXPLAIN ignores the second part where rowid is compared
or sorted.
How should I understand that?



