Re: [sqlite] how to store latin strings with no case

2014-10-23 Thread dd
Thanks for your response.

https://www.sqlite.org/faq.html#q18

>> The SQLite source code includes an "ICU" extension that does these
>> overloads.

Will it solve this for all charsets if ICU is enabled?

On Thu, Oct 23, 2014 at 6:28 PM, Constantine Yannakopoulos <
alfasud...@gmail.com> wrote:

> On Thu, Oct 23, 2014 at 2:47 PM, dd  wrote:
>
> > Hi,
> >
> >   My database schema is defined with COLLATE NOCASE, which only works for
> > ASCII (upper and lower case). If I want the database to support other
> > characters with NOCASE, what steps do I need to consider during schema design?
> >
> > for ex:
> >
> > *À Á Â Ã Ä Å Æ = *
> >
> > * à á â ã ä å æ *
> >
>
> You can write your own collations to support custom comparing and sorting
> of strings or any other data and add them to SQLite.
>
> There is a pitfall though: the LIKE operator will not honour your collation.
> You may choose to override it by using the sqlite3_create_function() API so
> that it does, but if you do you will lose the LIKE optimization.
>
> --
> Constantine
>


Re: [sqlite] Performance regression in Banshee with sqlite 3.8.7 on Arch Linux

2014-10-23 Thread Richard Hipp
On Thu, Oct 23, 2014 at 4:03 PM, Tomislav Ljubej  wrote:

> I've searched Banshee's source code and it seems 'CoreCache' is a TEMP
> table according to some comments in the code but I have no clue where
> it's actually defined, there is no 'CREATE TABLE' statement in the
> code for that particular table (I've searched even with TEMP or
> TEMPORARY keywords, nothing). Then I've googled around and found this
> definition:
>
> CREATE TABLE CoreCache (
> OrderID INTEGER PRIMARY KEY,
> ModelID INTEGER,
> ItemID INTEGER);
>

Thanks.  That was sufficient for me to reproduce the problem.  The problem
is caused by this check-in:

   http://www.sqlite.org/src/info/0bdf1a086b

And that check-in was necessary to ensure correct behavior in certain
obscure circumstances.  So I cannot easily back it out.  But I can work to
try to find a different optimization that makes your query run faster.

In the meantime, may I suggest rewriting your query.  The query you have is
this:

INSERT INTO CoreCache (ModelID, ItemID)
SELECT 9, CoreTracks.TrackID
  FROM (SELECT MIN(CoreTracks.TrackID) AS TrackID, CoreTracks.Year
          FROM CoreTracks GROUP BY CoreTracks.Year) AS CoreTracks
 WHERE CoreTracks.Year IN
       (SELECT CoreTracks.Year FROM CoreTracks, CoreCache
         WHERE CoreCache.ModelID = 71
           AND CoreCache.ItemID = CoreTracks.TrackID)
 ORDER BY Year;

I suggest rewriting it as follows:

INSERT INTO CoreCache (ModelID, ItemID)
SELECT 9, MIN(TrackID)
  FROM CoreTracks
 GROUP BY Year
HAVING Year IN
       (SELECT CoreTracks.Year FROM CoreTracks, CoreCache
         WHERE CoreCache.ModelID = 71
           AND CoreCache.ItemID = CoreTracks.TrackID)
 ORDER BY Year;

Or perhaps this:

INSERT INTO CoreCache (ModelID, ItemID)
SELECT 9, MIN(TrackID)
  FROM CoreTracks
 WHERE Year IN
       (SELECT CoreTracks.Year FROM CoreTracks, CoreCache
         WHERE CoreCache.ModelID = 71
           AND CoreCache.ItemID = CoreTracks.TrackID)
 GROUP BY Year
 ORDER BY Year;

In 3.8.6, SQLite was making the above transformation automatically.  But
there are corner cases where this transformation is not valid and so it was
disabled for 3.8.7, which is apparently what is causing your slowdown.

No promises, but I will try to make this transformation occur automatically
again for SQLite 3.8.8, at least in your case where it does appear to be
valid.

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Please fix the EBCDIC support

2014-10-23 Thread John McKown
On Thu, Oct 16, 2014 at 4:33 PM, k  wrote:

> Hi all,
> this is my first reply-to post to this mailing list (using the gmane NNTP
> interface), so I hope this post passes moderation OK and is correctly
> threaded and not duplicated.
>
> Regarding the inability to use databases created on EBCDIC systems on
> 'normal' systems, my initial thoughts are that sqlite should at least
> create databases with the magic number 'SQLite format 3' correctly - ie the
> magic number should be encoded in the source as a hex literal rather than a
> string.
>

I did this on a port of 3.8.7 that I am working on. It turns out that this
is a const char array which is initialized from a C preprocessor macro called
SQLITE_FILE_HEADER. I created a "do_config.sh" file to contain my
./configure invocation. It looks like:

export CFLAGS='-O3 -V -DSQLITE_MAX_MMAPSIZE=1048576 -qTARG=zOSV1R12 '
export CFLAGS="${CFLAGS} -qLANG=EXTC99 -qFLOAT=IEEE -qnolist -qnosource "
export CFLAGS="${CFLAGS} -D_POSIX_C_SOURCE=200112L -D_XOPEN_SOURCE=600"
export CFLAGS="${CFLAGS} -DSQLITE_ENABLE_COLUMN_METADATA"
export CFLAGS="${CFLAGS} -DSQLITE_ENABLE_SQLLOG"
export CFLAGS="${CFLAGS} -DHAVE_POSIX_FALLOCATE=0"
export CFLAGS="${CFLAGS} -DSQLITE_FILE_HEADER='\"123121114151164145040146157162155141164040063000\"'"
# The above is the octal encoding for the phrase "SQLite format 3\0" in ASCII.
# This is the first phase of being compatible with the ASCII version of the
# sqlite data base file.
# Each octal value requires four \ in front of it due to multiple shell
# evaluations. When presented to the compiler, it will look like a single \
# as it really should.
./configure CC=xlc CPP="xlc -E" CXX=xlc++ CXXPP="xlc++ -E" --prefix=$PWD/sqlite-run


Guess what? I can use "sqlite3" on either Linux or z/OS to create a test
sqlite3 data base file. I can ftp it, in binary, to the other system. And
on that system, I can access it without an error message. !! HOWEVER !! Due
to the differences in encoding, neither side can see any tables created by
the other side. I have not had any time to check up on why this is so. I
remember some of the code does a check to validate any "name". And a "name"
encoded in EBCDIC does not validate when accessed on an ASCII system. And
vice versa. "fixing" this, first of all, would likely be a major task. And,
more importantly, likely above my current C knowledge.



>
> Regarding codepage conversions, how does this normally work? As Teg said
> (or alluded to), should it not be the application's responsibility to do the
> codepage conversion and pass to the sqlite engine the text in the
> appropriate Unicode encoding (UTF-8/UTF-16) as required. I'm not sure
> though, if there is a distinction here between using the bind functions of
> the API vs supplying literal text in the SQL. ** The documentation on the C
> API does not say a lot about how encodings are  handled for text (bound or
> in the SQL) **. I would expect in any case, if the application is locale
> aware it should respect the settings of the respective LC_* environment
> variables and perform the necessary conversions before calling the sqlite
> functions??
>
> Regarding the specific case of performing EBCDIC<->ASCII conversions on
> zOS, this is (IMHO) not the approach to take, since (outside of the unix
> subsystem of zOS) any EBCDIC codepage can be in use, and as per the above,
> a locale aware application should take care of this, rather than leaving it
> to the sqlite engine.
>
> All the above of course would (again IMHO) be moot for blobs, where
> everything would be stored as is.
>
> On the subject of support for sqlite on zOS, has anyone investigated the
> possibility on making the command line interface, or the engine itself (via
> a custom vfs) support the MVS (record orientated) filesystem?
>
> Please share your thoughts,
> Thanks in advance,
>
> k
>
>
To address all of "k"'s requirements, I _think_ that it would require a
change in SQLite's architecture. SQLite would really need to work in,
perhaps, UTF-8 for all its internal information. I say UTF-8 because most
of the data kept in z/OS EBCDIC (IBM-1047) can be "round trip" converted
to/from UTF-8. And using UTF-8 would likely _not_ impact any current ASCII
users. But that is up to Dr. Hipp to decide. Definitely above my pay grade.


-- 
The temperature of the aqueous content of an unremittingly ogled
culinary vessel will not achieve 100 degrees on the Celsius scale.

Maranatha! <><
John McKown


Re: [sqlite] Performance regression in Banshee with sqlite 3.8.7 on Arch Linux

2014-10-23 Thread Tomislav Ljubej
I've searched Banshee's source code and it seems 'CoreCache' is a TEMP
table according to some comments in the code but I have no clue where
it's actually defined, there is no 'CREATE TABLE' statement in the
code for that particular table (I've searched even with TEMP or
TEMPORARY keywords, nothing). Then I've googled around and found this
definition:

CREATE TABLE CoreCache (
OrderID INTEGER PRIMARY KEY,
ModelID INTEGER,
ItemID INTEGER);

On this URL: 
https://mail.gnome.org/archives/banshee-list/2009-January/msg00174.html

Anyway I have attached the .fullschema output.

On Thu, Oct 23, 2014 at 3:09 AM, Richard Hipp  wrote:
> Thanks for sending in the performance regression report.
>
> On Wed, Oct 22, 2014 at 1:51 PM, Tomislav Ljubej  wrote:
>
>> Hello,
>>
>> There is a performance regression in the media player Banshee on the
>> query that is in the attachment. The query takes about 13 seconds on
>> my i7 4770 processor. I tried the latest stable version before the
>> current one which is 3.8.6.1 and the performance regression doesn't
>> happen, the query completes in less than half a second.
>>
>> The query is run 3 times at Banshee startup so it slows down the
>> startup of the application significantly.
>>
>> Also attached is the .schema output for the database Banshee uses. I'm
>> not sure why the 'CoreCache' table isn't mentioned in the schema but
>> I'm pretty sure it's actually referring to 'CoreCacheModels', I got
>> the query from Banshee's debug output so it may not reflect the actual
>> names in the database.
>>
>
> Several problems:
>
> (1) Your "query" is actually two statements:  a DELETE followed by an
> INSERT.  The INSERT is fed from a SELECT.  Would I be correct in assuming
> that the SELECT is the problem?
>
> (2) You are right that CoreCache is not in the schema, so the SELECT is not
> valid.  I tried changing the name of the CoreCache table in the SELECT into
> CoreCacheModels, but then it complains that CoreCacheModels has no ItemID
> column.
>
> (3) After you determine the correct schema and query, it would be helpful
> to us if you can send us the output of ".fullschema".  The ".fullschema"
> output includes the sqlite_stat1 and sqlite_stat4 tables (if they exist)
> which contain histogram data used by the query planner.  Those table (if
> they exist) are needed by us in order to reproduce the behavior your are
> observing.
>
> (4) If you are able to send us your complete database file, perhaps by
> private email, that would be even more helpful, but is not absolutely
> necessary.
>
>
>>
>> --
>> Tomislav Ljubej
>>
>>
>>
>
>
> --
> D. Richard Hipp
> d...@sqlite.org



-- 
Tomislav Ljubej
CREATE TABLE CoreConfiguration (
EntryID INTEGER PRIMARY KEY,
Key TEXT,
Value   TEXT
);
CREATE TABLE CorePrimarySources (
PrimarySourceID INTEGER PRIMARY KEY,
StringIDTEXT UNIQUE,
CachedCount INTEGER,
IsTemporary INTEGER DEFAULT 0
);
CREATE TABLE CoreTracks (
PrimarySourceID INTEGER NOT NULL,
TrackID INTEGER PRIMARY KEY,
ArtistIDINTEGER,
AlbumID INTEGER,
TagSetIDINTEGER,
ExternalID  INTEGER,

MusicBrainzID   TEXT,

Uri TEXT,
MimeTypeTEXT,
FileSizeINTEGER,
BitRate INTEGER,
SampleRate  INTEGER,
BitsPerSample   INTEGER,
Attributes  INTEGER DEFAULT 5,
LastStreamError INTEGER DEFAULT 0,

Title   TEXT,
TitleLoweredTEXT,
TitleSort   TEXT,
TitleSortKeyBLOB,
TrackNumber INTEGER,
TrackCount  INTEGER,
DiscINTEGER,
DiscCount   INTEGER,
DurationINTEGER,
YearINTEGER,
Genre   TEXT,
ComposerTEXT,
Conductor   TEXT,
Grouping  

Re: [sqlite] Any tips on reducing memory requirements for small MCU?

2014-10-23 Thread Richard Hipp
On Thu, Oct 23, 2014 at 2:25 PM, Dennis Field  wrote:

>
>
> Are there any omit/other preprocessor defines that are particularly helpful
> for reducing memory usage?
>
>
SQLITE_SMALL_STACK - but that will require regenerating the amalgamation.



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Any tips on reducing memory requirements for small MCU?

2014-10-23 Thread Dennis Field
Thanks, all! Sorry for the delayed response. I ran into an SD card reading
issue that I'm still trying to solve (multiple block reading), and I've
worked around it for now to keep trying to make progress on the database.

I have switched to memsys3 and the 60 KB heap seems sufficient for now to
run the stated query as well as open a temporary memory table for inquiries
to look for new devices.

I had to put my own global mutex around every access to SQLite, since I
don't seem to be having much luck with SQLite's built-in mutexing (it's
claiming database corruption / malformed schema when I access it from the
Bluetooth thread). Once I put a mutex around every call to SQLite I got
back in business. I suppose I could alternatively restructure the
application so SQLite is always only called from a single thread, but that
sounds about as tricky.

Here is my current schema:

CREATE TABLE `Pairings` (
`Address` blob,
`UserID` integer,
`ClassOfDevice` blob,
`ConnectPriority` integer,
`Name` text,
PRIMARY KEY(Address)
)

CREATE TABLE `LinkKeys` (
`Address` blob,
`LinkKey` blob,
PRIMARY KEY(Address)
)

As per Bluetooth specification, Address is always 6 bytes, LinkKey is
always 16, ClassOfDevice always 3.

I know, not a whole lot. I could maybe get away with putting LinkKey as
another column in Pairings, but this relational setup is a small example of
what might end up happening for other types of data I'd like to store. (If
Bluetooth pairings were the only thing, this might be overkill and I'd just
create a flat structure and store it directly in EEPROM or something like
that)

As I have attempted to start familiarizing myself with how memsys3 works
differently than memsys5, one thing does worry me - memsys3 comes with the
caveat that there is no mathematical guarantee against fragmentation. So, I
guess the next question would be...how do I know when it's fragmented, as
opposed to when it's simply run plumb out as memsys5 did under this
condition? Is there a clean way to wipe the memory and start over again
when this happens? (e.g. close database connection, reopen, voila heap
magically starts over as if first started, no more fragmentation)

I have set my default page size to 512 and will rebuild my database and see
where I end up next.

I've defined about as many SQLITE_OMIT_* things as I can think of that I
won't need (except the ones that would require regenerating the
amalgamation source). Code space is getting a little tight (770kb of 1mb
used so far), but the main thing is memory allocation at the moment
(between SQLite and Bluetooth, both of which need dynamic alloc/free, I've
got half the RAM dedicated to them). Code space can be reduced in a couple
of other ways if I need, and as a last resort I'll regenerate amalgamation
source with a few more omit flags.

Are there any omit/other preprocessor defines that are particularly helpful
for reducing memory usage?

Many thanks! I appreciate all the responses!

On Mon, Oct 20, 2014 at 7:20 PM, Richard Hipp  wrote:

> On Mon, Oct 20, 2014 at 5:21 PM, Dennis Field  wrote:
>
> >
> > When I initialize SQLite, I am instructing it to take a single heap for
> > memsys5 of 60 * 1024.
> >
>
> Have you tried using memsys3 instead of memsys5?  Memsys3 can be more
> memory-efficient.
>
> You might also want to disable LOOKASIDE memory, and set your default page
> size to 512 bytes.
>
> What does your schema look like?
>
> --
> D. Richard Hipp
> d...@sqlite.org
>


Re: [sqlite] Regression with sqlite 3.8.7

2014-10-23 Thread Richard Hipp
On Thu, Oct 23, 2014 at 11:55 AM, Luigi Iemma  wrote:

> Hi,
>
> SELECT TdoIdoc,RdoCart,RdoQuat
>   FROM Tesdoc
>   INNER JOIN Rigdoc ON RdoIdoc BETWEEN TdoIdoc*1000 AND TdoIdoc*1000+999
>  WHERE TdoTipo=60 AND TdoAnno BETWEEN 2014 AND 2014
>  GROUP BY TdoIdoc
>
> When I run this query on 3.8.5 it takes 0.126 seconds,
> when I run this query on 3.8.7 it takes 17.37 seconds
>

Please run the ".fullschema" command from the command-line shell and send
us the output.



> if I change 'INNER JOIN Rigdoc' to 'LEFT JOIN Rigdoc' it takes 0.063
> seconds
>
> the table Rigdoc has column RdoIdoc as its PRIMARY KEY
>
>
> Best regards
> Luigi.
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] Regression with sqlite 3.8.7

2014-10-23 Thread Simon Slavin

On 23 Oct 2014, at 4:55pm, Luigi Iemma  wrote:

> When I run this query on 3.8.5 it takes 0.126 seconds,
> when I run this query on 3.8.7 it takes 17.37 seconds

Can you do an ANALYZE and then try it again?

Simon.


[sqlite] Regression with sqlite 3.8.7

2014-10-23 Thread Luigi Iemma
Hi,

SELECT TdoIdoc,RdoCart,RdoQuat
  FROM Tesdoc
  INNER JOIN Rigdoc ON RdoIdoc BETWEEN TdoIdoc*1000 AND TdoIdoc*1000+999
 WHERE TdoTipo=60 AND TdoAnno BETWEEN 2014 AND 2014
 GROUP BY TdoIdoc

When I run this query on 3.8.5 it takes 0.126 seconds,
when I run this query on 3.8.7 it takes 17.37 seconds
if I change 'INNER JOIN Rigdoc' to 'LEFT JOIN Rigdoc' it takes 0.063 seconds

the table Rigdoc has column RdoIdoc as its PRIMARY KEY


Best regards
Luigi.


Re: [sqlite] how to store latin strings with no case

2014-10-23 Thread Constantine Yannakopoulos
On Thu, Oct 23, 2014 at 2:47 PM, dd  wrote:

> Hi,
>
>   My database schema is defined with COLLATE NOCASE, which only works for
> ASCII (upper and lower case). If I want the database to support other
> characters with NOCASE, what steps do I need to consider during schema design?
>
> for ex:
>
> *À Á Â Ã Ä Å Æ = *
>
> * à á â ã ä å æ *
>

You can write your own collations to support custom comparing and sorting
of strings or any other data and add them to SQLite.

There is a pitfall though: the LIKE operator will not honour your collation.
You may choose to override it by using the sqlite3_create_function() API so
that it does, but if you do you will lose the LIKE optimization.

--
Constantine


Re: [sqlite] [regression] SQLite 3.8.7 causes Evolution to crash

2014-10-23 Thread Jungle Boogie

Dear Richard, Ralf

From: Richard Hipp
Sent: Wed, 22 Oct 2014 21:53:36 -0400
To: General Discussion of SQLite Database
Cc: Ralf Mardorf
Subject: Re: [sqlite] [regression] SQLite 3.8.7 causes Evolution to crash

On Wed, Oct 22, 2014 at 5:14 PM, Paul Menzel <
paulepan...@users.sourceforge.net> wrote:


Dear SQLite folks,


after the upgrade of libsqlite3 from 3.8.6 to 3.8.7 Evolution crashes
with a segmentation fault.

 pool[6371]: segfault at 0 ip   (null) sp a67d26ec error 14

Several people have reproduced this [1].



The problem *might* be an incomplete VFS implementation in Evolution.  I
put a more detailed comment on the Bugzilla ticket.




For posterity, Richard is right:
https://bugzilla.gnome.org/show_bug.cgi?id=738965


In the stack trace linked in Comment 1 above, in Thread 45, I see that the
SQLite routine sqlite3OsRead() invokes an external routine named
camel_sqlite3_file_xRead().  From this I presume that evolution is using a
custom VFS for SQLite that is implemented in the file named "camel-db.c".  Is
that correct?


Thanks for the investigation. You are absolutely right, camel-db provides its
own SQLite VFS to have delayed writes to a disk.


Dr. Hipp, thank you for all your efforts in continuing to make SQLite so
usable and powerful.




--
inum: 883510009027723
sip: jungleboo...@sip2sip.info
xmpp: jungle-boo...@jit.si


Re: [sqlite] FTS pagination

2014-10-23 Thread Clemens Ladisch
supermariobros wrote:
> If I am using an FTS query that looks like this
> SELECT * FROM mail WHERE body MATCH 'sqlite'
> can I add to it "WHERE rowid > 5 AND rowid < 10"
> or will it significantly slow it down?

How much did it slow down when you tested it?

Anyway,
without index:
  sqlite> EXPLAIN QUERY PLAN SELECT * FROM t;
  0|0|0|SCAN TABLE t VIRTUAL TABLE INDEX 0:
with word search:
  sqlite> EXPLAIN QUERY PLAN SELECT * FROM t WHERE t MATCH 'x';
  0|0|0|SCAN TABLE t VIRTUAL TABLE INDEX 3:
your query:
  sqlite> EXPLAIN QUERY PLAN SELECT * FROM t WHERE t MATCH 'x' AND rowid BETWEEN 5 AND 10;
  0|0|0|SCAN TABLE t VIRTUAL TABLE INDEX 393219:

So it uses some index (and the lowest two bits are still set, so it still
uses the FTS index, but this is an implementation detail).

This might be different with a different SQLite version.


> If so what would be the best approach for pagination

The rowid cannot be used for pagination because you get the rowids of the
original rows, not positions in the result set.

You would have to use OFFSET/LIMIT, which is inefficient for
large offsets.


Regards,
Clemens


Re: [sqlite] Unbinding parameters

2014-10-23 Thread Hick Gunter
http://www.sqlite.org/c3ref/clear_bindings.html

-----Original Message-----
From: Baruch Burstein [mailto:bmburst...@gmail.com]
Sent: Thursday, 23 October 2014 13:47
To: General Discussion of SQLite Database
Subject: [sqlite] Unbinding parameters

It says here (https://www.sqlite.org/c3ref/bind_blob.html) that
a) Unbound parameters are interpreted as NULL, and
b) Bindings are not cleared by sqlite3_reset()

Is there any way to clear all bindings, so that if I don't set them again they 
will insert NULL?

--
˙uʍop-ǝpısdn sı ɹoʇıuoɯ ɹnoʎ 'sıɥʇ pɐǝɹ uɐɔ noʎ ɟı 


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This communication (including any attachments) is intended for the use of the 
intended recipient(s) only and may contain information that is confidential, 
privileged or legally protected. Any unauthorized use or dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please immediately notify the sender by return e-mail message and 
delete all copies of the original communication. Thank you for your cooperation.




Re: [sqlite] Unbinding parameters

2014-10-23 Thread Baruch Burstein
Sorry for the noise, but I found it:
https://www.sqlite.org/c3ref/clear_bindings.html

On Thu, Oct 23, 2014 at 2:46 PM, Baruch Burstein 
wrote:

> It says here (https://www.sqlite.org/c3ref/bind_blob.html) that
> a) Unbound parameters are interpreted as NULL, and
> b) Bindings are not cleared by sqlite3_reset()
>
> Is there any way to clear all bindings, so that if I don't set them again
> they will insert NULL?
>
> --
> ˙uʍop-ǝpısdn sı ɹoʇıuoɯ ɹnoʎ 'sıɥʇ pɐǝɹ uɐɔ noʎ ɟı
>



-- 
˙uʍop-ǝpısdn sı ɹoʇıuoɯ ɹnoʎ 'sıɥʇ pɐǝɹ uɐɔ noʎ ɟı


[sqlite] how to store latin strings with no case

2014-10-23 Thread dd
Hi,

  My database schema is defined with COLLATE NOCASE, which only works for
ASCII (upper and lower case). If I want the database to support other
characters with NOCASE, what steps do I need to consider during schema design?

for ex:

*À Á Â Ã Ä Å Æ = *

* à á â ã ä å æ *



Thanks.


[sqlite] Unbinding parameters

2014-10-23 Thread Baruch Burstein
It says here (https://www.sqlite.org/c3ref/bind_blob.html) that
a) Unbound parameters are interpreted as NULL, and
b) Bindings are not cleared by sqlite3_reset()

Is there any way to clear all bindings, so that if I don't set them again
they will insert NULL?

-- 
˙uʍop-ǝpısdn sı ɹoʇıuoɯ ɹnoʎ 'sıɥʇ pɐǝɹ uɐɔ noʎ ɟı


[sqlite] collation problem / sqlite extensions on Android

2014-10-23 Thread Grzes Dev
Hi,

I have an app written in Android/Xamarin.
The database [SQLite version 3.7.4] uses a custom collation sequence.
The collation sequence is loaded into the DB on app startup.
// as described here:
http://www.codeproject.com/Articles/44349/Case-Insensitive-Sort-of-UTF-Data-Using-System-Da

Now when I try to access the database file outside of the app [e.g. from a
console/terminal] I often end up with the error:
Error: no such collation sequence: UTF8CI
// because the collation is loaded only in the app context

I want to create a "loadable module" with the proper collation sequence and
load it with e.g.
sqlite> .load /path/to/my/module.so

Question:
Has anyone tried to compile a custom SQLite extension [ideally one that
registers a collation sequence] for Android and successfully loaded it into
the Android emulator's sqlite?
Any recipes will be appreciated.

Best regards
GT


[sqlite] FTS pagination

2014-10-23 Thread supermariobros
Quick question. If I am using an FTS query that looks like this: "SELECT *
FROM mail WHERE body MATCH 'sqlite'", can I add to it "WHERE rowid > 5 AND
rowid < 10", or will it significantly slow it down? If so, what would be the
best approach for pagination? For example, if I get 500 rows with the
matching term, I obviously do not need them all right away. Should I just use
the first option, and maybe select only rowids and then go through them?

Any suggestions?


Thanks





Re: [sqlite] FTS pagination

2014-10-23 Thread supermariobros
Or maybe, if I am using Android, should it be done at the cursor level?


