Re: [sqlite] library routine called out of sequence

2011-12-24 Thread Tito Ciuro
Hi Sreekumar,

Do you have a db connection shared across threads? This is not advised. Try 
opening a db connection per thread instead.
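
A minimal sketch of the one-connection-per-thread pattern in plain C (the file
name and the per-thread work are placeholders, not your code):

    #include <pthread.h>
    #include <sqlite3.h>
    #include <stdio.h>

    static void *worker(void *arg)
    {
        const char *path = (const char *)arg;
        sqlite3 *db = NULL;

        /* Each thread opens, uses and closes its own connection. */
        if (sqlite3_open_v2(path, &db, SQLITE_OPEN_READWRITE, NULL) != SQLITE_OK) {
            fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
            sqlite3_close(db);
            return NULL;
        }

        /* ... prepare/bind/step/finalize using this thread's db only ... */

        sqlite3_close(db);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)"my.db");
        pthread_create(&t2, NULL, worker, (void *)"my.db");
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }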

-- Tito

On Dec 23, 2011, at 15:06, Sreekumar TP  wrote:

> Hi,
> 
> Yes, I know. I have a multithreaded app with one db connection. I will try to
> reproduce the problem with a simple app.
> 
> However, the difference between the working and non-working cases is that in
> the working case the statements are prepared each time. In the non-working
> case, they are reused.
> 
> The very first call to bind throws this error.
> 
> Sreekumar
> On Dec 23, 2011 2:51 PM, "Pavel Ivanov"  wrote:
>> 
>> The devil is in the details. What you described in general is okay. But I
>> guess the exact code you have has some problems. To find them we
>> should see the code.
>> 
>> And just a first check: do you have a single-threaded application
>> accessing the database?
>> 
>> 
>> Pavel
>> 
>> 
>> On Fri, Dec 23, 2011 at 3:32 AM, Sreekumar TP 
> wrote:
>>> Hi ,
>>> 
>>> I have a strange problem. I do the following in sequence -
>>> 
>>> 1. open database
>>> 2. set pragmas
>>> 3.create table
>>> 4. prepare a parametrised statement (SELECT ...)
>>> 5. bind values to the statement
>>> 6. step
>>> 
>>> At 5, I get the error  "library routine called out of sequence".
>>> 
>>> If I replace the parameterised statement  with a 'non-parameterised'
>>> statement, I have no issues.
>>> 
>>> If I use a parameterized INSERT statement , I do not get the error.
>>> 
>>> Any suggestions?
>>> 
>>> -Sreekumar
>>> ___
>>> sqlite-users mailing list
>>> sqlite-users@sqlite.org
>>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-24 Thread Tito Ciuro
Hi,

It has worked fairly well with small databases, but I see the problem with 
medium to large files. Have you tried to run ANALYZE on your database? I'm 
curious to know how long it takes.
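
A minimal way to kick it off from the C API, assuming an already open handle
(timing it is left to whatever clock you prefer):

    #include <sqlite3.h>
    #include <stdio.h>

    /* Runs ANALYZE; the gathered statistics end up in sqlite_stat1. */
    static int run_analyze(sqlite3 *db)
    {
        char *err = NULL;
        int rc = sqlite3_exec(db, "ANALYZE;", NULL, NULL, &err);
        if (rc != SQLITE_OK) {
            fprintf(stderr, "ANALYZE failed: %s\n", err);
            sqlite3_free(err);
        }
        return rc;
    }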

-- Tito

On Jul 24, 2011, at 8:26 AM, Григорий Григоренко wrote:

>> 
>> Perhaps my post dated Aug. 19, 2009 will help a little bit:
>> 
>> http://www.cocoabuilder.com/archive/cocoa/242954-core-data-dog-slow-when-using-first-time-after-boot.html
>> 
>> -- Tito
>> 
> 
> Thanks for sharing.  "Warming" the file is a way to cache the whole database, as I 
> understand it.
> 
> After everything is cached, scattered reads from the database run faster. 
> 
> Unfortunately, in my case the database size is ~3.5 GB; it's too big for this 
> strategy.
> 
> Even if I read at a 25 MB/s rate it would take 3500 / 25 = ~140 seconds just to 
> read the whole db file.
> 
> And, what's more important, I have only 2 GB of RAM. 
> 
> Anyway, thanks for sharing. I guess these cases are similar. 
> 
> 
> To me the problem looks like this: 
> 
> SQLite needs to read (cache) a lot from the db (too much?) during first-time 
> query execution, even if the query uses a nicely matched index and returns nothing.
> 
> And SQLite does lots of scattered reads during query execution; it doesn't 
> try to batch the reads or anything similar. That's why file caching helps.
> 
> If that's true, I'm not sure there's a simple and nice solution. 
> 
> I'll try some ideas (including normalization) and report results in this 
> topic next week.
> 

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SELECT query first run is VERY slow

2011-07-22 Thread Tito Ciuro
Hello all,

Perhaps my post dated Aug. 19, 2009 will help a little bit:

http://www.cocoabuilder.com/archive/cocoa/242954-core-data-dog-slow-when-using-first-time-after-boot.html

-- Tito

On Jul 22, 2011, at 10:40 AM, Simon Slavin wrote:

> 
> On 22 Jul 2011, at 2:11pm, Григорий Григоренко wrote:
> 
>> Let's say it'll shrink by 50%. Still, if the first-time query timing 
>> changes from 2 min to 1 min, that is still not acceptable. 
>> I cannot wait even a minute without logging.
>> 
>> So, before restructuring the database and re-writing code I just want to 
>> understand SQLite's behaviour.
>> The question is: does SQLite need to load the whole index to start using it? Or 
>> can SQLite use partial loads to quickly find the records it needs (defined by 
>> the columns in the conditions)?
> 
> SQLite loads only the pages it needs.  It doesn't load a whole index into 
> memory unless the entire index fits in one page (only a few rows).  If your 
> search still runs too slowly your problem is more likely to be that you are 
> searching on text fields rather than integer fields.
> 
> Searching on integers is /much/ faster than searching on text fields.  If you 
> store the id code of the computer in your log file rather than the text name 
> of the computer, a search for log entries for a particular computer will be 
> faster.  I don't know what you're doing with your 'kind' column, but again if 
> you assign integer values to your different values for 'kind' and store those 
> instead, your search will be faster.
> 
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
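
As a rough sketch of Simon's integer-column suggestion above (the table and
column names here are made up, not taken from the original schema):

    #include <sqlite3.h>
    #include <stdio.h>

    static void query_by_integer_ids(sqlite3 *db)
    {
        sqlite3_stmt *stmt = NULL;

        /* Store small integer ids instead of repeated text names. */
        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS log ("
            "  ts INTEGER, computer_id INTEGER, kind INTEGER, msg TEXT);"
            "CREATE INDEX IF NOT EXISTS log_computer_kind"
            "  ON log(computer_id, kind);",
            NULL, NULL, NULL);

        /* The search then compares integers, not text. */
        sqlite3_prepare_v2(db,
            "SELECT ts, msg FROM log WHERE computer_id = ? AND kind = ?;",
            -1, &stmt, NULL);
        sqlite3_bind_int(stmt, 1, 42);   /* id looked up once from a 'computers' table */
        sqlite3_bind_int(stmt, 2, 3);    /* numeric code standing in for 'kind' */
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%lld %s\n",
                   (long long)sqlite3_column_int64(stmt, 0),
                   (const char *)sqlite3_column_text(stmt, 1));
        sqlite3_finalize(stmt);
    }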

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using SQLite with R*Tree and FTS3 support in iOS

2011-05-16 Thread Tito Ciuro
Hi Steven.

You're absolutely right.

Adding the sources to a real iOS project (standard Xcode 4, View-based iOS app) 
results in 885 KB. Great to see the linker/stripping process remove all this 
symbol info.

Thanks for the help,

-- Tito

On May 16, 2011, at 8:49 AM, Steven Parkes wrote:

> Well, for my part, I forgot that I'm not including the R-tree stuff at this 
> point. I do include FTS (that's why I have a custom build, not to mention the 
> latest WAL stuff.)
> 
> And to the extent it matters, I'm not using LLVM.
> 
> But I am building -O0 -ggdb. That doesn't do any inlining or dead-code 
> elimination and it throws lots and lots and lots of symbol stuff in. Lots.  
> Really. Lots.
> 
> Unless I'm missing something, you're looking at something that doesn't 
> matter. The size of the .a is very weakly correlated to the resulting size of the 
> executable. You don't ship the .a (or you certainly shouldn't). It just 
> gets linked into your executable and then you strip your executable. Both the 
> linking and the stripping steps remove a ton of symbol information. Much of 
> the size of the sqlite .o's and .a's is symbols that are going to be dumped.
> 
> To give you an idea, as I said, my sqlite library is about 3M. My custom 
> openssl build is 17M. My own library is 37M. The final executable in 
> debugging mode is 7M before strip and 5M after. Again, with -O0.
> 
> I'm pretty sure this is a non-issue.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using SQLite with R*Tree and FTS3 support in iOS

2011-05-16 Thread Tito Ciuro
Hi Richard,

On May 16, 2011, at 8:40 AM, Richard Hipp wrote:

> I took the amalgamation file (sqlite3.c) and compiled it thusly:
> 
>   gcc -Os -DSQLITE_ENABLE_FTS3 -DSQLITE_ENABLE_RTREE -c sqlite3.c
> 
> The resulting binary size (as reported by the "size" command) is 392,203
> bytes.  That's 383 KiB.  I don't know how you are getting multi-megabyte
> builds.

Darn. I wonder what settings in Xcode 4 are generating this size. I'll 
investigate more.

Thanks for the help,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using SQLite with R*Tree and FTS3 support in iOS

2011-05-16 Thread Tito Ciuro
Hi Simon,

No, I don't need it... but someone else might. I was considering adding FTS3 to 
my library in case someone needs to search across several text records. 
Depending on the table size, LIKE or GLOB could be very expensive, because a 
full table scan is not the optimal approach, even if the database 
is not that large on an iPhone.
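
A rough sketch of what the FTS3 route could look like (the table and column
names are invented, and it assumes a build with FTS3 enabled):

    #include <sqlite3.h>
    #include <stdio.h>

    static void fts_search(sqlite3 *db)
    {
        sqlite3_stmt *stmt = NULL;

        /* A full-text index over the text records. */
        sqlite3_exec(db, "CREATE VIRTUAL TABLE notes USING fts3(body);",
                     NULL, NULL, NULL);

        /* MATCH uses the full-text index instead of scanning every row. */
        sqlite3_prepare_v2(db,
            "SELECT body FROM notes WHERE notes MATCH 'sqlite';",
            -1, &stmt, NULL);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));
        sqlite3_finalize(stmt);
    }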

To be honest, it's more of a curiosity than actual need. I would not add it 
until it was needed, but I'd like to know what lies ahead.

Thanks Simon,

-- Tito

On May 16, 2011, at 8:26 AM, Simon Slavin wrote:

> 
> On 16 May 2011, at 4:06pm, Marco Bambini wrote:
> 
>> 4.3 MB seems really too big... you are probably building a debug version of 
>> the library.
> 
> Agreed.  I don't know what's wrong but nothing should take 4Meg.
> 
> Also, do you really need FTS3 ?  Try a simple search using LIKE or GLOB.  
> With the small amount of data your app will have (since it has to fit within 
> an iPhone anyway), is your search really too slow for normal use ?
> 
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using SQLite with R*Tree and FTS3 support in iOS

2011-05-16 Thread Tito Ciuro
Hi Steven,

OK. First of all, I messed up by compiling shell.c (which is included in the 
SQLite amalgamated distro.) Removing it brings the size to:

GCC:
- Debug: 3.1 MB
- Release: 3.4 MB

LLVM Compiler 2.0:
- Debug: 3.7 MB
- Release: 4 MB

This is what I did:

1) Create a new project in Xcode, iOS static library
2) Add the SQLite amalgamation sources (sqlite3.c, sqlite3.h, sqlite3ext.h)
3) Choose GCC
4) Build for Run (uses the Debug config)
5) Build for Archive (uses the Release config)
6) Choose LLVM
7) Repeat steps 4 and 5

If you want to check the project, please contact me off-list and I'll gladly 
send it to you.

Cheers,

-- Tito

On May 16, 2011, at 8:18 AM, Steven Parkes wrote:

>> Compiling a static library of SQLite's amalgamated version weighs in at about 
>> 4.3 MB
> 
> Where are you coming up with this number? My .a is 2792KB and that's with 
> both armv6 and armv7, debugging, and full symbols.
> 
> I pull in sqlite3, openssl, about a billion other things, and plenty of my 
> own code and my post-strip size of the executable is still on the order of 
> 5MB and that's still with debugging turned on.
> 
> I may be missing something, but I can't believe this a real issue.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using SQLite with R*Tree and FTS3 support in iOS

2011-05-16 Thread Tito Ciuro
Hi Marco,

Oops. My bad. Building for Archive (Release version) has a size of 3.6 MB.

-- Tito

On May 16, 2011, at 8:06 AM, Marco Bambini wrote:

> 4.3 MB seems really too big... you are probably building a debug version of 
> the library.
> 
> --
> Marco Bambini
> http://www.sqlabs.com
> 
> 
> 
> 
> 
> 
> On May 16, 2011, at 5:01 PM, Tito Ciuro wrote:
> 
>> Hello,
>> 
>> I have a question about SQLite running on iOS. If I'm not mistaken, SQLite 
>> on iOS is not compiled with R*Tree and FTS3. Compiling a static library of 
>> SQLite's amalgamated version weighs in at about 4.3 MB, which represents almost 
>> 25% of the 20 MB-per-app allowed on the App Store. For many, this is a major 
>> setback because many apps can easily reach this limit.
>> 
>> My question is: since a "light" version of SQLite is already included in 
>> iOS, would it be too complicated to build a static library with only R*Tree 
>> and FTS3 support? The idea being of course that the app would link against 
>> iOS' SQLite and the app's R*Tree/FTS3 library, thus reducing the app's 
>> footprint considerably.
>> 
>> Are there dependencies that would make this attempt a nightmare? Has anyone 
>> gone through this?
>> 
>> Thanks in advance,
>> 
>> -- Tito
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Using SQLite with R*Tree and FTS3 support in iOS

2011-05-16 Thread Tito Ciuro
Hello,

I have a question about SQLite running on iOS. If I'm not mistaken, SQLite on 
iOS is not compiled with R*Tree and FTS3. Compiling a static library of 
SQLite's amalgamated version weighs in at about 4.3 MB, which represents almost 
25% of the 20 MB-per-app limit on the App Store. This is a major 
setback because many apps can easily reach this limit.

My question is: since a "light" version of SQLite is already included in iOS, 
would it be too complicated to build a static library with only R*Tree and FTS3 
support? The idea being of course that the app would link against iOS' SQLite 
and the app's R*Tree/FTS3 library, thus reducing the app's footprint 
considerably.

Are there dependencies that would make this attempt a nightmare? Has anyone 
gone through this?

Thanks in advance,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question: Memory-Based Databases

2011-05-10 Thread Tito Ciuro
Thanks Pavel!

-- Tito

On May 10, 2011, at 11:12 AM, Pavel Ivanov wrote:

> Until you reach the limit set by 'pragma cache_size', memory usage will be
> the same for an in-memory database and an on-disk database. When the size of
> your database grows beyond 'pragma cache_size', the in-memory database
> starts to consume more memory than the on-disk one, because it has nowhere
> to push pages out of the cache.
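
A small sketch of what Pavel describes (the file name and the 2000-page cap
are arbitrary):

    #include <sqlite3.h>

    /* Same code path for in-memory and on-disk databases; only the name
       and the page-cache limit differ. An on-disk database can evict
       pages beyond the limit, an in-memory one cannot. */
    static int open_with_cache(sqlite3 **db, int in_memory)
    {
        int rc = sqlite3_open(in_memory ? ":memory:" : "ondisk.db", db);
        if (rc != SQLITE_OK)
            return rc;
        return sqlite3_exec(*db, "PRAGMA cache_size=2000;", NULL, NULL, NULL);
    }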

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Question: Memory-Based Databases

2011-05-10 Thread Tito Ciuro
Hello,

I have been using memory-based databases (opened via :memory:) and performance 
is great. However, one of the things I assumed with memory-based databases was 
that memory usage would be higher than for temporary or persistent databases 
stored on disk. Is this true, or is the memory usage pretty much similar? I've 
performed a few tests with fairly small data sets and I don't see a difference. 
Any ideas?

Thanks,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Second sqlite3_prepare_v2() call fails on iOS path-based databases

2011-02-01 Thread Tito Ciuro
Hello Marian,

On Feb 1, 2011, at 1:28 PM, Marian Cascaval wrote:

> Since this topic has led to a different sub-topic, I dare ask a question (I'm a 
> beginner both in C++ and, oh boy, in SQLite too).
> 
> Tito, do you really need the 5th argument in sqlite3_prepare_v2(), i.e. the 
> pzTail pointer?
> From what I understand from your code, there's only one SQL statement to be 
> prepared, so there would be no need for the supposedly next SQL statement.
> Do you reuse (reset) these statements?
> I was under the impression that the 5th argument is used when the SQL 
> statement 
> string contains more than one SQL statement.
> 
> Thanks for your patience if I misunderstood something .. or all.

Yes, you understood right. I was using the 5th arg for debugging reasons, as I 
was checking whether the memory was being smashed somewhere. As for the 
statements, yes, I'm reusing them all the time.
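
A minimal sketch of the reuse pattern being discussed (the table name and
values are made up); the fifth argument, pzTail, only matters when the SQL
string may contain more than one statement:

    #include <sqlite3.h>

    static void insert_twice(sqlite3 *db)
    {
        sqlite3_stmt *stmt = NULL;
        const char *tail = NULL;   /* points past the first statement, if any */

        sqlite3_prepare_v2(db, "INSERT INTO t(a) VALUES (?);", -1, &stmt, &tail);

        sqlite3_bind_int(stmt, 1, 1);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);           /* required before binding again */
        sqlite3_clear_bindings(stmt);

        sqlite3_bind_int(stmt, 1, 2);
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);
    }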

Regards,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Second sqlite3_prepare_v2() call fails on iOS path-based databases

2011-02-01 Thread Tito Ciuro
Hi Drake,

On Feb 1, 2011, at 9:33 AM, Drake Wilson wrote:

> Quoth Tito Ciuro <tci...@mac.com>, on 2011-02-01 09:01:09 -0200:
>> I don't think so. The fileSystemRepresentation method should be used
>> when dealing with file-based paths.
> 
> But not when dealing with SQLite paths, unless I'm mistaken about what
> fileSystemRepresentation does.  sqlite3_open_v2 always takes UTF-8 and
> does any filesystem-specific encoding transformations internally.  (It
> may still be that it does it incorrectly on some platforms, in which
> case that may be a bug.)

Thanks for the heads up.

-- Tito

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Second sqlite3_prepare_v2() call fails on iOS path-based databases

2011-02-01 Thread Tito Ciuro
Hi Afriza,

On Feb 1, 2011, at 4:16 AM, Afriza N. Arief wrote:

> On Tue, Feb 1, 2011 at 3:38 AM, Tito Ciuro <tci...@mac.com> wrote:
> 
>> Hello,
>> 
>> The following code snippet runs fine on Mac OS X, but fails on the iOS
>> simulator:
>> 
>>   // Obtain a path for the database
>>   NSString *docs =
>> [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask,
>> YES) lastObject];
>>   const char *path = [[docs stringByAppendingPathComponent:
>> @"myDB.sqlite"] fileSystemRepresentation];
>> 
>>   // Open the database
>>   sqlite3 *db = NULL;
>>   int statusOpen = sqlite3_open_v2(path, &db,
>>  SQLITE_OPEN_READWRITE |
>> SQLITE_OPEN_CREATE | SQLITE_OPEN_AUTOPROXY | SQLITE_OPEN_FULLMUTEX, NULL);
>> 
> 
> Do you need UTF8String for the sqlite3_open_v2() ?

I don't think so. The fileSystemRepresentation method should be used when 
dealing with file-based paths. This is because the length of the encoded string 
in foreign file systems can be longer than the number of Unicode characters in 
the NSString. So, you would end up with a different length (a "wrong" string) 
by using UTF8String.

Regards,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Second sqlite3_prepare_v2() call fails on iOS path-based databases

2011-01-31 Thread Tito Ciuro
Hello Simon,

On Jan 31, 2011, at 8:29 PM, Simon Slavin wrote:

> 
> On 31 Jan 2011, at 7:55pm, Tito Ciuro wrote:
> 
>> What do you mean by "no further"? In my app, I use both statements. I have 
>> trimmed down the code in this email thread to show the error. Be sure I use 
>> these two statements later on. The reason I cache the statements is to save 
>> time during inserts, since I use them a lot.
> 
> There's probably no reason not to do that.  Sorry for any alarm.
> 
>> If having two statements prepared is not permissible, shouldn't it fail on 
>> Mac OS X as well?
> 
> There are plenty of examples of SQLite stuff that succeeds in one compiler 
> and fails in another, or succeeds on one OS or File System and fails in 
> another.  SQLite doesn't exhaustively error-check every operation because it 
> would slow it down too much.  So the only things that are safe to do are the 
> things the documentation says you can do.
> 
> But please ignore my previous question about having two simultaneous prepared 
> statements.
> 
> Simon.

No problem, thanks for your help.

After refactoring the code a bit and catching/reporting errors better, it 
started working again. I'm still not 100% sure why it didn't work before, which 
bugs me quite a bit. But, after spending some time testing it, I've had a 
100% success rate so far during testing and execution. Go figure... :-/

Best regards,

-- Tito

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Second sqlite3_prepare_v2() call fails on iOS path-based databases

2011-01-31 Thread Tito Ciuro
Hi Simon,

On Jan 31, 2011, at 5:49 PM, Simon Slavin wrote:

> 
> On 31 Jan 2011, at 7:38pm, Tito Ciuro wrote:
> 
>> So the question I have is: why would the second sqlite3_prepare_v2 statement 
>> fail only on path-based iOS apps? :-/
> 
> The other question is: is it permissible to have two statements prepared but 
> no further, for the same database connection?

What do you mean by "no further"? In my app, I use both statements. I have 
trimmed down the code in this email thread to show the error. Be sure I use 
these two statements later on. The reason I cache the statements is to save 
time during inserts, since I use them a lot.

If having two statements prepared is not permissible, shouldn't it fail on Mac 
OS X as well?

Thanks,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Second sqlite3_prepare_v2() call fails on iOS path-based databases

2011-01-31 Thread Tito Ciuro
Hello,

The following code snippet runs fine on Mac OS X, but fails on the iOS 
simulator:

// Obtain a path for the database
NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, 
NSUserDomainMask, YES) lastObject];
const char *path = [[docs 
stringByAppendingPathComponent:@"myDB.sqlite"] fileSystemRepresentation];

// Open the database
sqlite3 *db = NULL;
int statusOpen = sqlite3_open_v2(path, &db,
   SQLITE_OPEN_READWRITE | 
SQLITE_OPEN_CREATE | SQLITE_OPEN_AUTOPROXY | SQLITE_OPEN_FULLMUTEX, NULL);


// Build the first statement
sqlite3_stmt *oneStatement = NULL;
const char *oneSQL = [[NSString stringWithFormat:@"INSERT INTO %@(%@, %@, 
%@, %@) VALUES (?,?,?,?);", NSFValues, NSFKey, NSFAttribute, NSFValue, 
NSFDatatype]UTF8String];
const char *tail = NULL;
int statusOne = sqlite3_prepare_v2(db, oneSQL, (int)strlen(oneSQL), 
&oneStatement, &tail);


// Build the second statement
sqlite3_stmt *twoStatement = NULL;
const char *twoSQL = [[NSString stringWithFormat:@"INSERT INTO %@(%@, %@, 
%@, %@) VALUES (?,?,?,?);", NSFKeys, NSFKey, NSFPlist, NSFCalendarDate, 
NSFObjectClass]UTF8String];
int statusTwo = sqlite3_prepare_v2(db, twoSQL, (int)strlen(twoSQL), 
&twoStatement, &tail);

What I see is that statusTwo returns 1, and I have no idea why. What is really 
puzzling is that if I open the database  in memory or temporary mode, it works 
fine in both Mac OS X and iOS!

So the question I have is: why would the second sqlite3_prepare_v2 statement 
fail only on path-based iOS apps? :-/
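
As a side note, result code 1 is SQLITE_ERROR; a small helper along these lines
(a sketch, with a made-up name) prints the underlying message after each call:

    #include <sqlite3.h>
    #include <stdio.h>

    static void report_if_failed(sqlite3 *db, int status, const char *what)
    {
        if (status != SQLITE_OK)
            fprintf(stderr, "%s failed (%d): %s\n",
                    what, status, sqlite3_errmsg(db));
    }

    /* e.g. report_if_failed(db, statusTwo, "second sqlite3_prepare_v2"); */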

Thanks in advance,

-- Tito










___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Using Bulk Insert in SQLite

2011-01-10 Thread Tito Ciuro
Sunil,

Google is your friend:

http://www.google.com/search?client=safari&rls=en&q=bulk+insert+sqlite&ie=UTF-8&oe=UTF-8
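
In short, the usual pattern is one transaction plus one reused prepared
statement; a minimal sketch (the table and column names are invented):

    #include <sqlite3.h>

    static int bulk_insert(sqlite3 *db, const char **values, int count)
    {
        sqlite3_stmt *stmt = NULL;
        int i;

        sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);
        sqlite3_prepare_v2(db, "INSERT INTO items(name) VALUES (?);",
                           -1, &stmt, NULL);
        for (i = 0; i < count; i++) {
            sqlite3_bind_text(stmt, 1, values[i], -1, SQLITE_TRANSIENT);
            sqlite3_step(stmt);
            sqlite3_reset(stmt);          /* reuse the same statement */
            sqlite3_clear_bindings(stmt);
        }
        sqlite3_finalize(stmt);
        return sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);
    }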

Regards,

-- Tito

On Jan 10, 2011, at 11:25 AM, Sunil Bhardwaj wrote:

> Hi
> 
> Please suggest, how can we implement Bulk Insert in SQLite.
> 
> Thanks
> Sunil Bhardwaj
> Ext. 1125 (0120-2567001)
> 9818868910
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite Documentation v2

2010-12-10 Thread Tito Ciuro
Thank you Simon!

On Dec 10, 2010, at 10:08 AM, Simon Slavin wrote:

> 
> On 10 Dec 2010, at 6:02pm, Tito Ciuro wrote:
> 
>> Does it only work for Explorer? No Linux or Mac support?
> 
> The web site has copies of itself for download.  See the fourth item on
> 
> http://www.sqlite.org/download.html
> 
> Use any web browser.
> 
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLite Documentation v2

2010-12-10 Thread Tito Ciuro
Hello Dagdamor,

Does it only work for Explorer? No Linux or Mac support?

-- Tito

On Dec 10, 2010, at 7:55 AM, Dagdamor wrote:

> SQLite Documentation (unofficial, HTML Help version) has been updated.
> 
> Changes:
> 
> - keywords list (index) extended with many new terms
> - one missing link (Locking And Concurrency) added to the TOC
> - four missing images (rounded corners) added to the file
> - more accurate links to the main sections
> - SQLite releases are now listed in the correct, "natural" order
> 
> The download link is the same:
> 
> http://www.phpc.ru/files/sqlite/sqlite_docs_3_7_3.chm
> 
> If you can't see the contents, unblock the .chm first (right-click on the
> file in Explorer, then click "Unblock") - thanks to Luuk34 for the tip.
> 
> Also thanks to Artur Reilin for the nice feedback and advice about the docs.
> 
> Enjoy!
> 
> -- 
> Regards,
> Serge Igitov
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_open on non-DB files / corrupt DBs - Found word(s) list error in the Text body

2010-11-25 Thread Tito Ciuro
Hi Nick,

On Nov 25, 2010, at 2:09 PM, Nick Shaw wrote:

> As Igor points out, that may not fail if the corruption is in a table
> you don't query.
> 
> In this case, the databases I'm working with are either small enough for
> the speed of an integrity check not to be an issue, or the integrity of
> the database is more important than a delayed DB open (which happens in
> a separate thread to my apps' main GUI, so the user won't notice an issue
> anyway).
> 
> Thanks for all the help peeps,
> Nick.

Agreed.

However, the original email looks like it's a two-part question: opening 
non-SQLite files and detecting corrupted databases. Looks like the solution 
would be to integrate a method combining both Igor's and Simon's solutions:

1) SELECT on sqlite_master. All SQLite3 databases contain it, so it would fail 
right away if it wasn't a valid database.
2) Issue a "PRAGMA integrity_check" statement, if (1) succeeds.

However, I'm wondering whether (1) is redundant. If the integrity check detects 
right away that the file is not a SQLite3 database, then we're good. But if the 
integrity check reads the whole file before failing, perhaps running (1) would 
not be such a bad idea after all.

Also, if there is a need to know whether a database is corrupted, an 
optimization seems to be to issue a "PRAGMA integrity_check(1)" statement, which 
would bail out at the first sign of trouble. As stated in the docs: 
http://www.sqlite.org/pragma.html

> PRAGMA integrity_check; 
> PRAGMA integrity_check(integer)
> 
> This pragma does an integrity check of the entire database. It looks for 
> out-of-order records, missing pages, malformed records, and corrupt indices. 
> If any problems are found, then strings are returned (as multiple rows with a 
> single column per row) which describe the problems. At most integer errors 
> will be reported before the analysis quits. The default value for integer is 
> 100. If no errors are found, a single row with the value "ok" is returned.
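
Putting the two steps together, a rough sketch in C might look like this
(error handling kept minimal):

    #include <sqlite3.h>
    #include <string.h>

    /* Returns 0 if the file looks like a healthy SQLite database. */
    static int quick_health_check(sqlite3 *db)
    {
        sqlite3_stmt *stmt = NULL;
        int rc;

        /* Step 1: a non-database file typically fails here with SQLITE_NOTADB. */
        rc = sqlite3_prepare_v2(db, "SELECT count(*) FROM sqlite_master;",
                                -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;
        sqlite3_step(stmt);
        sqlite3_finalize(stmt);

        /* Step 2: bail out at the first problem found. */
        rc = sqlite3_prepare_v2(db, "PRAGMA integrity_check(1);", -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;
        rc = 1;
        if (sqlite3_step(stmt) == SQLITE_ROW &&
            strcmp((const char *)sqlite3_column_text(stmt, 0), "ok") == 0)
            rc = 0;
        sqlite3_finalize(stmt);
        return rc;
    }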


Cheers,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_open on non-DB files / corrupt DBs

2010-11-25 Thread Tito Ciuro
On 25 Nov 2010, at 12:51, Igor Tandetnik  wrote:

> Run "PRAGMA integrity_check" right after opening.

That could be a potentially slow operation if the database is valid and 
contains lots of records.

Wouldn't it be better to issue a SELECT statement and let SQLite come back with a 
SQLITE_NOTADB error right away?

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_get_table() vs sqlite3_step

2010-11-21 Thread Tito Ciuro
Hi Igor,

No, no... I really meant roundtrips to disk. So, while sqlite3_get_table() 
seemed like a faster but memory-hungry mechanism to retrieve the result set, 
sqlite3_step() seemed like it would take longer, especially on disk-based 
systems, but be more memory-efficient.

Thanks for pointing to me that sqlite3_get_table() is not faster than 
sqlite3_step().

-- Tito

On 21 Nov 2010, at 14:14, Igor Tandetnik <itandet...@mvps.org> wrote:

> Tito Ciuro <tci...@mac.com> wrote:
>> Let me start by saying that I'm aware that sqlite3_get_table() should not be 
>> used (as per the documentation). I'm curious about
>> one thing though: if the computer/device has sufficient memory to hold the 
>> result set returned by sqlite3_get_table(), wouldn't
>> it be more optimized performance-wise than calling sqlite3_step N times to 
>> obtain the same result set?  
> 
> sqlite3_get_table is just a wrapper around sqlite3_prepare/sqlite3_step.
> 
>> Sounds like sqlite3_get_table() would take less time to access the storage 
>> subsystem as opposed to sqlite_step() with multiple
>> roundtrips
> 
> You say "roundtrip" as if it's a network request to some remote machine or 
> something. Here, a "roundtrip" is a function call within the same process. In 
> this sense, a call to malloc is a roundtrip to memory allocation subsystem - 
> you want to avoid those at all costs, right?
> -- 
> Igor Tandetnik
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3_get_table() vs sqlite3_step

2010-11-21 Thread Tito Ciuro
Thanks a lot Max!

-- Tito

On 21 Nov 2010, at 14:04, Max Vlasov <max.vla...@gmail.com> wrote:

> On Sun, Nov 21, 2010 at 6:31 PM, Tito Ciuro <tci...@mac.com> wrote:
> 
>> Hello,
>> 
>> Sounds like sqlite3_get_table() would take less time to access the storage
>> subsystem as opposed to sqlite_step() with multiple roundtrips, at the
>> expense of using lots more RAM, of course. So assuming RAM wasn't an issue,
>> why not use sqlite3_get_table()? Why is its usage being discouraged?
>> 
>> 
> Tito, looking at the sources... sqlite3_get_table seems to be an sqlite3_exec
> call with a callback, and sqlite3_exec is an sqlite3_prepare/sqlite3_step
> wrapper. So, probably no special access or performance boost.
> 
> Max Vlasov
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] sqlite3_get_table() vs sqlite3_step

2010-11-21 Thread Tito Ciuro
Hello,

Let me start by saying that I'm aware that sqlite3_get_table() should not be 
used (as per the documentation). I'm curious about one thing though: if the 
computer/device has sufficient memory to hold the result set returned by 
sqlite3_get_table(), wouldn't it be more optimized performance-wise than 
calling sqlite3_step N times to obtain the same result set?

Sounds like sqlite3_get_table() would take less time to access the storage 
subsystem as opposed to sqlite3_step() with multiple roundtrips, at the expense 
of using lots more RAM, of course. So assuming RAM wasn't an issue, why not use 
sqlite3_get_table()? Why is its usage being discouraged?
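
For comparison, a bare sketch of the sqlite3_step() route, which is roughly
what sqlite3_get_table() wraps, without accumulating the whole result in memory:

    #include <sqlite3.h>
    #include <stdio.h>

    static int print_all(sqlite3 *db, const char *sql)
    {
        sqlite3_stmt *stmt = NULL;
        int rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;

        /* One row at a time: no big result array to allocate or free. */
        while ((rc = sqlite3_step(stmt)) == SQLITE_ROW) {
            int i, n = sqlite3_column_count(stmt);
            for (i = 0; i < n; i++) {
                const unsigned char *v = sqlite3_column_text(stmt, i);
                printf("%s%s", v ? (const char *)v : "NULL",
                       i + 1 < n ? "|" : "\n");
            }
        }
        sqlite3_finalize(stmt);
        return rc == SQLITE_DONE ? SQLITE_OK : rc;
    }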

Thanks,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about manifest typing/data affinity

2010-11-07 Thread Tito Ciuro
Hello everyone,

Sorry about my last email... I clicked Send too quickly.

Jay, the book is great, I have discovered quite a few details I had overlooked 
(or perhaps missed, since I worked with earlier versions of SQLite and some 
current features were not available yet).

Thank you and all who responded. It's definitely more clear to me how this 
works... excellent.

Best regards,

-- Tito

On 06/11/2010, at 21:28, Jay A. Kreibich wrote:

> On Sat, Nov 06, 2010 at 08:35:10PM -0300, Tito Ciuro scratched on the wall:
>> Hello,
>> 
>> I have a question about manifest typing/data affinity. Assume I have 
>> created this table:
>> 
>> CREATE TABLE foo (ROWID INTEGER PRIMARY KEY, key TEXT, attr TEXT, value 
>> NONE);
>> 
>> I was reading the Using SQLite book and came across page #38 (#60 on
>> the PDF version) where it states:
> 
> 
>  I hope the book is proving useful.
> 
> 
>> "None: A column with a none affinity has no preference over storage
>> class.  Each value is stored as the type provided, with no attempt
>> to convert anything."
>> 
>> So it seems that using NONE is a hint that different data types are
>> going to be stored in that column, correct?
> 
>  I suppose you could look at it that way, but it isn't meant to be
>  explicit.
> 
>  With the exception of an INTEGER PRIMARY KEY, any column can hold
>  row-values of any type.  The affinity will influence stored values
>  to specific types, but it does not prevent other types from being
>  stored if the conversion cannot be made.
> 
>  The NONE affinity simply indicates to the database that you always
>  store values in their provided type, and that no conversions should
>  be attempted, even if a conversion could be done in a loss-less way.
> 
>> My main question has to do with binding values to precompiled statements.
>> For the value column, should I:
>> 
>> a) use sqlite3_bind_value()?
> 
>  sqlite3_bind_value() is for binding "sqlite3_value" data structures.
>  If you have one of those (from an sqlite3_column_value(), for
>  example), and it is in the type representation you want (text,
>  int, etc), then go ahead and use it.
> 
>  If you're binding a more traditional value, such as a string or
>  integer, you should use one of the sqlite3_bind_text(),
>  sqlite3_bind_int(), or similar functions.
> 
>  The bind function you choose will set the "type" of the value.
>  If the column has a NONE affinity, no conversion will be attempted
>  and the value, in the representation you provided, will be stored
>  directly.
> 
>> b) store it as a string using sqlite3_bind_text()? Will
>> sqlite3_bind_text() allow SQLite to choose the proper data affinity
>> even though I'm binding it as text?
> 
>  An affinity is a property of a table column.  The affinity of a
>  column is defined by the "SQL type" passed to CREATE TABLE.  Once the
>  table has been created, the affinity of each column is set.  So the
>  only time SQLite "chooses" an affinity is when it parses the CREATE
>  TABLE statement.
> 
>  So, for example, if you have an INSERT statement that binds a text
>  value to a parameter used to set a column with a NONE affinity, the
>  value in the new row will *always* be stored as a text value,
>  regardless of the string value.  If you were to bind an integer, the
>  value would always be stored as an integer, and so on.
> 
>  This would work differently if the column had a NUMERIC affinity, for
>  example.  In that case, binding the string "abc" would result in
>  storage of a text value, while binding the string "132" would result
>  in an integer and "85.3" would result in a floating-point.  Binding
>  43 (the integer, not a two-character string, using sqlite3_bind_int())
>  would result in an integer as well.  
> 
> 
> 
>  As you might know, you can use different sqlite3_column_xxx() functions
>  to extract values in a specific representation (type), even if that 
>  returned representation does not match the type of the stored value.
>  For example, if you know you're going to print out a value, you can
>  use sqlite3_column_text() to get a text representation of the value,
>  even if that value is stored in the database as an integer.  The
>  conversions used for this are given in table 7.1 of Using SQLite.
> 
>  On the input side, you can use different sqlite3_bind_xxx() function
>  to provide values in a representation (type) that might not match the
>  type used to actually store the value.  The affinity defines your
>  "preferred" storage type, and is used as a hint to do input
>  conversions.  So if you're

Re: [sqlite] Question about manifest typing/data affinity

2010-11-07 Thread Tito Ciuro

On 06/11/2010, at 21:28, Jay A. Kreibich wrote:

> On Sat, Nov 06, 2010 at 08:35:10PM -0300, Tito Ciuro scratched on the wall:
>> Hello,
>> 
>> I have a question about manifest typing/data affinity. Assume I have 
>> created this table:
>> 
>> CREATE TABLE foo (ROWID INTEGER PRIMARY KEY, key TEXT, attr TEXT, value 
>> NONE);
>> 
>> I was reading the Using SQLite book and came across page #38 (#60 on
>> the PDF version) where it states:
> 
> 
>  I hope the book is proving useful.
> 
> 
>> "None: A column with a none affinity has no preference over storage
>> class.  Each value is stored as the type provided, with no attempt
>> to convert anything."
>> 
>> So it seems that using NONE is a hint that different data types are
>> going to be stored in that column, correct?
> 
>  I suppose you could look at it that way, but it isn't meant to be
>  explicit.
> 
>  With the exception of an INTEGER PRIMARY KEY, any column can hold
>  row-values of any type.  The affinity will influence stored values
>  to specific types, but it does not prevent other types from being
>  stored if the conversion cannot be made.
> 
>  The NONE affinity simply indicates to the database that you always
>  store values in their provided type, and that no conversions should
>  be attempted, even if a conversion could be done in a loss-less way.
> 
>> My main question has to do with binding values to precompiled statements.
>> For the value column, should I:
>> 
>> a) use sqlite3_bind_value()?
> 
>  sqlite3_bind_value() is for binding "sqlite3_value" data structures.
>  If you have one of those (from an sqlite3_column_value(), for
>  example), and it is in the type representation you want (text,
>  int, etc), then go ahead and use it.
> 
>  If you're binding a more traditional value, such as a string or
>  integer, you should use one of the sqlite3_bind_text(),
>  sqlite3_bind_int(), or similar functions.
> 
>  The bind function you choose will set the "type" of the value.
>  If the column has a NONE affinity, no conversion will be attempted
>  and the value, in the representation you provided, will be stored
>  directly.
> 
>> b) store it as a string using sqlite3_bind_text()? Will
>> sqlite3_bind_text() allow SQLite to choose the proper data affinity
>> even though I'm binding it as text?
> 
>  An affinity is a property of a table column.  The affinity of a
>  column is defined by the "SQL type" passed to CREATE TABLE.  Once the
>  table has been created, the affinity of each column is set.  So the
>  only time SQLite "chooses" an affinity is when it parses the CREATE
>  TABLE statement.
> 
>  So, for example, if you have an INSERT statement that binds a text
>  value to a parameter used to set a column with a NONE affinity, the
>  value in the new row will *always* be stored as a text value,
>  regardless of the string value.  If you were to bind an integer, the
>  value would always be stored as an integer, and so on.
> 
>  This would work differently if the column had a NUMERIC affinity, for
>  example.  In that case, binding the string "abc" would result in
>  storage of a text value, while binding the string "132" would result
>  in an integer and "85.3" would result in a floating-point.  Binding
>  43 (the integer, not a two-character string, using sqlite3_bind_int())
>  would result in an integer as well.  
> 
> 
> 
>  As you might know, you can use different sqlite3_column_xxx() functions
>  to extract values in a specific representation (type), even if that 
>  returned representation does not match the type of the stored value.
>  For example, if you know you're going to print out a value, you can
>  use sqlite3_column_text() to get a text representation of the value,
>  even if that value is stored in the database as an integer.  The
>  conversions used for this are given in table 7.1 of Using SQLite.
> 
>  On the input side, you can use different sqlite3_bind_xxx() function
>  to provide values in a representation (type) that might not match the
>  type used to actually store the value.  The affinity defines your
>  "preferred" storage type, and is used as a hint to do input
>  conversions.  So if you're taking user input for a number, you can
>  take the text value passed in by your applications GUI and pass that
>  directly to sqlite3_bind_text(), even if the string represents a
>  number, and you want to store it as a number (and have told the
>  database this by using an INTEGER, REAL, or NUMERIC affinity).
> 
>  The NONE affinity is simply a way of saying that y

[sqlite] Question about manifest typing/data affinity

2010-11-06 Thread Tito Ciuro
Hello,

I have a question about manifest typing/data affinity. Assume I have created 
this table:

CREATE TABLE foo (ROWID INTEGER PRIMARY KEY, key TEXT, attr TEXT, value NONE);

I was reading the Using SQLite book and came across page #38 (#60 on the PDF 
version) where it states:

"None: A column with a none affinity has no preference over storage class. Each 
value is stored as the type provided, with no attempt to convert anything."

So it seems that using NONE is a hint that different data types are going to be 
stored in that column, correct?

My main question has to do with binding values to precompiled statements. For 
the value column, should I:

a) use sqlite3_bind_value()?
b) store it as a string using sqlite3_bind_text()? Will sqlite3_bind_text() 
allow SQLite to choose the proper data affinity even though I'm binding it as 
text?
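
One way to see what actually ends up stored is to bind different types into the
NONE column and ask typeof(); a small sketch along those lines, using the foo
table above:

    #include <sqlite3.h>
    #include <stdio.h>

    static void affinity_demo(sqlite3 *db)
    {
        sqlite3_stmt *ins = NULL, *sel = NULL;

        sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS foo (ROWID INTEGER PRIMARY KEY,"
            " key TEXT, attr TEXT, value NONE);", NULL, NULL, NULL);

        sqlite3_prepare_v2(db,
            "INSERT INTO foo(key, attr, value) VALUES ('k', 'a', ?);",
            -1, &ins, NULL);

        sqlite3_bind_text(ins, 1, "85.3", -1, SQLITE_TRANSIENT); /* kept as text */
        sqlite3_step(ins);
        sqlite3_reset(ins);

        sqlite3_bind_int(ins, 1, 43);                            /* kept as integer */
        sqlite3_step(ins);
        sqlite3_finalize(ins);

        sqlite3_prepare_v2(db, "SELECT value, typeof(value) FROM foo;",
                           -1, &sel, NULL);
        while (sqlite3_step(sel) == SQLITE_ROW)
            printf("%s -> %s\n",
                   (const char *)sqlite3_column_text(sel, 0),
                   (const char *)sqlite3_column_text(sel, 1));
        sqlite3_finalize(sel);
    }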

Thanks in advance,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] NanoStore has moved to Google Code

2010-10-06 Thread Tito Ciuro
Hello everyone,

Based on feedback from other developers, I have decided to move NanoStore, a 
Cocoa wrapper for SQLite, to Google Code:

http://code.google.com/p/nanostore/

The Sourceforge repository is now considered obsolete and will be removed 
shortly.

Regards,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] [ANN] Release: NanoStore 1.0 for Mac and iOS

2010-09-22 Thread Tito Ciuro
NanoStore 1.0
© Webbo, L.L.C., 2010. All rights reserved.
September 21, 2010

Today, Webbo is pleased to announce the release of NanoStore:

http://sourceforge.net/projects/nanostore/

NanoStore is a Cocoa wrapper for SQLite, a C library that implements an 
embeddable SQL database engine.

With NanoStore, you store data using a dictionary of any depth. The developer 
can decide what to store on the fly, unlike other systems that require the 
developer to design a schema. With NanoStore, just build your dictionary and 
store it. That's all there is to it! Every data element in the dictionary is 
indexed (except BLOBs), so there's no need to keep a list of indexes separately. 
You can disable indexing, import your data in batch mode, save it and then 
reindex all at once, which is quite efficient. For even better performance, all I/O 
can be performed in memory, saving the new database to disk at once, which is 
even faster. And if you feel adventurous, you can even do that in Fast mode and 
save on some extra SQLite processing.

All these variations come with pros and cons, sure... but you have a choice. 
You can decide what's best *for you* and map a strategy to *your* model, as 
there are accessors available for most SQLite settings and pragmas that will allow 
you to tune it to your liking.

The list of classes include:

NSFNanoStore
NSFNanoExpression
NSFNanoSearch
NSFNanoResult

You also have full access to the sqlite3* handle, in case you need it (hey... 
you're a developer right?)

In addition, the NanoStore project includes:

- Unit tests
- An iOS plain-vanilla app to demonstrate how easy it is to embed NanoStore in 
your project

Enjoy!

-- Tito

*
Tito Ciuro
R Group, Webbo, L.L.C.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Books which cover C API

2010-07-07 Thread Tito Ciuro

On 08/07/2010, at 02:11, Mohit Sindhwani wrote:

> On 8/7/2010 8:06 AM, Tito Ciuro wrote:
>> On 8 Jul 2010, at 01:58, Mohit Sindhwani<m...@onghu.com>  wrote:
>> 
>> 
>>> On 8/7/2010 2:55 AM, Jay A. Kreibich wrote:
>>> 
>>>> On Wed, Jul 07, 2010 at 07:45:02PM +0100, Andrew Wood scratched on the 
>>>> wall:
>>>> 
>>>> 
>>>>> Which of the books on the market is the best for covering the C API?
>>>>> 
>>>>> 
>>>>   http://sqlite.org/books.html
>>>> 
>>>> 
>>>> 
>>>> 
>>>>   If you want to go out and buy something today, the most popular book
>>>>   is "The Definitive Guide to SQLite" by Mike Owens.  It is a bit
>>>>   older, but the core APIs haven't really changed:
>>>> 
>>>> http://www.amazon.com/Definitive-Guide-SQLite-Mike-Owens/dp/1590596730/
>>>> 
>>>> 
>>> +1 for the definitive guide - it is very good!
>>> 
>>> Best Regards,
>>> Mohit.
>>> 8/7/2010 | 7:58
>>> 
>> ... except for the index. It's useless.
>> 
> 
> which is ironic for a book about databases :D
> 
> Cheers,
> Mohit.
> 8/7/2010 | 8:11 AM.

Yes. The book is quite good, I admit, but if you need to find specific 
things... the answer might not be referenced where you think it'd be. Or not 
mentioned at all. Is it really *that* difficult? I truly hope they pay 
attention to this in future releases.

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Books which cover C API

2010-07-07 Thread Tito Ciuro
On 8 Jul 2010, at 01:58, Mohit Sindhwani  wrote:

> On 8/7/2010 2:55 AM, Jay A. Kreibich wrote:
>> On Wed, Jul 07, 2010 at 07:45:02PM +0100, Andrew Wood scratched on the wall:
>> 
>>> Which of the books on the market is the best for covering the C API?
>>> 
>>   http://sqlite.org/books.html
>> 
>> 
>> 
>> 
>>   If you want to go out and buy something today, the most popular book
>>   is "The Definitive Guide to SQLite" by Mike Owens.  It is a bit
>>   older, but the core APIs haven't really changed:
>> 
>> http://www.amazon.com/Definitive-Guide-SQLite-Mike-Owens/dp/1590596730/
>> 
> 
> +1 for the definitive guide - it is very good!
> 
> Best Regards,
> Mohit.
> 8/7/2010 | 7:58

... except for the index. It's useless.

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] When to close a db?

2010-06-21 Thread Tito Ciuro
Sam,

Could it be that you have prepared statements still active when you're  
trying to close the db? (not finalized, that is)
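
If unfinalized statements turn out to be the cause, a defensive shutdown could
look roughly like this (a sketch, not Sam's code):

    #include <sqlite3.h>
    #include <stdio.h>

    static void close_cleanly(sqlite3 *db)
    {
        sqlite3_stmt *stmt = NULL;

        /* Finalize any statements still attached to this connection. */
        while ((stmt = sqlite3_next_stmt(db, NULL)) != NULL)
            sqlite3_finalize(stmt);

        /* sqlite3_close() returns SQLITE_BUSY if something is still open. */
        if (sqlite3_close(db) != SQLITE_OK)
            fprintf(stderr, "close failed: %s\n", sqlite3_errmsg(db));
    }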

-- Tito

Sent from my iPhone

On 20 Jun 2010, at 23:17, Sam Carleton   
wrote:

> I am getting some strange behavior out of my app, which happens to be
> both an Apache module and some Axis2/C Web Services which run under
> Apache.  From time to time (it is VERY inconsistent), when the code
> calls sqlite3_close() the Apache server crashes.  I don't recall the
> error right off.
>
> From reading the documentation, it looks to me like sqlite3_close()
> should be called if *ppDb has a value, regardless of the result code
> from the sqlite3_open_v2() call.  Is there more to it?
>
> Sam
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A question about how to interpret pragma integrity_check

2009-09-15 Thread Tito Ciuro
On Sep 15, 2009, at 6:59 PM, P Kishor wrote:

> On Tue, Sep 15, 2009 at 8:50 PM, Tito Ciuro <tci...@mac.com> wrote:
>> On Sep 15, 2009, at 6:42 PM, P Kishor wrote:
>>
>>> Could it be that those seemingly identical multiple rows actually  
>>> have
>>> trailing spaces or some other non-visible character? Check for their
>>> length.
>>
>> Here we go:
>>
>>> sqlite> SELECT clientName, entityName, length(entityName) FROM  
>>> MyDBState
>>> ORDER BY entityName;
>>> com.apple.AddressBook|com.apple.contacts.CalendarURI|30
>>> com.apple.AddressBook|com.apple.contacts.Contact|26
>>> com.apple.AddressBook|com.apple.contacts.Date|23
>>> com.apple.AddressBook|com.apple.contacts.Email Address|32
>>> com.apple.AddressBook|com.apple.contacts.Group|24
>>> com.apple.AddressBook|com.apple.contacts.IM|21
>>> com.apple.AddressBook|com.apple.contacts.ImageTransformationInfo|42
>>> com.apple.AddressBook|com.apple.contacts.ImageTransformationInfo|42
>>> com.apple.AddressBook|com.apple.contacts.Phone Number|31
>>> com.apple.AddressBook|com.apple.contacts.Phone Number|31
>>> com.apple.AddressBook|com.apple.contacts.Related Name|31
>>> com.apple.AddressBook|com.apple.contacts.Related Name|31
>>> com.apple.AddressBook|com.apple.contacts.SmartGroup|29
>>> com.apple.AddressBook|com.apple.contacts.SmartGroup|29
>>> sqlite>
>>
>
> punk...@lucknow ~/Desktop/stuff$sqlite3
> SQLite version 3.6.11
> Enter ".help" for instructions
> Enter SQL statements terminated with a ";"
> sqlite> CREATE TABLE MyDBState (clientName TEXT, entityName
> TEXT,propertyNames BLOB, PRIMARY KEY (clientName, entityName));
> sqlite> INSERT INTO MyDBState (clientName, entityName) VALUES
> ('com.apple.AddressBook', 'com.apple.contacts.SmartGroup');
> sqlite> INSERT INTO MyDBState (clientName, entityName) VALUES
> ('com.apple.AddressBook', 'com.apple.contacts.SmartGroup');
> SQL error: columns clientName, entityName are not unique
>
> SQLite should stop you from inserting duplicate PK right away, like  
> above.
>
>
> By the way, in your original post, you gave a different name for your
> table... see below --
>
>> CREATE TABLE MyDBState (clientName TEXT, entityName TEXT,
>> propertyNames BLOB, PRIMARY KEY (clientName, entityName))
>>
>> If I output the data using sqlite3, I get:
>>
>> sqlite> SELECT clientName, entityName FROM SyncState ORDER BY
>> entityName;
>
> Where did SyncState come from? Is that different from MyDBState? Or,
> was that just a typo?
>
>
> Maybe some of the folks knowing more about the innards of the software
> can help. Which version are you using?

It was a typo. I'm using Mac OS X 10.6.1, so the version seems to be  
3.6.12.

Yeah, it's puzzling how this can be happening. We really don't know  
why... we're looking.

Thanks again,

-- Tito

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A question about how to interpret pragma integrity_check

2009-09-15 Thread Tito Ciuro
On Sep 15, 2009, at 6:42 PM, P Kishor wrote:

> Could it be that those seemingly identical multiple rows actually have
> trailing spaces or some other non-visible character? Check for their
> length.

Here we go:

> sqlite> SELECT clientName, entityName, length(entityName) FROM  
> MyDBState ORDER BY entityName;
> com.apple.AddressBook|com.apple.contacts.CalendarURI|30
> com.apple.AddressBook|com.apple.contacts.Contact|26
> com.apple.AddressBook|com.apple.contacts.Date|23
> com.apple.AddressBook|com.apple.contacts.Email Address|32
> com.apple.AddressBook|com.apple.contacts.Group|24
> com.apple.AddressBook|com.apple.contacts.IM|21
> com.apple.AddressBook|com.apple.contacts.ImageTransformationInfo|42
> com.apple.AddressBook|com.apple.contacts.ImageTransformationInfo|42
> com.apple.AddressBook|com.apple.contacts.Phone Number|31
> com.apple.AddressBook|com.apple.contacts.Phone Number|31
> com.apple.AddressBook|com.apple.contacts.Related Name|31
> com.apple.AddressBook|com.apple.contacts.Related Name|31
> com.apple.AddressBook|com.apple.contacts.SmartGroup|29
> com.apple.AddressBook|com.apple.contacts.SmartGroup|29
> sqlite>


-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A question about how to interpret pragma integrity_check

2009-09-15 Thread Tito Ciuro
Hello,

On Sep 15, 2009, at 6:15 PM, P Kishor wrote:

> well, your clientName, entityName combo is not unique in the list
> above, and it should be given it is a PK. I have no idea how you
> managed to insert these rows while the PK constraint was active.

Precisely.

I cannot reproduce this easily, but this bug is there for sure.

Thank you,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A question about how to interpret pragma integrity_check

2009-09-15 Thread Tito Ciuro
Hello,

On Sep 15, 2009, at 12:55 PM, D. Richard Hipp wrote:

>
> On Sep 15, 2009, at 3:33 PM, Tito Ciuro wrote:
>
>> Hello,
>>
>> Given the following pragma integrity_check output:
>>
>>> sqlite> pragma integrity_check;
>>> rowid 106931 missing from index sqlite_autoindex_MyDBState_1
>>> rowid 106933 missing from index sqlite_autoindex_MyDBState_1
>>> rowid 106935 missing from index sqlite_autoindex_MyDBState_1
>>> wrong # of entries in index sqlite_autoindex_MyDBState_1
>>
>> Does this mean that:
>>
>> 1) the index is corrupted
>> 2) the rowids are truly missing (data loss)
>>
>> Running vacuum doesn't solve the problem.
>
> It means the index is corrupt.  Try running "REINDEX".

When I run REINDEX I get the following error:

> SQL error: indexed columns are not unique

The table was created like this:

> CREATE TABLE MyDBState (clientName TEXT, entityName TEXT,  
> propertyNames BLOB, PRIMARY KEY (clientName, entityName))

If I output the data using sqlite3, I get:

> sqlite> SELECT clientName, entityName FROM SyncState ORDER BY  
> entityName;
> com.apple.AddressBook|com.apple.contacts.CalendarURI
> com.apple.AddressBook|com.apple.contacts.Contact
> com.apple.AddressBook|com.apple.contacts.Date
> com.apple.AddressBook|com.apple.contacts.Email Address
> com.apple.AddressBook|com.apple.contacts.Group
> com.apple.AddressBook|com.apple.contacts.IM
> com.apple.AddressBook|com.apple.contacts.ImageTransformationInfo
> com.apple.AddressBook|com.apple.contacts.ImageTransformationInfo
> com.apple.AddressBook|com.apple.contacts.Phone Number
> com.apple.AddressBook|com.apple.contacts.Phone Number
> com.apple.AddressBook|com.apple.contacts.Related Name
> com.apple.AddressBook|com.apple.contacts.Related Name
> com.apple.AddressBook|com.apple.contacts.SmartGroup
> com.apple.AddressBook|com.apple.contacts.SmartGroup
> sqlite>


Thanks again,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] A question about how to interpret pragma integrity_check

2009-09-15 Thread Tito Ciuro
Hello,

Given the following pragma integrity_check output:

> sqlite> pragma integrity_check;
> rowid 106931 missing from index sqlite_autoindex_MyDBState_1
> rowid 106933 missing from index sqlite_autoindex_MyDBState_1
> rowid 106935 missing from index sqlite_autoindex_MyDBState_1
> wrong # of entries in index sqlite_autoindex_MyDBState_1

Does this mean that:

1) the index is corrupted
2) the rowids are truly missing (data loss)

Running vacuum doesn't solve the problem. The documentation states the  
following about vacuum:

> The VACUUM command cleans the main database by copying its contents  
> to a temporary database file and reloading the original database  
> file from the copy. This eliminates free pages, aligns table data to  
> be contiguous, and otherwise cleans up the database file structure

I suspect that vacuum cannot fix this issue because there's nothing  
wrong with the index (structurally speaking). Am I correct? When this  
problem appears, is there a way to recover from this state? I would  
think that rebuilding the index would help, since the new ones would  
point to the right set of data. However, if there is a real data loss,  
dropping the index would only mask the real problem.

Any ideas?

Thanks,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] about journal file

2009-08-29 Thread Tito Ciuro
On 29/08/2009, at 06:25, Zhanjun You wrote:

> I finally understand that I do not need to deal with the journal file.
> But I do not know what circumstances lead to the journal file being left  
> behind, at which point reading the database file will fail. What may cause 
> such a thing to happen?

Google is your friend: typing 'sqlite journal' shows the following  
link as the first match:

http://www.sqlite.org/lockingv3.html

-- Tito

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Optimizing 'ends with' queries

2009-08-23 Thread Tito Ciuro
Hi Simon,

On Aug 23, 2009, at 8:52 PM, Simon Slavin wrote:

>
> On 24 Aug 2009, at 4:22am, Tito Ciuro wrote:
>
>> But that would introduce the overhead of doubling the space required
>> for every string + an additional column index.
>
> One of the options I mentioned was to store the reversed string and
> not the original.  If you need to reconstruct the original string,
> read the reversed one and reverse it your software.
>
> Will you actually be short on space ?  If you're programming for an
> embedded platform then I can understand space is a concern.  If you're
> using a standard desktop computer and your database doesn't have
> millions of records it might not be.  By all means, work to your
> priorities.
>
>> If the schema contains
>> more columns where this type of query needs to take place, it seems  
>> to
>> me that this would not be a good solution.
>
> 'good' relative to what ?  I'm always interested in better solutions.

Yeah, 'good' is too ambiguous.

What I meant is that if a table contains several columns, some or all  
of which need to be searched using 'ends with', then replicating the  
columns (by reversing the string) and keeping extra indices could  
potentially affect performance (obviously, it depends on the number of  
rows). Also, the extra logic needed to keep the copies in sync seems a  
bit like overkill.

Perhaps I should consider FTS instead. I'd like to play with the  
idea of matching a particular string using FTS and then querying the  
result set with 'ends with' (i.e. LIKE '%foo'). This way the query  
would only scan the FTS result set, as opposed to the entire table.
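
Something along these lines, assuming a hypothetical FTS3 table
MyValuesFTS that mirrors the Value column (whether this actually beats
the plain row scan is what I would need to measure):

#include <stdio.h>
#include <sqlite3.h>

static int print_value(void *unused, int argc, char **argv, char **colname) {
    (void)unused; (void)argc; (void)colname;
    printf("%s\n", argv[0] ? argv[0] : "(null)");
    return 0;
}

/* Narrow the candidates with an FTS3 MATCH first, then apply the
   'ends with' test only to those rows. */
static void ends_with_crashed(sqlite3 *db) {
    const char *sql =
        "SELECT Value FROM MyValuesFTS "
        "WHERE MyValuesFTS MATCH 'crashed' "
        "AND Value LIKE '%crashed.';";
    char *errmsg = NULL;
    if (sqlite3_exec(db, sql, print_value, NULL, &errmsg) != SQLITE_OK) {
        fprintf(stderr, "query failed: %s\n", errmsg);
        sqlite3_free(errmsg);
    }
}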

Thanks again,

-- Tito

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Optimizing 'ends with' queries

2009-08-23 Thread Tito Ciuro
Hi Simon,

But that would introduce the overhead of doubling the space required  
for every string + an additional column index. If the schema contains  
more columns where this type of query needs to take place, it seems to  
me that this would not be a good solution.

Thanks for your help,

-- Tito

On Aug 23, 2009, at 8:15 PM, Simon Slavin wrote:

>
> On 24 Aug 2009, at 3:44am, Tito Ciuro wrote:
>
>> Is there a way to optimize this type of queries? (column Value is
>> indexed):
>>
>> SELECT Value FROM MyValues WHERE Value LIKE '%crashed.'
>>
>> I've seen the document where 'begins with' queries can be optimized
>> using >= and < (end of the '4.0 The LIKE optimization' section):
>>
>> http://www.sqlite.org/optoverview.html
>>
>> Can I optimize this query to take advantage of the index?
>
> Sure.  When you use INSERT either define two columns (value and
> valueReversed) or just store valueReversed.  Then use LIKE with
> valueReversed instead of value.  There is no 'reverse string' function
> built into SQLite, but I bet whichever programming language you're
> using makes it easy to reverse a string.
>
> Simon.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Optimizing 'ends with' queries

2009-08-23 Thread Tito Ciuro
Hello,

Is there a way to optimize this type of queries? (column Value is  
indexed):

SELECT Value FROM MyValues WHERE Value LIKE '%crashed.'

I've seen the document where 'begins with' queries can be optimized  
using >= and < (end of the '4.0 The LIKE optimization' section):

http://www.sqlite.org/optoverview.html

Can I optimize this query to take advantage of the index?

Thanks in advance,

-- Tito


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Question about sqlite3_extended_result_codes()

2009-06-27 Thread Tito Ciuro
Hello,

I have a couple questions about sqlite3_extended_result_codes():

1) Once I enable it, is it possible to determine whether extended  
result codes is enabled for a given a sqlite3* handle?

2) Do I have to process each result code in order to obtain the  
"regular" SQLite code, or can I compare it directly? For example:

// assume db is valid...
int resultCode = sqlite3_exec(db, ...);
if (resultCode == SQLITE_OK) {
  // do something...
}

3) Are the extended results returned *only* when SQLITE_IOERR is  
detected? How does it work?
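
For context, this is the pattern I have been using so far. The
assumption that masking with 0xff recovers the ordinary code is
precisely what I would like confirmed:

#include <stdio.h>
#include <sqlite3.h>

static void report(sqlite3 *db, int rc) {
    int primary = rc & 0xff;   /* low byte should be the ordinary SQLITE_* code */
    if (primary == SQLITE_IOERR) {
        fprintf(stderr, "I/O error, extended code %d: %s\n",
                rc, sqlite3_errmsg(db));
    }
}

static int run(sqlite3 *db, const char *sql) {
    sqlite3_extended_result_codes(db, 1);      /* enable once per handle */
    int rc = sqlite3_exec(db, sql, NULL, NULL, NULL);
    if (rc != SQLITE_OK)
        report(db, rc);
    return rc & 0xff;                          /* callers compare against SQLITE_* */
}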

Thanks for the help,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] [OT] How to implement paging

2009-05-25 Thread Tito Ciuro
Hi Sam,

On May 25, 2009, at 10:58 AM, Sam Carleton wrote:

> Example:  Following the logic of the ScrollingCursor page, lets assume
> a total result set of 88 titles.  If the lasttitle happens to be the
> 29th title, so the set that is returned is 30 through 34, how do I
> determine that this is the 6th page of a total of 18 pages?


Let's assume 88 titles. For the sake of argument, say you decide  
to display 12 per page. That would be 88 / 12 ≈ 7.3, which rounds up  
to 8 pages. If you need to know on which page a specific title lies  
(say the 63rd), you can compute ceil(63 / 12) = ceil(5.25) = 6, i.e.  
the 6th page. Following the same logic, ceil(88 / 12) = 8 pages total.  
Does that answer your question?
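
In C that is just a rounded-up integer division; a throwaway sketch:

/* 1-based page arithmetic for a fixed page size. */
static int page_of(int title_index, int per_page) {  /* title_index is 1-based */
    return (title_index + per_page - 1) / per_page;
}

static int total_pages(int total, int per_page) {
    return (total + per_page - 1) / per_page;
}

/* page_of(63, 12) == 6, total_pages(88, 12) == 8 */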

Cheers,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] corrupt database recovery

2009-04-26 Thread Tito Ciuro
Hi Gene,

On Apr 25, 2009, at 3:47 PM, Gene wrote:

> Every now and again, we have a database that gets corrupt in the  
> field (bad
> coding on our end, not sqlite).

How do you corrupt a database with bad coding? Just curious...

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Heuristics of when to vacuum

2009-04-12 Thread Tito Ciuro
Hi Eugene,

On 12 Apr 2009, at 9:53 AM, Eugene Wee wrote:

> Hi,
>
> On Mon, Apr 13, 2009 at 12:40 AM, Tito Ciuro <tci...@mac.com> wrote:
>> One question, when I run the command I see that the fragmentation in
>> "All tables" is greater than "All tables and indices". How can that
>> be? The sum of all tables and their indices doesn't add up (10.6% +
>> 7.5% ≠ 9.6%):
>
> From the given information:
> All tables
> Percentage of total database..  66.4%
> Fragmentation.  10.6%
>
> All indices
> Percentage of total database..  33.6%
> Fragmentation.   7.5%
>
> Now,
> 0.664 * 10.6 + 0.336 * 7.5 = 9.5584
>
> This rounds to 9.6, which corresponds to:
> All tables and indices
> Fragmentation.   9.6%

Yup. That makes sense. Thanks!

-- Tito

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Heuristics of when to vacuum

2009-04-12 Thread Tito Ciuro
Hi Kees,

On Apr 12, 2009, at 5:27 AM, Kees Nuyt wrote:

> PRAGMA freelist_count; tells you how many pages are free.
> If there are many free pages, you may have a reason to
> vacuum. It doesn't tell anything about the average
> percentage of payload in database pages, which would be
> another reason to vacuum.
> For a full analysis, you'd have to run the sqlite3_analyzer
> program, or incorporate part of its code in your
> application.

I just tried sqlite3_analyzer. It's amazing the amount of information  
that it displays! Excellent.

One question, when I run the command I see that the fragmentation in  
"All tables" is greater than "All tables and indices". How can that  
be? The sum of all tables and their indices doesn't add up (10.6% +  
7.5% ≠ 9.6%):

> *** All tables and indices ***
>
> Percentage of total database.. 100.0%
> Number of entries. 79581
> Bytes of storage consumed. 6328320
> Bytes of payload.. 5148625 81.4%
> Average payload per entry. 64.70
> Average unused bytes per entry 10.88
> Average fanout 254.00
> Fragmentation.   9.6%
> Maximum payload per entry. 5406
> Entries that use overflow. 1    0.001%
> Index pages used.. 4
> Primary pages used 1540
> Overflow pages used... 1
> Total pages used.. 1545
> Unused bytes on index pages... 7470     45.6%
> Unused bytes on primary pages. 858287  13.6%
> Unused bytes on overflow pages 0        0.0%
> Unused bytes on all pages. 865757  13.7%
>
> *** All tables ***
>
> Percentage of total database..  66.4%
> Number of entries. 16838
> Bytes of storage consumed. 4202496
> Bytes of payload.. 3522875 83.8%
> Average payload per entry. 209.22
> Average unused bytes per entry 33.26
> Average fanout 254.00
> Fragmentation.  10.6%
> Maximum payload per entry. 5406
> Entries that use overflow. 1    0.006%
> Index pages used.. 4
> Primary pages used 1021
> Overflow pages used... 1
> Total pages used.. 1026
> Unused bytes on index pages... 7470     45.6%
> Unused bytes on primary pages. 552618  13.2%
> Unused bytes on overflow pages 0        0.0%
> Unused bytes on all pages. 560088  13.3%
>
> *** All indices **
>
> Percentage of total database..  33.6%
> Number of entries. 62743
> Bytes of storage consumed. 2125824
> Bytes of payload.. 1625750 76.5%
> Average payload per entry. 25.91
> Average unused bytes per entry 4.87
> Fragmentation.   7.5%
> Maximum payload per entry. 48
> Entries that use overflow. 0    0.0%
> Primary pages used 519
> Overflow pages used... 0
> Total pages used.. 519
> Unused bytes on primary pages. 305669  14.4%
> Unused bytes on overflow pages 0
> Unused bytes on all pages. 305669  14.4%

How does sqlite3_analyzer determine the fragmentation percentage of  
All tables/All tables + indices? This is the information I was looking  
for. Where can I obtain the relevant code where this fragmentation is  
calculated? I know I could run sqlite3_analyzer and parse the results,  
but incorporating the calculation in my code would be faster.
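
In the meantime, the rough heuristic I'm experimenting with just  
compares freelist_count against page_count; the one-quarter threshold  
below is an arbitrary assumption:

#include <sqlite3.h>

static int read_int_pragma(sqlite3 *db, const char *sql, sqlite3_int64 *out) {
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;
    if (sqlite3_step(stmt) == SQLITE_ROW)
        *out = sqlite3_column_int64(stmt, 0);
    return sqlite3_finalize(stmt);
}

/* Vacuum only when at least a quarter of the file is free pages. */
static int vacuum_if_needed(sqlite3 *db) {
    sqlite3_int64 free_pages = 0, total_pages = 0;
    read_int_pragma(db, "PRAGMA freelist_count;", &free_pages);
    read_int_pragma(db, "PRAGMA page_count;", &total_pages);
    if (total_pages > 0 && free_pages * 4 >= total_pages)
        return sqlite3_exec(db, "VACUUM;", NULL, NULL, NULL);
    return SQLITE_OK;
}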

Thanks Kees,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Heuristics of when to vacuum

2009-04-11 Thread Tito Ciuro
Hi Lawrence,

On Apr 11, 2009, at 7:51 PM, Lawrence Gold wrote:

> I can't offer a formula, but I suggest making it an option for the
> users of the software, with sufficient warning that it could take some
> time, as well as a Cancel button.  Another thing you could do is to
> schedule the vacuum for a time when you know the software won't be in
> use -- for example, those of us who write software for K-12 schools
> can safely schedule operations like this for midnight on  
> weekends.  :-)

It's not an application. It's a framework which is used by a daemon  
process. There can't be a UI, and scheduling a vacuum when it's not  
needed is wasteful, especially because the databases can be quite  
large. This is why I was looking for some way to determine whether  
vacuum is needed, so that it's performed when it makes sense to do so.

Thanks anyway, I appreciate your input! :-)

Regards,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Heuristics of when to vacuum

2009-04-11 Thread Tito Ciuro
Hello,

I'm sure this question has been asked before, but I haven't been able  
to find it in the archives: when does it make sense to vacuum? If an  
application which deals with a large database vacuums say, on  
termination, it may take a long time to process them and not gain much  
from that operation. I understand that vacuum is needed when lots of  
"holes" are left behind after many objects (table, index, or trigger)  
have been removed from the database.

The question is: what is the threshold I'm looking for? Is there a  
formula I can apply that would hint to me when this op would be  
beneficial?

Thanks in advance,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] httpd server ???

2009-03-17 Thread Tito Ciuro
... or you can try Cherokee:

http://www.cherokee-project.com/

-- Tito

On 17 mars 2009, at 16:39, Nuno Magalhães wrote:

>> Does anyone know of an embedded http server that can serve and/or  
>> create pages from a sqlite database?
>
> What do you mean embedded? If you mean fast and lightweight, I use  
> nginx.
>
> Nuno Magalhães
> LU#484677
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] [OFF] Re: sqlite3 datbase disk image malformed

2009-03-05 Thread Tito Ciuro
Hi donnied,

On Mar 5, 2009, at 4:30 AM, donnied wrote:

>
> rsync was corrupting the database.  I'll have to exclude the  
> database from
> rsync backup.
>
>
> D. Richard Hipp wrote:
>>
>>
>> See http://www.sqlite.org/atomiccommit.html and especially section  
>> 9.0
>> "Things That Can Go Wrong"

I'm curious... did you do anything specific to end with a corrupted  
database when using rsync? Would you mind specifying the steps to do  
that?

Thank you,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3 datbase disk image malformed

2009-03-04 Thread Tito Ciuro
Hi Dan,

On Mar 4, 2009, at 9:21 AM, Dan wrote:

>
> On Mar 5, 2009, at 12:10 AM, Tito Ciuro wrote:
>
>> Hello,
>>
>> On Mar 4, 2009, at 6:06 AM, D. Richard Hipp wrote:
>>
>>> See http://www.sqlite.org/atomiccommit.html and especially section
>>> 9.0
>>> "Things That Can Go Wrong"
>>
>> Reading the above link, I'm curious about a specific case: 4.2 Hot
>> Rollback Journals. It states that:
>>
>> [...]
>> The first time that any SQLite process attempts to access the  
>> database
>> file, it obtains a shared lock as described in section 3.2 above. But
>> then it notices that there is a rollback journal file present. SQLite
>> then checks to see if the rollback journal is a "hot journal".
>> [...]
>>
>> SQLite's documentation in http://www.sqlite.org/lockingv3.html states
>> the following about a shared lock:
>>
>> [...]
>> The database may be read but not written. Any number of processes can
>> hold SHARED locks at the same time, hence there can be many
>> simultaneous readers. But no other thread or process is allowed to
>> write to the database file while one or more SHARED locks are active.
>> [...]
>>
>> So, if when SQLite attempts to access the data file for the first  
>> time
>> (thread T1) and obtains a shared lock, it seems that there's a window
>> of opportunity for secondary thread (T2) to obtain another shared
>> lock. In this case, T1 would not obtain an exclusive lock until T2  
>> has
>> completed reading. This would potentially leave T2 with damaged/
>> inconsistent data. Once T2's shared lock was relinquished, T1 would
>> proceed to rollback the hot journal.
>>
>> Shouldn't the first connection obtain an exclusive lock right away
>> instead and then perform the testing for the existence of a hot
>> journal? I'm probably mistaken, but this is what I gather from the
>> documentation mentioned above.
>
> After obtaining a shared-lock, SQLite tests for the existence of
> a hot-journal file. The test for a hot-journal file is that the
> journal file exists and that no other connection holds a RESERVED
> or PENDING lock on the database file. We know no other process
> is holding an EXCLUSIVE lock on the database file, since we are
> holding a SHARED lock.
>
> If it determines that there is a hot-journal file in the file system,
> SQLite obtains an EXCLUSIVE lock on the database file. It does
> not obtain a RESERVED or PENDING lock first like it does normally,
> but jumps straight to EXCLUSIVE.
>
> If the EXCLUSIVE lock is obtained Ok, roll back the journal file.
> If not, then release all locks and return SQLITE_BUSY. If the
> EXCLUSIVE lock cannot be obtained, then some other process must
> have obtained a SHARED lock. The other process will also try to
> roll back the hot-journal. By releasing all locks, hopefully
> we can get out of the other processes way fast enough to allow it
> to obtain the EXCLUSIVE lock and roll back the journal file.
>
> The key is that at no point is it possible for a second process
> to conclude that the database is valid when there is really a
> hot-journal file that requires rollback in the file-system. If
> there are multiple clients all trying to access the database at
> once then a few operations might return SQLITE_BUSY, but eventually
> one of the clients will successfully obtain the EXCLUSIVE lock
> and roll back the hot-journal.
>
> Dan.

Makes perfect sense (especially the last paragraph). Thanks for taking  
the time to write a detailed explanation.

Regards,

-- Tito

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] sqlite3 datbase disk image malformed

2009-03-04 Thread Tito Ciuro
Hello,

On Mar 4, 2009, at 6:06 AM, D. Richard Hipp wrote:

> See http://www.sqlite.org/atomiccommit.html and especially section 9.0
> "Things That Can Go Wrong"

Reading the above link, I'm curious about a specific case: 4.2 Hot  
Rollback Journals. It states that:

[...]
The first time that any SQLite process attempts to access the database  
file, it obtains a shared lock as described in section 3.2 above. But  
then it notices that there is a rollback journal file present. SQLite  
then checks to see if the rollback journal is a "hot journal".
[...]

SQLite's documentation in http://www.sqlite.org/lockingv3.html states  
the following about a shared lock:

[...]
The database may be read but not written. Any number of processes can  
hold SHARED locks at the same time, hence there can be many  
simultaneous readers. But no other thread or process is allowed to  
write to the database file while one or more SHARED locks are active.
[...]

So, if when SQLite attempts to access the data file for the first time  
(thread T1) and obtains a shared lock, it seems that there's a window  
of opportunity for secondary thread (T2) to obtain another shared  
lock. In this case, T1 would not obtain an exclusive lock until T2 has  
completed reading. This would potentially leave T2 with damaged/ 
inconsistent data. Once T2's shared lock was relinquished, T1 would  
proceed to rollback the hot journal.

Shouldn't the first connection obtain an exclusive lock right away  
instead and then perform the testing for the existence of a hot  
journal? I'm probably mistaken, but this is what I gather from the  
documentation mentioned above.

Regards,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Transactions and attached databases

2009-02-25 Thread Tito Ciuro
Hi Donald,

On Feb 25, 2009, at 3:39 PM, Griggs, Donald wrote:

> Greetings, Tito,
>
> Did you see page:
>http://www.sqlite.org/lang_attach.html
>
>
> Transactions involving multiple attached databases are atomic,  
> assuming
> that the main database is not ":memory:".
>
> It then goes on to say:
>
> If the main database is ":memory:" then transactions continue to be
> atomic within each individual database file. But if the host computer
> crashes in the middle of a COMMIT where two or more database files are
> updated, some of those files might get the changes where others might
> not.

No, I didn't look at that page... sorry about that. I was looking at  
the SQLite list instead for answers. Thank you for the help!

Regards,

-- Tito

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Transactions and attached databases

2009-02-25 Thread Tito Ciuro
Hello,

If I attach one or more databases and wrap a series of operations  
which affect some/all of them, would ROLLBACK or COMMIT treat these  
operations atomically?  For example:

Open database 'foo';
Attach database 'bar' as a1;

BEGIN TRANSACTION;
  INSERT INTO main.some_table ... ;
  DELETE FROM main.some_table WHERE ... ;
  INSERT INTO a1.another_table ... ;
COMMIT/ROLLBACK TRANSACTION;

Would the insertions and deletions be either committed or rolled back  
atomically? Is this supported?

Thanks,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about sqlite3_clear_bindings

2008-12-08 Thread Tito Ciuro

On 8 Dec 2008, at 11:54 AM, Igor Tandetnik wrote:

> Tito Ciuro <[EMAIL PROTECTED]> wrote:
>> On 8 Dec 2008, at 11:29 AM, Igor Tandetnik wrote:
>>
>>>> I'd like to know why sqlite3_clear_bindings
>>>> () exists in the first place.
>>>
>>> I don't know. It doesn't look particularly useful to me, either.
>>
>> Looking through the docs, it seems that sqlite3_clear_bindings() is a
>> convenience function for int sqlite3_bind_null(sqlite3_stmt*, int).  
>> So
>> if you have N columns, instead of clearing them one by one
>> sqlite3_clear_bindings() does that for you in one call.
>
> Well, yes. I know this. The question is, when, in practice, would you
> want to set all parameters to NULL?

Well, probably never :-)

However, if the majority of the columns need to be cleared, it's  
probably easier to clear all of them and rebind the ones needed...
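
A sketch of the kind of reuse loop I had in mind, where most  
parameters stay NULL from row to row (table and column names are made  
up); clear_bindings() saves re-binding NULL into every unused slot:

#include <sqlite3.h>

/* Insert rows where only a couple of the many columns are known. */
static int insert_sparse(sqlite3 *db, const char *names[], const char *emails[], int n) {
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
        "INSERT INTO Contacts(first, last, email, phone, notes) VALUES (?,?,?,?,?);",
        -1, &stmt, NULL);
    if (rc != SQLITE_OK) return rc;
    for (int i = 0; i < n && rc == SQLITE_OK; i++) {
        sqlite3_bind_text(stmt, 1, names[i], -1, SQLITE_TRANSIENT);
        sqlite3_bind_text(stmt, 3, emails[i], -1, SQLITE_TRANSIENT);
        rc = (sqlite3_step(stmt) == SQLITE_DONE) ? SQLITE_OK : SQLITE_ERROR;
        sqlite3_reset(stmt);           /* the statement can be stepped again */
        sqlite3_clear_bindings(stmt);  /* unused slots go back to NULL */
    }
    sqlite3_finalize(stmt);
    return rc;
}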

-- Tito

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about sqlite3_clear_bindings

2008-12-08 Thread Tito Ciuro
Hi Igor,

On 8 Dec 2008, at 11:29 AM, Igor Tandetnik wrote:

>> I'd like to know why sqlite3_clear_bindings
>> () exists in the first place.
>
> I don't know. It doesn't look particularly useful to me, either.

Looking through the docs, it seems that sqlite3_clear_bindings() is a  
convenience function for int sqlite3_bind_null(sqlite3_stmt*, int). So  
if you have N columns, instead of clearing them one by one  
sqlite3_clear_bindings() does that for you in one call. I'm just  
guessing...

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about sqlite3_clear_bindings

2008-12-08 Thread Tito Ciuro
Hi Igor,

On 7 Dec 2008, at 10:49 PM, Igor Tandetnik wrote:

> You might want to if most of your parameters are NULL, and you just  
> bind
> a some of them.

I didn't read your reply carefully enough. It makes sense, thank you.

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Question about sqlite3_clear_bindings

2008-12-08 Thread Tito Ciuro
Hello Igor,

On 7 Dec 2008, at 10:49 PM, Igor Tandetnik wrote:

> "Tito Ciuro" <[EMAIL PROTECTED]> wrote in message
> news:[EMAIL PROTECTED]
>> 1 • Create the object using sqlite3_prepare_v2() or a related
>> function. 2 • Bind values to host parameters using the
>> sqlite3_bind_*() interfaces.
>> 3 • Run the SQL by calling sqlite3_step() one or more times.
>> 4 • Reset the statement using sqlite3_reset().
>> 5 • Clear bindings using sqlite3_clear_bindings() then go back to  
>> step
>> 2. Do this zero or more times.
>
> This step is optional. You don't need to call sqlite3_clear_bindings.
> You might want to if most of your parameters are NULL, and you just  
> bind
> a some of them.
>
>> 6 • Destroy the object using sqlite3_finalize().
>>
>> If step 5 is incorrect, when should I be calling
>> sqlite3_clear_bindings ()?
>
> You don't have to, if you don't want to.

I'm curious as to why it's not needed. Is it because the latter bind  
overrides the former one? I'd like to know why sqlite3_clear_bindings()  
exists in the first place. Is it being kept around for backwards  
compatibility?

Thank you,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Question about sqlite3_clear_bindings

2008-12-07 Thread Tito Ciuro
Hello,

I was reading the following pages:

http://www.sqlite.org/c3ref/stmt.html
http://www.sqlite.org/c3ref/clear_bindings.html

I was wondering why sqlite3_clear_bindings() is not mentioned in the  
SQL Statement Object page. Would the following be incorrect?:

1 • Create the object using sqlite3_prepare_v2() or a related function.
2 • Bind values to host parameters using the sqlite3_bind_*()  
interfaces.
3 • Run the SQL by calling sqlite3_step() one or more times.
4 • Reset the statement using sqlite3_reset().
5 • Clear bindings using sqlite3_clear_bindings() then go back to step  
2. Do this zero or more times.
6 • Destroy the object using sqlite3_finalize().

If step 5 is incorrect, when should I be calling  
sqlite3_clear_bindings()?

Thanks,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Saving a memory-based SQLite database to disk

2008-11-09 Thread Tito Ciuro
Hi Jay,

Definitely worth investigating. Thanks for the tip!
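
For the archives, the pragma combination I plan to try, per your  
suggestion (the cache_size value is a placeholder, and  
journal_mode=OFF assumes a reasonably recent SQLite):

#include <sqlite3.h>

/* Trade durability for speed on a throwaway build database. */
static int make_db_fast_and_fragile(sqlite3 *db) {
    const char *pragmas =
        "PRAGMA cache_size = 20000;"   /* pages; big enough to hold the whole db */
        "PRAGMA journal_mode = OFF;"   /* no rollback journal at all */
        "PRAGMA synchronous = OFF;";   /* don't wait on fsync */
    return sqlite3_exec(db, pragmas, NULL, NULL, NULL);
}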

-- Tito

On 9 Nov 2008, at 8:44 AM, Jay A. Kreibich wrote:

> On Sat, Nov 08, 2008 at 03:34:29PM -0800, Tito Ciuro scratched on  
> the wall:
>> Hello,
>>
>> If I open a SQLite database in memory (using :memory:), would it be
>> possible to save it on disk? Here's the reason:
>
>> Manipulating data in memory is ~3.4x faster than using the file  
>> system
>> (YMMV). If I could process the bulk of the data in RAM, then I could
>> save the final database on disk saving precious time.
>
>
>  Another option is to use an on-disk database, but turn off most of
>  the safety and security features.
>
>  -- Turn the page cache up so it is large enough to hold the whole
> database in the cache.
>
>  -- Turn the journal file off.
>
>  -- Turn synchronous off.
>
>  (See http://sqlite.org/pragma.html for how to do all that)
>
>  There will be some start-up cost as the database is pulled into the
>  cache, and it won't be quite as fast as a true :memory: database,
>  since it still writes out data into the OS file-cache buffers.
>  Overall your performance will be very similar to a fully in-memory
>  database, plus you won't have to deal with shuffling data back and
>  forth between a memory database and a disk database.
>
>  Of course, you get most of that performance by turning off most
>  of the safety and security features so the reliability of a database
>  in this mode is more or less the same as an in-memory database...
>  the database is very likely to get corrupt if you suffer a process or
>  system failure.
>
>  But it is faster.  And sometimes that's what counts.
>
>  -j
>
> -- 
> Jay A. Kreibich < J A Y  @  K R E I B I.C H >
>
> "Our opponent is an alien starship packed with atomic bombs.  We have
> a protractor."   "I'll go home and see if I can scrounge up a ruler
> and a piece of string."  --from Anathem by Neal Stephenson
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Saving a memory-based SQLite database to disk

2008-11-09 Thread Tito Ciuro
Thank you for the pointer Joshua.
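
For the archives, here's the rough shape I sketched out from your  
suggestion (table names are made up):

#include <sqlite3.h>

/* Build everything in an attached :memory: database, then copy the
   finished table into the on-disk file in one statement. */
static int build_in_memory_then_save(sqlite3 *disk_db) {
    const char *sql =
        "ATTACH DATABASE ':memory:' AS mem;"
        "CREATE TABLE mem.staging(key TEXT, value TEXT);"
        /* ... heavy inserts into mem.staging go here ... */
        "CREATE TABLE IF NOT EXISTS main.final(key TEXT, value TEXT);"
        "INSERT INTO main.final SELECT key, value FROM mem.staging;"
        "DETACH DATABASE mem;";
    return sqlite3_exec(disk_db, sql, NULL, NULL, NULL);
}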

-- Tito

On 8 Nov 2008, at 3:46 PM, Joshua Paine wrote:

> Tito Ciuro wrote:
>> If I open a SQLite database in memory (using :memory:), would it be
>> possible to save it on disk?
>
> Open a disk db and use the ATTACH sql command to add a memory DB. Do
> your ops in the memory DB, then insert select the results into your  
> disk db.
>
> -- 
> Joshua Paine
> LetterBlock: Web applications built with joy
> http://letterblock.com/
> 301-576-1920
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Saving a memory-based SQLite database to disk

2008-11-08 Thread Tito Ciuro
Hello,

If I open a SQLite database in memory (using :memory:), would it be  
possible to save it on disk? Here's the reason:

Using a file-based SQLite database:

2008-11-08 15:15:44.180 XML2Plist[5554:10b] Number of XML documents  
converted successfully: 2861
2008-11-08 15:15:53.053 XML2Plist[5554:10b] Object count: 2861
2008-11-08 15:15:53.054 XML2Plist[5554:10b] Value count: 116914
2008-11-08 15:15:53.055 XML2Plist[5554:10b] Average number of  
attributes: 41
2008-11-08 15:15:53.055 XML2Plist[5554:10b] Average dictionary  
throughput: 325 (dict/sec)
2008-11-08 15:15:53.056 XML2Plist[5554:10b] Average insertion  
throughput: 13614 (rows/sec)
2008-11-08 15:15:53.056 XML2Plist[5554:10b] Store size: 17953792
2008-11-08 15:15:53.057 XML2Plist[5554:10b] Total time to add 2861  
objects: 8.798 seconds

Using a memory-based SQLite database:

2008-11-08 15:14:42.695 XML2Plist[5536:10b] Number of XML documents  
converted successfully: 2861
2008-11-08 15:14:45.275 XML2Plist[5536:10b] Object count: 2861
2008-11-08 15:14:45.276 XML2Plist[5536:10b] Value count: 116914
2008-11-08 15:14:45.277 XML2Plist[5536:10b] Average number of  
attributes: 41
2008-11-08 15:14:45.278 XML2Plist[5536:10b] Average dictionary  
throughput: 1114 (dict/sec)
2008-11-08 15:14:45.279 XML2Plist[5536:10b] Average insertion  
throughput: 46637 (rows/sec)
2008-11-08 15:14:45.279 XML2Plist[5536:10b] Store size: 0 (*)
2008-11-08 15:14:45.280 XML2Plist[5536:10b] Total time to add 2861  
objects: 2.568 seconds

(*) It's zero because I'm not aware of an API that will allow me to  
find out the size of memory-based databases.

Manipulating data in memory is ~3.4x faster than using the file system  
(YMMV). If I could process the bulk of the data in RAM, then I could  
save the final database on disk saving precious time.

Any ideas?

Thanks in advance,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] How can I count the number of times a word present in text indexed with FTS 3?

2008-06-10 Thread Tito Ciuro
Hello,

I was reading the document 'http://www.sqlite.org/cvstrac/wiki?p=FtsUsage' 
  and I was wondering whether there is a way to count the number of  
times a word is present in text indexed with FTS 3.


For example:
CREATE VIRTUAL TABLE recipe USING fts3(name, instructions);
INSERT INTO recipe VALUES ( 'coffee cake', 'Bake in oven for 30  
minutes. Remove from the oven and allow to cool for 5 minutes. Turn  
pan over onto a serving platter to remove. Serve by pulling apart  
chunks with forks' );

Is there some way to count the times the word 'oven' appears in the  
text? Here's what I'm trying to get at: to retrieve the rows where  
'oven' is referenced most often. This would allow me to build a  
"relevance" list, for example.
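
The closest I've found so far is the offsets() auxiliary function,  
which -- if I'm reading the wiki correctly -- emits four integers per  
match, so counting the groups gives a per-row frequency. A sketch of  
what I mean (possibly abusing the function):

#include <stdio.h>
#include <sqlite3.h>

/* Rank recipes by how often 'oven' occurs, by counting offsets() groups. */
static void rank_by_term(sqlite3 *db) {
    const char *sql =
        "SELECT name, offsets(recipe) FROM recipe WHERE recipe MATCH 'oven';";
    sqlite3_stmt *stmt = NULL;
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) != SQLITE_OK) return;
    while (sqlite3_step(stmt) == SQLITE_ROW) {
        const char *name = (const char *)sqlite3_column_text(stmt, 0);
        const char *off  = (const char *)sqlite3_column_text(stmt, 1);
        int integers = 0;
        for (const char *p = off; p && *p; p++)   /* offsets() is space-separated */
            if (*p == ' ') integers++;
        if (off && *off) integers++;              /* N spaces => N+1 integers */
        printf("%s: %d hit(s)\n", name ? name : "?", integers / 4);
    }
    sqlite3_finalize(stmt);
}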
Thanks in advance,

-- Tito
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Best way to optimize this query?

2007-05-02 Thread Tito Ciuro

Hi Donald,

On May 2, 2007, at 11:25 AM, Griggs, Donald wrote:


The ROWID is indexed implicitly I believe, so it may be slowing things
slightly if you index it explicitly.


Yes, I was aware of that, thanks for the heads up.


Regarding:  "What if I have, say, 500 to retrieve?"

You can create a temporary table, perhaps in ram memory, where you  
store

the GUIDS, e.g.
   CREATE TEMP TABLE MyGUIDS(GUID);
Then pull them all out of the people table all at once with:

SELECT * FROM People
  WHERE GUID IN (SELECT GUID FROM MyGUIDS)
  ORDER BY 


Thank you very much,

-- Tito


-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Best way to optimize this query?

2007-05-02 Thread Tito Ciuro

Hello,

On May 2, 2007, at 11:11 AM, P Kishor wrote:


On 5/2/07, Tito Ciuro <[EMAIL PROTECTED]> wrote:
When you say "speed things up," is it not fast enough yet? Numbers
would be helpful.


I've just tested it and the query is *very* fast. I was just  
wondering whether this type of query looked right.



try it, if SQLite complains, you will know, and you will have to
approach the problem differently ;-)


I'll try that. Thanks a lot,

-- Tito

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



[sqlite] Best way to optimize this query?

2007-05-02 Thread Tito Ciuro

Hello,

Assume the following scenario: I store people in a table, like this:

People
ROWID, idx
GUID, idx
First, idx
Last, idx
Email
...

In the app, the user can select People GUIDs from different sources  
and then retrieve the info from the database. The easy/suboptimal  
route to retrieve the records would be to perform a SELECT per GUID  
selected. I thought of something like this:


SELECT * FROM People where GUID in ("ABC", "RDT", "TUV");

Is there a better way to include all these GUIDs on a single SQL  
statement to speed things up?


Questions I have:

1) Is this the best way to solve the problem? Suggestions?
2) Is there a limit on the number of parameters I can pass to "in"?  
What if I have, say, 500 to retrieve? Will SQLite complain about this?
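
For what it's worth, here's roughly how I'm assembling the statement  
right now -- one '?' per GUID, bound in a loop (a sketch; I assume  
question 2 comes down to whatever the compile-time limit on the number  
of '?' parameters is):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sqlite3.h>

/* Fetch the rows for an arbitrary list of GUIDs using bound parameters. */
static int select_people(sqlite3 *db, const char **guids, int nGuids) {
    size_t len = strlen("SELECT * FROM People WHERE GUID IN (") + (size_t)nGuids * 2 + 2;
    char *sql = malloc(len);
    if (sql == NULL) return SQLITE_NOMEM;
    strcpy(sql, "SELECT * FROM People WHERE GUID IN (");
    for (int i = 0; i < nGuids; i++)
        strcat(sql, i == 0 ? "?" : ",?");
    strcat(sql, ")");

    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db, sql, -1, &stmt, NULL);
    free(sql);
    if (rc != SQLITE_OK) return rc;

    for (int i = 0; i < nGuids; i++)
        sqlite3_bind_text(stmt, i + 1, guids[i], -1, SQLITE_TRANSIENT);

    while (sqlite3_step(stmt) == SQLITE_ROW)
        printf("%s\n", (const char *)sqlite3_column_text(stmt, 0));  /* first column of the row */

    return sqlite3_finalize(stmt);
}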


Thanks a lot,

-- Tito

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] sqlite Performance

2007-03-15 Thread Tito Ciuro

Hello,

IIRC (it was a while ago), one way to speed up insertion for large  
data sets is to drop the indexes, do the inserts (wrapped in a  
transaction) and then rebuild the indexes. For smaller data sets, the  
drop/rebuild indexes solution doesn't make sense because the time it  
takes to do that invalidates the performance gain. However, larger  
data sets seemed to benefit greatly. Again... that was a while  
ago... :-)


I should dust-off my test app and see what the results are with the  
latest sources. I'll let you know what I find out.
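
In case anyone wants to reproduce it before I do, the pattern I  
benchmarked looked roughly like this (table and index names are made  
up, and error checking is stripped for brevity):

#include <sqlite3.h>

/* Bulk-load pattern: drop the index, insert inside one transaction,
   then rebuild the index at the end. */
static void bulk_load(sqlite3 *db, const char **values, int n) {
    sqlite3_exec(db, "DROP INDEX IF EXISTS idx_items_value;", NULL, NULL, NULL);
    sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);

    sqlite3_stmt *stmt = NULL;
    sqlite3_prepare_v2(db, "INSERT INTO items(value) VALUES (?);", -1, &stmt, NULL);
    for (int i = 0; i < n; i++) {
        sqlite3_bind_text(stmt, 1, values[i], -1, SQLITE_STATIC);
        sqlite3_step(stmt);
        sqlite3_reset(stmt);
    }
    sqlite3_finalize(stmt);

    sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);
    sqlite3_exec(db, "CREATE INDEX idx_items_value ON items(value);", NULL, NULL, NULL);
}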


-- Tito

On Mar 15, 2007, at 11:42 AM, John Stanton wrote:

There are no free lunches.  When Sqlite stores your data item it  
not only writes it into a linked list of pages in a file but also  
inserts at least one key into a B-Tree index.  It does it quite  
efficiently so what you are seeing is the inevitable overhead of  
storing the data in a structured form.  The value of the structure  
becomes obvious when you are retrieving a single item from a set of  
millions and the index allows you to access it in a tiny fraction  
of the time it would take to search an unstructured list like a  
flat file.


The ACID implementation in Sqlite provides data security but it  
does involve a significant overhead.  You pay a price for not  
losing data in a system crash.


Like all things in life "you pays your money and you takes your  
choice".  It is somewhat simpler with Sqlite in that you don't pay  
your money, you just take your choice.


If you want faster Sqlite performance use faster disks.  The  
latency is important so 15,000 rpm disks will be better than 5,400  
rpm ones.


Ken wrote:
To answer your question:  Yes I can use a flat file at this stage,  
but eventually it needs to be imported into some type of  
structure. So to that end I decided early on to use sqlite to  
write the data out.   I was hoping for better performance. The raw  
I/O to read the data and process is around .75 seconds (no write i/ 
o).. So using a flat file output costs about .7 seconds.
  Using sqlite to do the output costs about 2.25 seconds. My  
question is why? And what can be done to improve this  
performance?   John Stanton <[EMAIL PROTECTED]> wrote: Ken wrote:
I'm looking for suggestions on improving performance of my sqlite  
application.
Here are system timings for a run where the sqlite db has been  
replaced with a flat file output.

real 0m1.459s
user0m0.276s
sys  0m0.252s
This is a run when using sqlite as the output format.
real 0m3.095s
user0m1.956s
sys  0m0.160s
As you can see sqlite takes twice as long and almost 8 times the  
user time.

Output size for flat file:   13,360,504   flatfile.dat
Output size for sqlite file: 11,042,816   sqlt.db
Sqlite db has the following pragmas set.
 PRAGMA default_synchronous=FULL
 PRAGMA temp_store=memory
 PRAGMA page_size=4096
 PRAGMA cache_size=2000
Any ideas how to get the sqlite output timings to a more  
respectable level would be appreciated.

Thanks
Ken
If you want flat file performance, use a flat file.  Sqlite is  
built on top of a flat file and cannot be faster or even as fast.   
If your application can use a flat file, why use anything more  
complex?

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] multi threats parallel reading db

2006-08-18 Thread Tito Ciuro

On Aug 18, 2006, at 6:45 AM, [EMAIL PROTECTED] wrote:


Please double-check your code.  The error message "library routine
called out of sequence" is what you get when you try to use the
the same database connection from more than one thread at one time.


If I remember correctly, there are two other cases where  "library  
routine called out of sequence" would happen:


- the database connection has already been closed
- you're trying to use a prepared statement that has already been  
finalized


Is this still true?

Regards,

-- Tito

Re: [sqlite] Intermittent crash in sqlite3_prepare()

2006-04-24 Thread Tito Ciuro

Hi Doug,

This is what I see in the console:

(gdb) print *db
$2 = {
  nDb = 2,
  aDb = 0x177a750c,
  flags = 32,
  errCode = 5,
  autoCommit = 1 '\001',
  temp_store = 0 '\0',
  nTable = 1,
  pDfltColl = 0x1770f300,
  lastRowid = 0,
  priorNewRowid = 0,
  magic = -1607883113,
  nChange = 0,
  nTotalChange = 0,
  init = {
iDb = 0,
newTnum = 1,
busy = 0 '\0'
  },
  pVdbe = 0x0,
  activeVdbeCnt = 0,
  xTrace = 0,
  pTraceArg = 0x0,
  xProfile = 0,
  pProfileArg = 0x0,
  pCommitArg = 0x0,
  xCommitCallback = 0,
  pRollbackArg = 0x0,
  xRollbackCallback = 0,
  pUpdateArg = 0x0,
  xUpdateCallback = 0,
  xCollNeeded = 0,
  xCollNeeded16 = 0,
  pCollNeededArg = 0x0,
  pErr = 0x17715590,
  zErrMsg = 0x0,
  zErrMsg16 = 0x0,
  xAuth = 0,
  pAuthArg = 0x0,
  xProgress = 0,
  pProgressArg = 0x0,
  nProgressOps = 0,
  pNext = 0x0,
  aFunc = {
keyClass = 3 '\003',
copyKey = 0 '\0',
count = 36,
first = 0x17737ef0,
xMalloc = 0x44f304 ,
xFree = 0x44f2b0 ,
htsize = 64,
ht = 0x1b12c00
  },
  aCollSeq = {
keyClass = 3 '\003',
copyKey = 0 '\0',
count = 2,
first = 0x17576f60,
xMalloc = 0x44f304 ,
xFree = 0x44f2b0 ,
htsize = 8,
ht = 0x1770d7d0
  },
  busyHandler = {
xFunc = 0x2a2ae4 ,
pArg = 0x177a7420,
nBusy = -1
  },
  busyTimeout = 3000,
  aDbStatic = {{
  zName = 0x2a4f44 "main",
  pBt = 0x1778a6b0,
  inTrans = 0 '\0',
  safety_level = 3 '\003',
  pAux = 0x0,
  xFreeAux = 0,
  pSchema = 0x177434f0
}, {
  zName = 0x2a4f4c "temp",
  pBt = 0x0,
  inTrans = 0 '\0',
  safety_level = 1 '\001',
  pAux = 0x0,
  xFreeAux = 0,
  pSchema = 0x17507660
}}
}
(gdb) print zSql
$3 = 0x1774fec0 "SELECT * FROM main.sqlite_master WHERE name =  
'documents';"


Also:

nBytes = -1
ppStmt = 0x0
pzTail = 0x0

I realized that I was initializing 'ppStmt' to NULL by mistake   
before calling sqlite3_prepare(). Removing the initialization cleared  
the error.


Thanks Doug.

-- Tito

On 24/04/2006, at 12:41, Doug Nebeker wrote:

What does the code look like that is calling sqlite3_prepare?  The  
crash

point is just dereferening a pointer that you passed in, and it looks
like the pointer value must be bad.


int sqlite3_prepare(
  sqlite3 *db,  /* Database handle. */
  const char *zSql, /* UTF-8 encoded SQL statement. */
  int nBytes,   /* Length of zSql in bytes. */
  sqlite3_stmt **ppStmt,/* OUT: A pointer to the prepared
statement */
  const char** pzTail   /* OUT: End of parsed string */
){
  Parse sParse;
  char *zErrMsg = 0;
  int rc = SQLITE_OK;
  int i;

  /* Assert that malloc() has not failed */
  assert( !sqlite3MallocFailed() );

  assert( ppStmt );
  *ppStmt = 0;  << the crash occurs here...



Any ideas why this is happening?

Thanks,

-- Tito


To find out more about Reuters visit www.about.reuters.com

Any views expressed in this message are those of the individual  
sender, except where the sender specifically states them to be the  
views of Reuters Ltd.






[sqlite] Intermittent crash in sqlite3_prepare()

2006-04-24 Thread Tito Ciuro

Hello,

When I'm running several threads at the same, I sometimes get a crash  
in sqlite3_prepare() in the marked statement below:



int sqlite3_prepare(
  sqlite3 *db,  /* Database handle. */
  const char *zSql, /* UTF-8 encoded SQL statement. */
  int nBytes,   /* Length of zSql in bytes. */
  sqlite3_stmt **ppStmt,/* OUT: A pointer to the prepared  
statement */

  const char** pzTail   /* OUT: End of parsed string */
){
  Parse sParse;
  char *zErrMsg = 0;
  int rc = SQLITE_OK;
  int i;

  /* Assert that malloc() has not failed */
  assert( !sqlite3MallocFailed() );

  assert( ppStmt );
  *ppStmt = 0;  << the crash occurs here...



Any ideas why this is happening?

Thanks,

-- Tito


Re: [sqlite] Is it safe to read or write a table while being indexed?

2006-04-18 Thread Tito Ciuro

Hi Christian,

On 18/04/2006, at 8:22, Christian Smith wrote:


SQLite will make possible whatever is safe. Just be prepared to handle
SQLITE_BUSY and SQLite will take care of the rest.


Great. This is what I do now. No worries here then...
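
Concretely, it's just a busy timeout plus a small retry loop around  
the step; a rough sketch (the timeout and back-off values are  
arbitrary):

#include <unistd.h>
#include <sqlite3.h>

/* Let SQLite retry internally for up to 2 seconds; if it still reports
   SQLITE_BUSY, back off briefly and start the statement over. */
static int step_with_retry(sqlite3 *db, sqlite3_stmt *stmt) {
    int rc = SQLITE_BUSY;
    sqlite3_busy_timeout(db, 2000);          /* milliseconds */
    for (int attempt = 0; attempt < 5; attempt++) {
        rc = sqlite3_step(stmt);
        if (rc != SQLITE_BUSY)
            break;
        sqlite3_reset(stmt);                 /* start the statement over */
        usleep(100000);                      /* 100 ms back-off */
    }
    return rc;
}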

If the indexing process uses a large SQLite cache (using the  
cache_size

PRAGMA) then the indexer can perform some of the indexing without
promoting the lock to EXCLUSIVE, and allow concurrent access with  
readers.

This is using a RESERVED lock. However, the EXCLUSIVE lock will be
required once the cache is filled with dirty pages as pages will be
required to be spilled to the database file (and hence the rollback
journal etc will be created.)  This will occur once all current  
readers

have finished. When an EXCLUSIVE lock is required, no new readers are
allowed.


Thanks so much for the clarification.

Regards,

-- Tito


[sqlite] Acquiring a PENDING lock

2006-04-16 Thread Tito Ciuro

Hello,

As stated in the documentation I see that:


A deferred transaction starts without a lock and obtains a SHARED  
lock on the first read and the first write operation creates a  
RESERVED lock.


An immediate acquires a RESERVED lock as soon as the BEGIN command  
is executed, without waiting for the database to be used.


An exclusive transaction causes EXCLUSIVE locks to be acquired on  
all databases. After a BEGIN EXCLUSIVE, you are guaranteed that no  
other thread or process will be able to read or write the database  
until the transaction is complete.



Then I see the following statement:


A PENDING lock is always just a temporary stepping stone on the  
path to an EXCLUSIVE lock and so the pager module does not track  
PENDING locks.



Does that mean that if there are readers accessing the database and I  
issue a BEGIN EXCLUSIVE, SQLite will create a PENDING lock, barring  
new readers from acquiring access? Then as soon as the current readers  
finish, the PENDING lock will be upgraded to an EXCLUSIVE lock,  
barring other readers and writers from accessing the database, correct?


Thanks,

-- Tito

Re: [sqlite] Is it safe to read or write a table while being indexed?

2006-04-16 Thread Tito Ciuro

On 16/04/2006, at 3:52, [EMAIL PROTECTED] wrote:


Tito Ciuro <[EMAIL PROTECTED]> wrote:

- Does SQLite acquire an EXCLUSIVE lock when indexing?


Yes


- If I'm not mistaken, an EXCLUSIVE lock does not stop other readers
from accessing the database.



You are mistaken.  An EXCLUSIVE lock means that only the connection
holding the lock is able to access the file.



Thanks for the clarification.

Regards,

-- Tito


[sqlite] Is it safe to read or write a table while being indexed?

2006-04-15 Thread Tito Ciuro

Hello,

I was wondering whether it is safe to read or write a table while  
being indexed. Here's a scenario: for batch imports, it's sometimes  
better to DROP the indexes, do the INSERTs and then recreate the  
relevant indexes. Indexing may take a little bit of time, so I was  
wondering if:


- Other processes/threads could read or write the data (even though  
the indexes are not yet available)


- Is it safe, or even possible?

- Does SQLite acquire an EXCLUSIVE lock when indexing? If not, should  
I wrap the CREATE INDEX statements within a BEGIN EXCLUSIVE  
transaction, to be safe?


- If I'm not mistaken, an EXCLUSIVE lock does not stop other readers  
from accessing the database. What would happen if a process/thread  
reads data being indexed?


Thanks in advance,

-- Tito


Re: [sqlite] [SOLVED] Re: [sqlite] LIKE and GLOB bug with numbers?

2006-03-28 Thread Tito Ciuro

Hi Denis,

I've been reading your email carefully and I'd like to comment it.

On 28/03/2006, at 14:24, Dennis Cote wrote:

With these tables you will have 25K rows in the File table, one per  
file, and 250K rows in the Attribute table assuming an average of  
10 attributes per file (your example had 14).


An index on the attribute name and value pairs will greatly speedup  
lookups of a particular attribute.


There is still some redundancy in this set of tables since the  
names of the attributes are being repeated over and over in the  
Attribute table. This may not be a big problem, but you really only  
have 100 attributes, so there is no sense storing 25K copies of the  
name of a common attribute that applies to all files. This  
information can be factored out into a third table as shown below.


[...] This will give you three tables (which will not cause a  
noticeable difference in the database open time). One with 25K rows  
of file names. One with 100 rows of attribute definition data,  
which includes the attribute name.  And one with  250K rows of  
attribute value data.


Due to application requirements, I must rely on LIKE and GLOB in  
order to match data that contains some value (case-sensitive or  
case-insensitive match).


Now, it seems to me that using either LIKE or GLOB will force a row  
scan anyhow, since it can't use the index, correct? So your solution  
would force me to perform the row scan for 250K rows when matching  
values. That is 10 times more data to scan through.


I'm not sure if there is a way to speed up LIKE or GLOB queries such as:

SELECT myvalue FROM files WHERE myvalue LIKE '%finit%';

Thanks a lot for the advice.

Regards,

-- Tito


Re: [sqlite] [SOLVED] Re: [sqlite] LIKE and GLOB bug with numbers?

2006-03-28 Thread Tito Ciuro

John,

Did you read my replies at all? If not, please take 15 seconds to do  
so. I thanked *all of you* in all three emails.


Just in case, if that wasn't enough, allow me to do that for the  
fourth time: I most sincerely appreciate the time and effort that you  
guys have taken to answer my emails. I really do appreciate it your  
help.


Best regards,

-- Tito

On 28/03/2006, at 13:45, John Stanton wrote:


Tito,

Several people have bothered to spend the time and effort to give  
you some well considered help.  That is what forums like this are  
for, professional guidance and development.  Thank them.


Re: [sqlite] [SOLVED] Re: [sqlite] LIKE and GLOB bug with numbers?

2006-03-28 Thread Tito Ciuro

MGC,

I have no idea why you're so angry. Anyway, there are so many things  
I can think of saying, I'll just make it brief and to the point.


1) Regarding your statement:

This thing won't scale. I'd like to see it when you have the 4.5  
million records my database contains,

and that is still tiny for all intents-and-purposes.


For the type of application I'm building, it doesn't need to scale.  
At most I'll have 25K records. Even with the 100K database I've used  
for testing, it simply flies.


I'm not building anything for the outside world. Just something that  
serves me well.


2) Regarding:


Absolutely NO ONE  suggested moving anything out of "SQLite-land".


What!?!? You mentioned it two days ago!:


Stuff it into a sorted flat file.
that would be faster and simpler.


3) Regarding your statement:

As to your 'real good reason' for doing it this way,  I'd bet cash  
money it's crap and based on nothing more than 'Because that's the  
way I decided to do it, and I'm smart'.


Talk about making things up... you're a funny guy :-)

I'm storing variable-length data, with a very different set of  
attributes. Some may have 1 attribute, others tens of them. Perhaps  
even a hundred. Using a column per attribute is not a good idea. A  
few days ago I asked this question and Dr. Hipp mentioned:



The more tables you have, the slower the first query will run
and the more memory SQLite will use.  For long-running applications
where the startup time is not a significant factor, 100s or
1000s of tables is fine.  For a CGI script that starts itself
up anew several times per second, then you should try to keep
the number of tables below a 100, I think.  Less than that if
you can. You should also try and keep down the number of tables
in low-memory embedded applications, in order to save on memory
usages.  Each table takes a few hundred bytes of memory - depending
on the number of columns and features.


Having two columns (one for the key and the other one for the data  
itself) seems like a good balance between speed and ease of use. I  
don't care if it doesn't scale because the intended deployment is 25K  
at most, as I said earlier. Even with data sets 4x that size works fine.


There is absolutely no reason this could not be properly designed  
and STILL fit any possible need for that ugly packed record.


So you know it all, eh? And you call me arrogant? :-)

I'm very happy for the solution: it's speedy and is simple. As for  
the original question I posted, I'm also glad to report that LIKE and  
GLOB works fine.


Cheers,

-- Tito


[sqlite] [SOLVED] Re: [sqlite] LIKE and GLOB bug with numbers?

2006-03-28 Thread Tito Ciuro

Hi Martin,

I haven't been able to pick up pace my email until this morning.  
Sorry about that.


As it turns out, the error was on my side (surprise). The code seemed  
to be working fine: early tests showed data being churned as  
expected, so I started cleaning up and refactoring the code. To  
make a long story short, I accidentally deleted the line that was  
incrementing the index kept in sync with the array enumerator.  
Result: I was always storing element zero in the database. No wonder  
I had 4000+ entries looking exactly the same! The first version of  
the code was working fine, so when I started noticing weird results I  
assumed that LIKE and GLOB were the culprits.


Oh!... just a quick message for the row-scan non-believers out there:  
SQLite flies, even under this scenario. I'm getting wonderful  
performance numbers by keeping everything within SQLite-land. The  
code is simple and I let SQLite do all the magic. What else can I ask  
for? :-)


Thanks everybody for your help and comments.

Regards,

-- Tito

On 27/03/2006, at 3:40, Martin Jenkins wrote:


Tito,

I knocked up a quick test with python and apsw and it worked as  
intended. My data isn't exactly the same as yours in that I don't  
have the variety in the keys, but you're not having problems with  
those. My test database contains your data with/without embedded  
carriage returns - as expected, it makes no difference.


In the following, zip(..) is a quick hack to get all the results  
from the query. The spurious '[', ']' and other brackets  
surrounding the results are a result of the way that apsw returns  
data (as lists of python tuples).


Apologies for the extreme width of the following lines. :(

zip(csr.execute("select * from t"))

[
(('file5809', '(0,NSFileTypeRegular,0,22537,0,staff, 
234881026,294022,2004-12-16 10:11:00 -0800,tciuro,384,2006-03-26  
08:01:55 -0800,502,20)'),),
(('file0101581a', '(1,NSFileTypeRegular,1,22554,0,staff, 
234881026,294022,2004-12-16 10:11:03 -0800,tciuro,384,2006-03-26  
08:04:55 -0800,502,20)'),),
(('file0202582b', '(2,NSFileTypeRegular,2,22571,0,staff, 
234881026,294022,2004-12-16 10:11:06 -0800,tciuro,384,2006-03-26  
08:07:55 -0800,502,20)'),),

...
(('file595d', '(\n   0,\nNSFileTypeRegular,\n0,\n  
22877,\n0,\nstaff,\n234881026,\n294022,\n 
2004-12-16 10:11:00 -0800,\ntciuro,\n384,\n2006-03-26  
08:01:55 -0800,\n 502,\n20\n)'),),
(('file0101596e', '(\n   1,\nNSFileTypeRegular,\n1,\n  
22894,\n0,\nstaff,\n234881026,\n294022,\n 
2004-12-16 10:11:03 -0800,\ntciuro,\n384,\n2006-03-26  
08:04:55 -0800,\n 502,\n20\n)'),), (('file0202597f',
'(\n   2,\nNSFileTypeRegular,\n2,\n22911,\n0,\n 
staff,\n 234881026,\n294022,\n2004-12-16 10:11:06 -0800, 
\ntciuro,\n 384,\n2006-03-26 08:07:55 -0800,\n502,\n 
20\n)'),),

...
]

zip(csr.execute("SELECT * FROM t WHERE CMValues GLOB '*2004-12-16  
10:11:45 -0800*'"))


[
(('file15155908', '(15,NSFileTypeRegular,15,22792,0,staff, 
234881026,294022,2004-12-16 10:11:45 -0800,tciuro,384,2006-03-26  
08:46:55 -0800,502,20)'),),
(('file15155a5c', '(\n   15,\nNSFileTypeRegular,\n15,\n  
23132,\n0,\nstaff,\n234881026,\n294022,\n 
2004-12-16 10:11:45 -0800,\ntciuro,\n384,\n2006-03-26  
08:46:55 -0800,\n 502,\n20\n)'),)

]

zip(csr.execute("SELECT * FROM t WHERE CMValues LIKE '%2004-12-16  
10:11:45 -0800%'"))


[
(('file15155908', '(15,NSFileTypeRegular,15,22792,0,staff, 
234881026,294022,2004-12-16 10:11:45 -0800,tciuro,384,2006-03-26  
08:46:55 -0800,502,20)'),),
(('file15155a5c', '(\n   15,\nNSFileTypeRegular,\n15,\n  
23132,\n0,\nstaff,\n234881026,\n294022,\n 
2004-12-16 10:11:45 -0800,\ntciuro,\n384,\n2006-03-26  
08:46:55 -0800,\n 502,\n20\n)'),)

]

Could you try reducing your search strings and see if there's a  
point at which they start working?


HTH,

Martin Jenkins
XQP Ltd
Ascot, UK

- Original Message - From: "Tito Ciuro" <[EMAIL PROTECTED]>
To: "Forum SQLite" <sqlite-users@sqlite.org>
Sent: Sunday, March 26, 2006 6:50 PM
Subject: [sqlite] LIKE and GLOB bug with numbers?



Hello,

I've populated a datafile with 40.176 records which contain file  
attributes and file paths. I have two columns, CMKey and  
CMValues.  The column CMKey contains the path to the file and the  
column  CMValues contains the attribute values. For example:


CMKey: Application Support/AbiSuite/AbiWord.Profile

CMValues:
(
0,
NSFileTypeRegular,
1,
21508,
0,
staff,
234881026,
294022,
2004-12-16 10:11:35 -0800,
tciuro,
384,
2006-03-26 08:35:55 -0800,
502,
20
)

Both columns are of type TEXT.

This is what I've found:

1) SELECT * FROM FinderFiles WHERE CMKey GLOB '*AbiW

Re: [sqlite] LIKE and GLOB bug with numbers?

2006-03-26 Thread Tito Ciuro

Hello everybody,

On 26/03/2006, at 10:08, John Stanton wrote:

LIKE and GLOB do a row scan, and give you none of the advantages of  
an RDBMS.  Why not use a flat file and grep and get simplicity and  
greater speed?


I'm very well aware that LIKE and GLOB perform a row scan. I do  
appreciate your concerns about the presumed lack of suitability of my  
design, but that is not open for discussion.


The original question was related about something very specific:  
whether there is a bug in LIKE and GLOB when dealing with  numbers.  
Regardless of whether my design is suitable or not, I think it's  
important to clarify what's happening. Other users might be suffering  
from this issue without knowing it.


Again, thanks for your responses.

Regards,

-- Tito

Re: [sqlite] LIKE and GLOB bug with numbers?

2006-03-26 Thread Tito Ciuro

On 26/03/2006, at 10:51, MGC wrote:


Your design is fundamentaly wrong.
I don't know what your intended use
is for this data, but I am logging identical fstat file info along  
with an

MD5 sums.


Well... if you don't know what the intended use for the data is, how  
can you say that my design is fundamentally wrong? :-)


It's not wrong. That's the way it has to be. Now, if I could match  
the data properly with LIKE and GLOB, that would be great.


Thanks for your response though.

Regards,

-- Tito

[sqlite] LIKE and GLOB bug with numbers?

2006-03-26 Thread Tito Ciuro

Hello,

I've populated a datafile with 40.176 records which contain file  
attributes and file paths. I have two columns, CMKey and CMValues.  
The column CMKey contains the path to the file and the column  
CMValues contains the attribute values. For example:


CMKey: Application Support/AbiSuite/AbiWord.Profile

CMValues:
(
0,
NSFileTypeRegular,
1,
21508,
0,
staff,
234881026,
294022,
2004-12-16 10:11:35 -0800,
tciuro,
384,
2006-03-26 08:35:55 -0800,
502,
20
)

Both columns are of type TEXT.

This is what I've found:

1) SELECT * FROM FinderFiles WHERE CMKey GLOB '*AbiWord.Profile*'  
returns 1 match. This is correct.


2) SELECT * FROM FinderFiles WHERE CMKey LIKE '%ABIWORD.Profile%'  
returns 1 match. This is correct.


3) SELECT * FROM FinderFiles WHERE CMValues GLOB '*2004-12-16  
10:11:35 -0800*' returns 40.176 matches. This is not correct. There  
is no way I created these 40.176 files at the *very same* time. Just
to be sure, I looked at one random file (of the 40.176) and
obtained the following creation date attribute:


NSFileCreationDate = 2004-02-21 06:12:43 -0800;

The same problem occurs if I perform the query:

SELECT * FROM FinderFiles WHERE CMValues LIKE '%2004-12-16 10:11:35  
-0800%'


This problem seems to occur when trying to match something with numbers:

- If I look for NSFilePosixPermissions 448 (which I know exists) I  
get zero matches

- If I look for strings, such as in step #1 or #2, it works fine.

Something is wrong, I just can't figure out why...

Any ideas? Is this a bug?

Thanks,

-- Tito


[sqlite] "Safe" maximum numbers of tables?

2006-03-23 Thread Tito Ciuro

Hello,

I was reading the FAQ and I came across this statement:


In practice, SQLite must read and parse the original SQL of all  
table and index declarations every time a new database file is
opened, so for the best performance of sqlite3_open() it is best to  
keep down the number of declared tables.


Is there a "safe" maximum number of tables that doesn't negatively  
impact too much sqlite3_open()? I wonder if it's 10, 25...?


Any ideas?

Thanks,

-- Tito

Re: [sqlite] INSERT multiple times using sqlite_bind_text()

2006-03-20 Thread Tito Ciuro

Perfect! It works fine now... :-)

Many thanks,

-- Tito

On 20/03/2006, at 12:23, [EMAIL PROTECTED] wrote:


Tito Ciuro <[EMAIL PROTECTED]> wrote:

Hello,

I'm trying to INSERT many records using sqlite3_bind_text(). This is
what I do:


// Finish...
result = sqlite3_finalize(statement);

What am I missing?



Use sqlite3_reset() if you intend to reuse the statement.
sqlite3_finalize is the destructor.
--
D. Richard Hipp   <[EMAIL PROTECTED]>





[sqlite] INSERT multiple times using sqlite_bind_text()

2006-03-20 Thread Tito Ciuro

Hello,

I'm trying to INSERT many records using sqlite3_bind_text(). This is  
what I do:


sqlite3_stmt *statement = NULL;
const char *sql = "INSERT INTO mytable(foo, bar) VALUES (?,?);";
int result = sqlite3_prepare(sqliteDatabase, sql, -1, &statement, NULL);

// Do a bunch of binds and execute...
for (i = 0; i < 100; i++)
{
    const char *keyString = "one";
    const char *valueString = "two";

    int resultBindKey = sqlite3_bind_text(statement, 1, keyString, -1,
                                          SQLITE_STATIC);
    int resultBindContent = sqlite3_bind_text(statement, 2, valueString, -1,
                                              SQLITE_STATIC);

    if ((resultBindKey == SQLITE_OK) && (resultBindContent == SQLITE_OK)) {
        BOOL waitingForRow = YES;

        do {
            int result = sqlite3_step(statement);

            switch (result) {
                case SQLITE_BUSY:
                    break;
                case SQLITE_OK:
                case SQLITE_DONE:
                    waitingForRow = NO;
                    break;
                case SQLITE_ROW:
                    waitingForRow = NO;
                    break;
                default:
                    waitingForRow = NO;
                    break;
            }
        } while (waitingForRow);
    }
}

// Finish...
result = sqlite3_finalize(statement);

The first iteration works fine, but after that I get SQLITE_MISUSE
(error code 21) when calling sqlite3_bind_text(). I thought I was
supposed to prepare the statement once, then do a bunch of binds, and
finalize at the end.


What am I missing?

Thanks,

-- Tito


Re: [sqlite] Unrecognized token error

2006-03-19 Thread Tito Ciuro

Hi John,

On 19/03/2006, at 18:23, John Stanton wrote:


Tito Ciuro wrote:

Hello,
I'm trying to store a string and I get the following error:
unrecognized token: \"!\"\"
This happens with SQLite 3.2.7
Any ideas?
Thanks,
-- Tito

What is the string?


If I escape single quotes (that is, from "'" to "''"), then
everything is fine. I usually do that when I want to query for a
value that contains a quote (e.g. O'Rourke).


However, why do I have to manipulate the string value to be stored?  
Shouldn't SQLite allow me to store any string I want?


Thanks,

-- Tito
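
A small sketch of the alternative to escaping: bind the text with
sqlite3_bind_text() instead of splicing it into the SQL, so the value is
never parsed as SQL and characters such as ' or ! cannot produce an
"unrecognized token" error. The table prefs and the function store_value
are made up for this illustration.

#include <sqlite3.h>

/* Store an arbitrary string without hand-escaping quotes. */
int store_value(sqlite3 *db, const char *value)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare(db, "INSERT INTO prefs(body) VALUES (?);",
                             -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* The value is bound, not parsed, so no quoting is needed. */
    sqlite3_bind_text(stmt, 1, value, -1, SQLITE_TRANSIENT);
    rc = sqlite3_step(stmt);
    sqlite3_finalize(stmt);
    return (rc == SQLITE_DONE) ? SQLITE_OK : rc;
}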


[sqlite] [LONG]Re: [sqlite] Unrecognized token error

2006-03-19 Thread Tito Ciuro

Hi John,

On 19/03/2006, at 18:23, John Stanton wrote:


Tito Ciuro wrote:

Hello,
I'm trying to store a string and I get the following error:
unrecognized token: \"!\"\"
This happens with SQLite 3.2.7
Any ideas?
Thanks,
-- Tito

What is the string?


Here's the string (it's a preferences file):

\n\"-//Apple Computer//DTD PLIST 1.0//EN\" \"http://www.apple.com/DTDs/ 
PropertyList-1.0.dtd\">\n\n\n 
\tABDirectoryResultColumnTitle\n\tInstant  
Messaging\n\tAccountSortOrder\n\t\n\t 
\t3A86A458-555A-4D2C-A710-2D3E40372D7A\n\t\n 
\tAutosaveChats\n\t\n\tBuddyInfoSelectedTabkey>\n\t0\n\tBuddyList.EnableGroups\n 
\t\n\tBuddyList.Visible\n\t\n 
\tCardsBlockingPresentityPictures\n\t\n\t 
\tCA3EA6FE-7F19-4419-8AB1-571F8D445738:ABPerson\n\tarray>\n\tConfirmSendFiles\n\t\n 
\tCustomAvailableMessages\n\t\n\t\tFat  
Binaries over coffee\n\t\tCocoa bindings with a  
bagel\n\t\tPalo Alto Caf\U00e9, California\n 
\t\t@Sina's House\n\t\tWeb Services +  
Bonjour = Yummy!\n\t\tSharing data just got  
easier ;-)\n\t\twww.liveplasma.com\n\t 
\tIn Yuba City, CA!\n\t\tRunning around the  
house...\n\t\n\tCustomAwayMessages\n 
\t\n\t\tBe right back!\n\t\n 
\tCustomInBalloonColor\n\t\n 
\tCustomOutBalloonColor\n\t\n 
\tCustomStatusMessageSheetFrame\n\t46 537 390 286  
0 0 1280 832 \n\tDockDebounceTime\n\t0.5real>\n\tGammaFadePauseTime\n\t0.00499888241291real>\n\tJabberViewer.EnableGroups\n\t\n 
\tJabberViewer.Visible\n\t\n 
\tMessageInTextStyle\n\t\n\t\tbgcolor\n\t 
\t-1\n\t\tfgcolor\n\t\t0integer>\n\t\tfont\n\t\tHelveticaNeue\n\t 
\tsize\n\t\t13\n\t\n 
\tMessageTextStyle\n\t\n\t\tbgcolor\n\t 
\t8107502\n\t\tfgcolor\n\t\t0integer>\n\t\tfont\n\t\tHelveticaNeue\n\t 
\tsize\n\t\t13\n\t\n 
\tNSFontPanelAttributes\n\t1, 1\n 
\tNSFontPanelPreviewHeight\n\t0.0\n 
\tNSNavBrowserPreferedColumnContentWidth\n\t186real>\n\tNSNavLastRootDirectory\n\t~/Desktopstring>\n\tNSNavPanelExpandedSizeForOpenMode\n\t 
{519, 400}\n\tNSPreferencesContentSize\n\t 
{517, 389}\n\tNSPreferencesSelectedIndex3\n 
\t1\n\tNSRecentDocumentRecords\n 
\t\n\t\t\n\t\t\t_NSLocator\n\t\t\t\n\t\t 
\t\t_NSAlias\n\t\t\t\t\n\t\t\t 
\tAAHyAAIAAA1Nb25rZXlTaHV0dGxl\n\t\t\t\tAAC/ 
PbTFSCsEYB4fQmFyYmVyYSBHZW1tYSBv\n\t\t\t 
\tbiAyIzFDMTA2OC5pY2hhdAAA\n\t\t\t 
\tABwQaMArChkAAP//\n\t\t\t\t// 
8AAAkgBmlDaGF0cwAQAAgA\n\t\t\t\tAL8 
+FzURAAgAAMArepkBABAABGAeAAO4uwAD\n\t\t\t 
\tty8AA2dqAAIAS01vbmtleVNodXR0bGU6VXNlcnM6dGNp\n\t\t\t 
\tdXJvOkRvY3VtZW50czppQ2hhdHM6QmFyYmVyYSBHZW1t\n\t\t\t 
\tYSBvbiAyIzFDMTA2OC5pY2hhdAAADgBWACoAQgBhAHIA\n\t\t\t 
\tYgBlAHIAYQAgAEcAZQBtAG0AYQAgAG8AbgAgADIAMAAw\n\t\t\t 
\tADYALQAwADMALQAwADEAIABhAHQAIAAwADgALgAyADEA\n\t\t\t 
\tLgBpAGMAaABhAHQADwAcAA0ATQBvAG4AawBlAHkAUwBo\n\t\t\t 
\tAHUAdAB0AGwAZQASAEhVc2Vycy90Y2l1cm8vRG9jdW1l\n\t\t\t 
\tbnRzL2lDaGF0cy9CYXJiZXJhIEdlbW1hIG9uIDIwMDYt\n\t\t\t 
\tMDMtMDEgYXQgMDguMjEuaWNoYXQAEwABLwAAFQACAA3/\n\t\t\t\t/wAA\n\t\t\t 
\t\n\t\t\t\n\t\t\n\t\t\n\t\t 
\t_NSLocator\n\t\t\t\n\t\t\t\t_NSAlias\n\t 
\t\t\t\n\t\t\t\tAAHqAAIAAA1Nb25rZXlTaHV0dGxl\n\t 
\t\t\tAAC/PbTFSCsEYB4fUG9nZ2kgSnVhbiBvbiAy\n\t\t\t 
\tMDA1LSNDODlCNi5pY2hhdAAA\n\t\t\t 
\tAAyJtr96Z9kAAP//\n\t\t\t\t// 
8AAAkgBmlDaGF0cwAQAAgA\n\t\t\t\tAL8 
+FzURAAgAAL96ykkBABAABGAeAAO4uwAD\n\t\t\t 
\tty8AA2dqAAIAS01vbmtleVNodXR0bGU6VXNlcnM6dGNp\n\t\t\t 
\tdXJvOkRvY3VtZW50czppQ2hhdHM6UG9nZ2kgSnVhbiBv\n\t\t\t 
\tbiAyMDA1LSNDODlCNi5pY2hhdAAADgBQACcAUABvAGcA\n\t\t\t 
\tZwBpACAASgB1AGEAbgAgAG8AbgAgADIAMAAwADUALQAx\n\t\t\t 
\tADAALQAxADgAIABhAHQAIAAwADgALgA1ADAALgBpAGMA\n\t\t\t 
\taABhAHQADwAcAA0ATQBvAG4AawBlAHkAUwBoAHUAdAB0\n\t\t\t 
\tAGwAZQASAEVVc2Vycy90Y2l1cm8vRG9jdW1lbnRzL2lD\n\t\t\t 
\taGF0cy9Qb2dnaSBKdWFuIG9uIDIwMDUtMTAtMTggYXQg\n\t\t\t 
\tMDguNTAuaWNoYXQAABMAAS8AABUAAgAN//8AAA==\n\t\t\t\t\n\t\t\tdict>\n\t\t\n\t\t\n\t\t\t_NSLocator\n\t\t 
\t\n\t\t\t\t_NSAlias\n\t\t\t\t\n\t\t\t 
\tAAHsAAIAAA1Nb25rZXlTaHV0dGxl\n\t\t\t\tAAC 
+spCGSCsDglYfQnJldHQgTmVlbHkgb24g\n\t\t\t 
\tMjAwNSM4Nzc2Mi5pY2hhdAAA\n\t\t\t 
\tAAh3Yr7TbEoAAP//\n\t\t\t\t// 
8AAAkgBmlDaGF0cwAQAAgA\n\t\t\t 
\tAL6y8vYRAAgAAL7TzroBABAAA4JWAAOCDwAB\n\t\t\t 
\tttoAAGvYAAIAS01vbmtleVNodXR0bGU6VXNlcnM6dGNp\n\t\t\t 
\tdXJvOkRvY3VtZW50czppQ2hhdHM6QnJldHQgTmVlbHkg\n\t\t\t 
\tb24gMjAwNSM4Nzc2Mi5pY2hhdAAADgBSACgAQgByAGUA\n\t\t\t 
\tdAB0ACAATgBlAGUAbAB5ACAAbwBuACAAMgAwADAANQAt\n\t\t\t 
\tADAANgAtADEAMwAgAGEAdAAgADEANwAuADAAMAAuAGkA\n\t\t\t 
\tYwBoAGEAdAAPABwADQBNAG8AbgBrAGUAeQBTAGgAdQB0\n\t\t\t 
\tAHQAbABlABIARlVzZXJzL3RjaXVyby9Eb2N1bWVudHMv\n\t\t\t 
\taUNoYXRzL0JyZXR0IE5lZWx5IG9uIDIwMDUtMDYtMTMg\n\t\t\t 
\tYXQgMTcuMDAuaWNoYXQAEwABLwAAFQACAA3//wAA\n\t\t\t\t\n\t\t\tdict>\n\t\t\n\t\t\n\t\t\t_NSLocator\n\t\t 
\t\n\t\t\t\t_NSAlias\n\t\t\t\t\n\t\

[sqlite] Unrecognized token error

2006-03-19 Thread Tito Ciuro

Hello,

I'm trying to store a string and I get the following error:

unrecognized token: \"!\"\"

This happens with SQLite 3.2.7

Any ideas?

Thanks,

-- Tito


Re: [sqlite] Problems compiling SQLite 3.3.4 on Mac OS X

2006-03-06 Thread Tito Ciuro

Hi James!

Even though I dragged the entire directory, os.c and shell.c were the
only two files not included in the project. Weird...


Thanks so much!

-- Tito

On 06/03/2006, at 17:11, James W. Walker wrote:


Tito Ciuro <[EMAIL PROTECTED]> wrote:

When I create an Xcode project in Mac OS X and put all the SQLite  
files there, it doesn't compile:


ld: warning prebinding disabled because of undefined symbols
ld: Undefined symbols:
_sqlite3OsCheckReservedLock
_sqlite3OsClose
_sqlite3OsFileSize
_sqlite3OsLock
_sqlite3OsOpenDirectory
_sqlite3OsRead
_sqlite3OsSeek
_sqlite3OsSetFullSync
_sqlite3OsSync
_sqlite3OsTruncate
_sqlite3OsUnlock
_sqlite3OsWrite
/usr/bin/libtool: internal link edit command failed


Those are link errors, not compile errors, right?

It used to compile fine in the past. I see that these symbols are  
defined in os.h, which is included in the project. Any ideas?


The symbols are *declared* in os.h, but *defined* in os.c.  Is os.c  
in the project?

--
  James W. Walker, ScriptPerfection Enterprises, Inc.
  <http://www.write-brain.com/>




Re: [sqlite] Problems compiling SQLite 3.3.4 on Mac OS X

2006-03-06 Thread Tito Ciuro

Hello Marcel,

I should have been more specific. Sorry about that: when I ./configure
and then make, everything is fine. It passes all the tests. Cool.
When I create an Xcode project in Mac OS X and put all the SQLite
files there, it doesn't compile:


ld: warning prebinding disabled because of undefined symbols
ld: Undefined symbols:
_sqlite3OsCheckReservedLock
_sqlite3OsClose
_sqlite3OsFileSize
_sqlite3OsLock
_sqlite3OsOpenDirectory
_sqlite3OsRead
_sqlite3OsSeek
_sqlite3OsSetFullSync
_sqlite3OsSync
_sqlite3OsTruncate
_sqlite3OsUnlock
_sqlite3OsWrite
/usr/bin/libtool: internal link edit command failed

It used to compile fine in the past. I see that these symbols are  
defined in os.h, which is included in the project. Any ideas?


Thanks,

-- Tito

On 06/03/2006, at 12:52, Marcel Strittmatter wrote:


Hi

On my PowerMac G4 with Tiger 10.4.5 (Build 8H14), it works.
You can download an archive with finished configure and make steps  
from the following address:


http://www.dinoware.com/mailinglists/sqlite-3.3.4.tar.gz

You can look at the configure.log and make.log files to check for  
differences on your system.


Marcel

On 06.03.2006, at 21:11, [EMAIL PROTECTED] wrote:


Tito Ciuro <[EMAIL PROTECTED]> writes:


Hello,

SQLite 3.3.4
Mac OS X Tiger 10.4.5

After ./configure and make SQLite 3.3.4, I see that some files have
disappeared:

os_test.c
os_test.h
os_unix.h
os_win.h

This results in a few undefined symbols:

sqlite3OsClose
sqlite3FileSize
sqlite3OsLock
...

Any ideas?


First guess would be that 'configure' isn't detecting that OS X is  
Unix-like.
I suspect you might get a hint of what's going on if you carefully  
inspect the
output from 'configure' to see what supported OS it's detecting  
(if any).


Derrell






Re: [sqlite] Problems compiling SQLite 3.3.4 on Mac OS X

2006-03-06 Thread Tito Ciuro

Hello,

On 06/03/2006, at 12:44, [EMAIL PROTECTED] wrote:


I build the OS-X version for the website on Tiger using
the ./configure script in the TEA version of SQLite.  You
might try it as a work-around.

http://www.sqlite.org/sqlite-3.3.4-tea.tar.gz


I get a "document not found" error.

Thanks,

-- Tito

Re: [sqlite] Problems compiling SQLite 3.3.4 on Mac OS X

2006-03-06 Thread Tito Ciuro

Hello Manfred,

On 06/03/2006, at 12:43, Manfred Bergmann wrote:


What source package did you download?


http://www.sqlite.org/sqlite-3.3.4.tar.gz



Were these files missing before configure and make, too?


Yes.

Thanks,

-- Tito

Re: [sqlite] Problems compiling SQLite 3.3.4 on Mac OS X

2006-03-06 Thread Tito Ciuro

Hi Derrell,

On 06/03/2006, at 12:11, [EMAIL PROTECTED] wrote:

First guess would be that 'configure' isn't detecting that OS X is  
Unix-like.
I suspect you might get a hint of what's going on if you carefully  
inspect the
output from 'configure' to see what supported OS it's detecting (if  
any).


Derrell


I don't see anything bizarre. Here's the output:


Last login: Mon Mar  6 13:10:04 on ttyp1
Welcome to Darwin!
da0304a-dhcp158:~ tciuro$ cd /Users/tciuro/Desktop/sqlite-3.3.4/
da0304a-dhcp158:~/Desktop/sqlite-3.3.4 tciuro$ ./configure
checking build system type... i686-apple-darwin8.5.1
checking host system type... i686-apple-darwin8.5.1
checking for gcc... gcc
checking for C compiler default output file name... a.out
checking whether the C compiler works... yes
checking whether we are cross compiling... no
checking for suffix of executables...
checking for suffix of object files... o
checking whether we are using the GNU C compiler... yes
checking whether gcc accepts -g... yes
checking for gcc option to accept ANSI C... none needed
checking for a sed that does not truncate output... /usr/bin/sed
checking for egrep... grep -E
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... no
checking for /usr/bin/ld option to reload object files... -r
checking for BSD-compatible nm... /usr/bin/nm -p
checking whether ln -s works... yes
checking how to recognise dependent libraries... pass_all
checking how to run the C preprocessor... gcc -E
checking for ANSI C header files... yes
checking for sys/types.h... yes
checking for sys/stat.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for memory.h... yes
checking for strings.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for unistd.h... yes
checking dlfcn.h usability... yes
checking dlfcn.h presence... yes
checking for dlfcn.h... yes
checking for g++... g++
checking whether we are using the GNU C++ compiler... yes
checking whether g++ accepts -g... yes
checking how to run the C++ preprocessor... g++ -E
checking for g77... no
checking for f77... no
checking for xlf... no
checking for frt... no
checking for pgf77... no
checking for fort77... no
checking for fl32... no
checking for af77... no
checking for f90... no
checking for xlf90... no
checking for pgf90... no
checking for epcf90... no
checking for f95... no
checking for fort... no
checking for xlf95... no
checking for ifc... no
checking for efc... no
checking for pgf95... no
checking for lf95... no
checking for gfortran... no
checking whether we are using the GNU Fortran 77 compiler... no
checking whether  accepts -g... no
checking the maximum length of command line arguments... 65536
checking command to parse /usr/bin/nm -p output from gcc object... ok
checking for objdir... .libs
checking for ar... ar
checking for ranlib... ranlib
checking for strip... strip
checking if gcc static flag  works... yes
checking if gcc supports -fno-rtti -fno-exceptions... no
checking for gcc option to produce PIC... -fno-common
checking if gcc PIC flag -fno-common works... yes
checking if gcc supports -c -o file.o... yes
checking whether the gcc linker (/usr/bin/ld) supports shared  
libraries... yes

checking dynamic linker characteristics... darwin8.5.1 dyld
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... yes
checking whether to build static libraries... yes
configure: creating libtool
appending configuration tag "CXX" to libtool
checking for ld used by g++... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... no
checking whether the g++ linker (/usr/bin/ld) supports shared  
libraries... yes

checking for g++ option to produce PIC... -fno-common
checking if g++ PIC flag -fno-common works... yes
checking if g++ supports -c -o file.o... yes
checking whether the g++ linker (/usr/bin/ld) supports shared  
libraries... yes

checking dynamic linker characteristics... darwin8.5.1 dyld
checking how to hardcode library paths into programs... immediate
checking whether stripping libraries is possible... yes
appending configuration tag "F77" to libtool
checking for a BSD-compatible install... /usr/bin/install -c
checking for gawk... no
checking for mawk... no
checking for nawk... no
checking for awk... awk
Version set to 3.3
Release set to 3.3.4
Version number set to 3003004
checking for gcc... (cached) gcc
checking whether we are using the GNU C compiler... (cached) yes
checking whether gcc accepts -g... (cached) yes
checking for gcc option to accept ANSI C... (cached) none needed
checking switches for the host compiler... -g -O2
checking target compiler... gcc
checking switches on the target compiler... -g -O2
checking target linker... gcc
checking switches on the target compiler... checking for ranlib...  
(cached) ranlib

-g -O2
checking if host and target 

[sqlite] Problems compiling SQLite 3.3.4 on Mac OS X

2006-03-06 Thread Tito Ciuro

Hello,

SQLite 3.3.4
Mac OS X Tiger 10.4.5

After ./configure and make SQLite 3.3.4, I see that some files have  
disappeared:


os_test.c
os_test.h
os_unix.h
os_win.h

This results in a few undefined symbols:

sqlite3OsClose
sqlite3FileSize
sqlite3OsLock
...

Any ideas?

Thanks,

-- Tito


Re: [sqlite] Where are temporary tables/indices stored?

2005-09-01 Thread Tito Ciuro

Of course!!

Thanks Dennis :-)

-- Tito

On 01/09/2005, at 23:05, Dennis Cote wrote:


Tito Ciuro wrote:




If I execute a statement such as 'CREATE TEMP TABLE...' without   
specifying the database name, where is the SQL statement stored?  
I've  tried using 'temp' as the database name, but it doesn't  
return anything.




Tito,

The temp table information is stored in a second master table, just  
like the sqlite_master table, called sqlite_temp_master. Try this:


create temp table t(a, b);
select * from sqlite_temp_master;

HTH
Dennis Cote







Re: [sqlite] How to retrieve sqlite version

2005-09-01 Thread Tito Ciuro

Hi Jeff,

Just call sqlite3_libversion()

Regards,

-- Tito
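
Both forms work: the C call above and the SELECT that Jeff half-remembers.
A minimal sketch (the in-memory database exists only so the statement has
something to run against):

#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    /* The C-level call. */
    printf("linked against SQLite %s\n", sqlite3_libversion());

    /* The SQL form. */
    sqlite3 *db = NULL;
    sqlite3_stmt *stmt = NULL;
    if (sqlite3_open(":memory:", &db) == SQLITE_OK &&
        sqlite3_prepare(db, "SELECT sqlite_version();", -1, &stmt, NULL) == SQLITE_OK &&
        sqlite3_step(stmt) == SQLITE_ROW)
    {
        printf("sqlite_version() says %s\n",
               (const char *)sqlite3_column_text(stmt, 0));
    }
    sqlite3_finalize(stmt);
    sqlite3_close(db);
    return 0;
}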

On 01/09/2005, at 19:55, Dinsmore, Jeff wrote:


I can't seem to come up with how to get the version from sqlite.

The frustrating thing is that I've done it before... As I recall,  
it's a

select, but for the life of me, I can't remember the right syntax.

Anyone have  that info handy?

Thanks,

Jeff Dinsmore
MIS - Interfaces
Ridgeview Medical Center
[EMAIL PROTECTED]
952.442.2191 x6592



Ridgeview Medical Center Confidentiality Notice: This email  
message, including any attachments, is for the sole use of the  
intended recipient(s) and may contain confidential and privileged  
information. Any unauthorized review, use, disclosure or  
distribution is prohibited. If you are not the intended recipient,  
please contact the sender by reply email and destroy all copies of  
the original message.




[sqlite] Where are temporary tables/indices stored?

2005-09-01 Thread Tito Ciuro

Hello,

Regarding the TEMP keyword, the documentation states:


If the "TEMP" or "TEMPORARY" keyword occurs in between "CREATE" and  
"TABLE" then the table that is created is only visible within that  
same database connection and is automatically deleted when the  
database connection is closed. Any indices created on a temporary  
table are also temporary. Temporary tables and indices are stored  
in a separate file distinct from the main database file.


If a <database-name> is specified, then the table is created in the
named database. It is an error to specify both a <database-name>
and the TEMP keyword, unless the <database-name> is "temp". If no
database name is specified, and the TEMP keyword is not present,
the table is created in the main database.


If I execute a statement such as 'CREATE TEMP TABLE...' without  
specifying the database name, where is the SQL statement stored? I've  
tried using 'temp' as the database name, but it doesn't return anything.




Thanks,



-- Tito

Re: [sqlite] Richard Hipp Awarded Google-O'Reilly Open Source Award at OSCON 2005

2005-08-05 Thread Tito Ciuro
Without SQLite I couldn't have done some of my projects. It's amazing
what it does.


Thanks a lot and congratulations!

-- Tito



---
Tito Ciuro
Webbo, L.L.C.
http://www.webbo.com


Re: [sqlite] ALTER TABLE: Confusing documentation

2005-07-27 Thread Tito Ciuro

On 27/07/2005, at 13:04, Kurt Welgehausen wrote:


The docs are correct; you just have to read carefully.


I have :-)


They say that you can "rename, or add a new column to,
an existing table".


No, it doesn't.

It states that you can "rename or add a new column to an existing  
table."


Regards,

-- Tito
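
For anyone still unsure what the sentence covers, SQLite's ALTER TABLE at
the time of this thread supported exactly two operations: renaming a table
and adding a column to it. A minimal sketch (the table and column names
are invented):

#include <sqlite3.h>

/* The two ALTER TABLE forms the documentation describes. */
int demo_alter(sqlite3 *db)
{
    const char *sql =
        "CREATE TABLE contacts(name TEXT);"
        "ALTER TABLE contacts RENAME TO people;"      /* rename the table itself */
        "ALTER TABLE people ADD COLUMN email TEXT;";  /* add a new column to it  */
    return sqlite3_exec(db, sql, NULL, NULL, NULL);
}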

