[sqlite] Sqlite 2

2009-04-07 Thread Chuvke
Hello All,

I need an SQLite 2 Windows binary but can't find it anywhere on the sqlite
site.
Does anyone know where I can download "sqlite.exe"? Any v2 release will be fine.

Thanks in advance!

Greetings,
chuvke


Re: [sqlite] Strange sqlite_busy deadlock behavior

2009-04-07 Thread Dave Brown
But why is this deadlocking at all? I thought there shouldn't be a
problem doing this, especially since thread B is using a transaction.
Shouldn't either A or B prevent the other one from accessing the
database until they are done?

On 4/7/09, D. Richard Hipp  wrote:
>
> On Apr 7, 2009, at 8:57 PM, Dave Brown wrote:
>
>> I do finalize it - the steps I wrote were just pseudocode, but yes
>> after the
>> thread-A statement executes I call finalize ( or reset if I'm going
>> to use
>> it again ).  The problem is that thread-A *never* gets to the
>> finalize/reset
>> call.  It's thread-A which is stuck in winSleep().
>>
>
> In that case, you are deadlocked.  Either thread-A or thread-B is
> going to need to give up and retry later.
>
> D. Richard Hipp
> d...@hwaci.com
>
>
>


Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Scott Hess
On Tue, Apr 7, 2009 at 7:04 PM, Vinnie  wrote:
> On the other hand there is some legacy data that I want to store
> using UTF-8. For these fields I will use sqlite3_bind_text(). It is
> possible that in a single INSERT statement there are both UTF-16
> and UTF-8 (wchar_t and char) fields present.
>
> At no point am I ever constructing SQL statements using a printf()
> style conversion on field data to create the statement.
>
> Am I vulnerable to a performance penalty because of conversions in this 
> scenario?

There is a lot of debate as to whether you lose more in the expanded
storage needs of UTF-16 than you gain in not having to decode the
UTF-8, even for in-memory applications.  For databases, it is almost
always a win to use UTF-8, because you can decode a huge amount of
UTF-8 for the cost of a single seek (bigger databases mean more
seeks), unless your data is in that subset which encodes more
space-efficiently in UTF-16 than in UTF-8.

-scott


Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Nicolas Williams
On Tue, Apr 07, 2009 at 08:28:24PM -0700, Vinnie wrote:
> 
> > Note that both UTF-8 and UTF-16 are capable of representing
> > the full range of Unicode characters. Conversion between the two is
> > lossless. You seem to be under impression that UTF-8 is somehow
> > deficient, only suitable for "legacy" encoding. This is not the
> > case.
> 
> Yeah thats what they say...but if thats the case then why use UTF-16
> at all? What is the benefit for supporting UNICODE? Why is there

Because, for example, some systems supported UCS-2 to begin with; UTF-16
is the upgrade path from UCS-2, and it is about as painless (or painful)
as the ASCII->UTF-8 upgrade path.

Nico


Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Vinnie

> Note that both UTF-8 and UTF-16 are capable of representing
> the full range of Unicode characters. Conversion between the two is
> lossless. You seem to be under impression that UTF-8 is somehow
> deficient, only suitable for "legacy" encoding. This is not the
> case.

Yeah, that's what they say... but if that's the case, then why use UTF-16 at all? 
What is the benefit for supporting Unicode? Why is there UTF-16 support in 
SQLite? To be honest, thinking about character encodings gives me a large 
headache even though I've been programming for decades. I figured that 
supporting wide characters would somehow benefit international 
users... I hope I was not mistaken, but it was not a small amount of extra work. 
I have (hopefully) written everything so that it can work with narrow characters 
by flipping a switch.




Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread Rosemary Alles
Thanks Puneet. Those suggestions really help.

-rosemary.


On Apr 7, 2009, at 5:52 PM, P Kishor wrote:

> On Tue, Apr 7, 2009 at 5:18 PM, Rosemary Alles  
>  wrote:
>> Puneet,
>>
>> As you suggested I have supplied a brief background re: the problem:
>>
>> Background:
>> I'm very new to sql (x2 weeks). I have a database with two tables one
>> with -say (max)- 12k rows of data, and the other with more. The first
>> table (lets calls it A) has the following columns:
>>
>> source_id, x_pos, y_pos, magnitude, time_stamp, bin_num
>>
>> (source_id) is unique and I have indexes on all columns - individual
>> indexes.
>>
>> The second table (let's call it B) has the following columns:
>>
>> source_id, time_stamp, bin_num
>>
>> No column is unique and I have indexes on all columns - individual
>> indexes.
>>
>> I create/update the database without a problem using, sql_prepare,
>> sql_bind and sql_step. I use begin/commit to bundle transactions for
>> the updates. The updating has decent timing.
>>
>> I query the database with a query to extract x_pos, y_pos from  
>> table A
>> for instances (rows) that match a particular bin_num(s) in B provided
>> the source_ids are the same in both tables. The query take ~30.00
>> seconds when run about 7k times  in a loop.
>
> So, each SELECT is taking about 4 ms (30,000/7000). You might try
> loading the entire db in memory and working with that. Your db is
> really tiny; at least one of the tables is only 12K rows. You don't
> say how big table b is, but if you can load the entire db in memory,
> you would get much faster times.
>
> Try other SELECTs as well...
>
> SELECT x_pos, y_pos
> FROM a
> WHERE source_id = (SELECT source_id FROM b WHERE bin_num = ?)
>
> or
>
> SELECT x_pos, y_pos
> FROM a
> WHERE source_id IN ( SELECT source_id FROM b WHERE bin_num IN  
> (?, ?, ?, ?) )
>
> Perhaps others on the list can suggest something.
>
>
>> Each select statement is
>> distinct. The timing isn't acceptable. Obviously the query is
>> inefficient and/or the database isn't organized optimally etc. etc.
>> The program is part of data reduction pipeline system for an  
>> astronomy
>> project. I use the C-interface to sqlite3.
>>
>> Here's the query:
>>
>> select * from A a, B b where b.bin=? and a.source_id=b.source_id
>> or
>> select * from A a, B b where b.bin in (?, ?, ?, ?, ?)  and
>> a.source_id=b.source_id
>>
>> the "?" is filled in by a bind via values calculated at run time.
>> Similar to the updates
>> I use sql_prepare, sql_bind and sql_step to run the query. I then  
>> loop
>> through the resulting
>> rows I retrieve from the database with sqlite3_column_* . Doing this
>> say 7k times for each
>> run of the program (and then repeatedly in the pipeline) is hugely
>> costly.  How can I optimize
>> my query/database for better performance?
>>
>> Sample data (table A)
>>
>>  source_id        x_pos      y_pos     mag    band  fr_time_stamp  pix_bin_num
>>  =============================================================================
>>  50275a003-02-3    382.836   235.303   6.162  3     1260978065     23
>>  50275a003-03-3    166.883   567.99    6.032  3     1260978065     51
>>  50275a003-04-3   1009.492   753.4     6.243  3     1260978065     80
>>  50275a003-05-3     10.083   153.815   7.672  3     1260978065     10
>>  50275a003-06-3    332.153   411.88    7.65   3     1260978065     43
>>  50275a003-07-3    888.086   135.478   7.589  3     1260978065     18
>>  50275a003-09-3    208.277   292.152   8.127  3     1260978065     22
>>  50275a003-13-3    788.648   829.213   8.424  3     1260978065     87
>>  50275a003-14-3    277.768    19.981   8.335  3     1260978065      2
>>  50275a003-17-3    665.116   624.767   8.807  3     1260978065     66
>>  50275a003-18-3    170.859   855.147   8.734  3     1260978065     81
>>  50275a003-19-3    694.634   210.285   8.787  3     1260978065     26
>>  50275a003-20-3    293.737

Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Igor Tandetnik
"Vinnie"  wrote in
message news:320060.55321...@web58204.mail.re3.yahoo.com
> However, I have table fields which will be UTF-16. For example,
> filenames that have to support international character sets. Or
> metadata fields that use different character sets (UNICODE). For
> these I am using sqlite3_bind_text16() and passing an appropriate
> wchar_t buffer.
>
> On the other hand there is some legacy data that I want to store
> using UTF-8. For these fields I will use sqlite3_bind_text(). It is
> possible that in a single INSERT statement there are both UTF-16 and
> UTF-8 (wchar_t and char) fields present.

Note that both UTF-8 and UTF-16 are capable of representing the full 
range of Unicode characters. Conversion between the two is lossless. You 
seem to be under the impression that UTF-8 is somehow deficient, only 
suitable for "legacy" encodings. This is not the case.

Igor Tandetnik 





Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Bill KING
Vinnie wrote:
> PRAGMA statements, I see what you mean now. This is exactly what I needed, 
> thanks a lot.
>
> To clarify what I am doing, my SQL statements are in UTF-8 and they are all 
> prepared, with parameter bindings. So table names, column names, etc.. are 
> all UTF-8.
>
> However, I have table fields which will be UTF-16. For example, filenames 
> that have to support international character sets. Or metadata fields that 
> use different character sets (UNICODE). For these I am using 
> sqlite3_bind_text16() and passing an appropriate wchar_t buffer.
>
> On the other hand there is some legacy data that I want to store using UTF-8. 
> For these fields I will use sqlite3_bind_text(). It is possible that in a 
> single INSERT statement there are both UTF-16 and UTF-8 (wchar_t and char) 
> fields present.
>
> At no point am I ever constructing SQL statements using a printf() style 
> conversion on field data to create the statement.
>
> Am I vulnerable to a performance penalty because of conversions in this 
> scenario?
>
> Thanks
>   
From what I can see, I'd say no: binds are converted once (IIRC) and
referenced from there on. Quoted string literals are generally where
the performance hit comes in, as they're converted at every step of the VM
(so for a SELECT statement, on every sqlite3_step over the result set).
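
[Editorial note: a minimal sketch of that pattern with the C API, not code
from the original posts; the "files" table and its columns are made up for
illustration. Each parameterised value is bound once, in whichever encoding
the caller already has, and SQLite converts it only as needed.]

#include <sqlite3.h>
#include <stddef.h>

/* Insert one row with a UTF-16 value and a UTF-8 value bound to the same
 * prepared statement.  The "files" table is hypothetical. */
int insert_mixed(sqlite3 *db, const void *utf16_name, const char *utf8_note)
{
    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db,
        "INSERT INTO files(name, note) VALUES(?1, ?2);", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* Bound once; SQLite converts to the database encoding as needed. */
    sqlite3_bind_text16(stmt, 1, utf16_name, -1, SQLITE_TRANSIENT);
    sqlite3_bind_text(stmt, 2, utf8_note, -1, SQLITE_TRANSIENT);

    rc = sqlite3_step(stmt);          /* expect SQLITE_DONE for an INSERT */
    sqlite3_finalize(stmt);
    return rc == SQLITE_DONE ? SQLITE_OK : rc;
}

Because the values arrive through sqlite3_bind_*, no printf-style quoting is
involved and each conversion happens at most once per bind.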
>
>   
>> Igor Tandetnik wrote:
>> 
>>> You can mix and match encodings in your application.
>>>   
>> The database 
>> 
>>> encoding determines how strings are actually stored in
>>>   
>> the file (and 
>> 
>>> it's database-wide, not per table). SQLite API
>>>   
>> converts back and forth 
>> 
>>> as necessary.
>>>   
>>>   
>> Very inneficiently, but yes, it does. I suggest to the OP
>> to use
>> parameterised queries if you need to use string values,
>> otherwise,
>> you'll see significant overhead from conversions back
>> and forth between
>> utf8 and utf16 inside the sqlite code.
>> 
>>> Igor Tandetnik 
>>>   
>


-- 
Bill King, Software Engineer
Qt Software, Nokia Pty Ltd
Brisbane Office



Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Vinnie

PRAGMA statements, I see what you mean now. This is exactly what I needed, 
thanks a lot.

To clarify what I am doing, my SQL statements are in UTF-8 and they are all 
prepared, with parameter bindings. So table names, column names, etc.. are all 
UTF-8.

However, I have table fields which will be UTF-16. For example, filenames that 
have to support international character sets. Or metadata fields that use 
different character sets (UNICODE). For these I am using sqlite3_bind_text16() 
and passing an appropriate wchar_t buffer.

On the other hand there is some legacy data that I want to store using UTF-8. 
For these fields I will use sqlite3_bind_text(). It is possible that in a 
single INSERT statement there are both UTF-16 and UTF-8 (wchar_t and char) 
fields present.

At no point am I ever constructing SQL statements using a printf() style 
conversion on field data to create the statement.

Am I vulnerable to a performance penalty because of conversions in this 
scenario?

Thanks


> Igor Tandetnik wrote:
> > You can mix and match encodings in your application.
> The database 
> > encoding determines how strings are actually stored in
> the file (and 
> > it's database-wide, not per table). SQLite API
> converts back and forth 
> > as necessary.
> >   
> Very inneficiently, but yes, it does. I suggest to the OP
> to use
> parameterised queries if you need to use string values,
> otherwise,
> you'll see significant overhead from conversions back
> and forth between
> utf8 and utf16 inside the sqlite code.
> > Igor Tandetnik 



Re: [sqlite] Strange sqlite_busy deadlock behavior

2009-04-07 Thread D. Richard Hipp

On Apr 7, 2009, at 8:57 PM, Dave Brown wrote:

> I do finalize it - the steps I wrote were just pseudocode, but yes  
> after the
> thread-A statement executes I call finalize ( or reset if I'm going  
> to use
> it again ).  The problem is that thread-A *never* gets to the  
> finalize/reset
> call.  It's thread-A which is stuck in winSleep().
>

In that case, you are deadlocked.  Either thread-A or thread-B is  
going to need to give up and retry later.

D. Richard Hipp
d...@hwaci.com
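
[Editorial note: a rough sketch of that give-up-and-retry approach, not code
from this thread. It assumes thread A uses a short or zero busy timeout so
that sqlite3_step() returns SQLITE_BUSY instead of sleeping internally;
resetting the statement releases the read lock this connection holds through
it, which gives the other thread's COMMIT a chance to complete. Note that a
reset restarts the query from its first row.]

#include <sqlite3.h>

/* Retry a read statement that keeps hitting SQLITE_BUSY. */
int step_with_retry(sqlite3_stmt *stmt, int max_attempts)
{
    int attempt;
    for (attempt = 0; attempt < max_attempts; attempt++) {
        int rc = sqlite3_step(stmt);
        if (rc != SQLITE_BUSY)
            return rc;            /* SQLITE_ROW, SQLITE_DONE, or a real error */
        sqlite3_reset(stmt);      /* drop the read lock; the query restarts */
        sqlite3_sleep(50);        /* back off for 50 ms before retrying */
    }
    return SQLITE_BUSY;           /* caller decides what to do next */
}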





Re: [sqlite] ANN - DBD::SQLite version 1.20

2009-04-07 Thread Jim Dodgen
thanks a bunch,

Jim

On Tue, Apr 7, 2009 at 1:24 PM, Darren Duncan  wrote:
> All,
>
> I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a DBI 
> Driver)
> version 1.20 has been released on CPAN.
>
>   http://search.cpan.org/dist/DBD-SQLite/
>
> This follows on the heels of 10 developer releases released starting 2009 
> March
> 27th (Adam "Alias" Kennedy has been doing release management).  The previous
> production release of DBD::SQLite was version 1.14 about 18 months ago.
>
> Improvements in 1.20 over 1.14 include:
>
>   * Updated the bundled SQLite library from v3.4.2 to v3.6.12, which carries
> many new features as well as bug fixes.
>   * Added support for user-defined collations.
>   * Added ->column_info().
>   * Resolved all but a handful of the 60+ RT items.
>   * Many bug fixes and minor enhancements.
>   * Added more tests, large refactoring of tests.
>   * Minimum dependencies are now Perl 5.006 and DBI 1.57.
>
> See http://cpansearch.perl.org/src/ADAMK/DBD-SQLite-1.20/Changes as well as
> http://sqlite.org/changes.html for details.
>
> Now it is especially important, since automatic updates from CPAN such as with
> the CPAN/CPANPLUS utilities, would now be pulling this new 1.20 by default, 
> ...
>
> Please bash the hell out of the latest DBD::SQLite and report any outstanding
> bugs on RT.  Test your dependent or compatible projects with it, which 
> includes
> any DBMS-wrapping or object persistence modules, and applications.
>
> If you want in to DBD::SQLite development, then join the following email/IRC
> forums which MST created (the mailing list, I am administrating):
>
>   http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite
>
>   #dbd-sqlite on irc.perl.org
>
> And the canonical version control is at:
>
>   http://svn.ali.as/cpan/trunk/DBD-SQLite/
>
> Patches welcome.  Ideas welcome.  Testing welcome.  Whining to /dev/null.
>
> Regarding near future plans:  Now, the current 1.20 uses the pristine
> several-dozen SQLite library source files, same as 1.14 did.  While reality 
> may
> be different, I believe that the next major planned change to DBD::SQLite is 
> to
> substitute in the "amalgamation" version, which combines all the SQLite source
> files into a single file; the amalgamation is the recommended form for users
> according to the SQLite core developers.  See http://sqlite.org/download.html
> for a description of that.  Meanwhile there should be another stable release
> with any bug fixes for 1.20 to come out first.  Any other major changes or
> features for DBD::SQLite are expected to come out separately from and after 
> the
> stabilized switch to the amalgamation sources.
>
> Please do not reply to me directly with your responses.  Instead send them to
> the forums or file with RT as is appropriate.
>
> Thank you. -- Darren Duncan
>



-- 
Jim Dodgen
j...@dodgen.us


Re: [sqlite] Strange sqlite_busy deadlock behavior

2009-04-07 Thread Dave Brown
I do finalize it - the steps I wrote were just pseudocode, but yes after the
thread-A statement executes I call finalize ( or reset if I'm going to use
it again ).  The problem is that thread-A *never* gets to the finalize/reset
call.  It's thread-A which is stuck in winSleep().

On Tue, Apr 7, 2009 at 3:20 PM, D. Richard Hipp  wrote:

>
> On Apr 7, 2009, at 6:11 PM, Dave Brown wrote:
>
> > I am seeing the equivalent of a deadlock, with SQLITE_BUSY being
> > returned
> > forever from my code which has 2 threads using SQLite. I can repro
> > this at
> > will. Each thread is using it's own connection to the sqlite
> > database, so
> > they are not sharing the same connection.
> >
> > Here is what is happening in chronological order:
> >
> > Thread A: Prepare statement-A
> > Thread B: Prepare statement-B
> > Thread B: Bind some variables to statement-B
> > Thread B: Prepare and execute "BEGIN IMMEDIATE" statement.
> > Thread B: Execute statement-B
> > Thread A: Bind variables to statement-A
> > Thread A: Execute statement-A ( statement-A is a one-line simple SQL
> > query
> > like "SELECT name FROM table WHERE id= ?;" )
> > Thread B: Prepare and execute "COMMIT" statement.
> >
> > At this point, thread A is deep into sqlite code calling winSleep(),
> > and
> > thread B is getting SQLITE_BUSY returned.  I can even set the busy
> > timeout
> > to 100 seconds and the same thing happens -- hangs until 100 seconds
> > elapse,
> > then busy returned.
> >
> > Anyone know why???
>
> You didn't finalize the statement in thread-A.  It is still holding a
> read-lock which is preventing the COMMIT from running in thread-B.
>
> D. Richard Hipp
> d...@hwaci.com
>
>
>


Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread P Kishor
On Tue, Apr 7, 2009 at 5:18 PM, Rosemary Alles  wrote:
> Puneet,
>
> As you suggested I have supplied a brief background re: the problem:
>
> Background:
> I'm very new to sql (x2 weeks). I have a database with two tables one
> with -say (max)- 12k rows of data, and the other with more. The first
> table (lets calls it A) has the following columns:
>
> source_id, x_pos, y_pos, magnitude, time_stamp, bin_num
>
> (source_id) is unique and I have indexes on all columns - individual
> indexes.
>
> The second table (let's call it B) has the following columns:
>
> source_id, time_stamp, bin_num
>
> No column is unique and I have indexes on all columns - individual
> indexes.
>
> I create/update the database without a problem using, sql_prepare,
> sql_bind and sql_step. I use begin/commit to bundle transactions for
> the updates. The updating has decent timing.
>
> I query the database with a query to extract x_pos, y_pos from table A
> for instances (rows) that match a particular bin_num(s) in B provided
> the source_ids are the same in both tables. The query take ~30.00
> seconds when run about 7k times  in a loop.

So, each SELECT is taking about 4 ms (30,000/7000). You might try
loading the entire db in memory and working with that. Your db is
really tiny; at least one of the tables is only 12K rows. You don't
say how big table b is, but if you can load the entire db in memory,
you would get much faster times.

Try other SELECTs as well...

SELECT x_pos, y_pos
FROM a
WHERE source_id = (SELECT source_id FROM b WHERE bin_num = ?)

or

SELECT x_pos, y_pos
FROM a
WHERE source_id IN ( SELECT source_id FROM b WHERE bin_num IN (?, ?, ?, ?) )

Perhaps others on the list can suggest something.
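
[Editorial note: if you want to try the in-memory route, one illustrative way
is sketched below. It is an assumption on my part, and it requires a SQLite
build recent enough to have the online backup API (3.6.11 or later): copy the
file database into a ":memory:" connection once at startup and run the
repeated SELECTs against that copy.]

#include <sqlite3.h>

/* Copy an on-disk database into a private in-memory database. */
int load_into_memory(const char *filename, sqlite3 **out_mem_db)
{
    sqlite3 *file_db = NULL, *mem_db = NULL;
    int rc = sqlite3_open_v2(filename, &file_db, SQLITE_OPEN_READONLY, NULL);
    if (rc == SQLITE_OK)
        rc = sqlite3_open(":memory:", &mem_db);
    if (rc == SQLITE_OK) {
        sqlite3_backup *bk = sqlite3_backup_init(mem_db, "main", file_db, "main");
        if (bk) {
            sqlite3_backup_step(bk, -1);   /* copy every page in one pass */
            sqlite3_backup_finish(bk);
        }
        rc = sqlite3_errcode(mem_db);
    }
    sqlite3_close(file_db);                /* the in-memory copy is independent */
    if (rc == SQLITE_OK) {
        *out_mem_db = mem_db;              /* query this handle from now on */
    } else {
        sqlite3_close(mem_db);
        *out_mem_db = NULL;
    }
    return rc;
}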


> Each select statement is
> distinct. The timing isn't acceptable. Obviously the query is
> inefficient and/or the database isn't organized optimally etc. etc.
> The program is part of data reduction pipeline system for an astronomy
> project. I use the C-interface to sqlite3.
>
> Here's the query:
>
> select * from A a, B b where b.bin=? and a.source_id=b.source_id
> or
> select * from A a, B b where b.bin in (?, ?, ?, ?, ?)  and
> a.source_id=b.source_id
>
> the "?" is filled in by a bind via values calculated at run time.
> Similar to the updates
> I use sql_prepare, sql_bind and sql_step to run the query. I then loop
> through the resulting
> rows I retrieve from the database with sqlite3_column_* . Doing this
> say 7k times for each
> run of the program (and then repeatedly in the pipeline) is hugely
> costly.  How can I optimize
> my query/database for better performance?
>
> Sample data (table A)
>
>  source_id        x_pos      y_pos     mag    band  fr_time_stamp  pix_bin_num
>  =============================================================================
>  50275a003-02-3    382.836   235.303   6.162  3     1260978065     23
>  50275a003-03-3    166.883   567.99    6.032  3     1260978065     51
>  50275a003-04-3   1009.492   753.4     6.243  3     1260978065     80
>  50275a003-05-3     10.083   153.815   7.672  3     1260978065     10
>  50275a003-06-3    332.153   411.88    7.65   3     1260978065     43
>  50275a003-07-3    888.086   135.478   7.589  3     1260978065     18
>  50275a003-09-3    208.277   292.152   8.127  3     1260978065     22
>  50275a003-13-3    788.648   829.213   8.424  3     1260978065     87
>  50275a003-14-3    277.768    19.981   8.335  3     1260978065      2
>  50275a003-17-3    665.116   624.767   8.807  3     1260978065     66
>  50275a003-18-3    170.859   855.147   8.734  3     1260978065     81
>  50275a003-19-3    694.634   210.285   8.787  3     1260978065     26
>  50275a003-20-3    293.737    11.928   9.144  3     1260978065      2
>  50275a003-23-3    311.53    729.644   9.237  3     1260978065     73
>  50275a003-24-3    284.052   947.095   9.632  3     1260978065     92
>
>
>
> Sam

Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Bill KING
Igor Tandetnik wrote:
> You can mix and match encodings in your application. The database 
> encoding determines how strings are actually stored in the file (and 
> it's database-wide, not per table). SQLite API converts back and forth 
> as necessary.
>   
Very inefficiently, but yes, it does. I suggest the OP use
parameterised queries for string values; otherwise, you'll see
significant overhead from conversions back and forth between
UTF-8 and UTF-16 inside the SQLite code.
> Igor Tandetnik 
>
>
>


-- 
Bill King, Software Engineer
Qt Software, Nokia Pty Ltd
Brisbane Office



[sqlite] On-line documentation license

2009-04-07 Thread Krzysztof 'ChanibaL' Bociurko
Hello, 

I'm in the process of writing my bachelor's thesis and I'd like to use the
awesome SQL syntax diagrams for DML and perhaps DDL statements from
SQLite's on-line documentation. I can't find anything on the license of
the documentation (apart from "The details here are unclear."), so I
thought it'd be best to ask - can I do that? Of course I will not claim
any copyright for the diagrams, and the appropriate information will be
added to the bibliography.

The project on which I base my thesis is a highly reusable, object-
oriented, open source (code LGPL, examples BSD license) web application
framework written in PHP. For data storage it uses SQLite
or MySQL (and is easily adaptable to other SQL dialects). The
thesis is being written in Polish, but the source code is in
English. I plan to release the framework (called SpInq) to the public at
the end of this year, this time with English docs.

PS. What was used to produce the diagrams, and would it be possible to
get vectorized versions of them? (If that would require any
extra work, please ignore this; the bitmaps from the website will do.)

Thanks in advance,

Krzysztof `ChanibaL` Bociurko 
xmpp://jab...@chanibal.net, http://chanibal.net


Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread John Stanton
A TRANSACTION only has meaning when the database is being altered.
There is no journalling and no commit on a read.  You get the results of
the SELECT as soon as the database read occurs.  You cannot be faster
than that.

Where you can get improved SELECT performance is by using caching.
Explore shared caches.
JS.
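
[Editorial note: to make the caching suggestion concrete, a small sketch,
illustration only; the filename and the cache size are arbitrary. It turns on
SQLite's shared cache before opening connections and enlarges the
per-connection page cache.]

#include <sqlite3.h>

/* Enable the shared cache (affects connections opened afterwards) and give
 * this connection a larger page cache.  10000 pages is an arbitrary example. */
int open_with_cache(const char *filename, sqlite3 **out_db)
{
    int rc;
    sqlite3_enable_shared_cache(1);
    rc = sqlite3_open(filename, out_db);
    if (rc == SQLITE_OK)
        rc = sqlite3_exec(*out_db, "PRAGMA cache_size = 10000;",
                          NULL, NULL, NULL);
    return rc;
}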

Rosemary Alles wrote:
> Hullo Puneet,
>
> Many thanks for your response.
>
> My understanding of a sqlite3 "transaction" is probably poor. From  
> your response
> (if you are correct) I see that only UPDATES and INSERTS can be  
> speeded up
> via bundling many numbers of them in a Being/Commit block? Leading me  
> to ask:
> Is there no difference in behavior between a SINGLE select and several
> of them within the context of transaction?
>
> And yes, each of the many SELECTS have a different WHERE clause.
>
> -rosemary.
>
> On Apr 7, 2009, at 12:38 PM, P Kishor wrote:
>
>   
>> On Tue, Apr 7, 2009 at 2:36 PM, Rosemary Alles  
>>  wrote:
>> 
>>> Hullo Igor,
>>>
>>> Many thanks for your response: I believe I didn't phrase my question
>>> correctly:
>>>
>>> 1) If I were to bundle several thousand SELECT statements in a single
>>> transaction - why would it not run faster?
>>>   
>> as far as I understand, transactions matter only in the context of
>> UPDATEs and INSERTs for the purpose of speed (keep in mind, I am not
>> addressing data integrity -- the ACID part here, but only speed). A
>> transaction speeds this multiple UPDATEs and INSERTs by decreasing the
>> number of times your program interacts with slowest part of your
>> computer, the hard disk.
>>
>> Multiple SELECTs in a transaction might help with the integrity, but
>> ensuring that you don't end up getting data changed in mid-stream, but
>> won't speed up the query.
>>
>> Are all your thousands of SELECTs based on different WHERE criterion?
>> If not, they would really be just one SELECT.
>>
>> 
>>> 2) This is precisely the problem though - each of those statements
>>> will yield rows of results to be parsed with
>>> sqlite3_column - in the context of the user's (my) program. If many
>>> SELECT statements are issued within the context
>>> of a single transaction (repeatedly), how does one deal with the
>>> results without a callback (if using sql_step)? Yes,
>>> sql_exec is touted to be a wrapper around sql_prepare, bind, step.
>>> However, is does (also - additionally) offer the
>>> option of a user supplied calleback routine which sql_prepare etc. do
>>> not.
>>>
>>> Essentially, my question is about context. if many many SELECTS are
>>> bundled in a single transaction using prepare,
>>> bind and step. In what context does one parse the results? Do we not
>>> have synchronizing issue here?
>>>
>>> Thanks again,
>>> rosemary
>>>
>>> On Apr 6, 2009, at 8:03 PM, Igor Tandetnik wrote:
>>>
>>>   
 "Rosemary Alles"  wrote
 in message news:20A6B796-613B-4F5D- 
 bfca-359d6b9fa...@ipac.caltech.edu
 
> I want to speed up my app. Can I run SELECT statements within the
> context of a transaction.
>   
 Yes, but it's unlikely to make it run any faster.

 
> If so, how does one handle the query
> results?
>   
 The same way as when running it without an explicit transaction.

 
> I would assume this cannot be done with sql_prepare,
> sql_bind, sql_step?
>   
 Of course it can. See sqlite3_column_*

 
> Would I *have* to use sql_exec
>   
 No. And if you look at the implementation of sqlite3_exec, it uses
 sqlite3_prepare and sqlite3_step internally anyway. It's maintained
 mostly for backward compatibility.

 
> What am I giving up
> by using sql_exec vs sql_prepare, sql_bind and sql_step?
>   
 Off the top of my head: 1) strong typing (you get all data as  
 strings,
 so that, say, an integer is converted to string and then you'll have
 to
 convert it back); 2) streaming (with sqlite3_exec, the whole  
 resultset
 must be present in memory at the same time; with sqlite3_step, only
 one
 row's worth of data needs to be present in memory; makes a huge
 difference for very large resultsets).

 Igor Tandetnik



 
>> -- 
>> Puneet Kishor http://www.punkish.org/
>> Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
>> Carbon Model http://carbonmodel.org/
>> Open Source Geospatial Foundation http://www.osgeo.org/

Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Igor Tandetnik
Vinnie  wrote:
>> From: "Igor Tandetnik" 
>> You could convert your file name from UTF-16 to UTF-8, then
>> call sqlite3_open_v2.
>
> Converting the file name is no problem. But I thought that depending
> on how you opened the database (open16 versus open_v2), SQL treats
> your strings differently. I don't care about the encoding used to
> pass the filename, I care about the strings in my table rows.

You can specify the encoding explicitly, with a "PRAGMA encoding" statement, 
right after the database is created (or rather, right after you call 
sqlite3_open_* on a non-existent file and before you issue any other 
statement; the database is not actually created until the first 
statement needs to write to the file).
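
[Editorial note: a minimal sketch of that sequence; the filename and the
table are placeholders. Open a connection to a file that does not exist yet,
issue PRAGMA encoding before anything writes to the file, then create the
schema.]

#include <sqlite3.h>

/* Create a new database whose text is stored as UTF-16le. */
int create_utf16_db(const char *filename, sqlite3 **out_db)
{
    int rc = sqlite3_open_v2(filename, out_db,
                             SQLITE_OPEN_READWRITE | SQLITE_OPEN_CREATE, NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* Must run before the first write; once the file exists with another
     * encoding this pragma has no effect. */
    rc = sqlite3_exec(*out_db, "PRAGMA encoding = 'UTF-16le';",
                      NULL, NULL, NULL);
    if (rc == SQLITE_OK)
        rc = sqlite3_exec(*out_db, "CREATE TABLE IF NOT EXISTS t(x TEXT);",
                          NULL, NULL, NULL);
    return rc;
}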

> Or does the encoding for the file name used to open the database not
> matter to subsequent SQLite SQL statements? Can I mix and match UTF-8
> and UTF-16 in a table or across multiple tables?

You can mix and match encodings in your application. The database 
encoding determines how strings are actually stored in the file (and 
it's database-wide, not per table). SQLite API converts back and forth 
as necessary.

Igor Tandetnik 





Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Vinnie


> From: "Igor Tandetnik" 
> You could convert your file name from UTF-16 to UTF-8, then
> call sqlite3_open_v2.

Converting the file name is no problem. But I thought that, depending on how you 
opened the database (open16 versus open_v2), SQLite treats your strings 
differently. I don't care about the encoding used to pass the filename; I care 
about the strings in my table rows.

Or does the encoding for the file name used to open the database not matter to 
subsequent SQLite SQL statements? Can I mix and match UTF-8 and UTF-16 in a 
table or across multiple tables?

> See PRAGMA user_version
> (http://sqlite.org/pragma.html#version) - it's 
> designed specifically to do this sort of thing.

Yes I see, thank you very much. This is exactly what I am already trying to 
do with my VERSION table.



Re: [sqlite] Strange sqlite_busy deadlock behavior

2009-04-07 Thread D. Richard Hipp

On Apr 7, 2009, at 6:11 PM, Dave Brown wrote:

> I am seeing the equivalent of a deadlock, with SQLITE_BUSY being  
> returned
> forever from my code which has 2 threads using SQLite. I can repro  
> this at
> will. Each thread is using it's own connection to the sqlite  
> database, so
> they are not sharing the same connection.
>
> Here is what is happening in chronological order:
>
> Thread A: Prepare statement-A
> Thread B: Prepare statement-B
> Thread B: Bind some variables to statement-B
> Thread B: Prepare and execute "BEGIN IMMEDIATE" statement.
> Thread B: Execute statement-B
> Thread A: Bind variables to statement-A
> Thread A: Execute statement-A ( statement-A is a one-line simple SQL  
> query
> like "SELECT name FROM table WHERE id= ?;" )
> Thread B: Prepare and execute "COMMIT" statement.
>
> At this point, thread A is deep into sqlite code calling winSleep(),  
> and
> thread B is getting SQLITE_BUSY returned.  I can even set the busy  
> timeout
> to 100 seconds and the same thing happens -- hangs until 100 seconds  
> elapse,
> then busy returned.
>
> Anyone know why???

You didn't finalize the statement in thread-A.  It is still holding a  
read-lock which is preventing the COMMIT from running in thread-B.

D. Richard Hipp
d...@hwaci.com





Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread Rosemary Alles
Puneet,

As you suggested I have supplied a brief background re: the problem:

Background:
I'm very new to SQL (about two weeks). I have a database with two tables, one
with, say, at most 12k rows of data, and the other with more. The first
table (let's call it A) has the following columns:

source_id, x_pos, y_pos, magnitude, time_stamp, bin_num

(source_id) is unique and I have indexes on all columns - individual  
indexes.

The second table (let's call it B) has the following columns:

source_id, time_stamp, bin_num

No column is unique and I have indexes on all columns - individual  
indexes.

I create/update the database without a problem using, sql_prepare,  
sql_bind and sql_step. I use begin/commit to bundle transactions for  
the updates. The updating has decent timing.

I query the database to extract x_pos, y_pos from table A
for instances (rows) that match particular bin_num(s) in B, provided
the source_ids are the same in both tables. The query takes ~30
seconds when run about 7k times in a loop. Each select statement is
distinct. That timing isn't acceptable. Obviously the query is
inefficient and/or the database isn't organized optimally, etc.
The program is part of a data reduction pipeline system for an astronomy
project. I use the C interface to sqlite3.

Here's the query:

select * from A a, B b where b.bin=? and a.source_id=b.source_id
or
select * from A a, B b where b.bin in (?, ?, ?, ?, ?)  and
a.source_id=b.source_id

the "?" is filled in by a bind via values calculated at run time.  
Similar to the updates
I use sql_prepare, sql_bind and sql_step to run the query. I then loop  
through the resulting
rows I retrieve from the database with sqlite3_column_* . Doing this  
say 7k times for each
run of the program (and then repeatedly in the pipeline) is hugely  
costly.  How can I optimize
my query/database for better performance?

Sample data (table A)

 source_id        x_pos      y_pos     mag    band  fr_time_stamp  pix_bin_num
 =============================================================================
 50275a003-02-3    382.836   235.303   6.162  3     1260978065     23
 50275a003-03-3    166.883   567.99    6.032  3     1260978065     51
 50275a003-04-3   1009.492   753.4     6.243  3     1260978065     80
 50275a003-05-3     10.083   153.815   7.672  3     1260978065     10
 50275a003-06-3    332.153   411.88    7.65   3     1260978065     43
 50275a003-07-3    888.086   135.478   7.589  3     1260978065     18
 50275a003-09-3    208.277   292.152   8.127  3     1260978065     22
 50275a003-13-3    788.648   829.213   8.424  3     1260978065     87
 50275a003-14-3    277.768    19.981   8.335  3     1260978065      2
 50275a003-17-3    665.116   624.767   8.807  3     1260978065     66
 50275a003-18-3    170.859   855.147   8.734  3     1260978065     81
 50275a003-19-3    694.634   210.285   8.787  3     1260978065     26
 50275a003-20-3    293.737    11.928   9.144  3     1260978065      2
 50275a003-23-3    311.53    729.644   9.237  3     1260978065     73
 50275a003-24-3    284.052   947.095   9.632  3     1260978065     92



Sample data (table B)
 pix_bin_num   source_id   fr_time_stamp

   21  50275a003-02-3  1260978065
   11  50275a003-02-3  1260978065
   31  50275a003-02-3  1260978065
   12  50275a003-02-3  1260978065
   22  50275a003-02-3  1260978

[sqlite] Strange sqlite_busy deadlock behavior

2009-04-07 Thread Dave Brown
I am seeing the equivalent of a deadlock, with SQLITE_BUSY being returned
forever from my code which has 2 threads using SQLite. I can repro this at
will. Each thread is using its own connection to the sqlite database, so
they are not sharing the same connection.

Here is what is happening in chronological order:

Thread A: Prepare statement-A
Thread B: Prepare statement-B
Thread B: Bind some variables to statement-B
Thread B: Prepare and execute "BEGIN IMMEDIATE" statement.
Thread B: Execute statement-B
Thread A: Bind variables to statement-A
Thread A: Execute statement-A ( statement-A is a one-line simple SQL query
like "SELECT name FROM table WHERE id= ?;" )
Thread B: Prepare and execute "COMMIT" statement.

At this point, thread A is deep into sqlite code calling winSleep(), and
thread B is getting SQLITE_BUSY returned.  I can even set the busy timeout
to 100 seconds and the same thing happens -- hangs until 100 seconds elapse,
then busy returned.

Anyone know why???


Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread P Kishor
On Tue, Apr 7, 2009 at 3:45 PM, Rosemary Alles  wrote:
> Hullo Puneet,
>
> Many thanks for your response.
>
> My understanding of a sqlite3 "transaction" is probably poor. From your
> response
> (if you are correct) I see that only UPDATES and INSERTS can be speeded up
> via bundling many numbers of them in a Being/Commit block?

Not that it is any standard, but search for the word "transaction" at

http://developer.postgresql.org/pgdocs/postgres/sql-select.html

You will see, Pg recommends using SELECTs inside a TRANSACTION for
just the reason I mentioned in my email... ensuring that you retrieve
something dependable that is not changed on you midstream, not for
speed.

> Leading me to
> ask:
> Is there no difference in behavior between a SINGLE select and several
> of them within the context of transaction?

What do you mean by "behavior"? Do you mean what you will get back?
No, it shouldn't be different. Do you mean how fast you will get it
back? Dunno, but you can tell for sure by writing a trivial
benchmarking script on your data.


>
> And yes, each of the many SELECTS have a different WHERE clause.

Don't mean to preempt your application, but bunching SELECTs with
different WHERE clauses makes little sense. I mean, if you are doing

SELECT .. FROM .. WHERE color = 'blue'
SELECT .. FROM .. WHERE color = 'red'
SELECT .. FROM .. WHERE color = 'green'

you can just as well do

SELECT .. FROM .. WHERE color IN ('blue','red','green')

On the other hand, if you are doing

SELECT .. FROM .. WHERE color = 'blue'
SELECT .. FROM .. WHERE taste = 'bitter'
SELECT .. FROM .. WHERE type = 'pill'

That doesn't make much sense, but can also be accomplished with a
single SELECT and a bunch of ORs

Maybe you should explain your actual problem a bit more. What exactly
are you trying to accomplish? What does your db look like? Provide
some sample data, and perhaps example of your multiple but different
SELECT queries that you want to wrap in a transaction.

Once again, if only speed is your aim, benchmark it.


>
> -rosemary.
>
> On Apr 7, 2009, at 12:38 PM, P Kishor wrote:
>
>> On Tue, Apr 7, 2009 at 2:36 PM, Rosemary Alles 
>> wrote:
>>>
>>> Hullo Igor,
>>>
>>> Many thanks for your response: I believe I didn't phrase my question
>>> correctly:
>>>
>>> 1) If I were to bundle several thousand SELECT statements in a single
>>> transaction - why would it not run faster?
>>
>> as far as I understand, transactions matter only in the context of
>> UPDATEs and INSERTs for the purpose of speed (keep in mind, I am not
>> addressing data integrity -- the ACID part here, but only speed). A
>> transaction speeds this multiple UPDATEs and INSERTs by decreasing the
>> number of times your program interacts with slowest part of your
>> computer, the hard disk.
>>
>> Multiple SELECTs in a transaction might help with the integrity, but
>> ensuring that you don't end up getting data changed in mid-stream, but
>> won't speed up the query.
>>
>> Are all your thousands of SELECTs based on different WHERE criterion?
>> If not, they would really be just one SELECT.
>>
>>> 2) This is precisely the problem though - each of those statements
>>> will yield rows of results to be parsed with
>>> sqlite3_column - in the context of the user's (my) program. If many
>>> SELECT statements are issued within the context
>>> of a single transaction (repeatedly), how does one deal with the
>>> results without a callback (if using sql_step)? Yes,
>>> sql_exec is touted to be a wrapper around sql_prepare, bind, step.
>>> However, is does (also - additionally) offer the
>>> option of a user supplied calleback routine which sql_prepare etc. do
>>> not.
>>>
>>> Essentially, my question is about context. if many many SELECTS are
>>> bundled in a single transaction using prepare,
>>> bind and step. In what context does one parse the results? Do we not
>>> have synchronizing issue here?
>>>
>>> Thanks again,
>>> rosemary
>>>
>>> On Apr 6, 2009, at 8:03 PM, Igor Tandetnik wrote:
>>>
 "Rosemary Alles"  wrote
 in message news:20a6b796-613b-4f5d-bfca-359d6b9fa...@ipac.caltech.edu
>
> I want to speed up my app. Can I run SELECT statements within the
> context of a transaction.

 Yes, but it's unlikely to make it run any faster.

> If so, how does one handle the query
> results?

 The same way as when running it without an explicit transaction.

> I would assume this cannot be done with sql_prepare,
> sql_bind, sql_step?

 Of course it can. See sqlite3_column_*

> Would I *have* to use sql_exec

 No. And if you look at the implementation of sqlite3_exec, it uses
 sqlite3_prepare and sqlite3_step internally anyway. It's maintained
 mostly for backward compatibility.

> What am I giving up
> by using sql_exec vs sql_prepare, sql_bind and sql_step?

 Off the top of my head: 1) strong typing (you get all data as strings,
 so that, say, an integer is converted t

Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread Igor Tandetnik
Rosemary Alles  wrote:
> Is there no difference in behavior between a SINGLE select and several
> of them within the context of transaction?

There is, of course, a difference between running one SELECT statement 
and running more than one, regardless of transactions. I guess you 
wanted to ask whether there's a difference between running the same set 
of statements within vs outside a transaction.

There is only a difference if you have another connection (in the same 
or different process) that tries to modify the same database at the same 
time. It will be unable to do so while your transaction is in progress. 
So, if you issue several SELECTs within a transaction, their results are 
guaranteed to be consistent (e.g. if you run the same SELECT twice, it 
will produce the same results both times). If you run these SELECTs 
outside a transaction, the other connection could modify the data in 
between, which could lead to inconsistent results. This may or may not 
matter in your particular case.

Igor Tandetnik 
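
[Editorial note: to illustrate, a minimal sketch, with the table name "a" as
a placeholder, of wrapping a batch of reads in one transaction so another
connection cannot modify the data between them.]

#include <sqlite3.h>

/* Run a batch of reads inside one transaction so they all see the same
 * state of the database. */
int consistent_reads(sqlite3 *db)
{
    int rc = sqlite3_exec(db, "BEGIN;", NULL, NULL, NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* ... any number of SELECTs here, via sqlite3_exec or prepare/step;
     * no other connection can change the data until COMMIT ... */
    rc = sqlite3_exec(db, "SELECT count(*) FROM a;", NULL, NULL, NULL);

    sqlite3_exec(db, "COMMIT;", NULL, NULL, NULL);   /* releases the read lock */
    return rc;
}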





Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread Rosemary Alles
Hullo Puneet,

Many thanks for your response.

My understanding of a sqlite3 "transaction" is probably poor. From
your response (if you are correct), I see that only UPDATEs and INSERTs can be
sped up by bundling many of them in a Begin/Commit block? Which leads me
to ask: is there no difference in behavior between a SINGLE select and several
of them within the context of a transaction?

And yes, each of the many SELECTs has a different WHERE clause.

-rosemary.

On Apr 7, 2009, at 12:38 PM, P Kishor wrote:

> On Tue, Apr 7, 2009 at 2:36 PM, Rosemary Alles  
>  wrote:
>> Hullo Igor,
>>
>> Many thanks for your response: I believe I didn't phrase my question
>> correctly:
>>
>> 1) If I were to bundle several thousand SELECT statements in a single
>> transaction - why would it not run faster?
>
> as far as I understand, transactions matter only in the context of
> UPDATEs and INSERTs for the purpose of speed (keep in mind, I am not
> addressing data integrity -- the ACID part here, but only speed). A
> transaction speeds this multiple UPDATEs and INSERTs by decreasing the
> number of times your program interacts with slowest part of your
> computer, the hard disk.
>
> Multiple SELECTs in a transaction might help with the integrity, but
> ensuring that you don't end up getting data changed in mid-stream, but
> won't speed up the query.
>
> Are all your thousands of SELECTs based on different WHERE criterion?
> If not, they would really be just one SELECT.
>
>> 2) This is precisely the problem though - each of those statements
>> will yield rows of results to be parsed with
>> sqlite3_column - in the context of the user's (my) program. If many
>> SELECT statements are issued within the context
>> of a single transaction (repeatedly), how does one deal with the
>> results without a callback (if using sql_step)? Yes,
>> sql_exec is touted to be a wrapper around sql_prepare, bind, step.
>> However, is does (also - additionally) offer the
>> option of a user supplied calleback routine which sql_prepare etc. do
>> not.
>>
>> Essentially, my question is about context. if many many SELECTS are
>> bundled in a single transaction using prepare,
>> bind and step. In what context does one parse the results? Do we not
>> have synchronizing issue here?
>>
>> Thanks again,
>> rosemary
>>
>> On Apr 6, 2009, at 8:03 PM, Igor Tandetnik wrote:
>>
>>> "Rosemary Alles"  wrote
>>> in message news:20A6B796-613B-4F5D- 
>>> bfca-359d6b9fa...@ipac.caltech.edu
 I want to speed up my app. Can I run SELECT statements within the
 context of a transaction.
>>>
>>> Yes, but it's unlikely to make it run any faster.
>>>
 If so, how does one handle the query
 results?
>>>
>>> The same way as when running it without an explicit transaction.
>>>
 I would assume this cannot be done with sql_prepare,
 sql_bind, sql_step?
>>>
>>> Of course it can. See sqlite3_column_*
>>>
 Would I *have* to use sql_exec
>>>
>>> No. And if you look at the implementation of sqlite3_exec, it uses
>>> sqlite3_prepare and sqlite3_step internally anyway. It's maintained
>>> mostly for backward compatibility.
>>>
 What am I giving up
 by using sql_exec vs sql_prepare, sql_bind and sql_step?
>>>
>>> Off the top of my head: 1) strong typing (you get all data as  
>>> strings,
>>> so that, say, an integer is converted to string and then you'll have
>>> to
>>> convert it back); 2) streaming (with sqlite3_exec, the whole  
>>> resultset
>>> must be present in memory at the same time; with sqlite3_step, only
>>> one
>>> row's worth of data needs to be present in memory; makes a huge
>>> difference for very large resultsets).
>>>
>>> Igor Tandetnik
>>>
>>>
>>>
>
>
> -- 
> Puneet Kishor http://www.punkish.org/
> Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
> Carbon Model http://carbonmodel.org/
> Open Source Geospatial Foundation http://www.osgeo.org/


[sqlite] [Fwd: [SpatiaLite-Users] spatialite-2.3.0 has been released]

2009-04-07 Thread Alex Mandel


 Original Message 
Subject: [SpatiaLite-Users] spatialite-2.3.0 has been released
Date: Tue, 07 Apr 2009 20:27:51 +0200
From: Alessandro Furieri 
Reply-To: spatialite-us...@googlegroups.com
To: spatialite-us...@googlegroups.com


Hi list,

I've just released SpatiaLite v.2.3.0 [stable version]

http://www.gaia-gis.it/spatialite-2.3.0

=

- supporting Routing [VirtualNetwork]

- supporting EXIF GPS pictures

- compatibility support for FDO/OGR RFC16 [VirtualFDO]

- intensive and generalized debugging

- AMALGAMATION: all the stuff put into a single source, as
  SQLite already does: a complete SpatialDBMS engine in just
  two C sources

- clear separation between the LIB and the TOOLs

- libspatialite-2.3.0 now includes libsqlite; linking
  a single library is enough to support a full SpatialDBMS
  engine [some 800KB required]

- now builds on M$ Visual Studio .NET as well



enhanced GUI-tool:

- color SQL syntax
- full DB self-initialization during creation
  [using init_spatialite.sql is no longer required]
- introducing MEMORY-DB ... try and see ...
  ... never seen before something comparable ...


bye, Sandro



[sqlite] ANN - DBD::SQLite version 1.20

2009-04-07 Thread Darren Duncan
All,

I am pleased to announce that DBD::SQLite (Self Contained RDBMS in a DBI 
Driver) 
version 1.20 has been released on CPAN.

   http://search.cpan.org/dist/DBD-SQLite/

This follows on the heels of 10 developer releases released starting 2009 March 
27th (Adam "Alias" Kennedy has been doing release management).  The previous 
production release of DBD::SQLite was version 1.14 about 18 months ago.

Improvements in 1.20 over 1.14 include:

   * Updated the bundled SQLite library from v3.4.2 to v3.6.12, which carries 
many new features as well as bug fixes.
   * Added support for user-defined collations.
   * Added ->column_info().
   * Resolved all but a handful of the 60+ RT items.
   * Many bug fixes and minor enhancements.
   * Added more tests, large refactoring of tests.
   * Minimum dependencies are now Perl 5.006 and DBI 1.57.

See http://cpansearch.perl.org/src/ADAMK/DBD-SQLite-1.20/Changes as well as 
http://sqlite.org/changes.html for details.

Now testing it is especially important, since automatic updates from CPAN, such as with 
the CPAN/CPANPLUS utilities, will now be pulling this new 1.20 by default...

Please bash the hell out of the latest DBD::SQLite and report any outstanding
bugs on RT.  Test your dependent or compatible projects with it, which includes
any DBMS-wrapping or object persistence modules, and applications.

If you want in to DBD::SQLite development, then join the following email/IRC
forums which MST created (the mailing list, I am administrating):

   http://lists.scsys.co.uk/cgi-bin/mailman/listinfo/dbd-sqlite

   #dbd-sqlite on irc.perl.org

And the canonical version control is at:

   http://svn.ali.as/cpan/trunk/DBD-SQLite/

Patches welcome.  Ideas welcome.  Testing welcome.  Whining to /dev/null.

Regarding near future plans:  Now, the current 1.20 uses the pristine 
several-dozen SQLite library source files, same as 1.14 did.  While reality may 
be different, I believe that the next major planned change to DBD::SQLite is to 
substitute in the "amalgamation" version, which combines all the SQLite source 
files into a single file; the amalgamation is the recommended form for users 
according to the SQLite core developers.  See http://sqlite.org/download.html 
for a description of that.  Meanwhile there should be another stable release 
with any bug fixes for 1.20 to come out first.  Any other major changes or 
features for DBD::SQLite are expected to come out separately from and after the 
stabilized switch to the amalgamation sources.

Please do not reply to me directly with your responses.  Instead send them to
the forums or file with RT as is appropriate.

Thank you. -- Darren Duncan



Re: [sqlite] SQLite + CrystalReport + Visual Basic . NET

2009-04-07 Thread Steven . Farmer
Yes, there is an ODBC driver to use with SQLite.dll. Check out this web
page: http://www.ch-werner.de/sqliteodbc

sqlite-users-boun...@sqlite.org wrote on 04/07/2009 02:13:28 AM:

> Hello,
> 
> Is it possible to use CystalReport + SQLite with Visual Basic 2005. 
NET??
> 
> Thanks,
> 
> Ernany


Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Eric Minbiole
> Dear Group:
> 
> When my application launches I want to open the associated database, and if 
> that fails because the file does not exist then I would create a new 
> database. 
> 
> sqlite3_open_v2() is ideal for this purpose because you can leave out 
> SQLITE_OPEN_CREATE flag, and specify SQLITE_OPEN_READWRITE.
> 
> Unfortunately, this is all academic because I am using sqlite3_open16()! 
> Where is the UTF-16 version that accepts the flags as a parameter? How can I 
> achieve the same functionality? Let me add that I am not too keen on 
> modifying sqlite.c so thats not an option (too much hassle when new versions 
> come out).
> 
> How did this oversight happen? And what is the workaround? How can I tell, 
> after a call to sqlite3_open16() if the database was created? The first thing 
> I do when it opens is SELECT * FROM VERSION to see if I have to upgrade the 
> database to a new version of my application data. I guess that call could 
> fail and that would be my clue to create all the tables.
> 
> But what if the SELECT fails for a different reason? How do I distinguish it? 
> How do I make this robust?
> 
> Thanks!

I would assume the intent is that you convert your UTF-16 filename to 
UTF-8, and then call sqlite3_open_v2().  I don't know what platform you 
are running on, but you may have some conversion APIs available to you. 
  If not, unicode.org provides some nice sample code that performs the 
conversion:

http://unicode.org/faq/utf_bom.html#UTF8
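
For instance, a minimal sketch of that approach on Windows (where
WideCharToMultiByte() is available; on other platforms you would substitute
your own UTF-16 to UTF-8 conversion, such as the unicode.org sample code):

   #include <windows.h>
   #include "sqlite3.h"

   /* Open, but do not create, a database given a UTF-16 file name. */
   static int open_existing_db16(const wchar_t *zName16, sqlite3 **ppDb){
     char zName8[MAX_PATH * 4];
     /* Convert the UTF-16 name to UTF-8, which sqlite3_open_v2() expects. */
     if( WideCharToMultiByte(CP_UTF8, 0, zName16, -1,
                             zName8, sizeof(zName8), NULL, NULL)==0 ){
       return SQLITE_ERROR;  /* conversion failed */
     }
     /* No SQLITE_OPEN_CREATE flag: the call fails if the file is missing. */
     return sqlite3_open_v2(zName8, ppDb, SQLITE_OPEN_READWRITE, NULL);
   }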

~Eric
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Igor Tandetnik
Vinnie  wrote:
> Unfortunately, this is all academic because I am using
> sqlite3_open16()! Where is the UTF-16 version that accepts the flags
> as a parameter? How can I achieve the same functionality?

You could convert your file name from UTF-16 to UTF-8, then call 
sqlite3_open_v2.

> How did this oversight happen? And what is the workaround? How can I
> tell, after a call to sqlite3_open16() if the database was created?
> The first thing I do when it opens is SELECT * FROM VERSION to see if
> I have to upgrade the database to a new version of my application
> data. I guess that call could fail and that would be my clue to
> create all the tables.

See PRAGMA user_version (http://sqlite.org/pragma.html#version) - it's 
designed specifically to do this sort of thing. A freshly created 
database will have user_version of 0. You would normally set it to some 
positive number (internally meaningful to your application - SQLite 
doesn't care) after creating your tables.
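
For example, a rough sketch of reading and then setting it through the C API
(untested, just to show the shape):

   #include "sqlite3.h"

   /* Returns the current user_version, or -1 on error.
   ** A brand new database reports 0. */
   static int get_user_version(sqlite3 *db){
     sqlite3_stmt *pStmt;
     int version = -1;
     if( sqlite3_prepare_v2(db, "PRAGMA user_version;", -1, &pStmt, 0)==SQLITE_OK ){
       if( sqlite3_step(pStmt)==SQLITE_ROW ){
         version = sqlite3_column_int(pStmt, 0);
       }
       sqlite3_finalize(pStmt);
     }
     return version;
   }

   /* After creating your tables, record your own schema version, e.g.: */
   /*   sqlite3_exec(db, "PRAGMA user_version = 1;", 0, 0, 0);           */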

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread Igor Tandetnik
Rosemary Alles  wrote:
> 1) If I were to bundle several thousand SELECT statements in a single
> transaction - why would it not run faster?

Why do you believe it should?

> 2) This is precisely the problem though - each of those statements
> will yield rows of results to be parsed with
> sqlite3_column - in the context of the user's (my) program. If many
> SELECT statements are issued within the context
> of a single transaction (repeatedly), how does one deal with the
> results without a callback (if using sql_step)?

One calls sqlite3_column_*, once for each column, to retrieve values 
from the current row after each sqlite3_step call. Which is precisely 
what sqlite3_exec does internally, right after calling sqlite3_step and 
right before calling your callback.
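
In outline it looks something like this (a sketch; the table and column
names are made up):

   #include <stdio.h>
   #include "sqlite3.h"

   static int dump_rows(sqlite3 *db){
     sqlite3_stmt *pStmt;
     int rc = sqlite3_prepare_v2(db,
                "SELECT id, name FROM t WHERE grp=?;", -1, &pStmt, 0);
     if( rc!=SQLITE_OK ) return rc;
     sqlite3_bind_int(pStmt, 1, 42);              /* bind parameter 1 */
     while( sqlite3_step(pStmt)==SQLITE_ROW ){    /* one row per step  */
       int id = sqlite3_column_int(pStmt, 0);
       const unsigned char *zName = sqlite3_column_text(pStmt, 1);
       printf("%d %s\n", id, zName ? (const char*)zName : "(null)");
     }
     return sqlite3_finalize(pStmt);              /* SQLITE_OK if all went well */
   }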

> Yes,
> sql_exec is touted to be a wrapper around sql_prepare, bind, step.
> However, it does (additionally) offer the
> option of a user-supplied callback routine, which sql_prepare etc. do
> not.

And that's good because?

> Essentially, my question is about context: if many SELECT statements are
> bundled in a single transaction using prepare,
> bind and step, in what context does one parse the results? Do we not
> have a synchronizing issue here?

Synchronizing between what and what? I must admit I have absolutely no 
idea what you are talking about.

Igor Tandetnik 



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread P Kishor
On Tue, Apr 7, 2009 at 2:36 PM, Rosemary Alles  wrote:
> Hullo Igor,
>
> Many thanks for your response: I believe I didn't phrase my question
> correctly:
>
> 1) If I were to bundle several thousand SELECT statements in a single
> transaction - why would it not run faster?

as far as I understand, transactions matter only in the context of
UPDATEs and INSERTs for the purpose of speed (keep in mind, I am not
addressing data integrity -- the ACID part -- here, but only speed). A
transaction speeds up multiple UPDATEs and INSERTs by decreasing the
number of times your program interacts with the slowest part of your
computer, the hard disk.
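
For INSERTs the usual pattern is along these lines (a sketch; the table
name is made up):

   #include "sqlite3.h"

   /* Insert n rows inside one transaction so there is only one sync. */
   static int bulk_insert(sqlite3 *db, int n){
     sqlite3_stmt *pStmt;
     int i, rc;
     sqlite3_exec(db, "BEGIN;", 0, 0, 0);
     rc = sqlite3_prepare_v2(db, "INSERT INTO t(x) VALUES(?);", -1, &pStmt, 0);
     if( rc!=SQLITE_OK ){ sqlite3_exec(db, "ROLLBACK;", 0, 0, 0); return rc; }
     for(i=0; i<n; i++){
       sqlite3_bind_int(pStmt, 1, i);
       sqlite3_step(pStmt);
       sqlite3_reset(pStmt);                      /* reuse the prepared statement */
     }
     sqlite3_finalize(pStmt);
     return sqlite3_exec(db, "COMMIT;", 0, 0, 0); /* disk is hit once, at commit */
   }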

Multiple SELECTs in a transaction might help with integrity, by
ensuring that you don't end up getting data changed in mid-stream, but
they won't speed up the query.

Are all your thousands of SELECTs based on different WHERE criteria?
If not, they would really be just one SELECT.

> 2) This is precisely the problem though - each of those statements
> will yield rows of results to be parsed with
> sqlite3_column - in the context of the user's (my) program. If many
> SELECT statements are issued within the context
> of a single transaction (repeatedly), how does one deal with the
> results without a callback (if using sql_step)? Yes,
> sql_exec is touted to be a wrapper around sql_prepare, bind, step.
> However, it does (additionally) offer the
> option of a user-supplied callback routine, which sql_prepare etc. do
> not.
>
> Essentially, my question is about context: if many SELECT statements are
> bundled in a single transaction using prepare,
> bind and step, in what context does one parse the results? Do we not
> have a synchronizing issue here?
>
> Thanks again,
> rosemary
>
> On Apr 6, 2009, at 8:03 PM, Igor Tandetnik wrote:
>
>> "Rosemary Alles"  wrote
>> in message news:20a6b796-613b-4f5d-bfca-359d6b9fa...@ipac.caltech.edu
>>> I want to speed up my app. Can I run SELECT statements within the
>>> context of a transaction.
>>
>> Yes, but it's unlikely to make it run any faster.
>>
>>> If so, how does one handle the query
>>> results?
>>
>> The same way as when running it without an explicit transaction.
>>
>>> I would assume this cannot be done with sql_prepare,
>>> sql_bind, sql_step?
>>
>> Of course it can. See sqlite3_column_*
>>
>>> Would I *have* to use sql_exec
>>
>> No. And if you look at the implementation of sqlite3_exec, it uses
>> sqlite3_prepare and sqlite3_step internally anyway. It's maintained
>> mostly for backward compatibility.
>>
>>> What am I giving up
>>> by using sql_exec vs sql_prepare, sql_bind and sql_step?
>>
>> Off the top of my head: 1) strong typing (you get all data as strings,
>> so that, say, an integer is converted to string and then you'll have
>> to
>> convert it back); 2) streaming (with sqlite3_exec, the whole resultset
>> must be present in memory at the same time; with sqlite3_step, only
>> one
>> row's worth of data needs to be present in memory; makes a huge
>> difference for very large resultsets).
>>
>> Igor Tandetnik
>>
>>
>>


-- 
Puneet Kishor http://www.punkish.org/
Nelson Institute for Environmental Studies http://www.nelson.wisc.edu/
Carbon Model http://carbonmodel.org/
Open Source Geospatial Foundation http://www.osgeo.org/
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] UTF-16 API a second class citizen?

2009-04-07 Thread Vinnie

Dear Group:

When my application launches I want to open the associated database, and if 
that fails because the file does not exist then I would create a new database. 

sqlite3_open_v2() is ideal for this purpose because you can leave out 
SQLITE_OPEN_CREATE flag, and specify SQLITE_OPEN_READWRITE.

Unfortunately, this is all academic because I am using sqlite3_open16()! Where 
is the UTF-16 version that accepts the flags as a parameter? How can I achieve 
the same functionality? Let me add that I am not too keen on modifying sqlite.c, 
so that's not an option (too much hassle when new versions come out).

How did this oversight happen? And what is the workaround? How can I tell, 
after a call to sqlite3_open16(), whether the database was created? The first thing I 
do when it opens is SELECT * FROM VERSION to see if I have to upgrade the 
database to a new version of my application data. I guess that call could fail 
and that would be my clue to create all the tables.

But what if the SELECT fails for a different reason? How do I distinguish it? 
How do I make this robust?

Thanks!

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Select statements in the context of transactions.

2009-04-07 Thread Rosemary Alles
Hullo Igor,

Many thanks for your response: I believe I didn't phrase my question  
correctly:

1) If I were to bundle several thousand SELECT statements in a single  
transaction - why would it not run faster?
2) This is precisely the problem though - each of those statements  
will yield rows of results to be parsed with
sqlite3_column - in the context of the user's (my) program. If many  
SELECT statements are issued within the context
of a single transaction (repeatedly), how does one deal with the  
results without a callback (if using sql_step)? Yes,
sql_exec is touted to be a wrapper around sql_prepare, bind, step.  
However, it does (additionally) offer the  
option of a user-supplied callback routine, which sql_prepare etc. do  
not.

Essentially, my question is about context: if many SELECT statements are  
bundled in a single transaction using prepare,  
bind and step, in what context does one parse the results? Do we not  
have a synchronizing issue here?

Thanks again,
rosemary

On Apr 6, 2009, at 8:03 PM, Igor Tandetnik wrote:

> "Rosemary Alles"  wrote
> in message news:20a6b796-613b-4f5d-bfca-359d6b9fa...@ipac.caltech.edu
>> I want to speed up my app. Can I run SELECT statements within the
>> context of a transaction.
>
> Yes, but it's unlikely to make it run any faster.
>
>> If so, how does one handle the query
>> results?
>
> The same way as when running it without an explicit transaction.
>
>> I would assume this cannot be done with sql_prepare,
>> sql_bind, sql_step?
>
> Of course it can. See sqlite3_column_*
>
>> Would I *have* to use sql_exec
>
> No. And if you look at the implementation of sqlite3_exec, it uses
> sqlite3_prepare and sqlite3_step internally anyway. It's maintained
> mostly for backward compatibility.
>
>> What am I giving up
>> by using sql_exec vs sql_prepare, sql_bind and sql_step?
>
> Off the top of my head: 1) strong typing (you get all data as strings,
> so that, say, an integer is converted to string and then you'll have  
> to
> convert it back); 2) streaming (with sqlite3_exec, the whole resultset
> must be present in memory at the same time; with sqlite3_step, only  
> one
> row's worth of data needs to be present in memory; makes a huge
> difference for very large resultsets).
>
> Igor Tandetnik
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Vacuum syntax

2009-04-07 Thread Gerry Snyder
Richard Dauben wrote:
> Hopefully a quick VACUUM question:  I want to vacuum my sqlite3 database
> from an Objective-C program but have not been able to find the necessary
> routine or statement.  What is the proper syntax?
>   

Gimme a V
Gimme an A
Gimme a C
Gimme a U
Gimme a U
Gimme an M

It's just an SQL command.
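
For instance, a minimal sketch of issuing it through the C API (which is
also what you would call from Objective-C):

   #include "sqlite3.h"

   /* Run VACUUM on an open connection; it cannot run inside a transaction. */
   static int vacuum_db(sqlite3 *db){
     char *zErr = 0;
     int rc = sqlite3_exec(db, "VACUUM;", 0, 0, &zErr);
     if( rc!=SQLITE_OK ){
       /* zErr describes the failure; free it when done. */
       sqlite3_free(zErr);
     }
     return rc;
   }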


HTH,

Gerry
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Vacuum syntax

2009-04-07 Thread Richard Dauben
Hopefully a quick VACUUM question:  I want to vacuum my sqlite3 database
from an Objective-C program but have not been able to find the necessary
routine or statement.  What is the proper syntax?
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] On-line documentation license

2009-04-07 Thread D. Richard Hipp

On Apr 5, 2009, at 9:52 PM, Krzysztof 'ChanibaL' Bociurko wrote:

>  I'd like to use the great SQL syntax diagrams for DML and perhaps  
> DDL statements from
> SQLite's on-line documentation.  Can I do that?

Yes
>
> PS. What was used to produce the diagrams,

http://wiki.tcl.tk/21708

D. Richard Hipp
d...@hwaci.com



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] On-line documentation license

2009-04-07 Thread Krzysztof 'ChanibaL' Bociurko
Hello, 

I'm in the process of writing my bachelor's thesis and I'd like to use the
great SQL syntax diagrams for DML and perhaps DDL statements from
SQLite's on-line documentation. I can't find anything on the license of
the documentation (apart from "The details here are unclear."), so I
thought it'd be best to ask - can I do that? Of course I will not claim
any copyright for the diagrams, and the appropriate information shall be
added to the bibliography.

The project on which I base my thesis is a highly reusable, object-oriented,
open source (code LGPL, examples BSD license) web application
framework written in PHP. For data storage it uses SQLite
or MySQL (and is easily adaptable to other SQL dialects). The
thesis is being written in Polish but the source code is in
English. I plan to release the framework (called SpInq) to the public at
the end of this year.

PS. What was used to produce the diagrams, and would it be possible to
get vectorized versions of them? (If vectorizing would require any
extra work, please ignore this; the bitmaps will do.)

Thanks in advance,

--
Krzysztof `ChanibaL` Bociurko 
xmpp://jab...@chanibal.net, http://chanibal.net
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Simple Tokenizer in FTS3

2009-04-07 Thread Andy Roberts


I shall follow-up myself as I've made some progress.

I noticed that if I let the default loop run with !isalnum() and then
overrode specific ASCII values like the apostrophe, then I started seeing
the results I expected.

I then realised that the fact that the MATCH query "string*"
contains a non-alphanumeric symbol might be significant.  So, as before, I found
that using !isgraph() alone didn't return anything, but if I use !isgraph() and
also set the value for '*' to be a delimiter, then I get my results!

So that must have been the issue. Because I'm always doing prefix queries
against my dataset, I guess I must take that asterisk symbol into
consideration. If this is the reason, then it may be prudent to make certain
symbols that are key to the MATCH syntax explicit delimiters.
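
In code terms, the change described above amounts to something like this
inside simpleCreate() (a sketch against the fts3 simple tokenizer quoted
further down in this thread, not a tested patch):

   for(i=1; i<0x80; i++){
     t->delim[i] = !isgraph(i);   /* only non-printable chars delimit */
   }
   t->delim['*'] = 1;             /* keep '*' as a delimiter so prefix
                                  ** MATCH queries ("string*") still parse */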

Anyway, it seems I've got what I wanted now.

Thanks,
Andy
-- 
View this message in context: 
http://www.nabble.com/Simple-Tokenizer-in-FTS3-tp22911635p22925471.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Simple Tokenizer in FTS3

2009-04-07 Thread Andy Roberts

Thanks for at least confirming that I was on the right track. I was doing
some basic inspection of the delim array before and after I changed it, and it
seemed to be doing what I wanted. I didn't look any closer at the
simpleNext function, which is clearly my next port of call.

It does feel that I'm missing something, though. My use case is in fact
pretty simple. The text column I'm indexing doesn't contain lots of
free-flowing text - the values are mainly one- or two-word items. I'm using FTS3
because the MATCH performance for prefix searches is so much better than that of
standard LIKE 'string%' searches.
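
For reference, the kind of query involved looks roughly like this (a sketch
with a hypothetical FTS3 table items(name)):

   #include "sqlite3.h"

   /* Count rows whose name starts with a prefix, using a MATCH prefix
   ** pattern such as "abc*" rather than LIKE 'abc%'. */
   static int count_prefix(sqlite3 *db, const char *zPattern){
     sqlite3_stmt *pStmt;
     int n = -1;
     if( sqlite3_prepare_v2(db,
           "SELECT count(*) FROM items WHERE name MATCH ?;",
           -1, &pStmt, 0)!=SQLITE_OK ){
       return -1;
     }
     sqlite3_bind_text(pStmt, 1, zPattern, -1, SQLITE_TRANSIENT);
     if( sqlite3_step(pStmt)==SQLITE_ROW ) n = sqlite3_column_int(pStmt, 0);
     sqlite3_finalize(pStmt);
     return n;
   }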

As another test, I tried simply setting each element in the delim array to 0
so that it doesn't break up the text at all, not even on spaces. But as soon as
I alter that line of code, my MATCH queries stop returning any
results, even though in theory they should. So that's a surprising result which
leaves me scratching my head a tad.

So I definitely need to investigate more closely how the tokens are being
split up as a starting point, but perhaps the issue is elsewhere, in the
execution of the MATCH query itself for example, although I've not located
that code yet.

I will keep this thread up to date if and when I make any progress.

Thanks,
Andy




Scott Hess wrote:
> 
> My suggestion would be to spin up a debugger and set a breakpoint on
> that loop and check to see if it's really doing what you think it is.
> If it is, break in the code doing the actual tokenization and see if
> it's being called as expected.  Or scatter some printf() calls in
> there.  It's embarrassing the number of times one writes really simple
> code which doesn't work, and it turns out that it's something silly,
> like you didn't actually compile what you thought you did, or you
> modified code in the wrong file, or something like that.
> 
> I don't think you probably want isspace() here, though, it would let
> through non-whitespace control characters.  Most of your inputs
> probably won't have those, but it's easy to plan ahead, you might
> consider isgraph() instead, used the same way as isalnum() was being
> used before.
> 
> -scott
> 
> 
> On Mon, Apr 6, 2009 at 8:51 AM, Andy Roberts  wrote:
>>
>> Hi,
>>
>> I downloaded the amalgamation sources in order to create a build of
>> sqlite
>> with FTS3 enabled. The problem for me is that the default "simple"
>> tokenizer
>> is not behaving precisely how I want. In fact, I'd prefer if it wouldn't
>> count punctuation as a delimeter, and stuck purely to whitespace.
>>
>> In the simpleCreate() function there's some code that initializes an
>> array
>> that records with characters are delimiters or not:
>>
>> for(i=1; i<0x80; i++){
>>    t->delim[i] = !isalnum(i);
>> }
>>
>> I thought that if I made a simple edit to use the isspace() function then
>> I'd achieve what I was after, i.e.,
>>
>> for(i=1; i<0x80; i++){
>>    t->delim[i] = isspace(i);
>> }
>>
>> However, when I build this version, create my fts virtual tables and then
>> query them I get zero results. When I revert back to !isalnum I get
>> results,
>> but as I'm seeing words that are being split where I don't want them to
>> be.
>>
>> I must admit my C experience isn't great, but I've been trying for far
>> too
>> many hours now with little gain. I'd really appreciate some pointers!
>>
>> Thanks in advance,
>> Andy
>> --
>> View this message in context:
>> http://www.nabble.com/Simple-Tokenizer-in-FTS3-tp22911635p22911635.html
>> Sent from the SQLite mailing list archive at Nabble.com.
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 
> 

-- 
View this message in context: 
http://www.nabble.com/Simple-Tokenizer-in-FTS3-tp22911635p22924173.html
Sent from the SQLite mailing list archive at Nabble.com.

___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Is it using an index?

2009-04-07 Thread John Machin
On 7/04/2009 6:43 AM, Scott Baker wrote:
[snip]

> I must have typod and not noticed.

Your hypothesis carries within itself the seed of its own plausibility :-)
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Database handle(sqlite3_stmt)

2009-04-07 Thread Joanne Pham
Thanks Igor!
Just a question: I have always used two different statement handles, one for 
insert and one for select, with SQL Server as well as SQLite. 
Thanks again for responding to my email.
JP





From: Igor Tandetnik 
To: sqlite-users@sqlite.org
Sent: Monday, April 6, 2009 8:10:58 PM
Subject: Re: [sqlite] Database handle(sqlite3_stmt)

"Joanne Pham" 
wrote in message news:677762.12434...@web90302.mail.mud.yahoo.com
> Can we use one statement handle for both insert and select?

Of course not. When you call sqlite3_prepare, you pass the query text 
and get back the statement handle. The query can't begin both with 
SELECT and with INSERT, obviously.
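
In other words, you would normally keep two separate handles around, along
these lines (a sketch with made-up table and column names):

   #include "sqlite3.h"

   /* Hypothetical schema: t(id INTEGER, name TEXT).  One handle per query. */
   static int prepare_statements(sqlite3 *db,
                                 sqlite3_stmt **ppInsert, sqlite3_stmt **ppSelect){
     int rc = sqlite3_prepare_v2(db, "INSERT INTO t(id, name) VALUES(?, ?);",
                                 -1, ppInsert, 0);
     if( rc!=SQLITE_OK ) return rc;
     return sqlite3_prepare_v2(db, "SELECT name FROM t WHERE id=?;",
                               -1, ppSelect, 0);
   }
   /* Each handle is bound, stepped and reset independently, and each must
   ** eventually be passed to sqlite3_finalize(). */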

Why do you feel it would be beneficial to somehow "reuse" a statement 
handle for two different queries? What are you trying to achieve?

Igor Tandetnik



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



  
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users