Re: [sqlite] test failures on cygwin

2007-09-21 Thread Gerry Snyder

Evans, Mark (Tandem) wrote:

> So it looks like I have a cygwin TCL issue.  Is this fixable?

You can load ActiveState Tcl and use that instead of the version that
comes with Cygwin.


You get lots of extra goodies, in addition to an up-to-date Tcl core.

HTH,

Gerry

-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] test failures on cygwin

2007-09-21 Thread drh
"Evans, Mark (Tandem)" <[EMAIL PROTECTED]> wrote:
> **
> N.B.:  The version of TCL that you used to build this test harness
> is defective in that it does not support 64-bit integers.  Some or
> all of the test failures above might be a result from this defect
> in your TCL build.
> **

Yeah.  That's what I figured.  You need to fix your TCL build.

It's *still* not too late to switch to Linux or Mac  :-)

--
D. Richard Hipp <[EMAIL PROTECTED]>





Re: [sqlite] test failures on cygwin

2007-09-21 Thread drh
"Evans, Mark (Tandem)" <[EMAIL PROTECTED]> wrote:
> Hi all,
> 
> I have been lurking on the message board and am in awe of the collective
> wisdom.
> 
> I'm just getting my feet wet learning the internals of SQLite, drinking
> from the proverbial firehose.  I am using cygwin 1.90 as my learning
> platform and I have built SQLite 3.5.0.   I ran 'make test' (quick.test
> suite) and it reports some test failures as follows:
> 
> 41 errors out of 28604 tests
> Failures on these tests: autoinc-6.1 bind-3.1 cast-3.1 cast-3.2 cast-3.4
> cast-3.5 cast-3.6 cast-3.8 cast-3.11 cast-3.12 cast-3.14 cast-3.15
> cast-3.16 cast-3.18 cast-3.21 cast-3.22 cast-3.24 expr-1.102 expr-1.106
> func-18.14 func-18.16 func-18.17 func-18.31 lastinsert-8.1
> lastinsert-9.1 misc1-9.1 misc2-4.1 misc2-4.2 misc2-4.4 misc2-4.5
> misc2-4.6 misc3-3.6 misc3-3.7 misc3-3.8 misc3-3.9 misc3-3.10 misc3-3.11
> misc5-2.2 shared-1.11.9 shared-2.11.9 types-2.1.8
> 
> Should I be surprised by these failures?  
> 
> Taking the first one that failed, autoinc-6.1, it reported:
> 
> autoinc-6.1...
> Expected: [9223372036854775807]
>  Got: [-1]
> 
> The corresponding test code is:
> 
> ifcapable {!rowid32} {
>   do_test autoinc-6.1 {
> execsql {
>   CREATE TABLE t6(v INTEGER PRIMARY KEY AUTOINCREMENT, w);
>   INSERT INTO t6 VALUES(9223372036854775807,1);
>   SELECT seq FROM main.sqlite_sequence WHERE name='t6';
> }
>   } 9223372036854775807
> }
> 
> So the SELECT clause returns -1 instead of the expected
> 0x7FFFFFFFFFFFFFFF (MAXLONGLONG), or in other words MAXLONGLONG + 1.
> Should the value returned by SELECT seq ... be the key value (rowid)
> of the last insert, or is seq supposed to represent the next rowid
> (autoincrement of the last key)?
> 

41 failures out of 28604 isn't bad.  That's a 0.1% failure rate.  If
these are the only problems you are having, your build is probably OK.

If you want to go for 100% pass, the first place I would look is what
version of TCL you are linking against to build the test harness.
You want a later version of 8.4 or any version of 8.5.  I'll bet you
have 8.3, which won't work here.
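For illustration only (the harness itself is Tcl): the -1 in the failing tests is exactly what falls out when MAXLONGLONG is squeezed through 32-bit integer math, as a pre-8.4 Tcl build without wide-integer support would do. A minimal Python sketch of the arithmetic:

```python
MAXLONGLONG = 9223372036854775807   # 0x7FFFFFFFFFFFFFFF, the value autoinc-6.1 inserts

# A Tcl build without 64-bit integer support keeps only the low 32 bits...
low32 = MAXLONGLONG & 0xFFFFFFFF    # 0xFFFFFFFF

# ...which, reinterpreted as a signed 32-bit integer, is exactly the -1
# the failing tests report.
as_int32 = low32 - (1 << 32) if low32 >= (1 << 31) else low32
print(as_int32)  # -1
```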

The other thing you might try to do is *not* use cygwin but instead
use mingw.

Of course, the easiest option by far is to use a Linux box or a Mac. :-)

--
D. Richard Hipp <[EMAIL PROTECTED]>






RE: [sqlite] test failures on cygwin

2007-09-21 Thread Evans, Mark (Tandem)
When I run just autoinc.test alone, I get the following summary
output:

54 errors out of 66 tests
Failures on these tests: autoinc-6.1 autoinc-1.1 autoinc-1.2 autoinc-1.3
autoinc-1.4 autoinc-1.6 autoinc-2.1 autoinc-2.2 autoinc-2.3 autoinc-2.4
autoinc-2.5 autoinc-2.6 autoinc-2.7 autoinc-2.8 autoinc-2.9 autoinc-2.10
autoinc-2.11 autoinc-2.12 autoinc-2.13 autoinc-2.21 autoinc-2.23
autoinc-2.25 autoinc-2.27 autoinc-2.29 autoinc-2.51 autoinc-2.52
autoinc-2.53 autoinc-2.54 autoinc-2.55 autoinc-2.70 autoinc-2.71
autoinc-2.72 autoinc-2.73 autoinc-2.74 autoinc-3.1 autoinc-3.2
autoinc-3.3 autoinc-3.4 autoinc-4.1 autoinc-4.2 autoinc-4.3 autoinc-4.4
autoinc-4.4.1 autoinc-4.5 autoinc-4.6 autoinc-4.7 autoinc-4.8
autoinc-4.9 autoinc-4.10 autoinc-5.1 autoinc-5.2 autoinc-5.3 autoinc-5.4
autoinc-7.1
**
N.B.:  The version of TCL that you used to build this test harness
is defective in that it does not support 64-bit integers.  Some or
all of the test failures above might be a result from this defect
in your TCL build.
**
All memory allocations freed - no leaks
Maximum memory usage: 76060 bytes


So it looks like I have a cygwin TCL issue.  Is this fixable?
 
Mark





[sqlite] test failures on cygwin

2007-09-21 Thread Evans, Mark (Tandem)
Hi all,

I have been lurking on the message board and am in awe of the collective
wisdom.

I'm just getting my feet wet learning the internals of SQLite, drinking
from the proverbial firehose.  I am using cygwin 1.90 as my learning
platform and I have built SQLite 3.5.0.   I ran 'make test' (quick.test
suite) and it reports some test failures as follows:

41 errors out of 28604 tests
Failures on these tests: autoinc-6.1 bind-3.1 cast-3.1 cast-3.2 cast-3.4
cast-3.5 cast-3.6 cast-3.8 cast-3.11 cast-3.12 cast-3.14 cast-3.15
cast-3.16 cast-3.18 cast-3.21 cast-3.22 cast-3.24 expr-1.102 expr-1.106
func-18.14 func-18.16 func-18.17 func-18.31 lastinsert-8.1
lastinsert-9.1 misc1-9.1 misc2-4.1 misc2-4.2 misc2-4.4 misc2-4.5
misc2-4.6 misc3-3.6 misc3-3.7 misc3-3.8 misc3-3.9 misc3-3.10 misc3-3.11
misc5-2.2 shared-1.11.9 shared-2.11.9 types-2.1.8

Should I be surprised by these failures?  

Taking the first one that failed, autoinc-6.1, it reported:

autoinc-6.1...
Expected: [9223372036854775807]
 Got: [-1]

The corresponding test code is:

ifcapable {!rowid32} {
  do_test autoinc-6.1 {
execsql {
  CREATE TABLE t6(v INTEGER PRIMARY KEY AUTOINCREMENT, w);
  INSERT INTO t6 VALUES(9223372036854775807,1);
  SELECT seq FROM main.sqlite_sequence WHERE name='t6';
}
  } 9223372036854775807
}

So the SELECT clause returns -1 instead of the expected
0x7FFFFFFFFFFFFFFF (MAXLONGLONG), or in other words MAXLONGLONG + 1.
Should the value returned by SELECT seq ... be the key value (rowid)
of the last insert, or is seq supposed to represent the next rowid
(autoincrement of the last key)?
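As a cross-check, the same statements run through a binding with working 64-bit integers (here Python's built-in sqlite3 module, as an illustrative sketch) return the expected value; seq holds the largest rowid inserted so far, not the next one:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t6(v INTEGER PRIMARY KEY AUTOINCREMENT, w);
    INSERT INTO t6 VALUES(9223372036854775807, 1);
""")

# sqlite_sequence.seq records the largest rowid ever inserted into t6.
seq = con.execute(
    "SELECT seq FROM sqlite_sequence WHERE name = 't6'").fetchone()[0]
print(seq)  # 9223372036854775807
```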

Regards,
Mark





Re: [sqlite] SQLite and html character entities

2007-09-21 Thread Clark Christensen
Wow!  Excellent summary, Trevor.

- Original Message 
From: Trevor Talbot <[EMAIL PROTECTED]>
To: sqlite-users@sqlite.org
Sent: Thursday, September 20, 2007 11:35:42 PM
Subject: Re: [sqlite] SQLite and html character entities


RE: [sqlite] Insertion and Search at a same time is it possible?

2007-09-21 Thread Sreedhar.a

Hi John Stanton,

Thank you very much, I will try this method.

Best Regards,
A.Sreedhar.





-
To unsubscribe, send email to [EMAIL PROTECTED]

-




-
To unsubscribe, send email to [EMAIL PROTECTED]
-



Re: [sqlite] Insertion and Search at a same time is it possible?

2007-09-21 Thread John Stanton
You can read and write to the database concurrently provided that your 
program can handle SQLITE_BUSY events.
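A hedged sketch of what handling SQLITE_BUSY can look like from Python's sqlite3 binding (the helper name and retry policy below are illustrative, not from this thread; the binding surfaces SQLITE_BUSY as an OperationalError):

```python
import sqlite3
import time

def execute_with_retry(con, sql, params=(), retries=5, delay=0.1):
    """Run a statement, retrying briefly when SQLITE_BUSY surfaces
    as an OperationalError ('database is locked')."""
    for attempt in range(retries):
        try:
            return con.execute(sql, params)
        except sqlite3.OperationalError as exc:
            if "locked" not in str(exc) or attempt == retries - 1:
                raise
            time.sleep(delay)

# The binding can also wait out SQLITE_BUSY itself via a busy timeout:
con = sqlite3.connect(":memory:", timeout=5.0)
print(execute_with_retry(con, "SELECT 1").fetchone())  # (1,)
```

Either approach satisfies the requirement above: readers simply wait (or retry) while a writer holds the lock.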


Sreedhar.a wrote:

> Hi Everyone,
>
> I am implementing a server database using SQLite.
>
> I will be having multiple clients browsing the database.
> At the same time, the database can also be updated.
>
> I can copy the database into the local memory of my system and can
> perform a search.  Can I insert records into the same file at the
> same time, while searching?
>
> Your suggestions will help me a lot in my project.
>
> Best Regards,
> A.Sreedhar.







RE: [sqlite] FTS3 where ?

2007-09-21 Thread Andre du Plessis
I am indeed using Delphi, BUT:
I have quite a few programs which use the library I put together
for Delphi, so it would be quite a large change to convert to DISQLite3
at this point; plus, this same program uses Python and ASP.NET,
which means that fts3.dll as a standalone would definitely be preferred.







Re: [sqlite] FTS3 where ?

2007-09-21 Thread Ralf Junker
Hello Andre du Plessis,

If you are using Delphi, FTS3 is already included in the latest DISQLite3 (Pro 
and Personal). Download is available from http://www.yunqa.de/delphi/.

The source code is available from CVS. You will find FTS3 in the /ext/ 
directory.

Ralf

>Fts3 which everyone is talking about, I cannot see any mention of it on
>the download page, does it mean that its just the development sourcecode
>which people are compiling at this point or are there some prebuilt
>dll's available, or has it not been officially released yet?





[sqlite] FTS3 where ?

2007-09-21 Thread Andre du Plessis
FTS3, which everyone is talking about: I cannot see any mention of it on
the download page.  Does that mean it's just development source code
that people are compiling at this point, or are there some prebuilt
DLLs available, or has it not been officially released yet?

 

Thanks.



Re: [sqlite] SQLite and html character entities

2007-09-21 Thread Trevor Talbot
I should also mention, a text editor I like to use is SubEthaEdit (I
have the old 2.2 version), and it supports switching encodings via the
Format menu.  If you're switching to find the encoding of an existing
file, just choose Reinterpret when it asks.




Re: [sqlite] SQLite and html character entities

2007-09-21 Thread Trevor Talbot
On 9/20/07, P Kishor <[EMAIL PROTECTED]> wrote:
> On 9/20/07, Trevor Talbot <[EMAIL PROTECTED]> wrote:
> > On 9/20/07, P Kishor <[EMAIL PROTECTED]> wrote:

> > > Lucknow:~/Data/ecoservices punkish$ less foo.csv
> > > "the first record"
> > > "\351 \347 \361 \356"
> > > "more from 3rd row"
> > > "row four"
> > > "these \223volunteered\224 activities"
> > > "<\341 \370 \343 \374 \356 & others>"
> > > foo.csv (END)
> > > -
> >
> > Note that this is *not* UTF-8.  If you're still using this as test
> > data, you need to get rid of it and use UTF-8 encoded data instead.

> this is where I lost you... when you say "this" is not UTF8, what is
> "this"?

The data in the file shown by less, and since sqlite3 exported that
data exactly as it was stored, the data in the db as well.

> All I want is that I want (1) the user to be able to type ç in
> the web form, and (2) I want to be able to save ç in the db. (3) Then
> when I look at that data, either on the command line, but definitely
> back on the web, I want it to appear as ç. (4) If I export it, I
> should still be able to see it as ç and not something else.
>
> Seems like I was able to do 1, 2, and 3 with my test case, but not 4
> (I got \347 instead ç).
>
> Also, in my production case, 1,2, and 3 are not very reliable. Are you
> saying my data above are not UTF8? If so, I would like to know how you
> can tell that, so I can recognize it in the future myself. Also, I
> would like to know how I can do what you are suggesting I should do,
> that is, how can I ensure that I "use UTF8 encoded data"?

Okay, first a quick primer on character sets and encodings.  A byte
can hold one of 256 different values (0-255), and most processing
tends to happen on bytes, so it makes sense that individual characters
should be stored as individual bytes.

First we have US ASCII, the character encoding standard that defines
128 characters, including the basic English alphabet, numbers, and
some punctuation (www.asciitable.com).  However, this obviously
doesn't cover all the symbols in common use, or characters from other
languages, so more definitions are needed.  Given that a byte supports
twice as many values (ASCII takes up only half), that leaves 128
values for other purposes.  Many other character sets keep the bottom
half as ASCII, and assign different characters to the top 128 values.
The ISO-8859 family of standards works this way.

ISO-8859-1 is also known as Latin-1, and is most common for languages
that use characters similar to English, Spanish, etc.  It adds a few
more symbols (copyright, paragraph, etc) and some common characters
with diacritical marks (like é ç ñ î).  The data you posted above was
entered into your database using this encoding (or Windows-1252, which
is identical except for adding some characters in places 8859-1 does
not use).
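That claim is easy to verify. Decoding the octal bytes from the CSV shown earlier as Latin-1 yields the intended characters, while a UTF-8 decode of the very same bytes fails (a quick Python check):

```python
# The octal escapes less displayed: \351 \347 \361 \356
data = bytes([0o351, 0o347, 0o361, 0o356])    # b'\xe9\xe7\xf1\xee'

# Read as ISO-8859-1 (Latin-1), they are the intended characters...
print(data.decode("latin-1"))                 # éçñî

# ...but the same bytes are not a valid UTF-8 sequence at all.
try:
    data.decode("utf-8")
except UnicodeDecodeError:
    print("not valid UTF-8")
```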

ISO-8859-2 is also known as Latin-2, and covers another set of
European languages (such as Romanian).  It contains a different set of
symbols and characters with diacritical marks needed for these
languages, characters that don't fit in 8859-1.

It keeps going, of course (Wikipedia has info:
http://en.wikipedia.org/wiki/Category:ISO_8859).  There are many other
encodings that work this way, and collectively they're known as
single-byte encodings: they all represent a character as a single
byte, but the actual meaning of that byte depends on the character set
in use.

This situation is ripe for confusion, since interpreting a sequence of
bytes as being in a different encoding than it was stored in will lead
to strange results.  This is exactly what you saw in your Cocoa
editor, since it defaulted to using the classic MacRoman encoding,
which uses those same byte values to store uppercase characters
instead.
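That misreading can be reproduced directly: decoding the same Latin-1 bytes as MacRoman produces different characters (exactly which ones depends on the MacRoman table, but they will not match the Latin-1 reading). A sketch:

```python
data = b"\xe9\xe7\xf1\xee"               # é ç ñ î when read as Latin-1

latin1 = data.decode("latin-1")
macroman = data.decode("mac_roman")      # same bytes, different characters

print(latin1)
print(macroman)
```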

It gets worse: there are multi-byte encodings too.  You typically see
these in the East Asian languages, since they don't use the same
alphabetic writing system, and instead have thousands of characters to
encode.  A byte only supports a mere 256 values, so more than one byte
is needed to represent a single character.
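A quick Python sketch of the multi-byte idea, using UTF-8 (which the next paragraphs introduce) as the example encoding:

```python
# In a multi-byte (variable-width) encoding, one character
# may occupy several bytes.
print("ç".encode("utf-8"))             # b'\xc3\xa7'  (two bytes)
print(len("\u8a9e".encode("utf-8")))   # 3  (a CJK character takes three)
```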

By now you can see how this can spiral into an unmaintainable mess:
you have to worry about this encoding and that encoding and you can
store the encoding with the text but what do you do if someone
requests data in another encoding and what if they are using a
specific encoding but that text only contains ASCII characters and
therefore everyone should see it anyway and how do you tell the
difference and *brain asplode*

Enter Unicode, which has the goal of putting all the world's commonly
used language characters and symbols into one single character set.
By using Unicode, you don't have to worry about which character set
your data is in, and you can move on to other more interesting issues.
 Of course, it's a very large character set, supporting just over 1
million characters.  Obviously these don't all fit in one byte, so
there are also several standard encodings.  UTF-8 is one