Re: [sqlite] Division accuracy

2014-09-25 Thread RSmith


On 2014/09/25 19:32, Clemens Ladisch wrote:

RSmith wrote:

the time of finishing does not determine position alone, there are
bonuses and penalties which doesn't stack up to integer values, but is
measurable [...]  Without going into too much detail about how bonuses
and penalties are calculated

How can anybody help you without these details?


Because this is not the problem I needed help with. This one I can handle; I merely mentioned it to demonstrate to Mr. Lowden why I 
would need a seemingly unnatural ordering. The help I needed was already provided, and added to, by various people, for which I am very 
thankful.


And from JKL:

You make valid (if obvious) points, but...


Measurable means "is a number", right?  So we have math, and IEEE
floating point.  Yay!  We can compute the answer!   


No, measurable means that we understand the relative value of an advantage or 
disadvantage even if it does not necessarily compute into a single number, 
because it might be subjective or multiplexed. Did you have orange juice or 
grapefruit juice for breakfast? Which scored higher? That depends on whether you 
are rating taste or healthiness - and those are by far not the only possible 
ratings.

The exercise of mapping it to a number is, however, a possible one, if not a 
very easy one, and I will (as mentioned before) consider that solution if 
this ordering mechanism turns out to be too costly - Thank you kindly.




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A new extension for sqlite to analyze the stringmetrics

2014-09-25 Thread Keith Medcalf

Use -Wl,-Bstatic to force static linking (prefer lib.a over lib.dll.a when 
linking), and use -static-libgcc to force a static link of the gcc runtime 
libraries.

For example:

gcc -s -O3 -mwin32 -pipe -march=i686 -mtune=i686 -shared 
-DSQLITE_API=__declspec(dllexport) -Wl,-Bstatic -mthreads 
-DSQLITE_THREADSAFE=1 -D_HAVE_SQLITE_CONFIG_H -DSQLITE_EXTRA_INIT=core_init 
-DSQLITE_CORE sqlite3x.c  -lz -o SQLite3.dll 
-Wl,--output-def,SQLite3.def,--out-implib,SQLite3.a 
-static-libgcc

but all on one line (your options, defines, source, and output may vary).

>-Original Message-
>From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-
>boun...@sqlite.org] On Behalf Of dave
>Sent: Thursday, 25 September, 2014 11:27
>To: 'General Discussion of SQLite Database'
>Subject: Re: [sqlite] A new extension for sqlite to analyze the
>stringmetrics
>
>Update: never mind what I mentioned below; he said he is using mingw (it
>really does help to read the entire thread!)
>
>But I have tried and failed to load it (the prebuilt binary) myself; I
>notice there is a further dependency on libgcc-s-dw2-1.dll, so maybe
>that's
>the origin of Gert's problem.
>
>-dave
>
>> -Original Message-
>> From: sqlite-users-boun...@sqlite.org
>> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of dave
>> Sent: Thursday, September 25, 2014 11:59 AM
>> To: 'General Discussion of SQLite Database'
>> Subject: Re: [sqlite] A new extension for sqlite to analyze
>> the stringmetrics
>>
>>
>> Gert:
>>
>> Did you build it yourself, or download a binary?  Are you
>> running on XP, or
>> a later OS?
>>
>> I ask these questions because I have seen this code due to c
>> dll runtime
>> issues like (msvcrt100.dll), etc.
>> Depending on what tool was used to build the binary, some
>> changes may need
>> to be made to the build process so that the binary runs on
>> all platforms XP
>> - win8.  In particular, DS2012 broke the ability to make XP-compatible
>> builds, and the builder is using that, then (s)he needs to select the
>> 'v110_xp' toolset to make a binary that runs on all platforms.
>>
>> All this is purely a guess, and could easily be wrong; I
>> can't take a peek
>> at the lib myself right now; but I mention this now on the
>> chance that it
>> saves some time debugging.
>>
>> -dave
>>
>>
>> > -Original Message-
>> > From: sqlite-users-boun...@sqlite.org
>> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Gert
>> Van Assche
>> > Sent: Thursday, September 25, 2014 10:25 AM
>> > To: General Discussion of SQLite Database
>> > Subject: Re: [sqlite] A new extension for sqlite to analyze
>> > the stringmetrics
>> >
>> >
>> > Andea,
>> >
>> > Seems like a very interesting extension to me.
>> > I cannot make it run on Windows.
>> > I get error code 0xc00012f.
>> >
>> > Any idea why this is ?
>> >
>> > thanks
>> >
>> > gert
>> >
>> > 2014-09-25 10:11 GMT+02:00 Andrea Peri :
>> >
>> > > Hi,
>> > > for who eventually interested.
>> > >
>> > > Fr a specific internal use I develope a new simple
>> > extension for sqlite.
>> > > The LibStringmetrics.
>> > > https://github.com/aperi2007/libstringmetrics
>> > >
>> > > It is based on the LibSimmetrics c lib from Johnathan Botha
>> > > - available from here:
>> https://github.com/jokillsya/libsimmetrics -
>> > > It was a porting of another java library.
>> > >
>> > > The LibStringMetrics is compile using mingw .
>> > > All is released with a GPL3 to follow the same license of
>> > the original
>> > > libsimmetrics.
>> > >
>> > > The extension is usable after the usual load_extension command:
>> > >
>> > > select load_extension("libstringmetrics.dll");
>> > >
>> > > The extension add One new command:
>> > >
>> > > stringmetrics().
>> > >
>> > > calling it without any parameters
>> > > will return a simple help of the parameters and of the available
>> > > algorithms.
>> > >
>> > > Regards,
>> > >
>> > > --
>> > > -
>> > > Andrea Peri
>> > > . . . . . . . . .
>> > > qwerty àèìòù
>> > > -
>> > > ___
>> > > sqlite-users mailing list
>> > > sqlite-users@sqlite.org
>> > > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> > >
>> > ___
>> > sqlite-users mailing list
>> > sqlite-users@sqlite.org
>> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>> >
>>
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
>___
>sqlite-users mailing list
>sqlite-users@sqlite.org
>http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] GROUP BY: ambiguous column name?

2014-09-25 Thread James K. Lowden
On Thu, 25 Sep 2014 20:32:29 +0200
Mark Lawrence  wrote:

> I would have expected the group to work the same as the order, given
> that I think of the group as happening on the result set before any
> joins. 

ORDER BY is different.  It's not even *related* to GROUP BY.  

To elaborate on Cory Nelson's answer, GROUP BY is part of the SELECT
processing, and ORDER BY is not.  By analogy:

$ cat input | SELECT | ORDER BY > output

One way to understand it is that relational variables -- tables and
similar in SQL -- don't have order.  You can manipulate them
independent of order, using as many operators (JOIN, WHERE, etc.) as you
please.  Only when you're done can you ask the system, as a convenience
to yourself, to sort the results.  

SQLite takes some liberties with that model.  Depending on one's point
of view, supporting LIMIT and ORDER BY in a subquery is either a boon
or a wart.  Regardless, SQL as defined by the standard treats ORDER BY
differently, for the above reasons.  
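
One such liberty, sketched against a hypothetical table t(val): SQLite accepts
ORDER BY and LIMIT inside a subquery and uses them to pick the rows the outer
query sees, which the model above would call meaningless:

-- hypothetical table; the inner ORDER BY/LIMIT selects the three largest
-- values before the outer SELECT ever sees them
SELECT val FROM (SELECT val FROM t ORDER BY val DESC LIMIT 3);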

--jkl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Division accuracy

2014-09-25 Thread James K. Lowden
On Thu, 25 Sep 2014 18:40:23 +0200
RSmith  wrote:

> It's much harder to make a mathematical positioning result than
> simply working out if the current is better or worse than any
> existing position.

So you say.  But someone is doing *something* to determine who comes
before whom.  Even synchronized swimming gets a score.  

> the time of finishing does not determine position alone, there are
> bonuses and penalties which doesn't stack up to integer values, but
> is measurable. 

Measurable means "is a number", right?  So we have math, and IEEE
floating point.  Yay!  We can compute the answer!  ;-)  

> so there is an action of finding a best-fit position in the list for
> the newest finisher and "insert" him/her there. 

There are two possibilities: The "corrected" time -- the ranking 
value -- either 

a) does,  or 
b) does not 

depend on prior finishers in the same race.  I've never heard of a 
type (a) system, so I assume yours is type (b).  That means the computed
result is a function F with the form

corrected time = F( time, parameter, [parameter...] )

where the parameters are constants or are computable from pre-start
attributes of the contestant or contestants.  

Record the actual time in the database with the race.  Maintain tables
of contestant attributes and parameters required for F().  Write a view
to compute the corrected time.  SELECT, ORDER BY, and go home!  :-)  
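
To make the shape of that concrete, a minimal sketch (the table names, columns,
and the bonus/penalty arithmetic below are invented for illustration, not your
actual rules) might look like:

CREATE VIEW corrected_results AS
SELECT r.race_id,
       r.contestant_id,
       r.finish_time - c.bonus + c.penalty AS corrected_time
  FROM results r
  JOIN contestants c ON c.contestant_id = r.contestant_id;

-- ranking then falls out of a plain ORDER BY
SELECT contestant_id, corrected_time
  FROM corrected_results
 WHERE race_id = 1
 ORDER BY corrected_time;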

I can appreciate that you don't want to do the computations in SQL,
especially if the requisite math functions aren't available.  You then
have the options of computing it first and storing corrected time, or of
writing the C code as a user-defined SQLite function.  Either one is
more robust and less opaque than your divide-and-insert strategy or a
recursive CTE.  

HTH.  

--jkl
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DEFAULT expression ignored for INTEGER PRIMARY KEYs?

2014-09-25 Thread John Hascall
Would dropping the non-functioning default clause from the schema be a
"breaking change"?

That is from:
# sqlite3 dummy.db
sqlite> CREATE TABLE x(
   ...> id INTEGER PRIMARY KEY DEFAULT (random()),
   ...> val VARCHAR
   ...> );
sqlite> .schema
CREATE TABLE x(
*id INTEGER PRIMARY KEY DEFAULT (random()),*
val VARCHAR
);
to:
# sqlite3 dummy.db
sqlite> CREATE TABLE x(
   ...> id INTEGER PRIMARY KEY DEFAULT (random()),
   ...> val VARCHAR
   ...> );
sqlite> .schema
CREATE TABLE x(
*id INTEGER PRIMARY KEY*,
val VARCHAR
);

And would it be better, or cause more head-scratching, I dunno...


John

On Thu, Sep 25, 2014 at 4:07 PM, Richard Hipp  wrote:

> On Thu, Sep 25, 2014 at 4:46 PM, Mark Lawrence  wrote:
>
> >
> > If you are going to keep this behaviour would it not make more sense to
> > ensure that the table creation fails? The DEFAULT clause is pretty
> > straight-forward and I don't find it intuitive to go looking for
> > PRIMARY KEY documentation when it is ignored.
> >
> > SQLite should either fail to accept the statement or do what the table
> > definition says - anything else means heartache for those debugging
> > problems (which I did for hours on this issue) and those reading the
> > code afterwards.
> >
>
> That would break backwards compatibility for the millions and millions of
> applications currently using SQLite.  Most of those millions would be
> unaffected, no doubt, but out of millions I'm sure there are a goodly
> number that would break.  I am unwilling to implement a breaking change
> simply to make the interface more "intuitive".
>
> Had you brought this up in 2002, the outcome would likely have been very
> different.  But at this point, the behavior of INTEGER PRIMARY KEY in
> SQLite is not something that can be modified.
>
> Sorry to disappoint.
> --
> D. Richard Hipp
> d...@sqlite.org
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DEFAULT expression ignored for INTEGER PRIMARY KEYs?

2014-09-25 Thread Mark Lawrence
On Thu Sep 25, 2014 at 03:59:55PM -0400, Richard Hipp wrote:
> 
> I will make an effort to clarify this in the documentation.

If you are going to keep this behaviour would it not make more sense to
ensure that the table creation fails? The DEFAULT clause is pretty
straight-forward and I don't find it intuitive to go looking for
PRIMARY KEY documentation when it is ignored.

SQLite should either fail to accept the statement or do what the table
definition says - anything else means heartache for those debugging
problems (which I did for hours on this issue) and those reading the
code afterwards.

-- 
Mark Lawrence
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DEFAULT expression ignored for INTEGER PRIMARY KEYs?

2014-09-25 Thread Richard Hipp
On Thu, Sep 25, 2014 at 3:30 PM, Mark Lawrence  wrote:

>
> I understand that that behaviour exists and applies when an insert does
> not provide a value, but I don't see the contradiction. The table
> defines an *explicit* default that should (to my mind) override any
> kind of magical-in-the-absence-of-a-default-default. Such an explicit
> default should certainly not be accepted if it is going to be ignored.
>

I will make an effort to clarify this in the documentation.


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DEFAULT expression ignored for INTEGER PRIMARY KEYs?

2014-09-25 Thread Mark Lawrence
On Thu Sep 25, 2014 at 03:18:04PM -0400, Adam Devita wrote:

> Your table definition seems to have a contradiction.  The expression
> INTEGER PRIMARY KEY is a special keyword that means 'auto-increment',
> which would be a default value.

I understand that that behaviour exists and applies when an insert does
not provide a value, but I don't see the contradiction. The table
defines an *explicit* default that should (to my mind) override any
kind of magical-in-the-absence-of-a-default-default. Such an explicit
default should certainly not be accepted if it is going to be ignored.

-- 
Mark Lawrence
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] DEFAULT expression ignored for INTEGER PRIMARY KEYs?

2014-09-25 Thread Adam Devita
Your table definition seems to have a contradiction.
The expression INTEGER PRIMARY KEY is a special keyword that means
'auto-increment', which would be a default value.  DEFAULT (random() )
would contradict the auto-increment instruction.  The row id was being used
to generate the key.



On Thu, Sep 25, 2014 at 3:10 PM, Mark Lawrence  wrote:

> Plan:
>
> CREATE TABLE x(
> id INTEGER PRIMARY KEY DEFAULT (random()),
> val VARCHAR
> );
>
> INSERT INTO x(val) VALUES ('a');
> SELECT * FROM x;
>
> Result:
>
> id  val
> --  --
> 1   a
>
> Expected result:
>
> id   val
> ---  --
> 4841191733402647298  a
>
> I get the expected result if I create the table WITHOUT ROWID.
>
> --
> Mark Lawrence
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] DEFAULT expression ignored for INTEGER PRIMARY KEYs?

2014-09-25 Thread Mark Lawrence
Plan:

CREATE TABLE x(
id INTEGER PRIMARY KEY DEFAULT (random()),
val VARCHAR
);

INSERT INTO x(val) VALUES ('a');
SELECT * FROM x;

Result:

id  val   
--  --
1   a 

Expected result:

id   val   
---  --
4841191733402647298  a 

I get the expected result if I create the table WITHOUT ROWID.

-- 
Mark Lawrence
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] GROUP BY: ambiguous column name?

2014-09-25 Thread Cory Nelson
This is just the way the SQL standard mandates it to work, not an oddity
specific to SQLite. I imagine the optimizer is probably smart enough to not
do the work twice here, but someone else will need to chime in to confirm
that.

If you want to be sure, you can use a CTE.
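
A minimal sketch of that, using the x/y tables from your original post (untested):

WITH j AS (
  SELECT COALESCE(x.id, y.id) AS id
    FROM y LEFT JOIN x ON x.id = y.fk
)
SELECT id, COUNT(*) AS cnt
  FROM j
 GROUP BY id;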

On Thu, Sep 25, 2014 at 2:02 PM, Mark Lawrence  wrote:

> On Thu Sep 25, 2014 at 01:43:20PM -0500, Cory Nelson wrote:
>
> > GROUP BY works on your input data, not output data. You want to GROUP
> > BY COALESCE(x.id, y.id)
>
> That describes the behaviour I demonstrated, but not the reasoning
> behind it nor the documentation pointing to that reasoning.
>
> Is SQLite clever enough to recognize that a GROUP BY expression and a
> SELECT column are the same? Because in my mind I think of the query as
> working in the following stages for the most efficient operation:
>
> - JOIN ROWS
> - SELECT COLUMNS -- COALESCE done here
> - GROUP OUTPUT
> - ORDER OUTPUT
>
> However, it appears to be the case that the order is more like this:
>
> - JOIN ROWS
> - GROUP ROWS  -- COALESCE done here
> - SELECT COLUMNS -- COALESCE also done here?
> - ORDER OUTPUT
>
> Which looks to me like the expression would be calculated twice. Is
> SQLite smart enough to figure out that the columns are the same and
> only do it once?
>
> If SQLite is capable of determining that the same expression is used
> twice, why not just accept a SELECT expression?
>
> --
> Mark Lawrence
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Cory Nelson
http://int64.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] GROUP BY: ambiguous column name?

2014-09-25 Thread Mark Lawrence
On Thu Sep 25, 2014 at 01:43:20PM -0500, Cory Nelson wrote:

> GROUP BY works on your input data, not output data. You want to GROUP
> BY COALESCE(x.id, y.id)

That describes the behaviour I demonstrated, but not the reasoning
behind it nor the documentation pointing to that reasoning.

Is SQLite clever enough to recognize that a GROUP BY expression and a
SELECT column are the same? Because in my mind I think of the query as
working in the following stages for the most efficient operation:

- JOIN ROWS
- SELECT COLUMNS -- COALESCE done here
- GROUP OUTPUT
- ORDER OUTPUT

However, it appears to be the case that the order is more like this:

- JOIN ROWS
- GROUP ROWS  -- COALESCE done here
- SELECT COLUMNS -- COALESCE also done here?
- ORDER OUTPUT

Which looks to me like the expression would be calculated twice. Is
SQLite smart enough to figure out that the columns are the same and
only do it once?

If SQLite is capable of determining that the same expression is used
twice, why not just accept a SELECT expression?

-- 
Mark Lawrence
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] GROUP BY: ambiguous column name?

2014-09-25 Thread Cory Nelson
GROUP BY works on your input data, not output data. You want to GROUP BY
COALESCE(x.id, y.id)
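
Spelled out against your original query, that looks like (untested):

SELECT COALESCE(x.id, y.id) AS id
  FROM y
  LEFT JOIN x ON x.id = y.fk
 GROUP BY COALESCE(x.id, y.id);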

On Thu, Sep 25, 2014 at 1:37 PM, Mark Lawrence  wrote:

> On Thu Sep 25, 2014 at 08:32:29PM +0200, Mark Lawrence wrote:
> > GROUP BY on a result column fails with "ambiguous column name":
> >
> > SELECT
> > COALESCE(x.id, y.id) AS id
> > FROM
> > y
> > LEFT JOIN
> > x
> > ON
> > x.id = y.fk
> > ORDER BY
> > id
> > ;
>
> Sorry, that should read GROUP BY of course.
>
> --
> Mark Lawrence
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
Cory Nelson
http://int64.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] GROUP BY: ambiguous column name?

2014-09-25 Thread Mark Lawrence
On Thu Sep 25, 2014 at 08:32:29PM +0200, Mark Lawrence wrote:
> GROUP BY on a result column fails with "ambiguous column name":
> 
> SELECT
> COALESCE(x.id, y.id) AS id
> FROM
> y
> LEFT JOIN
> x
> ON
> x.id = y.fk
> ORDER BY
> id
> ;

Sorry, that should read GROUP BY of course.

-- 
Mark Lawrence
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] GROUP BY: ambiguous column name?

2014-09-25 Thread Mark Lawrence
Don't know if this is a bug or intended behaviour. Given the following
schema:

CREATE TABLE x(
id INTEGER
);

CREATE TABLE y(
id INTEGER,
fk INTEGER REFERENCES x(id)
);

ORDER BY on a result column name is allowed:

SELECT
COALESCE(x.id, y.id) AS id
FROM
y
LEFT JOIN
x
ON
x.id = y.fk
ORDER BY
id
;

GROUP BY on a result column fails with "ambiguous column name":

SELECT
COALESCE(x.id, y.id) AS id
FROM
y
LEFT JOIN
x
ON
x.id = y.fk
ORDER BY
id
;

I would have expected the group to work the same as the order, given
that I think of the group as happening on the result set before any
joins. The syntax diagrams on the web page show the first as an
"ordering-term" and the second as an "expr" which doesn't enlighten me
much.

-- 
Mark Lawrence
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A new extension for sqlite to analyze the stringmetrics

2014-09-25 Thread dave
Since the source is there, I may give it a go myself this eve, if it's not
too difficult to set up (the extension should be easy, but I don't know
about the other project that provides the actual implementation).  If I do,
I'll make it statically linked so there will be no dependencies.

For Andrea Peri's benefit, I did google and find where someone mentions the
appropriate flags to statically link the dependency in, thus avoiding this
problem.
http://stackoverflow.com/questions/4702732/the-program-cant-start-because-libgcc-s-dw2-1-dll-is-missing
So Andrea may wish to rebuild and replace the existing dll for the benefit
of the community.

-dave

> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Gert Van Assche
> Sent: Thursday, September 25, 2014 12:47 PM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] A new extension for sqlite to analyze 
> the stringmetrics
> 
> 
> thanks for your help Dave.
> 
> As I'm not a real developer but just an SQLite user, I cannot 
> compile the
> DLL -- I just downloaded the compiled DLL. So your guess is 
> correct. I'm
> working on a Win 8.1 PC and there's no libgcc-s-dw2-1.dll on 
> my system.
> 
> gert
> 
> 2014-09-25 19:26 GMT+02:00 dave :
> 
> > Update: never mind what I mentioned below; he said he is 
> using mingw (it
> > really does help to read the entire thread!)
> >
> > But I have tried and failed to load it (the prebuilt 
> binary) myself; I
> > notice there is a further dependency on libgcc-s-dw2-1.dll, 
> so maybe that's
> > the origin of Gert's problem.
> >
> > -dave
...


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A new extension for sqlite to analyze the stringmetrics

2014-09-25 Thread Gert Van Assche
thanks for your help Dave.

As I'm not a real developer but just an SQLite user, I cannot compile the
DLL -- I just downloaded the compiled DLL. So your guess is correct. I'm
working on a Win 8.1 PC and there's no libgcc-s-dw2-1.dll on my system.

gert

2014-09-25 19:26 GMT+02:00 dave :

> Update: never mind what I mentioned below; he said he is using mingw (it
> really does help to read the entire thread!)
>
> But I have tried and failed to load it (the prebuilt binary) myself; I
> notice there is a further dependency on libgcc-s-dw2-1.dll, so maybe that's
> the origin of Gert's problem.
>
> -dave
>
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of dave
> > Sent: Thursday, September 25, 2014 11:59 AM
> > To: 'General Discussion of SQLite Database'
> > Subject: Re: [sqlite] A new extension for sqlite to analyze
> > the stringmetrics
> >
> >
> > Gert:
> >
> > Did you build it yourself, or download a binary?  Are you
> > running on XP, or
> > a later OS?
> >
> > I ask these questions because I have seen this code due to c
> > dll runtime
> > issues like (msvcrt100.dll), etc.
> > Depending on what tool was used to build the binary, some
> > changes may need
> > to be made to the build process so that the binary runs on
> > all platforms XP
> > - win8.  In particular, DS2012 broke the ability to make XP-compatible
> > builds, and the builder is using that, then (s)he needs to select the
> > 'v110_xp' toolset to make a binary that runs on all platforms.
> >
> > All this is purely a guess, and could easily be wrong; I
> > can't take a peek
> > at the lib myself right now; but I mention this now on the
> > chance that it
> > saves some time debugging.
> >
> > -dave
> >
> >
> > > -Original Message-
> > > From: sqlite-users-boun...@sqlite.org
> > > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Gert
> > Van Assche
> > > Sent: Thursday, September 25, 2014 10:25 AM
> > > To: General Discussion of SQLite Database
> > > Subject: Re: [sqlite] A new extension for sqlite to analyze
> > > the stringmetrics
> > >
> > >
> > > Andea,
> > >
> > > Seems like a very interesting extension to me.
> > > I cannot make it run on Windows.
> > > I get error code 0xc00012f.
> > >
> > > Any idea why this is ?
> > >
> > > thanks
> > >
> > > gert
> > >
> > > 2014-09-25 10:11 GMT+02:00 Andrea Peri :
> > >
> > > > Hi,
> > > > for who eventually interested.
> > > >
> > > > Fr a specific internal use I develope a new simple
> > > extension for sqlite.
> > > > The LibStringmetrics.
> > > > https://github.com/aperi2007/libstringmetrics
> > > >
> > > > It is based on the LibSimmetrics c lib from Johnathan Botha
> > > > - available from here:
> > https://github.com/jokillsya/libsimmetrics -
> > > > It was a porting of another java library.
> > > >
> > > > The LibStringMetrics is compile using mingw .
> > > > All is released with a GPL3 to follow the same license of
> > > the original
> > > > libsimmetrics.
> > > >
> > > > The extension is usable after the usual load_extension command:
> > > >
> > > > select load_extension("libstringmetrics.dll");
> > > >
> > > > The extension add One new command:
> > > >
> > > > stringmetrics().
> > > >
> > > > calling it without any parameters
> > > > will return a simple help of the parameters and of the available
> > > > algorithms.
> > > >
> > > > Regards,
> > > >
> > > > --
> > > > -
> > > > Andrea Peri
> > > > . . . . . . . . .
> > > > qwerty àèìòù
> > > > -
> > > > ___
> > > > sqlite-users mailing list
> > > > sqlite-users@sqlite.org
> > > > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > > >
> > > ___
> > > sqlite-users mailing list
> > > sqlite-users@sqlite.org
> > > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > >
> >
> >
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Division accuracy

2014-09-25 Thread Clemens Ladisch
RSmith wrote:
> the time of finishing does not determine position alone, there are
> bonuses and penalties which doesn't stack up to integer values, but is
> measurable [...]  Without going into too much detail about how bonuses
> and penalties are calculated

How can anybody help you without these details?


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A new extension for sqlite to analyze the stringmetrics

2014-09-25 Thread dave
Update: never mind what I mentioned below; he said he is using mingw (it
really does help to read the entire thread!)

But I have tried and failed to load it (the prebuilt binary) myself; I
notice there is a further dependency on libgcc-s-dw2-1.dll, so maybe that's
the origin of Gert's problem.

-dave

> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of dave
> Sent: Thursday, September 25, 2014 11:59 AM
> To: 'General Discussion of SQLite Database'
> Subject: Re: [sqlite] A new extension for sqlite to analyze 
> the stringmetrics
> 
> 
> Gert:
> 
> Did you build it yourself, or download a binary?  Are you 
> running on XP, or
> a later OS?
> 
> I ask these questions because I have seen this code due to c 
> dll runtime
> issues like (msvcrt100.dll), etc.
> Depending on what tool was used to build the binary, some 
> changes may need
> to be made to the build process so that the binary runs on 
> all platforms XP
> - win8.  In particular, DS2012 broke the ability to make XP-compatible
> builds, and the builder is using that, then (s)he needs to select the
> 'v110_xp' toolset to make a binary that runs on all platforms.
> 
> All this is purely a guess, and could easily be wrong; I 
> can't take a peek
> at the lib myself right now; but I mention this now on the 
> chance that it
> saves some time debugging.
> 
> -dave
> 
> 
> > -Original Message-
> > From: sqlite-users-boun...@sqlite.org 
> > [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Gert 
> Van Assche
> > Sent: Thursday, September 25, 2014 10:25 AM
> > To: General Discussion of SQLite Database
> > Subject: Re: [sqlite] A new extension for sqlite to analyze 
> > the stringmetrics
> > 
> > 
> > Andea,
> > 
> > Seems like a very interesting extension to me.
> > I cannot make it run on Windows.
> > I get error code 0xc00012f.
> > 
> > Any idea why this is ?
> > 
> > thanks
> > 
> > gert
> > 
> > 2014-09-25 10:11 GMT+02:00 Andrea Peri :
> > 
> > > Hi,
> > > for who eventually interested.
> > >
> > > Fr a specific internal use I develope a new simple 
> > extension for sqlite.
> > > The LibStringmetrics.
> > > https://github.com/aperi2007/libstringmetrics
> > >
> > > It is based on the LibSimmetrics c lib from Johnathan Botha
> > > - available from here: 
> https://github.com/jokillsya/libsimmetrics -
> > > It was a porting of another java library.
> > >
> > > The LibStringMetrics is compile using mingw .
> > > All is released with a GPL3 to follow the same license of 
> > the original
> > > libsimmetrics.
> > >
> > > The extension is usable after the usual load_extension command:
> > >
> > > select load_extension("libstringmetrics.dll");
> > >
> > > The extension add One new command:
> > >
> > > stringmetrics().
> > >
> > > calling it without any parameters
> > > will return a simple help of the parameters and of the available
> > > algorithms.
> > >
> > > Regards,
> > >
> > > --
> > > -
> > > Andrea Peri
> > > . . . . . . . . .
> > > qwerty àèìòù
> > > -
> > > ___
> > > sqlite-users mailing list
> > > sqlite-users@sqlite.org
> > > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > >
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> > 
> 
> 
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Forcing a manual fsync in WAL/NORMAL mode

2014-09-25 Thread Simon Slavin

On 25 Sep 2014, at 10:53am, Deon Brewis  wrote:

> In the majority of my application, I'm fine running in WAL/NORMAL and lose
> some committed transactions if the application crashes.

When you lose data, are you ever losing part of a transaction?  SQLite is 
allowed to lose entire transactions, back to some END/COMMIT statement.  It 
should not be losing parts of transactions.  Or, at least, when you reopen the 
database using the SQLite library it should restore the data to an END/COMMIT 
statement.

> (Which actually just
> happened to me on iOS - I thought that can only happen on a full O/S crash -
> not just app. But oh well).

If an iOS app crashes (or is terminated by the OS because it has been hogging a 
resource), the OS will close its files and ports.  But it will not flush the buffers 
before it closes them.  I don't know why this is -- it's way out of my field of 
expertise -- but this does mean that you can lose entire SQLite transactions 
which your application thought it had made.

Simon.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Forcing a manual fsync in WAL/NORMAL mode

2014-09-25 Thread Richard Hipp
On Thu, Sep 25, 2014 at 5:53 AM, Deon Brewis  wrote:

> In the majority of my application, I'm fine running in WAL/NORMAL and lose
> some committed transactions if the application crashes. (Which actually
> just
> happened to me on iOS - I thought that can only happen on a full O/S crash
> -
> not just app. But oh well).
>
> However, every now and again I need to communicate state with an external
> entity and to do that, I need to ensure local durability before that.
>
> Is there any way to force a single manual fsync? (Or is it really just a
> matter of calling the underlying O/S API?)
>
> I know I can checkpoint, but I:
>
> 1) Don't need a full checkpoint - syncing to the WAL is fine.
> 2) Can't tell for sure whether doing a checkpoint while in WAL/NORMAL mode
> will perform a sync before returning.
>

There are maybe a couple of ways to handle this:

(1) Temporarily set "PRAGMA synchronous=FULL;" and do a small transaction.
The transaction might be as simple as "PRAGMA user_version=123;".  The WAL
file is synced after each transaction in synchronous=FULL mode.

(2) Run a checkpoint.  SQLite fsyncs the WAL file prior to writing content
back into the database.
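
A minimal sketch of option (1), assuming the connection otherwise runs with
synchronous=NORMAL (the user_version value is arbitrary):

PRAGMA synchronous=FULL;      -- the WAL is synced after each commit in this mode
PRAGMA user_version=123;      -- any small write transaction will do
PRAGMA synchronous=NORMAL;    -- back to the usual WAL/NORMAL setting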




>
>
>
>
>
> --
> View this message in context:
> http://sqlite.1065341.n5.nabble.com/Forcing-a-manual-fsync-in-WAL-NORMAL-mode-tp78260.html
> Sent from the SQLite mailing list archive at Nabble.com.
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A new extension for sqlite to analyze the stringmetrics

2014-09-25 Thread dave
Gert:

Did you build it yourself, or download a binary?  Are you running on XP, or
a later OS?

I ask these questions because I have seen this error code caused by C DLL
runtime issues (msvcrt100.dll, etc.).
Depending on what tool was used to build the binary, some changes may need
to be made to the build process so that the binary runs on all platforms from
XP to Win8.  In particular, DS2012 broke the ability to make XP-compatible
builds; if the builder is using that, then (s)he needs to select the
'v110_xp' toolset to make a binary that runs on all platforms.

All this is purely a guess, and could easily be wrong; I can't take a peek
at the lib myself right now; but I mention this now on the chance that it
saves some time debugging.

-dave


> -Original Message-
> From: sqlite-users-boun...@sqlite.org 
> [mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Gert Van Assche
> Sent: Thursday, September 25, 2014 10:25 AM
> To: General Discussion of SQLite Database
> Subject: Re: [sqlite] A new extension for sqlite to analyze 
> the stringmetrics
> 
> 
> Andea,
> 
> Seems like a very interesting extension to me.
> I cannot make it run on Windows.
> I get error code 0xc00012f.
> 
> Any idea why this is ?
> 
> thanks
> 
> gert
> 
> 2014-09-25 10:11 GMT+02:00 Andrea Peri :
> 
> > Hi,
> > for who eventually interested.
> >
> > Fr a specific internal use I develope a new simple 
> extension for sqlite.
> > The LibStringmetrics.
> > https://github.com/aperi2007/libstringmetrics
> >
> > It is based on the LibSimmetrics c lib from Johnathan Botha
> > - available from here: https://github.com/jokillsya/libsimmetrics -
> > It was a porting of another java library.
> >
> > The LibStringMetrics is compile using mingw .
> > All is released with a GPL3 to follow the same license of 
> the original
> > libsimmetrics.
> >
> > The extension is usable after the usual load_extension command:
> >
> > select load_extension("libstringmetrics.dll");
> >
> > The extension add One new command:
> >
> > stringmetrics().
> >
> > calling it without any parameters
> > will return a simple help of the parameters and of the available
> > algorithms.
> >
> > Regards,
> >
> > --
> > -
> > Andrea Peri
> > . . . . . . . . .
> > qwerty àèìòù
> > -
> > ___
> > sqlite-users mailing list
> > sqlite-users@sqlite.org
> > http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> 


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Fwd: Re: sqlite generate strange field name in union...

2014-09-25 Thread RSmith



On 2014/09/25 05:04, 麦田观望者 wrote:

Hi, RSmith:
I can't find a way to reply to your message, so I am sending it to your mailbox 
directly; sorry for the disturbance.

you say:
>It is just whatever the Query producer feels comfortable writing in the
>header to identify the column

Maybe you are right on the point of the STANDARD, but every serious (normal?) 
programmer expects the db engine to generate the same column names as defined in 
the database, except when a name conflict occurs.

An index is a number that is not constant (table redesigns and SQL statement 
rewrites change it), but a name stays more stable and is friendlier to people.  You can 
find a lot of FieldByName(...) statements in database-related source 
code; some companies even require their programmers to access fields only by field name, 
unless a performance issue occurs.


Pardon the next bit of convoluted English; I think the OP works via a 
translator, so I will try to be overly verbose to ensure the message goes
through in its fullest meaning.

Hi Mykore,

FieldByName(...) refers to the field name returned by the query or table; if you 
do not tell the query exactly which field name you want it to show, then it may 
show some other name. I am not talking about what is "Standard" acceptable use, 
I am talking about "THE SQL Standard", as in what the people who designed the SQL 
language constructs meant and require from a DB engine. You only think it is 
standard to get the names in a query when you don't ask for them because most 
DBs do that most of the time, but it is in fact not required: you HAVE to say 
specifically what you want as a field name (using the AS directive) if you want 
to be guaranteed that a certain query will produce certain field names.

This is not my opinion, it's the SQL law - even if it seems silly and you can 
come up with a hundred reasons why it is silly, it still is the law and it won't 
change. If you expect a name, ask for it specifically.
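
A tiny illustration, with hypothetical tables t1 and t2 (the only point here is
the AS directive):

SELECT a AS val FROM t1
UNION ALL
SELECT b AS val FROM t2;
-- with AS the result column is guaranteed to be named "val"; without it the
-- header is whatever the query producer finds convenient to write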



I wish you would report my opinion to the sqlite dev team, or post it publicly and 
collect more opinions.


No problem - I have posted this to the forum  :)



___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Forcing a manual fsync in WAL/NORMAL mode

2014-09-25 Thread Deon Brewis
In the majority of my application, I'm fine running in WAL/NORMAL and lose
some committed transactions if the application crashes. (Which actually just
happened to me on iOS - I thought that can only happen on a full O/S crash -
not just app. But oh well).

However, every now and again I need to communicate state with an external
entity and to do that, I need to ensure local durability before that.

Is there any way to force a single manual fsync? (Or is it really just a
matter of calling the underlying O/S API?)

I know I can checkpoint, but I:

1) Don't need a full checkpoint - syncing to the WAL is fine.
2) Can't tell for sure whether doing a checkpoint while in WAL/NORMAL mode
will perform a sync before returning.





--
View this message in context: 
http://sqlite.1065341.n5.nabble.com/Forcing-a-manual-fsync-in-WAL-NORMAL-mode-tp78260.html
Sent from the SQLite mailing list archive at Nabble.com.
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Division accuracy

2014-09-25 Thread RSmith


On 2014/09/25 15:43, James K. Lowden wrote:

On Thu, 25 Sep 2014 10:36:31 +0200
Clemens Ladisch  wrote:


Yes, and yes, absolutely.  In that case the order is established by the
user, and can be captured by the application as integers, and stored in
the database.  The problem is trivial because the number is limited
to what a human being is willing to sort "by hand".  And the SQL is
straightforward.

--jkl


I think his example was meant to show the validity of the notion, not be a 
stringent use-case.

To this end, I will explain what I'm trying to do; I have been thinking of making the system determine ordering up front so as to 
avoid the whole issue. Basically we need to record live race results - but not for a normal race: imagine something like a bingo game, where every now 
and then someone finishes, but the time of finishing does not determine position alone; there are bonuses and penalties which don't 
stack up to integer values, but are measurable. So there is an action of finding a best-fit position in the list for the newest 
finisher and "inserting" him/her there. The next finisher may well be below or above.  Without going into too much detail about how 
bonuses and penalties are calculated, or rather affected, think of a turn-based strategy card game as is common these days (Pokemon, 
Magic, etc.) with a possible penalty such as "skip a turn": how do you put that into number values?  It's much harder to produce a 
mathematical positioning result than simply to work out whether the current finisher is better or worse than any existing position.


Harder, but not impossible, and if this ordering conundrum turns out to have too high an effort/pleasure ratio, then finding a 
mathematical positioning algorithm is next.


Thanks for all the input!


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] group_concat query performance

2014-09-25 Thread Richard Hipp
On Thu, Sep 25, 2014 at 9:34 AM, Richard Hipp  wrote:

> Thanks for the schema.  Unfortunately, that was insufficient to reproduce
> the problem.  Are you able to send me the actual database file, via private
> email?
>


Thanks for sending the data.  But I'm still not able to reproduce the
problem.  I tried running this script against your database:

.open /home/drh/Downloads/Journal.dat
.tables
PRAGMA integrity_check;
.timer on
SELECT GROUP_CONCAT(Parent.fkintFolderID,'\') FilePath
FROM tblFolderNestedSets Node
 , tblFolderNestedSets Parent
WHERE Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight
AND Parent.fkintSessionID = Node.fkintSessionID
AND Node.fkintSessionID =  1817
AND Node.fkintFolderID  = 1937926;
.print ---
SELECT Parent.fkintFolderID FilePath
FROM tblFolderNestedSets Node
 , tblFolderNestedSets Parent
WHERE Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight
AND Parent.fkintSessionID = Node.fkintSessionID
AND Node.fkintSessionID =  1817
AND Node.fkintFolderID  = 1937926;

The output I get back is:

tblCabinets  tblFiles tblSessions
tblDataParts tblFolderNames   vw_FileParts
tblFileNames tblFolderNestedSets  vw_Files
tblFileParts tblFolders   vw_Folders
ok

Run Time: real 0.000 user 0.84 sys 0.00
---
Run Time: real 0.000 user 0.61 sys 0.00

As you can see, neither query is returning any result and neither query is
taking a measurable amount of time.

Do you have any suggestions on what I can do differently in order to
recreate your problem?

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] A new extension for sqlite to analyze the string metrics

2014-09-25 Thread Gert Van Assche
Andrea,

Seems like a very interesting extension to me.
I cannot make it run on Windows.
I get error code 0xc00012f.

Any idea why this is ?

thanks

gert

2014-09-25 10:11 GMT+02:00 Andrea Peri :

> Hi,
> for whoever is interested:
>
> For a specific internal use I developed a new simple extension for sqlite:
> LibStringmetrics.
> https://github.com/aperi2007/libstringmetrics
>
> It is based on the LibSimmetrics C lib from Johnathan Botha
> - available from here: https://github.com/jokillsya/libsimmetrics -
> which is itself a port of another Java library.
>
> LibStringMetrics is compiled using mingw.
> Everything is released under GPL3 to follow the license of the original
> libsimmetrics.
>
> The extension is usable after the usual load_extension command:
>
> select load_extension("libstringmetrics.dll");
>
> The extension adds one new function:
>
> stringmetrics().
>
> Calling it without any parameters
> will return a short help listing the parameters and the available
> algorithms.
>
> Regards,
>
> --
> -
> Andrea Peri
> . . . . . . . . .
> qwerty àèìòù
> -
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] group_concat query performance

2014-09-25 Thread Clemens Ladisch
Richard Hipp wrote:
> Note that the use of AUTOINCREMENT has nothing to do with your
> problem - I just see people using it a lot and I'm wondering why
> it is so popular

MySQL needs it.  Every search for "autoincrement" will find it.  This
keyword's name appears to imply that you do _not_ get autoincrementing
if you omit it.  Finally, when you use it, there is no noticeable
downside with typical tests.


Regards,
Clemens
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Division accuracy

2014-09-25 Thread James K. Lowden
On Thu, 25 Sep 2014 10:36:31 +0200
Clemens Ladisch  wrote:

> > The first question I'd have is:  Where are the ordering criteria,
> > and why aren't they in the database?  Someone is imposing an order,
> > but the basis for it is not included in the database design.
> 
[amusing list omitted]  ;-)
> 
> If the ordering is specified by someone doing drag in a list, or
> by saying "insert the new entry after *that one*", then there is no
> better ordering criteria than the relative order of the entries.
> 
> Of course, such lists tend to be short enough that the implemtation
> does not really matter, and that an always-updated SortOrder would
> work just fine.

Yes, and yes, absolutely.  In that case the order is established by the
user, and can be captured by the application as integers, and stored in
the database.  The problem is trivial because the number is limited
to what a human being is willing to sort "by hand".  And the SQL is
straightforward.  

--jkl


___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] group_concat query performance

2014-09-25 Thread Richard Hipp
Thanks for the schema.  Unfortunately, that was insufficient to reproduce
the problem.  Are you able to send me the actual database file, via private
email?

Aside:  Why are you using AUTOINCREMENT?  Do you really need it?  Are you
aware that there are space and time penalties for using AUTOINCREMENT even
if you never actually use the features it provides?  Are you aware that
INTEGER PRIMARY KEY values will be assigned automatically even without the
AUTOINCREMENT keyword?  See http://www.sqlite.org/autoinc.html for
additional information?  Note that the use of AUTOINCREMENT has nothing to
do with your problem - I just see people using it a lot and I'm wondering
why it is so popular and whether or not people really need it.
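
A tiny illustration of that last point (hypothetical table, not from your schema):

CREATE TABLE t(id INTEGER PRIMARY KEY, txt TEXT);  -- no AUTOINCREMENT
INSERT INTO t(txt) VALUES ('a'), ('b');
SELECT id, txt FROM t;   -- ids 1 and 2 are assigned automatically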

On Thu, Sep 25, 2014 at 8:59 AM, Kraijenbrink - FixHet - Systeembeheer <
kraijenbr...@fixhet.nl> wrote:

> Hi,
>
> Thank you for the fast response. Below this line you'll find sqlite3.exe's
> output:
>
> CREATE TABLE tblFolderNames (
> pkintFolderNameID  integer PRIMARY KEY AUTOINCREMENT NOT NULL,
>   txtNametext NOT NULL UNIQUE COLLATE nocase
> );
> CREATE TABLE tblFolders (
>   pkintFolderIDinteger PRIMARY KEY AUTOINCREMENT NOT NULL,
>   fkintParentIDinteger NOT NULL,
>   fkintNameID  integer NOT NULL,
>   dtmCreationTime  datetime NOT NULL
> );
> CREATE TABLE tblFolderNestedSets (
>   pkintFolderNestedSetID  integer PRIMARY KEY AUTOINCREMENT NOT NULL,
>   fkintSessionID  integer NOT NULL,
>   fkintFolderID   integer NOT NULL,
>   intLeft integer,
>   intRightinteger
> );
> CREATE TABLE tblSessions (
>   pkintSessionID  integer PRIMARY KEY AUTOINCREMENT NOT NULL,
>   dtmStartDatedatetime NOT NULL,
>   dtmEndDate  datetime
> );
> CREATE UNIQUE INDEX tblFolderNames_Name
>   ON tblFolderNames
>   (txtName COLLATE nocase);
> CREATE UNIQUE INDEX tblFolders_ParentID_NameID
>   ON tblFolders
>   (fkintParentID, fkintNameID);
> CREATE UNIQUE INDEX tblFileParts_FileID_DataPartID_DataPartPos
>   ON tblFileParts
>   (fkintFileID, fkintDataPartID, intDataPartPos);
> CREATE UNIQUE INDEX tblFolderNestedSets_SessionID_FolderID_Right_Left
>   ON tblFolderNestedSets
>   (fkintSessionID, fkintFolderID, intRight, intLeft);
> /* No STAT tables available */
>
> --
>
> Peter Kraijenbrink
>
>
>
> >Please run the ".fullschema" command on your database and send us the
> output.
> >
> >I mean by this:  (1) Download the latest version of sqlite3.exe from the
> website.  (2) Run "sqlite3 YOURDATABASE .fullschema >out.txt"  (3) Include
> the text of out.txt in the body of a follow-up email.
> >
> >That information will assist us in answering your question.
> >
> >On Thu, Sep 25, 2014 at 6:54 AM, Kraijenbrink - FixHet - Systeembeheer <
> kraijenbr...@fixhet.nl> wrote:
> >
> >> Hi all,
> >>
> >> I've searched through this forum but couldn't find any related topic
> >> regarding my question. I'm having serious performance problems
> >> (queries up to 20/sec) while running a SQLite query since i added a
> group_concat clause.
> >>
> >> The query looks like:
> >>
> >> SELECT GROUP_CONCAT(Parent.fkintFolderID,'\') FilePath FROM
> >> tblFolderNestedSets Node , tblFolderNestedSets Parent WHERE
> >> Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight AND
> >> Parent.fkintSessionID = Node.fkintSessionID AND Node.fkintSessionID =
> >> 1817 AND Node.fkintFolderID  = 1937926;
> >>
> >> Query result:
> >>
> >> 1927916\1934826\1936323\1937926
> >>
> >> Changing the first line "SELECT GROUP_CONCAT(" into "SELECT
> >> Parent.fkintFolderID..." increased performance by an order of magnitude.
> >>
> >> explain query plan returns:
> >>
> >> selectid orderfrom detail
> >> 0  0  0  SEARCH TABLE tblFolderNestedSets AS
> Node
> >> USING COVERING INDEX tblFolderNestedSets_SessionID_FolderID_Right_Left
> >> (fkintSessionID=? AND fkintFolderID=?) (~9 rows)
> >> 0  1  1  SEARCH TABLE tblFolderNestedSets AS
> >> Parent USING COVERING INDEX
> >> tblFolderNestedSets_SessionID_FolderID_Right_Left (fkintSessionID=?)
> >> (~5
> >> rows)
> >>
> >> My question is: how can I improve performance and keep using
> >> GROUP_CONCAT at the same time?
> >>
> >> Thanks in advance.
> >>
> >> Peter
> >>
> >> ___
> >> sqlite-users mailing list
> >> sqlite-users@sqlite.org
> >> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
> >>
> >
> >
> >
> >--
> >D. Richard Hipp
> >d...@sqlite.org
> >___
> >sqlite-users mailing list
> >sqlite-users@sqlite.org
> >http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>
>
>
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing 

Re: [sqlite] group_concat query performance

2014-09-25 Thread Kraijenbrink - FixHet - Systeembeheer
Hi,

Thank you for the fast response. Below this line you'll find sqlite3.exe's 
output:

CREATE TABLE tblFolderNames (
pkintFolderNameID  integer PRIMARY KEY AUTOINCREMENT NOT NULL,
  txtNametext NOT NULL UNIQUE COLLATE nocase
);
CREATE TABLE tblFolders (
  pkintFolderIDinteger PRIMARY KEY AUTOINCREMENT NOT NULL,
  fkintParentIDinteger NOT NULL,
  fkintNameID  integer NOT NULL,
  dtmCreationTime  datetime NOT NULL
);
CREATE TABLE tblFolderNestedSets (
  pkintFolderNestedSetID  integer PRIMARY KEY AUTOINCREMENT NOT NULL,
  fkintSessionID  integer NOT NULL,
  fkintFolderID   integer NOT NULL,
  intLeft integer,
  intRightinteger
);
CREATE TABLE tblSessions (
  pkintSessionID  integer PRIMARY KEY AUTOINCREMENT NOT NULL,
  dtmStartDatedatetime NOT NULL,
  dtmEndDate  datetime
);
CREATE UNIQUE INDEX tblFolderNames_Name
  ON tblFolderNames
  (txtName COLLATE nocase);
CREATE UNIQUE INDEX tblFolders_ParentID_NameID
  ON tblFolders
  (fkintParentID, fkintNameID);
CREATE UNIQUE INDEX tblFileParts_FileID_DataPartID_DataPartPos
  ON tblFileParts
  (fkintFileID, fkintDataPartID, intDataPartPos);
CREATE UNIQUE INDEX tblFolderNestedSets_SessionID_FolderID_Right_Left
  ON tblFolderNestedSets
  (fkintSessionID, fkintFolderID, intRight, intLeft);
/* No STAT tables available */

--

Peter Kraijenbrink



>Please run the ".fullschema" command on your database and send us the output.
>
>I mean by this:  (1) Download the latest version of sqlite3.exe from the 
>website.  (2) Run "sqlite3 YOURDATABASE .fullschema >out.txt"  (3) Include the 
>text of out.txt in the body of a follow-up email.
>
>That information will assist us in answering your question.
>
>On Thu, Sep 25, 2014 at 6:54 AM, Kraijenbrink - FixHet - Systeembeheer < 
>kraijenbr...@fixhet.nl> wrote:
>
>> Hi all,
>>
>> I've searched through this forum but couldn't find any related topic 
>> regarding my question. I'm having serious performance problems 
>> (queries up to 20/sec) while running a SQLite query since i added a 
>> group_concat clause.
>>
>> The query looks like:
>>
>> SELECT GROUP_CONCAT(Parent.fkintFolderID,'\') FilePath FROM 
>> tblFolderNestedSets Node , tblFolderNestedSets Parent WHERE 
>> Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight AND 
>> Parent.fkintSessionID = Node.fkintSessionID AND Node.fkintSessionID =  
>> 1817 AND Node.fkintFolderID  = 1937926;
>>
>> Query result:
>>
>> 1927916\1934826\1936323\1937926
>>
>> Changing the first line "SELECT GROUP_CONCAT(" into "SELECT 
>> Parent.fkintFolderID..." increased performance by an order of magnitude.
>>
>> explain query plan returns:
>>
>> selectid orderfrom detail
>> 0  0  0  SEARCH TABLE tblFolderNestedSets AS Node
>> USING COVERING INDEX tblFolderNestedSets_SessionID_FolderID_Right_Left
>> (fkintSessionID=? AND fkintFolderID=?) (~9 rows)
>> 0  1  1  SEARCH TABLE tblFolderNestedSets AS
>> Parent USING COVERING INDEX
>> tblFolderNestedSets_SessionID_FolderID_Right_Left (fkintSessionID=?) 
>> (~5
>> rows)
>>
>> My question is: how can I improve performance and keep using 
>> GROUP_CONCAT at the same time?
>>
>> Thanks in advance.
>>
>> Peter
>>
>> ___
>> sqlite-users mailing list
>> sqlite-users@sqlite.org
>> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>>
>
>
>
>--
>D. Richard Hipp
>d...@sqlite.org
>___
>sqlite-users mailing list
>sqlite-users@sqlite.org
>http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users




___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] 50% faster than 3.7.17

2014-09-25 Thread David Woodhouse
On Tue, 2014-09-23 at 17:48 +0100, David Woodhouse wrote:
> That looks really promising; thanks for all this work.
> 
> Tristan, you have a comprehensive set of benchmarks for Evolution's
> addressbook; is it possible for someone else to run those or would it
> take more of your time to babysit than it would to run them yourself?

I was able to get these benchmarks running, and compared the 3.8.7
snapshot against the version of 3.8.6 which is shipped in Fedora 20.

Results temporarily at http://westmere.infradead.org/charts/sqlite387/

This isn't a perfect comparison since the Fedora package is built from
sqlite-src-3080600.zip and the equivalent source for the 3.8.7 alpha
didn't seem to be available. But it was built as a shared library using
the same configuration and compiler flags, and the benchmark compares
identical code running against both the 3.8.6 and 3.8.7 libraries.

I haven't done any real analysis other than looking at the pretty
pictures, but certainly nothing seems to go slower (or break), and there
are some noticeable improvements on some of the benchmarks
(filter-by-long-full-name-suffix.png,
filter-by-short-full-name-suffix.png, filter-containing-full-name.png,
filter-containing-given-name.png, filter-containing-phone-number.png).

-- 
dwmw2


Re: [sqlite] group_concat query performance

2014-09-25 Thread Richard Hipp
Please run the ".fullschema" command on your database and send us the
output.

I mean by this:  (1) Download the latest version of sqlite3.exe from the
website.  (2) Run "sqlite3 YOURDATABASE .fullschema >out.txt"  (3) Include
the text of out.txt in the body of a follow-up email.

That information will assist us in answering your question.

On Thu, Sep 25, 2014 at 6:54 AM, Kraijenbrink - FixHet - Systeembeheer <
kraijenbr...@fixhet.nl> wrote:

> Hi all,
>
> I've searched through this forum but couldn't find any related topic
> regarding my question. I'm having serious performance problems (queries up
> to 20/sec) while running a SQLite query since i added a group_concat clause.
>
> The query looks like:
>
> SELECT GROUP_CONCAT(Parent.fkintFolderID,'\') FilePath
> FROM tblFolderNestedSets Node
> , tblFolderNestedSets Parent
> WHERE Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight
> AND Parent.fkintSessionID = Node.fkintSessionID
> AND Node.fkintSessionID =  1817
> AND Node.fkintFolderID  = 1937926;
>
> Query result:
>
> 1927916\1934826\1936323\1937926
>
> Changing the first line "SELECT GROUP_CONCAT(" into "SELECT
> Parent.fkintFolderID..." increased performance by an order of magnitude.
>
> explain query plan returns:
>
> selectid orderfrom detail
> 0  0  0  SEARCH TABLE tblFolderNestedSets AS Node
> USING COVERING INDEX tblFolderNestedSets_SessionID_FolderID_Right_Left
> (fkintSessionID=? AND fkintFolderID=?) (~9 rows)
> 0  1  1  SEARCH TABLE tblFolderNestedSets AS
> Parent USING COVERING INDEX
> tblFolderNestedSets_SessionID_FolderID_Right_Left (fkintSessionID=?) (~5
> rows)
>
> My question is: how can I improve performance and keep using GROUP_CONCAT
> at the same time?
>
> Thanks in advance.
>
> Peter
>
> ___
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>



-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] group_concat query performance

2014-09-25 Thread Dominique Devienne
On Thu, Sep 25, 2014 at 12:54 PM, Kraijenbrink - FixHet - Systeembeheer <
kraijenbr...@fixhet.nl> wrote:
>
> SELECT GROUP_CONCAT(Parent.fkintFolderID,'\') FilePath
> FROM tblFolderNestedSets Node
> , tblFolderNestedSets Parent
> WHERE Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight
> AND Parent.fkintSessionID = Node.fkintSessionID
> AND Node.fkintSessionID =  1817
> AND Node.fkintFolderID  = 1937926;
>
> Query result:
>
> 1927916\1934826\1936323\1937926
>

from http://www.sqlite.org/lang_aggfunc.html: The order of the concatenated
elements is arbitrary.

So it's likely just chance that you get the result you expect, no?
Aren't recursive CTEs supposed to be used for such hierarchical queries?
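
A hedged sketch of how the concatenation order could be pinned down, reusing the
column names from the original query: feed group_concat from an ordered
subquery. The documentation still calls the order arbitrary, so this relies on
observed behaviour rather than a guarantee.

    SELECT group_concat(fkintFolderID, '\') AS FilePath
    FROM (SELECT Parent.fkintFolderID
            FROM tblFolderNestedSets Node, tblFolderNestedSets Parent
           WHERE Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight
             AND Parent.fkintSessionID = Node.fkintSessionID
             AND Node.fkintSessionID = 1817
             AND Node.fkintFolderID = 1937926
           ORDER BY Parent.intLeft   -- root first, leaf last
         );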

Regarding group_concat performance, you could always try to write your own
aggregate function and compare. Could be group_concat does not pre-allocate
enough capacity into its buffer, and must realloc too many times. I doubt
it accounts for a 10x slow down though, assuming my hypothesis even holds
of course.

SQLite is one of those SQL engines where you can use aggregate functions
w/o an explicit group-by clause, so maybe w/o group_concat, that's no
longer a group-by but just a join, ending up being faster. All speculation
though, especially since I don't understand how your query works :). --DD


[sqlite] group_concat query performance

2014-09-25 Thread Kraijenbrink - FixHet - Systeembeheer
Hi all,

I've searched through this forum but couldn't find any related topic regarding 
my question. I'm having serious performance problems (queries up to 20/sec) 
while running a SQLite query since I added a group_concat clause.

The query looks like:

SELECT GROUP_CONCAT(Parent.fkintFolderID,'\') FilePath
FROM tblFolderNestedSets Node
, tblFolderNestedSets Parent
WHERE Node.intLeft BETWEEN Parent.intLeft AND Parent.intRight
AND Parent.fkintSessionID = Node.fkintSessionID
AND Node.fkintSessionID =  1817
AND Node.fkintFolderID  = 1937926;

Query result:

1927916\1934826\1936323\1937926

Changing the first line "SELECT GROUP_CONCAT(" into "SELECT 
Parent.fkintFolderID..." increased performance by an order of magnitude.

explain query plan returns:

selectid  order  from  detail
0         0      0     SEARCH TABLE tblFolderNestedSets AS Node USING COVERING INDEX tblFolderNestedSets_SessionID_FolderID_Right_Left (fkintSessionID=? AND fkintFolderID=?) (~9 rows)
0         1      1     SEARCH TABLE tblFolderNestedSets AS Parent USING COVERING INDEX tblFolderNestedSets_SessionID_FolderID_Right_Left (fkintSessionID=?) (~5 rows)

My question is: how can I improve performance and keep using GROUP_CONCAT at 
the same time?

Thanks in advance.

Peter



Re: [sqlite] Non-optimal query plan

2014-09-25 Thread Rob Golsteijn

>> Hi List,
>>
>> I was looking at the query plan of a rather simple query, but I don't 
>> understand why sqlite would choose this query plan.
>>
>> ...I was surprised that sqlite came up with the inferior query plan...
>>
>> Note: After an "analyze aaa" (on a decently populated table) sqlite chooses 
>> the full table scan instead of creating an automatic index (but our 
>> application never uses 'analyze' to avoid that other (bad performing) query 
>> plans are used during operation than during testing)
>> Note 2: Adding "NOT INDEXED" to aa1 gives the desired query plan, but of 
>> course I prefer that sqlite choses the right query plan.

>I think this issue is fixed since it does not do it in my version, but... even 
>if it isn't, let me just note that basically SQLite 
>chooses a Query Plan that might in many circumstances work perfectly since it 
>has no information available about the data.

I do realize that it is impossible to find a QP that works correctly in all
cases, certainly if sqlite has no knowledge about the tables' contents.
We occasionally encounter query plans that do not work out for our data, and if
needed we "fix" them by pushing the query planner in the desired direction with
INDEXED BY, NOT INDEXED, CROSS JOINs and unary "+" prefixes, or by rewriting the
queries (sketched below).
The reason I reported this QP problem is that the choice of this query plan is
not optimal no matter what the table contents are, and I expected the planner
to find this without data analysis.
Apparently, the further optimizations/tuning in the query planner solved this
issue.
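
For reference, a minimal sketch of those hint styles on hypothetical tables
t1/t2 (the table and index names are made up; the mechanisms themselves are
standard SQLite):

    SELECT * FROM t1 INDEXED BY idx_t1_a WHERE a = 1;  -- force a specific index
    SELECT * FROM t1 NOT INDEXED WHERE a = 1;          -- force a full table scan
    SELECT * FROM t1 CROSS JOIN t2 WHERE t1.a = t2.a;  -- CROSS JOIN fixes the join order (t1 outer)
    SELECT * FROM t1 WHERE +a = 1;                     -- unary + keeps this term from using an index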

>Further 
>to this SQLite provides not one, but two explicit methods for you to improve 
>the query planning should the default not fit the best 
>for your specific query - yet you choose to ignore BOTH of them and expects 
>SQLite to pick a good plan without any knowledge of the 
>data, by default. [and ironically you are very happy to circumvent a whole 
>part of SQLite Query planning prowess to hide "other" QP 
>issues in stead of fixing it or requesting a fix, but won't add anything legal 
>and valid that would actually improve it. That is 
>just bizarre.]
We use databases in data format conversion processes. Each conversion uses
another set of (fresh) databases, on which a few thousand different queries are
executed once. The type and amount of data available per conversion differs
significantly, so query plans based on ANALYZE results for these databases
would differ greatly. In the past we used ANALYZE, and we occasionally had
problems with "never ending queries" for queries that usually only run minutes
to a few hours, due to an unfortunate query plan.
My company then decided not to use ANALYZE anymore and to rely only on fixed
query plans. (In the future we might load the ANALYZE results of a "standard
database" into all our databases to get the best of both worlds: predictable
query plans, and sqlite having a bit of knowledge about our database contents,
even if this might differ from the actual content in many cases.)
Generally sqlite picks good query plans even when ANALYZE results are absent.
There are some queries that need manual fine-tuning for reasonable performance,
and we do this, but of course we want to avoid such tuning as much as possible.

I don't just complain about any query plan that doesn't work for me; we
regularly use the methods above to fine-tune queries. But I reported this
specific query plan because I think that, even without knowledge of the table
contents, the query planner should have come up with a better plan: I expected
it to find that a single full table scan is always cheaper than creating an
index on that same table and then using this index for searching.

> If I was the asker of this question I would concentrate on what you mention 
> in passing in Note1 with regards to avoiding "other" bad 
> plans.
> If you do use Analyze and then at any point find the QP comes up with a bad 
> plan WITH access to analyze data - now THAT would 
> be a reason to complain and I have seen requests such as that cause a QP 
> overhaul many times here, you'd actually improve it for 
> everyone should you find a use-case with real QP problems based on proper 
> knowledge of data shape.




As explained, our main problem with ANALYZE is predictability: for some
specific combination of database contents and query we might suddenly get a bad
query plan that we never encountered during application testing. Aborting our
conversion process for this reason means that a lot of time is lost and
reconversions are needed. We want to prevent this, and I guess there are more
users who avoid ANALYZE for the same reason.


Regards,
Rob


 


Re: [sqlite] 50% faster than 3.7.17

2014-09-25 Thread David Woodhouse
On Wed, 2014-09-24 at 19:36 -0600, Keith Medcalf wrote:
> 
> Interesting.  From that code you might want to try something like this:
> 
> SELECT uid, vcard, bdata 
>   FROM folder_id
>  WHERE uid in ( select uid FROM email_list where value like 'p%'
>union
> select uid from folder_id where nickname LIKE 'p%'
>union
> select uid from folder_id where full_name LIKE 'p%'
>union
> select uid from folder_id where family_name LIKE 'p%'
>union
> select uid from folder_id where given_name LIKE 'p%'
>union
> select uid from folder_id where nickname LIKE 'p%'
>union
> select uid from folder_id where file_as LIKE 'p%'
>   );
> 
> Then having nocase indexes on the various search fields will all work as 
> expected.

Yeah, that achieves the same speed.

I'm not sure it addresses the real problem though. It still only really
applies when the user's query (which we're translating to SQL) contains
only 'OR' and no 'AND' clauses. It doesn't help me translate a query in
the general case.
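
For completeness, a hedged sketch of how an AND of OR-groups can still be
expressed with set operators so that each leaf term keeps its own index (the
column names come from Keith's query above; the second predicate is made up,
and SQLite groups compound SELECTs left to right, so this reads as
(full_name OR nickname) AND the email condition):

    SELECT uid, vcard, bdata
      FROM folder_id
     WHERE uid IN (select uid from folder_id  where full_name LIKE 'p%'
                   union
                   select uid from folder_id  where nickname  LIKE 'p%'
                   intersect
                   select uid from email_list where value     LIKE '%@example.org');

But, as noted, pushing this kind of rewriting into the client is exactly the
sort of special-casing being argued against here.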

Now, I'm not *entirely* averse to having a special case for the 'all OR'
query — this particular query is the address autocompletion so it's
common and the user is actually waiting for it as they type. In fact, as
a proof of concept I've already *implemented* a hackish special case to
spot this case and basically submit a hand-crafted query instead of the
normal translation to SQL:
https://bugzilla.gnome.org/show_bug.cgi?id=699597#c19

The problem is that I don't *want* to have to have that special case.
This is just a query optimisation, which is something the query planner
is supposed to do. I don't *want* to implement this and other
optimisations in the client, just to trick *today's* sqlite query
planner into spotting the best way to do it. That's the Wrong Way™ to do
things.

There are two alternative approaches which *don't* seem as wrong. 

Firstly, if there's a sane way to rewrite our translator so that it
naturally uses UNION for OR clauses, that might make sense. But to cope
with AND clauses, AFAICT the natural extension of that approach would be
to use 'SELECT FROM ... SELECT FROM' and then we lose the use of indices
for *those* cases, right¹? Tristan started a thread about this 'nested
select' last year, which I picked up a couple of weeks ago. It didn't
seem like it was a viable strategy for the general case.

The second approach, and this is why I started this thread, is to 'fix'
the query planner so that that it can see for *itself* the best way to
implement a given query given the constraints.

I suggested a couple of specific optimisations which the query planner
might be able to make, which should hopefully have benefits wider than
just my own use case. Are those not viable?

-- 
dwmw2

¹ I do realise that in the special case of a single top-level AND such
  that all its sub-clauses are necessary conditions, I can do something
  nicer. But again, it's a special case and doesn't handle the general
  case of nested AND and OR clauses. And it's still the *client* code
  doing the job of the optimiser, spotting necessary vs. sufficient
  conditions and pulling them out to the top level for more efficient
  implementation.


Re: [sqlite] Division accuracy

2014-09-25 Thread Clemens Ladisch
James K. Lowden wrote:
> RSmith  wrote:
>> ID | Next  | Data
>> 1  |   4   | 'First Row'
>> 2  |   3   | 'Eventual Fourth Row'
>> 3  |   1   | 'Last Row'
>> 4  |   5   | 'New Second Row'
>> 5  |   2   | 'New Third Row'
>
> The first question I'd have is:  Where are the ordering criteria, and
> why aren't they in the database?  Someone is imposing an order, but
> the basis for it is not included in the database design.
>
> You're generating information instead of recording it.  Include that
> information, and the problem is converted to  nonproblem.

Imagine an MP3 playlist like this:

  Hypnoise  Logical Structures Database
  Graham Cooke  Everything Makes Sense When Everything Is Relational
  Cytadela  Red Sql
  RjSchema Things
  Sunny Sweeney From A Table Away
  Play New Moments  Queries Are the Wave
  Beehaus   Select Baker
  Little Champions  Transactions + Replications
  Shores of NullKings of Null

If the ordering is specified by someone dragging entries around in a list, or
by saying "insert the new entry after *that one*", then there is no better
ordering criterion than the relative order of the entries.

Of course, such lists tend to be short enough that the implementation does
not really matter, and an always-updated SortOrder would work just fine.
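
If one did keep only the relative order (the ID/Next linked list quoted above),
the playing order can still be produced at query time with a recursive CTE.  A
minimal sketch, assuming a hypothetical playlist(ID, Next, Data) table shaped
like that example and a known head row (here ID = 1):

    WITH RECURSIVE ordered(ID, Next, Data, pos) AS (
      SELECT ID, Next, Data, 1 FROM playlist WHERE ID = 1  -- assumed head of the list
      UNION ALL
      SELECT p.ID, p.Next, p.Data, o.pos + 1
        FROM playlist AS p
        JOIN ordered  AS o ON p.ID = o.Next
       WHERE p.ID <> 1                                     -- stop if the list loops back to the head
    )
    SELECT Data FROM ordered ORDER BY pos;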


Regards,
Clemens


Re: [sqlite] Division accuracy

2014-09-25 Thread Stephen Chrzanowski
I've read through the rest of the thread on this so far.  I like the linked
list idea, as updating three rows seems to be the better way of doing things,
though the question remains 'what drives the sort reasoning?'.  However, if
you don't want to get that deep into CTEs and linked lists, one option would
be to use strings instead of integers for the sort key.  It might chew up a
LOT more drive space, but from the sounds of it, space isn't a huge concern.

On the code side of things you'll have to do a bit more thinking, as you
won't be able to use a simple math equation to work out where things should
go.

C:\Users\Stephen>sqlite3
SQLite version 3.7.14.1 2012-10-04 19:37:12
Enter ".help" for instructions
Enter SQL statements terminated with a ";"
sqlite> create table test (ID integer, SortOrder char, Data char);
sqlite> insert into Test (1,'A','First Row');
Error: near "1": syntax error
sqlite> insert into Test values (1,'A','First Row');
sqlite> insert into Test values (2,'B','Eventual Fifth Row');
sqlite> insert into Test values (3,'C','Last Row');
sqlite> insert into Test values (4,'AA','New Second Row');
sqlite> insert into Test values (5,'AB','New Fourth Row');
sqlite> insert into Test values (6,'AAA','New Third Line');
sqlite> select * from Test order by SortOrder;
1|A|First Row
4|AA|New Second Row
6|AAA|New Third Line
5|AB|New Fourth Row
2|B|Eventual Fifth Row
3|C|Last Row
sqlite>

What happens when you have to go beyond Z?  Tag on an A, so AAZA as an
example.  What comes before A? ... I don't know.

Since I don't know how you're making the decision to insert at the bottom of
a sorted list of data versus somewhere in the middle, I don't know how viable
that is.



On Wed, Sep 24, 2014 at 12:49 PM, RSmith  wrote:

> I'm trying to find what the limit is for dividing in terms of accuracy.
>
> Basically I have one program that inserts values to a table and determine
> sort order using one standard trick that has a REAL column named
> "SortOrder" which gets the value Highest_previous_value+1 if an insert
> happens with something that needs to be sorted at the end of the table.
>
> For any other position, the SortOrder gets assigned the value:
> ((Prev.Sortorder + Next.Sortorder) / 2)
>
> So to be clear (in case anyone is not familiar with this method), let's
> start with a small table with 1 item added like this:
>
> ID | SortOrder | Data
> 1  |   1 | 'First Row'
>
> Adding two new rows to the end every time will use previous highest
> SortOrder+1 so that the result is:
>
> ID | SortOrder | Data
> 1  |   1 | 'First Row'
> 2  |   2 | 'Eventual Fourth Row'
> 3  |   3 | 'Last Row'
>
> Adding a new row that should Sort in between IDs 1 and 2 above will take
> those SortOrders and find a new Order value by dividing the total for IDs 1
> and 2 (=3) by 2 (=1.5):
>
> ID | SortOrder | Data
> 1  |   1 | 'First Row'
> 2  |   2 | 'Eventual Fourth Row'
> 3  |   3 | 'Last Row'
> 4  | 1.5   | 'New Second Row'
>
> Adding another row that should Sort in between IDs 2 and 4 will again
> total and divide by 2 (=(2+1.5)/2):
>
> ID | SortOrder | Data
> 1  |   1 | 'First Row'
> 2  |   2 | 'Eventual Fourth Row'
> 3  |   3 | 'Last Row'
> 4  | 1.5   | 'New Second Row'
> 5  | 1.75| 'New Third Row'
>
> So that if the Query 'SELECT Data FROM t ORDER BY SortOrder' executes it
> goes like this:
>
> Data
> 'First Row'
> 'New Second Row'
> 'New Third Row'
> 'Eventual Fourth Row'
> 'Last Row'
>
>
> This seems like a clever method and I've seen it used a few times, but it
> really can break easily if you keep dividing by two, there is a very quick
> limit in accuracy where one value can no longer be divided by two
> meanigfully. In 64-bit Floating point Math that limit is very far away,
> quite a few iterations (assuming normal floating point mantissa accuracy -
> the exponent size does not matter since any two such values will be
> adjacent in the same realm of magnitude and only the available real numbers
> in between them counts), but if inserts happen 2 to 3 times a second, and
> imagining for a moment that the sort might hit the same spot every time,
> many consecutive divs might be exhausted quick.
>
> The question is - how can I accurately establish how many
> total-then-divide-by-2's a set of co-values in 64-bit FP guise can
> withstand before the difference is too small to make sense to the sorter in
> SQLite?
>
> Reason: The fix is easy but costly on a large DB, sort and reassign
> SortOrders simply in Integer steps: 1, 2, 3 etc., but I want to establish
> how often this should be done, as little as possible preferred, but not so
> little as to allow the order to break or div0 errors or such.
>
>
>

[sqlite] A new extension for sqlite to analyze the string metrics

2014-09-25 Thread Andrea Peri
Hi,
for whoever might be interested.

For a specific internal use I developed a new simple extension for sqlite:
LibStringmetrics.
https://github.com/aperi2007/libstringmetrics

It is based on the LibSimmetrics C lib from Johnathan Botha
- available from here: https://github.com/jokillsya/libsimmetrics -
which is itself a port of another Java library.

LibStringmetrics is compiled using MinGW.
Everything is released under GPL3 to follow the same license of the original
libsimmetrics.

The extension is usable after the usual load_extension command:

select load_extension('libstringmetrics.dll');

The extension adds one new function:

stringmetrics()

Calling it without any parameters
returns a simple help listing the parameters and the available algorithms.
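
A hedged usage sketch (the exact argument names and order come from that
built-in help, so treat the parameters below as assumptions rather than the
documented interface):

    select load_extension('libstringmetrics.dll');
    select stringmetrics();                                   -- no arguments: prints the help
    select stringmetrics('levenshtein', 'kitten', 'sitting'); -- assumed form: algorithm name plus two strings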

Regards,

-- 
-
Andrea Peri
. . . . . . . . .
qwerty àèìòù
-


Re: [sqlite] 50% faster than 3.7.17

2014-09-25 Thread Stephen Chrzanowski
In the vein of configurability, and in a daydream I just had, it would be
nice (but probably not possible, as there could be compiler directives you
can't use at the same time) to have a single output DLL/SO/whatever dumped
from the compiler with everything available; then, via defaults in the source
code, IF/THEN/ELSE (or whatever the C equivalent is) would decide which
functionality runs during the query process.  Via a configuration file, or
via a pragma generated from that configuration, feature sets could be set at
run time.

One fault with this would be the issue of COMPILED size.  I realize we're
in an age where we generally talk megabytes instead of kilobytes in regards
to main thread compiled code, but I'm sure there are platforms out there
that are using SQLite that have to fit in the kilobyte range, and need
special compiled sources.

The advantage to this is that I wouldn't have to compile X number of DLLs
for Y number of programs to get the results I need, just have one
configuration file kicking around that I can adjust as needed.  It'd also
help with fine tuning to validate whether I need certain features turned on
or off.

On Wed, Sep 24, 2014 at 1:15 PM, Roger Binns  wrote:

> On 24/09/14 06:19, Simon Slavin wrote:
> > How much max is max ?
>
> Not giving up ACID.  But for example stat4 is better than the default
> stat1.
>  Memory mapping (especially on 64 bit) is great.  So is WAL.  All are off
> by
> default.
>
> If you want to give up ACID then you should really be on your own to look
> at
> the various tradeoffs.
>
> Roger
>
>


Re: [sqlite] Division accuracy

2014-09-25 Thread Hick Gunter
Floating point values are represented as <mantissa> * 2 ^^ <exponent>.

The edge cases are inserting in sorted order.


Descending:

The first row is tagged with 1.0

Each new first row is tagged with 1/2 the previous.

This will either lose 1 bit of mantissa or decrement the exponent.

This means you will run out of values before reaching n + 2 ^^ (m-1)


Ascending:

The rows are assigned consecutive integers.

This means you will run out of values before reaching  2 ^^ n (because then the 
smallest increment becomes 2)


With n=53 and m=11 this gives you about 1000 inserts in descending order, which 
at a rate of 3/sec gives you all of 333 seconds (about 5 1/2 Minutes) between 
renumbering runs.


Of course you could implement a special number format with n=m=32 for 2^^31
inserts, which at a rate of 3/sec gives you a mere 22 years between renumbering
runs. You would need to write user defined functions to manipulate and compare
the values though.


-----Original Message-----
From: Scott Robison [mailto:sc...@casaderobison.com]
Sent: Wednesday, 24 September 2014 18:58
To: rsm...@rsweb.co.za; General Discussion of SQLite Database
Subject: Re: [sqlite] Division accuracy

On Wed, Sep 24, 2014 at 10:49 AM, RSmith  wrote:

> I'm trying to find what the limit is for dividing in terms of accuracy.
>
> Basically I have one program that inserts values to a table and
> determine sort order using one standard trick that has a REAL column
> named "SortOrder" which gets the value Highest_previous_value+1 if an
> insert happens with something that needs to be sorted at the end of the table.
>
> For any other position, the SortOrder gets assigned the value:
> ((Prev.Sortorder + Next.Sortorder) / 2)
>

{snipped}

A quick bit of test code shows me that after 53 iterations you'll run out of
precision, which makes sense because there are 53 mantissa bits in a normalized
double (including the implicit leading 1 bit).

My quick & dirty test code which may be useful.

#include <stdio.h>

int main()
{
double lo = 1.0;
double hi = 2.0;

int count = 0;

while (lo != hi)
{
double mid = (lo + hi) / 2.0;
printf("%d %f\n", ++count, mid);
lo = mid;
}

return 0;
}
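
Roughly the same experiment can be run directly in SQLite with a recursive CTE
(a sketch; it counts how many successive midpoints between 1.0 and 2.0 are
representable before the halving stops making progress, and should report about
53 for a 64-bit double):

    WITH RECURSIVE halve(step, lo, hi) AS (
      SELECT 0, 1.0, 2.0
      UNION ALL
      SELECT step + 1, (lo + hi) / 2.0, hi FROM halve WHERE lo <> hi
    )
    SELECT max(step) AS steps FROM halve;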

--
Scott Robison


___
 Gunter Hick
Software Engineer
Scientific Games International GmbH
FN 157284 a, HG Wien
Klitschgasse 2-4, A-1130 Vienna, Austria
Tel: +43 1 80100 0
E-Mail: h...@scigames.at

This communication (including any attachments) is intended for the use of the 
intended recipient(s) only and may contain information that is confidential, 
privileged or legally protected. Any unauthorized use or dissemination of this 
communication is strictly prohibited. If you have received this communication 
in error, please immediately notify the sender by return e-mail message and 
delete all copies of the original communication. Thank you for your cooperation.

