On Sep 10, 2013, at 12:26 PM, Richard Hipp wrote:
> SURVEY QUESTION:
>
> The question for today is what to call this magic hint function:
>
> (1) unlikely(EXPR)
> (2) selective(EXPR)
> (3) seldom(EXPR)
> (4) seldom_true(EXPR)
> (5) usually_not_true(EXPR)
>
> Please feel free to suggest other names.
As I was reading this, I said to myself, "what they really need is a confidence
value." Then I read to the end and there it was! A confidence value. OK, not
exactly confidence, but I think you get my meaning.
It seems to me that you're allowing the query writer to substitute personal
knowledg
Hello Dr Hipp,
First of all, I apologize for this rather off-topic suggestion, knowing that
you may have already implemented the syntax you describe, but there is, IMHO,
a good reason for it; read on.
On Tue, Sep 10, 2013 at 10:26 PM, Richard Hipp wrote:
> SELECT DISTINCT aname
> FROM album,
Hi
I've tried with my colleagues to write data to the same SQLite DB file from
different accounts.
When someone added a new line in the DB, sometimes older existing data were
lost as if they had never existed, and sometimes not.
Why?
Is multiple access perhaps allowed only for reading?
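A sketch of the usual answer, assuming a local (non-network) filesystem: SQLite does allow multiple connections to write to one database file, but only one writer at a time holds the lock, so each writer must commit promptly and be prepared to wait (or retry) when the file is busy. Using Python's bundled sqlite3:

```python
import os
import sqlite3
import tempfile

# Two connections simulating two user accounts sharing one DB file.
# The timeout makes each connection wait for a busy lock instead of
# failing immediately with SQLITE_BUSY.
path = os.path.join(tempfile.mkdtemp(), "shared.db")

a = sqlite3.connect(path, timeout=5.0)
b = sqlite3.connect(path, timeout=5.0)

a.execute("CREATE TABLE log(id INTEGER PRIMARY KEY, who TEXT)")
a.commit()

a.execute("INSERT INTO log(who) VALUES ('account A')")
a.commit()  # release the write lock so the other connection can write
b.execute("INSERT INTO log(who) VALUES ('account B')")
b.commit()

total = a.execute("SELECT count(*) FROM log").fetchone()[0]
print(total)  # both rows survive: 2
```

Data that appears to vanish usually points to writers ignoring busy errors, holding transactions open too long, or a filesystem (e.g. some network shares) whose locking SQLite cannot rely on.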
How about three, where two are just overloaded, or rather syntactic sugar, for
the main declaration:
unlikely(expr)
likely(expr)
likelihood(expr, rate)
Where unlikely(expr) -> likelihood(expr, 0.05)
likely(expr) -> likelihood(expr, 0.95)
I would presume that the rate is the expected se
On 10 Sep 2013, at 10:48pm, Tim Streater wrote:
> likelihood (EXPR, value)
Best I've seen so far. I know it makes no sense without the second parameter
but I think if you're going to make use of a special non-standard optimisation
system you can be expected to know exactly what it means.
Si
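For readers finding this thread later: the design debated here is essentially what SQLite shipped in 3.8.1 as likelihood(X, P), with likely(X) and unlikely(X) as one-argument shorthands (the documented default probabilities are 0.9375 and 0.0625 rather than the 0.95/0.05 proposed above). At run time each is a no-op that returns X unchanged; P is purely a planner hint. A quick check using Python's bundled sqlite3, assuming its SQLite is at least 3.8.1:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# likelihood(X, P) returns X; P must be a constant between 0.0 and 1.0.
# likely(X) and unlikely(X) behave the same way with preset probabilities.
row = conn.execute(
    "SELECT likelihood(4 > 3, 0.05), unlikely(0), likely(1)"
).fetchone()
print(row)  # (1, 0, 1)
```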
Or...
unlikely(expr (, prob=0.05)) ==> likelihood (...)
(sent from a mobile device - please excuse brevity, typos, and top-posting)
- stephan beal
http://wanderinghorse.net
On Sep 11, 2013 12:51 AM, "Stephan Beal" wrote:
> Plus an overload: unlikely(expr) ==> likelihood (expr, 0.05)
>
> (se
Seconded.
-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org]
On Behalf Of Simon Slavin
Sent: Tuesday, September 10, 2013 3:46 PM
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Hints for the query planner
On 10 Sep 2013, at
Plus an overload: unlikely(expr) ==> likelihood (expr, 0.05)
(sent from a mobile device - please excuse brevity, typos, and top-posting)
- stephan beal
http://wanderinghorse.net
On Sep 10, 2013 11:49 PM, "Tim Streater" wrote:
> On 10 Sep 2013 at 20:26, Richard Hipp wrote:
>
> > SURVEY QUEST
On 10 Sep 2013 at 20:26, Richard Hipp wrote:
> SURVEY QUESTION:
>
> The question for today is what to call this magic hint function:
>
> (1) unlikely(EXPR)
> (2) selective(EXPR)
> (3) seldom(EXPR)
> (4) seldom_true(EXPR)
> (5) usually_not_true(EXPR)
>
> Please feel free to suggest other names
I think I prefer something along the lines of "unlikely" or "likely". The
problem with a term like "selective" (at least in my brain) is that it
doesn't imply (for the single argument version) in what way it is being
selective.
If a negative form of the magic function is used ("unlikely", "seldom"
-Original Message-
Date: Tue, 10 Sep 2013 15:15:35 +
From: Harmen de Jong - CoachR Group B.V.
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Query preparation time does not scale linearly
with growth of no. of tables
Message-ID:
Content-Type: text/plai
> (1) unlikely(EXPR)
> (2) selective(EXPR)
> (3) seldom(EXPR)
> (4) seldom_true(EXPR)
> (5) usually_not_true(EXPR)
>
>
I quite like (2) "selective". I think it's reasonably descriptive on its
own, and also works well with the optional second argument.
In Bayesian statistics there is a term for this: the "prior", or prior probability.
From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on
behalf of Richard Hipp [d...@sqlite.org]
Sent: Tuesday, September 10, 2013 3:26 PM
To: General Discussion of SQL
Hi Richard,
What about "probability" or "likelihood"? This works both when the
likelihood is great and when it is low. From the list you provided, I would
pick "unlikely".
Kind regards,
Philip Bennefall
- Original Message -
From: "Richard Hipp"
To: "General Dis
On 10 Sep 2013, at 21:24, "E.Pasma" wrote:
> On 10 Sep 2013, at 19:48, Simon Slavin wrote the following:
>
>>
>> On 10 Sep 2013, at 4:15pm, Harmen de Jong - CoachR Group B.V.
>> wrote:
>>
>>> That is something we suspected too. We already made some tests where we
>>> timed the tim
On 10 Sep 2013, at 4:37pm, olivier Ménard wrote:
> I've tried with my colleagues to write data to the same SQLite DB file from
> different accounts.
> When someone added a new line in the DB, sometimes older existing data were
> lost as if they had never existed and sometimes not.
>
> Why ?
There is a survey question at the bottom of this message. But first some
context...
Over on the sqlite-dev mailing list, a debate has been going on about the
best way to provide some useful hints to the query planner. The query
under discussion looks like this:
SELECT DISTINCT aname
FROM album,
On 10 Sep 2013, at 19:48, Simon Slavin wrote the following:
On 10 Sep 2013, at 4:15pm, Harmen de Jong - CoachR Group B.V. wrote:
That is something we suspected too. We already made some tests
where we timed the time needed for all memory allocations executed
in the entire opera
On 10 Sep 2013, at 4:15pm, Harmen de Jong - CoachR Group B.V.
wrote:
> That is something we suspected too. We already made some tests where we timed
> the time needed for all memory allocations executed in the entire operation.
> In total for the 10,000 tables test this was somewhere around 25
On 10 Sep 2013, at 16:44, "E.Pasma" wrote:
> On 10 Sep 2013, at 16:36, Harmen de Jong - CoachR Group B.V. wrote the
> following:
>
>> On 10 Sep 2013, at 16:16, "E.Pasma" wrote:
>>
>>> On 10 Sep 2013, at 11:37, Harmen de Jong - CoachR Group B.V. wrote the
>>> following:
>>
On 10 Sep 2013, at 16:36, Harmen de Jong - CoachR Group B.V. wrote the
following:
On 10 Sep 2013, at 16:16, "E.Pasma" wrote:
On 10 Sep 2013, at 11:37, Harmen de Jong - CoachR Group B.V. wrote the
following:
I included 5 databases that we used for testing in this link:
> >> wrote the following:
> >>> I included 5 databases that we used for testing in this link:
> http://wikisend.com/download/570088/test_databases.zip
> >>>
> >>> The query performed on these databases is:
> >>> delete from A where id=1;
> >>
> >> I could not resist trying this but the tables
Sent from my iPad
On 10 Sep 2013, at 17:04, "Keith Medcalf" wrote:
>>> No, it is all about preparing, so there is no need to insert data.
>>> When we perform the query "delete from A where id=1;" on the
>>> databases from the zip file, we get the following timings:
>>>
>>> 500 tables - 10 m
On 10 Sep 2013, at 16:16, "E.Pasma" wrote:
> On 10 Sep 2013, at 11:37, Harmen de Jong - CoachR Group B.V. wrote the
> following:
>> I included 5 databases that we used for testing in this link:
>> http://wikisend.com/download/570088/test_databases.zip
>>
>> The query performed on the
On 10 Sep 2013, at 11:37, Harmen de Jong - CoachR Group B.V. wrote the
following:
I included 5 databases that we used for testing in this link:
http://wikisend.com/download/570088/test_databases.zip
The query performed on these databases is:
delete from A where id=1;
I could not res
On 10 Sep 2013, at 15:41, "Igor Tandetnik" wrote:
> Not exponential - polynomial. Between 500 and 10,000 the size of input
> increases x20, so the time increase of x400 would be consistent with a
> quadratic algorithm. Your observed measurements are even better than that.
You're right, thanks
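Igor's scaling argument can be checked with a couple of lines of arithmetic: under a quadratic model, the time factor relative to the 500-table baseline is the square of the size ratio.

```python
# If prepare time were quadratic in the number of tables, the time factor
# relative to the 500-table case would be the squared size ratio.
sizes = [500, 1000, 5000, 10000]
quadratic = [(n / sizes[0]) ** 2 for n in sizes]
print(quadratic)  # [1.0, 4.0, 100.0, 400.0]
# The factors reported in this thread (1x, 2.5x, 29x) grow more slowly,
# i.e. the observed behaviour is better than quadratic.
```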
On 9/10/2013 5:37 AM, Harmen de Jong - CoachR Group B.V. wrote:
The time factors it takes on each database are as follows (where the time
needed for the 500 tables was taken as starting point to calculate the other
factors):
500 tables - 1x
1000 tables - 2.5x
5000 tables - 29x
10,000 tables - 9
On 10 Sep 2013, at 14:43, "E.Pasma" wrote:
> The timings do not look truly exponential to me. It looks more as if there is
> a graduated charge (NL: staffeltoeslag) on the time per table. For instance:
> tables 1 - 500: 2 msec/table
> tables 501 - 1,000: 3 msec/table
> tables 1,001 - 5,000: 5 m
On 10 Sep 2013, at 11:37, Harmen de Jong - CoachR Group B.V. wrote the
following:
On 9 Sep 2013, at 22:11, "E.Pasma" wrote:
Ha, I did not mean the length of the names but the length of the
hash table (NL: klutstabel). That is the number of buckets over
which the hash values are
On Mon, Sep 9, 2013 at 8:27 PM, Jared Albers wrote:
> This query takes 10x longer to execute when using versions 3.8.x. Step back
> to 3.7.17 or older and the query is much faster. I checked the query plans
> of both versions and they are identical.
>
Please post:
(1) Your complete schema
(2) T
klo wrote:
> Just changed a table of mine to support fts4 so that I can do searches
> on it
Please note that, depending on the implementation, virtual tables are
not a full replacement for 'normal' SQLite tables and will not support
all features.
> and noticed that INSERT OR REPLACE is not workin
This query takes 10x longer to execute when using versions 3.8.x. Step back
to 3.7.17 or older and the query is much faster. I checked the query plans
of both versions and they are identical.
-Jared
Query:
SELECT R.child, R.instance, R.owner, R.relationship, I.*, NS.rowid AS sort,
COALESCE(L1.may
Hey everybody,
I really need some help. I just changed a table of mine to support FTS4 so
that I can do searches on it, and noticed that INSERT OR REPLACE is not
working the way it is supposed to anymore. Instead of replacing the item
with the primary ID, it is adding a new entry. How can I
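A sketch of the usual workaround, assuming an SQLite build with FTS4 enabled: columns declared on an FTS table carry no constraints, so a declared "primary ID" column never triggers conflict resolution. The only unique key is the implicit docid/rowid, and INSERT OR REPLACE keyed on it does replace:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts4(body)")

# Keying on the implicit docid makes OR REPLACE behave as expected;
# a declared column named "id" would not, since FTS enforces no
# uniqueness on declared columns.
conn.execute("INSERT OR REPLACE INTO docs(docid, body) VALUES (1, 'first')")
conn.execute("INSERT OR REPLACE INTO docs(docid, body) VALUES (1, 'second')")

rows = conn.execute("SELECT docid, body FROM docs").fetchall()
print(rows)  # [(1, 'second')]
```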
On 10 Sep 2013, at 11:37, "Harmen de Jong - CoachR Group B.V."
wrote:
> As you can see this is an exponential growth in the time it takes to execute the
> query. So far we're missing the point of why this growth should be
> exponential.
We tried some further debugging and it seems that SQLite spen
On 9 Sep 2013, at 22:11, "E.Pasma" wrote:
> Ha, I did not mean the length of the names but the length of the hash table
> (NL: klutstabel). That is the number of buckets over which the hash values
> are distributed. I looked some further in the code and now believe that this
> is 1024. That is
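A toy model (not SQLite's actual code) of why a fixed bucket count would produce the graduated per-table cost described earlier: once the number of tables far exceeds 1024 buckets, each hash lookup scans a chain of length about N/1024, so touching every table during prepare costs on the order of N²/1024.

```python
BUCKETS = 1024  # believed size of the schema hash table, per the thread

def lookup_cost(n_tables):
    # Toy model: each lookup scans about half the chain in its bucket;
    # chains hold roughly n_tables / BUCKETS entries once n >> BUCKETS.
    per_lookup = max(1, n_tables // BUCKETS) / 2
    return n_tables * per_lookup  # cost of one lookup per table

ratios = [lookup_cost(n) / lookup_cost(500) for n in (500, 1000, 5000, 10000)]
print(ratios)  # [1.0, 2.0, 40.0, 180.0]
```

The ratios stay near-linear while N is below the bucket count and then bend upward, which is qualitatively the "graduated charge" pattern reported in the timings.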