Also, I want to note the pretty clever "hack" from Hick Gunter that
works by passing a "proxy" for the row (instead of the whole row's
values) to SQLite. This proxy then gets expanded, through the xColumn
API, by a virtual table outside the whole query.
In this way a query using multiple VTs, only
On 4/3/2014 11:33 PM, Alek Paunov wrote:
On 04.03.2014 20:25, Eleytherios Stamatogiannakis wrote:
On 04/03/14 20:11, Alek Paunov wrote:
On 04.03.2014 15:05, Eleytherios Stamatogiannakis wrote:
On 03/03/14 03:01, Alek Paunov wrote:
It seems that the easiest optimization for this (very often)
s in it, but we have other queries
that use a lot more VTs than that.
estama
On 2/3/2014 9:34 PM, Max Vlasov wrote:
On Sun, Mar 2, 2014 at 5:21 PM, Elefterios Stamatogiannakis
<est...@gmail.com> wrote:
Our main test case is TPCH, a standard DB benchmark. The "lineitem" tabl
IMHO, a benchmark like this is useless without more information.
Some questions that I would like to see answered:
- Which SQLite and Postgres versions were used?
- Are the SQLite indexes covering ones?
- Have any performance pragmas been used?
Also interval joins ("between") are hard
In our performance tests we try to work with data and queries that are
representative of what we would find in a typical DB.
This means a lot of "small" values (ints, floats, small strings), and
5-20 columns.
Our main test case is TPCH, a standard DB benchmark. The "lineitem"
table of TPCH
Please excuse me for not explaining.
The syntax that you puzzle about is supported by madIS [*], which
translates it into SQLite. Having been using madIS for many years (we
created it in 2008), it comes naturally to me, forgetting that SQLite
doesn't support it.
What essentially madIS does is
On 25/01/14 18:41, James K. Lowden wrote:
On Sat, 25 Jan 2014 11:56:32 +0200
Elefterios Stamatogiannakis <est...@gmail.com> wrote:
Wouldn't it be better, instead of creating a new concept "row
constructor", to use the existing row constructors, also known as
virtual tables?
Pe
On 25/01/14 03:37, James K. Lowden wrote:
On Fri, 24 Jan 2014 23:51:11 +0100
Petite Abeille wrote:
It's exactly the same as "SELECT ?", but a little bit easier to
write. (It behaves like with INSERT, but is now available in every
place where a SELECT would be
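The multi-row VALUES syntax discussed above (later added to SQLite in 3.8.3) acts as exactly such a row constructor, usable wherever a SELECT could appear. A minimal sketch, assuming a modern SQLite build:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# A bare VALUES clause is a row constructor and can stand alone...
rows = con.execute("VALUES (1, 'a'), (2, 'b')").fetchall()
print(rows)  # [(1, 'a'), (2, 'b')]

# ...or serve as one arm of a compound SELECT.
combined = con.execute(
    "SELECT 0, 'zero' UNION ALL VALUES (1, 'one')"
).fetchall()
print(combined)  # [(0, 'zero'), (1, 'one')]
```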
On 23/1/2014 7:12 PM, Drake Wilson wrote:
Quoth Eleytherios Stamatogiannakis, on 2014-01-23 14:37:23 +0200:
Let me describe a use case where a non-unique key and without rowid
are most welcome. We have a distributed big data system here which
uses SQLite for the partitions.
Hello,
We've been using the SQLite 3.15-with-union-all-optimizations build
that you linked to previously to run some of the workflows that we
have here, without problems.
The workflows use madIS, and they involve around 30 OLAP queries using
row/aggregate/virtual table functions on ~5 GB of data.
Sorry, I didn't realize before that you had:
select * from (... union all ...)
Try with a count(*) as such:
select count(*) from (... union all ...)
And you'll see that both union and "union all" will create a temp file.
Union needs the temp file to remove the duplicates. Union All doesn't
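The dedup semantics that force that extra work can be seen directly (a minimal sketch with made-up tables `a` and `b`; whether a temp file actually hits disk also depends on the query plan and temp_store settings):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE a(x); CREATE TABLE b(x);
    INSERT INTO a VALUES (1),(2),(3);
    INSERT INTO b VALUES (3),(4);
""")

# UNION must materialize and dedup the combined rows;
# UNION ALL can stream them through unchanged.
n_union = con.execute(
    "SELECT count(*) FROM (SELECT x FROM a UNION SELECT x FROM b)"
).fetchone()[0]
n_union_all = con.execute(
    "SELECT count(*) FROM (SELECT x FROM a UNION ALL SELECT x FROM b)"
).fetchone()[0]
print(n_union, n_union_all)  # 4 5
```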
Thank you Simon. Your solution would work for the example I gave.
Nevertheless my problem is still more complex, because I also use SQLite
as a generic streaming engine (yes, I know, SQLite wasn't designed for
doing things like that).
Apart from input VTs (FILE) we also have output VTs like so
Thank you for answering, Mr. Hipp. The general problem doesn't have to do
with counting the rows (that was there as a placeholder). I just want to
merge 2 (or more) tables/streams.
The real query in madIS looks like the following:
create table t as
select upper(c1), c2, lower(c3) from
(
select *
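The snippet above is cut off, but its shape suggests a UNION ALL of the input streams inside the subquery. A plain-SQLite sketch of that shape, with hypothetical tables `s1` and `s2` standing in for the streams:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE s1(c1, c2, c3); CREATE TABLE s2(c1, c2, c3);
    INSERT INTO s1 VALUES ('ab', 1, 'CD');
    INSERT INTO s2 VALUES ('ef', 2, 'GH');
""")

# Merge the two table/streams with UNION ALL and transform
# columns on the way out, as in the truncated madIS query.
con.execute("""
    CREATE TABLE t AS
    SELECT upper(c1), c2, lower(c3) FROM
    (SELECT * FROM s1 UNION ALL SELECT * FROM s2)
""")
print(con.execute("SELECT * FROM t ORDER BY c2").fetchall())
```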
Sorry for hijacking the thread, but I have an answer for that.
IMHO, having the computation application and the data management in
different domains incurs a very high data transfer cost.
The traditional thinking of shipping the data from the DB to somewhere
else (application code) to do the
On 15/09/12 21:57, Richard Hipp wrote:
So my thinking now is that this optimization should not be merged to trunk
unless it is first disabled by default and only enabled by a compile-time
or start-time option.
IMHO, a pragma that enables/disables it persistently on a DB would be ideal.
Many
I have a question.
Will the covering index scan optimization also cover the automatically
created indexes?
So let's say that for a certain part of a query, the optimizer decides to
create an automatic index. Will the same index be chosen for doing a
scan (instead of a full table scan) for
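The automatic-index behavior in question can be observed through EXPLAIN QUERY PLAN. A minimal sketch, assuming a default build with `automatic_index` enabled (table names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("CREATE TABLE big(a, b); CREATE TABLE small(a);")
con.executemany("INSERT INTO big VALUES (?, ?)", [(i, i) for i in range(1000)])
con.executemany("INSERT INTO small VALUES (?)", [(i,) for i in range(10)])
con.commit()

# Neither join column is indexed, so the planner may build a transient
# ("automatic") index for one side; the query plan reports it.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM small JOIN big ON small.a = big.a"
).fetchall()
for row in plan:
    print(row[3])
```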
On 15/09/12 17:03, Simon Slavin wrote:
On 15 Sep 2012, at 12:08pm, Elefterios Stamatogiannakis <est...@gmail.com>
wrote:
What I would really like to have in SQLite concerning OLAP would be bigger
pages,
You can set pagesize for a new database using a PRAGMA:
<http://www.s
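As a minimal sketch of that pragma: `page_size` only takes effect on a fresh database (or after a VACUUM), so it has to be issued before the first table is created.

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "big_pages.db")
con = sqlite3.connect(path)

# Must run while the database file is still empty (or be followed
# by VACUUM) to take effect.
con.execute("PRAGMA page_size = 32768")
con.execute("CREATE TABLE t(x)")
con.commit()

print(con.execute("PRAGMA page_size").fetchone()[0])  # 32768
```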
On 14/09/12 22:56, Clemens Ladisch wrote:
But do you have any numbers (which would help deciding whether to accept this
patch)?
I've run two queries on two different DBs:
- DB1: Size 414M, Count of recs 2999671, the table has 17 Cols
- DB2: Size 1.4G, Count of recs 1975986, the table has
On 14/09/12 22:56, Clemens Ladisch wrote:
Elefterios Stamatogiannakis wrote:
On 13/09/12 23:02, Clemens Ladisch wrote:
For my main workload (OLAP) this can make an enormous difference!
OLAP isn't quite the typical SQLite use case. But do you have any
numbers (which would help deciding
On 13/09/12 23:02, Clemens Ladisch wrote:
Eleytherios Stamatogiannakis wrote:
It seems to me that using a covering index scan would always be faster
in both cases (fewer disk page reads).
Yes, if the index has fewer columns than the table.
In my experience, the most frequent case is for an
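The covering-index case being discussed is easy to see in a query plan. A minimal sketch (schema and index names are made up):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE t(a, b, c, big_blob);
    CREATE INDEX t_ab ON t(a, b);
""")

# The query touches only columns present in the index, so it can be
# answered from index pages alone; the plan says "COVERING INDEX".
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT b FROM t WHERE a = 1"
).fetchall()
print(plan[0][3])
```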
madIS is an extensible relational database system built upon the SQLite
database, with extensions written in Python (via the APSW SQLite
wrapper). It is developed at:
http://madis.googlecode.com
Due to madIS' SQLite core, the database format of madIS is exactly the
same as SQLite's. This
Is there any important reason for the counter(X) function to not be included
in main SQLite?
There is already an implementation of the counter function in
src\test_func.c and, given the usefulness of the counter function in
analytics, it is a pity to have to write obnoxious queries to
work around the lack
I'm using SQLite's counter function to emulate OLAP functionality.
Unfortunately the counter function is not exposed in SQLite's default
build; nevertheless it eases the pain of not having analytics functions
in SQLite (lead, lag, median etc).
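In Python, a stateful user-defined function can stand in for test_func.c's counter. This is a hypothetical approximation, not the actual test_func.c semantics, and it relies on the simple full-scan evaluation order being rowid order:

```python
import itertools
import sqlite3

# Stateful UDF: numbers rows in the order SQLite evaluates them.
_counter = itertools.count(1)

con = sqlite3.connect(":memory:")
con.create_function("counter", 1, lambda _x: next(_counter))
con.execute("CREATE TABLE t(v)")
con.executemany("INSERT INTO t VALUES (?)", [("a",), ("b",), ("c",)])

print(con.execute("SELECT counter(v), v FROM t").fetchall())
# [(1, 'a'), (2, 'b'), (3, 'c')]
```

Note the caveat: SQLite makes no general guarantee about evaluation order, so this trick is only safe for plain scans without ORDER BY or joins.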
lefteris
Alexey Pechnikov wrote:
> Hello!
>
> In a message of Monday 24
Is there a reason for a very useful (for analytics) function like
"counter(X)" to live in test_func.c?
I stumbled on it while trying to write a counter function of my own.
Please, if possible, make it visible. It could ease the pain of not having
analytics functions in SQLite (lead, lag, median
What page size do you use in your database? Maybe you should increase it?
Assuming a table fully fragmented with a page size of 16KB then the I/O
rate should be (if my calculations are correct) in the 2MB/s range
(assuming a hard disk having 50MB/s bandwidth, 7msec seek time).
For 32KB page
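The back-of-envelope estimate above (one seek per page for a fully fragmented table) can be worked through explicitly:

```python
# Assumptions from the text: 50 MB/s sequential bandwidth, 7 ms seek,
# one seek per page read because the table is fully fragmented.
bandwidth = 50e6  # bytes/s
seek = 7e-3       # s

for page in (16 * 1024, 32 * 1024):
    t = seek + page / bandwidth   # time to fetch one page
    rate = page / t               # effective read rate, bytes/s
    print(page, round(rate / 1e6, 2), "MB/s")
```

For a 16 KB page this gives roughly 2.2 MB/s, matching the ~2 MB/s figure above; doubling the page size roughly doubles the effective rate while the seek still dominates.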
Has anybody successfully compiled SQLite with ICU for win32?
I haven't managed to find a libicu for MinGW. Any tips welcome.
lefteris
D. Richard Hipp wrote:
> On Nov 14, 2008, at 8:08 AM, Martin Engelschalk wrote:
>
>> Hi all,
>>
>> the ICU project is a very powerful tool to handle codepages,