Re: [sqlite] SQL Murder Mystery

2019-12-16 Thread Stefan Evert


> On 16 Dec 2019, at 04:59, Simon Slavin wrote:
> 
> A little light relief:
> 
> 
> 
> Chosen SQL variation is SQLite!


Nice, but the crime is so easy to solve …

SELECT CAST(x'4A6572656D7920426F77657273' AS TEXT);

;-)
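For anyone wondering how to construct such a literal: SQLite's hex() 
function produces the encoding, and the CAST decodes it again.  A small 
round-trip sketch via Perl/DBI, with a harmless placeholder string instead 
of the actual spoiler:

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                           { RaiseError => 1 });

    # hex() turns a string into the hex digits of a blob literal;
    # CAST(x'...' AS TEXT) decodes it again.
    my ($hex) = $dbh->selectrow_array("SELECT hex('whodunnit')");
    my ($txt) = $dbh->selectrow_array("SELECT CAST(x'$hex' AS TEXT)");
    print "$hex -> $txt\n";    # 77686F64756E6E6974 -> whodunnit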


Re: [sqlite] Regarding CoC

2018-10-23 Thread Stefan Evert

> On 23 Oct 2018, at 07:04, Paul wrote:
> 
> If my opinion has any value, even though being an atheist, I prefer this CoC
> 100 times over the CoC that is currently being pushed onto many open-source
> communities, which was created by some purple-headed feminist with political
> motives. This one does not have any hidden intentions (at least, it seems so
> to me, knowing that you're an honest person).

Exactly my feelings.

– Stefan



Re: [sqlite] "Responsive" website revamp at www.sqlite.org

2016-09-08 Thread Stefan Evert

> On 6 Sep 2016, at 10:19, Stephan Beal wrote:
> 
> A counter-opinion, though apparently in the small minority: i _absolutely
> despise_ fixed-width web site layouts.

+1, including layouts with fixed maximum width.




[sqlite] Sqlite incompatibility with Postgres

2016-05-18 Thread Stefan Evert

> On 18 May 2016, at 02:41, dandl wrote:
> 
> Then you are mistaken. 
> 1. Consider the following set S of integers: 1,3,5,7,42,99,83,11.
> 2. Divide it into two subsets such that S1 is of size 3 and all members of
> S1 are larger than those in S2.
> 
> A sort is unnecessary -- there are many algorithms that can do that purely
> based on set logic, the ability to compare members and the ability to
> determine the cardinality of a set.

I think the point is that your query may not have a solution if there are 
ties, i.e. if the ordering is not strict:

Consider this multiset of integers: 1, 3, 5, 7, 42, 99, 83, 11, 83, 83

In this case, no subset S1 of size 3 satisfies your criterion: the three 
largest values are 99, 83 and 83, but a third 83 remains in S2, so not all 
members of S1 are larger than all members of S2.  In an SQL query, the rows 
returned by ORDER BY ... LIMIT 3 would therefore not be uniquely defined.
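To make the ambiguity concrete, here is a small sketch with Perl/DBI and an 
in-memory SQLite database (table and column names invented for the example): 
SQLite happily returns three rows, but which of the three 83s gets left out 
is unspecified.

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:SQLite:dbname=:memory:", "", "",
                           { RaiseError => 1 });
    $dbh->do("CREATE TABLE s (x INTEGER)");
    my $ins = $dbh->prepare("INSERT INTO s (x) VALUES (?)");
    $ins->execute($_) for (1, 3, 5, 7, 42, 99, 83, 11, 83, 83);

    # "Top 3" in the presence of ties: two of the three 83-rows are
    # returned, and SQL does not specify which two.
    my $top = $dbh->selectall_arrayref(
        "SELECT rowid, x FROM s ORDER BY x DESC LIMIT 3");
    printf "rowid=%d  x=%d\n", @$_ for @$top;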

Best,
Stefan Evert



Re: [sqlite] ANN - DBD::SQLite version 1.20

2009-04-10 Thread Stefan Evert
Thanks a lot, Darren!

This version also fixes the segfaults in the create_function test that I had 
been worried about.  I've only tried version 1.22_02 so far, but according to 
the change log, the bug was already fixed before the public release of 1.20.

Best,
Stefan



Re: [sqlite] speeding up row by row lookup in a large db

2009-03-24 Thread Stefan Evert

>> Just wanted to say publicly that DBD::SQLite is the greatest thing
>> since, well, SQLite. Thanks for making our lives easy.
>
> +1

$count++;

from me, too.  We talk so much about speed and versions only because  
we use DBD::SQLite so heavily.

BTW, I've switched to the amalgamation package, with SQLite manually updated 
to 3.6.11.  It works very well so far, though I haven't yet tried callbacks 
on the Mac (where the test suite segfaults).

Best,
Stefan



Re: [sqlite] speeding up row by row lookup in a large db

2009-03-22 Thread Stefan Evert
On 22 Mar 2009, at 00:27, P Kishor wrote:

> Stefan, you were right about "Depends on how much data" part, but it
> applied to the weather data, not the lookup tables for lc (or
> landcover) or dist (or disturbance).
>
> Also, "Even for Perl/DBI" makes it seem there is something wrong with
> Perl/DBI... no, no... Perl/DBI is an industry-tested, extremely
> capable interface. I wouldn't trade it for anything. The problem was
> with my problem, with my data size, with my approach. I have to
> correct that. DBD::SQLite/DBD::SQLite::Amalgamation/Perl DBI are the
> greatest things since SQLite.

I didn't mean to criticise DBI specifically.  I'm doing most of my  
database work through DBI (with either SQLite or MySQL as a backend)  
and I'm very happy with its ease of use and stability.

However, Perl does introduce a lot of overhead, so it may not be the best 
choice for high-performance applications.  In my simplistic benchmarks, Perl 
did not achieve more than approx. 400,000 subroutine calls per second 
(MacBook Pro, 2.5 GHz).  If you're reading your 7,300 result rows in a loop 
of fetchrow_* method calls (or if similar subroutines are called internally 
by fetchall_* or selectall_*), then function-call overhead alone already 
limits you to 400,000 / 7,300, i.e. fewer than 60 queries/s.
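For what it's worth, DBI's documentation recommends bind_columns() plus a 
bare fetch() as the cheapest row loop.  A rough sketch (database, table and 
column names are made up, of course):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:SQLite:dbname=weather.db", "", "",
                           { RaiseError => 1 });
    my $sth = $dbh->prepare(
        "SELECT met_id, value FROM met WHERE cell_id = ?");

    # Bind Perl variables to the result columns once ...
    $sth->execute(42);
    $sth->bind_columns(\my ($met_id, $value));

    # ... then each row costs only a single fetch() call, which
    # updates $met_id and $value in place.
    while ($sth->fetch) {
        # work with $met_id and $value here
    }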

If you don't need to access the rows separately, you could try to pack  
the complete data for each met_id with Storable.pm and store it in a  
single database blob.  I haven't tried that myself, though, so I don't  
know whether that would be any faster.
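Roughly what I have in mind, untested and with an invented met_blob table, 
would look like this:

    use strict;
    use warnings;
    use DBI qw(:sql_types);
    use Storable qw(freeze thaw);

    my $dbh = DBI->connect("dbi:SQLite:dbname=weather.db", "", "",
                           { RaiseError => 1 });
    $dbh->do("CREATE TABLE IF NOT EXISTS met_blob ".
             "(met_id INTEGER PRIMARY KEY, data BLOB)");

    # Pack all result rows for one met_id into a single blob ...
    my $rows = $dbh->selectall_arrayref(
        "SELECT * FROM met WHERE met_id = ?", undef, 42);
    my $sth = $dbh->prepare(
        "INSERT OR REPLACE INTO met_blob (met_id, data) VALUES (?, ?)");
    $sth->bind_param(1, 42);
    $sth->bind_param(2, freeze($rows), SQL_BLOB);
    $sth->execute;

    # ... and later restore the complete data set with a single fetch.
    my ($blob) = $dbh->selectrow_array(
        "SELECT data FROM met_blob WHERE met_id = ?", undef, 42);
    my $restored = thaw($blob);    # arrayref of rows, as before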

>> Another thing to keep in mind is that the SQLite version included in the
>> DBD::SQLite distribution is fairly old (3.4.0 on my Mac -- I doubt there's
>> a more recent version of DBD::SQLite around), and AFAIK there have been
>> some speed improvements in SQLite recently.
>
> I am using DBD::SQLite::Amalgamation with SQLite 3.6.11. I am not sure
there is anything wrong with DBD::SQLite 1.14 at all (other than its
> use of the older SQLite code, which is easily corrected). One might
> get better bang by ensuring that the latest version of DBI is
> installed, which Tim Bunce and company are probably always
> fine-tuning.

DBD::SQLite is fine and works very well for me, but hasn't seen any  
updates to a newer SQLite version in quite a long time.  My  
understanding is that it's not entirely trivial to replace the SQLite  
source code (without understanding all the XS interfacing code), and  
that it would be better and easier to use the amalgamation anyway.

I'd love to switch to DBD::SQLite::Amalgamation, but I need to  
register my own callback functions, and this package fails the  
callback tests with a bus error.

Cheers,
Stefan


Re: [sqlite] speeding up row by row lookup in a large db

2009-03-21 Thread Stefan Evert

On 21 Mar 2009, at 15:31, P Kishor wrote:

> I did some benchmarking with the above schema using Perl DBI, and I
> get about 30 transactions per second as long as I am returning the data
> to memory.

Even for Perl/DBI, that seems pretty slow.  Depends on how much data  
each of these transactions returns, though -- if there are thousands  
of rows in lc or dist for each cell_id, then you can't expect much  
better performance.  Even though DBI and DBD::SQLite are written in C,  
they have to allocate fairly complex data structures to return the  
data (in the best case, an anonymous array with 40 to 60 entries for  
each data row in the result set), and these operations are relatively  
expensive in Perl (I know because I've written some XS code recently  
that does this kind of thing).

Another thing to keep in mind is that the SQLite version included in  
the DBD::SQLite distribution is fairly old (3.4.0 on my Mac -- I doubt  
there's a more recent version of DBD::SQLite around), and AFAIK there  
have been some speed improvements in SQLite recently.

(Darren, any news from the maintainer of DBD::SQLite?  I would be very  
delighted and grateful to be able to use an up-to-date SQLite version  
in my Perl scripts.)

>
> [1] First retrieve all data from cell table
> SELECT * FROM cell WHERE cell_id = :cell_id
>
> [2] Now retrieve the related lc, dist and met
> SELECT lc.*
> FROM lc l JOIN cell_lc c on l.lc_id = c.lc_id
> WHERE c.cell_id = :cell_id

Just a shot in the dark, but people on this list have occasionally pointed 
out that SQLite isn't very good at optimising JOIN queries, so it's often 
better to use a sub-select.  (SQLite isn't very good at optimisation in 
general, at least in the version 3.4.0 that comes with DBD::SQLite, and I've 
found that it sometimes helps a lot to give the optimiser a few subtle 
hints ...)

Have you tried rephrasing queries [2] and [3] in this form?

> SELECT * FROM lc WHERE lc_id IN (SELECT lc_id FROM cell_lc WHERE  
> cell_id = :cell_id)

I've had good experiences with this approach, although my definition of good 
performance is rather along the lines of "completes in less than 5 
seconds". :-)
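If in doubt, you can let SQLite tell you what it does with each formulation: 
prefix the query with EXPLAIN QUERY PLAN and compare the output.  A quick 
Perl sketch (the wrapper code is mine; the queries are your [2] and its 
sub-select variant, with the result columns written as l.* to match the 
alias):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:SQLite:dbname=model.db", "", "",
                           { RaiseError => 1 });

    for my $sql (
        "SELECT l.* FROM lc l JOIN cell_lc c ON l.lc_id = c.lc_id ".
        "WHERE c.cell_id = 42",
        "SELECT * FROM lc WHERE lc_id IN ".
        "(SELECT lc_id FROM cell_lc WHERE cell_id = 42)",
    ) {
        print "-- $sql\n";
        # Each output row describes one step of the query plan,
        # e.g. which table is scanned and which index is used.
        my $plan = $dbh->selectall_arrayref("EXPLAIN QUERY PLAN $sql");
        print join(" | ", @$_), "\n" for @$plan;
    }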


Best regards,
Stefan Evert

[ stefan.ev...@uos.de | http://purl.org/stefan.evert ]





Re: [sqlite] Any advantage to pre-sorting data before loading and indexing it?

2009-02-17 Thread Stefan Evert

On 17 Feb 2009, at 15:34, pyt...@bdurham.com wrote:

> Is there any advantage to having data pre-sorted (in index order)
> before loading it into SQLite and indexing it? Or does indexing
> work best if the index values are randomly ordered at data load
> time?

From my own experience, there's a big performance gain both at the indexing 
stage and in queries that have to scan relatively large parts of the 
database in index order (at least if you're working with a database that 
doesn't fit into RAM).

Of course, if you create indexes on several different columns, the data can 
only be pre-sorted for one of them, so on average there will be little gain.
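In Perl, the load pattern I mean looks roughly like this (schema invented 
for the example; building the index only after the bulk load is a separate, 
well-known trick that combines nicely with pre-sorting):

    use strict;
    use warnings;
    use DBI;

    my @rows = (["pear", 1], ["apple", 3], ["banana", 2]);

    my $dbh = DBI->connect("dbi:SQLite:dbname=big.db", "", "",
                           { RaiseError => 1, AutoCommit => 0 });
    $dbh->do("CREATE TABLE freq (word TEXT, count INTEGER)");

    # Insert the rows pre-sorted on the column to be indexed ...
    my $ins = $dbh->prepare("INSERT INTO freq (word, count) VALUES (?, ?)");
    $ins->execute(@$_) for sort { $a->[0] cmp $b->[0] } @rows;

    # ... and build the index only after the bulk load.
    $dbh->do("CREATE INDEX freq_word_idx ON freq (word)");
    $dbh->commit;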




Best regards,
Stefan Evert

[ stefan.ev...@uos.de | http://purl.org/stefan.evert ]





Re: [sqlite] request to become co-maintainer of DBD::SQLite

2009-01-19 Thread Stefan Evert

Dear Duncan,

thanks for taking on this job!  I have recently started using SQLite quite 
heavily from Perl scripts -- it is astonishingly efficient, even for simple 
queries with Perl callback functions on a 70 GB database (Google's Web 1T 
5-gram database, in case someone's curious) -- so I'd be more than happy to 
see an up-to-date version of DBD::SQLite.

Since I'm lazy enough to rely on OS-provided SQLite installations on  
various computers, I'm using at least three different old versions of  
SQLite in parallel, DBD::SQLite being the oldest of all ... (no  
compatibility problems at all, though, so kudos to all SQLite  
developers!).

>> I have been stuck back at 3.4 for various issues.
>>
>> I do Perl and C and can offer some help.

Same here.  I feel reasonably at home both in C and Perl, and I've  
written some simple XS code.  I don't have any experience with DBI,  
which seems to have its own method of compiling C extensions for DBD  
modules (from a quick look at the DBD::SQLite sources).

Just let us know how/whether we can help you!




Best regards,
Stefan Evert

[ stefan.ev...@uos.de | http://purl.org/stefan.evert ]





Re: [sqlite] Journal files

2008-12-01 Thread Stefan Evert

> I tried that on sqlite3 version 3.6.6 and the return value was "OFF", but
> the journals are still being created. Any reason why this wouldn't work?

Did you set

PRAGMA journal_mode = OFF;

? The way I read the documentation (admittedly only on a second or third 
close reading), this only sets the default for databases attached later, but 
doesn't affect your main connection or any databases that have already been 
attached.  So what I do is

PRAGMA main.journal_mode = OFF;

IIRC, I needed this line to actually turn off the journals (tested  
with 3.4.x and 3.5.x versions of SQLite only, though).
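Conveniently, the pragma returns the journal mode that is actually in 
effect, so you can verify it directly.  From Perl/DBI, for instance (a quick 
sketch):

    use strict;
    use warnings;
    use DBI;

    my $dbh = DBI->connect("dbi:SQLite:dbname=test.db", "", "",
                           { RaiseError => 1 });

    # Setting the pragma returns the resulting mode as a result row,
    # so fetch it instead of just calling do().
    my ($mode) = $dbh->selectrow_array("PRAGMA main.journal_mode = OFF");
    print "journal mode now: $mode\n";    # expect "off"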

Cheers,
Stefan
