I have a SQL problem I'm trying to solve and realise it's now gone way
beyond my meagre SQL talents.
A bit of context
The dataset I'm looking at concerns traffic problems. Every five mins I get a
new dataset of traffic updates. Some of the new dataset is a continuation of
the old data,
appreciate the excellent and helpful answers provided. I have improved
my SQL and learnt new stuff today which is always nice.
Best wishes,
Rob
On 1 Nov 2014, at 10:19, Rob Willett <rob.sql...@robertwillett.com> wrote:
> I have a SQL problem I'm trying to solve and realise it's now gone
OK, thanks. A little more optimisation :)
> On 1 Nov 2014, at 17:39, Igor Tandetnik wrote:
>
> On 11/1/2014 11:52 AM, Luuk wrote:
>> Is the 'else null' part needed??, or can it be deleted
>
> Yes, it can be removed. CASE expression returns null when no case matches.
> --
>
I agree with Tim.
I filter all my SQLite messages to its own folder and read as needed.
I prefer this method to a forum as I can then keep the messages with me.
I'm often off the internet and it's good to have them for reading. Also
I can look back through them and get hints and tips about
You didn't read the link properly. I'll highlight the relevant sections
The TCL Tests are the oldest set of tests for SQLite. They are contained in the
same source tree as the SQLite core and like the SQLite core are in the public
domain. The TCL tests are the primary tests used during
Eric,
The humorous side of me would argue that RC4 isn't encryption any more than
ROT13 is these days.
The serious side of me says that exportation of encryption stuff has changed
significantly and is full of weirdness and vagueness. Since you don't state your
country of origin it's difficult to
Simon,
Your example is a very simple, yet elegant example of why database
synchronisation is a 'difficult' problem to solve.
As you say at the bottom of your e-mail, for the database to be correct, it has
to determine the intentions of two sets of users to work out the solution. Even
if
Simon,
Thanks very much for this information.
Rob.
> On 14 Aug 2015, at 17:38, Simon Slavin wrote:
>
>
> On 14 Aug 2015, at 4:17pm, skywind mailing lists
> wrote:
>
>> this is the problem. I could never reproduce it by myself and even my
>> customers have normally no problems. But it
I *think* this is due to you creating an integer when you first create
the entries
Try changing from
INSERT INTO fmtemp VALUES (1, 0);
to
INSERT INTO fmtemp VALUES (1, 0.0);
Just did
macpro:js rwillett$ sqlite3
SQLite version 3.8.10.2 2015-05-20 18:17:19
Enter ".help" for usage hints.
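The difference between inserting 0 and 0.0 can be sketched like this (a guess at the situation: the original `fmtemp` table isn't shown, so here the value column is declared without a type, meaning it has no affinity and SQLite keeps whichever storage class the literal supplies). Python's built-in sqlite3 module is used just for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
# An untyped column has no affinity, so the stored storage class
# is exactly what the INSERT literal provides.
con.execute("CREATE TABLE fmtemp (id INTEGER, val)")
con.execute("INSERT INTO fmtemp VALUES (1, 0)")    # stored as INTEGER
con.execute("INSERT INTO fmtemp VALUES (2, 0.0)")  # stored as REAL
rows = con.execute(
    "SELECT id, typeof(val) FROM fmtemp ORDER BY id"
).fetchall()
print(rows)  # [(1, 'integer'), (2, 'real')]
```

If the column had been declared REAL, both inserts would end up stored as REAL, which is why the declared type matters here.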
Hi,
We're trying to understand whether or not we have a performance problem with
our SQLite database. It may well be that we are 'optimal' for some value of
optimal or it may be that we can improve our system. I was hoping to use the
experience here to help focus our attention or give some
Ward,
Thanks for this.
We are already doing ANALYZE and sadly it's not made any difference.
Rob
> On 3 Jul 2015, at 00:04, Ward Willats wrote:
>
>
>> On Jul 2, 2015, at 3:16 PM, Rob Willett
>> wrote:
>>
>> We're trying to understand whether or
Thanks for the reply
> I can tell you that the SQLite3 command line program uses threading and is
> extremely fast and optimised, it is no surprise that the query takes much
> longer in many other programs using a library or even compiled-in SQLite
> code. If it takes 50 seconds piped into
> On 3 Jul 2015, at 07:06, GB wrote:
>
>
>
> Rob Willett schrieb am 03.07.2015 um 00:16:
>> SCAN TABLE RAG USING COVERING INDEX Rag_Idx4
>>
>>
>
> Since Rag_Idx4 is quite wide and not primarily ordered by "Text", an index
> scan might not b
Simon, Clemens,
> Do you ever depend on any indexing on the "Text" column which is not COLLATE
> NOCASE ASC ? If not, then you should be able to speed up your search by
> defining the column the way you think of it. So in your table definition use
>
>"Text" TEXT COLLATE NOCASE ASC,
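Simon's suggestion can be sketched as follows (the table and index names are made up for the example): with COLLATE NOCASE declared on the column itself, every comparison and every index on that column is case-insensitive by default, so the query needs no per-query COLLATE clause and can still use the index.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# Collation declared on the column: comparisons and indexes on it
# are case-insensitive without saying COLLATE in each query.
con.execute('CREATE TABLE msgs ("Text" TEXT COLLATE NOCASE)')
con.execute('CREATE INDEX msgs_text ON msgs ("Text")')
con.execute("INSERT INTO msgs VALUES ('Roadworks on A40')")

# Matches despite the different case.
hit = con.execute(
    "SELECT count(*) FROM msgs WHERE \"Text\" = 'roadworks on a40'"
).fetchone()[0]
print(hit)  # 1

# The plan shows the index is used for the case-insensitive lookup.
plan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM msgs WHERE \"Text\" = 'x'"
).fetchall()
print(plan)
```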
ote:
>
> Rob Willett wrote:
>>> Do you ever depend on any indexing on the "Text" column which is not
>>> COLLATE NOCASE ASC ? If not, then you should be able to speed up your
>>> search by defining the column the way you think of it. So in your table
omething) is
fantastic. We need to add this to our almost production database and add in the
dumping and rebuilding on a monthly basis.
Hopefully this may help other people with tuning their databases.
We'd like to say thanks to all the people who offered help and suggestions,
Simon, Ward, R Smit
I forgot to add one thing in to my last e-mail. I just checked the function
call time, which was approx 13ms; it is now approx 110 microseconds, over 100x
faster.
This looks like the speed we can get out of SQLite.
Thanks again,
Rob.
the same as doing the .dump and restore or is it different? We like
the .dump as it gives us a nice easy to use backup :)
Rob
> On 3 Jul 2015, at 13:15, Simon Slavin wrote:
>
>
> On 3 Jul 2015, at 11:35am, Rob Willett
> wrote:
>
>> CREATE TABLE "RAG" (
>&
Thanks for the comments.
Yes we did look at this before we posted our original plea for help. We have a
BEGIN/COMMIT around all the relevant code.
Rob
> On 4 Jul 2015, at 20:28, droedel wrote:
>
> Rob Willett writes:
>
> [snip]
>> The headline figures are we ha
or so searches as opposed to 40,000.
We've pulled the 'optimisation' out as it made the code untidy and it was a
dirty hack.
Thanks for helping,
Rob
> On 5 Jul 2015, at 21:36, James K. Lowden wrote:
>
> On Fri, 3 Jul 2015 11:35:21 +0100
> Rob Willett wrote:
>
>> It basica
Hi,
I'm trying to access a SQLite database I've written using Excel. In the Windows
world we used to use ODBC drivers which were clunky, cumbersome, sometimes
worked, sometimes didn't, had arcane issues and generally were a PITA.
So I'm trying to do the same on our development Macs and it
Richard,
Every copy of iPhoto, Aperture and Photos as well. Photos is the new
replacement for iPhoto and Aperture. I've just checked a couple of the
libraries and they open as SQLite.
I can't speak for the iOS versions of these.
In reference to the png library I would assume that that is
Hi,
I used the built-in sqlite library that comes with Mac OS X. The problem I found
was compiling it for 64-bit under Xcode, which seemed very difficult. I'm not
blaming SQLite, just my poor knowledge of how Xcode works and how 64-bit
compilation worked. I needed 64-bit as I have very large data
Igor
Suggest you look at
http://www.sqlite.org/whentouse.html
Your questions are very broad and don’t really have a right or wrong answer.
You might as well start a flame war by asking the best language for software
development. Everybody has their own version of the truth, mine happens to
I have to say that it's been a long time since I sat back and was amazed by what
people can do.
My knowledge of SQL is limited, whilst I recognise it is a functional language,
I had no idea you could solve Sudoku in it.
I take my hat off to the real experts. The downside of this is I now
Hi,
Can I add my very first piece of advice after listening and reading for the
last 6-9 months :)
I’ll second what Simon says, I use the very same technique for a table with 4M+
records in, and it's so fast I thought I had an error and looked for bugs in my
code. I assumed (incorrectly) that
All my searches are unique and go across the whole table. The range I select
from is normally between 500 and 600 rows.
I benchmarked the select over the growth of the database to circa 4m records
and the slowdown was negligible. I'm not looking at optimising it as I have far
better
that them use
them accordingly.
Previous suggestions of using the browser or client computer settings seem
sensible to me.
--
Rob Willett
Sent from my mobile phone
On Thursday, 31 July 2014 at 08:54, Stephen Chrzanowski wrote:
> Looking back at the clarification of what the OP wan
Hi,
I'm trying to do some analysis across a couple of tables and the SQL is beyond
my meagre skills. I'm struggling to even describe the problem, to be honest.
The high level description is that I have three tables, Users, Devices and
Perimeter_Notifications. The high level description is that
23 May 2015, at 17:06, Rich Shepard wrote:
>
> On Sat, 23 May 2015, Rob Willett wrote:
>
>> What I want to do is join the table Users and Perimeter Notifications
>> together but only if the value of Devices.Holiday_Mode is either non
>> existent or if Devices.Hol
eturned.
>>
>> It also has the advantage that when you read it, it translates directly
>> back into the original (re-phrased) problem statement, so it is self-
>> documenting.
>>
>>
>>> -Original Message-
>>> From: sqlite-users-bounces at mailinglists
the chain of data flow is now getting complex so 'small' changes
can take a long time to move through (and yes we have abstracted our designs
out).
All the very best.
Rob.
> On 23 May 2015, at 18:56, Rich Shepard wrote:
>
> On Sat, 23 May 2015, Rob Willett wrote:
>
>> Thank
015, at 17:06, Rich Shepard wrote:
>
> On Sat, 23 May 2015, Rob Willett wrote:
>
>> What I want to do is join the table Users and Perimeter Notifications
>> together but only if the value of Devices.Holiday_Mode is either non
>> existent or if Devices.Hol
for the helpful reply,
Rob
> On 25 May 2015, at 13:34, Simon Slavin wrote:
>
>
> On 23 May 2015, at 5:21pm, Rob Willett wrote:
>
>> If we can force holiday_mode to be set to either 0 or 1 then the problem
>> goes away, which comes down to getting the design right.
Petr,
You are making a number of fundamental mistakes with your security.
1. Security through obscurity is your first mistake. There is no such thing.
2. Assuming that nobody is writing CGI scripts on Windows Servers is your next
mistake. A lot of systems still do this, a lot of old systems
Petr,
Since this is the SQLite mailing list, we are moving away from the intentions
of the list, however I think your points need addressing as they may be
relevant to other people using this mailing list. I apologise to other people
if this is off topic but I think it's important enough to
Hi,
There was a topic on here a few weeks ago which I cannot remember and cannot
find in the mess that is my e-mail system, and after spending the last hours
searching the SQLite archives I still can't find it, so I will ask here if
anybody can remember or help. I can't even format the question for
I don't know
> what, given that I don't know your system environment.
>
> On Thu, Sep 17, 2015 at 6:58 AM, Rob Willett
> wrote:
>
>> Hi,
>>
>> There was a topic on here a few weeks ago which I cannot remember and
>> cannot find in the mess that is my e-mail
Simon,
Fair point and agreed. We'll update it.
Rob
> On 17 Sep 2015, at 13:56, Simon Slavin wrote:
>
>
> On 17 Sep 2015, at 12:58pm, Rob Willett
> wrote:
>
>> CREATE TABLE "postcode" (
>> "postcode" text NOT NULL,
>
> As
n our
system :)
Rob
If we ignore the WITHOUT ROWID issue, then
> On 17 Sep 2015, at 14:21, John McKown wrote:
>
> On Thu, Sep 17, 2015 at 7:21 AM, Rob Willett
> wrote:
>
>> John,
>>
>> Thanks for this.
>>
>> The base OS is Linux on a 64-bit se
server.
Rob
> On 17 Sep 2015, at 15:00, Simon Slavin wrote:
>
>
>> On 17 Sep 2015, at 2:47pm, Rob Willett
>> wrote:
>>
>> 3. We cannot get WITHOUT ROWID working using Perl DBI. We get the following
>> error
>>
>> DBD::SQLite::db prepare
channels to get DBD-SQLite updated.
>
>
> On Thu, Sep 17, 2015 at 9:00 AM, Simon Slavin wrote:
>
>>
>>> On 17 Sep 2015, at 2:47pm, Rob Willett
>> wrote:
>>>
>>> 3. We cannot get WITHOUT ROWID working using Perl DBI. We get the
>> follo
Nicolas,
Speed of development, ease of development, maintenance, available frameworks,
available Perl modules.
I actually know C the best of any language and will very happily code in C, but
for some things it's quicker to develop it in Perl. Certainly for
anything involving
Chris,
The postcode is unique in this data collection. I *should* have stated that as
it is important. My apologies.
The UK postcode is defined to cover several houses, in London my postcode
covers around 20 houses, all of them on my street. This means the area it
covers is pretty small.
Rowan,
It wasn't a troll. My trolls are normally far, far less subtle than that. My
'guess' was that since Australia is such a large country in comparison to the
UK and has a very small population in comparison to the UK, it's relatively
sparsely populated. I know that some cities are
channels to get DBD-SQLite updated.
>
>
> On Thu, Sep 17, 2015 at 9:00 AM, Simon Slavin wrote:
>
>>
>>> On 17 Sep 2015, at 2:47pm, Rob Willett
>> wrote:
>>>
>>> 3. We cannot get WITHOUT ROWID working using Perl DBI. We get the
>> follo
> On 18 Sep 2015, at 16:54, R.Smith wrote:
>
>
>
> On 2015-09-18 08:17 AM, Rob Willett wrote:
>> What we wanted was a postcode to GPS lookup, it was not to pinpoint a house,
>> you would need a little more information to be that specific, e.g. house
>
Ryan,
Thanks. We've got 100 requests a second which may be enough. We'll keep looking
though for any time.
Rob.
> On 18 Sep 2015, at 18:26, R.Smith wrote:
>
> >>>Rob: "We want to do postal code ==> GPS..."
>
> >>Me: "You can use google apis..."
>
> >Rob: "Our business is GPS and GIS
A quick update on handling the select query. We have some speed results
which are quite illuminating and positive.
We've been playing around with building a few web services, one of which was
the postcode lookup that people have kindly helped with.
We have one service that is a simple
Cecil,
If you have a load average of 15 then that normally means you have a
massively overloaded Linux box. I don't know your system but I get
worried around a load average of 3-4 on our boxes. Load Average is a
very crude measurement but a high number tends to be bad.
If your CPU is only
Cecil,
Linux reporting 8 cores is due to hyper-threading on the four cores.
That's normal.
One of the cores reporting a high usage is normal if you have a
single-threaded app that simply cannot be moved, e.g. many Perl programs
exhibit this behaviour. That's fine and to be expected.
I do not
To add to the responses, we use SQLite for our main database which is
currently around 60GB in size, so size isn't an issue for SQLite. I am
aware of other people with far, far larger SQLite databases. We did a
quick test and ran up to a couple of hundred GB of data in SQLite
and it seemed
Hi,
I'm sure this is a really dumb question but I'm struggling to
understand why the following SQL is needed for what should be a trivial
SQL expression.
I've minimised the example down to (hopefully) make it simpler.
I have a table with an ISO date StartTime in it, held as a string.
+ (180 * 60))
not work.
Thanks,
Rob
On 18 Feb 2016, at 11:25, Quan Yong Zhai wrote:
> It looks like strftime('%s',...) should return an integer and
> strftime('%J', ...) should return a double value. But it always return
> text value.
> ________
> From: Ro
does not work.
The only difference is the boolean operand.
We know how to solve the problem; we're puzzled though, as our
understanding is clearly wrong.
Thanks for replying,
Rob
On 18 Feb 2016, at 12:14, Tim Streater wrote:
> On 18 Feb 2016 at 10:20, Rob Willett
> wrote:
&
ftime and we
will be in our logic.
Thanks to everybody for the help, we've learnt something new today
which is always good.
Best wishes,
Rob
On 18 Feb 2016, at 12:45, Richard Hipp wrote:
> On 2/18/16, Rob Willett wrote:
>>
>> select
>> strftime('%s' , starttime),
>>
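The point about strftime('%s', ...) returning text can be illustrated with a small sketch (assuming a date-only string, which SQLite treats as midnight UTC): casting to INTEGER makes the numeric intent explicit before doing arithmetic on the result.

```python
import sqlite3

con = sqlite3.connect(":memory:")
# strftime('%s', ...) hands back its result as TEXT; the CAST makes
# the integer intent explicit.
row = con.execute(
    "SELECT typeof(strftime('%s','2016-02-18')),"
    "       typeof(CAST(strftime('%s','2016-02-18') AS INTEGER))"
).fetchone()
print(row)

# 2016-02-18 00:00:00 UTC is epoch 1455753600; add 180 minutes:
secs = con.execute(
    "SELECT CAST(strftime('%s','2016-02-18') AS INTEGER) + (180 * 60)"
).fetchone()[0]
print(secs)  # 1455764400
```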
Hi,
We think we know the answer to this, but we'll ask the question
anyway.
We're trying to back up a 10GB live running database
'as-fast-as-we-possibly-can' without stopping updates coming in. The
updates come every 2-3 mins, and write a chunk of data in. We can't
really stop the database
Thanks for the reply,
Yes Example 2 in https://www.sqlite.org/backup.html is what we are
talking about. It was very slow to run for us.
Rob
On 4 May 2016, at 12:08, Dominique Devienne wrote:
> On Wed, May 4, 2016 at 12:44 PM, Rob Willett
> wrote:
>> We?re trying to backup a 10GB
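For anyone following along, Python's sqlite3 module wraps the same online-backup API as Example 2, so the incremental idea can be sketched like this (in-memory databases stand in for the real 10GB file and its backup):

```python
import sqlite3

# Source stands in for the live database (here just in memory).
src = sqlite3.connect(":memory:")
with src:
    src.execute("CREATE TABLE t (x)")
    src.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(1000)])

dst = sqlite3.connect(":memory:")  # would be the backup file on disk

# pages=64: copy 64 pages per step. Between steps other connections
# may write, and the backup restarts over the changed pages, which is
# why a busy database can make this loop take a very long time.
src.backup(dst, pages=64)

copied = dst.execute("SELECT count(*) FROM t").fetchone()[0]
print(copied)  # 1000
```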
in
workflow might be the easiest and simplest solution.
I know you have mentioned VFS shims further down the email trail but
that's certainly a step too far for us.
Thanks
Rob
On 4 May 2016, at 12:22, Dominique Devienne wrote:
> On Wed, May 4, 2016 at 1:13 PM, Rob Willett
> wrote:
>
2016, at 12:58, Clemens Ladisch wrote:
> Rob Willett wrote:
>> We're trying to back up a 10GB live running database
>> 'as-fast-as-we-possibly-can' without stopping updates coming in.
>
> How much memory do you have? I guess you can't simply read the enti
,ply turns the tap back on is very simple
to code and to see working. I like easy, simple solutions because I'm
simple at heart.
Best wishes,
Rob
On 4 May 2016, at 14:24, Simon Slavin wrote:
> On 4 May 2016, at 1:35pm, Rob Willett
> wrote:
>
>> I think that unless we can get th
-It-Simple shell
script. We pause the queue upstream which stops the update process, do a
cp and then restart the queue again. All of this is doable in shell
script.
Rob
On 4 May 2016, at 18:22, R Smith wrote:
> On 2016/05/04 2:35 PM, Rob Willett wrote:
>> Dominque,
>>
>> We p
Scott,
That's an interesting idea. Is there an option in SQLite to do this for
us, or do we have to write a small shim in our app?
I like the idea of this as it's simple and elegant.
Rob
On 4 May 2016, at 16:51, Scott Robison wrote:
>>> This is going to become a bigger problem for us as the
the time to reply.
Rob
On 4 May 2016, at 18:52, Scott Robison wrote:
> On Wed, May 4, 2016 at 11:47 AM, Rob Willett
>
> wrote:
>
>> Scott,
>>
>> That's an interesting idea. Is there an option in SQLite to do this
>> for us,
>> or do we have to w
Dan,
That's NOT the case for us, so that explains why things are slow. Mmm…
as I recall we never did get a backup to finish… Now we know why :)
Rob
On 4 May 2016, at 18:53, Dan Kennedy wrote:
> On 05/05/2016 12:45 AM, Rob Willett wrote:
>> Ryan,
>>
>> Ah! The penny dro
at 10:55 AM, Rob Willett
> wrote:
>> Scott,
>>
>> OK, We can see how to do this (I think). Our app is written in Perl
>> and we'd
>> just need to capture the command we write down. The only issue I can
>> think
>> of is the prepare statement and makin
enough, though takes time, and see how that compares.
Thanks for the suggestion,
Rob,
On 5 May 2016, at 16:42, J Decker wrote:
> Instead of cp, rsync might help it is able to send delta changes.
>
> On Wed, May 4, 2016 at 10:55 AM, Rob Willett
> wrote:
>> Scott,
>>
>
more to see what's going on.
Top marks for spotting our deliberate omission :)
Rob
On 5 May 2016, at 17:42, J Decker wrote:
> On Thu, May 5, 2016 at 9:38 AM, Rob Willett
> wrote:
>> Mmmm… Initial tests are not conclusive; it does look as if using
>> rsync
>> '-inpl
Hi,
My first question as a lurker and reader of these groups. Hopefully it's not too
stupid :)
I've been working through using SQLite on my Mac and have a question on style
and the way to use SQLite databases.
My application has two parts, a client/PHP side to collect requests from a user
and
" type mails here which is great. Some of the detail people
go into is wonderful and I've learnt an awful lot just reading replies.
Best wishes,
Rob.
On 19 Jun 2013, at 15:35, Simon Slavin <slav...@bigfraud.org> wrote:
>
> On 19 Jun 2013, at 3:15pm, Rob Willett <rob.sql...
t it, the data MUST be
> updated etc.
>
> Btw, to all, I concur very much with Rob as to the quality of replies on this
> forum - many thanks to all!
>
>
> On 2013/06/19 16:45, Rob Willett wrote:
>> Stephen, Simon,
>>
>> I'm not worried abou
Well I just simply downloaded my free copy from the Apple App Store. I entered
absolutely zero information (apart from Apple knowing everything about me
anyway). Can't say I've been plagued by spam from other vendors who have
provided free downloads but there's a first time for everything.
It
Hi,
Apologies for jumping in here.
I was interested in your question as I have some development going on which
will eventually go into production on a VMware ESXi server, which could be
either Linux or Windows.
When you say you are running sqlite3 on ESXi 5, do you mean that you are running
On 24 July 2013 05:34:43 Kai Peters wrote:
Is it possible to have two (or more) autoincrement columns per table?
___
sqlite-users mailing list
sqlite-users@sqlite.org
Hi,
I cannot comment on the speed of Berkeley DB. I am somewhat familiar with
Oracle's approach to licensing terms, having spent somewhat more than £10M with
them over the last five years :) You can accuse Ellison of many things (and I
have, all of them unrepeatable) but being cheap isn't one of
Hi,
As I've learnt more about how to use SQLite (mainly from looking at this
mailing list), I've realised I've coded some things in a less than
optimal manner.
One of the things I've realised is that SQLite has a timing system built into
accessing the database, so that if things are busy
be expected. Is there a problem here? It doesn’t appear to
be a problem but would welcome any comments.
Thanks for taking the time to reply.
Rob
On 6 Aug 2016, at 22:35, R Smith wrote:
On 2016/08/06 10:50 PM, Rob Willett wrote:
Our understanding of this is that many processes can READ
Hi,
We've been using SQLite through Perl for some time now and have started
to get more adventurous.
Our SQLite database is around 32GB in size, is created and manipulated
by a single Perl process and is working well.
What we now want to do is mine the database using a very long running
Smith wrote:
On 2016/08/07 8:55 AM, Rob Willett wrote:
Richard, Ryan,
Thanks for this. We were dimly aware of WAL but until now hadn’t
needed to use it.
We’ve done a quick check with it and it *seems* to work on a test
database. We’ve all read the docs again and paid attention to
https
Jean-Christophe
Thanks for the update on wal-mode. Your explanation is clear and makes
sense to us. We can see why we would have a 224MB -wal file. We
experimented with killing processes whilst updating and generally
messing around, and SQLite did what it was supposed to do. I wouldn't
say
I've been following this thread with interest. I have used PuTTY for
years as it's the de facto standard for decent SSH terminals on Windows
boxes.
A slightly more radical suggestion for the log files: since PuTTY is
open source, have a look at the code and see if you can easily add in a
We don't use Windows Server or System.Data.SQLite so can't comment.
I'd be astonished if it's SQLite itself that's at fault here.
Rob
On 4 Oct 2016, at 13:24, Werner Kleiner wrote:
Thanks for help.
Hopefully I give you the correct answer, because a colleague has
written the C# program.
We had
We've done inserts of tens of thousands at a time; we may well have done
hundreds of thousands in one single transaction. I've no doubt other
people do even larger transactions.
I would assume the problem lies elsewhere.
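The batching idea above looks roughly like this in Python's sqlite3 module (the table name is made up): wrapping the whole batch in one explicit transaction means a single COMMIT, and hence one fsync, instead of one per INSERT.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE readings (id INTEGER PRIMARY KEY, v REAL)")

rows = [(None, float(i)) for i in range(100_000)]

# One transaction around the whole batch: BEGIN ... COMMIT once,
# rather than an implicit transaction per statement.
with con:
    con.executemany("INSERT INTO readings VALUES (?, ?)", rows)

n = con.execute("SELECT count(*) FROM readings").fetchone()[0]
print(n)  # 100000
```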
What error message are you getting? What's the OS, the environment, disk,
We have production databases with 30-50GB and have no issues with
managing them. Mind you we have more memory, though we only have 8GB as
thats all our VM provider allows us.
After saying that we have never seen any performance issues that weren't
due to our schemas or our bad design. Our
or the right job etc etc.
Rob
On 2 Oct 2016, at 22:47, Domingo Alvarez Duarte wrote:
Hello Rob !
How do you use those big databases ?
Append only ? Read only ? Constant read write ? Foreign keys ?
Could you show the schema to talk about it ?
Cheers !
On 02/10/16 17:44, Rob Willett wrote:
W
John,
There is a lot of documentation on the SQLite website.
Here's the 'official' docs on creating a table
https://www.sqlite.org/lang_createtable.html
An SQLite database consists of many tables. I am unsure if there is an
upper limit; if there is, it's more tables than I have ever created.
Simon,
We use the Perl DBD module all the time. What I would expect to see is
(IGNORE THE LINE BREAKS)
my $sth = $dbh->prepare("INSERT INTO
func_begin_change(
author,
author_contact,
author_contact_method,
author_shortname,
if we have made the
same mistake elsewhere.
Thanks for your help, we can post the bytecode if people are interested.
Rob
On 17 Mar 2017, at 11:41, Rob Willett wrote:
Gunter,
I would never presume to describe anybody as a Nerd!
We're just going back to the very first position with the
three international rugby
games tomorrow.
Rob
On 17 Mar 2017, at 18:15, Simon Slavin wrote:
On 17 Mar 2017, at 5:30pm, Rob Willett <rob.sql...@robertwillett.com>
wrote:
echo "select * from Disruptions where status = 2 OR status = 6;" |
sqlite3 tfl.sqlite > /dev/null
.
Thanks very much for the help so far. Tomorrow is more tricky but I'll
read up on covering indexes to see how to use them.
Rob
On 17 Mar 2017, at 18:39, Simon Slavin wrote:
On 17 Mar 2017, at 6:22pm, Rob Willett <rob.sql...@robertwillett.com>
wrote:
4. Work through returning just th
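A covering index can be sketched as follows (the schema is a guess at the Disruptions table): when the index contains every column the query touches, SQLite answers from the index alone without visiting the table rows, and the query plan says so.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE Disruptions (id INTEGER PRIMARY KEY,"
            " status INTEGER, category INTEGER)")
# The index holds both columns the query needs, so the table itself
# never has to be read.
con.execute("CREATE INDEX d_cover ON Disruptions (status, category)")

plan = con.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT status, category FROM Disruptions WHERE status = 2"
).fetchall()
print(plan)  # detail mentions a covering index
```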
, so thanks for all the help in getting us to this stage.
We have kept copies of the query planner bytecode output if anybody is
interested. Gunter has had copies, but if anybody else would like them,
please ask.
Many thanks again for all the help,
Rob
On 17 Mar 2017, at 22:12, Rob Willett
David,
We're English and masters of the understatement ;)
Rob
On 20 Mar 2017, at 15:04, David Raymond wrote:
"... Our query has moved from 32 mins to 54 secs.
We're quite happy with that performance increase."
I have to admit that the hearty chuckle which that statement produced
from me
unneeded fields and adding
in a covering index), we went down to 38 secs.
This has meant we are no longer looking at a second database just for
analytics, we just needed to learn to use the database we already had :)
Rob
On 18 Mar 2017, at 5:48, Rob Willett wrote:
We've just implemented
n them twice to see the
effect. No idea how long this will take but suspect a few hours :) I
will post back the results as other people may (or may not) find this
helpful.
Thanks
Rob
On 17 Mar 2017, at 10:57, Hick Gunter wrote:
On 17 Mar 2017, at 10:20am, Rob Willett
<rob.sql...@robertwillett.co
ce for each status value, whereas I expect the query with
UNION ALL to have 2 copies of the search (which would not affect the
run time) and maybe even a temporary table of results (which would
take longer and use more memory).
-Original Message-
Von: sqlite-users
[mailto:sqlit
Hi,
We've been struggling with a complex query that we have written. One of
the elements of this complex query is a select statement that doesn't
appear to use an index when we *think* it should do. We're not going to
ask you to debug a large complex SQL query (unless you have nothing else
:
On 17 Mar 2017, at 10:20am, Rob Willett <rob.sql...@robertwillett.com>
wrote:
CREATE INDEX "Disruptions_idx4" ON Disruptions ("status" COLLATE
NOCASE ASC);
[…]
As part of the larger more complex query, we are executing the query
```
select * from Disruptions wher
Vishal,
SQLite isn't a traditional client/server relational database, therefore
there isn't a port to open up. It runs on a local machine.
Now there are wrappers around SQLite to extend it, I assume this ODBC
driver is one of them.
I suspect people here *may* know the answer regarding any
Rob
On 10 Jul 2017, at 14:14, Igor Korot wrote:
Rob,
On Mon, Jul 10, 2017 at 7:06 AM, Rob Willett
<rob.sql...@robertwillett.com> wrote:
Vishal,
SQLite isn't a traditional client/server relational database,
therefore
there isn't a port to open up. It runs on a local machine.
I be
Hi,
We've encountered what we think is an odd situation and we can't find
any explanation for why this is. We're also not sure if its a problem or
not.
A brief summary is that we are doing a major database upgrade and are
doing (lots) of testing. The box is a Linux box running Ubuntu 14.04