Given this DB schema (simplified, there are other columns):
CREATE TABLE History (
    history_ID INTEGER PRIMARY KEY,
    gameCount  INTEGER,
    weekday    INTEGER,  /* 0=Sunday, 6=Saturday */
    hour       INTEGER,  /* 0..23 */
    datetime   INTEGER   /* unix datetime */
);
CREATE INDEX Idx_week
YES. AFAIK if SQLite detects that the rows are/can be made to be returned in
GROUP BY order it can use internal variables to accumulate the group results.
This is expected to be significantly faster than locating and updating a
temporary BTree row for each record scanned.
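A small illustration of the difference (table and index names taken from the schema above; this is a sketch, not the poster's exact query). With a suitable index, the query plan avoids the "USE TEMP B-TREE FOR GROUP BY" step because rows already arrive in group order:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE History (
        history_ID INTEGER PRIMARY KEY,
        gameCount INTEGER,
        weekday INTEGER,
        hour INTEGER
    );
    CREATE INDEX Idx_week ON History(weekday);
""")

# With Idx_week, SQLite can scan the index in weekday order and
# accumulate each group's result as it goes, instead of building a
# temporary B-tree to collect the groups.
plan = conn.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT weekday, count(*) FROM History GROUP BY weekday"
).fetchall()
plan_text = " ".join(row[-1] for row in plan)
print(plan_text)  # mentions Idx_week; no "USE TEMP B-TREE FOR GROUP BY"
```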
-Original Message-
Say table1 has more than 50 records, and there is a second table
tmp_keep with the ids of the records in table1 to be kept; the rest need to
be deleted. The number of records in tmp_keep can vary from 0 to all the
records in table1, with any value in between.
What is the best strategy for doing this?
On 2017/03/03 12:53 PM, Dave Blake wrote:
Say table1 has more than 50 records, and there is a second table
tmp_keep with the ids of the records in table1 to be kept; the rest need to
be deleted. The number of records in tmp_keep can vary from 0 to all the
records in table1, with any values i
Dave Blake wrote:
> For deleting a large number of records (tmp_keep is small), this works
> fine:
> DELETE FROM table1 WHERE id1 NOT IN (SELECT id1 FROM tmp_keep);
>
> But this becomes inefficient when tmp_keep is large.
SQLite usually creates a temporary index for the values in the IN clause.
Ho
Jeffrey Mattox wrote:
> is my index on weekday worthwhile, time-wise and space-wise? (Query
> speed is not a big issue for me, and the DB is relatively small
Indexes are optimizations. In a small DB, the effect is probably not
noticeable, which implies that you should not bother.
Where exactly
Could be we keep almost all the records, so ~50, but it varies greatly;
sometimes we will keep just 10. I can adjust the approach depending on size
if necessary.
Yes the id1 are integer primary keys.
Table1 has a number of indexes and views, so the create-new-table approach
is less attractive.
___
Fossil is the version control system (VCS) written specifically to
support SQLite. See https://www.fossil-scm.org/ for details.
Version 2.0 of Fossil has just been released. The 2.0 release
supports the use of SHA3-256 hashes for naming files and check-ins in
the repository. SHA1 hashes are als
DELETE FROM table1 WHERE NOT EXISTS
  (SELECT 1 FROM tmp_keep WHERE tmp_keep.id1 = table1.id1);
This does it in a single pass, running a correlated subquery against each row
in table1 to check whether its id is in tmp_keep.
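A quick runnable sketch of this approach (the table names follow the thread; the sample ids are made up):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id1 INTEGER PRIMARY KEY)")
conn.execute("CREATE TEMP TABLE tmp_keep (id1 INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?)", [(i,) for i in range(1, 11)])
conn.executemany("INSERT INTO tmp_keep VALUES (?)", [(2,), (5,), (9,)])

# One pass over table1; for each row the correlated subquery checks
# whether that id is present in tmp_keep.
conn.execute(
    "DELETE FROM table1 WHERE NOT EXISTS "
    "(SELECT 1 FROM tmp_keep WHERE tmp_keep.id1 = table1.id1)"
)
remaining = [r[0] for r in conn.execute("SELECT id1 FROM table1 ORDER BY id1")]
print(remaining)  # [2, 5, 9]
```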
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite
On 3 Mar 2017, at 10:53am, Dave Blake wrote:
> DELETE FROM table1 WHERE id1 NOT IN (SELECT id1 FROM tmp_keep);
>
> But this becomes inefficient when tmp_keep is large.
Do you have an appropriate index on tmp_keep ?
CREATE INDEX tk_id1 ON tmp_keep (id1)
then try it again.
Simon.
Dave Blake wrote:
> Yes the id1 are integer primary keys.
In both tables? If yes, then there is not much you can do, and the
problem is probably all the modifications made to the actual table
and its indexes.
Regards,
Clemens
___
sqlite-users maili
Thanks all for your input, it has really helped.
In my real world application tmp_keep is a temporary table populated by
examining a number of other tables etc., and I suddenly realised that it
could even contain duplicate ids. Sloppy thinking on my part.
I get the best results by creating ano
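The message is cut off above, but one way to make the delete robust against duplicate ids is to deduplicate into a second keep-table with a real primary key first. A sketch (the name keep2 and the sample data are hypothetical, not from the original post):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE table1 (id1 INTEGER PRIMARY KEY)")
conn.execute("CREATE TEMP TABLE tmp_keep (id1 INTEGER)")
conn.executemany("INSERT INTO table1 VALUES (?)", [(i,) for i in range(1, 6)])
# tmp_keep may contain duplicates
conn.executemany("INSERT INTO tmp_keep VALUES (?)", [(1,), (3,), (3,), (3,)])

# Deduplicate into a second temp table with a proper primary key,
# then delete against that.
conn.executescript("""
    CREATE TEMP TABLE keep2 (id1 INTEGER PRIMARY KEY);
    INSERT OR IGNORE INTO keep2 SELECT id1 FROM tmp_keep;
    DELETE FROM table1 WHERE id1 NOT IN (SELECT id1 FROM keep2);
""")
remaining = [r[0] for r in conn.execute("SELECT id1 FROM table1 ORDER BY id1")]
print(remaining)  # [1, 3]
```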
Hi!
The attached ruby script demonstrates a feature of the newer sqlite3 libraries,
which may be a regression.
--
Vermes Mátyás
Hello,
Based on my reading of the documentation it seems like SQLite is a great
candidate for cases where you have a large number of threads that only need to
read a database simultaneously, in our case for fast data lookup, aggregation,
etc. I've been able to generate SQL queries that do this,
Might I suggest: https://www.w3schools.com/sql/
dvn
On 03/01/2017 09:02 AM, jose isaias cabrera wrote:
Ryan,
I just want to thank you for your kindness and display of goodwill to
mankind. This is a great response. I even learned something from this
post. Thanks so much for your responses.
On 3 Mar 2017, at 1:12pm, Vermes Mátyás wrote:
> The attached ruby script demonstrates a feature of the newer sqlite3
> libraries, which may be a regression.
Just a note that you cannot attach files to posts to this list. If your file
is text, just paste it into a message. Otherwise please
On Fri, Mar 3, 2017 at 1:13 PM, Andrew Brown wrote:
>
> Any tips to handle massively multithreaded side by side chunked
> queries on the same database?
In my (limited) experience, it seems that multithreaded SQLite
acquires a lock on a shared, in-memory b-tree every time a
statement is prepared.
> On Mar 3, 2017, at 2:52 PM, Josh Hunsaker wrote:
>
> Your problem might be mitigated if you could compile your queries in advance.
Precompiled statements are a must if you want the best performance (and
you're running the same queries over and over).
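As an illustration of reusing one compiled statement: Python's sqlite3 module keeps a small per-connection statement cache, so constant parameterized SQL is prepared once and reused (the table and the 1835-job figure below just echo the thread; this is a sketch, not the poster's code):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT)")

# One SQL string with a ? placeholder: the statement is compiled once
# and re-bound for each row, instead of being re-prepared 1835 times.
sql = "INSERT INTO jobs (payload) VALUES (?)"
conn.executemany(sql, [("job-%d" % i,) for i in range(1835)])

count = conn.execute("SELECT count(*) FROM jobs").fetchone()[0]
print(count)  # 1835
```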
Andrew, how many database handles are op
Well, in the situation I'm working with, my API is constructing 1835 small SQL
jobs to be run, and then passing them off to a structure in which 72 threads
are running, each with their own db connection (I assume that's what you mean
by a database handle, a DB connection, but please, correct me
I'd say you should consider switching to some sort of queue feeding a
worker pool, then experimenting with pool sizes. Often problems reward the
first few threads you add, but at some point additional threads become a
negative unless the system is specifically designed for high thread counts
(and
On 3 Mar 2017, at 11:47pm, Scott Hess wrote:
> I'd say you should consider switching to some sort of queue feeding a
> worker pool, then experimenting with pool sizes.
Agreed. Set up 5 threads which have actual access to the database. Have your
1835 jobs queue up their work for servicing by
On 03/01/2017 23:41, Clemens Ladisch wrote:
It would certainly be possible to add your own user-defined SQL function
to call sqlite3_blob_write().
I think this should be added to SQLite itself. Writing a portion of a blob
is very much like updating a field. There is currently no way to do this
> On Mar 3, 2017, at 3:51 PM, Simon Slavin wrote:
>
> Then mess with the '5' until you find a good value.
A common rule of thumb with thread pools is to allocate one thread per CPU
core.
—Jens
On Mar 3, 2017, at 1:13 PM, Andrew Brown
wrote:
>
> This is being run on 64 and 72 core machines, and the more cores I run it on,
> the slower it ends up going.
What happens if you rework the app to do only one thing, single-threaded, but
run between 64 and 108 instances in parallel on the sa
So in this case I'm running on a 72 core machine.
Also the databases have one table in them each... The goal here is to bring all
the cores to bear on the many queries - each grabbing the next query to be run
and running it, until finally there are no more chunks to run.
Then within my own apis
Does each thread have its own connection?
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Andrew Brown
> Sent: Friday, 3 March, 2017 13:14
> To: sqlite-users@mailinglists.sqlite.org
> Subject: [sqlite] Massively multithreaded
No, the good rule of thumb is to allocate one thread per CPU. Depending on the
particular multi-core CPU you "may" be able to use all the cores simultaneously
but in many cases diminishing returns will set in long before you can execute
one thread per core. If this is an Intel processor that
On Mar 3, 2017, at 5:51 PM, Keith Medcalf wrote:
>
> No, the good rule of thumb is to allocate one thread per CPU.
It depends on the workload. Parallel make (e.g. “make -jN” in GNU make)
typically improves in speed past N=core count to about 1.5x the core count.
SQLite seems like a similar ki
Yes, if they are lock bound, then they need the thread count which reduces
the locking overhead to the point where it's not degrading performance too
much. Though I guess the OP really didn't say that (more CPUs may spend more
time in spinlocks and still spend less wallclock time).
Ano
Yes, each thread has its own connection.
On Mar 3, 2017 4:45 PM, Keith Medcalf wrote:
Does each thread have its own connection?
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Andrew Brown
> Sent: Friday, 3 March, 2017 13:14
Now this is an interesting question. As it happens I /am/ blindly assuming the
OS would schedule it properly. I will look at my options this evening and see
about addressing that.
On Mar 3, 2017 4:52 PM, Keith Medcalf wrote:
No, the good rule of thumb is to allocate one thread per CPU. Dependi
On Mar 3, 2017, at 6:24 PM, Andrew Brown
wrote:
>
> I /am/ blindly assuming the os would schedule it properly.
Why would it? Windows won’t do that for you, either.
Now, if you’re using .NET’s thread pool mechanisms, it *should* be making sure
it doesn’t oversubscribe the cores too much, but
You might want to try enabling mmap mode:
pragma mmap_size = 4294967296;
or something like that. Try to make it larger than your databases. I'd
expect that if you're running with that many cores, you're _probably_
running in a 64-bit address space, so it'll probably work.
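A sketch of trying this from Python (the 256 MiB figure is an arbitrary smaller example; SQLite caps the request at its compile-time SQLITE_MAX_MMAP_SIZE, so read the pragma back to see what was actually granted — 0 means mmap is unavailable in that build):

```python
import os
import sqlite3
import tempfile

# mmap only applies to file-backed databases, so use a temp file.
fd, db_path = tempfile.mkstemp(suffix=".db")
os.close(fd)
conn = sqlite3.connect(db_path)

conn.execute("PRAGMA mmap_size = 268435456")   # request 256 MiB
granted = conn.execute("PRAGMA mmap_size").fetchone()[0]
print(granted)

conn.close()
os.remove(db_path)
```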
-scott
On Fri, Mar
Thanks, Don.
-Original Message-
From: don v nielsen
Sent: Friday, March 3, 2017 3:21 PM
To: SQLite mailing list
Subject: Re: [sqlite] UNION
Might I suggest: https://www.w3schools.com/sql/
dvn
On 03/01/2017 09:02 AM, jose isaias cabrera wrote:
Ryan,
I just want to thank you for you
As an aside, you may likely find that the OS will schedule multiple processes
far more efficiently than it schedules multiple threads (especially from the
perspective of scheduling the cores independently). You may find that it is
far more efficient to perhaps schedule a dozen processes with s
The write operation using the open sqlite3_blob object fails after some
other field in the same row is updated.
The testcase below illustrates the problem.
Yuri
---testcase---
#include <stdio.h>
#include <stdlib.h>
#include "sqlite3.h"

int main(int argc, char **argv) {
  if (argc < 2) {
Can you give an example of an Intel Processor SKU with half-assed cores?
There's HyperThreading of course, but I don't think anybody has ever considered
HyperThreading to be separate cores.
- Deon
-Original Message-
From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org
You must never have used a Microsoft Operating system ...
> -Original Message-
> From: sqlite-users [mailto:sqlite-users-boun...@mailinglists.sqlite.org]
> On Behalf Of Deon Brewis
> Sent: Saturday, 4 March, 2017 00:33
> To: SQLite mailing list
> Subject: Re: [sqlite] Massively multithrea