What is the performance without the indexes?
On Sun, 29 Jul 2018 10:20:11 +0100
"Rob Willett" wrote:
> Update 1
>
> We've copied the disruptions table to a new table, dropped the old
> table, copied the new table back in and recreated all the indexes.
>
> Exactly the same commit performance.
On 2018/07/29 11:47 AM, Rob Willett wrote:
John,
Thanks for the prompt reply and a very good question.
We've dropped the indexes and the commit is now very quick, approx two
seconds.
However the overall performance of the run is much the same, as other
areas of the code are now significantly slower, whereas before they were quick.
Hi,
in an application I encountered a problem, where the changing of the
structure of a table by dropping and recreating it via one connection to
a db was not directly seen by another connection to the same db. I could
reproduce the problem with the test program below. Inserting a
On 2018/07/29 10:34 AM, Gerlando Falauto wrote:
Hi,
I'm totally new to sqlite and I'd like to use it for some logging
Welcome Gerlando. :)
application on an embedded
linux-based device. Data comes from multiple (~10), similar sources at a
steady rate.
The rolling data set would be in the
Rob,
Try creating the indexes without order.
Or, try making a column with a hash of the values entering the index and
searching for the value via the index on that hash. This will complicate the
logic of your program, but it will decrease the volume of the database, and
possibly significantly speed up both
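The hash-column idea above can be sketched as follows (a minimal illustration using Python's sqlite3; the table, column, and sample values are invented, and the full value is re-checked to guard against hash collisions):

```python
import sqlite3, hashlib

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE disruptions(location TEXT, loc_hash BLOB)")
# index the short fixed-width hash instead of the wide text column
conn.execute("CREATE INDEX idx_loc_hash ON disruptions(loc_hash)")

def put(location):
    h = hashlib.sha1(location.encode()).digest()
    conn.execute("INSERT INTO disruptions VALUES (?, ?)", (location, h))

def find(location):
    h = hashlib.sha1(location.encode()).digest()
    # look up by the hash, then re-check the full value so a
    # collision can never return a wrong row
    return conn.execute(
        "SELECT location FROM disruptions WHERE loc_hash = ? AND location = ?",
        (h, location)).fetchall()

put("A40 Westway")
print(find("A40 Westway"))  # [('A40 Westway',)]
```

The index stores 20-byte digests rather than full strings, which is where the space saving comes from.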
John,
Thanks for the prompt reply and a very good question.
We've dropped the indexes and the commit is now very quick, approx two
seconds.
However the overall performance of the run is much the same, as other
areas of the code are now significantly slower, whereas before they were
quick.
Hi,
I'm totally new to sqlite and I'd like to use it for some logging
application on an embedded
linux-based device. Data comes from multiple (~10), similar sources at a
steady rate.
The rolling data set would be in the size of 20 GB. Such an amount of
storage would suffice to retain data from
I've been testing the latest version of SQLite with 3 static analysis tools
(Infer, Clang Static Analyzer and Cppcheck) and they all reported a memory
leak error in shell.c .
The Infer error report is as follows:
shell.c:585: error: MEMORY_LEAK
memory dynamically allocated by call to
On 29 Jul 2018, at 14:07, Simon Slavin wrote:
> On 29 Jul 2018, at 12:56pm, Robert M. Münch
> wrote:
>
>> I want to add an autosave feature that stores some data at critical
>> points/specific intervals. This data of course should be committed to disk,
>> to survive any app crashes.
>
> Use a
On July 29, 2018 5:47:29 AM EDT, Rob Willett
wrote:
>John,
>
>Thanks for the prompt reply and a very good question.
>
>We've dropped the indexes and the commit is now very quick, approx two
>seconds
>
>However the overall performance of the run is much the same as other
>areas of the code are
When the user opens a file I start an outer-most transaction to be able to
revert the file. So far so good.
I want to add an autosave feature that stores some data at critical
points/specific intervals. This data of course should be committed to disk, to
survive any app crashes.
But how can I
On 29 Jul 2018, at 12:56pm, Robert M. Münch wrote:
> I want to add an autosave feature that stores some data at critical
> points/specific intervals. This data of course should be committed to disk, to
> survive any app crashes.
Use a SAVEPOINT instead of COMMIT:
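A minimal sketch of the SAVEPOINT mechanics (shown here through Python's sqlite3; the table and savepoint names are invented for illustration). Work rolled back to a savepoint disappears while work before it survives the eventual COMMIT:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None               # manage transactions by hand
conn.execute("CREATE TABLE doc(k TEXT PRIMARY KEY, v TEXT)")

conn.execute("BEGIN")                     # outer-most transaction
conn.execute("INSERT INTO doc VALUES ('title', 'draft')")

conn.execute("SAVEPOINT autosave")        # checkpoint at a critical point
conn.execute("INSERT INTO doc VALUES ('body', 'work in progress')")
conn.execute("ROLLBACK TO autosave")      # undo only work after the checkpoint
conn.execute("COMMIT")

print(conn.execute("SELECT k FROM doc").fetchall())  # [('title',)]
```

Note that nothing inside the still-open outer transaction is durable against a crash; only the final COMMIT reaches disk.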
> Ideally, I would like to have a way of "seeing" the whole dataset with a
> single query spanning all available databases.
I think swarmvtab may be helpful.
https://www.sqlite.org/swarmvtab.html
2018-07-29 10:34 GMT+02:00, Gerlando Falauto:
> Hi,
>
> I'm totally new to sqlite and I'd like to
Update 1
We've copied the disruptions table to a new table, dropped the old
table, copied the new table back in and recreated all the indexes.
Exactly the same commit performance.
We've also tracked the -shm and -wal files and they are around 5MB in
size.
Mmmm tricky (as Deep Thought
Thanks for the mail.
We ran ANALYZE with no indexes; it made no difference.
We recreated the indexes and ran analyze again. The very long commit is
back, this time it took 14 secs :)
It does appear that the indexes have something to do with this whereby
they have not been an issue to now.
Please try moving your COLLATE clauses into the table definition. e.g. instead
of
CREATE UNIQUE INDEX "Disruptions_Idx3" ON Disruptions ("version" COLLATE
NOCASE ASC, "Disruption_id" COLLATE NOCASE ASC, "location" COLLATE NOCASE ASC);
Your table definition should have
"version"
Hi,
Background
We've been refactoring our database to reduce the size of it. Through
some simple logic we've managed to pull out 99% of the data to reduce
the size from 51GB down to approx 600MB. This logic has been to remove
rows that are almost the same but not quite identical. As with all
Also, kindly clarify one bit - I'm not sure whether you use the word
"commit" as an easy substitute for the entire process of updating the DB
(i.e. all SQL INSERT/UPDATE code that runs up to and including the COMMIT
statement), or if you actually mean the "COMMIT" operation, because the
things
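The two readings of "commit" can be separated by timing them independently, roughly as below (a sketch using Python's sqlite3; on an in-memory database both phases are fast, but on a file-backed database the COMMIT line alone absorbs the journal/fsync cost):

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.isolation_level = None               # explicit transaction control
conn.execute("CREATE TABLE log(msg TEXT)")

conn.execute("BEGIN")
t0 = time.perf_counter()
for i in range(100_000):                  # the "entire update process"
    conn.execute("INSERT INTO log VALUES (?)", (f"row {i}",))
t_inserts = time.perf_counter() - t0

t0 = time.perf_counter()
conn.execute("COMMIT")                    # the COMMIT statement alone
t_commit = time.perf_counter() - t0

print(f"inserts: {t_inserts:.3f}s  commit: {t_commit:.3f}s")
```

Knowing which phase dominates narrows the diagnosis considerably: slow inserts point at index maintenance, a slow COMMIT at I/O and syncing.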
Anton,
Dropped the indexes and created them without order.
We'll need to look at what your second para means. It could be a major
and massive change.
Rob
On 29 Jul 2018, at 11:52, Djelf wrote:
Rob,
Try creating indexes without order.
Or, try to make a column with a hash of the values
On 7/29/18, Patricia Monteiro wrote:
> I've been testing the latest version of SQLite with 3 static analysis tools
> (Infer, Clang Static Analyzer and Cppcheck) and they all reported a memory
> leak error in shell.c .
The shell code has now been modified so that it calls exit(1) if
realloc()
First, thanks in advance for any help offered.
I'm pretty comfortable using sqlite but just now starting to develop
with virtual tables.
I'm running into troubles and I'm pretty sure it's because my mental
model of sqlite is wimpy.
I'm trying to build a tool that interfaces to C++ objects in
On 7/29/18, Jürgen Palm wrote:
> Hi,
>
> in an application I encountered a problem, where the changing of the
> structure of a table by dropping and recreating it via one connection to
> a db was not directly seen by another connection to the same db.
Detecting a schema change requires starting
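The visibility behaviour can be demonstrated roughly as follows (a sketch with Python's sqlite3 and an invented table name). Once the second connection begins a fresh statement, SQLite notices the changed schema cookie and re-reads the schema itself; a connection holding an open transaction will not see the change until that transaction ends:

```python
import sqlite3, os, tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn1 = sqlite3.connect(path)
conn2 = sqlite3.connect(path)

conn1.execute("CREATE TABLE t(x INTEGER)")
conn1.commit()

# conn2 picks up the new schema the next time it starts a statement
rows = conn2.execute(
    "SELECT name FROM sqlite_master WHERE type='table'").fetchall()
print(rows)  # [('t',)]
```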
Hi Ryan,
thank you for your reply.
>I think you are perhaps missing a core idea here - the only use-case
>that requires sharding is where you have very high write-concurrency
>from multiple sources, and even then, the sharding, in order to have any
>helpful effect, needs to distinguish "write
On 2018/07/30 12:39 AM, Gerlando Falauto wrote:
The question that needs to be answered specifically is: How many data
input sources are there? as in how many Processes will attempt to write
to the database at the same time? Two processes can obviously NOT write
at the same time, so if a
On the systems I use, for log files, you can't beat a text file for
integrity and speed.
I would suffer the slower queries and use something like the CSV
extension on your text log files.
On Mon, Jul 30, 2018 at 10:44 AM, Simon Slavin wrote:
> I have a particular logging task which is
>In the current use case there's a single process. The way I see it, in
>the near future it would probably increase to 3-4 processes,
>each doing 10-100 writes per second or so. Each write would be around
>1KB-20KB (one single text field, I guess).
>I wonder if writing data in batches would be
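Writing in batches can be sketched like this (an illustration via Python's sqlite3; source names and payload sizes are invented to mimic the ~10 sources and 1KB-20KB writes described). One transaction per batch means one journal sync per batch instead of one per INSERT:

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log(ts REAL, source TEXT, payload TEXT)")

# accumulate rows, then write them in a single transaction
batch = [(time.time(), f"source{i % 10}", "x" * 1024) for i in range(1000)]
with conn:                     # the context manager wraps a transaction
    conn.executemany("INSERT INTO log VALUES (?, ?, ?)", batch)

print(conn.execute("SELECT count(*) FROM log").fetchone()[0])  # 1000
```

The trade-off is durability granularity: rows buffered but not yet committed are lost on a crash, so the batch interval bounds the data at risk.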
I have a particular logging task which is time-critical, both in reading and
writing. It runs on Linux, but not a fast desktop computer, more like the slow
kind built into your WiFi router, with limited solid state storage. I can use
any version of the SQLite C API I want, and currently use a
On 29 Jul 2018, at 11:39pm, Gerlando Falauto wrote:
> In the current use case there's a single process. The way I see it, in
> the near future it would probably increase to 3-4 processes,
> each doing 10-100 writes per second or so. Each write would be around
> 1KB-20KB (one single text field, I
On 28 July 2018 at 05:41, Rune Torgersen wrote:
> > Rowan Worth Thu, 26 Jul 2018 22:02:50 -0700
> >
> > On 26 July 2018 at 05:56, Rune Torgersen wrote:
> >
> > > The databases have been opened with two connections (one for reads, one
> > > for writes), and use the following options:
> > >