> On Sep 12, 2019, at 12:33 PM, mailing lists wrote:
>
> the application also runs on mobile devices and the insertion (excluding
> indexing) also takes a couple of minutes. This means that the user might
> switch to another app during insertion and - in the worst case - the system is
Hi,
> Am 2019-09-12 um 10:55 schrieb Keith Medcalf :
>
>
> On Tuesday, 10 September, 2019 09:26, mailing lists
> wrote:
>
>> I cannot really put all the inserts into one transaction because in case of
>> a failure I lose all the already inserted data.
>
> Why is that important? Cannot
On Tuesday, 10 September, 2019 09:26, mailing lists
wrote:
>I cannot really put all the inserts into one transaction because in case of
>a failure I lose all the already inserted data.
Why is that important? Can't you just load it again from whence it came in
the first place on failure?
On Tue, Sep 10, 2019 at 05:25:38PM +0200, mailing lists wrote:
> Hi,
>
> I cannot really put all the inserts into one transaction because in case of a
> failure I lose all the already inserted data. Though I made some tests.
> There is hardly any performance gain anymore when doing 1000 or 10 000
On 10 Sep 2019, at 4:02pm, mailing lists wrote:
> Insertion really slows down after about 100 000 items have been inserted. I
> suppose that the slowdown is related to indexing because:
>
> [...]
> c) changing the cache size has only a minor impact
SQLite speed does degrade with table size,
On Tuesday, 10 September, 2019 09:26, mailing lists
wrote:
>I cannot really put all the inserts into one transaction because in case of
>a failure I lose all the already inserted data. Though I made some tests.
>There is hardly any performance gain anymore when doing 1000 or 10 000
On 9/10/19, mailing lists wrote:
> What is the background that index creation is so much faster than insertion
> using indices?
Indexes are maintained in key order. So inserts are happening at
random spots all across the index. For each insert, the system has to
(1) read a 4096-byte page, (2)
Hi,
I cannot really put all the inserts into one transaction because in case of a
failure I lose all the already inserted data. Though I made some tests. There
is hardly any performance gain anymore when doing 1000 or 10 000 insertions in
one transaction including immediate insertion into
On 9/10/19, mailing lists wrote:
> So, the best solution I found so far is to disable indexing during insertion
> and to index the table afterwards
I think that is the best solution. Be sure to also do all of your
inserts (and the CREATE INDEX statements) inside of a transaction.
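The advice above — bulk-insert into an unindexed table, then build the index, all inside one transaction — can be sketched with Python's sqlite3 module. The table and index names here are invented for illustration; the point is that CREATE INDEX builds the B-tree once in key order instead of updating it for every row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Hypothetical schema standing in for the small records discussed above.
conn.execute("CREATE TABLE items (a INTEGER, b TEXT, c REAL)")

rows = [(i, "name%d" % i, i * 0.5) for i in range(100_000)]

# One transaction: bulk-insert first, build the index afterwards.
with conn:  # opens a transaction, commits on success
    conn.executemany("INSERT INTO items (a, b, c) VALUES (?, ?, ?)", rows)
    conn.execute("CREATE INDEX idx_items_b ON items(b)")

print(conn.execute("SELECT COUNT(*) FROM items").fetchone()[0])  # prints 100000
```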
--
D.
Hartwig,
You have got most of the tricks we know about. Other more experienced
developers may provide a better insight.
We had to move about 60 GB of table data around, and we ended up doing
what you have done, with one extra bit: we batched the jobs up in
multiples of 10,000 between BEGIN and
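The batching idea can be sketched as follows — a minimal Python example, assuming a hypothetical `items` table; a failure mid-run then loses at most the current uncommitted batch rather than everything inserted so far:

```python
import sqlite3

conn = sqlite3.connect(":memory:", isolation_level=None)  # manual transaction control
conn.execute("CREATE TABLE items (a INTEGER, b TEXT)")

def batched_insert(conn, rows, batch_size=10_000):
    """Commit every batch_size rows between explicit BEGIN/COMMIT pairs."""
    cur = conn.cursor()
    for start in range(0, len(rows), batch_size):
        cur.execute("BEGIN")
        try:
            cur.executemany("INSERT INTO items (a, b) VALUES (?, ?)",
                            rows[start:start + batch_size])
            cur.execute("COMMIT")
        except sqlite3.Error:
            cur.execute("ROLLBACK")  # only the current batch is lost
            raise

batched_insert(conn, [(i, "row%d" % i) for i in range(25_000)])
```

The batch size trades throughput against how much work a failure can discard; 10,000 is simply the figure used above.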
I have the following situation:
- in one table relatively small data is inserted (100 bytes per record)
- this table contains three indices
- about 100 million or more records have to be inserted
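A schema matching that description might look like the following sketch — the column names are invented, since the post only specifies roughly 100-byte records and three indices:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Illustrative only: small records with three secondary indices.
conn.execute("""
    CREATE TABLE records (
        id   INTEGER PRIMARY KEY,  -- the rowid, not a separate index
        k1   INTEGER,
        k2   TEXT,
        data BLOB                  -- pads the record to ~100 bytes
    )
""")
for col in ("k1", "k2", "data"):
    conn.execute("CREATE INDEX idx_%s ON records(%s)" % (col, col))
```

Each of the three indices is a separate B-tree that must be updated per insert, which is why their maintenance dominates once the trees no longer fit in the page cache.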
Insertion really slows down after about 100 000 items have been inserted. I
suppose that the slowdown is related to indexing