Hi Joe,
> Thanks for the reply. Just for reference, using seq is actually
> considerably slower. It ran in 39 seconds vs. 4 seconds.
Yeah, I tried it here too, but for me it's only 9 seconds vs. 6 seconds.
> I think it's
> because it has to look up every object from disk to get the value of 'id
> instead of using the index which is likely in memory.
Hi Alex,
Thanks for the reply. Just for reference, using seq is actually
considerably slower. It ran in 39 seconds vs. 4 seconds. I think it's
because it has to look up every object from disk to get the value of 'id
instead of using the index which is likely in memory. The index appears to
be stor
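To make the seq-vs-index comparison concrete, here is a sketch of the two
access paths (my own code, not from the original mails, assuming the
+Invoice class with its 'id (+Key +Number) index from this thread):

```picolisp
# Walk the B-tree index on 'id; the tree nodes are typically cached:
(iter (tree 'id '+Invoice)
   '((This) (println (: id))) )

# Scan every object sequentially from the database file on disk:
(for (Obj (seq *DB) Obj (seq Obj))
   (println (get Obj 'id)) )
```

The first form only touches the index; the second has to fetch each
object's block from the file, which would explain the timing difference.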
Hi Joe,
> Thank you. That sped it up. It's taking 69 seconds to insert 1M records
>
> (pool "foo.db")
> (class +Invoice +Entity)
> (rel id (+Key +Number))
> (zero N)
> (bench (do 1000000 (new (db: +Invoice) '(+Invoice) 'id (inc 'N))))
> (commit)
You can further speed it up if you distribute obj
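If "distribute obj..." refers to spreading the objects and their index
trees over separate database files (PicoLisp's dbs/pool mechanism), a
sketch might look like this — purely hypothetical, with illustrative
block-size scales and file path:

```picolisp
(dbs
   (3 +Invoice)          # one file for the +Invoice objects
   (2 (+Invoice id)) )   # a separate file for the 'id index tree
(pool "db/invoice/" *Dbs)
```

Keeping the index tree in its own file can reduce contention between
object writes and index updates.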
Thank you. That sped it up. It's taking 69 seconds to insert 1M records
(pool "foo.db")
(class +Invoice +Entity)
(rel id (+Key +Number))
(zero N)
(bench (do 1000000 (new (db: +Invoice) '(+Invoice) 'id (inc 'N))))
(commit)
I can work with that. Now I am testing out queries.
? (bench (iter (tree
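The benchmarked query above is cut off; a full iteration over the 'id
index could look something like this (my guess at a typical form, not
necessarily what was actually run):

```picolisp
? (bench (iter (tree 'id '+Invoice) '((This) (: id))))
```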
On Wed, May 30, 2012 at 12:28:50PM +0700, Henrik Sarvell wrote:
> Use new and chunk it up:
>
> (dbSync)
> (for A As
>    (at (0 . 1000) (commit 'upd) (prune) (dbSync))
>    (new (db: +Article) '(+Article) key1 value1 key2 value2 ... ))
> (commit 'upd)
>
> With new! you are locking
It depends, of course. In the rare case where you actually need each
row to be durably on disk before writing the next one, the original
approach was correct; but flushing each row takes time in SQL
databases too. (Google for "Transactions Per Minute".)
best,
Jakob
On May 30, 2012 at 7:
Use new and chunk it up:
(dbSync)
(for A As
   (at (0 . 1000) (commit 'upd) (prune) (dbSync))
   (new (db: +Article) '(+Article) key1 value1 key2 value2 ... ))
(commit 'upd)
With new! you are locking and writing every row, so it should only be used
in cases where you know you are only