On Wed, May 30, 2012 at 12:28:50PM +0700, Henrik Sarvell wrote:
> Use 'new' and chunk it up:
>
> (dbSync)
> (for A As
>    (at (0 . 1000) (commit 'upd) (prune) (dbSync))
>    (new (db: +Article) '(+Article) key1 value1 key2 value2 ... ) )
> (commit 'upd)
>
> With 'new!' you are locking and writing every row, so it should only
> be used in cases where you know you are only ...
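For what it's worth, that chunking pattern spelled out with comments (a sketch only: 'As' as a list of argument lists, and the key/value names, are placeholders assumed here, not given in the thread):

```picolisp
# Sketch of the chunked-insert pattern. Assumption (mine, not from the
# thread): 'As' is a list of argument lists for a defined +Article
# entity class, e.g. ((key1 val1 key2 val2) ...).
(dbSync)                            # lock and synchronize once up front
(for A As
   (at (0 . 1000)                   # every 1000th iteration:
      (commit 'upd)                 #   flush pending objects to disk
      (prune)                       #   release cached external symbols
      (dbSync) )                    #   re-lock for the next chunk
   (apply new A (db: +Article) '(+Article)) )  # create, no per-row lock
(commit 'upd)                       # final commit for the last chunk
```

The point of the '(at (0 . 1000) ...)' clause is that only one lock is taken per 1000 objects instead of one per row, and '(prune)' keeps the heap from filling up with cached external symbols during the import.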
Hi Alex,

Thanks for the reply. Just for reference, using 'seq' is actually
considerably slower: it ran in 39 seconds vs. 4 seconds. I think that is
because it has to look up every object from disk to get the value of
'id, instead of using the index, which is likely in memory. The index
appears to be ...
Hi Joe,

> Thanks for the reply. Just for reference, using seq is actually
> considerably slower. It ran in 39 seconds vs. 4 seconds.

Yeah, tried it here too. It is only 9 seconds vs. 6 seconds, though.

> I think it's because it has to look up every object from disk to get
> the value of 'id ...
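To make the difference concrete, the two access patterns might be sketched like this (assuming 'id is an indexed property of +Article; that setup is my assumption, not stated in the thread):

```picolisp
# Sketch, assuming an indexed 'id property on the +Article class.

# 'seq' walks the class's database file object by object, so every
# (get Obj 'id) has to load that object from disk:
(for (Obj (seq (db: +Article)) Obj (seq Obj))
   (get Obj 'id) )

# The index tree stores the 'id values as its keys, so they can be
# read without touching the objects themselves:
(scan (tree 'id '+Article))   # prints each key with its object
```

Since the index nodes are few and frequently touched, they tend to stay in the cache, which would explain the 39 s vs. 4 s gap.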
I'm evaluating the use of PicoLisp for analyzing large datasets. Is it
surprising that inserting a million rows into a simple DB would take 5+
minutes on modern hardware? I killed it after about 500K rows had been
inserted (I checked by hitting Ctrl-C and then inspecting N). It seems
to progressively get ...
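For reference, the slow per-row variant presumably looked something like this (a hypothetical reconstruction; 'As' and the key/value names are placeholders, not the actual code from the thread):

```picolisp
# Hypothetical reconstruction of the slow loop: 'new!' acquires a lock
# and commits on every single call, i.e. one synchronous write per row.
(zero N)
(for A As
   (new! '(+Article) key1 value1 key2 value2)   # lock + write + unlock
   (inc 'N) )                                   # N = rows inserted so far
```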