Re: db new! performance

2012-05-30 Thread Alexander Burger
On Wed, May 30, 2012 at 12:28:50PM +0700, Henrik Sarvell wrote: Use new and chunk it up: (dbSync) (for A As (at (0 . 1000) (commit 'upd) (prune) (dbSync)) (new (db: +Article) '(+Article) key1 value1 key2 value2 ... )) (commit 'upd) With new! you are locking and
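The one-line recipe quoted here expands to roughly the following self-contained sketch. The +Article class, its 'id and 'nm properties, the sample data and the single-file pool name are assumptions for illustration; Henrik's (db: +Article) presupposes a multi-file 'dbs' layout, so the sketch simply uses T (first database file) instead:

   # Run under 'pil', which pre-loads @lib/db.l.
   # Hypothetical schema; the real one is not shown in the thread.
   (class +Article +Entity)
   (rel id (+Key +Number))          # unique numeric key, indexed
   (rel nm (+String))               # plain string property

   (pool "articles.db")             # single-file pool (assumed name)

   (dbSync)                         # take the DB lock once, up front
   (for N 100000                    # insert 100000 sample rows
      (at (0 . 1000)                # every 1000 objects ...
         (commit 'upd)              # ... flush dirty objects to disk,
         (prune)                    # ... prune the object cache,
         (dbSync) )                 # ... and re-acquire the lock
      (new T '(+Article)            # T = first DB file; Henrik uses (db: +Article)
         'id N
         'nm (pack "article-" N) ) )
   (commit 'upd)                    # final commit releases the lock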

Re: db new! performance

2012-05-30 Thread Joe Bogner
Hi Alex, Thanks for the reply. Just for reference, using seq is actually considerably slower. It ran in 39 seconds vs. 4 seconds. I think it's because it has to look up every object from disk to get the value of 'id instead of using the index which is likely in memory. The index appears to be
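The two access paths being compared look roughly like this; a sketch only, reusing the hypothetical +Article class and single-file pool assumed in the sketch above:

   # Variant 1: walk the data file with 'seq'. Every object has to be
   # loaded from disk just to read its 'id value.
   (make
      (for (Obj (seq *DB) Obj (seq Obj))
         (when (isa '+Article Obj)        # skip index nodes etc.
            (link (get Obj 'id)) ) ) )

   # Variant 2: collect the 'id values by walking the B-tree index instead.
   # Only the much smaller index blocks are read, and they tend to stay
   # cached in memory.
   (make
      (scan (tree 'id '+Article)
         '((Key Val) (link Key)) ) )      # Key is the 'id value, Val the object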

Re: db new! performance

2012-05-30 Thread Alexander Burger
Hi Joe, Thanks for the reply. Just for reference, using seq is actually considerably slower. It ran in 39 seconds vs. 4 seconds. Yeah, tried it here too. It is only 9 seconds vs. 6 seconds, though. I think it's because it has to look up every object from disk to get the value of 'id

db new! performance

2012-05-29 Thread Joe Bogner
I'm evaluating the use of picolisp for analyzing large datasets. Is it surprising that inserting a million rows into a simple db would take 5+ minutes on modern hardware? I killed it after about 500K rows were inserted. I checked by pressing Ctrl+C and then inspecting N. It seems to progressively get
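The kind of load loop described here presumably looks something like the following; a minimal reconstruction under the same assumed +Article schema as in the sketches above, not Joe's actual code:

   (pool "articles.db")

   (zero N)                         # row counter, inspected after Ctrl+C
   (do 1000000                      # one new! call per row
      (new! '(+Article)
         'id (inc 'N)
         'nm (pack "article-" N) ) )

Every iteration here acquires the database lock, creates a single object and commits it, which is where the per-row cost comes from (see the replies above).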

Re: db new! performance

2012-05-29 Thread Henrik Sarvell
Use new and chunk it up: (dbSync) (for A As (at (0 . 1000) (commit 'upd) (prune) (dbSync)) (new (db: +Article) '(+Article) key1 value1 key2 value2 ... )) (commit 'upd) With new! you are locking and writing every row so should only be used in cases where you know you are only
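The locking point can be pictured as follows: each new! call is in effect its own small transaction, while the chunked loop keeps one transaction open for a thousand rows at a time. A conceptual sketch, not the library's actual definition of new!:

   # Per-row transaction: what each new! call roughly amounts to.
   (dbSync)                           # acquire the DB lock
   (new T '(+Article) 'id 1 'nm "a")  # create one object (T = first DB file)
   (commit 'upd)                      # write it out and release the lock

   # The chunked loop quoted above pays this lock/commit cost only once
   # per 1000 rows, plus (prune) to keep the object cache small.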