I'm evaluating PicoLisp for analyzing large datasets. Is it
surprising that inserting a million rows into a simple DB would take 5+
minutes on modern hardware? I killed the process after about 500K rows had
been inserted (I hit Ctrl+C and then inspected N to check). It seems to
get progressively slower after about 100K records.

(pool "foo.db")
(class +Invoice +Entity)
(rel nr (+Key +Number))
(zero N)
(do 1000000 (new! '(+Invoice) 'nr (inc 'N)))
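Could the per-record commit in new! be the bottleneck? Would batching the
commits help, e.g. something like this instead (untested sketch, using plain
'new' with T plus a periodic 'commit', and an arbitrary batch size of 1000)?

(pool "foo.db")
(class +Invoice +Entity)
(rel nr (+Key +Number))
(zero N)
(do 1000000
   (new T '(+Invoice) 'nr (inc 'N))  # allocate a DB object without committing
   (at (0 . 1000) (commit)) )        # commit every 1000th insert
(commit)                             # commit the final partial batch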

I'm just testing out the concept. My actual input will be a flat file of
invoice data (12+ million rows).

