I'm using Nim 0.15 with `-d:release` and `--opt:speed`.
My Python is 64-bit v2.7.12, without PyPy.
I need SQLite mainly because this is part of a desktop app, where any other SQL
server is not suitable. And by the way, this .db3 file is used only for read-only
operations; no creates, updates, or deletes are done there.
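For what it's worth, a read-only workload like this can also be enforced at the connection level with a `mode=ro` URI. A minimal Python 3 sketch (the `uri=True` flag needs Python 3.4+, and the path and table here are made-up stand-ins for the real mydb.db3):

```python
import os
import sqlite3
import tempfile

# Hypothetical throwaway path, standing in for the thread's mydb.db3.
path = os.path.join(tempfile.mkdtemp(), "mydb.db3")

# Build a tiny table first so there is something to read.
with sqlite3.connect(path) as conn:
    conn.execute("CREATE TABLE t (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO t VALUES (1, 'alpha')")

# mode=ro opens the file read-only: reads work as usual,
# but every INSERT/UPDATE/DELETE on this connection is rejected.
ro = sqlite3.connect("file:{}?mode=ro".format(path), uri=True)
row = ro.execute("SELECT name FROM t WHERE id = 1").fetchone()
write_failed = False
try:
    ro.execute("INSERT INTO t VALUES (2, 'beta')")
except sqlite3.OperationalError:
    write_failed = True
ro.close()
```

This guards against accidental writes and documents the read-only intent in the code itself.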
Just curious about a few things:
* How do the Nim compiler settings (`-d:release --opt:speed`) affect this
benchmark? Did you make any tweaks to nim.cfg that might explain the slow
performance? Are you using PyPy for Python?
* Why not use PostgreSQL? Sometimes staying with SQLite is perfectly fine.
Thank you for the support, guys.
The odd statements surrounding the SQL operations were added just for the sake
of benchmarking against Python.
Anyway, optimizing the CSV handling brought no real performance gain, since the
file is not that big. Comparing the elapsed time of Nim and
I'm just starting with Nim, and nimx looks like it has great potential! The
WebGL demo is very impressive; keep it up!
FYI, I got nimble to work by building it from source, and I also had to run
nimble.exe from the checked-out directory rather than from ~/.nimble/...
Hats off to everyone involved in the Nim project, and I look forward to
discovering more about it.
You could profile both the Python and Nim code and compare the percentage of
run time spent on each line between the two languages.
If you do that then I'd like to see the results :)
Yes, Stefan's solution is somewhat nicer; I didn't think of that. If more data
is involved, `split` may be further accelerated by setting `maxsplit`:
let (lookup_id, lookup_name_source) = p.rowEntry(col).split('|', 2)[0..1]
The `split` was the only optimization I had discovered myself, just before
flyx's detailed answer.
But we do not need a template. This should work too:
var lookup_id, lookup_name_source: string
(lookup_id, lookup_name_source) = p.rowEntry(col).split('|')[0..1]
I cannot really answer whether there are missed optimization opportunities in
Nim's SQLite wrapper, but since it is just a wrapper, that seems unlikely.
There are some things about your code I can comment on:
lookup_id = p.rowEntry(col).split('|')[0]
lookup_name_source =
Greetings,
I have a big production SQLite database of 4 GB+ with around 12 million records
(named mydb.db3).
The code below looks up 10 records (from lookup_input.csv) in mydb.db3 using
SQLite's FTS feature.
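For readers who want to reproduce the shape of such a lookup, here is a minimal FTS sketch using Python's sqlite3 module. The table name, columns, and rows are all assumptions (the thread never shows the real schema), and FTS5 availability depends on how the local SQLite library was built:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical schema; the thread's real table lives inside mydb.db3.
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(name, source)")
conn.executemany(
    "INSERT INTO docs VALUES (?, ?)",
    [("alpha widget", "catalog"),
     ("beta gadget", "catalog"),
     ("alpha gizmo", "archive")])

# MATCH uses the full-text index instead of scanning every row,
# which is what keeps lookups on a 12M-row table fast.
hits = conn.execute(
    "SELECT name FROM docs WHERE docs MATCH ? ORDER BY rank",
    ("alpha",)).fetchall()
```

Each CSV lookup term would be bound as the `MATCH` parameter in a loop, mirroring what the benchmarked code does against the big table.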