Nim's standard library JSON parsing is kind of slow. That is why I made jsony (
<https://github.com/treeform/jsony> )
import times, os, std/json, jsony

type OptionContract* = ref object
  id*: string
  right*: string
  expiration*: string
  strike_raw*: float
  premium_raw*: float
  data_type*: string

type OptionChain* = object
  contracts*: seq[OptionContract]

proc stub_data(): OptionChain =
  result = OptionChain()
  for _ in 1 .. 6000:
    result.contracts.add OptionContract(
      id: "AMZN CALL 2021-03-19 1460.0 USD",
      right: "call",
      expiration: "2021-03-19",
      strike_raw: 1460.0,
      premium_raw: 1676.03,
      data_type: "some type"
    )

let json_str = stub_data().toJson()
let time = cpuTime()
for i in 1 .. 5:
  discard json_str.fromJson(OptionChain)
echo "Time taken: ", cpuTime() - time, " sec"
Time taken: 0.042 sec
While TypeScript on my machine takes:
Time taken: 0.033 sec
Hmm, TypeScript is still faster. Nim's standard JSON parser first creates
intermediate JSON nodes and then turns them into Nim types, causing each object
and string to be allocated twice. This is slow. Jsony reads JSON directly and
allocates objects once, kind of like TypeScript does.
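To illustrate the difference, here is a minimal sketch (the type and values are just for illustration, not from the benchmark above) of the two approaches side by side:

```nim
import std/json, jsony

type Point = object
  x, y: float

let raw = """{"x": 1.0, "y": 2.0}"""

# std/json: two steps, two rounds of allocation --
# a JsonNode tree is built first, then converted to a Point.
let viaStd = raw.parseJson().to(Point)

# jsony: one step -- the Point is filled in directly
# while scanning the input, no intermediate tree.
let viaJsony = raw.fromJson(Point)

assert viaStd == viaJsony
```

For a tiny object the difference is negligible, but over thousands of contracts the extra JsonNode tree and duplicated strings add up.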
Jsony could be faster at this benchmark if I had written a custom float
parser. This is something I have not done yet. The current float parser
allocates some garbage that slows it down.
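For what it's worth, jsony already has an extension point for plugging in custom parsers: overload parseHook for a type, and a faster float parser would slot in the same way. A hedged sketch along the lines of the pattern in jsony's README (the Expiration type and field layout here are made up for illustration):

```nim
import jsony, std/strutils

type Expiration = object
  year, month, day: int

# Overloading parseHook lets jsony parse "2021-03-19" straight
# into structured fields instead of keeping it as a string.
proc parseHook*(s: string, i: var int, v: var Expiration) =
  var str: string
  parseHook(s, i, str)            # reuse the built-in string hook
  v.year = parseInt(str[0 .. 3])
  v.month = parseInt(str[5 .. 6])
  v.day = parseInt(str[8 .. 9])

let e = "\"2021-03-19\"".fromJson(Expiration)
assert e == Expiration(year: 2021, month: 3, day: 19)
```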
The benchmarking could use work: the discard there might allow the compiler to
just throw away the results and whole computations. I also recommend running a
benchmark multiple times to get the min, average, and standard deviation. I
have written a library for this called benchy ( <https://github.com/treeform/benchy>
). Using benchy:
timeIt "parsing":
  for i in 1 .. 5:
    keep json_str.fromJson(OptionChain)
name ............................... min time avg time std dv runs
parsing ........................... 40.118 ms 42.083 ms ±1.599 x118
I think I could get Nim to beat TypeScript if I worked on the float parser.