Hi team!
I've spent some time and effort over the last week polishing the URL parser.
The current work in progress can be seen here:
https://github.com/curl/curl/pull/9408
To measure the raw parsing performance, I wrote up this benchmark app I call
speedparse.c:
https://gist.github.com/bagder/dac962c022171a212c54221afc4b99d2
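For anyone who wants a feel for what the benchmark measures without opening the
gist, here is a minimal sketch of the same idea, using only libcurl's public
curl_url API. To be clear: this is not speedparse.c itself, and the URL list
and iteration count below are just made up for illustration.

/* Minimal URL parsing micro-benchmark sketch, in the same spirit as
 * speedparse.c (the real program is in the gist above). Uses only the
 * public libcurl URL API (curl_url, curl_url_set, curl_url_cleanup).
 * Build with something like:
 *   cc bench.c -o bench $(curl-config --cflags --libs)
 */
#include <stdio.h>
#include <time.h>
#include <curl/curl.h>

static const char *testurls[] = {
  "https://example.com/path/to/file?name=value#fragment",
  "http://user:secret@localhost:8080/index.html",
  "ftp://ftp.example.org/pub/archive.tar.gz"
};
#define NURLS (sizeof(testurls)/sizeof(testurls[0]))
#define ROUNDS 100000

int main(void)
{
  clock_t start = clock();
  size_t r, i;
  for(r = 0; r < ROUNDS; r++) {
    for(i = 0; i < NURLS; i++) {
      CURLU *h = curl_url();
      if(!h)
        return 1;
      /* the actual parse happens inside curl_url_set() */
      if(curl_url_set(h, CURLUPART_URL, testurls[i], 0))
        fprintf(stderr, "failed to parse %s\n", testurls[i]);
      curl_url_cleanup(h);
    }
  }
  printf("%zu parses in %.3f seconds\n", (size_t)ROUNDS * NURLS,
         (double)(clock() - start) / CLOCKS_PER_SEC);
  return 0;
}

The real speedparse.c of course differs in the details; the sketch is only
meant to show the shape of the measurement: parse a fixed set of URLs many
times and time the whole loop.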
Before I consider merging this work, I figured I would ask for some additional
feedback, both on the general approach and on the benchmarking program.
I've managed to significantly reduce both the number and the sizes of the
needed allocations. The parser also runs slightly faster now, of course with
all existing behaviors and support intact.
I am interested in knowing
1. if you think we should extend the test program with more/better/different
URLs
2. how the performance of this patch looks for others; I have a fairly old
Intel Core i7, so more data points would be valuable.
3. whatever else you think is relevant! =)
--
/ daniel.haxx.se
| Commercial curl support up to 24x7 is available!
| Private help, bug fixes, support, ports, new features
| https://curl.se/support.html