Re: [Ur] Drop of several orders of magnitude in Techempower benchmarks

2019-08-13 Thread Benjamin Barenblat
On Monday, August  5, 2019, at 10:17 PM +01, Oisín Mac Fhearaí wrote:
> I was able to [...] update the Dockerfile to build Ur/web from the
> latest release tarball [...] and compare the benchmarks with the version
> installed with apt from the Ubuntu repo. The version built from the
> latest release was over ten times faster, even running on my old laptop.

On Tuesday, August  6, 2019, at  1:59 PM -04, Adam Chlipala wrote:
> I've asked the Debian packager if he can think of some build-process
> change there that would have introduced a slowdown.

Hi, Debian packager here. Sorry for the latency of this reply – I hope
nobody’s been racking their brain trying to figure this one out.

This is in fact an issue with the Debian packaging. In particular,
upstream Ur/Web builds the runtime with -U_FORTIFY_SOURCE, but Debian
builds it with -D_FORTIFY_SOURCE=2. This causes a truly impressive
slowdown, as you can read about in https://bugs.debian.org/934722. I’m
going to do a bit more digging to see if I can understand the issue
further, but I expect I'll just end up downgrading the Debian package to
-D_FORTIFY_SOURCE=1, which should have no performance impact. Oisín, if
you want to check me, you can patch Ur/Web master with

--8<---------------cut here---------------start------------->8---
diff --git a/src/c/Makefile.am b/src/c/Makefile.am
index 95582793..ad8cbd3e 100644
--- a/src/c/Makefile.am
+++ b/src/c/Makefile.am
@@ -7,7 +7,7 @@ liburweb_fastcgi_la_SOURCES = fastcgi.c fastcgi.h
 liburweb_static_la_SOURCES = static.c
 
 AM_CPPFLAGS = -I$(srcdir)/../../include/urweb $(OPENSSL_INCLUDES) $(ICU_INCLUDES)
-AM_CFLAGS = -Wall -Wunused-parameter -Werror -Wno-format-security -Wno-deprecated-declarations -U_FORTIFY_SOURCE $(PTHREAD_CFLAGS)
+AM_CFLAGS = -Wall -Wunused-parameter -Werror -Wno-format-security -Wno-deprecated-declarations -D_FORTIFY_SOURCE=1 $(PTHREAD_CFLAGS)
 liburweb_la_LDFLAGS = $(AM_LDFLAGS) $(OPENSSL_LDFLAGS) \
 	-export-symbols-regex '^(client_pruner|pthread_create_big|strcmp_nullsafe|uw_.*)' \
 	-version-info 1:0:0
--8<---------------cut here---------------end--------------->8---

and give the benchmarks another shot.
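
For background on what that flag changes, here is a minimal, stand-alone
sketch under the assumption of the usual gcc/glibc toolchain; it
illustrates the general mechanism of _FORTIFY_SOURCE, not the specific
slowdown investigated in bug 934722, and the file name, struct, and
strings are made up for the example.

/* fortify_demo.c -- illustrative only; not Ur/Web runtime code.
 *
 *   gcc -O2 -U_FORTIFY_SOURCE   fortify_demo.c   # plain memcpy
 *   gcc -O2 -D_FORTIFY_SOURCE=2 fortify_demo.c   # memcpy routed to __memcpy_chk
 *
 * Because the copy length below is only known at run time, the fortified
 * build checks it against the destination's size on every call and aborts
 * ("buffer overflow detected") if the argument is longer than 63 bytes;
 * the unfortified build performs an unchecked copy. */
#include <stdio.h>
#include <string.h>

struct reply {
    char header[16];
    char body[64];
};

int main(int argc, char **argv) {
    struct reply r;
    const char *msg = argc > 1 ? argv[1] : "Hello, fortunes!";

    /* Length not known at compile time, so fortification adds a check here. */
    memcpy(r.body, msg, strlen(msg) + 1);
    printf("%s\n", r.body);
    return 0;
}

Level 2 mainly differs from level 1 in applying stricter object-size
bounds, which is why -D_FORTIFY_SOURCE=1 looks like a plausible middle
ground between Debian's hardening defaults and upstream's
-U_FORTIFY_SOURCE.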

___
Ur mailing list
Ur@impredicative.com
http://www.impredicative.com/cgi-bin/mailman/listinfo/ur


Re: [Ur] Drop of several orders of magnitude in Techempower benchmarks

2019-08-11 Thread Oisín Mac Fhearaí
After more digging, I realised it's because Ur is converting all non-ASCII
characters into Unicode escapes, when the test expects straight UTF-8.
I made a small PR that emits UTF-8 directly for all printable characters,
escaping only the ones that genuinely need it (like < and &):

https://github.com/urweb/urweb/pull/177

It now passes the Fortunes test and the others on that site, but I know
very little about Unicode, Ur/Web and C, so your feedback on it would be
most welcome!
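
For readers who want to see the shape of the change, here is a rough,
self-contained C sketch of the general approach: escape only the
HTML-significant ASCII characters and pass every other byte, including
multi-byte UTF-8 sequences, through untouched. It is illustrative only
and is not the code from PR #177.

/* html_escape_sketch.c -- illustrative only, not the PR's implementation. */
#include <stdio.h>

/* Write s, escaping &, <, > and " but copying every other byte verbatim,
 * so UTF-8 continuation bytes reach the output unchanged. */
static void write_html_escaped(FILE *out, const char *s)
{
    for (; *s; s++) {
        switch (*s) {
        case '&': fputs("&amp;", out); break;
        case '<': fputs("&lt;", out); break;
        case '>': fputs("&gt;", out); break;
        case '"': fputs("&quot;", out); break;
        default:  fputc(*s, out); break;
        }
    }
}

int main(void)
{
    /* The Japanese fortune from the failing diff survives intact. */
    write_html_escaped(stdout, "<b>12</b> & フレームワークのベンチマーク\n");
    return 0;
}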

Oisín

On Sun, 11 Aug 2019, 17:52 Karn Kallio,  wrote:

>
> > Hello! I used "git bisect" to find the commit that introduced
> > behaviour that causes Ur/Web to fail the Fortunes test in the
> > benchmarks. It's commit 5cc729b48aad084757a049b7e5cdbadae5e9e400 from
> > November 2018. Unfortunately that's a pretty big squashed commit from
> > a PR:
> >
> https://github.com/urweb/urweb/commit/5cc729b48aad084757a049b7e5cdbadae5e9e400
> >
> > It'd be great if someone could take a look and see why it strips UTF-8
> > output in that benchmark test. Note that the test runs in a Docker
> > container, so perhaps it's trying to infer a system-wide i18n setting?
> >
> > Once we fix this, we can update the benchmarks repo and solve the
> > sorry state of affairs with Ur/Web way down the performance rankings.
> > I'd love to see more people active in the community, and things like
> > this would help raise awareness of the project.
> >
> > Oisín
> >
>
> I suspect this has to do with the difference between LTR and RTL
> languages.  A given database, in the fortunes table, may have Arabic
> text stored "backwards" and without any markings such as U+200F
> RIGHT-TO-LEFT MARK, so the U8_NEXT macro is seeing a trailing byte
> of a UTF-8 encoded character and failing to loop.
>
> ___
> Ur mailing list
> Ur@impredicative.com
> http://www.impredicative.com/cgi-bin/mailman/listinfo/ur
>
___
Ur mailing list
Ur@impredicative.com
http://www.impredicative.com/cgi-bin/mailman/listinfo/ur


Re: [Ur] Drop of several orders of magnitude in Techempower benchmarks

2019-08-11 Thread Karn Kallio

> Hello! I used "git bisect" to find the commit that introduced
> behaviour that causes Ur/Web to fail the Fortunes test in the
> benchmarks. It's commit 5cc729b48aad084757a049b7e5cdbadae5e9e400 from
> November 2018. Unfortunately that's a pretty big squashed commit from
> a PR:
> https://github.com/urweb/urweb/commit/5cc729b48aad084757a049b7e5cdbadae5e9e400
> 
> It'd be great if someone could take a look and see why it strips UTF-8
> output in that benchmark test. Note that the test runs in a Docker
> container, so perhaps it's trying to infer a system-wide i18n setting?
> 
> Once we fix this, we can update the benchmarks repo and solve the
> sorry state of affairs with Ur/Web way down the performance rankings.
> I'd love to see more people active in the community, and things like
> this would help raise awareness of the project.
> 
> Oisín
> 

I suspect this has to do with the difference between LTR and RTL
languages.  A given database, in the fortunes table, may have Arabic
text stored "backwards" and without any markings such as U+200F
RIGHT-TO-LEFT MARK, so the U8_NEXT macro is seeing a trailing byte
of a UTF-8 encoded character and failing to loop.
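
To see that failure mode in isolation, here is a small sketch of how
U8_NEXT reports an ill-formed byte; it assumes only that the ICU headers
(libicu-dev) are installed and is not code taken from the Ur/Web runtime.
A caller that treats the negative code point as "end of input" would
silently truncate the rest of the string.

/* u8_next_sketch.c -- illustrative only; U8_NEXT is a header-only ICU macro. */
#include <stdio.h>
#include <stdint.h>
#include <unicode/utf8.h>

int main(void)
{
    /* 0xAF is a lone UTF-8 continuation byte, i.e. ill-formed input. */
    static const char s[] = "a\xAF" "b";
    int32_t i = 0, len = 3;

    while (i < len) {
        UChar32 c;
        U8_NEXT(s, i, len, c);   /* advances i; c < 0 on ill-formed input */
        if (c < 0)
            printf("ill-formed byte before offset %d\n", (int)i);
        else
            printf("U+%04X\n", (unsigned)c);
    }
    return 0;
}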

___
Ur mailing list
Ur@impredicative.com
http://www.impredicative.com/cgi-bin/mailman/listinfo/ur


Re: [Ur] Drop of several orders of magnitude in Techempower benchmarks

2019-08-10 Thread Oisín Mac Fhearaí
Hello! I used "git bisect" to find the commit that introduced behaviour
that causes Ur/Web to fail the Fortunes test in the benchmarks.
It's commit 5cc729b48aad084757a049b7e5cdbadae5e9e400 from November 2018.
Unfortunately that's a pretty big squashed commit from a PR:
https://github.com/urweb/urweb/commit/5cc729b48aad084757a049b7e5cdbadae5e9e400

It'd be great if someone could take a look and see why it strips UTF-8
output in that benchmark test. Note that the test runs in a Docker
container, so perhaps it's trying to infer a system-wide i18n setting?

Once we fix this, we can update the benchmarks repo and solve the sorry
state of affairs with Ur/Web way down the performance rankings. I'd love to
see more people active in the community, and things like this would help
raise awareness of the project.

Oisín

On Tue, 6 Aug 2019 at 20:28, Oisín Mac Fhearaí wrote:

>
>
> On Tue, 6 Aug 2019 at 19:00, Adam Chlipala  wrote:
>
>> On 8/5/19 5:17 PM, Oisín Mac Fhearaí wrote:
>> > [...]
>> > It would seem that Unicode characters are being stripped from the
>> > output, causing the test to fail. I'm not familiar with exactly what
>> > the test is trying to do, and I don't know much about how Ur handles
>> > UTF-8.
>> That's odd.  I see the Unicode characters when I run that benchmark
>> locally with a recent Git checkout of Ur/Web.  Are you sure you ran the
>> database-setup scripts properly?  What happens when you query the
>> database manually?  Are the right characters there?
>>
>
> I didn't run the database-setup scripts manually; the "tfb" script at the
> repo root does that. I also tested one of the Go frameworks the same way:
> "/tfb --mode benchmark --test fasthttp-postgresql --type fortune", which
> seems to pass the test. When I ran the benchmark with the Ubuntu package
> version of urweb, it also passed the test (albeit many, many times more
> slowly).
>
> To double-check though, I built an image from
> toolset/databases/postgres/postgres.dockerfile and saw that there are
> actually two duplicate tables: "fortune" and "Fortune". That's curious, but
> they contain the same 12 rows (including UTF characters) anyway.
>
> It is a bit puzzling, because my local Urweb version seems to have no
> problem showing UTF-8 text from a table.
>
>>
>> ___
>> Ur mailing list
>> Ur@impredicative.com
>> http://www.impredicative.com/cgi-bin/mailman/listinfo/ur
>>
>
___
Ur mailing list
Ur@impredicative.com
http://www.impredicative.com/cgi-bin/mailman/listinfo/ur


Re: [Ur] Drop of several orders of magnitude in Techempower benchmarks

2019-08-06 Thread Oisín Mac Fhearaí
On Tue, 6 Aug 2019 at 19:00, Adam Chlipala  wrote:

> On 8/5/19 5:17 PM, Oisín Mac Fhearaí wrote:
> > [...]
> > It would seem that Unicode characters are being stripped from the
> > output, causing the test to fail. I'm not familiar with exactly what
> > the test is trying to do, and I don't know much about how Ur handles
> > UTF-8.
> That's odd.  I see the Unicode characters when I run that benchmark
> locally with a recent Git checkout of Ur/Web.  Are you sure you ran the
> database-setup scripts properly?  What happens when you query the
> database manually?  Are the right characters there?
>

I didn't run the database-setup scripts manually; the "tfb" script at the
repo root does that. I also tested one of the Go frameworks the same way:
"/tfb --mode benchmark --test fasthttp-postgresql --type fortune", which
seems to pass the test. When I ran the benchmark with the Ubuntu package
version of urweb, it also passed the test (albeit many, many times more
slowly).

To double-check though, I built an image from
toolset/databases/postgres/postgres.dockerfile and saw that there are
actually two duplicate tables: "fortune" and "Fortune". That's curious, but
they contain the same 12 rows (including UTF characters) anyway.

It is a bit puzzling, because my local Urweb version seems to have no
problem showing UTF-8 text from a table.
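
For anyone who wants to repeat the manual check Adam suggests, here is a
minimal libpq sketch; the connection parameters and the (id, message)
column names are assumptions based on the usual TFB setup, not details
taken from this thread.

/* check_fortunes.c -- illustrative only; build with: gcc check_fortunes.c -lpq */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("host=localhost dbname=hello_world "
                               "user=benchmarkdbuser password=benchmarkdbpass");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* client_encoding must be UTF8 for the multibyte fortunes to survive. */
    printf("client_encoding: %s\n", PQparameterStatus(conn, "client_encoding"));

    PGresult *res = PQexec(conn, "SELECT id, message FROM fortune ORDER BY id");
    if (PQresultStatus(res) == PGRES_TUPLES_OK) {
        for (int row = 0; row < PQntuples(res); row++)
            printf("%s\t%s\n", PQgetvalue(res, row, 0), PQgetvalue(res, row, 1));
    } else {
        fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}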

>
> ___
> Ur mailing list
> Ur@impredicative.com
> http://www.impredicative.com/cgi-bin/mailman/listinfo/ur
>
___
Ur mailing list
Ur@impredicative.com
http://www.impredicative.com/cgi-bin/mailman/listinfo/ur


Re: [Ur] Drop of several orders of magnitude in Techempower benchmarks

2019-08-06 Thread Adam Chlipala

On 8/5/19 5:17 PM, Oisín Mac Fhearaí wrote:

> Update! The good news:
> I was able to update the Dockerfile to build Ur/web from the latest
> release tarball (basically, using the old round 16 Dockerfile with a
> couple of small fixes like installing libicu-dev) and compare the
> benchmarks with the version installed with apt from the Ubuntu repo.
> The version built from the latest release was over ten times faster,
> even running on my old laptop.

Very interesting finding!  I've asked the Debian packager if he can
think of some build-process change there that would have introduced a
slowdown.

> The bad news:
> The latest version of Ur appears to fail the "fortunes" test with the
> following diff (there is more, but this seems to explain it):
>
> fortune: -6Emacs is a nice operating system, but I prefer UNIX. — Tom Christaensen
> fortune: +6Emacs is a nice operating system, but I prefer UNIX.  Tom Christaensen
> fortune: @@ -17 +17 @@
> fortune: -12フレームワークのベンチマーク
> fortune: +12
>
> It would seem that Unicode characters are being stripped from the
> output, causing the test to fail. I'm not familiar with exactly what
> the test is trying to do, and I don't know much about how Ur handles
> UTF-8.

That's odd.  I see the Unicode characters when I run that benchmark
locally with a recent Git checkout of Ur/Web.  Are you sure you ran the
database-setup scripts properly?  What happens when you query the
database manually?  Are the right characters there?


___
Ur mailing list
Ur@impredicative.com
http://www.impredicative.com/cgi-bin/mailman/listinfo/ur


Re: [Ur] Drop of several orders of magnitude in Techempower benchmarks

2019-08-05 Thread Oisín Mac Fhearaí
Update! The good news:
I was able to update the Dockerfile to build Ur/web from the latest release
tarball (basically, using the old round 16 Dockerfile with a couple of
small fixes like installing libicu-dev) and compare the benchmarks with the
version installed with apt from the Ubuntu repo. The version built from the
latest release was over ten times faster, even running on my old laptop.

The bad news:
The latest version of Ur appears to fail the "fortunes" test with the
following diff (there is more, but this seems to explain it):

fortune: -6Emacs is a nice operating system, but I prefer UNIX. — Tom Christaensen
fortune: +6Emacs is a nice operating system, but I prefer UNIX.  Tom Christaensen
fortune: @@ -17 +17 @@
fortune: -12フレームワークのベンチマーク
fortune: +12

It would seem that Unicode characters are being stripped from the output,
causing the test to fail. I'm not familiar with exactly what the test is
trying to do, and I don't know much about how Ur handles UTF-8.

If you can advise on how to fix this, I'd be happy to open a PR on the
Techempower benchmarks repo with my changes.

Oisín

On Mon, 5 Aug 2019 at 19:43, Oisín Mac Fhearaí wrote:

> Although I'm no closer to understanding why performance seems to have
> dropped in the benchmarks, thanks to a couple of comments on the Github
> issue I was able to find some more detailed logs of the test runs.
>
> Fortunes, Round 16:
> https://tfb-logs.techempower.com/round-16/final/citrine-results/urweb/fortune/raw.txt
> Fortunes, Round 17:
> https://tfb-logs.techempower.com/round-17/final/citrine-results/20180903024112/urweb/fortune/raw.txt
>
> I'm amazed by the difference in request latencies:
>
> Round 16:
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency   235.27us  140.45us   1.80ms   90.30%
>     Req/Sec     4.36k   148.24     4.89k    72.06%
>
> Round 17:
>   Thread Stats   Avg      Stdev     Max   +/- Stdev
>     Latency    16.29ms   39.60ms 327.45ms   95.34%
>     Req/Sec    123.38     20.22   141.00    95.00%
>
> In both cases, the web service is being hit by the "wrk" load tester, with
> the exact same parameters.
>
> The only difference I can think of, then, is that the round 17 Ur/web
> Dockerfile installs urweb via the apt package manager, whereas the round 16
> Dockerfile directly downloads an old tarball from 2016. But I've tested the
> latest Ubuntu version on my laptop and it performs almost exactly the same
> as the latest version from Git. So why does the round 17 benchmark have a
> max latency of 327 ms compared to under 2 ms in the previous round?
>
> So confuse.
>
> On Fri, 2 Aug 2019 at 21:45, Oisín Mac Fhearaí wrote:
>
>> I tried cloning the latest version of the benchmarks to run the Urweb
>> tests locally, but sadly the Docker image fails to build for me (due to a
>> problem with the Postgres installation steps, it seems). I've opened an
>> issue here:
>> https://github.com/TechEmpower/FrameworkBenchmarks/issues/4969 ... I
>> also asked for advice on how to track down the massive performance drop in
>> the Urweb tests. Hopefully they might have some thoughts on it. Sadly I'm
>> running things on a 9 year old laptop so it's hard to draw conclusions
>> around performance...
>>
>> On Thu, 1 Aug 2019 at 13:23, Adam Chlipala  wrote:
>>
>>> I'm glad you brought this up, Oisín.  I was already thinking of
>>> appealing to this mailing list, in hopes of finding an eager detective to
>>> hunt down what is going on!  I can say that I can achieve much better
>>> performance with the latest code on my own workstation (similar profile to
>>> *one* of the several machines used by TechEmpower), which leads me to
>>> think something basic is getting in the way of proper performance in the
>>> benchmarking environment.
>>> On 7/31/19 8:06 PM, Oisín Mac Fhearaí wrote:
>>>
>>> I've noticed that Ur/web's performance benchmarks on Techempower have
>>> changed significantly between round 16 and 17.
>>>
>>> For example, in round 16, Urweb measured 323,430 responses per second to
>>> the "Fortunes" benchmark.
>>> In round 17 (and beyond), it achieved 4,024 RPS with MySQL and 2,544 RPS
>>> with Postgres.
>>>
>>> What could explain such a drastic drop in performance? The blog entry
>>> for round 17 mentioned query pipelining as an explanation for some of the
>>> frameworks getting much faster, but I don't see why Urweb's RPS would drop
>>> by a factor of 100x, unless perhaps previous rounds had query caching
>>> enabled and then round 17 disabled them.
>>>
>>> Can anyone here shed light on this? I built a simplified version of the
>>> "sql" demo with the 2016 tarball version of Ur (used by the round 16
>>> benchmarks) and a recent snapshot, and they both perform at similar speeds
>>> on my laptop.
>>>
>>> Oddly, the load testing tool I used (a Go program called "hey") seems to
>>> have one request that takes 5 seconds if I set it to use more concurrent
>>> threads than the number of threads available to the Ur/web program.
>>> 

Re: [Ur] Drop of several orders of magnitude in Techempower benchmarks

2019-08-05 Thread Oisín Mac Fhearaí
Although I'm no closer to understanding why performance seems to have
dropped in the benchmarks, thanks to a couple of comments on the Github
issue I was able to find some more detailed logs of the test runs.

Fortunes, Round 16:
https://tfb-logs.techempower.com/round-16/final/citrine-results/urweb/fortune/raw.txt
Fortunes, Round 17:
https://tfb-logs.techempower.com/round-17/final/citrine-results/20180903024112/urweb/fortune/raw.txt

I'm amazed by the difference in request latencies:

Round 16:
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   235.27us  140.45us   1.80ms   90.30%
    Req/Sec     4.36k   148.24     4.89k    72.06%

Round 17:
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    16.29ms   39.60ms 327.45ms   95.34%
    Req/Sec    123.38     20.22   141.00    95.00%

In both cases, the web service is being hit by the "wrk" load tester, with
the exact same parameters.

The only difference I can think of, then, is that the round 17 Ur/web
Dockerfile installs urweb via the apt package manager, whereas the round 16
Dockerfile directly downloads an old tarball from 2016. But I've tested the
latest Ubuntu version on my laptop and it performs almost exactly the same
as the latest version from Git. So why does the round 17 benchmark have a
max latency of 327 ms compared to under 2 ms in the previous round?

So confuse.

On Fri, 2 Aug 2019 at 21:45, Oisín Mac Fhearaí wrote:

> I tried cloning the latest version of the benchmarks to run the Urweb
> tests locally, but sadly the Docker image fails to build for me (due to a
> problem with the Postgres installation steps, it seems). I've opened an
> issue here: https://github.com/TechEmpower/FrameworkBenchmarks/issues/4969
> ... I also asked for advice on how to track down the massive performance
> drop in the Urweb tests. Hopefully they might have some thoughts on it.
> Sadly I'm running things on a 9 year old laptop so it's hard to draw
> conclusions around performance...
>
> On Thu, 1 Aug 2019 at 13:23, Adam Chlipala  wrote:
>
>> I'm glad you brought this up, Oisín.  I was already thinking of appealing
>> to this mailing list, in hopes of finding an eager detective to hunt down
>> what is going on!  I can say that I can achieve much better performance
>> with the latest code on my own workstation (similar profile to *one* of
>> the several machines used by TechEmpower), which leads me to think
>> something basic is getting in the way of proper performance in the
>> benchmarking environment.
>> On 7/31/19 8:06 PM, Oisín Mac Fhearaí wrote:
>>
>> I've noticed that Ur/web's performance benchmarks on Techempower have
>> changed significantly between round 16 and 17.
>>
>> For example, in round 16, Urweb measured 323,430 responses per second to
>> the "Fortunes" benchmark.
>> In round 17 (and beyond), it achieved 4,024 RPS with MySQL and 2,544 RPS
>> with Postgres.
>>
>> What could explain such a drastic drop in performance? The blog entry for
>> round 17 mentioned query pipelining as an explanation for some of the
>> frameworks getting much faster, but I don't see why Urweb's RPS would drop
>> by a factor of 100x, unless perhaps previous rounds had query caching
>> enabled and then round 17 disabled them.
>>
>> Can anyone here shed light on this? I built a simplified version of the
>> "sql" demo with the 2016 tarball version of Ur (used by the round 16
>> benchmarks) and a recent snapshot, and they both perform at similar speeds
>> on my laptop.
>>
>> Oddly, the load testing tool I used (a Go program called "hey") seems to
>> have one request that takes 5 seconds if I set it to use more concurrent
>> threads than the number of threads available to the Ur/web program.
>> Otherwise, the longest request takes about 0.02 seconds. This seems
>> unrelated to the performance drop on the Techempower benchmarks, since the
>> max latency is quite low there.
>>
>> ___
>> Ur mailing list
>> Ur@impredicative.com
>> http://www.impredicative.com/cgi-bin/mailman/listinfo/ur
>>
>
___
Ur mailing list
Ur@impredicative.com
http://www.impredicative.com/cgi-bin/mailman/listinfo/ur


Re: [Ur] Drop of several orders of magnitude in Techempower benchmarks

2019-08-02 Thread Oisín Mac Fhearaí
I tried cloning the latest version of the benchmarks to run the Urweb tests
locally, but sadly the Docker image fails to build for me (due to a problem
with the Postgres installation steps, it seems). I've opened an issue here:
https://github.com/TechEmpower/FrameworkBenchmarks/issues/4969 ... I also
asked for advice on how to track down the massive performance drop in the
Urweb tests. Hopefully they might have some thoughts on it. Sadly I'm
running things on a 9 year old laptop so it's hard to draw conclusions
around performance...

On Thu, 1 Aug 2019 at 13:23, Adam Chlipala  wrote:

> I'm glad you brought this up, Oisín.  I was already thinking of appealing
> to this mailing list, in hopes of finding an eager detective to hunt down
> what is going on!  I can say that I can achieve much better performance
> with the latest code on my own workstation (similar profile to *one* of
> the several machines used by TechEmpower), which leads me to think
> something basic is getting in the way of proper performance in the
> benchmarking environment.
> On 7/31/19 8:06 PM, Oisín Mac Fhearaí wrote:
>
> I've noticed that Ur/web's performance benchmarks on Techempower have
> changed significantly between round 16 and 17.
>
> For example, in round 16, Urweb measured 323,430 responses per second to
> the "Fortunes" benchmark.
> In round 17 (and beyond), it achieved 4,024 RPS with MySQL and 2,544 RPS
> with Postgres.
>
> What could explain such a drastic drop in performance? The blog entry for
> round 17 mentioned query pipelining as an explanation for some of the
> frameworks getting much faster, but I don't see why Urweb's RPS would drop
> by a factor of 100x, unless perhaps previous rounds had query caching
> enabled and then round 17 disabled them.
>
> Can anyone here shed light on this? I built a simplified version of the
> "sql" demo with the 2016 tarball version of Ur (used by the round 16
> benchmarks) and a recent snapshot, and they both perform at similar speeds
> on my laptop.
>
> Oddly, the load testing tool I used (a Go program called "hey") seems to
> have one request that takes 5 seconds if I set it to use more concurrent
> threads than the number of threads available to the Ur/web program.
> Otherwise, the longest request takes about 0.02 seconds. This seems
> unrelated to the performance drop on the Techempower benchmarks, since the
> max latency is quite low there.
>
> ___
> Ur mailing list
> Ur@impredicative.com
> http://www.impredicative.com/cgi-bin/mailman/listinfo/ur
>
___
Ur mailing list
Ur@impredicative.com
http://www.impredicative.com/cgi-bin/mailman/listinfo/ur


Re: [Ur] Drop of several orders of magnitude in Techempower benchmarks

2019-08-01 Thread Adam Chlipala
I'm glad you brought this up, Oisín.  I was already thinking of 
appealing to this mailing list, in hopes of finding an eager detective 
to hunt down what is going on!  I can say that I can achieve much better 
performance with the latest code on my own workstation (similar profile 
to /one/ of the several machines used by TechEmpower), which leads me to 
think something basic is getting in the way of proper performance in the 
benchmarking environment.


On 7/31/19 8:06 PM, Oisín Mac Fhearaí wrote:
> I've noticed that Ur/web's performance benchmarks on Techempower have
> changed significantly between round 16 and 17.
>
> For example, in round 16, Urweb measured 323,430 responses per second
> to the "Fortunes" benchmark.
> In round 17 (and beyond), it achieved 4,024 RPS with MySQL and 2,544
> RPS with Postgres.
>
> What could explain such a drastic drop in performance? The blog entry
> for round 17 mentioned query pipelining as an explanation for some of
> the frameworks getting much faster, but I don't see why Urweb's RPS
> would drop by a factor of 100x, unless perhaps previous rounds had
> query caching enabled and then round 17 disabled them.
>
> Can anyone here shed light on this? I built a simplified version of
> the "sql" demo with the 2016 tarball version of Ur (used by the round
> 16 benchmarks) and a recent snapshot, and they both perform at similar
> speeds on my laptop.
>
> Oddly, the load testing tool I used (a Go program called "hey") seems
> to have one request that takes 5 seconds if I set it to use more
> concurrent threads than the number of threads available to the Ur/web
> program. Otherwise, the longest request takes about 0.02 seconds. This
> seems unrelated to the performance drop on the Techempower benchmarks,
> since the max latency is quite low there.
___
Ur mailing list
Ur@impredicative.com
http://www.impredicative.com/cgi-bin/mailman/listinfo/ur