Thx.

> On 31 Oct 2021, at 10:09, Tim Mackinnon <[email protected]> wrote:
> 
> Hey Sven - really appreciate your thorough approach to this, and it’s been 
> interesting seeing the results (I did try a few script injection attempts but 
> didn’t get as far as running Burp Suite on it, which I may try while you still 
> have the instances up).
> 
> Kudos to Levente for taking a more reasoned approach and using inside 
> knowledge… this community is awesome!
> 
> Tim
> 
>> On 29 Oct 2021, at 19:42, Sven Van Caekenberghe <[email protected]> wrote:
>> 
>> Here is yet another update:
>> 
>> The instances
>> 
>> - On Amazon AWS: http://34.245.183.130:1701
>> 
>> - On Microsoft Azure: http://51.137.72.94:8080
>> 
>> have now been running for 18+ days.
>> 
>> Having observed the overall amounts of critical resources inside the Pharo 
>> images, I can report that things look good.
>> 
>> Seaside WASSession cleanup goes slowly but does work over time.
>> 
>> Eventually I will stop at least the Azure instance because it costs me 
>> personal money.
>> 
>>> On 11 Oct 2021, at 19:37, Sven Van Caekenberghe <[email protected]> wrote:
>>> 
>>> Here is an update on the status of both instances.
>>> 
>>> A couple of people tried fiddling with invalid input, which did result in 
>>> exceptions, but everything was handled correctly and the server kept on 
>>> functioning.
>>> 
>>> There were no spurious crashes.
>>> 
>>> There was one attack by Levente Uzonyi that was successful though. 
>>> 
>>> He exploited the fact that Seaside sessions are long lived (cached for a 
>>> certain time), combined with the fact that the Reddit.st app kept one GLORP 
>>> database connection through P3 open to PostgreSQL, and fired off a small 
>>> denial of service (DOS) attack. Eventually either the VM ran out of space in 
>>> the ExternalSemaphoreTable, making Socket creation impossible (whether for 
>>> database connections or for handling new HTTP requests), or the database's 
>>> own connection limit was reached. At that point the HTTP server or the 
>>> Pharo VM stopped functioning.
>>> 
>>> Although Levente issued his requests sequentially, concurrent requests 
>>> could cause similar (related, but different) problems.
>>> 
>>> In looking at what he reported it also became clear that I accidentally 
>>> left a debugging exception handler active, which is not good in a 
>>> production image.
>>> 
>>> Both instances are now redeployed with the following changes:
>>> 
>>> - the Reddit.st code was changed to no longer keep the database connection 
>>> open all the time, but instead to connect/disconnect per request. This might 
>>> be a bit slower, but it conserves resources much better and solves the 
>>> original issue Levente reported.
>>> 
>>> https://github.com/svenvc/Reddit/commit/f2e0a0dc00b9cbb68cfa4fb007906365ae66ab1b
>>> 
>>> - a new feature was added to Zinc HTTP Components' 
>>> ZnManagingMultiThreadedServer (the default) to enforce a maximum number of 
>>> allowed concurrent connections (the limit is 32 by default, but changeable 
>>> if you know what you are doing). When the limit is reached, 503 Service 
>>> Unavailable responses are sent to the excess clients and their connections 
>>> are closed. This should help protect against concurrent-connection DOS 
>>> attacks.
>>> 
>>> https://github.com/svenvc/zinc/commit/ac0f06e74e7ab129610c466cb1d7ea9533d29b4c
>>> 
>>> - the deploy script was changed to use the more primitive WAErrorHandler
>>> 
>>> https://github.com/svenvc/Reddit/commit/874b631e6dc0c04c8c0b687ef770d00540d282df
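
The connection-limiting idea in the second change can be sketched with a minimal Python model. This is not the actual Zinc code; only the default limit of 32 and the 503 response come from the description above, everything else (class and method names) is invented for illustration:

```python
import threading

class ConnectionLimiter:
    """Toy model of a server admitting at most `limit` concurrent
    connections and turning excess clients away with 503."""

    def __init__(self, limit=32):  # 32 is Zinc's stated default
        self.limit = limit
        self.active = 0
        self.lock = threading.Lock()

    def try_acquire(self):
        # Called when a new connection arrives: admit it, or signal that
        # the caller should send 503 Service Unavailable and close.
        with self.lock:
            if self.active >= self.limit:
                return 503
            self.active += 1
            return 200

    def release(self):
        # Called when a handled connection finishes.
        with self.lock:
            self.active -= 1

limiter = ConnectionLimiter(limit=2)
codes = [limiter.try_acquire() for _ in range(3)]
print(codes)  # → [200, 200, 503]: the third concurrent connection is refused
```

The point of responding with 503 and closing, rather than queueing indefinitely, is that each refused connection costs the server almost nothing, so a flood of concurrent connections can no longer pin down worker threads or sockets.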
>>> 
>>> Thanks again to Levente for taking the time to try an attack and for 
>>> reporting it clearly.
>>> 
>>> Sven
>>> 
>>>> On 29 Sep 2021, at 17:10, Sven Van Caekenberghe <[email protected]> wrote:
>>>> 
>>>> Both instances have been up for 5 days now, looking for more testers.
>>>> 
>>>>> On 23 Sep 2021, at 17:03, Sven Van Caekenberghe <[email protected]> wrote:
>>>>> 
>>>>> Hi,
>>>>> 
>>>>> Zinc HTTP Components [https://github.com/svenvc/zinc] has been a part of 
>>>>> Pharo since version 1.3 (2011). It is an open-source framework to deal 
>>>>> with the HTTP networking protocol, modelling all aspects involved. It 
>>>>> also offers both client and server functionality.
>>>>> 
>>>>> The reliability of the code base has improved steadily over the years, 
>>>>> thanks to virtually all Pharo developers using it, directly or 
>>>>> indirectly. Over the summer, a number of issues that popped up after the 
>>>>> Pharo 9 release were resolved.
>>>>> 
>>>>> The robustness of the core HTTP server is one important aspect. To put 
>>>>> this quality further to the test, I deployed two servers with the same 
>>>>> demo Seaside application, Reddit.st, open to the internet, without any 
>>>>> further protections.
>>>>> 
>>>>> - On Amazon AWS: http://34.245.183.130:1701
>>>>> 
>>>>> - On Microsoft Azure: http://51.137.72.94:8080
>>>>> 
>>>>> The application's source code can be found at 
>>>>> [https://github.com/svenvc/Reddit]. For the technically curious there are 
>>>>> also deploy instructions at 
>>>>> [https://github.com/svenvc/Reddit/blob/main/DEPLOY.md]. The demo app 
>>>>> itself is described in an older article 
>>>>> [https://medium.com/@svenvc/reddit-st-in-10-cool-pharo-classes-1b5327ca0740].
>>>>>  Note that, by definition, there is no HTTPS/TLS variant.
>>>>> 
>>>>> If you manage to break this server with (a) malicious request(s) in such 
>>>>> a way that you can explain what you did for others to confirm your 
>>>>> approach, you not only help me/us improve the code, but earn eternal fame 
>>>>> as well ;-)
>>>>> 
>>>>> Sven
>>>>> 
>>>>> PS: I hope I won't regret this, I am looking for constructive criticism.
>>>>> 
>>>>> 
>>>>> --
>>>>> Sven Van Caekenberghe
>>>>> Proudly supporting Pharo
>>>>> http://pharo.org
>>>>> http://association.pharo.org
>>>>> http://consortium.pharo.org
>>>>> 
>>>> 
>>> 
