Well, initially I was just curious.
As the name implies, it's a TCP proxy, and different features could go into
that.
I looked at, for example, port knocking for hindering unauthorized access to
the (protected) TCP service SMPS, but there you also have the possibility
of someone eavesdropping, and
I thought it was a bit much.
I just did a bit more testing, and saw that the throughput of wget through
regular lighttpd was 1.3 GB/s, while through STP it was 122 MB/s, and using
quite a bit of CPU.
Then I increased the buffer size 8-fold for reading and writing in run.py,
and the CPU usage went
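For context, the copy loop in a proxy like this is typically a recv/sendall pair, and the read size bounds how much work each syscall does. A minimal sketch (the names and the buffer value are mine, not taken from run.py):

```python
import socket

# Hypothetical value: 8x a common 4 KiB read size, per the test above.
BUFFER_SIZE = 8 * 4096

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src reaches EOF."""
    while True:
        data = src.recv(BUFFER_SIZE)
        if not data:  # peer closed the connection
            break
        dst.sendall(data)
```

Larger reads mean fewer syscalls and context switches per byte moved, which is usually where the CPU time goes in a loop like this.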
Morten,
As Chris remarked, you need to learn a number of networking, Python, system
performance, and other skills to turn your project into production code.
Using threads does not scale very well. It uses a lot of memory and raises
CPU usage just to do the context switches. Also the GIL means that only one
thread runs Python bytecode at a time.
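The common single-threaded alternative on CPython is an event loop; the standard-library selectors module is enough for a sketch. This is illustrative only (an echo server, not STP's code; a proxy would forward to a backend instead of echoing):

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server: socket.socket) -> None:
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn: socket.socket) -> None:
    data = conn.recv(4096)
    if data:
        conn.sendall(data)  # echo back; a proxy would relay instead
    else:
        sel.unregister(conn)
        conn.close()

def serve(server: socket.socket) -> None:
    """Run one event loop over the listening socket and its connections."""
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    while True:
        for key, _ in sel.select(timeout=1):
            key.data(key.fileobj)
```

One loop handles many idle sockets without per-connection threads or stacks, which is the scaling point being made above.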
Morten W. Petersen schreef op 29/07/2022 om 22:59:
OK, sounds like sunshine is getting the best of you.
It has to be said: that is uncalled for.
Chris gave you good advice, with the best of intentions. Sometimes we
don't like good advice if it says something we don't like, but that's no
reaso
OK, sounds like sunshine is getting the best of you.
It's working with a pretty heavy load, I see ways of solving potential
problems that haven't become a problem yet, and I'm enjoying it.
Maybe you should tone down the coaching until someone asks for it.
Regards,
Morten
On Fri, Jul 29, 2022 a
On Sat, 30 Jul 2022 at 04:54, Morten W. Petersen wrote:
>
> OK.
>
> Well, I've worked with web hosting in the past, and proxies like Squid were
> used to lessen the load on dynamic backends. There was also a website,
> opensourcearticles.com, that we ran with articles on Firefox, Thunderbird, etc.
>
OK, that's useful to know. Thanks. :)
-Morten
On Fri, Jul 29, 2022 at 3:43 AM Andrew MacIntyre wrote:
On 29/07/2022 8:08 am, Chris Angelico wrote:
It takes a bit of time to start ten thousand threads, but after that,
the system is completely idle again until I notify them all and they
shut down.
(Interestingly, it takes four times as long to start 20,000 threads,
suggesting that something in thr
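Chris's measurement is easy to reproduce; a rough timing harness along these lines (parameters and names are mine):

```python
import threading
import time

def startup_time(n: int) -> float:
    """Time how long it takes to start n threads that just wait."""
    stop = threading.Event()
    threads = [threading.Thread(target=stop.wait) for _ in range(n)]
    t0 = time.perf_counter()
    for t in threads:
        t.start()
    elapsed = time.perf_counter() - t0
    stop.set()          # notify them all ...
    for t in threads:
        t.join()        # ... and they shut down
    return elapsed
```

Comparing startup_time(10_000) against startup_time(20_000) on a given box shows whether thread startup scales linearly there.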
On Fri, 29 Jul 2022 at 07:24, Morten W. Petersen wrote:
>
> Forwarding to the list as well.
>
> -- Forwarded message -
> From: Morten W. Petersen
> Date: Thu, Jul 28, 2022 at 11:22 PM
> Subject: Re: Simple TCP proxy
> To: Chris Angelico
>
>
> W
Well, it's not just code size in terms of disk space, it is also code
complexity, and the level of knowledge, skill and time it takes to make use
of something.
And if something fails in a non-obvious way in Twisted, I imagine that
requires somebody highly skilled, and that costs quite a bit of mone
> On 28 Jul 2022, at 10:31, Morten W. Petersen wrote:
>
>
> Hi Barry.
>
> Well, I can agree that using backlog is an option for handling bursts. But
> what if that backlog number is exceeded? How easy is it to deal with such a
> situation?
You can make backlog very large, if that makes s
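For reference, the backlog is just the integer handed to listen(); a sketch of a listener with a large accept queue (address and numbers are made up, and on Linux the kernel caps the effective value at net.core.somaxconn):

```python
import socket

def make_listener(host: str, port: int, backlog: int = 4096) -> socket.socket:
    """Open a listening socket with a large kernel accept queue."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(backlog)  # connections beyond this queue are refused/dropped
    return srv
```

When the queue does overflow, the failure mode is the question raised above: the client sees a refused or timed-out connection, and the server never learns it happened.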
Well, I was thinking of following the socketserver / handle layout of code
and execution, for now anyway.
It wouldn't be a big deal to make them block, but another option is to
increase the sleep period 100% for every 200 waiting connections while
waiting in handle.
Another thing is that it's nic
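The doubling rule described above ("100% for every 200 waiting connections") can be written as a small helper; a sketch with hypothetical names:

```python
def sleep_period(base: float, waiting: int, step: int = 200) -> float:
    """Double the base sleep period once per `step` waiting connections."""
    return base * (2 ** (waiting // step))
```

So with a 0.1 s base, 450 waiting connections would give a 0.4 s sleep: polling slows down exactly when the queue is long and fast reaction matters least.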
Hi Martin.
I was thinking of doing something with the handle function, but just this
little tweak:
https://github.com/morphex/stp/commit/9910ca8c80e9d150222b680a4967e53f0457b465
made a huge difference in CPU usage. Hundreds of waiting sockets are now
using 20-30% of CPU instead of 10x that. So
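The commit itself isn't quoted here, but the usual way a one-line tweak cuts CPU by an order of magnitude is replacing a busy poll with a short sleep between checks; a generic sketch (the interval and names are mine, not from the commit):

```python
import time

POLL_INTERVAL = 0.05  # hypothetical: 50 ms between checks

def wait_for_slot(slot_free) -> None:
    """Block until slot_free() returns True, without spinning the CPU."""
    while not slot_free():
        time.sleep(POLL_INTERVAL)
```

Without the sleep, each waiting socket's loop re-checks the condition as fast as the interpreter allows, which multiplied by hundreds of sockets is where the 10x CPU went.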
Hi Barry.
Well, I can agree that using backlog is an option for handling bursts. But
what if that backlog number is exceeded? How easy is it to deal with such
a situation?
I just cloned twisted, and compared the size:
morphex@morphex-Latitude-E4310:~$ du -s stp; du -s tmp/twisted/
464	stp
98520	tmp/twisted/
OK, I'll have a look at using something other than _threading.
I quickly saw a couple of points where the code could be optimized for speed;
the loop that transfers data back and forth also has low throughput, but the
first priority was getting it working and seeing that it is fairly stable.
Regards,
Mor
> On 27 Jul 2022, at 17:16, Morten W. Petersen wrote:
>
> Hi.
>
> I'd like to share with you a recent project, which is a simple TCP proxy
> that can stand in front of a TCP server of some sort, queueing requests and
> then allowing n number of connections to pass through at a time:
>
> http
On Wed, Jul 27, 2022 at 08:32:31PM +0200, Morten W. Petersen wrote:
> You're thinking of the backlog argument of listen?
From my understanding, yes, when you set up the "accepter" socket (the
one that you use to listen and accept new connections), you can define
the length of the queue for inco
On Thu, 28 Jul 2022 at 04:32, Morten W. Petersen wrote:
>
> Hi Chris.
>
> You're thinking of the backlog argument of listen?
Yes, precisely.
> Well, STP will accept all connections, but can limit how many of the accepted
> connections that are active at any given time.
>
> So when I bombed it w
Hi Chris.
You're thinking of the backlog argument of listen?
Well, STP will accept all connections, but can limit how many of the
accepted connections that are active at any given time.
So when I bombed it with hundreds of almost simultaneous connections, all
of them were accepted, but only 25 w
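Accept-everything-but-serve-25 maps naturally onto a counting semaphore; a sketch of the idea (not STP's actual code; `service` is a hypothetical per-connection handler):

```python
import threading

MAX_ACTIVE = 25  # matches the limit described above

active = threading.BoundedSemaphore(MAX_ACTIVE)

def handle(conn, service) -> None:
    """Run service(conn); at most MAX_ACTIVE calls execute at once."""
    with active:        # an accepted connection waits here for a free slot
        service(conn)
```

Every connection is accepted immediately (so clients never see a refused connection), but only 25 at a time reach the backend.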
On Thu, 28 Jul 2022 at 02:15, Morten W. Petersen wrote:
>
> Hi.
>
> I'd like to share with you a recent project, which is a simple TCP proxy
> that can stand in front of a TCP server of some sort, queueing requests and
> then allowing n number of connections to pass through at a time:
How's this