Re: Simple TCP proxy

2022-07-31 Thread Morten W. Petersen
Well, initially I was just curious.

As the name implies, it's a TCP proxy, and different features could go into
that.

I looked at, for example, port knocking for hindering unauthorized access to
the (protected) TCP service behind STP, but there you also have the
possibility of someone eavesdropping and learning the right handshake, if you
will.  So it's something that will work until someone gets determined to make
a mess.

In short, it will give better control than backlog does, enabling
Python-style code and logic to deal with different situations.

I was about to say "deal with things intelligently"; but I think
"intelligent" is a word that doesn't fit here or in many other applications.

Say, for example, this service comes under attack for unknown reasons; it
could then be possible to teach the proxy to only accept connections to the
backend server from IP addresses / subnets that have previously had n
transmissions back and forth.  That is feasible if you know that the service
will have at most 50 different clients.
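For illustration only, a rough sketch of that gating idea. All names here are invented (STP has no such feature yet), and the threshold and trusted subnets are assumptions:

```python
import ipaddress
from collections import defaultdict

class TrustGate:
    """Admit only clients that have completed enough prior exchanges.

    Hypothetical sketch: min_exchanges is the "n transmissions back and
    forth" threshold; addresses in trusted_subnets are always admitted.
    """

    def __init__(self, min_exchanges=3, trusted_subnets=()):
        self.min_exchanges = min_exchanges
        self.trusted = [ipaddress.ip_network(s) for s in trusted_subnets]
        self.exchanges = defaultdict(int)  # IP -> completed round trips

    def record_exchange(self, ip):
        """Call after a successful request/response round trip."""
        self.exchanges[ip] += 1

    def admit(self, ip):
        """Decide whether a new connection from ip may reach the backend."""
        addr = ipaddress.ip_address(ip)
        if any(addr in net for net in self.trusted):
            return True
        return self.exchanges[ip] >= self.min_exchanges
```

Under attack, the proxy would keep serving the known-good addresses while refusing everyone else.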

Anyway, what Chris said earlier, I think we can file that under "eagerness
to teach others and show what you know".  Right Chris? :)

Regards,

Morten

On Sat, Jul 30, 2022 at 10:31 PM Barry  wrote:

>
>
>
> > On 30 Jul 2022, at 20:33, Morten W. Petersen  wrote:
> > I thought it was a bit much.
> >
> > I just did a bit more testing, and saw that the throughput of wget
> > through regular lighttpd was 1.3 GB/s, while through STP it was 122
> > MB/s, and using quite a bit of CPU.
> >
> > Then I increased the buffer size 8-fold for reading and writing in
> > run.py, and the CPU usage went way down, and the transfer speed went
> > up to 449 MB/s.
>
> You are trading latency for throughput.
>
> >
> > So it would require well more than a gigabit network interface to max out
> > STP throughput; CPU usage was around 30-40% max, on one processor core.
>
> With how many connections?
>
> >
> > There is good enough, and then there's general practice and/or what is
> > regarded as an elegant solution.  I'm looking for good enough, and in the
> > process I don't mind pushing the envelope on Python threading.
>
> You never did answer my query on why a large backlog is not good enough.
> Why do you need this program at all?
>
> Barry
> >
> > -Morten
> >
> > On Sat, Jul 30, 2022 at 12:59 PM Roel Schroeven 
> > wrote:
> >
> >> Morten W. Petersen schreef op 29/07/2022 om 22:59:
> >>> OK, sounds like sunshine is getting the best of you.
> >> It has to be said: that is uncalled for.
> >>
> >> Chris gave you good advice, with the best of intentions. Sometimes we
> >> don't like good advice if it says something we don't like, but that's no
> >> reason to take it out on the messenger.
> >>
> >> --
> >> "Iceland is the place you go to remind yourself that planet Earth is a
> >> machine... and that all organic life that has ever existed amounts to a
> >> greasy
> >> film that has survived on the exterior of that machine thanks to furious
> >> improvisation."
> >> -- Sam Hughes, Ra
> >>
> >> --
> >> https://mail.python.org/mailman/listinfo/python-list
> >
> >
> > --
> > I am https://leavingnorway.info
> > Videos at https://www.youtube.com/user/TheBlogologue
> > Twittering at http://twitter.com/blogologue
> > Blogging at http://blogologue.com
> > Playing music at https://soundcloud.com/morten-w-petersen
> > Also playing music and podcasting here:
> > http://www.mixcloud.com/morten-w-petersen/
> > On Google+ here https://plus.google.com/107781930037068750156
> > On Instagram at https://instagram.com/morphexx/
> > --
> > https://mail.python.org/mailman/listinfo/python-list
>
>

-- 
I am https://leavingnorway.info
Videos at https://www.youtube.com/user/TheBlogologue
Twittering at http://twitter.com/blogologue
Blogging at http://blogologue.com
Playing music at https://soundcloud.com/morten-w-petersen
Also playing music and podcasting here:
http://www.mixcloud.com/morten-w-petersen/
On Google+ here https://plus.google.com/107781930037068750156
On Instagram at https://instagram.com/morphexx/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Simple TCP proxy

2022-07-30 Thread Barry



> On 30 Jul 2022, at 20:33, Morten W. Petersen  wrote:
> I thought it was a bit much.
> 
> I just did a bit more testing, and saw that the throughput of wget through
> regular lighttpd was 1.3 GB/s, while through STP it was 122 MB/s, and using
> quite a bit of CPU.
> 
> Then I increased the buffer size 8-fold for reading and writing in run.py,
> and the CPU usage went way down, and the transfer speed went up to 449 MB/s.

You are trading latency for throughput.

> 
> So it would require well more than a gigabit network interface to max out
> STP throughput; CPU usage was around 30-40% max, on one processor core.

With how many connections?

> 
> There is good enough, and then there's general practice and/or what is
> regarded as an elegant solution.  I'm looking for good enough, and in the
> process I don't mind pushing the envelope on Python threading.

You never did answer my query on why a large backlog is not good enough.
Why do you need this program at all?

Barry
> 
> -Morten
> 
> On Sat, Jul 30, 2022 at 12:59 PM Roel Schroeven 
> wrote:
> 
>> Morten W. Petersen schreef op 29/07/2022 om 22:59:
>>> OK, sounds like sunshine is getting the best of you.
>> It has to be said: that is uncalled for.
>> 
>> Chris gave you good advice, with the best of intentions. Sometimes we
>> don't like good advice if it says something we don't like, but that's no
>> reason to take it out on the messenger.
>> 
>> --
>> "Iceland is the place you go to remind yourself that planet Earth is a
>> machine... and that all organic life that has ever existed amounts to a
>> greasy
>> film that has survived on the exterior of that machine thanks to furious
>> improvisation."
>> -- Sam Hughes, Ra
>> 
>> --
>> https://mail.python.org/mailman/listinfo/python-list
> 
> 
> -- 
> I am https://leavingnorway.info
> Videos at https://www.youtube.com/user/TheBlogologue
> Twittering at http://twitter.com/blogologue
> Blogging at http://blogologue.com
> Playing music at https://soundcloud.com/morten-w-petersen
> Also playing music and podcasting here:
> http://www.mixcloud.com/morten-w-petersen/
> On Google+ here https://plus.google.com/107781930037068750156
> On Instagram at https://instagram.com/morphexx/
> -- 
> https://mail.python.org/mailman/listinfo/python-list



Re: Simple TCP proxy

2022-07-30 Thread Morten W. Petersen
I thought it was a bit much.

I just did a bit more testing, and saw that the throughput of wget through
regular lighttpd was 1.3 GB/s, while through STP it was 122 MB/s, and using
quite a bit of CPU.

Then I increased the buffer size 8-fold for reading and writing in run.py,
and the CPU usage went way down, and the transfer speed went up to 449 MB/s.
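The trade-off is easy to see in a minimal relay loop of the kind a proxy like this runs. This is a hypothetical sketch, not the actual run.py code:

```python
import socket

BUFSIZE = 8 * 65536  # 8x a common default; fewer syscalls per byte moved

def relay(src: socket.socket, dst: socket.socket, bufsize: int = BUFSIZE):
    """Copy bytes from src to dst until EOF.

    A larger bufsize means fewer recv/send round trips (less CPU per GB
    transferred), at the cost of more latency and more memory held per
    connection.
    """
    while True:
        chunk = src.recv(bufsize)
        if not chunk:  # peer closed its sending side
            break
        dst.sendall(chunk)
```

Tuning bufsize is exactly the latency-for-throughput trade Barry describes below.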

So it would require well more than a gigabit network interface to max out
STP throughput; CPU usage was around 30-40% max, on one processor core.

There is good enough, and then there's general practice and/or what is
regarded as an elegant solution.  I'm looking for good enough, and in the
process I don't mind pushing the envelope on Python threading.

-Morten

On Sat, Jul 30, 2022 at 12:59 PM Roel Schroeven 
wrote:

> Morten W. Petersen schreef op 29/07/2022 om 22:59:
> > OK, sounds like sunshine is getting the best of you.
> It has to be said: that is uncalled for.
>
> Chris gave you good advice, with the best of intentions. Sometimes we
> don't like good advice if it says something we don't like, but that's no
> reason to take it out on the messenger.
>
> --
> "Iceland is the place you go to remind yourself that planet Earth is a
> machine... and that all organic life that has ever existed amounts to a
> greasy
> film that has survived on the exterior of that machine thanks to furious
> improvisation."
>  -- Sam Hughes, Ra
>
> --
> https://mail.python.org/mailman/listinfo/python-list
>




Re: Simple TCP proxy

2022-07-30 Thread Barry Scott
Morten,

As Chris remarked, you need to learn a number of networking, Python, system
performance and other skills to turn your project into production code.

Using threads does not scale very well. It uses a lot of memory and raises
CPU usage just to do the context switches. Also, the GIL means that even if
you are doing blocking I/O, the use of threads does not scale well.

It's rare to see multi-threaded code; rather, what you see is code that
uses async I/O.

At its heart, async code at the low level uses a kernel interface like
epoll (or, on old systems, select). What epoll allows you to do is wait on
a set of FDs for a range of I/O operations: ready to read, ready to write,
and other activity (like the socket closing).

You could write code to use epoll yourself, but while fun to write, you
need to know a lot about networking and Linux to cover all the corner
cases.

Libraries like Twisted, trio, uvloop and Python's selectors implement
production-quality versions of the required code with good APIs.

Do not judge these libraries by their size. They are not bloated, and only
as complex as the problem they are solving requires.

There is a simple example of async code using Python's selectors here that
shows the style of programming:
https://docs.python.org/3/library/selectors.html#examples
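The core of such a selectors-based server, in the style of that example, looks roughly like this (a sketch, not production code; a real proxy would also need write-side buffering):

```python
import selectors
import socket

sel = selectors.DefaultSelector()

def accept(server):
    """Callback for the listening socket: register the new connection."""
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    """Callback for a client socket that is ready to read."""
    data = conn.recv(4096)  # will not block: select said it is readable
    if data:
        conn.sendall(data)  # NB: real code must handle partial writes
    else:
        sel.unregister(conn)
        conn.close()

def serve(server: socket.socket):
    """Single-threaded event loop: one select() wait drives all sockets."""
    server.setblocking(False)
    sel.register(server, selectors.EVENT_READ, accept)
    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)  # invoke the registered callback
```

One thread, one select() call, any number of sockets; this is the pattern Twisted and friends wrap with production-quality APIs.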


The issues that you likely need to solve and test for include:
* handling unexpected socket close events;
* buffering and flow control from one socket's read to the other socket's
  write (what if one side is reading slower than the other is writing?);
* timing out sockets that stop sending data, and closing them.

At some point you will exceed the capacity of one process to handle the load.
The solution we used is to listen on the socket in a parent process and fork
enough child processes to handle the I/O load. This avoids issues with the GIL
and allows you to scale.

But I am still not sure why you need to do anything more than increase the
backlog on your listen socket in the main app. Does setting the backlog to
1,000,000 fix your issue?

On Linux you will need to change kernel limits to allow that size. See
man listen for info on what you need to change.
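For reference, the backlog is just the argument to listen(); a small sketch (on Linux the kernel silently caps the value at net.core.somaxconn, so a huge number only takes effect after raising that sysctl, per man 2 listen):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
# Ask for a very deep queue of not-yet-accepted connections; the kernel
# truncates this to its configured limit (e.g. net.core.somaxconn).
srv.listen(1_000_000)
```

Connections beyond what accept() has drained simply wait in that kernel queue, with no Python thread per connection.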

Barry



Re: Simple TCP proxy

2022-07-30 Thread Roel Schroeven

Morten W. Petersen schreef op 29/07/2022 om 22:59:

OK, sounds like sunshine is getting the best of you.

It has to be said: that is uncalled for.

Chris gave you good advice, with the best of intentions. Sometimes we 
don't like good advice if it says something we don't like, but that's no 
reason to take it out on the messenger.


--
"Iceland is the place you go to remind yourself that planet Earth is a
machine... and that all organic life that has ever existed amounts to a greasy
film that has survived on the exterior of that machine thanks to furious
improvisation."
-- Sam Hughes, Ra



Re: Simple TCP proxy

2022-07-29 Thread Morten W. Petersen
OK, sounds like sunshine is getting the best of you.

It's working with a pretty heavy load, I see ways of solving potential
problems that haven't become a problem yet, and I'm enjoying it.

Maybe you should tone down the coaching until someone asks for it.

Regards,

Morten

On Fri, Jul 29, 2022 at 10:46 PM Chris Angelico  wrote:

> On Sat, 30 Jul 2022 at 04:54, Morten W. Petersen 
> wrote:
> >
> > OK.
> >
> > Well, I've worked with web hosting in the past, and proxies like squid
> were used to lessen the load on dynamic backends.  There was also a website
> opensourcearticles.com that we had with Firefox, Thunderbird articles
> etc. that got quite a bit of traffic.
> >
> > IIRC, that website was mostly static with some dynamic bits and heavily
> cached by squid.
>
> Yep, and squid almost certainly won't have a thread for every incoming
> connection, spinning and waiting for the back end server. But squid
> does a LOT more than simply queue connections - it'll be inspecting
> headers and retaining a cache of static content, so it's not really
> comparable.
>
> > Most websites don't get a lot of traffic though, and don't have a big
> budget for "website system administration".  So maybe that's where I'm
> partly going with this, just making a proxy that can be put in front and
> deal with a lot of common situations, in a reasonably good way.
> >
> > If I run into problems with threads that can't be managed, then a switch
> to something like the queue_manager function which has data and then
> functions that manage the data and connections is an option.
> >
>
> I'll be quite frank with you: this is not production-quality code. It
> should not be deployed by anyone who doesn't have a big budget for
> "website system administration *training*". This code is good as a
> tool for YOU to learn how these things work; it shouldn't be a tool
> for anyone who actually has server load issues.
>
> I'm sorry if that sounds harsh, but the fact is, you can do a lot
> better by using this to learn more about networking than you'll ever
> do by trying to pitch it to any specific company.
>
> That said though: it's still good to know what your (theoretical)
> use-case is. That'll tell you what kinds of connection spam to throw
> at your proxy (lots of idle sockets? lots of HTTP requests? billions
> of half open TCP connections?) to see what it can cope with.
>
> Keep on playing with this code. There's a lot you can gain from it, still.
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
>




Re: Simple TCP proxy

2022-07-29 Thread Chris Angelico
On Sat, 30 Jul 2022 at 04:54, Morten W. Petersen  wrote:
>
> OK.
>
> Well, I've worked with web hosting in the past, and proxies like squid were 
> used to lessen the load on dynamic backends.  There was also a website 
> opensourcearticles.com that we had with Firefox, Thunderbird articles etc. 
> that got quite a bit of traffic.
>
> IIRC, that website was mostly static with some dynamic bits and heavily 
> cached by squid.

Yep, and squid almost certainly won't have a thread for every incoming
connection, spinning and waiting for the back end server. But squid
does a LOT more than simply queue connections - it'll be inspecting
headers and retaining a cache of static content, so it's not really
comparable.

> Most websites don't get a lot of traffic though, and don't have a big budget 
> for "website system administration".  So maybe that's where I'm partly going 
> with this, just making a proxy that can be put in front and deal with a lot 
> of common situations, in a reasonably good way.
>
> If I run into problems with threads that can't be managed, then a switch to 
> something like the queue_manager function which has data and then functions 
> that manage the data and connections is an option.
>

I'll be quite frank with you: this is not production-quality code. It
should not be deployed by anyone who doesn't have a big budget for
"website system administration *training*". This code is good as a
tool for YOU to learn how these things work; it shouldn't be a tool
for anyone who actually has server load issues.

I'm sorry if that sounds harsh, but the fact is, you can do a lot
better by using this to learn more about networking than you'll ever
do by trying to pitch it to any specific company.

That said though: it's still good to know what your (theoretical)
use-case is. That'll tell you what kinds of connection spam to throw
at your proxy (lots of idle sockets? lots of HTTP requests? billions
of half open TCP connections?) to see what it can cope with.

Keep on playing with this code. There's a lot you can gain from it, still.

ChrisA


Re: Simple TCP proxy

2022-07-29 Thread Morten W. Petersen
OK.

Well, I've worked with web hosting in the past, and proxies like squid were
used to lessen the load on dynamic backends.  There was also a website
opensourcearticles.com that we had with Firefox, Thunderbird articles etc.
that got quite a bit of traffic.

IIRC, that website was mostly static with some dynamic bits and heavily
cached by squid.

Most websites don't get a lot of traffic though, and don't have a big
budget for "website system administration".  So maybe that's where I'm
partly going with this, just making a proxy that can be put in front and
deal with a lot of common situations, in a reasonably good way.

If I run into problems with threads that can't be managed, then a switch to
something like the queue_manager function which has data and then functions
that manage the data and connections is an option.

-Morten

On Fri, Jul 29, 2022 at 12:11 AM Chris Angelico  wrote:

> On Fri, 29 Jul 2022 at 07:24, Morten W. Petersen 
> wrote:
> >
> > Forwarding to the list as well.
> >
> > -- Forwarded message -
> > From: Morten W. Petersen 
> > Date: Thu, Jul 28, 2022 at 11:22 PM
> > Subject: Re: Simple TCP proxy
> > To: Chris Angelico 
> >
> >
> > Well, an increase from 0.1 seconds to 0.2 seconds on "polling" in each
> > thread whether or not the connection should become active doesn't seem
> like
> > a big deal.
>
> Maybe, but polling *at all* is the problem here. It shouldn't be
> hammering the other server. You'll quickly find that there are limits
> that simply shouldn't exist, because every connection is trying to
> check to see if it's active now. This is *completely unnecessary*.
> I'll reiterate the advice given earlier in this thread (of
> conversation): Look into the tools available for thread (of execution)
> synchronization, such as mutexes (in Python, threading.Lock) and
> events (in Python, threading.Condition). A poll interval enforces a
> delay before the thread notices that it's active, AND causes inactive
> threads to consume CPU, neither of which is a good thing.
>
> > And there's also some point where it is pointless to accept more
> > connections, and where maybe remedies like accepting known good IPs,
> > blocking IPs / IP blocks with more than 3 connections etc. should be
> > considered.
>
> Firewalling is its own science. Blocking IPs with too many
> simultaneous connections should be decided administratively, not
> because your proxy can't handle enough connections.
>
> > I think I'll be getting closer than most applications to an eventual
> > ceiling for what Python can handle of threads, and that's interesting and
> > could be beneficial for Python as well.
>
> Here's a quick demo of the cost of threads when they're all blocked on
> something.
>
> >>> import threading
> >>> finish = threading.Condition()
> >>> def thrd(cond):
> ...     with cond: cond.wait()
> ...
> >>> threading.active_count() # Main thread only
> 1
> >>> import time
> >>> def spawn(n):
> ...     start = time.monotonic()
> ...     for _ in range(n):
> ...         t = threading.Thread(target=thrd, args=(finish,))
> ...         t.start()
> ...     print("Spawned", n, "threads in", time.monotonic() - start, "seconds")
> ...
> >>> spawn(10000)
> Spawned 10000 threads in 7.548425202025101 seconds
> >>> threading.active_count()
> 10001
> >>> with finish: finish.notify_all()
> ...
> >>> threading.active_count()
> 1
>
> It takes a bit of time to start ten thousand threads, but after that,
> the system is completely idle again until I notify them all and they
> shut down.
>
> (Interestingly, it takes four times as long to start 20,000 threads,
> suggesting that something in thread spawning has O(n²) cost. Still,
> even that leaves the system completely idle once it's done spawning
> them.)
>
> If your proxy can handle 20,000 threads, I would be astonished. And
> this isn't even close to a thread limit.
>
> Obviously the cost is different if the threads are all doing things,
> but if you have thousands of active socket connections, you'll start
> finding that there are limitations in quite a few places, depending on
> how much traffic is going through them. Ultimately, yes, you will find
> that threads restrict you and asynchronous I/O is the only option; but
> you can take threads a fairly long way before they are the limiting
> factor.
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
>




Re: Simple TCP proxy

2022-07-29 Thread Morten W. Petersen
OK, that's useful to know. Thanks. :)

-Morten

On Fri, Jul 29, 2022 at 3:43 AM Andrew MacIntyre 
wrote:

> On 29/07/2022 8:08 am, Chris Angelico wrote:
> > It takes a bit of time to start ten thousand threads, but after that,
> > the system is completely idle again until I notify them all and they
> > shut down.
> >
> > (Interestingly, it takes four times as long to start 20,000 threads,
> > suggesting that something in thread spawning has O(n²) cost. Still,
> > even that leaves the system completely idle once it's done spawning
> > them.)
>
> Another cost of threads can be memory allocated as thread stack space,
> the default size of which varies by OS (see e.g.
>
> https://ariadne.space/2021/06/25/understanding-thread-stack-sizes-and-how-alpine-is-different/
> ).
>
> threading.stack_size() can be used to check and perhaps adjust the
> allocation size.
>
> --
> -
> Andrew I MacIntyre "These thoughts are mine alone..."
> E-mail: andy...@pcug.org.au(pref) | Snail: PO Box 370
>  andy...@bullseye.apana.org.au   (alt) |Belconnen ACT 2616
> Web:http://www.andymac.org/   |Australia
> --
> https://mail.python.org/mailman/listinfo/python-list
>




Re: Simple TCP proxy

2022-07-28 Thread Chris Angelico
On Fri, 29 Jul 2022 at 11:42, Andrew MacIntyre  wrote:
>
> On 29/07/2022 8:08 am, Chris Angelico wrote:
> > It takes a bit of time to start ten thousand threads, but after that,
> > the system is completely idle again until I notify them all and they
> > shut down.
> >
> > (Interestingly, it takes four times as long to start 20,000 threads,
> > suggesting that something in thread spawning has O(n²) cost. Still,
> > even that leaves the system completely idle once it's done spawning
> > them.)
>
> Another cost of threads can be memory allocated as thread stack space,
> the default size of which varies by OS (see e.g.
> https://ariadne.space/2021/06/25/understanding-thread-stack-sizes-and-how-alpine-is-different/).
>
> threading.stack_size() can be used to check and perhaps adjust the
> allocation size.
>

Yeah, they do have quite a few costs, and a naive approach of "give a
thread to every client", while very convenient, will end up limiting
throughput. (But I'll be honest: I still have a server that's built on
exactly that model, because it's much much safer than risking one
client stalling out the whole server due to a small bug. But that's a
MUD server.) Thing is, though, it'll most likely limit throughput to
something in the order of thousands of concurrent connections (or
thousands per second if it's something like HTTP where they tend to
get closed again), maybe tens of thousands. So if you have something
where every thread needs its own database connection, well, you're
gonna have database throughput problems WAY before you actually run
into thread count limitations!

ChrisA


Re: Simple TCP proxy

2022-07-28 Thread Andrew MacIntyre

On 29/07/2022 8:08 am, Chris Angelico wrote:

It takes a bit of time to start ten thousand threads, but after that,
the system is completely idle again until I notify them all and they
shut down.

(Interestingly, it takes four times as long to start 20,000 threads,
suggesting that something in thread spawning has O(n²) cost. Still,
even that leaves the system completely idle once it's done spawning
them.)


Another cost of threads can be memory allocated as thread stack space, 
the default size of which varies by OS (see e.g. 
https://ariadne.space/2021/06/25/understanding-thread-stack-sizes-and-how-alpine-is-different/).


threading.stack_size() can be used to check and perhaps adjust the 
allocation size.
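For example (the 256 KiB figure here is arbitrary, and platforms may impose a minimum of 32 KiB and reject unsupported sizes with a ThreadError):

```python
import threading

# stack_size(n) sets the stack reservation for threads created *after*
# this call, and returns the previous setting (0 means platform default).
previous = threading.stack_size(256 * 1024)

# Threads started now reserve ~256 KiB of address space each instead of
# the OS default (often 1-8 MiB), which matters with many thousands of
# threads.
t = threading.Thread(target=lambda: None)
t.start()
t.join()

threading.stack_size(previous)  # restore the earlier setting
```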


--
-
Andrew I MacIntyre "These thoughts are mine alone..."
E-mail: andy...@pcug.org.au(pref) | Snail: PO Box 370
andy...@bullseye.apana.org.au   (alt) |Belconnen ACT 2616
Web:http://www.andymac.org/   |Australia


Re: Simple TCP proxy

2022-07-28 Thread Chris Angelico
On Fri, 29 Jul 2022 at 07:24, Morten W. Petersen  wrote:
>
> Forwarding to the list as well.
>
> -- Forwarded message -
> From: Morten W. Petersen 
> Date: Thu, Jul 28, 2022 at 11:22 PM
> Subject: Re: Simple TCP proxy
> To: Chris Angelico 
>
>
> Well, an increase from 0.1 seconds to 0.2 seconds on "polling" in each
> thread whether or not the connection should become active doesn't seem like
> a big deal.

Maybe, but polling *at all* is the problem here. It shouldn't be
hammering the other server. You'll quickly find that there are limits
that simply shouldn't exist, because every connection is trying to
check to see if it's active now. This is *completely unnecessary*.
I'll reiterate the advice given earlier in this thread (of
conversation): Look into the tools available for thread (of execution)
synchronization, such as mutexes (in Python, threading.Lock) and
events (in Python, threading.Condition). A poll interval enforces a
delay before the thread notices that it's active, AND causes inactive
threads to consume CPU, neither of which is a good thing.
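A minimal sketch of that approach, with invented names: queued handler threads sleep on a Condition until a manager grants them a slot, so nothing polls and idle threads cost no CPU.

```python
import threading

lock = threading.Lock()
active = threading.Condition(lock)
allowed = 0  # how many queued connections may currently proceed

def handler(conn_id, results):
    """Per-connection thread: block until granted a slot, no polling."""
    global allowed
    with active:
        while allowed <= 0:
            active.wait()   # sleeps until notified; consumes no CPU
        allowed -= 1
    results.append(conn_id)  # ...handle the connection here...

def release_slots(n):
    """Manager side: grant n queued threads permission to proceed."""
    global allowed
    with active:
        allowed += n
        active.notify_all()  # wake the waiters to re-check the condition
```

Waiters wake only when notified, so the activation latency is the wakeup time rather than the 0.1-0.2 s poll interval discussed above.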

> And there's also some point where it is pointless to accept more
> connections, and where maybe remedies like accepting known good IPs,
> blocking IPs / IP blocks with more than 3 connections etc. should be
> considered.

Firewalling is its own science. Blocking IPs with too many
simultaneous connections should be decided administratively, not
because your proxy can't handle enough connections.

> I think I'll be getting closer than most applications to an eventual
> ceiling for what Python can handle of threads, and that's interesting and
> could be beneficial for Python as well.

Here's a quick demo of the cost of threads when they're all blocked on
something.

>>> import threading
>>> finish = threading.Condition()
>>> def thrd(cond):
...     with cond: cond.wait()
...
>>> threading.active_count() # Main thread only
1
>>> import time
>>> def spawn(n):
...     start = time.monotonic()
...     for _ in range(n):
...         t = threading.Thread(target=thrd, args=(finish,))
...         t.start()
...     print("Spawned", n, "threads in", time.monotonic() - start, "seconds")
...
>>> spawn(10000)
Spawned 10000 threads in 7.548425202025101 seconds
>>> threading.active_count()
10001
>>> with finish: finish.notify_all()
...
>>> threading.active_count()
1

It takes a bit of time to start ten thousand threads, but after that,
the system is completely idle again until I notify them all and they
shut down.

(Interestingly, it takes four times as long to start 20,000 threads,
suggesting that something in thread spawning has O(n²) cost. Still,
even that leaves the system completely idle once it's done spawning
them.)

If your proxy can handle 20,000 threads, I would be astonished. And
this isn't even close to a thread limit.

Obviously the cost is different if the threads are all doing things,
but if you have thousands of active socket connections, you'll start
finding that there are limitations in quite a few places, depending on
how much traffic is going through them. Ultimately, yes, you will find
that threads restrict you and asynchronous I/O is the only option; but
you can take threads a fairly long way before they are the limiting
factor.

ChrisA


Re: Simple TCP proxy

2022-07-28 Thread Morten W. Petersen
Well, it's not just code size in terms of disk space; it is also code
complexity, and the level of knowledge, skill and time it takes to make use
of something.

And if something fails in an unobvious way in Twisted, I imagine fixing
that requires somebody highly skilled, and that costs quite a bit of money.
And people like that might not always be available.

-Morten

On Thu, Jul 28, 2022 at 2:29 PM Barry  wrote:

>
>
> On 28 Jul 2022, at 10:31, Morten W. Petersen  wrote:
>
> 
> Hi Barry.
>
> Well, I can agree that using backlog is an option for handling bursts. But
> what if that backlog number is exceeded?  How easy is it to deal with such
> a situation?
>
>
> You can make backlog very large, if that makes sense.
> But at some point you will be forced to reject connections,
> once you cannot keep up with the average rate of connections.
>
>
>
> I just cloned twisted, and compared the size:
>
> morphex@morphex-Latitude-E4310:~$ du -s stp; du -s tmp/twisted/
> 464 stp
> 98520 tmp/twisted/
> morphex@morphex-Latitude-E4310:~$ du -sh stp/LICENSE
> 36K stp/LICENSE
>
> >>> 464/98520.0
> 0.004709703613479496
> >>>
>
> It's quite easy to get an idea of what's going on in STP; if something
> goes wrong in Twisted, the size of the codebase makes that harder. I used
> to use emacs a lot, but then I came into a period where it was more
> practical to use nano, and I mostly use nano now, unless I need to, for
> example, search and replace or something like that.
>
>
> I mentioned Twisted for context. Depending on your needs, the built-in
> Python 3 async support may well be sufficient. Using threads
> is not scalable.
>
> In the places I code disk space of a few MiB is not an issue.
>
> Barry
>
>
> -Morten
>
> On Thu, Jul 28, 2022 at 8:31 AM Barry  wrote:
>
>>
>>
>> > On 27 Jul 2022, at 17:16, Morten W. Petersen  wrote:
>> >
>> > Hi.
>> >
>> > I'd like to share with you a recent project, which is a simple TCP proxy
>> > that can stand in front of a TCP server of some sort, queueing requests
>> and
>> > then allowing n number of connections to pass through at a time:
>> >
>> > https://github.com/morphex/stp
>> >
>> > I'll be developing it further, but the files committed in this tree
>> > seem to be stable:
>> >
>> >
>> https://github.com/morphex/stp/tree/9910ca8c80e9d150222b680a4967e53f0457b465
>> >
>> > I just bombed that code with 700+ requests almost simultaneously, and
>> > STP handled it well.
>>
>> What is the problem that this solves?
>>
>> Why not just increase the allowed size of the socket listen backlog if
>> you just want to handle bursts of traffic.
>>
>> I do not think of this as a proxy, rather a tunnel.
>> And the tunnel is a lot more expensive than having the kernel keep the
>> connection in the listen socket backlog.
>>
>> I work on a web proxy written in Python that handles huge load,
>> using the backlog for bursts.
>>
>> It’s async, using Twisted, as threads are not practical at scale.
>>
>> Barry
>>
>> >
>> > Regards,
>> >
>> > Morten
>> >
>> > --
>> > I am https://leavingnorway.info
>> > Videos at https://www.youtube.com/user/TheBlogologue
>> > Twittering at http://twitter.com/blogologue
>> > Blogging at http://blogologue.com
>> > Playing music at https://soundcloud.com/morten-w-petersen
>> > Also playing music and podcasting here:
>> > http://www.mixcloud.com/morten-w-petersen/
>> > On Google+ here https://plus.google.com/107781930037068750156
>> > On Instagram at https://instagram.com/morphexx/
>> > --
>> > https://mail.python.org/mailman/listinfo/python-list
>> >
>>
>>
>
>
>



Fwd: Simple TCP proxy

2022-07-28 Thread Morten W. Petersen
Forwarding to the list as well.

-- Forwarded message -
From: Morten W. Petersen 
Date: Thu, Jul 28, 2022 at 11:22 PM
Subject: Re: Simple TCP proxy
To: Chris Angelico 


Well, an increase from 0.1 seconds to 0.2 seconds in each thread's "polling"
of whether or not the connection should become active doesn't seem like a
big deal.

And there's also some point where it is pointless to accept more
connections, and where maybe remedies like accepting known good IPs,
blocking IPs / IP blocks with more than 3 connections etc. should be
considered.

I think I'll be getting closer than most applications to the eventual
ceiling for how many threads Python can handle, and that's interesting and
could be beneficial for Python as well.

-Morten

On Thu, Jul 28, 2022 at 2:31 PM Chris Angelico  wrote:

> On Thu, 28 Jul 2022 at 21:01, Morten W. Petersen 
> wrote:
> >
> > Well, I was thinking of following the socketserver / handle layout of
> code and execution, for now anyway.
> >
> > It wouldn't be a big deal to make them block, but another option is to
> increase the sleep period 100% for every 200 waiting connections while
> waiting in handle.
>
> Easy denial-of-service attack then. Spam connections and the queue
> starts blocking hard. The sleep loop seems like a rather inefficient
> way to do things.
>
> > Another thing is that it's nice to see Python handling 500+ threads
> without problems. :)
>
> Yeah, well, that's not all THAT many threads, ultimately :)
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
>






Re: Simple TCP proxy

2022-07-28 Thread Barry


> On 28 Jul 2022, at 10:31, Morten W. Petersen  wrote:
> 
> 
> Hi Barry.
> 
> Well, I can agree that using backlog is an option for handling bursts. But 
> what if that backlog number is exceeded?  How easy is it to deal with such a 
> situation?

You can make backlog very large, if that makes sense.
But at some point you will be forced to reject connections,
once you cannot keep up with the average rate of connections.


> 
> I just cloned twisted, and compared the size:
> 
> morphex@morphex-Latitude-E4310:~$ du -s stp; du -s tmp/twisted/
> 464 stp
> 98520 tmp/twisted/
> morphex@morphex-Latitude-E4310:~$ du -sh stp/LICENSE 
> 36K stp/LICENSE
> 
> >>> 464/98520.0
> 0.004709703613479496
> >>> 
> 
> It's quite easy to get an idea of what's going on in STP, as opposed to if 
> something goes wrong in Twisted with the size of the codebase. I used to use 
> emacs a lot, but then I came into a period where it was more practical to use 
> nano, and I mostly use nano now, unless I need to for example search and 
> replace or something like that.

I mentioned Twisted for context. Depending on your needs, the built-in Python 3
async support may well be sufficient. Using threads is not
scalable.

In the places I code disk space of a few MiB is not an issue.

Barry

> 
> -Morten
> 
>> On Thu, Jul 28, 2022 at 8:31 AM Barry  wrote:
>> 
>> 
>> > On 27 Jul 2022, at 17:16, Morten W. Petersen  wrote:
>> > 
>> > Hi.
>> > 
>> > I'd like to share with you a recent project, which is a simple TCP proxy
>> > that can stand in front of a TCP server of some sort, queueing requests and
>> > then allowing n number of connections to pass through at a time:
>> > 
>> > https://github.com/morphex/stp
>> > 
>> > I'll be developing it further, but the files committed in this tree
>> > seem to be stable:
>> > 
>> > https://github.com/morphex/stp/tree/9910ca8c80e9d150222b680a4967e53f0457b465
>> > 
>> > I just bombed that code with 700+ requests almost simultaneously, and STP
>> > handled it well.
>> 
>> What is the problem that this solves?
>> 
>> Why not just increase the allowed size of the socket listen backlog if you 
>> just want to handle bursts of traffic.
>> 
>> I do not think of this as a proxy, rather a tunnel.
>> And the tunnel is a lot more expensive than having the kernel keep the
>> connection in the listen socket backlog.
>> 
>> I work on a web proxy written in Python that handles huge load,
>> using the backlog for bursts.
>> 
>> It’s async, using Twisted, as threads are not practical at scale.
>> 
>> Barry
>> 
>> > 
>> > Regards,
>> > 
>> > Morten
>> > 
>> > 
>> 
> 
> 


Re: Simple TCP proxy

2022-07-28 Thread Chris Angelico
On Thu, 28 Jul 2022 at 21:01, Morten W. Petersen  wrote:
>
> Well, I was thinking of following the socketserver / handle layout of code 
> and execution, for now anyway.
>
> It wouldn't be a big deal to make them block, but another option is to 
> increase the sleep period 100% for every 200 waiting connections while 
> waiting in handle.

Easy denial-of-service attack then. Spam connections and the queue
starts blocking hard. The sleep loop seems like a rather inefficient
way to do things.

> Another thing is that it's nice to see Python handling 500+ threads without 
> problems. :)

Yeah, well, that's not all THAT many threads, ultimately :)

ChrisA


Re: Simple TCP proxy

2022-07-28 Thread Morten W. Petersen
Well, I was thinking of following the socketserver / handle layout of code
and execution, for now anyway.

It wouldn't be a big deal to make them block, but another option is to
increase the sleep period 100% for every 200 waiting connections while
waiting in handle.

Another thing is that it's nice to see Python handling 500+ threads without
problems. :)

-Morten

On Thu, Jul 28, 2022 at 11:45 AM Chris Angelico  wrote:

> On Thu, 28 Jul 2022 at 19:41, Morten W. Petersen 
> wrote:
> >
> > Hi Martin.
> >
> > I was thinking of doing something with the handle function, but just this
> > little tweak:
> >
> >
> https://github.com/morphex/stp/commit/9910ca8c80e9d150222b680a4967e53f0457b465
> >
> > made a huge difference in CPU usage.  Hundreds of waiting sockets are now
> > using 20-30% of CPU instead of 10x that.
>
>  wait, what?
>
> Why do waiting sockets consume *any* measurable amount of CPU? Why
> don't the threads simply block until it's time to do something?
>
> ChrisA
>




Re: Simple TCP proxy

2022-07-28 Thread Chris Angelico
On Thu, 28 Jul 2022 at 19:41, Morten W. Petersen  wrote:
>
> Hi Martin.
>
> I was thinking of doing something with the handle function, but just this
> little tweak:
>
> https://github.com/morphex/stp/commit/9910ca8c80e9d150222b680a4967e53f0457b465
>
> made a huge difference in CPU usage.  Hundreds of waiting sockets are now
> using 20-30% of CPU instead of 10x that.

 wait, what?

Why do waiting sockets consume *any* measurable amount of CPU? Why
don't the threads simply block until it's time to do something?
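A minimal sketch of the blocking approach being suggested here, using
threading.Event so waiting handlers consume no CPU (the handler body,
thread count, and dispatcher signal are illustrative assumptions, not
STP's actual code):

```python
import threading

# Waiting handlers block on an Event instead of sleep-polling,
# so they use no CPU until the dispatcher signals them.
go = threading.Event()
woken = []

def handler(conn_id):
    go.wait()              # blocks until it's time to do something
    woken.append(conn_id)  # stand-in for the real send/recv work

threads = [threading.Thread(target=handler, args=(i,)) for i in range(5)]
for t in threads:
    t.start()
go.set()                   # dispatcher: activate the waiting handlers
for t in threads:
    t.join()
print(sorted(woken))       # [0, 1, 2, 3, 4]
```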

ChrisA


Re: Simple TCP proxy

2022-07-28 Thread Morten W. Petersen
Hi Martin.

I was thinking of doing something with the handle function, but just this
little tweak:

https://github.com/morphex/stp/commit/9910ca8c80e9d150222b680a4967e53f0457b465

made a huge difference in CPU usage.  Hundreds of waiting sockets are now
using 20-30% of CPU instead of 10x that.  So for example making the handle
function exit / stop and wait isn't necessary at this point. It also opens
up the possibility of sending a noop that is appropriate for the given
protocol.

I've not done a lot of thread programming before, but yes, locks can be
used and will be used if necessary. I wasn't sure which data types are
thread safe in Python; it might be acceptable for some variables to be off
by 1 or more, if using <= or >= checks is an option and there is no risk of
the variable containing "garbage".
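As a hedged illustration of the thread-safety point (the counter here is
made up for the example, not STP's actual state): a plain integer updated
by several threads can lose increments, while a Lock makes each
read-modify-write atomic.

```python
import threading

lock = threading.Lock()
counter = 0

def bump(n):
    global counter
    for _ in range(n):
        with lock:   # without the lock, counter could end up "off by 1 or more"
            counter += 1

threads = [threading.Thread(target=bump, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)       # 80000 with the lock held for each increment
```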

I think with a simple focus, that the project is aimed at one task, will
make it easier to manage even complex matters such as concurrency and
threads.

-Morten

On Wed, Jul 27, 2022 at 11:00 PM Martin Di Paola 
wrote:

>
> On Wed, Jul 27, 2022 at 08:32:31PM +0200, Morten W. Petersen wrote:
> >You're thinking of the backlog argument of listen?
>
>  From my understanding, yes, when you set up the "accepter" socket (the
> one that you use to listen and accept new connections), you can define
> the length of the queue for incoming connections that are not accepted
> yet.
>
> This will be the equivalent of your SimpleQueue which basically puts a
> limits on how many incoming connections are "accepted" to do a real job.
>
> Using skt.listen(N), the incoming connections are put on hold by the OS,
> while in your implementation they are formally accepted but not
> allowed to do any meaningful work: they are put on the SimpleQueue, and
> only when they are popped will they work (send/recv data).
>
> The difference then between the OS and your impl is minimal. The only
> case I can think of is that on the clients' side there may exist a
> timeout for the acceptance of the connection; your proxy server will
> eagerly accept these connections, so no timeout is possible(*)
>
> On a side note, your implementation is too thread-naive: it uses plain
> Python lists, integers and boolean variables, which are not thread safe.
> It is a matter of time until your server starts to behave weirdly.
>
> One option is to use thread-safe objects. I'd encourage you to read
> about thread-safety in general, and then about which sync mechanisms
> Python offers.
>
> Another option is to remove the SimpleQueue and the background function
> that allows a connection to be "active".
>
> If you think about it, the handlers are 99% independent, except that you
> want to allow only N of them to progress (establish and forward the
> connection), and when a handler finishes, another handler "waiting" is
> activated, "in a queue fashion" as you said.
>
> If you allow me to not have a strict queue discipline here, you can achieve
> the same results by coordinating the handlers using semaphores. Once again,
> take this email as a starting point for your own research.
>
> On a second side note, the use of handlers and threads is inefficient:
> while you have N active handlers sending/receiving data, you are eagerly
> accepting new connections, so you will have many more handlers created
> and (if I'm not wrong), each will be a thread.
>
> A more efficient solution could be
>
> 1) accept as many connections as you can, saving the socket (not the
> handler) in the thread-safe queue.
> 2) have N threads in the background popping from the queue a socket and
> then doing the send/recv stuff. When the thread is done, the thread
> closes the socket and pops another from the queue.
>
> So the queue length will be the count of accepted connections but in any
> moment your proxy will not activate (forward) more than N connections.
>
> This idea is thread-safe, simpler, efficient and has the queue
> discipline (I leave aside the usefulness).
>
> I encourage you to take time to read about the different things
> mentioned as concurrency and thread-related stuff is not easy to
> master.
>
> Thanks,
> Martin.
>
> (*) make your proxy server slow enough and yes, you will get timeouts
> anyways.
>
> >
> >Well, STP will accept all connections, but can limit how many of the
> >accepted connections that are active at any given time.
> >
> >So when I bombed it with hundreds of almost simultaneous connections, all
> >of them were accepted, but only 25 were actively sending and receiving
> data
> >at any given time. First come, first served.
> >
> >Regards,
> >
> >Morten
> >
>

Re: Simple TCP proxy

2022-07-28 Thread Morten W. Petersen
Hi Barry.

Well, I can agree that using backlog is an option for handling bursts. But
what if that backlog number is exceeded?  How easy is it to deal with such
a situation?

I just cloned twisted, and compared the size:

morphex@morphex-Latitude-E4310:~$ du -s stp; du -s tmp/twisted/
464 stp
98520 tmp/twisted/
morphex@morphex-Latitude-E4310:~$ du -sh stp/LICENSE
36K stp/LICENSE

>>> 464/98520.0
0.004709703613479496
>>>

It's quite easy to get an idea of what's going on in STP, as opposed to if
something goes wrong in Twisted with the size of the codebase. I used to
use emacs a lot, but then I came into a period where it was more practical
to use nano, and I mostly use nano now, unless I need to for example search
and replace or something like that.

-Morten

On Thu, Jul 28, 2022 at 8:31 AM Barry  wrote:

>
>
> > On 27 Jul 2022, at 17:16, Morten W. Petersen  wrote:
> >
> > Hi.
> >
> > I'd like to share with you a recent project, which is a simple TCP proxy
> > that can stand in front of a TCP server of some sort, queueing requests
> and
> > then allowing n number of connections to pass through at a time:
> >
> > https://github.com/morphex/stp
> >
> > I'll be developing it further, but the files committed in this tree
> > seem to be stable:
> >
> >
> https://github.com/morphex/stp/tree/9910ca8c80e9d150222b680a4967e53f0457b465
> >
> > I just bombed that code with 700+ requests almost simultaneously, and STP
> > handled it well.
>
> What is the problem that this solves?
>
> Why not just increase the allowed size of the socket listen backlog if you
> just want to handle bursts of traffic.
>
> I do not think of this as a proxy, rather a tunnel.
> And the tunnel is a lot more expensive than having the kernel keep the
> connection in the listen socket backlog.
>
> I work on a web proxy written in Python that handles huge load,
> using the backlog for bursts.
>
> It’s async, using Twisted, as threads are not practical at scale.
>
> Barry
>
> >
> > Regards,
> >
> > Morten
> >
> >
>
>



Re: Simple TCP proxy

2022-07-28 Thread Morten W. Petersen
OK, I'll have a look at using something other than _thread.

I quickly saw a couple of points where the code could be optimized for speed,
and the loop that transfers data back and forth also has low throughput, but
the first priority was getting it working and seeing that it is fairly stable.

Regards,

Morten



On Wed, Jul 27, 2022 at 9:57 PM Chris Angelico  wrote:

> On Thu, 28 Jul 2022 at 04:32, Morten W. Petersen 
> wrote:
> >
> > Hi Chris.
> >
> > You're thinking of the backlog argument of listen?
>
> Yes, precisely.
>
> > Well, STP will accept all connections, but can limit how many of the
> accepted connections that are active at any given time.
> >
> > So when I bombed it with hundreds of almost simultaneous connections,
> all of them were accepted, but only 25 were actively sending and receiving
> data at any given time. First come, first served.
> >
>
> Hmm. Okay. Not sure what the advantage is, but sure.
>
> If the server's capable of handling the total requests-per-minute,
> then a queueing system like this should help with burst load, although
> I would have thought that the listen backlog would do the same. What
> happens if the server actually gets overloaded though? Do connections
> get disconnected after appearing connected? What's the disconnect
> mode?
>
> BTW, you probably don't want to be using the _thread module - Python
> has a threading module which is better suited to this sort of work.
> Although you may want to consider asyncio instead, as that has far
> lower overhead when working with large numbers of sockets.
>
> ChrisA
>




Re: Simple TCP proxy

2022-07-27 Thread Barry


> On 27 Jul 2022, at 17:16, Morten W. Petersen  wrote:
> 
> Hi.
> 
> I'd like to share with you a recent project, which is a simple TCP proxy
> that can stand in front of a TCP server of some sort, queueing requests and
> then allowing n number of connections to pass through at a time:
> 
> https://github.com/morphex/stp
> 
> I'll be developing it further, but the files committed in this tree
> seem to be stable:
> 
> https://github.com/morphex/stp/tree/9910ca8c80e9d150222b680a4967e53f0457b465
> 
> I just bombed that code with 700+ requests almost simultaneously, and STP
> handled it well.

What is the problem that this solves?

Why not just increase the allowed size of the socket listen backlog if you just 
want to handle bursts of traffic.

I do not think of this as a proxy, rather a tunnel.
And the tunnel is a lot more expensive than having the kernel keep the
connection in the listen socket backlog.

I work on a web proxy written in Python that handles huge load,
using the backlog for bursts.

It’s async, using Twisted, as threads are not practical at scale.

Barry

> 
> Regards,
> 
> Morten
> 
> 



Re: Simple TCP proxy

2022-07-27 Thread Martin Di Paola



On Wed, Jul 27, 2022 at 08:32:31PM +0200, Morten W. Petersen wrote:

You're thinking of the backlog argument of listen?


From my understanding, yes, when you set up the "accepter" socket (the
one that you use to listen and accept new connections), you can define
the length of the queue for incoming connections that are not accepted
yet.

This will be the equivalent of your SimpleQueue which basically puts a
limits on how many incoming connections are "accepted" to do a real job.

Using skt.listen(N), the incoming connections are put on hold by the OS,
while in your implementation they are formally accepted but not
allowed to do any meaningful work: they are put on the SimpleQueue, and
only when they are popped will they work (send/recv data).
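A small sketch of that accepter socket and its OS-level backlog (the
loopback address, port 0, and the backlog value of 25 are illustrative
assumptions for the demo):

```python
import socket

# The OS queues up to roughly `backlog` not-yet-accepted connections.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(25)               # incoming connections wait here until accept()
host, port = srv.getsockname()
print(host, port)
srv.close()
```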

The difference then between the OS and your impl is minimal. The only
case I can think of is that on the clients' side there may exist a
timeout for the acceptance of the connection; your proxy server will
eagerly accept these connections, so no timeout is possible(*)

On a side note, your implementation is too thread-naive: it uses plain
Python lists, integers and boolean variables, which are not thread safe.
It is a matter of time until your server starts to behave weirdly.

One option is to use thread-safe objects. I'd encourage you to read
about thread-safety in general, and then about which sync mechanisms
Python offers.

Another option is to remove the SimpleQueue and the background function
that allows a connection to be "active".

If you think about it, the handlers are 99% independent, except that you
want to allow only N of them to progress (establish and forward the
connection), and when a handler finishes, another handler "waiting" is
activated, "in a queue fashion" as you said.

If you allow me to not have a strict queue discipline here, you can achieve
the same results by coordinating the handlers using semaphores. Once again,
take this email as a starting point for your own research.
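A rough sketch of the semaphore idea (the limit of 25, the fake workload,
and the connection count are illustrative assumptions): all handlers may
exist, but only N make progress at a time, and a finishing handler
automatically frees a slot for a waiting one.

```python
import threading
import time

ACTIVE_LIMIT = 25                      # assumed cap on concurrent handlers
active = threading.Semaphore(ACTIVE_LIMIT)
done = []
done_lock = threading.Lock()

def handler(conn_id):
    with active:                       # blocks until one of the N slots frees
        time.sleep(0.001)              # stand-in for establish/forward work
        with done_lock:
            done.append(conn_id)

threads = [threading.Thread(target=handler, args=(i,)) for i in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(done))                       # 100: every connection eventually served
```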

On a second side note, the use of handlers and threads is inefficient:
while you have N active handlers sending/receiving data, you are eagerly
accepting new connections, so you will have many more handlers created
and (if I'm not wrong), each will be a thread.

A more efficient solution could be

1) accept as many connections as you can, saving the socket (not the
handler) in the thread-safe queue.
2) have N threads in the background popping from the queue a socket and
then doing the send/recv stuff. When the thread is done, the thread
closes the socket and pops another from the queue.

So the queue length will be the count of accepted connections but in any
moment your proxy will not activate (forward) more than N connections.

This idea is thread-safe, simpler, efficient and has the queue
discipline (I leave aside the usefulness).
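A sketch of steps 1) and 2) with the accepted sockets stubbed out as plain
integers (the worker count, the 50 stub connections, and the None-sentinel
shutdown are assumptions for the demo, not part of STP):

```python
import queue
import threading

N_WORKERS = 4                 # assumed number of forwarding threads
jobs = queue.Queue()          # thread-safe queue of accepted "sockets"
handled = []
handled_lock = threading.Lock()

def worker():
    while True:
        sock = jobs.get()     # blocks until a connection is available
        if sock is None:      # sentinel: no more connections, shut down
            break
        with handled_lock:
            handled.append(sock)   # stand-in for send/recv, then close

# Step 1: "accept" eagerly and enqueue (here, 50 stub connections).
for i in range(50):
    jobs.put(i)

# Step 2: N background threads pop and service them.
workers = [threading.Thread(target=worker) for _ in range(N_WORKERS)]
for w in workers:
    w.start()
for _ in workers:
    jobs.put(None)
for w in workers:
    w.join()
print(len(handled))           # 50 served, never more than N in flight at once
```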

I encourage you to take time to read about the different things
mentioned as concurrency and thread-related stuff is not easy to
master.

Thanks,
Martin.

(*) make your proxy server slow enough and yes, you will get timeouts
anyways.



Well, STP will accept all connections, but can limit how many of the
accepted connections that are active at any given time.

So when I bombed it with hundreds of almost simultaneous connections, all
of them were accepted, but only 25 were actively sending and receiving data
at any given time. First come, first served.

Regards,

Morten

On Wed, Jul 27, 2022 at 8:00 PM Chris Angelico  wrote:


On Thu, 28 Jul 2022 at 02:15, Morten W. Petersen 
wrote:
>
> Hi.
>
> I'd like to share with you a recent project, which is a simple TCP proxy
> that can stand in front of a TCP server of some sort, queueing requests
and
> then allowing n number of connections to pass through at a time:

How's this different from what the networking subsystem already does?
When you listen, you can set a queue length. Can you elaborate?

ChrisA
--
https://mail.python.org/mailman/listinfo/python-list






Re: Simple TCP proxy

2022-07-27 Thread Chris Angelico
On Thu, 28 Jul 2022 at 04:32, Morten W. Petersen  wrote:
>
> Hi Chris.
>
> You're thinking of the backlog argument of listen?

Yes, precisely.

> Well, STP will accept all connections, but can limit how many of the accepted 
> connections that are active at any given time.
>
> So when I bombed it with hundreds of almost simultaneous connections, all of 
> them were accepted, but only 25 were actively sending and receiving data at 
> any given time. First come, first served.
>

Hmm. Okay. Not sure what the advantage is, but sure.

If the server's capable of handling the total requests-per-minute,
then a queueing system like this should help with burst load, although
I would have thought that the listen backlog would do the same. What
happens if the server actually gets overloaded though? Do connections
get disconnected after appearing connected? What's the disconnect
mode?

BTW, you probably don't want to be using the _thread module - Python
has a threading module which is better suited to this sort of work.
Although you may want to consider asyncio instead, as that has far
lower overhead when working with large numbers of sockets.
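A hedged asyncio sketch along those lines (the backend address, listen
port, and the 25-connection cap are illustrative assumptions, not STP's
actual configuration):

```python
import asyncio

BACKEND_HOST, BACKEND_PORT = "127.0.0.1", 8080   # assumed backend address
ACTIVE = asyncio.Semaphore(25)                   # cap on active connections

async def pump(reader, writer):
    # Copy bytes one way until EOF, then close our side.
    while data := await reader.read(65536):
        writer.write(data)
        await writer.drain()
    writer.close()

async def handle(client_r, client_w):
    async with ACTIVE:   # excess connections wait here, no threads needed
        back_r, back_w = await asyncio.open_connection(BACKEND_HOST, BACKEND_PORT)
        await asyncio.gather(pump(client_r, back_w), pump(back_r, client_w))

async def main():
    server = await asyncio.start_server(handle, "0.0.0.0", 8000)
    async with server:
        await server.serve_forever()

# asyncio.run(main())  # needs a real backend listening; left commented out
```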

ChrisA


Re: Simple TCP proxy

2022-07-27 Thread Morten W. Petersen
Hi Chris.

You're thinking of the backlog argument of listen?

Well, STP will accept all connections, but can limit how many of the
accepted connections that are active at any given time.

So when I bombed it with hundreds of almost simultaneous connections, all
of them were accepted, but only 25 were actively sending and receiving data
at any given time. First come, first served.

Regards,

Morten

On Wed, Jul 27, 2022 at 8:00 PM Chris Angelico  wrote:

> On Thu, 28 Jul 2022 at 02:15, Morten W. Petersen 
> wrote:
> >
> > Hi.
> >
> > I'd like to share with you a recent project, which is a simple TCP proxy
> > that can stand in front of a TCP server of some sort, queueing requests
> and
> > then allowing n number of connections to pass through at a time:
>
> How's this different from what the networking subsystem already does?
> When you listen, you can set a queue length. Can you elaborate?
>
> ChrisA
> --
> https://mail.python.org/mailman/listinfo/python-list
>




Re: Simple TCP proxy

2022-07-27 Thread Chris Angelico
On Thu, 28 Jul 2022 at 02:15, Morten W. Petersen  wrote:
>
> Hi.
>
> I'd like to share with you a recent project, which is a simple TCP proxy
> that can stand in front of a TCP server of some sort, queueing requests and
> then allowing n number of connections to pass through at a time:

How's this different from what the networking subsystem already does?
When you listen, you can set a queue length. Can you elaborate?

ChrisA


Simple TCP proxy

2022-07-27 Thread Morten W. Petersen
Hi.

I'd like to share with you a recent project, which is a simple TCP proxy
that can stand in front of a TCP server of some sort, queueing requests and
then allowing n number of connections to pass through at a time:

https://github.com/morphex/stp

I'll be developing it further, but the files committed in this tree
seem to be stable:

https://github.com/morphex/stp/tree/9910ca8c80e9d150222b680a4967e53f0457b465

I just bombed that code with 700+ requests almost simultaneously, and STP
handled it well.
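The queue-then-admit behaviour described above can be sketched with asyncio
and a semaphore (an illustrative sketch, not the actual STP code; the names
and the limit are made up):

```python
import asyncio

MAX_ACTIVE = 10  # hypothetical cap on concurrent backend connections

sem = asyncio.Semaphore(MAX_ACTIVE)

async def pump(reader, writer):
    # Copy bytes one way until EOF, then signal EOF downstream.
    while data := await reader.read(65536):
        writer.write(data)
        await writer.drain()
    if writer.can_write_eof():
        writer.write_eof()

async def handle(client_r, client_w, backend_host, backend_port):
    async with sem:  # excess clients wait here instead of reaching the backend
        back_r, back_w = await asyncio.open_connection(backend_host, backend_port)
        try:
            await asyncio.gather(pump(client_r, back_w),
                                 pump(back_r, client_w))
        finally:
            back_w.close()
            client_w.close()
```

Clients beyond the cap sit in the semaphore's wait queue in Python, where
arbitrary policy (logging, per-IP limits, timeouts) can be applied — which is
the control the kernel backlog alone doesn't give you.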

Regards,

Morten

-- 
I am https://leavingnorway.info
Videos at https://www.youtube.com/user/TheBlogologue
Twittering at http://twitter.com/blogologue
Blogging at http://blogologue.com
Playing music at https://soundcloud.com/morten-w-petersen
Also playing music and podcasting here:
http://www.mixcloud.com/morten-w-petersen/
On Google+ here https://plus.google.com/107781930037068750156
On Instagram at https://instagram.com/morphexx/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Generate a Google Translate API token through the Socks5 proxy using gtoken.py

2021-07-30 Thread Dennis Lee Bieber
On Fri, 30 Jul 2021 13:17:50 +0200, Peter Otten <__pete...@web.de>
declaimed the following:

>
>https://mail.python.org/pipermail/python-list/2021-July/902975.html
>
>You are now officially archived ;)

Pity the choice was the boolean "X-No-Archive"... Consensus
implementation of an "X-Expire-After: " would have been friendlier
(Google Groups USED to expire X-No-Archive posts after a week, instead of
the current practice of: post says no-archive, delete immediately).


-- 
Wulfraed Dennis Lee Bieber AF6VN
wlfr...@ix.netcom.com    http://wlfraed.microdiversity.freeddns.org/

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Generate a Google Translate API token through the Socks5 proxy using gtoken.py

2021-07-30 Thread Peter Otten

On 29/07/2021 17:43, Dennis Lee Bieber wrote:

On Thu, 29 Jul 2021 15:45:26 +0200, Peter Otten <__pete...@web.de>
declaimed the following:


On 28/07/2021 18:40, Dennis Lee Bieber wrote:

On Wed, 28 Jul 2021 09:04:40 +0200, Peter Otten <__pete...@web.de>
declaimed the following:



Perhaps it has something to do with the X-No-Archive flag set by Dennis?


According to my properties page in Agent, I've turned that off except
if the message I'm replying to has it set.


I'm yet to see an archived post by you -- maybe that setting is
overridden somewhere.


And naturally, once I found the legacy (per persona) custom setting, I
failed to edit the main persona. Hopefully this time it's set...


https://mail.python.org/pipermail/python-list/2021-July/902975.html

You are now officially archived ;)

--
https://mail.python.org/mailman/listinfo/python-list


Re: Generate a Google Translate API token through the Socks5 proxy using gtoken.py

2021-07-29 Thread Dennis Lee Bieber
On Thu, 29 Jul 2021 15:45:26 +0200, Peter Otten <__pete...@web.de>
declaimed the following:

>On 28/07/2021 18:40, Dennis Lee Bieber wrote:
>> On Wed, 28 Jul 2021 09:04:40 +0200, Peter Otten <__pete...@web.de>
>> declaimed the following:
>> 
>>>
>>> Perhaps it has something to do with the X-No-Archive flag set by Dennis?
>> 
>>  According to my properties page in Agent, I've turned that off except
>> if the message I'm replying to has it set.
>
>I'm yet to see an archived post by you -- maybe that setting is 
>overridden somewhere.

And naturally, once I found the legacy (per persona) custom setting, I
failed to edit the main persona. Hopefully this time it's set...

Appears Agent added the per-"folder" and "default"
[x] X-No-Archive
flags in some later release, and those are what I've been tweaking...

But looking at the INI file, I found there were persona-specific custom
settings for "additional header fields", which I likely set up back in the
days of Agent 5 or so; I'm currently on Agent 8.



-- 
Wulfraed Dennis Lee Bieber AF6VN
wlfr...@ix.netcom.com    http://wlfraed.microdiversity.freeddns.org/

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Generate a Google Translate API token through the Socks5 proxy using gtoken.py

2021-07-29 Thread Peter Otten

On 28/07/2021 18:40, Dennis Lee Bieber wrote:

On Wed, 28 Jul 2021 09:04:40 +0200, Peter Otten <__pete...@web.de>
declaimed the following:



Perhaps it has something to do with the X-No-Archive flag set by Dennis?


According to my properties page in Agent, I've turned that off except
if the message I'm replying to has it set.


I'm yet to see an archived post by you -- maybe that setting is 
overridden somewhere.


--
https://mail.python.org/mailman/listinfo/python-list


Re: Generate a Google Translate API token through the Socks5 proxy using gtoken.py

2021-07-28 Thread hongy...@gmail.com
On Wednesday, July 28, 2021 at 3:05:11 PM UTC+8, Peter Otten wrote:
> On 28/07/2021 07:32, Cameron Simpson wrote: 
> > On 27Jul2021 19:24, Hongyi Zhao  wrote:
> >> On Wednesday, July 28, 2021 at 7:25:27 AM UTC+8, cameron...@gmail.com 
> >> wrote: 
> >>> Just to follow on a bit to Dennis: 
> >> 
> >> But I can't find any reply from Dennis in this issue. 
> >
> > Odd, because I replied to his reply to you :-)
> Perhaps it has something to do with the X-No-Archive flag set by Dennis?

I'm not sure. I use Google Groups in Firefox for reading and posting to
this group.

HY
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Generate a Google Translate API token through the Socks5 proxy using gtoken.py

2021-07-28 Thread Peter Otten

On 28/07/2021 07:32, Cameron Simpson wrote:

On 27Jul2021 19:24, Hongyi Zhao  wrote:

On Wednesday, July 28, 2021 at 7:25:27 AM UTC+8, cameron...@gmail.com wrote:

Just to follow on a bit to Dennis:


But I can't find any reply from Dennis in this issue.


Odd, because I replied to his reply to you :-)


Perhaps it has something to do with the X-No-Archive flag set by Dennis?

--
https://mail.python.org/mailman/listinfo/python-list


Re: Generate a Google Translate API token through the Socks5 proxy using gtoken.py

2021-07-27 Thread Cameron Simpson
On 27Jul2021 19:24, Hongyi Zhao  wrote:
>On Wednesday, July 28, 2021 at 7:25:27 AM UTC+8, cameron...@gmail.com wrote:
>> Just to follow on a bit to Dennis:
>
>But I can't find any reply from Dennis in this issue.

Odd, because I replied to his reply to you :-)

If you're using a threaded view of the list, his reply should be right 
before mine. If you're not, it was about 6 hours before my reply.

Cheers,
Cameron Simpson 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Generate a Google Translate API token through the Socks5 proxy using gtoken.py

2021-07-27 Thread hongy...@gmail.com
On Wednesday, July 28, 2021 at 7:25:27 AM UTC+8, cameron...@gmail.com wrote:
> Just to follow on a bit to Dennis: 

But I can't find any reply from Dennis in this issue.
 
> C:\Users\Wulfraed\Documents\_Hg-Repositories\Python Progs>python 
> googletrans_test.py 
> Traceback (most recent call last): 
> File "googletrans_test.py", line 4, in  
> tk = acquirer.do(text) 
> File "C:\Python38\lib\site-packages\googletrans\gtoken.py", line 194, in do 
> self._update() 
> File "C:\Python38\lib\site-packages\googletrans\gtoken.py", line 62, in 
> _update
> code = self.RE_TKK.search(r.text).group(1).replace('var ', '')
> AttributeError: 'NoneType' object has no attribute 'group'
> The implication here is that: 
> 
> self.RE_TKK.search(r.text).group(1) 
> 
> generates the error because: 
> 
> self.RE_TKK.search(r.text) 
> 
> returned None instead of a regular expression match object. That means 
> that "r.text" way not what was expected. 
> 
> Like Dennis, I note the remark "for internal use only", suggesting this 
> was some internal Google code for doing something fiddly. It may not be 
> the best thing for what you're trying to do. 
> 
> WRT a socks proxy, Dennis showed that the issue occurs even without a 
> proxy. I would advocate doing all your debugging without trying to use 
> socks, then get to going through a socks proxy _after_ the main stuff is 
> working. 
> 
> Cheers, 
> Cameron Simpson 
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Generate a Google Translate API token through the Socks5 proxy using gtoken.py

2021-07-27 Thread Cameron Simpson
Just to follow on a bit to Dennis:

C:\Users\Wulfraed\Documents\_Hg-Repositories\Python Progs>python
googletrans_test.py
Traceback (most recent call last):
  File "googletrans_test.py", line 4, in 
tk = acquirer.do(text)
  File "C:\Python38\lib\site-packages\googletrans\gtoken.py", line 194, in 
do
self._update()
  File "C:\Python38\lib\site-packages\googletrans\gtoken.py", line 62, in 
_update
code = self.RE_TKK.search(r.text).group(1).replace('var ', '')
AttributeError: 'NoneType' object has no attribute 'group'

The implication here is that:

self.RE_TKK.search(r.text).group(1)

generates the error because:

self.RE_TKK.search(r.text)

returned None instead of a regular expression match object. That means 
that "r.text" way not what was expected.

Like Dennis, I note the remark "for internal use only", suggesting this 
was some internal Google code for doing something fiddly. It may not be 
the best thing for what you're trying to do.

WRT a socks proxy, Dennis showed that the issue occurs even without a 
proxy. I would advocate doing all your debugging without trying to use 
socks, then get to going through a socks proxy _after_ the main stuff is 
working.

Cheers,
Cameron Simpson 
-- 
https://mail.python.org/mailman/listinfo/python-list


Generate a Google Translate API token through the Socks5 proxy using gtoken.py

2021-07-27 Thread hongy...@gmail.com
I want to use 
[gtoken.py](https://github.com/ssut/py-googletrans/blob/master/googletrans/gtoken.py)
 through a socks5 proxy. Based on the comment 
[here](https://github.com/encode/httpx/issues/203#issuecomment-611914974) and 
the example usage in 
[gtoken.py](https://github.com/ssut/py-googletrans/blob/d15c94f176463b2ce6199a42a1c517690366977f/googletrans/gtoken.py#L29),
 I tried with the following method, but failed:
```python
(datasci) werner@X10DAi:~$ ipython
Python 3.9.1 (default, Feb 10 2021, 15:30:33) 
Type 'copyright', 'credits' or 'license' for more information
IPython 7.23.1 -- An enhanced Interactive Python. Type '?' for help.

In [1]: import socks

In [2]: import socket

In [3]: socks.set_default_proxy(socks.SOCKS5, "127.0.0.1", 18889)
   ...: socket.socket = socks.socksocket

In [4]: from googletrans.gtoken import TokenAcquirer

In [5]: acquirer=TokenAcquirer()

In [6]: text = 'test'

In [7]: tk= acquirer.do(text)
---
AttributeErrorTraceback (most recent call last)
 in 
> 1 tk= acquirer.do(text)

~/.pyenv/versions/3.9.1/envs/datasci/lib/python3.9/site-packages/googletrans/gtoken.py
 in do(self, text)
192 
193 def do(self, text):
--> 194 self._update()
195 tk = self.acquire(text)
196 return tk

~/.pyenv/versions/3.9.1/envs/datasci/lib/python3.9/site-packages/googletrans/gtoken.py
 in _update(self)
 60 
 61 # this will be the same as python code after stripping out a 
reserved word 'var'
---> 62 code = self.RE_TKK.search(r.text).group(1).replace('var ', '')
 63 # unescape special ascii characters such like a \x3d(=)
 64 code = code.encode().decode('unicode-escape')

AttributeError: 'NoneType' object has no attribute 'group'
```
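The AttributeError in the traceback is the usual symptom of calling .group()
on the None that re.search() returns when nothing matches. A hedged sketch of
the defensive pattern (the regex and error text here are invented for
illustration, not gtoken's real internals):

```python
import re

# Illustrative pattern only -- NOT the actual RE_TKK used by gtoken.py.
RE_TKK = re.compile(r"tkk:'(\d+\.\d+)'")

def extract_tkk(page_text):
    m = RE_TKK.search(page_text)
    if m is None:
        # Fail with a useful message instead of AttributeError on None;
        # a None match usually means the page layout changed or the
        # request was blocked/redirected (e.g. a captcha or error page).
        raise ValueError("TKK token not found in response body")
    return m.group(1)
```

Printing (or logging) r.text when the match fails would show whether Google
returned the expected page or something else entirely.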
Any hints for this problem will be highly appreciated.

Regards,
HY
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: question about basics of creating a PROXY to MONITOR network activity

2021-04-10 Thread Paul Bryan
Cloudflare operates as a reverse proxy in front of your service(s);
clients of your services access them through an endpoint that
Cloudflare stands up. DNS records point to Cloudflare, and TLS
certificates must be provisioned in Cloudflare to match. For all
intents and purposes, you would be outsourcing a part of your service
network infrastructure to Cloudflare.

Paul 

On Sat, 2021-04-10 at 13:35 -0500, Christian Seberino wrote:
> > 
> > a) your reverse proxy must be colocated with the service it fronts
> > on the same machine;
> > b) your network infrastructure transparently encrypts traffic
> > between your proxy and the service; or 
> > c) your proxy must negotiate its own TLS connection(s) with the
> > service.
> > 
> 
> 
> Paul
> 
> Thanks. I'm curious, do you know which of your options CloudFlare
> uses?  It has to stand in between
> you and all the sites you visit while allowing encryption right?
> 
> cs 

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: question about basics of creating a PROXY to MONITOR network activity

2021-04-10 Thread Christian Seberino
>
>
> a) your reverse proxy must be colocated with the service it fronts on the
> same machine;
> b) your network infrastructure transparently encrypts traffic between your
> proxy and the service; or
> c) your proxy must negotiate its own TLS connection(s) with the service.
>

Paul

Thanks. I'm curious, do you know which of your options CloudFlare uses?  It
has to stand in between
you and all the sites you visit while allowing encryption right?

cs
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: question about basics of creating a PROXY to MONITOR network activity

2021-04-10 Thread Michael Torrie
On 4/10/21 8:52 AM, cseb...@gmail.com wrote:
> 
>> Is it even possible to be secure in that way? This is, by definition, 
>> a MITM, and in order to be useful, it *will* have to decrypt 
>> everything. So if someone compromises the monitor, they get 
>> everything. 
> 
> Chris
> 
> I hear all your security concerns and I'm aware of them.  I *really* don't 
> want to have to
> fight SSL.  Encryption was the biggest concern and I'd rather not mess with 
> it to do something 
> useful.
> 
> I've never used CloudFlare but if I'm not mistaken, it can be considered a 
> useful "MITM" service?
> Do they have to decrypt traffic and increase the attack surface to be useful?

Cloudflare does not do any kind of MITM stuff.  Cloudflare requires some
setup on the part of the server owner, and that takes several forms.
One recommended method is to have Cloudflare sign a special certificate that
you install on your web server, which encrypts traffic between your server
and Cloudflare.  Then you provide Cloudflare with an SSL certificate and key
to use when they serve up your site to the world.

> I just want to create a "safe" MITM service so to speak.

For my own purposes, sometimes I'll create a limited, wildcard
certificate signed by my own authority which works only in my own
browser (this is the same technique used by certain regimes to MITM the
entire country!).  The proxy then uses that certificate.  It's useful
for some debugging tasks.  Or alternatively I'll create a proxy intended
to run on localhost only that proxies an encrypted source to a local,
non-encrypted channel.  For example, I might want to examine why a
connection to an IMAPS port is failing.  So I'll proxy IMAPS to IMAP so
I can sniff the IMAP locally to find out why the interaction is failing.
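That TLS-unwrapping trick can be sketched with the stdlib (a hedged,
single-purpose sketch: one thread pair per connection, no error handling,
and the function names are made up):

```python
import socket
import ssl
import threading

def relay(src, dst):
    # Copy bytes one way until EOF; locally this traffic is plaintext,
    # so it can be inspected with tcpdump/wireshark.
    while data := src.recv(65536):
        dst.sendall(data)

def tls_unwrap_proxy(local_port, remote_host, remote_port):
    # Accept plaintext connections on localhost and forward them to the
    # remote service over TLS (e.g. local IMAP <-> remote IMAPS).
    ctx = ssl.create_default_context()
    srv = socket.create_server(("127.0.0.1", local_port))
    while True:
        client, _ = srv.accept()
        remote = ctx.wrap_socket(
            socket.create_connection((remote_host, remote_port)),
            server_hostname=remote_host)
        threading.Thread(target=relay, args=(client, remote), daemon=True).start()
        threading.Thread(target=relay, args=(remote, client), daemon=True).start()
```

Pointing a mail client at localhost:local_port then shows the full IMAP
exchange in the clear on the loopback interface.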

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: question about basics of creating a PROXY to MONITOR network activity

2021-04-10 Thread Paul Bryan
There is absolutely nothing wrong with building your own reverse proxy
in front of your own service, as long as you control both. This
constitutes a tiered network/application architecture, and it's a
common practice. There's no man in the middle; there's no imposter; it's
all "you". 

If your proxy becomes the endpoint that clients now must connect to
(i.e. nothing "in front" of your new proxy), you will have to deal with
TLS (aka SSL). Having the TLS certificate match the DNS address of your
new endpoint is a central tenet of security, and allows clients to
trust that they are connecting to the intended endpoint (there is
not a man in the middle).

How you secure the network traffic between your proxy and the service
it fronts becomes an important factor. If you claim that data is always
encrypted in transit, then either:

a) your reverse proxy must be colocated with the service it fronts on
the same machine;
b) your network infrastructure transparently encrypts traffic between
your proxy and the service; or 
c) your proxy must negotiate its own TLS connection(s) with the
service.

Negotiating TLS connections independently for each hop through a
network will add overhead, and degrade the performance of your service.
This can be major pain point when composing an application of
microservices, where each service must be able to securely connect to
another over a network. This is where a service mesh proxy (e.g. Envoy)
comes into play, or even full blown service mesh with service
discovery, certificate management, access policies (e.g. Istio). 


On Sat, 2021-04-10 at 07:52 -0700, cseb...@gmail.com wrote:
> 
> > Is it even possible to be secure in that way? This is, by
> > definition, 
> > a MITM, and in order to be useful, it *will* have to decrypt 
> > everything. So if someone compromises the monitor, they get 
> > everything. 
> 
> Chris
> 
> I hear all your security concerns and I'm aware of them.  I *really*
> don't want to have to
> fight SSL.  Encryption was the biggest concern and I'd rather not
> mess with it to do something 
> useful.
> 
> I've never used CloudFlare but if I'm not mistaken, it can be
> considered a useful "MITM" service?
> Do they have to decrypt traffic and increase the attack surface to be
> useful?
> 
> I just want to create a "safe" MITM service so to speak.
> 
> cs

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: question about basics of creating a PROXY to MONITOR network activity

2021-04-10 Thread cseb...@gmail.com


> Is it even possible to be secure in that way? This is, by definition, 
> a MITM, and in order to be useful, it *will* have to decrypt 
> everything. So if someone compromises the monitor, they get 
> everything. 

Chris

I hear all your security concerns and I'm aware of them.  I *really* don't want 
to have to
fight SSL.  Encryption was the biggest concern and I'd rather not mess with it 
to do something 
useful.

I've never used CloudFlare but if I'm not mistaken, it can be considered a 
useful "MITM" service?
Do they have to decrypt traffic and increase the attack surface to be useful?

I just want to create a "safe" MITM service so to speak.

cs
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: question about basics of creating a PROXY to MONITOR network activity

2021-04-08 Thread Chris Angelico
On Fri, Apr 9, 2021 at 12:42 AM <2qdxy4rzwzuui...@potatochowder.com> wrote:
>
> On 2021-04-09 at 00:17:59 +1000,
> Chris Angelico  wrote:
>
> > Also, you'd better be really REALLY sure that your monitoring is
> > legal, ethical, and not deceptive.
>
> Not to mention *secure*.  Your monitor increases the attack surface of
> the system as a whole.  If I break into your monitor, can I recover
> passwords (yours, users, servers, etc.)?  Can I snoop on traffic?  Can I
> snoop metadata (like when which users are talking to which servers) not
> otherwise available on your network?

Is it even possible to be secure in that way? This is, by definition,
a MITM, and in order to be useful, it *will* have to decrypt
everything. So if someone compromises the monitor, they get
everything.

But try asking those questions minus the "break into the monitor"
part. Does the mere presence of the monitor mean that someone *else*
can start monitoring too?

TBH though, I think the other questions are going to largely shut this down.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: question about basics of creating a PROXY to MONITOR network activity

2021-04-08 Thread 2QdxY4RzWzUUiLuE
On 2021-04-09 at 00:17:59 +1000,
Chris Angelico  wrote:

> Also, you'd better be really REALLY sure that your monitoring is
> legal, ethical, and not deceptive.

Not to mention *secure*.  Your monitor increases the attack surface of
the system as a whole.  If I break into your monitor, can I recover
passwords (yours, users, servers, etc.)?  Can I snoop on traffic?  Can I
snoop metadata (like when which users are talking to which servers) not
otherwise available on your network?
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: question about basics of creating a PROXY to MONITOR network activity

2021-04-08 Thread Chris Angelico
On Fri, Apr 9, 2021 at 12:11 AM cseb...@gmail.com  wrote:
>
> I'm trying to create an application that stands in between all
> connections to a remote server to monitor behavior for
> security and compliance reasons.
>
> I'm guessing I'll have all users log into this middle man proxy
> application instead of logging into the original website?
>
> Are there any frameworks or existing Python apps to help
> with this project?

Yes, they'd all need to log in to the middle man. That has significant
impact on things like SSL, unless you also own the remote server and
can use the same certificate.

I'd recommend looking into one of the well-known web app frameworks
like Django or Flask, and making sure you know how all of that works.

Also, you'd better be really REALLY sure that your monitoring is
legal, ethical, and not deceptive.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


question about basics of creating a PROXY to MONITOR network activity

2021-04-08 Thread cseb...@gmail.com
I'm trying to create an application that stands in between all
connections to a remote server to monitor behavior for
security and compliance reasons.

I'm guessing I'll have all users log into this middle man proxy
application instead of logging into the original website?

Are there any frameworks or existing Python apps to help 
with this project?

Thanks,

Chris
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Speeding up a test process with a local pypi and/or web proxy?

2019-11-19 Thread Dan Stromberg
On Fri, Nov 15, 2019 at 1:11 PM Dan Stromberg  wrote:

> Hi folks.
>
> I'm looking at a test process that takes about 16 minutes for a full run.
>
Anyone?


> Naturally, I'd like to speed it up.  We've already parallelized it -
> mostly.
>
> It seems like the next thing to look at is setting up a local pypi, and
> building some of the packages that're compiled from C/C++ every time we do
> a full test run.  (We're using docker and building dependencies for each
> full test run)
>
> Also, we could conceivably set up a web proxy...?
>
> Does having a local pypi obviate the web proxy?
>
> And what local pypi servers do folks recommend for speed?
>
> We need support mostly for CPython 3.x, but we still have a little CPython
> 2.x we require, and it's possible we'll need the 2.x for a while.
>
> Thanks!
>
-- 
https://mail.python.org/mailman/listinfo/python-list


Speeding up a test process with a local pypi and/or web proxy?

2019-11-15 Thread Dan Stromberg
Hi folks.

I'm looking at a test process that takes about 16 minutes for a full run.

Naturally, I'd like to speed it up.  We've already parallelized it - mostly.

It seems like the next thing to look at is setting up a local pypi, and
building some of the packages that're compiled from C/C++ every time we do
a full test run.  (We're using docker and building dependencies for each
full test run)

Also, we could conceivably set up a web proxy...?

Does having a local pypi obviate the web proxy?

And what local pypi servers do folks recommend for speed?

We need support mostly for CPython 3.x, but we still have a little CPython
2.x we require, and it's possible we'll need the 2.x for a while.

Thanks!
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Help, Can't find the default proxy in requests by config

2019-02-21 Thread Evi1 T1me
On Thursday, February 21, 2019 at 7:12:40 AM UTC-5, Evi1 T1me wrote:
> ```bash
> ~ python3
> Python 3.7.0 (default, Oct 22 2018, 14:54:27)
> [Clang 10.0.0 (clang-1000.11.45.2)] on darwin
> Type "help", "copyright", "credits" or "license" for more information.
> >>> import requests
> >>> r = requests.get('https://www.baidu.com')
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 
> 159, in _new_conn
> (self._dns_host, self.port), self.timeout, **extra_kw)
>   File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", 
> line 80, in create_connection
> raise err
>   File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", 
> line 70, in create_connection
> sock.connect(sa)
> ConnectionRefusedError: [Errno 61] Connection refused
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", 
> line 594, in urlopen
> self._prepare_proxy(conn)
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", 
> line 805, in _prepare_proxy
> conn.connect()
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 
> 301, in connect
> conn = self._new_conn()
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 
> 168, in _new_conn
> self, "Failed to establish a new connection: %s" % e)
> urllib3.exceptions.NewConnectionError: 
> : Failed to 
> establish a new connection: [Errno 61] Connection refused
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 
> 449, in send
> timeout=timeout
>   File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", 
> line 638, in urlopen
> _stacktrace=sys.exc_info()[2])
>   File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 
> 398, in increment
> raise MaxRetryError(_pool, url, error or ResponseError(cause))
> urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.baidu.com', 
> port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot 
> connect to proxy.', 
> NewConnectionError(' 0x10e3ce550>: Failed to establish a new connection: [Errno 61] Connection 
> refused')))
> 
> During handling of the above exception, another exception occurred:
> 
> Traceback (most recent call last):
>   File "", line 1, in 
>   File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 75, in 
> get
> return request('get', url, params=params, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 60, in 
> request
> return session.request(method=method, url=url, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 
> 533, in request
> resp = self.send(prep, **send_kwargs)
>   File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 
> 646, in send
> r = adapter.send(request, **kwargs)
>   File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 
> 510, in send
> raise ProxyError(e, request=request)
> requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.baidu.com', 
> port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot 
> connect to proxy.', 
> NewConnectionError(' 0x10e3ce550>: Failed to establish a new connection: [Errno 61] Connection 
> refused')))
> ```
> 
> Check the proxy 
> 
> ```bash
> >>> print(requests.utils.get_environ_proxies('https://www.baidu.com'))
> {'http': 'http://127.0.0.1:', 'https': 'http://127.0.0.1:'}
> ```
> 
> Check bash environment
> 
> ```bash
> ~ set | grep proxy
> ```
> Nothing output.
> 
> ```bash
> ➜  ~ netstat -ant | grep 
> tcp4   5  0  127.0.0.1.54437127.0.0.1. CLOSE_WAIT
> tcp4 653  0  127.0.0.1.54436127.0.0.1. CLOSE_WAIT
> tcp4   5  0  127.0.0.1.54434127.0.0.1. CLOSE_WAIT
> ```
> 
> ```bash
> ➜  ~ lsof -i:
> COMMAND PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
> JavaAppli 77714 zerop   54u  IPv6 0x975257a3

Help, Can't find the default proxy in requests by config

2019-02-21 Thread Evi1 T1me
```bash
~ python3
Python 3.7.0 (default, Oct 22 2018, 14:54:27)
[Clang 10.0.0 (clang-1000.11.45.2)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import requests
>>> r = requests.get('https://www.baidu.com')
Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 
159, in _new_conn
(self._dns_host, self.port), self.timeout, **extra_kw)
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", 
line 80, in create_connection
raise err
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/connection.py", 
line 70, in create_connection
sock.connect(sa)
ConnectionRefusedError: [Errno 61] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 
594, in urlopen
self._prepare_proxy(conn)
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 
805, in _prepare_proxy
conn.connect()
  File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 
301, in connect
conn = self._new_conn()
  File "/usr/local/lib/python3.7/site-packages/urllib3/connection.py", line 
168, in _new_conn
self, "Failed to establish a new connection: %s" % e)
urllib3.exceptions.NewConnectionError: 
: Failed to 
establish a new connection: [Errno 61] Connection refused

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 449, 
in send
timeout=timeout
  File "/usr/local/lib/python3.7/site-packages/urllib3/connectionpool.py", line 
638, in urlopen
_stacktrace=sys.exc_info()[2])
  File "/usr/local/lib/python3.7/site-packages/urllib3/util/retry.py", line 
398, in increment
raise MaxRetryError(_pool, url, error or ResponseError(cause))
urllib3.exceptions.MaxRetryError: HTTPSConnectionPool(host='www.baidu.com', 
port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot 
connect to proxy.', 
NewConnectionError(': Failed to establish a new connection: [Errno 61] Connection 
refused')))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "", line 1, in 
  File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 75, in get
return request('get', url, params=params, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/requests/api.py", line 60, in 
request
return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 533, 
in request
resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.7/site-packages/requests/sessions.py", line 646, 
in send
r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.7/site-packages/requests/adapters.py", line 510, 
in send
raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='www.baidu.com', 
port=443): Max retries exceeded with url: / (Caused by ProxyError('Cannot 
connect to proxy.', 
NewConnectionError(': Failed to establish a new connection: [Errno 61] Connection 
refused')))
```

Check the proxy 

```bash
>>> print(requests.utils.get_environ_proxies('https://www.baidu.com'))
{'http': 'http://127.0.0.1:', 'https': 'http://127.0.0.1:'}
```

Check bash environment

```bash
~ set | grep proxy
```
Nothing output.

```bash
➜  ~ netstat -ant | grep 
tcp4   5  0  127.0.0.1.54437127.0.0.1. CLOSE_WAIT
tcp4 653  0  127.0.0.1.54436127.0.0.1. CLOSE_WAIT
tcp4   5  0  127.0.0.1.54434127.0.0.1. CLOSE_WAIT
```

```bash
➜  ~ lsof -i:
COMMAND PID  USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
JavaAppli 77714 zerop   54u  IPv6 0x975257a323b5690f  0t0  TCP 
localhost:54434->localhost:ddi-tcp-1 (CLOSE_WAIT)
JavaAppli 77714 zerop   55u  IPv6 0x975257a33daa290f  0t0  TCP 
localhost:54436->localhost:ddi-tcp-1 (CLOSE_WAIT)
JavaAppli 77714 zerop   56u  IPv6 0x975257a3366b600f  0t0  TCP 
localhost:54437->localhost:ddi-tcp-1 (CLOSE_WAIT)
```

```bash
➜  ~ ps -ef | grep 77714
  501 77714 1   0 11:17AM ?? 3:33.55 /Applications/Burp Suite Community Edition.app/Contents/MacOS/JavaApplicationStub
  501 84408 82855   0  5:54AM ttys0020:00.00 grep --color=auto --exclude-dir=.bzr --exclude-dir=CVS --exclude-dir=.git --exclude-dir=.hg --e
```
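For later readers, a possible workaround sketch (not from the original thread): requests discovers proxies through `urllib.request.getproxies()`, which on macOS consults the system network settings as well as shell variables — which would explain a proxy turning up while `set | grep proxy` prints nothing. A requests Session can opt out of that discovery entirely:

```python
import requests

# trust_env=False stops requests from consulting *_proxy environment
# variables and (on macOS/Windows) system proxy settings, so the
# session connects directly even when a stale proxy is configured.
session = requests.Session()
session.trust_env = False

# resp = session.get("https://www.baidu.com", timeout=10)  # would now go direct
```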

Re: How to reset system proxy using python code

2018-02-19 Thread Thomas Jollans
On 2018-02-19 09:57, Sum J wrote:
> Hi,
> 
> I am using below python code (Python 2.7) to reset the proxy of my Ubuntu
> (Cent OS 6) system, but I am unable to reset the proxy:

I'm sure you know this, but CentOS and Ubuntu are two different things.

> 
> Code :
> import os
>  print "Unsetting http..."
>  os.system("unset http_proxy")
>  os.system("echo $http_proxy")
>  print "http is reset"

Please pay attention to indentation when pasting Python code. I know
what you mean, but as such this code wouldn't even run.

> 
> Output :
> Unsetting http...
> http://web-proxy..xxx.net:8080
> http is reset
> Process finished with exit code 0
> 
> It should not return ' http://web-proxy..xxx.net:8080 ' in output.
> 
> I run the same unset command  from terminal , then I see that proxy is
> reset:
> 
> [trex@sumlnxvm ~]$ unset $HTTP_PROXY
> [trex@sumlnxvm ~]$ echo $HTTP_PROXY

It is not possible to modify a parent process's environment from within
a child process. Can I interest you in a shell function or alias?

If you want to remove the environment variable for future processes you
start from your python script, simply modify os.environ. For example:
(using an environment variable I actually have on my system)

% cat del_env.py
import os

os.system('echo desktop $XDG_SESSION_DESKTOP')
print('removing')
del os.environ['XDG_SESSION_DESKTOP']
os.system('echo desktop $XDG_SESSION_DESKTOP')

% python del_env.py
desktop gnome
removing
desktop

% echo $XDG_SESSION_DESKTOP
gnome



-- Thomas

> 
> [trex@sumlnxvm ~]$
> 
> 
> Please suggest how to reset system proxy using Python Code
> 
> Regards,
> Sumit
> 


-- 
https://mail.python.org/mailman/listinfo/python-list


How to reset system proxy using python code

2018-02-19 Thread Sum J
Hi,

I am using below python code (Python 2.7) to reset the proxy of my Ubuntu
(Cent OS 6) system, but I am unable to reset the proxy:

Code :
import os
 print "Unsetting http..."
 os.system("unset http_proxy")
 os.system("echo $http_proxy")
 print "http is reset"

Output :
Unsetting http...
http://web-proxy..xxx.net:8080
http is reset
Process finished with exit code 0

It should not return ' http://web-proxy..xxx.net:8080 ' in output.

I run the same unset command  from terminal , then I see that proxy is
reset:

[trex@sumlnxvm ~]$ unset $HTTP_PROXY
[trex@sumlnxvm ~]$ echo $HTTP_PROXY

[trex@sumlnxvm ~]$


Please suggest how to reset system proxy using Python Code

Regards,
Sumit
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Recommended pypi caching proxy?

2017-12-18 Thread Ray Cote
On Mon, Dec 18, 2017 at 1:00 PM, Matt Wheeler  wrote:

> On Mon, 18 Dec 2017, 15:45 Ray Cote, 
> wrote:
>
>> Looking to deploy a locally cached pypi proxy service.
>>
>> Is there a recommended/preferred pypi caching tool?
>> I’ve found:
>>   - proxypypy
>>   - Flask-Pypi-Proxy
>>   - pypicache
>>
>> All of which seem to have generally the same functionality and all of
>> which
>> are a few years old.
>> Recommendations from the crowd?
>>
>
> In the past I've used https://github.com/devpi/devpi
> It may have many features you don't need if all you're after is a caching
> proxy, but I found it does that well and it appears to still be pretty
> active.
>
>> --
>
> --
> Matt Wheeler
> http://funkyh.at
>

Thanks for the recommendation.
Will take a look at it.
—Ray


-- 
Raymond Cote, President
voice: +1.603.924.6079 email: rgac...@appropriatesolutions.com skype:
ray.cote
Schedule a meeting: https://calendly.com/ray_cote/60min/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Recommended pypi caching proxy?

2017-12-18 Thread Matt Wheeler
On Mon, 18 Dec 2017, 15:45 Ray Cote, 
wrote:

> Looking to deploy a locally cached pypi proxy service.
>
> Is there a recommended/preferred pypi caching tool?
> I’ve found:
>   - proxypypy
>   - Flask-Pypi-Proxy
>   - pypicache
>
> All of which seem to have generally the same functionality and all of which
> are a few years old.
> Recommendations from the crowd?
>

In the past I've used https://github.com/devpi/devpi
It may have many features you don't need if all you're after is a caching
proxy, but I found it does that well and it appears to still be pretty
active.
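For reference, a sketch of pointing pip at a running devpi instance (assumptions: devpi's default port 3141 and its standard `root/pypi` mirror index, which transparently caches PyPI):

```ini
# ~/.pip/pip.conf (per-user pip configuration)
[global]
index-url = http://localhost:3141/root/pypi/+simple/
```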

> --

--
Matt Wheeler
http://funkyh.at
-- 
https://mail.python.org/mailman/listinfo/python-list


Recommended pypi caching proxy?

2017-12-18 Thread Ray Cote
Hello list:

Looking to deploy a locally cached pypi proxy service.

Is there a recommended/preferred pypi caching tool?
I’ve found:
  - proxypypy
  - Flask-Pypi-Proxy
  - pypicache

All of which seem to have generally the same functionality and all of which
are a few years old.
Recommendations from the crowd?
—Ray


-- 
Raymond Cote, President
Tokenize What Matters®
voice: +1.603.924.6079 email: rgac...@appropriatesolutions.com skype:
ray.cote
Schedule a meeting: https://calendly.com/ray_cote/60min/
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: install on host not connected to the internet and no local proxy

2017-11-02 Thread Paul Moore
On 2 November 2017 at 07:17, Chris Angelico  wrote:
> On Thu, Nov 2, 2017 at 5:50 PM, Noah  wrote:
>> Hi,
>>
>> I am trying to install a python package with about 80 dependencies on a
>> server that is not connected to the internet and has no local proxy.  I can
>> ssh to it via VPN.
>>
>> I was able to find python bundle and download the tarballs for all the main
>> python package and all the tarballs for the subsequent dependencies.They
>> reside in the same directory on the isolated server.
>>
>> Does anybody have some recommendations on how to install the main package
>> and that process triggers the installation of all the dependencies from
>> their corresponding tar.gz file?  I cant seem to figure out how to do that
>> easily with pip.
>
> Hmm. The first thing that comes to my mind is a virtual environment.
> I'm assuming here that you have a local system that has the same CPU
> architecture and Python as the main server, and which *does* have an
> internet connection; if that's not the case, it'll be more
> complicated. But in theory, this should work:
>
> local$ python3 -m venv env
> local$ source env/bin/activate
> local$ pip install -r requirements.txt
>
> At this point, you have a directory called "env" which contains all
> the packages listed in your requirements.txt file (you DO have one of
> those, right?) and everything those packages depend on. Then SSH to
> your server, and set up an equivalent environment:
>
> server$ python3 -m venv env
> server$ source env/bin/activate
>
> Copy in the contents of env/lib/pythonX.Y/site-packages (where X.Y is
> your Python version, eg python3.7 on my system), and then try
> importing stuff. In theory, you should be able to load everything in
> just fine.
>
> If that doesn't work, you might have to manually run setup.py for each
> of your eighty dependencies, and possibly all of their dependencies
> too. I'd definitely try the venv transfer before going to that level
> of tedium.

Alternatively, you can do (on your internet-connected system)

mkdir wheels
pip wheel --wheel-dir wheels -r requirements.txt

This will create a set of .whl files in the directory "wheels". You
can copy that directory to the target machine and (assuming the two
machines do have the same architecture/OS) on that machine do

pip install --no-index --find-links wheels -r requirements.txt

This will tell pip to not use PyPI (and so not need the internet) and
to satisfy the requirements using only the wheel files in the "wheels"
directory.

Paul
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: install on host not connected to the internet and no local proxy

2017-11-02 Thread Chris Angelico
On Thu, Nov 2, 2017 at 5:50 PM, Noah  wrote:
> Hi,
>
> I am trying to install a python package with about 80 dependencies on a
> server that is not connected to the internet and has no local proxy.  I can
> ssh to it via VPN.
>
> I was able to find python bundle and download the tarballs for all the main
> python package and all the tarballs for the subsequent dependencies.They
> reside in the same directory on the isolated server.
>
> Does anybody have some recommendations on how to install the main package
> and that process triggers the installation of all the dependencies from
> their corresponding tar.gz file?  I cant seem to figure out how to do that
> easily with pip.

Hmm. The first thing that comes to my mind is a virtual environment.
I'm assuming here that you have a local system that has the same CPU
architecture and Python as the main server, and which *does* have an
internet connection; if that's not the case, it'll be more
complicated. But in theory, this should work:

local$ python3 -m venv env
local$ source env/bin/activate
local$ pip install -r requirements.txt

At this point, you have a directory called "env" which contains all
the packages listed in your requirements.txt file (you DO have one of
those, right?) and everything those packages depend on. Then SSH to
your server, and set up an equivalent environment:

server$ python3 -m venv env
server$ source env/bin/activate

Copy in the contents of env/lib/pythonX.Y/site-packages (where X.Y is
your Python version, eg python3.7 on my system), and then try
importing stuff. In theory, you should be able to load everything in
just fine.

If that doesn't work, you might have to manually run setup.py for each
of your eighty dependencies, and possibly all of their dependencies
too. I'd definitely try the venv transfer before going to that level
of tedium.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


install on host not connected to the internet and no local proxy

2017-11-02 Thread Noah

Hi,

I am trying to install a python package with about 80 dependencies on a 
server that is not connected to the internet and has no local proxy.  I 
can ssh to it via VPN.


I was able to find the python bundle and download the tarballs for the 
main python package and all the tarballs for the subsequent 
dependencies. They reside in the same directory on the isolated server.


Does anybody have some recommendations on how to install the main 
package so that the process triggers the installation of all the 
dependencies from their corresponding tar.gz files?  I can't seem to 
figure out how to do that easily with pip.


Cheers

--
https://mail.python.org/mailman/listinfo/python-list


How to get the webpage with socks5 proxy in python3?

2017-09-23 Thread Length Power
sudo lsof -i:1080
COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME
sslocal 1795 root4u  IPv4  16233  0t0  TCP localhost:socks (LISTEN)
sslocal 1795 root5u  IPv4  16234  0t0  UDP localhost:socks 

An app is listening on localhost:1080; it is ready to act as curl's socks5 proxy.
The app providing the socks5 proxy service is the shadowsocks client on my PC.

curl can work with the socks proxy on my PC.

target="target_url_youtube"
curl --socks5-hostname 127.0.0.1:1080 $target -o /tmp/sample  

The target url can be downloaded with the socks5 proxy in curl.

shadowsocks client--->shadowsocks server--->target_url_youtube
127.0.0.1:1080  1xx.1xx.1xx.1xx:port   target_url_youtube

Notice:
All the packets from 127.0.0.1:1080 to 1xx.1xx.1xx.1xx:port are sent and
received by the shadowsocks client and server.
curl just sends packets to 127.0.0.1:1080.


Now i want to get the target webpage with socks proxy in python3.

the first try :

import urllib.request
target="target_url_youtubr"
proxy_support = urllib.request.ProxyHandler({'sock5': 'localhost:1080'})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)
web = urllib.request.urlopen(target).read() 
print(web)

The error info:
sock.connect(sa)
OSError: [Errno 101] Network is unreachable

Notice:
It makes no difference to write {'sock5': 'localhost:1080'} as {'sock5':
'127.0.0.1:1080'}; I have verified it.

the second try:

import socks
import socket
socks.setdefaultproxy(socks.PROXY_TYPE_SOCKS5, "127.0.0.1", 1080)
socket.socket = socks.socksocket
import urllib.request
target="target_url_youtubr"
print(urllib.request.urlopen('target').read())

error info:
raise BadStatusLine(line)
http.client.BadStatusLine: 

The third try, as Martijn Pieters suggests in

[Python3 - Requests with Sock5 proxy][1]

import socks
import socket
from urllib import request
socks.set_default_proxy(socks.SOCKS5, "localhost", 1080)
socket.socket = socks.socksocket
target="target_url_youtube"
r = request.urlopen(url)
print(r.read()) 

ssl.SSLEOFError: EOF occurred in violation of protocol (_ssl.c:600)
urllib.error.URLError: 

Why can't the data packets be sent via localhost:1080 to fetch the
target_url_youtube content, when curl can?
How do I fix my python3 code for the socks5 proxy?

shadowsocks client-->shadowsocks server-->target_url_youtube
127.0.0.1:1080   1xx.1xx.1xx.1xx:port target_url_youtube
`curl --socks5-hostname 127.0.0.1:1080 $target -o /tmp/sample` can do the
job.
Why can't any of the three Python snippets?
How to fix it?



  [1]: 
https://stackoverflow.com/questions/31777692/python3-requests-with-sock5-proxy
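For later readers: newer requests releases support SOCKS proxies directly (via the `requests[socks]` extra), which avoids monkey-patching `socket.socket`. A minimal sketch; the `socks5h` scheme makes the proxy resolve hostnames, matching `curl --socks5-hostname`:

```python
def socks5_proxies(host="127.0.0.1", port=1080):
    """Build a proxy mapping for requests; 'socks5h' (note the 'h')
    delegates DNS resolution to the proxy, like curl --socks5-hostname."""
    url = "socks5h://{}:{}".format(host, port)
    return {"http": url, "https": url}

# Usage (requires: pip install requests[socks]):
# import requests
# r = requests.get(target, proxies=socks5_proxies(), timeout=10)
```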
-- 
https://mail.python.org/mailman/listinfo/python-list


urllib.request with proxy and HTTPS

2017-06-30 Thread Pavel Volkov

Hello,
I'm trying to make an HTTPS request with urllib.

OS: Gentoo
Python: 3.6.1
openssl: 1.0.2l

This is my test code:

= CODE BLOCK BEGIN =
import ssl
import urllib.request
from lxml import etree

PROXY = 'proxy.vpn.local:'
URL = "https://google.com"

proxy = urllib.request.ProxyHandler({'http': PROXY})

#context = ssl.SSLContext(ssl.PROTOCOL_TLSv1_1)
context = ssl.SSLContext()
context.verify_mode = ssl.CERT_REQUIRED
context.check_hostname = True

secure_handler = urllib.request.HTTPSHandler(context = context)
opener = urllib.request.build_opener(proxy, secure_handler)
opener.addheaders = [('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; Win64; 
x64; rv:54.0) Gecko/20100101 Firefox/54.0')]


response = opener.open(URL)
tree = etree.parse(response, parser=etree.HTMLParser())
print(tree.docinfo.doctype)
= CODE BLOCK END =


My first problem is that CERTIFICATE_VERIFY_FAILED error happens.
I've found that something similar happens on macOS since Python installs 
its own set of trusted CAs.
But this isn't macOS and I can fetch HTTPS normally with curl and other 
tools.



= TRACE BLOCK BEGIN =
Traceback (most recent call last):
 File "/usr/lib64/python3.6/urllib/request.py", line 1318, in do_open
   encode_chunked=req.has_header('Transfer-encoding'))
 File "/usr/lib64/python3.6/http/client.py", line 1239, in request
   self._send_request(method, url, body, headers, encode_chunked)
 File "/usr/lib64/python3.6/http/client.py", line 1285, in _send_request
   self.endheaders(body, encode_chunked=encode_chunked)
 File "/usr/lib64/python3.6/http/client.py", line 1234, in endheaders
   self._send_output(message_body, encode_chunked=encode_chunked)
 File "/usr/lib64/python3.6/http/client.py", line 1026, in _send_output
   self.send(msg)
 File "/usr/lib64/python3.6/http/client.py", line 964, in send
   self.connect()
 File "/usr/lib64/python3.6/http/client.py", line 1400, in connect
   server_hostname=server_hostname)
 File "/usr/lib64/python3.6/ssl.py", line 401, in wrap_socket
   _context=self, _session=session)
 File "/usr/lib64/python3.6/ssl.py", line 808, in __init__
   self.do_handshake()
 File "/usr/lib64/python3.6/ssl.py", line 1061, in do_handshake
   self._sslobj.do_handshake()
 File "/usr/lib64/python3.6/ssl.py", line 683, in do_handshake
   self._sslobj.do_handshake()
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed 
(_ssl.c:749)


During handling of the above exception, another exception occurred:

Traceback (most recent call last):
 File "./https_test.py", line 21, in 
   response = opener.open(URL)
 File "/usr/lib64/python3.6/urllib/request.py", line 526, in open
   response = self._open(req, data)
 File "/usr/lib64/python3.6/urllib/request.py", line 544, in _open
   '_open', req)
 File "/usr/lib64/python3.6/urllib/request.py", line 504, in _call_chain
   result = func(*args)
 File "/usr/lib64/python3.6/urllib/request.py", line 1361, in https_open
   context=self._context, check_hostname=self._check_hostname)
 File "/usr/lib64/python3.6/urllib/request.py", line 1320, in do_open
   raise URLError(err)
urllib.error.URLError: certificate verify failed (_ssl.c:749)>

= TRACE BLOCK END =

Second problem is that for HTTP requests proxy is used, but for HTTPS it 
makes a direct connection (verified with tcpdump).


I've read at docs.python.org that previous versions of Python couldn't 
handle HTTPS with proxy but that shortcoming seems to have gone now.


Please help :)
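A hedged sketch addressing both symptoms (the proxy port, elided above, is written here as a hypothetical 3128): `ssl.SSLContext()` with no arguments trusts no CAs, hence CERTIFICATE_VERIFY_FAILED, whereas `ssl.create_default_context()` loads the platform store; and `ProxyHandler` routes per URL scheme, so a mapping with only an `'http'` key sends `https://` requests directly:

```python
import ssl
import urllib.request

PROXY = 'proxy.vpn.local:3128'  # hypothetical port; the original was elided

# create_default_context() loads the platform CA bundle and enables
# hostname checking, unlike a bare ssl.SSLContext().
context = ssl.create_default_context()

# One entry per scheme: without the 'https' key, HTTPS bypasses the proxy.
proxy = urllib.request.ProxyHandler({'http': PROXY, 'https': PROXY})

opener = urllib.request.build_opener(
    proxy, urllib.request.HTTPSHandler(context=context))
```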

--
https://mail.python.org/mailman/listinfo/python-list


SOAPpy proxy through NTLM

2015-03-28 Thread pfaff . christopherj
 
[WSDL method listing; the XML markup was stripped by the archive. The
recoverable operation names and descriptions are:]

  - (name lost): Performs lookup of specified ticket ID and attempts to
    retrieve outage-related data.
  - WebService_CreateOnlineMeetingAuditHistor: Used to update online
    meeting schedule details for RFC.
  - WebService_LockdownAdminApproval: Used to set Lockdown admin approval
    via email.
  - WebService_LockdownPropertyApproval: Used to set Lockdown property
    approval via email.
  - WebService_RfcCABApproval: Used to set the CAB RFC Approval via email.
  - WebService_RfcPeerApproval: Sets the Peer RFC Approval via email.
  - WebService_SendResultEmail: Sends email notifications for RFC approval
    status.
  - WebService_SendResultLockDownEmail: Sends email notifications for
    lockdown approval status.
  - WebService_ValidateDiary: Validates a diary entry which is created by
    e-mail.
  - Webservice_CreateDiary: Web method for creating a diary entry via email.
  - Webservice_GetPropertyOwners: Returns the list of property owners of
    the specified properties.
  - WriteErrorData: Used to produce a proper error message.
##
#
I tried to authenticate with SOAPpy and failed.

##
#

Python 2.7.6 (default, Apr 28 2014, 19:01:47) 
[GCC 4.2.2 20070831 prerelease [FreeBSD]] on freebsd8
Type "help", "copyright", "credits" or "license" for more information.
>>> import SOAPpy
>>> from SOAPpy import SOAPProxy
>>> username, password, instance = 'domain\\user', 'abc123', 'demo'
>>> proxy, namespace = 
>>> 'http://username:password@networkchange/NetworkChangeAPI.asmx'+instance+'/incident.do?SOAP',
>>>  'http://networkchange/webservices/'
>>> server = SOAPProxy(proxy,namespace)
>>> server.config.debug = 1
>>> response = server.GetBlankRfc()
In build.
*** Outgoing HTTP headers **
POST /NetworkChangeAPI.asmxdemo/incident.do?SOAP HTTP/1.0
Host: networkchange
User-agent: SOAPpy 0.12.22 (http://pywebsvcs.sf.net)
Content-type: text/xml; charset=UTF-8
Content-length: 401
SOAPAction: "GetBlankRfc"

*** Outgoing SOAP **

http://schemas.xmlsoap.org/soap/encoding/";
  xmlns:SOAP-ENC="http://schemas.xmlsoap.org/soap/encoding/";
  xmlns:SOAP-ENV="http://schemas.xmlsoap.org/soap/envelope/";
>

http://networkchange/webservices/"; 
SOAP-ENC:root="1">




code= 401
msg= Unauthorized
headers= Content-Type: text/html
Server: MediumCo-IIS/7.5
WWW-Authenticate: Negotiate
WWW-Authenticate: NTLM
X-Powered-By: ASP.NET
Host: WBA02
X-UA-Compatible: IE=9
Date: Sat, 28 Mar 2015 22:03:17 GMT
Connection: close
Content-Length: 1293

content-type= text/html
data= http://www.w3.org/TR/xhtml1/DTD/xhtml

Re: Suds Python 2.4.3 Proxy

2014-11-30 Thread Chris Angelico
On Mon, Dec 1, 2014 at 8:14 AM, Ned Deily  wrote:
> In article
> ,
>  Chris Angelico  wrote:
>> It might be worth looking at some actual 2.4 documentation.
>> Unfortunately that doesn't seem to be hosted on python.org any more
>> (though I might be wrong),
>
> https://python.org/ -> Documentation
>https://www.python.org/doc/ -> Documentation Releases by Version
>   https://www.python.org/doc/versions/ -> Python 2.4.4
>  https://docs.python.org/release/2.4.4/

Ahh, I see why I didn't find it. Since the 2.4 days, there seems to
have been a reorganization that changed URLs. Compare:

https://docs.python.org/release/2.4/lib/module-urllib2.html
https://docs.python.org/2.7/library/urllib2.html

Glad I was wrong on that, and the first of those links is what the OP needs.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Suds Python 2.4.3 Proxy

2014-11-30 Thread Ned Deily
In article 
,
 Chris Angelico  wrote:
> It might be worth looking at some actual 2.4 documentation.
> Unfortunately that doesn't seem to be hosted on python.org any more
> (though I might be wrong),

https://python.org/ -> Documentation
   https://www.python.org/doc/ -> Documentation Releases by Version
  https://www.python.org/doc/versions/ -> Python 2.4.4
 https://docs.python.org/release/2.4.4/

-- 
 Ned Deily,
 n...@acm.org

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Suds Python 2.4.3 Proxy

2014-11-30 Thread Mark Lawrence

On 30/11/2014 09:10, Chris Angelico wrote:

On Sun, Nov 30, 2014 at 2:01 AM, Jerry Rocteur  wrote:

This works GREAT on 2.7 but when I run it on 2.4 I get:

   File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open
 if '@' in host:
TypeError: iterable argument required


It might be worth looking at some actual 2.4 documentation.
Unfortunately that doesn't seem to be hosted on python.org any more
(though I might be wrong), so you'll want to look on your system
itself and find some docs.



This 
https://hg.python.org/cpython/file/ceec209b26d4/Doc/lib/liburllib2.tex 
should help.



Otherwise, why not simply install Python 2.7 on those systems? It can
happily coexist with the system 2.4.

ChrisA



--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Suds Python 2.4.3 Proxy

2014-11-30 Thread Chris Angelico
On Sun, Nov 30, 2014 at 2:01 AM, Jerry Rocteur  wrote:
> This works GREAT on 2.7 but when I run it on 2.4 I get:
>
>   File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open
> if '@' in host:
> TypeError: iterable argument required

It might be worth looking at some actual 2.4 documentation.
Unfortunately that doesn't seem to be hosted on python.org any more
(though I might be wrong), so you'll want to look on your system
itself and find some docs.

Otherwise, why not simply install Python 2.7 on those systems? It can
happily coexist with the system 2.4.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Suds Python 2.4.3 Proxy

2014-11-29 Thread Jerry Rocteur
Hi,

I posted this on the SOAP list but didn't get a reply, so I was hoping
perhaps someone from this list could help me.

I got my SOAP script working to GlobalSign, thanks to the help from
Dieter and I can now download certificates with the script.

I was running on Python 2.7 and it works great there but now I have to
deploy the script on some Redhat workstations running python 2.4.3
(which I am not allowed to upgrade)

When I create my client in my script I do it like this:

url = 'https://system.globalsign.com/cr/ws/GasOrderService?wsdl'
proxy = {'https': proxylog + ":" + proxypass + '@proxyhost.int:'}
globalClient = Client(url, proxy=proxy)


This works GREAT on 2.7 but when I run it on 2.4 I get:

  File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open
if '@' in host:
TypeError: iterable argument required

I'm Googling it and I can see there are other ways to connect, but I
can't seem to get the syntax right.

Can anyone help?


Thanks in advance,

-- 
Jerry Rocteur

je...@rocteur.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Google Appengine Proxy Post method error

2014-09-22 Thread Kev Dwyer
alextr...@googlemail.com wrote:

> So I got the Labnol Google Appengine proxy but it can't handle the Post
> method aka error 405.
> 
> I need help adding this method to the script:
> 
> mirror.py = http://pastebin.com/2zRsdi3U
> 
> transform_content.py = http://pastebin.com/Fw7FCncA
> 
> main.html = http://pastebin.com/HTBH3y5T
> 
> All other files are just small files for appengine that don't carry
> sufficient code for this. Hope you guys can help.
> 
> Thanks in advance :)


Hello,

Very broadly speaking, you need to add a post method to the MirrorHandler  
class, and in that method:

 - mung the request in a similar fashion to the get method
 - avoid caching the request (POST requests are not idempotent)
 - forward the altered request to the destination server
 - return the response to the original client
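The steps above can be sketched framework-neutrally (a hedged illustration using only the standard library rather than webapp2; `UPSTREAM` and the header handling are assumptions, not the labnol proxy's actual code):

```python
import urllib.request
from http.server import BaseHTTPRequestHandler

UPSTREAM = "http://example.com"  # hypothetical destination server


class MirrorPostHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the original body, forward it unchanged (no caching:
        # POST is not idempotent), then relay the upstream response.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        req = urllib.request.Request(UPSTREAM + self.path, data=body,
                                     method="POST")
        ctype = self.headers.get("Content-Type")
        if ctype:
            req.add_header("Content-Type", ctype)
        with urllib.request.urlopen(req) as resp:
            self.send_response(resp.status)
            self.end_headers()
            self.wfile.write(resp.read())
```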


The labnol google-proxy githubpage lists a twitter account for support 
contact - http://twitter.com/labnol - so you could try asking there for more 
help.  Also check the docs for webapp2 and and Google App Engine 
(http://developers.google.com/appengine).  

Have fun,

Kev 

-- 
https://mail.python.org/mailman/listinfo/python-list


Google Appengine Proxy Post method error

2014-09-19 Thread alextrapp
So I got the Labnol Google Appengine proxy but it can't handle the Post method 
aka error 405.

I need help adding this method to the script:

mirror.py = http://pastebin.com/2zRsdi3U

transform_content.py = http://pastebin.com/Fw7FCncA

main.html = http://pastebin.com/HTBH3y5T

All other files are just small files for appengine that don't carry sufficient 
code for this. Hope you guys can help.

Thanks in advance :)
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Define proxy in windows 7

2014-09-02 Thread Mark Lawrence

On 02/09/2014 04:02, Chris Angelico wrote:


These tips may help. (Though on Windows, where port 80 requires no
special privileges, it's more likely for a proxy to use that than it
is under Unix. So it's entirely possible it is actually on 80.) But
what I'm seeing is a problem with environment variable setting in the
first place - or else a transcription problem in the original post.

Try this:

set http_proxy=proxy name:80



Which reminds me of this nifty piece of work http://www.rapidee.com/en/about

--
My fellow Pythonistas, ask not what our language can do for you, ask
what you can do for our language.

Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


Re: Define proxy in windows 7

2014-09-01 Thread Chris Angelico
On Tue, Sep 2, 2014 at 12:06 PM, Cameron Simpson  wrote:
> I am not a Windows user, but on UNIX systems the format of http_proxy and
> https_proxy is:
>
>   http://proxyname:3128/
>
> being the proxy hostname and port number respectively. You're saying:
>
>   proxyname:8080
>
> instead. (Note, https_proxy _also_ starts with "http:", not "https:" because
> the communication with the proxy is HTTP.)
>
> Try the longer form.  And ensure you have the port number right; proxies do
> not normally listen on port 80; they tend to listen on port 3128 or 8080.

These tips may help. (Though on Windows, where port 80 requires no
special privileges, it's more likely for a proxy to use that than it
is under Unix. So it's entirely possible it is actually on 80.) But
what I'm seeing is a problem with environment variable setting in the
first place - or else a transcription problem in the original post.

Try this:

set http_proxy=proxy name:80

If that doesn't work,*copy and paste* what you're doing and what
happens when you do. Include the prompt, the command, and its output.
That'll make it easier for us to figure out what's going on.

ChrisA
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Define proxy in windows 7

2014-09-01 Thread Cameron Simpson

On 02Sep2014 06:25, Om Prakash  wrote:


I am wondering how to define proxy setting in env variable on windows 
7, I want this so i can use pip to pull packages for me, the same 
setting though working earlier on windows xp.


http_proxy = "proxy name:80"

now this same setting doesn't work, i tried doing in the cmd.exe prompt.

set http_proxy "proxy name:80"

P.S. I am a normal user and don't have admin privileges.


I am not a Windows user, but on UNIX systems the format of http_proxy and 
https_proxy is:


  http://proxyname:3128/

being the proxy hostname and port number respectively. You're saying:

  proxyname:8080

instead. (Note, https_proxy _also_ starts with "http:", not "https:" because 
the communication with the proxy is HTTP.)


Try the longer form.  And ensure you have the port number right; proxies do not 
normally listen on port 80; they tend to listen on port 3128 or 8080.
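One quick, hedged way to check whether the variable is actually visible to Python (on Windows, `getproxies()` also reads the registry's proxy settings):

```python
import urllib.request

# Prints e.g. {'http': 'http://proxyname:3128/'} when http_proxy is set
# correctly; an empty dict means pip/urllib will see no proxy either.
proxies = urllib.request.getproxies()
print(proxies)
```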


Cheers,
--

To understand recursion, you must first understand recursion.
--
https://mail.python.org/mailman/listinfo/python-list


Define proxy in windows 7

2014-09-01 Thread Om Prakash

Hi,

I am wondering how to define the proxy setting in an environment variable 
on Windows 7. I want this so I can use pip to pull packages for me; the 
same setting was working earlier on Windows XP.


http_proxy = "proxy name:80"

Now this same setting doesn't work; I tried it at the cmd.exe prompt.

set http_proxy "proxy name:80"

P.S. I am a normal user and don't have admin privileges.

Regards,
Om Prakash
--
https://mail.python.org/mailman/listinfo/python-list


Re: Want guidance to set proxy please help

2013-12-16 Thread Denis McMahon
On Sun, 15 Dec 2013 20:29:55 -0800, Jai wrote:

> so , i need some step to  set proxy so that my ip is not blocked by them

This sounds like you're attempting to access a site for something other than 
legitimate purposes. You probably want some 1337 script-kiddie forum, 
not a serious programming newsgroup.

-- 
Denis McMahon, denismfmcma...@gmail.com
-- 
https://mail.python.org/mailman/listinfo/python-list


Want guidance to set proxy please help

2013-12-15 Thread Jai
Hey, I am working on a parsing-like project.

So I need some steps to set a proxy
so that my IP is not blocked by them.

+=
I am using this method:


proxy_support = urllib2.ProxyHandler({"http":"http://61.147.82.87:8000"})
opener = urllib2.build_opener(proxy_support)
urllib2.install_opener(opener)


Is this OK? If yes,
how can I verify that I have set the proxy?
If no, please give some guidance;
also, if some modification is needed, please help.
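One offline sanity check of the setup above (sketched with Python 3's urllib.request, whose ProxyHandler mirrors urllib2's; the proxy address is the one from the post and may no longer be live):

```python
import urllib.request

# Proxy address taken from the post above; it may not be reachable.
proxy_support = urllib.request.ProxyHandler({"http": "http://61.147.82.87:8000"})
opener = urllib.request.build_opener(proxy_support)
urllib.request.install_opener(opener)

# Verify the handler chain contains a ProxyHandler without touching the
# network; confirming traffic really flows through the proxy would
# require fetching an IP-echo service over the network.
print(any(isinstance(h, urllib.request.ProxyHandler) for h in opener.handlers))
# -> True
```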

-- 
https://mail.python.org/mailman/listinfo/python-list


Re: want to run proxy in python

2013-12-13 Thread Denis McMahon
On Fri, 13 Dec 2013 03:39:44 -0800, Jai wrote:

> Hey, will you guide me on how to run proxies from Python?

http://lmgtfy.com/?q=ip+address+spoofing

-- 
Denis McMahon, denismfmcma...@gmail.com
-- 
https://mail.python.org/mailman/listinfo/python-list


want to run proxy in python

2013-12-13 Thread Jai
Hey, will you guide me on how to run proxies from Python?

I have tested lots of code, but my IP always shows as constant when I check it 
online.

Please help.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: please guide to make proxy type function in python

2013-12-11 Thread Mark Lawrence

On 11/12/2013 12:28, Jai wrote:

Please guide me on making a proxy-type function in Python.



Write some code after looking at the documentation 
http://docs.python.org/3/.


--
My fellow Pythonistas, ask not what our language can do for you, ask 
what you can do for our language.


Mark Lawrence

--
https://mail.python.org/mailman/listinfo/python-list


please guide to make proxy type function in python

2013-12-11 Thread Jai
Please guide me on making a proxy-type function in Python.
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: reporting proxy porting problem

2013-12-02 Thread Robin Becker

On 28/11/2013 22:01, Terry Reedy wrote:
..


All the transition guides I have seen recommend first updating 2.x code (and its
tests) to work in 2.x with *all* classes being new-style classes. I presume one
of the code checker programs will check this for you. To some extent, the
upgrade can be done by changing one class at a time.



The intent is to produce compatible code with the aid of six.py-style functions.



Yes, this means abandoning support of 2.1 ;-). It also means giving up
magical hacks that only work with old-style classes.


I find that I don't understand exactly how the original works so well,


To me, not being comprehensible is not a good sign. I explain some of
the behavior below.


The author has commented that he might have been drunk at time of writing :)





but here is a cut down version



..

thanks for the analysis, I think we came to the same broad conclusion. I think 
this particular module may get lost in the wash. If it ever needs 
re-implementing we can presumably rely on some more general approach as used in 
the various remote object proxies like pyro or similar.

--
Robin Becker

--
https://mail.python.org/mailman/listinfo/python-list


Re: reporting proxy porting problem

2013-11-28 Thread Terry Reedy

On 11/28/2013 6:12 AM, Robin Becker wrote:

I am in the process of porting reportlab to python3.3, one of the
contributions is a module that implements a reporting proxy with a
canvas that records all access calls and attribute settings etc etc.
This fails under python3 because of differences between old and new
style classes.


All the transition guides I have seen recommend first updating 2.x code 
(and its tests) to work in 2.x with *all* classes being new-style 
classes. I presume one of the code checker programs will check this for 
you. To some extent, the upgrade can be done by changing one class at a 
time.


Yes, this means abandoning support of 2.1 ;-). It also means giving up 
magical hacks that only work with old-style classes.



I find that I don't understand exactly how the original works so well,


To me, not being comprehensible is not a good sign. I explain some of 
the behavior below.



but here is a cut down version


Much more should be cut to highlight the important parts.


##
class Canvas:
 def __init__(self,*args,**kwds):
 self._fontname = 'Helvetica'


This seems pretty useless, but maybe that is a result of cutting down.


class PDFAction :
 """Base class to fake method calls or attributes on Canvas"""
 def __init__(self, parent, action) :
 """Saves a pointer to the parent object, and the method name."""
 self._parent = parent
 self._action = action

 def __getattr__(self, name) :
 """Probably a method call on an attribute, returns the real one."""


What if it is not a 'method call on an attribute'?


 print('PDFAction.__getattr__(%s)' % name)
 return getattr(getattr(self._parent._underlying, self._action), name)


I snipped several irrelevant methods. The important part is that there 
is no __str__ method!



class PyCanvas:
 _name = "c"

 def __init__(self, *args, **kwargs) :
 self._in = 0
 self._parent = self # nice trick, isn't it ?


I call this an ugly code stink. But this is not directly an issue here.


 self._underlying = Canvas(*args,**kwargs)


Snip irrelevant __bool__


 def __str__(self) :
 return 'PyCanvas.__str__()'


Also irrelevant for the example.


 def __getattr__(self, name) :
 return PDFAction(self, name)



if __name__=='__main__':
 c = PyCanvas('filepath.pdf')
 print('c._fontname=%s' % c._fontname)
 print('is it a string? %r type=%s' %
   (isinstance(c._fontname,str),type(c._fontname)))
##

when run under python27



C:\code\hg-repos\reportlab>\python27\python.exe z.py
PDFAction.__getattr__(__str__)
c._fontname=Helvetica
is it a string? False type=<type 'instance'>


When Canvas and PyCanvas are upgraded, but PDFAction is not, the result 
remains the same. This fact shows where the old-new difference makes a 
difference.



and under python33 I see this



C:\code\hg-repos\reportlab>\python33\python.exe z.py
c._fontname=<__main__.PDFAction object at 0x00BF8830>
is it a string? False type=<class '__main__.PDFAction'>


With 2.7, and PDFAction also upgraded (subclassed from object), the 
result is the same. So this is not a 3.x issue at all, but strictly an 
old- versus new-style class issue, which has existed since 2.2.


The first difference is that
  print('c._fontname=%s' % c._fontname)
produces, with old-style PDFAction,
  PDFAction.__getattr__(__str__)
  c._fontname=Helvetica
but produces, with new-style PDFAction,
  c._fontname=<__main__.PDFAction object at 0x00BF8830>

The reason is that c._fontname invokes PyCanvas.__getattr__, which 
returns PDFAction(c, '_fontname') (call this p). The % string 
interpolation calls p.__str__. For old p, that method does not exist, so 
p.__getattr__('__str__') is called, and that prints a debug line and 
returns 'Helvetica', which is interpolated and printed in the second 
line. For new p, p.__str__ is inherited from object, and we see the 
familiar default 'object at' string.


The hack depends on the absence of a special method. The immediate 
fix is to give PDFAction a .__str__ method that returns the same string 
that the current .__getattr__ fallback produces.
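A minimal sketch of that fix (the names Underlying and Proxy are made up for illustration; the explicit __str__ is the actual fix):

```python
class Underlying:
    label = 'Helvetica'

class Proxy:
    """New-style class: %-interpolation looks up __str__ on the type,
    bypassing __getattr__, so __str__ must be defined explicitly."""
    def __init__(self, target, attr):
        self._target = target
        self._attr = attr

    def __getattr__(self, name):
        # Called only for attributes not found normally; on new-style
        # classes it is never consulted for special methods like __str__.
        return getattr(getattr(self._target, self._attr), name)

    def __str__(self):
        # The fix: forward str() to the real attribute by hand.
        return str(getattr(self._target, self._attr))

p = Proxy(Underlying(), 'label')
print('%s' % p)   # -> Helvetica
```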


The second difference is that type(c._fontname) is <type 'instance'> 
versus <class '__main__.PDFAction'>. This is because the type of all 
instances of all user-defined old-style classes is 'instance'. This is pretty 
useless. Any test that depends on this is equally useless and should be 
upgraded to only pass with an instance of the intended (new-style) class.


--
Terry Jan Reedy

--
https://mail.python.org/mailman/listinfo/python-list


reporting proxy porting problem

2013-11-28 Thread Robin Becker
I am in the process of porting reportlab to python3.3, one of the contributions 
is a module that implements a reporting proxy with a canvas that records all 
access calls and attribute settings etc etc. This fails under python3 because of 
differences between old and new style classes.



I find that I don't understand exactly how the original works so well, but here 
is a cut down version


##
class Canvas:
    def __init__(self,*args,**kwds):
        self._fontname = 'Helvetica'

class PDFAction :
    """Base class to fake method calls or attributes on Canvas"""
    def __init__(self, parent, action) :
        """Saves a pointer to the parent object, and the method name."""
        self._parent = parent
        self._action = action

    def __getattr__(self, name) :
        """Probably a method call on an attribute, returns the real one."""
        print('PDFAction.__getattr__(%s)' % name)
        return getattr(getattr(self._parent._underlying, self._action), name)

    def __call__(self, *args, **kwargs) :
        """The fake method is called, print it then call the real one."""
        if not self._parent._parent._in :
            self._precomment()
            self._postcomment()
        self._parent._parent._in += 1
        meth = getattr(self._parent._underlying, self._action)
        retcode = meth(*args,**kwargs)
        self._parent._parent._in -= 1
        return retcode

    def __hash__(self) :
        return hash(getattr(self._parent._underlying, self._action))

    def _precomment(self) :
        print('%s(__dict__=%s)._precomment()' %
              (self.__class__.__name__,repr(self.__dict__)))

    def _postcomment(self) :
        print('%s(__dict__=%s)._postcomment()' %
              (self.__class__.__name__,repr(self.__dict__)))

class PyCanvas:
    _name = "c"

    def __init__(self, *args, **kwargs) :
        self._in = 0
        self._parent = self # nice trick, isn't it ?
        self._underlying = Canvas(*args,**kwargs)

    def __bool__(self) :
        """This is needed by platypus' tables."""
        return 1

    def __str__(self) :
        return 'PyCanvas.__str__()'

    def __getattr__(self, name) :
        return PDFAction(self, name)

if __name__=='__main__':
    c = PyCanvas('filepath.pdf')
    print('c._fontname=%s' % c._fontname)
    print('is it a string? %r type=%s' %
          (isinstance(c._fontname,str),type(c._fontname)))
##

when run under python27 I see this

C:\code\hg-repos\reportlab>\python27\python.exe z.py
PDFAction.__getattr__(__str__)
c._fontname=Helvetica
is it a string? False type=<type 'instance'>

and under python33 I see this
C:\code\hg-repos\reportlab>\python33\python.exe z.py
c._fontname=<__main__.PDFAction object at 0x00BF8830>
is it a string? False type=<class '__main__.PDFAction'>

clearly something different is happening and this leads to failure in the real 
pycanvas module testing. Is there a way to recover the old behaviour(s)?

--
Robin Becker

--
https://mail.python.org/mailman/listinfo/python-list


Re: Free Proxy site 2014

2013-09-27 Thread Dave Angel
On 27/9/2013 10:44, 23alagmy wrote:

> Free Proxy site 2014
> search is the first proxy supports proxy access blocked sites, browsing, and 
> downloads without restrictions and also conceal your identity Pencah by 100% 
> from the sites you visit
>
>

Sure, I'm going to trust going to a site recommended by a spammer who
doesn't know the language he's posting in.  Right.

-- 
DaveA


-- 
https://mail.python.org/mailman/listinfo/python-list


Free Proxy site 2014

2013-09-27 Thread 23alagmy
Free Proxy site 2014
search is the first proxy supports proxy access blocked sites, browsing, and 
downloads without restrictions and also conceal your identity Pencah by 100% 
from the sites you visit

http://natigaas7ab.net/wp/?p=14556
-- 
https://mail.python.org/mailman/listinfo/python-list


substituting proxy

2013-07-29 Thread Robin Becker
Before attempting to reinvent the wheel: has anyone created an HTTP(S) proxy that 
can replace the content for specific requests?


Context: I have access to the client's test site, but a lot of the requests are 
dynamic, and "save HTML complete" etc. doesn't work properly. In addition, lots 
of the content comes from a cloud server which seems to object unless I take 
great care over spoofing of host names & referrers.


I would like to debug stuff we supply, but that's hard unless I have the whole 
kit & kaboodle. I thought a proxy that could substitute for our javascript 
file(s) might work.


Has anyone done this with Twisted, pymiproxy, etc.?
--
Robin Becker

--
http://mail.python.org/mailman/listinfo/python-list


Proxy connection with Python

2013-06-25 Thread bevan jenkins
Hello,

I have an issue that has been frustrating me for a while now.

This is an update of a crosspost
(http://stackoverflow.com/questions/16703936/proxy-connection-with-python)
which I made over a month ago.

I have been attempting to connect to URLs from Python. I have tried
urllib2, urllib3, and requests. It is the same issue that I run up
against in all cases; once I get the answer, I imagine all three of
them would work fine.

The issue is connecting via proxy. I have entered our proxy
information but am not getting any joy. I am getting 407 codes and
error messages like: HTTP Error 407: Proxy Authentication Required (
Forefront TMG requires authorization to fulfill the request. Access to
the Web Proxy filter is denied. )

I think that this also stops me using pip to install (at least from
remotes).  I get "Cannot fetch index base URL
http://pypi.python.org/simple/".  I end up using git to clone a local
copy of the repo and install from that.

However, I can connect using a number of other applications that go
through the proxy, git and pycharm for example. When I run git config
--get htpp.proxy it returns the same values and format that I am
entering in Python namely:

http://username:password@proxy:8080

An example of code in requests is

import requests
proxy = {"http": "http://username:password@proxy:8080"}
url = 'http://example.org'
r = requests.get(url,  proxies=proxy)
print r.status_code

Thanks for your time and any suggestions gratefully received.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ssl proxy server

2013-05-15 Thread Zachary Ware
On Wed, May 15, 2013 at 1:58 PM, Chris “Kwpolska” Warrick
 wrote:
> On Tue, May 14, 2013 at 9:14 PM, Skip Montanaro  wrote:
>> I haven't touched the SpamBayes setup for the usenet-to-mail gateway
>> in a long while.  For whatever reason, this message was either held
>> and then approved by the current list moderator(s), or (more likely)
>> slipped through unscathed.  No filter is perfect.
>>
>> Skip
>
> A filter on this guy altogether would be.  He sent 24 messages there
> since February 25.  All of them spam.
>

You can always set your own filter.  I had been somewhat confused by
this thread until I realized the initial message had been killed by a
filter I set up a while back.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ssl proxy server

2013-05-15 Thread Chris “Kwpolska” Warrick
On Tue, May 14, 2013 at 9:14 PM, Skip Montanaro  wrote:
> I haven't touched the SpamBayes setup for the usenet-to-mail gateway
> in a long while.  For whatever reason, this message was either held
> and then approved by the current list moderator(s), or (more likely)
> slipped through unscathed.  No filter is perfect.
>
> Skip

A filter on this guy altogether would be.  He sent 24 messages there
since February 25.  All of them spam.

--
Kwpolska  | GPG KEY: 5EAAEA16
stop html mail| always bottom-post
http://asciiribbon.org| http://caliburn.nl/topposting.html
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ssl proxy server

2013-05-14 Thread Skip Montanaro
I haven't touched the SpamBayes setup for the usenet-to-mail gateway
in a long while.  For whatever reason, this message was either held
and then approved by the current list moderator(s), or (more likely)
slipped through unscathed.  No filter is perfect.

Skip

On Tue, May 14, 2013 at 1:40 PM, Chris “Kwpolska” Warrick
 wrote:
> On Tue, May 14, 2013 at 2:34 PM, 23alagmy  wrote:
>> ssl proxy server
>>
>> hxxp://natigtas7ab.blogspot.com/2013/05/ssl-proxy-server.html
>> --
>> http://mail.python.org/mailman/listinfo/python-list
>
> I have been seeing those mails for a long time.  Why didn’t anybody
> ban that guy?  If it comes from Usenet (and headers say it does), and
> you can’t destroy stuff easily there, maybe just put a ban on the
> Mailman side of things, making the world much better for at least some
> people?
> --
> http://mail.python.org/mailman/listinfo/python-list
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: ssl proxy server

2013-05-14 Thread Chris “Kwpolska” Warrick
On Tue, May 14, 2013 at 2:34 PM, 23alagmy  wrote:
> ssl proxy server
>
> hxxp://natigtas7ab.blogspot.com/2013/05/ssl-proxy-server.html
> --
> http://mail.python.org/mailman/listinfo/python-list

I have been seeing those mails for a long time.  Why didn’t anybody
ban that guy?  If it comes from Usenet (and headers say it does), and
you can’t destroy stuff easily there, maybe just put a ban on the
Mailman side of things, making the world much better for at least some
people?
-- 
http://mail.python.org/mailman/listinfo/python-list


ssl proxy server

2013-05-14 Thread 23alagmy
ssl proxy server

http://natigtas7ab.blogspot.com/2013/05/ssl-proxy-server.html
-- 
http://mail.python.org/mailman/listinfo/python-list


image transforming web proxy?

2013-03-12 Thread Skip Montanaro
I stumbled upon an old FFT tutorial on astro.berkeley.edu website
whose images are in xbm format.  Neither Chrome nor Firefox knows how
to display X bitmap format and for Chrome at least, I've been unable
to find an extension to do the conversion (didn't hunt for a FF
extension).  I can clearly download the whole kit-n-kaboodle, use any
of a number of different tools to convert the images from xbm to png,
then view things locally.  I finally figured out that Opera supports
xbm and downloaded it.

I wonder though, if there is a Python-based web proxy out there which
can transparently transform "obsolete" image formats like xbm into
png, jpeg, presumably using PIL?

Thanks,

Skip
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Transparent Proxy and Redirecting Sockets

2013-02-21 Thread Luca Bongiorni
2013/2/21 Rodrick Brown 

> On Thu, Feb 21, 2013 at 10:24 AM, Luca Bongiorni wrote:
>
>> Hi all,
>> I have found plenty of useful sources about TCP transparent proxies around.
>> However, I am still unsure how to do the socket redirection.
>>
>> What I would like to do is:
>>
>> host_A <--> PROXY <--> host_B
>>   ^
>>   |
>> host_C <--
>>
>> At the beginning the proxy is simply forwarding the data between A and B.
>> Subsequently, when a parser catches the right pattern, the proxy quits the
>> communication between A and B and redirects all the traffic to host_C.
>>
>> I would be pleased if someone would suggest me some resources or hints.
>>
>>
> Are you looking for a Python way of doing this? I would highly recommend
> taking a look at HAProxy, as it's very robust, simple and fast. If you're
> looking to implement this in Python code you may want to use a framework
> like Twisted - http://twistedmatrix.com/trac/wiki/TwistedProject
>
> Twisted provides a lot of functionality that you can leverage to accomplish
> this task.
>

Thank you for the hint. I will start to delve on it right now.
Cheers,
Luca


>
>
>> Thank you :)
>> Cheers,
>> Luca
>>
>> --
>> http://mail.python.org/mailman/listinfo/python-list
>>
>
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: Transparent Proxy and Redirecting Sockets

2013-02-21 Thread Rodrick Brown
On Thu, Feb 21, 2013 at 10:24 AM, Luca Bongiorni  wrote:

> Hi all,
> I have found plenty of useful sources about TCP transparent proxies around.
> However, I am still unsure how to do the socket redirection.
>
> What I would like to do is:
>
> host_A <--> PROXY <--> host_B
>   ^
>   |
> host_C <--
>
> At the beginning the proxy is simply forwarding the data between A and B.
> Subsequently, when a parser catches the right pattern, the proxy quits the
> communication between A and B and redirects all the traffic to host_C.
>
> I would be pleased if someone would suggest me some resources or hints.
>
>
Are you looking for a Python way of doing this? I would highly recommend
taking a look at HAProxy, as it's very robust, simple and fast. If you're
looking to implement this in Python code you may want to use a framework
like Twisted - http://twistedmatrix.com/trac/wiki/TwistedProject

Twisted provides a lot of functionality that you can leverage to accomplish this
task.


> Thank you :)
> Cheers,
> Luca
>
> --
> http://mail.python.org/mailman/listinfo/python-list
>
-- 
http://mail.python.org/mailman/listinfo/python-list


Transparent Proxy and Redirecting Sockets

2013-02-21 Thread Luca Bongiorni
Hi all,
I have found plenty of useful sources about TCP transparent proxies around. 
However, I am still unsure how to do the socket redirection.

What I would like to do is:

host_A <--> PROXY <--> host_B
  ^
  |
host_C <--

At the beginning the proxy is simply forwarding the data between A and B.
Subsequently, when a parser catches the right pattern, the proxy quits the 
communication between A and B and redirects all the traffic to host_C.
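A minimal sketch of that switch-over with plain blocking sockets; the trigger pattern b"SWITCH" and the one-directional relay are illustrative assumptions, not a full proxy:

```python
import socket

def relay(src, primary, fallback, pattern=b"SWITCH", bufsize=4096):
    """Forward bytes from src to primary (host_B); once `pattern`
    appears in the stream, deliver that chunk and everything after
    it to fallback (host_C) instead."""
    dst = primary
    tail = b""  # retain enough bytes to spot a pattern split across reads
    while True:
        data = src.recv(bufsize)
        if not data:        # EOF from host_A
            break
        tail = (tail + data)[-2 * len(pattern):]
        if dst is primary and pattern in tail:
            dst = fallback  # quit talking to B; redirect traffic to C
        dst.sendall(data)

# Demo with socketpairs standing in for the three hosts:
a_out, a_in = socket.socketpair()
b_out, b_in = socket.socketpair()
c_out, c_in = socket.socketpair()
a_in.sendall(b"hello SWITCH world")
a_in.close()
relay(a_out, b_out, c_out)
b_out.close()
c_out.close()
```

A real proxy would run one such pump per direction (e.g. with select, or one thread each way) and handle the reverse path from C back to A as well.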

I would be pleased if someone would suggest me some resources or hints.

Thank you :)
Cheers,
Luca

-- 
http://mail.python.org/mailman/listinfo/python-list


how to add socks proxy feature to script based on requests module?

2013-02-12 Thread xliiv
Hi!

I've got a script which uses python-requests 
(http://docs.python-requests.org/en/latest/).

I need to add a SOCKS proxy feature to it.

AFAIK requests doesn't support SOCKS proxies 
(http://stackoverflow.com/questions/12601316/how-to-make-python-requests-work-via-socks-proxy),
 so I was about to switch from the requests module to human_curl 
(http://stackoverflow.com/questions/8482896/making-http-requests-via-python-requests-module-not-working-via-proxy-where-curl).
Then it turned out that human_curl doesn't support the requests module's sessions.

OK, what can you recommend I do? I need the best solution for adding a SOCKS 
proxy feature to the script (based on the requests module).

If nothing better is recommended, I'll clone the requests module's session 
feature in human_curl.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to change system-wide proxy settings by Python ?

2013-02-03 Thread Michael Torrie
On 02/03/2013 08:34 AM, iMath wrote:
> I know of a valid proxy server (63.141.216.159) and
> port (8087) which supports both http and https protocols, so how do I
> change the system-wide proxy settings to this proxy from Python? I use
> WinXP; can you show me an example of this? Thanks in advance!

There really is no way on any operating system to set a system-wide
proxy that is honored by every program that does http.

However, if you can change the one "Internet Settings" proxy
programmatically, any Windows app that uses the IE browser engine will
pick it up.  One method to do this is to interact with the registry.
You can google for the appropriate key.  Setting it for all users,
though, is a bit trickier: your script would need privileges to access
keys in HKEY_LOCAL_MACHINE.
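A sketch of the per-user (HKEY_CURRENT_USER) variant, assuming the standard "Internet Settings" key; winreg is the Python 3 name (it was _winreg on Python 2), and the function is guarded so it is a no-op off Windows:

```python
import sys

# Standard location of the per-user IE/WinINET proxy settings.
PROXY_KEY = r"Software\Microsoft\Windows\CurrentVersion\Internet Settings"

def set_ie_proxy(server):
    """Point the per-user 'Internet Settings' proxy at `server`
    (e.g. "63.141.216.159:8087").  Returns True on Windows, False
    elsewhere, where there is no such registry."""
    if sys.platform != "win32":
        return False
    import winreg
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, PROXY_KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "ProxyServer", 0, winreg.REG_SZ, server)
        winreg.SetValueEx(key, "ProxyEnable", 0, winreg.REG_DWORD, 1)
    return True
```

As noted above, this only affects programs that honor the WinINET settings; Firefox, Chrome, and anything doing raw sockets will ignore it.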

But be warned that other programs like firefox and Chrome will not
automatically know about this setting or honor it.  Or any program that
implements its own http requests with sockets.  It's not something that
can be enforced as a sort of policy.  If you need that kind of
enforcing, you'll have to work with the network hardware to block
un-proxied http and https traffic.
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: how to change system-wide proxy settings by Python ?

2013-02-03 Thread Kwpolska
On Sun, Feb 3, 2013 at 4:34 PM, iMath  wrote:
> I know of a valid proxy server (63.141.216.159) and port (8087) which 
> supports both http and https protocols, so how do I change the system-wide proxy 
> settings to this proxy from Python?
> I use WinXP; can you show me an example of this?
> Thanks in advance!
> --
> http://mail.python.org/mailman/listinfo/python-list

This may help you:

http://stackoverflow.com/questions/1068212/programmatically-detect-system-proxy-settings-on-windows-xp-with-python

Next time, please use Google before you ask.
-- 
Kwpolska <http://kwpolska.tk> | GPG KEY: 5EAAEA16
stop html mail| always bottom-post
http://asciiribbon.org| http://caliburn.nl/topposting.html
-- 
http://mail.python.org/mailman/listinfo/python-list


how to change system-wide proxy settings by Python ?

2013-02-03 Thread iMath
I know of a valid proxy server (63.141.216.159) and port (8087) which 
supports both http and https protocols, so how do I change the system-wide proxy 
settings to this proxy from Python?
I use WinXP; can you show me an example of this?
Thanks in advance!
-- 
http://mail.python.org/mailman/listinfo/python-list


Re: python: HTTP connections through a proxy server requiring authentication

2013-01-28 Thread Saju M
Hi,
Thanks, Barry.

I solved that issue.
I reconfigured squid3 with ncsa_auth, and now the same Python code works.
Earlier I used digest_pw_auth.

Actually I am trying to fix an issue related to python boto API.

Please check this post
https://groups.google.com/forum/#!topic/boto-users/1qk6d7v2HpQ


Regards
Saju Madhavan
+91 09535134654


On Tue, Jan 29, 2013 at 5:01 AM, Barry Scott  wrote:

> The shipped python library code does not work.
>
> See http://bugs.python.org/issue7291 for patches.
>
> Barry
>
>
-- 
http://mail.python.org/mailman/listinfo/python-list

