I also tried looking at the SO_RCVTIMEO option. It turns out that it also resets whenever data is received.
And yeah, I implemented that as separate logic in my code. I was wondering whether sockets natively provided this functionality. Thanks again for clarifying.

Cheers,
Abhijeet

On Mon, Nov 19, 2012 at 12:40 AM, Cameron Simpson <c...@zip.com.au> wrote:
> On 18Nov2012 03:27, Abhijeet Mahagaonkar <abhi.for...@gmail.com> wrote:
> | I'm new to network programming.
> | I have a question.
> |
> | Can we set a timeout to limit how long a particular socket can read or
> | write?
>
> On the socket itself? Probably not. But...
>
> | I have used the settimeout() function.
> | The settimeout() works fine as long as the client doesn't send any data
> | for x seconds.
> | After accept()ing a connect() from a client, I check whether the data I
> | receive in the server is invalid.
> | I'm trying to ensure that a client constantly sending invalid data cannot
> | hold the server. So is there a way of saying I want the client to use
> | this socket for x seconds before I close it, no matter what data I
> | receive?
>
> Note the time you set up the socket, or when you accept the client's
> connection. Thereafter, every time you get some data, look at the clock.
> If enough time has elapsed, close the socket yourself.
>
> So, not via an interface to the socket but as logic in your own code.
>
> Cheers,
> --
> Cameron Simpson <c...@zip.com.au>
>
> Their are thre mistakes in this sentence.
> - Rob Ray DoD#33333 <r...@linden.msvu.ca>
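For the archives, Cameron's suggestion (track your own overall deadline and check the clock whenever data arrives, since settimeout() only bounds a single recv()) can be sketched roughly like this. This is a minimal illustration, not a full server; the names MAX_CONN_SECONDS and serve_connection are made up for the example, and it assumes Python 3:

```python
import socket
import time

MAX_CONN_SECONDS = 10  # hypothetical total budget per connection

def serve_connection(conn, budget=MAX_CONN_SECONDS):
    """Read from conn, but close it once a total deadline has passed,
    no matter how often data arrives."""
    deadline = time.monotonic() + budget
    try:
        while True:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break  # budget exhausted: close regardless of activity
            # settimeout() is reset per recv() call; the *overall* limit
            # is enforced by the clock check above, as Cameron described.
            conn.settimeout(remaining)
            try:
                data = conn.recv(4096)
            except socket.timeout:
                break  # no data within the remaining budget
            if not data:
                break  # peer closed the connection
            # ... validate / process data here ...
    finally:
        conn.close()
```

A misbehaving client can still send data continuously, but each arrival only triggers another clock check, so the connection is dropped once the budget runs out rather than being held open indefinitely.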
-- http://mail.python.org/mailman/listinfo/python-list