viocal added the comment:
I have fixed it; modified selector_events.py:
def write(self, data):
    if not isinstance(data, (bytes, bytearray, memoryview)):
        raise TypeError(f'data argument must be a bytes-like object, '
                        f'not {type(data).__name__!r}')
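For reference, the isinstance guard above is a plain runtime type check: anything that is not bytes, bytearray, or memoryview is rejected before it reaches the transport buffer. A standalone sketch of the same check (the helper name check_bytes_like is mine, not from the patch):

```python
def check_bytes_like(data):
    # Same guard as the patched write(): reject non-bytes-like input early.
    if not isinstance(data, (bytes, bytearray, memoryview)):
        raise TypeError(f'data argument must be a bytes-like object, '
                        f'not {type(data).__name__!r}')
    return data

check_bytes_like(b'ok')               # bytes: accepted
check_bytes_like(memoryview(b'ok'))   # memoryview: accepted
try:
    check_bytes_like('text')          # str: rejected
except TypeError as e:
    print(e)
```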
viocal added the comment:
thanks again.
The environment:
filedata1 < 512M
filedata2 > 512M
filedata3 > 1G
this computer <-> peer computer
server (with asyncio) -- client (plain socket, without asyncio)
memory < 512M -- memory > 512M
Reading filedata1 succeeds.
viocal added the comment:
For example: if the system's free memory is 512M and the file is 500M,
the transfer succeeds; but a file larger than 512M fails.
--
resolution: not a bug ->
___
Python tracker
<https://bugs.python.org/issu
viocal added the comment:
thank you,
but I think protocol.resume_writing() / protocol.pause_writing() are called
automatically on the Protocol,
because I set transport.set_write_buffer_limits(high=65536*2, low=16384*2)
# default: (high=65536, low=16384)
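The comment above is right that these callbacks are driven by the event loop, not the application: once the transport's buffer crosses the high-water mark set via set_write_buffer_limits(), the loop calls pause_writing() on the protocol, and resume_writing() once it drains below the low-water mark. A minimal sketch of a protocol that honors this (the class name and the asyncio.Event-based gating are my assumptions, not from the report):

```python
import asyncio

class FlowControlledSender(asyncio.Protocol):
    """Sketch: the event loop invokes pause_writing()/resume_writing()
    automatically based on the transport's buffer water marks."""

    def connection_made(self, transport):
        self.transport = transport
        # Limits from the comment above; CPython's defaults are
        # high=65536, low=16384.
        transport.set_write_buffer_limits(high=65536 * 2, low=16384 * 2)
        self._can_write = asyncio.Event()
        self._can_write.set()

    def pause_writing(self):
        # Called by the loop: buffer exceeded the high-water mark.
        self._can_write.clear()

    def resume_writing(self):
        # Called by the loop: buffer drained below the low-water mark.
        self._can_write.set()

    async def send_chunks(self, chunks):
        for buf in chunks:
            await self._can_write.wait()  # block while paused
            self.transport.write(buf)
```

The key point: application code never calls pause_writing()/resume_writing() itself; it only waits on the gate they toggle.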
viocal added the comment:
for buf in filedata:
    asc.resume_writing()
    asc.transport.write(buf)
    asc.pause_writing()
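Note the loop above calls resume_writing()/pause_writing() from application code, which has no effect on buffering: those are callbacks the event loop makes *into* the protocol. With the high-level streams API, the same back-pressure is available directly via drain(). A hedged sketch (the function name send_file and the host/port arguments are mine):

```python
import asyncio

async def send_file(host, port, chunks):
    # Streams-based alternative: drain() awaits until the transport
    # buffer falls below the high-water mark, so memory use stays
    # bounded no matter how many chunks are written.
    reader, writer = await asyncio.open_connection(host, port)
    for buf in chunks:
        writer.write(buf)
        await writer.drain()  # back-pressure is applied here
    writer.close()
    await writer.wait_closed()
```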
viocal added the comment:
I used Protocol.pause_writing() / Protocol.resume_writing(),
but the result is unchanged (out of memory, or killed by the OS).
New submission from viocal:
In asyncio, when filedata is larger than free memory (hardware),
the process runs out of memory or is killed by the OS:
for buf in filedata:
    transport.write(buf)
    # send to client
What I tried: abort the transport to protect the application from being
killed by the OS; modified selector_events.py:
def
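A likely contributor to the failure described above is that filedata is held in memory in full before the loop starts, so even perfect transport flow control cannot save the process. A sketch that bounds peak memory by reading the file in fixed-size chunks and draining after each write (the helper name stream_file and the 64 KiB chunk size are my assumptions):

```python
import asyncio

CHUNK = 64 * 1024  # read the file in 64 KiB pieces, never all at once

async def stream_file(path, writer):
    # Chunked reads plus drain() keep peak memory near CHUNK
    # regardless of the file's size.
    with open(path, 'rb') as f:
        while chunk := f.read(CHUNK):
            writer.write(chunk)
            await writer.drain()
```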