This makes a lot of sense.  But we are talking about the need for
large-scale parallelism, not discrete events.  Once a given unit
of I/O work can be performed on a given socket or pipe, it will be
time to farm it out to a worker.

Somewhere in this scheme we need to consider dispatching.
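
A minimal sketch of the poll-and-dispatch loop I have in mind;
dispatch_to_worker() is hypothetical, a stand-in for whatever worker
hand-off mechanism we settle on:

#include "apr_poll.h"

/* Hypothetical hand-off to a worker thread; not an APR call. */
extern void dispatch_to_worker(const apr_pollfd_t *pfd);

static void poll_loop(apr_pollset_t *pollset)
{
    const apr_pollfd_t *ready;
    apr_int32_t num, i;

    for (;;) {
        /* Block until at least one descriptor is ready. */
        if (apr_pollset_poll(pollset, -1, &num, &ready) != APR_SUCCESS)
            continue;  /* error handling elided for brevity */
        for (i = 0; i < num; i++)
            dispatch_to_worker(&ready[i]);  /* farm the unit out */
    }
}

With Mladen's APR_POLLSET_THREADSAFE flag, a worker could even add its
descriptor back into the set itself once its unit of work is done.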

Bill

At 06:25 PM 4/19/2005, Bill Stoddard wrote:
>Bill Stoddard wrote:
>>Mladen Turk wrote:
>>
>>>Hi,
>>>
>>>Since WIN32 imposes a pretty nasty limit of 64 on FD_SETSIZE, which
>>>is way too low for any serious usage, I developed an alternative
>>>implementation.
>>>The code also supports the APR_POLLSET_THREADSAFE flag.
>>>
>>>A simple patch to apr_arch_poll_private.h allows multiple
>>>implementations to be compiled.
>>>
>>>Any comments?
>>>
>>>Regards,
>>>Mladen.
>>
>>Brain dump...
>>It may be possible to use IOCompletionPorts on Windows to implement 
>>apr_pollset_*.  IOCPs are very scalable, but moving to IOCPs would require a 
>>complete rewrite of the apr_socket implementation on Windows. And there is 
>>the small matter of a technical issue that needs to be investigated...
>>IOCPs support true async network i/o. BSD kqueue, Solaris /dev/poll, epoll, 
>>et al. are not async; they are event-driven i/o models. When you issue an 
>>async read on Windows, the kernel will start filling your i/o buffer as soon 
>>as data is available. With event-driven i/o, the kernel tells you when you 
>>can do a read() and expect to receive data. See the difference? Your buffer 
>>management strategy will be completely different between async i/o and 
>>event-driven i/o, and I am not sure how APR (or applications that use APR) 
>>can be made to support both cleanly. 
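
To make that buffer-management difference concrete, an illustrative
fragment of a "receive" under each model (not APR code; error handling
elided, and 'storage' is a caller-supplied buffer):

#include <string.h>

#ifdef _WIN32
#include <winsock2.h>
/* Async model: the buffer is committed to the kernel when the read is
 * ISSUED, and must stay untouched until the completion notification. */
static void async_read(SOCKET s, char *storage, u_long len)
{
    WSABUF wb;
    OVERLAPPED ov;
    DWORD flags = 0;
    wb.len = len;
    wb.buf = storage;
    memset(&ov, 0, sizeof(ov));
    WSARecv(s, &wb, 1, NULL, &flags, &ov, NULL);
    /* ... later, GetQueuedCompletionStatus() reports that the kernel
     * has already filled 'storage'. */
}
#else
#include <sys/epoll.h>
#include <unistd.h>
/* Event-driven model: no buffer is committed up front; read() is
 * issued only after the readiness notification arrives. */
static void event_read(int epfd, char *storage, size_t len)
{
    struct epoll_event ev;
    if (epoll_wait(epfd, &ev, 1, -1) == 1)
        read(ev.data.fd, storage, len);  /* buffer chosen only now */
}
#endif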
>
>One thought on the buffer management issue... rather than managing buffers, we 
>manage 'i/o objects' that contain references to network i/o buffers (among 
>other things). These i/o objects have their own scope and lifetime. When an 
>i/o is issued and it does not complete immediately, we place the i/o object 
>into a container, searchable by a key. The application should not reference 
>the i/o object further once it is in the container. I know IOCPs enable 
>passing a key on the i/o call that the kernel will return on the IOCP 
>notification. I am sure something similar is available on Unix.
>
>When an app receives notification that the i/o is complete (or can be 
>attempted with the expectation that it will complete), we find the i/o object 
>in the container and issue a read, passing the i/o object on the call. Under 
>the covers, the read on Windows just returns the buffer already in the i/o 
>object. On Unix, the read is issued to fetch the bytes from the kernel. The 
>buffer management is hidden in the i/o object.
>
>Bill
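
A rough sketch of how that i/o object container could look with APR
types; io_obj_t, io_issue() and io_on_ready() are invented names for
illustration, not existing APR API:

#include "apr_hash.h"
#include "apr_network_io.h"

/* The i/o object owns the buffer; the app never touches it while the
 * operation is pending in the container. */
typedef struct io_obj_t {
    apr_uint64_t  key;   /* completion key the kernel hands back */
    apr_socket_t *sock;
    char         *buf;   /* owned by the object, not the app */
    apr_size_t    len;
} io_obj_t;

static apr_hash_t *pending;  /* key -> io_obj_t*, created elsewhere */

/* Issue an i/o; if it does not complete immediately, park the object.
 * The key lives inside the object so the hash key memory stays valid. */
static void io_issue(io_obj_t *obj)
{
    apr_hash_set(pending, &obj->key, sizeof(obj->key), obj);
    /* Windows: start the overlapped read here, passing obj->key as the
     * IOCP completion key.  Unix: register obj->sock for readability. */
}

/* Notification arrived: recover the object by key and complete it. */
static io_obj_t *io_on_ready(apr_uint64_t key)
{
    io_obj_t *obj = apr_hash_get(pending, &key, sizeof(key));
    if (obj == NULL)
        return NULL;
    apr_hash_set(pending, &obj->key, sizeof(obj->key), NULL); /* remove */
    /* Windows: obj->buf was already filled by the kernel; just return.
     * Unix: this is where apr_socket_recv(obj->sock, obj->buf, &obj->len)
     * would fetch the bytes. */
    return obj;
}

Either way the application sees the same call sequence; only what
happens under the covers differs per platform.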

