> On 28 Sep 2016, at 22:21, Oleg Kalnichevski <ol...@apache.org> wrote:
> 
>> On Wed, 2016-09-28 at 21:48 +0300, Dmitry Potapov wrote:
>>> On Wed, Sep 28, 2016 at 07:53:52PM +0200, Oleg Kalnichevski wrote:
>>>> On Wed, 2016-09-28 at 18:40 +0300, Dmitry Potapov wrote:
>>>> Oleg,
>>>> 
>>> 
>>> Hi Dmitry
>>> 
>>>> I'm not sure I got it right.
>>>> Are you going to drop org.apache.http.protocol.HttpService class?
>>> 
>>> Yes, I am.
>>> 
>>>> For now it is the only way to:
>>>> 1. Process full-duplex requests (i.e. start sending the reply before the 
>>>> request entity has been fully consumed)
>>> 
>>> Full-duplex data transfer should be massively easier with non-blocking
>>> I/O. If it is not, it is a problem with the actual non-blocking code.
>> It is possible. I have a patch which enables full duplex in NIO: 
>> https://gist.github.com/hirthwork/be613055884362ea68d3
>> But this patch really needs review from somebody who understands the current 
>> internals better than I do.
>> We had a conversation about this before (8 Dec 2014). You had some doubts 
>> concerning RFC compatibility at that point, and I didn't insist because I 
>> found that my tasks could be done without full-duplex support.
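(For anyone following along, here is a minimal plain-java.nio sketch of what full duplex means at the transport level. This is not the patch above and not HttpCore API; the two buffers merely stand in for the real request decoder and response encoder. The point is only that the connection keeps both read and write interest registered, so response data can go out while the request entity is still coming in.)

    import java.nio.ByteBuffer;
    import java.nio.channels.SelectionKey;
    import java.nio.channels.SocketChannel;

    final class FullDuplexExchange {
        // Illustrative buffers standing in for the real message codecs.
        private final ByteBuffer inbound = ByteBuffer.allocate(8192);
        private final ByteBuffer outbound = ByteBuffer.allocate(8192);

        void onSelected(SelectionKey key) throws java.io.IOException {
            SocketChannel ch = (SocketChannel) key.channel();
            if (key.isReadable()) {
                ch.read(inbound);                  // keep consuming the request entity...
            }
            if (key.isWritable() && outbound.hasRemaining()) {
                ch.write(outbound);                // ...while response data is written out
            }
            // "Full duplex" simply means both interests stay registered at once.
            key.interestOps(SelectionKey.OP_READ | SelectionKey.OP_WRITE);
        }
    }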
>>> 
> 
> Damn. I had no idea you were working on something like that. I am
> currently in the process of re-writing the old HTTP/1.1 non-blocking
> transport to re-align it with the new HTTP/2 code. I just wrote a
> completely new HTTP/1.1 message duplexer:
> 
> https://github.com/ok2c/httpcore/blob/565d939a3b43444eb93346b30c07579cbf2c5ff0/httpcore5/src/main/java/org/apache/hc/core5/http/impl/nio/AbstractHttp1StreamDuplexer.java
>  
I haven't worked on this since December 2014. In my e-mail from 07 December 
2014, you can find a description of what my patch does. In fact, I tried to 
reduce the impact and fit it into the existing 4.x message handling API.
> 
>>>> 2. Compress/decompress requests and responses on the fly 
>>>> (DecompressingEntity really does the job for servers). This is possible 
>>>> with NIO too, but will require implementing a non-blocking analogue of 
>>>> GzipInputStream.
>>> 
>>> Very true. However, we will likely have to do it anyway if we want the 
>>> HTTP/2 code to support transparent content compression / decompression. 
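(As an aside, the building block for such a non-blocking analogue already exists in the JDK: java.util.zip.Inflater works on byte arrays and never blocks waiting for input. A minimal push-style sketch follows; it assumes raw DEFLATE content, leaves out GZIP header/trailer handling and any HttpCore wiring, and ContentSink is just an illustrative callback.)

    import java.util.zip.DataFormatException;
    import java.util.zip.Inflater;

    final class InflatingDecoder {
        private final Inflater inflater = new Inflater();
        private final byte[] out = new byte[8192];

        // Called whenever the reactor hands over another chunk of compressed
        // bytes; inflates whatever is decodable and returns without blocking.
        void onInput(byte[] compressed, int off, int len, ContentSink sink)
                throws DataFormatException {
            inflater.setInput(compressed, off, len);
            int n;
            while ((n = inflater.inflate(out)) > 0) {
                sink.consume(out, 0, n);
            }
        }

        // Illustrative callback used instead of a blocking OutputStream.
        interface ContentSink {
            void consume(byte[] buf, int off, int len);
        }
    }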
>>> 
>>>> 3. Reduce thread contention by using a fixed number of workers with a 
>>>> connection queue. This allows limiting CPU usage with native system 
>>>> mechanisms: you spawn 4 threads and you know that only 4 requests and 
>>>> responses will be served simultaneously, without excessive context 
>>>> switching and without the risk of a response being blocked by another 
>>>> heavy task.
>>> 
>>> Exactly the same can be done with non-blocking code very easily as long
>>> as one can live without InputStream / OutputStream compatibility.
>> There are too few libraries able to process stream data without blocking 
>> Java streams: JFlex, Tika and Pdfclown all use blocking I/O.
>>> 
> 
> Very true.
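(For reference, the pattern from point 3 in plain Java looks roughly like this: a bounded queue of accepted connections drained by exactly four worker threads, so no more than four exchanges are processed at once. handleConnection() stands in for whatever HttpService-style processing the application does.)

    import java.net.Socket;
    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    final class FixedWorkerServer {
        private final BlockingQueue<Socket> connections = new ArrayBlockingQueue<>(128);

        void start() {
            for (int i = 0; i < 4; i++) {
                new Thread(() -> {
                    try {
                        while (!Thread.currentThread().isInterrupted()) {
                            Socket socket = connections.take(); // wait for the next connection
                            handleConnection(socket);           // classic blocking I/O per exchange
                        }
                    } catch (InterruptedException ex) {
                        Thread.currentThread().interrupt();
                    }
                }, "http-worker-" + i).start();
            }
        }

        // The acceptor thread queues new connections here.
        void submit(Socket socket) throws InterruptedException {
            connections.put(socket);
        }

        private void handleConnection(Socket socket) {
            // ... parse the request, write the response, close or keep alive ...
        }
    }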
> 
>>>> 4. A blocking server is the only effective way to stream static files from 
>>>> disk, as there is no such thing as a non-blocking file channel (unless 
>>>> you're crazy and use direct I/O). For instance, recent nginx versions use a 
>>>> separate pool with blocking operations for this task, otherwise static 
>>>> file streaming would preempt other requests.
>>>> 
>>> 
>>> Non-blocking file channels were implemented in Java 7 with NIO2, were
>>> they not? Besides, NIO with direct channels (zero-copy mode) outperforms
>>> classic I/O considerably when copying content directly from a file. The
>>> problem is that HTTP/2 makes zero-copy impossible due to frame
>>> multiplexing. 
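(For completeness, the NIO2 API in question is AsynchronousFileChannel from Java 7, which exposes a callback-style read; whether that read is truly asynchronous at the kernel level is exactly the question below. A minimal sketch, with a made-up file path:)

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.AsynchronousFileChannel;
    import java.nio.channels.CompletionHandler;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class AsyncFileRead {
        public static void main(String[] args) throws IOException, InterruptedException {
            AsynchronousFileChannel ch = AsynchronousFileChannel.open(
                    Paths.get("static/index.html"), StandardOpenOption.READ);
            ByteBuffer buf = ByteBuffer.allocate(8192);
            ch.read(buf, 0, buf, new CompletionHandler<Integer, ByteBuffer>() {
                @Override
                public void completed(Integer bytesRead, ByteBuffer attachment) {
                    // Hand the chunk off to the HTTP response encoder here.
                }
                @Override
                public void failed(Throwable ex, ByteBuffer attachment) {
                    ex.printStackTrace();
                }
            });
            Thread.sleep(1000); // toy example only: keep the JVM alive for the callback
        }
    }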
>> This is not about non-blocking channels and not about zero-copy. This is 
>> about the lack of real asynchronous read support in Linux.
>> For example, http://man7.org/linux/man-pages/man2/open.2.html makes the 
>> following statement concerning O_NONBLOCK:
>>> Note that this flag has no effect for regular files and block devices; that 
>>> is, I/O operations will (briefly) block when device activity is required, 
>>> regardless of whether O_NONBLOCK is set
>> 
>> There were other attempts at asynchronous disk I/O implementations, like 
>> http://lse.sourceforge.net/io/aio.html, but they declare the following:
>>> What Does Not Work?
>>>  * AIO read and write on files opened without O_DIRECT
>> But O_DIRECT has even worse drawbacks than a blocking disk read, such as 
>> bypassing the file system page cache.
>> 
>> So for now the only way to have both the page cache and "non-blocking" I/O 
>> is to use thread pools, like nginx does: 
>> http://nginx.org/en/docs/http/ngx_http_core_module.html#aio
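(In Java terms that workaround looks roughly like this: keep the event loop non-blocking and push the potentially blocking, but page-cache backed, file read onto a small dedicated pool, completing via a future or callback. The names here are illustrative, not an HttpCore API.)

    import java.io.IOException;
    import java.io.RandomAccessFile;
    import java.util.Arrays;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.CompletionException;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    final class FileReadOffload {
        private final ExecutorService diskPool = Executors.newFixedThreadPool(4);

        // The I/O reactor thread calls this and carries on; the disk pool does
        // the blocking read and the future delivers the bytes back.
        CompletableFuture<byte[]> read(String path, long offset, int length) {
            return CompletableFuture.supplyAsync(() -> {
                try (RandomAccessFile file = new RandomAccessFile(path, "r")) {
                    byte[] buf = new byte[length];
                    file.seek(offset);
                    int n = file.read(buf);          // may block on disk, off the event loop
                    return n == length ? buf : Arrays.copyOf(buf, Math.max(n, 0));
                } catch (IOException ex) {
                    throw new CompletionException(ex);
                }
            }, diskPool);
        }
    }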
>> 
> 
> I see. This goes beyond my rather limited understanding of Linux
> internals. It would be interesting to know how Sun / Oracle solved the
> issue in NIO2 for Linux, though.
> 
> Oleg
> 
> 
