Re: CouchDB 1.6.1 returning empty reply

2018-02-08 Thread Raja
Got it working by modifying mochiweb_http.erl. For some reason, the recbuf
was not set up in the socket options when receiving the request. I had it
set up in local.ini and thought it would be propagated, but
mochiweb_http.erl's request(Socket, Body) function sets the socket options
like this:


ok = mochiweb_socket:setopts(Socket, [{active, once}]),

It looks like it's not setting any other socket options, which is probably
why the request buffer was cut off at 8 KB and the response was terminated
abruptly. I changed the above to:

ok = mochiweb_socket:setopts(Socket, [{active, once}, {recbuf, 64000}]),

Once I included recbuf, my original request, which had a huge query string,
worked properly. For now it's hardcoded and I'll have to make it fetch the
value from local.ini, but at least the request now works properly through
nginx/haproxy.
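For context, Erlang's {recbuf, N} option maps onto the OS-level SO_RCVBUF socket option (its interaction with Erlang's own internal buffer is a separate detail). A rough, standalone Python sketch of setting and reading it back — note that Linux reports back double the requested value:

```python
import socket

# Request a larger kernel receive buffer, analogous to {recbuf, N}.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)

# Read it back; on Linux the kernel doubles the requested value to
# leave room for bookkeeping, so expect roughly 131072 here.
effective = sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(effective)
sock.close()
```

The exact number reported depends on the kernel and on net.core.rmem_max, which caps how large the buffer can grow.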

Thanks
Raja



Re: CouchDB 1.6.1 returning empty reply

2018-02-07 Thread Raja
Hi Nick
We have those headers set up for nginx, and haproxy uses tune.maxrewrite and
tune.bufsize. Both have been set to 32k.

Also, to ensure that the problem is not with nginx or haproxy, I set up
netcat on the box listening on port 5984 and proxied to it, which worked
just fine. So it seems to be something to do with CouchDB/Erlang buffer
lengths.
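That isolation test can be sketched in a few lines of Python as well — a throwaway listener (the path and payload below are invented for illustration) that reports how many bytes of an oversized request actually arrive:

```python
import socket
import threading

# Minimal stand-in for the netcat experiment: accept one connection and
# count how many bytes actually arrive. If the whole query string shows
# up here after passing through nginx/haproxy, the truncation must be
# happening further along, in the Erlang/mochiweb layer.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(("127.0.0.1", 0))      # ephemeral port; 5984 in the real test
port = server.getsockname()[1]
server.listen(1)

received = []

def accept_once():
    conn, _ = server.accept()
    data = b""
    while chunk := conn.recv(4096):
        data += chunk
    received.append(len(data))
    conn.close()

t = threading.Thread(target=accept_once)
t.start()

# Simulate a request whose query string is well past the 8192-byte mark.
request = b"GET /db/_changes?docIds=" + b"a" * 10000 + b" HTTP/1.0\r\n\r\n"
client = socket.create_connection(("127.0.0.1", port))
client.sendall(request)
client.close()
t.join()
server.close()
print(received[0] > 8192)
```

If the byte count here matches what the proxy sent, the proxy is cleared and the buffer problem sits on the CouchDB side.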

I'll post my findings after adding some debug logs to mochiweb to see if we
can resolve this.

Thanks
Raja



Re: CouchDB 1.6.1 returning empty reply

2018-02-07 Thread Nick Vatamaniuc
Hi Raja,

It seems that nginx or haproxy has limits on request line lengths.

Take a look at:
http://nginx.org/en/docs/http/ngx_http_core_module.html#client_header_buffer_size
and
http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers
for nginx. Not sure which settings apply to haproxy.

Also consider that CouchDB 2.x gives you the option of filtering
replication docs using selector objects instead of filter functions. Those
requests would then be sent as POST requests.

http://docs.couchdb.org/en/master/replication/replicator.html#selectorobj
also https://blog.couchdb.org/2016/08/15/feature-replication/ (under the
"A New Way to Filter" section).
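As a hedged illustration (the database URLs and ids below are placeholders), a selector-based replication document might look like the following; because the document is written via PUT/POST, the id list rides in the request body rather than the query string:

```python
import json

# Sketch of a CouchDB 2.x _replicator document that filters by doc id
# with a Mango selector instead of a JavaScript filter function.
replication_doc = {
    "source": "http://localhost:5984/source_db",   # placeholder URLs
    "target": "http://localhost:5984/target_db",
    "selector": {"_id": {"$in": ["uuid-0001", "uuid-0002"]}},
    "continuous": True,
}
body = json.dumps(replication_doc, indent=2)
print(body)
```

The "$in" list can hold as many ids as needed without running into any request-line length limit.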

Regards,
-Nick



Re: CouchDB 1.6.1 returning empty reply

2018-02-07 Thread Raja
Thanks Nick. I did try setting recbuf earlier, but didn't have much luck.
This is what I had:

socket_options = [{recbuf,64000},{nodelay,true}]

I set the same in the [replicator] section as well, to see if _changes
would pick up the values there. This is what the [replicator]
socket_options looks like:

socket_options = [{keepalive, true}, {nodelay, false},{recbuf,64000}]

I have very little experience with Erlang other than the replication
filters that I have written, but can we enable any debugging to see the
HTTP errors that it may be throwing? We do build CouchDB from source, so
I'm not sure if we can change any parameters to allow for an increased
buffer size.

I see in mochiweb_http.erl (line 69 for request body handling and line 102
for header handling) that when the message size exceeds the buffer, an
emsgsize error comes back and the code does:

    % R15B02 returns this then closes the socket, so close and exit
    mochiweb_socket:close(Socket),


but I'm not sure why it wouldn't pick up the socket_options, which has
recbuf set to 64k. I'll keep debugging (building again from source and
logging what value of recbuf it's taking), but if you have any other
pointers, please let me know.

Thanks again
Raja





-- 
Raja
rajasaur at gmail.com


Re: CouchDB 1.6.1 returning empty reply

2018-02-07 Thread Nick Vatamaniuc
Hi Raja,

This sounds like this issue:
https://issues.apache.org/jira/browse/COUCHDB-3293

It stems from a bug in the Erlang HTTP parser
(http://erlang.org/pipermail/erlang-questions/2011-June/059567.html) and
perhaps mochiweb not knowing how to handle a "message too large" error.

One way to work around it is to increase the recbuf, say something like
this (in 2.0):

[chttpd]
server_options = [{recbuf, 65536}]

In 1.6 the corresponding option is, I think, in the [httpd] section:

[httpd]
socket_options = ...

See if that helps at all.

And btw, that was the reason for introducing these two configuration
parameters:

couchdb.max_document_id_length = infinity | Integer

replicator.max_document_id_length = infinity | Integer

Basically allowing another way to "avoid" the bug by limiting the size of
document ids accepted in the system.

Also, it seems the behavior in mochiweb was fixed as well to send a 413
instead of timing out or closing the connection as before. But the problem
with the Erlang HTTP parser might still be there:

https://github.com/mochi/mochiweb/commit/a6fdb9a3af1301c8be68cd1f85a87ce3028da07a

Cheers,
-Nick


On Wed, Feb 7, 2018 at 1:40 PM, Raja  wrote:

> Hi
> We are trying to put an nginx (or haproxy) in front of a CouchDB server to
> see if we can load balance some of our databases between multiple machines.
> We are currently on 1.6.1 and cannot move up to 2.x to take advantage of
> the newer features.
>
> The problem is that the _changes URLs work fine (through nginx or
> haproxy) as long as the query string length is < 8192 bytes. We do have
> some filtered replications that take UUIDs as query parameters, and if
> those exceed 8192 bytes, we get a "no reply from server" from HAProxy and
> a "Connection reset by peer" from Nginx fronting CouchDB.
>
> The format of the query is something like:
>
> curl -XGET 'http://username:password@url:5984/<dbname>/_changes?feed=normal&heartbeat=30&style=all_docs&filter=filtername&docIds=<list of ids>'
>
> Sometimes we have so many ids that the URL exceeds the 8192-byte limit.
> If we limit the number of ids, the values are returned properly, but
> beyond the 8192 limit the request seems to be truncated and gives an
> error.
>
> Please note that none of these happen if we go directly to CouchDB. This is
> only a problem if we go through Nginx or HAProxy. The nginx config is as
> mentioned here (
> https://cwiki.apache.org/confluence/display/COUCHDB/Nginx+as+a+proxy) and
> HAProxy is quite straightforward where all requests to the frontend are
> sent to a couchdb server.
>
> Also, we cannot use POST for _changes, as there is an issue with
> filterParameters being expected in the URL even when using POST (
> https://github.com/couchbase/couchbase-lite-ios/issues/1139).
>
> Any suggestions or workarounds to solve this would be greatly appreciated.
>
> Thanks
> Raja
>
>  --
> Raja
> rajasaur at gmail.com
>
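A quick back-of-the-envelope check of the limit described above (the host, database, and filter names below are placeholders):

```python
import uuid

# Each UUID contributes 36 characters plus a separator, so a few hundred
# ids is enough to push the request line well past an 8192-byte limit.
base = ("http://127.0.0.1:5984/mydb/_changes"
        "?feed=normal&heartbeat=30&style=all_docs"
        "&filter=myfilter&docIds=")
doc_ids = ",".join(str(uuid.uuid4()) for _ in range(250))
url = base + doc_ids
print(len(url) > 8192)
```

With roughly 220 UUIDs or fewer the URL stays under 8192 bytes, which lines up with the observation that shorter id lists replicate fine while longer ones fail at the proxy.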