[GitHub] [couchdb] rnewson commented on a diff in pull request #4729: Fabric workers should exit if the client exits

2023-08-18 Thread via GitHub


rnewson commented on code in PR #4729:
URL: https://github.com/apache/couchdb/pull/4729#discussion_r1298106491


##
rel/haproxy.cfg:
##
@@ -25,9 +25,9 @@ defaults
 option redispatch
 retries 4
 option http-server-close
-timeout client 15
-timeout server 360
-timeout connect 500
+timeout client 60s
+timeout server 60s

Review Comment:
   this was just for testing, but rel/haproxy.cfg is not part of the 
deployment/install, is it? our installers only set up couchdb.






[GitHub] [couchdb] rnewson commented on pull request #4729: Fabric workers should exit if the client exits

2023-08-18 Thread via GitHub


rnewson commented on PR #4729:
URL: https://github.com/apache/couchdb/pull/4729#issuecomment-1683488124

   good point on `fabric_doc_update` and possibly others.
   
   I confirmed, both directly and via haproxy, that when a client disconnects (curl plus CTRL-C in the first case, and in the second anything that hits haproxy's server timeout, such as `_changes?feed=continuous&timeout=1000`), the mochiweb process is killed, and this triggers the new code above to take down the workers.
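
   For illustration, a minimal sketch of the mechanism being confirmed, not the PR's actual code: the coordinator monitors the client (mochiweb request) process and tears down its rexi workers when that process exits. Representing `Workers` as a list of `{Node, Ref}` pairs, and the use of `rexi:kill_all/1` here, are assumptions of this sketch.

   ```erlang
   %% Hypothetical sketch: monitor the client process and kill the rexi
   %% workers as soon as the client goes away. Workers = [{Node, Ref}].
   wait_for_client(ClientPid, Workers) ->
       MonRef = erlang:monitor(process, ClientPid),
       receive
           {'DOWN', MonRef, process, ClientPid, _Reason} ->
               %% the HTTP request process exited; stop the remote workers
               rexi:kill_all(Workers),
               ok
       end.
   ```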
   
   
   





[GitHub] [couchdb-fauxton] YakovL opened a new issue, #1411: Don't throw from Mango Query tab after doc edit

2023-08-18 Thread via GitHub


YakovL opened a new issue, #1411:
URL: https://github.com/apache/couchdb-fauxton/issues/1411

   ## Expected Behavior
   When I'm in the Mango Query tab (/_utils/#/database/db_name/_find), after 
running a query I can click on a document to see its contents or to edit it. 
When I leave the doc editing UI (by clicking either Cancel, Save Changes, or 
Delete), I expect to end up in the Mango Query tab, ideally seeing the same or 
updated list.
   
   ## Current Behavior
   When I click Delete and confirm the deletion, I'm thrown to the All Documents tab (/_utils/#/database/db_name/_all_docs).
   When I click Cancel, I see "No Documents Found" instead of the same list (granted, the list could pick up updates from other sources, but that issue applies even when just viewing the list, so it should be handled separately and is not relevant here).
   When I click Save Changes, I see "No Documents Found" instead of the same list with the updated document that I saw before.
   
   ## Possible Solution
   A minimal solution: after deleting, return me to the Mango Query tab, not to All Documents.
   An ideal solution: also keep the shown list (including pagination!) after Cancel/Save Changes, and update the doc columns if they were changed.
   
   ## Context
   I'm debugging my app and managing its data, and these annoyances slow down my workflow.
   
   ## Your Environment
   * Version used: the bottom-left corner shows "Fauxton on Apache CouchDB v. 3.2.2"
   * Browser Name and version: Vivaldi 6.1 (Chromium-based)
   * Operating System and version: Windows 10 Pro x64
   





[GitHub] [couchdb] rnewson commented on a diff in pull request #4718: enhance smoosh to cleanup search indexes when ddocs change

2023-08-18 Thread via GitHub


rnewson commented on code in PR #4718:
URL: https://github.com/apache/couchdb/pull/4718#discussion_r1298337833


##
src/fabric/src/fabric.erl:
##
@@ -590,25 +590,38 @@ cleanup_index_files(DbName) ->
 cleanup_local_indices_and_purge_checkpoints([]) ->
     ok;
 cleanup_local_indices_and_purge_checkpoints([_ | _] = Dbs) ->
-    AllIndices = lists:map(fun couch_mrview_util:get_index_files/1, Dbs),
-    AllPurges = lists:map(fun couch_mrview_util:get_purge_checkpoints/1, Dbs),
-    Sigs = couch_mrview_util:get_signatures(hd(Dbs)),
-    ok = cleanup_purges(Sigs, AllPurges, Dbs),
-    ok = cleanup_indices(Sigs, AllIndices).
+    MrViewIndices = lists:map(fun couch_mrview_util:get_index_files/1, Dbs),
+    MrViewPurges = lists:map(fun couch_mrview_util:get_purge_checkpoints/1, Dbs),
+    MrViewSigs = couch_mrview_util:get_signatures(hd(Dbs)),
+    ok = cleanup_mrview_purges(MrViewSigs, MrViewPurges, Dbs),
+    ok = cleanup_mrview_indices(MrViewSigs, MrViewIndices),
 
-cleanup_purges(Sigs, AllPurges, Dbs) ->
+    ClouseauSigs = dreyfus_util:active_sigs(hd(Dbs)),
+    ok = cleanup_clouseau_indices(Dbs, ClouseauSigs),
+
+    NouveauSigs = nouveau_util:active_sigs(hd(Dbs)),
+    ok = cleanup_nouveau_indices(Dbs, NouveauSigs).
+
+cleanup_mrview_purges(Sigs, AllPurges, Dbs) ->
     Fun = fun(DbPurges, Db) ->
         couch_mrview_cleanup:cleanup_purges(Db, Sigs, DbPurges)
     end,
     lists:zipwith(Fun, AllPurges, Dbs),
     ok.
 
-cleanup_indices(Sigs, AllIndices) ->
+cleanup_mrview_indices(Sigs, AllIndices) ->
     Fun = fun(DbIndices) ->
         couch_mrview_cleanup:cleanup_indices(Sigs, DbIndices)
     end,
     lists:foreach(Fun, AllIndices).
 
+cleanup_clouseau_indices(Dbs, ActiveSigs) ->
+    Fun = fun(Db) -> clouseau_rpc:cleanup(Db, ActiveSigs) end,
+    lists:foreach(Fun, Dbs).
+cleanup_nouveau_indices(Dbs, ActiveSigs) ->
+    Fun = fun(Db) -> nouveau_api:delete_path(nouveau_util:index_name(Db), ActiveSigs) end,
+    lists:foreach(Fun, Dbs).

Review Comment:
   `clouseau_rpc:cleanup` is a `gen_server:cast`, so it's fine even if the target node doesn't exist.
   
   all the `nouveau_api` functions have a `send_if_enabled` check and return `{error, nouveau_not_enabled}` when nouveau is disabled.
   
   in either case I ignore the function result, but perhaps I should check that it's one of the two expected results?
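
   A sketch of what that stricter check could look like, illustrative only; that a successful `nouveau_api:delete_path/2` call returns `ok` is an assumption here:

   ```erlang
   %% Hypothetical stricter variant: accept only the two expected results
   %% and crash loudly on anything else (assumes success returns `ok`).
   cleanup_nouveau_indices(Dbs, ActiveSigs) ->
       Fun = fun(Db) ->
           case nouveau_api:delete_path(nouveau_util:index_name(Db), ActiveSigs) of
               ok -> ok;
               {error, nouveau_not_enabled} -> ok
           end
       end,
       lists:foreach(Fun, Dbs).
   ```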






[GitHub] [couchdb] nickva commented on a diff in pull request #4718: enhance smoosh to cleanup search indexes when ddocs change

2023-08-18 Thread via GitHub


nickva commented on code in PR #4718:
URL: https://github.com/apache/couchdb/pull/4718#discussion_r1298530399


##
src/fabric/src/fabric.erl:
##
@@ -590,25 +590,38 @@ cleanup_index_files(DbName) ->
 cleanup_local_indices_and_purge_checkpoints([]) ->
     ok;
 cleanup_local_indices_and_purge_checkpoints([_ | _] = Dbs) ->
-    AllIndices = lists:map(fun couch_mrview_util:get_index_files/1, Dbs),
-    AllPurges = lists:map(fun couch_mrview_util:get_purge_checkpoints/1, Dbs),
-    Sigs = couch_mrview_util:get_signatures(hd(Dbs)),
-    ok = cleanup_purges(Sigs, AllPurges, Dbs),
-    ok = cleanup_indices(Sigs, AllIndices).
+    MrViewIndices = lists:map(fun couch_mrview_util:get_index_files/1, Dbs),
+    MrViewPurges = lists:map(fun couch_mrview_util:get_purge_checkpoints/1, Dbs),
+    MrViewSigs = couch_mrview_util:get_signatures(hd(Dbs)),
+    ok = cleanup_mrview_purges(MrViewSigs, MrViewPurges, Dbs),
+    ok = cleanup_mrview_indices(MrViewSigs, MrViewIndices),
 
-cleanup_purges(Sigs, AllPurges, Dbs) ->
+    ClouseauSigs = dreyfus_util:active_sigs(hd(Dbs)),
+    ok = cleanup_clouseau_indices(Dbs, ClouseauSigs),
+
+    NouveauSigs = nouveau_util:active_sigs(hd(Dbs)),
+    ok = cleanup_nouveau_indices(Dbs, NouveauSigs).
+
+cleanup_mrview_purges(Sigs, AllPurges, Dbs) ->
     Fun = fun(DbPurges, Db) ->
         couch_mrview_cleanup:cleanup_purges(Db, Sigs, DbPurges)
     end,
     lists:zipwith(Fun, AllPurges, Dbs),
     ok.
 
-cleanup_indices(Sigs, AllIndices) ->
+cleanup_mrview_indices(Sigs, AllIndices) ->
     Fun = fun(DbIndices) ->
         couch_mrview_cleanup:cleanup_indices(Sigs, DbIndices)
     end,
     lists:foreach(Fun, AllIndices).
 
+cleanup_clouseau_indices(Dbs, ActiveSigs) ->
+    Fun = fun(Db) -> clouseau_rpc:cleanup(Db, ActiveSigs) end,
+    lists:foreach(Fun, Dbs).
+cleanup_nouveau_indices(Dbs, ActiveSigs) ->
+    Fun = fun(Db) -> nouveau_api:delete_path(nouveau_util:index_name(Db), ActiveSigs) end,
+    lists:foreach(Fun, Dbs).

Review Comment:
   Makes sense. The only worry was that we would throw an exception and prevent 
other indexes from getting cleaned up. Thanks for double-checking, I think it's 
fine as is, then. 
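
   For context on that worry: `lists:foreach/2` stops at the first exception, so a crash while cleaning one database's indexes would have skipped the rest. A quick shell illustration:

   ```erlang
   %% A badmatch on the second element aborts the whole traversal:
   1> lists:foreach(fun(X) -> ok = X end, [ok, not_ok, ok]).
   ** exception error: no match of right hand side value not_ok
   ```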






[GitHub] [couchdb] nickva commented on a diff in pull request #4729: Fabric workers should exit if the client exits

2023-08-18 Thread via GitHub


nickva commented on code in PR #4729:
URL: https://github.com/apache/couchdb/pull/4729#discussion_r1298913972


##
rel/haproxy.cfg:
##
@@ -25,9 +25,9 @@ defaults
 option redispatch
 retries 4
 option http-server-close
-timeout client 15
-timeout server 360
-timeout connect 500
+timeout client 60s
+timeout server 60s

Review Comment:
   At least for changes it's not needed as we already have a changes timeout 
max default of 60 seconds. That behavior is a bit odd, no matter what the user 
passes in as the timeout we always pick the minimum of the default and user's 
timeout: 
https://github.com/apache/couchdb/blob/e60e27554708f46a9e6528fa6049d025c1aba859/src/couch/src/couch_changes.erl#L401
 I guess it's for cases of multi-tenant clusters to limit the timeout upper 
limit?
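
   A minimal sketch of that capping behavior (illustrative; the real logic lives at the `couch_changes.erl` line linked above, and the 60000 ms constant stands in for the configured default):

   ```erlang
   %% Whatever the client asks for, the effective timeout never exceeds
   %% the default: e.g. 1000 stays 1000, but 600000 is capped to 60000.
   effective_changes_timeout(UserTimeoutMs) ->
       DefaultTimeoutMs = 60000,
       erlang:min(UserTimeoutMs, DefaultTimeoutMs).
   ```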






[GitHub] [couchdb] nickva commented on pull request #4729: Fabric workers should exit if the client exits

2023-08-18 Thread via GitHub


nickva commented on PR #4729:
URL: https://github.com/apache/couchdb/pull/4729#issuecomment-1684533075

   > I confirmed, both directly and via haproxy, that when a client disconnects (curl plus CTRL-C in the first case, and in the second anything that hits haproxy's server timeout, such as `_changes?feed=continuous&timeout=1000`), the mochiweb process is killed, and this triggers the new code above to take down the workers.
   
   Cleanup already happens without this patch, from what I've observed. Here is how I tested on main:
   
   ```diff
    diff --git a/src/chttpd/src/chttpd_db.erl b/src/chttpd/src/chttpd_db.erl
    index e2de301b2..6c2beba83 100644
    --- a/src/chttpd/src/chttpd_db.erl
    +++ b/src/chttpd/src/chttpd_db.erl
    @@ -127,6 +127,8 @@ handle_changes_req1(#httpd{} = Req, Db) ->
             db_open_options = [{user_ctx, couch_db:get_user_ctx(Db)}]
         },
         Max = chttpd:chunked_response_buffer_size(),
    +    couch_log:error(" +++TRACING _changes ~p:~p@~B REQPID:~p", [?MODULE, ?FUNCTION_NAME, ?LINE, self()]),
    +    dbg:tracer(), dbg:p(self(), p),
         case ChangesArgs#changes_args.feed of
             "normal" ->
                 T0 = os:timestamp(),
    diff --git a/src/fabric/src/fabric_db_update_listener.erl b/src/fabric/src/fabric_db_update_listener.erl
    index 78ccf5a4d..fb508294a 100644
    --- a/src/fabric/src/fabric_db_update_listener.erl
    +++ b/src/fabric/src/fabric_db_update_listener.erl
    @@ -37,6 +37,8 @@
     }).
     
     go(Parent, ParentRef, DbName, Timeout) ->
    +    couch_log:error(" +++TRACING UPDATE NOTIFIER+++ ~p:~p@~B ~p Parent:~p", [?MODULE, ?FUNCTION_NAME, ?LINE, DbName, Parent]),
    +    dbg:tracer(), dbg:p(self(), p),
         Shards = mem3:shards(DbName),
         Notifiers = start_update_notifiers(Shards),
         MonRefs = lists:usort([rexi_utils:server_pid(N) || #worker{node = N} <- Notifiers]),
    @@ -82,6 +84,8 @@ start_update_notifiers(Shards) ->
     
     % rexi endpoint
     start_update_notifier(DbNames) ->
    +    couch_log:error(" +++TRACING UPDATE NOTIFIER WORKER+++ ~p:~p@~B~p", [?MODULE, ?FUNCTION_NAME, ?LINE, DbNames]),
    +    dbg:tracer(), dbg:p(self(), p),
         {Caller, Ref} = get(rexi_from),
         Notify = config:get("couchdb", "maintenance_mode", "false") /= "true",
         State = #cb_state{client_pid = Caller, client_ref = Ref, notify = Notify},
    diff --git a/src/fabric/src/fabric_rpc.erl b/src/fabric/src/fabric_rpc.erl
    index fa6ea5116..64fdbf4b5 100644
    --- a/src/fabric/src/fabric_rpc.erl
    +++ b/src/fabric/src/fabric_rpc.erl
    @@ -69,6 +69,8 @@ changes(DbName, Args, StartSeq) ->
     changes(DbName, #changes_args{} = Args, StartSeq, DbOptions) ->
         changes(DbName, [Args], StartSeq, DbOptions);
     changes(DbName, Options, StartVector, DbOptions) ->
    +    couch_log:error(" ++TRACING CHANGES WORKER+ ~p:~p@~B~p", [?MODULE, ?FUNCTION_NAME, ?LINE, DbName]),
    +    dbg:tracer(), dbg:p(self(), p),
         set_io_priority(DbName, DbOptions),
         Args0 = lists:keyfind(changes_args, 1, Options),
         #changes_args{dir = Dir, filter_fun = Filter} = Args0,
   ```
   
   That sets up process event traces on the request process, the changes worker process, the db update listener process, and the db update notifier worker process. The goal is that these should be cleaned up as soon as we attempt any write to the closed socket.
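
   For reference, the instrumentation pattern used above in isolation: `dbg:tracer/0` starts the default tracer, and `dbg:p(Pid, p)` enables the `procs` trace flag, which reports the spawn, link, unlink, exit, and getting_linked/getting_unlinked events seen in the output below.

   ```erlang
   dbg:tracer(),        %% start the default trace message collector
   dbg:p(self(), p).    %% `p` = procs: spawn/link/exit events for self()
   ```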
   
   (`db` is Q=1 empty db)
   
   ```
   % curl -i $DB/db/_changes'?feed=continuous&since=now'
   HTTP/1.1 200 OK
   ...
   
   ^C
   ```
   
   ```
    [error] 2023-08-18T22:54:36.654375Z node1@127.0.0.1 <0.949.0> cc6744542c  +++TRACING _changes chttpd_db:handle_changes_req1@130 REQPID:<0.949.0>
    (<0.949.0>) spawn <0.974.0> as erlang:apply(#Fun,[])
    (<0.949.0>) link <0.974.0>
    (<0.949.0>) getting_unlinked <0.974.0>
    [error] 2023-08-18T22:54:36.654949Z node1@127.0.0.1 <0.976.0>   +++TRACING UPDATE NOTIFIER+++ fabric_db_update_listener:go@40 <<"db">> Parent:<0.949.0>
    (<0.949.0>) spawn <0.976.0> as fabric_db_update_listener:go(<0.949.0>,#Ref<0.830586435.801374217.85503>,<<"db">>,6)
    (<0.949.0>) link <0.976.0>
    (<0.949.0>) spawn <0.978.0> as erlang:apply(#Fun,[])
    (<0.949.0>) link <0.978.0>
    (<0.949.0>) spawn <0.979.0> as erlang:apply(#Fun,[])
    [error] 2023-08-18T22:54:36.655018Z node1@127.0.0.1 <0.977.0> cc6744542c  ++TRACING CHANGES WORKER+ fabric_rpc:changes@72<<"shards/-/db.1692392006">>
    (<0.976.0>) spawn <0.980.0> as erlang:apply(#Fun,[])
    [error] 2023-08-18T22:54:36.655123Z node1@127.0.0.1 <0.982.0>   +++TRACING UPDATE NOTIFIER WORKER+++ fabric_db_update_listener:start_update_notifier@87[<<"shards/-/db.1692392006">>]
    (<0.976.0>) link <0.980.0>
    (<0.976.0>) spawn <0.981.0> as erlang:apply(#Fun,[])
    (<0.977.0>) exit normal
    (<0.949.0>) getting_unlinked <0.978.0>
    
    (<0.976.0>) exit normal
    (<0.982.0>) exit killed
    (<0.949.0>) exit shutdown
   ```
   
* `<0.949.0>`, the request process, is killed with a shutdown 

[Jenkins] SUCCESS: CouchDB » Full Platform Builds » main #818

2023-08-18 Thread Apache Jenkins Server
Yay, we passed. 
https://ci-couchdb.apache.org/job/jenkins-cm1/job/FullPlatformMatrix/job/main/818/display/redirect