nickva commented on code in PR #5762:
URL: https://github.com/apache/couchdb/pull/5762#discussion_r2565694960


##########
src/config/src/config.erl:
##########
@@ -615,6 +594,58 @@ settings_fmap_fun({{?MODULE, ?SETTINGS, Sec, Key}, Val}) ->
 settings_fmap_fun(_) ->
     false.
 
+reload(Config) ->
+    #config{ini_files_dirs = IniFilesDirs} = Config,
+    IniFiles = expand_dirs(IniFilesDirs),
+    % Update ets with ini values.
+    IniMap = ini_map(IniFiles),
+    maps:foreach(
+        fun({Sec, Key}, V) ->
+            VExisting = get_value(Sec, Key, undefined),
+            put_value(Sec, Key, V),

Review Comment:
   Wonder if this can be moved to the `false` case. The issue is constantly re-put-ing persistent term values and triggering a global GC across all the processes using them (unless the persistent term logic has something to detect that a put is a no-op, which I doubt it has, but it's worth checking).
   
   See https://www.erlang.org/doc/apps/erts/persistent_term.html
   
   > When a (complex) term is deleted (using 
[erase/1](https://www.erlang.org/doc/apps/erts/persistent_term.html#erase/1)) 
or replaced by another (using 
[put/2](https://www.erlang.org/doc/apps/erts/persistent_term.html#put/2)), a 
global garbage collection is initiated. It works like this:
   
   >  All processes in the system will be scheduled to run a scan of their 
heaps for the term that has been deleted. While such scan is relatively 
light-weight, if there are many processes, the system can become less 
responsive until all processes have scanned their heaps.
   >  If the deleted term (or any part of it) is still used by a process, that process will do a major (fullsweep) garbage collection and copy the term into the process. However, at most two processes at a time will be scheduled to do that kind of garbage collection.
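   Roughly what I have in mind, as a sketch (assuming `put_value/3` eventually lands in a `persistent_term:put/2` somewhere downstream; the `case` shape here is illustrative, not a concrete patch):
   
   ```erlang
   % Only write when the value actually changed, so a reload that produces
   % the same ini values becomes a no-op and does not trigger the
   % persistent_term global GC described in the docs above.
   maps:foreach(
       fun({Sec, Key}, V) ->
           case get_value(Sec, Key, undefined) of
               V ->
                   % Unchanged value: skip the put entirely.
                   ok;
               _Existing ->
                   put_value(Sec, Key, V)
           end
       end,
       IniMap
   ).
   ```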



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: [email protected]

For queries about this service, please contact Infrastructure at:
[email protected]

Reply via email to