nickva commented on a change in pull request #3940:
URL: https://github.com/apache/couchdb/pull/3940#discussion_r814304096
##########
File path: src/fabric/src/fabric_rpc.erl
##########
@@ -638,16 +638,14 @@ make_att_readers([#doc{atts = Atts0} = Doc | Rest]) ->
[Doc#doc{atts = Atts} | make_att_readers(Rest)].
make_att_reader({follows, Parser, Ref}) ->
+ % This code will fail if the returned closure is called by a
+ % process other than the one that called make_att_reader/1 in the
+ % first place. The reason we don't put everything inside the
+ % closure is that the `hello_from_writer` message must *always* be
+ % sent to the parser, even if the closure never gets called.
+ ParserRef = erlang:monitor(process, Parser),
+ Parser ! {hello_from_writer, Ref, self()},
Review comment:
I had noticed we previously cached the parser monitor reference (`PRef`) in the
process dictionary under `mp_parser_ref`, so for multiple attachments we would
have created only a single monitor.
In this PR we create a separate monitor for each attachment and send a
`hello_from_writer` for each one. If it's just a few extra monitor references,
that's not a big deal (though they are remote monitors, and I'm not sure how
costly those actually are...). But would multiple `hello_from_writer` messages
affect the correctness of the attachment-fetching protocol? Each one would end
up overwriting the `WriterPid => {WriterRef, 0}` entry, but the monitors would
stack up, so we could get multiple `'DOWN'` messages per attachment instead of
one per writer termination.
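To illustrate the stacking concern: `erlang:monitor/2` does not deduplicate, so monitoring the same process N times yields N independent references and N `'DOWN'` messages when it exits. A minimal standalone sketch (the module name `monitor_stack_demo` is hypothetical, not CouchDB code):

```erlang
%% monitor_stack_demo.erl -- hypothetical demo of monitor stacking.
-module(monitor_stack_demo).
-export([run/0]).

run() ->
    %% A target process that waits until told to stop.
    Pid = spawn(fun() -> receive stop -> ok end end),
    %% Two monitor/2 calls on the same Pid create two independent
    %% monitors; they do not collapse into one.
    Ref1 = erlang:monitor(process, Pid),
    Ref2 = erlang:monitor(process, Pid),
    Pid ! stop,
    %% Both monitors fire, so we receive one 'DOWN' per reference.
    D1 = receive {'DOWN', Ref1, process, Pid, normal} -> ok
         after 1000 -> timeout end,
    D2 = receive {'DOWN', Ref2, process, Pid, normal} -> ok
         after 1000 -> timeout end,
    {D1, D2}.
```

So if each attachment adds a monitor on the same writer, the parser would see one `'DOWN'` per attachment rather than one per writer exit.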
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: [email protected]
For queries about this service, please contact Infrastructure at:
[email protected]