zbentley commented on issue #11845:
URL: https://github.com/apache/pulsar/issues/11845#issuecomment-1183192468

   To double down, I think the statement @merlimat made 
[here](https://github.com/apache/pulsar/issues/6463#issuecomment-910782962):
   
   > you're expected to keep 1 single instance of C++ Pulsar client for the 
duration of your application
   
   is not going to hold in much of the Python world. In Python, the pulsar 
client is one library among many in potentially long-lived applications; it is 
not necessarily part of any global state, nor is it expected (or desired) to 
manage a global pool of connection state. Python programs can and will create 
a pulsar client, use it briefly, and then fully dispose of it, potentially 
many times over the life of a process.
   
   @BewareMyPower re: thread safety, some thoughts:
   - Splitting loggers into thread-local versions for each file is no problem, 
so long as those thread-local instances (and the threads they live in) have 
lifetimes tied to the client object that created them. As long as we properly 
join those threads on client destruction and never daemonize them (I think 
it's "detach" in Boost?), their lifetimes can be linked to the parent client 
object.
   - Do we need true nonblocking thread safety for those loggers? Could they be 
protected with a lock instead, so long as we acquire that lock as late as 
possible, i.e. only once we know we need to emit a log message at a given 
level? My hunch is that this would be OK performance-wise: most loggers write 
to the same stream/file anyway, so there's already a lock around their 
internal "write" actions to prevent the output data from getting mangled, and 
moving that lock "up" into application code wouldn't harm much, if anything.
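
   To sketch what I mean by both points (this is a hypothetical standalone example, not the actual Pulsar client code — `LateLockLogger`, `Client`, and `shutdown()` are made-up names): the mutex is acquired only after the level check says the message will be emitted, and the client joins its worker thread rather than detaching it, so the logger can never outlive the client that owns it.

   ```cpp
   #include <cassert>
   #include <mutex>
   #include <string>
   #include <thread>
   #include <vector>

   enum class Level { Debug = 0, Info = 1, Warn = 2 };

   // Hypothetical logger: the lock is taken "as late as possible",
   // i.e. only once the level check says we will actually emit.
   class LateLockLogger {
   public:
       explicit LateLockLogger(Level min) : min_(min) {}

       void log(Level lvl, const std::string& msg) {
           if (lvl < min_) return;              // filtered: no lock taken
           std::lock_guard<std::mutex> g(mu_);  // lock held only while writing
           lines_.push_back(msg);               // stand-in for the real sink
       }

       std::size_t emitted() {
           std::lock_guard<std::mutex> g(mu_);
           return lines_.size();
       }

   private:
       Level min_;
       std::mutex mu_;
       std::vector<std::string> lines_;
   };

   // Hypothetical client that owns both its logger and its worker thread.
   // The thread is joined (never detached) in shutdown(), which the
   // destructor also calls, tying the thread's lifetime to the client's.
   class Client {
   public:
       Client() : logger(Level::Info), worker_([this] {
           for (int i = 0; i < 100; ++i) {
               logger.log(Level::Debug, "noise");     // below threshold
               logger.log(Level::Info, "heartbeat");  // emitted under the lock
           }
       }) {}

       ~Client() { shutdown(); }

       void shutdown() {
           if (worker_.joinable()) worker_.join();
       }

       LateLockLogger logger;

   private:
       std::thread worker_;
   };
   ```

   The same shape should work with whatever the real client's logger interface is; the key properties are just "no lock on the filtered path" and "join, don't detach, on destruction".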

