On Thu, Nov 17, 2011 at 04:52:58PM +0100, Bip Thelin wrote:
> Hi,
> 
>  I've been thinking about using lager to push logs to our Big Data Hadoop 
> backend for later analysis using Scribe. Looking at the lager_syslog backend 
> it looks fairly simple to create such a backend. But Thrift works somewhat 
> differently, and some implementation questions came up while going through 
> the lager code.
> 
> The way thrift works is that you create a connection:
> {ok, Client} = thrift_client_util:new()
> 
> then you can use the Client to do subsequent calls like
> {Client2, _Result} = thrift_client:call(Client, ...)
> {Client3, _Result} = thrift_client:call(Client2, ...)
> [...]
> 
> The problem is that you get a new Client descriptor every time, and as I've 
> understood it you shouldn't reuse the same Client but should instead use the 
> new one with each call. One could of course create a client and do the call 
> within each handle_event() and pay that penalty, but there must be a 
> smarter way. Going through lager_file_backend I saw that you keep an 
> FD that gets passed around; would a similar approach work for a Scribe 
> backend?
> 
It should be fine to create the client and just store the current
descriptor in the backend's state. Creating a whole new client on every
message seems like a bad idea.

Andrew

_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
