Hi,

 I've been thinking about using lager to push logs, via Scribe, to our Big Data 
Hadoop backend for later analysis. Looking at the lager_syslog backend it looks 
fairly simple to create such a backend, but thrift works somewhat differently, 
and a few implementation questions came up while going through the lager code.

The way thrift works is that you create a connection:
{ok, Client} = thrift_client_util:new(Host, Port, Service, Opts)

then you can use the Client to do subsequent calls like
{Client2, _Result} = thrift_client:call(Client, ...)
{Client3, _Result} = thrift_client:call(Client2, ...)
[...]
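
In other words, the Client state has to be threaded through each call. Sending a batch might look something like this (a rough sketch; the function name send_all/2 and the 'Log' method/argument shape are made up for illustration):

```erlang
%% Thread the thrift Client through a sequence of calls with a fold,
%% so each call uses the descriptor returned by the previous one.
%% (send_all/2 is a hypothetical helper, not part of any library.)
send_all(Client0, Messages) ->
    lists:foldl(
      fun(Msg, ClientN) ->
              {ClientN1, _Result} =
                  thrift_client:call(ClientN, 'Log', [[Msg]]),
              ClientN1
      end, Client0, Messages).
```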

The problem is that you get a new Client descriptor with every call, and as I 
understand it you shouldn't reuse the same Client twice, but should instead use 
the new one returned by each call. One could of course create a fresh client 
and do the call within each handle_event() and pay that penalty, but there must 
be a smarter way. Going through lager_file_backend I saw that you keep an FD 
that gets passed around in the handler state; is a similar approach a good fit 
for a Scribe backend?
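
For what it's worth, here is a minimal sketch of what I had in mind: keep the Client in the gen_event handler state, the way lager_file_backend keeps its FD, and store the fresh descriptor back after each call. The module name, the 'Log' method, the log-entry shape, and the exact {log, ...} event format are all assumptions on my part and will vary with the lager and scribe thrift definitions in use:

```erlang
%% Hypothetical lager backend that keeps the thrift Client in the
%% gen_event state, analogous to lager_file_backend holding an FD.
-module(lager_scribe_backend).
-behaviour(gen_event).

-export([init/1, handle_call/2, handle_event/2,
         handle_info/2, terminate/2, code_change/3]).

-record(state, {client, level}).

init([Host, Port, Service, Level]) ->
    %% Open the thrift connection once, when the backend is installed.
    {ok, Client} = thrift_client_util:new(Host, Port, Service, []),
    {ok, #state{client = Client, level = lager_util:level_to_num(Level)}}.

handle_event({log, Level, _Date, [_LevelStr, _Loc, Message]},
             #state{client = Client, level = L} = State) when Level =< L ->
    %% Each call returns a new Client; store it back in the state so
    %% the next event uses the fresh descriptor.
    {Client2, _Result} =
        thrift_client:call(Client, 'Log', [[Message]]),
    {ok, State#state{client = Client2}};
handle_event(_Event, State) ->
    {ok, State}.

handle_call(_Request, State) ->
    {ok, ok, State}.

handle_info(_Info, State) ->
    {ok, State}.

terminate(_Reason, #state{client = Client}) ->
    %% Close the connection when the handler is removed.
    _ = thrift_client:close(Client),
    ok.

code_change(_OldVsn, State, _Extra) ->
    {ok, State}.
```

Does that match how you intended backend state to be managed, or is there a penalty to holding the connection open for the lifetime of the handler?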

--
Bip Thelin
 
Evolope AB | Lugnets Allé 1 | 120 33 Stockholm
Tel 08-533 335 37 | Mob 0735-18 18 90
www.evolope.se

_______________________________________________
riak-users mailing list
[email protected]
http://lists.basho.com/mailman/listinfo/riak-users_lists.basho.com
