Hi
You are missing retryInterval, and I would say it is needed for a proper retry after a queue lock. In our tests, once the RELP queue was full, it never resumed (even though resumeretrycount="-1" was set) until we added the interval setting.
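For reference, a minimal sketch of the working config below with the interval added; I'm assuming the setting meant here is action.resumeInterval (the number of seconds rsyslog waits between resume retries), and the "30" is only an illustrative value:

ruleset(name="remote_logserver"){
    action(type="omrelp"
           Target="10.0.0.43"
           Port="514"
           queue.filename="fila-msg"
           queue.size="1800000"
           queue.saveonshutdown="on"
           queue.type="LinkedList"
           queue.dequeuebatchsize="1"
           queue.maxdiskspace="536870912"
           action.resumeretrycount="-1"
           # illustrative value: wait 30 seconds between resume retries
           action.resumeInterval="30"
    )
}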
On 26/10/17 15:52, Jether B. Santos via rsyslog wrote:

Hi,

In order not to lose any messages, I realized I had to change queue.dequeuebatchsize to "1" and action.resumeretrycount to a value greater than the default "0" (I used "-1" for infinite retries). This configuration has worked for me:

ruleset(name="remote_logserver"){
    action(type="omrelp"
           Target="10.0.0.43"
           Port="514"
           queue.filename="fila-msg"
           queue.size="1800000"
           queue.saveonshutdown="on"
           queue.type="LinkedList"
           queue.dequeuebatchsize="1"
           queue.maxdiskspace="536870912"
           action.resumeretrycount="-1"
    )
}

Regards,
Jether

From: Jether B. Santos
Sent: Monday, 23 October 2017 20:40
To: 'rsyslog@lists.adiscon.com' <rsyslog@lists.adiscon.com>
Subject: RE: Client/server architecture with omrelp/imrelp - Loss of messages in case of server-side failure

Hello,

I realized the parameter RebindInterval was causing duplicated messages on both balanced servers. I removed it, and apparently messages are no longer being lost. Tomorrow I will stress-test it further.

Regards,
Jether

From: Jether B. Santos
Sent: Monday, 23 October 2017 11:35
To: 'rsyslog@lists.adiscon.com' <rsyslog@lists.adiscon.com>
Subject: Client/server architecture with omrelp/imrelp - Loss of messages in case of server-side failure

Hi everybody,

I'm testing the following rsyslog client/server architecture:

- Client side: an rsyslog server that consumes messages from a local Apache log file through the imfile module. The main worker threads save the messages in a disk-assisted queue of an omrelp action. The action threads dequeue the messages and send them to rsyslog servers behind an HAProxy instance.

Rsyslog client configuration:
------------------------------------------------------------
module(load="imfile" mode="inotify")
input(type="imfile"
      file="/var/log/httpd/access_log"
      tag="apache:"
      facility="local3"
      severity="notice"
      ruleset="remote_logserver")

module(load="omrelp")
ruleset(name="remote_logserver"){
    action(type="omrelp"
           Target="10.0.0.43"
           Port="514"
           queue.filename="fila-msg"
           queue.size="1800000"
           queue.saveonshutdown="on"
           queue.type="LinkedList"
           action.resumeretrycount="-1"
           action.resumeInterval="1"
           RebindInterval="720000")
}
------------------------------------------------------------

- Server side: two rsyslog servers behind an HAProxy load-balancer instance. The servers receive the logs through the imrelp module.

Rsyslog server configuration:
------------------------------------------------------------
module(load="imrelp")
input(type="imrelp" port="514")

local3.* /var/log/httpd-access.log
------------------------------------------------------------

The problem: every time I stop and start the rsyslog daemon on the remote rsyslog server the client is connected to, at least one message is lost, even though HAProxy switches the connection to the other rsyslog server. I tried configuring retrycount="-1", but without success.
I read about a similar problem on the mailing list (http://lists.adiscon.net/pipermail/rsyslog/2016-March/042379.html), but I have not understood exactly what was done to solve it (https://github.com/rsyslog/rsyslog/issues/974):

- Is it really mandatory to configure a batch of only one message at a time (queue.dequeuebatchsize="1") in order not to lose any messages when the remote rsyslog server fails?
- Has the "isolation" of a "corrupt message" been implemented? (Rainer said in the mailing-list thread mentioned above: "I thought that if a batch failed, it pushed all the messages back on the queue and retried with a half-size batch until it got to the one message that could not be processed and only did a fatal fail on that message.")
- What about the action retry feature? When it is enabled and you have a batch of many messages and just one of them fails, what is expected to happen?

My server environment:
- CentOS 7.3
- rsyslog version: 8.24

Regards,
Jether