Re: Don't print Ping caused error logs

2017-06-19 Thread Eric Plowe
The driver has load balancing policies built in. Behind a load balancer
you'd lose the benefit of things like the TokenAwarePolicy.
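To make that trade-off concrete, here is a toy, stdlib-only sketch of what token-aware routing buys you. The ring, node names, and hash function are all made up for illustration (MD5 stands in for Cassandra's Murmur3 partitioner): a token-aware client sends the request straight to a replica that owns the key's token, while a client behind a round-robin load balancer lands on an arbitrary node, which then has to coordinate an extra network hop.

```python
import bisect
import hashlib

# Hypothetical four-node ring; tokens are illustrative, not real Murmur3 output.
RING = [(0, "node-a"), (2**30, "node-b"), (2**31, "node-c"), (3 * 2**30, "node-d")]
TOKENS = [t for t, _ in RING]

def token(partition_key: str) -> int:
    # Stand-in for Cassandra's Murmur3 partitioner (MD5 here, for illustration).
    return int(hashlib.md5(partition_key.encode()).hexdigest(), 16) % 2**32

def owner(partition_key: str) -> str:
    # The node owning a token is the next one clockwise on the ring.
    i = bisect.bisect_right(TOKENS, token(partition_key)) % len(RING)
    return RING[i][1]

# Token-aware: go straight to a replica for this partition key.
replica = owner("user:42")

# Behind a round-robin LB the coordinator is arbitrary; whenever it is not
# the replica, it must forward the request -- one extra network hop.
lb_choice = RING[0][1]
extra_hop = lb_choice != replica
```

This is why a VIP in front of the cluster does not just add a hop on the way in: it also prevents the driver from ever making the replica-aware choice itself.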
On Mon, Jun 19, 2017 at 3:49 PM Jonathan Haddad  wrote:



Re: Don't print Ping caused error logs

2017-06-19 Thread Jonathan Haddad
The driver grabs all the cluster information from the contact points you
provide and automatically connects to the rest of the nodes.  You don't need
(and shouldn't use) a load balancer.

Jon
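
A rough sketch of that discovery step (a toy model, not a real driver API — actual drivers read the cluster topology from the system.peers/system.local tables of whichever contact point answers, then open connections to every node themselves; the addresses and topology below are invented for illustration):

```python
# What each reachable node would report from its system.peers/system.local
# tables: the full membership of the cluster.
CLUSTER_TOPOLOGY = {
    "10.0.0.1": ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"],
    "10.0.0.2": ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"],
}

def discover_nodes(contact_points):
    """Ask each contact point for the full node list; first answer wins."""
    for cp in contact_points:
        peers = CLUSTER_TOPOLOGY.get(cp)  # stand-in for querying the node
        if peers is not None:
            return sorted(peers)
    raise ConnectionError("no contact point reachable")

# Two seeds are enough: even with the first one down, the client ends up
# knowing all four nodes and balances requests over them itself, with no
# load balancer in the path.
nodes = discover_nodes(["10.0.0.9", "10.0.0.2"])  # first seed is "down"
```

So the contact-point list only needs to contain a few stable seeds, not every node; membership changes are picked up from the cluster itself.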

On Mon, Jun 19, 2017 at 12:28 PM Daniel Hölbling-Inzko <
daniel.hoelbling-in...@bitmovin.com> wrote:



Re: Don't print Ping caused error logs

2017-06-19 Thread Daniel Hölbling-Inzko
Just out of curiosity: how do you then make sure all nodes get the same
amount of traffic from clients without having to maintain a manual
contact-points list of all Cassandra nodes in the client applications?
Especially with big C* deployments this sounds like a lot of work whenever
adding/removing nodes. Putting them behind an LB that can auto-discover
nodes (or manually adding them to the LB rotation, etc.) sounds like a much
easier way.
I am thinking mostly of cloud LB systems like AWS ELB or GCP LB.

Or can the client libraries discover nodes and use other contact points for
subsequent requests? Having a bunch of seed nodes would be easier, I guess.

Greetings Daniel
Akhil Mehra  wrote on Mon., 19 June 2017 at 11:44:



Re: Don't print Ping caused error logs

2017-06-19 Thread Akhil Mehra
Just in case you are not aware, using a load balancer is an anti-pattern.
Please refer to:
http://docs.datastax.com/en/landing_page/doc/landing_page/planning/planningAntiPatterns.html#planningAntiPatterns__AntiPatLoadBal

You can turn off logging for a particular class using the nodetool
setlogginglevel command:
http://docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsSetLogLev.html

In your case you can try setting the log level for
org.apache.cassandra.transport.Message to WARN using the following command:

nodetool setlogginglevel org.apache.cassandra.transport.Message WARN

Obviously, this will suppress all INFO-level logging in the Message class.
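
Note that nodetool setlogginglevel only changes the running process, so the override is lost on restart. Assuming the node uses the stock logback setup shipped with Cassandra, a persistent equivalent would be adding a logger override to conf/logback.xml (sketch):

```xml
<!-- In conf/logback.xml, inside the <configuration> element: -->
<logger name="org.apache.cassandra.transport.Message" level="WARN"/>
```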

I hope that helps.

Cheers,
Akhil




> On 19/06/2017, at 9:13 PM, wxn...@zjqunshuo.com wrote:



Don't print Ping caused error logs

2017-06-19 Thread wxn...@zjqunshuo.com
Hi,
Our cluster nodes are behind an SLB (Service Load Balancer) with a VIP, and
the Cassandra clients access the cluster through the VIP.
system.log prints the IOException below every few seconds. I guess it's the
SLB service pinging port 9042 of the Cassandra nodes periodically that
causes the exceptions to be printed.
Is there any way to prevent these ping-caused exceptions from being printed?

INFO  [SharedPool-Worker-1] 2017-06-19 16:54:15,997 Message.java:605 - Unexpected exception during request; channel = [id: 0x332c09b7, /10.253.106.210:9042]
java.io.IOException: Error while read(...): Connection reset by peer
    at io.netty.channel.epoll.Native.readAddress(Native Method) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
    at java.lang.Thread.run(Thread.java:745) [na:1.7.0_85]

Cheers,
-Simon