Hi All,
I'm running a benchmark on Cassandra using a benchmark client which I've
written myself.
I'm running the following scenario:
One Cassandra node on the same machine as the client.
The client writes a new key every 1 second and deletes it after 10 seconds, so
at any given time there are roughly 10 live keys.
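The write/delete schedule above can be sketched as follows (a simulated clock only, with a hypothetical `live_keys` helper; a real client would issue insert/remove calls over Thrift):

```python
# Sketch of the benchmark's key lifecycle: one new key per second,
# each key deleted 10 seconds after it was written. This simulates
# the schedule; it does not talk to Cassandra.
def live_keys(t, ttl=10):
    """Keys written at seconds 0..t that have not yet been deleted."""
    return [k for k in range(t + 1) if t - k < ttl]

# During the first 10 seconds the population ramps up...
assert len(live_keys(5)) == 6
# ...then holds steady at 10 live keys at any given time.
assert len(live_keys(60)) == 10
```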
The only indication I have that Cassandra realized something was wrong
during this period was this INFO message:
10.33.2.70:/var/log/cassandra/output.log
DEBUG 20:00:35,841 get_slice
DEBUG 20:00:35,841 weakreadremote reading SliceFromReadCommand(table='jolitics.com', key='4c43228354b38f14a1a015d
Agreed. But those connection errors were happening at seemingly random
times, not at the time when I was seeing the problem. Now I am seeing the
problem, and here are some logs without ConnectionExceptions.
Here we're asking 10.33.2.70 for key: 52e86817a577f75e545cdecd174d8b17
which resides only on 10.
This is definitely not a Cassandra bug, something external is causing
those connection failures.
On Sat, Jun 19, 2010 at 3:12 PM, AJ Slater wrote:
> Logging with TRACE reveals immediate problems with no client requests
> coming in to the servers. The problem was immediate and persisted over
> the course of half an hour.
tcpdump shows bidirectional communication with ACKs during a known
problem period. I did not have TRACE logging going during the period I
have tcpdump logs, but I assume that an 'INFO error connecting to' is
probably caused by a ConnectException.
For instance...
lpc03:~$ telnet fs02 7000
...conne
> TRACE 14:42:06,248 unable to connect to /10.33.3.20
> java.net.ConnectException: Connection refused
> at java.net.PlainSocketImpl.socketConnect(Native Method)
So that's interesting, since it is a clear failure that comes from the
operating system and indicates something which can be observed.
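The same OS-level failure can be reproduced outside the JVM. A minimal sketch, assuming a Unix-like host where nothing is listening on the probed port (the equivalent of Java's `java.net.ConnectException` is Python's `ConnectionRefusedError`, both wrapping the kernel's ECONNREFUSED):

```python
import socket

# Grab a free loopback port, then close it so nothing is listening.
# Connecting to it afterwards is refused by the kernel, which is the
# same failure the TRACE log shows as java.net.ConnectException.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()

try:
    socket.create_connection(("127.0.0.1", port), timeout=2)
    refused = False
except ConnectionRefusedError:  # errno ECONNREFUSED from the OS
    refused = True

assert refused
```

This is why a refused connect points at something external to Cassandra: the refusal is generated by the remote host's kernel (no listener, or a firewall reset) before any Cassandra code runs.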
Logging with TRACE reveals immediate problems with no client requests
coming in to the servers. The problem was immediate and persisted over
the course of half an hour:
10.33.2.70 lpc03
10.33.3.10 fs01
10.33.3.20 fs02
a...@lpc03:~$ grep unable /var/log/cassandra/output.log
TRACE 14:07:52,1
On Sat, Jun 19, 2010 at 9:30 AM, Christian van der Leeden
wrote:
> Hi Thomas,
>
> did you look at cassandra gem from twitter (fauna/cassandra) on github?
> They also use the thrift_client and already have the basic cassandra API
> accessible.
>
> I'm also using ruby with cassandra and still need to find a slick way to do
> the inserts and when to update the indexes.
I shall do just that. I did a bunch of tests this morning and the
situation appears to be this:
I have three nodes A, B and C, with RF=2. I understand now why this
issue wasn't apparent with RF=3.
If there are regular intranode column requests going on (e.g. i set up
a pinger to get remote column
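The RF=2 vs RF=3 observation can be sketched with a toy placement model (this is not Cassandra's actual partitioner; `hash()` stands in for token ownership on a 3-node ring):

```python
# Toy replica placement on a 3-node ring, SimpleStrategy-style:
# a key's replicas are its owner node plus the next RF-1 nodes.
nodes = ["A", "B", "C"]

def replicas(key, rf):
    start = hash(key) % len(nodes)  # stand-in for token ownership
    return {nodes[(start + i) % len(nodes)] for i in range(rf)}

# With RF=3 every node holds every key, so any coordinator can answer
# locally and the internode read path is never exercised. With RF=2,
# for each key exactly one node has no local copy and must do an
# internode read -- which is where the connection problem surfaces.
key = "52e86817a577f75e545cdecd174d8b17"
assert replicas(key, 3) == {"A", "B", "C"}
assert len(replicas(key, 2)) == 2
```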
Hi Thomas,
did you look at cassandra gem from twitter (fauna/cassandra) on github?
They also use the thrift_client and already have the basic cassandra API
accessible.
I'm also using ruby with cassandra and still need to find a slick way to do the
inserts
and when to update the indexes.
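The insert-plus-index bookkeeping being discussed can be sketched independently of the Ruby gem (plain dicts stand in for column families; `insert` and the index layout are hypothetical, not part of the fauna/cassandra API):

```python
# Sketch of the manual secondary-index pattern: every insert of a data
# row also writes an index row mapping the indexed value back to the
# primary key, so lookups by that value don't scan the data.
data, index = {}, {}

def insert(key, columns, indexed_col):
    data[key] = columns
    index.setdefault(columns[indexed_col], set()).add(key)

insert("user1", {"city": "Berlin"}, "city")
insert("user2", {"city": "Berlin"}, "city")
assert index["Berlin"] == {"user1", "user2"}
```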
@chris: Thanks. I will keep you updated if I find something.
@Joe: I'm not saying this is a bad number. I'm just saying it is still not
enough for us (in order to limit the number of nodes) ;o)
If I look at the last bench, version 0.6.2 is around 13,000 w/s.
I should/would be able to reach 1000