Hi all,
I’ve always been told that multigets are a Cassandra anti-pattern for
performance reasons. I ran a quick test tonight to prove it to myself, and,
sure enough, slowness ensued. It takes about 150ms to get 100 keys for my use
case. Not terrible, but at least an order of magnitude from
Hello Graham
You can use the following code with the official Java driver:
SocketOptions socketOptions = new SocketOptions();
socketOptions.setKeepAlive(true);
Cluster cluster = Cluster.builder()
    .addContactPoints(contactPointsList)
    .withPort(cql3Port)
    .withSocketOptions(socketOptions)  // apply the keepalive setting
    .build();
Are you making the 100 calls in serial, or in parallel?
Thanks,
Daniel
On Tue, Apr 8, 2014 at 11:22 PM, Allan C alla...@gmail.com wrote:
Hi all,
I've always been told that multigets are a Cassandra anti-pattern for
performance reasons. I ran a quick test tonight to prove it to myself, and,
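Daniel's serial-vs-parallel question is the crux: 100 serial round trips add their latencies, while 100 issued concurrently overlap and cost roughly one round trip. A generic sketch of the pattern (plain Java with simulated lookups, not the Cassandra driver; the driver equivalent would be `executeAsync` per key):

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class ParallelFetchSketch {
    // Stand-in for a single-key read; imagine each call costs one round trip.
    static String fetch(int key) {
        return "value-" + key;
    }

    public static void main(String[] args) {
        List<Integer> keys = IntStream.range(0, 100).boxed()
                .collect(Collectors.toList());

        // Serial: total latency is the sum of 100 round trips.
        List<String> serial = keys.stream()
                .map(ParallelFetchSketch::fetch)
                .collect(Collectors.toList());

        // Parallel: issue all requests first, then wait for them together,
        // so total latency approaches the slowest single round trip.
        List<CompletableFuture<String>> futures = keys.stream()
                .map(k -> CompletableFuture.supplyAsync(() -> fetch(k)))
                .collect(Collectors.toList());
        List<String> parallel = futures.stream()
                .map(CompletableFuture::join)
                .collect(Collectors.toList());

        System.out.println(parallel.size() + " keys fetched, same results: "
                + serial.equals(parallel));
    }
}
```

With the real driver the same shape applies: collect the `ResultSetFuture`s from `executeAsync` and only then block on them.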
Nate,
What values for the FlushWriter line would draw concern to you? What is the
difference between Blocked and All Time Blocked?
Parag
From: Nate McCall [mailto:n...@thelastpickle.com]
Sent: Thursday, February 27, 2014 4:22 PM
To: Cassandra Users
Subject: Re: Commit logs building up
What
1) Why is the default 4GB? Has anyone changed this? What are some aspects
to consider when determining the commitlog size?
2) If the commitlog is in periodic mode, there is a property to set a time
interval to flush the incoming mutations to disk. This implies that there is a
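For reference, the periodic mode described above is driven by these cassandra.yaml settings (values shown are the 2.0-era defaults; verify against the yaml shipped with your version):

```yaml
# fsync the commit log every N ms; writes are acked before the fsync happens
commitlog_sync: periodic
commitlog_sync_period_in_ms: 10000

# total commit log size that triggers flushing of the oldest dirty memtables
commitlog_total_space_in_mb: 4096
```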
Hi all,
I'm getting the following error in a 2.0.6 instance:
ERROR [Native-Transport-Requests:16633] 2014-04-09 10:11:45,811
ErrorMessage.java (line 222) Unexpected exception during request
java.lang.AssertionError: localhost/127.0.0.1
at
Have a test cluster with three nodes each in two datacenters. The
following causes nodetool repair to go into an (apparent) infinite
loop. This is with 2.0.6.
On node 10.140.140.101:
cqlsh&gt; CREATE KEYSPACE looptest WITH replication = {
... 'class': 'NetworkTopologyStrategy',
...
Hello All,
Kindly help with the issues below; I'm really stuck here.
Thanks,
Joy
On 8 April 2014 21:55, Joyabrata Das joy.luv.challen...@gmail.com wrote:
Hello,
I've a four-node Apache Cassandra Community 1.2 cluster in a single
datacenter with a seed.
All configurations are similar in
In fact, it did eventually finish in ~20 minutes. Is this duration
expected/normal?
--Kevin
On Wed, Apr 9, 2014 at 9:32 AM, Kevin McLaughlin kmcla...@gmail.com wrote:
Have a test cluster with three nodes each in two datacenters. The
following causes nodetool repair to go into an (apparent)
Hello
The nodetool status that you mentioned, was that executed on the 4th
node itself? Also, what does netstat display? Are the correct ports
listening on that node?
Regarding OpsCenter: what version of OpsCenter are you using? Are you able to
manually start the agents on the nodes
As Jonathan also asked for some various details, perhaps it would be
helpful to be very specific about who, what, when, where, why, what you
tried, actual errors, versions, pastebins of configs, etc. Provide the
things that might be needed for people to help you out.
For instance, the
Parag:
To answer your questions:
1) Default is just that, a default. I wouldn't advise raising it
though. The bigger it is the longer it takes to restart the node.
2) I think they just use fsync. There is no queue. All files in
Cassandra use java.nio buffers, but they need to be fsynced
On 04/08/2014 11:25 AM, Joyabrata Das wrote:
Further observed that problematic node has Ubuntu 64-Bit other nodes
are Ubuntu 32-Bit, can it be the reason?
This may not be recommended, might/should(?) work, and may be a reason
[0]. My first suggestion would be to remove this variable. This
As one CQL statement:
SELECT * from Event WHERE key IN ([100 keys]);
-Allan
On April 9, 2014 at 12:52:13 AM, Daniel Chia (danc...@coursera.org) wrote:
Are you making the 100 calls in serial, or in parallel?
Thanks,
Daniel
On Tue, Apr 8, 2014 at 11:22 PM, Allan C alla...@gmail.com wrote:
Hi
Thanks, but I would think that just sets keep alive from the client end; I’m
talking about the server end… this is one of those issues where there is
something (e.g. switch, firewall, VPN in between the client and the server) and
we get left with orphaned established connections to the server
On 04/09/2014 11:39 AM, graham sanderson wrote:
Thanks, but I would think that just sets keep alive from the client end;
I’m talking about the server end… this is one of those issues where
there is something (e.g. switch, firewall, VPN in between the client and
the server) and we get left with
Michael, it is not that the connections are being dropped, it is that the
connections are not being dropped.
These server side sockets are ESTABLISHED, even though the client connection on
the other side of the network device is long gone. This may well be an issue
with the network device (it
On Wed, Apr 9, 2014 at 3:06 AM, Parag Patel ppa...@clearpoolgroup.com wrote:
some questions about the commitlog and related assumptions
https://issues.apache.org/jira/browse/CASSANDRA-6764
You might wish to get in contact with the reporter here, who has similar
questions!
=Rob
On Wed, Apr 9, 2014 at 3:06 AM, Parag Patel ppa...@clearpoolgroup.com wrote:
What values for the FlushWriter line would draw concern to you? What is
the difference between Blocked and All Time Blocked?
Non-zero All Time Blocked. Because if the FlushWriter is blocked, you
probably don't have
On Wed, Apr 9, 2014 at 7:09 AM, Kevin McLaughlin kmcla...@gmail.com wrote:
In fact, it did eventually finish in ~20 minutes. Is this duration
expected/normal?
https://issues.apache.org/jira/browse/CASSANDRA-5220
=Rob
On 04/09/2014 12:41 PM, graham sanderson wrote:
Michael, it is not that the connections are being dropped, it is that
the connections are not being dropped.
Thanks for the clarification.
These server side sockets are ESTABLISHED, even though the client
connection on the other side of the
I've been doing a lot of reading on SSTable fragmentation due to updates and
the costs associated with reconstructing the end data from multiple SSTables
that have been created over time and not yet compacted. One question is stuck
in my head: If you re-insert entire rows instead of updating
Hi everyone,
Is there a way to change the partitioner on a per-table or per-keyspace
basis?
We have some tables for which we'd like to enable ordered scans of rows, so
we'd like to use the ByteOrdered partitioner for those, but use Murmur3 for
everything else in our cluster.
Is this possible?
Hello,
Partitioner is per cluster. We have seen users create separate clusters
for items like this, but that's an edge case.
Jonathan
Jonathan Lacefield
Solutions Architect, DataStax
(404) 822 3487
http://www.linkedin.com/in/jlacefield
http://www.datastax.com/cassandrasummit14
On Wed,
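Since the partitioner is cluster-wide, the usual workaround is to get ordering from clustering columns inside a partition rather than from a ByteOrdered partitioner. A sketch (table and column names hypothetical):

```cql
-- Rows within one bucket come back ordered by ts; the buckets themselves
-- are still distributed (unordered) across the cluster by Murmur3.
CREATE TABLE events_by_bucket (
    bucket text,
    ts timeuuid,
    payload blob,
    PRIMARY KEY (bucket, ts)
) WITH CLUSTERING ORDER BY (ts ASC);

SELECT * FROM events_by_bucket WHERE bucket = '2014-04-09';
```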
Thanks Michael,
Yup keepalive is not the default. It is possible they are going away after
nf_conntrack_tcp_timeout_established; will have to do more digging (it is hard
to tell how old a connection is - there are no visible timers (thru netstat) on
an ESTABLISHED connection)…
This is
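The server-side settings under discussion are OS-level, not Cassandra-level. A sketch of inspecting the Linux TCP keepalive knobs (standard /proc paths; the conntrack value only exists where netfilter connection tracking is loaded):

```shell
cat /proc/sys/net/ipv4/tcp_keepalive_time    # idle seconds before the first probe (typically 7200)
cat /proc/sys/net/ipv4/tcp_keepalive_intvl   # seconds between probes
cat /proc/sys/net/ipv4/tcp_keepalive_probes  # failed probes before the socket is dropped
# nf_conntrack_tcp_timeout_established lives under /proc/sys/net/netfilter/ when present
```

Note these only matter once the application has enabled SO_KEEPALIVE on its listening sockets; tuning them does nothing for sockets opened without it.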
I don't believe so. Cassandra still needs to hit the bloom filters for
each SST table and then reconcile all versions and all tombstones for any
row. That's why overwrites have similar performance impact as tombstones,
overwrites just happen to be less common.
On Wed, Apr 9, 2014 at 2:42 PM,
Can you trace the query and paste the results?
On Wed, Apr 9, 2014 at 11:17 AM, Allan C alla...@gmail.com wrote:
As one CQL statement:
SELECT * from Event WHERE key IN ([100 keys]);
-Allan
On April 9, 2014 at 12:52:13 AM, Daniel Chia (danc...@coursera.org) wrote:
Are you making the
On Tue, Apr 8, 2014 at 4:39 AM, Alain RODRIGUEZ arodr...@gmail.com wrote:
Yet, can't we rebuild a new DC with the current C* version, upgrade it to
the new major once it is fully part of the C* cluster, and then switch all
the clients to the new DC once we are sure everything is ok and shut
We have a 36-node Cassandra cluster across three datacenters.
Each datacenter has 12 nodes.
We already have data flowing in Cassandra now and we cannot wipe out all
our data now.
Considering this - what is the right way to rename the cluster name without
any or minimal impact?
What version are you running? As of 1.2.x you can do the following:
1. Start the cqlsh connected locally to the node.
2. Run:
update system.local set cluster_name='$CLUSTER_NAME' where key='local';
3. Run nodetool flush on the node.
4. Update the cassandra.yaml file on the node, changing the
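Step 4's yaml edit can be scripted. Steps 1-3 need a live node, so only the file edit is shown runnably here, against a temp copy; the real path (e.g. /etc/cassandra/cassandra.yaml) varies by install:

```shell
NEW_NAME="MyNewCluster"            # assumed target name
YAML=$(mktemp)                     # stand-in for cassandra.yaml
printf "cluster_name: 'Test Cluster'\nnum_tokens: 256\n" > "$YAML"

# Rewrite the cluster_name line in place (GNU sed)
sed -i "s/^cluster_name:.*/cluster_name: '${NEW_NAME}'/" "$YAML"
grep '^cluster_name' "$YAML"
```

After editing the real yaml, restart Cassandra on that node, then repeat the whole procedure node by node.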