@Christopher, not sure if you noticed it, but CASSANDRA-4573 is now fixed
in C* 2.0.0 RC2: http://goo.gl/AGVTOF
No idea if this could fix our issue
Alain
2013/8/14 Jake Luciani jak...@gmail.com
This is technically a Thrift message, not Cassandra; it happens when a
client hangs up without
Hi,
I am new to Cassandra. I have followed the guide here [1] and imported
Cassandra as an IntelliJ IDEA project. But when I tried to build using 'Ant
Build', it gave compilation errors.
When I tried to open the Java classes in the workspace, almost all of the
errors are due to missing
Hi,
This was a dependency issue, and now it is fixed.
Thanks,
Nipuni
On Wed, Aug 21, 2013 at 2:15 PM, Nipuni Piyabasi Perera
nipuni880...@gmail.com wrote:
Hi,
I am new to Cassandra. I have followed the guide here [1] and imported
Cassandra as an IntelliJ IDEA project. But when I tried to
A thread dump on one of the machines that has suspiciously high CPU might
help figure out what it is that is taking all that CPU.
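One common way to act on this advice (a sketch; it assumes a Linux host and the JDK's `jstack` tool): find the hottest thread's decimal TID with `top -H -p <pid>`, convert it to hex, and grep for that `nid` in the thread dump. The conversion step looks like this:

```python
# Convert the decimal thread IDs reported by `top -H` into the
# hexadecimal `nid=0x...` form that appears in a jstack thread dump,
# so a hot OS thread can be matched to its Java stack trace.
def tid_to_nid(tid: int) -> str:
    return "nid=0x{:x}".format(tid)

# Example: suppose `top -H` shows thread 21519 at high CPU
print(tid_to_nid(21519))  # nid=0x540f
```

Then `jstack <pid> | grep -A 20 'nid=0x540f'` shows what that thread is doing.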
On Wed, Aug 21, 2013 at 8:57 AM, Keith Wright kwri...@nanigans.com wrote:
Some last minute info on this to hopefully enlighten. We are doing ~200
reads and
Hi,
Suppose we have two networks:
10.1.0.0/16 and 10.2.0.0/16.
It is not possible to route packets between the two networks, but all
nodes have interfaces on both networks, so any node can communicate with
any address on either network.
We are currently running all our nodes on one network,
Hi,
I am using Pig 0.11.1 and Cassandra 1.2.8.
I tried this:
http://frommyworkshop.blogspot.com.es/2013/07/hadoop-map-reduce-with-cassandra.html
and...
rows = LOAD
'cql://keyspace1/test?page_size=1&split_size=4&where_clause=age%3D30' USING
CqlStorage();
dump rows;
works fine if I skip
In order to narrow down the problem, I would start without the request
parameters and see if that works. Then I would add the request parameters one
at a time to see what breaks things. Often pig is not very helpful with its
error messages, so I've had to use this method a lot.
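One thing that often breaks these URLs is the encoding of `where_clause`: it must be URL-encoded (the `age%3D30` above is `age=30` encoded). A sketch of building the parameter string safely, assuming the query keys shown in the CqlStorage snippets above (everything else here is illustrative, not a CqlStorage API):

```python
from urllib.parse import quote

# Build the request-parameter portion of a CqlStorage location URL,
# URL-encoding the where_clause so '=' becomes %3D.
params = {
    "page_size": "1",
    "split_size": "4",
    "where_clause": quote("age=30"),  # -> age%3D30
}
location = "cql://keyspace1/test?" + "&".join(
    "{}={}".format(k, v) for k, v in params.items()
)
print(location)
# cql://keyspace1/test?page_size=1&split_size=4&where_clause=age%3D30
```

Dropping entries from `params` one at a time matches the narrowing-down strategy described above.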
On 21 Aug
Yup, there are other types of indexing like that in PlayOrm which do it
differently, so all nodes are not hit. It works better, for instance, if you are
partitioning your data and you query into just a single partition, so it doesn't
put load on all the nodes. (Of course, you have to have a
Hello,
When I tested issue CASSANDRA-5234, it worked with the following query:
rows = LOAD
'cql://keyspace1/test?page_size=1&columns=title,age&split_size=4&where_clause=age%3D30'
USING CqlStorage();
There was no problem with the columns param. Maybe something went wrong with
version 1.2.8 -
Hi,
After upgrading from 1.0 to 1.2, I wanted to make use of the automatic
tombstone compaction feature, so using CQL3 I issued:
ALTER TABLE versions WITH compaction = {'class' :
'SizeTieredCompactionStrategy', 'min_threshold' : 4, 'max_threshold' : 32,
'tombstone_compaction_interval' : 1,
Hi,
Do you mean LeveledCompactionStrategy?
Also, you will need to run nodetool upgradesstables [keyspace] [cf_name] after
changing the compaction strategy.
Thanks,
Haithem Jarraya
On 21 Aug 2013, at 15:15,
tamas.fold...@thomsonreuters.com wrote:
Hi,
Hi,
I ran upgradesstables as part of the Cassandra upgrade, before issuing the CQL
alter command.
According to the docs, SizeTieredCompactionStrategy is fine (that is what I
used, and plan to continue using), and automatic tombstone compaction is
available for it:
Hi,
We are interested in the secondary index implementation of Cassandra. What are
the classes that we need to look at in order to get an understanding of the
secondary index implementation?
We could download and set up the basic configuration to run Cassandra. We
also could set up Cassandra as a project.
Thanks Dean. Any reason why it is sequential? Is it to avoid loading all the
nodes, by seeing if one node can return the desired results?
-Original Message-
From: Hiller, Dean [mailto:dean.hil...@nrel.gov]
Sent: 21 August 2013 07:36
To: user@cassandra.apache.org
Subject: Re: Secondary
I guess I didn't understand your question then; I thought you changed the
compaction strategy. If that is what you did, you have to run upgradesstables
again.
On 21 Aug 2013, at 15:33,
tamas.fold...@thomsonreuters.com wrote:
Hi,
I ran upgradesstables as
Tamas,
If there are rows with the same key in other SSTables, those rows won't
be deleted.
Tombstone compaction makes a guess about whether it can actually drop
tombstones safely by scanning for overlap with other SSTables.
Do you have many rows in your large SSTable?
If you don't, then the chance to run tombstone compaction may
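Yuki's safety check can be sketched as a key-range overlap test: a tombstone in one SSTable is only safely droppable if no other SSTable could still hold the same key. This is a toy model (real Cassandra also consults bloom filters and timestamps; all names here are illustrative):

```python
# Toy model of the tombstone-compaction safety check: approximate
# "could another SSTable hold this key?" by a min/max key-range test.
def may_contain(sstable_range, key):
    lo, hi = sstable_range
    return lo <= key <= hi

def droppable(key, other_sstable_ranges):
    # A tombstone is droppable only if no other SSTable's range covers it.
    return not any(may_contain(r, key) for r in other_sstable_ranges)

others = [("a", "f"), ("m", "t")]
print(droppable("g", others))  # True: no other SSTable can hold 'g'
print(droppable("c", others))  # False: overlaps the ("a", "f") SSTable
```

This is why one large SSTable whose keys overlap everything else rarely gets its tombstones purged.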
Sorry, I forget why; someone told me at the Cassandra conference. It
might be to not overload the entire cluster at once: if the reads went to all
nodes in parallel and you have 1000 nodes, running just 5 queries could take
out your cluster. (This is why I use PlayOrm's querying, and in tons of use
cases, you don't want to query
Oh, I do know it is not "see if one node can return the desired results",
as each node will have different results for your client, and you get
results from the first node, then results from the second node, etc. (I
remember having this discussion but for the life of me can't remember why
it is
Hi, I am sorry about digging this up, but I was searching for this kind of
information and read this thread.
How do you make sure that the first rowkey you select has the smallest token? I
mean, when you perform select rowkey from my_table limit N; can you get
any data with any token or is data token
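Results from such a query come back in token order, not key order, so you can predict which rowkey comes first by computing each key's token and sorting. A sketch using RandomPartitioner-style MD5 tokens (an assumption: Cassandra 1.2 defaults to Murmur3Partitioner, which uses a different hash; this only illustrates the ordering idea):

```python
import hashlib

# RandomPartitioner-style token: the MD5 digest of the row key taken as
# a big integer. (Simplified; Murmur3Partitioner hashes differently.)
def token(key: bytes) -> int:
    return int(hashlib.md5(key).hexdigest(), 16)

keys = [b"alice", b"bob", b"carol"]
# `select rowkey ... limit N` walks the ring in token order, which
# generally differs from the lexical order of the keys themselves.
print(sorted(keys, key=token))
```

So "limit N" gives you the N keys with the smallest tokens on the scanned range, not the N smallest keys.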
Hello,
I have 3 clusters in AWS that I need to turn into a single cluster.
My current infrastructure:
Cluster 1: 3 xlarge, replication factor 3, with 80% disk usage, 4 ephemeral
raid0 xfs 1.7TB
Cluster 2: 3 xlarge, replication factor 2, with 40% disk usage, 4 ephemeral
raid0 xfs 1.7TB
Cluster 3: 2
Hello,
I've been tasked with tuning a Cassandra-based app for eventual production
deployment and I'm running into an issue I can't seem to solve when I run
my load tests. I'm still relatively new to Cassandra so I'm hoping there is
something obvious I'm missing here.
Basically, everything runs
On Tue, Aug 20, 2013 at 5:57 PM, Kanwar Sangha kan...@mavenir.com wrote:
Hi – I was reading some blogs on implementation of secondary indexes in
Cassandra and they say that "the read requests are sent sequentially to all
the nodes"?
So if I have a query to fetch ALL records
On Wed, Aug 21, 2013 at 3:58 AM, Tim Wintle timwin...@gmail.com wrote:
What would be the best way to achieve this? (We can tolerate a fairly short
period of downtime).
I think this would work, but may require a full cluster shutdown.
1) stop nodes on old network
2) set auto_bootstrap to false
On Wed, Aug 21, 2013 at 8:23 AM, Yuki Morishita mor.y...@gmail.com wrote:
If there are rows with the same key in other SSTables, those rows won't
be deleted.
Tombstone compaction makes a guess about whether it can actually drop
tombstones safely by scanning for overlap with other SSTables.
Background @ :
On Tue, Aug 20, 2013 at 11:35 PM, Keith Wright kwri...@nanigans.com wrote:
Still looking for help! We have stopped almost ALL traffic to the cluster
and still some nodes are showing almost 1000% CPU for cassandra with no
iostat activity. We were running cleanup on one of the nodes that was
On Wed, Aug 21, 2013 at 10:47 AM, Robert Coli rc...@eventbrite.com wrote:
On Tue, Aug 20, 2013 at 11:35 PM, Keith Wright kwri...@nanigans.comwrote:
Still looking for help! We have stopped almost ALL traffic to the
cluster and still some nodes are showing almost 1000% CPU for cassandra
with
Well, these tables are somewhat similar to a 'cache' - we insert rows, then
leave them for a week using TTL (usually untouched, read only), and then we
need to compact them away. If I understand correctly, they should not be
affected by the below issue...
The question is rather if the setup is
In the context of Yuki's response, if you are using the same key for the
cache, then your rows will get increasingly fragmented.
On Wed, Aug 21, 2013 at 1:09 PM, tamas.fold...@thomsonreuters.com wrote:
Well, these tables are somewhat similar to a 'cache' - we insert rows,
then leave them for
Building the giant batch string wasn't as bad as I thought, and at first
I had great(!) results (using unlogged batches): 2500 rows/sec
(batches of 100 in 48 threads) ran very smoothly, and the load on the
cassandra server nodes averaged about 1.0 or less continuously.
But then I upped it to
The bcrypt rounds are indeed expensive and ClientState should hold the
result for the active connection. So it sounds like you are creating a lot
of new connections and thus hitting that bcrypt penalty.
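The cost Nate describes can be illustrated with a per-connection credential cache. This is a sketch, not Cassandra's actual ClientState API: `pbkdf2_hmac` stands in for bcrypt (it is in the standard library and similarly expensive by design), and all class and method names are assumptions:

```python
import hashlib
import os

def slow_hash(password: str, salt: bytes) -> bytes:
    # Stand-in for bcrypt: a deliberately expensive key derivation.
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

class Connection:
    """Caches the expensive auth result once per connection, analogous to
    ClientState holding the result for the active connection. (Simplified:
    a real cache would key on the full credentials, not just the user.)"""
    def __init__(self):
        self._authed_user = None

    def authenticate(self, user, password, salt, stored):
        if self._authed_user == user:      # cache hit: skip the slow hash
            return True
        if slow_hash(password, salt) == stored:
            self._authed_user = user
            return True
        return False

salt = os.urandom(16)
stored = slow_hash("s3cret", salt)
conn = Connection()
conn.authenticate("app", "s3cret", salt, stored)  # pays the hash cost once
conn.authenticate("app", "s3cret", salt, stored)  # cached: cheap
```

Opening a fresh connection per request discards that cache, which is why each new connection pays the full bcrypt penalty.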
On Wed, Aug 21, 2013 at 12:28 PM, Joshua M. Thompson
joshua.thomp...@gmail.com wrote:
The only thing I can think to suggest at this point is upping that batch
size - say to 500 and see what happens.
Do you have any monitoring on this cluster? If not, what do you see as the
output of 'nodetool tpstats' while you run this test?
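Upping the batch size amounts to regrouping the rows client-side before binding them into unlogged batches. A minimal chunking helper (a sketch; it assumes the rows are already collected in a list):

```python
# Split a row list into batches of `size` for unlogged batch statements;
# the last batch may be smaller than `size`.
def batches(rows, size):
    for i in range(0, len(rows), size):
        yield rows[i:i + size]

rows = list(range(1200))
sizes = [len(b) for b in batches(rows, 500)]
print(sizes)  # [500, 500, 200]
```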
On Wed, Aug 21, 2013 at 1:40 PM, Keith Freeman
On Wed, Aug 21, 2013 at 2:46 PM, Nate McCall n...@thelastpickle.com wrote:
The bcrypt rounds are indeed expensive and ClientState should hold the
result for the active connection. So it sounds like you are creating a lot
of new connections and thus hitting that bcrypt penalty.
Thanks, that
What's the disk setup like on these systems? You have some pending tasks in
MemtablePostFlusher and FlushWriter which may mean there is contention on
flushing discarded segments from the commit log.
On Wed, Aug 21, 2013 at 5:14 PM, Keith Freeman 8fo...@gmail.com wrote:
Ok, I tried batching 500
The nature of issue CASSANDRA-4573, compared to "Read an invalid frame size of
0", looks different. Nevertheless, if someone can test whether the issue fix
also covers the invalid frame size, that would be awesome!
Jason
On Wed, Aug 21, 2013 at 4:08 PM, Alain RODRIGUEZ arodr...@gmail.com wrote:
@Christopher, not