I'm running on a single node on my laptop.
It looks like the point at which rows disappear from the index depends on
JVM memory settings. With more memory, more data has to be fed in
before things start disappearing.
Please try to run Cassandra with -Xms1927M -Xmx1927M -Xmn400M
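For reference, a sketch of how those flags map onto conf/cassandra-env.sh in the stock 1.x script (the script derives -Xms and -Xmx from MAX_HEAP_SIZE and -Xmn from HEAP_NEWSIZE; adjust if your packaging differs):

```shell
# conf/cassandra-env.sh -- fixed heap sizes matching the flags above
MAX_HEAP_SIZE="1927M"   # becomes both -Xms1927M and -Xmx1927M
HEAP_NEWSIZE="400M"     # becomes -Xmn400M
```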
To be sure, try to
I think this might be https://issues.apache.org/jira/browse/CASSANDRA-4670
Unfortunately, no one apart from me has been able to reproduce it yet.
Check if data is available before/after compaction
If you have leveled compaction it is hard to test because you cannot trigger
compaction manually.
Hi,
I know this has been a topic here before, but I need some input on
how to move data from one datacenter to another (and Google just gives
me some old mails), and at the same time move production writes
the same way. To add the target cluster into the source cluster and
just replicate
In the TCP MIB for SNMP (Simple Network Management Protocol) this
information is available:
http://www.simpleweb.org/ietf/mibs/mibSynHiLite.php?category=IETF&module=TCP-MIB
On Wed, Dec 19, 2012 at 12:22 AM, Michael Kjellman
mkjell...@barracuda.com wrote:
netstat + cron is your friend for this
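For example, a crontab entry that snapshots TCP connection state once a minute (paths and the interval are assumptions; note that % must be escaped as \% inside a crontab):

```shell
# Log TCP connection state every minute; grep/awk the logs later
* * * * * /bin/netstat -tan >> /var/log/netstat-$(date +\%Y\%m\%d).log 2>&1
```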
Hello All,
We have a 3-node cluster and we created a keyspace (say Test_1) with
Replication Factor set to 3. I know it's not great but we wanted to test
different behaviors. So, we created a Column Family (say cf_1) and we
tried writing something with Consistency Level ANY, ONE, TWO, THREE,
QUORUM
ANY: worked (expected...)
ONE: worked (expected...)
TWO: did not work (WHAT???)
This is expected to work sometimes and not others: it depends on which 2 of
the 3 nodes have the data. Since you have one node down, that might be
one of the nodes where that data goes ;).
THREE: did not work (expected...)
PS: you may be getting a bit confused along the way. Just think: if you have
a 10 node cluster and one node is down and you do CL=2… if the node that
is down is where your data goes, yes, you will fail. If you do CL=QUORUM
and RF=3 you can tolerate one node being down… If you use Astyanax, I think
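The arithmetic above can be sketched as a small check (a hypothetical helper, not a driver API): a write at consistency level N succeeds only if at least N of the RF replicas *for that row* are up, regardless of how many nodes the cluster has in total.

```python
def write_succeeds(rf, live_replicas, cl):
    """Can a write at consistency level `cl` (an int: ONE=1, TWO=2, ...,
    or the string "QUORUM") succeed when `live_replicas` of the row's
    `rf` replicas are currently up?"""
    required = rf // 2 + 1 if cl == "QUORUM" else cl
    return live_replicas >= required

# RF=3, one replica down: TWO and QUORUM still succeed, THREE cannot.
print(write_succeeds(3, 2, 2))         # True
print(write_succeeds(3, 2, "QUORUM"))  # True
print(write_succeeds(3, 2, 3))         # False
```

This is why CL=TWO can fail on a 10-node cluster with one node down: what matters is not the 9 live nodes, but whether 2 of the RF replicas holding that particular row are up.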
Solr? Are you on DSE, or am I missing something (huge) about Cassandra? (
wouldn't be the first time :-)
Or do you mean the JSON manifest? It's there and it looks OK; in fact it's been
corrupted twice due to storage problems and I hit
https://issues.apache.org/jira/browse/CASSANDRA-5041
TBH I
Hi
RF 2 means that 2 nodes are responsible for any given row (no matter how many
nodes are in the cluster)
For your cluster with three nodes let's just assume the following
responsibilities
Node          A      B      C
Primary keys  0-5    6-10   11-15
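The responsibility table above can be sketched in code. This is a simplified model (the names and ranges are made up for illustration, not Cassandra internals): each node owns a primary key range, and with RF=2 a row is also replicated to the next node clockwise on the ring.

```python
# Simplified ring: node -> primary key range, as in the table above
RING = [("A", range(0, 6)), ("B", range(6, 11)), ("C", range(11, 16))]

def replicas(key, rf=2):
    """Return the rf nodes responsible for `key`: the primary owner
    plus the next rf-1 nodes clockwise on the ring."""
    for i, (node, rng) in enumerate(RING):
        if key in rng:
            return [RING[(i + j) % len(RING)][0] for j in range(rf)]
    raise ValueError("key outside ring")

print(replicas(3))   # ['A', 'B']
print(replicas(12))  # ['C', 'A']  -- replication wraps around the ring
```

No matter how many nodes you add, each key still maps to exactly rf nodes; the ring just gets divided into more primary ranges.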
The problem was with the compatibility. I was using a lower version of
Cassandra jar files. Now, BulkOutputFormat works fine.
-Original Message-
From: anand_balara...@homedepot.com [mailto:anand_balara...@homedepot.com]
Sent: Friday, December 14, 2012 12:37 AM
To:
Hi
I am working on options to load my sstables into Cassandra (1.1.6,
localhost).
I have tried 2 options so far:
* Running sstableloader from a Java module -
Created a Java class which invokes org.apache.cassandra.tools.BulkLoader.main
with the following args:
-d '127.0.0.1'
The following features will not be available in the cli:
* in describe keyspace you will not get current index rebuilds
* in describe keyspace you will not get current built indexes
Do you want to create a ticket to add JMX user name and password support to
cassandra-cli ?
Is there a sustained difference or did it settle back ?
Could this have been compaction or repair or upgrade tables working ?
Do the read / write counts available in nodetool cfstats show anything
different ?
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
Couple of approaches to exporting…
1) If you know the list of keys you want to export, you could use / modify the
sstable2json tool and pass in the list of keys. If expiring columns are used
remove the expiration later or modify sstable2json to not include it.
2) If the list of keys is too
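Approach 1 could also be done as post-processing rather than by modifying the tool: feed sstable2json output through a small filter. A sketch, assuming a simplified output shape of {row_key: [[name, value, timestamp, ...], ...]}; real sstable2json output has more variants (expiring columns carry extra TTL fields after the timestamp), so treat this as illustrative only.

```python
import json

def export_rows(sstable_json, wanted_keys):
    """Keep only the wanted row keys and truncate each column to
    [name, value, timestamp], dropping any trailing expiration fields."""
    data = json.loads(sstable_json)
    return {k: [col[:3] for col in cols]
            for k, cols in data.items() if k in wanted_keys}

# Sample input: key1 has an expiring column with extra TTL fields
sample = json.dumps({
    "key1": [["col", "val", 1355875200000, "e", 3600, 1355878800]],
    "key2": [["col", "val", 1355875200000]],
})
print(export_rows(sample, {"key1"}))
# {'key1': [['col', 'val', 1355875200000]]}
```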
What? I thought Cassandra was using NIO, so thread-per-connection is not
true?
Dean
From: Rob Coli rc...@palominodb.com
Reply-To: user@cassandra.apache.org
user@cassandra.apache.org
Date: Wednesday,
I will add that we have had a good experience with leveled compaction
cleaning out tombstoned data faster than size-tiered, therefore
keeping our total disk usage much more reasonable than size-tiered.
It comes at the cost of I/O (maybe 2x the I/O?), but that is not
bothering us.
what is
To get it correct, meaning consistent, it seems you will need to do
a repair no matter what, since the source cluster is taking writes
during this time and writing to the commit log. So to avoid filename
issues, just do the first copy and then repair. I am not sure if they
can have any filename.
to
I am on DSE, and I am referring to the JSON manifest ... but my memory
isn't very good so I could have the name wrong. We are hitting this bug:
https://issues.apache.org/jira/browse/CASSANDRA-3306
On Wed, Dec 19, 2012 at 8:17 AM, Andras Szerdahelyi
andras.szerdahe...@ignitionone.com wrote:
I believe we have hit this as well. If you use nodetool to
rebuild_index, does it work?
On Wed, Dec 19, 2012 at 8:10 PM, aaron morton aa...@thelastpickle.com wrote:
Well that was fun https://issues.apache.org/jira/browse/CASSANDRA-5079
Just testing my idea of a fix now.
Cheers
Great stuff, Aaron. Thanks for your time
On 20 December 2012 05:10, aaron morton aa...@thelastpickle.com wrote:
Well that was fun https://issues.apache.org/jira/browse/CASSANDRA-5079
Just testing my idea of a fix now.
Cheers
-
Aaron Morton
Freelance Cassandra Developer