In the example you gave, the primary key user_name is the row key. Since
the default partitioner is random, you are getting rows back in random order.
Since each row has no clustering column, there is no further grouping of data.
In simple terms, each row has one record and rows are returned ordered by
the token of the row key. But since reads never really go to an outdated shard,
the tombstones do not slow down the reads.
Hope that helps.
Jan
Thanks,
Rado
--
Narendra Sharma
Software Engineer
http://www.aeris.com
http://narendrasharma.blogspot.com/
--
Sent from Jeff Dean's printf() mobile console
I think one table, say record, should be good. The primary key is the record id.
This will ensure good distribution.
Just update the active attribute to true or false.
For range queries on active vs. archived records, maintain 2 index rows or try a
secondary index.
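A minimal sketch of that layout in plain Java (all names hypothetical; in Cassandra the two indexes would be rows in a separate index CF keyed by the flag value). The point is that the record and its index entries are updated together:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the suggested layout: one record store keyed by record id,
// plus two manually maintained index sets for the active/archived flag.
public class RecordIndex {
    final Map<String, Boolean> records = new HashMap<>(); // record id -> active?
    final Set<String> active = new HashSet<>();
    final Set<String> archived = new HashSet<>();

    // Flip the flag and keep both indexes consistent in the same update.
    public void setActive(String id, boolean isActive) {
        records.put(id, isActive);
        if (isActive) { active.add(id); archived.remove(id); }
        else          { archived.add(id); active.remove(id); }
    }
}
```

Querying "all active records" is then a read of one index set instead of a scan over all records.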
On Apr 23, 2015 1:32 PM, Ali Akhtar
gender='male'
)
thanks
--
Sorry this was sent from mobile. Will do less grammar and spell check than
usual.
Any pointers? I am planning to do rolling restart of the cluster nodes to
see if it will help.
On Jan 15, 2014 2:59 PM, Narendra Sharma narendra.sha...@gmail.com
wrote:
RF=3.
On Jan 15, 2014 1:18 PM, Andrey Ilinykh ailin...@gmail.com wrote:
What is the RF? What does nodetool ring show?
from, I stopped the node.
On Thu, Jan 16, 2014 at 12:49 PM, Jonathan Haddad j...@jonhaddad.com wrote:
Please include the output of nodetool ring, otherwise no one can help
you.
On Thu, Jan 16, 2014 at 12:45 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Any pointers? I am planning
is streaming from N1, N2, N6 and N7. I expect it to
stream from (worst case) N5, N6, N7, N8. What could potentially cause the
node to get confused about the ring?
RF=3.
On Jan 15, 2014 1:18 PM, Andrey Ilinykh ailin...@gmail.com wrote:
what is the RF? What does nodetool ring show?
On Wed, Jan 15, 2014 at 1:03 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Sorry for the odd subject but something is wrong with our Cassandra ring.
We have a 9
8 node cluster running in AWS. Any pointers where I should start looking?
No kill -9 in history.
, Narendra Sharma narendra.sha...@gmail.com
wrote:
8 node cluster running in aws. Any pointers where I should start looking?
No kill -9 in history.
You should start looking at instructions as to how to upgrade to at least
the top of the 1.1 line... :D
=Rob
Memory Analyzer (Eclipse
MAT http://www.eclipse.org/mat) to figure out root causes and potential
leaks
Hope this helps
-- Nitin
On Thu, Jan 2, 2014 at 9:00 PM, Narendra Sharma narendra.sha...@gmail.com
wrote:
The root cause turned out to be high heap. The Linux OOM Killer (
http://linux
Principal Consultant
Apache Cassandra Consulting
http://www.thelastpickle.com
On 17/12/2013, at 12:28 pm, Narendra Sharma narendra.sha...@gmail.com
wrote:
No snapshots.
I restarted the node and now the Load in ring is in sync with the disk
usage. Not sure what caused it to go out of sync
to sstables, which will cause them not to be deleted.
-Arindam
*From:* Narendra Sharma [mailto:narendra.sha...@gmail.com]
*Sent:* Sunday, December 15, 2013 1:15 PM
*To:* user@cassandra.apache.org
*Subject:* Cassandra 1.1.6 - Disk usage and Load displayed in ring
doesn't match
We have 8
for the CF reported:
SSTable count: 16
Space used (live): 670524321067
Space used (total): 670524321067
3. 'ls -1 *Data* | wc -l' in the data folder for CF returned
16
4. 'du -ksh .' in the data folder for CF returned
625G
-Naren
I was successfully able to bootstrap the node. The issue was RF 2. Thanks
again Robert.
On Wed, Oct 30, 2013 at 10:29 AM, Narendra Sharma narendra.sha...@gmail.com
wrote:
Thanks Robert.
I didn't realize that some of the keyspaces (not all and esp. the biggest
one I was focusing on) had RF
for to further analyze the
issue? I haven't restarted the Cassandra process. I am afraid the node will
start bootstrap again if I restart the node.
Thanks,
Naren
:
On Tue, Oct 29, 2013 at 11:45 AM, Narendra Sharma
narendra.sha...@gmail.com wrote:
We had a cluster of 4 nodes in AWS. The average load on each node was
approx 750GB. We added 4 new nodes. It is now more than 30 hours and the
node is still in JOINING mode.
Specifically I am analyzing the one
. Perera
AdroitLogic, http://adroitlogic.org
http://esbmagic.blogspot.com
+Storage+of+Small+Objects
Does this apply to Cassandra column names?
-- Drew
--
w3m
.
For most applications, if the lock manager is down, you don't
acquire the lock, so you don't enter the critical section. Rather
than allowing inconsistency, you become unavailable (at least to
writes that require a lock).
-Bryce
...@venarc.com wrote:
So what are the common RIGHT solutions/tools for this?
On Jan 6, 2012, at 2:46 PM, Narendra Sharma wrote:
It's very surprising that no one seems to have solved such a common use
case.
I would say people have solved it using RIGHT tools for the task.
On Fri, Jan 6, 2012 at 2
338:02.72 java
Thank you in advance,
Daning
Ravi
inserted.
But, when tried from command line client, it worked correctly.
Any pointer on this would be of great use
Thanks in advance,
Regards,
Anuya
--
Narendra Sharma
Solution Architect
http://www.persistentsys.com
http://narendrasharma.blogspot.com/
accomplish this.
Thanks
Anurag
Regards
Sam
Sam Ganesan Ph.D.
Distinguished member, Technical Staff
Motorola Mobility - On Demand Video
900 Chelmsford Street,
Lowell, MA 01851
tel:+1 978 614-3165 (changed)
mob:+1 978 328-7132
mailto: sam.gane...@motorola.com*
properly without giving any warnings/errors but does not
create the keyspace offline
which is defined above.
Please suggest.
Thanks
Anurag
.
in
production system to maintain the ring.
Thanks
--
maki
corrective action. So
try QUORUM under normal circumstances; if unavailable, try ONE. My questions:
Do you guys see any flaws with this approach?
What happens when DC1 comes back up and we start reading/writing at QUORUM
again? Will we read stale data in this case?
Thanks
-Raj
have attached the code file.
Cassandra is running on the port I am trying to connect to.
Please suggest.
Thanks
Anurag
// ColumnFamilyStoreMBean (org.apache.cassandra.db) exposes forceMajorCompaction()
ColumnFamilyStoreMBean mxbeanProxy =
        JMX.newMBeanProxy(mbsc, mxbeanName, ColumnFamilyStoreMBean.class);
mxbeanProxy.forceMajorCompaction();
jmxc.close();
understand the
output. Can someone please shed some light on it?
Thanks
Anurag
with commit log replay filling the
heap in the form of memtables that are sized too big for your heap.
There's a wiki page somewhere that describes the overall rule of thumb
for heap sizing, but I can't find it right now.
--
/ Peter Schuller
- repair
- cleanup
I understand that compaction consolidates the SSTables and physically
performs deletes by taking the tombstones into account. But what do cleanup
and repair do then?
that is
causing OOM.
-Naren
On Wed, Mar 30, 2011 at 4:45 PM, Anurag Gujral anurag.guj...@gmail.comwrote:
I am using 16G of heap space; how much more should I increase it?
Please suggest
Thanks
Anurag
On Wed, Mar 30, 2011 at 11:43 AM, Narendra Sharma
narendra.sha...@gmail.com wrote:
http
[CompactionExecutor:1] 2011-03-30 18:46:33,272 CompactionManager.java
(line 406) insufficient space to compact all requested files SSTableReader(
I am using 16G of Java heap space. Please let me know if I should consider
this a sign of something I need to worry about.
Thanks
Anurag
Hope you find the following useful. It uses raw Thrift. In case you have
difficulty building and/or running the code, please reply back.
private Cassandra.Client createClient(String host, int port)
        throws TTransportException {
    TTransport framedTransport = new TFramedTransport(new TSocket(host, port));
    TProtocol protocol = new TBinaryProtocol(framedTransport);
    framedTransport.open();
    return new Cassandra.Client(protocol);
}
Cassandra 0.7.4
Column names in my CF are of type byte[] but I want to order columns by
timestamp. What is the best way to achieve this? Does it make sense for
Cassandra to support ordering of columns by timestamp as option for a column
family irrespective of the column name type?
Thanks,
Naren
I think it is due to fragmentation in the old gen, due to which the survivor area
cannot be moved to the old gen. A 300MB memtable data size looks high for a 3G
heap. I learned that the in-memory overhead of a memtable can be as high as 10x
of the memtable data size in memory. So either increase the heap or reduce the
I understand that. The overhead could be as high as 10x of the memtable data
size. So overall, the overhead for the 16 CFs collectively in your case could be
300*10 = 3G.
Thanks,
Naren
On Wed, Mar 23, 2011 at 11:18 AM, ruslan usifov ruslan.usi...@gmail.comwrote:
2011/3/23 Narendra Sharma narendra.sha
usifov ruslan.usi...@gmail.comwrote:
2011/3/23 Narendra Sharma narendra.sha...@gmail.com
I understand that. The overhead could be as high as 10x of memtable data
size. So overall the overhead for 16CF collectively in your case could be
300*10 = 3G.
And how about G1 GC, it must prevent memory
The logic to find the node is not complicated. You compute the MD5 hash of
the key and create a sorted list of the tokens assigned to the nodes in the ring.
Find the first token greater than the hash; this identifies the first node. Next in
the list is the replica, which depends on the RF. Now this is simple
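The steps above can be sketched in Java (a toy SimpleStrategy-style lookup, not Cassandra's actual code; this sketch treats "greater than" as "at or after" the hash, which is what the RandomPartitioner effectively does):

```java
import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Toy ring lookup: hash the key, walk the sorted token list to the first
// token at or after the hash, then take the next RF-1 nodes in ring order.
public class RingLookup {
    // MD5 token of a key, roughly as the RandomPartitioner computes it.
    public static BigInteger token(String key) throws Exception {
        byte[] d = MessageDigest.getInstance("MD5")
                .digest(key.getBytes(StandardCharsets.UTF_8));
        return new BigInteger(d).abs();
    }

    // Owner plus replicas for a key token, wrapping around the ring.
    public static List<String> replicas(TreeMap<BigInteger, String> ring,
                                        BigInteger t, int rf) {
        List<BigInteger> tokens = new ArrayList<>(ring.keySet());
        int i = 0;
        while (i < tokens.size() && tokens.get(i).compareTo(t) < 0) i++;
        List<String> owners = new ArrayList<>();
        for (int k = 0; k < rf; k++)
            owners.add(ring.get(tokens.get((i + k) % tokens.size())));
        return owners;
    }
}
```

For a ring with tokens 10/20/30 owned by N1/N2/N3, a key token of 15 lands on N2, with N3 as the second replica; a token past 30 wraps around to N1.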
Is this a new install or an upgrade?
Thanks,
Naren
On Wed, Mar 16, 2011 at 11:15 PM, Anurag Gujral anurag.guj...@gmail.comwrote:
I am getting exception when starting cassandra 0.7.3
ERROR 01:10:48,321 Exception encountered during startup.
java.lang.NegativeArraySizeException
at
What heap size are you running with, and which version of Cassandra?
Thanks,
Naren
On Thu, Mar 17, 2011 at 3:45 AM, ruslan usifov ruslan.usi...@gmail.comwrote:
Hello
Some times i have very long GC pauses:
Total time for which application threads were stopped: 0.0303150 seconds
lot of time.
Check if it is due to some JVM bug.
http://bugs.sun.com/bugdatabase/view_bug.do?bug_id=6477891
-Naren
On Thu, Mar 17, 2011 at 9:47 AM, ruslan usifov ruslan.usi...@gmail.comwrote:
2011/3/17 Narendra Sharma narendra.sha...@gmail.com
What heap size are you running with? and Which
libcassandra isn't very active. Since we already have an object pool library,
we went with raw Thrift in C++ instead of using any other library.
Thanks,
Naren
On Wed, Mar 16, 2011 at 10:03 PM, Primal Wijesekera
primalwijesek...@yahoo.com wrote:
You could try this,
Sometime back I looked at the code to find that out. Following is the
result. There will be some additional overhead for internal DS for
ConcurrentLinkedHashMap.
Keycache size * (8 bytes for position i.e. value + X bytes for key +
16 bytes for token (RP) + 8 byte reference for DecoratedKey + 8
On the same page there is a section on Load Balance that talks about a python
script to compute tokens. I believe your question is more about assigning
new tokens, not computing tokens.
1. nodetool loadbalance will result in recomputation of tokens. It will
pick tokens based on the load and not
reading it wrong. the output shows a
nice fancy column called Owns but i've only ever seen the percentage
... the amount of data or load is even ... doh. thanks for the
reply. cheers
-sd
On Mon, Mar 14, 2011 at 10:47 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
On the same page
Multiple writes for the same key and column will result in overwriting of the
column in a memtable. Basically, multiple updates for the same (key, column) are
reconciled based on the column's timestamp. This happens per memtable. So if
a memtable is flushed to an sstable, this rule will be valid for the next
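A minimal sketch of that reconcile rule (toy types, not Cassandra's internal classes; real Cassandra also breaks timestamp ties by comparing values, which this sketch skips):

```java
// Highest-timestamp-wins reconcile for two versions of the same (key, column).
public class Reconcile {
    // Toy column: just a value and a client-supplied timestamp.
    public record Column(byte[] value, long timestamp) {}

    // The update replaces the current column only if its timestamp is higher.
    public static Column reconcile(Column current, Column update) {
        return update.timestamp() > current.timestamp() ? update : current;
    }
}
```

The same rule is applied again at read time across memtables and sstables, which is why the order in which writes arrive doesn't matter, only their timestamps.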
I have been through tuning for GC and OOM recently. If you can provide the
cassandra.yaml, I can help. Mostly I had to play with memtable thresholds.
Thanks,
Naren
On Fri, Mar 4, 2011 at 12:43 PM, Mark static.void@gmail.com wrote:
We have 7 column families and we are not using the default
I am unable to enable/disable HH via JMX (JConsole).
Even though the load is on and reads/writes are happening, I don't see the
Operations component in JConsole. To clarify further, I see only
JConsole > MBeans > org.apache.cassandra.db.StorageProxy > Attributes. I don't
see
You are missing the point. The coordinator node that is handling the request
won't wait for all the nodes to return their copy/digest of the data. It just
waits for Q (RF/2+1) nodes to return. This is the reason I explained two
possible scenarios.
Further, on what basis Cassandra will know that the
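The Q above is integer arithmetic; a one-line sketch (hypothetical helper, not a Cassandra API):

```java
// Quorum size as described above: Q = RF/2 + 1, using integer division.
// Since Q + Q > RF for any RF, a read quorum always overlaps a write quorum,
// which is what makes QUORUM reads see the latest QUORUM write.
public class Quorum {
    public static int quorum(int rf) {
        return rf / 2 + 1;
    }
}
```

So RF=3 gives Q=2, RF=4 gives Q=3, and RF=5 gives Q=3.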
1. Why 24GB of heap? Do you need such a big heap? A bigger heap can lead to
longer GC cycles, but 15 min looks too long.
2. Do you have the row cache enabled?
3. How many column families do you have?
4. Enable GC logs and monitor what GC is doing to get an idea of why it is
taking so long. You can add
Today it is not possible to change the comparators (compare_with and
compare_subcolumns_with). I went through the discussion on thread
http://comments.gmane.org/gmane.comp.db.cassandra.user/12466.
Does it make sense to at least allow a one-way change, i.e. from specific types
to a generic type? For eg
Remember the simple rule: the column with the highest timestamp is the one that
will be considered correct EVENTUALLY. So consider the following case:
Cluster size = 3 (say node1, node2 and node3), RF = 3, Read/Write CL =
QUORUM
a. QUORUM in this case requires 2 nodes. Write failed with successful write
to
, Feb 23, 2011 at 6:47 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Remember the simple rule. Column with highest timestamp is the one that
will be considered correct EVENTUALLY. So consider following case:
Cluster size = 3 (say node1, node2 and node3), RF = 3, Read/Write CL =
QUORUM
Version: Cassandra 0.7.1 (build from trunk)
Setup:
- Cluster of 2 nodes (Say A and B)
- HH enabled
- Using the default Keyspace definition in cassandra.yaml
- Using SuperCounter1 CF
Client:
- Using CL of ONE
I started the two Cassandra nodes, created schema and then shutdown one of
the
Version: Cassandra 0.7.1 (build from trunk)
Setup:
- Cluster of 2 nodes (Say A and B)
- HH enabled
- Using the default Keyspace definition in cassandra.yaml
- Using SuperCounter1 CF
Steps:
- Started the two nodes, loaded schema using nodetool
- Executed counter update and read operations on A
As per config:
# this defines the maximum amount of time a dead host will have hints
# generated. After it has been dead this long, hints will be dropped.
max_hint_window_in_ms: 3600000 # one hour
Will this result in deletion of existing hints (from memory and disk)? Or will it
just stop creating
Version: Cassandra 0.7.1
I am seeing following exception at regular interval (very frequently) in
Cassandra. I did a clean install of Cassandra 0.7.1 and deleted all old
data. Any idea what could be the cause? The stack is the same for all the
occurrences.
Thanks,
Naren
ERROR [ReadStage:11232]
. There is some latency that needs to be sorted out, but overall I
am positive. This is with 0.6.6; I am in the process of moving it to 0.7.
On Wed, Jan 26, 2011 at 11:37 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Anyone using Cassandra for storing large number (millions) of large
(mostly immutable
minor compactions turned on.
On Thu, Jan 27, 2011 at 12:56 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Thanks Anand. A few questions:
- What is the size of the nodes (in terms of data)?
- How long have you been running?
- How's compaction treating you?
Thanks,
Naren
On Thu, Jan 27
Anyone using Cassandra for storing large number (millions) of large (mostly
immutable) objects (200KB-5MB size each)? I would like to understand the
experience in general considering that Cassandra is not considered a good
fit for large objects. https://issues.apache.org/jira/browse/CASSANDRA-265
Yes. See this http://wiki.apache.org/cassandra/FAQ#range_ghosts
-Naren
On Tue, Jan 25, 2011 at 2:59 PM, Nick Santini nick.sant...@kaseya.comwrote:
Hi,
I'm trying a test scenario where I create 100 rows in a CF, then
use get_range_slices to get all the rows, and I get 100 rows, so far so good
The schema is not loaded from cassandra.yaml by default. You need to either
load it through JConsole or define it through the CLI. Please read the following
page for details:
http://wiki.apache.org/cassandra/LiveSchemaUpdates
Also look for "Where are my keyspaces?" on the following page:
With the raw Thrift APIs:
1. Fetch a column from a super column (identifier names are placeholders):
ColumnPath cp = new ColumnPath(columnFamily);      // String CF name
cp.setSuper_column(superColumnNameBytes);          // byte[] super column name
cp.setColumn(columnNameBytes);                     // byte[] column name
ColumnOrSuperColumn resp = client.get(rowKeyByteBuffer, cp,
ConsistencyLevel.ONE);
Column c = resp.getColumn();
2. Add a new
that.
I am using Cassandra 0.7.0-rc2.
I will try this DB client. Thanks.
On Tue, Dec 28, 2010 at 10:41 AM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Please do mention the Cassandra version you are using in all your queries.
It helps.
Try https://github.com/driftx/chiton
Thanks
that was based on order of the rows in the
column family, so I didn't explore that much.
On Mon, Dec 27, 2010 at 9:55 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Did you look at get_range_slices? Once you get the columns from super
column, pick the first and last to form the range
#1 - No limit
#2 - If you are referring to secondary indexes then NO. Also see
https://issues.apache.org/jira/browse/CASSANDRA-598
#3 - No limit
Following are key limitations:
1. All data for a single row must fit (on disk) on a single machine in the
cluster
2. A single column value may not be
that row.
Hope that helps.
Aaron
On 04 Dec, 2010, at 09:23 AM, Narendra Sharma narendra.sha...@gmail.com
wrote:
What is the impact (performance and I/O) of row size (in bytes) on
compaction?
What is the impact (performance and I/O) of number of super columns and
columns on compaction
What is the impact (performance and I/O) of row size (in bytes) on
compaction?
What is the impact (performance and I/O) of number of super columns and
columns on compaction?
Does anyone has any details and data to share?
Thanks,
Naren
Hi,
My schema has a row that has thousands of Super Columns. The size of each
super column is around 500B (20 columns). I need to query 1 SuperColumn
based on value of one of its column. Something like
SELECT SuperColumn FROM Row WHERE SuperColumn.column=value
Questions:
1. Is this possible
key is the value of
your field and the columns are the row keys of your super column family
(inverted index)
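The inverted-index layout described above can be modeled with a plain map (toy sketch; in Cassandra the outer key would be the index CF's row key and the inner set the column names):

```java
import java.util.Map;
import java.util.Set;
import java.util.TreeMap;
import java.util.TreeSet;

// Toy model of the inverted-index CF: the row key is the field value, and
// the "columns" are the row keys of the data rows carrying that value.
// Querying by value is then a single row read instead of a full scan.
public class InvertedIndex {
    final Map<String, Set<String>> index = new TreeMap<>();

    public void put(String fieldValue, String dataRowKey) {
        index.computeIfAbsent(fieldValue, v -> new TreeSet<>()).add(dataRowKey);
    }

    public Set<String> rowsWithValue(String fieldValue) {
        return index.getOrDefault(fieldValue, Set.of());
    }
}
```

The application has to maintain this index on every write, which is the trade-off versus the built-in secondary indexes mentioned elsewhere in the thread.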
Nicolas Santini
Director of Cloud Computing
Auckland - New Zealand
(64) 09 914 9426 ext 2629
(64) 021 201 3672
On Fri, Dec 3, 2010 at 1:00 PM, Narendra Sharma
narendra.sha
Are there any C++ clients out there similar to Hector (in terms of features)
for Cassandra? I am looking for C++ Client for Cassandra 0.7.
Thanks,
Naren
Hi,
I am using Cassandra 0.7 beta3 and Hector.
I create a mutation map. The mutation involves adding few columns for a
given row. After that I use batch_mutate API to send the changes to
Cassandra.
Question:
If there are multiple column writes on same row in a mutation_map, does
Cassandra show
Is there any documentation available on what is possible with secondary
indexes? For eg
- Is it possible to define secondary index on columns within a SuperColumn?
- If I define a secondary index at run time, does Cassandra index all the
existing data or only new data is indexed?
Some
wrote:
On Mon, Nov 29, 2010 at 7:59 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Is there any documentation available on what is possible with secondary
indexes?
Not yet.
- Is it possible to define secondary index on columns within a
SuperColumn?
No.
- If I define a secondary
On Mon, Nov 29, 2010 at 9:32 PM, Jonathan Ellis jbel...@gmail.com wrote:
On Mon, Nov 29, 2010 at 11:26 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Thanks Jonathan.
Couple of more questions:
1. Is there any technical limit on the number of secondary indexes that
can
Hi,
I am using Cassandra 0.6.5. Our application uses the get_range_slices to get
rows in the given range.
Could someone please explain how get_range_slices works internally esp when
a count parameter (value = 1) is also specified in the SlicePredicate? Does
Cassandra first search all in the
.
the performance of those two predicates is equivalent, assuming a row
start key actually exists.
On Thu, Oct 14, 2010 at 1:09 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Hi,
I am using Cassandra 0.6.5. Our application uses the get_range_slices to
get
rows in the given range.
Could
Cassandra Version: 0.6.5
I am running a long duration test and I need to keep the commit log to see
the sequence of operations to debug few application issues. Is it possible
to retain the commit logs? Apart from increasing the value of
CommitLogRotationThresholdInMB
what is the other way to
Has anyone used sstable2json on 0.6.5 and noticed the issue I described in
my email below? This doesn't look like a data corruption issue, as sstablekeys
shows the keys.
Thanks,
Naren
On Tue, Oct 5, 2010 at 8:09 PM, Narendra Sharma
narendra.sha...@gmail.comwrote:
0.6.5
-Naren
On Tue, Oct 5
Thanks Oleg!
Could you please share the patch? I have built Cassandra from source before.
I can definitely give it try.
-Naren
On Wed, Oct 6, 2010 at 3:55 AM, Oleg Anastasyev olega...@gmail.com wrote:
Is it possible to retain the commit logs?
In off-the-shelf cassandra 0.6.5 this is not
Hi,
I am using sstable2json to extract row data for debugging some application
issue. I first ran sstablekeys to find the list of keys in the sstable. Then
I use the key to fetch row from sstable. The sstable is from Lucandra
deployment. I get following.
-bash-3.2$ ./sstablekeys
0.6.5
-Naren
On Tue, Oct 5, 2010 at 6:56 PM, Jonathan Ellis jbel...@gmail.com wrote:
Version?
On Tue, Oct 5, 2010 at 7:28 PM, Narendra Sharma
narendra.sha...@gmail.com wrote:
Hi,
I am using sstable2json to extract row data for debugging some
application
issue. I first ran
Read "Use mlockall via JNA, if present, to prevent Linux from swapping out
parts of the JVM" (https://issues.apache.org/jira/browse/CASSANDRA-1214) at the
following link:
http://www.riptano.com/blog/whats-new-cassandra-065
-Naren
On Wed, Sep 29, 2010 at 5:21 PM, Jeremy Davis
We are seeing a high number of DigestMismatchExceptions on our Cassandra
deployment. We have a cluster of 4 nodes with RF=3 and we read/write in
Quorum. I understand some DigestMismatchException is normal and is the
mechanism for Cassandra to ensure consistency by doing read-repair.
In our case,
Hi,
We have an application that uses Cassandra to store data. The application is
deployed on multiple nodes that are part of an application cluster. We are
at present using a single Cassandra node. We have noticed a few errors in the
application, and our analysis revealed that the root cause was that the