I had a similar issue (reported many times here; there's also a JIRA
issue, but the people reporting this problem were unable to reproduce it).
What I can say is that for me the solution was to run a major compaction
on the index CF via JMX. To be clear - we're not talking about
compacting the CF that
On 07.10.2013 08:02, Alexander Shutyaev wrote:
* We have not modified any *consistency settings* in our app, so I assume
we have the *default QUORUM* (2 out of 3 in our case) consistency *for
reads and writes*.
cqlsh uses ONE by default, and pycassa uses ONE by default too. I have no
experienc
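For reference, QUORUM is computed as floor(RF/2) + 1, which is where "2 out of 3" comes from. A quick sketch (the helper name is illustrative, not driver API):

```python
def quorum(replication_factor: int) -> int:
    """Number of replicas that must respond for QUORUM: floor(RF/2) + 1."""
    return replication_factor // 2 + 1

# With RF=3, QUORUM reads and writes each touch 2 replicas, so
# R + W > RF guarantees the read overlaps the latest write.
print(quorum(3))  # → 2
```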
1.2.
Just curious, has anyone tried 1.2 with large data set, around 1 TB ?
Thanks !
On Thu, Oct 3, 2013 at 7:20 AM, Michał Michalski wrote:
I was experimenting with 128 vs. 512 some time ago and I was unable to
see any difference in terms of performance. I'd probably check 1024 too,
but we migrated to 1.2 and heap space was not an issue anymore.
M.
On 02.10.2013 16:32, srmore wrote:
I changed my index_interval from 128 to ind
Hi Tim,
Not sure if you've seen this, but I'd start from DataStax's documentation:
http://www.datastax.com/documentation/cassandra/2.0/webhelp/index.html#cassandra/architecture/architecturePlanningAbout_c.html?pagename=docs&version=1.2&file=cluster_architecture/cluster_planning
Taking a look at
I believe the reason is that cfhistograms tells you about the sizes of
the rows returned by a given node in response to read requests, while
cfstats tracks the largest row stored on a given node.
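The distinction can be illustrated with a toy sketch (made-up numbers, not real tool output):

```python
# Why the two tools can disagree: a very large row that is never read
# shows up in cfstats but not in the read-size histogram.
stored_row_sizes = [120, 450, 98_000, 300]   # all rows on this node (bytes)
read_row_sizes = [120, 300, 450]             # rows actually returned by reads

cfstats_max_row = max(stored_row_sizes)      # largest row *stored*
cfhistograms_max = max(read_row_sizes)       # largest row *served to reads*

print(cfstats_max_row, cfhistograms_max)     # → 98000 450
```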
M.
On 19.09.2013 11:31, Rene Kochen wrote:
Hi all,
I use Cassandra 1.0.11
If I do cfstats
You might be interested in this:
http://mail-archives.apache.org/mod_mbox/cassandra-user/201308.mbox/%3ccaeqobhpav25pcgjfwbkmd1rzxvrif94e6lpybpj3mu_bqn9...@mail.gmail.com%3E
M.
On 18.09.2013 15:34, Ertio Lew wrote:
For any website just starting out, the load initially is minimal & grows
wit
s unable to find anything ?
On Wed, Aug 7, 2013 at 11:27 AM, Michał Michalski wrote:
2. when Cassandra looks up a key in an SSTable (assuming the bloom filter and
other "stuff" failed, and also assuming the key is located in this single
SSTable), Cassandra DOES NOT use sequential I/O. "She" will probably read the
hash-table slot or a similar structure, then Cassandra will do another disk
seek i
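The non-sequential lookup described above can be sketched with a sampled index searched by binary search, then a seek to the data position. This is a toy model under my own assumptions, not Cassandra's actual on-disk layout:

```python
import bisect

# A sampled partition index: every Nth key with its file offset.
# Names and numbers are illustrative only.
index_summary = [("apple", 0), ("mango", 4096), ("pear", 8192)]
keys = [k for k, _ in index_summary]

def nearest_offset(key: str) -> int:
    """Offset of the last sampled key <= `key`, where the scan/seek starts."""
    i = bisect.bisect_right(keys, key) - 1
    return index_summary[max(i, 0)][1]

print(nearest_offset("melon"))  # → 4096 (the "mango" slot)
```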
Not sure how up-to-date this info is, but from some discussions that
happened here a long time ago I remember that a minimum of 1MB per
memtable needs to be allocated.
The other constraint here is memtable_total_space_in_mb setting in
cassandra.yaml, which you might wish to tune when having a lo
I believe it won't run on 1.6. Java 1.7 is required to compile C* 2.0+,
and once that's done, you cannot run it using Java 1.6 (this is what the
"Unsupported major.minor version" error tells you; class file version 50
is Java 1.6 and 51 is 1.7).
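That version number sits in the fifth and sixth bytes of every .class file, so you can check it directly. A small sketch (the helper is hypothetical, but the header layout and the 50/51 mapping are from the class file format):

```python
import struct

# First 8 bytes of a .class file: magic (0xCAFEBABE), minor, major.
# Major 50 = Java 6, 51 = Java 7, 52 = Java 8 - the number the
# "Unsupported major.minor version 51.0" error complains about.
JAVA_BY_MAJOR = {50: "1.6", 51: "1.7", 52: "1.8"}

def class_file_java_version(header: bytes) -> str:
    magic, minor, major = struct.unpack(">IHH", header[:8])
    assert magic == 0xCAFEBABE, "not a .class file"
    return JAVA_BY_MAJOR.get(major, f"major {major}")

header = b"\xca\xfe\xba\xbe\x00\x00\x00\x33"   # major 0x33 = 51
print(class_file_java_version(header))  # → 1.7
```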
M.
On 22.07.2013 10:06, Andrew Cobley wrote:
I know it
Thanks! :-)
M.
On 18.07.2013 08:42, Jean-Armel Luce wrote:
@Michal: look at this for the improvement of read performance:
https://issues.apache.org/jira/browse/CASSANDRA-2498
Best regards.
Jean Armel
2013/7/18 Michał Michalski
SSTables are immutable - once they're written to disk, they cannot be
changed.
On read C* checks *all* SSTables [1], but to make it faster it uses
Bloom Filters, which can tell you that a row is *not* in a specific
SSTable, so you don't have to read it at all. However, *if* you read it
in case
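The role a Bloom filter plays on that read path can be sketched in a few lines. This is a minimal generic filter under my own assumptions, not Cassandra's implementation: a miss means "definitely not in this SSTable", a hit means "maybe here, go read it":

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions per key over a fixed bit array."""

    def __init__(self, num_bits: int = 1024, num_hashes: int = 3):
        self.num_bits, self.num_hashes = num_bits, num_hashes
        self.bits = 0  # bit array packed into one int

    def _positions(self, key: str):
        for i in range(self.num_hashes):
            digest = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, key: str):
        for pos in self._positions(key):
            self.bits |= 1 << pos

    def might_contain(self, key: str) -> bool:
        # False is definitive (no false negatives); True may be a false positive.
        return all(self.bits >> pos & 1 for pos in self._positions(key))

bf = BloomFilter()
bf.add("row-1")
print(bf.might_contain("row-1"))   # → True
```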
Hi Aaron,
> * Tombstones will only be purged if all fragments of a row are in the
SStable(s) being compacted.
To my knowledge, that's not necessarily true. In a specific case this
patch comes into play:
https://issues.apache.org/jira/browse/CASSANDRA-4671
"We could however purge tom
Deletion does not really "remove" data; it adds tombstones (deletion
markers). They'll later be merged with existing data during compaction
and, in the end (see: gc_grace_seconds), removed, but until then they'll
take up some space.
http://wiki.apache.org/cassandra/DistributedDelet
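The gc_grace_seconds rule above boils down to a simple time check, sketched here (the function is illustrative; the 864000-second default is Cassandra's standard 10 days):

```python
# A tombstone only becomes purgeable during compaction once the
# grace period has elapsed since the deletion was written.
GC_GRACE_SECONDS = 864_000  # default: 10 days

def purgeable(deleted_at: int, now: int, gc_grace: int = GC_GRACE_SECONDS) -> bool:
    """True if a tombstone written at `deleted_at` may be dropped at `now`."""
    return now >= deleted_at + gc_grace

print(purgeable(deleted_at=0, now=863_999))  # → False (still within grace)
print(purgeable(deleted_at=0, now=864_000))  # → True
```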
It doesn't tell you anything if a file name ends with "ic-###", except
pointing out the SSTable version it uses ("ic" in this case).
Files related to a secondary index contain something like this in the
filename: -., while files of "regular" CFs do not contain
any dots except the one just before the file extension
I think I'd try removing the "broken" SSTables (while the node is down)
and then running repair.
M.
On 05.07.2013 09:10, Jan Kesten wrote:
Hi,
I tried to scrub the keyspace, but with no success either; the process
threw an exception when hitting the corrupt block and then stopped. I
will rebootst
My blind guess is: https://issues.apache.org/jira/browse/CASSANDRA-5179
In our case the only sensible solution was to pause hints delivery and
disable storing them (both done with a nodetool: pausehandoff and
disablehandoff). Once they TTL'd (3 hours by default I believe?) I
turned HH on again
I don't think you need to run repair if you decrease RF. At least I
wouldn't do it.
In the case of *decreasing* RF you have 3 nodes containing some data, but
only 2 of them should store it from now on, so you should run cleanup
rather than repair, to get rid of the data on the 3rd replica. And I g
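The situation can be sketched with a toy SimpleStrategy-style placement model. Everything here (tokens, node names, the ring walk) is made up for illustration; the point is that after dropping RF the third node still holds data it no longer owns, which is what cleanup removes:

```python
import bisect

# Replicas for a key are the next RF nodes on the ring at or after its token.
ring = [(0, "node-a"), (100, "node-b"), (200, "node-c")]
tokens = [t for t, _ in ring]

def replicas(key_token: int, rf: int):
    start = bisect.bisect_left(tokens, key_token) % len(ring)
    return [ring[(start + i) % len(ring)][1] for i in range(rf)]

old = replicas(50, rf=3)    # all three nodes owned the key at RF=3
new = replicas(50, rf=2)    # only two own it at RF=2
stale = set(old) - set(new) # the node cleanup would strip this key from
print(stale)                # → {'node-a'}
```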