Hi.
I am using Cassandra version 2.0.5. If null is explicitly set on a column,
paging_state will not work. My test procedure is as follows:
--
Create a table and insert 10 records using cqlsh. The query is as follows:
cqlsh:test> CREATE TABLE mytable (id int, range int, value text,
not getting any result.
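A minimal repro sketch for the paging issue described above. The schema here is a guessed completion of the truncated CREATE TABLE; the PRIMARY KEY and the inserted rows are assumptions for illustration, not from the original mail:

```sql
-- Hypothetical completion of the truncated schema above, for illustration only:
CREATE TABLE test.mytable (
    id int,
    range int,
    value text,
    PRIMARY KEY (id, range)
);

-- Insert rows, explicitly setting value to null on some of them:
INSERT INTO test.mytable (id, range, value) VALUES (1, 1, 'a');
INSERT INTO test.mytable (id, range, value) VALUES (1, 2, null);

-- Then page through the rows with a driver fetch size smaller than the row
-- count (e.g. fetch size 5 over 10 rows); the reported symptom is that
-- paging via paging_state stops short once a null column is hit.
```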
[default@comsdb] get updated_upload_id['20140218'];
Returned 0 results.
This looks very strange, as I don't see any exceptions in the Cassandra logs either.
Any lead would be appreciated.
Regards,
Ankit Tyagi
On Mon, Feb 17, 2014 at 12:39 PM, David Chia davyc...@gmail.com wrote:
It's not clear to me from the docs how cold_reads_to_omit behaves. When
cold_reads_to_omit makes compaction ignore low-read-rate SSTables, what
would GC the tombstones in those cold SSTables?
If those tombstones apply to data that
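For context, cold_reads_to_omit is a SizeTieredCompactionStrategy subproperty set per table; a sketch of how it is configured (keyspace/table names are placeholders, and the option only exists in releases that include CASSANDRA-6109 — check your version's docs):

```sql
-- Hypothetical example: compaction will skip SSTables that together serve
-- less than 20% of reads (the default is 0.05, i.e. 5%):
ALTER TABLE mykeyspace.mytable
  WITH compaction = {
    'class': 'SizeTieredCompactionStrategy',
    'cold_reads_to_omit': '0.2'
  };
```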
We're getting exceptions like the one below using Cassandra 2.0.5. A Google
search turns up nothing about these except the source code. Anyone have any
insight?
ERROR [CompactionExecutor:188] 2014-02-12 04:15:53,232 CassandraDaemon.java
(line 192) Exception in thread
The node is still out of the ring. Any suggestions on how to get it in will be
very helpful.
From: Arindam Barua [mailto:aba...@247-inc.com]
Sent: Friday, February 14, 2014 1:04 AM
To: user@cassandra.apache.org
Subject: Bootstrap stuck: vnode enabled 1.2.12
After our otherwise successful
On Mon, Feb 17, 2014 at 4:35 PM, Plotnik, Alexey aplot...@rhonda.ru wrote:
After analyzing Heap I saw this buffer has a size about 70KB per SSTable.
I have more than 30K SSTables per node.
I'm thinking your problem is not compression; it's using the old 5MB
default for Leveled Compaction and
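For reference, the LCS target size is settable per table; a sketch of bumping it from the old 5MB default (table name is a placeholder; 160MB is the value later releases ship as the default):

```sql
-- Hypothetical example: raise the per-SSTable target for LeveledCompactionStrategy:
ALTER TABLE mykeyspace.mytable
  WITH compaction = {'class': 'LeveledCompactionStrategy',
                     'sstable_size_in_mb': '160'};
```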
Personally I think having compression on by default is the wrong choice.
Depending on your access patterns and row sizes, the overhead of compression
can create more garbage collection and become your bottleneck before you
potentially bottleneck your disk (SSD disk).
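If you do want it off, compression is a per-table setting; a sketch using the 1.2/2.0 syntax (table name is a placeholder — after the change, existing SSTables stay compressed until they are rewritten, e.g. by compaction or upgradesstables):

```sql
-- Hypothetical example: disable compression on an existing table:
ALTER TABLE mykeyspace.mytable
  WITH compression = {'sstable_compression': ''};
```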
On Tue, Feb 18, 2014 at 2:23
There is a bug where a node without schema cannot bootstrap. Do you have
schema?
On Tue, Feb 18, 2014 at 1:29 PM, Arindam Barua aba...@247-inc.com wrote:
The node is still out of the ring. Any suggestions on how to get it in
will be very helpful.
From: Arindam Barua
After I upgraded Cassandra to 2.0.5, these issues have not occurred so far.
Thanks
Mahesh
On Mon, Feb 17, 2014 at 1:43 PM, mahesh rajamani
rajamani.mah...@gmail.com wrote:
Christian,
There are 2 use cases that are failing, and both look to be a similar
issue; it basically happens in a column family
My SSTable size is 100MB. The last time I removed the leveled manifest,
compaction was running for 3 months.
From: Robert Coli [mailto:rc...@eventbrite.com]
Sent: February 19, 2014, 6:24
To: user@cassandra.apache.org
Subject: Re: Turn off compression (1.2.11)
On Mon, Feb 17, 2014 at 4:35 PM, Plotnik,
Compression buffers are located on the heap; I saw them in a heap dump. That is:
==
public class CompressedRandomAccessReader extends RandomAccessReader {
    .....
    private ByteBuffer compressed; // <-- THAT IS
==
From: Robert Coli [mailto:rc...@eventbrite.com]
Sent:
Sounds like you have CMSInitiatingOccupancyFraction set close to 60.
You can raise that, and/or figure out how to use less heap.
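For reference, that flag lives in conf/cassandra-env.sh; a sketch of what it might look like (the value 75 is illustrative — stock cassandra-env.sh ships with 75, and the mail above suggests the cluster in question is running closer to 60):

```
# In conf/cassandra-env.sh -- start CMS collections when the old generation
# is 75% full, and trigger only on occupancy (not the JVM's own heuristics):
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=75"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
```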
On Mon, Feb 17, 2014 at 5:06 PM, John Pyeatt john.pye...@singlewire.com wrote:
I have a 6 node cluster running on AWS. We are using m1.large instances with
heap size
I am new and trying to learn Cassandra.
Based on my understanding of the problem, almost 2GB of heap is taken up
just for the compression buffers.
And at 100MB per SSTable, about 30,000 files gives about 3TB of data?
What is the hardware and memory configuration you are using to provide this
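The arithmetic behind those figures, as a quick sanity check (all sizes are the approximate values quoted in the thread):

```python
# Back-of-the-envelope check of the numbers quoted above (approximate sizes).
sstables = 30_000   # SSTables per node, as reported
buffer_kb = 70      # on-heap compression buffer per SSTable (from the heap dump)
sstable_mb = 100    # configured SSTable size

heap_gb = sstables * buffer_kb / 1024 / 1024   # KB -> GB
data_tb = sstables * sstable_mb / 1024 / 1024  # MB -> TB

print(f"~{heap_gb:.1f} GB of heap in compression buffers")  # ~2.0 GB
print(f"~{data_tb:.1f} TB of data per node")                # ~2.9 TB
```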
On Tue, Feb 18, 2014 at 2:51 PM, Plotnik, Alexey aplot...@rhonda.ru wrote:
My SSTable size is 100MB. The last time I removed the leveled manifest,
compaction was running for 3 months.
At 3TB per node, you are at, and probably exceeding, the maximum size anyone
suggests for Cassandra 1.2.x.
Add more
I believe you are talking about CASSANDRA-6685, which was introduced in 1.2.15.
I'm trying to add a node to a production ring. I have added nodes previously
just fine. However, this node had hardware issues during a previous bootstrap,
and now even a clean bootstrap seems to be having
On Tue, Feb 18, 2014 at 10:17 AM, Donald Smith
donald.sm...@audiencescience.com wrote:
We're getting exceptions like the one below using Cassandra 2.0.5. A
Google search turns up nothing about these except the source code. Anyone
have any insight?
ERROR [CompactionExecutor:188]