if you don't want the commit logs to go crazy.
-Jeremiah
On 11/28/2011 11:11 AM, Alexandru Dan Sicoe wrote:
Hello everyone,
4 node Cassandra 0.8.5 cluster with RF=2, replica placement strategy =
SimpleStrategy, write consistency level = ANY, memtable_flush_after_mins
= 1440
Use sstableloader to load the
sstables from all of the current machines into the new machine. Run major
compaction a couple of times. You will have all of the data on one machine.
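The steps above might look like the following, assuming the sstables for a keyspace named MyKeyspace (a placeholder) have been copied into a local directory named after the keyspace, and that the machine running sstableloader is configured to see the target cluster; host names are placeholders:

```shell
# Stream the copied sstables into the running target node
# (sstableloader reads the keyspace name from the directory name).
bin/sstableloader /path/to/MyKeyspace

# Trigger a major compaction on the target node to merge the
# streamed sstables into fewer files (repeat as needed).
bin/nodetool -h target-host compact
```

These are environment-dependent administrative commands; exact paths and options depend on the 0.8.x installation.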
On 12/07/2011 10:17 AM, Alexandru Dan Sicoe wrote:
Hello everyone.
3 node Cassandra 0.8.5 cluster. I've left
Hi,
I am thinking of strategies to deploy my application that uses a 3 node
Cassandra cluster.
Quick recap: I have several client applications that feed in about 2
million different variables (each representing a different monitoring
value/channel) in Cassandra. The system receives updates for
Hello everyone.
3 node Cassandra 0.8.5 cluster. I've left the system running in production
environment for long term testing. I've accumulated about 350GB of data
with RF=2. The machines I used for the tests are older and need to be
replaced. Because of this I need to export the data to a
*From:* Alexandru Dan Sicoe [mailto:sicoe.alexan...@googlemail.com]
*Sent:* Wednesday, December
, Dec 2, 2011 at 8:35 AM, Alexandru Dan Sicoe
sicoe.alexan...@googlemail.com wrote:
Ok, so my problem persisted. On the node that is filling up the hard disk,
I have a 230 GB disk. Right after I restart the node, it deletes tmp files
and reaches 55GB of data on disk. Then it starts to quickly
Alex
On Thu, Dec 1, 2011 at 10:08 PM, Jahangir Mohammed
md.jahangi...@gmail.comwrote:
Yes, mostly sounds like it. In our case failed repairs were causing
accumulation of the tmp files.
Thanks,
Jahangir Mohammed.
On Thu, Dec 1, 2011 at 2:43 PM, Alexandru Dan Sicoe
sicoe.alexan
Hello everyone,
4 node Cassandra 0.8.5 cluster with RF =2.
One node started throwing exceptions in its log:
ERROR 10:02:46,837 Fatal exception in thread Thread[FlushWriter:1317,5,main]
java.lang.RuntimeException: java.lang.RuntimeException: Insufficient disk
space to flush 17296 bytes
You will then need to run
repair on that node to get back any data that was missed while it was
full. If your commit log was on a different device, you may not even have
lost much.
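The recovery step described above could be run like this, once disk space has been freed and the node restarted (host name is a placeholder):

```shell
# Bring the recovered node back in sync with its replicas,
# re-fetching any writes it missed while its disk was full.
bin/nodetool -h recovered-host repair
```

Repair can be I/O- and network-heavy on a large data set, so it is usually scheduled during a quiet period.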
-Jeremiah
On 12/01/2011 04:16 AM, Alexandru Dan Sicoe wrote:
Hello everyone,
4 node Cassandra 0.8.5 cluster with RF
flushed because they don't reach the operations and throughput limits? Then
why do only some nodes exhibit this behaviour?
It would be interesting to understand how to control the size of the
commitlog, and also to know how to size my commitlog disks!
Thanks,
Alex
--
Alexandru Dan Sicoe
MEng
Hello everyone,
4 node Cassandra 0.8.5 cluster with RF=2, replica placement strategy =
SimpleStrategy, write consistency level = ANY, memtable_flush_after_mins
= 1440; memtable_operations_in_millions = 0.1; memtable_throughput_in_mb = 40;
max_compaction_threshold = 32; min_compaction_threshold = 4;
I
Hi,
I'm using the community version of OpsCenter to monitor my cluster. At
the moment I'm interested in storage space. In the performance metrics
page, if I choose to see the graph of the metric CF: SSTable Size for a
certain CF of interest, two things are plotted on the graph: Total disk
--
Alexandru Dan Sicoe
MEng, CERN Marie Curie ACEOLE Fellow
RAID 0 configuration is recommended for the data file
directory. Can anyone explain why?
Sorry for the huge email.
Cheers,
Alex
--
Alexandru Dan Sicoe
MEng, CERN Marie Curie ACEOLE Fellow
Hi guys,
It's interesting to see this thread. I recently discovered a similar
problem on my 3 node Cassandra 0.8.5 cluster. It was working fine, then I
took a node down to see how it behaves. All of a sudden I couldn't write or
read because of this exception being thrown:
Exception in thread
of 2 is 2).
Probably a common setup is to use RF=3 because it allows you to
survive a node going down, while also allowing you to use QUORUM. But
whether that matters will be up to your use-case.
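For reference, Cassandra computes quorum as floor(RF/2) + 1, which is why QUORUM with RF=2 still needs both replicas, while RF=3 tolerates one node down. A minimal sketch (the class and method names are mine, not Cassandra's API):

```java
public class QuorumCalc {
    // Quorum = floor(RF / 2) + 1: the smallest strict majority of replicas.
    static int quorum(int replicationFactor) {
        return replicationFactor / 2 + 1;
    }

    public static void main(String[] args) {
        System.out.println(quorum(2)); // 2: with RF=2, QUORUM needs every replica
        System.out.println(quorum(3)); // 2: with RF=3, QUORUM survives one node down
    }
}
```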
--
/ Peter Schuller (@scode, http://worldmodscode.wordpress.com)
--
Alexandru Dan Sicoe
MEng
Thanks for the detailed answers Dan, what you said makes sense. I think my
biggest worry right now is making the correct predictions of my data storage
space based on the measurements with the current cluster. Other than that I
should be fairly comfortable with the rest of the HW specs.
Thanks for
/10/2011, at 3:44 AM, Alexandru Dan Sicoe wrote:
Hello everyone,
I was trying to get some cluster-wide statistics of the total insertions
performed in my 3 node Cassandra 0.8.6 cluster. So I wrote a nice little
program that gets the CompletedTasks attribute of
org.apache.cassandra.db:type=Commitlog from every node, sums up the values
and
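A program like the one described could be sketched as follows, assuming Cassandra 0.8's default JMX port 7199 and no JMX authentication (host names come from the command line; whether CompletedTasks maps one-to-one to insertions is a separate question):

```java
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ClusterCommitlogTasks {
    // Read the CompletedTasks attribute from one node over JMX.
    static long completedTasks(String host, int port) throws Exception {
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://" + host + ":" + port + "/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            ObjectName name = new ObjectName("org.apache.cassandra.db:type=Commitlog");
            return ((Number) mbs.getAttribute(name, "CompletedTasks")).longValue();
        }
    }

    // Sum the per-node counters into a cluster-wide total.
    static long sum(long[] perNode) {
        long total = 0;
        for (long n : perNode) total += n;
        return total;
    }

    public static void main(String[] args) throws Exception {
        long[] counts = new long[args.length];
        for (int i = 0; i < args.length; i++) {
            counts[i] = completedTasks(args[i], 7199);
        }
        System.out.println("Cluster-wide completed tasks: " + sum(counts));
    }
}
```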
Hello,
I'm currently doing my masters project. I need to store lots of time series
data of any type (String, int, booleans, arrays of the previous) with a high
writing rate (20 MBytes/sec - 170 TBytes/year - note: not running continuously)
but less strict read requirements. This is monitoring data