The WAL (and write-ahead logging in general) imposes a performance overhead.
If you simply took a machine out of the cluster permanently whenever it
crashes, you could quickly get all the shards back up to N replicas.
So realistically, running with a WAL is somewhat redundant.
Hi Neha,
As far as I'm aware, 4GB of RAM is a bit underpowered for Cassandra even if
there are no other processes on the same server (e.g., Tomcat and ActiveMQ).
There are some general guidelines at
http://wiki.apache.org/cassandra/CassandraHardware which should help you
out. You may not need all
Well... it depends. Are you saying that whenever a machine dies, for any
reason, you'd bootstrap a new one in its place? Or do you just not care
about the data?
There are cases where it might be ok (if you're using Cassandra as a cache)
but if it's your source of truth I think this is likely to
No wonder the client is timing out. Even though C* supports up to 2B columns
per partition, it is recommended not to have more than 100k CQL rows in a partition.
It has been a long time since I used Astyanax, so I don’t remember whether the
AllRowsReader reads all CQL rows or storage rows. If it is reading all
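When a partition does grow past that scale, the usual remedy is to add a time bucket to the partition key so rows are spread across many smaller partitions. A minimal sketch (the table and column names here are hypothetical, not from this thread):

```sql
-- Bucketing by day caps partition growth: each (sensor_id, day) pair
-- becomes its own partition, so no partition accumulates unbounded rows.
CREATE TABLE readings_by_day (
    sensor_id int,
    day       text,      -- e.g. '2015-01-23'; part of the partition key
    event_ts  timeuuid,
    value     double,
    PRIMARY KEY ((sensor_id, day), event_ts)
);

-- Reads then target one bounded partition at a time:
SELECT * FROM readings_by_day WHERE sensor_id = 42 AND day = '2015-01-23';
```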
On Fri, Jan 23, 2015 at 9:59 AM, Kevin Burton bur...@spinn3r.com wrote:
The WAL (and write-ahead logging in general) imposes a performance overhead.
If one were to just take a machine out of the cluster, permanently, when a
machine crashes, you could quickly get all the shards back up to N replicas
after a
The model you are using seems OK.
Your question: This forces me to enter the wea_name and wea_add for each new
row, so how do I identify that a new row has been created?
Answer: You do *not* need to add the wea_name or wea_address during inserts
for every new row. Your insert could only
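One way to avoid re-entering the station fields on every insert is a static column, which is stored once per partition rather than once per row. A sketch, assuming Cassandra 2.0.6 or later (where static columns exist); the temp column is hypothetical:

```sql
CREATE TABLE weather (
    wea_id   int,
    wea_name text static,  -- stored once per wea_id partition
    wea_add  text static,
    eventday timeuuid,
    temp     double,       -- hypothetical measurement column
    PRIMARY KEY (wea_id, eventday)
);

-- Set the station details once per station:
INSERT INTO weather (wea_id, wea_name, wea_add)
VALUES (1, 'Station A', '12 Main St');

-- Each new reading then only needs the keys and the measurement:
INSERT INTO weather (wea_id, eventday, temp)
VALUES (1, now(), 21.5);
```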
On Fri, Jan 23, 2015 at 10:03 AM, Robert Wille rwi...@fold3.com wrote:
The docs say "Use -pr to repair only the first range returned by the
partitioner." What does this mean? Why would I only want to repair the
first range?
If you're repairing the whole cluster, repairing only the primary
Thanks Alex.
Andy
On 22 Jan 2015, at 16:54, Alex Popescu
al...@datastax.com wrote:
Thanks for the feedback Andy. I'll forward this to the DevCenter team.
Currently we have an email for sending feedback our way:
Thanks a lot Steve.
I suspected the same. I will definitely read.
regards
Neha
On Fri, Jan 23, 2015 at 11:22 PM, Steve Robenalt sroben...@highwire.org
wrote:
Hi Neha,
As far as I'm aware, 4GB of RAM is a bit underpowered for Cassandra even
if there are no other processes on the same server
root@anuj-700-430qe:/usr/share/dse/bin# echo $JAVA_HOME
/jdk1.8.0_25
which is the installation folder for JDK 1.8.x (Sun/Oracle JDK),
and $JAVA_HOME/bin is also added to the path.
On Fri, Jan 23, 2015 at 9:26 PM, Jacob Rhoden jacob.rho...@me.com wrote:
What does this show?
ls
What does this show?
ls $JAVA_HOME
Sent from iPhone
On 24 Jan 2015, at 2:18 pm, anujacharya11 . anuj.acharya1...@gmail.com
wrote:
I had installed DataStax Enterprise Cassandra on my Ubuntu Linux desktop,
which has Oracle/Sun JDK 1.8.x. I set up JAVA_HOME correctly, but when I
try to start the dse service I get this error:
I have set up JAVA_HOME in .bashrc and set up the path as well.
Even after that I am getting
Hi All,
I was following the TimeSeries data modelling in PlanetCassandra by
Patrick McFadin. Regarding that, I had one query:
If I need to store the weather station name also, should it be in the same
table, say:
create table test (wea_id int, wea_name text, wea_add text, eventday
timeuuid,
Thanks, it's working now.
In fact, it needs to be set up in
/etc/dse/cassandra/cassandra-env.sh
On Fri, Jan 23, 2015 at 9:48 PM, Andrew redmu...@gmail.com wrote:
You should be setting these as values in /etc/default/dse, not in your
bashrc.
I.e., /etc/default/dse should contain:
Thanks Dave. I found that Pig 0.14 and Hadoop 2.6.0 still use Guava 11.x,
which was causing the issue. Replacing all of those locations with Guava 17
did not end the ordeal. It seems Guava made some breaking changes
(https://issues.apache.org/jira/browse/HADOOP-11032) in v17. You need
version 16.0
I forgot, my task at hand is to generate a report of all the weather
stations along with the sum of temperatures measured each day.
Regards,
Seenu.
On Fri, Jan 23, 2015 at 2:14 PM, Srinivasa T N seen...@gmail.com wrote:
Hi All,
I was following the TimeSeries data modelling in
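A hedged sketch of that daily-sum report: in Cassandra 3.10+, CQL can GROUP BY primary-key columns, so with a day bucket in the key the per-day sums can be computed server-side. Table and column names here are hypothetical; on the 2.x versions current in this thread, the aggregation would instead be done client-side or in Pig/Hadoop:

```sql
-- Assumes PRIMARY KEY ((wea_id, day), eventday); GROUP BY may only use
-- partition-key and clustering columns, in key order.
SELECT wea_id, day, SUM(temp) AS daily_total
FROM weather_by_day
GROUP BY wea_id, day;
```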
For the cases where repair brings down a cluster, I can safely say that the
cluster has more problems than just repair. Consider the load caused by
repair to be roughly equivalent to losing a node. This is not a 1-for-1
comparison, as the node you're running repair on is up, albeit busy, and the