Hi All,
In DSE they claim to have the Cassandra File System (CFS) in place of HDFS, which makes
it really fault tolerant.
Is there a way to use the Cassandra File System (CFS) in place of HDFS if I don't
have DSE?
Regards,
Tarun Tiwari | Workforce Analytics-ETL | Kronos India
M: +91 9540 28 27 77 | Tel: +91
50100 inserts or deletes? Also, how much RAM/CPU do you have on the server
running this, and what's the RAM/CPU usage around the time it fails?
On Tue, Mar 24, 2015 at 5:29 PM, joss Earl j...@rareformnewmedia.com
wrote:
On a stock install, it gets to about 50100 before grinding to a halt.
Hi Joss
We faced a similar issue recently. The problem seems to be related to the huge number
of tombstones generated after deletion. I would suggest you increase the
tombstone warning and failure thresholds in cassandra.yaml.
Once you do that and run your program, make sure that you monitor
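For reference, the two thresholds mentioned above live in cassandra.yaml; a sketch with the 2.x defaults (verify the names and values against your own version):

```yaml
# cassandra.yaml -- tombstone scan thresholds (Cassandra 2.x defaults)
tombstone_warn_threshold: 1000       # warn when a single query scans this many tombstones
tombstone_failure_threshold: 100000  # abort the query beyond this many
```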
Hi Anuj
Yes, thanks.. looking at my log file I see:
ERROR [SharedPool-Worker-2] 2015-03-24 13:52:06,751
SliceQueryFilter.java:218 - Scanned over 10 tombstones in test1.msg;
query aborted (see tombstone_failure_threshold)
WARN [SharedPool-Worker-2] 2015-03-24 13:52:06,759
Hi,
Cassandra Demon found in org/apache/cassandra/service/CassandraDaemon.java.
This also contains the main() method.
From: Divya Divs divya.divi2...@gmail.com
Sent: Tuesday, March 24, 2015 10:59 AM
To: user@cassandra.apache.org; Jason Wee; Eric Stevens
Subject:
It inserts 100,000 messages; I then start deleting the messages by grabbing
chunks of 100 at a time and individually deleting each message.
So, the 100,000 messages get inserted without any trouble, I run into
trouble when I have deleted about half of them. I've run this on machines
with
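The workload described above can be modeled without a live cluster. A minimal pure-Python sketch (illustrative names, not the actual test program) of why the delete phase accumulates tombstones:

```python
# Model of the test program described: 100,000 inserts, then deletes in
# chunks of 100. Each individual DELETE leaves a row tombstone until it is
# compacted away after gc_grace_seconds, so a subsequent scan soon crosses
# tombstone_failure_threshold (100,000 by default on the 2.x line).
def chunks(seq, size):
    """Yield successive fixed-size slices of seq."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

message_ids = list(range(100000))
tombstones = 0
for batch in chunks(message_ids, 100):
    for _msg_id in batch:
        tombstones += 1  # one row tombstone per single-row DELETE

print(tombstones)  # 100000 -- right at the default failure threshold
```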
Cool I think that helps.
Regards
Tarun
From: Jonathan Lacefield [mailto:jlacefi...@datastax.com]
Sent: Tuesday, March 24, 2015 6:39 PM
To: user@cassandra.apache.org
Subject: Re: Way to Cassandra File System
Hello,
CFS is a DataStax proprietary implementation of the Hadoop File System
This talk from DataStax discusses deletes as an anti-pattern. It may
be worth watching.
Thanks for your interest in the following webinar:
Avoiding anti-patterns: How to stay in love with Cassandra
Here are the links to the video recording and presentation slides.
Thanks,
Hi Joss,
On 24/03/15 12:58, joss Earl wrote:
I run into trouble after a while if I delete rows, this happens in both 2.1.3
and 2.0.13, and I encountered the same problem when using either the datastax
java driver or the stock python driver.
The problem is reproducible using the attached python
Hi all,
I need to import a CSV file into a table using the COPY command, but the file
contains carriage returns, which cause me problems in doing so. Is there
any way in Cassandra to solve this?
Regards:
Rahul
--
Follow IndiaMART.com http://www.indiamart.com for latest updates on this
and more:
Thanks very much.
Thanks and Regards,
Divya.K
M-Tech IT
Intern at Honeywell technology Solutions, Bangalore
On Tue, Mar 24, 2015 at 4:15 PM, Brian O'Neill b...@alumni.brown.edu
wrote:
FWIW — I just went through this, and posted the process I used to get up
and running:
Hello,
Reading this documentation http://www.datastax.com/docs/1.1/install/upgrading
If you are upgrading to Cassandra 1.1.9 from a version earlier than
1.1.7, all nodes must be upgraded before any streaming can take place.
Until you upgrade all nodes, you cannot add version 1.1.7 nodes or
later
FWIW I just went through this, and posted the process I used to get up and
running:
http://brianoneill.blogspot.com/2015/03/getting-started-with-cassandra.html
-brian
---
Brian O'Neill
Chief Technology Officer
Health Market Science, a LexisNexis Company
215.588.6024 Mobile @boneill42
I run into trouble after a while if I delete rows, this happens in both
2.1.3 and 2.0.13, and I encountered the same problem when using either the
datastax java driver or the stock python driver.
The problem is reproducible using the attached python program.
Once the problem is encountered, the
This is a known issue with older versions of the AMI. Use the 2.5.1 version
of the AMI (see https://raw.githubusercontent.com/riptano/ComboAMI/2.5/ami_ids.json
for the IDs for your region). For a more detailed discussion of what's going
wrong, see this StackExchange
Not sure if this will help, but I have had issues with Windows files on Unix
before, and this has worked for me…
To remove the ^M characters at the end of all lines in vi, use:
:%s/^V^M//g
The ^V is a CONTROL-V character and ^M is a CONTROL-M. When you type this, it
will look like this:
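As an alternative outside vi, stripping the carriage returns with tr before running COPY also works (a sketch with a made-up sample file; dos2unix would do the same job if installed):

```shell
# Make a small DOS-format sample, then strip the \r bytes before COPY.
printf 'id,msg\r\n1,hello\r\n' > input.csv
tr -d '\r' < input.csv > clean.csv
cat clean.csv   # same rows, now with Unix line endings
```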
On Tue, Mar 24, 2015 at 5:30 AM, Rahul Bhardwaj
rahul.bhard...@indiamart.com wrote:
I need to import a CSV file into a table using the COPY command, but the file
contains carriage returns, which cause me problems in doing so. Is there
any way in Cassandra to solve this?
You can surround the field with
Make sure to have a priest nearby, or the demon can get out of hand! ;)
On Tue, Mar 24, 2015 at 7:11 PM, Job Thomas j...@suntecgroup.com wrote:
Hi,
Cassandra Demon found
in org/apache/cassandra/service/CassandraDaemon.java
This also contains the main() method.
Hi,
What is the process to re-bootstrap a node after hard drive failure
(Cassandra 2.1.3)?
This is the same node as previously, but the data folder has been wiped,
and I would like to re-bootstrap it from the data stored on the other nodes
of the cluster (I have RF=3).
I am not using vnodes.
You can use nodetool rebuild on this node.
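For completeness: since the data directory was wiped but the address is unchanged, the route the DataStax docs describe for this situation is to restart the node telling it to replace itself, so it streams its ranges back from the other replicas. A sketch (flag name as of the 2.x line; <node_ip> is a placeholder for the node's own address):

```shell
# cassandra-env.sh (or JVM options) -- one-time flag for the restarted node
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=<node_ip>"
```

Remove the flag again once the node has finished streaming and rejoined the ring.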
2015-03-25 9:20 GMT+08:00 Flavien Charlon flavien.char...@gmail.com:
Hi,
What is the process to re-bootstrap a node after hard drive failure
(Cassandra 2.1.3)?
This is the same node as previously, but the data folder has been wiped,
and I would
Hello,
If I have a custom type EventDefinition and I create a table like
create table TestTable (
  user_id bigint,
  ts timestamp,
  definition 'com.anishek.EventDefinition',
  primary key (user_id, ts)
) with clustering order by (ts desc)
and compression = {'sstable_compression': 'SnappyCompressor'}
and
Hi Roman,
On 24/03/15 18:05, Roman Tkachenko wrote:
Hi Duncan,
Thanks for the response!
I can try increasing gc_grace_seconds and running repair on all nodes. What does not
make sense, though, is why all the *new* deletes I issue (for the same column that
resurrects after repair) are forgotten as well after
Hi Duncan,
Thanks for the response!
I can try increasing gc_grace_seconds and running repair on all nodes. What does
not make sense, though, is why all the *new* deletes I issue (for the same column
that resurrects after repair) are also forgotten after repair. Doesn't
Cassandra insert a new tombstone every
Hi Roman,
On 24/03/15 17:32, Roman Tkachenko wrote:
Hey guys,
Has anyone seen anything like this behavior or has an explanation for it? If
not, I think I'm gonna file a bug report.
this can happen if repair is run after the tombstone's gc_grace_seconds has
expired. I suggest you increase
Hey guys,
Has anyone seen anything like this behavior or has an explanation for it?
If not, I think I'm gonna file a bug report.
Thanks!
Roman
On Mon, Mar 23, 2015 at 4:45 PM, Roman Tkachenko ro...@mailgunhq.com
wrote:
Hey guys,
We're having a very strange issue: deleted columns get
Some ideas I throw in here:
The delay Y will be at least 1 minute and at most 90 days, with a
resolution of one minute -- use the target time (in YYYYMMDDHHMM format, as an
integer) as your partition key.
Example: today, March 24th at 12:00 (201503241200), you need to delay 3
actions, action A in exactly 3
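The partition-key scheme sketched above comes down to a few lines (assuming the YYYYMMDDHHMM encoding; the helper name is illustrative):

```python
from datetime import datetime, timedelta

def delay_bucket(execute_at):
    # Encode the scheduled minute as a YYYYMMDDHHMM integer partition key,
    # so every action due in the same minute lands in one partition.
    return int(execute_at.strftime("%Y%m%d%H%M"))

now = datetime(2015, 3, 24, 12, 0)
print(delay_bucket(now + timedelta(hours=3)))  # 201503241500
```

A worker then only has to read the single partition for the current minute to fetch everything due, which matches the "all actions for a point in time are processed at once" requirement.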
Hi,
I tried to launch the DataStax AMI using the enterprise version, following these
steps:
- I launched the AMI with only OpsCenter in an EC2 instance, and then I
created a new cluster containing 2 nodes - Cassandra Solr. This
scenario works just fine in South America (Sao Paulo).
I tried to
On Tue, Mar 24, 2015 at 5:05 AM, Robin Verlangen ro...@us2.nl wrote:
- for every point in the future there are probably hundreds of actions
which have to be processed
- all actions for a point in time will be processed at once (thus not
removing action by action as a typical queue would do)
do you just mean that it's easy to forget to always set your timestamp
correctly, and if you goof it up, it makes it difficult to recover from
(i.e. you issue a delete with system timestamp instead of document version,
and that's way larger than your document version would ever be, so you can
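The failure mode being described - a delete issued with a wall-clock timestamp shadowing all later version-numbered writes - can be shown with a toy last-write-wins model (illustrative only, not Cassandra's actual reconciliation code):

```python
def reconcile(cell, write):
    # Last-write-wins: the (timestamp, value) pair with the higher
    # timestamp survives; a value of None stands in for a tombstone.
    return write if write[0] >= cell[0] else cell

cell = (1, "v1")                                  # written at document version 1
cell = reconcile(cell, (2, "v2"))                 # version 2 wins, as intended
cell = reconcile(cell, (1427215946000000, None))  # delete sent with system microseconds
cell = reconcile(cell, (3, "v3"))                 # version-3 write is silently shadowed
print(cell[1])  # None -- the tombstone outranks every future version number
```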