Any clue on this?
A well-configured row cache could save us a lot of disk reads, and IO is definitely our bottleneck... If someone could explain why the row cache has so much impact on my JVM and how to avoid it, I would appreciate it :).
2013/3/8 Alain RODRIGUEZ arodr...@gmail.com
I can add that I have JNA correctly loaded, from the logs: JNA mlockall successful.
2013/3/11 Alain RODRIGUEZ arodr...@gmail.com
It seems to me that repair -pr is not compatible with a vnode cluster. Is that true?
I'm afraid that's probably true. repair -pr should repair the primary range for each of the node's tokens, but it seems this hasn't been updated for vnodes.
Would you mind creating a bug report on JIRA?
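For reference, here is a tiny sketch of what -pr would need to cover with vnodes. This is an illustrative ring model, not Cassandra's implementation; the function and ring layout are made up. Each node owns many tokens, and the primary range for a token runs from the previous token on the ring up to and including that token, so a vnode-aware -pr must repair one range per token, not one per node.

```python
# Toy ring model (assumption: simplified, not Cassandra's internals).
# ring: list of (token, node) pairs; each node may own many tokens (vnodes).
def primary_ranges(ring, node):
    """Return the primary ranges owned by `node`: one (prev_token, token]
    range per vnode token, wrapping around the ring for the first token."""
    tokens = sorted(t for t, _ in ring)
    ranges = []
    for t, n in ring:
        if n != node:
            continue
        i = tokens.index(t)
        prev = tokens[i - 1]  # index -1 wraps around the ring
        ranges.append((prev, t))
    return ranges

ring = [(0, "A"), (25, "B"), (50, "A"), (75, "B")]
print(primary_ranges(ring, "A"))  # [(75, 0), (25, 50)]
```

With vnodes, node A here owns two separate primary ranges; a -pr implementation that only repairs one range per node would miss the rest.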
I have the same problem!
2013/3/11 Alain RODRIGUEZ arodr...@gmail.com
OK, thanks for the answer.
I have created a bug report: number 5329.
2013/3/11 Sylvain Lebresne sylv...@datastax.com
There are some columns set with a TTL of X. After X, Cassandra will mark them as tombstones. Is there still a probability of running into the DistributedDeletes issue? I understand that "distributed deletes" is more applicable to application deletes?
TTL'd columns are first turned into
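As a rough timeline of that lifecycle (a simplified sketch of my own, assuming the usual model: a TTL'd column expires into a tombstone, which then waits out gc_grace_seconds before it can be purged):

```python
# Simplified TTL lifecycle sketch (assumption: ignores clock skew, repair,
# and per-sstable details). Times are in seconds.
def ttl_timeline(write_ts, ttl, gc_grace_seconds):
    """Return (expires_at, purgeable_at) for a column written at write_ts
    with the given TTL, under gc_grace_seconds."""
    expires_at = write_ts + ttl            # column becomes a tombstone here
    purgeable_at = expires_at + gc_grace_seconds  # tombstone may be GC'd here
    return expires_at, purgeable_at

# Written at t=1000 with a 1-hour TTL and the default 10-day gc_grace:
print(ttl_timeline(1000, 3600, 864000))  # (4600, 868600)
```

So the distributed-deletes concern still applies between expiry and purge: replicas must see the tombstone before it is garbage collected.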
What statement are you issuing ?
What have you tried ?
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 8/03/2013, at 5:49 AM, Adam Venturella aventure...@gmail.com wrote:
TL;DR:
Is it possible to use WHERE IN on
+1, we ran into the same issue. From the docs, nodetool drain shuts down ports, which then prevents nodetool from working, as it needs to talk to Cassandra through those ports.
I asked this same question 2 weeks ago but didn't get an answer. I am guessing, but 1.2.2 seems to fix this a little.
I'm trying to understand what will happen when we start deleting the old data.
Are you going to delete data or use the TTL?
With size-tiered compaction, suppose we have one 160 GB SSTable and some smaller SSTables totalling 40 GB.
Not sure on that, it depends on the work load.
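To see why the workload matters, here is a sketch of the size-tiered grouping idea (an illustrative simplification of "compact sstables of similar size together"; the real strategy's thresholds and options differ):

```python
# Toy size-tiered bucketing (assumption: simplified; real STCS uses
# bucket_low/bucket_high and min/max thresholds).
def buckets(sizes_gb, ratio=0.5):
    """Group sorted sstable sizes; a table joins the current bucket if it
    is within `ratio` of that bucket's average size."""
    out = []
    for s in sorted(sizes_gb):
        if out and s <= sum(out[-1]) / len(out[-1]) * (1 + ratio):
            out[-1].append(s)
        else:
            out.append([s])
    return out

print(buckets([160, 10, 10, 10, 10]))  # [[10, 10, 10, 10], [160]]
```

The 160 GB table lands in a bucket by itself, so it only gets compacted again once enough similarly sized tables accumulate; until then, deleted data inside it sticks around.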
Check that read_repair_chance on the CFs is 0.1, not the old 1.0.
Wait at least 10 minutes for the DynamicSnitch to re-calculate.
Use the org.apache.cassandra.db:type=DynamicEndpointSnitch MBean to see what
scores it has given the nodes.
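The idea behind those scores can be sketched like this (my own toy model, not the MBean's actual output: lower score means better recent latency, and the snitch routes reads to the best-scoring replicas first):

```python
# Toy dynamic-snitch ordering (assumption: scores are latency-based,
# lower is better; real scoring also includes a badness threshold).
def order_replicas(scores):
    """Return replica addresses ordered from best (lowest) score to worst."""
    return sorted(scores, key=scores.get)

scores = {"10.0.0.1": 1.2, "10.0.0.2": 9.7, "10.0.0.3": 0.8}
print(order_replicas(scores))  # ['10.0.0.3', '10.0.0.1', '10.0.0.2']
```

If one node's score is much worse than the others', it should mostly stop receiving reads once the snitch has recalculated.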
Cheers
-
Aaron Morton
Freelance
Drain stops listening for connections from clients and other nodes, and flushes all the data to disk. The purpose is to get everything into SSTables, so we do not want to process any more writes.
The error is logged at DEBUG as it's not important, just means a thread (the
processed gossip) was
Well, we finally have the dynamic snitch working. It seems to switch to a remote node, and SimpleStrategy cannot deal with that well. Also, we had to move to CL=LOCAL_QUORUM instead of QUORUM.
So for those of you that want cassandra to keep performing well when one node
starts to get
I keep seeing these in my log. Three-node cluster, one node is working fine,
but two other nodes have increased latencies and these in the error logs (might
of course be unrelated). No obvious GC pressure, no disk errors that I can see.
Ubuntu 12.04 on EC2, Java 7. Repair is run regularly.
Hi
I need a tutorial for deploying Hadoop+Cassandra on a single node.
Thanks
I have seen numerous posts on transferring data from MySql to Cassandra but
have yet to find a good way to transfer directly from a Microsoft SQL Server
table to a Cassandra CF. Even better would be a method to take as input the
output of an arbitrary SQL query. Ideas?
Hi there,
Check this out [1]. It's kinda old but I think it will help you get started.
Renato M.
[1] http://www.datastax.com/docs/0.7/map_reduce/hadoop_mr
2013/3/11 oualid ait wafli oualid.aitwa...@gmail.com:
Hi,
You may try Talend data integration suite.
Lohith
Original Message
Subject: Importing data from SQL Server
From: Kevin Burton rkevinbur...@charter.net
To: user@cassandra.apache.org user@cassandra.apache.org
CC:
I use Cassandra 1.2.2 and Hadoop 1.0.4
2013/3/11 Renato Marroquín Mogrovejo renatoj.marroq...@gmail.com
You can quickly create a query using dapper and then transfer all your rows
using cassandra-sharp actually. The only thing you have to do is to create a
class to materialize a row and obviously the SQL and CQL statements.
I also have a pending change for cassandra-sharp-contrib that I'm
Not familiar with 'dapper' or cassandra-sharp. Is there a step-by-step guide to this process, including the install? Thanks for the tip.
On Mar 11, 2013, at 9:41 AM, Pierre Chalamet pie...@chalamet.net wrote:
They mention Hadoop, HBase, and Hive. I am assuming that Cassandra comes under
the umbrella of 'NoSql databases'.
On Mar 11, 2013, at 9:33 AM, Lohith Samaga M lohith.sam...@mphasis.com
wrote:
Please check the bigdata package.
Lohith.
Sent from my Xperia™ smartphone
Where can I get the 'bigdata' package?
On Mar 11, 2013, at 10:01 AM, Lohith Samaga M lohith.sam...@mphasis.com
wrote:
I'm seeing this error on cassandra 1.2.2 on startup:
ERROR [COMMIT-LOG-ALLOCATOR] 2013-03-11 16:51:22,076 CassandraDaemon.java (line
132) Exception in thread Thread[COMMIT-LOG-ALLOCATOR,5,main]
FSWriteError in /var/lib/cassandra/commitlog/CommitLog-2-1363017061553.log
at
Caused by: java.io.FileNotFoundException: /var/lib/cassandra/commitlog/
CommitLog-2-1363017061553.log (Permission denied)
^ It seems like you're running Cassandra as a user that does not have access to this directory. Possibly you ran something as root at one point, and now the files are root-owned.
Always make sure root does not have java in its path so you don't make that mistake. At least that is how we handle it, so root never runs Cassandra; we just end up with "java not found".
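A quick way to diagnose this is to check whether the user running Cassandra can write the directories from the error. This is my own little helper (the paths are the ones from the stack trace above; adjust for your install):

```python
import os

# Diagnostic sketch (assumption: default package install paths).
def can_write(path):
    """True if `path` exists as a directory and the current user can write it."""
    return os.path.isdir(path) and os.access(path, os.W_OK)

for d in ["/var/lib/cassandra/commitlog", "/var/lib/cassandra/data"]:
    print(d, "writable" if can_write(d) else "NOT writable")
```

If a directory comes back NOT writable, chown it back to the cassandra user before restarting.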
Later,
Dean
On 3/11/13 9:57 AM, Marco Matarazzo marco.matara...@hexkeep.com wrote:
You should file a JIRA; if dsnitch only works with LOCAL_QUORUM, something is very wrong.
On Mon, Mar 11, 2013 at 9:58 AM, Hiller, Dean dean.hil...@nrel.gov wrote:
On 2013/03/11 14:42, aaron morton wrote:
I'm trying to understand what will happen when we start deleting the old data.
Are you going to delete data or use the TTL?
We delete the data explicitly, since we might change our minds about the data TTL after it has been written.
Hello!
Is it possible to use both IPv4 and IPv6 for a Cassandra cluster?
Cheers,
Ilya Shipitsin
You said all versions. However, when I try to access
cassandra://twissandra/users based on
http://www.datastax.com/docs/1.0/dml/using_cql I get :
2013-03-11 17:35:48,444 [main] INFO org.apache.pig.Main - Apache Pig version
0.11.0 (r1446324) compiled Feb 14 2013, 16:40:57
2013-03-11
Sorry, I was referring to the Talend Open Studio for BigData.
Lohith.
Sent from my Xperia™ smartphone
What kind of inserts and multiget queries are you running?
On Sun, Mar 10, 2013 at 1:16 PM, André Cruz andre.c...@co.sapo.pt wrote:
On 10/03/2013, at 16:49, Chuan-Heng Hsiao hsiao.chuanh...@gmail.com
wrote:
However, my guess is that Cassandra only guarantees that
if you successfully write
On Mon, Mar 11, 2013 at 11:25 AM, Flavio Baronti f.baro...@list-group.com wrote:
One more question. I read and reread your description of deletes [1], but I am still confused about tombstones and GCGraceSeconds, specifically when you say "If the deletion is before gcBefore it is totally ignored."
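That rule can be sketched as follows (a simplified model of my own, assuming the usual reading: at compaction time, gcBefore = now - gc_grace_seconds, and a tombstone older than gcBefore may be purged):

```python
# Simplified gcBefore check (assumption: ignores overlapping sstables and
# repair state; times are in seconds).
def purgeable(deletion_time, now, gc_grace_seconds):
    """True if a tombstone created at deletion_time can be dropped
    ("totally ignored") by a compaction running at `now`."""
    gc_before = now - gc_grace_seconds
    return deletion_time < gc_before

print(purgeable(100, 1000, 500))  # True: deleted before gcBefore = 500
print(purgeable(700, 1000, 500))  # False: still within gc_grace_seconds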
Ticket filed.
https://issues.apache.org/jira/browse/CASSANDRA-5333
Thanks,
Dean
From: Edward Capriolo edlinuxg...@gmail.com
Reply-To: user@cassandra.apache.org
Date: Monday,
Can you not set up VPN between your data centers?
On Mar 10, 2013, at 7:05 AM, Илья Шипицин wrote:
Hello!
Is it possible to run a cluster in 2 datacenters which are not routable?
Each datacenter runs its own LAN prefixes; however, the LANs are not routable across datacenters.
Cheers,
http://frommyworkshop.blogspot.ru/2012/07/single-node-hadoop-cassandra-pig-setup.html
On Mar 11, 2013, at 5:02 PM, Tyler Hobbs ty...@datastax.com wrote:
What kind of inserts and multiget queries are you running?
I use the ColumnFamily objects. The pool is initialised with
write_consistency_level=ConsistencyLevel.QUORUM.
The insert is a regular insert, so the QUORUM is used.
Hi,
I'd like to resurrect this thread from April 2012 -
http://cassandra-user-incubator-apache-org.3065146.n2.nabble.com/migrating-from-SimpleStrategy-to-NetworkTopologyStrategy-td7481090.html
- migrating from SimpleStrategy to NetworkTopologyStrategy
We're in a similar situation, and I'd like
On Mon, Mar 11, 2013 at 7:05 AM, Janne Jalkanen
janne.jalka...@ecyrd.com wrote:
I keep seeing these in my log. Three-node cluster, one node is working fine,
but two other nodes have increased latencies and these in the error logs
(might of course be unrelated). No obvious GC pressure, no
You may have bumped into this issue:
https://github.com/Netflix/Priam/issues/161
Make sure the is_replace_token Priam API call is working for you.
On Fri, Mar 8, 2013 at 8:22 AM, aaron morton aa...@thelastpickle.com wrote:
If it does not have the schema check the logs for errors and ensure it is
Probably also ensure port 7000 is reachable between the nodes.
Jason
On Tue, Mar 12, 2013 at 4:11 AM, Dane Miller d...@optimalsocial.com wrote:
I can see this problem resurfacing in Cassandra 1.1.9 on my system. I am using RHEL 6.0, and port 7199 can be seen bound by Cassandra in netstat. When I do telnet x.x.x.x 7199, that works too.
If the process was started with a JMX port property then you are not seeing
that error.
Can you show
Is this just a display bug in nodetool, or does this upgraded node really see the other ones as dead?
Is the 1.2.2 node which sees all the others as down processing requests?
Is it showing the others as down in the log?
I'm not really sure what's happening. But you can try starting the 1.2.2
It's a lot easier for people to help you if you state what the problem is and
what you have tried.
There is some information on the wiki
http://wiki.apache.org/cassandra/HadoopSupport
and some documentation on the data stax site
Any idea why the function loadFunc does not work correctly?
No sorry.
Not sure why you are linking to the CQL info or what Pig script / config you
are running.
Did you follow the example in the examples/pig in the source distribution ?
Also please use at least cassandra 1.1.
Cheers