While running nodetool repair, we are running into a
FileNotFoundException with a "too many open files" error. We increased the
ulimit value to 32768, and we still see this issue.
The number of files in the data directory is around 29,500+.
If we further increase the ulimit, would it
routing more traffic to it?
So shouldn't I see more network-in on that node in the AWS console?
It seems that each node is receiving and sending an equal amount of data.
What value should I use for dynamic_snitch_badness_threshold to give it a
try?
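For reference, the knob lives in cassandra.yaml alongside the other dynamic snitch settings. A sketch of the relevant section, with commonly cited defaults of that era (verify against your own cassandra.yaml, since defaults vary by version):

```yaml
# cassandra.yaml (sketch; values are the usual defaults, not a recommendation)
dynamic_snitch: true
# How much worse (as a ratio) a replica's measured latency must be before
# the dynamic snitch routes reads away from it. 0 means always chase the
# fastest replica; try small steps like 0.1-0.3 and watch the traffic shift.
dynamic_snitch_badness_threshold: 0.1
dynamic_snitch_update_interval_in_ms: 100
dynamic_snitch_reset_interval_in_ms: 600000
```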
On 20 Dec 2012 at 00:37, Bryan Talbot
This bug is fixed in 1.1.5
Andrey
On Thu, Dec 20, 2012 at 12:01 AM, santi kumar santi.ku...@gmail.com wrote:
While running nodetool repair, we are running into a
FileNotFoundException with a "too many open files" error. We increased the
ulimit value to 32768, and still we have seen this
I have a running cluster (3 nodes) with release version 1.2.0-beta2, and I've
successfully added/removed nodes to this cluster in the past.
I'm trying to add a new node to the cluster with release version 1.2 rc1,
but it seems like the other peers are refusing to connect; these are the
exceptions:
Can you please give more details about this bug? A bug ID or something?
Now if I want to upgrade, is there any specific process or best practices?
Thanks
Santi
On Thu, Dec 20, 2012 at 1:44 PM, Andrey Ilinykh ailin...@gmail.com wrote:
This bug is fixed in 1.1.5
Andrey
On Thu, Dec 20, 2012
Hello,
Thank you very much for your quick responses.
Initially we were thinking the same thing, that an explanation would
be that the wrong node could be down, but then isn't this something
that hinted handoff sorts out? So actually, Consistency Level refers
to the number of replicas, not the
Thanks for the clarification -- it doesn't sound too bad for my purposes.
As per your suggestion, I've created an issue:
https://issues.apache.org/jira/browse/CASSANDRA-5080
Best regards,
Sergey
aaron morton wrote
The following features will not be available in the cli:
* in describe
Don't run with a replication factor of 2, use 3 instead, and do all reads
and writes using quorum consistency.
That way, if a single node is down, all your operations will complete. In
fact, if every third node is down, you'll still be fine and able to handle
all requests.
However, if two
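The RF/consistency arithmetic above can be sketched in a few lines (plain Python, not Cassandra code):

```python
# Sketch: why RF=3 with QUORUM tolerates one replica down per key,
# while RF=2 with QUORUM tolerates none.
def quorum(rf):
    """Quorum is a strict majority of the replicas."""
    return rf // 2 + 1

def operation_succeeds(rf, replicas_up):
    """A QUORUM read or write needs at least quorum(rf) live replicas."""
    return replicas_up >= quorum(rf)

# RF=3: quorum is 2, so losing one of the three replicas is fine.
print(operation_succeeds(rf=3, replicas_up=2))  # True
# RF=2: quorum is also 2, so losing either replica fails the request.
print(operation_succeeds(rf=2, replicas_up=1))  # False
```

This is why RF=2 gives you the storage cost of replication without the availability benefit at QUORUM.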
On Thu, Dec 20, 2012 at 11:26 AM, Vasileios Vlachos
vasileiosvlac...@gmail.com wrote:
Initially we were thinking the same thing, that an explanation would
be that the wrong node could be down, but then isn't this something
that hinted handoff sorts out?
If a node is partitioned from the rest
The Cassandra team is pleased to announce the release of Apache Cassandra
version 1.1.8.
Cassandra is a highly scalable second-generation distributed database,
bringing together Dynamo's fully distributed design and Bigtable's
ColumnFamily-based data model. You can read more here:
Hi
I am working on options to bulk-load my sstables into Cassandra (1.1.6,
localhost).
Reference -
http://amilaparanawithana.blogspot.com/2012/06/bulk-loading-external-data-to-cassandra.html
Note - I am running Hadoop on Windows (standalone mode), trying to load my Hive
tables to
This is almost surely due to
https://issues.apache.org/jira/browse/CASSANDRA-4813 which slightly changed
the stream on-wire format. If you first upgrade the rest of your cluster to
rc1, you should be fine.
On Thu, Dec 20, 2012 at 9:33 AM, Omar Shibli o...@eyeviewdigital.com wrote:
I've a
On Thu, Dec 20, 2012 at 1:17 AM, santi kumar santi.ku...@gmail.com wrote:
Can you please give more details about this bug? bug id or something
https://issues.apache.org/jira/browse/CASSANDRA-4571
Now if I want to upgrade, is there any specific process or best practices.
migration from 1.1.4
Hi,
The directory argument should contain the entire path to the sstables location,
e.g. 'C:\Anand\Workspace\H2C_POC\Customer\<column family name>'.
I assume Customer is the keyspace.
Hope it helps.
thanks
pradeep
On Thu, Dec 20, 2012 at 6:15 AM, anand_balara...@homedepot.com wrote:
Hi
I am
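For what it's worth, a sketch of the directory layout sstableloader expects; the /tmp path, keyspace "Customer", and column family "Users" below are made up for illustration:

```shell
# sstableloader wants the last two path components to be
# <keyspace>/<column family>, and -d takes any live node in the target cluster.
mkdir -p /tmp/bulkload/Customer/Users                 # keyspace / column family
touch /tmp/bulkload/Customer/Users/Customer-Users-hd-1-Data.db  # stand-in sstable
# Then point the loader at the column-family directory, without quotes
# (commented out here since it needs a running cluster):
# sstableloader -d 127.0.0.1 /tmp/bulkload/Customer/Users
ls /tmp/bulkload/Customer/Users
```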
Hi,
Let's imagine a cluster of 6 nodes, 5 on rack1 and 1 on rack2.
With RF=3 and NetworkTopologyStrategy, the first replica per data center is
placed according to the partitioner (same as with SimpleStrategy). Additional
replicas in the same data center are then determined by walking the ring
On Thu, Dec 20, 2012 at 10:18 AM, DE VITO Dominique
dominique.dev...@thalesgroup.com wrote:
With RF=3 and NetworkTopologyStrategy, the first replica per data center is
placed according to the partitioner (same as with SimpleStrategy). Additional
replicas in the same data center are then
Yes, but they will get compacted away again unless the patch is there.
It's a small patch, so you should be able to apply it easily enough if you need
a fix ASAP.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On
On Wed, Dec 19, 2012 at 4:20 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
What? I thought Cassandra was using NIO, so thread-per-connection is not
true?
Here's the monkey test I used to verify my conjecture.
1) ps -eLf | grep jsvc | grep cassandra | wc -l  # note the number of threads
2) for name
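A self-contained toy (Python stdlib, not Cassandra or Thrift) showing the same effect the monkey test checks for: a synchronous server holds one thread per open connection, even an idle one.

```python
# Toy illustration: socketserver.ThreadingTCPServer spawns one handler
# thread per connection, like thrift's "sync" server; an nio/hsha server
# would multiplex idle connections instead.
import socket
import socketserver
import threading
import time

class Hold(socketserver.BaseRequestHandler):
    def handle(self):
        # Block until the client closes, keeping the handler thread alive.
        self.request.recv(1024)

server = socketserver.ThreadingTCPServer(("127.0.0.1", 0), Hold)
threading.Thread(target=server.serve_forever, daemon=True).start()

before = threading.active_count()
conns = [socket.create_connection(server.server_address) for _ in range(5)]
time.sleep(1.0)  # let the handler threads start
after = threading.active_count()
print(after - before)  # one new thread per idle connection

for c in conns:
    c.close()
server.shutdown()
```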
Hi
Yes, Customer is the keyspace.
I tried giving the column family name as well and I get the same error.
I also tried changing the slashes from '\' to '/' and '\\'.
Any other ideas?
Thanks
Anand B
-Original Message-
From: Pradeep Kumar Mantha [mailto:pradeep...@gmail.com]
Sent: Thursday,
Sounds about right; I've done similar things before.
Some notes…
* I would make sure repair has completed on the source cluster before making
changes, just so I know the data is fully distributed. I would also do it once all
the moves are done.
* Rather than flush, take a snapshot and copy from
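The snapshot-then-copy flow can be sketched as below; all paths and the fake sstable name are hypothetical, and the nodetool lines are commented out because they need a live node:

```shell
# nodetool repair             # ensure data is fully distributed first
# nodetool snapshot -t move   # hard-links sstables under .../snapshots/move
SRC=/tmp/demo/data/ks/cf                 # stand-in for the source data dir
mkdir -p "$SRC/snapshots/move"
touch "$SRC/snapshots/move/ks-cf-hd-1-Data.db"   # fake sstable for the demo
DEST=/tmp/demo/dest/ks/cf
mkdir -p "$DEST"
# Copy from the snapshot, not the live directory, so compaction can't
# remove files out from under you mid-copy:
cp "$SRC/snapshots/move/"*.db "$DEST/"
ls "$DEST"
```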
The number of files in the data directory is around 29,500+.
If you are using Levelled Compaction it is probably easier to set the ulimit to
unlimited.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 21/12/2012, at
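A quick way to inspect the limit and, commented, how a persistent raise is usually done. Note that on Linux the nofile limit cannot actually be "unlimited", so a very large number is the usual spelling; the cassandra user name and values below are assumptions:

```shell
# Check the open-files limits for the current shell:
ulimit -n    # current soft limit
ulimit -Hn   # current hard limit
# To raise it for the cassandra user, add to /etc/security/limits.conf:
#   cassandra  soft  nofile  1048576
#   cassandra  hard  nofile  1048576
# then restart Cassandra and confirm the running process picked it up:
#   cat /proc/<cassandra pid>/limits
```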
In the case without CQL3, where I would use composite columns, I see how
this sort of lines up with what CQL3 is doing.
I don't have the ability to use CQL3 as I am using pycassa for my client,
so that leaves me with CompositeColumns
Under composite columns, I would have 1 row, which would be
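A plain-Python sketch (no pycassa, hypothetical row contents) of the one-wide-row layout: composite column names behave like tuples that sort component by component, so all columns for one logical entity form a contiguous, sliceable range.

```python
# One storage-engine row: composite column name (a tuple) -> value.
row = {}

def insert(name, value):
    row[name] = value

insert(("user2", "email"), "b@example.com")
insert(("user1", "name"), "Alice")
insert(("user1", "email"), "a@example.com")

# Columns are stored sorted by the composite name, so a slice for "user1"
# is a contiguous range -- which is what makes this layout efficient:
ordered = sorted(row)
user1 = [n for n in ordered if n[0] == "user1"]
print(ordered)
print(user1)
```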
On Thu, Dec 20, 2012 at 12:41 PM, Rob Coli rc...@palominodb.com wrote:
So, by default Cassandra does in fact use one thread per thrift connection.
Also of note is that even with hsha, an *active* connection (where the
synchronous storage backend is doing something) consumes a thread.
Some more
this is actually what is happening, how is it possible to ever have a
node-failure-resilient Cassandra cluster?
Background http://thelastpickle.com/2011/06/13/Down-For-Me/
I would suggest double-checking your test setup; also, make sure you
use the same row keys every time (if this is not
So, if I understand correctly, the data of rack1's 5 nodes will be replicated
on the single node of rack2.
And then, the node of rack2 will host all the data of the cluster.
Yup.
To get RF 3, NTS will place a replica in rack 1, then one in rack 2, and then one
in rack 1.
If you are using
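A sketch of that rack-aware walk (hypothetical node names, simplified from the real NetworkTopologyStrategy): take nodes in ring order from the token, preferring racks not yet represented, then fall back to the skipped nodes. With 5 nodes in rack1 and 1 in rack2, the lone rack2 node lands in every replica set.

```python
# Simplified NetworkTopologyStrategy placement for a single data center.
def nts_replicas(ring, racks, start, rf):
    replicas, seen_racks, skipped = [], set(), []
    n = len(ring)
    for i in range(n):
        node = ring[(start + i) % n]
        if racks[node] not in seen_racks:
            replicas.append(node)          # new rack: take this node
            seen_racks.add(racks[node])
        else:
            skipped.append(node)           # rack already represented
        if len(replicas) == rf:
            return replicas
    # Fewer distinct racks than rf: fill from skipped nodes in ring order.
    return replicas + skipped[: rf - len(replicas)]

ring = ["n1", "n2", "n3", "n4", "n5", "n6"]       # ring order by token
racks = {f"n{i}": "rack1" for i in range(1, 6)}
racks["n6"] = "rack2"                             # the lone rack2 node

# No matter where the key's token lands, n6 is always a replica:
for start in range(len(ring)):
    assert "n6" in nts_replicas(ring, racks, start, rf=3)
print("n6 (rack2) is in every replica set")
```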
-d '127.0.0.1' 'C:\Anand\Workspace\H2C_POC\Customer'
Are you using the quotes on the command line / in the arguments ?
Try without them.
I end up getting an Unknown directory: 'C:\Anand\Workspace\H2C_POC\Customer'
error.
What's the full error stack?
Cheers
-
Aaron Morton
I tried without quotes as well; the error still persists.
And I am using them in the Run Configuration arguments of an Eclipse Java project.
When running it as a Java application, it provides only a one-line error. The rest
are just options to be passed as arguments, such as host name, port, and
directory