I had run into the same problem before:
http://comments.gmane.org/gmane.comp.db.cassandra.user/25334
I have not found any solutions yet.
Bill
On Mon, Jul 16, 2012 at 11:10 AM, Bart Swedrowski b...@timedout.org wrote:
On 16 July 2012 11:25, aaron morton aa...@thelastpickle.com wrote:
[…] (Does not check rack again.)
You should be able to move one node at a time and run repair. Also ensure
reads are at QUORUM.
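Roughly, the per-node sequence would look like the following (the token value
and host are placeholders, so treat this as a sketch rather than an exact
recipe):

    nodetool -h <host> move <new_token>   # relocate the node to its new token
    nodetool -h <host> repair             # repair it before touching the next node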
hope that helps.
-
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 10/05/2012, at 2:08 AM, Bill Au wrote:
My cluster is currently running with 2 data centers, dc1 and dc2. I would
like to remove dc2 and all its nodes completely. I am using local quorum
for read and write. I figure that I need to change the replication factor
to {dc1:3, dc2:0} before running nodetool decommission on each node in dc2.
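For what it's worth, on 1.0.x I would expect that change to go through
cassandra-cli; roughly like this (syntax from memory, and "my_ks" is a
stand-in for the real keyspace name, so double-check against your version):

    update keyspace my_ks with strategy_options = {dc1:3, dc2:0};

    # then, on each dc2 node in turn:
    nodetool -h <dc2_host> decommission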
[…] DC setup? Are you seeing a lot of dropped Mutations/Messages? Are the
nodes going up and down all the time while the repair is running?
Regards,
/VJ
On Tue, May 8, 2012 at 2:05 PM, Bill Au bill.w...@gmail.com wrote:
There are no error messages in my log.
I ended up restarting all the nodes.
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 8/05/2012, at 2:15 PM, Ben Coverston wrote:
Check the log files for warnings or errors. They may indicate why your
repair failed.
On Mon, May 7, 2012 at 10:09 AM, Bill Au bill.w...@gmail.com wrote:
I restarted the nodes and then restarted the repair. It is still hanging
like before. Do I keep repeating until the repair actually finishes?
Bill
On Fri, May 4, 2012 at 2:18 PM, Rob Coli rc...@palominodb.com wrote:
On Fri, May 4, 2012 at 10:30 AM, Bill Au bill.w...@gmail.com wrote:
I know repair may take a long time to run. I am running repair on a node
with about 15 GB of data and it is taking more than 24 hours. Is that
normal? Is there any way to get status of the repair? tpstats does show 2
active and 2 pending AntiEntropySessions, but netstats and compactionstats […]

[To stop] a repair we have bounced all of the participating nodes. I've
been told that there is no harm in stopping repairs.
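For anyone else trying to watch a long-running repair, these are the
standard nodetool views of it:

    nodetool -h <host> tpstats           # AntiEntropySessions active/pending counts
    nodetool -h <host> netstats          # streams currently in flight between nodes
    nodetool -h <host> compactionstats   # validation compactions kicked off by repair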
On Apr 24, 2012, at 2:55 PM, Bill Au wrote:
I am running 1.0.8. I am adding a new data center to an existing cluster.
Following steps outlined in another thread on the mailing list, things went
fine except for the last step, which is to run repair on all the nodes in
the new data center. Repair seems to be hanging indefinitely. There is […]
I just followed the steps outlined in this email thread to add a second data
center to my existing cluster. I am running 1.0.8. Each data center has a
replication factor of 2. I am using local quorum for read and write.
Everything went smoothly until I ran the last step, which is to run repair
on the nodes in the new data center.
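To be concrete, that last step is just the stock repair invocation, run on
each node in the new data center, one at a time:

    nodetool -h <new_dc_host> repair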
All the examples of cassandra-topology.properties that I have seen have a
default entry assigning unknown nodes to a specific data center and rack.
Is it possible to have Cassandra ignore unknown nodes for the purpose of
replication?
Bill
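For context, the files I am talking about look something like this
(addresses and names here are placeholders):

    # cassandra-topology.properties, as read by PropertyFileSnitch
    192.168.1.101=dc1:rac1
    192.168.1.102=dc1:rac2
    # any node not listed above falls through to this entry:
    default=dc1:rac1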
[…] with this, but you should make sure that a node really doesn’t need to
contact the unknown nodes before marking them as such.
Richard
From: Bill Au [mailto:bill.w...@gmail.com]
Sent: 19 April 2012 17:16
To: user@cassandra.apache.org
Subject: default required
Thanks for the info.
Upgrade within the 1.0.x branch is simply a rolling restart, right?
Bill
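My understanding of the per-node sequence (general practice, not something
taken from the changelog):

    nodetool -h <host> drain   # flush memtables; the node stops accepting writes
    # stop cassandra, install the new 1.0.x build, start cassandra again,
    # and wait until the node is Up/Normal in "nodetool ring" before moving on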
On Thu, Feb 16, 2012 at 9:20 PM, Jonathan Ellis jbel...@gmail.com wrote:
CASSANDRA-3496, fixed in 1.0.4+
On Thu, Feb 16, 2012 at 8:27 AM, Bill Au bill.w...@gmail.com wrote:
I am running 1.0.2 with the default tiered compaction. After running a
nodetool compact, I noticed that on about half of the machines in my
cluster, both nodetool ring and nodetool info report that the load is
actually higher than before when I expect it to be lower. It is almost
twice as much.
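For reference, the load numbers I am comparing come from:

    nodetool -h <host> ring   # "Load" column for every node in the cluster
    nodetool -h <host> info   # "Load" line for the local node only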
Aaron Morton
Freelance Developer
@aaronmorton
http://www.thelastpickle.com
On 17/02/2012, at 3:27 AM, Bill Au wrote:
I am running 1.0.2 with the default tiered compaction. After running a
nodetool compact, I noticed that on about half of the machines in my
cluster, both nodetool ring and nodetool info report […]
On […], Bill Au bill.w...@gmail.com wrote:
One of my Cassandra servers crashed with the following:
ERROR [ACCEPT-xxx.xxx.xxx/nnn.nnn.nnn.nnn] 2010-10-19 00:25:10,419
CassandraDaemon.java (line 82) Uncaught exception in thread
Thread[ACCEPT-xxx.xxx.xxx/nnn.nnn.nnn.nnn,5,main]
java.lang.OutOfMemoryError: unable to create new native thread
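In case it helps others: that particular OutOfMemoryError normally points at
an OS-level limit rather than heap exhaustion, so the usual things to check
(general JVM troubleshooting, not specific to this crash) are:

    ulimit -u                         # per-user process/thread limit
    grep Threads /proc/<pid>/status   # threads currently held by the JVM
    # reducing the -Xss thread stack size in cassandra-env.sh also leaves
    # room for more threads in the same address space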
http://issues.apache.org/jira/browse/CASSANDRA-699 :)
/advertising
--
Sylvain
On Fri, Mar 12, 2010 at 8:28 AM, Mark Robson mar...@gmail.com wrote:
On 12 March 2010 03:34, Bill Au bill.w...@gmail.com wrote:
Let's take Twitter as an example. All the tweets are timestamped. I want […]