> topology, version)?
>
> On Thu, Feb 4, 2016 at 2:06 PM, Flavien Charlon <flavien.char...@gmail.com> wrote:
>
>> I'm using the C# driver 2.5.2. I did try to restart the client
>> application, but that didn't make any difference, I still get the same
Yes, all three nodes see all three nodes as UN.
Also, connecting from a local Cassandra machine using cqlsh, I can run the
same query just fine (with QUORUM consistency level).
On 4 February 2016 at 21:02, Robert Coli <rc...@eventbrite.com> wrote:
> On Thu, Feb 4, 2016 at 12:53 PM
> On Feb 4, 2016, at 6:32 PM, Flavien Charlon <flavien.char...@gmail.com>
> wrote:
>
> No, there was no other change. I did run "apt-get upgrade" before
> rebooting, but Cassandra has not been upgraded.
>
> On 4 February 2016 at 22:48, Bryan Cheng <br...@bloc
Hi,
My cluster was running fine. I rebooted all three nodes (one by one), and
now all nodes are back up and running. "nodetool status" shows UP for all
three nodes on all three nodes:
--  Address      Load       Tokens  Owns  Host ID  Rack
UN  xx.xx.xx.xx  331.84 GB  1       ?
Hi,
What is the process to re-bootstrap a node after hard drive failure
(Cassandra 2.1.3)?
This is the same node as previously, but the data folder has been wiped,
and I would like to re-bootstrap it from the data stored on the other nodes
of the cluster (I have RF=3).
I am not using vnodes.
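A hedged sketch of the standard dead-node replacement procedure for this situation (Cassandra 2.1, single token per node, RF=3). The directory paths and service command are assumptions for a typical package install, and xx.xx.xx.xx stands for the node's own address, since the same host is re-joining:

```shell
# 1. Make sure the data, commitlog and saved_caches directories are empty
#    (the data folder was already wiped after the drive failure).
sudo rm -rf /var/lib/cassandra/data/* \
            /var/lib/cassandra/commitlog/* \
            /var/lib/cassandra/saved_caches/*

# 2. Tell the node to take over the token of the dead node. Since this is
#    the same machine re-joining, replace_address is its own old IP.
echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=xx.xx.xx.xx"' \
    | sudo tee -a /etc/cassandra/cassandra-env.sh

# 3. Start Cassandra; it streams its range back from the surviving replicas.
sudo service cassandra start

# 4. Once "nodetool status" shows the node as UN again, remove the
#    replace_address line so that later restarts behave normally.
```
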
I already experienced the same problem (hundreds of thousands of SSTables)
with Cassandra 2.1.2. It seems to appear when running an incremental repair
while there is a medium to high insert load on the cluster. The repair goes
into a bad state and starts creating far more SSTables than it should
Did you run incremental repair? Incremental repair is broken in 2.1 and
tends to create way too many SSTables.
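For reference, which repair mode actually runs depends on both the flags and the Cassandra version; a hedged sketch (the keyspace name is a placeholder):

```shell
# On 2.1, plain "nodetool repair" is a full (non-incremental) repair;
# incremental repair only runs when -inc is passed explicitly.
nodetool repair -par my_keyspace    # parallel full repair on 2.1

# From 2.2 onwards, incremental became the default, so a full repair
# must be requested explicitly:
nodetool repair -full my_keyspace
```
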
On 2 February 2015 at 18:05, 曹志富 cao.zh...@gmail.com wrote:
Hi,all:
I have an 18-node C* cluster running Cassandra 2.1.2. Some nodes have about
40,000+ SSTables.
My compaction strategy is
I don't think you can do nodetool repair on a single-node cluster.
Still, one day or another you'll have to reboot your server, at which point
your cluster will be down. If you want high availability, you should use a
three-node cluster with RF = 3.
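The arithmetic behind that advice can be sketched as follows (illustration only, not driver code):

```python
# With a single node there is no peer to repair against, and any reboot is
# an outage. With RF=3, QUORUM requests keep succeeding while one node is
# down, so rolling reboots are safe.

def quorum(rf: int) -> int:
    """Replicas that must answer for a QUORUM read or write."""
    return rf // 2 + 1

def tolerated_failures(rf: int) -> int:
    """Nodes that can be down while QUORUM still succeeds."""
    return rf - quorum(rf)

print(quorum(1), tolerated_failures(1))  # 1 0 -> rebooting the node = downtime
print(quorum(3), tolerated_failures(3))  # 2 1 -> one node can reboot safely
```
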
On 22 January 2015 at 18:10, Robert Coli
At the
same time in a different thread, Roland had a similar problem, and Tyler's
proposed workaround seemed to work for him.
On Tue, Jan 20, 2015 at 3:35 PM, Robert Coli rc...@eventbrite.com wrote:
On Sun, Jan 18, 2015 at 6:06 PM, Flavien Charlon
flavien.char...@gmail.com wrote:
It's set on all
, 2015 at 5:44 PM, Flavien Charlon
flavien.char...@gmail.com wrote:
Thanks Andi. The reason I was asking is that even though my nodes have
been 100% available and no write has been rejected, when running an
incremental repair, the logs still indicate that some ranges are out of
sync (which
/documentation/cassandra/2.0/cassandra/dml/dmlClientRequestsRead.html
)
- nodetool repair (Manually
http://www.datastax.com/documentation/cassandra/2.0/cassandra/operations/ops_repair_nodes_c.html
)
Regards
Andi
--
*From:* Flavien Charlon [flavien.char...@gmail.com
Hi,
When writing to Cassandra using CL = Quorum (or anything less than ALL), is
it correct to say that Cassandra tries to write to all the replicas, but
only waits for Quorum?
If so, what can cause some replica to become out of sync when they're all
online?
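Sketching the question in code (illustration only, not Cassandra's implementation): the coordinator sends every write to all RF replicas but only waits for a quorum of acks, so a replica that is briefly down or overloaded can legally miss a write. Correctness is preserved because quorum read and write sets must overlap:

```python
def quorum(rf: int) -> int:
    return rf // 2 + 1

def overlap_guaranteed(rf: int, w: int, r: int) -> bool:
    """True if any read set must intersect any write set (W + R > RF)."""
    return w + r > rf

rf = 3
w = r = quorum(rf)                    # QUORUM on both paths
assert overlap_guaranteed(rf, w, r)   # 2 + 2 > 3: reads see the latest write

# But only w of rf replicas must confirm, so up to rf - w replicas may be
# stale until anti-entropy (hints, read repair, nodetool repair) catches up.
print(f"RF={rf}: write acked by {w}/{rf}, up to {rf - w} replica(s) stale")
```
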
Thanks
Flavien
Thanks Roland. Good to know, I will try that. Do you know the JIRA ticket
number of that bug?
Thanks,
Flavien
On 19 January 2015 at 06:15, Roland Etzenhammer r.etzenham...@t-online.de
wrote:
Hi Flavien,
I hit some problems with minor compactions recently (just some days ago) -
but with many
--
Andy Yan @yanchao727
Twitter: https://twitter.com/yanchao727
Weibo: http://weibo.com/herewearenow
2015-01-19 3:51 GMT+08:00 Flavien Charlon flavien.char...@gmail.com:
Hi,
I am using Size Tiered Compaction (Cassandra 2.1.2). Minor compaction is not
triggering even though it should. See the SSTables on disk:
http://pastebin.com/PSwZ5mrT
You can see that we have 41 SSTables between 60 MB and 85 MB, which should
trigger compaction unless I am missing something.
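For comparison, size-tiered bucketing can be sketched like this (a simplified illustration using the documented STCS defaults bucket_low=0.5, bucket_high=1.5, min_threshold=4 — not Cassandra's actual code). Forty-one SSTables of 60-85 MB all land in one bucket, far past min_threshold, so a minor compaction should indeed be eligible:

```python
def stcs_buckets(sizes_mb, bucket_low=0.5, bucket_high=1.5):
    """Group SSTable sizes into buckets of similar average size."""
    buckets = []  # each bucket: (running average, member sizes)
    for size in sorted(sizes_mb):
        for i, (avg, members) in enumerate(buckets):
            if bucket_low * avg <= size <= bucket_high * avg:
                members.append(size)
                buckets[i] = (sum(members) / len(members), members)
                break
        else:
            buckets.append((size, [size]))
    return [members for _, members in buckets]

sizes = [60 + (i * 25) // 40 for i in range(41)]          # 41 tables, 60-85 MB
eligible = [b for b in stcs_buckets(sizes) if len(b) >= 4]  # min_threshold=4
print(len(eligible), len(eligible[0]))  # 1 41: one bucket holding all 41
```
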
Is