Unsubscribe

2023-08-16 Thread Mark Furlong
Please unsubscribe from this list


Thanks
Mark Furlong
Sr. Database Administrator
mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043



TTL on UDT

2019-12-03 Thread Mark Furlong
When I run the command 'select ttl(udt_field) from table;' I get the error 
'InvalidRequest: Error from server: code=2200 [Invalid query] message="Cannot 
use selection function ttl on collections"'. How can I get the TTL from a UDT 
field?
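
[Editor's note: ttl() — like writetime() — can only read single-cell columns, 
and a non-frozen UDT is stored as multiple cells, hence the error. A minimal 
cqlsh sketch of one common workaround: read the TTL from a scalar column 
written in the same statement, since columns written together carry the same 
TTL. The schema and names below are illustrative assumptions, not the 
poster's.]

    -- hypothetical schema for illustration
    CREATE TYPE ks.some_udt (a int, b text);
    CREATE TABLE ks.t (id int PRIMARY KEY, udt_field some_udt, marker int);
    -- write the UDT and a scalar marker together so they share a TTL
    INSERT INTO ks.t (id, udt_field, marker) VALUES (1, {a: 1, b: 'x'}, 0) USING TTL 86400;
    -- ttl() is rejected on the multi-cell UDT column, but works on the scalar
    SELECT ttl(marker) FROM ks.t WHERE id = 1;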

Mark Furlong

Alter table

2018-12-17 Thread Mark Furlong
Why would I want to use alter table vs upserts with the new document format?
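
[Editor's note: the question is terse; assuming "the new document format" 
means keeping semi-structured fields in a collection rather than as dedicated 
columns, a cqlsh sketch of the two approaches being weighed — the table and 
column names are illustrative:]

    -- schema evolution via ALTER TABLE: a real, typed column the server knows about
    ALTER TABLE ks.users ADD last_login timestamp;
    -- vs. upserting ad-hoc fields into a map, with no further schema changes
    ALTER TABLE ks.users ADD attrs map<text, text>;   -- one-time setup
    UPDATE ks.users SET attrs['last_login'] = '2018-12-17' WHERE id = 1;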

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043


RE: Node failure

2017-10-06 Thread Mark Furlong
I’ll check to see what our app is using.

Thanks
Mark
801-705-7115 office

From: Steinmaurer, Thomas [mailto:thomas.steinmau...@dynatrace.com]
Sent: Friday, October 6, 2017 12:25 PM
To: user@cassandra.apache.org
Subject: RE: Node failure

QUORUM should succeed with RF=3 and 2 of 3 nodes available.

Modern client drivers also have ways to “downgrade” the CL of requests, in case 
they fail. E.g. for the Java driver: 
http://docs.datastax.com/en/latest-java-driver-api/com/datastax/driver/core/policies/DowngradingConsistencyRetryPolicy.html


Thomas

From: Mark Furlong [mailto:mfurl...@ancestry.com]
Sent: Friday, October 6, 2017 19:43
To: user@cassandra.apache.org
Subject: RE: Node failure

Thanks for the detail. I’ll have to remove and then add one back in. It’s my 
consistency levels that may bite me in the interim.

Thanks
Mark
801-705-7115 office

From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Friday, October 6, 2017 11:29 AM
To: cassandra <user@cassandra.apache.org>
Subject: Re: Node failure

There's a lot to talk about here, what's your exact question?


- You can either remove it from the cluster or replace it. You typically remove 
it if it'll never be replaced, but with RF=3 and 3 nodes, you probably need to 
replace it. To replace, you'll start a new server with 
-Dcassandra.replace_address=a.b.c.d 
(http://cassandra.apache.org/doc/latest/operating/topo_changes.html#replacing-a-dead-node), 
and it'll stream data from the neighbors and eventually replace the dead node 
in the ring (the dead node will be removed from 'nodetool status', and the new 
node will be there instead).
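
[Editor's note: a minimal shell sketch of the replacement path described 
above, assuming a package install where JVM options live in cassandra-env.sh; 
the paths and a.b.c.d are illustrative:]

    # on a fresh host running the same Cassandra version, with empty data directories
    echo 'JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=a.b.c.d"' | sudo tee -a /etc/cassandra/cassandra-env.sh
    sudo service cassandra start
    nodetool netstats      # watch it stream data from the neighbors
    nodetool status        # the dead node disappears once the replacement joins
    # remove the replace_address option again after the node has joined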

- If you're not going to replace it, things get a bit more complex - you'll do 
some combination of repair, 'nodetool removenode' or 'nodetool assassinate', 
and ALTERing the keyspace to set RF=2. The order matters, and so does the 
consistency level you use for reads/writes (so we can tell you whether or not 
you're likely to lose data in this process), so I'm not giving step-by-steps 
here because it's not very straightforward and there are a lot of caveats.
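
[Editor's note: purely to name the commands Jeff lists for the second path — 
his warning about ordering and consistency levels stands, so this is the 
pieces, not a recipe; keyspace, DC, and host ID are illustrative:]

    nodetool status                # note the dead node's Host ID
    nodetool removenode <host-id>  # purge the dead node; re-replicates its ranges
    nodetool assassinate a.b.c.d   # last resort if removenode hangs
    nodetool repair -pr            # on each remaining node
    cqlsh -e "ALTER KEYSPACE ks WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 2};"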




On Fri, Oct 6, 2017 at 10:20 AM, Mark Furlong 
<mfurl...@ancestry.com> wrote:
What happens when I have a 3 node cluster with RF 3 and a node fails that needs 
to be removed?

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043


RE: Node failure

2017-10-06 Thread Mark Furlong
We are using quorum on our reads and writes.

Thanks
Mark
801-705-7115 office

From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Friday, October 6, 2017 11:30 AM
To: cassandra <user@cassandra.apache.org>
Subject: Re: Node failure

If you write with CL:ANY, CL:ONE (or LOCAL_ONE), and one node fails, you may 
lose data that hasn't made it to other nodes.


On Fri, Oct 6, 2017 at 10:28 AM, Mark Furlong 
<mfurl...@ancestry.com> wrote:
The only time I’ll have a problem is if I have a do a read all or write all. 
Any other gotchas I should be aware of?

Thanks
Mark
801-705-7115<tel:(801)%20705-7115> office

From: Akshit Jain [mailto:akshit13...@iiitd.ac.in]
Sent: Friday, October 6, 2017 11:25 AM
To: user@cassandra.apache.org
Subject: Re: Node failure

You replace it with a new node and bootstrapping happens. The new node receives 
data from the other two nodes.
The rest depends on the scenario you are asking for.

Regards
Akshit Jain
B-Tech,2013124
9891724697

On Fri, Oct 6, 2017 at 10:50 PM, Mark Furlong 
<mfurl...@ancestry.com> wrote:
What happens when I have a 3 node cluster with RF 3 and a node fails that needs 
to be removed?

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

RE: Node failure

2017-10-06 Thread Mark Furlong
Thanks for the detail. I’ll have to remove and then add one back in. It’s my 
consistency levels that may bite me in the interim.

Thanks
Mark
801-705-7115 office

From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Friday, October 6, 2017 11:29 AM
To: cassandra <user@cassandra.apache.org>
Subject: Re: Node failure

There's a lot to talk about here, what's your exact question?


- You can either remove it from the cluster or replace it. You typically remove 
it if it'll never be replaced, but with RF=3 and 3 nodes, you probably need to 
replace it. To replace, you'll start a new server with 
-Dcassandra.replace_address=a.b.c.d 
(http://cassandra.apache.org/doc/latest/operating/topo_changes.html#replacing-a-dead-node), 
and it'll stream data from the neighbors and eventually replace the dead node 
in the ring (the dead node will be removed from 'nodetool status', and the new 
node will be there instead).

- If you're not going to replace it, things get a bit more complex - you'll do 
some combination of repair, 'nodetool removenode' or 'nodetool assassinate', 
and ALTERing the keyspace to set RF=2. The order matters, and so does the 
consistency level you use for reads/writes (so we can tell you whether or not 
you're likely to lose data in this process), so I'm not giving step-by-steps 
here because it's not very straightforward and there are a lot of caveats.




On Fri, Oct 6, 2017 at 10:20 AM, Mark Furlong 
<mfurl...@ancestry.com> wrote:
What happens when I have a 3 node cluster with RF 3 and a node fails that needs 
to be removed?

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

RE: Node failure

2017-10-06 Thread Mark Furlong
The only time I’ll have a problem is if I have a do a read all or write all. 
Any other gotchas I should be aware of?

Thanks
Mark
801-705-7115 office

From: Akshit Jain [mailto:akshit13...@iiitd.ac.in]
Sent: Friday, October 6, 2017 11:25 AM
To: user@cassandra.apache.org
Subject: Re: Node failure

You replace it with a new node and bootstrapping happens. The new node receives 
data from the other two nodes.
The rest depends on the scenario you are asking for.

Regards
Akshit Jain
B-Tech,2013124
9891724697

On Fri, Oct 6, 2017 at 10:50 PM, Mark Furlong 
<mfurl...@ancestry.com> wrote:
What happens when I have a 3 node cluster with RF 3 and a node fails that needs 
to be removed?

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

Node failure

2017-10-06 Thread Mark Furlong
What happens when I have a 3 node cluster with RF 3 and a node fails that needs 
to be removed?

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

RE: Cassandra downgrade of 2.1.15 to 2.1.12

2017-09-12 Thread Mark Furlong
Great information. 

Thank you
Mark
801-705-7115 office

-Original Message-
From: Michael Shuler [mailto:mshu...@pbandjelly.org] On Behalf Of Michael Shuler
Sent: Monday, September 11, 2017 5:54 PM
To: user@cassandra.apache.org
Subject: Re: Cassandra downgrade of 2.1.15 to 2.1.12

On 09/11/2017 06:29 PM, Mark Furlong wrote:
> I have a requirement to test a downgrade of 2.1.15 to 2.1.12. Can 
> someone please identify how to achieve this?

Downgrades have never been officially supported, but this is a relatively small 
step. Testing it out is definitely a good thing. Since protocols and on-disk 
sstable versions should be the same, I'd say work backwards through NEWS.txt 
and see what you think about how it affects your specific usage. I'd also be 
wary of the fixed bugs you will re-introduce on downgrade (CHANGES.txt).

https://github.com/apache/cassandra/blob/cassandra-2.1.15/NEWS.txt#L16-L44
https://github.com/apache/cassandra/blob/cassandra-2.1.15/CHANGES.txt#L1-L100

As for the actual software downgrade, it depends on install method.
`wget` the 2.1.12 tar or deb files and `tar -xzvf` or `dpkg -i` them.
Here's where you can find the old versions of artifacts:

tar:
http://archive.apache.org/dist/cassandra/2.1.12/
deb:
http://archive.apache.org/dist/cassandra/debian/pool/main/c/cassandra/

This definitely would not work on a major release downgrade like 2.2.x to 
2.1.x, since the sstable versions would be different, but in your
2.1.15 to 2.1.12 example, this might "just work".
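
[Editor's note: a sketch of the two install paths Michael describes; the 
exact artifact filenames are assumptions — verify them against the archive 
directory listings linked above:]

    # tarball install
    wget http://archive.apache.org/dist/cassandra/2.1.12/apache-cassandra-2.1.12-bin.tar.gz
    tar -xzvf apache-cassandra-2.1.12-bin.tar.gz
    # or, on a Debian-based install
    wget http://archive.apache.org/dist/cassandra/debian/pool/main/c/cassandra/cassandra_2.1.12_all.deb
    sudo dpkg -i cassandra_2.1.12_all.deb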

--
Kind regards,
Michael




Cassandra downgrade of 2.1.15 to 2.1.12

2017-09-11 Thread Mark Furlong
I have a requirement to test a downgrade of 2.1.15 to 2.1.12. Can someone 
please identify how to achieve this?

Thanks,
Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

RE: Manual repair not showing in the log.

2017-09-07 Thread Mark Furlong
Yes, I have one validation task in my compactionstats.

Thanks
Mark
801-705-7115 office

From: Carlos Rolo [mailto:r...@pythian.com]
Sent: Thursday, September 7, 2017 2:06 PM
To: user@cassandra.apache.org
Subject: Re: Manual repair not showing in the log.

Can you check if you have any validation compaction running in nodetool 
compactionstats?

On 7 Sep 2017 7:56 pm, "Mark Furlong" <mfurl...@ancestry.com> wrote:
I have started a repair and I received the message 'Starting repair command #1, 
repairing 25301 ranges for keyspace x (parallelism=PARALLEL, full=true)'. When 
I look in the log for anti-entropy repairs I do not see anything for this 
keyspace. I expected to see messages about the Merkle trees for each column 
family in the keyspace, but there is nothing.

The repair appears not to be doing anything; what is it stuck on?
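
[Editor's note: a few read-only checks commonly used to see whether a 2.1 
repair is actually doing work, expanding on the compactionstats suggestion 
above; the log path is an illustrative default:]

    nodetool compactionstats    # 'Validation' tasks are the Merkle tree builds
    nodetool netstats           # repair streaming sessions, if any
    nodetool tpstats            # AntiEntropyStage / ValidationExecutor backlog
    grep -i repair /var/log/cassandra/system.log | tail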


Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043


Manual repair not showing in the log.

2017-09-07 Thread Mark Furlong
I have started a repair and I received the message 'Starting repair command #1, 
repairing 25301 ranges for keyspace x (parallelism=PARALLEL, full=true)'. When 
I look in the log for anti-entropy repairs I do not see anything for this 
keyspace. I expected to see messages about the Merkle trees for each column 
family in the keyspace, but there is nothing.

The repair appears not to be doing anything; what is it stuck on?


Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

RE: Invalid Gossip generation

2017-08-31 Thread Mark Furlong
What do you recommend for taking this node out of the cluster: a decommission 
or a removenode? Since communication between nodes is hitting invalid gossip 
generation messages, I would think a decommission might not be effective.

Thanks
Mark
801-705-7115 office

From: Erick Ramirez [mailto:flightc...@gmail.com]
Sent: Wednesday, August 30, 2017 7:34 PM
To: user@cassandra.apache.org
Subject: Re: Invalid Gossip generation

Unfortunately, the only available workaround is a rolling restart of the 
cluster until you get the fix in C* 2.1.13 (CASSANDRA-10969, 
https://issues.apache.org/jira/browse/CASSANDRA-10969).

On Thu, Aug 31, 2017 at 5:52 AM, Mark Furlong <mfurl...@ancestry.com> wrote:
I have a 2.1.12 cluster and have experienced an invalid gossip generation error 
on one of the nodes. We have tried altering the local generation value without 
achieving the desired result. A rolling restart of this production cluster of 
136 nodes is a last-chance option. The next step we know is to upgrade this 
cluster to a new version of 2.1. In the meantime, is there any other way than 
those mentioned above to get this node communicating with the cluster?

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

Invalid Gossip generation

2017-08-30 Thread Mark Furlong
I have a 2.1.12 cluster and have experienced an invalid gossip generation error 
on one of the nodes. We have tried altering the local generation value without 
achieving the desired result. A rolling restart of this production cluster of 
136 nodes is a last-chance option. The next step we know is to upgrade this 
cluster to a new version of 2.1. In the meantime, is there any other way than 
those mentioned above to get this node communicating with the cluster?

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

Rack Awareness

2017-08-29 Thread Mark Furlong
I am getting ready to start what I understand as a rack-aware repair: running 
the repair on each node within one rack, resulting in a repair of the entire 
cluster. My question arises because my racks are highly out of balance, and I 
want to know whether Cassandra 2.1.12 is smart enough to identify the racks by 
the rack value, or whether it goes by IP address.

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

RE: Restarting an existing node hangs

2017-08-23 Thread Mark Furlong
Cassandra doesn’t exit and continues to run with very little CPU usage shown in 
top.

Thanks
Mark
801-705-7115 office

From: Jeff Jirsa [mailto:jji...@gmail.com]
Sent: Wednesday, August 23, 2017 12:07 PM
To: cassandra <user@cassandra.apache.org>
Subject: Re: Restarting an existing node hangs

Typically if that sstable is damaged you'd see some sort of message. If you 
recently changed bloom filter or index intervals for that table, it may be 
silently rebuilding the other components of that sstable. Does cassandra exit 
or does it just keep churning away?
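
[Editor's note: ways to tell the silent rebuild Jeff describes from a 
genuinely stuck or damaged sstable, plus the standard 2.1 offline repair 
tool; the keyspace/table names and log path are illustrative:]

    tail -f /var/log/cassandra/system.log    # is the sstable-opening phase still progressing?
    # if the sstable is genuinely corrupt, with the node stopped:
    sstablescrub my_keyspace my_table        # rewrites readable rows, skips damaged ones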



On Wed, Aug 23, 2017 at 10:20 AM, Mark Furlong 
<mfurl...@ancestry.com> wrote:
I had an existing node go down. I don’t know the cause of this. I am starting 
Cassandra and I can see in the log that it starts and then hangs on the opening 
of an sstable. Is there anything I can do to fix the sstable?

I’m on OSC 2.1.12.

Thanks in advance,
Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

Restarting an existing node hangs

2017-08-23 Thread Mark Furlong
I had an existing node go down. I don’t know the cause of this. I am starting 
Cassandra and I can see in the log that it starts and then hangs on the opening 
of an sstable. Is there anything I can do to fix the sstable?

I’m on OSC 2.1.12.

Thanks in advance,
Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

RE: Repair on system_auth

2017-07-07 Thread Mark Furlong
I’m currently on 2.1.12. Are you saying this bug exists on the current latest 
version 3.0.14?

Thank you
Mark
801-705-7115 office

From: Fay Hou [Storage Service] [mailto:fay...@coupang.com]
Sent: Thursday, July 6, 2017 2:24 PM
To: User <user@cassandra.apache.org>
Subject: Re: Repair on system_auth

There is a bug in repairing the system_auth keyspace. We just skip the repair 
on system_auth. Yes, it is OK to kill the running repair job.

On Thu, Jul 6, 2017 at 1:14 PM, Subroto Barua <sbarua...@yahoo.com.invalid> wrote:
You can check the status via nodetool netstats. To kill the repair job, 
restart the instance.
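
[Editor's note: a sketch of the two steps just suggested, plus a JMX 
alternative that avoids a full restart — whether the JMX route is appropriate 
for this cluster is an assumption; the restart is the blunt but reliable 
option:]

    nodetool netstats                  # any active repair/streaming sessions?
    sudo service cassandra restart     # aborts the repair sessions on this node
    # restart-free alternative, via any JMX client against
    # org.apache.cassandra.db:type=StorageService:
    #   forceTerminateAllRepairSessions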


On Thursday, July 6, 2017, 1:09:42 PM PDT, Mark Furlong 
<mfurl...@ancestry.com> wrote:



I have started a repair on my system_auth keyspace. The repair has started and 
the process shows as running with ps, but I am not seeing any CPU with top. 
I'm also not seeing any anti-entropy sessions building Merkle trees in the 
log. Can I safely kill a repair, and how?





Mark Furlong
Sr. Database Administrator
mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043










Repair on system_auth

2017-07-06 Thread Mark Furlong
I have started a repair on my system_auth keyspace. The repair has started and 
the process shows as running with ps, but I am not seeing any CPU with top. 
I'm also not seeing any anti-entropy sessions building Merkle trees in the 
log. Can I safely kill a repair, and how?


Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

Re-adding Decommissioned node

2017-06-27 Thread Mark Furlong
I have a node that was decommissioned (it showed 'UL'), and its data volume 
and commitlogs have been removed. I now want to add that node back into my 
ring. When I add the node (bootstrap=true, start the Cassandra service) it 
comes back up in the ring as an existing node and shows as 'UN' instead of 
'UJ'. Why is this? It has no data.

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

Manual Repairs

2017-06-21 Thread Mark Furlong
Can a repair be paused, and if paused can it be restarted from the point of the 
pause, or does it start over?

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

Adding nodes and cleanup

2017-06-19 Thread Mark Furlong
I have added a few nodes and now am running some cleanups. Can I add an 
additional node while these cleanups are running? What are the ramifications of 
doing this?

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

RE: Stable version apache cassandra 3.X /3.0.X

2017-05-31 Thread Mark Furlong
I need to reduce my disk footprint; how is 3.0.14 for stability? Also, where do 
I find upgrade instructions and version requirements?

Thanks
Mark
801-705-7115 office

From: Carlos Rolo [mailto:r...@pythian.com]
Sent: Wednesday, May 31, 2017 11:17 AM
To: Jonathan Haddad
Cc: Junaid Nasir; pabbireddy avinash; user@cassandra.apache.org
Subject: Re: Stable version apache cassandra 3.X /3.0.X

In sync with Jon.

Only go 3.0.x if you REALLY need something from there (e.g. MV); even then, be 
careful.

For 3.x, wait for 3.11.x; 3.10 only if you REALLY need something from there 
right now.

Latest 2.2.x or 2.1.x if you are just doing baseline Cassandra and need the 
stability.

Regards,

Carlos Juzarte Rolo
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP

Pythian - Love your data

rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin: 
linkedin.com/in/carlosjuzarterolo

Mobile: +351 918 918 100
www.pythian.com

On Wed, May 31, 2017 at 5:48 PM, Jonathan Haddad wrote:
I really wouldn't go by the tick tock blog post, considering tick tock is dead.

I'm still not wild about putting any 3.0 or 3.x into production. 3.0 removed 
off-heap memtables, and there have been enough bugs in the storage engine that 
I'm still wary. My hope is to see 3.11.x get enough bug fixes to where most 
people just skip 3.0 altogether. I'm not sure if we're there yet though.


On Wed, May 31, 2017 at 9:43 AM, Junaid Nasir wrote:
as mentioned here http://www.datastax.com/dev/blog/cassandra-2-2-3-0-and-beyond
Under normal conditions, we will NOT release 3.x.y stability releases for x > 
0.  That is, we will have a traditional 3.0.y stability series, but the 
odd-numbered bugfix-only releases will fill that role for the tick-tock series 
— recognizing that occasionally we will need to be flexible enough to release 
an emergency fix in the case of a critical bug or security vulnerability.
We do recognize that it will take some time for tick-tock releases to deliver 
production-level stability, which is why we will continue to deliver 2.2.y and 
3.0.y bugfix releases.  (But if we do demonstrate that tick-tock can deliver 
the stability we want, there will be no need for a 4.0.y bugfix series, only 
4.x tick-tock.)

On Wed, May 31, 2017 at 9:02 PM, pabbireddy avinash wrote:
Hi,

We are planning to deploy a Cassandra production cluster on 3.X/3.0.X. Please 
let us know if there is a stable version in 3.X/3.0.X that we could deploy in 
production.

Regards,
Avinash.








RE: Decommissioned node cluster shows as down

2017-05-16 Thread Mark Furlong
I thought the same, that the decommission would complete the removal of a 
node. I have heard something about a 72-hour window; I'm not sure whether that 
applies to this version.

Thanks
Mark
801-705-7115 office

From: Hannu Kröger [mailto:hkro...@gmail.com]
Sent: Tuesday, May 16, 2017 10:09 AM
To: suraj pasuparthy <suraj.pasupar...@gmail.com>
Cc: Mark Furlong <mfurl...@ancestry.com>; user@cassandra.apache.org
Subject: Re: Decommissioned node cluster shows as down

That’s weird. I thought decommission would ultimately remove the node from the 
cluster, because the token(s) should be removed from the ring and data should 
be streamed to new owners. “DN” is IMHO not a state the node should end up in.

Hannu

On 16 May 2017, at 19:05, suraj pasuparthy <suraj.pasupar...@gmail.com> wrote:

Yes, you have to run a nodetool removenode to decommission completely. This 
will also allow another node with the same IP but a different host ID to join 
the cluster.

Thanks
-suraj
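
[Editor's note: a sketch of the removenode suggestion; the Host ID 
placeholder is illustrative and comes from the status output:]

    nodetool status                # the DN line shows the dead node's Host ID
    nodetool removenode <host-id>  # purge it from gossip and ring metadata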
On Tue, May 16, 2017 at 9:01 AM Mark Furlong <mfurl...@ancestry.com> wrote:

I have a node I decommissioned on a large ring using 2.1.12. The node completed 
the decommission process and is no longer communicating with the rest of the 
cluster. However when I run a nodetool status on any node in the cluster it 
shows the node as ‘DN’. Why is this and should I just run a removenode now?

Thanks,
Mark Furlong
Sr. Database Administrator
mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043


Decommissioned node cluster shows as down

2017-05-16 Thread Mark Furlong
I have a node I decommissioned on a large ring using 2.1.12. The node completed 
the decommission process and is no longer communicating with the rest of the 
cluster. However when I run a nodetool status on any node in the cluster it 
shows the node as ‘DN’. Why is this and should I just run a removenode now?

Thanks,
Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043

RE: Repairs on 2.1.12

2017-05-11 Thread Mark Furlong
The repair on the DC has completed in 308 hours. What would be helpful is if 
anyone has a good way of monitoring a manual anti-entropy repair.

Thanks
Mark
801-705-7115 office
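
[Editor's note: there is no single progress meter for anti-entropy repair in 
2.1; a sketch of the usual proxies, run on the node driving the repair — the 
log path is an illustrative default:]

    grep -c 'session completed' /var/log/cassandra/system.log   # repair sessions finished so far
    nodetool compactionstats    # active 'Validation' tasks (Merkle tree builds)
    nodetool netstats           # repair streams in flight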

From: kurt greaves [mailto:k...@instaclustr.com]
Sent: Thursday, May 11, 2017 1:06 AM
To: Mark Furlong <mfurl...@ancestry.com>
Cc: user@cassandra.apache.org
Subject: Re: Repairs on 2.1.12

To clarify: what exactly was your repair command? By "a ring" did you mean the 
DC or the cluster? And has the repair been running for two weeks, or is that 
in reference to the "ring"?

It would be helpful if you provided the relevant logs as well, and also the 
Cassandra version you are running.


Repairs on 2.1.12

2017-05-09 Thread Mark Furlong
I have a large cluster running a -dc repair on a ring which has been running 
for nearly two weeks. When I review the logs I can see where my tables are 
reporting as ‘fully synced’ multiple times. I’m looking for some information to 
help me confirm that my repair is not looping and is running properly.

Mark Furlong

Sr. Database Administrator

mfurl...@ancestry.com
M: 801-859-7427
O: 801-705-7115
1300 W Traverse Pkwy
Lehi, UT 84043