Re: Cassandra HEAP Suggestion.. Need a help

2018-05-10 Thread Jeff Jirsa
There's no single right answer. It depends a lot on the read/write patterns
and other settings (onheap memtable, offheap memtable, etc).

One thing that's probably always true: with ParNew/CMS, a 16G heap is on the
large side. It may be appropriate for some read-heavy workloads, but you'd
want to make sure you start CMS earlier than the default (set the CMS
initiating occupancy lower than the default). You may find it easier to do
something like 12/3 or 12/4, and leave the remaining RAM for page cache.

CASSANDRA-8150 has a bunch of notes for tuning GC configs (
https://issues.apache.org/jira/browse/CASSANDRA-8150 ), and Amy's 2.1
tuning guide is pretty solid too (
https://tobert.github.io/pages/als-cassandra-21-tuning-guide.html )
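As an illustrative sketch of what "start CMS earlier" looks like in cassandra-env.sh (the two -XX flags are standard HotSpot options; the 12G/3G split and the 60% threshold are placeholder values, not a recommendation):

```shell
# Illustrative cassandra-env.sh overrides (NOT a recommendation; tune the
# 12G/3G split and the 60% threshold for your own workload):
MAX_HEAP_SIZE="12G"
HEAP_NEWSIZE="3G"
# Start CMS at 60% old-gen occupancy instead of the much later default,
# and tell HotSpot to actually honor that fraction on every cycle.
JVM_OPTS="$JVM_OPTS -XX:CMSInitiatingOccupancyFraction=60"
JVM_OPTS="$JVM_OPTS -XX:+UseCMSInitiatingOccupancyOnly"
```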





On Fri, May 11, 2018 at 10:30 AM, Mokkapati, Bhargav (Nokia - IN/Chennai) <
bhargav.mokkap...@nokia.com> wrote:

> Hi Team,
>
>
>
> I have 64GB of total system memory. 5 node cluster.
>
>
>
> x ~# free -m
>
>               total        used        free      shared  buff/cache   available
> Mem:          64266       17549       41592          66        5124       46151
> Swap:             0           0           0
>
> x ~#
>
>
>
> and “egrep -c 'processor([[:space:]]+):.*' /proc/cpuinfo” gives 12 CPU
> cores.
>
>
>
> Currently cassandra-env.sh calculates MAX_HEAP_SIZE as ‘8GB’ and
> HEAP_NEWSIZE as ‘1200 MB’.
>
>
>
> I am facing a Java insufficient-memory issue and the Cassandra service is
> going down.
>
>
>
> I am going to hard-code the HEAP values in cassandra-env.sh as below.
>
>
>
> MAX_HEAP_SIZE="16G"  (1/4 of total RAM)
>
> HEAP_NEWSIZE="4G" (1/4 of MAX_HEAP_SIZE)
>
>
>
> Are these values correct for my production setup? Are there any
> disadvantages to doing this?
>
>
>
> Please let me know if any of you have faced the same issue.
>
>
>
> Thanks in advance!
>
>
>
> Best regards,
>
> Bhargav M


Cassandra HEAP Suggestion.. Need a help

2018-05-10 Thread Mokkapati, Bhargav (Nokia - IN/Chennai)
Hi Team,

I have 64GB of total system memory. 5 node cluster.

x ~# free -m
              total        used        free      shared  buff/cache   available
Mem:          64266       17549       41592          66        5124       46151
Swap:             0           0           0
x ~#

and "egrep -c 'processor([[:space:]]+):.*' /proc/cpuinfo" gives 12 CPU cores.

Currently cassandra-env.sh calculates MAX_HEAP_SIZE as '8GB' and HEAP_NEWSIZE as '1200 MB'.
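Those two defaults fall out of the stock calculate_heap_sizes logic in cassandra-env.sh; re-deriving them in shell with the numbers above (a sketch of the documented formula, worth checking against your actual copy of the script):

```shell
# Re-derivation of the calculate_heap_sizes defaults in cassandra-env.sh:
#   MAX_HEAP_SIZE = max(min(ram/2, 1024MB), min(ram/4, 8192MB))
#   HEAP_NEWSIZE  = min(100MB * cores, MAX_HEAP_SIZE / 4)
ram_mb=64266; cores=12
half=$(( ram_mb / 2 ));    [ "$half" -gt 1024 ] && half=1024
quarter=$(( ram_mb / 4 )); [ "$quarter" -gt 8192 ] && quarter=8192
max_heap=$(( half > quarter ? half : quarter ))
by_cores=$(( 100 * cores )); by_heap=$(( max_heap / 4 ))
heap_new=$(( by_cores < by_heap ? by_cores : by_heap ))
echo "MAX_HEAP_SIZE=${max_heap}M HEAP_NEWSIZE=${heap_new}M"
```

With 64266 MB of RAM and 12 cores this reproduces the reported 8192M / 1200M.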

I am facing a Java insufficient-memory issue and the Cassandra service is going down.

I am going to hard-code the HEAP values in cassandra-env.sh as below.

MAX_HEAP_SIZE="16G"  (1/4 of total RAM)
HEAP_NEWSIZE="4G" (1/4 of MAX_HEAP_SIZE)

Are these values correct for my production setup? Are there any disadvantages to doing this?

Please let me know if any of you have faced the same issue.

Thanks in advance!

Best regards,
Bhargav M










Re: Cassandra upgrade from 2.1 to 3.0

2018-05-10 Thread kooljava2
Hello Jeff,

2.1.19 to 3.0.15.

Thank you.

On Thursday, 10 May 2018, 17:43:58 GMT-7, Jeff Jirsa  
wrote:  
 
Which minor version of 3.0?

-- Jeff Jirsa

On May 11, 2018, at 2:54 AM, kooljava2  wrote:



Hello,

Upgraded Cassandra 2.1 to 3.0. We see data in a few columns being set to
"null". These null columns were populated at row-creation time.

After looking at the data, we see a pattern: an update was done on these rows.
Rows which were updated have data, but rows which were not part of the update
are set to null.

 created_on | created_by | id
------------+------------+-------
       null |       null | 12345



sstabledump output:

WARN  20:47:38,741 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
setting cdc_total_space_in_mb to 1278.  You can override this in cassandra.yaml
[
  {
    "partition" : {
  "key" : [ "12345" ],
  "position" : 5155159
    },
    "rows" : [
  {
    "type" : "row",
    "position" : 5168738,
    "deletion_info" : { "marked_deleted" : "2018-03-28T20:38:08.05Z", 
"local_delete_time" : "2018-03-28T20:38:08Z" },
    "cells" : [
  { "name" : "doc_type", "value" : false, "tstamp" : 
"2018-03-28T20:38:08.060Z" },
  { "name" : "industry", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
"2018-03-28T20:38:08.060Z" },
  { "name" : "last_modified_date", "value" : "2018-03-28 
20:38:08.059Z", "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "locale", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "postal_code", "deletion_info" : { "local_delete_time" : 
"2018-03-28T20:38:08Z" },
    "tstamp" : "2018-03-28T20:38:08.060Z"
  },
  { "name" : "ticket", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } },
  { "name" : "ticket", "path" : [ "TEMP_DATA" ], "value" : 
"{\"name\":\"TEMP_DATA\",\"ticket\":\"a42638dae8350e889f2603be1427ac6f5dec5e486d4db164a76bf80820cdf68d635cff5e7d555e6d4eabb9b5b82597b68bec0fcd735fcca\",\"lastRenewedDate\":\"2018-03-28T20:38:08Z\"}",
 "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 
"{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263b7350d1f2683\",\"lastRenewedDate\":\"2018-03-28T20:38:07Z\"}",
 "tstamp" : "2018-03-28T20:38:08.060Z" },
  { "name" : "ppstatus_pf", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } },
  { "name" : "ppstatus_pers", "deletion_info" : { "marked_deleted" : 
"2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } }
    ]
  }
    ]
  }
]WARN  20:47:41,325 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
setting cdc_total_space_in_mb to 1278.  You can override this in cassandra.yaml
[
  {
    "partition" : {
  "key" : [ "12345" ],
  "position" : 18743072
    },
    "rows" : [
  {
    "type" : "row",
    "position" : 18751808,
    "liveness_info" : { "tstamp" : "2017-10-25T10:22:41.612Z" },
    "cells" : [
  { "name" : "created_by", "value" : "12345" },
  { "name" : "created_on", "value" : "2017-10-25 10:22:41.637Z" },
  { "name" : "doc_type", "value" : false, "tstamp" : 
"2017-10-25T10:22:42.487Z" },
  { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
"2017-10-25T10:22:42.487Z" },
  { "name" : "last_modified_date", "value" : "2017-11-10 
00:09:52.668Z", "tstamp" : "2017-11-10T00:09:52.668Z" },
  { "name" : "per_type", "value" : "user" },
  { "name" : "lists", "path" : [ "cn.cncn.bpnp" ], "value" : 
"[\"::accid:ab\",\"::accid:e1\",\"::accid:d2\",\"::accid:d3\",\"::accid:f3\",\"::accid:g3\",\"::accid:f4\",\"::accid:9c486ae5-00b2-3c63-af70-cff2950c4181\"]",
 "tstamp" : "2017-10-25T10:22:42.782Z" },
  { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 
"{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263820be49c3a222e0248532bcefc80c773194a804057561a97382e595b51f36bb46b8675589fc89dea4a5c0ceb944d63861b39d63c0067161e84c79328077c650df33530c7625857444711dc4b1051638123694ba6e9e29b1f906663f3\",\"lastRenewedDate\":\"2017-11-10T00:09:52Z\"}",
 "tstamp" : "2017-11-10T00:09:52.668Z" }
    ]
  }
    ]
  }
]
  

Re: dtests failing with - ValueError: unsupported hash type md5

2018-05-10 Thread Patrick Bannister
There may be some unstated environmental dependencies at issue here. If you
run the dtests on an Ubuntu 16.04 LTS environment with the configuration
described in the dtest README.md, then when you run cqlsh by calling a ccm
Node object's run_cqlsh() function, it will run cqlsh with Python 2.7.
For example, in this kind of environment, I was able to run
cql_tracing_test.py successfully on trunk and cassandra-3.11.

The error you're experiencing is interesting. I did an experiment on an
environment without Python 2.7, and when I tried to run the
cql_tracing_test dtests, I got errors like this instead:

Subprocess ['cqlsh', 'TRACING ON', None] exited with non-zero status;
exit status: 1;
stderr: No appropriate python interpreter found.

In contrast, the errors you posted look like trying to run the Python 2.7
scripts in a Python 3.x interpreter. Are you testing with a modified
version of cqlsh? Or, do you have a "python2.7" in your path that points to
Python 3.x?
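One way to check that second possibility is a minimal probe (check_py_major is a made-up helper, not part of the dtest tooling) that asks whatever name is on PATH for its major version:

```shell
# Probe whether "python2.7" on PATH actually runs Python 2, to catch a
# name that secretly points at a Python 3.x build.
check_py_major() {
  ver=$("$1" -c 'import sys; print(sys.version_info[0])' 2>/dev/null)
  if [ "$ver" = "$2" ]; then
    echo "OK: $1 is Python $ver"
  else
    echo "MISMATCH: $1 reports major version '$ver', expected $2"
  fi
}
check_py_major python2.7 2
```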

Patrick Bannister



On Thu, May 10, 2018 at 2:22 AM, Rajiv Dimri  wrote:

> Thank you for the response,
>
>
>
> Single test command
>
> pytest --cassandra-dir=$CASSANDRA_HOME cql_tracing_test.py::
> TestCqlTracing::test_tracing_simple
>
>
>
> pytest is being run from within the virtual env (Python 3.6.5)
>
> however, cqlsh is part of the Cassandra distribution, present in
> $CASSANDRA_HOME/bin.
>
> Even if I install cqlsh in the virtualenv, node.py in ccmlib will pick
> cqlsh present in the Cassandra source directory.
>
>
>
> *From:* kurt greaves 
> *Sent:* Thursday, May 10, 2018 11:37 AM
> *To:* User 
> *Subject:* Re: dtests failing with - ValueError: unsupported hash type md5
>
>
>
> What command did you run? Probably worth checking that cqlsh is installed
> in the virtual environment and that you are executing pytest from within
> the virtual env.
>
>
>
> On 10 May 2018 at 05:06, Rajiv Dimri  wrote:
>
> Hi All,
>
>
>
> We have setup a dtest environment to run against Cassandra db version
> 3.11.1 and 3.0.5
>
> As per instruction on https://github.com/apache/cassandra-dtest
> 
> we have setup the environment with python 3.6.5 along with other
> dependencies.
>
> The server used is Oracle RHEL (Red Hat Enterprise Linux Server release
> 6.6 (Santiago))
>
>
>
> Multiple tests are failing with the specific error mentioned below.
>
>
>
> process = , cmd_args =
> ['cqlsh', 'TRACING ON', None]
>
>
>
> def handle_external_tool_process(process, cmd_args):
>
> out, err = process.communicate()
>
> rc = process.returncode
>
>
>
> if rc != 0:
>
> >   raise ToolError(cmd_args, rc, out, err)
>
> E   ccmlib.node.ToolError: Subprocess ['cqlsh', 'TRACING ON',
> None] exited with non-zero status; exit status: 1;
>
> E   stderr: ERROR:root:code for hash md5 was not found.
>
> E   Traceback (most recent call last):
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 139, in 
>
> E   globals()[__func_name] = __get_hash(__func_name)
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 91, in __get_builtin_constructor
>
> E   raise ValueError('unsupported hash type ' + name)
>
> E   ValueError: unsupported hash type md5
>
> E   ERROR:root:code for hash sha1 was not found.
>
> E   Traceback (most recent call last):
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 139, in 
>
> E   globals()[__func_name] = __get_hash(__func_name)
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 91, in __get_builtin_constructor
>
> E   raise ValueError('unsupported hash type ' + name)
>
> E   ValueError: unsupported hash type sha1
>
> E   ERROR:root:code for hash sha224 was not found.
>
> E   Traceback (most recent call last):
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 139, in 
>
> E   globals()[__func_name] = __get_hash(__func_name)
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 91, in __get_builtin_constructor
>
> E   raise ValueError('unsupported hash type ' + name)
>
> E   

Re: Cassandra upgrade from 2.1 to 3.0

2018-05-10 Thread Jeff Jirsa
Which minor version of 3.0?

-- 
Jeff Jirsa


> On May 11, 2018, at 2:54 AM, kooljava2  wrote:
> 
> 
> Hello,
> 
> Upgraded Cassandra 2.1 to 3.0.  We see data in a few columns being set
> to "null". These null columns were populated at row-creation time.
> 
> After looking at the data, we see a pattern: an update was done on these rows.
> Rows which were updated have data, but rows which were not part of the update
> are set to null.
> 
>  created_on | created_by | id
> ------------+------------+-------
>        null |       null | 12345
> 
> 
> 
> sstabledump output:
> 
> WARN  20:47:38,741 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
> setting cdc_total_space_in_mb to 1278.  You can override this in 
> cassandra.yaml
> [
>   {
> "partition" : {
>   "key" : [ "12345" ],
>   "position" : 5155159
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 5168738,
> "deletion_info" : { "marked_deleted" : "2018-03-28T20:38:08.05Z", 
> "local_delete_time" : "2018-03-28T20:38:08Z" },
> "cells" : [
>   { "name" : "doc_type", "value" : false, "tstamp" : 
> "2018-03-28T20:38:08.060Z" },
>   { "name" : "industry", "deletion_info" : { "local_delete_time" : 
> "2018-03-28T20:38:08Z" },
> "tstamp" : "2018-03-28T20:38:08.060Z"
>   },
>   { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
> "2018-03-28T20:38:08.060Z" },
>   { "name" : "last_modified_date", "value" : "2018-03-28 
> 20:38:08.059Z", "tstamp" : "2018-03-28T20:38:08.060Z" },
>   { "name" : "locale", "deletion_info" : { "local_delete_time" : 
> "2018-03-28T20:38:08Z" },
> "tstamp" : "2018-03-28T20:38:08.060Z"
>   },
>   { "name" : "postal_code", "deletion_info" : { "local_delete_time" : 
> "2018-03-28T20:38:08Z" },
> "tstamp" : "2018-03-28T20:38:08.060Z"
>   },
>   { "name" : "ticket", "deletion_info" : { "marked_deleted" : 
> "2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } 
> },
>   { "name" : "ticket", "path" : [ "TEMP_DATA" ], "value" : 
> "{\"name\":\"TEMP_DATA\",\"ticket\":\"a42638dae8350e889f2603be1427ac6f5dec5e486d4db164a76bf80820cdf68d635cff5e7d555e6d4eabb9b5b82597b68bec0fcd735fcca\",\"lastRenewedDate\":\"2018-03-28T20:38:08Z\"}",
>  "tstamp" : "2018-03-28T20:38:08.060Z" },
>   { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 
> "{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263b7350d1f2683\",\"lastRenewedDate\":\"2018-03-28T20:38:07Z\"}",
>  "tstamp" : "2018-03-28T20:38:08.060Z" },
>   { "name" : "ppstatus_pf", "deletion_info" : { "marked_deleted" : 
> "2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } 
> },
>   { "name" : "ppstatus_pers", "deletion_info" : { "marked_deleted" : 
> "2018-03-28T20:38:08.05Z", "local_delete_time" : "2018-03-28T20:38:08Z" } 
> }
> ]
>   }
> ]
>   }
> ]WARN  20:47:41,325 Small cdc volume detected at /var/lib/cassandra/cdc_raw; 
> setting cdc_total_space_in_mb to 1278.  You can override this in 
> cassandra.yaml
> [
>   {
> "partition" : {
>   "key" : [ "12345" ],
>   "position" : 18743072
> },
> "rows" : [
>   {
> "type" : "row",
> "position" : 18751808,
> "liveness_info" : { "tstamp" : "2017-10-25T10:22:41.612Z" },
> "cells" : [
>   { "name" : "created_by", "value" : "12345" },
>   { "name" : "created_on", "value" : "2017-10-25 10:22:41.637Z" },
>   { "name" : "doc_type", "value" : false, "tstamp" : 
> "2017-10-25T10:22:42.487Z" },
>   { "name" : "last_modified_by", "value" : "12345", "tstamp" : 
> "2017-10-25T10:22:42.487Z" },
>   { "name" : "last_modified_date", "value" : "2017-11-10 
> 00:09:52.668Z", "tstamp" : "2017-11-10T00:09:52.668Z" },
>   { "name" : "per_type", "value" : "user" },
>   { "name" : "lists", "path" : [ "cn.cncn.bpnp" ], "value" : 
> "[\"::accid:ab\",\"::accid:e1\",\"::accid:d2\",\"::accid:d3\",\"::accid:f3\",\"::accid:g3\",\"::accid:f4\",\"::accid:9c486ae5-00b2-3c63-af70-cff2950c4181\"]",
>  "tstamp" : "2017-10-25T10:22:42.782Z" },
>   { "name" : "ticket", "path" : [ "TEMP_TEMP2" ], "value" : 
> "{\"name\":\"TEMP_TEMP2\",\"ticket\":\"a4263820be49c3a222e0248532bcefc80c773194a804057561a97382e595b51f36bb46b8675589fc89dea4a5c0ceb944d63861b39d63c0067161e84c79328077c650df33530c7625857444711dc4b1051638123694ba6e9e29b1f906663f3\",\"lastRenewedDate\":\"2017-11-10T00:09:52Z\"}",
>  "tstamp" : "2017-11-10T00:09:52.668Z" }
> ]
>   }
> ]
>   }
> ]


Re: Cassandra Summit 2019 / Cassandra Summit 2018

2018-05-10 Thread Patrick McFadin
Sorry Ben. Instaclustr. My spell checker keeps buying vowels.

On Thu, May 10, 2018 at 10:43 AM, Patrick McFadin 
wrote:

> +1 to what Ben said. Lynn Bender has a great reputation for building
> vendor-neutral events and this is shaping up to be a really good one for
> the Cassandra community. I'm devoting a lot of DataStax resources to it and
> I know Ben is doing the same at Instacluster.
>
> Now, that being said: if you want a truly amazing event, we need good
> talks! This is always a struggle for any event. If you have a great topic,
> don't be shy. Please speak up. If you need some help getting your talk
> together, I would be more than happy to help you. Let's make this a great
> event for the community.
>
> Patrick
>
> On Thu, May 3, 2018 at 8:33 AM, Ben Bromhead  wrote:
>
>> Distributed Data Day is shaping up to look like a Cassandra related
>> summit. See http://distributeddatasummit.com/2018-sf/speakers
>>
>> You'll most likely see all the same familiar faces and familiar vendors :)
>>
>> However note it is not the official Apache Cassandra Summit and it is
>> being run by a for profit group.
>>
>> On Thu, May 3, 2018 at 5:33 AM Horia Mocioi 
>> wrote:
>>
>>> Hello,
>>>
>>> Are there any updates on this event?
>>>
>>> Regards,
>>> Horia
>>>
>>> On tis, 2018-02-27 at 11:42 +, Carlos Rolo wrote:
>>>
>>> Hello all,
>>>
>>> I'm interested in planning/organizing a small NGCC-style event in Lisbon,
>>> Portugal in late May / early June. Just waiting for the venue to confirm
>>> possible dates.
>>>
>>> It would be a one-day event like last year's; is this something people
>>> would be interested in? I can push out a Google form for gauging interest today.
>>>
>>>
>>> Regards,
>>>
>>> Carlos Juzarte Rolo
>>> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>>>
>>> Pythian - Love your data
>>>
>>> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
>>> linkedin.com/in/carlosjuzarterolo
>>> Mobile: +351 918 918 100 <+351%20918%20918%20100>
>>> www.pythian.com
>>>
>>> On Tue, Feb 27, 2018 at 11:39 AM, Kenneth Brotman <
>>> kenbrot...@yahoo.com.invalid> wrote:
>>>
>>> Event planning is fun as long as you can pace it out properly.  Once you
>>> set a firm date for an event the pressure on you to keep everything on
>>> track is nerve racking.  To do something on the order of Cassandra Summit
>>> 2016, I think we should plan for 2020.  It’s too late for 2018 and even
>>> trying to meet the timeline for everything that would have to come together
>>> makes 2019 too nerve racking a target date.  The steps should be:
>>>
>>> Form a planning committee
>>>
>>> Bring potential sponsors into the planning early
>>>
>>> Select an event planning vendor to guide us and to do
>>> the heavy lifting for us
>>>
>>>
>>>
>>> In the meantime, we could have a World-wide Distributed Asynchronous
>>> Cassandra Convention which offers four benefits:
>>>
>>> It allows us to address the fact that we are a
>>> world-wide group that needs a way to reach everyone in a way where no one
>>> is geographically disadvantaged
>>>
>>> No travel time, no travel expenses and no ticket fees
>>> makes it accessible to a lot of people that otherwise would have to miss out
>>>
>>> The lower production costs and simpler administrative workload allows us
>>> to reach implementation sooner
>>>
>>> It’s cutting edge, world class innovation like Cassandra
>>>
>>>
>>>
>>> Kenneth Brotman
>>>
>>>
>>>
>>> *From:* Jeff Jirsa [mailto:jji...@gmail.com]
>>> *Sent:* Monday, February 26, 2018 9:38 PM
>>> *To:* cassandra
>>> *Subject:* Re: Cassandra Summit 2019 / Cassandra Summit 2018
>>>
>>>
>>>
>>> Instaclustr sponsored the 2017 NGCC (Next Gen Cassandra Conference),
>>> which was developer/development focused (vs user focused).
>>>
>>>
>>>
>>> For 2018, we're looking at options for both a developer conference and a
>>> user conference. There's a lot of logistics involved, and I think it's
>>> fairly obvious that most of the PMC members aren't professional event
>>> planners, so it's possible that either/both conferences may not happen, but
>>> we're doing our best to try to put something together.
>>>
>>>
>>>
>>>
>>>
>>> On Mon, Feb 26, 2018 at 3:00 PM, Rahul Singh <
>>> rahul.xavier.si...@gmail.com> wrote:
>>>
>>> I think some of the Instaclustr folks had done one last year which I
>>> really wanted to go to.. Distributed / Async both would be easier to get
>>> people to write papers, make slides, do youtube videos with.. and then we
>>> could do a virtual web conf of the best submissions.
>>>
>>>
>>> On Feb 26, 2018, 1:04 PM -0600, Kenneth Brotman <
>>> kenbrot...@yahoo.com.invalid>, wrote:
>>>
>>> Is there any planning yet for a Cassandra Summit 2019 or Cassandra
>>> Summit 2018 (probably too late)?
>>>
>>>
>>>
>>> Is there a 

Re: Cassandra Summit 2019 / Cassandra Summit 2018

2018-05-10 Thread Patrick McFadin
+1 to what Ben said. Lynn Bender has a great reputation for building
vendor-neutral events and this is shaping up to be a really good one for
the Cassandra community. I'm devoting a lot of DataStax resources to it and
I know Ben is doing the same at Instacluster.

Now, that being said: if you want a truly amazing event, we need good talks!
This is always a struggle for any event. If you have a great topic, don't
be shy. Please speak up. If you need some help getting your talk together,
I would be more than happy to help you. Let's make this a great event for
the community.

Patrick

On Thu, May 3, 2018 at 8:33 AM, Ben Bromhead  wrote:

> Distributed Data Day is shaping up to look like a Cassandra related
> summit. See http://distributeddatasummit.com/2018-sf/speakers
>
> You'll most likely see all the same familiar faces and familiar vendors :)
>
> However note it is not the official Apache Cassandra Summit and it is
> being run by a for profit group.
>
> On Thu, May 3, 2018 at 5:33 AM Horia Mocioi 
> wrote:
>
>> Hello,
>>
>> Are there any updates on this event?
>>
>> Regards,
>> Horia
>>
>> On tis, 2018-02-27 at 11:42 +, Carlos Rolo wrote:
>>
>> Hello all,
>>
>> I'm interested in planning/organizing a small NGCC-style event in Lisbon,
>> Portugal in late May / early June. Just waiting for the venue to confirm
>> possible dates.
>>
>> It would be a one-day event like last year's; is this something people
>> would be interested in? I can push out a Google form for gauging interest today.
>>
>>
>> Regards,
>>
>> Carlos Juzarte Rolo
>> Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
>>
>> Pythian - Love your data
>>
>> rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
>> linkedin.com/in/carlosjuzarterolo
>> Mobile: +351 918 918 100 <+351%20918%20918%20100>
>> www.pythian.com
>>
>> On Tue, Feb 27, 2018 at 11:39 AM, Kenneth Brotman <
>> kenbrot...@yahoo.com.invalid> wrote:
>>
>> Event planning is fun as long as you can pace it out properly.  Once you
>> set a firm date for an event the pressure on you to keep everything on
>> track is nerve racking.  To do something on the order of Cassandra Summit
>> 2016, I think we should plan for 2020.  It’s too late for 2018 and even
>> trying to meet the timeline for everything that would have to come together
>> makes 2019 too nerve racking a target date.  The steps should be:
>>
>> Form a planning committee
>>
>> Bring potential sponsors into the planning early
>>
>> Select an event planning vendor to guide us and to do the
>> heavy lifting for us
>>
>>
>>
>> In the meantime, we could have a World-wide Distributed Asynchronous
>> Cassandra Convention which offers four benefits:
>>
>> It allows us to address the fact that we are a world-wide
>> group that needs a way to reach everyone in a way where no one is
>> geographically disadvantaged
>>
>> No travel time, no travel expenses and no ticket fees
>> makes it accessible to a lot of people that otherwise would have to miss out
>>
>> The lower production costs and simpler administrative workload allows us
>> to reach implementation sooner
>>
>> It’s cutting edge, world class innovation like Cassandra
>>
>>
>>
>> Kenneth Brotman
>>
>>
>>
>> *From:* Jeff Jirsa [mailto:jji...@gmail.com]
>> *Sent:* Monday, February 26, 2018 9:38 PM
>> *To:* cassandra
>> *Subject:* Re: Cassandra Summit 2019 / Cassandra Summit 2018
>>
>>
>>
>> Instaclustr sponsored the 2017 NGCC (Next Gen Cassandra Conference),
>> which was developer/development focused (vs user focused).
>>
>>
>>
>> For 2018, we're looking at options for both a developer conference and a
>> user conference. There's a lot of logistics involved, and I think it's
>> fairly obvious that most of the PMC members aren't professional event
>> planners, so it's possible that either/both conferences may not happen, but
>> we're doing our best to try to put something together.
>>
>>
>>
>>
>>
>> On Mon, Feb 26, 2018 at 3:00 PM, Rahul Singh <
>> rahul.xavier.si...@gmail.com> wrote:
>>
>> I think some of the Instaclustr folks had done one last year which I
>> really wanted to go to.. Distributed / Async both would be easier to get
>> people to write papers, make slides, do youtube videos with.. and then we
>> could do a virtual web conf of the best submissions.
>>
>>
>> On Feb 26, 2018, 1:04 PM -0600, Kenneth Brotman <
>> kenbrot...@yahoo.com.invalid>, wrote:
>>
>> Is there any planning yet for a Cassandra Summit 2019 or Cassandra Summit
>> 2018 (probably too late)?
>>
>>
>>
>> Is there a planning committee?
>>
>>
>>
>> Who wants there to be a Cassandra Summit 2019 and who thinks there is a
>> better way?
>>
>>
>>
>> We could try a Cassandra Distributed Summit 2019 where we meet virtually
>> and perhaps asynchronously, but there would be a lot more energy and
>> bonding if it’s 

Re: Running multiple instances of Cassandra on each node in the cluster

2018-05-10 Thread Eric Evans
On Thu, May 10, 2018 at 2:25 AM  wrote:

> Dear community,
>
> Is it possible to have a cluster in Cassandra where each server runs
> multiple instances of Cassandra (each instance being part of the same
> cluster)?
>
> I'm aware that if there's a single server in the cluster, it's possible
> to run multiple instances of Cassandra on it, but is it also possible
> to have multiple such servers in the cluster? If yes, what will the
> configuration look like (listen address, ports, etc.)?
>
> Even if it is possible, I understand that there might not be any
> performance benefits at all; I just wanted to know if it's theoretically
> possible.
>
We do this.  It's not ideal from an operational POV, but as Jeff points
out, if you have more hardware than it makes sense to give to a single
Cassandra node, it's an option.

Perhaps the biggest/most obvious thing to be aware of is that you need to
use NetworkTopologyStrategy, and you need to ensure that all of the
instances on a host are part of the same rack (otherwise you'll end up with
replicas on the same machine, which isn't very redundant).
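Concretely, the rack constraint can be expressed in each instance's cassandra-rackdc.properties; a sketch where two instances on the same physical host share a rack (the dc/rack names and paths are illustrative assumptions):

```shell
# Sketch: every instance on one physical host gets the same rack, so
# NetworkTopologyStrategy never places two replicas on that machine.
host_rack="host1"
for inst in a b; do
  confdir="/tmp/demo-rackdc/cassandra-$inst"
  mkdir -p "$confdir"
  printf 'dc=dc1\nrack=%s\n' "$host_rack" > "$confdir/cassandra-rackdc.properties"
done
```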

If you are fluent in Puppet, our setup is here:
https://github.com/wikimedia/puppet (Cassandra module here:
https://github.com/wikimedia/puppet/tree/production/modules/cassandra)

One of the things that Puppet module does, is write per-instance metadata
as YAML files into /etc/cassandra-instances.d/, which we use in support of
some tooling to ease some of the operational burden.  As an example, there is a
`c-foreach-nt` script that iteratively issues nodetool commands against all
configured instances (or a `c-any-nt` when you only need one, and don't
care which), and a `c-foreach-restart` that iteratively restarts.  Those
are here: https://github.com/eevans/cassandra-tools-wmf
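A rough, hypothetical sketch of what a c-foreach-nt-style wrapper over that metadata directory could look like (the real tooling lives in the repo above; the "jmx_port:" key and file layout here are assumptions):

```shell
# Walk the per-instance YAML metadata and emit one nodetool invocation per
# configured instance, using each instance's own JMX port.
foreach_nt() {
  dir="$1"; shift
  for meta in "$dir"/*.yaml; do
    [ -e "$meta" ] || continue
    inst=$(basename "$meta" .yaml)
    port=$(awk -F': *' '$1 == "jmx_port" { print $2 }' "$meta")
    echo "instance=$inst: nodetool -p ${port:-7199} $*"
  done
}
# e.g. foreach_nt /etc/cassandra-instances.d status
```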


-- 
Eric Evans
eev...@wikimedia.org


Re: Basic Copy vs Snapshot for backup

2018-05-10 Thread Jeff Jirsa
If you backup the current state of the sstables at the time you upload the
new sstables, you can keep a running point-in-time view without an explicit
snapshot. This is similar to what tablesnap does (
https://github.com/JeremyGrosser/tablesnap )
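The reason a snapshot is a consistent point-in-time view, in miniature: it hard-links the current sstable set, and since sstables are immutable the linked set survives compaction deleting the originals. A sketch (snapshot_sstables is an illustrative helper, not real tooling):

```shell
# Hard-link the current sstable files into a snapshot directory. Links are
# nearly free, and the linked files remain readable even after compaction
# removes the originals from the data directory.
snapshot_sstables() {
  src="$1"; tag="$2"
  mkdir -p "$src/snapshots/$tag"
  for f in "$src"/*-Data.db; do
    [ -e "$f" ] || continue
    ln "$f" "$src/snapshots/$tag/"
  done
}
```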

On Thu, May 10, 2018 at 12:30 PM, Ben Slater 
wrote:

> The snapshot gives you a complete set of your sstables at a point in time.
> If you were copying sstables directly from a live node you would have to
> deal with files coming and going due to compactions.
>
> Cheers
> Ben
>
> On Thu, 10 May 2018 at 16:45  wrote:
>
>> Dear Community,
>>
>>
>>
>> Is there any benefit to taking a backup of a node via ‘nodetool snapshot’
>> vs simply copying the data directory, other than the fact that a snapshot
>> will first flush the memtable and then take the backup?
>>
>>
>>
>> Thanks and regards,
>>
>> Vishal Sharma
>>
>>
>> "*Confidentiality Warning*: This message and any attachments are
>> intended only for the use of the intended recipient(s), are confidential
>> and may be privileged. If you are not the intended recipient, you are
>> hereby notified that any review, re-transmission, conversion to hard copy,
>> copying, circulation or other use of this message and any attachments is
>> strictly prohibited. If you are not the intended recipient, please notify
>> the sender immediately by return email and delete this message and any
>> attachments from your system.
>>
>> *Virus Warning:* Although the company has taken reasonable precautions
>> to ensure no viruses are present in this email. The company cannot accept
>> responsibility for any loss or damage arising from the use of this email or
>> attachment."
>>
> --
>
>
> *Ben Slater*
>
> *Chief Product Officer *
>
>
>
> Read our latest technical blog posts here
> .
>
> This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
> and Instaclustr Inc (USA).
>
> This email and any attachments may contain confidential and legally
> privileged information.  If you are not the intended recipient, do not copy
> or disclose its content, but please reply to this email immediately and
> highlight the error to the sender and then immediately delete the message.
>


Re: Running multiple instances of Cassandra on each node in the cluster

2018-05-10 Thread Jeff Jirsa
It works fine, and there can be meaningful performance benefits if you have
a sufficiently large machine where either you have so much RAM or so much
disk that a single instance would likely underutilize those resources. You
can configure it by adding multiple IPs to the servers, and running one
instance of cassandra per IP.
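As a sketch of that layout (the cluster name, addresses, seed, and paths are illustrative assumptions), each instance gets its own config directory where only the bound addresses and data directories differ:

```shell
# Generate per-instance cassandra.yaml files: one instance per IP on the
# same host, sharing cluster_name and seeds.
for i in 1 2; do
  conf="/tmp/demo-multi/cassandra-$i"
  mkdir -p "$conf"
  cat > "$conf/cassandra.yaml" <<EOF
cluster_name: 'demo'
listen_address: 10.0.0.1$i
rpc_address: 10.0.0.1$i
data_file_directories:
    - /srv/cassandra-$i/data
seed_provider:
    - class_name: org.apache.cassandra.locator.SimpleSeedProvider
      parameters:
          - seeds: "10.0.0.11"
EOF
done
```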


On Thu, May 10, 2018 at 12:55 PM,  wrote:

> Dear community,
>
> Is it possible to have a cluster in Cassandra where each server runs
> multiple instances of Cassandra (each instance being part of the same
> cluster)?
>
> I'm aware that if there's a single server in the cluster, it's possible
> to run multiple instances of Cassandra on it, but is it also possible
> to have multiple such servers in the cluster? If yes, what will the
> configuration look like (listen address, ports, etc.)?
>
> Even if it is possible, I understand that there might not be any
> performance benefits at all; I just wanted to know if it's theoretically
> possible.
>
>
>
> Stack Overflow Link: https://stackoverflow.com/questions/50267683/multiple-instances-of-cassandra-on-each-node-in-the-cluster
>
>
>
> Thanks and regards,
>
> Vishal Sharma
>
>
>


Running multiple instances of Cassandra on each node in the cluster

2018-05-10 Thread Vishal1.Sharma
Dear community,

Is it possible to have a cluster in Cassandra where each of the servers is
running multiple instances of Cassandra (each instance is part of the same
cluster)?

I'm aware that if there's a single server in the cluster, then it's possible to
run multiple instances of Cassandra on it, but is it also possible to
have multiple such servers in the cluster. If yes, what will the configuration
look like (listen address, ports, etc.)?

Even if it was possible, I understand that there might not be any performance 
benefits at all, just wanted to know if it's theoretically possible.



Stack Overflow link: https://stackoverflow.com/questions/50267683/multiple-instances-of-cassandra-on-each-node-in-the-cluster



Thanks and regards,

Vishal Sharma


Re: Basic Copy vs Snapshot for backup

2018-05-10 Thread Ben Slater
The snapshot gives you a complete set of your sstables at a point in time.
If you were copying sstables directly from a live node you would have to
deal with files coming and going due to compactions.
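Ben's point can be illustrated with a toy model (plain Python, not Cassandra code; the sstable names are made up): compaction replaces files underneath a naive copy, while a snapshot freezes a consistent point-in-time set via hardlinks.

```python
# Toy model of a data directory: compaction swaps sstable files while a
# naive recursive copy is in flight; a snapshot is a frozen set of
# hardlinks taken at a single point in time.
live = {"ks-t-1-Data.db", "ks-t-2-Data.db"}

snapshot = set(live)  # nodetool snapshot: hardlink every current sstable

# A compaction rewrites sstables 1 and 2 into sstable 3 mid-copy:
live -= {"ks-t-1-Data.db", "ks-t-2-Data.db"}
live.add("ks-t-3-Data.db")

naive_copy = set(live)  # the copier now sees a different file set
print(sorted(snapshot), sorted(naive_copy))
```

The snapshot captures one consistent generation of sstables; the naive copy can end up with a mix of old and new files, or miss data entirely if it races the deletion.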

Cheers
Ben

On Thu, 10 May 2018 at 16:45  wrote:

> Dear Community,
>
>
>
> Is there any benefit of taking a backup of a node via ‘nodetool snapshot’ vs
> simply copying the data directory, other than the fact that the snapshot will
> first flush the memtable and then take the backup?
>
>
>
> Thanks and regards,
>
> Vishal Sharma
>
>
> "*Confidentiality Warning*: This message and any attachments are intended
> only for the use of the intended recipient(s), are confidential and may be
> privileged. If you are not the intended recipient, you are hereby notified
> that any review, re-transmission, conversion to hard copy, copying,
> circulation or other use of this message and any attachments is strictly
> prohibited. If you are not the intended recipient, please notify the sender
> immediately by return email and delete this message and any attachments
> from your system.
>
> *Virus Warning:* Although the company has taken reasonable precautions to
> ensure no viruses are present in this email. The company cannot accept
> responsibility for any loss or damage arising from the use of this email or
> attachment."
>
-- 


*Ben Slater*

*Chief Product Officer*

Read our latest technical blog posts here.

This email has been sent on behalf of Instaclustr Pty. Limited (Australia)
and Instaclustr Inc (USA).

This email and any attachments may contain confidential and legally
privileged information.  If you are not the intended recipient, do not copy
or disclose its content, but please reply to this email immediately and
highlight the error to the sender and then immediately delete the message.


Basic Copy vs Snapshot for backup

2018-05-10 Thread Vishal1.Sharma
Dear Community,

Is there any benefit of taking a backup of a node via 'nodetool snapshot' vs
simply copying the data directory, other than the fact that the snapshot will
first flush the memtable and then take the backup?

Thanks and regards,
Vishal Sharma


RE: dtests failing with - ValueError: unsupported hash type md5

2018-05-10 Thread Rajiv Dimri
Thank you for the response,

 

Single test command

pytest --cassandra-dir=$CASSANDRA_HOME 
cql_tracing_test.py::TestCqlTracing::test_tracing_simple

 

pytest is being run from within the virtual env (Python 3.6.5);

however, cqlsh is part of the Cassandra distribution, present in $CASSANDRA_HOME/bin.

Even if I install cqlsh in the virtualenv, node.py in ccmlib will pick the cqlsh
present in the Cassandra source directory.
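That resolution behavior can be probed directly; a small sketch follows (the fallback path is an invented example, and ccm's exact lookup logic may differ between versions):

```python
import os
import shutil

# Probe which cqlsh a dtest run would use: ccmlib builds the path from
# the --cassandra-dir argument rather than consulting $PATH, so a cqlsh
# installed in the virtualenv is bypassed. "/opt/cassandra" is only an
# example fallback for illustration.
cassandra_home = os.environ.get("CASSANDRA_HOME", "/opt/cassandra")
ccm_cqlsh = os.path.join(cassandra_home, "bin", "cqlsh")  # what ccm picks
venv_cqlsh = shutil.which("cqlsh")                        # what $PATH offers

print("ccm would run:", ccm_cqlsh)
print("PATH offers:  ", venv_cqlsh)
```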

 

From: kurt greaves  
Sent: Thursday, May 10, 2018 11:37 AM
To: User 
Subject: Re: dtests failing with - ValueError: unsupported hash type md5

 

What command did you run? Probably worth checking that cqlsh is installed in 
the virtual environment and that you are executing pytest from within the 
virtual env.

 

On 10 May 2018 at 05:06, Rajiv Dimri <rajiv.di...@oracle.com> wrote:

Hi All,

 

We have setup a dtest environment to run against Cassandra db version 3.11.1 
and 3.0.5

As per the instructions on https://github.com/apache/cassandra-dtest we have
set up the environment with Python 3.6.5 along with other dependencies.

The server used is Oracle RHEL (Red Hat Enterprise Linux Server release 6.6 
(Santiago))

 

During the runs, multiple tests are failing with the specific error mentioned below.

 

process = , cmd_args = ['cqlsh', 
'TRACING ON', None]

 

    def handle_external_tool_process(process, cmd_args):

    out, err = process.communicate()

    rc = process.returncode

 

    if rc != 0:

>   raise ToolError(cmd_args, rc, out, err)

E   ccmlib.node.ToolError: Subprocess ['cqlsh', 'TRACING ON', None] 
exited with non-zero status; exit status: 1;

E   stderr: ERROR:root:code for hash md5 was not found.

E   Traceback (most recent call last):

E File 
"/ade_autofs/ade_infra/nfsdo_linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
 line 139, in 

E   globals()[__func_name] = __get_hash(__func_name)

E File 
"/ade_autofs/ade_infra/nfsdo_linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
 line 91, in __get_builtin_constructor

E   raise ValueError('unsupported hash type ' + name)

E   ValueError: unsupported hash type md5

E   ERROR:root:code for hash sha1 was not found.

E   Traceback (most recent call last):

E File 
"/ade_autofs/ade_infra/nfsdo_linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
 line 139, in 

E   globals()[__func_name] = __get_hash(__func_name)

E File 
"/ade_autofs/ade_infra/nfsdo_linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
 line 91, in __get_builtin_constructor

E   raise ValueError('unsupported hash type ' + name)

E   ValueError: unsupported hash type sha1

E   ERROR:root:code for hash sha224 was not found.

E   Traceback (most recent call last):

E File 
"/ade_autofs/ade_infra/nfsdo_linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
 line 139, in 

E   globals()[__func_name] = __get_hash(__func_name)

E File 
"/ade_autofs/ade_infra/nfsdo_linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
 line 91, in __get_builtin_constructor

E   raise ValueError('unsupported hash type ' + name)

E   ValueError: unsupported hash type sha224

E   ERROR:root:code for hash sha256 was not found.

E   Traceback (most recent call last):

E File 
"/ade_autofs/ade_infra/nfsdo_linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
 line 139, in 

E   globals()[__func_name] = __get_hash(__func_name)

E File 
"/ade_autofs/ade_infra/nfsdo_linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
 line 91, in __get_builtin_constructor

E   raise ValueError('unsupported hash type ' + name)

E   ValueError: unsupported hash type sha256

E   ERROR:root:code for hash sha384 was not found.

E   Traceback (most recent call last):

E File 
"/ade_autofs/ade_infra/nfsdo_linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
 line 139, in 

E   globals()[__func_name] = __get_hash(__func_name)

E File 
"/ade_autofs/ade_infra/nfsdo_linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
 line 91, in __get_builtin_constructor

E   raise ValueError('unsupported hash type ' + name)

E   ValueError: 
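For what it's worth, the errors above typically mean the Python 2.7 interpreter that launched cqlsh was built without the OpenSSL-backed `_hashlib` extension, so `hashlib` has no working digest implementations. A quick way to probe an interpreter (using the current one here only as a stand-in for the Python cqlsh actually uses):

```python
import subprocess
import sys

# Probe whether an interpreter has working hashlib digests. Point
# `interpreter` at the Python that cqlsh actually runs under;
# sys.executable is used here purely as a stand-in for illustration.
interpreter = sys.executable
probe = "import hashlib; hashlib.md5(b'x').hexdigest(); print('md5 ok')"
result = subprocess.run([interpreter, "-c", probe],
                        capture_output=True, text=True)
print(result.stdout.strip() or result.stderr.strip())
```

If the probe raises `ValueError: unsupported hash type md5` against the interpreter cqlsh uses, the fix is to point cqlsh at a Python built with OpenSSL support rather than to change the dtest environment.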

Re: dtests failing with - ValueError: unsupported hash type md5

2018-05-10 Thread kurt greaves
What command did you run? Probably worth checking that cqlsh is installed
in the virtual environment and that you are executing pytest from within
the virtual env.

On 10 May 2018 at 05:06, Rajiv Dimri  wrote:

> Hi All,
>
>
>
> We have setup a dtest environment to run against Cassandra db version
> 3.11.1 and 3.0.5
>
> As per instruction on https://github.com/apache/cassandra-dtest we have
> setup the environment with python 3.6.5 along with other dependencies.
>
> The server used is Oracle RHEL (Red Hat Enterprise Linux Server release
> 6.6 (Santiago))
>
>
>
> During the runs, multiple tests are failing with the specific error
> mentioned below.
>
>
>
> process = , cmd_args =
> ['cqlsh', 'TRACING ON', None]
>
>
>
> def handle_external_tool_process(process, cmd_args):
>
> out, err = process.communicate()
>
> rc = process.returncode
>
>
>
> if rc != 0:
>
> >   raise ToolError(cmd_args, rc, out, err)
>
> E   ccmlib.node.ToolError: Subprocess ['cqlsh', 'TRACING ON',
> None] exited with non-zero status; exit status: 1;
>
> E   stderr: ERROR:root:code for hash md5 was not found.
>
> E   Traceback (most recent call last):
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 139, in 
>
> E   globals()[__func_name] = __get_hash(__func_name)
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 91, in __get_builtin_constructor
>
> E   raise ValueError('unsupported hash type ' + name)
>
> E   ValueError: unsupported hash type md5
>
> E   ERROR:root:code for hash sha1 was not found.
>
> E   Traceback (most recent call last):
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 139, in 
>
> E   globals()[__func_name] = __get_hash(__func_name)
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 91, in __get_builtin_constructor
>
> E   raise ValueError('unsupported hash type ' + name)
>
> E   ValueError: unsupported hash type sha1
>
> E   ERROR:root:code for hash sha224 was not found.
>
> E   Traceback (most recent call last):
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 139, in 
>
> E   globals()[__func_name] = __get_hash(__func_name)
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 91, in __get_builtin_constructor
>
> E   raise ValueError('unsupported hash type ' + name)
>
> E   ValueError: unsupported hash type sha224
>
> E   ERROR:root:code for hash sha256 was not found.
>
> E   Traceback (most recent call last):
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 139, in 
>
> E   globals()[__func_name] = __get_hash(__func_name)
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 91, in __get_builtin_constructor
>
> E   raise ValueError('unsupported hash type ' + name)
>
> E   ValueError: unsupported hash type sha256
>
> E   ERROR:root:code for hash sha384 was not found.
>
> E   Traceback (most recent call last):
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 139, in 
>
> E   globals()[__func_name] = __get_hash(__func_name)
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 91, in __get_builtin_constructor
>
> E   raise ValueError('unsupported hash type ' + name)
>
> E   ValueError: unsupported hash type sha384
>
> E   ERROR:root:code for hash sha512 was not found.
>
> E   Traceback (most recent call last):
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 139, in 
>
> E   globals()[__func_name] = __get_hash(__func_name)
>
> E File "/ade_autofs/ade_infra/nfsdo_
> linux.x64/PYTHON/2.7.8/LINUX.X64/141106.0120/python/lib/python2.7/hashlib.py",
> line 91, in __get_builtin_constructor
>
> E   raise ValueError('unsupported hash type ' + name)
>
> E   ValueError: unsupported hash type sha512
>
> E   Traceback (most recent call last):
>
> E File