cqlsh COPY ... TO ... doesn't work if one node down

2018-06-29 Thread Dmitry Simonov
Hello! I have cassandra cluster with 5 nodes. There is a (relatively small) keyspace X with RF5. One node goes down. Status=Up/Down |/ State=Normal/Leaving/Joining/Moving -- Address Load Tokens Owns (effective) Host ID Rack UN 10.0.0.82 253.64

Re: [EXTERNAL] Re: consultant recommendations

2018-06-29 Thread Joe Schwartz
Aaron Morton at The Last Pickle is solid; he knows his stuff. Also, like Sean said, nothing against Instaclustr; they are good folks too. Joe Joseph B. Schwartz Western Region Sales Director Mobile: 408-316-0289 On Fri, Jun 29, 2018 at 11:46 AM, Durity, Sean R < sean_r_dur...@homedepot.com>

RE: [EXTERNAL] Re: consultant recommendations

2018-06-29 Thread Durity, Sean R
I haven’t ever hired a Cassandra consultant, but the company named The Last Pickle (yes, an odd name) has some outstanding Cassandra experts. Not sure how they work, but worth a mention here. Nothing against Instaclustr; there are great folks there, too. Sean Durity From: Evelyn Smith

Re: consultant recommendations

2018-06-29 Thread Evelyn Smith
Hey Randy, Instaclustr provides consulting services for Cassandra, as well as managed services if you are looking to offload the admin burden. https://www.instaclustr.com/services/cassandra-consulting/ Alternatively, send me an email

consultant recommendations

2018-06-29 Thread Randy Lynn
Having some OOM issues. Would love to get feedback from the group on what companies/consultants you might use? -- Randy Lynn rl...@getavail.com office: 859.963.1616 <+1-859-963-1616> ext 202 163 East Main Street - Lexington, KY 40507 - USA getavail.com

Re: C* in multiple AWS AZ's

2018-06-29 Thread Pradeep Chhetri
Ohh, I see now. It makes sense. Thanks a lot. On Fri, Jun 29, 2018 at 9:17 PM, Randy Lynn wrote: > data is only lost if you stop the node. between restarts the storage is > fine. > > On Fri, Jun 29, 2018 at 10:39 AM, Pradeep Chhetri > wrote: > >> Isn't NVMe storage instance storage, i.e. the

Re: C* in multiple AWS AZ's

2018-06-29 Thread Randy Lynn
Data is only lost if you stop the node; between restarts the storage is fine. On Fri, Jun 29, 2018 at 10:39 AM, Pradeep Chhetri wrote: > Isn't NVMe storage instance storage, i.e. the data will be lost in case > the instance restarts. How are you going to make sure that there is no data > loss

Re: Cassandra read/sec and write/sec

2018-06-29 Thread Eric Evans
On Thu, Jun 28, 2018 at 5:19 PM Abdul Patel wrote: > > Hi all > > We use Prometheus to monitor Cassandra and then put it on Grafana for > dashboards. > What's the parameter to measure the throughput of Cassandra? I'm not sure how you're getting metrics from Cassandra to Prometheus, or if you're
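[Editor's note, not an answer from the thread: Cassandra's request throughput lives in the ClientRequest metrics — the Latency timer's Count attribute is a cumulative request counter, so in Prometheus you take a rate over it. The metric names below are assumptions; they depend entirely on how your JMX exporter maps MBeans.]

```promql
# Reads and writes per second, assuming a jmx_exporter rule that exposes
# org.apache.cassandra.metrics:type=ClientRequest,scope=...,name=Latency
# under the (hypothetical) name cassandra_clientrequest_latency_count:
rate(cassandra_clientrequest_latency_count{scope="Read"}[5m])
rate(cassandra_clientrequest_latency_count{scope="Write"}[5m])
```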

Re: C* in multiple AWS AZ's

2018-06-29 Thread Pradeep Chhetri
Isn't NVMe storage instance storage, i.e. the data will be lost in case the instance restarts? How are you going to make sure that there is no data loss in case the instance gets rebooted? On Fri, 29 Jun 2018 at 7:00 PM, Randy Lynn wrote: > GPFS - Rahul FTW! Thank you for your help! > > Yes,

Re: C* in multiple AWS AZ's

2018-06-29 Thread Randy Lynn
GPFS - Rahul FTW! Thank you for your help! Yes, Pradeep - migrating to i3 from r3, moving for the NVMe storage. I did not have the benefit of doing benchmarks, but we're moving from 1,500 IOPS so I intrinsically know we'll get better throughput. On Fri, Jun 29, 2018 at 7:21 AM, Rahul Singh wrote:

Common knowledge on C* heap size/file_cache_size_in_mb/other RAM usage parameters

2018-06-29 Thread Vsevolod Filaretov
What are the general community guidelines on setting up C* heap size, file_cache_size_in_mb, off-heap space usage and other RAM usage settings? Are there any general guidelines like "if your total data size per node is X / median/max partition size is Y, your RAM usage settings better be Z" or things
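[Editor's note, not a reply from the thread: for a baseline, the stock cassandra-env.sh picks the default heap as max(min(1/2 RAM, 1 GB), min(1/4 RAM, 8 GB)), and the cassandra.yaml comments give file_cache_size_in_mb a default of the smaller of 1/4 of the heap and 512 MB. A sketch of that arithmetic:]

```python
def default_max_heap_mb(system_memory_mb: int) -> int:
    """Mirrors the stock cassandra-env.sh heap calculation:
    max(min(1/2 RAM, 1024 MB), min(1/4 RAM, 8192 MB))."""
    half = min(system_memory_mb // 2, 1024)
    quarter = min(system_memory_mb // 4, 8192)
    return max(half, quarter)

def default_file_cache_size_mb(heap_mb: int) -> int:
    """cassandra.yaml default: the smaller of 1/4 of the heap and 512 MB."""
    return min(heap_mb // 4, 512)

# A 61 GiB node (e.g. an i3.2xlarge-class box) caps out at the 8 GB
# default heap, and the chunk cache defaults to 512 MB:
heap = default_max_heap_mb(61 * 1024)
cache = default_file_cache_size_mb(heap)
print(heap, cache)  # -> 8192 512
```

These are only the shipped defaults, not a tuning recommendation; the usual community advice is still to benchmark your own workload before overriding them.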

Re: C* in multiple AWS AZ's

2018-06-29 Thread Rahul Singh
Totally agree. GPFS for the win. EC2 multi-region snitch is an automation tool like Ansible or Puppet. Unless you have two orders of magnitude more servers than you do now, you don’t need it. Rahul On Jun 29, 2018, 6:18 AM -0400, kurt greaves , wrote: > Yes. You would just end up with a rack

Re: C* in multiple AWS AZ's

2018-06-29 Thread kurt greaves
Yes. You would just end up with a rack named differently to the AZ. This is not a problem, as racks are just logical. I would recommend migrating all your DCs to GPFS though, for consistency. On Fri., 29 Jun. 2018, 09:04 Randy Lynn, wrote: > So we have two data centers already running.. > >
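[Editor's note, not from the thread: migrating to GossipingPropertyFileSnitch (GPFS) amounts to setting the snitch in cassandra.yaml and giving each node a cassandra-rackdc.properties. The dc/rack values below are illustrative only, mirroring the AZ-as-rack layout discussed above.]

```properties
# cassandra.yaml:
#   endpoint_snitch: GossipingPropertyFileSnitch
#
# cassandra-rackdc.properties on a node in us-east-1a (names illustrative;
# with GPFS they are free-form, they just have to stay consistent):
dc=us-east
rack=us-east-1a
```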

Re: C* in multiple AWS AZ's

2018-06-29 Thread Pradeep Chhetri
Just curious - from which instance type are you migrating to the i3 type, and what are the reasons to move to i3? Are you going to take advantage of the NVMe instance storage - if yes, how? Since we are also migrating our cluster on AWS - but we are currently using r4 instances, so I was