Re: [HELP] Cassandra 4.1.1 Repeated Bootstrapping Failure

2023-09-11 Thread Bowen Song via user
ly in our case, as it was fixed in 4.0, and I also don't believe our data centre grade SSDs are that slow. *Tried*: reducing the stream_throughput_outbound from 30 to 15 MiB/s *Result*: did not help, no sign of any improvement *Tried*: analyse the logs from the joining node and the streaming source

Re: [HELP] Cassandra 4.1.1 Repeated Bootstrapping Failure

2023-09-11 Thread C. Scott Andreas
ducing the stream_throughput_outbound from 30 to 15 MiB/s Result: did not help, no sign of any improvement Tried: analyse the logs from the joining node and the streaming source nodes Result: the error says the write connection timed out on the sending end, but a f

[HELP] Cassandra 4.1.1 Repeated Bootstrapping Failure

2023-09-11 Thread Bowen Song via user
ata centre grade SSDs are that slow. *Tried*: reducing the stream_throughput_outbound from 30 to 15 MiB/s *Result*: did not help, no sign of any improvement *Tried*: analyse the logs from the joining node and the streaming source nodes *Result*: the error says the write connection timed out on the s

Re: Help determining pending compactions

2022-11-07 Thread Richard Hesse
d upgrading to 3.2. > > Upgrading Reaper to 3.2 resolved our issue. > > > > Hope this helps. > > Eric > > > > > > > > *From:* Richard Hesse > *Sent:* Sunday, October 30, 2022 12:07 PM > *To:* user@cassandra.apache.org > *Subject:* Help determini

RE: Help determining pending compactions

2022-11-07 Thread Eric Ferrenbach
: Sunday, October 30, 2022 12:07 PM To: user@cassandra.apache.org Subject: Help determining pending compactions [WARNING - EXTERNAL EMAIL] Do not open links or attachments unless you recognize the sender of this email. If you are unsure please click the button "Report suspicious email"

Re: Help determining pending compactions

2022-10-30 Thread Richard Hesse
Sorry about that. 4.0.6 On Sun, Oct 30, 2022, 11:19 AM Dinesh Joshi wrote: > It would be helpful if you could tell us what version of Cassandra you’re > using? > > Dinesh > > > On Oct 30, 2022, at 10:07 AM, Richard Hesse wrote: > > > >  > > Hi, I'm hopi

Re: Help determining pending compactions

2022-10-30 Thread Dinesh Joshi
It would be helpful if you could tell us what version of Cassandra you’re using? Dinesh > On Oct 30, 2022, at 10:07 AM, Richard Hesse wrote: > >  > Hi, I'm hoping to get some help with a vexing issue with one of our > keyspaces. During Reaper repair sessions, one keysp

Help determining pending compactions

2022-10-30 Thread Richard Hesse
Hi, I'm hoping to get some help with a vexing issue with one of our keyspaces. During Reaper repair sessions, one keyspace will end up with hanging, non-started compactions. That is, the number of compactions as reported by nodetool compactionstats stays flat and there are no running compactions

Re: Need urgent help in cassandra modelling

2022-03-19 Thread MyWorld
Anyone have any clue? On Wed, Mar 9, 2022 at 7:01 PM MyWorld wrote: > Hi all, > Some problems with the display. Resending my query- > > I am modelling a table for a shopping site where we store products for > customers and their data in json. Max prods for a customer is 10k. > > We initially

Re: Need urgent help in cassandra modelling

2022-03-09 Thread MyWorld
Hi all, Some problems with the display. Resending my query- I am modelling a table for a shopping site where we store products for customers and their data in json. Max prods for a customer is 10k. We initially designed this table with the architecture below: cust_prods(cust_id bigint PK,

Need urgent help in cassandra modelling

2022-03-09 Thread MyWorld
Hi all, I am modelling a table for a shopping site where we store products for customers and their data in json. Max prods for a customer is 10k. >>We initially designed this table with the architecture below: cust_prods(cust_id bigint PK, prod_id bigint CK, prod_data text). cust_id is partition
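
Rendered as CQL, the initial design described above would look roughly like the sketch below (table and column names are taken from the message; anything beyond them is an assumption):

    CREATE TABLE cust_prods (
        cust_id   bigint,   -- partition key: one partition per customer
        prod_id   bigint,   -- clustering key: up to ~10k products per customer
        prod_data text,     -- product data stored as JSON text
        PRIMARY KEY (cust_id, prod_id)
    );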

Re: TWCS repair and compact help

2021-06-29 Thread Kane Wilson
> > Oh. So our data is all messed up now because of the “nodetool compact” I > ran. > > > > Hi Erick. Thanks for the quick reply. > > > > I just want to be sure about compact. I saw Cassandra will do compaction > by itself even when I do not run “nodetool compact” manually (nodetool >

RE: TWCS repair and compact help

2021-06-29 Thread Eric Wong
ache.org Subject: Re: TWCS repair and compact help You definitely shouldn't perform manual compactions -- you should let the normal compaction tasks take care of it. It is unnecessary to manually run compactions since it creates more problems than it solves as I've explained in

Re: TWCS repair and compact help

2021-06-29 Thread Gábor Auth
Hi, On Tue, Jun 29, 2021 at 12:34 PM Erick Ramirez wrote: > You definitely shouldn't perform manual compactions -- you should let the > normal compaction tasks take care of it. It is unnecessary to manually run > compactions since it creates more problems than it solves as I've explained > in

Re: TWCS repair and compact help

2021-06-29 Thread Erick Ramirez
You definitely shouldn't perform manual compactions -- you should let the normal compaction tasks take care of it. It is unnecessary to manually run compactions since it creates more problems than it solves as I've explained in this post -- https://community.datastax.com/questions/6396/. Cheers!

TWCS repair and compact help

2021-06-29 Thread Eric Wong
Hi: We need some help with Cassandra repair and compact for a table that uses TWCS. We are running Cassandra 4.0-rc1. The database is called test_db; its biggest table, "minute_rate", stores time-series data. It has the following configuration: CREATE TABLE test_db.minute_rate ( marke
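
A minimal sketch of what a TWCS time-series table of the shape described here can look like. The table name comes from the message; the column names, window size, and TTL are assumptions, while the compaction options are the standard TimeWindowCompactionStrategy settings:

    CREATE TABLE test_db.minute_rate (
        market text,        -- hypothetical partition column
        minute timestamp,   -- hypothetical clustering column, one row per minute
        rate   double,
        PRIMARY KEY (market, minute)
    ) WITH CLUSTERING ORDER BY (minute ASC)
      AND compaction = {
          'class': 'TimeWindowCompactionStrategy',
          'compaction_window_unit': 'DAYS',
          'compaction_window_size': 1
      }
      AND default_time_to_live = 2592000;  -- 30 days, assumption

As the replies in this thread note, the per-window compactions should be left to run on their own; a manual nodetool compact merges the windows into one large SSTable.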

Is there any Cassandra genius who can help me solve the sudden startup error of "Error: Could not find or load main class -ea"?

2019-12-24 Thread Nimbus Lin
To Cassandra's developers and users: CC dimo: Firstly, thanks dimo for his guidance, but as my former mail shows, there is no -ea variable in Cassandra's startup program and configuration. The CentOS 6.9 OS's env also doesn't have an -ea variable. And the Cassandra startup fails, so I can't

Re: Need help on dealing with Cassandra robustness and zombie data

2019-07-01 Thread Jeff Jirsa
process itself didn’t go down when marked as “DN”... (the node > itself might just be temporary having some hiccup and not reachable )... so > would not auto-start still help? > #2 we can’t set longer gc grace because we are very sensitive to latency ... > and we have a lot data in and dat

Re: Need help on dealing with Cassandra robustness and zombie data

2019-07-01 Thread yuping wang
Thank you; very helpful. But we do have some difficulties #1 Cassandra process itself didn’t go down when marked as “DN”... (the node itself might just be temporary having some hiccup and not reachable )... so would not auto-start still help? #2 we can’t set longer gc grace because we are very

Re: Need help on dealing with Cassandra robustness and zombie data

2019-07-01 Thread Rhys Campbell
#1 Set the Cassandra service to not auto-start. #2 A longer gc_grace time would help. #3 Rebootstrap? If the node doesn't come back within gc_grace_seconds, remove the node, wipe it, and bootstrap it again. https://docs.datastax.com/en/archived/cassandra/2.0/cassandra/dml/dml_about_deletes_c.html
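
For context, gc_grace_seconds is a per-table setting, so the trade-off discussed in this thread (a short grace period versus zombie data when a node is down longer than the grace period) is tuned table by table. A minimal sketch, assuming Cassandra 3.0+ for the system_schema lookup and a hypothetical keyspace/table:

    -- inspect the current value
    SELECT gc_grace_seconds FROM system_schema.tables
      WHERE keyspace_name = 'my_ks' AND table_name = 'my_table';

    -- move it back toward the 10-day default (864000 seconds)
    ALTER TABLE my_ks.my_table WITH gc_grace_seconds = 864000;

If a node does stay down longer than gc_grace_seconds, the advice above applies: remove it, wipe it, and bootstrap it again rather than letting it rejoin with deletes that could be resurrected.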

Need help on dealing with Cassandra robustness and zombie data

2019-07-01 Thread yuping wang
Hi all, Sorry for the interruption. But I need help. Due to specific reasons of our use case, we have gc grace on the order of 10 minutes instead of default 10 days. Since we have a large amount of nodes in our Cassandra fleet, not surprisingly, we encounter occasionally node status

Re: Ansible scripts for Cassandra to help with automation needs

2019-02-14 Thread Abdul Patel
One idea would be a rolling restart of the complete cluster; that script would be a huge help. I just read a blog saying that The Last Pickle group has come up with a tool called 'cstar' which can help with rolling restarts. On Thursday, February 14, 2019, Jeff Jirsa wrote: > > > > On

Re: Ansible scripts for Cassandra to help with automation needs

2019-02-13 Thread Jeff Jirsa
> On Feb 13, 2019, at 9:51 PM, Kenneth Brotman > wrote: > > I want to generate a variety of Ansible scripts to share with the Apache > Cassandra community. I’ll put them in a Github repository. Just email me > offline what scripts would help the most. > > Doe

Ansible scripts for Cassandra to help with automation needs

2019-02-13 Thread Kenneth Brotman
I want to generate a variety of Ansible scripts to share with the Apache Cassandra community. I'll put them in a Github repository. Just email me offline what scripts would help the most. Does this exist already? I can't find it. Let me know if it does. If not, let's put it together

RE: Help with sudden spike in read requests

2019-02-01 Thread Kenneth Brotman
for monitoring and how did you find out it was happening? Is this a DSE cluster or OSS Cassandra cluster? Kenneth Brotman From: Subroto Barua [mailto:sbarua...@yahoo.com.INVALID] Sent: Friday, February 01, 2019 10:48 AM To: user@cassandra.apache.org Subject: Re: Help with sudden spike in read

Re: Help with sudden spike in read requests

2019-02-01 Thread Subroto Barua
; happening? What changed since it started happening? > > Kenneth Brotman > > From: Subroto Barua [mailto:sbarua...@yahoo.com.INVALID] > Sent: Friday, February 01, 2019 10:13 AM > To: user@cassandra.apache.org > Subject: Re: Help with sudden spike in read requests > >

RE: Help with sudden spike in read requests

2019-02-01 Thread Kenneth Brotman
, February 01, 2019 10:13 AM To: user@cassandra.apache.org Subject: Re: Help with sudden spike in read requests Vnode is 256 C*: 3.0.15 on m4.4xlarge gp2 vol There are 2 more DCs on bare metal (raid 10 and older machines) attached to this cluster and we have not seen this behavior on on-prem

Re: Help with sudden spike in read requests

2019-02-01 Thread Subroto Barua
day, February 01, 2019 8:45 AM > To: User cassandra.apache.org > Subject: Help with sudden spike in read requests > > In our production cluster, we observed sudden spike (over 160 MB/s) in read > requests on *all* Cassandra nodes for a very short period (less than a min); >

RE: Help with sudden spike in read requests

2019-02-01 Thread Kenneth Brotman
If you had a query that went across the partitions and especially if you had vNodes set high, that would do it. Kenneth Brotman From: Subroto Barua [mailto:sbarua...@yahoo.com.INVALID] Sent: Friday, February 01, 2019 8:45 AM To: User cassandra.apache.org Subject: Help with sudden spike

Help with sudden spike in read requests

2019-02-01 Thread Subroto Barua
In our production cluster, we observed sudden spike (over 160 MB/s) in read requests on *all* Cassandra nodes for a very short period (less than a min); this event happens few times a day. I am not able to get to the bottom of this issue, nothing interesting in system.log or from app level;

Re: Help in understanding strange cassandra CPU usage

2018-12-09 Thread Michael Shuler
On 12/9/18 4:09 AM, Devaki, Srinivas wrote: > > Cassandra Version: 2.2.4 There have been over 300 bug fixes and improvements in the nearly 3 years between 2.2.4 and the latest 2.2.13 release. Somewhere in there was a GC logging addition as I scanned the changes, which coul

Re: Help in understanding strange cassandra CPU usage

2018-12-09 Thread Jeff Jirsa
Sounds like over time you're ending up doing something odd - maybe you're leaking CQL connections or something and it gets more and more expensive to manage them until you invoke the breaker, then it drops. Will probably take someone going through a heap dump to really understand what's going

Help in understanding strange cassandra CPU usage

2018-12-09 Thread Devaki, Srinivas
Hi Guys, Since the start of our org, Cassandra used to be a SPOF. Due to recent priorities we changed our code base so that Cassandra won't be a SPOF anymore, and during that process we made a kill switch within the code (PHP); this kill switch would ensure that no connection is made to the

Re: Cassandra HEAP Suggestion.. Need a help

2018-05-24 Thread Elliott Sims
JVM GC tuning can be pretty complex, but the simplest solution to OOM is probably switching to G1GC and feeding it a rather large heap. Theoretically a smaller heap and carefully-tuned CMS collector is more efficient, but CMS is kind of fragile and tuning it is more of a black art, where you can

Re: Cassandra HEAP Suggestion.. Need a help

2018-05-10 Thread Jeff Jirsa
There's no single right answer. It depends a lot on the read/write patterns and other settings (onheap memtable, offheap memtable, etc). One thing that's probably always true, if you're using ParNew/CMS, 16G heap is a bit large, but may be appropriate for some read heavy workloads, but you'd want

Cassandra HEAP Suggestion.. Need a help

2018-05-10 Thread Mokkapati, Bhargav (Nokia - IN/Chennai)
Hi Team, I have 64GB of total system memory. 5 node cluster. x ~# free -m -- Mem: total 64266, used 17549, free 41592, shared 66, buff/cache 5124, available 46151; Swap: 0 0 0

Re: Help needed to enable Client-to-node encryption (SSL)

2018-02-19 Thread Alain RODRIGUEZ
> > (2.0 is getting pretty old and isn't supported, you may want to consider > upgrading; 2.1 would be the smallest change and least risk, but it, too, is > near end of life) I would upgrade as well. Yet I think moving from Cassandra 2.0 to Cassandra 2.2 directly is doable smoothly and

Re: Help needed to enable Client-to-node encryption (SSL)

2018-02-16 Thread Jeff Jirsa
http://thelastpickle.com/blog/2015/09/30/hardening-cassandra-step-by-step-part-1-server-to-server.html https://www.youtube.com/watch?v=CKt0XVPogf4 (2.0 is getting pretty old and isn't supported, you may want to consider upgrading; 2.1 would be the smallest change and least risk, but it, too, is

Help needed to enable Client-to-node encryption (SSL)

2018-02-16 Thread Prachi Rath
Hi, I am using Cassandra version 2.0. My goal is to set up Cassandra client-to-node security using SSL with my self-signed CA. What would be the recommended procedure for enabling SSL on Cassandra version 2.0.17? Thanks, Prachi

Re: Need help with incremental repair

2017-10-30 Thread Blake Eggleston
Ah cool, I didn't realize reaper did that. On October 30, 2017 at 1:29:26 PM, Paulo Motta (pauloricard...@gmail.com) wrote: > This is also the case for full repairs, if I'm not mistaken. Assuming I'm not > missing something here, that should mean that he shouldn't need to mark > sstables as

Re: Need help with incremental repair

2017-10-30 Thread Paulo Motta
> This is also the case for full repairs, if I'm not mistaken. Assuming I'm not > missing something here, that should mean that he shouldn't need to mark > sstables as unrepaired? That's right, but he mentioned that he is using reaper which uses subrange repair if I'm not mistaken, which

Re: Need help with incremental repair

2017-10-30 Thread Blake Eggleston
> Once you run incremental repair, your data is permanently marked as repaired This is also the case for full repairs, if I'm not mistaken. I'll admit I'm not as familiar with the quirks of repair in 2.2, but prior to 4.0/CASSANDRA-9143, any global repair ends with an anticompaction that marks

Re: Need help with incremental repair

2017-10-30 Thread kurt greaves
Yes mark them as unrepaired first. You can get sstablerepairedset from source if you need (probably make sure you get the correct branch/tag). It's just a shell script so as long as you have C* installed in a default/canonical location it should work.

Re: Need help with incremental repair

2017-10-29 Thread Aiman Parvaiz
at out too. From: Paulo Motta <pauloricard...@gmail.com> Sent: Sunday, October 29, 2017 1:56:38 PM To: user@cassandra.apache.org Subject: Re: Need help with incremental repair > Assuming the situation is just "we accidentally ran incremental repair", you > shouldn't hav

Re: Need help with incremental repair

2017-10-29 Thread Paulo Motta
a streaming, > and inconsistencies in some edge cases, but as long as you're running full > repairs before gc grace expires, everything should be ok. > > Thanks, > > Blake > > > On October 28, 2017 at 1:28:42 AM, Aiman Parvaiz (ai...@steelhouse.com) > wrote: > &g

Re: Need help with incremental repair

2017-10-28 Thread Blake Eggleston
ut as long as you're running full repairs before gc grace expires, everything should be ok. Thanks, Blake On October 28, 2017 at 1:28:42 AM, Aiman Parvaiz (ai...@steelhouse.com) wrote: Hi everyone, We seek your help in a issue we are facing in our 2.2.8 version. We have 24 nodes cluster spread

Need help with incremental repair

2017-10-28 Thread Aiman Parvaiz
Hi everyone, We seek your help with an issue we are facing on version 2.2.8. We have a 24-node cluster spread over 3 DCs. Initially, when the cluster was in a single DC, we were using The Last Pickle Reaper 0.5 to repair it with incremental repair set to false. We added 2 more DCs. Now

How Can I get started with Using Cassandra and Netbeans- Please help

2017-09-29 Thread Lutaya Shafiq Holmes
How Can I get started with Using Cassandra and Netbeans- Please help -- Lutaaya Shafiq Web: www.ronzag.com | i...@ronzag.com Mobile: +256702772721 | +256783564130 Twitter: @lutayashafiq Skype: lutaya5 Blog: lutayashafiq.com http://www.fourcornersalliancegroup.com/?a=shafiqholmes "The

Re: Help in c* Data modelling

2017-07-23 Thread @Nandan@
Hi, The best way will be to go with a per-query-per-table plan and distribute the common columns into both tables. This will help you support the queries, and reads and writes will be fast. The only drawback is that you have to insert the common data into both tables at the same time, which can be easily

Re: Help in c* Data modelling

2017-07-23 Thread Jonathan Haddad
Using a different table to answer each query is the correct answer here assuming there's a significant amount of data. If you don't have that much data, maybe you should consider using a database like Postgres which gives you query flexibility instead of horizontal scalability. On Sun, Jul 23,

Re: Help in c* Data modelling

2017-07-23 Thread techpyaasa .
Hi vladyu/varunbarala, Instead of creating a second table as you said, can I just have the one (first) table below and get all rows with status=0? CREATE TABLE IF NOT EXISTS test.user ( account_id bigint, pid bigint, disp_name text, status int, PRIMARY KEY (account_id, pid) ) WITH CLUSTERING ORDER BY

Re: Help in c* Data modelling

2017-07-23 Thread Vladimir Yudovin
Hi, unfortunately ORDER BY is supported for clustering columns only... Winguzone - Cloud Cassandra Hosting On Sun, 23 Jul 2017 12:49:36 -0400 techpyaasa . techpya...@gmail.com wrote Hi Varun, Thanks a lot for your reply. In this case if I want to update

Re: Help in c* Data modelling

2017-07-23 Thread techpyaasa .
Hi Varun, Thanks a lot for your reply. In this case if I want to update status(status can be updated for given account_id, pid) , I need to delete existing row in 2nd table & add new one... :( :( Its like hitting cassandra twice for 1 change.. :( On Sun, Jul 23, 2017 at 8:42 PM, Varun

Re: Help in c* Data modelling

2017-07-23 Thread Varun Barala
Hi, You can create pseudo index table. IMO, structure can be:- CREATE TABLE IF NOT EXISTS test.user ( account_id bigint, pid bigint, disp_name text, status int, PRIMARY KEY (account_id, pid) ) WITH CLUSTERING ORDER BY (pid ASC); CREATE TABLE IF NOT EXISTS test.user_index ( account_id bigint,
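
The second CREATE TABLE in this message is cut off by the archive. Purely as an illustration of the pseudo-index idea being described (not necessarily the exact table proposed), a lookup table that promotes status into the partition key could look like:

    CREATE TABLE IF NOT EXISTS test.user_index (
        account_id bigint,
        status     int,
        pid        bigint,
        disp_name  text,
        PRIMARY KEY ((account_id, status), pid)
    ) WITH CLUSTERING ORDER BY (pid ASC);

    -- all rows of an account with status = 0
    SELECT * FROM test.user_index WHERE account_id = ? AND status = 0;

As noted elsewhere in the thread, the cost is that a status change means deleting the old index row and inserting a new one, i.e. two writes per change.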

Help in c* Data modelling

2017-07-22 Thread techpyaasa .
Hi, We have a table like below: CREATE TABLE ks.cf ( accountId bigint, pid bigint, dispName text, status int, PRIMARY KEY (accountId, pid) ) WITH CLUSTERING ORDER BY (pid ASC); We would like to have the following queries possible on the above table: select * from site24x7.wm_current_status

Re: need help tuning dropped mutation messages

2017-07-06 Thread Subroto Barua
c* version: 3.0.11; cross_node_timeout: true; range_request_timeout_in_ms: 1; write_request_timeout_in_ms: 2000; counter_write_request_timeout_in_ms: 5000; cas_contention_timeout_in_ms: 1000. On Thursday, July 6, 2017, 11:43:44 AM PDT, Subroto Barua wrote: I am seeing

need help tuning dropped mutation messages

2017-07-06 Thread Subroto Barua
I am seeing these errors: MessagingService.java:1013 -- MUTATION messages dropped in last 5000 ms: 0 for internal timeout and 4 for cross node timeout. Write consistency @ LOCAL_QUORUM is failing on a 3-node cluster and an 18-node cluster.

Re: Help with data modelling (from MySQL to Cassandra)

2017-03-27 Thread Zoltan Lorincz
tatic, > > doc_title text static, > > element_title text, > > PRIMARY KEY (doc_id, element_id) > > ); > > The static columns are present once per unique doc_id. > > > > On 03/27/2017 01:08 PM, Zoltan Lorincz wrote: > > Hi Alexander,

Re: Help with data modelling (from MySQL to Cassandra)

2017-03-27 Thread Avi Kivity
. On 03/27/2017 01:08 PM, Zoltan Lorincz wrote: Hi Alexander, thank you for your help! I think we found the answer: CREATE TABLE documents ( doc_id uuid, description text, title text, PRIMARY KEY (doc_id) ); CREATE TABLE nodes ( doc_id uuid, element_id uuid, title

Re: Help with data modelling (from MySQL to Cassandra)

2017-03-27 Thread Zoltan Lorincz
eries can be used sometimes but I > usually run parallel async as Alexander explained. > > On Mon, Mar 27, 2017 at 12:08 PM, Zoltan Lorincz <zol...@gmail.com> wrote: > >> Hi Alexander, >> >> thank you for your help! I think we found the answer: >

Re: Help with data modelling (from MySQL to Cassandra)

2017-03-27 Thread Matija Gobec
Thats exactly what I described. IN queries can be used sometimes but I usually run parallel async as Alexander explained. On Mon, Mar 27, 2017 at 12:08 PM, Zoltan Lorincz <zol...@gmail.com> wrote: > Hi Alexander, > > thank you for your help! I think we found the answer: &

Re: Help with data modelling (from MySQL to Cassandra)

2017-03-27 Thread Zoltan Lorincz
Hi Alexander, thank you for your help! I think we found the answer: CREATE TABLE documents ( doc_id uuid, description text, title text, PRIMARY KEY (doc_id) ); CREATE TABLE nodes ( doc_id uuid, element_id uuid, title text, PRIMARY KEY (doc_id, element_id) ); We
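
Given the schema quoted here, all elements of a document live in a single partition, so the queries discussed in this thread reduce to plain partition reads (a sketch using the table and column names from this message):

    -- document metadata
    SELECT title, description FROM documents WHERE doc_id = ?;

    -- every element of that document, one partition read
    SELECT element_id, title FROM nodes WHERE doc_id = ?;

    -- a single element when element_id is known
    SELECT title FROM nodes WHERE doc_id = ? AND element_id = ?;

This avoids the 100k-item IN list asked about earlier in the thread; the driver can simply page through the doc_id partition.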

Re: Help with data modelling (from MySQL to Cassandra)

2017-03-27 Thread Alexander Dejanovski
Hi Zoltan, you must try to avoid multi partition queries as much as possible. Instead, use asynchronous queries to grab several partitions concurrently. Try to send no more than ~100 queries at the same time to avoid DDOS-ing your cluster. This would leave you roughly with 1000+ async queries

Re: Help with data modelling (from MySQL to Cassandra)

2017-03-26 Thread Zoltan Lorincz
Querying by (doc_id and element_id ) OR just by (element_id) is fine, but the real question is, will it be efficient to query 100k+ primary keys in the elements table? e.g. SELECT * FROM elements WHERE element_id IN (element_id1, element_id2, element_id3, element_id100K+) ? The elements_id

Re: Help with data modelling (from MySQL to Cassandra)

2017-03-26 Thread Matija Gobec
Have one table hold document metadata (doc_id, title, description, ...) and have another table elements where partition key is doc_id and clustering key is element_id. Only problem here is if you need to query and/or update element just by element_id but I don't know your queries up front. On

Help with data modelling (from MySQL to Cassandra)

2017-03-26 Thread Zoltan Lorincz
Dear cassandra users, We have the following structure in MySql: documents->[doc_id(primary key), title, description] elements->[element_id(primary key), doc_id(index), title, description] Notation: table name->[column1(key or index), column2, …] We want to transfer the data to Cassandra. Each

Re: HELP with bulk loading

2017-03-14 Thread Artur R
Thank you all! It turns out that the fastest ways are https://github.com/brianmhess/cassandra-loader and COPY FROM. So I decided to stick with COPY FROM as it's built-in and easy to use. On Fri, Mar 10, 2017 at 2:22 PM, Ahmed Eljami wrote: > Hi, > > >3. sstableloader is
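
For reference, a hedged example of the cqlsh COPY FROM invocation being discussed; the keyspace, table, columns, file path, and option values below are placeholders, while HEADER, NUMPROCESSES, CHUNKSIZE, and MAXBATCHSIZE are documented cqlsh COPY options:

    COPY my_ks.my_table (id, col1, col2)
    FROM '/path/to/data.csv'
    WITH HEADER = TRUE
     AND NUMPROCESSES = 8
     AND CHUNKSIZE = 5000
     AND MAXBATCHSIZE = 20;

The thread found cassandra-loader comparable in speed; COPY FROM won out here simply because it ships with cqlsh.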

Re: HELP with bulk loading

2017-03-10 Thread Ahmed Eljami
Hi, >3. sstableloader is slow too. Assuming that I have a new empty C* cluster, how can I improve the upload speed? Maybe disable replication or some other settings while streaming and then turn it back? Maybe you can accelerate your load with the option -cph (connections per host):

Re: HELP with bulk loading

2017-03-09 Thread Stefania Alborghetti
When I tested cqlsh COPY FROM for CASSANDRA-11053, I was able to import about 20 GB in under 4 minutes on a cluster with 8 nodes using

Re: HELP with bulk loading

2017-03-09 Thread Ryan Svihla
I suggest using cassandra loader https://github.com/brianmhess/cassandra-loader On Mar 9, 2017 5:30 PM, "Artur R" wrote: > Hello all! > > There are ~500gb of CSV files and I am trying to find the way how to > upload them to C* table (new empty C* cluster of 3 nodes,

HELP with bulk loading

2017-03-09 Thread Artur R
Hello all! There are ~500gb of CSV files and I am trying to find the way how to upload them to C* table (new empty C* cluster of 3 nodes, replication factor 2) within reasonable time (say, 10 hours using 3-4 instance of c3.8xlarge EC2 nodes). My first impulse was to use CQLSSTableWriter, but it

Re: Attached profiled data but need help understanding it

2017-03-06 Thread Romain Hardouin
Hi Kant, You'll find more information about ixgbevf here: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/sriov-networking.html I repeat myself but don't underestimate VMs placement: same AZ? same placement group? etc. Note that LWT are not discouraged but as the doc says: "[...] reserve

Re: Attached profiled data but need help understanding it

2017-03-06 Thread Kant Kodali
Hi Romain, We may be able to achieve what we need without LWT, but that would require a bunch of changes on the application side and possibly introducing caching layers and designing the solution around that. But for now, we are constrained to use LWTs for another month or so. All said, I still would
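
For readers skimming the thread: LWT here means lightweight transactions, i.e. conditional writes that go through Paxos and cost several round trips per write, which is why the throughput question keeps coming back to them. A minimal illustration with a hypothetical table:

    -- plain write
    INSERT INTO my_ks.events (id, payload) VALUES (?, ?);

    -- LWT write: Paxos prepare/propose/commit rounds, far lower throughput
    INSERT INTO my_ks.events (id, payload) VALUES (?, ?) IF NOT EXISTS;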

Re: Attached profiled data but need help understanding it

2017-03-03 Thread Romain Hardouin
Also, I should have mentioned that it would be a good idea to spawn your three benchmark instances in the same AZ, then try with one instance in each AZ to see how network latency affects your LWT rate. The lowest latency is achievable with three instances in the same placement group, of course

Re: Attached profiled data but need help understanding it

2017-03-02 Thread Romain Hardouin
o, could you please look into my responses from other email? It would greatly help. Thanks,kant On Tue, Feb 28, 2017 at 10:20 PM, Kant Kodali <k...@peernova.com> wrote: Hi Romain, I am using Cassandra version 3.0.9 and here is the generated report  (Graphical view) of my thread dump as well!. J

Re: Attached profiled data but need help understanding it

2017-02-28 Thread Kant Kodali
Hi Romain, I am using Cassandra version 3.0.9 and here is the generated report (graphical view) of my thread dump as well. Just sending this over in case it helps. Thanks, kant On Tue,

Re: Attached profiled data but need help understanding it

2017-02-28 Thread Kant Kodali
Hi Romain, Thanks again. My response are inline. kant On Tue, Feb 28, 2017 at 10:04 AM, Romain Hardouin wrote: > > we are currently using 3.0.9. should we use 3.8 or 3.10 > > No, don't use 3.X in production unless you really need a major feature. > I would advise to

Re: Attached profiled data but need help understanding it

2017-02-28 Thread Romain Hardouin
> we are currently using 3.0.9. should we use 3.8 or 3.10 No, don't use 3.X in production unless you really need a major feature. I would advise to stick to 3.0.X (i.e. 3.0.11 now). You can backport CASSANDRA-11966 easily but of course you have to deploy from source as a prerequisite. > I haven't

Re: Attached profiled data but need help understanding it

2017-02-27 Thread Kant Kodali
attached screenshot? I can see the CPU is almost >> maxed out but should I say that is because of compaction or >> shared-worker-pool threads (which btw, I dont know what they are doing >> perhaps I need to take threadump)? Also what is alloc for each thread? >> >> I have a insert heavy workload (almost like an ingest running against >> cassandra cluster) and in my case all writes are LWT. >> >> The current throughput is 1500 writes/sec where each write is about 1KB. >> How can I tune something for a higher throughput? Any pointers or >> suggestions would help. >> >> Thanks much, >> kant >> >> >> >> >> >

Re: Attached profiled data but need help understanding it

2017-02-27 Thread Kant Kodali
andra cluster) and in my case all writes are LWT. > > The current throughput is 1500 writes/sec where each write is about 1KB. > How can I tune something for a higher throughput? Any pointers or > suggestions would help. > > Thanks much, > kant > > > > >

Re: Attached profiled data but need help understanding it

2017-02-27 Thread Romain Hardouin
The current throughput is 1500 writes/sec where each write is about 1KB. How can I tune something for a higher throughput? Any pointers or suggestions would help. Thanks much,kant

Re: Attached profiled data but need help understanding it

2017-02-27 Thread Kant Kodali
ch thread? > > I have a insert heavy workload (almost like an ingest running against > cassandra cluster) and in my case all writes are LWT. > > The current throughput is 1500 writes/sec where each write is about 1KB. > How can I tune something for a higher throughput? Any pointers or > suggestions would help. > > Thanks much, > kant >

Re: Help with cassandra triggers

2017-01-17 Thread Jonathan Haddad
ggers for a local write. But i do not see the trigger > on sync. > Could anyone please help me out with this ? > > thanks > Suraj >

Help with cassandra triggers

2017-01-17 Thread suraj pasuparthy
for a local write. But I do not see the trigger on sync. Could anyone please help me out with this? thanks Suraj

Re: Help

2017-01-15 Thread Jonathan Haddad
es were failing, are in different DC. Those nodes > do not have any load. > > Gossips shows everything is up. I already set write timeout to 60 sec, > but no help. > > Can anyone encounter this scenario ? Network side everything is fine. > > Cassandra version is 2.1.13 > > -- > *Regards,* > *Anshu * > > > > > > -- > *Regards,* > *Anshu * > > >

Re: Help

2017-01-15 Thread Anshu Vajpayee
cy requirements. >> >> The nodes for which writes were failing, are in different DC. Those >> nodes do not have any load. >> >> Gossips shows everything is up. I already set write timeout to 60 sec, >> but no help. >> >> Can anyone encounter this

Re: Help

2017-01-14 Thread Aleksandr Ivanov
r meeting > consistency requirements. > > The nodes for which writes were failing, are in different DC. Those nodes > do not have any load. > > Gossips shows everything is up. I already set write timeout to 60 sec, > but no help. > > Can anyone encounter this scenar

Re: Help

2017-01-09 Thread Chris Lohfink
ing is up. I already set write timeout to 60 sec, > but no help. > > Can anyone encounter this scenario ? Network side everything is fine. > > Cassandra version is 2.1.13 > > -- > *Regards,* > *Anshu * > > >

Re: Help

2017-01-09 Thread Edward Capriolo
sistency requirements. > > The nodes for which writes were failing, are in different DC. Those nodes > do not have any load. > > Gossips shows everything is up. I already set write timeout to 60 sec, > but no help. > > Can anyone encounter this scenario ? Network side everyt

Help

2017-01-08 Thread Anshu Vajpayee
load. Gossips shows everything is up. I already set write timeout to 60 sec, but no help. Can anyone encounter this scenario ? Network side everything is fine. Cassandra version is 2.1.13 -- *Regards,* *Anshu *

Re: Schema help required

2016-12-18 Thread Sagar Jambhulkar
Thanks Alain for the help. I will give these options a try. On Dec 18, 2016 10:01 PM, "Alain RODRIGUEZ" <arodr...@gmail.com> wrote: > Hi Sagar, > > >> But this is a known anti pattern to not use Cassandra as a queue causing >> tombstones etc. >> B

Re: Schema help required

2016-12-18 Thread Alain RODRIGUEZ
d, including a talk this year at the summit from Jeff, who contributed TWCS to Apache Cassandra: https://www.youtube.com/watch?v=PWtekUWCIaw. Also, using time buckets in the partition key could help make sure tombstones will be correctly removed and are not being scanned when requesting new
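
As an illustration of the time-bucket idea mentioned here (not the poster's actual schema; every name and value below is hypothetical), adding a coarse time component to the partition key keeps partitions bounded and lets whole TWCS windows expire together:

    CREATE TABLE recon.messages (
        source  text,
        day     date,       -- time bucket in the partition key
        msg_id  timeuuid,
        payload text,
        PRIMARY KEY ((source, day), msg_id)
    ) WITH compaction = {
          'class': 'TimeWindowCompactionStrategy',
          'compaction_window_unit': 'DAYS',
          'compaction_window_size': 1
      }
      AND default_time_to_live = 604800;  -- 7 days, assumption

    -- read one day's messages for one source
    SELECT * FROM recon.messages WHERE source = ? AND day = ?;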

Schema help required

2016-12-17 Thread Sagar Jambhulkar
Hi, Needed a suggestion for a schema query. I want to build a reconciliation process using Cassandra. Basically, two or more systems send messages to a reconciliation process. The reconciliation process first does a level-one match of ids and then does a complete comparison of the messages. The best I could

Re: ITrigger - Help

2016-11-11 Thread siddharth verma
h verma < > sidd.verma29.l...@gmail.com> wrote: > >> Hi Sathish, >> You could look into, Change Data Capture (CDC) ( >> https://issues.apache.org/jira/browse/CASSANDRA-8844 . >> It might help you for some of your requirements. >> >> Regards >> Si

Re: ITrigger - Help

2016-11-11 Thread Jonathan Haddad
Change Data Capture (CDC) ( > https://issues.apache.org/jira/browse/CASSANDRA-8844 . > It might help you for some of your requirements. > > Regards > Siddharth Verma > > On Fri, Nov 11, 2016 at 11:34 PM, Jonathan Haddad <j...@jonhaddad.com> > wrote: > > cqlsh uses the P

Re: ITrigger - Help

2016-11-11 Thread sat
Data Capture (CDC) ( > https://issues.apache.org/jira/browse/CASSANDRA-8844 . > It might help you for some of your requirements. > > Regards > Siddharth Verma > > On Fri, Nov 11, 2016 at 11:34 PM, Jonathan Haddad <j...@jonhaddad.com> > wrote: > >> cqlsh uses th

Re: ITrigger - Help

2016-11-11 Thread sat
Hi Jon, Thanks for your prompt answer. Thanks A.SathishKumar On Fri, Nov 11, 2016 at 10:04 AM, Jonathan Haddad wrote: > cqlsh uses the Python driver, I don't see how there would be any way to > differentiate where the request came from unless you stuck an extra field > in

Re: ITrigger - Help

2016-11-11 Thread siddharth verma
Hi Sathish, You could look into Change Data Capture (CDC) (https://issues.apache.org/jira/browse/CASSANDRA-8844). It might help you for some of your requirements. Regards Siddharth Verma On Fri, Nov 11, 2016 at 11:34 PM, Jonathan Haddad <j...@jonhaddad.com> wrote: > cqlsh uses t
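
CDC, as suggested here, is switched on per table with a CQL table property (available since Cassandra 3.8); a minimal sketch with a hypothetical table name:

    ALTER TABLE my_ks.my_table WITH cdc = true;

Once enabled, flushed commit log segments containing that table's mutations are retained for a consumer to process; Cassandra itself does not push the changes anywhere.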
