Re: Internal Handling of Map Updates

2016-05-25 Thread kurt Greaves

Re: Interesting use case

2016-06-10 Thread kurt Greaves
Sorry, I did mean larger number of rows per partition. On 9 June 2016 at 10:12, John Thomas <jthom...@gmail.com> wrote: > The example I gave was for when N=1, if we need to save more values I > planned to just add more columns. > > On Thu, Jun 9, 2016 at 12:51 A

Re: Interesting use case

2016-06-10 Thread kurt Greaves
Whoops, was obviously tired; what I said clearly doesn't make sense. On 10 June 2016 at 14:52, kurt Greaves <k...@instaclustr.com> wrote: > Sorry, I did mean larger number of rows per partition. > On 9 June 2016 at 10:12, John Thomas <jthom...@gmail.com> wrote:

Re: Streaming from 1 node only when adding a new DC

2016-06-14 Thread kurt Greaves
the node being replaced, but when rebuilding a new DC, it should > probably select sources "randomly" (rather than always selecting the same > source for a specific range). > What do you think ? > > Best Regards, > Fabien > -- Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: Internal Handling of Map Updates

2016-06-01 Thread kurt Greaves

Re: Efficiently filtering results directly in CS

2016-04-08 Thread kurt Greaves

Re: Increasing replication factor and repair doesn't seem to work

2016-05-24 Thread kurt Greaves
> State=Normal/Leaving/Joining/Moving
> --  Address      Load    Tokens  Owns (effective)  Host ID                               Rack
> UN  10.142.0.14  6.4 GB  256     100.0%            c3a5c39d-e1c9-4116-903d-b6d1b23fb652  default

Re: Increasing replication factor and repair doesn't seem to work

2016-05-23 Thread kurt Greaves
> ...how to get the data correctly synced without decommissioning the node and re-adding it.
-- Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: cqlsh problem

2016-05-09 Thread kurt Greaves
> vishwas.gu...@snapdeal.com: Have you started the Cassandra service? sh cassandra
> Alain RODRIGUEZ <arodr...@gmail.com>: Hi, did you try with the address of the node rather than 127.0.0.1? Is the transport protocol used by cqlsh (not sure if it is thrift or binary - native in 2.1) active? What is the "nodetool info" output? C*heers, Alain Rodriguez - al...@thelastpickle.com, France. The Last Pickle - Apache Cassandra Consulting, http://www.thelastpickle.com
> joseph gao <gaojf.bok...@gmail.com>: hi all, cassandra version 2.1.7. When I use cqlsh to connect cassandra, something is wrong: Connection error: ('Unable to connect to any servers', {'127.0.0.1': OperationTimedOut('errors=None, last_host=None')}). This happens lots of times, but sometimes it works just fine. Anybody know why?
-- Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: nodetool repair with -pr and -dc

2016-08-11 Thread kurt Greaves
> ...which leads to the following command:
> ./src/range_repair.py -k [keyspace] -c [columnfamily name] -v -H localhost -p -D *DC1*
> but it looks like the merkle tree is being calculated on nodes which are part of the other *DC2*. Why does this happen? I thought it should only look at the nodes in the local cluster. However, the nodetool *-pr* option cannot be used with *-local* according to the docs at https://docs.datastax.com/en/cassandra/2.0/cassandra/tools/toolsRepair.html, so I may be missing something. Can someone help explain this please? Thanks, anishek
-- Kurt Greaves k...@instaclustr.com www.instaclustr.com
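
A minimal illustration of the distinction raised above, assuming a placeholder keyspace my_ks and the 2.0-era restriction that -pr and -local cannot be combined:

    # -pr repairs only this node's primary ranges, so it must be run on every node in
    # every DC to cover the full token range.
    nodetool repair -pr my_ks
    # -local restricts repair traffic to the local DC, but cannot be combined with -pr here.
    nodetool repair -local my_ks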

Re: [Multi DC] Old Data Not syncing from Existing cluster to new Cluster

2017-01-30 Thread kurt greaves
On 30 January 2017 at 04:43, Abhishek Kumar Maheshwari <abhishek.maheshw...@timesinternet.in> wrote: > But how will I tell the rebuild command the source DC if I have more than 2 DCs? You will need to rebuild the new DC from at least one DC for every keyspace present on the new DC and the old DCs.
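
A minimal sketch of the rebuild described above; "DC1" is a placeholder for an existing data centre that holds replicas of the relevant keyspaces:

    # Run on every node in the new DC; repeat with a different source DC if a single
    # DC does not contain all of the keyspaces being rebuilt.
    nodetool rebuild DC1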

Re: Why does CockroachDB github website say Cassandra has no Availability on datacenter failure?

2017-02-07 Thread kurt greaves
Marketing never lies. Ever

Re: UnknownColumnFamilyException after removing all Cassandra data

2017-02-07 Thread kurt greaves
The node is trying to communicate with another node, potentially streaming data, and is receiving files/data for an "unknown column family". That is, it doesn't know about the CF with the id e36415b6-95a7-368c-9ac0-ae0ac774863d. If you deleted some columnfamilies but not all the system keyspace
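
One hedged way to check whether that id maps to a table the node actually knows about (this assumes 3.0+ schema tables; the grep pattern is just the id prefix from the error):

    cqlsh -e "SELECT keyspace_name, table_name, id FROM system_schema.tables;" | grep e36415b6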

Re: [Multi DC] Old Data Not syncing from Existing cluster to new Cluster

2017-01-27 Thread kurt greaves
What Dikang said, in your original email you are passing -dc to rebuild. This is incorrect. Simply run nodetool rebuild from each of the nodes in the new dc. On 28 Jan 2017 07:50, "Dikang Gu" wrote: > Have you run 'nodetool rebuild dc_india' on the new nodes? > > On Tue,

Re: Re : Decommissioned nodes show as DOWN in Cassandra versions 2.1.12 - 2.1.16

2017-01-27 Thread kurt greaves
we've seen this issue on a few clusters, including on 2.1.7 and 2.1.8. pretty sure it is an issue in gossip that's known about. in later versions it seems to be fixed. On 24 Jan 2017 06:09, "sai krishnam raju potturi" wrote: > In the Cassandra versions 2.1.11 - 2.1.16,

Re: Time series data model and tombstones

2017-01-29 Thread kurt greaves
Your partitioning key is text. If you have multiple entries per id you are likely hitting older cells that have expired. Descending only affects how the data is stored on disk, if you have to read the whole partition to find whichever time you are querying for you could potentially hit tombstones

Re: lots of connection timeouts around same time every day

2017-02-17 Thread kurt greaves
typically when I've seen that gossip issue it requires more than just restarting the affected node to fix. if you're not getting query related errors in the server log you should start looking at what is being queried. are the queries that time out each day the same?

Re: Count(*) is not working

2017-02-17 Thread kurt greaves
Really... well, that's good to know. It still almost never works though. I guess every time I've seen it, it must have timed out due to tombstones. On 17 Feb. 2017 22:06, "Sylvain Lebresne" <sylv...@datastax.com> wrote: On Fri, Feb 17, 2017 at 11:54 AM, kurt greaves <k...@ins

Re: High disk io read load

2017-02-17 Thread kurt greaves
what's the Owns % for the relevant keyspace from nodetool status?

Re: Which compaction strategy when modeling a dumb set

2017-02-24 Thread kurt greaves
Probably LCS although what you're implying (read before write) is an anti-pattern in Cassandra. Something like this is a good indicator that you should review your model.

Re: Read exceptions after upgrading to 3.0.10

2017-02-24 Thread kurt greaves
That stacktrace generally implies your clients are resetting connections. The reconnection policy probably handles the issue automatically, but it's worth investigating. I don't think it normally causes statuslogger output, though; what were the log messages prior to the stacktrace? On 24 February

Re: High disk io read load

2017-02-24 Thread kurt greaves
How many CFs are we talking about here? Also, did the script also kick off the scrubs or was this purely from changing the schemas?

Re: Why does Cassandra recommends Oracle JVM instead of OpenJDK?

2017-02-13 Thread kurt greaves
are people actually trying to imply that Google is less evil than oracle? what is this shill fest On 12 Feb. 2017 8:24 am, "Kant Kodali" wrote: Saw this one today... https://news.ycombinator.com/item?id=13624062 On Tue, Jan 3, 2017 at 6:27 AM, Eric Evans

Re: Count(*) is not working

2017-02-17 Thread kurt greaves
If you want a reliable count, you should use Spark. Performing a count(*) will inevitably fail unless you make your server read timeouts and tombstone fail thresholds ridiculous. On 17 Feb. 2017 04:34, "Jan" wrote: > Hi, > > could you post the output of nodetool cfstats for the

Re: lots of connection timeouts around same time every day

2017-02-17 Thread kurt greaves
have you tried a rolling restart of the entire DC?

Re: Unreliable JMX metrics

2017-01-19 Thread kurt Greaves
Yes. You likely will still be able to see the nodes in nodetool gossipinfo

Re: Tombstoned error and then OOM

2016-10-04 Thread kurt Greaves

Re: Tombstoned error and then OOM

2016-10-06 Thread kurt Greaves
> ...st_cf where status = 0; Here the status is integer column which is indexed. -- IB

Read Repairs and CL

2016-08-27 Thread kurt Greaves
s based off the CL of the query. However I don't think that makes sense at other CLs. Anyway, I just want to clarify what CL the read for the read repair occurs at for cases where the overall query CL is not ALL. Thanks, Kurt. -- Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: How to confirm TWCS is fully in-place

2016-11-09 Thread kurt Greaves
What compaction strategy are you migrating from? If you're migrating from STCS it's likely that when switching to TWCS no extra compactions are necessary, as the SSTables will be put into their respective windows but there won't be enough candidates for compaction within a window. Kurt Greaves k

Re: Introducing Cassandra 3.7 LTS

2016-10-19 Thread kurt Greaves
it is revisited. There has certainly been discussion regarding the tick-tock cadence, and it seems safe to say it will change. There hasn't been any official announcement yet, however. Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: non incremental repairs with cassandra 2.2+

2016-10-19 Thread kurt Greaves
anticompactions. Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: non incremental repairs with cassandra 2.2+

2016-10-20 Thread kurt Greaves
Welp, that's good but wasn't apparent in the codebase :S. Kurt Greaves k...@instaclustr.com www.instaclustr.com On 20 October 2016 at 05:02, Alexander Dejanovski <a...@thelastpickle.com> wrote: > Hi Kurt, > > we're not actually. > Reaper performs full repair by subrange bu

Re: time series data model

2016-10-20 Thread kurt Greaves
Seems workable. I assume you're using DTCS/TWCS, and aligning the time windows to your day bucket. (If not you should do that) Kurt Greaves k...@instaclustr.com www.instaclustr.com On 20 October 2016 at 07:29, wxn...@zjqunshuo.com <wxn...@zjqunshuo.com> wrote: > Hi All, > I'm trying

Re: non incremental repairs with cassandra 2.2+

2016-10-20 Thread kurt Greaves
Probably because I was looking at the wrong version of the codebase :p

Re: time series data model

2016-10-20 Thread kurt Greaves
Ah didn't pick up on that but looks like he's storing JSON within position. Is there any strong reason for this or as Vladimir mentioned can you store the fields under "position" in separate columns? Kurt Greaves k...@instaclustr.com www.instaclustr.com On 20 October 2016 at 08:17

Re: Question about compaction strategy changes

2016-10-23 Thread kurt Greaves
your TTL passes your read queries won't benefit from the smaller window size. Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: Question about compaction strategy changes

2016-10-23 Thread kurt Greaves
More compactions meaning "actual number of compaction tasks". A compaction task generally operates on many SSTables (how many depends on the chosen compaction strategy). The number of pending tasks does not line up with the number of SSTables that will be compacted. 1 task may compact many
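
A hedged sketch of how to compare the two figures distinguished above; keyspace/table names are placeholders:

    nodetool compactionstats                  # pending compaction tasks and any running compactions
    nodetool cfstats my_keyspace.my_table     # per-table "SSTable count"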

Re: Cassandra installation best practices

2016-10-18 Thread kurt Greaves
Mehdi, Nothing as detailed as Oracle's OFA currently exists. You can probably also find some useful information here: https://docs.datastax.com/en/landing_page/doc/landing_page/planning/planningAbout.html Kurt Greaves k...@instaclustr.com www.instaclustr.com On 18 October 2016 at 07:38, Mehdi

Re: time series data model

2016-10-24 Thread kurt Greaves
entioned? Yes. Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: Cluster Maintenance Mishap

2016-10-24 Thread kurt Greaves
...these nodes were configured as seed nodes, which means they wouldn't have bootstrapped. In this case it shouldn't have been an issue after you fixed up the data directories. Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: CommitLogReadHandler$CommitLogReadException: Unexpected error deserializing mutation

2016-10-24 Thread kurt Greaves
those new features it's probably your best bet (until 4.0), however note that it's still 3.7 and likely less stable than the latest 3.0.x releases. https://github.com/instaclustr/cassandra Read the README at the repo for more info. Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: Doing an upsert into a collection?

2016-10-24 Thread kurt Greaves
...you've specified a list of (frozen) ratings, so ratings.rating and ratings.user don't make sense. Collection types can't be part of the primary key, so updating as you've mentioned above won't really be possible. Kurt Greaves k...@instaclustr.com www.instaclustr.com
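
One hedged alternative model, purely illustrative since the original schema isn't shown in the thread: keying ratings by user inside a map (rather than a list of frozen values) makes a per-user upsert possible. Keyspace, table and column names below are hypothetical.

    cqlsh -e "CREATE TABLE my_ks.movies (movie_id uuid PRIMARY KEY, ratings map<text, int>);"
    cqlsh -e "UPDATE my_ks.movies SET ratings['alice'] = 5 WHERE movie_id = 3fa85f64-5717-4562-b3fc-2c963f66afa6;"

With a map, setting ratings['alice'] overwrites that user's previous rating without a read before the write.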

Re: Thousands of SSTables generated in only one node

2016-10-25 Thread kurt Greaves
+1 definitely upgrade to 2.1.16. You shouldn't see any compatibility issues client side when upgrading from 2.1.0. If scrub removed 500 SSTables that's quite worrying. If the mass SSTables are causing issues you can disconnect the node from the cluster using: nodetool disablegossip && nodetool

Re: Question about compaction strategy changes

2016-10-24 Thread kurt Greaves
TL'd data mixed in will result in SSTables that don't expire because some small portion may be live data. Plus mixed with the small number of compaction candidates, it could take a long time for these types of SSTables to be compacted (possibly never). Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: How to throttle up/down compactions without a restart

2016-10-20 Thread kurt Greaves
much disk bandwidth. If you're needing to alter this in peak periods you may be starting to overload your nodes with writes, or potentially something else is not ideal like memtables flushing too frequently. Kurt Greaves k...@instaclustr.com www.instaclustr.com On 21 October 2016 at 04:41, Tho

Re: Cluster Maintenance Mishap

2016-10-20 Thread kurt Greaves
from other nodes in the cluster. If you did, they wouldn't have assumed the token ranges and you shouldn't have any issues. You can just copy the original data back (including system tables) and they should assume their own ranges again, and then you can repair to fix any missing replicas. Ku

Re: lots of DigestMismatchException in cassandra3

2016-11-22 Thread kurt Greaves
> ...insert, update, delete on the same record at the same time, is it a possibility? -- Regards, Adeline

Re: Is it *safe* to issue multiple replace-node at the same time?

2016-11-21 Thread kurt Greaves
is assuming RF<=# of racks as well (and NTS). Kurt Greaves www.instaclustr.com

Re: lots of DigestMismatchException in cassandra3

2016-11-22 Thread kurt Greaves
deta/not all replicas receiving all writes. You should run a repair and see if the number of mismatches is reduced. Kurt Greaves k...@instaclustr.com www.instaclustr.com On 22 November 2016 at 06:30, <adeline@thomsonreuters.com> wrote: > Hi Kurt, > > Thank you for

RE: lots of DigestMismatchException in cassandra3

2016-11-21 Thread kurt Greaves
That's a debug message. From the sound of it, it's triggered on read where there is a digest mismatch between replicas. As to whether it's normal, well that depends on your cluster. Are the nodes reporting lots of dropped mutations and are you writing at

Re: lots of DigestMismatchException in cassandra3

2016-11-21 Thread kurt Greaves
Actually, just saw the error message in those logs and what you're looking at is probably https://issues.apache.org/jira/browse/CASSANDRA-12694 Kurt Greaves k...@instaclustr.com www.instaclustr.com On 21 November 2016 at 08:59, kurt Greaves <k...@instaclustr.com> wrote: > That'

Re: Incremental repairs leading to unrepaired data

2016-10-31 Thread kurt Greaves
Blowing out to 1k SSTables seems a bit full on. What args are you passing to repair? Kurt Greaves k...@instaclustr.com www.instaclustr.com On 31 October 2016 at 09:49, Stefano Ortolani <ostef...@gmail.com> wrote: > I've collected some more data-points, and I still see dropped &g

Re: cluster creating problem due to same cluster name

2016-10-26 Thread kurt Greaves
github, not JIRA... Kurt Greaves k...@instaclustr.com www.instaclustr.com On 26 October 2016 at 09:36, kurt Greaves <k...@instaclustr.com> wrote: > you probably should raise this as an issue on their JIRA. (I assume you're > using TLP's fork: https://github.com/thelastpickle/cass

Re: cluster creating problem due to same cluster name

2016-10-26 Thread kurt Greaves
you probably should raise this as an issue on their JIRA. (I assume you're using TLP's fork: https://github.com/thelastpickle/cassandra-reaper) Kurt Greaves k...@instaclustr.com www.instaclustr.com On 26 October 2016 at 06:51, Abhishek Aggarwal < abhishek.aggarwa...@snapdeal.com>

Re: Rebuilding with vnodes

2016-11-02 Thread kurt Greaves
are your heap settings and memtable_flush_writers? Kurt Greaves k...@instaclustr.com www.instaclustr.com On 2 November 2016 at 19:59, Anubhav Kale <anubhav.k...@microsoft.com> wrote: > Hello, > > > > I am trying to rebuild a new Data Center with 50 Nodes, and expect 1 TB / >

Re: Backup restore with a different name

2016-11-03 Thread kurt Greaves
1. Create the new table with a different name (in the same or a different keyspace).
2. Rename all the snapshotted SSTables to match the *new* table name.
3. Copy SSTables into the new table directory.
4. nodetool refresh or restart Cassandra.
Kurt Greaves k...@instaclustr.com www.instaclustr.com
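
A hedged sketch of those steps; keyspace, table, column list and paths are placeholders (the new table just has to match the old table's schema). On versions whose SSTable filenames embed the keyspace-table prefix, rename the copied files to the new prefix first.

    cqlsh -e "CREATE TABLE my_ks.new_table (id uuid PRIMARY KEY, payload text);"
    cp /backups/my_ks/old_table/snapshots/snap1/* /var/lib/cassandra/data/my_ks/new_table*/
    nodetool refresh my_ks new_table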

Re: Incremental repairs leading to unrepaired data

2016-11-01 Thread kurt Greaves
Can't say I have too many ideas. If load is low during the repair it shouldn't be happening. Your disks aren't overutilised correct? No other processes writing loads of data to them?

Re: Repair in Multi Datacenter - Should you use -dc Datacenter repair or repair with -pr

2016-10-13 Thread kurt Greaves
> ...data center? Do we need to run repair on each node in that case or will it repair all nodes within the datacenter?
> 2. Is running repair with -pr across all nodes required, if we perform step 1 every night?
> 3. Is cross data center repair required and if so what's the best option?
> Thanks, Leena
-- Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: are there any free Cassandra -> ElasticSearch connector / plugin ?

2016-10-13 Thread kurt Greaves
> ...Vincent Gromakowski <vincent.gromakow...@gmail.com> wrote: Elassandra, https://github.com/vroyer/elassandra
> On 14 Oct. 2016 12:02 AM, "Eric Ho" <e...@analyticsmd.com> wrote: I don't want to change my code to write into C* and then to ES. So, I'm looking for some sort of a sync tool that will sync my C* table into ES, and it should be smart enough to avoid duplicates or gaps. Is there such a tool / plugin? I'm using stock Apache Cassandra 3.7. I know that some premium Cassandra has ES built in or integrated but I can't afford premium right now... Thanks. -eric ho
-- Kurt Greaves k...@instaclustr.com www.instaclustr.com

Re: Cassandra Upgrade

2016-11-29 Thread kurt Greaves
Why would you remove all the data? That doesn't sound like a good idea. Just upgrade the OS and then go through the normal upgrade flow of starting C* with the next version and upgrading sstables. Also, *you will need to go from 2.0.14 -> 2.1.16 -> 2.2.8* and upgrade sstables at each stage of the
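
A hedged sketch of a single hop in that upgrade path, run node by node (package installation is distribution-specific and omitted here):

    nodetool drain            # flush memtables and stop accepting writes on this node
    # stop Cassandra, install the next version in the 2.0.14 -> 2.1.16 -> 2.2.8 path, start it
    nodetool upgradesstables  # rewrite SSTables into the new on-disk format before the next hop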

Re: Which version is stable enough for production environment?

2016-11-30 Thread kurt Greaves
Yes Benjamin, no one said it wouldn't. We're actively backporting things as we get time, if you find something you'd like backported raise an issue and let us know. We're well aware of the issues affecting MVs, but they haven't really been solved anywhere yet. On 30 November 2016 at 07:54,

Re: Cassandra 2.x Stability

2016-11-30 Thread kurt Greaves
Latest release in 2.2. 2.1 is borderline EOL and from my experience 2.2 is quite stable and has some handy bugfixes that didn't actually make it into 2.1 On 30 November 2016 at 10:41, Shalom Sagges wrote: > Hi Everyone, > > I'm about to upgrade our 2.0.14 version to a

Re: Cassandra cluster performance

2017-01-05 Thread kurt Greaves
You should try switching to async writes and then perform the test. Sync writes won't make much difference from a single node, but with multiple nodes there should be a massive difference. On 4 Jan 2017 10:05, "Branislav Janosik -T (bjanosik - AAP3 INC at Cisco)" <bjano...@cisco.com> wrote: > Hi, > >

Re: How to change Replication Strategy and RF

2016-12-29 Thread kurt Greaves
If you're already using the cluster in production and require no downtime you should perform a datacenter migration first to change the RF to 3. Rough process would be as follows: 1. Change keyspace to NetworkTopologyStrategy with RF=1. You shouldn't increase RF here as you will receive
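
A hedged sketch of step 1 only; the keyspace and DC names are placeholders, and the RF stays at 1 at this point as stated above:

    cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 1};"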

Re: Join_ring=false Use Cases

2016-12-20 Thread kurt Greaves
It seems that you're correct in saying that writes don't propagate to a node that has join_ring set to false, so I'd say this is a flaw. In reality I can't see many actual use cases in regards to node outages with the current implementation. The main usage I'd think would be to have additional

Re: Incremental repair for the first time

2016-12-20 Thread kurt Greaves
No workarounds, your best/only option is to upgrade (plus you get the benefit of loads of other bug fixes). On 16 December 2016 at 21:58, Kathiresan S wrote: > Thank you! > > Is any work around available for this version? > > Thanks, > Kathir > > > On Friday,

Re: iostat -like tool to parse 'nodetool cfstats'

2016-12-20 Thread kurt Greaves
Anything in cfstats you should be able to retrieve through the metrics MBeans. See https://cassandra.apache.org/doc/latest/operating/metrics.html On 20 December 2016 at 23:04, Richard L. Burton III wrote: > I haven't seen anything like that myself. It would be nice to have >

Re: Cassandra cluster performance

2016-12-23 Thread kurt Greaves
Branislav, are you doing async writes?

Re: Very odd & inconsistent results from SASI query

2017-03-20 Thread kurt greaves
As secondary indexes are stored individually on each node what you're suggesting sounds exactly like a consistency issue. the fact that you read 0 cells on one query implies the node that got the query did not have any data for the row. The reason you would sometimes see different behaviours is

Re: Internal Security - Authentication & Authorization

2017-03-15 Thread kurt greaves
Jacob, seems you are on the right track, however my understanding is that only the user that was auth'd has their permissions/roles/creds cached. Also, Cassandra will query at QUORUM for the "cassandra" user, and at LOCAL_ONE for *all* other users. This is the same for creating users/roles.

Re: changing compaction strategy

2017-03-15 Thread kurt greaves
The rogue pending task is likely a non-issue. If your jmx command went through without errors and you got the log message you can assume it worked. It won't show in the schema unless you run the ALTER statement which affects the whole cluster. If you were switching from STCS then you wouldn't

Re: Change the IP of a live node

2017-03-15 Thread kurt greaves
Cassandra uses the IP address for more or less everything. It's possible to change it through some hackery, however it's probably not a great idea. The node's system tables will still reference the old IP, which is likely your problem here. On 14 March 2017 at 18:58, George Sigletos

Re: Streaming errors during bootstrap

2017-04-20 Thread kurt greaves
Did this error persist? What was the expected outcome? Did you drop this CF and now expect it to no longer exist? On 12 April 2017 at 01:26, Jai Bheemsen Rao Dhanwada wrote: > Hello, > > I am seeing streaming errors while adding new nodes(in the same DC) to the > cluster.

Re: Cassandra isn't compacting old files

2017-08-01 Thread kurt greaves
Seeing as there aren't even 100 SSTables in L2, LCS should be gradually trying to compact L3 with L2. You could search the logs for "Adding high-level (L3)" to check if this is happening.
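
A hedged one-liner for that log search; the log path is a placeholder for wherever your system.log lives:

    grep "Adding high-level (L3)" /var/log/cassandra/system.log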

Re: UndeclaredThrowableException, C* 3.11

2017-08-02 Thread kurt greaves
If the repair command failed, repair also failed. Regarding % repaired, no it's unlikely you will see 100% repaired after a single repair. Maybe after a few consecutive repairs with no data load you might get it to 100%.

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread kurt greaves
You can't just add a new DC and then tell their clients to connect to the new one (after migrating all the data to it obv.)? If you can't achieve that you should probably use GossipingPropertyFileSnitch. Your best plan is to have the desired RF/redundancy from the start. Changing RF in production

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread kurt greaves
If you want to change RF on a live system your best bet is through DC migration (add another DC with the desired # of nodes and RF), and migrate your clients to use that DC. There is a way to boot a node and not join the ring, however I don't think it will work for new nodes (have not confirmed),

Re: Data Loss irreparabley so

2017-08-02 Thread kurt greaves
You should run repairs every GC_GRACE_SECONDS. If a node is overloaded/goes down, you should run repairs. LOCAL_QUORUM will somewhat maintain consistency within a DC, but certainly doesn't mean you can get away without running repairs. You need to run repairs even if you are using QUORUM or ONE.​
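
A hedged sketch of the routine repair described above; the keyspace name is a placeholder and -full assumes 2.2+/3.x where incremental repair is the default:

    # Run on each node within every gc_grace_seconds window, or orchestrate it with a
    # tool such as Reaper.
    nodetool repair -full my_ks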

Re: Bootstrapping a new Node with Consistency=ONE

2017-08-02 Thread kurt greaves
only in this one case might that work (RF==N)

Re: Is it possible to delete system_auth keyspace.

2017-08-01 Thread kurt greaves
You should be able to create it yourself prior to enabling auth without issues. Alternatively you could just add an extra node with auth on, or switch one node to have auth on and then change the RF.
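
A hedged sketch of the RF change for system_auth; the DC name and RF are placeholders, and the default cassandra/cassandra superuser is assumed only for illustration:

    cqlsh -u cassandra -p cassandra -e "ALTER KEYSPACE system_auth WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};"
    nodetool repair -full system_auth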

Re: Migrate from DSE (Datastax) to Apache Cassandra

2017-08-15 Thread kurt greaves
Haven't done it for 5.1 but went smoothly for earlier versions. If you're not using any of the additional features of DSE, it should be OK. Just change any custom replication strategies before migrating and also make sure your yaml options are compatible.

Re: Attempted to write commit log entry for unrecognized table

2017-08-15 Thread kurt greaves
what does nodetool describecluster show? stab in the dark but you could try nodetool resetlocalschema or a rolling restart of the cluster if it's schema issues.

Re: rebuild constantly fails, 3.11

2017-08-11 Thread kurt greaves
cc'ing user back in... On 12 Aug. 2017 01:55, "kurt greaves" <k...@instaclustr.com> wrote: > How much memory do these machines have? Typically we've found that G1 > isn't worth it until you get to around 24G heaps, and even at that it's not > really better than CMS. You

Re: Dropping down replication factor

2017-08-13 Thread kurt greaves
On 14 Aug. 2017 00:59, "Brian Spindler" wrote: Do you think with the setup I've described I'd be ok doing that now to recover this node? The node died trying to run the scrub; I've restarted it but I'm not sure it's going to get past a scrub/repair, this is why I

Re: Unbalanced cluster

2017-07-10 Thread kurt greaves
The reason for the default of 256 vnodes is that at that many tokens the random distribution of tokens is enough to balance out each node's token allocation almost evenly. Any less and some nodes will get far more unbalanced, as Avi has shown. In 3.0 there is a new token allocating algorithm
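
A hedged cassandra.yaml excerpt for the 3.0+ allocation algorithm referred to above; the vnode count and keyspace name are placeholders, and the options must be set before a new node bootstraps:

    num_tokens: 16
    allocate_tokens_for_keyspace: my_ks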

Re: adding nodes to a cluster and changing rf

2017-07-14 Thread kurt greaves
Increasing RF will result in nodes that previously didn't have a replica of the data now being responsible for it. This means that a repair is required after increasing the RF. Until the repair completes you will suffer from inconsistencies in data. For example, in a 3 node cluster with RF 2,
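
A hedged sketch of the sequence described above; keyspace, DC name and RF are placeholders:

    cqlsh -e "ALTER KEYSPACE my_ks WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};"
    nodetool repair -full -pr my_ks    # run on every node so the newly responsible replicas are populated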

Re: write time for nulls is not consistent

2017-07-18 Thread kurt greaves
Can you try "select a, writetime(b) from test.t"? I heard of an issue recently where cqlsh reports null incorrectly if you query a column twice, wondering if it extends to this case with writetime.

Re: Understanding gossip and seeds

2017-07-21 Thread kurt greaves
Haven't checked the code but pretty sure it's because it will always use the known state stored in the system tables. the seeds in the yaml are mostly for initial set up, used to discover the rest of the nodes in the ring. Once that's done there is little reason to refer to them again, unless

Re: read/write request counts and write size of each write

2017-07-25 Thread kurt greaves
You will need to use jmx to collect write/read related metrics. not aware of anything that measures write size, but if there isn't it should be easily measured on your client. there are quite a few existing solutions for monitoring Cassandra out there, you should find some easily with a quick

Re: performance penalty of add column in CQL3

2017-07-25 Thread kurt greaves
If by "offline" you mean with no reads going to the nodes, then yes, that would be a *potentially* safe time to do it, but it's still not advised. You should avoid doing any ALTERs on versions of 3 less than 3.0.14 or 3.11 if possible. Adding/dropping a column does not require a re-write of the
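
For illustration, a hedged example of the schema change being discussed; keyspace, table and column names are placeholders. It is a metadata-only change, which is why no SSTable re-write is involved:

    cqlsh -e "ALTER TABLE my_ks.my_table ADD new_col text;"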

Re: 回复: tolerate how many nodes down in the cluster

2017-07-25 Thread kurt greaves
Keep in mind that you shouldn't just enable multiple racks on an existing cluster (this will lead to massive inconsistencies). The best method is to migrate to a new DC as Brooke mentioned.

Re: read/write request counts and write size of each write

2017-07-25 Thread kurt greaves
Looks like you can collect MutationSizeHistogram for each write as well from the coordinator, in regards to write request size. See the Write request section under https://cassandra.apache.org/doc/latest/operating/metrics.html#client-request-metrics

Re: Data Loss irreparabley so

2017-07-25 Thread kurt greaves
Cassandra doesn't do any automatic repairing. It can tell if your data is inconsistent, however it's really up to you to manage consistency through repairs and choice of consistency level for queries. If you lose a node, you have to manually repair the cluster after replacing the node, but really

Re: 1 node doing compaction all the time in 6-node cluster (C* 2.2.8)

2017-07-24 Thread kurt greaves
Just to rule out a simple problem, are you using a load balancing policy?

Re: 回复: tolerate how many nodes down in the cluster

2017-07-24 Thread kurt greaves
I've never really understood why Datastax recommends against racks. In those docs they make it out to be much more difficult than it actually is to configure and manage racks. The important thing to keep in mind when using racks is that your # of racks should be equal to your RF. If you have
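
A hedged cassandra-rackdc.properties excerpt for GossipingPropertyFileSnitch, matching the "# of racks equal to RF" guideline above; the DC and rack names are placeholders, and with RF=3 each node would carry one of three rack values:

    dc=DC1
    rack=rack1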

Re: 1 node doing compaction all the time in 6-node cluster (C* 2.2.8)

2017-07-24 Thread kurt greaves
Have you checked system logs/dmesg? I'd suspect it's an instance problem too, maybe you'll see some relevant errors in those logs.

Re: 回复: 回复: tolerate how many nodes down in the cluster

2017-07-27 Thread kurt greaves
Note that if you use more racks than RF you lose some of the operational benefit. e.g: you'll still only be able to take out one rack at a time (especially if using vnodes), despite the fact that you have more racks than RF. As Jeff said this may be desirable, but really it comes down to what your

Re: Restore Snapshot

2017-06-28 Thread kurt greaves
Hm, I did recall seeing a ticket for this particular use case, which is certainly useful, I just didn't think it had been implemented yet. Turns out it's been in since 2.0.7, so you should be receiving writes with join_ring=false. If you confirm you aren't receiving writes then we have an issue.

Re: ALL range query monitors failing frequently

2017-06-28 Thread kurt greaves
I'd say that no, a range query probably isn't the best for monitoring, but it really depends on how important it is that the range you select is consistent. From those traces it does seem that the bulk of the time spent was waiting for responses from the replicas, which may indicate a network
