Secondary index tombstone limit
Hi, could you please clarify: is the 100k tombstone limit for secondary indexes per CF, per CF per node, per original sstable, or (very unlikely) per partition? Thanks! -- Oleg Krayushkin
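(If the 100k figure refers to Cassandra's tombstone_failure_threshold in cassandra.yaml, that threshold is, as far as I know, enforced per read request: reads abort once a single query has scanned that many tombstones, regardless of which CF or sstable they came from. A toy sketch of that counting logic -- this is an illustration, not Cassandra code; the names below are made up:)

```python
# Toy illustration (NOT Cassandra internals): the tombstone failure threshold
# is counted per read request as cells are scanned, resetting for each query.
TOMBSTONE_FAILURE_THRESHOLD = 100_000  # cassandra.yaml default value

class TombstoneOverwhelmingException(Exception):
    """Raised when a single read scans too many tombstones."""
    pass

def scan_read(cells):
    """Scan the cells touched by one read request.

    `None` stands in for a tombstone here. Returns (live_cells, tombstone
    count); aborts the read if the per-query threshold is exceeded.
    """
    tombstones = 0
    live = []
    for cell in cells:
        if cell is None:
            tombstones += 1
            if tombstones > TOMBSTONE_FAILURE_THRESHOLD:
                raise TombstoneOverwhelmingException(
                    f"read scanned over {TOMBSTONE_FAILURE_THRESHOLD} tombstones")
        else:
            live.append(cell)
    return live, tombstones
```

The point of the sketch: the counter starts at zero for every read, so the limit is hit by one query scanning many tombstones, not by a CF or sstable accumulating them overall.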
Issue with Unexpected exception
Hi, about a month ago I already asked about my problem here (with the subject "Error while read after upgrade from 2.2.7 to 3.0.8") and also on Stack Overflow <http://stackoverflow.com/q/39791419/2226888>. Unfortunately, I still haven't found a solution. It's the "Unexpected exception" -- maybe it's a good idea to file an issue for it? Or is it a mistake somewhere on my side? Thanks -- Oleg Krayushkin
Improving cassandra documentation
Hi, from time to time I find errors in the DataStax Cassandra docs. Is there a right & easy way to report them? Thanks. -- Oleg Krayushkin
Re: Secondary Index on Boolean column with TTL
Thanks a lot, DuyHai!

2016-10-31 19:53 GMT+03:00 DuyHai Doan <doanduy...@gmail.com>:

> Technically TTL should be handled properly. However, be careful of expired
> data turning into tombstones. For the original table it may be a tombstone
> on a skinny partition, but for the 2nd index it may be a tombstone set on a
> wide partition, and you'll start getting into trouble when reading a
> partition with a lot of them.
>
> On Mon, Oct 31, 2016 at 5:08 PM, Oleg Krayushkin <allight...@gmail.com> wrote:
>
>> Hi DuyHai, thank you.
>>
>> I got the idea of the caveat with too-low cardinality, but I'm still
>> wondering about possible trouble with putting a TTL (months) on an
>> indexed column (not a bool -- say, one with 100 different int values).
>>
>> 2016-10-31 16:33 GMT+03:00 DuyHai Doan <doanduy...@gmail.com>:
>>
>>> http://www.planetcassandra.org/blog/cassandra-native-secondary-index-deep-dive/
>>>
>>> See section E, Caveats, which applies to your boolean use-case.
>>>
>>> On Mon, Oct 31, 2016 at 2:19 PM, Oleg Krayushkin <allight...@gmail.com> wrote:
>>>
>>>> Hi,
>>>>
>>>> Is it a good approach to make a boolean column with a TTL and build a
>>>> secondary index on it? (For example, I want to get rows which need to
>>>> be updated after a certain time, but I don't want, say, to add a field
>>>> "update_date" as a clustering column or to create another table.)
>>>>
>>>> What kind of trouble could this lead me into?
>>>>
>>>> Thanks in advance for any suggestions.
>>>>
>>>> -- Oleg Krayushkin

-- Oleg Krayushkin
Re: Secondary Index on Boolean column with TTL
Hi DuyHai, thank you.

I got the idea of the caveat with too-low cardinality, but I'm still wondering about possible trouble with putting a TTL (months) on an indexed column (not a bool -- say, one with 100 different int values).

2016-10-31 16:33 GMT+03:00 DuyHai Doan <doanduy...@gmail.com>:

> http://www.planetcassandra.org/blog/cassandra-native-secondary-index-deep-dive/
>
> See section E, Caveats, which applies to your boolean use-case.
>
> On Mon, Oct 31, 2016 at 2:19 PM, Oleg Krayushkin <allight...@gmail.com> wrote:
>
>> Hi,
>>
>> Is it a good approach to make a boolean column with a TTL and build a
>> secondary index on it? (For example, I want to get rows which need to be
>> updated after a certain time, but I don't want, say, to add a field
>> "update_date" as a clustering column or to create another table.)
>>
>> What kind of trouble could this lead me into?
>>
>> Thanks in advance for any suggestions.
>>
>> -- Oleg Krayushkin

-- Oleg Krayushkin
Secondary Index on Boolean column with TTL
Hi,

Is it a good approach to make a boolean column with a TTL and build a secondary index on it? (For example, I want to get rows which need to be updated after a certain time, but I don't want, say, to add a field "update_date" as a clustering column or to create another table.)

What kind of trouble could this lead me into?

Thanks in advance for any suggestions.

-- Oleg Krayushkin
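(To make the low-cardinality caveat from this thread concrete: a native secondary index behaves roughly like a hidden table partitioned by the indexed value, so a boolean column funnels every base row into at most two index partitions. A toy model, not Cassandra internals -- `build_index` and the row shape below are made up for illustration:)

```python
from collections import defaultdict

# Toy model (NOT Cassandra internals): an index as a mapping from the indexed
# value to the list of base partition keys holding it. With a boolean column,
# every one of the base rows lands in one of only two index "partitions", so
# tombstones from expiring TTLs pile up in those same two partitions.
def build_index(rows, column):
    index = defaultdict(list)  # indexed value -> base partition keys
    for key, row in rows.items():
        index[row[column]].append(key)
    return index

# 1000 base rows, each its own skinny partition...
rows = {f"user{i}": {"needs_update": i % 2 == 0} for i in range(1000)}
# ...but only two wide index partitions: True and False.
index = build_index(rows, "needs_update")
```

When those rows expire via TTL, the tombstones are spread thinly across 1000 base partitions but concentrated in the two index partitions, which is where reads start hitting tombstone limits.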
Re: how to get the size of the particular partition key belonging to an sstable ??
Hi, I guess it's about getting a particular partition's size on disk. If so, I would like to know this too.

2016-10-28 9:09 GMT+03:00 Vladimir Yudovin <vla...@winguzone.com>:

> Hi,
>
> > size of a particular partition key
>
> Can you please elucidate this? A key can be just a number, or a string, or
> several values.
>
> Best regards, Vladimir Yudovin,
> Winguzone <https://winguzone.com?from=list> - Hosted Cloud Cassandra. Launch your cluster in minutes.
>
> On Thu, 27 Oct 2016 11:45:47 -0400, Pranay akula <pranay.akula2...@gmail.com> wrote:
>
>> how can i get the size of a particular partition key belonging to an
>> sstable ?? can we find it using index or summary or Statistics.db files ??
>> does reading the hexdump of these files help ??
>>
>> Thanks,
>> Pranay.

-- Oleg Krayushkin
strange node load decrease after nodetool repair -pr
Hi. After I've run a token-ranged repair from the node at 12.5.13.125 with

    nodetool repair -full -st ${start_tokens[i]} -et ${end_tokens[i]}

on every token range, I got this node load:

    --  Address      Load      Tokens  Owns   Rack
    UN  12.5.13.141  23.94 GB  256     32.3%  rack1
    DN  12.5.13.125  34.71 GB  256     31.8%  rack1
    UN  12.5.13.46   29.01 GB  512     58.1%  rack1
    UN  12.5.13.228  41.17 GB  512     58.5%  rack1
    UN  12.5.13.34   45.93 GB  512     59.8%  rack1
    UN  12.5.13.82   42.05 GB  512     59.4%  rack1

Then I've run a partitioner-range repair from the same node with

    nodetool repair -full -pr

And unexpectedly I got a quite different load:

    --  Address      Load      Tokens  Owns   Rack
    UN  12.5.13.141  22.93 GB  256     32.3%  rack1
    UN  12.5.13.125  30.94 GB  256     31.8%  rack1
    UN  12.5.13.46   27.38 GB  512     58.1%  rack1
    UN  12.5.13.228  39.51 GB  512     58.5%  rack1
    UN  12.5.13.34   41.58 GB  512     59.8%  rack1
    UN  12.5.13.82   33.9 GB   512     59.4%  rack1

What are the possible reasons for such a load decrease after the last repair? Maybe some compaction that was not done after the token-ranged repairs? But on 12.5.13.82 about 8 GB is gone!

Additional info:
- There were no writes to the db during these periods.
- All repair operations completed without errors, exceptions or fails.
- Before the first repair I've done sstablescrub on every node -- maybe this gives a clue?
- Cassandra version is 3.0.8

-- Oleg Krayushkin
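(For anyone reproducing the subrange-repair loop above: the `-st`/`-et` pairs need to cover the full Murmur3 token space, -2^63 .. 2^63-1, with contiguous ranges. A small hedged sketch of generating such ranges -- `split_token_range` is a hypothetical helper, not a Cassandra tool; it assumes the default Murmur3Partitioner and Cassandra's (start, end] range convention:)

```python
# Sketch: split the Murmur3 token space into contiguous subranges suitable
# for a `nodetool repair -full -st <start> -et <end>` loop. Token ranges in
# Cassandra are (start, end]: start exclusive, end inclusive, so each
# subrange's end is the next subrange's start.
MIN_TOKEN = -2**63       # Murmur3Partitioner minimum token
MAX_TOKEN = 2**63 - 1    # Murmur3Partitioner maximum token

def split_token_range(n):
    """Return n (start, end] subranges covering the full token ring."""
    total = MAX_TOKEN - MIN_TOKEN
    bounds = [MIN_TOKEN + total * i // n for i in range(n + 1)]
    return list(zip(bounds[:-1], bounds[1:]))

# e.g. four subranges for four repair invocations:
ranges = split_token_range(4)
```

In a shell script, each `(start, end)` pair would become one `nodetool repair -full -st $start -et $end` call; a real vnode setup would more likely iterate the actual token ranges reported by `nodetool ring`.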
Re: Run sstablescrub in parallel
Thanks for the response! It just seemed to me that sstables are processed so independently that there should be a workaround with sstablescrub:
StandaloneScrubber.java#L120 <https://github.com/apache/cassandra/blob/81f6c784ce967fadb6ed7f58de1328e713eaf53c/src/java/org/apache/cassandra/tools/StandaloneScrubber.java#L120>

2016-10-12 17:44 GMT+03:00 Eric Evans <john.eric.ev...@gmail.com>:

> On Wed, Oct 12, 2016 at 2:38 AM, Oleg Krayushkin <allight...@gmail.com> wrote:
> > Is there any way to run sstablescrub on one CF in parallel?
>
> I don't think so, but you can use `nodetool scrub', which has concurrency.
>
> If you need to do this "offline" you can use `nodetool
> disable{thrift,binary}` to prevent client connections and `nodetool
> disablegossip` to leave the ring.
>
> Cheers,
>
> --
> Eric Evans
> john.eric.ev...@gmail.com

-- Oleg Krayushkin
Run sstablescrub in parallel
Hello, Is there any way to run sstablescrub on one CF in parallel? Thanks! -- Oleg Krayushkin
Re: Error while read after upgrade from 2.2.7 to 3.0.8
Hi Adil, thanks for the response. Both before and after the C* upgrade we're using java driver 3.0.3, which seems to be compatible with both 2.2.7 and 3.0.8. Also, I forgot to mention that such errors occur even when there are no clients connected to the cluster.

2016-10-02 7:57 GMT+00:00 Adil <adil.cha...@gmail.com>:

> Hi,
> That means that some client closed the connection. Have you upgraded all
> clients?
>
> On 30 Sep 2016 14:25, "Oleg Krayushkin" <allight...@gmail.com> wrote:
>
>> Hi,
>>
>> Since the upgrade from Cassandra version 2.2.7 to 3.0.8 we're getting
>> the following error almost every few minutes on every node. For the node
>> at 173.170.147.120 the error in system.log would be:
>>
>> INFO [SharedPool-Worker-4] 2016-09-30 10:26:39,068 Message.java:605 - Unexpected exception during request; channel = [id: 0xfd64cd67, /173.170.147.120:50660 :> /18.4.63.191:9042]
>> java.io.IOException: Error while read(...): Connection reset by peer
>>     at io.netty.channel.epoll.Native.readAddress(Native Method) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>     at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>     at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>     at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>     at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>     at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>     at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
>>     at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]
>>
>> As far as I see, the channel in all such errors is always [id: <...>,
>> /<broadcast_address>:<some_port> :> /<listen_address>:<transport_port>].
>> Also, the broadcast_address and listen_address always belong to the
>> current node's addresses.
>>
>> What are the possible reasons for such errors and how can I fix them?
>> Any thoughts would be appreciated.
Error while read after upgrade from 2.2.7 to 3.0.8
Hi,

Since the upgrade from Cassandra version 2.2.7 to 3.0.8 we're getting the following error almost every few minutes on every node. For the node at 173.170.147.120 the error in system.log would be:

    INFO [SharedPool-Worker-4] 2016-09-30 10:26:39,068 Message.java:605 - Unexpected exception during request; channel = [id: 0xfd64cd67, /173.170.147.120:50660 :> /18.4.63.191:9042]
    java.io.IOException: Error while read(...): Connection reset by peer
        at io.netty.channel.epoll.Native.readAddress(Native Method) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.doReadBytes(EpollSocketChannel.java:675) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollSocketChannel$EpollSocketUnsafe.epollInReady(EpollSocketChannel.java:714) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:326) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:264) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:116) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at io.netty.util.concurrent.DefaultThreadFactory$DefaultRunnableDecorator.run(DefaultThreadFactory.java:137) ~[netty-all-4.0.23.Final.jar:4.0.23.Final]
        at java.lang.Thread.run(Thread.java:745) [na:1.8.0_91]

As far as I see, the channel in all such errors is always [id: <...>, /<broadcast_address>:<some_port> :> /<listen_address>:<transport_port>]. Also, the broadcast_address and listen_address always belong to the current node's addresses.

What are the possible reasons for such errors and how can I fix them? Any thoughts would be appreciated.