[jira] [Commented] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)
[ https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616101#comment-13616101 ]

Ryan McGuire commented on CASSANDRA-4860:
-----------------------------------------

Hi [~vijay2...@yahoo.com], I have not tweaked any row cache settings from the default. This is in my cassandra.yaml:

{code}
# Maximum size of the row cache in memory.
# NOTE: if you reduce the size, you may not get your hottest keys loaded on startup.
#
# Default value is 0, to disable row caching.
row_cache_size_in_mb: 0
{code}

From the description, it sounds like I should have row caching turned off, and yet this code is still being run, as evidenced by the change in performance when reverting your patch. I don't yet have a deep understanding of the features involved here, so if you have any other suggestions for things to test here, please let me know. Thanks!


Estimated Row Cache Entry size incorrect (always 24?)
-----------------------------------------------------

                Key: CASSANDRA-4860
                URL: https://issues.apache.org/jira/browse/CASSANDRA-4860
            Project: Cassandra
         Issue Type: Bug
         Components: Core
   Affects Versions: 1.1.0
           Reporter: Chris Burroughs
           Assignee: Vijay
            Fix For: 1.2.0 beta 3
        Attachments: 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch, trunk-4860-revert.patch

After running for several hours the RowCacheSize was suspiciously low (i.e. 70-something MB). I used CASSANDRA-4859 to measure the size and number of entries on a node:

In [3]: 1560504./65021
Out[3]: 24.0

In [4]: 2149464./89561
Out[4]: 24.0

In [6]: 7216096./300785
Out[6]: 23.990877204647838

That's RowCacheSize/RowCacheNumEntries. Just to prove I don't have crazy small rows: the mean size of the row *keys* in the saved cache is 67, and the compacted row mean size is 355.
No jamm errors in the log.

Config notes:
row_cache_provider: ConcurrentLinkedHashCacheProvider
row_cache_size_in_mb: 2048

Version info:
* C*: 1.1.6
* centos 2.6.32-220.13.1.el6.x86_64
* java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode)

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
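The suspicious constant in the report above can be reproduced with simple arithmetic. A minimal sketch (a hypothetical helper class, not Cassandra code): a genuinely measured cache would show varying per-entry averages, while the constant 24.0 suggests a fixed per-entry estimate is being charged regardless of row size.

```java
// Hypothetical helper, not part of Cassandra: reproduces the arithmetic in the
// report above (RowCacheSize / RowCacheNumEntries from CASSANDRA-4859 metrics).
public class CacheEntrySizeCheck {
    // average bytes charged per cached entry
    public static double avgEntrySize(long cacheSizeBytes, long numEntries) {
        return (double) cacheSizeBytes / numEntries;
    }

    public static void main(String[] args) {
        System.out.println(avgEntrySize(1560504L, 65021L));  // 24.0
        System.out.println(avgEntrySize(2149464L, 89561L));  // 24.0
        System.out.println(avgEntrySize(7216096L, 300785L)); // ~23.99
    }
}
```

With mean key sizes of 67 and mean compacted row sizes of 355, a true average of 24 bytes per entry is implausible, which is what the issue title points at.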
[jira] [Updated] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)
[ https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ryan McGuire updated CASSANDRA-4860:
------------------------------------
    Affects Version/s: 2.0
                       1.2.3
[jira] [Commented] (CASSANDRA-5391) SSL problems with inter-DC communication
[ https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616307#comment-13616307 ]

Ondřej Černoš commented on CASSANDRA-5391:
------------------------------------------

I am becoming quite sure the problem is a race condition in the Cassandra code handling decompression of sstables when these are streamed from the remote datacenter. Both traces - when snappy is used and when the java zip is used - share the same calls, see above. I switched the trace level in log4j and this is what I found:

* when 2 or more nodes live in the remote DC, cassandra fires two threads downloading the same file
* when only 1 node lives in the remote DC, only one thread downloads the file

This is how it looks in the log:

{noformat}
2013-03-28 13:44:57.301+0100 [Thread-22] [DEBUG] StreamInSession.java(104) org.apache.cassandra.streaming.StreamInSession: Adding file /path/to/cassandra/data/ks/cf/ks-cf-ib-2-Data.db to Stream Request queue
2013-03-28 13:44:57.301+0100 [Thread-22] [DEBUG] StreamInSession.java(104) org.apache.cassandra.streaming.StreamInSession: Adding file /path/to/cassandra/data/ks/cf/ks-cf-ib-1-Data.db to Stream Request queue
2013-03-28 13:44:57.338+0100 [Thread-23] [DEBUG] StreamInSession.java(104) org.apache.cassandra.streaming.StreamInSession: Adding file /path/to/cassandra/data/ks/cf/ks-cf-ib-2-Data.db to Stream Request queue
2013-03-28 13:44:57.340+0100 [Thread-23] [DEBUG] StreamInSession.java(104) org.apache.cassandra.streaming.StreamInSession: Adding file /path/to/cassandra/data/ks/cf/ks-cf-ib-1-Data.db to Stream Request queue
{noformat}

And here is the result grepped on the two threads:

{noformat}
2013-03-28 13:44:57.477+0100 [Thread-22] [TRACE] SSTableWriter.java(145) org.apache.cassandra.io.sstable.SSTableWriter: wrote DecoratedKey(-8516046549581000893, 6663363133663230623932663663303732623735653332643964616261623165) at 183591
2013-03-28 13:44:57.477+0100 [Thread-22] [TRACE] SSTableWriter.java(463) org.apache.cassandra.io.sstable.SSTableWriter: wrote index entry: org.apache.cassandra.db.RowIndexEntry@7b553d18 at 16192
2013-03-28 13:44:57.477+0100 [Thread-22] [TRACE] SSTableWriter.java(145) org.apache.cassandra.io.sstable.SSTableWriter: wrote DecoratedKey(-8513551951874950453, 3934363831326161323235653165613662613039346233356264386461653735) at 183995
2013-03-28 13:44:57.478+0100 [Thread-22] [TRACE] SSTableWriter.java(463) org.apache.cassandra.io.sstable.SSTableWriter: wrote index entry: org.apache.cassandra.db.RowIndexEntry@d5f0688 at 16238
2013-03-28 13:44:57.501+0100 [Thread-22] [DEBUG] FileUtils.java(110) org.apache.cassandra.io.util.FileUtils: Deleting ks-cf-tmp-ib-1-Data.db
2013-03-28 13:44:57.501+0100 [Thread-22] [DEBUG] FileUtils.java(110) org.apache.cassandra.io.util.FileUtils: Deleting ks-cf-tmp-ib-1-Filter.db
2013-03-28 13:44:57.501+0100 [Thread-22] [DEBUG] FileUtils.java(110) org.apache.cassandra.io.util.FileUtils: Deleting ks-cf-tmp-ib-1-TOC.txt
2013-03-28 13:44:57.501+0100 [Thread-22] [DEBUG] FileUtils.java(110) org.apache.cassandra.io.util.FileUtils: Deleting ks-cf-tmp-ib-1-CompressionInfo.db
2013-03-28 13:44:57.502+0100 [Thread-22] [DEBUG] FileUtils.java(110) org.apache.cassandra.io.util.FileUtils: Deleting ks-cf-tmp-ib-1-Index.db
2013-03-28 13:44:57.502+0100 [Thread-22] [DEBUG] SSTable.java(154) org.apache.cassandra.io.sstable.SSTable: Deleted /path/to/cassandra/data/ks/cf/ks-cf-tmp-ib-1
2013-03-28 13:44:57.503+0100 [Thread-22] [INFO] StreamInSession.java(136) org.apache.cassandra.streaming.StreamInSession: Streaming of file /path/to/cassandra/data/ks/cf/ks-cf-ib-2-Data.db sections=130 progress=67628/1583497 - 4% for org.apache.cassandra.streaming.StreamInSession@21400eb0 failed: requesting a retry.
2013-03-28 13:44:57.504+0100 [Thread-22] [DEBUG] IncomingTcpConnection.java(91) org.apache.cassandra.net.IncomingTcpConnection: IOException reading from socket; closing
java.io.IOException: CRC unmatched
	at org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:111)
	at org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
	at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:320)
	at org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
	at org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
	at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
	at org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
	at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
	at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:238)
	at
{noformat}
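For context on the "CRC unmatched" failure in the trace above, here is a minimal sketch (assumed names, not the actual CompressedInputStream logic) of the kind of per-chunk checksum validation that raises such an IOException: the sender computes a CRC over each compressed chunk, and the receiver recomputes and compares it before decompressing.

```java
import java.io.IOException;
import java.util.zip.CRC32;

// Sketch only: each streamed chunk carries the checksum computed by the
// sender; the receiver recomputes it and rejects the chunk on mismatch,
// surfacing as an IOException like the "CRC unmatched" above.
public class ChunkChecksum {
    public static void verify(byte[] chunk, long expectedCrc) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(chunk, 0, chunk.length);
        if (crc.getValue() != expectedCrc)
            throw new IOException("CRC unmatched");
    }

    public static void main(String[] args) throws IOException {
        byte[] data = "123456789".getBytes();
        // 0xCBF43926 is the standard CRC-32 check value for "123456789"
        verify(data, 0xCBF43926L); // passes silently
    }
}
```

If two threads really do stream the same file over interleaving connections, corrupted or misordered bytes on the receiving side would make exactly this kind of mismatch likely.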
[jira] [Commented] (CASSANDRA-5391) SSL problems with inter-DC communication
[ https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616355#comment-13616355 ]

Ondřej Černoš commented on CASSANDRA-5391:
------------------------------------------

How does cassandra compute the number of threads involved in streaming?


SSL problems with inter-DC communication
----------------------------------------

                Key: CASSANDRA-5391
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
            Project: Cassandra
         Issue Type: Bug
         Components: Core
   Affects Versions: 1.2.3
        Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version
java version 1.6.0_23
Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
$ uname -a
Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/redhat-release
Scientific Linux release 6.3 (Carbon)
$ facter | grep ec2
...
ec2_placement = availability_zone=us-east-1d
...
$ rpm -qi cassandra
cassandra-1.2.3-1.el6.cmp1.noarch (custom built rpm from cassandra tarball distribution)
           Reporter: Ondřej Černoš
           Assignee: T Jake Luciani
           Priority: Blocker

I get SSL and snappy compression errors in a multiple datacenter setup. The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use a slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex able to parse the Rackspace/Openstack availability zone, which happens to be in an unusual format).
During {{nodetool rebuild}} tests I managed to (consistently) trigger the following error:

{noformat}
2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] IncomingTcpConnection.java(79) org.apache.cassandra.net.IncomingTcpConnection: IOException reading from socket; closing
java.io.IOException: FAILED_TO_UNCOMPRESS(5)
	at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
	at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
	at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
	at org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
	at org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
	at org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
	at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
	at org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
	at org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
	at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
	at org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
	at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
	at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
	at org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
	at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
{noformat}

The exception is raised during DB file download. What is strange is the following:

* the exception is raised only when rebuilding from AWS into Rackspace
* the exception is raised only when all nodes are up and running in AWS (all 3). In other words, if I bootstrap from one or two nodes in AWS, the command succeeds.

Packet-level inspection revealed malformed packets _on both ends of communication_ (the packet is considered malformed on the machine it originates on). Further investigation raised two more concerns:

* We managed to get another stacktrace when testing the scenario. The exception was raised only once during the tests, when I throttled the inter-datacenter bandwidth to 1Mbps.

{noformat}
java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
	at com.google.common.base.Throwables.propagate(Throwables.java:160)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
	at java.lang.Thread.run(Thread.java:662)
Caused by: javax.net.ssl.SSLException: bad record MAC
	at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
	at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
	at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1607)
	at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:859)
	at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:755)
	at
{noformat}
[jira] [Commented] (CASSANDRA-5391) SSL problems with inter-DC communication
[ https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616383#comment-13616383 ]

Yuki Morishita commented on CASSANDRA-5391:
-------------------------------------------

When sending a file, it is single-threaded per destination.
[jira] [Commented] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)
[ https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13616410#comment-13616410 ]

Vijay commented on CASSANDRA-4860:
----------------------------------

Ahaa, missed it - it is the key cache which is slowing down the performance.
[jira] [Created] (CASSANDRA-5397) Updates to PerRowSecondaryIndex don't use most current values
Sam Tunnicliffe created CASSANDRA-5397:
-------------------------------------------

            Summary: Updates to PerRowSecondaryIndex don't use most current values
                Key: CASSANDRA-5397
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5397
            Project: Cassandra
         Issue Type: Bug
   Affects Versions: 1.2.3
           Reporter: Sam Tunnicliffe
           Assignee: Sam Tunnicliffe
           Priority: Minor

The way that updates to secondary indexes are performed using SecondaryIndexManager.Updater is flawed for PerRowSecondaryIndexes. Unlike PerColumnSecondaryIndexes, which only require the old and new values for a single column, the expectation is that a PerRow indexer can be given just a key, which it will use to retrieve the entire row (or as many columns as it requires) and perform its indexing on those columns. As the indexes are updated before the memtable atomic swap occurs, a per-row indexer may only read the previous values for the row, not the new ones that are being written. In the case of an insert, there is no previous value and so nothing is added to the index.
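The ordering problem described above can be shown with a toy model (hypothetical maps standing in for the memtable and the per-row index; none of these names are Cassandra's API): because the indexer reads the row before the new values become visible, it always indexes the previous state, and a fresh insert indexes nothing.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the update ordering described in the report above.
public class IndexOrderingDemo {
    public static final Map<String, String> rows = new HashMap<>();  // stands in for the memtable
    public static final Map<String, String> index = new HashMap<>(); // stands in for the per-row index

    public static void write(String key, String newValue) {
        // 1. the index update runs first and reads the row by key,
        //    so it can only see the previous row state
        String visible = rows.get(key);
        if (visible != null)
            index.put(key, visible);  // indexes the OLD value
        // 2. only now does the "atomic swap" publish the new value
        rows.put(key, newValue);
    }

    public static void main(String[] args) {
        write("k1", "v1");                   // insert: indexer saw no row
        System.out.println(index.get("k1")); // null - nothing was indexed
        write("k1", "v2");                   // update: indexer saw stale "v1"
        System.out.println(index.get("k1")); // v1 - one write behind
    }
}
```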
[jira] [Updated] (CASSANDRA-5397) Updates to PerRowSecondaryIndex don't use most current values
[ https://issues.apache.org/jira/browse/CASSANDRA-5397?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sam Tunnicliffe updated CASSANDRA-5397:
---------------------------------------
    Attachment: 5397.txt
[jira] [Created] (CASSANDRA-5398) Remove localTimestamp from merkle-tree calculation (for tombstones)
Christian Spriegel created CASSANDRA-5398:
---------------------------------------------

            Summary: Remove localTimestamp from merkle-tree calculation (for tombstones)
                Key: CASSANDRA-5398
                URL: https://issues.apache.org/jira/browse/CASSANDRA-5398
            Project: Cassandra
         Issue Type: Improvement
         Components: Core
           Reporter: Christian Spriegel
           Priority: Trivial
        Attachments: V1.patch

DeletedColumn and RangeTombstone use the local timestamp to update the digest during repair. Even though it's only a second-precision timestamp, I think it still causes some differences in the merkle tree, therefore causing over-repair. I attached a patch on trunk that adds a modified updateDigest() to DeletedColumn which does not use the value field for its calculation.
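A sketch of why hashing the node-local timestamp causes over-repair (field names here are assumed for illustration, not copied from DeletedColumn): two replicas that hold the same logical deletion but recorded it a second apart locally will produce different digests, so their merkle trees disagree and repair streams data that is actually in sync.

```java
import java.nio.ByteBuffer;
import java.security.MessageDigest;
import java.util.Arrays;

// Illustration only: compares tombstone digests with and without the
// node-local deletion time included, mimicking the behavior the patch removes.
public class TombstoneDigestDemo {
    public static byte[] digest(long markedForDeleteAt, int localDeletionTime,
                                boolean includeLocalTime) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        md.update(ByteBuffer.allocate(8).putLong(markedForDeleteAt).array());
        if (includeLocalTime) // hashing the local, second-precision timestamp
            md.update(ByteBuffer.allocate(4).putInt(localDeletionTime).array());
        return md.digest();
    }

    public static void main(String[] args) throws Exception {
        // same logical deletion, applied one second apart on two replicas
        byte[] a = digest(1000L, 1364500000, true);
        byte[] b = digest(1000L, 1364500001, true);
        System.out.println(Arrays.equals(a, b)); // false: spurious tree difference

        byte[] c = digest(1000L, 1364500000, false);
        byte[] d = digest(1000L, 1364500001, false);
        System.out.println(Arrays.equals(c, d)); // true: replicas agree
    }
}
```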
[jira] [Updated] (CASSANDRA-5398) Remove localTimestamp from merkle-tree calculation (for tombstones)
[ https://issues.apache.org/jira/browse/CASSANDRA-5398?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Christian Spriegel updated CASSANDRA-5398:
------------------------------------------
    Attachment: V1.patch
[Cassandra Wiki] Trivial Update of Finding_The_Ultimate_Brass_Bed by JulieHebe
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The Finding_The_Ultimate_Brass_Bed page has been changed by JulieHebe: http://wiki.apache.org/cassandra/Finding_The_Ultimate_Brass_Bed New page: Antique Brass Beds ended up quite the trend at many people contemporary houses at the late '80s and early '90s. It would seem these are creating a comeback at stores and antique household furniture markets.BR BR Brass beds first started showing up at around 1820 and these are well liked by many people.. They improved from popularity throughout the later 19th century, and ended up being loved in their well produced design. A brass bed is a affirmation of style, and in many presents a time gone by. Others find brass beds captivating mainly because of the effort and artistry that went into creating them.BR The brass used for producing the beds was commonly a golden colored alloy formulated from zinc and copper alloys. The proportion of brass in the beds varied, depending on the manufacturer. They were either 100% brass construction or designed from iron, with a brass plate metal coating. In most cases, the bed would likely consist of a footboard and headboard produced of brass, with a iron framework to brace the bed.BR BR Kinds of antique brass beds.BR Brass beds frequently come in a number of variations, from ordinary styles to opulent and sophisticated patterns suited for a king. There are numerous styles of brass beds on the market today. Listed below are some to bear in mind when hunting for an antique brass bed.BR BR The Victorian.BR The brass beds that ended up being made during the Victorian times frequently had tall posts on each corner on the bed, and have been called testers and half-testers. The posts were used to retain the canopy over the bed, which then held draperies or heavy drapes which encircled the bed. 
These curtains were employed to keep the breezes out and the bed area warm at night, as the old homes were not always warmed.BR Similar to many other Victorian household furniture, with distinctive lavish carvings, brass beds have been no exception. The Victorian style of brass beds ended up very extravigantly decorated with filigree, finials, knobs and turnings. Their good looking, romantic designs frequently represented many curved details, Medieval archways and finials built from fragile hand-painted china.BR BR Oxford.BR The Oxford brass bed actually came into being around the period on the Civil War. These are clearly acknowledged by the unmistakable complementing curved tops of the headboard and foot board. The simple style of the Oxford bed was unpretentious; this bed has basic, clean, ascetic lines and fits very easily into both trendy and traditional rooms. The Oxford style seems to be one of the more popular designs of brass beds. These can be found at either antique or reproduction models.BR BR Art NouveauBR This type of brass bed style that became fashionable was known as Art Nouveau design around the late 1800s throughout the early 1900s. These bed frames were characterized by beautiful curves and delicate floral scroll work, keeping faithful to the aesthetic theme which was established during this time. A great number of art nouveau type of brass beds employed a series of china accessories and brass rosettes in combination with white paint and wrought iron along with brass give these beds personality.BR BR Art DecoBR Art Deco brass beds were typically portrayed by straight lines and sharp angles, made the trend fashionable because of the art work and architectural design characteristics of the 1930s and 40s. Upright slat attractiveness was a favored design for many people, and various beds came with assorted geometric carvings. 
Another feature found in the beds of this period were lengthened to appear long and slim, a common aspect of the two art deco and art nouveau styles. BR BR Contemporary.BR The modern brass beds that are quite often seen are often reproductions of antiques or the new designs. Antique brass beds can be quite expensive, and they may be difficult to find a good quality bed. These beds were designed primarily in the single or double sizes. BR BR Nothing brings elegance to a house like an antique brass bed. It gives a bed room life like nothing else can.BR BR For more information in regards to [[http://brassbed.org/cleaning-antique-brass-bed/|http://brassbed.org/]] review brassbed.org/cleaning-antique-brass-bed/
[jira] [Commented] (CASSANDRA-5391) SSL problems with inter-DC communication
[ https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616475#comment-13616475 ] Ondřej Černoš commented on CASSANDRA-5391: -- How does this information match the observed behaviour? I can clearly see two threads downloading the file. SSL problems with inter-DC communication Key: CASSANDRA-5391 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.3 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version java version 1.6.0_23 Java(TM) SE Runtime Environment (build 1.6.0_23-b05) Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode) $ uname -a Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 x86_64 x86_64 x86_64 GNU/Linux $ cat /etc/redhat-release Scientific Linux release 6.3 (Carbon) $ facter | grep ec2 ... ec2_placement = availability_zone=us-east-1d ... $ rpm -qi cassandra cassandra-1.2.3-1.el6.cmp1.noarch (custom built rpm from cassandra tarball distribution) Reporter: Ondřej Černoš Assignee: T Jake Luciani Priority: Blocker I get SSL and snappy compression errors in multiple datacenter setup. The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex able to parse the Rackspace/Openstack availability zone which happens to be in unusual format). 
During {{nodetool rebuild}} tests I managed to (consistently) trigger the following error: {noformat} 2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] IncomingTcpConnection.java(79) org.apache.cassandra.net.IncomingTcpConnection: IOException reading from socket; closing java.io.IOException: FAILED_TO_UNCOMPRESS(5) at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78) at org.xerial.snappy.SnappyNative.rawUncompress(Native Method) at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391) at org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93) at org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101) at org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79) at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337) at org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140) at org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361) at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371) at org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160) at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122) at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226) at org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166) at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66) {noformat} The exception is raised during DB file download. What is strange is the following: * the exception is raised only when rebuildig from AWS into Rackspace * the exception is raised only when all nodes are up and running in AWS (all 3). In other words, if I bootstrap from one or two nodes in AWS, the command succeeds. 
Packet-level inspection revealed malformed packets _on both ends of communication_ (the packet is considered malformed on the machine it originates on). Further investigation raised two more concerns: * We managed to get another stacktrace when testing the scenario. The exception was raised only once during the tests and was raised when I throttled the inter-datacenter bandwidth to 1Mbps. {noformat} java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC at com.google.common.base.Throwables.propagate(Throwables.java:160) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32) at java.lang.Thread.run(Thread.java:662) Caused by: javax.net.ssl.SSLException: bad record MAC at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1607) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:859) at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:755) at
[jira] [Updated] (CASSANDRA-2698) Instrument repair to be able to assess it's efficiency (precision)
[ https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-2698: Attachment: patch-rebased.diff Instrument repair to be able to assess it's efficiency (precision) -- Key: CASSANDRA-2698 URL: https://issues.apache.org/jira/browse/CASSANDRA-2698 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Benedict Priority: Minor Labels: lhf Attachments: nodetool_repair_and_cfhistogram.tar.gz, patch_2698_v1.txt, patch.diff, patch-rebased.diff Some reports indicate that repair sometime transfer huge amounts of data. One hypothesis is that the merkle tree precision may deteriorate too much at some data size. To check this hypothesis, it would be reasonably to gather statistic during the merkle tree building of how many rows each merkle tree range account for (and the size that this represent). It is probably an interesting statistic to have anyway. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2698) Instrument repair to be able to assess it's efficiency (precision)
[ https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616569#comment-13616569 ] Benedict commented on CASSANDRA-2698: - Hi Yuki, The patch was created some time ago, and there were some minor renames/changes to MerkleTree and AntiEntropyService in the meantime. I've pulled the latest changes, merged, and regenerated the patch. This is against the main trunk. Instrument repair to be able to assess it's efficiency (precision) -- Key: CASSANDRA-2698 URL: https://issues.apache.org/jira/browse/CASSANDRA-2698 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Benedict Priority: Minor Labels: lhf Attachments: nodetool_repair_and_cfhistogram.tar.gz, patch_2698_v1.txt, patch.diff, patch-rebased.diff Some reports indicate that repair sometime transfer huge amounts of data. One hypothesis is that the merkle tree precision may deteriorate too much at some data size. To check this hypothesis, it would be reasonably to gather statistic during the merkle tree building of how many rows each merkle tree range account for (and the size that this represent). It is probably an interesting statistic to have anyway. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)
[ https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Vijay updated CASSANDRA-4860: - Attachment: 0001-4860-v2.patch Hi Ryan, Since you have the environment do you mind testing -v2? It is not a final patch, I have to verify the accuracy of the estimate though. Estimated Row Cache Entry size incorrect (always 24?) - Key: CASSANDRA-4860 URL: https://issues.apache.org/jira/browse/CASSANDRA-4860 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.1.0, 1.2.3, 2.0 Reporter: Chris Burroughs Assignee: Vijay Fix For: 1.2.0 beta 3 Attachments: 0001-4860-v2.patch, 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch, trunk-4860-revert.patch After running for several hours the RowCacheSize was suspicious low (ie 70 something MB) I used CASSANDRA-4859 to measure the size and number of entries on a node: In [3]: 1560504./65021 Out[3]: 24.0 In [4]: 2149464./89561 Out[4]: 24.0 In [6]: 7216096./300785 Out[6]: 23.990877204647838 That's RowCacheSize/RowCacheNumEntires . Just to prove I don't have crazy small rows the mean size of the row *keys* in the saved cache is 67 and Compacted row mean size: 355. No jamm errors in the log Config notes: row_cache_provider: ConcurrentLinkedHashCacheProvider row_cache_size_in_mb: 2048 Version info: * C*: 1.1.6 * centos 2.6.32-220.13.1.el6.x86_64 * java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
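The suspicious per-entry figure in the original report falls straight out of the reported metrics — a quick sanity check using only the numbers quoted in the description (nothing else is measured here):

```java
import java.util.Locale;

// Mean row-cache entry size = RowCacheSize / RowCacheNumEntries, for the
// three samples Chris Burroughs reported via CASSANDRA-4859.
public class CacheEntryCheck {
    public static void main(String[] args) {
        long[][] samples = { {1560504, 65021}, {2149464, 89561}, {7216096, 300785} };
        for (long[] s : samples) {
            double mean = (double) s[0] / s[1];
            System.out.printf(Locale.ROOT, "%.2f%n", mean);
        }
        // All three means are ~24 bytes -- roughly a bare object header/reference,
        // far below the reported 67-byte mean key and 355-byte mean compacted row,
        // which is consistent with the estimator sizing the entry shell only.
    }
}
```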
[jira] [Commented] (CASSANDRA-5395) Compaction doesn't remove index entries as designed
[ https://issues.apache.org/jira/browse/CASSANDRA-5395?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616605#comment-13616605 ] Sam Tunnicliffe commented on CASSANDRA-5395: lgtm, just have 2 trivial queries: In LCR & PCR, if the purpose of the additional clauses is to omit unnecessary column lookups, should the column lookup be the last of the &&'d conditions? {code} if (indexer != SecondaryIndexManager.nullUpdater && !column.isMarkedForDelete() && container.getColumn(column.name()) != column) {code} Class documentation in IdentityQueryFilter states "Only for use in testing; will read entire CF into memory." Seeing as it's being used in non-test code we should probably amend the docstring Compaction doesn't remove index entries as designed --- Key: CASSANDRA-5395 URL: https://issues.apache.org/jira/browse/CASSANDRA-5395 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.0 Reporter: Jonathan Ellis Priority: Minor Fix For: 1.2.4 Attachments: 5395-2.txt, 5395.txt PerColumnIndexUpdater ignores updates where the new value is a tombstone. It should still remove the index entry on oldColumn. (Note that this will not affect user-visible correctness, since KeysSearcher/CompositeSearcher will issue deletes against stale index entries, but having more stale entries than we should could affect performance.) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Comment Edited] (CASSANDRA-2698) Instrument repair to be able to assess it's efficiency (precision)
[ https://issues.apache.org/jira/browse/CASSANDRA-2698?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616569#comment-13616569 ] Benedict edited comment on CASSANDRA-2698 at 3/28/13 8:26 PM: -- Hi Yuki, The patch was created some time ago, and there were some minor renames/changes to MerkleTree and AntiEntropyService in the meantime. I've pulled the latest changes, merged, and regenerated the patch. This is against the main trunk / HEAD branch. was (Author: benedict): Hi Yuki, The patch was created some time ago, and there were some minor renames/changes to MerkleTree and AntiEntropyService in the meantime. I've pulled the latest changes, merged, and regenerated the patch. This is against the main trunk. Instrument repair to be able to assess it's efficiency (precision) -- Key: CASSANDRA-2698 URL: https://issues.apache.org/jira/browse/CASSANDRA-2698 Project: Cassandra Issue Type: Improvement Reporter: Sylvain Lebresne Assignee: Benedict Priority: Minor Labels: lhf Attachments: nodetool_repair_and_cfhistogram.tar.gz, patch_2698_v1.txt, patch.diff, patch-rebased.diff Some reports indicate that repair sometime transfer huge amounts of data. One hypothesis is that the merkle tree precision may deteriorate too much at some data size. To check this hypothesis, it would be reasonably to gather statistic during the merkle tree building of how many rows each merkle tree range account for (and the size that this represent). It is probably an interesting statistic to have anyway. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-3512) Getting Started instructions don't work in README.txt - wrong version of jamm, wrong path
[ https://issues.apache.org/jira/browse/CASSANDRA-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616636#comment-13616636 ] Eugene commented on CASSANDRA-3512: --- I had this error under CentOS 5 using the RPM packages provided by DataStax. The issue for me was '/etc/cassandra/conf/cassandra-env.sh' called jamm using the following: {noformat} cassandra-env.sh:JVM_OPTS=$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar {noformat} However, $CASSANDRA_HOME isn't set anywhere. I fixed it by adding the following to '/usr/share/cassandra/cassandra.in.sh': {noformat} CASSANDRA_HOME=/usr/share/cassandra {noformat} Getting Started instructions don't work in README.txt - wrong version of jamm, wrong path - Key: CASSANDRA-3512 URL: https://issues.apache.org/jira/browse/CASSANDRA-3512 Project: Cassandra Issue Type: Bug Components: Packaging Environment: Ubuntu 11.04 Reporter: David Allsopp Assignee: Brandon Williams Priority: Minor Fix For: 2.0 Download latest release from http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.0.3/apache-cassandra-1.0.3-bin.tar.gz Unpack the tarball. Follow instructions in README.txt, concluding with: {noformat} dna@master:~/code/apache-cassandra-1.0.3$ bin/cassandra -f Error opening zip file or JAR manifest missing : /lib/jamm-0.2.1.jar Error occurred during initialization of VM agent library failed to init: instrument {noformat} Firstly, the version of jamm packaged with Cassandra 1.0.3 is jamm-0.2.5, not jamm-0.2.1. Both bin/cassandra.bat and conf/cassandra-env.sh reference jamm-0.2.5 so not sure where jamm-0.2.1 is being referenced from - nothing obvious using grep. Secondly, /lib/jamm-0.2.1.jar is the wrong path - should be set relative to working directory, not filesystem root (Incidentally, Cassandra v1.0.3 is still listed as unreleased on JIRA.) -- This message is automatically generated by JIRA. 
If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
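Eugene's diagnosis above boils down to variable expansion: with `$CASSANDRA_HOME` unset, the `-javaagent` path in cassandra-env.sh collapses to the filesystem root. A sketch of what the script effectively builds (illustrative Java, not the actual cassandra-env.sh logic; the RPM path follows his report):

```java
// Models the shell behaviour: an unset variable expands to the empty string,
// rooting the javaagent path at "/" -- exactly the path in the error message.
public class JammPathCheck {
    static String agentFlag(String cassandraHome) {
        String home = (cassandraHome == null) ? "" : cassandraHome;
        return "-javaagent:" + home + "/lib/jamm-0.2.5.jar";
    }

    public static void main(String[] args) {
        System.out.println(agentFlag(null));                   // before: CASSANDRA_HOME unset
        System.out.println(agentFlag("/usr/share/cassandra")); // after the cassandra.in.sh fix
    }
}
```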
[jira] [Comment Edited] (CASSANDRA-3512) Getting Started instructions don't work in README.txt - wrong version of jamm, wrong path
[ https://issues.apache.org/jira/browse/CASSANDRA-3512?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616636#comment-13616636 ] Eugene edited comment on CASSANDRA-3512 at 3/28/13 8:53 PM: I had this error under CentOS 5 using the RPM packages provided by DataStax. The issue for me was '/etc/cassandra/conf/cassandra-env.sh' called jamm using the following: {noformat} cassandra-env.sh:JVM_OPTS=$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar {noformat} However, $CASSANDRA_HOME isn't set anywhere. I fixed it by adding the following to '/usr/share/cassandra/cassandra.in.sh': {noformat} CASSANDRA_HOME=/usr/share/cassandra {noformat} was (Author: aechttpd): I had this error under CentOS 5 using the RPM packages provided by DataStax. The issue for me was '/etc/cassandra/conf/cassandra-env.sh' called jamm using the following: {noformat} cassandra-env.sh:JVM_OPTS=$JVM_OPTS -javaagent:$CASSANDRA_HOME/lib/jamm-0.2.5.jar {noformat} However, $CASSANDRA_HOME isn't set anywhere. I fixed it by adding the following to '/usr/share/cassandra/cassandra.in.sh': {noformat} CASSANDRA_HOME=/usr/share/cassandra {noformat} Getting Started instructions don't work in README.txt - wrong version of jamm, wrong path - Key: CASSANDRA-3512 URL: https://issues.apache.org/jira/browse/CASSANDRA-3512 Project: Cassandra Issue Type: Bug Components: Packaging Environment: Ubuntu 11.04 Reporter: David Allsopp Assignee: Brandon Williams Priority: Minor Fix For: 2.0 Download latest release from http://www.apache.org/dyn/closer.cgi?path=/cassandra/1.0.3/apache-cassandra-1.0.3-bin.tar.gz Unpack the tarball. 
Follow instructions in README.txt, concluding with: {noformat} dna@master:~/code/apache-cassandra-1.0.3$ bin/cassandra -f Error opening zip file or JAR manifest missing : /lib/jamm-0.2.1.jar Error occurred during initialization of VM agent library failed to init: instrument {noformat} Firstly, the version of jamm packaged with Cassandra 1.0.3 is jamm-0.2.5, not jamm-0.2.1. Both bin/cassandra.bat and conf/cassandra-env.sh reference jamm-0.2.5 so not sure where jamm-0.2.1 is being referenced from - nothing obvious using grep. Secondly, /lib/jamm-0.2.1.jar is the wrong path - should be set relative to working directory, not filesystem root (Incidentally, Cassandra v1.0.3 is still listed as unreleased on JIRA.) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-4860) Estimated Row Cache Entry size incorrect (always 24?)
[ https://issues.apache.org/jira/browse/CASSANDRA-4860?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616642#comment-13616642 ] Ryan McGuire commented on CASSANDRA-4860: - With your v2 patch applied I get an average read rate of 14276. That's actually much worse than the first patch. To make sure something is not amiss, I re-ran the 2.0 baseline and got comparable results to before (22524). The number we're hoping to get back to is ~28000. Estimated Row Cache Entry size incorrect (always 24?) - Key: CASSANDRA-4860 URL: https://issues.apache.org/jira/browse/CASSANDRA-4860 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.1.0, 1.2.3, 2.0 Reporter: Chris Burroughs Assignee: Vijay Fix For: 1.2.0 beta 3 Attachments: 0001-4860-v2.patch, 0001-CASSANDRA-4860-for-11.patch, 0001-CASSANDRA-4860.patch, trunk-4860-revert.patch After running for several hours the RowCacheSize was suspicious low (ie 70 something MB) I used CASSANDRA-4859 to measure the size and number of entries on a node: In [3]: 1560504./65021 Out[3]: 24.0 In [4]: 2149464./89561 Out[4]: 24.0 In [6]: 7216096./300785 Out[6]: 23.990877204647838 That's RowCacheSize/RowCacheNumEntires . Just to prove I don't have crazy small rows the mean size of the row *keys* in the saved cache is 67 and Compacted row mean size: 355. No jamm errors in the log Config notes: row_cache_provider: ConcurrentLinkedHashCacheProvider row_cache_size_in_mb: 2048 Version info: * C*: 1.1.6 * centos 2.6.32-220.13.1.el6.x86_64 * java 6u31 Java HotSpot(TM) 64-Bit Server VM (build 20.6-b01, mixed mode) -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Created] (CASSANDRA-5399) Offer pluggable security for inter-node communication
Ahmed Bashir created CASSANDRA-5399: --- Summary: Offer pluggable security for inter-node communication Key: CASSANDRA-5399 URL: https://issues.apache.org/jira/browse/CASSANDRA-5399 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.1.10 Environment: Production Reporter: Ahmed Bashir Inter-node communication can be only encrypted using TLS/SSL; it would be good to allow this piece to be pluggable, as is the case with authentication/authorization of Thrift requests and endpoint snitch implementations. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
git commit: update comments
Updated Branches: refs/heads/trunk 55dda732b - be2726b39 update comments Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/be2726b3 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/be2726b3 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/be2726b3 Branch: refs/heads/trunk Commit: be2726b39d54bb0f00d2479070b39b939db4a3cb Parents: 55dda73 Author: Jonathan Ellis jbel...@apache.org Authored: Thu Mar 28 17:48:03 2013 -0500 Committer: Jonathan Ellis jbel...@apache.org Committed: Thu Mar 28 17:48:03 2013 -0500 -- .../cassandra/service/ActiveRepairService.java | 14 +++--- 1 files changed, 7 insertions(+), 7 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/be2726b3/src/java/org/apache/cassandra/service/ActiveRepairService.java -- diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java b/src/java/org/apache/cassandra/service/ActiveRepairService.java index 4a9eefb..3c5ba7f 100644 --- a/src/java/org/apache/cassandra/service/ActiveRepairService.java +++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java @@ -50,11 +50,11 @@ import org.apache.cassandra.streaming.StreamingRepairTask; import org.apache.cassandra.utils.*; /** - * AntiEntropyService encapsulates validating (hashing) individual column families, + * ActiveRepairService encapsulates validating (hashing) individual column families, * exchanging MerkleTrees with remote nodes via a TreeRequest/Response conversation, * and then triggering repairs for disagreeing ranges. * - * Every Tree conversation has an 'initiator', where valid trees are sent after generation + * The node where repair was invoked acts as the 'initiator,' where valid trees are sent after generation * and where the local and remote tree will rendezvous in rendezvous(cf, endpoint, tree). * Once the trees rendezvous, a Differencer is executed and the service can trigger repairs * for disagreeing ranges. 
@@ -62,11 +62,11 @@ import org.apache.cassandra.utils.*; * Tree comparison and repair triggering occur in the single threaded Stage.ANTIENTROPY. * * The steps taken to enact a repair are as follows: - * 1. A major compaction is triggered via nodeprobe: - * * Nodeprobe sends TreeRequest messages to all neighbors of the target node: when a node - * receives a TreeRequest, it will perform a readonly compaction to immediately validate - * the column family. - * 2. The compaction process validates the column family by: + * 1. A repair is requested via nodeprobe: + * * The initiator sends TreeRequest messages to all neighbors of the target node: when a node + * receives a TreeRequest, it will perform a validation (read-only) compaction to immediately validate + * the column family. This is performed on the CompactionManager ExecutorService. + * 2. The validation process builds the merkle tree by: * * Calling Validator.prepare(), which samples the column family to determine key distribution, * * Calling Validator.add() in order for every row in the column family, * * Calling Validator.complete() to indicate that all rows have been added.
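The validation flow the updated comment describes — hash every row into a merkle tree, then compare trees between replicas and repair only disagreeing ranges — can be sketched as a toy (class and method names below are illustrative, not the actual Cassandra Validator API):

```java
import java.security.MessageDigest;
import java.util.Arrays;
import java.util.List;

// Toy validation compaction: each replica hashes its rows in order
// (the real code builds a merkle tree per token range; here one digest
// stands in for one tree leaf).
public class RepairSketch {
    static byte[] hashRows(List<String> rows) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        for (String row : rows)
            md.update(row.getBytes("UTF-8")); // analogous to Validator.add() per row
        return md.digest();                    // analogous to Validator.complete()
    }

    public static void main(String[] args) throws Exception {
        List<String> replicaA = Arrays.asList("k1:v1", "k2:v2");
        List<String> replicaB = Arrays.asList("k1:v1", "k2:stale"); // one divergent row
        boolean inSync = Arrays.equals(hashRows(replicaA), hashRows(replicaB));
        System.out.println(inSync ? "ranges agree" : "trigger repair for disagreeing range");
    }
}
```

Because a single divergent row changes the digest for the whole range, precision of the tree (rows per leaf) directly controls how much data a repair streams — the concern tracked in CASSANDRA-2698 above.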
[jira] [Commented] (CASSANDRA-5062) Support CAS
[ https://issues.apache.org/jira/browse/CASSANDRA-5062?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616753#comment-13616753 ] Sylvain Lebresne commented on CASSANDRA-5062: - There's an unclear part of the algorithm itself where I'm not sure what the intent is. During prepare, if a replica promises, the ballot it sends back in the response is the one it just got (the one the proposer sent). So in the proposer (in SP.cas()), {{summary.inProgressBallot}} is necessarily our own {{ballot}} (unless it's a reject, but then we don't care anymore). Meaning that in SP.cas(), {{timeComparator.compare(summary.inProgressBallot, summary.mostRecentCommitted) = 0}} could be simplified to {{timeComparator.compare(ballot, summary.mostRecentCommitted) = 0}}. But it also means that in PrepareCallback, the inProgressUpdate we keep is pretty much chosen randomly (since all {{inProgressBallot}} will in fact be equal to {{ballot}}). Was that the intent? Or was the intent that during prepare, when the replica promises, it returns the previous inProgressBallot, i.e. the one before setting the new ballot? (I think there might be a problem with both choices, but before getting to that I want to make sure of the initial intent.) Some other remarks while I'm at it: * PaxosState.propose always returns true as the first argument of PrepareResponse (it always promises). * mostRecentCommitted doesn't seem to ever be set. * I don't think the commit business works. Commit segments can be deleted at any time due to flush, so I don't see how we can guarantee the persistence of the paxos state. Furthermore, when we replay the commit log paxos entries, we don't re-append them to the commit log, so if a node restarts, plays its log and shuts down right away, it'll lose its paxos state too. Why not just use a System table for the Paxos state?
(I don't even think performance would be a big issue, because we can do queries by names that are relatively cheap, and besides, most of the paxos state is deleted by commit, so the only part that will end up in sstables is the mostRecentCommitted, but that's small and very cacheable.) * I'm confused by FBUtilities.timeComparator. I'm not sure what the intent of comparing the clockSequence first is, but I'm pretty sure this is broken. Shouldn't it compare the timestamps of the UUIDs? (The clock sequence is *not* the timestamp.) Furthermore, wasn't the goal to have a comparator that only compares the timestamps (and thus doesn't break ties on the same timestamp)? Lastly, wasn't the goal to reuse the ballot timestamp as the timestamp of the columns in the update we finally commit (so that the column timestamps are coherent with the order decided by Paxos)? * Currently, the value returned by the cas method doesn't mean what it means for CAS in general. Namely, a false might just mean that we've had one refusal amongst the first quorum of received responses, or that we've had to replay a previous round first, and this is irrespective of whether our CAS applies or not. I strongly believe we should return false only if the CAS doesn't apply; otherwise we should just restart a new proposal (probably after some small random delay) until we are allowed to propose our value. Because otherwise: ** the behavior will be unintuitive since it differs from the usual behavior; ** in almost all use cases I can come up with, it will basically force users to do a read every time the cas method returns false, because they have to decide whether their CAS indeed doesn't apply or something else happened; ** this leaks implementation details. * It would be nice to add a comment on what problem the {{inProgressBallot.equals(ballot)}} check in PaxosState.commit fixes. * Can't we avoid FQRow? 
We can get the keyspace back from the cf contained in the row, for instance (this wouldn't work if said cf was null, but we don't have that case, since it makes no sense to provide a null cf to SP.cas). And two very small nits: * In MessagingService, the comment on using padding should be moved before UNUSED_1. * In ProposeCallback, successful.addAndGet(1) -> successful.incrementAndGet(). Support CAS --- Key: CASSANDRA-5062 URL: https://issues.apache.org/jira/browse/CASSANDRA-5062 Project: Cassandra Issue Type: New Feature Components: API, Core Reporter: Jonathan Ellis Fix For: 2.0 Attachments: half-baked commit 1.jpg, half-baked commit 2.jpg, half-baked commit 3.jpg Strong consistency is not enough to prevent race conditions. The classic example is user account creation: we want to ensure usernames are unique, so we only want to signal account creation success if nobody else has created the account yet. But naive read-then-write allows clients to race and both think they have a green
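The retry semantics argued for above (return false only when the CAS genuinely doesn't apply; otherwise back off briefly and re-propose) can be sketched roughly like this. All names are illustrative; an AtomicReference stands in for the contended Paxos round, which it of course does not actually replicate.

```java
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of caller-visible CAS semantics: false only when the compare fails
// against the current value; losing a round to a concurrent proposer triggers
// an internal retry after a small random delay instead of surfacing false.
public class CasRetrySketch {
    private final AtomicReference<String> cell = new AtomicReference<>("initial");

    public boolean cas(String expected, String update) {
        while (true) {
            String current = cell.get();
            if (!current.equals(expected))
                return false;                 // CAS genuinely does not apply
            if (cell.compareAndSet(current, update))
                return true;                  // our proposal was accepted
            // Lost the round to a concurrent proposer: back off and retry
            // rather than returning false to the caller.
            try {
                Thread.sleep(ThreadLocalRandom.current().nextInt(1, 10));
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;
            }
        }
    }

    public static void main(String[] args) {
        CasRetrySketch s = new CasRetrySketch();
        System.out.println("cas(initial, v1): " + s.cas("initial", "v1"));
        System.out.println("cas(initial, v2): " + s.cas("initial", "v2"));
    }
}
```

With these semantics a false return always means "your expected value was wrong", so callers never need a follow-up read just to disambiguate contention from a failed compare.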
[jira] [Commented] (CASSANDRA-5399) Offer pluggable security for inter-node communication
[ https://issues.apache.org/jira/browse/CASSANDRA-5399?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616799#comment-13616799 ] Brandon Williams commented on CASSANDRA-5399: - Out of curiosity, what else would you like to use? It seems that the transport would have to be encrypted anyway, if it's the auth part you want pluggable. Offer pluggable security for inter-node communication -- Key: CASSANDRA-5399 URL: https://issues.apache.org/jira/browse/CASSANDRA-5399 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.1.10 Environment: Production Reporter: Ahmed Bashir Labels: security Inter-node communication can currently only be encrypted using TLS/SSL; it would be good to allow this piece to be pluggable, as is the case with authentication/authorization of Thrift requests and endpoint snitch implementations. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[Cassandra Wiki] Update of DataModel by PatriciaGorla
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The DataModel page has been changed by PatriciaGorla: http://wiki.apache.org/cassandra/DataModel?action=diffrev1=29rev2=30 The row key is what determines what machine data is stored on. Thus, for each key you can have data from multiple column families associated with it. However, these are logically distinct, which is why the Thrift interface is oriented around accessing one !ColumnFamily per key at a time. (TODO given this, is the following JSON more confusing than helpful?) - A JSON representation of the key - column families - column structure is + A JSON representation of the rowkey - column families - column structure is {{{ { -mccv:{ +row_key1:{ Users:{ emailAddress:{name:emailAddress, value:f...@bar.com}, webSite:{name:webSite, value:http://bar.com} @@ -65, +65 @@ visits:{name:visits, value:243} } }, -user2:{ +row_key2:{ Users:{ emailAddress:{name:emailAddress, value:us...@bar.com}, twitter:{name:twitter, value:user2}
[Cassandra Wiki] Trivial Update of DataModel by PatriciaGorla
Dear Wiki user, You have subscribed to a wiki page or wiki category on Cassandra Wiki for change notification. The DataModel page has been changed by PatriciaGorla: http://wiki.apache.org/cassandra/DataModel?action=diffrev1=30rev2=31 } } }}} - Note that the key mccv identifies data in two different column families, Users and Stats. This does not imply that data from these column families is related. The semantics of having data for the same key in two different column families is entirely up to the application. Also note that within the Users column family, mccv and user2 have different column names defined. This is perfectly valid in Cassandra. In fact there may be a virtually unlimited set of column names defined, which leads to fairly common use of the column name as a piece of runtime populated data. This is unusual in storage systems, particularly if you're coming from the RDBMS world. + Note that the key row_key1 identifies data in two different column families, Users and Stats. This does not imply that data from these column families is related. The semantics of having data for the same key in two different column families is entirely up to the application. Also note that within the Users column family, row_key1 and row_key2 have different column names defined. This is perfectly valid in Cassandra. In fact there may be a virtually unlimited set of column names defined, which leads to fairly common use of the column name as a piece of runtime populated data. This is unusual in storage systems, particularly if you're coming from the RDBMS world. = Keyspaces = A keyspace is the first dimension of the Cassandra hash, and is the container for column families. Keyspaces are of roughly the same granularity as a schema or database (i.e. a logical collection of tables) in the RDBMS world. They are the configuration and management point for column families, and are also the structure on which batch inserts are applied.
[jira] [Created] (CASSANDRA-5400) Allow multiple ports to gossip from a single IP address
Carl Yeksigian created CASSANDRA-5400: - Summary: Allow multiple ports to gossip from a single IP address Key: CASSANDRA-5400 URL: https://issues.apache.org/jira/browse/CASSANDRA-5400 Project: Cassandra Issue Type: New Feature Affects Versions: 2.0 Reporter: Carl Yeksigian Assignee: Carl Yeksigian Fix For: 2.0 If a fat client is running on the same machine as a Cassandra node, the fat client must be allocated a new IP address. However, since the node is now a part of the gossip, the other nodes in the ring must be able to talk to it. This means that a local only address (127.0.0.n) won't actually work for the rest of the ring. This also would allow for multiple Cassandra service instances to run on a single machine, or from a group of machines behind a NAT. The change is simple in concept: instead of using an InetAddress, use a different class. Instead of using an InetSocketAddress, which would still tie us to using InetAddress, I've added a new class, CassandraInstanceEndpoint. The serializer allows for reading a serialized Inet4Address or Inet6Address; also, the message service can still communicate with non-CassandraInstanceEndpoint aware code. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-5400) Allow multiple ports to gossip from a single IP address
[ https://issues.apache.org/jira/browse/CASSANDRA-5400?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Carl Yeksigian updated CASSANDRA-5400: -- Attachment: 5400.txt Allow multiple ports to gossip from a single IP address --- Key: CASSANDRA-5400 URL: https://issues.apache.org/jira/browse/CASSANDRA-5400 Project: Cassandra Issue Type: New Feature Affects Versions: 2.0 Reporter: Carl Yeksigian Assignee: Carl Yeksigian Fix For: 2.0 Attachments: 5400.txt If a fat client is running on the same machine as a Cassandra node, the fat client must be allocated a new IP address. However, since the node is now a part of the gossip, the other nodes in the ring must be able to talk to it. This means that a local only address (127.0.0.n) won't actually work for the rest of the ring. This also would allow for multiple Cassandra service instances to run on a single machine, or from a group of machines behind a NAT. The change is simple in concept: instead of using an InetAddress, use a different class. Instead of using an InetSocketAddress, which would still tie us to using InetAddress, I've added a new class, CassandraInstanceEndpoint. The serializer allows for reading a serialized Inet4Address or Inet6Address; also, the message service can still communicate with non-CassandraInstanceEndpoint aware code. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
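The core idea of the proposed change — identifying a cluster member by (address, port) instead of by InetAddress alone — can be illustrated with a minimal value type. This is not the attached patch's actual CassandraInstanceEndpoint, just a sketch of the equality semantics such a class would need so two instances on one machine gossip as distinct endpoints.

```java
import java.net.InetAddress;
import java.util.Objects;

// Minimal endpoint identity keyed on (address, port). With plain InetAddress,
// two Cassandra instances on the same IP would be indistinguishable; adding
// the port to equals/hashCode makes them distinct gossip endpoints.
public final class EndpointSketch {
    private final InetAddress address;
    private final int port;

    public EndpointSketch(InetAddress address, int port) {
        this.address = address;
        this.port = port;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof EndpointSketch)) return false;
        EndpointSketch other = (EndpointSketch) o;
        return port == other.port && address.equals(other.address);
    }

    @Override
    public int hashCode() {
        return Objects.hash(address, port);
    }

    public static void main(String[] args) {
        InetAddress ip = InetAddress.getLoopbackAddress();
        // Same IP, different ports: distinct endpoints, unlike plain InetAddress.
        System.out.println(new EndpointSketch(ip, 7000).equals(new EndpointSketch(ip, 7001)));
        System.out.println(new EndpointSketch(ip, 7000).equals(new EndpointSketch(ip, 7000)));
    }
}
```

Since endpoints key maps in gossip and failure detection, the hashCode contract matters as much as equals here.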
[jira] [Created] (CASSANDRA-5401) Pluggable security feature to prevent node from joining a cluster and running destructive commands
Ahmed Bashir created CASSANDRA-5401: --- Summary: Pluggable security feature to prevent node from joining a cluster and running destructive commands Key: CASSANDRA-5401 URL: https://issues.apache.org/jira/browse/CASSANDRA-5401 Project: Cassandra Issue Type: Improvement Components: Config, Core Affects Versions: 1.1.10 Environment: Production Reporter: Ahmed Bashir It's possible for a node to join an existing cluster (with perhaps more stringent security restrictions, i.e. not using AllowAllAuthentication) and issue destructive commands that affect the cluster at large (e.g. drop a keyspace via cassandra-cli). This could be prevented with a pluggable security module that could be used to implement basic node vetting/identification/etc. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5401) Pluggable security feature to prevent node from joining a cluster and running destructive commands
[ https://issues.apache.org/jira/browse/CASSANDRA-5401?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616958#comment-13616958 ] Brandon Williams commented on CASSANDRA-5401: - I think you're confusing two things here: destructive clients (which authentication can prevent) and destructive nodes (which the SSL truststore can prevent). Pluggable security feature to prevent node from joining a cluster and running destructive commands -- Key: CASSANDRA-5401 URL: https://issues.apache.org/jira/browse/CASSANDRA-5401 Project: Cassandra Issue Type: Improvement Components: Config, Core Affects Versions: 1.1.10 Environment: Production Reporter: Ahmed Bashir Labels: configuration, security It's possible for a node to join an existing cluster (with perhaps more stringent security restrictions, i.e. not using AllowAllAuthentication) and issue destructive commands that affect the cluster at large (e.g. drop a keyspace via cassandra-cli). This could be prevented with a pluggable security module that could be used to implement basic node vetting/identification/etc. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-5344) Make LCR less memory-abusive
[ https://issues.apache.org/jira/browse/CASSANDRA-5344?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13616974#comment-13616974 ] Jonathan Ellis commented on CASSANDRA-5344: --- I think we can just make SSTW.append return null to signify that nothing was written. I'll give that a try. Make LCR less memory-abusive Key: CASSANDRA-5344 URL: https://issues.apache.org/jira/browse/CASSANDRA-5344 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Jonathan Ellis Priority: Minor We've seen several reports of compaction causing GC pauses. You would think this would be the fault of PCR (which materializes the rows in memory) but LCR seems to be more of a problem. I hypothesize that PCR mostly generates just young-gen garbage, but since LCR keeps the BF and row index in-memory for a long time (from construction, until after the row has been merged and written), it gets tenured and can cause fragmentation or promotion failures. -- This message is automatically generated by JIRA. If you think it was sent incorrectly, please contact your JIRA administrators For more information on JIRA, see: http://www.atlassian.com/software/jira
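A minimal sketch of the convention Jonathan suggests above — append returns null to signify nothing was written, so the caller can skip per-row bookkeeping (index entries, bloom filter updates). The names and signature here are hypothetical, not SSTableWriter's real API.

```java
import java.util.List;

// Illustrative only: a writer's append that reports "nothing written" via a
// null return instead of materializing bookkeeping state for empty output.
public class AppendSketch {
    /** Returns the row's start position, or null to signify nothing was written. */
    public static Long append(List<String> mergedColumns, long currentPosition) {
        if (mergedColumns.isEmpty())
            return null; // nothing survived the merge; write nothing
        // ... write the row to disk here, then report where it started ...
        return currentPosition;
    }

    public static void main(String[] args) {
        System.out.println(append(List.of(), 0L));        // nothing to write
        System.out.println(append(List.of("c1"), 128L));  // row written at 128
    }
}
```

Callers then guard the expensive tenured-state updates with a simple null check, which is the memory win being discussed.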
[jira] [Assigned] (CASSANDRA-5391) SSL problems with inter-DC communication
[ https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita reassigned CASSANDRA-5391: - Assignee: Yuki Morishita (was: T Jake Luciani) SSL problems with inter-DC communication Key: CASSANDRA-5391 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.3 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version java version 1.6.0_23 Java(TM) SE Runtime Environment (build 1.6.0_23-b05) Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode) $ uname -a Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 x86_64 x86_64 x86_64 GNU/Linux $ cat /etc/redhat-release Scientific Linux release 6.3 (Carbon) $ facter | grep ec2 ... ec2_placement = availability_zone=us-east-1d ... $ rpm -qi cassandra cassandra-1.2.3-1.el6.cmp1.noarch (custom built rpm from cassandra tarball distribution) Reporter: Ondřej Černoš Assignee: Yuki Morishita Priority: Blocker Attachments: 5391-1.2.txt I get SSL and snappy compression errors in a multi-datacenter setup. The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use a slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex able to parse the Rackspace/Openstack availability zone, which happens to be in an unusual format). 
During {{nodetool rebuild}} tests I managed to (consistently) trigger the following error:
{noformat}
2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] IncomingTcpConnection.java(79) org.apache.cassandra.net.IncomingTcpConnection: IOException reading from socket; closing
java.io.IOException: FAILED_TO_UNCOMPRESS(5)
        at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
        at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
        at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
        at org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
        at org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
        at org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
        at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
        at org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
        at org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
        at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
        at org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
        at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
        at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
        at org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
        at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
{noformat}
The exception is raised during DB file download. What is strange is the following: * the exception is raised only when rebuilding from AWS into Rackspace * the exception is raised only when all nodes are up and running in AWS (all 3). In other words, if I bootstrap from one or two nodes in AWS, the command succeeds. 
Packet-level inspection revealed malformed packets _on both ends of communication_ (the packet is considered malformed on the machine it originates on). Further investigation raised two more concerns: * We managed to get another stacktrace when testing the scenario. The exception was raised only once during the tests, when I throttled the inter-datacenter bandwidth to 1Mbps.
{noformat}
java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
        at com.google.common.base.Throwables.propagate(Throwables.java:160)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
        at java.lang.Thread.run(Thread.java:662)
Caused by: javax.net.ssl.SSLException: bad record MAC
        at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1607)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:859)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:755)
        at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
        at
[jira] [Updated] (CASSANDRA-5391) SSL problems with inter-DC communication
[ https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-5391: -- Attachment: 5391-1.2.txt CompressedFileStreamTask is not sending the right part of the file when inter-node encryption is used, and that causes the various IOExceptions described here. Patch attached with the fix. SSL problems with inter-DC communication Key: CASSANDRA-5391 URL: https://issues.apache.org/jira/browse/CASSANDRA-5391 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.3 Environment: $ /etc/alternatives/jre_1.6.0/bin/java -version java version 1.6.0_23 Java(TM) SE Runtime Environment (build 1.6.0_23-b05) Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode) $ uname -a Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 x86_64 x86_64 x86_64 GNU/Linux $ cat /etc/redhat-release Scientific Linux release 6.3 (Carbon) $ facter | grep ec2 ... ec2_placement = availability_zone=us-east-1d ... $ rpm -qi cassandra cassandra-1.2.3-1.el6.cmp1.noarch (custom built rpm from cassandra tarball distribution) Reporter: Ondřej Černoš Assignee: T Jake Luciani Priority: Blocker Attachments: 5391-1.2.txt I get SSL and snappy compression errors in a multi-datacenter setup. The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use a slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex able to parse the Rackspace/Openstack availability zone, which happens to be in an unusual format). 
During {{nodetool rebuild}} tests I managed to (consistently) trigger the following error:
{noformat}
2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] IncomingTcpConnection.java(79) org.apache.cassandra.net.IncomingTcpConnection: IOException reading from socket; closing
java.io.IOException: FAILED_TO_UNCOMPRESS(5)
        at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
        at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
        at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
        at org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
        at org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
        at org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
        at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
        at org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
        at org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
        at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
        at org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
        at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
        at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
        at org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
        at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
{noformat}
The exception is raised during DB file download. What is strange is the following: * the exception is raised only when rebuilding from AWS into Rackspace * the exception is raised only when all nodes are up and running in AWS (all 3). In other words, if I bootstrap from one or two nodes in AWS, the command succeeds. 
Packet-level inspection revealed malformed packets _on both ends of communication_ (the packet is considered malformed on the machine it originates on). Further investigation raised two more concerns: * We managed to get another stacktrace when testing the scenario. The exception was raised only once during the tests, when I throttled the inter-datacenter bandwidth to 1Mbps.
{noformat}
java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
        at com.google.common.base.Throwables.propagate(Throwables.java:160)
        at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
        at java.lang.Thread.run(Thread.java:662)
Caused by: javax.net.ssl.SSLException: bad record MAC
        at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1607)
        at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:859)
        at
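The class of bug Yuki's fix addresses can be illustrated abstractly: when streaming a section of a compressed SSTable, the sender must translate *uncompressed* section bounds into *compressed* chunk positions before reading from the file, or it ships the wrong bytes and the receiver fails to decompress. The chunk layout and names below are invented for illustration, not the patch's actual code.

```java
// Hypothetical sketch: chunkOffsets[i] is the compressed file position of the
// chunk holding uncompressed bytes [i*chunkLen, (i+1)*chunkLen). To stream a
// section, look up the chunk's compressed start rather than reusing the
// uncompressed offset directly.
public class CompressedSectionSketch {
    public static long compressedStart(long[] chunkOffsets, int chunkLen, long uncompressedStart) {
        int chunkIndex = (int) (uncompressedStart / chunkLen);
        return chunkOffsets[chunkIndex];
    }

    public static void main(String[] args) {
        long[] offsets = {0L, 40L, 95L, 130L}; // compressed chunks vary in size
        int chunkLen = 64 * 1024;              // 64 KiB of uncompressed data per chunk
        // Uncompressed offset 70000 falls in chunk 1, which starts at
        // compressed position 40 -- not anywhere near byte 70000 of the file.
        System.out.println(compressedStart(offsets, chunkLen, 70000L));
    }
}
```

Reading from the raw uncompressed offset instead of the mapped chunk position is exactly the kind of mismatch that surfaces as FAILED_TO_UNCOMPRESS on the receiving side.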
[jira] [Commented] (CASSANDRA-5381) java.io.EOFException exception while executing nodetool repair with compression enabled
[ https://issues.apache.org/jira/browse/CASSANDRA-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13617038#comment-13617038 ] Yuki Morishita commented on CASSANDRA-5381: --- [~mathijs] you are right, and I attached a fix to CASSANDRA-5391, where this error is also reported. Thanks for reporting. java.io.EOFException exception while executing nodetool repair with compression enabled --- Key: CASSANDRA-5381 URL: https://issues.apache.org/jira/browse/CASSANDRA-5381 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.2.3 Environment: Linux Virtual Machines, Red Hat Enterprise release 6.4, kernel version 2.6.32-358.2.1.el6.x86_64. Each VM has 8GB memory and 4vCPUS. Reporter: Neil Thomson Priority: Minor Very similar to the issue reported in CASSANDRA-5105. I have 3 nodes configured in a cluster. The nodes are configured with compression enabled. When attempting a nodetool repair on one node, I get exceptions on the other nodes in the cluster. Disabling compression on the column family allows nodetool repair to run without error. 
Exception:

INFO [Streaming to /3.69.211.179:2] 2013-03-25 12:30:27,874 StreamReplyVerbHandler.java (line 50) Need to re-stream file /var/lib/cassandra/data/rt/values/rt-values-ib-1-Data.db to /3.69.211.179
INFO [Streaming to /3.69.211.179:2] 2013-03-25 12:30:27,991 StreamReplyVerbHandler.java (line 50) Need to re-stream file /var/lib/cassandra/data/rt/values/rt-values-ib-1-Data.db to /3.69.211.179
ERROR [Streaming to /3.69.211.179:2] 2013-03-25 12:30:28,113 CassandraDaemon.java (line 164) Exception in thread Thread[Streaming to /3.69.211.179:2,5,main]
java.lang.RuntimeException: java.io.EOFException
	at com.google.common.base.Throwables.propagate(Throwables.java:160)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(Unknown Source)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
	at java.lang.Thread.run(Unknown Source)
Caused by: java.io.EOFException
	at java.io.DataInputStream.readInt(Unknown Source)
	at org.apache.cassandra.streaming.FileStreamTask.receiveReply(FileStreamTask.java:193)
	at org.apache.cassandra.streaming.compress.CompressedFileStreamTask.stream(CompressedFileStreamTask.java:114)
	at org.apache.cassandra.streaming.FileStreamTask.runMayThrow(FileStreamTask.java:91)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
	... 3 more

Keyspace configuration is as follows:

Keyspace: rt:
  Replication Strategy: org.apache.cassandra.locator.SimpleStrategy
  Durable Writes: true
  Options: [replication_factor:3]
  Column Families:
    ColumnFamily: tagname
      Key Validation Class: org.apache.cassandra.db.marshal.BytesType
      Default column value validator: org.apache.cassandra.db.marshal.BytesType
      Columns sorted by: org.apache.cassandra.db.marshal.BytesType
      GC grace seconds: 864000
      Compaction min/max thresholds: 4/32
      Read repair chance: 0.1
      DC Local Read repair chance: 0.0
      Populate IO Cache on flush: false
      Replicate on write: true
      Caching: KEYS_ONLY
      Bloom Filter FP chance: default
      Built indexes: []
      Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
    ColumnFamily: values
      Key Validation Class: org.apache.cassandra.db.marshal.BytesType
      Default column value validator: org.apache.cassandra.db.marshal.BytesType
      Columns sorted by: org.apache.cassandra.db.marshal.BytesType
      GC grace seconds: 864000
      Compaction min/max thresholds: 4/32
      Read repair chance: 0.1
      DC Local Read repair chance: 0.0
      Populate IO Cache on flush: false
      Replicate on write: true
      Caching: KEYS_ONLY
      Bloom Filter FP chance: default
      Built indexes: []
      Compaction Strategy: org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators.
For more information on JIRA, see: http://www.atlassian.com/software/jira
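The `Caused by: java.io.EOFException at java.io.DataInputStream.readInt` frame in the trace above is the generic signature of a stream that ended before a full 4-byte value arrived, which is what happens when the peer drops the connection mid-reply. A minimal, self-contained sketch of that failure mode (the class and method names here are invented for illustration; this is not Cassandra's actual code):

```java
import java.io.ByteArrayInputStream;
import java.io.DataInputStream;
import java.io.EOFException;
import java.io.IOException;

public class TruncatedReplyDemo {
    // Reads a 4-byte int "reply" from raw wire bytes, the way a stream
    // handler would read an acknowledgement from a socket. If fewer than
    // 4 bytes remain (connection dropped mid-reply), readInt() throws
    // EOFException rather than returning a partial value.
    static int readReply(byte[] wire) throws IOException {
        DataInputStream in = new DataInputStream(new ByteArrayInputStream(wire));
        return in.readInt();
    }

    public static void main(String[] args) throws IOException {
        // A complete 4-byte reply reads fine.
        System.out.println("complete reply: " + readReply(new byte[]{0, 0, 0, 42}));

        // A reply truncated after 2 bytes surfaces as EOFException.
        try {
            readReply(new byte[]{0, 0});
        } catch (EOFException e) {
            System.out.println("truncated reply: EOFException");
        }
    }
}
```

The takeaway for this report: an EOFException at readInt() says nothing by itself about *why* the peer closed the connection; the root cause is on the other node's side.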
[jira] [Commented] (CASSANDRA-5381) java.io.EOFException exception while executing nodetool repair with compression enabled
[ https://issues.apache.org/jira/browse/CASSANDRA-5381?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13617041#comment-13617041 ]

Yuki Morishita commented on CASSANDRA-5381:
-------------------------------------------

[~neil.thomp...@shepway.gov.uk] Are you also using internode_encryption? If so, this is a duplicate of CASSANDRA-5391.
git commit: more comment update
Updated Branches:
  refs/heads/trunk be2726b39 -> f70339682

more comment update

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/f7033968
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/f7033968
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/f7033968

Branch: refs/heads/trunk
Commit: f70339682524026a3955bf56ac7a0d4f4f1e1114
Parents: be2726b
Author: Yuki Morishita <yu...@apache.org>
Authored: Thu Mar 28 23:28:05 2013 -0500
Committer: Yuki Morishita <yu...@apache.org>
Committed: Thu Mar 28 23:29:06 2013 -0500

----------------------------------------------------------------------
 .../cassandra/service/ActiveRepairService.java | 54 +--
 1 files changed, 18 insertions(+), 36 deletions(-)
----------------------------------------------------------------------

http://git-wip-us.apache.org/repos/asf/cassandra/blob/f7033968/src/java/org/apache/cassandra/service/ActiveRepairService.java
----------------------------------------------------------------------
diff --git a/src/java/org/apache/cassandra/service/ActiveRepairService.java b/src/java/org/apache/cassandra/service/ActiveRepairService.java
index 3c5ba7f..f103771 100644
--- a/src/java/org/apache/cassandra/service/ActiveRepairService.java
+++ b/src/java/org/apache/cassandra/service/ActiveRepairService.java
@@ -51,34 +51,30 @@ import org.apache.cassandra.utils.*;
 /**
  * ActiveRepairService encapsulates validating (hashing) individual column families,
- * exchanging MerkleTrees with remote nodes via a TreeRequest/Response conversation,
+ * exchanging MerkleTrees with remote nodes via a tree request/response conversation,
  * and then triggering repairs for disagreeing ranges.
  *
  * The node where repair was invoked acts as the 'initiator,' where valid trees are sent after generation
- * and where the local and remote tree will rendezvous in rendezvous(cf, endpoint, tree).
+ * and where the local and remote tree will rendezvous in rendezvous().
  * Once the trees rendezvous, a Differencer is executed and the service can trigger repairs
  * for disagreeing ranges.
  *
- * Tree comparison and repair triggering occur in the single threaded Stage.ANTIENTROPY.
+ * Tree comparison and repair triggering occur in the single threaded Stage.ANTI_ENTROPY.
  *
  * The steps taken to enact a repair are as follows:
- * 1. A repair is requested via nodeprobe:
+ * 1. A repair is requested via JMX/nodetool:
  *    * The initiator sends TreeRequest messages to all neighbors of the target node: when a node
  *      receives a TreeRequest, it will perform a validation (read-only) compaction to immediately validate
  *      the column family. This is performed on the CompactionManager ExecutorService.
  * 2. The validation process builds the merkle tree by:
  *    * Calling Validator.prepare(), which samples the column family to determine key distribution,
- *    * Calling Validator.add() in order for every row in the column family,
+ *    * Calling Validator.add() in order for rows in repair range in the column family,
  *    * Calling Validator.complete() to indicate that all rows have been added.
  *      * Calling complete() indicates that a valid MerkleTree has been created for the column family.
  *      * The valid tree is returned to the requesting node via a TreeResponse.
- * 3. When a node receives a TreeResponse, it passes the tree to rendezvous(), which checks for trees to
- *    rendezvous with / compare to:
- *    * If the tree is local, it is cached, and compared to any trees that were received from neighbors.
- *    * If the tree is remote, it is immediately compared to a local tree if one is cached. Otherwise,
- *      the remote tree is stored until a local tree can be generated.
- *    * A Differencer object is enqueued for each comparison.
- * 4. Differencers are executed in Stage.ANTIENTROPY, to compare the two trees, and perform repair via the
+ * 3. When a node receives a tree response, it passes the tree to rendezvous() to see if all responses are
+ *    received. Once the initiator receives all responses, it creates Differencers on every tree pair combination.
+ * 4. Differencers are executed in Stage.ANTI_ENTROPY, to compare the given two trees, and perform repair via the
  *    streaming api.
  */
 public class ActiveRepairService

@@ -110,7 +106,7 @@ public class ActiveRepairService
     private final ConcurrentMap<String, RepairSession> sessions;

     /**
-     * Protected constructor. Use AntiEntropyService.instance.
+     * Protected constructor. Use ActiveRepairService.instance.
      */
     protected ActiveRepairService()
     {

@@ -118,7 +114,7 @@ public class ActiveRepairService
     }

     /**
-     * Requests repairs for the given table and column families, and blocks until all repairs have been completed.
+     * Requests repairs for the given keyspace and column families.
      *
      * @return Future for
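For readers following the updated javadoc: the end state of steps 3–4 is finding token ranges whose Merkle-tree hashes disagree between replicas, which are then repaired by streaming. A toy, flat version of that comparison is sketched below; real MerkleTrees are hierarchical so whole subtrees can be skipped when their root hashes match, and the range labels and hash values here are invented for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class RangeDiffSketch {
    // Toy stand-in for Differencer's tree comparison: each replica reports
    // one hash per token range; ranges whose hashes differ are the ones
    // that need streaming repair.
    static List<String> disagreeingRanges(Map<String, Long> local, Map<String, Long> remote) {
        List<String> diffs = new ArrayList<>();
        for (Map.Entry<String, Long> e : local.entrySet()) {
            Long remoteHash = remote.get(e.getKey());
            if (!e.getValue().equals(remoteHash)) {
                diffs.add(e.getKey()); // this range must be repaired
            }
        }
        return diffs;
    }

    public static void main(String[] args) {
        Map<String, Long> initiator = new TreeMap<>();
        initiator.put("(0,100]", 0xAAAAL);
        initiator.put("(100,200]", 0xBBBBL);

        Map<String, Long> neighbor = new TreeMap<>();
        neighbor.put("(0,100]", 0xAAAAL);
        neighbor.put("(100,200]", 0xCCCCL); // neighbor diverged on this range

        // Prints [(100,200]]: only the divergent range is scheduled for repair.
        System.out.println(disagreeingRanges(initiator, neighbor));
    }
}
```

The point of the hierarchical structure in the real implementation is exactly to avoid this flat O(ranges) scan when most of the data agrees.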
[jira] [Updated] (CASSANDRA-5391) SSL problems with inter-DC communication
[ https://issues.apache.org/jira/browse/CASSANDRA-5391?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jonathan Ellis updated CASSANDRA-5391:
--------------------------------------
    Reviewer: iamaleksey

SSL problems with inter-DC communication
----------------------------------------

Key: CASSANDRA-5391
URL: https://issues.apache.org/jira/browse/CASSANDRA-5391
Project: Cassandra
Issue Type: Bug
Components: Core
Affects Versions: 1.2.3
Environment:
$ /etc/alternatives/jre_1.6.0/bin/java -version
java version "1.6.0_23"
Java(TM) SE Runtime Environment (build 1.6.0_23-b05)
Java HotSpot(TM) 64-Bit Server VM (build 19.0-b09, mixed mode)
$ uname -a
Linux hostname 2.6.32-358.2.1.el6.x86_64 #1 SMP Tue Mar 12 14:18:09 CDT 2013 x86_64 x86_64 x86_64 GNU/Linux
$ cat /etc/redhat-release
Scientific Linux release 6.3 (Carbon)
$ facter | grep ec2
...
ec2_placement = availability_zone=us-east-1d
...
$ rpm -qi cassandra
cassandra-1.2.3-1.el6.cmp1.noarch (custom-built rpm from the cassandra tarball distribution)
Reporter: Ondřej Černoš
Assignee: Yuki Morishita
Priority: Blocker
Fix For: 1.2.4
Attachments: 5391-1.2.txt

I get SSL and snappy compression errors in a multiple-datacenter setup. The setup is simple: 3 nodes in AWS east, 3 nodes in Rackspace. I use a slightly modified Ec2MultiRegionSnitch in Rackspace (I just added a regex able to parse the Rackspace/OpenStack availability zone, which happens to be in an unusual format).
During {{nodetool rebuild}} tests I managed to (consistently) trigger the following error:

{noformat}
2013-03-19 12:42:16.059+0100 [Thread-13] [DEBUG] IncomingTcpConnection.java(79) org.apache.cassandra.net.IncomingTcpConnection: IOException reading from socket; closing
java.io.IOException: FAILED_TO_UNCOMPRESS(5)
	at org.xerial.snappy.SnappyNative.throw_error(SnappyNative.java:78)
	at org.xerial.snappy.SnappyNative.rawUncompress(Native Method)
	at org.xerial.snappy.Snappy.rawUncompress(Snappy.java:391)
	at org.apache.cassandra.io.compress.SnappyCompressor.uncompress(SnappyCompressor.java:93)
	at org.apache.cassandra.streaming.compress.CompressedInputStream.decompress(CompressedInputStream.java:101)
	at org.apache.cassandra.streaming.compress.CompressedInputStream.read(CompressedInputStream.java:79)
	at java.io.DataInputStream.readUnsignedShort(DataInputStream.java:337)
	at org.apache.cassandra.utils.BytesReadTracker.readUnsignedShort(BytesReadTracker.java:140)
	at org.apache.cassandra.utils.ByteBufferUtil.readShortLength(ByteBufferUtil.java:361)
	at org.apache.cassandra.utils.ByteBufferUtil.readWithShortLength(ByteBufferUtil.java:371)
	at org.apache.cassandra.streaming.IncomingStreamReader.streamIn(IncomingStreamReader.java:160)
	at org.apache.cassandra.streaming.IncomingStreamReader.read(IncomingStreamReader.java:122)
	at org.apache.cassandra.net.IncomingTcpConnection.stream(IncomingTcpConnection.java:226)
	at org.apache.cassandra.net.IncomingTcpConnection.handleStream(IncomingTcpConnection.java:166)
	at org.apache.cassandra.net.IncomingTcpConnection.run(IncomingTcpConnection.java:66)
{noformat}

The exception is raised during DB file download. What is strange is the following:
* the exception is raised only when rebuilding from AWS into Rackspace
* the exception is raised only when all nodes are up and running in AWS (all 3). In other words, if I bootstrap from one or two nodes in AWS, the command succeeds.
Packet-level inspection revealed malformed packets _on both ends of communication_ (the packet is considered malformed on the machine it originates on). Further investigation raised two more concerns:
* We managed to get another stack trace when testing the scenario. The exception was raised only once during the tests, when I throttled the inter-datacenter bandwidth to 1 Mbps.

{noformat}
java.lang.RuntimeException: javax.net.ssl.SSLException: bad record MAC
	at com.google.common.base.Throwables.propagate(Throwables.java:160)
	at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:32)
	at java.lang.Thread.run(Thread.java:662)
Caused by: javax.net.ssl.SSLException: bad record MAC
	at com.sun.net.ssl.internal.ssl.Alerts.getSSLException(Alerts.java:190)
	at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1649)
	at com.sun.net.ssl.internal.ssl.SSLSocketImpl.fatal(SSLSocketImpl.java:1607)
	at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:859)
	at com.sun.net.ssl.internal.ssl.SSLSocketImpl.readDataRecord(SSLSocketImpl.java:755)
	at com.sun.net.ssl.internal.ssl.AppInputStream.read(AppInputStream.java:75)
	at
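On why wire corruption surfaces as FAILED_TO_UNCOMPRESS rather than a plain socket error: a compressed payload carries its own internal consistency constraints, so a byte flipped in flight typically fails at decompression time on the receiver. The sketch below illustrates this with java.util.zip as a stdlib stand-in (snappy itself is a third-party dependency, and its error codes differ); all names in this example are invented for illustration:

```java
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class CorruptStreamDemo {
    // Compresses the whole input in one shot (zlib format).
    static byte[] deflateAll(byte[] plain) {
        Deflater def = new Deflater();
        def.setInput(plain);
        def.finish();
        byte[] buf = new byte[plain.length + 64];
        int n = def.deflate(buf);
        def.end();
        return Arrays.copyOf(buf, n);
    }

    // Returns the number of bytes recovered, or -1 if the payload fails to
    // decompress -- the zlib analogue of snappy's FAILED_TO_UNCOMPRESS.
    static int inflateOrFail(byte[] data) {
        Inflater inf = new Inflater();
        inf.setInput(data);
        byte[] out = new byte[4096];
        try {
            return inf.inflate(out);
        } catch (DataFormatException e) {
            return -1;
        } finally {
            inf.end();
        }
    }

    public static void main(String[] args) {
        byte[] plain = "row data row data row data row data".getBytes(StandardCharsets.UTF_8);
        byte[] compressed = deflateAll(plain);

        // Intact payload round-trips to the original byte count.
        System.out.println("intact: " + inflateOrFail(compressed) + " bytes recovered");

        // Flip one byte mid-payload, as a packet corrupted in flight would.
        compressed[compressed.length / 2] ^= 0x55;
        System.out.println("corrupt: " + inflateOrFail(compressed));
    }
}
```

This is why the malformed packets seen in the capture and the decompression/SSL errors in the logs are consistent with a single underlying cause: data being altered somewhere between the sending and receiving process.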