[jira] [Commented] (CASSANDRA-12294) LDAP Authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15395094#comment-15395094 ] Daniel Kleviansky commented on CASSANDRA-12294: --- Apache Directory LDAP API requires including the following jars (the API + its requirements): {quote} antlr-2.7.7.jar api-all-1.0.0-RC1.jar api-asn1-api-1.0.0-RC1.jar api-asn1-ber-1.0.0-RC1.jar api-i18n-1.0.0-RC1.jar api-ldap-client-api-1.0.0-RC1.jar api-ldap-codec-core-1.0.0-RC1.jar api-ldap-extras-aci-1.0.0-RC1.jar api-ldap-extras-codec-1.0.0-RC1.jar api-ldap-extras-codec-api-1.0.0-RC1.jar api-ldap-model-1.0.0-RC1.jar api-ldap-schema-converter-1.0.0-RC1.jar api-ldap-schema-data-1.0.0-RC1.jar api-util-1.0.0-RC1.jar commons-codec-1.10.jar commons-collections-3.2.2.jar commons-lang-2.6.jar commons-pool-1.6.jar log4j-1.2.17.jar mina-core-2.0.13.jar org.apache.servicemix.bundles.antlr-2.7.7_5.jar org.apache.servicemix.bundles.dom4j-1.6.1_5.jar org.apache.servicemix.bundles.xpp3-1.1.4c_7.jar slf4j-api-1.7.16.jar slf4j-log4j12-1.7.16.jar xml-apis-2.0.2.jar {quote} Is this going to cause any issues? > LDAP Authentication > --- > > Key: CASSANDRA-12294 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12294 > Project: Cassandra > Issue Type: New Feature > Components: Distributed Metadata >Reporter: Daniel Kleviansky >Assignee: Daniel Kleviansky >Priority: Minor > Labels: security > Fix For: 2.2.x, 3.x > > > Addition of an LDAP authentication plugin, in tree, along side the default > authenticator, so that Cassandra can leverage existing LDAP-speaking servers > to manage user logins. > DSE offers this: [Enabling LDAP authentication | > https://docs.datastax.com/en/datastax_enterprise/4.6/datastax_enterprise/sec/secLdapEnabling.html], > but does not exist in vanilla C* as far as I can tell. 
> Ideally would like to introduce this as part of the 2.2.x branch, as this is > what is currently running in client production environment, and where it is > needed at the moment. > Would aim for support of at least Microsoft Active Directory running on > Windows Server 2012. > Work in progress: https://github.com/lqid/cassandra — Branch 12294-22 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15395031#comment-15395031 ] Stefania commented on CASSANDRA-11465: -- I've tried following [~mshuler]'s advice on IRC and renamed the branches. They should be running on OpenStack now. ||2.2||3.0||3.9||trunk|| |[patch|https://github.com/stef1927/cassandra/commits/11465-2.2]|[patch|https://github.com/stef1927/cassandra/commits/11465-3.0]|[patch|https://github.com/stef1927/cassandra/commits/11465-3.9]|[patch|https://github.com/stef1927/cassandra/commits/11465]| |[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-2.2-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-3.0-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-3.9-testall/]|[testall|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-testall/]| |[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-2.2-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-3.0-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-3.9-dtest/]|[dtest|http://cassci.datastax.com/view/Dev/view/stef1927/job/stef1927-11465-dtest/]| > dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test > -- > > Key: CASSANDRA-11465 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11465 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Stefania > Labels: dtest > > Failing on the following assert, on trunk only: > {{self.assertEqual(len(errs[0]), 1)}} > Is not failing consistently. > example failure: > http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test > Failed on CassCI build trunk_dtest #1087 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12174) COPY FROM should raise error for non-existing input files
[ https://issues.apache.org/jira/browse/CASSANDRA-12174?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15395024#comment-15395024 ] Stefania commented on CASSANDRA-12174: -- Test results are clean, committed to trunk as a59689ad8101440a92c0d015bac43280460f3382. Thank you for the patch! > COPY FROM should raise error for non-existing input files > - > > Key: CASSANDRA-12174 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12174 > Project: Cassandra > Issue Type: Improvement > Components: Tools >Reporter: Stefan Podkowinski >Assignee: Hiroyuki Nishi >Priority: Minor > Labels: cqlsh, lhf > Fix For: 3.10 > > Attachments: CASSANDRA-12174-trunk.patch > > > Currently the CSV COPY FROM command will not raise any error for non-existing > paths. Instead only "0 rows imported" will be shown as result. > As the COPY FROM command is often used for tutorials and getting started > guides, I'd suggest to give a clear error message in case of a missing input > file. Without such error it can be confusing for the user to see the command > actually finish, without any clues why no rows have been imported. > {noformat} > CREATE KEYSPACE test > WITH REPLICATION = { 'class' : 'NetworkTopologyStrategy', 'datacenter1' : 1 > }; > USE test; > CREATE TABLE airplanes ( > name text PRIMARY KEY, > manufacturer ascii, > year int, > mach float > ); > COPY airplanes (name, manufacturer, year, mach) FROM '/tmp/1234-doesnotexist'; > Using 3 child processes > Starting copy of test.airplanes with columns [name, manufacturer, year, mach]. > Processed: 0 rows; Rate: 0 rows/s; Avg. rate: 0 rows/s > 0 rows imported from 0 files in 0.216 seconds (0 skipped). > {noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12174) COPY FROM should raise error for non-existing input files
[ https://issues.apache.org/jira/browse/CASSANDRA-12174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefania updated CASSANDRA-12174: - Labels: cqlsh lhf (was: lhf) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12174) COPY FROM should raise error for non-existing input files
[ https://issues.apache.org/jira/browse/CASSANDRA-12174?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Stefania updated CASSANDRA-12174: - Resolution: Fixed Fix Version/s: 3.10 Status: Resolved (was: Patch Available) -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: COPY FROM should raise error for non-existing input files
Repository: cassandra
Updated Branches: refs/heads/trunk 6ca39ea42 -> a59689ad8

COPY FROM should raise error for non-existing input files

patch by Hiroyuki Nishi; reviewed by Stefania Alborghetti for CASSANDRA-12174

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/a59689ad
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/a59689ad
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/a59689ad

Branch: refs/heads/trunk
Commit: a59689ad8101440a92c0d015bac43280460f3382
Parents: 6ca39ea
Author: Hiroyuki Nishi
Authored: Tue Jul 26 09:50:15 2016 +0800
Committer: Stefania Alborghetti
Committed: Wed Jul 27 12:27:44 2016 +0800

--
 CHANGES.txt                | 1 +
 pylib/cqlshlib/copyutil.py | 15 +++
 2 files changed, 12 insertions(+), 4 deletions(-)
--

diff --git a/CHANGES.txt b/CHANGES.txt
index 95fbf76..52f8ccf 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,4 +1,5 @@
 3.10
+ * COPY FROM should raise error for non-existing input files (CASSANDRA-12174)
  * Faster write path (CASSANDRA-12269)
  * Option to leave omitted columns in INSERT JSON unset (CASSANDRA-11424)
  * Support json/yaml output in nodetool tpstats (CASSANDRA-12035)

diff --git a/pylib/cqlshlib/copyutil.py b/pylib/cqlshlib/copyutil.py
index d0524fe..94e8fe6 100644
--- a/pylib/cqlshlib/copyutil.py
+++ b/pylib/cqlshlib/copyutil.py
@@ -848,15 +848,18 @@ class FilesReader(object):
         try:
             return open(fname, 'rb')
         except IOError, e:
-            printdebugmsg("Can't open %r for reading: %s" % (fname, e))
-            return None
+            raise IOError("Can't open %r for reading: %s" % (fname, e))

         for path in paths.split(','):
             path = path.strip()
             if os.path.isfile(path):
                 yield make_source(path)
             else:
-                for f in glob.glob(path):
+                result = glob.glob(path)
+                if len(result) == 0:
+                    raise IOError("Can't open %r for reading: no matching file found" % (path,))
+
+                for f in result:
                     yield (make_source(f))

     def start(self):
@@ -1269,7 +1272,11 @@ class FeedingProcess(mp.Process):
         self.on_fork()

         reader = self.reader
-        reader.start()
+        try:
+            reader.start()
+        except IOError, exc:
+            self.outmsg.send(ImportTaskError(exc.__class__.__name__, exc.message))
+
         channels = self.worker_channels
         max_pending_chunks = self.max_pending_chunks
         sent = 0
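The committed patch boils down to one small pattern: expand each comma-separated path, and raise instead of silently yielding nothing when a path or glob matches no file. A standalone sketch of that pattern (the `iter_input_files` helper name is hypothetical, not the actual cqlsh code):

```python
import glob
import os


def iter_input_files(paths):
    """Yield file names for a comma-separated list of paths/globs.

    Raises IOError for any path that matches nothing, which is the behaviour
    the CASSANDRA-12174 patch adds to cqlsh's COPY FROM (instead of the old
    silent "0 rows imported").
    """
    for path in (p.strip() for p in paths.split(',')):
        if os.path.isfile(path):
            yield path
        else:
            matches = glob.glob(path)
            if not matches:
                raise IOError(
                    "Can't open %r for reading: no matching file found" % (path,))
            for f in matches:
                yield f
```

With this in place, `list(iter_input_files('/tmp/1234-doesnotexist'))` fails fast with a clear error instead of completing with zero rows.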
[jira] [Issue Comment Deleted] (CASSANDRA-12309) Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
[ https://issues.apache.org/jira/browse/CASSANDRA-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeff Jirsa updated CASSANDRA-12309: --- Comment: was deleted (was: Cassandra allows dynamic class loading in many places, and it's generally considered a feature. Various examples include Seed providers, Authenticator, Authorizer, Compaction Strategies, Partitioners, Snitches, Secondary Index, and Replication Strategies. Classifying this as a bug is probably inappropriate - there may be an environment where such a feature is unwanted, but it's very much intentional in its current form. ) > Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select > Classes or Code > -- > > Key: CASSANDRA-12309 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12309 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > Dynamically loaded code has the potential to be malicious. The application > uses external input to select which classes or code to use, but it does not > sufficiently prevent the input from selecting improper classes or code. > The snippet below shows the issue on line 588 and the method returns a new > instance on line 594 or 598. 
> CqlConfigHelper.java, lines 584-605: > {code:java} > 584 private static AuthProvider getClientAuthProvider(String > factoryClassName, Configuration conf) > 585 { > 586 try > 587 { > 588 Class c = Class.forName(factoryClassName); > 589 if (PlainTextAuthProvider.class.equals(c)) > 590 { > 591 String username = getStringSetting(USERNAME, conf).or(""); > 592 String password = getStringSetting(PASSWORD, conf).or(""); > 593 return (AuthProvider) c.getConstructor(String.class, > String.class) > 594 .newInstance(username, password); > 595 } > 596 else > 597 { > 598 return (AuthProvider) c.newInstance(); > 599 } > 600 } > 601 catch (Exception e) > 602 { > 603 throw new RuntimeException("Failed to instantiate auth provider:" > + factoryClassName, e); > 604 } > 605 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12309) Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
[ https://issues.apache.org/jira/browse/CASSANDRA-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15395015#comment-15395015 ] Jeff Jirsa commented on CASSANDRA-12309: Cassandra allows dynamic class loading in many places, and it's generally considered a feature. Various examples include Seed providers, Authenticator, Authorizer, Compaction Strategies, Partitioners, Snitches, Secondary Index, and Replication Strategies. Classifying this as a bug is probably inappropriate - there may be an environment where such a feature is unwanted, but it's very much intentional in its current form. > Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select > Classes or Code > -- > > Key: CASSANDRA-12309 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12309 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > Dynamically loaded code has the potential to be malicious. The application > uses external input to select which classes or code to use, but it does not > sufficiently prevent the input from selecting improper classes or code. > The snippet below shows the issue on line 588 and the method returns a new > instance on line 594 or 598. 
> CqlConfigHelper.java, lines 584-605: > {code:java} > 584 private static AuthProvider getClientAuthProvider(String > factoryClassName, Configuration conf) > 585 { > 586 try > 587 { > 588 Class c = Class.forName(factoryClassName); > 589 if (PlainTextAuthProvider.class.equals(c)) > 590 { > 591 String username = getStringSetting(USERNAME, conf).or(""); > 592 String password = getStringSetting(PASSWORD, conf).or(""); > 593 return (AuthProvider) c.getConstructor(String.class, > String.class) > 594 .newInstance(username, password); > 595 } > 596 else > 597 { > 598 return (AuthProvider) c.newInstance(); > 599 } > 600 } > 601 catch (Exception e) > 602 { > 603 throw new RuntimeException("Failed to instantiate auth provider:" > + factoryClassName, e); > 604 } > 605 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
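The snippet quoted above is Java, but the standard mitigation for externally controlled class selection (CWE-470) is language-agnostic: validate the supplied name against an explicit allow-list before loading anything. A minimal Python sketch of that idea; the `ALLOWED_PROVIDERS` set and `load_provider` helper are hypothetical illustrations, not Cassandra code:

```python
import importlib

# Hypothetical allow-list; a real deployment would enumerate its own
# approved plugin classes here.
ALLOWED_PROVIDERS = {
    "json.JSONEncoder",
    "json.JSONDecoder",
}


def load_provider(qualified_name):
    """Dynamically load a class by dotted name, but only if it appears on an
    explicit allow-list. Anything else is rejected before any code loads."""
    if qualified_name not in ALLOWED_PROVIDERS:
        raise ValueError("provider %r is not on the allow-list" % qualified_name)
    module_name, _, class_name = qualified_name.rpartition(".")
    module = importlib.import_module(module_name)
    return getattr(module, class_name)
```

As the comment above notes, Cassandra's pluggability is intentional, so whether an allow-list is appropriate depends on the deployment's threat model.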
[jira] [Updated] (CASSANDRA-12151) Audit logging for database activity
[ https://issues.apache.org/jira/browse/CASSANDRA-12151?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] ZhaoYang updated CASSANDRA-12151: - Status: Patch Available (was: Open) > Audit logging for database activity > --- > > Key: CASSANDRA-12151 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12151 > Project: Cassandra > Issue Type: New Feature >Reporter: stefan setyadi > Fix For: 3.x > > Attachments: 12151.txt > > > we would like a way to enable cassandra to log database activity being done > on our server. > It should show username, remote address, timestamp, action type, keyspace, > column family, and the query statement. > it should also be able to log connection attempt and changes to the > user/roles. > I was thinking of making a new keyspace and insert an entry for every > activity that occurs. > Then It would be possible to query for specific activity or a query targeting > a specific keyspace and column family. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
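The proposal above (a dedicated keyspace with one row inserted per activity) can be sketched as follows. The `audit.activity` schema and the helper are hypothetical illustrations of the fields the ticket asks for, not part of the attached patch:

```python
import datetime

# Hypothetical audit table, partitioned by day so activity for a given day
# (or for a keyspace/table via a secondary filter) can be queried back.
AUDIT_DDL = """
CREATE TABLE IF NOT EXISTS audit.activity (
    day           date,
    ts            timestamp,
    username      text,
    remote_ip     text,
    action        text,
    keyspace_name text,
    table_name    text,
    statement     text,
    PRIMARY KEY ((day), ts, username)
)
"""


def make_audit_record(username, remote_ip, action, keyspace, table, statement,
                      now=None):
    """Build one audit entry capturing the fields listed in the request:
    username, remote address, timestamp, action type, keyspace, column
    family, and the query statement."""
    now = now or datetime.datetime.utcnow()
    return {
        "day": now.date().isoformat(),
        "ts": now.isoformat(),
        "username": username,
        "remote_ip": remote_ip,
        "action": action,
        "keyspace_name": keyspace,
        "table_name": table,
        "statement": statement,
    }
```

Connection attempts and role changes would be logged the same way, with `action` values such as `LOGIN` or `ALTER ROLE`.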
[jira] [Commented] (CASSANDRA-11465) dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test
[ https://issues.apache.org/jira/browse/CASSANDRA-11465?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394971#comment-15394971 ] Paulo Motta commented on CASSANDRA-11465: - Patch and multiplexer results look good, but for some reason dtest doesn't seem to be running; I tried resubmitting a few times without success. I wasn't able to find out the root cause from a quick glance at the logs. Could you maybe try rebasing the dtest repo? > dtest failure in cql_tracing_test.TestCqlTracing.tracing_unknown_impl_test > -- > > Key: CASSANDRA-11465 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11465 > Project: Cassandra > Issue Type: Bug >Reporter: Philip Thompson >Assignee: Stefania > Labels: dtest > > Failing on the following assert, on trunk only: > {{self.assertEqual(len(errs[0]), 1)}} > Is not failing consistently. > example failure: > http://cassci.datastax.com/job/trunk_dtest/1087/testReport/cql_tracing_test/TestCqlTracing/tracing_unknown_impl_test > Failed on CassCI build trunk_dtest #1087 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12310) Use of getByName() to retrieve IP address
[ https://issues.apache.org/jira/browse/CASSANDRA-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394855#comment-15394855 ] sankalp kohli commented on CASSANDRA-12310: --- reverse DNS lookup might not be available in all environments. > Use of getByName() to retrieve IP address > - > > Key: CASSANDRA-12310 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12310 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > There are many places in the Cassandra source code that rely upon a call to > getByName() to retrieve an IP address. The information returned by > getByName() is not trustworthy. Attackers can spoof DNS entries and depending > on getByName alone invites DNS spoofing attacks. > This is an example from the file DatabaseDescriptor.java where there are > examples of the use of getByName() on line 193, 213, 233, 254, 947 and 949. > {code:java} > DatabaseDescriptor.java, lines 231-238: > 231 try > 232 { > 233 rpcAddress = InetAddress.getByName(config.rpc_address); > 234 } > 235 catch (UnknownHostException e) > 236 { > 237 throw new ConfigurationException("Unknown host in rpc_address " + > config.rpc_address, false); > 238 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
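One common hardening for this class of finding is to treat security-sensitive settings such as `rpc_address` as literal IP addresses and never resolve host names at all, so a spoofed DNS answer cannot change the result. A Python sketch of that idea using the standard `ipaddress` module, as an illustration only, not Cassandra's actual configuration code:

```python
import ipaddress


def parse_literal_address(value):
    """Parse a configuration value as a literal IP address.

    Unlike InetAddress.getByName() (or socket.getaddrinfo()), this performs
    no DNS lookup: host names are rejected outright, so the result cannot be
    influenced by spoofed DNS entries.
    """
    try:
        return ipaddress.ip_address(value)
    except ValueError:
        raise ValueError(
            "address must be a literal IP, got %r" % (value,))
```

As sankalp kohli notes above, environments differ (reverse DNS may be unavailable), which is one more argument for accepting literals where possible.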
[jira] [Updated] (CASSANDRA-12294) LDAP Authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Daniel Kleviansky updated CASSANDRA-12294: -- Description: Addition of an LDAP authentication plugin, in tree, along side the default authenticator, so that Cassandra can leverage existing LDAP-speaking servers to manage user logins. DSE offers this: [Enabling LDAP authentication | https://docs.datastax.com/en/datastax_enterprise/4.6/datastax_enterprise/sec/secLdapEnabling.html], but does not exist in vanilla C* as far as I can tell. Ideally would like to introduce this as part of the 2.2.x branch, as this is what is currently running in client production environment, and where it is needed at the moment. Would aim for support of at least Microsoft Active Directory running on Windows Server 2012. Work in progress: https://github.com/lqid/cassandra — Branch 12294-22 was: Addition of an LDAP authentication plugin, in tree, along side the default authenticator, so that Cassandra can leverage existing LDAP-speaking servers to manage user logins. DSE offers this: [Enabling LDAP authentication | https://docs.datastax.com/en/datastax_enterprise/4.6/datastax_enterprise/sec/secLdapEnabling.html], but does not exist in vanilla C* as far as I can tell. Ideally would like to introduce this as part of the 2.2.x branch, as this is what is currently running in client production environment, and where it is needed at the moment. Would aim for support of at least Microsoft Active Directory running on Windows Server 2012. 
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11866) nodetool repair does not obey the column family parameter when -st and -et are provided (subrange repair)
[ https://issues.apache.org/jira/browse/CASSANDRA-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394853#comment-15394853 ] sankalp kohli commented on CASSANDRA-11866: --- Make sure we only repair the CFs provided. > nodetool repair does not obey the column family parameter when -st and -et > are provided (subrange repair) > - > > Key: CASSANDRA-11866 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11866 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: Red Hat Enterprise Linux Server release 6.7 (Santiago) > x86_64 >Reporter: Shiva Venkateswaran > Labels: newbie > Fix For: 2.1.x > > Attachments: 11866-2.1.txt > > > Command 1: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the > parameter AssetModifyTimes_data used to restrict the CFs > Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h > localhost -p 7199 -u user-pw ** repair ADL_GLOBAL AssetModifyTimes_data > -st 205279477618143669 -et 230991685737746901 -par > [2016-05-20 17:31:39,116] Starting repair command #9, repairing 1 ranges for > keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true) > [2016-05-20 17:32:21,568] Repair session 3cae2530-1ed2-11e6-b490-d9df6932c7cf > for range (205279477618143669,230991685737746901] finished > Command 2: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the > parameter AssetModifyTimes_data used to restrict the CFs > Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h > localhost -p 7199 -u controlRole -pw ** repair -st 205279477618143669 -et > 230991685737746901 -par -- ADL_GLOBAL AssetModifyTimes_data > [2016-05-20 17:36:34,473] Starting repair command #10, repairing 1 ranges for > keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true) > [2016-05-20 17:37:15,365] Repair session ecb996d0-1ed2-11e6-b490-d9df6932c7cf > for range (205279477618143669,230991685737746901] finished > [2016-05-20 17:37:15,365] Repair command #10 finished > Command 3: Repairs only the CF 
ADL3Test1_data in keyspace ADL_GLOBAL > Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h > localhost -p 7199 -u controlRole -pw ** repair -- ADL_GLOBAL > ADL3Test1_data > [2016-05-20 17:38:35,781] Starting repair command #11, repairing 1043 ranges > for keyspace ADL_GLOBAL (parallelism=SEQUENTIAL, full=true) > [2016-05-20 17:42:32,682] Repair session 3c8af050-1ed3-11e6-b490-d9df6932c7cf > for range (6241639152751626129,6241693909092643958] finished > [2016-05-20 17:42:32,683] Repair session 3caf1a20-1ed3-11e6-b490-d9df6932c7cf > for range (-7096993048358106082,-7095000706885780850] finished > [2016-05-20 17:42:32,683] Repair session 3ccfc180-1ed3-11e6-b490-d9df6932c7cf > for range (-7218939248114487080,-7218289345961492809] finished > [2016-05-20 17:42:32,683] Repair session 3cf21690-1ed3-11e6-b490-d9df6932c7cf > for range (-5244794756638190874,-5190307341355030282] finished > [2016-05-20 17:42:32,683] Repair session 3d126fd0-1ed3-11e6-b490-d9df6932c7cf > for range (3551629701277971766,321736534916502] finished > [2016-05-20 17:42:32,683] Repair session 3d32f020-1ed3-11e6-b490-d9df6932c7cf > for range (-8139355591560661944,-8127928369093576603] finished > [2016-05-20 17:42:32,683] Repair session 3d537070-1ed3-11e6-b490-d9df6932c7cf > for range (7098010153980465751,7100863011896759020] finished > [2016-05-20 17:42:32,683] Repair session 3d73f0c0-1ed3-11e6-b490-d9df6932c7cf > for range (1004538726866173536,1008586133746764703] finished > [2016-05-20 17:42:32,683] Repair session 3d947110-1ed3-11e6-b490-d9df6932c7cf > for range (5770817093573726645,5771418910784831587] finished > . > . > . > [2016-05-20 17:42:32,732] Repair command #11 finished -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables
[ https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394847#comment-15394847 ] sankalp kohli commented on CASSANDRA-12127: --- We have added an internal patch for this, but it would be nice if we can get this one; we can wait till next week. > Queries with empty ByteBuffer values in clustering column restrictions fail > for non-composite compact tables > > > Key: CASSANDRA-12127 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12127 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 12127.txt > > > For the following table: > {code} > CREATE TABLE myTable (pk int, > c blob, > value int, > PRIMARY KEY (pk, c)) WITH COMPACT STORAGE; > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1); > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2); > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > Will result in the following Exception: > {code} > java.lang.ClassCastException: > org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast > to org.apache.cassandra.db.composites.CellName > at > org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188) > at > org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125) > at > org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206) > at > org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214) > at > org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292) > at >
org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259) > at > org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299) > [...] > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}} > Will return 2 rows instead of 0. > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} > {code} > java.lang.AssertionError > at > org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60) > at > org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853) > at > org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583) > at > org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383) > at > org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253) > [...] > {code} > I checked 2.0 and {{SELECT * FROM myTable WHERE pk = 1 AND c > > textAsBlob('');}} works properly but {{SELECT * FROM myTable WHERE pk = 1 AND > c < textAsBlob('');}} return the same wrong results than in 2.1. > The {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is > rejected if a clear error message: {{Invalid empty value for clustering > column of COMPACT TABLE}}. > As it is not possible to insert an empty ByteBuffer value within the > clustering column of a non-composite compact tables those queries do not > have a lot of meaning. {{SELECT * FROM myTable WHERE pk = 1 AND c < > textAsBlob('');}} and {{SELECT * FROM myTable WHERE pk = 1 AND c = > textAsBlob('');}} will return nothing > and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} will > return the entire partition (pk = 1). 
> In my opinion those queries should probably all be rejected as it seems that > the fact that {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > was accepted in {{2.0}} was due to a bug. > I am of course open to discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[1/4] cassandra git commit: Update build.xml and CHANGES.txt for 3.8
Repository: cassandra
Updated Branches:
  refs/heads/trunk c4c9b0570 -> 6ca39ea42

Update build.xml and CHANGES.txt for 3.8

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c3ded055
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c3ded055
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c3ded055

Branch: refs/heads/trunk
Commit: c3ded0551f538f7845602b27d53240cd8129265c
Parents: 2aa7663
Author: Aleksey Yeschenko
Authored: Mon Jul 18 16:47:52 2016 +0100
Committer: Aleksey Yeschenko
Committed: Mon Jul 18 16:47:52 2016 +0100

--
 CHANGES.txt | 48 +---
 build.xml   |  2 +-
 2 files changed, 22 insertions(+), 28 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c3ded055/CHANGES.txt
--
diff --git a/CHANGES.txt b/CHANGES.txt
index 3307fb3..4330fde 100644
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@ -1,33 +1,7 @@
-3.9
+3.8
 * Fix hdr logging for single operation workloads (CASSANDRA-12145)
 * Fix SASI PREFIX search in CONTAINS mode with partial terms (CASSANDRA-12073)
 * Increase size of flushExecutor thread pool (CASSANDRA-12071)
-Merged from 3.0:
- * Fix paging logic for deleted partitions with static columns (CASSANDRA-12107)
- * Wait until the message is being send to decide which serializer must be used (CASSANDRA-11393)
- * Fix migration of static thrift column names with non-text comparators (CASSANDRA-12147)
- * Fix upgrading sparse tables that are incorrectly marked as dense (CASSANDRA-11315)
- * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
- * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
- * Avoid missing sstables when getting the canonical sstables (CASSANDRA-11996)
- * Always select the live sstables when getting sstables in bounds (CASSANDRA-11944)
- * Fix column ordering of results with static columns for Thrift requests in
-   a mixed 2.x/3.x cluster, also fix potential non-resolved duplication of
-   those static columns in query results (CASSANDRA-12123)
- * Avoid digest mismatch with empty but static rows (CASSANDRA-12090)
- * Fix EOF exception when altering column type (CASSANDRA-11820)
-Merged from 2.2:
- * Synchronize ThriftServer::stop() (CASSANDRA-12105)
- * Use dedicated thread for JMX notifications (CASSANDRA-12146)
- * Improve streaming synchronization and fault tolerance (CASSANDRA-11414)
- * MemoryUtil.getShort() should return an unsigned short also for architectures not supporting unaligned memory accesses (CASSANDRA-11973)
-Merged from 2.1:
- * Fix filtering on clustering columns when 2i is used (CASSANDRA-11907)
- * Avoid stalling paxos when the paxos state expires (CASSANDRA-12043)
- * Remove finished incoming streaming connections from MessagingService (CASSANDRA-11854)
-
-
-3.8
 * Partial revert of CASSANDRA-11971, cannot recycle buffer in SP.sendMessagesToNonlocalDC (CASSANDRA-11950)
 * Upgrade netty to 4.0.39 (CASSANDRA-12032, CASSANDRA-12034)
 * Improve details in compaction log message (CASSANDRA-12080)
@@ -53,18 +27,38 @@ Merged from 2.1:
 * Add repaired percentage metric (CASSANDRA-11503)
 * Add Change-Data-Capture (CASSANDRA-8844)
 Merged from 3.0:
+ * Fix paging logic for deleted partitions with static columns (CASSANDRA-12107)
+ * Wait until the message is being send to decide which serializer must be used (CASSANDRA-11393)
+ * Fix migration of static thrift column names with non-text comparators (CASSANDRA-12147)
+ * Fix upgrading sparse tables that are incorrectly marked as dense (CASSANDRA-11315)
+ * Fix reverse queries ignoring range tombstones (CASSANDRA-11733)
+ * Avoid potential race when rebuilding CFMetaData (CASSANDRA-12098)
+ * Avoid missing sstables when getting the canonical sstables (CASSANDRA-11996)
+ * Always select the live sstables when getting sstables in bounds (CASSANDRA-11944)
+ * Fix column ordering of results with static columns for Thrift requests in
+   a mixed 2.x/3.x cluster, also fix potential non-resolved duplication of
+   those static columns in query results (CASSANDRA-12123)
+ * Avoid digest mismatch with empty but static rows (CASSANDRA-12090)
+ * Fix EOF exception when altering column type (CASSANDRA-11820)
 * cqlsh: fix error handling in rare COPY FROM failure scenario (CASSANDRA-12070)
 * Disable autocompaction during drain (CASSANDRA-11878)
 * Add a metrics timer to MemtablePool and use it to track time spent blocked on memory in MemtableAllocator (CASSANDRA-11327)
 * Fix upgrading schema with super columns with non-text subcomparators (CASSANDRA-12023)
 * Add TimeWindowCompactionStrategy (CASSANDRA-9666)
 Merged from 2.2:
+ * Synchronize ThriftServer::stop()
[4/4] cassandra git commit: Merge branch 'cassandra-3.9' into trunk
Merge branch 'cassandra-3.9' into trunk

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6ca39ea4
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6ca39ea4
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6ca39ea4

Branch: refs/heads/trunk
Commit: 6ca39ea42471ce1f4a36d922c366a986dc4f1d34
Parents: c4c9b05 aa64e65
Author: Dave Brosius
Authored: Tue Jul 26 20:22:07 2016 -0400
Committer: Dave Brosius
Committed: Tue Jul 26 20:22:07 2016 -0400

--
 CHANGES.txt | 1 -
 1 file changed, 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/6ca39ea4/CHANGES.txt
--
[2/4] cassandra git commit: c* uses commons-lang3, not commons-lang
c* uses commons-lang3, not commons-lang

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b27e2f93
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b27e2f93
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b27e2f93

Branch: refs/heads/trunk
Commit: b27e2f93cc9bc33a95d531f43442b93e85ba4a30
Parents: c3ded05
Author: Dave Brosius
Authored: Mon Jul 4 17:23:46 2016 -0400
Committer: Dave Brosius
Committed: Tue Jul 26 20:04:42 2016 -0400

--
 src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b27e2f93/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java b/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java
index a914cc9..6acbd0d 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java
@@ -23,7 +23,7 @@ import java.util.concurrent.atomic.AtomicInteger;
 import java.util.zip.CRC32;

 import com.google.common.annotations.VisibleForTesting;
-import org.apache.commons.lang.StringUtils;
+import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
[3/4] cassandra git commit: Merge branch 'cassandra-3.8' into cassandra-3.9
Merge branch 'cassandra-3.8' into cassandra-3.9

Conflicts:
	CHANGES.txt

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/aa64e65e
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/aa64e65e
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/aa64e65e

Branch: refs/heads/trunk
Commit: aa64e65e1e2f4a749454db578f60560b6d69ae0f
Parents: 5fe02b3 b27e2f9
Author: Dave Brosius
Authored: Tue Jul 26 20:20:52 2016 -0400
Committer: Dave Brosius
Committed: Tue Jul 26 20:20:52 2016 -0400

--
 CHANGES.txt                                                     | 1 -
 src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java | 2 +-
 2 files changed, 1 insertion(+), 2 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/aa64e65e/CHANGES.txt
--
diff --cc CHANGES.txt
index 50f7a6d,4330fde..6bba5b1
--- a/CHANGES.txt
+++ b/CHANGES.txt
@@@ -1,21 -1,3 +1,20 @@@
 +3.9
 + * cqlsh: Fix handling of $$-escaped strings (CASSANDRA-12189)
 + * Fix SSL JMX requiring truststore containing server cert (CASSANDRA-12109)
 +Merged from 3.0:
 + * Disable RR and speculative retry with EACH_QUORUM reads (CASSANDRA-11980)
 + * Add option to override compaction space check (CASSANDRA-12180)
 + * Faster startup by only scanning each directory for temporary files once (CASSANDRA-12114)
 + * Respond with v1/v2 protocol header when responding to driver that attempts
 +   to connect with too low of a protocol version (CASSANDRA-11464)
 + * NullPointerExpception when reading/compacting table (CASSANDRA-11988)
 + * Fix problem with undeleteable rows on upgrade to new sstable format (CASSANDRA-12144)
 +Merged from 2.2:
 + * cqlsh copyutil should get host metadata by connected address (CASSANDRA-11979)
 + * Fixed cqlshlib.test.remove_test_db (CASSANDRA-12214)
 +Merged from 2.1:
 + * cannot use cql since upgrading python to 2.7.11+ (CASSANDRA-11850)
 +
- 
  3.8
  * Fix hdr logging for single operation workloads (CASSANDRA-12145)
  * Fix SASI PREFIX search in CONTAINS mode with partial terms (CASSANDRA-12073)
[jira] [Commented] (CASSANDRA-11424) Option to leave omitted columns in INSERT JSON unset
[ https://issues.apache.org/jira/browse/CASSANDRA-11424?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394826#comment-15394826 ] Joshua Galbraith commented on CASSANDRA-11424: -- We just ran into this issue recently with 2.2.6. Is there any chance this option could make it into a future 2.2.x release? > Option to leave omitted columns in INSERT JSON unset > > > Key: CASSANDRA-11424 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11424 > Project: Cassandra > Issue Type: Improvement >Reporter: Ralf Steppacher >Assignee: Oded Peer > Labels: client-impacting, cql > Fix For: 3.10 > > Attachments: 11424-trunk-V1.txt, 11424-trunk-V2.txt, > 11424-trunk-V3.txt > > > CASSANDRA-7304 introduced the ability to distinguish between {{NULL}} and > {{UNSET}} prepared statement parameters. > When inserting JSON objects it is not possible to profit from this, as a > prepared statement only has one parameter that is bound to the JSON object as > a whole. There is no way to control {{NULL}} vs {{UNSET}} behavior for > columns omitted from the JSON object. > Please extend on CASSANDRA-7304 to include JSON support. > {color:grey} > (My personal requirement is to be able to insert JSON objects with optional > fields without incurring the overhead of creating a tombstone for every column > not covered by the JSON object upon initial(!) insert.) > {color} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
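The NULL-vs-UNSET distinction the ticket asks for can be modeled with a small sketch. This is a hypothetical illustration of the semantics only, not Cassandra's storage engine or any driver API: an omitted column either becomes null (which writes a tombstone) or is left unset (any existing value survives).

```python
# Hypothetical model of INSERT JSON semantics; `apply_insert_json` and its
# parameters are illustrative names, not Cassandra code.
def apply_insert_json(row, json_fields, all_columns, omitted="null"):
    new = dict(row)
    for col in all_columns:
        if col in json_fields:
            new[col] = json_fields[col]
        elif omitted == "null":
            new[col] = None  # overwrite: comparable to writing a tombstone
        # omitted == "unset": leave any existing value untouched
    return new

row = {"id": 1, "a": "x", "b": "y"}
cols = ["id", "a", "b"]

# Current behavior: the omitted column b is nulled out (tombstoned).
assert apply_insert_json(row, {"id": 1, "a": "z"}, cols, "null") == {"id": 1, "a": "z", "b": None}
# Requested option: the omitted column b keeps its previous value.
assert apply_insert_json(row, {"id": 1, "a": "z"}, cols, "unset") == {"id": 1, "a": "z", "b": "y"}
```

The eventual CQL-level fix (Fix For 3.10 per the ticket) lets the client choose between the two behaviors; the sketch above only illustrates why the choice matters for tombstone creation.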
cassandra git commit: c* uses commons-lang3, not commons-lang
Repository: cassandra
Updated Branches:
  refs/heads/cassandra-3.8 c3ded0551 -> b27e2f93c

c* uses commons-lang3, not commons-lang

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/b27e2f93
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/b27e2f93
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/b27e2f93

Branch: refs/heads/cassandra-3.8
Commit: b27e2f93cc9bc33a95d531f43442b93e85ba4a30
Parents: c3ded05
Author: Dave Brosius
Authored: Mon Jul 4 17:23:46 2016 -0400
Committer: Dave Brosius
Committed: Tue Jul 26 20:04:42 2016 -0400

--
 src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/b27e2f93/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java
--
diff --git a/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java b/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java
index a914cc9..6acbd0d 100644
--- a/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java
+++ b/src/java/org/apache/cassandra/db/commitlog/CommitLogReader.java
@@ -23,7 +23,7 @@ import java.util.concurrent.atomic.AtomicInteger;
 import java.util.zip.CRC32;

 import com.google.common.annotations.VisibleForTesting;
-import org.apache.commons.lang.StringUtils;
+import org.apache.commons.lang3.StringUtils;
 import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
[jira] [Commented] (CASSANDRA-12300) Disallow unset memtable_cleanup_threshold when flush writers is set
[ https://issues.apache.org/jira/browse/CASSANDRA-12300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394809#comment-15394809 ] Ariel Weisberg commented on CASSANDRA-12300: I agree that advice is awful and needs to DIAF due to what changing flush writers does to MCT. I think setting MCT based on flush writers makes sense though, and I would even say that it seems like MCT should not be an option at all, since the correct value can be derived from knowing the total memory available for memtables and the # of flush writers. I'm not sure your formula makes sense, since # of tables shouldn't impact flush throughput. Necessary flush throughput is a function of the # of megabytes/second you need to flush, which is a function of ingest speed. Ingest performance to a point should be the same whether you have one table or a heaping handful. I don't think # of cores even really enters into it, since more flush threads than you need necessarily implies flushing tables sooner than you otherwise could if you waited and let them grow a bit bigger. Maybe practical experience is different. I never benchmark writing to multiple tables, so I am going off of my own theory here. > Disallow unset memtable_cleanup_threshold when flush writers is set > --- > > Key: CASSANDRA-12300 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12300 > Project: Cassandra > Issue Type: Improvement >Reporter: Brandon Williams > > Many times I see flush writers set, and mct unset, leading to a very small > mct, which causes unneeded frequent flushing, and then of course compaction. > I also think the default is a bit conservative, typically ending up at 0.11, > where I'd say the majority of use cases only have one or two hot tables and > are much better served at 0.7 or 0.8. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
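For concreteness, the derivation alluded to above matches how Cassandra already defaults the setting: when memtable_cleanup_threshold is unset, cassandra.yaml documents it as 1 / (memtable_flush_writers + 1). A sketch of that arithmetic and the resulting per-flush memtable size (illustrative Python, not Cassandra code; function names are mine):

```python
# cassandra.yaml documents the default for an unset memtable_cleanup_threshold
# as 1 / (memtable_flush_writers + 1).
def default_cleanup_threshold(flush_writers):
    return 1.0 / (flush_writers + 1)

def flush_trigger_mb(total_memtable_space_mb, flush_writers):
    # A flush of the largest memtable is triggered once total memtable usage
    # exceeds threshold * total memtable space.
    return total_memtable_space_mb * default_cleanup_threshold(flush_writers)

assert round(default_cleanup_threshold(8), 2) == 0.11  # the 0.11 the ticket mentions
assert 227 < flush_trigger_mb(2048, 8) < 228           # ~227 MB per flush with 2 GB of memtable space
```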
[jira] [Updated] (CASSANDRA-12311) Propagate TombstoneOverwhelmingException to the client
[ https://issues.apache.org/jira/browse/CASSANDRA-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Geoffrey Yu updated CASSANDRA-12311: Description: Right now if a data node fails to perform a read because it ran into a {{TombstoneOverwhelmingException}}, it only responds back to the coordinator node with a generic failure. Under this scheme, the coordinator won't be able to know exactly why the request failed and subsequently the client only gets a generic {{ReadFailureException}}. It would be useful to inform the client that their read failed because we read too many tombstones. We should have the data nodes reply with a failure type so the coordinator can pass this information to the client. (was: Right now if a data node fails to perform a read because it ran into a TombstoneOverwhelmingException, it only responds back to the coordinator node with a generic failure. Under this scheme, the coordinator won't be able to know exactly why the request failed and subsequently the client only gets a generic ReadFailureException. It would be useful to inform the client that their read failed because we read too many tombstones. We should have the data nodes reply with a failure type so the coordinator can pass this information to the client.) > Propagate TombstoneOverwhelmingException to the client > -- > > Key: CASSANDRA-12311 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12311 > Project: Cassandra > Issue Type: Improvement >Reporter: Geoffrey Yu >Assignee: Geoffrey Yu >Priority: Minor > Fix For: 4.x > > Attachments: 12311-trunk.txt > > > Right now if a data node fails to perform a read because it ran into a > {{TombstoneOverwhelmingException}}, it only responds back to the coordinator > node with a generic failure. Under this scheme, the coordinator won't be able > to know exactly why the request failed and subsequently the client only gets > a generic {{ReadFailureException}}. 
It would be useful to inform the client > that their read failed because we read too many tombstones. We should have > the data nodes reply with a failure type so the coordinator can pass this > information to the client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12311) Propagate TombstoneOverwhelmingException to the client
[ https://issues.apache.org/jira/browse/CASSANDRA-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Geoffrey Yu updated CASSANDRA-12311: Attachment: 12311-trunk.txt > Propagate TombstoneOverwhelmingException to the client > -- > > Key: CASSANDRA-12311 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12311 > Project: Cassandra > Issue Type: Improvement >Reporter: Geoffrey Yu >Assignee: Geoffrey Yu >Priority: Minor > Attachments: 12311-trunk.txt > > > Right now if a data node fails to perform a read because it ran into a > TombstoneOverwhelmingException, it only responds back to the coordinator node > with a generic failure. Under this scheme, the coordinator won't be able to > know exactly why the request failed and subsequently the client only gets a > generic ReadFailureException. It would be useful to inform the client that > their read failed because we read too many tombstones. We should have the > data nodes reply with a failure type so the coordinator can pass this > information to the client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12311) Propagate TombstoneOverwhelmingException to the client
[ https://issues.apache.org/jira/browse/CASSANDRA-12311?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Geoffrey Yu updated CASSANDRA-12311: Fix Version/s: 4.x Status: Patch Available (was: Open) I've attached a proposed patch that implements these changes. It adds a new exception code and also makes changes to internode messaging, so I've marked it for 4.x. > Propagate TombstoneOverwhelmingException to the client > -- > > Key: CASSANDRA-12311 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12311 > Project: Cassandra > Issue Type: Improvement >Reporter: Geoffrey Yu >Assignee: Geoffrey Yu >Priority: Minor > Fix For: 4.x > > Attachments: 12311-trunk.txt > > > Right now if a data node fails to perform a read because it ran into a > TombstoneOverwhelmingException, it only responds back to the coordinator node > with a generic failure. Under this scheme, the coordinator won't be able to > know exactly why the request failed and subsequently the client only gets a > generic ReadFailureException. It would be useful to inform the client that > their read failed because we read too many tombstones. We should have the > data nodes reply with a failure type so the coordinator can pass this > information to the client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12311) Propagate TombstoneOverwhelmingException to the client
Geoffrey Yu created CASSANDRA-12311: --- Summary: Propagate TombstoneOverwhelmingException to the client Key: CASSANDRA-12311 URL: https://issues.apache.org/jira/browse/CASSANDRA-12311 Project: Cassandra Issue Type: Improvement Reporter: Geoffrey Yu Assignee: Geoffrey Yu Priority: Minor Right now if a data node fails to perform a read because it ran into a TombstoneOverwhelmingException, it only responds back to the coordinator node with a generic failure. Under this scheme, the coordinator won't be able to know exactly why the request failed and subsequently the client only gets a generic ReadFailureException. It would be useful to inform the client that their read failed because we read too many tombstones. We should have the data nodes reply with a failure type so the coordinator can pass this information to the client. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12300) Disallow unset memtable_cleanup_threshold when flush writers is set
[ https://issues.apache.org/jira/browse/CASSANDRA-12300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394770#comment-15394770 ] Brandon Williams commented on CASSANDRA-12300: -- I think part of it is our somewhat bad advice:
{noformat}
# If your data directories are backed by SSD, you should increase this
# to the number of cores.
#memtable_flush_writers: 8
{noformat}
Oh, I have SSD, I'll set this to the number of cores. In reality, I think what you want to set this to is min(active_tables+fudge, num_cores). Instead, when you just blindly set this to the number of cores, you can get a huge divisor for mct if unset, and create a bunch of very small sstables (I've seen as bad as a handful or two of kilobytes.) > Disallow unset memtable_cleanup_threshold when flush writers is set > --- > > Key: CASSANDRA-12300 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12300 > Project: Cassandra > Issue Type: Improvement >Reporter: Brandon Williams > > Many times I see flush writers set, and mct unset, leading to a very small > mct, which causes unneeded frequent flushing, and then of course compaction. > I also think the default is a bit conservative, typically ending up at 0.11, > where I'd say the majority of use cases only have one or two hot tables and > are much better served at 0.7 or 0.8. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
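The heuristic in the comment above, and its effect on the derived cleanup threshold (mct defaults to 1 / (memtable_flush_writers + 1) when unset), can be sketched as follows. This is a rough illustration, not Cassandra code, and `fudge` is just the slack term from the comment, not a real config option:

```python
# min(active_tables + fudge, num_cores), per the comment above.
def suggested_flush_writers(active_tables, num_cores, fudge=2):
    return min(active_tables + fudge, num_cores)

def derived_mct(flush_writers):
    # memtable_cleanup_threshold defaults to 1 / (flush_writers + 1) when unset.
    return 1.0 / (flush_writers + 1)

# Blindly using the core count on a 32-core box makes the divisor huge,
# so memtables flush while still tiny...
assert derived_mct(32) < 0.04
# ...while sizing from the hot-table count keeps each flush much larger.
assert derived_mct(suggested_flush_writers(2, 32)) == 0.2
```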
[jira] [Commented] (CASSANDRA-12212) system.compactions_in_progress needs to be used on first upgrade to 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-12212?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394741#comment-15394741 ] Aleksey Yeschenko commented on CASSANDRA-12212: --- We don't for counters created in 2.1 and later. The remnants of local shards from pre-2.1 can persist for an unbounded time, however, which may or may not be relevant to the ticket. > system.compactions_in_progress needs to be used on first upgrade to 3.0 > --- > > Key: CASSANDRA-12212 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12212 > Project: Cassandra > Issue Type: Bug > Components: Compaction >Reporter: Jeremiah Jordan >Assignee: Stefania > Fix For: 3.0.x, 3.x > > > CASSANDRA-7066 removed the system.compactions_in_progress table and replaced > it with the new transaction system. But system.compactions_in_progress needs > to be consulted for the first startup after upgrading from 2.1 to 3.0. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12292) Wrong buffer size after CASSANDRA-11580
[ https://issues.apache.org/jira/browse/CASSANDRA-12292?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-12292: --- Resolution: Fixed Fix Version/s: (was: 3.x) 3.10 Status: Resolved (was: Patch Available) Thanks, tests look good, so committed as {{c4c9b05700a37447322c3f84d81746051a81b33c}}. > Wrong buffer size after CASSANDRA-11580 > --- > > Key: CASSANDRA-12292 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12292 > Project: Cassandra > Issue Type: Bug >Reporter: Yuki Morishita >Assignee: Yuki Morishita >Priority: Trivial > Fix For: 3.10 > > > CASSANDRA-11580 refactored around SegmentedFile(now FileHandle) in > o.a.c.io.util, but it introduced a bug in setting buffer size. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: Fix wrong buffer size
Repository: cassandra
Updated Branches:
  refs/heads/trunk dc9ed4634 -> c4c9b0570

Fix wrong buffer size

This is a bug from CASSANDRA-11580.

patch by yukim; reviewed by Stefania Alborghetti for CASSANDRA-12292

Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo
Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c4c9b057
Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c4c9b057
Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c4c9b057

Branch: refs/heads/trunk
Commit: c4c9b05700a37447322c3f84d81746051a81b33c
Parents: dc9ed46
Author: Yuki Morishita
Authored: Mon Jul 25 14:41:26 2016 -0500
Committer: Yuki Morishita
Committed: Tue Jul 26 18:08:44 2016 -0500

--
 .../apache/cassandra/io/sstable/format/SSTableReader.java | 4 ++--
 .../cassandra/io/sstable/format/big/BigTableWriter.java   | 8
 2 files changed, 6 insertions(+), 6 deletions(-)
--

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4c9b057/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
index fc0849f..d26edfa 100644
--- a/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/SSTableReader.java
@@ -445,8 +445,8 @@ public abstract class SSTableReader extends SSTable implements SelfRefCounted

http://git-wip-us.apache.org/repos/asf/cassandra/blob/c4c9b057/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
--
diff --git a/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java b/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
index 26b1543..5696ecb 100644
--- a/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
+++ b/src/java/org/apache/cassandra/io/sstable/format/big/BigTableWriter.java
@@ -283,11 +283,11 @@ public class BigTableWriter extends SSTableWriter
         IndexSummary indexSummary = iwriter.summary.build(metadata.partitioner, boundary);
         long indexFileLength = new File(descriptor.filenameFor(Component.PRIMARY_INDEX)).length();
         int indexBufferSize = optimizationStrategy.bufferSize(indexFileLength / indexSummary.size());
-        FileHandle ifile = iwriter.builder.bufferSize(optimizationStrategy.bufferSize(indexBufferSize)).complete(boundary.indexLength);
+        FileHandle ifile = iwriter.builder.bufferSize(indexBufferSize).complete(boundary.indexLength);
         if (compression)
             dbuilder.withCompressionMetadata(((CompressedSequentialWriter) dataFile).open(boundary.dataLength));
         int dataBufferSize = optimizationStrategy.bufferSize(stats.estimatedPartitionSize.percentile(DatabaseDescriptor.getDiskOptimizationEstimatePercentile()));
-        FileHandle dfile = dbuilder.bufferSize(optimizationStrategy.bufferSize(dataBufferSize)).complete(boundary.dataLength);
+        FileHandle dfile = dbuilder.bufferSize(dataBufferSize).complete(boundary.dataLength);
         invalidateCacheAtBoundary(dfile);
         SSTableReader sstable = SSTableReader.internalOpen(descriptor, components, metadata,
@@ -330,10 +330,10 @@ public class BigTableWriter extends SSTableWriter
         long indexFileLength = new File(descriptor.filenameFor(Component.PRIMARY_INDEX)).length();
         int dataBufferSize = optimizationStrategy.bufferSize(stats.estimatedPartitionSize.percentile(DatabaseDescriptor.getDiskOptimizationEstimatePercentile()));
         int indexBufferSize = optimizationStrategy.bufferSize(indexFileLength / indexSummary.size());
-        FileHandle ifile = iwriter.builder.bufferSize(optimizationStrategy.bufferSize(indexBufferSize)).complete();
+        FileHandle ifile = iwriter.builder.bufferSize(indexBufferSize).complete();
         if (compression)
             dbuilder.withCompressionMetadata(((CompressedSequentialWriter) dataFile).open(0));
-        FileHandle dfile = dbuilder.bufferSize(optimizationStrategy.bufferSize(dataBufferSize)).complete();
+        FileHandle dfile = dbuilder.bufferSize(dataBufferSize).complete();
         invalidateCacheAtBoundary(dfile);
         SSTableReader sstable = SSTableReader.internalOpen(descriptor, components,
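The fix above removes a second application of optimizationStrategy.bufferSize(...) to a value that had already been passed through it. In general, if a sizing function pads before rounding, applying it twice inflates the result; a generic illustration of that failure mode (a hypothetical pad-then-round function, not the actual DiskOptimizationStrategy behavior):

```python
# Hypothetical sizing strategy: pad the estimated record size, then round
# up to the next power of two. NOT Cassandra's actual implementation.
def buffer_size(record_size, pad=4096):
    size = record_size + pad
    power = 1
    while power < size:
        power *= 2
    return power

once = buffer_size(10_000)   # 10_000 + 4096 -> rounded up to 16384
twice = buffer_size(once)    # 16384 + 4096 -> rounded up to 32768
assert once == 16384
assert twice == 32768  # the pad is applied a second time, doubling the buffer
```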
[jira] [Updated] (CASSANDRA-12278) Cassandra not working with Java 8u102 on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-12278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-12278: Assignee: Paulo Motta (was: Joshua McKenzie) > Cassandra not working with Java 8u102 on Windows > > > Key: CASSANDRA-12278 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12278 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Windows 10 Enterprise with Cassandra 3.7 and JDK 8u102 >Reporter: Thomas Atwood >Assignee: Paulo Motta > Attachments: 12278_v1.txt, 12278_v2.txt, 12278_v3.txt, Error from 2nd > PC.png, Error with Java version prompt too.png, Java 8u102 issue.png > > > With the latest upgrade of Java to 8u102, Cassandra will no longer run and > states "Cassandra 3.0 and later require Java 8u40 or later." Please see > attached screenshot. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12278) Cassandra not working with Java 8u102 on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-12278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-12278: Reviewer: Joshua McKenzie (was: Paulo Motta) > Cassandra not working with Java 8u102 on Windows > > > Key: CASSANDRA-12278 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12278 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Windows 10 Enterprise with Cassandra 3.7 and JDK 8u102 >Reporter: Thomas Atwood >Assignee: Paulo Motta > Attachments: 12278_v1.txt, 12278_v2.txt, 12278_v3.txt, Error from 2nd > PC.png, Error with Java version prompt too.png, Java 8u102 issue.png > > > With the latest upgrade of Java to 8u102, Cassandra will no longer run and > states "Cassandra 3.0 and later require Java 8u40 or later." Please see > attached screenshot. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10726) Read repair inserts should not be blocking
[ https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-10726: -- Reviewer: Aleksey Yeschenko > Read repair inserts should not be blocking > -- > > Key: CASSANDRA-10726 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10726 > Project: Cassandra > Issue Type: Improvement > Components: Coordination >Reporter: Richard Low >Assignee: Nachiket Patil > > Today, if there’s a digest mismatch in a foreground read repair, the insert > to update out of date replicas is blocking. This means, if it fails, the read > fails with a timeout. If a node is dropping writes (maybe it is overloaded or > the mutation stage is backed up for some other reason), all reads to a > replica set could fail. Further, replicas dropping writes get more out of > sync so will require more read repair. > The comment on the code for why the writes are blocking is: > {code} > // wait for the repair writes to be acknowledged, to minimize impact on any > replica that's > // behind on writes in case the out-of-sync row is read multiple times in > quick succession > {code} > but the bad side effect is that reads timeout. Either the writes should not > be blocking or we should return success for the read even if the write times > out. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10939) Add missing jvm options to cassandra-env.ps1
[ https://issues.apache.org/jira/browse/CASSANDRA-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-10939: Status: Patch Available (was: Open) > Add missing jvm options to cassandra-env.ps1 > > > Key: CASSANDRA-10939 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10939 > Project: Cassandra > Issue Type: Bug > Components: Configuration >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Attachments: 10939.txt > > > The following dynamic JVM options are missing from cassandra-env.ps1: > {{-XX:HeapDumpPath}} and {{-XX:CompileCommandFile}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10939) Add missing jvm options to cassandra-env.ps1
[ https://issues.apache.org/jira/browse/CASSANDRA-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-10939: Attachment: 10939.txt > Add missing jvm options to cassandra-env.ps1 > > > Key: CASSANDRA-10939 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10939 > Project: Cassandra > Issue Type: Bug > Components: Configuration >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Attachments: 10939.txt > > > The following dynamic JVM options are missing from cassandra-env.ps1: > {{-XX:HeapDumpPath}} and {{-XX:CompileCommandFile}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-10939) Add missing jvm options to cassandra-env.ps1
[ https://issues.apache.org/jira/browse/CASSANDRA-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394659#comment-15394659 ] Paulo Motta edited comment on CASSANDRA-10939 at 7/26/16 10:40 PM: --- Attaching cassandra-3.0-based patch (merges cleanly upwards) adding -XX:CompileCommandFile and -XX:HeapDumpPath options to cassandra-env.ps1. Tested locally with {{$env:CASSANDRA_CONF=C:\Users\Paulo\cassandra\conf}} and {{$env:CASSANDRA_CONF=HEAP_DUMP_DIR=C:\Users\Paulo\cassandra}} and got the following additional jvm options: {noformat} -XX:CompileCommandFile=C:\Users\Paulo\cassandra\conf\hotspot_compiler -XX:HeapDumpPath=C:\Users\Paulo\cassandra\cassandra-1469570395-pid11192.hprof {noformat} was (Author: pauloricardomg): Attaching cassandra-3.9-based patch adding -XX:CompileCommandFile and -XX:HeapDumpPath options to cassandra-env.ps1. Tested locally with {{$env:CASSANDRA_CONF=C:\Users\Paulo\cassandra\conf}} and {{$env:CASSANDRA_CONF=HEAP_DUMP_DIR=C:\Users\Paulo\cassandra}} and got the following additional jvm options: {noformat} -XX:CompileCommandFile=C:\Users\Paulo\cassandra\conf\hotspot_compiler -XX:HeapDumpPath=C:\Users\Paulo\cassandra\cassandra-1469570395-pid11192.hprof {noformat} > Add missing jvm options to cassandra-env.ps1 > > > Key: CASSANDRA-10939 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10939 > Project: Cassandra > Issue Type: Bug > Components: Configuration >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Attachments: 10939.txt > > > The following dynamic JVM options are missing from cassandra-env.ps1: > {{-XX:HeapDumpPath}} and {{-XX:CompileCommandFile}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10939) Add missing jvm options to cassandra-env.ps1
[ https://issues.apache.org/jira/browse/CASSANDRA-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-10939: Attachment: (was: 0001-Add-missing-jvm-options-to-cassandra-env.ps1.patch) > Add missing jvm options to cassandra-env.ps1 > > > Key: CASSANDRA-10939 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10939 > Project: Cassandra > Issue Type: Bug > Components: Configuration >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > > The following dynamic JVM options are missing from cassandra-env.ps1: > {{-XX:HeapDumpPath}} and {{-XX:CompileCommandFile}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12142) Add "beta" version native protocol flag
[ https://issues.apache.org/jira/browse/CASSANDRA-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394691#comment-15394691 ] Tyler Hobbs commented on CASSANDRA-12142: - Thanks. Assuming the new test run looks good, I think I'm +1 on the patch. It would be good to get some feedback from the driver team before committing, though. > Add "beta" version native protocol flag > --- > > Key: CASSANDRA-12142 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12142 > Project: Cassandra > Issue Type: Sub-task > Components: CQL >Reporter: Tyler Hobbs >Assignee: Alex Petrov > Labels: protocolv5 > > As discussed in CASSANDRA-10786, we'd like to add a new flag to the native > protocol to allow drivers to connect using a "beta" native protocol version. > This would be used for native protocol versions that are still in development > and may not have all of the final features. Without the "beta" flag, drivers > will be prevented from using the protocol version. > This is primarily useful for driver authors to start work against a new > protocol version when the work on that spans multiple releases. Users would > not generally be expected to utilize this flag, although it could potentially > be used to offer early feedback on new protocol features. > It seems like the {{STARTUP}} message body is the best place for the new beta > flag. We may also consider adding protocol information to the > {{SUPPORTED}} message as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
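A minimal sketch of how such a gate could behave. The constants and names (`MAX_STABLE_VERSION`, `BETA_VERSION`, the boolean flag) are made up for illustration and bear no relation to the identifiers in the actual patch:

```java
// Hypothetical server-side check: stable protocol versions are always
// allowed, the in-development version is allowed only when the client set
// the beta flag in its STARTUP message, and anything else is rejected.
public final class BetaVersionCheck {
    static final int MAX_STABLE_VERSION = 4;  // illustrative value
    static final int BETA_VERSION = 5;        // illustrative value

    /** Returns true if the connection may proceed with the requested version. */
    static boolean allowVersion(int requested, boolean useBetaFlag) {
        if (requested <= MAX_STABLE_VERSION)
            return true;               // finalized versions need no flag
        if (requested == BETA_VERSION)
            return useBetaFlag;        // beta version only with explicit opt-in
        return false;                  // unknown/unsupported version
    }
}
```

This matches the ticket's intent: drivers never stumble into a half-finished protocol by accident, but driver authors can opt in.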
[jira] [Updated] (CASSANDRA-10993) Make read and write requests paths fully non-blocking, eliminate related stages
[ https://issues.apache.org/jira/browse/CASSANDRA-10993?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-10993: Attachment: 10993-reads-no-evloop-integration-six-node-stress.svg I finally got a proper setup for recording some flamegraphs of the current 10993 branch during reads. The attached image was recorded while six stress processes read from a single C* node. All of the reads were served from a memtable. In short, it looks like we spend very little time dealing with the netty task queue when the node is actually saturated with reads (roughly 1%). Previous flamegraphs were misleading, because a single stress client was not enough to saturate the node, so a higher percentage of time was spent in the event loops waiting for tasks. Based on this data, I'm not sure that it makes sense to focus on creating a custom Netty event loop for more efficient integration. So, I plan to move on to benchmarking in-memory reads with the current 10993 vs trunk (at least, the common ancestor of trunk and 10993) vs CASSANDRA-10528 (ported to the same trunk ancestor). > Make read and write requests paths fully non-blocking, eliminate related > stages > --- > > Key: CASSANDRA-10993 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10993 > Project: Cassandra > Issue Type: Sub-task > Components: Coordination, Local Write-Read Paths >Reporter: Aleksey Yeschenko >Assignee: Tyler Hobbs > Fix For: 3.x > > Attachments: 10993-reads-no-evloop-integration-six-node-stress.svg > > > Building on work done by [~tjake] (CASSANDRA-10528), [~slebresne] > (CASSANDRA-5239), and others, convert read and write request paths to be > fully non-blocking, to enable the eventual transition from SEDA to TPC > (CASSANDRA-10989) > Eliminate {{MUTATION}}, {{COUNTER_MUTATION}}, {{VIEW_MUTATION}}, {{READ}}, > and {{READ_REPAIR}} stages, move read and write execution directly to Netty > context. 
> For lack of decent async I/O options on Linux, we’ll still have to retain an > extra thread pool for serving read requests for data not residing in our page > cache (CASSANDRA-5863), however. > Implementation-wise, we only have two options available to us: explicit FSMs > and chained futures. Fibers would be the third, and easiest option, but > aren’t feasible in Java without resorting to direct bytecode manipulation > (ourselves or using [quasar|https://github.com/puniverse/quasar]). > I have seen 4 implementations based on chained futures/promises now - three > in Java and one in C++ - and I’m not convinced that it’s the optimal (or > sane) choice for representing our complex logic - think 2i quorum read > requests with timeouts at all levels, read repair (blocking and > non-blocking), and speculative retries in the mix, {{SERIAL}} reads and > writes. > I’m currently leaning towards an implementation based on explicit FSMs, and > intend to provide a prototype - soonish - for comparison with > {{CompletableFuture}}-like variants. > Either way, the transition is a relatively boring, straightforward refactoring. > There are, however, some extension points on both write and read paths that > we do not control: > - authorisation implementations will have to be non-blocking. We have control > over built-in ones, but for any custom implementation we will have to execute > them in a separate thread pool > - 2i hooks on the write path will need to be non-blocking > - any trigger implementations will not be allowed to block > - UDFs and UDAs > We are further limited by API compatibility restrictions in the 3.x line, > forbidding us to alter or add any non-{{default}} interface methods to those > extension points, so these pose a problem. > Depending on logistics, expecting to get this done in time for 3.4 or 3.6 > feature release. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
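To make the FSM-vs-chained-futures comparison concrete, here is the same toy read pipeline written both ways. Neither reflects Cassandra's actual classes; the stage names are invented for the sketch:

```java
import java.util.concurrent.CompletableFuture;

// Illustrative only: a two-stage "read" expressed in the two styles the
// ticket weighs against each other.
public final class ReadStyles {
    // Chained-futures style: each stage is a lambda hop; control flow is
    // implicit in the chain, which gets hard to follow once timeouts,
    // speculative retries, and read repair branch off it.
    static CompletableFuture<String> chainedRead(String key) {
        return CompletableFuture.supplyAsync(() -> key)
                                .thenApply(k -> "digest:" + k)
                                .thenApply(d -> d + ":repaired");
    }

    // Explicit-FSM style: every state is named and every transition is
    // visible in one place, at the cost of more ceremony per stage.
    enum State { START, DIGEST, REPAIR, DONE }

    static String fsmRead(String key) {
        State s = State.START;
        String result = key;
        while (s != State.DONE) {
            switch (s) {
                case START:  s = State.DIGEST; break;
                case DIGEST: result = "digest:" + result; s = State.REPAIR; break;
                case REPAIR: result = result + ":repaired"; s = State.DONE; break;
                default: break;
            }
        }
        return result;
    }
}
```

Both produce the same result; the argument in the ticket is about which shape stays sane as the state space grows.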
[jira] [Updated] (CASSANDRA-10939) Add missing jvm options to cassandra-env.ps1
[ https://issues.apache.org/jira/browse/CASSANDRA-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-10939: Attachment: 0001-Add-missing-jvm-options-to-cassandra-env.ps1.patch Attaching patch adding -XX:CompileCommandFile and -XX:HeapDumpPath options to cassandra-env.ps1. Tested locally with {{$env:CASSANDRA_CONF=C:\Users\Paulo\cassandra\conf}} and {{$env:CASSANDRA_CONF=HEAP_DUMP_DIR=C:\Users\Paulo\cassandra}} and got the following additional jvm options: {noformat} -XX:CompileCommandFile=C:\Users\Paulo\cassandra\conf\hotspot_compiler -XX:HeapDumpPath=C:\Users\Paulo\cassandra\cassandra-1469570395-pid11192.hprof {noformat} > Add missing jvm options to cassandra-env.ps1 > > > Key: CASSANDRA-10939 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10939 > Project: Cassandra > Issue Type: Bug > Components: Configuration >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Attachments: 0001-Add-missing-jvm-options-to-cassandra-env.ps1.patch > > > The following dynamic JVM options are missing from cassandra-env.ps1: > {{-XX:HeapDumpPath}} and {{-XX:CompileCommandFile}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-10939) Add missing jvm options to cassandra-env.ps1
[ https://issues.apache.org/jira/browse/CASSANDRA-10939?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394659#comment-15394659 ] Paulo Motta edited comment on CASSANDRA-10939 at 7/26/16 10:19 PM: --- Attaching cassandra-3.9-based patch adding -XX:CompileCommandFile and -XX:HeapDumpPath options to cassandra-env.ps1. Tested locally with {{$env:CASSANDRA_CONF=C:\Users\Paulo\cassandra\conf}} and {{$env:CASSANDRA_CONF=HEAP_DUMP_DIR=C:\Users\Paulo\cassandra}} and got the following additional jvm options: {noformat} -XX:CompileCommandFile=C:\Users\Paulo\cassandra\conf\hotspot_compiler -XX:HeapDumpPath=C:\Users\Paulo\cassandra\cassandra-1469570395-pid11192.hprof {noformat} was (Author: pauloricardomg): Attaching patch adding -XX:CompileCommandFile and -XX:HeapDumpPath options to cassandra-env.ps1. Tested locally with {{$env:CASSANDRA_CONF=C:\Users\Paulo\cassandra\conf}} and {{$env:CASSANDRA_CONF=HEAP_DUMP_DIR=C:\Users\Paulo\cassandra}} and got the following additional jvm options: {noformat} -XX:CompileCommandFile=C:\Users\Paulo\cassandra\conf\hotspot_compiler -XX:HeapDumpPath=C:\Users\Paulo\cassandra\cassandra-1469570395-pid11192.hprof {noformat} > Add missing jvm options to cassandra-env.ps1 > > > Key: CASSANDRA-10939 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10939 > Project: Cassandra > Issue Type: Bug > Components: Configuration >Reporter: Paulo Motta >Assignee: Paulo Motta >Priority: Minor > Attachments: 0001-Add-missing-jvm-options-to-cassandra-env.ps1.patch > > > The following dynamic JVM options are missing from cassandra-env.ps1: > {{-XX:HeapDumpPath}} and {{-XX:CompileCommandFile}}. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12294) LDAP Authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394638#comment-15394638 ] Daniel Kleviansky commented on CASSANDRA-12294: --- Working on it here: https://github.com/lqid/cassandra --- Branch 12294-22 Have just added the LdapAuthenticator class so far as a placeholder. > LDAP Authentication > --- > > Key: CASSANDRA-12294 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12294 > Project: Cassandra > Issue Type: New Feature > Components: Distributed Metadata >Reporter: Daniel Kleviansky >Assignee: Daniel Kleviansky >Priority: Minor > Labels: security > Fix For: 2.2.x, 3.x > > > Addition of an LDAP authentication plugin, in tree, alongside the default > authenticator, so that Cassandra can leverage existing LDAP-speaking servers > to manage user logins. > DSE offers this: [Enabling LDAP authentication | > https://docs.datastax.com/en/datastax_enterprise/4.6/datastax_enterprise/sec/secLdapEnabling.html], > but it does not exist in vanilla C* as far as I can tell. > Ideally would like to introduce this as part of the 2.2.x branch, as this is > what is currently running in the client production environment, and where it is > needed at the moment. > Would aim for support of at least Microsoft Active Directory running on > Windows Server 2012. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
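For reference, the core of an LDAP "simple bind" check via the JDK's built-in JNDI provider might look like the following. This is a sketch of the general technique, not code from the linked branch; the URL and DN template are placeholders:

```java
import java.util.Hashtable;
import javax.naming.Context;

// Hypothetical helper: builds the JNDI environment that a simple-bind
// LdapAuthenticator would pass to new InitialDirContext(env). If the
// constructor succeeds, the directory accepted the credentials; a
// javax.naming.AuthenticationException means they were rejected.
public final class LdapBindEnv {
    static Hashtable<String, String> bindEnv(String url, String userDn, String password) {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "com.sun.jndi.ldap.LdapCtxFactory");
        env.put(Context.PROVIDER_URL, url);                // e.g. ldap://ldap.example.com:389
        env.put(Context.SECURITY_AUTHENTICATION, "simple");
        env.put(Context.SECURITY_PRINCIPAL, userDn);       // full DN of the user
        env.put(Context.SECURITY_CREDENTIALS, password);
        return env;
    }
}
```

Active Directory additionally accepts `user@domain` as the principal, which sidesteps DN construction; either way, plain simple bind should only be used over TLS since the password travels in the clear.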
[jira] [Commented] (CASSANDRA-12142) Add "beta" version native protocol flag
[ https://issues.apache.org/jira/browse/CASSANDRA-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394581#comment-15394581 ] Alex Petrov commented on CASSANDRA-12142: - Yes, sorry, overlooked it somehow. Fixing it was [trivial|https://github.com/ifesdjeen/cassandra/commit/dd0bd90bc11cabe9dfa103bf42563e7f310192a1#diff-13d5c04f62552ff47ad78a3ad932ca5eR38]: we should use v6 for the "unsupported" protocol version. v5 is supported and throws a "use beta flag" type of error message. I've rebased and re-triggered CI just in case. Also, I've moved doc changes to the v5 document. > Add "beta" version native protocol flag > --- > > Key: CASSANDRA-12142 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12142 > Project: Cassandra > Issue Type: Sub-task > Components: CQL >Reporter: Tyler Hobbs >Assignee: Alex Petrov > Labels: protocolv5 > > As discussed in CASSANDRA-10786, we'd like to add a new flag to the native > protocol to allow drivers to connect using a "beta" native protocol version. > This would be used for native protocol versions that are still in development > and may not have all of the final features. Without the "beta" flag, drivers > will be prevented from using the protocol version. > This is primarily useful for driver authors to start work against a new > protocol version when the work on that spans multiple releases. Users would > not generally be expected to utilize this flag, although it could potentially > be used to offer early feedback on new protocol features. > It seems like the {{STARTUP}} message body is the best place for the new beta > flag. We may also consider adding protocol information to the > {{SUPPORTED}} message as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12278) Cassandra not working with Java 8u102 on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-12278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-12278: Attachment: 12278_v3.txt Attaching v3 moving version parsing to ParseJVMInfo and reusing {{JVM_PATCH_VERSION}} variable (took a while to generate a valid patch on git-win :(). Can you have a look [~JoshuaMcKenzie]? > Cassandra not working with Java 8u102 on Windows > > > Key: CASSANDRA-12278 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12278 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Windows 10 Enterprise with Cassandra 3.7 and JDK 8u102 >Reporter: Thomas Atwood >Assignee: Joshua McKenzie > Attachments: 12278_v1.txt, 12278_v2.txt, 12278_v3.txt, Error from 2nd > PC.png, Error with Java version prompt too.png, Java 8u102 issue.png > > > With the latest upgrade of Java to 8u102, Cassandra will no longer run and > states "Cassandra 3.0 and later require Java 8u40 or later. Please see > attached screenshot. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12142) Add "beta" version native protocol flag
[ https://issues.apache.org/jira/browse/CASSANDRA-12142?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394542#comment-15394542 ] Tyler Hobbs commented on CASSANDRA-12142: - Hmm, it looks like the latest patch is causing a failure in {{ProtocolErrorTest.testInvalidProtocolVersion}}. > Add "beta" version native protocol flag > --- > > Key: CASSANDRA-12142 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12142 > Project: Cassandra > Issue Type: Sub-task > Components: CQL >Reporter: Tyler Hobbs >Assignee: Alex Petrov > Labels: protocolv5 > > As discussed in CASSANDRA-10786, we'd like to add a new flag to the native > protocol to allow drivers to connect using a "beta" native protocol version. > This would be used for native protocol versions that are still in development > and may not have all of the final features. Without the "beta" flag, drivers > will be prevented from using the protocol version. > This is primarily useful for driver authors to start work against a new > protocol version when the work on that spans multiple releases. Users would > not generally be expected to utilize this flag, although it could potentially > be used to offer early feedback on new protocol features. > It seems like the {{STARTUP}} message body is the best place for the new beta > flag. We may also consider adding protocol information to the > {{SUPPORTED}} message as well. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-12260) dtest failure in topology_test.TestTopology.decommissioned_node_cant_rejoin_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12260?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joel Knighton reassigned CASSANDRA-12260: - Assignee: Joel Knighton (was: Philip Thompson) > dtest failure in > topology_test.TestTopology.decommissioned_node_cant_rejoin_test > > > Key: CASSANDRA-12260 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12260 > Project: Cassandra > Issue Type: Test >Reporter: Philip Thompson >Assignee: Joel Knighton > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.9_novnode_dtest/14/testReport/topology_test/TestTopology/decommissioned_node_cant_rejoin_test -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12278) Cassandra not working with Java 8u102 on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-12278?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394528#comment-15394528 ] Paulo Motta commented on CASSANDRA-12278: - oops, just saw note about {{92-b14}} version (WTF!?).. looks like this is not going to work very well in that case. > Cassandra not working with Java 8u102 on Windows > > > Key: CASSANDRA-12278 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12278 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Windows 10 Enterprise with Cassandra 3.7 and JDK 8u102 >Reporter: Thomas Atwood >Assignee: Joshua McKenzie > Attachments: 12278_v1.txt, 12278_v2.txt, Error from 2nd PC.png, Error > with Java version prompt too.png, Java 8u102 issue.png > > > With the latest upgrade of Java to 8u102, Cassandra will no longer run and > states "Cassandra 3.0 and later require Java 8u40 or later. Please see > attached screenshot. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
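The parsing problem behind the {{92-b14}} surprise is that the update portion of a HotSpot version string such as {{1.8.0_102-b14}} can carry a build suffix. A hedged sketch of tolerant parsing, as a standalone helper rather than the attached PowerShell patch:

```java
// Hypothetical helper: extract the update number from version strings like
// "1.8.0_102-b14" or "1.8.0_92-b14", where naively reading everything after
// the underscore would yield "102-b14" and fail a numeric comparison
// against the 8u40 minimum.
public final class JvmUpdateVersion {
    static int updateOf(String javaVersion) {
        int underscore = javaVersion.indexOf('_');
        if (underscore < 0)
            return 0;                                       // e.g. "1.8.0" with no update
        String patch = javaVersion.substring(underscore + 1); // "102-b14" or "92-b14"
        int dash = patch.indexOf('-');
        if (dash >= 0)
            patch = patch.substring(0, dash);               // strip the "-b14" build suffix
        return Integer.parseInt(patch);
    }
}
```

With the suffix stripped first, `102 >= 40` compares correctly and the startup check no longer rejects 8u102.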
[jira] [Updated] (CASSANDRA-12278) Cassandra not working with Java 8u102 on Windows
[ https://issues.apache.org/jira/browse/CASSANDRA-12278?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-12278: Attachment: 12278_v2.txt attaching simpler patch (v2) making use of existing variable {{JVM_PATCH_VERSION}}. Can you double check it works as expected [~JoshuaMcKenzie]? > Cassandra not working with Java 8u102 on Windows > > > Key: CASSANDRA-12278 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12278 > Project: Cassandra > Issue Type: Bug > Components: Core > Environment: Windows 10 Enterprise with Cassandra 3.7 and JDK 8u102 >Reporter: Thomas Atwood >Assignee: Joshua McKenzie > Attachments: 12278_v1.txt, 12278_v2.txt, Error from 2nd PC.png, Error > with Java version prompt too.png, Java 8u102 issue.png > > > With the latest upgrade of Java to 8u102, Cassandra will no longer run and > states "Cassandra 3.0 and later require Java 8u40 or later. Please see > attached screenshot. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12310) Use of getByName() to retrieve IP address
[ https://issues.apache.org/jira/browse/CASSANDRA-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394509#comment-15394509 ] Eduardo Aguinaga commented on CASSANDRA-12310: -- Tyler, A forward and reverse lookup on the DNS record helps ensure the authenticity of the data. The idea being that it is much more difficult for a hacker to change both pieces of data. Ed
Sent from Ed Aguinaga's iPhone
Life is analog, digital is just samples thereof
> Use of getByName() to retrieve IP address > - > > Key: CASSANDRA-12310 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12310 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > There are many places in the Cassandra source code that rely upon a call to > getByName() to retrieve an IP address. The information returned by > getByName() is not trustworthy. Attackers can spoof DNS entries, and depending > on getByName() alone invites DNS spoofing attacks. > This is an example from the file DatabaseDescriptor.java, where there are > examples of the use of getByName() on lines 193, 213, 233, 254, 947 and 949.
> {code:java}
> DatabaseDescriptor.java, lines 231-238:
> 231 try
> 232 {
> 233     rpcAddress = InetAddress.getByName(config.rpc_address);
> 234 }
> 235 catch (UnknownHostException e)
> 236 {
> 237     throw new ConfigurationException("Unknown host in rpc_address " + config.rpc_address, false);
> 238 }
> {code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
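The forward-plus-reverse check described in the comment above could be sketched like this. It is illustrative only: it raises the bar for spoofing (an attacker must control both the forward and reverse zones) but is not a substitute for actual authentication of the peer.

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Sketch: resolve the name to an address, reverse-resolve that address back
// to a name, then resolve the name again and confirm the original address
// appears. Any mismatch or lookup failure is treated as untrusted.
public final class DnsCrossCheck {
    static boolean forwardReverseMatch(String hostname) {
        try {
            InetAddress forward = InetAddress.getByName(hostname);     // name -> IP
            String reverse = forward.getCanonicalHostName();           // IP -> name (PTR)
            for (InetAddress back : InetAddress.getAllByName(reverse)) // name -> IP again
                if (back.equals(forward))
                    return true;
            return false;
        } catch (UnknownHostException e) {
            return false;                                              // unresolvable: fail closed
        }
    }
}
```

Note that `getCanonicalHostName()` falls back to the textual IP when no PTR record exists, in which case the round trip trivially succeeds; a stricter policy could reject that case outright.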
[jira] [Commented] (CASSANDRA-12216) TTL Reading And Writing is Asymmetric
[ https://issues.apache.org/jira/browse/CASSANDRA-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394505#comment-15394505 ] Russell Spitzer commented on CASSANDRA-12216: - New patch attached vs Trunk, contains doc changes as well > TTL Reading And Writing is Asymmetric > -- > > Key: CASSANDRA-12216 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12216 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Russell Spitzer >Assignee: Russell Spitzer >Priority: Minor > Fix For: 3.x > > Attachments: 12216-3.7-2.txt, 12216-3.7.txt, 12216-trunk.patch > > > There is an inherent asymmetry in the way TTLs are read and written. > A `TTL` of 0 when written becomes a `null` in C* > When read, this `TTL` becomes a `null` > The `null` cannot be written back to C* as `TTL` > This means that end users attempting to copy tables with TTL have to do > manual mapping of the null TTL values to 0 to avoid NPEs. This is a bit > onerous when C* seems to have an internal logic that 0 == NULL. I don't think > C* should return values which are not directly insertable back to C*. > Even with the advent of CASSANDRA-7304 this still remains a problem that the > user needs to be aware of and take care of. > The following prepared statement > {code} > INSERT INTO test.table2 (k, v) VALUES (?, ?) USING TTL ? > {code} > will throw NPEs unless we specifically check that the value to be bound to > TTL is not null. > I think we should discuss whether `null` should be treated as 0 in TTL for > prepared statements. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12216) TTL Reading And Writing is Asymmetric
[ https://issues.apache.org/jira/browse/CASSANDRA-12216?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Russell Spitzer updated CASSANDRA-12216: Attachment: 12216-trunk.patch > TTL Reading And Writing is Asymmetric > -- > > Key: CASSANDRA-12216 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12216 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Russell Spitzer >Assignee: Russell Spitzer >Priority: Minor > Fix For: 3.x > > Attachments: 12216-3.7-2.txt, 12216-3.7.txt, 12216-trunk.patch > > > There is an inherent asymmetry in the way TTLs are read and written. > A `TTL` of 0 when written becomes a `null` in C* > When read, this `TTL` becomes a `null` > The `null` cannot be written back to C* as `TTL` > This means that end users attempting to copy tables with TTL have to do > manual mapping of the null TTL values to 0 to avoid NPEs. This is a bit > onerous when C* seems to have an internal logic that 0 == NULL. I don't think > C* should return values which are not directly insertable back to C*. > Even with the advent of CASSANDRA-7304 this still remains a problem that the > user needs to be aware of and take care of. > The following prepared statement > {code} > INSERT INTO test.table2 (k, v) VALUES (?, ?) USING TTL ? > {code} > will throw NPEs unless we specifically check that the value to be bound to > TTL is not null. > I think we should discuss whether `null` should be treated as 0 in TTL for > prepared statements. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
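The manual mapping the description says end users must do today amounts to a one-line coalesce before binding the value into the `USING TTL ?` slot. A minimal illustrative helper:

```java
// Sketch of the client-side workaround: a TTL read back from a row with no
// TTL comes out as null, but the bind slot needs an int, where 0 means
// "no TTL" on write. Coalescing null to 0 avoids the NPE.
public final class TtlCoalesce {
    static int bindableTtl(Integer ttlFromRead) {
        return ttlFromRead == null ? 0 : ttlFromRead;
    }
}
```

The ticket's proposal is effectively to fold this coalescing into the server's handling of prepared statements so every copier does not have to reinvent it.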
[jira] [Updated] (CASSANDRA-12301) Privacy Violation - Heap Inspection
[ https://issues.apache.org/jira/browse/CASSANDRA-12301?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12301: Reproduced In: 3.0.5 Fix Version/s: (was: 3.0.5) > Privacy Violation - Heap Inspection > --- > > Key: CASSANDRA-12301 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12301 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > In the file SSLTransportFactory.java on lines 72 and 76 a string object is > used to store sensitive data. String objects are immutable and should not be > used to store sensitive data. Sensitive data should be stored in char or byte > arrays and the contents of those arrays should be cleared ASAP. Operations > performed on string objects will require that the original object be copied > and the operation be applied in the new copy of the string object. This > results in the likelihood that multiple copies of sensitive data will be > present in the heap until garbage collection takes place. > The snippet below shows the issue on lines 72 and 76:
> SSLTransportFactory.java, lines 47-81:
> {code:java}
> 47 private String truststore;
> 48 private String truststorePassword;
> 49 private String keystore;
> 50 private String keystorePassword;
> 51 private String protocol;
> 52 private String[] cipherSuites;
> . . .
> 66 @Override
> 67 public void setOptions(Map<String, String> options)
> 68 {
> 69     if (options.containsKey(TRUSTSTORE))
> 70         truststore = options.get(TRUSTSTORE);
> 71     if (options.containsKey(TRUSTSTORE_PASSWORD))
> 72         truststorePassword = options.get(TRUSTSTORE_PASSWORD);
> 73     if (options.containsKey(KEYSTORE))
> 74         keystore = options.get(KEYSTORE);
> 75     if (options.containsKey(KEYSTORE_PASSWORD))
> 76         keystorePassword = options.get(KEYSTORE_PASSWORD);
> 77     if (options.containsKey(PROTOCOL))
> 78         protocol = options.get(PROTOCOL);
> 79     if (options.containsKey(CIPHER_SUITES))
> 80         cipherSuites = options.get(CIPHER_SUITES).split(",");
> 81 }
> {code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12299) Privacy Violation - Heap Inspection
[ https://issues.apache.org/jira/browse/CASSANDRA-12299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12299: Reproduced In: 3.0.5 Fix Version/s: (was: 3.0.5) > Privacy Violation - Heap Inspection > --- > > Key: CASSANDRA-12299 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12299 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > In the file CqlConfigHelper.java on lines 508, 533, 534 and 592 a string > object is used to store sensitive data. String objects are immutable and > should not be used to store sensitive data. Sensitive data should be stored > in char or byte arrays and the contents of those arrays should be cleared > ASAP. Operations performed on string objects will require that the original > object be copied and the operation be applied in the new copy of the string > object. This results in the likelihood that multiple copies of sensitive data > will be present in the heap until garbage collection takes place. > The snippet below shows the issue on line 508:
> CqlConfigHelper.java, lines 505-518:
> {code:java}
> 505 private static Optional<AuthProvider> getDefaultAuthProvider(Configuration conf)
> 506 {
> 507     Optional<String> username = getStringSetting(USERNAME, conf);
> 508     Optional<String> password = getStringSetting(PASSWORD, conf);
> 509
> 510     if (username.isPresent() && password.isPresent())
> 511     {
> 512         return Optional.of(new PlainTextAuthProvider(username.get(), password.get()));
> 513     }
> 514     else
> 515     {
> 516         return Optional.absent();
> 517     }
> 518 }
> {code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
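The remediation these heap-inspection findings recommend, holding the secret in a char array and zeroing it immediately after use, looks like this in miniature. The helper names are invented for the sketch:

```java
import java.util.Arrays;

// Sketch of the char[]-instead-of-String pattern: the plaintext lives in one
// mutable array that is overwritten as soon as the work is done, instead of
// in immutable String copies that linger until garbage collection.
public final class SecretWipe {
    /** Overwrites every element so the plaintext no longer sits in the heap. */
    static void wipe(char[] secret) {
        Arrays.fill(secret, '\0');
    }

    /** Example consumer: uses the secret, then clears it even on failure. */
    static int useAndClear(char[] secret) {
        try {
            return secret.length;   // stand-in for the real work (e.g. a bind)
        } finally {
            wipe(secret);           // runs whether or not the work threw
        }
    }
}
```

The pattern only helps end to end if no intermediate `String` copy is created along the way, which is why APIs such as `KeyStore.load` and `PBEKeySpec` accept `char[]` rather than `String`.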
[jira] [Updated] (CASSANDRA-12303) Privacy Violation - Heap Inspection
[ https://issues.apache.org/jira/browse/CASSANDRA-12303?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12303: Reproduced In: 3.0.5 Fix Version/s: (was: 3.0.5) > Privacy Violation - Heap Inspection > --- > > Key: CASSANDRA-12303 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12303 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > In the file AbstractJmxClient.java on lines 69 and 147 a string object is > used to store sensitive data. String objects are immutable and should not be > used to store sensitive data. Sensitive data should be stored in char or byte > arrays and the contents of those arrays should be cleared ASAP. Operations > performed on string objects will require that the original object be copied > and the operation be applied in the new copy of the string object. This > results in the likelihood that multiple copies of sensitive data will be > present in the heap until garbage collection takes place. > The snippet below shows the issue on line 69:
> AbstractJmxClient.java, lines 51-71:
> {code:java}
> 51 protected final String password;
> 52 protected JMXConnection jmxConn;
> 53 protected PrintStream out = System.out;
> . . .
> 64 public AbstractJmxClient(String host, Integer port, String username, String password) throws IOException
> 65 {
> 66     this.host = (host != null) ? host : DEFAULT_HOST;
> 67     this.port = (port != null) ? port : DEFAULT_JMX_PORT;
> 68     this.username = username;
> 69     this.password = password;
> 70     jmxConn = new JMXConnection(this.host, this.port, username, password);
> 71 }
> {code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12308) Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
[ https://issues.apache.org/jira/browse/CASSANDRA-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12308: Reproduced In: 3.0.5 Fix Version/s: (was: 3.0.5) > Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select > Classes or Code > -- > > Key: CASSANDRA-12308 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12308 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > Dynamically loaded code has the potential to be malicious. The application > uses external input to select which classes or code to use, but it does not > sufficiently prevent the input from selecting improper classes or code. > The snippet below shows the issue which ends on line 585 by instantiating a > class by name. 
> ConfigHelper.java, lines 558-591: > {code:java} > 558 @SuppressWarnings("resource") > 559 public static Cassandra.Client createConnection(Configuration conf, > String host, Integer port) throws IOException > 560 { > 561 try > 562 { > 563 TTransport transport = > getClientTransportFactory(conf).openTransport(host, port); > 564 return new Cassandra.Client(new TBinaryProtocol(transport, true, > true)); > 565 } > 566 catch (Exception e) > 567 { > 568 throw new IOException("Unable to connect to server " + host + ":" > + port, e); > 569 } > 570 } > 571 > 572 public static ITransportFactory getClientTransportFactory(Configuration > conf) > 573 { > 574 String factoryClassName = conf.get(ITransportFactory.PROPERTY_KEY, > TFramedTransportFactory.class.getName()); > 575 ITransportFactory factory = > getClientTransportFactory(factoryClassName); > 576 Map<String, String> options = getOptions(conf, > factory.supportedOptions()); > 577 factory.setOptions(options); > 578 return factory; > 579 } > 580 > 581 private static ITransportFactory getClientTransportFactory(String > factoryClassName) > 582 { > 583 try > 584 { > 585 return (ITransportFactory) > Class.forName(factoryClassName).newInstance(); > 586 } > 587 catch (Exception e) > 588 { > 589 throw new RuntimeException("Failed to instantiate transport > factory:" + factoryClassName, e); > 590 } > 591 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
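One common mitigation for the externally-controlled class-loading issue above is to check the requested class name against an allowlist before calling `Class.forName()`. The sketch below is illustrative only (the `SafeFactoryLoader` class is invented, and well-known JDK collection classes stand in for the permitted factory implementations):

```java
import java.util.Set;

// Illustrative sketch: restrict reflective instantiation to an allowlist.
public class SafeFactoryLoader {
    private static final Set<String> ALLOWED = Set.of(
        "java.util.ArrayList",    // stand-ins for the known, trusted factory classes
        "java.util.LinkedList");

    public static Object instantiate(String className) {
        if (!ALLOWED.contains(className)) {
            throw new IllegalArgumentException("Class not permitted: " + className);
        }
        try {
            return Class.forName(className).getDeclaredConstructor().newInstance();
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Failed to instantiate: " + className, e);
        }
    }

    public static void main(String[] args) {
        // permitted name loads; anything outside the allowlist is rejected up front
        System.out.println(instantiate("java.util.ArrayList").getClass().getName());
    }
}
```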
[jira] [Updated] (CASSANDRA-12304) Privacy Violation - Heap Inspection
[ https://issues.apache.org/jira/browse/CASSANDRA-12304?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12304: Reproduced In: 3.0.5 Fix Version/s: (was: 3.0.5) > Privacy VIolation - Heap Inspection > --- > > Key: CASSANDRA-12304 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12304 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > In the file BulkLoader.java on line 387 a string object is used to store > sensitive data. String objects are immutable and should not be used to store > sensitive data. Sensitive data should be stored in char or byte arrays and > the contents of those arrays should be cleared ASAP. Operations performed on > string objects will require that the original object be copied and the > operation be applied in the new copy of the string object. This results in > the likelihood that multiple copies of sensitive data will be present in the > heap until garbage collection takes place. > The snippet below shows the issue on line 387: > BulkLoader.java, lines 318-387: > {code:java} > 318 public String passwd; > . . . > 337 public static LoaderOptions parseArgs(String cmdArgs[]) > 338 { > 339 CommandLineParser parser = new GnuParser(); > 340 CmdLineOptions options = getCmdLineOptions(); > 341 try > 342 { > . . . > 386 if (cmd.hasOption(PASSWD_OPTION)) > 387 opts.passwd = cmd.getOptionValue(PASSWD_OPTION); > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12295) Double check locking pattern
[ https://issues.apache.org/jira/browse/CASSANDRA-12295?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12295: Reproduced In: 3.0.5 Fix Version/s: (was: 3.0.5) > Double check locking pattern > > > Key: CASSANDRA-12295 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12295 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis include the issue below. > Issue: > The file Keyspace.java includes a double check locking pattern. The double > check locking pattern is an incorrect idiom that does not achieve its > intended effect. For more information see LCK10-J in the CERT Oracle Coding > Standard for Java > https://www.securecoding.cert.org/confluence/display/java/LCK10-J.+Use+a+correct+form+of+the+double-checked+locking+idiom > The snippet below shows the double check locking pattern: > Keyspace.java, lines 115-135: > {code:java} > 115 private static Keyspace open(String keyspaceName, Schema schema, boolean > loadSSTables) > 116 { > 117 Keyspace keyspaceInstance = schema.getKeyspaceInstance(keyspaceName); > 118 > 119 if (keyspaceInstance == null) > 120 { > 121 // instantiate the Keyspace. we could use putIfAbsent but it's > important to making sure it is only done once > 122 // per keyspace, so we synchronize and re-check before doing it. 
> 123 synchronized (Keyspace.class) > 124 { > 125 keyspaceInstance = schema.getKeyspaceInstance(keyspaceName); > 126 if (keyspaceInstance == null) > 127 { > 128 // open and store the keyspace > 129 keyspaceInstance = new Keyspace(keyspaceName, > loadSSTables); > 130 schema.storeKeyspaceInstance(keyspaceInstance); > 131 } > 132 } > 133 } > 134 return keyspaceInstance; > 135 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
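Per CERT LCK10-J, the double-checked locking idiom is only correct when the shared reference is `volatile`, so that a fully-constructed object is published safely to other threads. A minimal sketch of the corrected idiom follows; the `Registry` class is an illustrative stand-in, not Cassandra's `Keyspace` code:

```java
// Illustrative sketch of the correct double-checked locking idiom (LCK10-J).
public class Registry {
    private static volatile Registry instance;  // volatile is the essential fix

    public static Registry getInstance() {
        Registry local = instance;          // single volatile read on the fast path
        if (local == null) {
            synchronized (Registry.class) {
                local = instance;           // re-check under the lock
                if (local == null) {
                    local = new Registry();
                    instance = local;       // safe publication via volatile write
                }
            }
        }
        return local;
    }

    public static void main(String[] args) {
        System.out.println(Registry.getInstance() == Registry.getInstance()); // true: one singleton
    }
}
```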
[jira] [Updated] (CASSANDRA-12306) Privacy Violation - Heap Inspection
[ https://issues.apache.org/jira/browse/CASSANDRA-12306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12306: Reproduced In: 3.0.5 Fix Version/s: (was: 3.0.5) > Privacy VIolation - Heap Inspection > --- > > Key: CASSANDRA-12306 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12306 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > In the file NodeTool.java on lines 239, 242 and 291 a string object is used > to store sensitive data. String objects are immutable and should not be used > to store sensitive data. Sensitive data should be stored in char or byte > arrays and the contents of those arrays should be cleared ASAP. Operations > performed on string objects will require that the original object be copied > and the operation be applied in the new copy of the string object. This > results in the likelihood that multiple copies of sensitive data will be > present in the heap until garbage collection takes place. > The snippet below shows the issue on line 239 and 242: > NodeTool.java, lines 229-243: > {code:java} > 229 private String password = EMPTY; > 230 > 231 @Option(type = OptionType.GLOBAL, name = {"-pwf", "--password-file"}, > description = "Path to the JMX password file") > 232 private String passwordFilePath = EMPTY; > 233 > 234 @Override > 235 public void run() > 236 { > 237 if (isNotEmpty(username)) { > 238 if (isNotEmpty(passwordFilePath)) > 239 password = readUserPasswordFromFile(username, > passwordFilePath); > 240 > 241 if (isEmpty(password)) > 242 password = promptAndReadPassword(); > 243 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12309) Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
[ https://issues.apache.org/jira/browse/CASSANDRA-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12309: Reproduced In: 3.0.5 > Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select > Classes or Code > -- > > Key: CASSANDRA-12309 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12309 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > Dynamically loaded code has the potential to be malicious. The application > uses external input to select which classes or code to use, but it does not > sufficiently prevent the input from selecting improper classes or code. > The snippet below shows the issue on line 588 and the method returns a new > instance on line 594 or 598. > CqlConfigHelper.java, lines 584-605: > {code:java} > 584 private static AuthProvider getClientAuthProvider(String > factoryClassName, Configuration conf) > 585 { > 586 try > 587 { > 588 Class c = Class.forName(factoryClassName); > 589 if (PlainTextAuthProvider.class.equals(c)) > 590 { > 591 String username = getStringSetting(USERNAME, conf).or(""); > 592 String password = getStringSetting(PASSWORD, conf).or(""); > 593 return (AuthProvider) c.getConstructor(String.class, > String.class) > 594 .newInstance(username, password); > 595 } > 596 else > 597 { > 598 return (AuthProvider) c.newInstance(); > 599 } > 600 } > 601 catch (Exception e) > 602 { > 603 throw new RuntimeException("Failed to instantiate auth provider:" > + factoryClassName, e); > 604 } > 605 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12307) Command Injection
[ https://issues.apache.org/jira/browse/CASSANDRA-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12307: Reproduced In: 3.0.5 Fix Version/s: (was: 3.0.5) > Command Injection > - > > Key: CASSANDRA-12307 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12307 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga >Priority: Critical > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > Two commands, archiveCommand and restoreCommand, are stored as string > properties and retrieved on lines 91 and 92 of CommitLogArchiver.java. The > only processing performed on the command strings is that tokens are replaced > by data available at runtime. > A malicious command could be entered into the system by storing the malicious > command in place of the valid archiveCommand or restoreCommand. The malicious > command would then be executed on line 265 within the exec method. > Any commands that are stored and retrieved should be verified prior to > execution. Assuming that the command is safe because it is stored as a local > property invites security issues. 
> {code:java} > CommitLogArchiver.java, lines 91-92: > 91 String archiveCommand = commitlog_commands.getProperty("archive_command"); > 92 String restoreCommand = commitlog_commands.getProperty("restore_command"); > CommitLogArchiver.java, lines 261-266: > 261 private void exec(String command) throws IOException > 262 { > 263 ProcessBuilder pb = new ProcessBuilder(command.split(" ")); > 264 pb.redirectErrorStream(true); > 265 FBUtilities.exec(pb); > 266 } > CommitLogArchiver.java, lines 152-166: > 152 public void maybeArchive(final String path, final String name) > 153 { > 154 if (Strings.isNullOrEmpty(archiveCommand)) > 155 return; > 156 > 157 archivePending.put(name, executor.submit(new WrappedRunnable() > 158 { > 159 protected void runMayThrow() throws IOException > 160 { > 161 String command = archiveCommand.replace("%name", name); > 162 command = command.replace("%path", path); > 163 exec(command); > 164 } > 165 })); > 166 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
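A verification step of the kind suggested above can be sketched as an allowlist of permitted binaries plus a restriction on substituted argument characters. This is an illustrative sketch only; the allowlisted paths, the `CommandValidator` class, and the argument pattern are all invented, and Cassandra's actual commitlog archiving configuration may require different rules:

```java
import java.util.List;
import java.util.Set;
import java.util.regex.Pattern;

// Illustrative sketch: validate a stored command template before execution.
public class CommandValidator {
    // hypothetical allowlist of binaries an operator is permitted to configure
    private static final Set<String> ALLOWED_BINARIES = Set.of("/bin/cp", "/usr/bin/rsync");
    // substituted values (e.g. %name, %path) must be plain path characters
    private static final Pattern SAFE_ARG = Pattern.compile("[A-Za-z0-9._/\\-]+");

    public static List<String> validate(String command) {
        List<String> parts = List.of(command.split(" "));
        if (parts.isEmpty() || !ALLOWED_BINARIES.contains(parts.get(0)))
            throw new IllegalArgumentException("Binary not permitted: " + command);
        for (String arg : parts.subList(1, parts.size()))
            if (!SAFE_ARG.matcher(arg).matches())
                throw new IllegalArgumentException("Unsafe argument: " + arg);
        return parts; // only now safe to hand to ProcessBuilder
    }

    public static void main(String[] args) {
        System.out.println(validate("/bin/cp source.log /backup/source.log"));
    }
}
```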
[jira] [Updated] (CASSANDRA-12309) Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
[ https://issues.apache.org/jira/browse/CASSANDRA-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12309: Fix Version/s: (was: 3.0.5) > Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select > Classes or Code > -- > > Key: CASSANDRA-12309 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12309 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > Dynamically loaded code has the potential to be malicious. The application > uses external input to select which classes or code to use, but it does not > sufficiently prevent the input from selecting improper classes or code. > The snippet below shows the issue on line 588 and the method returns a new > instance on line 594 or 598. > CqlConfigHelper.java, lines 584-605: > {code:java} > 584 private static AuthProvider getClientAuthProvider(String > factoryClassName, Configuration conf) > 585 { > 586 try > 587 { > 588 Class c = Class.forName(factoryClassName); > 589 if (PlainTextAuthProvider.class.equals(c)) > 590 { > 591 String username = getStringSetting(USERNAME, conf).or(""); > 592 String password = getStringSetting(PASSWORD, conf).or(""); > 593 return (AuthProvider) c.getConstructor(String.class, > String.class) > 594 .newInstance(username, password); > 595 } > 596 else > 597 { > 598 return (AuthProvider) c.newInstance(); > 599 } > 600 } > 601 catch (Exception e) > 602 { > 603 throw new RuntimeException("Failed to instantiate auth provider:" > + factoryClassName, e); > 604 } > 605 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12310) Use of getByName() to retrieve IP address
[ https://issues.apache.org/jira/browse/CASSANDRA-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12310: Fix Version/s: (was: 3.0.5) > Use of getByName() to retrieve IP address > - > > Key: CASSANDRA-12310 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12310 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > There are many places in the Cassandra source code that rely upon a call to > getByName() to retrieve an IP address. The information returned by > getByName() is not trustworthy. Attackers can spoof DNS entries and depending > on getByName alone invites DNS spoofing attacks. > This is an example from the file DatabaseDescriptor.java where there are > examples of the use of getByName() on line 193, 213, 233, 254, 947 and 949. > {code:java} > DatabaseDescriptor.java, lines 231-238: > 231 try > 232 { > 233 rpcAddress = InetAddress.getByName(config.rpc_address); > 234 } > 235 catch (UnknownHostException e) > 236 { > 237 throw new ConfigurationException("Unknown host in rpc_address " + > config.rpc_address, false); > 238 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
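One mitigation for the DNS-spoofing concern above is to require a literal IP address in security-sensitive configuration and construct the `InetAddress` with `getByAddress()`, which never performs a DNS lookup. The sketch below is IPv4-only for brevity and the `resolveLiteral` method name is invented for illustration:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative sketch: accept only a literal dotted-quad IPv4 address.
// InetAddress.getByAddress() never touches DNS, unlike getByName().
public class LiteralAddress {
    public static InetAddress resolveLiteral(String s) throws UnknownHostException {
        String[] parts = s.split("\\.", -1);
        if (parts.length != 4)
            throw new UnknownHostException("Not a literal IPv4 address: " + s);
        byte[] bytes = new byte[4];
        for (int i = 0; i < 4; i++) {
            int octet;
            try {
                octet = Integer.parseInt(parts[i]);
            } catch (NumberFormatException e) {
                throw new UnknownHostException("Not a literal IPv4 address: " + s);
            }
            if (octet < 0 || octet > 255)
                throw new UnknownHostException("Octet out of range: " + s);
            bytes[i] = (byte) octet;
        }
        return InetAddress.getByAddress(bytes); // no DNS lookup performed
    }

    public static void main(String[] args) throws UnknownHostException {
        System.out.println(resolveLiteral("127.0.0.1").getHostAddress()); // 127.0.0.1
    }
}
```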
[jira] [Updated] (CASSANDRA-12310) Use of getByName() to retrieve IP address
[ https://issues.apache.org/jira/browse/CASSANDRA-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jeremiah Jordan updated CASSANDRA-12310: Reproduced In: 3.0.5 > Use of getByName() to retrieve IP address > - > > Key: CASSANDRA-12310 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12310 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > There are many places in the Cassandra source code that rely upon a call to > getByName() to retrieve an IP address. The information returned by > getByName() is not trustworthy. Attackers can spoof DNS entries and depending > on getByName alone invites DNS spoofing attacks. > This is an example from the file DatabaseDescriptor.java where there are > examples of the use of getByName() on line 193, 213, 233, 254, 947 and 949. > {code:java} > DatabaseDescriptor.java, lines 231-238: > 231 try > 232 { > 233 rpcAddress = InetAddress.getByName(config.rpc_address); > 234 } > 235 catch (UnknownHostException e) > 236 { > 237 throw new ConfigurationException("Unknown host in rpc_address " + > config.rpc_address, false); > 238 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12004) Inconsistent timezone in logs
[ https://issues.apache.org/jira/browse/CASSANDRA-12004?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394461#comment-15394461 ] Paulo Motta commented on CASSANDRA-12004: - Sorry for the delay here, this slipped through the cracks. Patch LGTM, will mark as ready to commit after CI results look good: ||trunk|| |[branch|https://github.com/apache/cassandra/compare/trunk...pauloricardomg:trunk-12004]| |[testall|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-12004-testall/lastCompletedBuild/testReport/]| |[dtest|http://cassci.datastax.com/view/Dev/view/paulomotta/job/pauloricardomg-trunk-12004-dtest/lastCompletedBuild/testReport/]| > Inconsistent timezone in logs > - > > Key: CASSANDRA-12004 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12004 > Project: Cassandra > Issue Type: Bug >Reporter: Jérôme Mainaud >Priority: Trivial > Fix For: 2.1.x > > Attachments: 12004-trunk.patch2.txt, patch.txt > > > An error in the provided logback.xml leads to inconsistent timestamp usage in logs. > In log files, the local time zone is used. > On the console, the UTC time zone is used (and milliseconds are missing). > Example, the same log line (local time zone: CEST): > in system.log > {code} > INFO [main] 2016-06-14 14:01:51,638 StorageService.java:2081 - Node > localhost/127.0.0.1 state jump to NORMAL > {code} > in console > {code} > INFO 12:01:51 Node localhost/127.0.0.1 state jump to NORMAL > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
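For reference, logback's `%date` conversion word accepts a time zone as a second option, so one way to make file and console timestamps agree is to give both appenders the same pattern. The fragment below is a sketch only; the exact appender layout in Cassandra's shipped logback.xml may differ:

```xml
<!-- Sketch: use the same UTC pattern (with milliseconds) in both appenders -->
<encoder>
  <pattern>%-5level [%thread] %date{ISO8601, UTC} %F:%L - %msg%n</pattern>
</encoder>
```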
[jira] [Commented] (CASSANDRA-12310) Use of getByName() to retrieve IP address
[ https://issues.apache.org/jira/browse/CASSANDRA-12310?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394425#comment-15394425 ] Tyler Hobbs commented on CASSANDRA-12310: - What alternatives or mitigating techniques do you suggest? > Use of getByName() to retrieve IP address > - > > Key: CASSANDRA-12310 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12310 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > Fix For: 3.0.5 > > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools > Understand v4. The results of that analysis includes the issue below. > Issue: > There are many places in the Cassandra source code that rely upon a call to > getByName() to retrieve an IP address. The information returned by > getByName() is not trustworthy. Attackers can spoof DNS entries and depending > on getByName alone invites DNS spoofing attacks. > This is an example from the file DatabaseDescriptor.java where there are > examples of the use of getByName() on line 193, 213, 233, 254, 947 and 949. > {code:java} > DatabaseDescriptor.java, lines 231-238: > 231 try > 232 { > 233 rpcAddress = InetAddress.getByName(config.rpc_address); > 234 } > 235 catch (UnknownHostException e) > 236 { > 237 throw new ConfigurationException("Unknown host in rpc_address " + > config.rpc_address, false); > 238 } > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-12294) LDAP Authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394418#comment-15394418 ] Daniel Kleviansky edited comment on CASSANDRA-12294 at 7/26/16 7:43 PM: Have decided to use [Apache Directory|http://directory.apache.org/api/] as the LDAP API. Seems to be the most modern and easy-to-use, especially when compared to JNDI. was (Author: lqid): Have decided to use [Apache Directory|http://directory.apache.org/] as the LDAP API. Seems to be the most modern and easy-to-use, especially when compared to JNDI. > LDAP Authentication > --- > > Key: CASSANDRA-12294 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12294 > Project: Cassandra > Issue Type: New Feature > Components: Distributed Metadata >Reporter: Daniel Kleviansky >Assignee: Daniel Kleviansky >Priority: Minor > Labels: security > Fix For: 2.2.x, 3.x > > > Addition of an LDAP authentication plugin, in tree, along side the default > authenticator, so that Cassandra can leverage existing LDAP-speaking servers > to manage user logins. > DSE offers this: [Enabling LDAP authentication | > https://docs.datastax.com/en/datastax_enterprise/4.6/datastax_enterprise/sec/secLdapEnabling.html], > but does not exist in vanilla C* as far as I can tell. > Ideally would like to introduce this as part of the 2.2.x branch, as this is > what is currently running in client production environment, and where it is > needed at the moment. > Would aim for support of at least Microsoft Active Directory running on > Windows Server 2012. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12294) LDAP Authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394418#comment-15394418 ] Daniel Kleviansky commented on CASSANDRA-12294: --- Have decided to use [Apache Directory|http://directory.apache.org/] as the LDAP API. Seems to be the most modern and easy-to-use, especially when compared to JNDI. > LDAP Authentication > --- > > Key: CASSANDRA-12294 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12294 > Project: Cassandra > Issue Type: New Feature > Components: Distributed Metadata >Reporter: Daniel Kleviansky >Assignee: Daniel Kleviansky >Priority: Minor > Labels: security > Fix For: 2.2.x, 3.x > > > Addition of an LDAP authentication plugin, in tree, along side the default > authenticator, so that Cassandra can leverage existing LDAP-speaking servers > to manage user logins. > DSE offers this: [Enabling LDAP authentication | > https://docs.datastax.com/en/datastax_enterprise/4.6/datastax_enterprise/sec/secLdapEnabling.html], > but does not exist in vanilla C* as far as I can tell. > Ideally would like to introduce this as part of the 2.2.x branch, as this is > what is currently running in client production environment, and where it is > needed at the moment. > Would aim for support of at least Microsoft Active Directory running on > Windows Server 2012. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-8751) C* should always listen to both ssl/non-ssl ports
[ https://issues.apache.org/jira/browse/CASSANDRA-8751?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394398#comment-15394398 ] sai k potturi commented on CASSANDRA-8751: -- When will we have this available? We have not been able to enable SSL for our cluster because of the split-brain scenario mentioned. We are currently on version 2.1.12. > C* should always listen to both ssl/non-ssl ports > - > > Key: CASSANDRA-8751 > URL: https://issues.apache.org/jira/browse/CASSANDRA-8751 > Project: Cassandra > Issue Type: Improvement >Reporter: Minh Do >Assignee: Minh Do >Priority: Critical > Fix For: 3.x > > > Since there is always one thread dedicated to the server socket listener and it > does not use much resource, we should always have these two listeners up no > matter what users set for internode_encryption. > The reason behind this is that we need to switch back and forth between > different internode_encryption modes and we need C* servers to keep running > in a transient state or during mode switching. Currently this is not possible. > For example, we have an internode_encryption=dc cluster in a multi-region AWS > environment and want to set internode_encryption=all by rolling restart of C* > nodes. However, the node with internode_encryption=all does not listen on > the non-ssl port. As a result, we have a split-brain cluster here. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12275) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jim Witschey updated CASSANDRA-12275: - Assignee: DS Test Eng (was: Jim Witschey) > dtest failure in > offline_tools_test.TestOfflineTools.sstableofflinerelevel_test > --- > > Key: CASSANDRA-12275 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12275 > Project: Cassandra > Issue Type: Test >Reporter: Craig Kodman >Assignee: DS Test Eng > Labels: dtest, windows > Attachments: node1.log, node1_debug.log, node1_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/271/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables
[ https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394379#comment-15394379 ] Tyler Hobbs commented on CASSANDRA-12127: - Benjamin is currently on vacation (I think he'll be back next week). Is this ticket urgent for you? > Queries with empty ByteBuffer values in clustering column restrictions fail > for non-composite compact tables > > > Key: CASSANDRA-12127 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12127 > Project: Cassandra > Issue Type: Bug > Components: CQL >Reporter: Benjamin Lerer >Assignee: Benjamin Lerer > Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x > > Attachments: 12127.txt > > > For the following table: > {code} > CREATE TABLE myTable (pk int, > c blob, > value int, > PRIMARY KEY (pk, c)) WITH COMPACT STORAGE; > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1); > INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2); > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > Will result in the following Exception: > {code} > java.lang.ClassCastException: > org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast > to org.apache.cassandra.db.composites.CellName > at > org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188) > at > org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125) > at > org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206) > at > org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214) > at > org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292) > at > 
org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259) > at > org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299) > [...] > {code} > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}} > Will return 2 rows instead of 0. > The query: {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} > {code} > java.lang.AssertionError > at > org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60) > at > org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853) > at > org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846) > at > org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583) > at > org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383) > at > org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253) > [...] > {code} > I checked 2.0 and {{SELECT * FROM myTable WHERE pk = 1 AND c > > textAsBlob('');}} works properly but {{SELECT * FROM myTable WHERE pk = 1 AND > c < textAsBlob('');}} returns the same wrong results as in 2.1. > The {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is > rejected with a clear error message: {{Invalid empty value for clustering > column of COMPACT TABLE}}. > As it is not possible to insert an empty ByteBuffer value within the > clustering column of non-composite compact tables, those queries do not > have a lot of meaning. {{SELECT * FROM myTable WHERE pk = 1 AND c < > textAsBlob('');}} and {{SELECT * FROM myTable WHERE pk = 1 AND c = > textAsBlob('');}} will return nothing > and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} will > return the entire partition (pk = 1). 
> In my opinion those queries should probably all be rejected as it seems that > the fact that {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > was accepted in {{2.0}} was due to a bug. > I am of course open to discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-11866) nodetool repair does not obey the column family parameter when -st and -et are provided (subrange repair)
[ https://issues.apache.org/jira/browse/CASSANDRA-11866?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394369#comment-15394369 ] Jeremiah Jordan commented on CASSANDRA-11866: - This is a pretty trivial fix and nodetool is broken without it. [~kohlisankalp] [~JoshuaMcKenzie] what do you want a dtest to look like? > nodetool repair does not obey the column family parameter when -st and -et > are provided (subrange repair) > - > > Key: CASSANDRA-11866 > URL: https://issues.apache.org/jira/browse/CASSANDRA-11866 > Project: Cassandra > Issue Type: Bug > Components: Tools > Environment: Red Hat Enterprise Linux Server release 6.7 (Santiago) > x86_64 >Reporter: Shiva Venkateswaran > Labels: newbie > Fix For: 2.1.x > > Attachments: 11866-2.1.txt > > > Command 1: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the > parameter AssetModifyTimes_data used to restrict the CFs > Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h > localhost -p 7199 -u user-pw ** repair ADL_GLOBAL AssetModifyTimes_data > -st 205279477618143669 -et 230991685737746901 -par > [2016-05-20 17:31:39,116] Starting repair command #9, repairing 1 ranges for > keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true) > [2016-05-20 17:32:21,568] Repair session 3cae2530-1ed2-11e6-b490-d9df6932c7cf > for range (205279477618143669,230991685737746901] finished > Command 2: Repairs all the CFs in ADL_GLOBAL keyspace and ignores the > parameter AssetModifyTimes_data used to restrict the CFs > Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h > localhost -p 7199 -u controlRole -pw ** repair -st 205279477618143669 -et > 230991685737746901 -par -- ADL_GLOBAL AssetModifyTimes_data > [2016-05-20 17:36:34,473] Starting repair command #10, repairing 1 ranges for > keyspace ADL_GLOBAL (parallelism=PARALLEL, full=true) > [2016-05-20 17:37:15,365] Repair session ecb996d0-1ed2-11e6-b490-d9df6932c7cf > for range (205279477618143669,230991685737746901] 
finished > [2016-05-20 17:37:15,365] Repair command #10 finished > Command 3: Repairs only the CF ADL3Test1_data in keyspace ADL_GLOBAL > Executing: /aladdin/local/apps/apache-cassandra-2.1.8a/bin/nodetool -h > localhost -p 7199 -u controlRole -pw ** repair -- ADL_GLOBAL > ADL3Test1_data > [2016-05-20 17:38:35,781] Starting repair command #11, repairing 1043 ranges > for keyspace ADL_GLOBAL (parallelism=SEQUENTIAL, full=true) > [2016-05-20 17:42:32,682] Repair session 3c8af050-1ed3-11e6-b490-d9df6932c7cf > for range (6241639152751626129,6241693909092643958] finished > [2016-05-20 17:42:32,683] Repair session 3caf1a20-1ed3-11e6-b490-d9df6932c7cf > for range (-7096993048358106082,-7095000706885780850] finished > [2016-05-20 17:42:32,683] Repair session 3ccfc180-1ed3-11e6-b490-d9df6932c7cf > for range (-7218939248114487080,-7218289345961492809] finished > [2016-05-20 17:42:32,683] Repair session 3cf21690-1ed3-11e6-b490-d9df6932c7cf > for range (-5244794756638190874,-5190307341355030282] finished > [2016-05-20 17:42:32,683] Repair session 3d126fd0-1ed3-11e6-b490-d9df6932c7cf > for range (3551629701277971766,321736534916502] finished > [2016-05-20 17:42:32,683] Repair session 3d32f020-1ed3-11e6-b490-d9df6932c7cf > for range (-8139355591560661944,-8127928369093576603] finished > [2016-05-20 17:42:32,683] Repair session 3d537070-1ed3-11e6-b490-d9df6932c7cf > for range (7098010153980465751,7100863011896759020] finished > [2016-05-20 17:42:32,683] Repair session 3d73f0c0-1ed3-11e6-b490-d9df6932c7cf > for range (1004538726866173536,1008586133746764703] finished > [2016-05-20 17:42:32,683] Repair session 3d947110-1ed3-11e6-b490-d9df6932c7cf > for range (5770817093573726645,5771418910784831587] finished > . > . > . > [2016-05-20 17:42:32,732] Repair command #11 finished -- This message was sent by Atlassian JIRA (v6.3.4#6332)
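The command transcripts above suggest the table argument survives parsing but is dropped on the subrange (-st/-et) code path. A minimal, self-contained sketch of the intended option handling (hypothetical class and method names, not Cassandra's actual nodetool code) would keep the table list alongside the token bounds on both paths:

```java
import java.util.*;

// Hypothetical sketch: the bug class here is an option-handling path that,
// when -st/-et are present, invokes the subrange-repair overload without
// forwarding the table list. The fix is to carry the tables through always.
public class RepairArgs {
    final String keyspace;
    final List<String> tables;   // empty list means "all tables"
    final Long startToken, endToken;

    RepairArgs(String keyspace, List<String> tables, Long st, Long et) {
        this.keyspace = keyspace;
        this.tables = tables;
        this.startToken = st;
        this.endToken = et;
    }

    static RepairArgs parse(List<String> args) {
        Long st = null, et = null;
        List<String> positional = new ArrayList<>();
        for (Iterator<String> it = args.iterator(); it.hasNext(); ) {
            String a = it.next();
            if (a.equals("-st")) st = Long.parseLong(it.next());
            else if (a.equals("-et")) et = Long.parseLong(it.next());
            else if (!a.startsWith("-")) positional.add(a);  // keyspace, then tables
        }
        String keyspace = positional.isEmpty() ? null : positional.get(0);
        // Buggy variants drop this sublist when st/et are set; keep it always.
        List<String> tables = positional.size() > 1
                ? positional.subList(1, positional.size())
                : Collections.<String>emptyList();
        return new RepairArgs(keyspace, tables, st, et);
    }
}
```

With this shape, a call like {{nodetool repair ADL_GLOBAL AssetModifyTimes_data -st ... -et ...}} yields both a token range and a one-table list, so the repair request can be restricted as the user asked.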
[jira] [Commented] (CASSANDRA-12275) dtest failure in offline_tools_test.TestOfflineTools.sstableofflinerelevel_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12275?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394368#comment-15394368 ] Jim Witschey commented on CASSANDRA-12275: -- I don't know what could cause the tool to give no SSTable levels, so I've added some debugging improvements to the test: https://github.com/riptano/cassandra-dtest/pull/1137 Reassigning this to DS Test Eng while we wait for failures with more debug output. > dtest failure in > offline_tools_test.TestOfflineTools.sstableofflinerelevel_test > --- > > Key: CASSANDRA-12275 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12275 > Project: Cassandra > Issue Type: Test >Reporter: Craig Kodman >Assignee: DS Test Eng > Labels: dtest, windows > Attachments: node1.log, node1_debug.log, node1_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-2.2_dtest_win32/271/testReport/offline_tools_test/TestOfflineTools/sstableofflinerelevel_test -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12264) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-12264: Assignee: (was: Philip Thompson) Issue Type: Bug (was: Test) Hopefully a dev can identify why node1 did not repair all of its sstables here > dtest failure in > repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test > > > Key: CASSANDRA-12264 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12264 > Project: Cassandra > Issue Type: Bug >Reporter: Sean McCarthy > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.9_dtest/15/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test > {code} > Error Message > 'Repaired at: 0' unexpectedly found in 'SSTable: > {code} > Related failure: > http://cassci.datastax.com/job/trunk_dtest/1315/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12310) Use of getByName() to retrieve IP address
Eduardo Aguinaga created CASSANDRA-12310: Summary: Use of getByName() to retrieve IP address Key: CASSANDRA-12310 URL: https://issues.apache.org/jira/browse/CASSANDRA-12310 Project: Cassandra Issue Type: Bug Reporter: Eduardo Aguinaga Fix For: 3.0.5 Overview: In May through June of 2016 a static analysis was performed on version 3.0.5 of the Cassandra source code. The analysis included an automated analysis using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The results of that analysis include the issue below. Issue: There are many places in the Cassandra source code that rely upon a call to getByName() to retrieve an IP address. The information returned by getByName() is not trustworthy: attackers can spoof DNS entries, and depending on getByName() alone invites DNS spoofing attacks. Below is an example from DatabaseDescriptor.java, which uses getByName() on lines 193, 213, 233, 254, 947 and 949. {code:java} DatabaseDescriptor.java, lines 231-238: 231 try 232 { 233 rpcAddress = InetAddress.getByName(config.rpc_address); 234 } 235 catch (UnknownHostException e) 236 { 237 throw new ConfigurationException("Unknown host in rpc_address " + config.rpc_address, false); 238 } {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
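One common mitigation for this class of finding is a forward-confirmed reverse-DNS check: take the name produced by a reverse lookup, re-resolve it forwards, and verify that the original address appears among the results. The sketch below is illustrative only (it is not code from DatabaseDescriptor.java) and uses only standard java.net calls:

```java
import java.net.InetAddress;
import java.net.UnknownHostException;

// Illustrative sketch of forward-confirmed reverse DNS (FCrDNS).
// Note: this raises the bar against naive spoofing but is not a substitute
// for authenticated transport (TLS) when the peer's identity matters.
public class DnsCheck {
    // Returns true if the address's reverse-lookup name resolves back to it.
    // If the reverse lookup fails, getCanonicalHostName() returns the literal
    // IP string, which trivially resolves back to the same address.
    static boolean forwardConfirmed(InetAddress addr) {
        try {
            String host = addr.getCanonicalHostName();
            for (InetAddress a : InetAddress.getAllByName(host))
                if (a.equals(addr))
                    return true;
            return false;
        } catch (UnknownHostException e) {
            return false; // forward resolution failed: treat as unconfirmed
        }
    }

    // Small convenience wrapper so callers need not handle the checked exception.
    static InetAddress resolve(String name) {
        try { return InetAddress.getByName(name); }
        catch (UnknownHostException e) { throw new RuntimeException(e); }
    }
}
```

For configuration values like rpc_address this check is cheap; for anything security-sensitive, name resolution alone should not be trusted regardless.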
[jira] [Comment Edited] (CASSANDRA-12294) LDAP Authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394339#comment-15394339 ] Philip Thompson edited comment on CASSANDRA-12294 at 7/26/16 7:01 PM: -- [~beobal], am I right in assuming that a change this large should probably only go into trunk, as far as being merged into tree? was (Author: philipthompson): [~beobal], am I right in assuming that a change this large should probably only go into trunk, as far as being merged into main-line? > LDAP Authentication > --- > > Key: CASSANDRA-12294 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12294 > Project: Cassandra > Issue Type: New Feature > Components: Distributed Metadata >Reporter: Daniel Kleviansky >Assignee: Daniel Kleviansky >Priority: Minor > Labels: security > Fix For: 2.2.x, 3.x > > > Addition of an LDAP authentication plugin, in tree, along side the default > authenticator, so that Cassandra can leverage existing LDAP-speaking servers > to manage user logins. > DSE offers this: [Enabling LDAP authentication | > https://docs.datastax.com/en/datastax_enterprise/4.6/datastax_enterprise/sec/secLdapEnabling.html], > but does not exist in vanilla C* as far as I can tell. > Ideally would like to introduce this as part of the 2.2.x branch, as this is > what is currently running in client production environment, and where it is > needed at the moment. > Would aim for support of at least Microsoft Active Directory running on > Windows Server 2012. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Assigned] (CASSANDRA-12264) dtest failure in repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12264?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson reassigned CASSANDRA-12264: --- Assignee: Philip Thompson (was: DS Test Eng) > dtest failure in > repair_tests.incremental_repair_test.TestIncRepair.sstable_marking_test > > > Key: CASSANDRA-12264 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12264 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: Philip Thompson > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log > > > example failure: > http://cassci.datastax.com/job/cassandra-3.9_dtest/15/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test > {code} > Error Message > 'Repaired at: 0' unexpectedly found in 'SSTable: > {code} > Related failure: > http://cassci.datastax.com/job/trunk_dtest/1315/testReport/repair_tests.incremental_repair_test/TestIncRepair/sstable_marking_test/ -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-12254) dtest: Fix flaky bootstrap_test novnode failures
[ https://issues.apache.org/jira/browse/CASSANDRA-12254?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta resolved CASSANDRA-12254. - Resolution: Duplicate It seems most of these dtests were fixed by CASSANDRA-11414, which just had its PR merged last week (all failures are from before that). > dtest: Fix flaky bootstrap_test novnode failures > > > Key: CASSANDRA-12254 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12254 > Project: Cassandra > Issue Type: Test >Reporter: Paulo Motta >Assignee: Paulo Motta > > While CASSANDRA-11281 is related to bootstrap_test failures on Windows, this > is to fix bootstrap_test no_vnode failures: > * > [bootstrap_test.TestBootstrap.decommissioned_wiped_node_can_join_test|http://cassci.datastax.com/job/trunk_novnode_dtest/421/testReport/bootstrap_test/TestBootstrap/decommissioned_wiped_node_can_join_test/] > ** {noformat} ('Unable to connect to any servers', {'127.0.0.1': error(111, > "Tried connecting to [('127.0.0.1', 9042)]. 
Last error: Connection > refused")}) {noformat} > * > [bootstrap_test.TestBootstrap.failed_bootstrap_wiped_node_can_join_test|http://cassci.datastax.com/job/trunk_novnode_dtest/425/testReport/junit/bootstrap_test/TestBootstrap/failed_bootstrap_wiped_node_can_join_test/] > ** {noformat}15 Jul 2016 01:10:39 [node1] Missing: ['127.0.0.2.* now > UP']:{noformat} > * > [bootstrap_test.TestBootstrap.resumable_bootstrap_test|http://cassci.datastax.com/job/trunk_novnode_dtest/418/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/] > ** {noformat}u'COMPLETED' != 'IN_PROGRESS'{noformat} > * > [bootstrap_test.TestBootstrap.resumable_bootstrap_test|http://cassci.datastax.com/job/trunk_novnode_dtest/416/testReport/junit/bootstrap_test/TestBootstrap/resumable_bootstrap_test/] > ** {noformat}05 Jul 2016 01:27:51 [node3] Missing: ['Starting listening for > CQL clients']:{noformat} > * > [bootstrap_test.TestBootstrap.failed_bootstrap_wiped_node_can_join_test|http://cassci.datastax.com/job/trunk_novnode_dtest/413/testReport/junit/bootstrap_test/TestBootstrap/failed_bootstrap_wiped_node_can_join_test/] > ** {noformat}30 Jun 2016 01:41:41 [node1] Missing: ['127.0.0.2.* now > UP']:{noformat} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Comment Edited] (CASSANDRA-12294) LDAP Authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394339#comment-15394339 ] Philip Thompson edited comment on CASSANDRA-12294 at 7/26/16 7:03 PM: -- [~beobal], am I right in assuming that a change this large should probably only go into trunk, as far as being merged into tree? Or, if it's just a new authorizer, will 2.2.x be fine? was (Author: philipthompson): [~beobal], am I right in assuming that a change this large should probably only go into trunk, as far as being merged into tree? > LDAP Authentication > --- > > Key: CASSANDRA-12294 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12294 > Project: Cassandra > Issue Type: New Feature > Components: Distributed Metadata >Reporter: Daniel Kleviansky >Assignee: Daniel Kleviansky >Priority: Minor > Labels: security > Fix For: 2.2.x, 3.x > > > Addition of an LDAP authentication plugin, in tree, along side the default > authenticator, so that Cassandra can leverage existing LDAP-speaking servers > to manage user logins. > DSE offers this: [Enabling LDAP authentication | > https://docs.datastax.com/en/datastax_enterprise/4.6/datastax_enterprise/sec/secLdapEnabling.html], > but does not exist in vanilla C* as far as I can tell. > Ideally would like to introduce this as part of the 2.2.x branch, as this is > what is currently running in client production environment, and where it is > needed at the moment. > Would aim for support of at least Microsoft Active Directory running on > Windows Server 2012. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12294) LDAP Authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-12294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394339#comment-15394339 ] Philip Thompson commented on CASSANDRA-12294: - [~beobal], am I right in assuming that a change this large should probably only go into trunk, as far as being merged into main-line? > LDAP Authentication > --- > > Key: CASSANDRA-12294 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12294 > Project: Cassandra > Issue Type: New Feature > Components: Distributed Metadata >Reporter: Daniel Kleviansky >Assignee: Daniel Kleviansky >Priority: Minor > Labels: security > Fix For: 2.2.x, 3.x > > > Addition of an LDAP authentication plugin, in tree, along side the default > authenticator, so that Cassandra can leverage existing LDAP-speaking servers > to manage user logins. > DSE offers this: [Enabling LDAP authentication | > https://docs.datastax.com/en/datastax_enterprise/4.6/datastax_enterprise/sec/secLdapEnabling.html], > but does not exist in vanilla C* as far as I can tell. > Ideally would like to introduce this as part of the 2.2.x branch, as this is > what is currently running in client production environment, and where it is > needed at the moment. > Would aim for support of at least Microsoft Active Directory running on > Windows Server 2012. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Resolved] (CASSANDRA-12266) dtest failure in bootstrap_test.TestBootstrap.consistent_range_movement_false_with_two_replicas_down_should_fail_test
[ https://issues.apache.org/jira/browse/CASSANDRA-12266?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta resolved CASSANDRA-12266. - Resolution: Duplicate > dtest failure in > bootstrap_test.TestBootstrap.consistent_range_movement_false_with_two_replicas_down_should_fail_test > - > > Key: CASSANDRA-12266 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12266 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: DS Test Eng > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log, node3.log, node3_debug.log, node3_gc.log, > node4.log, node4_debug.log, node4_gc.log > > > example failure: > http://cassci.datastax.com/job/trunk_novnode_dtest/431/testReport/bootstrap_test/TestBootstrap/consistent_range_movement_false_with_two_replicas_down_should_fail_test > {code} > Stacktrace > File "/usr/lib/python2.7/unittest/case.py", line 329, in run > testMethod() > File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 180, in > consistent_range_movement_false_with_two_replicas_down_should_fail_test > self._bootstrap_test_with_replica_down(False, stop_two_replicas=True) > File "/home/automaton/cassandra-dtest/bootstrap_test.py", line 233, in > _bootstrap_test_with_replica_down > node4.watch_log_for("Unable to find sufficient sources for streaming > range") > File "/home/automaton/ccm/ccmlib/node.py", line 449, in watch_log_for > raise TimeoutError(time.strftime("%d %b %Y %H:%M:%S", time.gmtime()) + " > [" + self.name + "] Missing: " + str([e.pattern for e in tofind]) + ":\n" + > reads[:50] + ".\nSee {} for remainder".format(filename)) > "20 Jul 2016 00:58:03 [node4] Missing: ['Unable to find sufficient sources > for streaming range']:\nINFO [main] 2016-07-20 00:48:02,304 > YamlConfigura.\nSee system.log for remainder > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12269) Faster write path
[ https://issues.apache.org/jira/browse/CASSANDRA-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-12269: --- Labels: performance (was: ) > Faster write path > - > > Key: CASSANDRA-12269 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12269 > Project: Cassandra > Issue Type: Improvement > Components: Local Write-Read Paths, Streaming and Messaging >Reporter: T Jake Luciani >Assignee: T Jake Luciani > Labels: performance > Fix For: 3.10 > > > The new storage engine (CASSANDRA-8099) has caused a regression in write > performance. This ticket is to address it and bring 3.0 as close to 2.2 as > possible. There are four main reasons for this I've discovered after much > toil: > 1. The cost of calculating the size of a serialized row is higher now since > we no longer have the cell name and value managed as ByteBuffers as we did > pre-3.0. That means we currently re-serialize the row twice, once to calculate > the size and once to write the data. This happens during the SSTable writes > and was addressed in CASSANDRA-9766. > Double serialization is also happening in CommitLog and the > MessagingService. We need to apply the same techniques to these as we did to > the SSTable serialization. > 2. Even after fixing (1) there is still an issue with there being more GC > pressure and CPU usage in 3.0 due to the fact that we encode everything from > the {{Column}} to the {{Row}} to the {{Partition}} as a {{BTree}}. > Specifically, the {{BTreeSearchIterator}} is used for all iterator() methods. > Both these classes are useful for efficient removal and searching of the > trees but in the case of SerDe we almost always want to simply walk the > entire tree forwards or reversed and apply a function to each element. To > that end, we can use lambdas and do this without any extra classes. > 3. We use a lot of thread locals and check them constantly on the read/write > paths. 
For client warnings, tracing, temp buffers, etc. We should move all > thread locals to FastThreadLocals and threads to FastThreadLocalThreads. > 4. We changed the memtable flusher defaults in 3.2 that caused a regression > see: CASSANDRA-12228 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
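Point 2 above can be illustrated with a toy stand-in (this is not Cassandra's actual BTree class): when serialization only needs to visit every element in order, traversal methods that accept a lambda avoid allocating a search-iterator object per walk.

```java
import java.util.function.LongConsumer;

// Toy illustration of "walk the whole tree with a lambda" instead of
// allocating an iterator object for each traversal. The long[] stands in
// for a flattened tree; the idea, not the data structure, is the point.
public class TreeWalk {
    final long[] values;

    TreeWalk(long... values) { this.values = values; }

    // Walk forwards applying f to each element; no iterator allocated.
    void apply(LongConsumer f) {
        for (long v : values) f.accept(v);
    }

    // Walk in reverse order, again without an iterator.
    void applyReversed(LongConsumer f) {
        for (int i = values.length - 1; i >= 0; i--) f.accept(values[i]);
    }
}
```

A size calculation, for instance, becomes one apply() call that accumulates into a counter, rather than constructing an iterator per row per serialization pass.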
[jira] [Updated] (CASSANDRA-12269) Faster write path
[ https://issues.apache.org/jira/browse/CASSANDRA-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-12269: --- Component/s: Streaming and Messaging Local Write-Read Paths > Faster write path > - > > Key: CASSANDRA-12269 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12269 > Project: Cassandra > Issue Type: Improvement > Components: Local Write-Read Paths, Streaming and Messaging >Reporter: T Jake Luciani >Assignee: T Jake Luciani > Labels: performance > Fix For: 3.10 > > > The new storage engine (CASSANDRA-8099) has caused a regression in write > performance. This ticket is to address it and bring 3.0 as close to 2.2 as > possible. There are four main reasons for this I've discovered after much > toil: > 1. The cost of calculating the size of a serialized row is higher now since > we no longer have the cell name and value managed as ByteBuffers as we did > pre-3.0. That means we current re-serialize the row twice, once to calculate > the size and once to write the data. This happens during the SSTable writes > and was addressed in CASSANDRA-9766. > Double serialization is also happening in CommitLog and the > MessagingService. We need to apply the same techniques to these as we did to > the SSTable serialization. > 2. Even after fixing (1) there is still an issue with there being more GC > pressure and CPU usage in 3.0 due to the fact that we encode everything from > the {{Column}} to the {{Row}} to the {{Partition}} as a {{BTree}}. > Specifically, the {{BTreeSearchIterator}} is used for all iterator() methods. > Both these classes are useful for efficient removal and searching of the > trees but in the case of SerDe we almost always want to simply walk the > entire tree forwards or reversed and apply a function to each element. To > that end, we can use lambdas and do this without any extra classes. > 3. We use a lot of thread locals and check them constantly on the read/write > paths. 
For client warnings, tracing, temp buffers, etc. We should move all > thread locals to FastThreadLocals and threads to FastThreadLocalThreads. > 4. We changed the memtable flusher defaults in 3.2 that caused a regression > see: CASSANDRA-12228 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-12269) Faster write path
[ https://issues.apache.org/jira/browse/CASSANDRA-12269?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-12269: --- Resolution: Fixed Status: Resolved (was: Patch Available) Nits addressed and CI runs clean [testall|http://cassci.datastax.com/view/Dev/view/tjake/job/tjake-write-perf-testall/lastCompletedBuild/testReport/] [dtest|http://cassci.datastax.com/view/Dev/view/tjake/job/tjake-write-perf2-dtest/lastCompletedBuild/testReport/] committed to trunk as {{dc9ed463417aa8028e77e91718e4f3d6ea563210}} > Faster write path > - > > Key: CASSANDRA-12269 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12269 > Project: Cassandra > Issue Type: Improvement >Reporter: T Jake Luciani >Assignee: T Jake Luciani > Fix For: 3.10 > > > The new storage engine (CASSANDRA-8099) has caused a regression in write > performance. This ticket is to address it and bring 3.0 as close to 2.2 as > possible. There are four main reasons for this I've discovered after much > toil: > 1. The cost of calculating the size of a serialized row is higher now since > we no longer have the cell name and value managed as ByteBuffers as we did > pre-3.0. That means we current re-serialize the row twice, once to calculate > the size and once to write the data. This happens during the SSTable writes > and was addressed in CASSANDRA-9766. > Double serialization is also happening in CommitLog and the > MessagingService. We need to apply the same techniques to these as we did to > the SSTable serialization. > 2. Even after fixing (1) there is still an issue with there being more GC > pressure and CPU usage in 3.0 due to the fact that we encode everything from > the {{Column}} to the {{Row}} to the {{Partition}} as a {{BTree}}. > Specifically, the {{BTreeSearchIterator}} is used for all iterator() methods. 
> Both these classes are useful for efficient removal and searching of the > trees but in the case of SerDe we almost always want to simply walk the > entire tree forwards or reversed and apply a function to each element. To > that end, we can use lambdas and do this without any extra classes. > 3. We use a lot of thread locals and check them constantly on the read/write > paths. For client warnings, tracing, temp buffers, etc. We should move all > thread locals to FastThreadLocals and threads to FastThreadLocalThreads. > 4. We changed the memtable flusher defaults in 3.2 that caused a regression > see: CASSANDRA-12228 -- This message was sent by Atlassian JIRA (v6.3.4#6332)
cassandra git commit: Improve write path performance
Repository: cassandra Updated Branches: refs/heads/trunk 2c0edce09 -> dc9ed4634 Improve write path performance Patch by tjake; reviewed by Stefania Alborghetti for CASSANDRA-12269 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/dc9ed463 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/dc9ed463 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/dc9ed463 Branch: refs/heads/trunk Commit: dc9ed463417aa8028e77e91718e4f3d6ea563210 Parents: 2c0edce Author: T Jake LucianiAuthored: Tue Jun 21 21:53:43 2016 -0400 Committer: T Jake Luciani Committed: Tue Jul 26 14:55:54 2016 -0400 -- CHANGES.txt | 1 + conf/jvm.options| 1 + .../org/apache/cassandra/config/CFMetaData.java | 18 ++- src/java/org/apache/cassandra/db/Columns.java | 11 ++ .../cassandra/db/SerializationHeader.java | 12 +- .../org/apache/cassandra/db/SystemKeyspace.java | 2 +- .../cassandra/db/commitlog/CommitLog.java | 68 + .../db/partitions/AbstractBTreePartition.java | 3 +- .../org/apache/cassandra/db/rows/BTreeRow.java | 49 +++--- src/java/org/apache/cassandra/db/rows/Row.java | 13 ++ src/java/org/apache/cassandra/db/rows/Rows.java | 24 +-- .../rows/UnfilteredRowIteratorSerializer.java | 3 + .../cassandra/db/rows/UnfilteredSerializer.java | 75 +++--- .../org/apache/cassandra/hints/HintsWriter.java | 2 +- .../io/sstable/SSTableSimpleUnsortedWriter.java | 3 +- .../cassandra/io/util/DataOutputBuffer.java | 42 +++--- .../io/util/DataOutputBufferFixed.java | 4 +- .../cassandra/io/util/SafeMemoryWriter.java | 2 +- .../cassandra/net/IncomingTcpConnection.java| 3 +- .../org/apache/cassandra/net/MessageIn.java | 2 +- .../org/apache/cassandra/net/MessageOut.java| 24 ++- .../apache/cassandra/net/MessagingService.java | 2 + .../cassandra/net/OutboundTcpConnection.java| 3 +- .../apache/cassandra/service/ClientWarn.java| 3 +- .../org/apache/cassandra/tracing/Tracing.java | 3 +- .../apache/cassandra/utils/ChecksumType.java| 13 +- 
.../org/apache/cassandra/utils/Wrapped.java | 48 ++ .../apache/cassandra/utils/WrappedBoolean.java | 42 ++ .../cassandra/utils/WrappedException.java | 30 .../org/apache/cassandra/utils/WrappedInt.java | 52 +++ .../org/apache/cassandra/utils/btree/BTree.java | 131 +++- .../db/commitlog/CommitLogStressTest.java | 4 +- .../test/microbench/FastThreadExecutor.java | 96 .../test/microbench/MutationBench.java | 148 +++ .../org/apache/cassandra/utils/BTreeTest.java | 25 35 files changed, 826 insertions(+), 136 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc9ed463/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index c586d10..efbbb4d 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 3.10 + * Faster write path (CASSANDRA-12269) * Option to leave omitted columns in INSERT JSON unset (CASSANDRA-11424) * Support json/yaml output in nodetool tpstats (CASSANDRA-12035) * Expose metrics for successful/failed authentication attempts (CASSANDRA-10635) http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc9ed463/conf/jvm.options -- diff --git a/conf/jvm.options b/conf/jvm.options index 692d06b..9e13e0e 100644 --- a/conf/jvm.options +++ b/conf/jvm.options @@ -118,6 +118,7 @@ # resize them at runtime. 
-XX:+UseTLAB -XX:+ResizeTLAB +-XX:+UseNUMA # http://www.evanjones.ca/jvm-mmap-pause.html -XX:+PerfDisableSharedMem http://git-wip-us.apache.org/repos/asf/cassandra/blob/dc9ed463/src/java/org/apache/cassandra/config/CFMetaData.java -- diff --git a/src/java/org/apache/cassandra/config/CFMetaData.java b/src/java/org/apache/cassandra/config/CFMetaData.java index b175ef1c..beb9d1a 100644 --- a/src/java/org/apache/cassandra/config/CFMetaData.java +++ b/src/java/org/apache/cassandra/config/CFMetaData.java @@ -47,6 +47,7 @@ import org.apache.cassandra.cql3.statements.CFStatement; import org.apache.cassandra.cql3.statements.CreateTableStatement; import org.apache.cassandra.db.*; import org.apache.cassandra.db.compaction.AbstractCompactionStrategy; +import org.apache.cassandra.db.filter.ColumnFilter; import org.apache.cassandra.db.marshal.*; import org.apache.cassandra.dht.IPartitioner; import org.apache.cassandra.exceptions.ConfigurationException; @@ -122,6 +123,9 @@ public final class CFMetaData public final
[jira] [Commented] (CASSANDRA-4650) RangeStreamer should be smarter when picking endpoints for streaming in case of N >=3 in each DC.
[ https://issues.apache.org/jira/browse/CASSANDRA-4650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394318#comment-15394318 ] sankalp kohli commented on CASSANDRA-4650: -- Let me take a look > RangeStreamer should be smarter when picking endpoints for streaming in case > of N >=3 in each DC. > --- > > Key: CASSANDRA-4650 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4650 > Project: Cassandra > Issue Type: Improvement >Affects Versions: 1.1.5 >Reporter: sankalp kohli >Assignee: sankalp kohli >Priority: Minor > Labels: streaming > Attachments: CASSANDRA-4650_trunk.txt, photo-1.JPG > > Original Estimate: 24h > Remaining Estimate: 24h > > getRangeFetchMap method in RangeStreamer should pick unique nodes to stream > data from when number of replicas in each DC is three or more. > When N>=3 in a DC, there are two options for streaming a range. Consider an > example of 4 nodes in one datacenter and replication factor of 3. > If a node goes down, it needs to recover 3 ranges of data. With current code, > two nodes could get selected as it orders the node by proximity. > We ideally will want to select 3 nodes for streaming the data. We can do this > by selecting unique nodes for each range. > Advantages: > This will increase the performance of bootstrapping a node and will also put > less pressure on nodes serving the data. > Note: This does not affect if N < 3 in each DC as then it streams data from > only 2 nodes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-4650) RangeStreamer should be smarter when picking endpoints for streaming in case of N >=3 in each DC.
[ https://issues.apache.org/jira/browse/CASSANDRA-4650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394314#comment-15394314 ] T Jake Luciani commented on CASSANDRA-4650: --- Both but I meant to point to [this|http://cassci.datastax.com/job/tjake-4650-dtest/lastCompletedBuild/testReport/repair_tests.incremental_repair_test/TestIncRepair/multiple_repair_test/] throws {code} ERROR [main] 2016-07-22 16:56:25,438 CassandraDaemon.java:737 - Exception encountered during startup java.lang.IllegalStateException: unable to find sufficient sources for streaming range (-6177303831872713717,-5843451309664294558] in keyspace system_auth at org.apache.cassandra.dht.RangeFetchMapCalculator.getGraph(RangeFetchMapCalculator.java:233) ~[main/:na] at org.apache.cassandra.dht.RangeFetchMapCalculator.getRangeFetchMap(RangeFetchMapCalculator.java:60) ~[main/:na] at org.apache.cassandra.dht.RangeStreamer.getOptimizedRangeFetchMap(RangeStreamer.java:291) ~[main/:na] at org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:169) ~[main/:na] at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:84) ~[main/:na] at org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1380) ~[main/:na] at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:939) ~[main/:na] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:663) ~[main/:na] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:548) ~[main/:na] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:375) [main/:na] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:591) [main/:na] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:720) [main/:na] {code} > RangeStreamer should be smarter when picking endpoints for streaming in case > of N >=3 in each DC. 
> --- > > Key: CASSANDRA-4650 > URL: https://issues.apache.org/jira/browse/CASSANDRA-4650 > Project: Cassandra > Issue Type: Improvement >Affects Versions: 1.1.5 >Reporter: sankalp kohli >Assignee: sankalp kohli >Priority: Minor > Labels: streaming > Attachments: CASSANDRA-4650_trunk.txt, photo-1.JPG > > Original Estimate: 24h > Remaining Estimate: 24h > > getRangeFetchMap method in RangeStreamer should pick unique nodes to stream > data from when number of replicas in each DC is three or more. > When N>=3 in a DC, there are two options for streaming a range. Consider an > example of 4 nodes in one datacenter and replication factor of 3. > If a node goes down, it needs to recover 3 ranges of data. With current code, > two nodes could get selected as it orders the node by proximity. > We ideally will want to select 3 nodes for streaming the data. We can do this > by selecting unique nodes for each range. > Advantages: > This will increase the performance of bootstrapping a node and will also put > less pressure on nodes serving the data. > Note: This does not affect if N < 3 in each DC as then it streams data from > only 2 nodes. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
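The ticket's core idea, preferring a replica not already chosen for another range, can be sketched as a small greedy assignment. The types and names below are hypothetical (RangeStreamer's real API differs); the sketch only shows the selection policy:

```java
import java.util.*;

// Hypothetical sketch of the RangeStreamer idea: given each range's candidate
// replicas ordered by proximity, prefer a replica not yet chosen for another
// range so streaming load spreads across distinct nodes; fall back to the
// closest replica when every candidate is already in use.
public class UniqueSourcePicker {
    static Map<String, String> pickSources(Map<String, List<String>> candidates) {
        Map<String, String> fetchMap = new LinkedHashMap<>();
        Set<String> used = new HashSet<>();
        for (Map.Entry<String, List<String>> e : candidates.entrySet()) {
            List<String> replicas = e.getValue();      // assumed non-empty
            String pick = replicas.get(0);             // fallback: closest
            for (String replica : replicas) {
                if (!used.contains(replica)) { pick = replica; break; }
            }
            used.add(pick);
            fetchMap.put(e.getKey(), pick);
        }
        return fetchMap;
    }
}
```

For the example in the description (4 nodes, RF=3, one node recovering 3 ranges), this picks three distinct sources instead of concentrating the streams on the two closest nodes.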
[jira] [Commented] (CASSANDRA-12307) Command Injection
[ https://issues.apache.org/jira/browse/CASSANDRA-12307?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394315#comment-15394315 ] Eduardo Aguinaga commented on CASSANDRA-12307: -- Hey Chris, yes, but they could utilize this to clean up after themselves and attack the database without leaving much forensic information (or do whatever else they wanted). It's just an opening that could be taken advantage of. And file permissions are part of another conversation: depending on file permissions is a bad thing to rely upon for security. Misconfigured servers are listed among the top 10 problems every year in the HP, Verizon, and other relevant web/security reports. Ed
> Command Injection
> -
>
> Key: CASSANDRA-12307
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12307
> Project: Cassandra
> Issue Type: Bug
> Reporter: Eduardo Aguinaga
> Priority: Critical
> Fix For: 3.0.5
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 of the Cassandra source code. The analysis included an automated analysis using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The results of that analysis include the issue below.
> Issue:
> Two commands, archiveCommand and restoreCommand, are stored as string properties and retrieved on lines 91 and 92 of CommitLogArchiver.java. The only processing performed on the command strings is that tokens are replaced by data available at runtime.
> A malicious command could be entered into the system by storing the malicious command in place of the valid archiveCommand or restoreCommand. The malicious command would then be executed on line 265 within the exec method.
> Any commands that are stored and retrieved should be verified prior to execution. Assuming that the command is safe because it is stored as a local property invites security issues.
> {code:java}
> CommitLogArchiver.java, lines 91-92:
> 91 String archiveCommand = commitlog_commands.getProperty("archive_command");
> 92 String restoreCommand = commitlog_commands.getProperty("restore_command");
>
> CommitLogArchiver.java, lines 261-266:
> 261 private void exec(String command) throws IOException
> 262 {
> 263     ProcessBuilder pb = new ProcessBuilder(command.split(" "));
> 264     pb.redirectErrorStream(true);
> 265     FBUtilities.exec(pb);
> 266 }
>
> CommitLogArchiver.java, lines 152-166:
> 152 public void maybeArchive(final String path, final String name)
> 153 {
> 154     if (Strings.isNullOrEmpty(archiveCommand))
> 155         return;
> 156
> 157     archivePending.put(name, executor.submit(new WrappedRunnable()
> 158     {
> 159         protected void runMayThrow() throws IOException
> 160         {
> 161             String command = archiveCommand.replace("%name", name);
> 162             command = command.replace("%path", path);
> 163             exec(command);
> 164         }
> 165     }));
> 166 }
> {code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
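The "verify before exec" recommendation in the report could be sketched as an allow-list check placed in front of the exec path. The class and its list of permitted binaries below are hypothetical illustrations, not part of Cassandra:

```java
import java.util.Set;

public class CommandValidator {
    // Hypothetical allow-list of executables an operator is permitted to
    // configure for archive_command / restore_command.
    private static final Set<String> ALLOWED_BINARIES =
            Set.of("/bin/cp", "/bin/ln", "/usr/bin/gzip");

    // Reject any configured command whose executable (the first token)
    // is not on the allow-list, instead of trusting the stored property.
    public static boolean isAllowed(String command) {
        if (command == null || command.isEmpty())
            return false;
        String binary = command.split(" ")[0];
        return ALLOWED_BINARIES.contains(binary);
    }

    public static void main(String[] args) {
        System.out.println(isAllowed("/bin/cp %path /backup/%name")); // true
        System.out.println(isAllowed("rm -rf /"));                    // false
    }
}
```

A real fix would likely also constrain the token substitution, but even this coarse check prevents an attacker who can rewrite the property from selecting an arbitrary binary.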
[jira] [Updated] (CASSANDRA-12288) dtest failure in secondary_indexes_test.TestSecondaryIndexes.test_query_indexes_with_vnodes
[ https://issues.apache.org/jira/browse/CASSANDRA-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-12288: Resolution: Fixed Status: Resolved (was: Patch Available) > dtest failure in > secondary_indexes_test.TestSecondaryIndexes.test_query_indexes_with_vnodes > --- > > Key: CASSANDRA-12288 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12288 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: Philip Thompson > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log > > > example failure: > http://cassci.datastax.com/job/trunk_offheap_dtest/347/testReport/secondary_indexes_test/TestSecondaryIndexes/test_query_indexes_with_vnodes > {code} > Standard Output > Unexpected error in node2 log, error: > ERROR [ReadStage-1] 2016-07-20 04:58:27,391 MessageDeliveryTask.java:74 - The > secondary index 'composites_index' is not yet available > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-12127) Queries with empty ByteBuffer values in clustering column restrictions fail for non-composite compact tables
[ https://issues.apache.org/jira/browse/CASSANDRA-12127?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394307#comment-15394307 ] sankalp kohli commented on CASSANDRA-12127: --- [~blerer] any updates here?
> Queries with empty ByteBuffer values in clustering column restrictions fail
> for non-composite compact tables
>
> Key: CASSANDRA-12127
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12127
> Project: Cassandra
> Issue Type: Bug
> Components: CQL
> Reporter: Benjamin Lerer
> Assignee: Benjamin Lerer
> Fix For: 2.1.x, 2.2.x, 3.0.x, 3.x
>
> Attachments: 12127.txt
>
> For the following table:
> {code}
> CREATE TABLE myTable (pk int,
>                       c blob,
>                       value int,
>                       PRIMARY KEY (pk, c)) WITH COMPACT STORAGE;
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('1'), 1);
> INSERT INTO myTable (pk, c, value) VALUES (1, textAsBlob('2'), 2);
> {code}
> The query {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}}
> will result in the following exception:
> {code}
> java.lang.ClassCastException: org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast to org.apache.cassandra.db.composites.CellName
> 	at org.apache.cassandra.db.composites.AbstractCellNameType.cellFromByteBuffer(AbstractCellNameType.java:188)
> 	at org.apache.cassandra.db.composites.AbstractSimpleCellNameType.makeCellName(AbstractSimpleCellNameType.java:125)
> 	at org.apache.cassandra.db.composites.AbstractCellNameType.makeCellName(AbstractCellNameType.java:254)
> 	at org.apache.cassandra.cql3.statements.SelectStatement.makeExclusiveSliceBound(SelectStatement.java:1206)
> 	at org.apache.cassandra.cql3.statements.SelectStatement.applySliceRestriction(SelectStatement.java:1214)
> 	at org.apache.cassandra.cql3.statements.SelectStatement.processColumnFamily(SelectStatement.java:1292)
> 	at org.apache.cassandra.cql3.statements.SelectStatement.process(SelectStatement.java:1259)
> 	at org.apache.cassandra.cql3.statements.SelectStatement.processResults(SelectStatement.java:299)
> [...]
> {code}
> The query {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}}
> will return 2 rows instead of 0.
> The query {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}}
> will fail with:
> {code}
> java.lang.AssertionError
> 	at org.apache.cassandra.db.composites.SimpleDenseCellNameType.create(SimpleDenseCellNameType.java:60)
> 	at org.apache.cassandra.cql3.statements.SelectStatement.addSelectedColumns(SelectStatement.java:853)
> 	at org.apache.cassandra.cql3.statements.SelectStatement.getRequestedColumns(SelectStatement.java:846)
> 	at org.apache.cassandra.cql3.statements.SelectStatement.makeFilter(SelectStatement.java:583)
> 	at org.apache.cassandra.cql3.statements.SelectStatement.getSliceCommands(SelectStatement.java:383)
> 	at org.apache.cassandra.cql3.statements.SelectStatement.getPageableCommand(SelectStatement.java:253)
> [...]
> {code}
> I checked 2.0: {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} works properly, but {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}} returns the same wrong results as in 2.1.
> {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} is rejected with a clear error message: {{Invalid empty value for clustering column of COMPACT TABLE}}.
> As it is not possible to insert an empty ByteBuffer value within the clustering column of a non-composite compact table, those queries do not have much meaning. {{SELECT * FROM myTable WHERE pk = 1 AND c < textAsBlob('');}} and {{SELECT * FROM myTable WHERE pk = 1 AND c = textAsBlob('');}} will return nothing, and {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} will return the entire partition (pk = 1).
> In my opinion those queries should probably all be rejected as it seems that > the fact that {{SELECT * FROM myTable WHERE pk = 1 AND c > textAsBlob('');}} > was accepted in {{2.0}} was due to a bug. > I am of course open to discussion. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
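The rejection the reporter argues for could be a simple up-front check on the clustering bound's ByteBuffer, raising the same error message 2.0 already uses for the equality case. This is a hypothetical sketch of such a guard, not the actual SelectStatement code:

```java
import java.nio.ByteBuffer;

public class EmptyBoundCheck {
    // Reject an empty clustering-column bound on a non-composite COMPACT
    // table up front, instead of failing later deep in the slice path with
    // a ClassCastException or AssertionError as shown in the traces above.
    public static void validateClusteringBound(ByteBuffer value) {
        if (!value.hasRemaining())
            throw new IllegalArgumentException(
                "Invalid empty value for clustering column of COMPACT TABLE");
    }
}
```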
[jira] [Commented] (CASSANDRA-12268) Make MV Index creation robust for wide referent rows
[ https://issues.apache.org/jira/browse/CASSANDRA-12268?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394304#comment-15394304 ] Carl Yeksigian commented on CASSANDRA-12268: This is a side effect of the way that we read in a partition and create all of the mutations for that partition. This can also affect normal MV operations, for example when we issue a partition deletion on a very large partition. We need to be sure that we can build using smaller-than-partition ranges, which should alleviate the issue of holding large amounts of mutations in memory.
> Make MV Index creation robust for wide referent rows
>
> Key: CASSANDRA-12268
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12268
> Project: Cassandra
> Issue Type: Improvement
> Components: Core
> Reporter: Jonathan Shook
> Assignee: Carl Yeksigian
>
> When creating an index for a materialized view for extant data, heap pressure is very dependent on the cardinality of rows associated with each index value. With the way that per-index value rows are created within the index, this can cause unbounded heap pressure, which can cause OOM. This appears to be a side-effect of how each index row is applied atomically as with batches.
> The commit logs can accumulate enough during the process to prevent the node from being restarted. Given that this occurs during global index creation, this can happen on multiple nodes, making stable recovery of a node set difficult, as co-replicas become unavailable to assist in back-filling data from commitlogs.
> While it is understandable that you want to avoid having relatively wide rows even in materialized views, this represents a particularly difficult scenario for triage.
> The basic recommendation for improving this is to sub-group the index creation into smaller chunks internally, providing a maximal bound against the heap pressure when it is needed.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
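The sub-grouping recommendation could look roughly like this: consume the partition's rows through an iterator and apply mutations per bounded chunk, so at most chunkSize rows' worth of mutations are held at once. ChunkedBuilder and the applyBatch callback below are placeholders for the real view-mutation path, which is not shown:

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class ChunkedBuilder {
    // Apply view mutations for a large partition in bounded batches rather
    // than materializing all of the partition's mutations in memory at once.
    // Returns the number of batches applied.
    public static <T> int buildInChunks(Iterator<T> rows, int chunkSize,
                                        java.util.function.Consumer<List<T>> applyBatch) {
        int batches = 0;
        List<T> chunk = new ArrayList<>(chunkSize);
        while (rows.hasNext()) {
            chunk.add(rows.next());
            if (chunk.size() == chunkSize) {
                applyBatch.accept(chunk);   // flush this bounded chunk
                chunk = new ArrayList<>(chunkSize);
                batches++;
            }
        }
        if (!chunk.isEmpty()) {
            applyBatch.accept(chunk);       // flush the remainder
            batches++;
        }
        return batches;
    }
}
```

The real fix would also need to checkpoint between chunks so a restart does not replay the whole partition from the commit log, which is the failure mode the ticket describes.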
[jira] [Commented] (CASSANDRA-4650) RangeStreamer should be smarter when picking endpoints for streaming in case of N >=3 in each DC.
[ https://issues.apache.org/jira/browse/CASSANDRA-4650?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394301#comment-15394301 ] sankalp kohli commented on CASSANDRA-4650: -- Are you worried about all the failures in the dtests, or just the one you are pointing to?
> RangeStreamer should be smarter when picking endpoints for streaming in case
> of N >=3 in each DC.
> ---
>
> Key: CASSANDRA-4650
> URL: https://issues.apache.org/jira/browse/CASSANDRA-4650
> Project: Cassandra
> Issue Type: Improvement
> Affects Versions: 1.1.5
> Reporter: sankalp kohli
> Assignee: sankalp kohli
> Priority: Minor
> Labels: streaming
> Attachments: CASSANDRA-4650_trunk.txt, photo-1.JPG
>
> Original Estimate: 24h
> Remaining Estimate: 24h
>
> getRangeFetchMap method in RangeStreamer should pick unique nodes to stream data from when number of replicas in each DC is three or more.
> When N>=3 in a DC, there are two options for streaming a range. Consider an example of 4 nodes in one datacenter and replication factor of 3.
> If a node goes down, it needs to recover 3 ranges of data. With current code, two nodes could get selected as it orders the node by proximity.
> We ideally will want to select 3 nodes for streaming the data. We can do this by selecting unique nodes for each range.
> Advantages:
> This will increase the performance of bootstrapping a node and will also put less pressure on nodes serving the data.
> Note: This does not affect if N < 3 in each DC as then it streams data from only 2 nodes.
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
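The unique-source idea in the ticket description can be sketched as a greedy pass: for each range, prefer the candidate replica that has been chosen for the fewest ranges so far, so streaming load spreads across the replica set. UniqueSourcePicker and its string-keyed ranges are an illustrative simplification, not the actual RangeStreamer code:

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class UniqueSourcePicker {
    // For each range, greedily pick the candidate replica with the lowest
    // load count so far; ties go to the first candidate in proximity order.
    public static Map<String, String> pickSources(Map<String, List<String>> candidates) {
        Map<String, Integer> load = new HashMap<>();
        Map<String, String> picked = new LinkedHashMap<>();
        for (Map.Entry<String, List<String>> e : candidates.entrySet()) {
            String best = null;
            for (String replica : e.getValue()) {
                if (best == null
                        || load.getOrDefault(replica, 0) < load.getOrDefault(best, 0))
                    best = replica;
            }
            picked.put(e.getKey(), best);
            load.merge(best, 1, Integer::sum);
        }
        return picked;
    }

    public static void main(String[] args) {
        Map<String, List<String>> candidates = new LinkedHashMap<>();
        candidates.put("rangeA", List.of("node1", "node2", "node3"));
        candidates.put("rangeB", List.of("node1", "node2", "node3"));
        candidates.put("rangeC", List.of("node1", "node2", "node3"));
        // With equal candidate sets, each range streams from a distinct node.
        System.out.println(pickSources(candidates));
    }
}
```

With N>=3 and overlapping candidate sets, this picks three distinct sources in the ticket's 4-node/RF=3 example instead of concentrating on the two closest nodes.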
[jira] [Updated] (CASSANDRA-12309) Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
[ https://issues.apache.org/jira/browse/CASSANDRA-12309?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eduardo Aguinaga updated CASSANDRA-12309: - Description:
Overview: In May through June of 2016 a static analysis was performed on version 3.0.5 of the Cassandra source code. The analysis included an automated analysis using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The results of that analysis includes the issue below.
Issue: Dynamically loaded code has the potential to be malicious. The application uses external input to select which classes or code to use, but it does not sufficiently prevent the input from selecting improper classes or code. The snippet below shows the issue on line 588 and the method returns a new instance on line 594 or 598.
CqlConfigHelper.java, lines 584-605:
{code:java}
584 private static AuthProvider getClientAuthProvider(String factoryClassName, Configuration conf)
585 {
586     try
587     {
588         Class c = Class.forName(factoryClassName);
589         if (PlainTextAuthProvider.class.equals(c))
590         {
591             String username = getStringSetting(USERNAME, conf).or("");
592             String password = getStringSetting(PASSWORD, conf).or("");
593             return (AuthProvider) c.getConstructor(String.class, String.class)
594                                   .newInstance(username, password);
595         }
596         else
597         {
598             return (AuthProvider) c.newInstance();
599         }
600     }
601     catch (Exception e)
602     {
603         throw new RuntimeException("Failed to instantiate auth provider:" + factoryClassName, e);
604     }
605 }
{code}

was:
Overview: In May through June of 2016 a static analysis was performed on version 3.0.5 of the Cassandra source code. The analysis included an automated analysis using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The results of that analysis includes the issue below.
Issue: Dynamically loaded code has the potential to be malicious. The application uses external input to select which classes or code to use, but it does not sufficiently prevent the input from selecting improper classes or code. The snippet below shows the issue on line 588.
CqlConfigHelper.java, lines 584-605:
{code:java}
584 private static AuthProvider getClientAuthProvider(String factoryClassName, Configuration conf)
585 {
586     try
587     {
588         Class c = Class.forName(factoryClassName);
589         if (PlainTextAuthProvider.class.equals(c))
590         {
591             String username = getStringSetting(USERNAME, conf).or("");
592             String password = getStringSetting(PASSWORD, conf).or("");
593             return (AuthProvider) c.getConstructor(String.class, String.class)
594                                   .newInstance(username, password);
595         }
596         else
597         {
598             return (AuthProvider) c.newInstance();
599         }
600     }
601     catch (Exception e)
602     {
603         throw new RuntimeException("Failed to instantiate auth provider:" + factoryClassName, e);
604     }
605 }
{code}

> Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select
> Classes or Code
> --
>
> Key: CASSANDRA-12309
> URL: https://issues.apache.org/jira/browse/CASSANDRA-12309
> Project: Cassandra
> Issue Type: Bug
> Reporter: Eduardo Aguinaga
> Fix For: 3.0.5
>
> Overview:
> In May through June of 2016 a static analysis was performed on version 3.0.5 of the Cassandra source code. The analysis included an automated analysis using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The results of that analysis includes the issue below.
> Issue:
> Dynamically loaded code has the potential to be malicious. The application uses external input to select which classes or code to use, but it does not sufficiently prevent the input from selecting improper classes or code.
> The snippet below shows the issue on line 588 and the method returns a new instance on line 594 or 598.
> CqlConfigHelper.java, lines 584-605:
> {code:java}
> 584 private static AuthProvider getClientAuthProvider(String factoryClassName, Configuration conf)
> 585 {
> 586     try
> 587     {
> 588         Class c = Class.forName(factoryClassName);
> 589         if (PlainTextAuthProvider.class.equals(c))
> 590         {
> 591             String username = getStringSetting(USERNAME, conf).or("");
> 592             String password = getStringSetting(PASSWORD, conf).or("");
> 593             return (AuthProvider) c.getConstructor(String.class, String.class)
> 594                                   .newInstance(username, password);
> 595         }
> 596         else
> 597
[jira] [Created] (CASSANDRA-12309) Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
Eduardo Aguinaga created CASSANDRA-12309:
Summary: Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
Key: CASSANDRA-12309
URL: https://issues.apache.org/jira/browse/CASSANDRA-12309
Project: Cassandra
Issue Type: Bug
Reporter: Eduardo Aguinaga
Fix For: 3.0.5

Overview: In May through June of 2016 a static analysis was performed on version 3.0.5 of the Cassandra source code. The analysis included an automated analysis using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The results of that analysis includes the issue below.
Issue: Dynamically loaded code has the potential to be malicious. The application uses external input to select which classes or code to use, but it does not sufficiently prevent the input from selecting improper classes or code. The snippet below shows the issue on line 588.
CqlConfigHelper.java, lines 584-605:
{code:java}
584 private static AuthProvider getClientAuthProvider(String factoryClassName, Configuration conf)
585 {
586     try
587     {
588         Class c = Class.forName(factoryClassName);
589         if (PlainTextAuthProvider.class.equals(c))
590         {
591             String username = getStringSetting(USERNAME, conf).or("");
592             String password = getStringSetting(PASSWORD, conf).or("");
593             return (AuthProvider) c.getConstructor(String.class, String.class)
594                                   .newInstance(username, password);
595         }
596         else
597         {
598             return (AuthProvider) c.newInstance();
599         }
600     }
601     catch (Exception e)
602     {
603         throw new RuntimeException("Failed to instantiate auth provider:" + factoryClassName, e);
604     }
605 }
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
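One common mitigation for this class of finding is to vet the class name against an allow-list before `Class.forName` ever sees it. The helper below is a hypothetical sketch (ProviderLoader and its method names are not Cassandra code), with the allow-list passed in so callers decide policy:

```java
import java.util.Set;

public class ProviderLoader {
    // Only resolve class names that appear on an operator-controlled
    // allow-list; external input can no longer select arbitrary code.
    public static Class<?> loadVetted(String name, Set<String> allowed) {
        if (!allowed.contains(name))
            throw new SecurityException("class name not permitted: " + name);
        try {
            return Class.forName(name);
        } catch (ClassNotFoundException e) {
            throw new IllegalArgumentException("no such class: " + name, e);
        }
    }
}
```

In the CqlConfigHelper case, the allow-list would hold the auth-provider implementations a deployment actually supports.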
[jira] [Updated] (CASSANDRA-12308) Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
[ https://issues.apache.org/jira/browse/CASSANDRA-12308?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Eduardo Aguinaga updated CASSANDRA-12308: - Description:
Overview: In May through June of 2016 a static analysis was performed on version 3.0.5 of the Cassandra source code. The analysis included an automated analysis using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The results of that analysis includes the issue below.
Issue: Dynamically loaded code has the potential to be malicious. The application uses external input to select which classes or code to use, but it does not sufficiently prevent the input from selecting improper classes or code. The snippet below shows the issue which ends on line 585 by instantiating a class by name.
ConfigHelper.java, lines 558-591:
{code:java}
558 @SuppressWarnings("resource")
559 public static Cassandra.Client createConnection(Configuration conf, String host, Integer port) throws IOException
560 {
561     try
562     {
563         TTransport transport = getClientTransportFactory(conf).openTransport(host, port);
564         return new Cassandra.Client(new TBinaryProtocol(transport, true, true));
565     }
566     catch (Exception e)
567     {
568         throw new IOException("Unable to connect to server " + host + ":" + port, e);
569     }
570 }
571
572 public static ITransportFactory getClientTransportFactory(Configuration conf)
573 {
574     String factoryClassName = conf.get(ITransportFactory.PROPERTY_KEY, TFramedTransportFactory.class.getName());
575     ITransportFactory factory = getClientTransportFactory(factoryClassName);
576     Map<String, String> options = getOptions(conf, factory.supportedOptions());
577     factory.setOptions(options);
578     return factory;
579 }
580
581 private static ITransportFactory getClientTransportFactory(String factoryClassName)
582 {
583     try
584     {
585         return (ITransportFactory) Class.forName(factoryClassName).newInstance();
586     }
587     catch (Exception e)
588     {
589         throw new RuntimeException("Failed to instantiate transport factory:" + factoryClassName, e);
590     }
591 }
{code}

was:
Overview: In May through June of 2016 a static analysis was performed on version 3.0.5 of the Cassandra source code. The analysis included an automated analysis using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The results of that analysis includes the issue below.
Issue: Dynamically loaded code has the potential to be malicious. The application uses external input with reflection to select which classes or code to use, but it does not sufficiently prevent the input from selecting improper classes or code. The snippet below shows the issue which ends on line 585 by instantiating a class by name.
ConfigHelper.java, lines 558-591:
{code:java}
558 @SuppressWarnings("resource")
559 public static Cassandra.Client createConnection(Configuration conf, String host, Integer port) throws IOException
560 {
561     try
562     {
563         TTransport transport = getClientTransportFactory(conf).openTransport(host, port);
564         return new Cassandra.Client(new TBinaryProtocol(transport, true, true));
565     }
566     catch (Exception e)
567     {
568         throw new IOException("Unable to connect to server " + host + ":" + port, e);
569     }
570 }
571
572 public static ITransportFactory getClientTransportFactory(Configuration conf)
573 {
574     String factoryClassName = conf.get(ITransportFactory.PROPERTY_KEY, TFramedTransportFactory.class.getName());
575     ITransportFactory factory = getClientTransportFactory(factoryClassName);
576     Map<String, String> options = getOptions(conf, factory.supportedOptions());
577     factory.setOptions(options);
578     return factory;
579 }
580
581 private static ITransportFactory getClientTransportFactory(String factoryClassName)
582 {
583     try
584     {
585         return (ITransportFactory) Class.forName(factoryClassName).newInstance();
586     }
587     catch (Exception e)
588     {
589         throw new RuntimeException("Failed to instantiate transport factory:" + factoryClassName, e);
590     }
591 }
{code}

> Use of Dynamic Class Loading, Use
of Externally-Controlled Input to Select > Classes or Code > -- > > Key: CASSANDRA-12308 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12308 > Project: Cassandra > Issue Type: Bug >Reporter: Eduardo Aguinaga > Fix For: 3.0.5 > > > Overview: > In May through June of 2016 a static analysis was performed on version 3.0.5 > of the Cassandra source code. The analysis included an automated analysis > using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools >
[jira] [Updated] (CASSANDRA-12288) dtest failure in secondary_indexes_test.TestSecondaryIndexes.test_query_indexes_with_vnodes
[ https://issues.apache.org/jira/browse/CASSANDRA-12288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-12288: Reviewer: Jim Witschey Status: Patch Available (was: Open) https://github.com/riptano/cassandra-dtest/pull/1135 > dtest failure in > secondary_indexes_test.TestSecondaryIndexes.test_query_indexes_with_vnodes > --- > > Key: CASSANDRA-12288 > URL: https://issues.apache.org/jira/browse/CASSANDRA-12288 > Project: Cassandra > Issue Type: Test >Reporter: Sean McCarthy >Assignee: Philip Thompson > Labels: dtest > Attachments: node1.log, node1_debug.log, node1_gc.log, node2.log, > node2_debug.log, node2_gc.log > > > example failure: > http://cassci.datastax.com/job/trunk_offheap_dtest/347/testReport/secondary_indexes_test/TestSecondaryIndexes/test_query_indexes_with_vnodes > {code} > Standard Output > Unexpected error in node2 log, error: > ERROR [ReadStage-1] 2016-07-20 04:58:27,391 MessageDeliveryTask.java:74 - The > secondary index 'composites_index' is not yet available > {code} -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Created] (CASSANDRA-12308) Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
Eduardo Aguinaga created CASSANDRA-12308:
Summary: Use of Dynamic Class Loading, Use of Externally-Controlled Input to Select Classes or Code
Key: CASSANDRA-12308
URL: https://issues.apache.org/jira/browse/CASSANDRA-12308
Project: Cassandra
Issue Type: Bug
Reporter: Eduardo Aguinaga
Fix For: 3.0.5

Overview: In May through June of 2016 a static analysis was performed on version 3.0.5 of the Cassandra source code. The analysis included an automated analysis using HP Fortify v4.21 SCA and a manual analysis utilizing SciTools Understand v4. The results of that analysis includes the issue below.
Issue: Dynamically loaded code has the potential to be malicious. The application uses external input with reflection to select which classes or code to use, but it does not sufficiently prevent the input from selecting improper classes or code. The snippet below shows the issue which ends on line 585 by instantiating a class by name.
ConfigHelper.java, lines 558-591:
{code:java}
558 @SuppressWarnings("resource")
559 public static Cassandra.Client createConnection(Configuration conf, String host, Integer port) throws IOException
560 {
561     try
562     {
563         TTransport transport = getClientTransportFactory(conf).openTransport(host, port);
564         return new Cassandra.Client(new TBinaryProtocol(transport, true, true));
565     }
566     catch (Exception e)
567     {
568         throw new IOException("Unable to connect to server " + host + ":" + port, e);
569     }
570 }
571
572 public static ITransportFactory getClientTransportFactory(Configuration conf)
573 {
574     String factoryClassName = conf.get(ITransportFactory.PROPERTY_KEY, TFramedTransportFactory.class.getName());
575     ITransportFactory factory = getClientTransportFactory(factoryClassName);
576     Map<String, String> options = getOptions(conf, factory.supportedOptions());
577     factory.setOptions(options);
578     return factory;
579 }
580
581 private static ITransportFactory getClientTransportFactory(String factoryClassName)
582 {
583     try
584     {
585         return (ITransportFactory) Class.forName(factoryClassName).newInstance();
586     }
587     catch (Exception e)
588     {
589         throw new RuntimeException("Failed to instantiate transport factory:" + factoryClassName, e);
590     }
591 }
{code}
-- This message was sent by Atlassian JIRA (v6.3.4#6332)
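Beyond an allow-list, a cheap hardening step for this pattern is to check that the loaded class actually implements the expected interface before instantiating it, so a configured class name cannot select unrelated code. FactoryLoader below is a hypothetical sketch, not the actual ConfigHelper fix:

```java
public class FactoryLoader {
    // Load a class by name, verify it is assignable to the expected
    // interface, and only then construct it; unrelated classes are
    // rejected before any of their code runs.
    public static <T> T instantiateAs(Class<T> expected, String className) {
        try {
            Class<?> c = Class.forName(className);
            if (!expected.isAssignableFrom(c))
                throw new IllegalArgumentException(
                    className + " does not implement " + expected.getName());
            return expected.cast(c.getDeclaredConstructor().newInstance());
        } catch (ReflectiveOperationException e) {
            throw new RuntimeException("Failed to instantiate " + className, e);
        }
    }
}
```

Note the type check happens before construction; a static initializer of a hostile class can still run during `Class.forName`, which is why an allow-list remains the stronger control.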
[jira] [Updated] (CASSANDRA-10368) Support Restricting non-PK Cols in Materialized View Select Statements
[ https://issues.apache.org/jira/browse/CASSANDRA-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-10368: Assignee: Jochen Niebuhr > Support Restricting non-PK Cols in Materialized View Select Statements > -- > > Key: CASSANDRA-10368 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10368 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Tyler Hobbs >Assignee: Jochen Niebuhr >Priority: Minor > Fix For: 3.x > > Attachments: 10368-3.8.txt > > > CASSANDRA-9664 allows materialized views to restrict primary key columns in > the select statement. Due to CASSANDRA-10261, the patch did not include > support for restricting non-PK columns. Now that the timestamp issue has > been resolved, we can add support for this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (CASSANDRA-10368) Support Restricting non-PK Cols in Materialized View Select Statements
[ https://issues.apache.org/jira/browse/CASSANDRA-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Tyler Hobbs updated CASSANDRA-10368: Status: Patch Available (was: Awaiting Feedback) > Support Restricting non-PK Cols in Materialized View Select Statements > -- > > Key: CASSANDRA-10368 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10368 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Tyler Hobbs >Assignee: Jochen Niebuhr >Priority: Minor > Fix For: 3.x > > Attachments: 10368-3.8.txt > > > CASSANDRA-9664 allows materialized views to restrict primary key columns in > the select statement. Due to CASSANDRA-10261, the patch did not include > support for restricting non-PK columns. Now that the timestamp issue has > been resolved, we can add support for this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10368) Support Restricting non-PK Cols in Materialized View Select Statements
[ https://issues.apache.org/jira/browse/CASSANDRA-10368?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394243#comment-15394243 ] Tyler Hobbs commented on CASSANDRA-10368: - Thanks! Your tests look good to me, so I've started a CI test run: ||branch||testall||dtest|| |[CASSANDRA-10368-trunk|https://github.com/thobbs/cassandra/tree/CASSANDRA-10368-trunk]|[testall|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-10368-trunk-testall]|[dtest|http://cassci.datastax.com/view/Dev/view/thobbs/job/thobbs-CASSANDRA-10368-trunk-dtest]| If the test results look good, I will commit this. > Support Restricting non-PK Cols in Materialized View Select Statements > -- > > Key: CASSANDRA-10368 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10368 > Project: Cassandra > Issue Type: Improvement > Components: CQL >Reporter: Tyler Hobbs >Priority: Minor > Fix For: 3.x > > Attachments: 10368-3.8.txt > > > CASSANDRA-9664 allows materialized views to restrict primary key columns in > the select statement. Due to CASSANDRA-10261, the patch did not include > support for restricting non-PK columns. Now that the timestamp issue has > been resolved, we can add support for this. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Commented] (CASSANDRA-10726) Read repair inserts should not be blocking
[ https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15394236#comment-15394236 ] sankalp kohli commented on CASSANDRA-10726: --- Assigned to [~nachiket_patil] > Read repair inserts should not be blocking > -- > > Key: CASSANDRA-10726 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10726 > Project: Cassandra > Issue Type: Improvement > Components: Coordination >Reporter: Richard Low >Assignee: Nachiket Patil > > Today, if there’s a digest mismatch in a foreground read repair, the insert > to update out of date replicas is blocking. This means, if it fails, the read > fails with a timeout. If a node is dropping writes (maybe it is overloaded or > the mutation stage is backed up for some other reason), all reads to a > replica set could fail. Further, replicas dropping writes get more out of > sync so will require more read repair. > The comment on the code for why the writes are blocking is: > {code} > // wait for the repair writes to be acknowledged, to minimize impact on any > replica that's > // behind on writes in case the out-of-sync row is read multiple times in > quick succession > {code} > but the bad side effect is that reads timeout. Either the writes should not > be blocking or we should return success for the read even if the write times > out. -- This message was sent by Atlassian JIRA (v6.3.4#6332)
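The ticket's second alternative, returning read success even if the repair write times out, can be sketched as bounding the wait on the write acknowledgement and returning the merged read result either way. The names below are illustrative, not the actual read-repair code path:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.TimeUnit;

public class NonBlockingReadRepair {
    // Wait for the repair-write ack only up to a small budget; if the ack
    // does not arrive, the read still succeeds instead of timing out.
    public static String completeRead(String mergedResult,
                                      CompletableFuture<Void> repairAck,
                                      long budgetMillis) {
        try {
            repairAck.get(budgetMillis, TimeUnit.MILLISECONDS);
        } catch (Exception ignored) {
            // Repair write not acknowledged in time: proceed anyway.
        }
        return mergedResult;
    }
}
```

The bounded wait keeps some of the original intent (slowing repeated reads of an out-of-sync row) without letting a write-dropping replica fail every read against the replica set.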
[jira] [Updated] (CASSANDRA-10726) Read repair inserts should not be blocking
[ https://issues.apache.org/jira/browse/CASSANDRA-10726?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] sankalp kohli updated CASSANDRA-10726: -- Assignee: Nachiket Patil > Read repair inserts should not be blocking > -- > > Key: CASSANDRA-10726 > URL: https://issues.apache.org/jira/browse/CASSANDRA-10726 > Project: Cassandra > Issue Type: Improvement > Components: Coordination >Reporter: Richard Low >Assignee: Nachiket Patil > > Today, if there’s a digest mismatch in a foreground read repair, the insert > to update out of date replicas is blocking. This means, if it fails, the read > fails with a timeout. If a node is dropping writes (maybe it is overloaded or > the mutation stage is backed up for some other reason), all reads to a > replica set could fail. Further, replicas dropping writes get more out of > sync so will require more read repair. > The comment on the code for why the writes are blocking is: > {code} > // wait for the repair writes to be acknowledged, to minimize impact on any > replica that's > // behind on writes in case the out-of-sync row is read multiple times in > quick succession > {code} > but the bad side effect is that reads timeout. Either the writes should not > be blocking or we should return success for the read even if the write times > out. -- This message was sent by Atlassian JIRA (v6.3.4#6332)