[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005671#comment-14005671 ]

Cyril Scetbon commented on CASSANDRA-6421:

[~mshuler] your notes were great! Hopefully version 2.1 will come out in July :) So I'm in favor too of getting it into the 2.1 branch. I don't currently have much time to spend on it for the 1.2 and 2.0 branches. I won't hesitate to test it and patch it again if needed (to add authentication support, for example).

Add bash completion to nodetool
-------------------------------
Key: CASSANDRA-6421
URL: https://issues.apache.org/jira/browse/CASSANDRA-6421
Project: Cassandra
Issue Type: New Feature
Components: Tools
Reporter: Cyril Scetbon
Assignee: Cyril Scetbon
Priority: Trivial
Fix For: 2.1 rc1
Attachments: 6421-2.1.txt, 6421.txt

You can find the bash-completion file at https://raw.github.com/cscetbon/cassandra/nodetool-completion/etc/bash_completion.d/nodetool. It uses cqlsh to get keyspaces and namespaces, and could use an environment variable (not implemented) to specify which cqlsh to run if authentication is needed. But I think that's really a good start :)

--
This message was sent by Atlassian JIRA (v6.2#6252)
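The completion script's keyspace lookup boils down to running cqlsh and splitting its output into completion candidates. A minimal sketch of that parsing step, assuming output in the whitespace-separated form that a DESCRIBE KEYSPACES command prints (the sample text below is made up, and actually invoking cqlsh is left out):

```python
def parse_keyspaces(cqlsh_output):
    # DESCRIBE KEYSPACES prints names separated by whitespace, possibly
    # across several lines; flatten them into one sorted candidate list.
    names = []
    for line in cqlsh_output.splitlines():
        names.extend(line.split())
    return sorted(names)

# Illustrative sample output, not captured from a real cluster.
sample = "system_traces  system\ntest\n"
print(parse_keyspaces(sample))  # ['system', 'system_traces', 'test']
```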
[jira] [Assigned] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Stepura reassigned CASSANDRA-7267:

Assignee: Mikhail Stepura (was: Sylvain Lebresne)

Embedded sets in user defined data-types are not updating
---------------------------------------------------------
Key: CASSANDRA-7267
URL: https://issues.apache.org/jira/browse/CASSANDRA-7267
Project: Cassandra
Issue Type: Bug
Components: Core
Reporter: Thomas Zimmer
Assignee: Mikhail Stepura
Fix For: 2.1 rc1

Hi, I just played around with Cassandra 2.1.0 beta2 and I might have found an issue with embedded sets in user-defined data types. Here is how I can reproduce it:

1.) Create a keyspace test
2.) Create a table like this: create table songs (title varchar PRIMARY KEY, band varchar, tags set<varchar>);
3.) Create a UDT like this: create type band_info_type (founded timestamp, members set<varchar>, description text);
4.) Try to insert data: insert into songs (title, band, band_info, tags) values ('The trooper', 'Iron Maiden', {founded:188694000, members: {'Bruce Dickinson', 'Dave Murray', 'Adrian Smith', 'Janick Gers', 'Steve Harris', 'Nicko McBrain'}, description: 'Pure evil metal'}, {'metal', 'england'});
5.) Select the data: select * from songs;

Returns this: The trooper | Iron Maiden | {founded: '1970-01-03 05:24:54+0100', members: {}, description: 'Pure evil metal'} | {'england', 'metal'}

The embedded data set seems to be empty. I also tried updating a row, which also does not seem to work.

Regards, Thomas
[jira] [Commented] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005682#comment-14005682 ]

Mikhail Stepura commented on CASSANDRA-7267:

It looks like a cqlsh bug.
[jira] [Commented] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005686#comment-14005686 ]

Mikhail Stepura commented on CASSANDRA-7267:

The server returns a non-empty {{members}}, but cqlsh fails to decode it for some reason:

{code}
from cassandra.cluster import Cluster

cluster = Cluster()
session = cluster.connect('test')
r = session.execute("select * from songs")
print r
[Row(title=u'The trooper', band=u'Iron Maiden', band_info='\x00\x00\x00\x08\x00\x00\x00\x00\x0b?=\xf0\x00\x00\x00f\x00\x00\x00\x06\x00\x00\x00\x0cAdrian Smith\x00\x00\x00\x0fBruce Dickinson\x00\x00\x00\x0bDave Murray\x00\x00\x00\x0bJanick Gers\x00\x00\x00\rNicko McBrain\x00\x00\x00\x0cSteve Harris\x00\x00\x00\x0fPure evil metal', tags=sortedset([u'england', u'metal']))]
{code}
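The raw {{band_info}} value above can in fact be decoded by hand, which confirms the server sent the full set. A sketch, assuming the 2.1-era UDT wire layout (each field as a 4-byte big-endian length plus its bytes, and a set nested inside a UDT as a 4-byte element count followed by 4-byte length-prefixed elements):

```python
import struct

# The band_info blob exactly as printed in the comment above.
blob = (
    b'\x00\x00\x00\x08'            # field 1 (founded): 8 bytes follow
    b'\x00\x00\x00\x00\x0b?=\xf0'  # 0x0B3F3DF0 == 188694000
    b'\x00\x00\x00f'               # field 2 (members): 0x66 == 102 bytes
    b'\x00\x00\x00\x06'            # 6 set elements, 4-byte count
    b'\x00\x00\x00\x0cAdrian Smith'
    b'\x00\x00\x00\x0fBruce Dickinson'
    b'\x00\x00\x00\x0bDave Murray'
    b'\x00\x00\x00\x0bJanick Gers'
    b'\x00\x00\x00\rNicko McBrain'
    b'\x00\x00\x00\x0cSteve Harris'
    b'\x00\x00\x00\x0fPure evil metal'  # field 3 (description)
)

def read_value(buf, pos):
    """Read one 4-byte-length-prefixed value starting at pos."""
    (n,) = struct.unpack_from('>i', buf, pos)
    pos += 4
    return buf[pos:pos + n], pos + n

founded, pos = read_value(blob, 0)
members_raw, pos = read_value(blob, pos)
description, pos = read_value(blob, pos)

(count,) = struct.unpack_from('>i', members_raw, 0)
members, mpos = [], 4
for _ in range(count):
    name, mpos = read_value(members_raw, mpos)
    members.append(name.decode('utf-8'))

print(struct.unpack('>q', founded)[0])  # 188694000, the value as inserted
print(members)
print(description.decode('utf-8'))      # Pure evil metal
```

All six band members come back intact, so the data loss happens on the client side, not on the server.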
[jira] [Updated] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Stepura updated CASSANDRA-7267:

Labels: cqlsh (was: )
[jira] [Comment Edited] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005687#comment-14005687 ]

Mikhail Stepura edited comment on CASSANDRA-7267 at 5/22/14 7:20 AM:

{code}
r = session.execute("select band_info from songs")
print r
[Row(band_info='\x00\x00\x00\x08\x00\x00\x00\x00\x0b?=\xf0\x00\x00\x00f\x00\x00\x00\x06\x00\x00\x00\x0cAdrian Smith\x00\x00\x00\x0fBruce Dickinson\x00\x00\x00\x0bDave Murray\x00\x00\x00\x0bJanick Gers\x00\x00\x00\rNicko McBrain\x00\x00\x00\x0cSteve Harris\x00\x00\x00\x0fPure evil metal')]

r = session.execute("select band_info.members from songs")
print r
[Row(band_info_members=sortedset())]
{code}

was (Author: mishail):

{code}
r = session.execute("select band_info from songs")
print r
[Row(band_info='\x00\x00\x00\x08\x00\x00\x00\x00\x0b?=\xf0\x00\x00\x00f\x00\x00\x00\x06\x00\x00\x00\x0cAdrian Smith\x00\x00\x00\x0fBruce Dickinson\x00\x00\x00\x0bDave Murray\x00\x00\x00\x0bJanick Gers\x00\x00\x00\rNicko McBrain\x00\x00\x00\x0cSteve Harris\x00\x00\x00\x0fPure evil metal')]

r = session.execute("select band_info.members from songs")
print r
[Row(band_info_members=sortedset())]
{code}
[jira] [Commented] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005687#comment-14005687 ]

Mikhail Stepura commented on CASSANDRA-7267:

{{code}}
r = session.execute("select band_info from songs")
print r
[Row(band_info='\x00\x00\x00\x08\x00\x00\x00\x00\x0b?=\xf0\x00\x00\x00f\x00\x00\x00\x06\x00\x00\x00\x0cAdrian Smith\x00\x00\x00\x0fBruce Dickinson\x00\x00\x00\x0bDave Murray\x00\x00\x00\x0bJanick Gers\x00\x00\x00\rNicko McBrain\x00\x00\x00\x0cSteve Harris\x00\x00\x00\x0fPure evil metal')]

r = session.execute("select band_info.members from songs")
print r
[Row(band_info_members=sortedset())]
{{code}}
[jira] [Comment Edited] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005687#comment-14005687 ]

Mikhail Stepura edited comment on CASSANDRA-7267 at 5/22/14 7:20 AM:

{code}
r = session.execute("select band_info from songs")
print r
[Row(band_info='\x00\x00\x00\x08\x00\x00\x00\x00\x0b?=\xf0\x00\x00\x00f\x00\x00\x00\x06\x00\x00\x00\x0cAdrian Smith\x00\x00\x00\x0fBruce Dickinson\x00\x00\x00\x0bDave Murray\x00\x00\x00\x0bJanick Gers\x00\x00\x00\rNicko McBrain\x00\x00\x00\x0cSteve Harris\x00\x00\x00\x0fPure evil metal')]

r = session.execute("select band_info.members from songs")
print r
[Row(band_info_members=sortedset())]
{code}

was (Author: mishail):

{{code}}
r = session.execute("select band_info from songs")
print r
[Row(band_info='\x00\x00\x00\x08\x00\x00\x00\x00\x0b?=\xf0\x00\x00\x00f\x00\x00\x00\x06\x00\x00\x00\x0cAdrian Smith\x00\x00\x00\x0fBruce Dickinson\x00\x00\x00\x0bDave Murray\x00\x00\x00\x0bJanick Gers\x00\x00\x00\rNicko McBrain\x00\x00\x00\x0cSteve Harris\x00\x00\x00\x0fPure evil metal')]

r = session.execute("select band_info.members from songs")
print r
[Row(band_info_members=sortedset())]
{{code}}
[jira] [Updated] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Stepura updated CASSANDRA-7267:

Description:

Hi, I just played around with Cassandra 2.1.0 beta2 and I might have found an issue with embedded sets in user-defined data types. Here is how I can reproduce it:

1.) Create a keyspace test
2.) Create a table like this: {{create table songs (title varchar PRIMARY KEY, band varchar, tags set<varchar>);}}
3.) Create a UDT like this: {{create type band_info_type (founded timestamp, members set<varchar>, description text);}}
4.) Try to insert data:
{code}
insert into songs (title, band, band_info, tags) values ('The trooper', 'Iron Maiden', {founded:188694000, members: {'Bruce Dickinson', 'Dave Murray', 'Adrian Smith', 'Janick Gers', 'Steve Harris', 'Nicko McBrain'}, description: 'Pure evil metal'}, {'metal', 'england'});
{code}
5.) Select the data: {{select * from songs;}}

Returns this:
{code}
The trooper | Iron Maiden | {founded: '1970-01-03 05:24:54+0100', members: {}, description: 'Pure evil metal'} | {'england', 'metal'}
{code}

The embedded data set seems to be empty. I also tried updating a row, which also does not seem to work.

Regards, Thomas

was: Hi, i just played around with Cassandra 2.1.0 beta2 and i might have found an issue with embedded Sets in User Defined Data Types. Here is how i can reproduce it: 1.) Create a keyspace test 2.) Create a table like this: create table songs (title varchar PRIMARY KEY, band varchar, tags set<varchar>); 3.) Create a udt like this: create type band_info_type (founded timestamp, members set<varchar>, description text); 4.) Try to insert data: insert into songs (title, band, band_info, tags) values ('The trooper', 'Iron Maiden', {founded:188694000, members: {'Bruce Dickinson', 'Dave Murray', 'Adrian Smith', 'Janick Gers', 'Steve Harris', 'Nicko McBrain'}, description: 'Pure evil metal'}, {'metal', 'england'}); 5.) Select the data: select * from songs; Returns this: The trooper | Iron Maiden | {founded: '1970-01-03 05:24:54+0100', members: {}, description: 'Pure evil metal'} | {'england', 'metal'} The embedded data-set seems to empty. I also tried updating a row which also does not seem to work. Regards, Thomas
[jira] [Commented] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005698#comment-14005698 ]

Mikhail Stepura commented on CASSANDRA-7267:

Here is the thing. There are 2 sets in the result: one is embedded in a user type ({{members}}) and one is 'standalone' ({{tags}}). The problem is that the server sends the length of a 'standalone' set as a 2-byte short value, but the length of the 'embedded' one as a 4-byte integer. The python-driver (1.1.2 in this case) always expects 2 bytes, so it fails to decode the embedded set. /cc [~thobbs] [~slebresne]
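The effect of that mismatch is easy to reproduce outside Cassandra. A hedged sketch (the encode/decode helpers below are illustrative, not the driver's real API): serialize a set with a 4-byte count the way the comment says the server does inside a UDT, then decode it with the 2-byte reads the driver uses everywhere:

```python
import struct

def encode_set_nested(items):
    # Server-side encoding inside a UDT (per the comment above):
    # 4-byte element count, then 4-byte length-prefixed elements.
    out = struct.pack('>i', len(items))
    for item in items:
        raw = item.encode('utf-8')
        out += struct.pack('>i', len(raw)) + raw
    return out

def decode_set_standalone(buf):
    # What python-driver 1.1.2 expects for every set:
    # 2-byte count, 2-byte element lengths.
    (n,) = struct.unpack_from('>H', buf, 0)
    pos, items = 2, []
    for _ in range(n):
        (ln,) = struct.unpack_from('>H', buf, pos)
        pos += 2
        items.append(buf[pos:pos + ln].decode('utf-8'))
        pos += ln
    return items

data = encode_set_nested(['metal', 'england'])
# The 4-byte count \x00\x00\x00\x02 starts with \x00\x00, which reads as a
# 2-byte count of 0 -- so the driver reports an empty set: "members: {}".
print(decode_set_standalone(data))  # []
```

This is exactly the symptom in the bug report: the leading zero bytes of the 4-byte count masquerade as an empty 2-byte count, so the set silently decodes as empty rather than raising an error.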
[jira] [Comment Edited] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005698#comment-14005698 ]

Mikhail Stepura edited comment on CASSANDRA-7267 at 5/22/14 7:47 AM:

Here is the thing. There are 2 sets in the result, one is embedded in an user type ({{members}}) and one is 'standalone' ({{tags}}). The problem is that the server sends the length of a 'standalone' set as a 2byte short value and the length for the 'embedded' one as a 4byte integer. The python-driver (1.1.2 in this case) always expects 2 bytes, so it fails to decode the embedded set /cc [~thobbs] [~slebresne]

was (Author: mishail):

Here is the thing. There are 2 sets in the result, one is embedded in an user type ({{members}} and one is 'standalone'({{tags}}). The problem is that the server send the length of a 'standalone' set as a 2byte short value and the length for the 'embedded' as a 4byte integer. The python-driver (1.1.2 in this case) always expects 2 bytes, so it fails to decode the set /cc [~thobbs] [~slebresne]
[jira] [Updated] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Stepura updated CASSANDRA-7267:

Assignee: Sylvain Lebresne (was: Mikhail Stepura)
[jira] [Updated] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mikhail Stepura updated CASSANDRA-7267:

Since Version: 2.1 beta2
Labels: (was: cqlsh)
[jira] [Commented] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14005702#comment-14005702 ]

Sylvain Lebresne commented on CASSANDRA-7267:

Yes, this is expected post-CASSANDRA-6855, and it means cqlsh/the python driver will need to be updated to use the 4-byte encoding format inside user types.
[jira] [Updated] (CASSANDRA-7120) Bad paging state returned for prepared statements for last page
[ https://issues.apache.org/jira/browse/CASSANDRA-7120?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-7120:

Attachment: 7120-alternative.txt

I don't think just copying the Metadata.names as the patch does works, because it loses the Metadata.columnCount, which can be different from the names size (due to CASSANDRA-4911, and more precisely the following call to Metadata#addNonSerializedColumn). We could add some clone() method that preserves this, though I'd have a minor preference for avoiding the reallocation of a Metadata object when no paging is involved (that's the common case and the one we should optimize for). So I'm attaching an alternative patch that makes the paging state final and creates a new Metadata object only when we need to attach a paging state.

Bad paging state returned for prepared statements for last page
---------------------------------------------------------------
Key: CASSANDRA-7120
URL: https://issues.apache.org/jira/browse/CASSANDRA-7120
Project: Cassandra
Issue Type: Bug
Components: Core
Reporter: Tyler Hobbs
Assignee: Tyler Hobbs
Fix For: 2.1 rc1
Attachments: 7120-alternative.txt, 7120.txt

When executing a paged query with a prepared statement, a non-null paging state is sometimes being returned for the final page, causing an endless paging loop. Specifically, this is the schema being used:

{noformat}
CREATE KEYSPACE test3rf WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'};

USE test3rf;

CREATE TABLE test3rf.test (
    k int PRIMARY KEY,
    v int
)
{noformat}

The inserts are like so:

{noformat}
INSERT INTO test3rf.test (k, v) VALUES (?, 0)
{noformat}

With values from [0, 99] used for k. The query is {{SELECT * FROM test3rf.test}} with a fetch size of 3. The final page returns the row with k=3, and the paging state is {{000400420004000176007fa2}}. This matches the paging state from three pages earlier. When executing this with a non-prepared statement, no paging state is returned for this page.

This problem doesn't happen with the 2.0 branch.
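Why a stale paging state matters: a typical client-side paging loop stops only when the last page comes back with no paging state, so a non-null state on the final page makes the client replay earlier pages forever. A sketch with made-up names (this is not the real driver API), using a fake pager that behaves correctly:

```python
def fetch_all(get_page):
    # Drain a paged query; only a missing (None) paging state ends the loop.
    rows, state = [], None
    while True:
        page, state = get_page(state)
        rows.extend(page)
        if state is None:
            return rows

def make_pager(keys, fetch_size):
    # Fake server: pages over `keys` and returns None as the state on the
    # last page -- the contract the bug above violates.
    def get_page(state):
        start = state or 0
        nxt = start + fetch_size
        return keys[start:nxt], (nxt if nxt < len(keys) else None)
    return get_page

print(fetch_all(make_pager(list(range(7)), 3)))  # [0, 1, 2, 3, 4, 5, 6]
```

If the final page instead carried the paging state from three pages earlier, as reported, this loop would cycle through those pages indefinitely.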
[jira] [Updated] (CASSANDRA-7284) ClassCastException in HintedHandoffManager.pagingFinished
[ https://issues.apache.org/jira/browse/CASSANDRA-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-7284: Attachment: 7284-v2.txt I'll admit that I have a (very) strong preference for not making getColumn() take a Composite. That it takes a CellName is very much on purpose since giving a non-complete cell name is non-sensical and using the the type system to help document and enforce that. So attaching a v2 that simply don't call getColumn() when we don't have a true cell name. ClassCastException in HintedHandoffManager.pagingFinished - Key: CASSANDRA-7284 URL: https://issues.apache.org/jira/browse/CASSANDRA-7284 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Benedict Priority: Minor Fix For: 2.1 rc1 Attachments: 7284-v2.txt, 7284.txt During a long running stress test on bdplab, Ryan encountered the following interesting exception, which I think is an as yet unfiled bug. Looks to be a pretty simple issue, introduced by CASSANDRA-5417 {code} java.lang.ClassCastException: org.apache.cassandra.db.composites.Composites$EmptyComposite cannot be cast to org.apache.cassandra.db.composites.CellName at org.apache.cassandra.db.HintedHandOffManager.pagingFinished(HintedHandOffManager.java:266) ~[main/:na] at org.apache.cassandra.db.HintedHandOffManager.doDeliverHintsToEndpoint(HintedHandOffManager.java:376) ~[main/:na] at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:331) ~[main/:na] at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:93) ~[main/:na] at org.apache.cassandra.db.HintedHandOffManager$5.run(HintedHandOffManager.java:547) ~[main/:na] at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) ~[na:1.7.0_51] at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) ~[na:1.7.0_51] at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_51] 
{code} -- This message was sent by Atlassian JIRA (v6.2#6252)
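The committed approach can be illustrated with a minimal, self-contained sketch. The types below are hypothetical stand-ins for Cassandra's Composite/CellName hierarchy (the real interfaces live in org.apache.cassandra.db.composites); the point is only to show why an unconditional cast throws for the empty composite and how short-circuiting on isEmpty() avoids ever reaching the cast:

```java
// Hypothetical stand-ins for org.apache.cassandra.db.composites types.
interface Composite { boolean isEmpty(); }
interface CellName extends Composite {}

final class EmptyComposite implements Composite {
    public boolean isEmpty() { return true; }
}

final class SimpleCellName implements CellName {
    public boolean isEmpty() { return false; }
}

public class PagingCheck {
    // Stand-in for hintColumnFamily.getColumn(CellName): like the real
    // method, it only accepts a complete cell name.
    static boolean columnExists(CellName name) {
        return name != null;
    }

    // Pre-fix shape: the cast runs unconditionally, so an EmptyComposite
    // triggers a ClassCastException at runtime.
    static boolean pagingFinishedUnguarded(Composite startColumn) {
        return columnExists((CellName) startColumn);
    }

    // v2 shape: short-circuit on isEmpty() so the cast is only reached
    // when startColumn is a true cell name.
    static boolean pagingFinishedGuarded(Composite startColumn) {
        return !startColumn.isEmpty() && columnExists((CellName) startColumn);
    }

    public static void main(String[] args) {
        System.out.println(pagingFinishedGuarded(new SimpleCellName())); // true
        System.out.println(pagingFinishedGuarded(new EmptyComposite())); // false, cast never attempted
        try {
            pagingFinishedUnguarded(new EmptyComposite());
        } catch (ClassCastException e) {
            System.out.println("unguarded cast failed: " + e); // the bug from the stack trace
        }
    }
}
```

This mirrors the one-line change in the v2 patch: a `!startColumn.isEmpty()` guard evaluated before the `(CellName)startColumn` cast.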
[jira] [Comment Edited] (CASSANDRA-7284) ClassCastException in HintedHandoffManager.pagingFinished
[ https://issues.apache.org/jira/browse/CASSANDRA-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005803#comment-14005803 ] Benedict edited comment on CASSANDRA-7284 at 5/22/14 10:03 AM: --- Either WFM: but if we're taking that approach I'd rather perform an instanceof CellName check, just to be sure we don't get bitten in future was (Author: benedict): Either WFM: but if we're taking that approach I'd rather perform an instanceof CellName check, just to be sure we don't get bitten in future though -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7284) ClassCastException in HintedHandoffManager.pagingFinished
[ https://issues.apache.org/jira/browse/CASSANDRA-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005803#comment-14005803 ] Benedict commented on CASSANDRA-7284: - Either WFM: but if we're taking that approach I'd rather perform an instanceof CellName check, just to be sure we don't get bitten in future though -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7284) ClassCastException in HintedHandoffManager.pagingFinished
[ https://issues.apache.org/jira/browse/CASSANDRA-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005812#comment-14005812 ] Sylvain Lebresne commented on CASSANDRA-7284: - I would argue that an instanceof check is less future-proof. If we end up passing a composite that is not empty and not a CellName in the future, and we don't update that method, then an instanceof would just silence what would possibly be a bug, making it hard to track down (sure, there is a chance an instanceof would be the right thing to do, but it might not be, and I'm not a betting man). If we do change the code so a non-empty composite can be passed to this method and the method is not updated, then getting a ClassCastException will make it very easy to track down what's wrong, and we can apply the right fix. That said, I could live with an instanceof; I just don't think it's better. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7284) ClassCastException in HintedHandoffManager.pagingFinished
[ https://issues.apache.org/jira/browse/CASSANDRA-7284?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005813#comment-14005813 ] Benedict commented on CASSANDRA-7284: - (shrug). That's a valid point - not sure it's better either, but no strong feelings on it, so happy to +1 the current patch. -- This message was sent by Atlassian JIRA (v6.2#6252)
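For readers following the instanceof-versus-cast debate, here is a hedged sketch of the trade-off (the types are hypothetical, not Cassandra's real ones): an instanceof check maps any unexpected Composite kind to false silently, whereas guarding only on isEmpty() leaves the cast in place, so an unexpected kind still fails fast:

```java
// Hypothetical types, including an extra Composite kind that is neither
// empty nor a CellName: the "unexpected input" case discussed above.
interface Composite { boolean isEmpty(); }
interface CellName extends Composite {}

final class PrefixComposite implements Composite {
    public boolean isEmpty() { return false; }
}

public class FailFast {
    static boolean lookup(CellName name) { return name != null; } // stand-in for getColumn()

    // instanceof variant: an unexpected kind silently evaluates to false,
    // which could mask a caller-side bug.
    static boolean withInstanceof(Composite c) {
        return c instanceof CellName && lookup((CellName) c);
    }

    // committed variant: only the empty composite skips the cast, so an
    // unexpected kind fails loudly and is easy to track down.
    static boolean withGuardedCast(Composite c) {
        return !c.isEmpty() && lookup((CellName) c);
    }

    public static void main(String[] args) {
        Composite unexpected = new PrefixComposite();
        System.out.println(withInstanceof(unexpected)); // false: the oddity is swallowed
        try {
            withGuardedCast(unexpected);
        } catch (ClassCastException e) {
            System.out.println("loud failure, easy to debug");
        }
    }
}
```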
[jira] [Updated] (CASSANDRA-7010) bootstrap_test simple_bootstrap_test dtest fails in 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Marcus Eriksson updated CASSANDRA-7010: --- Attachment: 7010.patch I haven't been able to reproduce the exact error in the description here, but it does fail intermittently for me as well. I think the failures I get are due to the fact that assert_almost_equal defines "almost" as within 16%; if I bump that to 30% I can't reproduce the error (it has been running in a loop for 2 hours now). I think we simply sometimes miss the 16% due to the randomness of vnode selection. bootstrap_test simple_bootstrap_test dtest fails in 2.1 --- Key: CASSANDRA-7010 URL: https://issues.apache.org/jira/browse/CASSANDRA-7010 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Michael Shuler Assignee: Marcus Eriksson Fix For: 2.1 rc1 Attachments: 7010.patch I patched ccm with https://github.com/pcmanus/ccm/pull/109 and got an error from simple_bootstrap: {noformat} == FAIL: simple_bootstrap_test (bootstrap_test.TestBootstrap) -- Traceback (most recent call last): File "/home/mshuler/git/cassandra-dtest/bootstrap_test.py", line 58, in simple_bootstrap_test assert_almost_equal(initial_size, 2 * size1) File "/home/mshuler/git/cassandra-dtest/assertions.py", line 26, in assert_almost_equal assert vmin > vmax * (1.0 - error), "values not within %.2f%% of the max: %s" % (error * 100, args) AssertionError: values not within 16.00% of the max: (0, 186396) {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
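The tolerance logic at issue is easy to restate. The sketch below is a Java rendering of the dtest helper's check under assumed semantics (the real assert_almost_equal is Python, in cassandra-dtest/assertions.py); it shows why a run can pass at a 30% tolerance while failing at 16%:

```java
import java.util.Arrays;

public class AlmostEqual {
    // Assumed semantics of the dtest helper: every value must be within
    // `error` (a fraction) of the maximum value.
    static void assertAlmostEqual(double error, double... values) {
        double vmax = Double.NEGATIVE_INFINITY;
        double vmin = Double.POSITIVE_INFINITY;
        for (double v : values) {
            vmax = Math.max(vmax, v);
            vmin = Math.min(vmin, v);
        }
        // Mirrors the Python assert: vmin > vmax * (1.0 - error)
        if (!(vmin > vmax * (1.0 - error)))
            throw new AssertionError(String.format(
                "values not within %.2f%% of the max: %s",
                error * 100, Arrays.toString(values)));
    }

    public static void main(String[] args) {
        assertAlmostEqual(0.16, 100, 90);      // 90 is within 16% of 100: passes
        assertAlmostEqual(0.30, 100, 75);      // 75 is within the bumped 30%: passes
        try {
            assertAlmostEqual(0.16, 0, 186396); // the failing values from the report
        } catch (AssertionError e) {
            System.out.println(e.getMessage());
        }
    }
}
```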
git commit: Fix potential ClassCastException in HintedHandoffManager
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 0fdf2ddbf - 6127f8567 Fix potential ClassCastException in HintedHandoffManager patch by slebresne; reviewed by benedict for CASSANDRA-7284 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6127f856 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6127f856 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6127f856 Branch: refs/heads/cassandra-2.1 Commit: 6127f85670e9aeb569d5cc74468a2e17cc93b0bb Parents: 0fdf2dd Author: Sylvain Lebresne sylv...@datastax.com Authored: Thu May 22 11:51:32 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Thu May 22 12:33:44 2014 +0200 -- CHANGES.txt| 1 + src/java/org/apache/cassandra/db/HintedHandOffManager.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6127f856/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index d38fe5d..6b08fad 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -20,6 +20,7 @@ * Add validate method to CollectionType (CASSANDRA-7208) * New serialization format for UDT values (CASSANDRA-7209, CASSANDRA-7261) * Fix nodetool netstats (CASSANDRA-7270) + * Fix potential ClassCastException in HintedHandoffManager (CASSANDRA-7284) Merged from 2.0: * Always reallocate buffers in HSHA (CASSANDRA-6285) * (Hadoop) support authentication in CqlRecordReader (CASSANDRA-7221) http://git-wip-us.apache.org/repos/asf/cassandra/blob/6127f856/src/java/org/apache/cassandra/db/HintedHandOffManager.java -- diff --git a/src/java/org/apache/cassandra/db/HintedHandOffManager.java b/src/java/org/apache/cassandra/db/HintedHandOffManager.java index 0337756..4aa3c1b 100644 --- a/src/java/org/apache/cassandra/db/HintedHandOffManager.java +++ b/src/java/org/apache/cassandra/db/HintedHandOffManager.java @@ -264,7 +264,7 @@ public class HintedHandOffManager implements HintedHandOffManagerMBean { // 
done if no hints found or the start column (same as last column processed in previous iteration) is the only one return hintColumnFamily == null - || (hintColumnFamily.getSortedColumns().size() == 1 && hintColumnFamily.getColumn((CellName)startColumn) != null); + || (!startColumn.isEmpty() && hintColumnFamily.getSortedColumns().size() == 1 && hintColumnFamily.getColumn((CellName)startColumn) != null); } private int waitForSchemaAgreement(InetAddress endpoint) throws TimeoutException
[2/2] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/d4bf6d32 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/d4bf6d32 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/d4bf6d32 Branch: refs/heads/trunk Commit: d4bf6d3283b4eaacbf4e4deab339ed8259db7902 Parents: 0afad2c 6127f85 Author: Sylvain Lebresne sylv...@datastax.com Authored: Thu May 22 12:34:46 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Thu May 22 12:34:46 2014 +0200 -- CHANGES.txt| 1 + src/java/org/apache/cassandra/db/HintedHandOffManager.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/d4bf6d32/CHANGES.txt --
[1/2] git commit: Fix potential ClassCastException in HintedHandoffManager
Repository: cassandra Updated Branches: refs/heads/trunk 0afad2c1d - d4bf6d328 Fix potential ClassCastException in HintedHandoffManager patch by slebresne; reviewed by benedict for CASSANDRA-7284 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/6127f856 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/6127f856 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/6127f856 Branch: refs/heads/trunk Commit: 6127f85670e9aeb569d5cc74468a2e17cc93b0bb Parents: 0fdf2dd Author: Sylvain Lebresne sylv...@datastax.com Authored: Thu May 22 11:51:32 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Thu May 22 12:33:44 2014 +0200 -- CHANGES.txt| 1 + src/java/org/apache/cassandra/db/HintedHandOffManager.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/6127f856/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index d38fe5d..6b08fad 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -20,6 +20,7 @@ * Add validate method to CollectionType (CASSANDRA-7208) * New serialization format for UDT values (CASSANDRA-7209, CASSANDRA-7261) * Fix nodetool netstats (CASSANDRA-7270) + * Fix potential ClassCastException in HintedHandoffManager (CASSANDRA-7284) Merged from 2.0: * Always reallocate buffers in HSHA (CASSANDRA-6285) * (Hadoop) support authentication in CqlRecordReader (CASSANDRA-7221) http://git-wip-us.apache.org/repos/asf/cassandra/blob/6127f856/src/java/org/apache/cassandra/db/HintedHandOffManager.java -- diff --git a/src/java/org/apache/cassandra/db/HintedHandOffManager.java b/src/java/org/apache/cassandra/db/HintedHandOffManager.java index 0337756..4aa3c1b 100644 --- a/src/java/org/apache/cassandra/db/HintedHandOffManager.java +++ b/src/java/org/apache/cassandra/db/HintedHandOffManager.java @@ -264,7 +264,7 @@ public class HintedHandOffManager implements HintedHandOffManagerMBean { // done if no hints 
found or the start column (same as last column processed in previous iteration) is the only one return hintColumnFamily == null - || (hintColumnFamily.getSortedColumns().size() == 1 && hintColumnFamily.getColumn((CellName)startColumn) != null); + || (!startColumn.isEmpty() && hintColumnFamily.getSortedColumns().size() == 1 && hintColumnFamily.getColumn((CellName)startColumn) != null); } private int waitForSchemaAgreement(InetAddress endpoint) throws TimeoutException
[jira] [Commented] (CASSANDRA-5220) Repair improvements when using vnodes
[ https://issues.apache.org/jira/browse/CASSANDRA-5220?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005843#comment-14005843 ] Juho Mäkinen commented on CASSANDRA-5220: - In addition, the repair operation gives poor visibility into its progress, so it would be nice if some additional logging about repair progress were added both to log4j and to JMX. Repair improvements when using vnodes - Key: CASSANDRA-5220 URL: https://issues.apache.org/jira/browse/CASSANDRA-5220 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 1.2.0 beta 1 Reporter: Brandon Williams Assignee: Yuki Morishita Labels: performance, repair Fix For: 2.1.1 Attachments: 5220-yourkit.png, 5220-yourkit.tar.bz2 Currently when using vnodes, repair takes much longer to complete than without them. This appears at least in part because it's using a session per range and processing them sequentially. This generates a lot of log spam with vnodes, and while being gentler and lighter on hard disk deployments, ssd-based deployments would often prefer that repair be as fast as possible. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (CASSANDRA-5483) Repair tracing
[ https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13998924#comment-13998924 ] Ben Chan edited comment on CASSANDRA-5483 at 5/22/14 12:01 PM: --- May 14 formatting changes in [^5483-v13-608fb03-May-14-trace-formatting-changes.patch] (based off of commit 608fb03). {quote} I think the session log messages are still confusing, especially since we use the same term for repairing a subrange and streaming data. {quote} Currently the session terminology is baked into the source code, in {{StreamSession.java}} and {{RepairSession.java}}. If the messages are changed to reflect different terminology, hopefully the source code can eventually be changed to match (fewer special cases to remember). Perhaps the best thing is to always qualify them, e.g. stream session and repair session? {quote} I don't actually see the session uuid being used in the logs except at start/finish. {quote} Sorry, that was another inadvertent mixing of nodetool messages and trace output. {{\[2014-05-13 23:49:52,283] Repair session cd6aad80-db1a-11e3-b0e7-f94811c7b860 for range (3074457345618258602,-9223372036854775808] finished}} is not a trace, but a separate (pre-patch) sendNotification in {{StorageService.java}}. This message (and some of the error messages, I think) is redundant when combined with trace output. It should have been either one or the other, not both. In the trace proper, the session UUID only shows up at the start. But note: not all nodetool messages are rendered redundant by trace output. Since we can't just suppress all non-trace sendNotification, how can we unambiguously tell nodetool trace output from normal sendNotification messages? I'm currently leaning towards just marking all sendNotification trace output with a {{TRACE:}} tag. The repair session UUIDs used to be prepended to everything, but were removed in [^5483-v08-11-Shorten-trace-messages.-Use-Tracing-begin.patch]. 
Without them, things are less verbose, but it's sometimes hard to unambiguously follow traces for concurrent repair sessions. To make the point clearer, I've marked each sub-task graphically in the nodetool trace output below (I've cross-checked this with the logs, which do retain the UUIDs). If you cover up the left side, it's harder to figure out which trace goes with which sub-task. Real-world repair traces will probably be even more confusing. Note: indentation here does not denote nesting; the column roughly indicates task identity, though I reuse columns when it's not ambiguous. {noformat} 1 [2014-05-15 11:31:37,839] Starting repair command #1, repairing 3 ranges for s1.users (seq=true, full=true) x [2014-05-15 11:31:37,922] Syncing range (-3074457345618258603,3074457345618258602] x [2014-05-15 11:31:38,108] Requesting merkle trees for users from [/127.0.0.2, /127.0.0.3, /127.0.0.1] x [2014-05-15 11:31:38,833] /127.0.0.2: Sending completed merkle tree to /127.0.0.1 for s1.users x [2014-05-15 11:31:39,953] Received merkle tree for users from /127.0.0.2 x [2014-05-15 11:31:40,939] /127.0.0.3: Sending completed merkle tree to /127.0.0.1 for s1.users x [2014-05-15 11:31:41,279] Received merkle tree for users from /127.0.0.3 x [2014-05-15 11:31:42,632] Received merkle tree for users from /127.0.0.1 x [2014-05-15 11:31:42,671] Syncing range (-9223372036854775808,-3074457345618258603] x [2014-05-15 11:31:42,766] Requesting merkle trees for users from [/127.0.0.2, /127.0.0.3, /127.0.0.1] x [2014-05-15 11:31:42,905] Endpoint /127.0.0.2 is consistent with /127.0.0.3 for users x [2014-05-15 11:31:43,044] Endpoint /127.0.0.2 is consistent with /127.0.0.1 for users x [2014-05-15 11:31:43,047] Endpoint /127.0.0.3 is consistent with /127.0.0.1 for users x [2014-05-15 11:31:43,084] Completed sync of range (-3074457345618258603,3074457345618258602] x [2014-05-15 11:31:43,251] /127.0.0.2: Sending completed merkle tree to /127.0.0.1 for s1.users x [2014-05-15 11:31:43,422] 
Received merkle tree for users from /127.0.0.2 x [2014-05-15 11:31:44,495] /127.0.0.3: Sending completed merkle tree to /127.0.0.1 for s1.users x [2014-05-15 11:31:44,637] Received merkle tree for users from /127.0.0.3 x [2014-05-15 11:31:45,474] Received merkle tree for users from /127.0.0.1 x [2014-05-15 11:31:45,494] Syncing range (3074457345618258602,-9223372036854775808] x [2014-05-15 11:31:45,499] Endpoint /127.0.0.3 is consistent with /127.0.0.1 for users x [2014-05-15 11:31:45,520] Endpoint /127.0.0.2 is consistent with /127.0.0.1 for users x [2014-05-15 11:31:45,544] Endpoint /127.0.0.2 is consistent with /127.0.0.3 for users x [2014-05-15 11:31:45,564] Completed sync of range
[jira] [Commented] (CASSANDRA-5483) Repair tracing
[ https://issues.apache.org/jira/browse/CASSANDRA-5483?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005853#comment-14005853 ] Ben Chan commented on CASSANDRA-5483: - {quote} Is that because the traces are asynchronous? Because I think session 2 only starts after session 1 finishes. {quote} Here's my high level understanding. * Repair commands #1, #2, etc. are serial. * Each repair session ("Syncing range ...") is technically concurrent, since each is submitted to a ThreadPoolExecutor. ** However, differencing is serialized, so if there is no streaming going on, you won't see very much overlap between the sessions, except at the beginning and end (which is exactly what we see with these simple tests). ** Conversely, this means you will see much more interleaving when heavy streaming is going on. So at the very least, it might be good to eventually disambiguate the streaming portion. {quote} The easiest thing would be to make them non-redundant. Can we make the tracing extra detail on top of the normal ones instead of competing with them? {quote} I think it may be a conceptual block on my part. I tend to think of traces as a kind of profiling mechanism. * Most of the sendNotification calls in StorageService#createRepairTask consist of reporting any errors from the results of RepairFuture objects. So the timing on those is not really that useful for profiling. They're not really what I'd usually think of as a trace. * Some are request validation reporting before the repair proper even starts. * The rest are informational sendNotification messages which are redundant when tracing is active (this is the easy case). In pseudocode: {noformat} if (some error #1 in repair request) sendNotification("NO #1!"); if (some error #2 in repair request) sendNotification("NO #2!"); for (r : ranges) { f = something.submitRepairSession(new RepairSession(r)); futures.add(f); try { // this serializes the differencing part. f.waitForDifferencing(); } catch (SomeException) { // handle, sendNotification } } try { for (f : futures) { r = f.get(); sendNotification("done: %s", r); } } catch (ExecutionException ee) { // handle, sendNotification } catch (Exception e) { // handle, sendNotification } {noformat} The main point being that I can't be sure that every single interesting exception is caught and traced in the thread where it's thrown, then rethrown. Most likely, this is not the case, and some exceptions are only reported at the StorageService#createRepairTask level. I believe most (?) cases are already caught and traced, though. So after going through all that, I'm thinking that the easiest thing is to just accept the possibility of redundancy and delayed reporting, and just trace all sendNotification in StorageService#createRepairTask (unless it's demonstrably redundant, or already being traced through some other mechanism). Repair tracing -- Key: CASSANDRA-5483 URL: https://issues.apache.org/jira/browse/CASSANDRA-5483 Project: Cassandra Issue Type: Improvement Components: Tools Reporter: Yuki Morishita Assignee: Ben Chan Priority: Minor Labels: repair Attachments: 5483-full-trunk.txt, 5483-v06-04-Allow-tracing-ttl-to-be-configured.patch, 5483-v06-05-Add-a-command-column-to-system_traces.events.patch, 5483-v06-06-Fix-interruption-in-tracestate-propagation.patch, 5483-v07-07-Better-constructor-parameters-for-DebuggableThreadPoolExecutor.patch, 5483-v07-08-Fix-brace-style.patch, 5483-v07-09-Add-trace-option-to-a-more-complete-set-of-repair-functions.patch, 5483-v07-10-Correct-name-of-boolean-repairedAt-to-fullRepair.patch, 5483-v08-11-Shorten-trace-messages.-Use-Tracing-begin.patch, 5483-v08-12-Trace-streaming-in-Differencer-StreamingRepairTask.patch, 5483-v08-13-sendNotification-of-local-traces-back-to-nodetool.patch, 5483-v08-14-Poll-system_traces.events.patch, 5483-v08-15-Limit-trace-notifications.-Add-exponential-backoff.patch,
5483-v09-16-Fix-hang-caused-by-incorrect-exit-code.patch, 5483-v10-17-minor-bugfixes-and-changes.patch, 5483-v10-rebased-and-squashed-471f5cc.patch, 5483-v11-01-squashed.patch, 5483-v11-squashed-nits.patch, 5483-v12-02-cassandra-yaml-ttl-doc.patch, 5483-v13-608fb03-May-14-trace-formatting-changes.patch, ccm-repair-test, cqlsh-left-justify-text-columns.patch, prerepair-vs-postbuggedrepair.diff, test-5483-system_traces-events.txt, trunk@4620823-5483-v02-0001-Trace-filtering-and-tracestate-propagation.patch, trunk@4620823-5483-v02-0002-Put-a-few-traces-parallel-to-the-repair-logging.patch, tr...@8ebeee1-5483-v01-001-trace-filtering-and-tracestate-propagation.txt, tr...@8ebeee1-5483-v01-002-simple-repair-tracing.txt,
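The control flow described in Ben Chan's pseudocode (sessions submitted to an executor but effectively serialized by waiting on each one in submission order, with results and errors surfacing only at the outer level) can be sketched as follows; all names and ranges here are illustrative, not Cassandra's real API:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class RepairDriver {
    static final List<String> notifications = new ArrayList<>();

    // Stand-in for the JMX sendNotification call in the pseudocode.
    static void sendNotification(String msg) { notifications.add(msg); }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        List<String> ranges = Arrays.asList("rangeA", "rangeB", "rangeC"); // illustrative ranges

        List<Future<String>> futures = new ArrayList<>();
        for (String range : ranges) {
            // Each "repair session" is submitted to the pool...
            Future<String> f = pool.submit(() -> "synced " + range);
            futures.add(f);
            // ...but blocking here, in submission order, serializes the
            // work, mirroring the waitForDifferencing() step above.
            f.get();
        }

        // Results (and errors) are only reported in this outer loop, which
        // is why some notifications can be redundant or delayed.
        for (Future<String> f : futures)
            sendNotification("done: " + f.get());

        pool.shutdown();
        System.out.println(notifications);
    }
}
```

This also illustrates why the sessions barely overlap when there is no streaming: the blocking wait inside the submission loop keeps at most one session's differencing in flight at a time.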
[1/3] git commit: Backport first patch of 6975
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 6127f8567 - 1147ee3a8 Backport first patch of 6975 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/362e5480 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/362e5480 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/362e5480 Branch: refs/heads/cassandra-2.1 Commit: 362e54803434053fea25f874f64c69bdc1db78da Parents: 2635632 Author: Sylvain Lebresne sylv...@datastax.com Authored: Wed May 14 14:25:29 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Thu May 22 14:41:53 2014 +0200 -- .../org/apache/cassandra/cql3/CQLStatement.java | 2 +- .../apache/cassandra/cql3/QueryProcessor.java | 2 +- .../statements/AuthenticationStatement.java | 2 +- .../cql3/statements/AuthorizationStatement.java | 2 +- .../cql3/statements/BatchStatement.java | 4 +- .../cql3/statements/ModificationStatement.java | 4 +- .../statements/SchemaAlteringStatement.java | 2 +- .../cql3/statements/SelectStatement.java| 47 ++-- .../cql3/statements/TruncateStatement.java | 2 +- .../cassandra/cql3/statements/UseStatement.java | 2 +- 10 files changed, 34 insertions(+), 35 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/362e5480/src/java/org/apache/cassandra/cql3/CQLStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/CQLStatement.java b/src/java/org/apache/cassandra/cql3/CQLStatement.java index 81cd2b2..a1642ef 100644 --- a/src/java/org/apache/cassandra/cql3/CQLStatement.java +++ b/src/java/org/apache/cassandra/cql3/CQLStatement.java @@ -57,5 +57,5 @@ public interface CQLStatement * * @param state the current query state */ -public ResultMessage executeInternal(QueryState state) throws RequestValidationException, RequestExecutionException; +public ResultMessage executeInternal(QueryState state, QueryOptions options) throws RequestValidationException, RequestExecutionException; } 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/362e5480/src/java/org/apache/cassandra/cql3/QueryProcessor.java -- diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java b/src/java/org/apache/cassandra/cql3/QueryProcessor.java index 15ee59f..30d1bd7 100644 --- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java +++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java @@ -205,7 +205,7 @@ public class QueryProcessor implements QueryHandler state.setKeyspace(Keyspace.SYSTEM_KS); CQLStatement statement = getStatement(query, state).statement; statement.validate(state); -ResultMessage result = statement.executeInternal(qState); +ResultMessage result = statement.executeInternal(qState, QueryOptions.DEFAULT); if (result instanceof ResultMessage.Rows) return new UntypedResultSet(((ResultMessage.Rows)result).result); else http://git-wip-us.apache.org/repos/asf/cassandra/blob/362e5480/src/java/org/apache/cassandra/cql3/statements/AuthenticationStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/AuthenticationStatement.java b/src/java/org/apache/cassandra/cql3/statements/AuthenticationStatement.java index 5fcf085..b47dd92 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AuthenticationStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AuthenticationStatement.java @@ -45,7 +45,7 @@ public abstract class AuthenticationStatement extends ParsedStatement implements public abstract ResultMessage execute(ClientState state) throws RequestExecutionException, RequestValidationException; -public ResultMessage executeInternal(QueryState state) +public ResultMessage executeInternal(QueryState state, QueryOptions options) { // executeInternal is for local query only, thus altering users doesn't make sense and is not supported throw new UnsupportedOperationException(); http://git-wip-us.apache.org/repos/asf/cassandra/blob/362e5480/src/java/org/apache/cassandra/cql3/statements/AuthorizationStatement.java -- diff 
--git a/src/java/org/apache/cassandra/cql3/statements/AuthorizationStatement.java b/src/java/org/apache/cassandra/cql3/statements/AuthorizationStatement.java index db4581e..2c7f2cb 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AuthorizationStatement.java +++
[2/3] git commit: Merge commit '362e54803434053fea25f874f64c69bdc1db78da' into cassandra-2.1
Merge commit '362e54803434053fea25f874f64c69bdc1db78da' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c3ec8fa1 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c3ec8fa1 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c3ec8fa1 Branch: refs/heads/cassandra-2.1 Commit: c3ec8fa11b322d01044976c43bcfe18c58b08ed8 Parents: 6127f85 362e548 Author: Sylvain Lebresne sylv...@datastax.com Authored: Thu May 22 14:43:31 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Thu May 22 14:43:31 2014 +0200 -- --
[4/4] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/5f643ffc Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/5f643ffc Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/5f643ffc Branch: refs/heads/trunk Commit: 5f643ffcc3ebdb9ba4295bb09098790914df7b9b Parents: d4bf6d3 1147ee3 Author: Sylvain Lebresne sylv...@datastax.com Authored: Thu May 22 14:46:33 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Thu May 22 14:46:33 2014 +0200 -- CHANGES.txt | 1 + .../org/apache/cassandra/cql3/CQLStatement.java | 2 +- .../org/apache/cassandra/cql3/QueryOptions.java | 5 + .../apache/cassandra/cql3/QueryProcessor.java | 103 +++- .../apache/cassandra/cql3/UntypedResultSet.java | 58 - .../statements/AuthenticationStatement.java | 2 +- .../cql3/statements/AuthorizationStatement.java | 2 +- .../cql3/statements/BatchStatement.java | 4 +- .../cql3/statements/ModificationStatement.java | 4 +- .../statements/SchemaAlteringStatement.java | 2 +- .../cql3/statements/SelectStatement.java| 48 ++-- .../cql3/statements/TruncateStatement.java | 2 +- .../cassandra/cql3/statements/UseStatement.java | 2 +- .../apache/cassandra/db/BatchlogManager.java| 27 +- .../org/apache/cassandra/db/SystemKeyspace.java | 245 --- .../ScheduledRangeTransferExecutorService.java | 8 +- .../cassandra/service/StorageService.java | 43 ++-- .../cassandra/db/BatchlogManagerTest.java | 8 +- .../apache/cassandra/db/HintedHandOffTest.java | 6 +- .../unit/org/apache/cassandra/db/ScrubTest.java | 10 +- .../db/compaction/CompactionsPurgeTest.java | 20 +- .../io/sstable/CQLSSTableWriterTest.java| 2 +- .../service/LeaveAndBootstrapTest.java | 4 +- .../cassandra/service/QueryPagerTest.java | 4 +- .../apache/cassandra/triggers/TriggersTest.java | 4 +- 25 files changed, 368 insertions(+), 248 deletions(-) -- 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f643ffc/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f643ffc/src/java/org/apache/cassandra/cql3/QueryProcessor.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f643ffc/src/java/org/apache/cassandra/service/StorageService.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f643ffc/test/unit/org/apache/cassandra/db/ScrubTest.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/5f643ffc/test/unit/org/apache/cassandra/db/compaction/CompactionsPurgeTest.java -- diff --cc test/unit/org/apache/cassandra/db/compaction/CompactionsPurgeTest.java index 5820312,912c7f1..80608f5 --- a/test/unit/org/apache/cassandra/db/compaction/CompactionsPurgeTest.java +++ b/test/unit/org/apache/cassandra/db/compaction/CompactionsPurgeTest.java @@@ -36,10 -36,8 +36,10 @@@ import static org.junit.Assert.assertEq import static org.apache.cassandra.db.KeyspaceTest.assertColumns; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; +import static org.junit.Assert.assertNotNull; +import static org.junit.Assert.assertNull; - import static org.apache.cassandra.cql3.QueryProcessor.processInternal; + import static org.apache.cassandra.cql3.QueryProcessor.executeInternal; import static org.apache.cassandra.Util.cellname; import org.apache.cassandra.utils.ByteBufferUtil;
[2/4] git commit: Merge commit '362e54803434053fea25f874f64c69bdc1db78da' into cassandra-2.1
Merge commit '362e54803434053fea25f874f64c69bdc1db78da' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/c3ec8fa1 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/c3ec8fa1 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/c3ec8fa1 Branch: refs/heads/trunk Commit: c3ec8fa11b322d01044976c43bcfe18c58b08ed8 Parents: 6127f85 362e548 Author: Sylvain Lebresne sylv...@datastax.com Authored: Thu May 22 14:43:31 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Thu May 22 14:43:31 2014 +0200 -- --
[1/4] git commit: Backport first patch of 6975
Repository: cassandra Updated Branches: refs/heads/trunk d4bf6d328 - 5f643ffcc Backport first patch of 6975 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/362e5480 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/362e5480 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/362e5480 Branch: refs/heads/trunk Commit: 362e54803434053fea25f874f64c69bdc1db78da Parents: 2635632 Author: Sylvain Lebresne sylv...@datastax.com Authored: Wed May 14 14:25:29 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Thu May 22 14:41:53 2014 +0200 -- .../org/apache/cassandra/cql3/CQLStatement.java | 2 +- .../apache/cassandra/cql3/QueryProcessor.java | 2 +- .../statements/AuthenticationStatement.java | 2 +- .../cql3/statements/AuthorizationStatement.java | 2 +- .../cql3/statements/BatchStatement.java | 4 +- .../cql3/statements/ModificationStatement.java | 4 +- .../statements/SchemaAlteringStatement.java | 2 +- .../cql3/statements/SelectStatement.java| 47 ++-- .../cql3/statements/TruncateStatement.java | 2 +- .../cassandra/cql3/statements/UseStatement.java | 2 +- 10 files changed, 34 insertions(+), 35 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/362e5480/src/java/org/apache/cassandra/cql3/CQLStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/CQLStatement.java b/src/java/org/apache/cassandra/cql3/CQLStatement.java index 81cd2b2..a1642ef 100644 --- a/src/java/org/apache/cassandra/cql3/CQLStatement.java +++ b/src/java/org/apache/cassandra/cql3/CQLStatement.java @@ -57,5 +57,5 @@ public interface CQLStatement * * @param state the current query state */ -public ResultMessage executeInternal(QueryState state) throws RequestValidationException, RequestExecutionException; +public ResultMessage executeInternal(QueryState state, QueryOptions options) throws RequestValidationException, RequestExecutionException; } 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/362e5480/src/java/org/apache/cassandra/cql3/QueryProcessor.java -- diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java b/src/java/org/apache/cassandra/cql3/QueryProcessor.java index 15ee59f..30d1bd7 100644 --- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java +++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java @@ -205,7 +205,7 @@ public class QueryProcessor implements QueryHandler state.setKeyspace(Keyspace.SYSTEM_KS); CQLStatement statement = getStatement(query, state).statement; statement.validate(state); -ResultMessage result = statement.executeInternal(qState); +ResultMessage result = statement.executeInternal(qState, QueryOptions.DEFAULT); if (result instanceof ResultMessage.Rows) return new UntypedResultSet(((ResultMessage.Rows)result).result); else http://git-wip-us.apache.org/repos/asf/cassandra/blob/362e5480/src/java/org/apache/cassandra/cql3/statements/AuthenticationStatement.java -- diff --git a/src/java/org/apache/cassandra/cql3/statements/AuthenticationStatement.java b/src/java/org/apache/cassandra/cql3/statements/AuthenticationStatement.java index 5fcf085..b47dd92 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AuthenticationStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AuthenticationStatement.java @@ -45,7 +45,7 @@ public abstract class AuthenticationStatement extends ParsedStatement implements public abstract ResultMessage execute(ClientState state) throws RequestExecutionException, RequestValidationException; -public ResultMessage executeInternal(QueryState state) +public ResultMessage executeInternal(QueryState state, QueryOptions options) { // executeInternal is for local query only, thus altering users doesn't make sense and is not supported throw new UnsupportedOperationException(); http://git-wip-us.apache.org/repos/asf/cassandra/blob/362e5480/src/java/org/apache/cassandra/cql3/statements/AuthorizationStatement.java -- diff 
--git a/src/java/org/apache/cassandra/cql3/statements/AuthorizationStatement.java b/src/java/org/apache/cassandra/cql3/statements/AuthorizationStatement.java index db4581e..2c7f2cb 100644 --- a/src/java/org/apache/cassandra/cql3/statements/AuthorizationStatement.java +++ b/src/java/org/apache/cassandra/cql3/statements/AuthorizationStatement.java @@ -47,7
buildbot failure in ASF Buildbot on cassandra-2.1
The Buildbot has detected a new failure on builder cassandra-2.1 while building cassandra. Full details are available at: http://ci.apache.org/builders/cassandra-2.1/builds/52 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: portunus_ubuntu Build Reason: scheduler Build Source Stamp: [branch cassandra-2.1] 1147ee3a81e483b26b4b8c5d7cc7e55fcc2baeec Blamelist: Sylvain Lebresne sylv...@datastax.com BUILD FAILED: failed shell sincerely, -The Buildbot
[jira] [Commented] (CASSANDRA-7010) bootstrap_test simple_bootstrap_test dtest fails in 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005910#comment-14005910 ] Brandon Williams commented on CASSANDRA-7010: - Slightly suspicious of that analysis then, because if I recall correctly that method started at 10%, I bumped it to 15% for similar reasons, and then finally 16%. The original trace indicates it returned zero though, which wouldn't be within 100% of anything reasonable. bootstrap_test simple_bootstrap_test dtest fails in 2.1 --- Key: CASSANDRA-7010 URL: https://issues.apache.org/jira/browse/CASSANDRA-7010 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Michael Shuler Assignee: Marcus Eriksson Fix For: 2.1 rc1 Attachments: 7010.patch I patched ccm with https://github.com/pcmanus/ccm/pull/109 and got an error from simple_bootstrap: {noformat} == FAIL: simple_bootstrap_test (bootstrap_test.TestBootstrap) -- Traceback (most recent call last): File "/home/mshuler/git/cassandra-dtest/bootstrap_test.py", line 58, in simple_bootstrap_test assert_almost_equal(initial_size, 2 * size1) File "/home/mshuler/git/cassandra-dtest/assertions.py", line 26, in assert_almost_equal assert vmin > vmax * (1.0 - error), "values not within %.2f%% of the max: %s" % (error * 100, args) AssertionError: values not within 16.00% of the max: (0, 186396) {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7010) bootstrap_test simple_bootstrap_test dtest fails in 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005926#comment-14005926 ] Marcus Eriksson commented on CASSANDRA-7010: Suspected that the 0-case had been fixed elsewhere, since this ticket is 1+ month old. [~mshuler] do you have the logs for one of those 0-failures? bootstrap_test simple_bootstrap_test dtest fails in 2.1 --- Key: CASSANDRA-7010 URL: https://issues.apache.org/jira/browse/CASSANDRA-7010 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Michael Shuler Assignee: Marcus Eriksson Fix For: 2.1 rc1 Attachments: 7010.patch I patched ccm with https://github.com/pcmanus/ccm/pull/109 and got an error from simple_bootstrap: {noformat} == FAIL: simple_bootstrap_test (bootstrap_test.TestBootstrap) -- Traceback (most recent call last): File "/home/mshuler/git/cassandra-dtest/bootstrap_test.py", line 58, in simple_bootstrap_test assert_almost_equal(initial_size, 2 * size1) File "/home/mshuler/git/cassandra-dtest/assertions.py", line 26, in assert_almost_equal assert vmin > vmax * (1.0 - error), "values not within %.2f%% of the max: %s" % (error * 100, args) AssertionError: values not within 16.00% of the max: (0, 186396) {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
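The failing assertion above requires every measured size to lie within a relative tolerance of the largest one. A minimal Java transcription of that check (hypothetical class and method names; the real helper is the Python `assert_almost_equal` in the dtest repo):

```java
// Sketch of the dtest tolerance check: passes only when the smallest value is
// within `error` (as a fraction) of the largest value.
public class AlmostEqual {
    // returns true when min(values) > max(values) * (1 - error)
    public static boolean withinTolerance(double error, long... values) {
        long vmin = Long.MAX_VALUE, vmax = Long.MIN_VALUE;
        for (long v : values) {
            vmin = Math.min(vmin, v);
            vmax = Math.max(vmax, v);
        }
        return vmin > vmax * (1.0 - error);
    }

    public static void main(String[] args) {
        // the reported failure: 0 can never be within 16% of 186396
        System.out.println(withinTolerance(0.16, 0, 186396));      // false
        // a value inside the 16% band of the max passes
        System.out.println(withinTolerance(0.16, 186396, 170000)); // true
    }
}
```

As the comments above note, a size of 0, as in the reported trace, fails for any tolerance below 100%, whatever the percentage is bumped to.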
[jira] [Commented] (CASSANDRA-6626) Create 2.0-2.1 counter upgrade dtests
[ https://issues.apache.org/jira/browse/CASSANDRA-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005925#comment-14005925 ] Aleksey Yeschenko commented on CASSANDRA-6626: -- [~rhatch] Should probably investigate the 'very slow counter updates' thing before going further. Any ideas/details that you can gather, or should I take over now? Create 2.0-2.1 counter upgrade dtests -- Key: CASSANDRA-6626 URL: https://issues.apache.org/jira/browse/CASSANDRA-6626 Project: Cassandra Issue Type: Test Reporter: Aleksey Yeschenko Assignee: Russ Hatch Fix For: 2.1.0 Create 2.0-2.1 counter upgrade dtests. Something more extensive, yet more specific than https://github.com/riptano/cassandra-dtest/blob/master/upgrade_through_versions_test.py -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-6626) Create 2.0-2.1 counter upgrade dtests
[ https://issues.apache.org/jira/browse/CASSANDRA-6626?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-6626: - Fix Version/s: (was: 2.1 rc1) 2.1.0 Create 2.0-2.1 counter upgrade dtests -- Key: CASSANDRA-6626 URL: https://issues.apache.org/jira/browse/CASSANDRA-6626 Project: Cassandra Issue Type: Test Reporter: Aleksey Yeschenko Assignee: Russ Hatch Fix For: 2.1.0 Create 2.0-2.1 counter upgrade dtests. Something more extensive, yet more specific than https://github.com/riptano/cassandra-dtest/blob/master/upgrade_through_versions_test.py -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7285) Full Checksum does not include full stable
[ https://issues.apache.org/jira/browse/CASSANDRA-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-7285: Reviewer: Sylvain Lebresne Full Checksum does not include full stable -- Key: CASSANDRA-7285 URL: https://issues.apache.org/jira/browse/CASSANDRA-7285 Project: Cassandra Issue Type: Bug Reporter: sankalp kohli Assignee: sankalp kohli Attachments: CASSANDRA-7285.diff FullChecksum is calculated and stored on disk before the entire data is flushed. This causes the checksum to not match with the file. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005927#comment-14005927 ] Michael Shuler commented on CASSANDRA-6421: --- [~cscetbon] this was committed to the cassandra-2.1 branch and merged to trunk. It won't be added to the older versions, so work from 2.1 forward. If you have a patch to add to the [existing nodetool completion|https://github.com/apache/cassandra/blob/trunk/debian/nodetool-completion] at some time, just open a new jira ticket. Add bash completion to nodetool --- Key: CASSANDRA-6421 URL: https://issues.apache.org/jira/browse/CASSANDRA-6421 Project: Cassandra Issue Type: New Feature Components: Tools Reporter: Cyril Scetbon Assignee: Cyril Scetbon Priority: Trivial Fix For: 2.1 rc1 Attachments: 6421-2.1.txt, 6421.txt You can find the bash-completion file at https://raw.github.com/cscetbon/cassandra/nodetool-completion/etc/bash_completion.d/nodetool it uses cqlsh to get keyspaces and namespaces and could use an environment variable (not implemented) to get access which cqlsh if authentification is needed. But I think that's really a good start :) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005929#comment-14005929 ] Michael Shuler commented on CASSANDRA-6421: --- And thanks for this feature! I've had it installed locally since you first opened this ticket - it's super handy :) Add bash completion to nodetool --- Key: CASSANDRA-6421 URL: https://issues.apache.org/jira/browse/CASSANDRA-6421 Project: Cassandra Issue Type: New Feature Components: Tools Reporter: Cyril Scetbon Assignee: Cyril Scetbon Priority: Trivial Fix For: 2.1 rc1 Attachments: 6421-2.1.txt, 6421.txt You can find the bash-completion file at https://raw.github.com/cscetbon/cassandra/nodetool-completion/etc/bash_completion.d/nodetool it uses cqlsh to get keyspaces and namespaces and could use an environment variable (not implemented) to get access which cqlsh if authentification is needed. But I think that's really a good start :) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7285) Full Checksum does not include full stable
[ https://issues.apache.org/jira/browse/CASSANDRA-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005928#comment-14005928 ] Benedict commented on CASSANDRA-7285: - +1 Full Checksum does not include full stable -- Key: CASSANDRA-7285 URL: https://issues.apache.org/jira/browse/CASSANDRA-7285 Project: Cassandra Issue Type: Bug Reporter: sankalp kohli Assignee: sankalp kohli Attachments: CASSANDRA-7285.diff FullChecksum is calculated and stored on disk before the entire data is flushed. This causes the checksum to not match with the file. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6421) Add bash completion to nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-6421?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005931#comment-14005931 ] Cyril Scetbon commented on CASSANDRA-6421: -- Okay to open a new jira if needed, and I'm happy that others use it too ! Add bash completion to nodetool --- Key: CASSANDRA-6421 URL: https://issues.apache.org/jira/browse/CASSANDRA-6421 Project: Cassandra Issue Type: New Feature Components: Tools Reporter: Cyril Scetbon Assignee: Cyril Scetbon Priority: Trivial Fix For: 2.1 rc1 Attachments: 6421-2.1.txt, 6421.txt You can find the bash-completion file at https://raw.github.com/cscetbon/cassandra/nodetool-completion/etc/bash_completion.d/nodetool it uses cqlsh to get keyspaces and namespaces and could use an environment variable (not implemented) to get access which cqlsh if authentification is needed. But I think that's really a good start :) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7285) Full Checksum does not include full stable
[ https://issues.apache.org/jira/browse/CASSANDRA-7285?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Brandon Williams updated CASSANDRA-7285: Reviewer: Benedict (was: Sylvain Lebresne) Full Checksum does not include full stable -- Key: CASSANDRA-7285 URL: https://issues.apache.org/jira/browse/CASSANDRA-7285 Project: Cassandra Issue Type: Bug Reporter: sankalp kohli Assignee: sankalp kohli Attachments: CASSANDRA-7285.diff FullChecksum is calculated and stored on disk before the entire data is flushed. This causes the checksum to not match with the file. -- This message was sent by Atlassian JIRA (v6.2#6252)
git commit: Work around initialization problem
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 1147ee3a8 - 36cc02ca7 Work around initialization problem Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36cc02ca Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36cc02ca Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36cc02ca Branch: refs/heads/cassandra-2.1 Commit: 36cc02ca76fa11b6b1d2cb24fb068d2a5dfaa842 Parents: 1147ee3 Author: Sylvain Lebresne sylv...@datastax.com Authored: Thu May 22 16:02:23 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Thu May 22 16:02:23 2014 +0200 -- .../apache/cassandra/cql3/QueryProcessor.java | 46 +--- 1 file changed, 31 insertions(+), 15 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/36cc02ca/src/java/org/apache/cassandra/cql3/QueryProcessor.java -- diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java b/src/java/org/apache/cassandra/cql3/QueryProcessor.java index fca9c42..fd6e6ce 100644 --- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java +++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java @@ -82,7 +82,6 @@ public class QueryProcessor implements QueryHandler // A map for prepared statements used internally (which we don't want to mix with user statement, in particular we don't // bother with expiration on those. 
private static final ConcurrentMap<String, ParsedStatement.Prepared> internalStatements = new ConcurrentHashMap<>(); -private static final QueryState internalQueryState; static { @@ -95,16 +94,33 @@ public class QueryProcessor implements QueryHandler .weigher(thriftMemoryUsageWeigher) .build(); -ClientState state = ClientState.forInternalCalls(); -try -{ -state.setKeyspace(Keyspace.SYSTEM_KS); -} -catch (InvalidRequestException e) +} + +// Work aound initialization dependency +private static enum InternalStateInstance +{ +INSTANCE; + +private final QueryState queryState; + +InternalStateInstance() { -throw new RuntimeException(); +ClientState state = ClientState.forInternalCalls(); +try +{ +state.setKeyspace(Keyspace.SYSTEM_KS); +} +catch (InvalidRequestException e) +{ +throw new RuntimeException(); +} +this.queryState = new QueryState(state); } -internalQueryState = new QueryState(state); +} + +private static QueryState internalQueryState() +{ +return InternalStateInstance.INSTANCE.queryState; } private QueryProcessor() @@ -233,8 +249,8 @@ public class QueryProcessor implements QueryHandler return prepared; // Note: if 2 threads prepare the same query, we'll live so don't bother synchronizing -prepared = parseStatement(query, internalQueryState); -prepared.statement.validate(internalQueryState.getClientState()); +prepared = parseStatement(query, internalQueryState()); +prepared.statement.validate(internalQueryState().getClientState()); internalStatements.putIfAbsent(query, prepared); return prepared; } @@ -244,7 +260,7 @@ public class QueryProcessor implements QueryHandler try { ParsedStatement.Prepared prepared = prepareInternal(query); -ResultMessage result = prepared.statement.executeInternal(internalQueryState, makeInternalOptions(prepared, values)); +ResultMessage result = prepared.statement.executeInternal(internalQueryState(), makeInternalOptions(prepared, values)); if (result instanceof ResultMessage.Rows) return
UntypedResultSet.create(((ResultMessage.Rows)result).result); else @@ -286,9 +302,9 @@ public class QueryProcessor implements QueryHandler { try { -ParsedStatement.Prepared prepared = parseStatement(query, internalQueryState); -prepared.statement.validate(internalQueryState.getClientState()); -ResultMessage result = prepared.statement.executeInternal(internalQueryState, makeInternalOptions(prepared, values)); +ParsedStatement.Prepared prepared = parseStatement(query, internalQueryState()); +prepared.statement.validate(internalQueryState().getClientState()); +ResultMessage result = prepared.statement.executeInternal(internalQueryState(), makeInternalOptions(prepared, values)); if (result instanceof
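The patch above replaces an eagerly built static `internalQueryState` field with a `QueryState` held inside an enum, so the state is only constructed the first time `internalQueryState()` is called. A minimal sketch of that lazy-holder idiom (hypothetical names, not the Cassandra classes): the JVM does not initialize a nested enum class until its first active use, which sidesteps class-initialization ordering problems.

```java
// Sketch of the enum-based lazy holder: the Holder enum is a separate class
// that is only initialized when INSTANCE is first referenced.
public class LazyHolderDemo {
    static final StringBuilder order = new StringBuilder();

    private enum Holder {
        INSTANCE;

        final String state;

        Holder() {
            order.append("holder-init;"); // runs only on first access to INSTANCE
            state = "ready";
        }
    }

    public static String internalState() {
        return Holder.INSTANCE.state; // first call here triggers Holder's initializer
    }

    public static void main(String[] args) {
        order.append("before-access;"); // Holder is not yet initialized at this point
        String s = internalState();
        System.out.println(order + " " + s); // before-access;holder-init; ready
    }
}
```

A static nested class holding a `static final` field gives the same lazy, thread-safe initialization; the enum variant additionally guarantees a single instance even under serialization or reflection.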
[4/4] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/864865da Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/864865da Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/864865da Branch: refs/heads/trunk Commit: 864865da90543488618eb106e2bec6346c3ec79c Parents: 5f643ff 9bd3887 Author: Brandon Williams brandonwilli...@apache.org Authored: Thu May 22 09:14:21 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Thu May 22 09:14:21 2014 -0500 -- .../apache/cassandra/cql3/QueryProcessor.java | 46 +--- .../cassandra/io/sstable/SSTableWriter.java | 3 +- 2 files changed, 33 insertions(+), 16 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/864865da/src/java/org/apache/cassandra/cql3/QueryProcessor.java -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/864865da/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java --
[3/4] git commit: Fix writing the checksum before closing the sstable.
Fix writing the checksum before closing the sstable. Patch by Sankalp Kohli, reviewed by Benedict for CASSANDRA-7285 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9bd38878 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9bd38878 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9bd38878 Branch: refs/heads/cassandra-2.1 Commit: 9bd38878059932e83163c1354c7453e494cda3b1 Parents: 36cc02c Author: Brandon Williams brandonwilli...@apache.org Authored: Thu May 22 09:12:11 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Thu May 22 09:12:11 2014 -0500 -- src/java/org/apache/cassandra/io/sstable/SSTableWriter.java | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/9bd38878/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java -- diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java index f32bb96..9567f0e 100644 --- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java +++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java @@ -459,11 +459,12 @@ public class SSTableWriter extends SSTable private Pair<Descriptor, StatsMetadata> close(long repairedAt) { -dataFile.writeFullChecksum(descriptor); + // index and filter iwriter.close(); // main data, close will truncate if necessary dataFile.close(); +dataFile.writeFullChecksum(descriptor); // write sstable statistics Map<MetadataType, MetadataComponent> metadataComponents = sstableMetadataCollector.finalizeMetadata( partitioner.getClass().getCanonicalName(),
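The one-line move above matters because a checksum recorded before the data file is closed does not cover bytes still buffered (or later truncated) by the writer, so it cannot match the finished file. A small standalone illustration of that ordering hazard, using a plain `BufferedWriter` and `CRC32` rather than Cassandra's own writer (hypothetical demo class):

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

// Demonstrates why the full checksum must be taken after close():
// a checksum computed earlier misses bytes still sitting in the write buffer.
public class ChecksumOrderDemo {
    static long crcOf(Path p) throws IOException {
        CRC32 crc = new CRC32();
        crc.update(Files.readAllBytes(p));
        return crc.getValue();
    }

    // returns {checksum taken before close, checksum taken after close}
    public static long[] earlyLate() throws IOException {
        Path data = Files.createTempFile("data", ".db");
        BufferedWriter out = Files.newBufferedWriter(data);
        out.write("partition-1 partition-2");
        long early = crcOf(data); // buffered bytes have not reached the file yet: stale
        out.close();              // flushes the buffer; the file now has its final contents
        long late = crcOf(data);
        return new long[]{ early, late };
    }

    public static void main(String[] args) throws IOException {
        long[] r = earlyLate();
        // the pre-close checksum does not match the completed file
        System.out.println(r[0] == r[1]);
    }
}
```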
[jira] [Commented] (CASSANDRA-6523) Unable to contact any seeds! with multi-DC cluster and listen != broadcast address
[ https://issues.apache.org/jira/browse/CASSANDRA-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005965#comment-14005965 ] Chris Burroughs commented on CASSANDRA-6523: Still present in the 2.0.x series. There are a variety of stack overflow and mailing list threads with "Unable to contact any seeds". I think the problem is that CASSANDRA-5768 isn't checking "have I contacted a seed" but "am I connected to one of these IP addresses". That ends up being a requirement that the seeds/broadcast/listen addresses line up in a particular way. Unable to contact any seeds! with multi-DC cluster and listen != broadcast address Key: CASSANDRA-6523 URL: https://issues.apache.org/jira/browse/CASSANDRA-6523 Project: Cassandra Issue Type: Bug Components: Core Environment: 1.2.13ish Reporter: Chris Burroughs New cluster: * Seeds: list of 6 internal IPs * listen address: internal ip * broadcast: external ip Two DC cluster, using GPFS where the external IPs are NATed. Cluster fails to start with "Unable to contact any seeds!" * Fail: Try to start a seed node * Fail: Try to start two seed nodes at the same time in the same DC * Success: Start two seed nodes at the same time in different DCs. Presumably related to CASSANDRA-5768 -- This message was sent by Atlassian JIRA (v6.2#6252)
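A compact sketch of the distinction drawn in the comment above: a check implemented as "is one of the configured seed IPs among my connected endpoints" fails under NAT, where the address a seed was actually reached on differs from the address in the seed list, even though a seed did answer. The addresses and class name below are illustrative, not from the ticket or the Cassandra source.

```java
import java.util.Set;

// Models the IP-membership check described in the comment: it tests addresses,
// not whether a seed actually responded.
public class SeedCheckDemo {
    public static boolean sawSeedIp(Set<String> seeds, Set<String> connected) {
        return connected.stream().anyMatch(seeds::contains);
    }

    public static void main(String[] args) {
        Set<String> seeds = Set.of("203.0.113.10");  // seed listed by its external (broadcast) IP
        Set<String> connected = Set.of("10.0.0.10"); // internal address the node actually reached
        // a seed answered, but the address comparison cannot see that
        System.out.println(sawSeedIp(seeds, connected)); // false
    }
}
```

This is why startup only succeeds when the seeds, broadcast, and listen addresses line up: the comparison happens on addresses, not on the fact of having gossiped with a seed.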
[jira] [Commented] (CASSANDRA-6716) nodetool scrub constantly fails with RuntimeException (Tried to hard link to file that does not exist)
[ https://issues.apache.org/jira/browse/CASSANDRA-6716?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14005971#comment-14005971 ] Nikolai Grigoriev commented on CASSANDRA-6716: -- I have made two more observations, one of them may be unrelated, but still: 1. I had tons of these exceptions when doing compaction or scrubbing on some of the nodes. Disabling Datastax agent on them and restarting the nodes eliminated the exceptions completely. All under heavy load. 2. Just started having these exceptions again on one of the nodes after a minor configuration change (compaction throughput) and restarting the node. Restarted again - same thing, several exceptions per second, all FileNotFoundException when compacting. Stopped the node. Removed the caches stored in /var/lib/cassandra/saved_caches. Started the node. Not a single exception in ~1,5 hours. Again, all this under heavy load. Now I am wondering - where else a reference to a non-existing sstable can be except the cache? If simple restart does not help and the filesystem really does not have the file the server tries to access - then it cannot be something about in-memory cache being out of sync, so it's got to be the persistent one. nodetool scrub constantly fails with RuntimeException (Tried to hard link to file that does not exist) -- Key: CASSANDRA-6716 URL: https://issues.apache.org/jira/browse/CASSANDRA-6716 Project: Cassandra Issue Type: Bug Components: Core Environment: Cassandra 2.0.5 (built from source), Linux, 6 nodes, JDK 1.7 Reporter: Nikolai Grigoriev Attachments: system.log.gz It seems that since recently I have started getting a number of exceptions like File not found on all Cassandra nodes. Currently I am getting an exception like this every couple of seconds on each node, for different keyspaces and CFs. I have tried to restart the nodes, tried to scrub them. No luck so far. 
It seems that scrub cannot complete on any of these nodes, at some point it fails because of the file that it can't find. One one of the nodes currently the nodetool scrub command fails instantly and consistently with this exception: {code} # /opt/cassandra/bin/nodetool scrub Exception in thread main java.lang.RuntimeException: Tried to hard link to file that does not exist /mnt/disk5/cassandra/data/mykeyspace_jmeter/test_contacts/mykeyspace_jmeter-test_contacts-jb-28049-Data.db at org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:75) at org.apache.cassandra.io.sstable.SSTableReader.createLinks(SSTableReader.java:1215) at org.apache.cassandra.db.ColumnFamilyStore.snapshotWithoutFlush(ColumnFamilyStore.java:1826) at org.apache.cassandra.db.ColumnFamilyStore.scrub(ColumnFamilyStore.java:1122) at org.apache.cassandra.service.StorageService.scrub(StorageService.java:2159) at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:75) at sun.reflect.GeneratedMethodAccessor15.invoke(Unknown Source) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) at java.lang.reflect.Method.invoke(Method.java:606) at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:279) at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:112) at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:46) at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237) at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:138) at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:252) at 
com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:819) at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:801) at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1487) at javax.management.remote.rmi.RMIConnectionImpl.access$300(RMIConnectionImpl.java:97) at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1328) at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1420) at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:848)
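The stack trace above bottoms out in `FileUtils.createHardLink`, and a snapshot hard-links every live sstable file, so a single stale sstable reference (for example, one retained via a saved cache, as speculated in the comment) fails the whole scrub. The failure mode can be reproduced with the JDK's own hard-link call; the file names below are hypothetical.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.NoSuchFileException;
import java.nio.file.Path;

// Reproduces the snapshot failure mode: hard-linking a source file that no
// longer exists fails, just like "Tried to hard link to file that does not exist".
public class HardLinkDemo {
    // attempt to hard-link `existing` into a snapshot directory; report what happened
    public static String tryLink(Path snapshotDir, Path existing) throws IOException {
        try {
            Files.createLink(snapshotDir.resolve(existing.getFileName()), existing);
            return "linked";
        } catch (NoSuchFileException e) {
            // the source sstable was deleted out from under the reference
            return "missing";
        }
    }

    public static void main(String[] args) throws IOException {
        Path snapshotDir = Files.createTempDirectory("snapshot");
        Path dataDir = Files.createTempDirectory("data");
        Path stale = dataDir.resolve("ks-cf-jb-28049-Data.db"); // never created
        System.out.println(tryLink(snapshotDir, stale)); // missing
    }
}
```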
[jira] [Updated] (CASSANDRA-7267) Embedded sets in user defined data-types are not updating
[ https://issues.apache.org/jira/browse/CASSANDRA-7267?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-7267: Assignee: Mikhail Stepura (was: Sylvain Lebresne) Embedded sets in user defined data-types are not updating - Key: CASSANDRA-7267 URL: https://issues.apache.org/jira/browse/CASSANDRA-7267 Project: Cassandra Issue Type: Bug Components: Core Reporter: Thomas Zimmer Assignee: Mikhail Stepura Fix For: 2.1 rc1 Hi, i just played around with Cassandra 2.1.0 beta2 and i might have found an issue with embedded Sets in User Defined Data Types. Here is how i can reproduce it: 1.) Create a keyspace test 2.) Create a table like this: {{create table songs (title varchar PRIMARY KEY, band varchar, tags Set<varchar>);}} 3.) Create a udt like this: {{create type band_info_type (founded timestamp, members Set<varchar>, description text);}} 4.) Try to insert data: {code} insert into songs (title, band, band_info, tags) values ('The trooper', 'Iron Maiden', {founded:188694000, members: {'Bruce Dickinson', 'Dave Murray', 'Adrian Smith', 'Janick Gers', 'Steve Harris', 'Nicko McBrain'}, description: 'Pure evil metal'}, {'metal', 'england'}); {code} 5.) Select the data: {{select * from songs;}} Returns this: {code} The trooper | Iron Maiden | {founded: '1970-01-03 05:24:54+0100', members: {}, description: 'Pure evil metal'} | {'england', 'metal'} {code} The embedded data-set seems to be empty. I also tried updating a row, which also does not seem to work. Regards, Thomas -- This message was sent by Atlassian JIRA (v6.2#6252)
[2/4] git commit: Fix writing the checksum before closing the sstable.
Fix writing the checksum before closing the sstable. Patch by Sankalp Kohli, reviewed by Benedict for CASSANDRA-7285 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/9bd38878 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/9bd38878 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/9bd38878 Branch: refs/heads/trunk Commit: 9bd38878059932e83163c1354c7453e494cda3b1 Parents: 36cc02c Author: Brandon Williams brandonwilli...@apache.org Authored: Thu May 22 09:12:11 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Thu May 22 09:12:11 2014 -0500 -- src/java/org/apache/cassandra/io/sstable/SSTableWriter.java | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/9bd38878/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java -- diff --git a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java index f32bb96..9567f0e 100644 --- a/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java +++ b/src/java/org/apache/cassandra/io/sstable/SSTableWriter.java @@ -459,11 +459,12 @@ public class SSTableWriter extends SSTable private Pair<Descriptor, StatsMetadata> close(long repairedAt) { -dataFile.writeFullChecksum(descriptor); + // index and filter iwriter.close(); // main data, close will truncate if necessary dataFile.close(); +dataFile.writeFullChecksum(descriptor); // write sstable statistics Map<MetadataType, MetadataComponent> metadataComponents = sstableMetadataCollector.finalizeMetadata( partitioner.getClass().getCanonicalName(),
[1/4] git commit: Work around initialization problem
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 36cc02ca7 -> 9bd388780 refs/heads/trunk 5f643ffcc -> 864865da9 Work around initialization problem Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/36cc02ca Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/36cc02ca Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/36cc02ca Branch: refs/heads/trunk Commit: 36cc02ca76fa11b6b1d2cb24fb068d2a5dfaa842 Parents: 1147ee3 Author: Sylvain Lebresne sylv...@datastax.com Authored: Thu May 22 16:02:23 2014 +0200 Committer: Sylvain Lebresne sylv...@datastax.com Committed: Thu May 22 16:02:23 2014 +0200 -- .../apache/cassandra/cql3/QueryProcessor.java | 46 +--- 1 file changed, 31 insertions(+), 15 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/36cc02ca/src/java/org/apache/cassandra/cql3/QueryProcessor.java --
diff --git a/src/java/org/apache/cassandra/cql3/QueryProcessor.java b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
index fca9c42..fd6e6ce 100644
--- a/src/java/org/apache/cassandra/cql3/QueryProcessor.java
+++ b/src/java/org/apache/cassandra/cql3/QueryProcessor.java
@@ -82,7 +82,6 @@ public class QueryProcessor implements QueryHandler
     // A map for prepared statements used internally (which we don't want to mix with user statement, in particular we don't
     // bother with expiration on those.
     private static final ConcurrentMap<String, ParsedStatement.Prepared> internalStatements = new ConcurrentHashMap<>();
-    private static final QueryState internalQueryState;

     static
     {
@@ -95,16 +94,33 @@ public class QueryProcessor implements QueryHandler
                            .weigher(thriftMemoryUsageWeigher)
                            .build();
+    }

-        ClientState state = ClientState.forInternalCalls();
-        try
-        {
-            state.setKeyspace(Keyspace.SYSTEM_KS);
-        }
-        catch (InvalidRequestException e)
+    // Work around initialization dependency
+    private static enum InternalStateInstance
+    {
+        INSTANCE;
+
+        private final QueryState queryState;
+
+        InternalStateInstance()
         {
-            throw new RuntimeException();
+            ClientState state = ClientState.forInternalCalls();
+            try
+            {
+                state.setKeyspace(Keyspace.SYSTEM_KS);
+            }
+            catch (InvalidRequestException e)
+            {
+                throw new RuntimeException();
+            }
+            this.queryState = new QueryState(state);
         }
-        internalQueryState = new QueryState(state);
+    }
+
+    private static QueryState internalQueryState()
+    {
+        return InternalStateInstance.INSTANCE.queryState;
+    }

     private QueryProcessor()
@@ -233,8 +249,8 @@ public class QueryProcessor implements QueryHandler
             return prepared;

         // Note: if 2 threads prepare the same query, we'll live so don't bother synchronizing
-        prepared = parseStatement(query, internalQueryState);
-        prepared.statement.validate(internalQueryState.getClientState());
+        prepared = parseStatement(query, internalQueryState());
+        prepared.statement.validate(internalQueryState().getClientState());
         internalStatements.putIfAbsent(query, prepared);
         return prepared;
     }
@@ -244,7 +260,7 @@ public class QueryProcessor implements QueryHandler
         try
         {
             ParsedStatement.Prepared prepared = prepareInternal(query);
-            ResultMessage result = prepared.statement.executeInternal(internalQueryState, makeInternalOptions(prepared, values));
+            ResultMessage result = prepared.statement.executeInternal(internalQueryState(), makeInternalOptions(prepared, values));
             if (result instanceof ResultMessage.Rows)
                 return UntypedResultSet.create(((ResultMessage.Rows)result).result);
             else
@@ -286,9 +302,9 @@ public class QueryProcessor implements QueryHandler
         {
             try
             {
-                ParsedStatement.Prepared prepared = parseStatement(query, internalQueryState);
-                prepared.statement.validate(internalQueryState.getClientState());
-                ResultMessage result = prepared.statement.executeInternal(internalQueryState, makeInternalOptions(prepared, values));
+                ParsedStatement.Prepared prepared = parseStatement(query, internalQueryState());
+                prepared.statement.validate(internalQueryState().getClientState());
+                ResultMessage result = prepared.statement.executeInternal(internalQueryState(), makeInternalOptions(prepared, values));
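The shape of this fix is the initialization-on-demand holder idiom: instead of building the QueryState in the class's static initializer, where it can run before its dependencies are ready, it is built lazily on first access through a singleton accessor. A Python analogue of the structure, with illustrative names (Python has no static-initializer ordering problem, so this only sketches the idiom):

```python
class InternalState:
    """Holder whose single instance is created on first access, not at
    class-definition time: the analogue of the enum singleton
    (InternalStateInstance) the patch introduces."""
    _instance = None

    def __init__(self):
        # Stands in for ClientState.forInternalCalls() plus setKeyspace().
        self.keyspace = "system"

    @classmethod
    def instance(cls):
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

def internal_query_state():
    # Mirrors the new internalQueryState() accessor: every caller goes
    # through the lazily created singleton instead of a static field.
    return InternalState.instance()
```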
[jira] [Created] (CASSANDRA-7286) Exception: NPE
Julien Anguenot created CASSANDRA-7286: -- Summary: Exception: NPE Key: CASSANDRA-7286 URL: https://issues.apache.org/jira/browse/CASSANDRA-7286 Project: Cassandra Issue Type: Bug Reporter: Julien Anguenot -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7286) Exception: NPE
[ https://issues.apache.org/jira/browse/CASSANDRA-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julien Anguenot updated CASSANDRA-7286: --- Attachment: readstage_npe.txt Exception: NPE --- Key: CASSANDRA-7286 URL: https://issues.apache.org/jira/browse/CASSANDRA-7286 Project: Cassandra Issue Type: Bug Reporter: Julien Anguenot Attachments: readstage_npe.txt -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7286) Exception: NPE
[ https://issues.apache.org/jira/browse/CASSANDRA-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julien Anguenot updated CASSANDRA-7286: --- Description: Sometimes Cassandra nodes (in a multi datacenter deployment) are throwing NPE (see attached stack trace) Let me know what additional information I could provide. Thank you. Exception: NPE --- Key: CASSANDRA-7286 URL: https://issues.apache.org/jira/browse/CASSANDRA-7286 Project: Cassandra Issue Type: Bug Reporter: Julien Anguenot Attachments: readstage_npe.txt Sometimes Cassandra nodes (in a multi datacenter deployment) are throwing NPE (see attached stack trace) Let me know what additional information I could provide. Thank you. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7286) Exception: NPE
[ https://issues.apache.org/jira/browse/CASSANDRA-7286?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julien Anguenot updated CASSANDRA-7286: --- Reproduced In: 2.0.7 Exception: NPE --- Key: CASSANDRA-7286 URL: https://issues.apache.org/jira/browse/CASSANDRA-7286 Project: Cassandra Issue Type: Bug Reporter: Julien Anguenot Attachments: readstage_npe.txt Sometimes Cassandra nodes (in a multi datacenter deployment) are throwing NPE (see attached stack trace) Let me know what additional information I could provide. Thank you. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7206) UDT - allow null / non-existant attributes
[ https://issues.apache.org/jira/browse/CASSANDRA-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Sylvain Lebresne updated CASSANDRA-7206: Attachment: 7206.txt Attaching a rather simple patch for that. This makes fields optional in the query strings (so queries don't break when new fields are added), though it also allows setting a field to null explicitly (which is equivalent). One minor annoyance in the patch is updating the fromString and getString for the UserType class. We can't simply use null to represent nulls since that could theoretically conflict with a field value being the string "null". But since this is all only used by SSTableExport/SSTableImport, for which we probably don't care to have a perfect output, I just went with representing nulls with '@' (totally random choice) and some escaping to avoid conflicts. I've pushed a simple dtest in the user type tests. I'll jump into the "let's do CQL tests in the unit suites" work when I have more time. UDT - allow null / non-existant attributes -- Key: CASSANDRA-7206 URL: https://issues.apache.org/jira/browse/CASSANDRA-7206 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Robert Stupp Assignee: Sylvain Lebresne Fix For: 2.1 rc1 Attachments: 7206.txt C* 2.1 CQL User-Defined-Types are really fine and useful. But it lacks the possibility to omit attributes or set them to null. Would be great to have the possibility to create UDT instances with some attributes missing. Also changing the UDT definition (for example: {{alter type add new_attr}}) will break running applications that rely on the previous definition of the UDT. 
For example: {code} CREATE TYPE foo ( attr_one text, attr_two int ); CREATE TABLE bar ( id int PRIMARY KEY, comp foo ); {code} {code} INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra', attr_two: 2}); {code} works {code} INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra'}); {code} does not work {code} ALTER TYPE foo ADD attr_three timestamp; {code} {code} INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra', attr_two: 2}); {code} will no longer work (missing attribute) -- This message was sent by Atlassian JIRA (v6.2#6252)
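The fromString/getString wrinkle Sylvain mentions (representing a null field as '@', with escaping so a real '@' in a value is not mistaken for null) can be pictured with a small round-trip sketch. The '@' marker comes from the comment above; the exact escaping scheme here is an assumption for illustration, not the patch's code:

```python
def field_to_string(value):
    # Null becomes the marker '@'; a literal '@' or backslash in a real
    # value is escaped so it cannot be confused with the null marker.
    # (Hypothetical escaping scheme, for illustration only.)
    if value is None:
        return "@"
    return value.replace("\\", "\\\\").replace("@", "\\@")

def field_from_string(s):
    # Inverse of field_to_string: unescape, treating a bare '@' as null.
    if s == "@":
        return None
    out, i = [], 0
    while i < len(s):
        if s[i] == "\\" and i + 1 < len(s):
            out.append(s[i + 1])
            i += 2
        else:
            out.append(s[i])
            i += 1
    return "".join(out)
```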
[jira] [Updated] (CASSANDRA-7262) During streaming: java.lang.AssertionError: Reference counter -1
[ https://issues.apache.org/jira/browse/CASSANDRA-7262?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Joshua McKenzie updated CASSANDRA-7262: --- Attachment: 7262_v2.txt Attaching v2 that makes StreamSession.closeSession() idempotent. This also resolves the CompactionTask ref count issue we see on 2.0.7. Our logic for closing out references in SSTR: {code:title=ref decr} if (references.decrementAndGet() == 0 && isCompacted.get()) {code} When the closeSession call from the 2nd Socket in the ConnectionHandler comes through, the reference count is put into a state where the next decrement will be 0, so the page-cache drop, close, and DeletingTask never happen, and each subsequent compaction task call on the file continues to bypass and assert. The closing logic on trunk has been refactored and updated and this is not an issue now, though the negative ref count for repeated closeSession calls still fires. During streaming: java.lang.AssertionError: Reference counter -1 Key: CASSANDRA-7262 URL: https://issues.apache.org/jira/browse/CASSANDRA-7262 Project: Cassandra Issue Type: Bug Components: Core Environment: Cassandra 2.0.7, x86-64 Ubuntu 12.04.4, Oracle java 1.7.0_45 Reporter: Duncan Sands Assignee: Joshua McKenzie Priority: Minor Fix For: 2.0.9 Attachments: 7262_v1.txt, 7262_v2.txt, system.log.gz Got this assertion failure this weekend during repair: ERROR [STREAM-IN-/192.168.21.14] 2014-05-17 01:17:52,332 StreamSession.java (line 420) [Stream #3a3ac8a2-dd50-11e3-b3c1-6bf6dccd6457] Streaming error occurred java.lang.RuntimeException: Outgoing stream handler has been closed at org.apache.cassandra.streaming.ConnectionHandler.sendMessage(ConnectionHandler.java:170) at org.apache.cassandra.streaming.StreamSession.receive(StreamSession.java:483) at org.apache.cassandra.streaming.StreamSession.messageReceived(StreamSession.java:372) at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:289) at 
java.lang.Thread.run(Thread.java:744) ERROR [STREAM-IN-/192.168.21.14] 2014-05-17 01:17:52,350 CassandraDaemon.java (line 198) Exception in thread Thread[STREAM-IN-/192.168.21.14,5,RMI Runtime] java.lang.AssertionError: Reference counter -1 for /mnt/ssd1/cassandra/data/ldn_production/historical_accounts/ldn_production-historical_accounts-jb-79827-Data.db at org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1107) at org.apache.cassandra.streaming.StreamTransferTask.abort(StreamTransferTask.java:80) at org.apache.cassandra.streaming.StreamSession.closeSession(StreamSession.java:322) at org.apache.cassandra.streaming.StreamSession.onError(StreamSession.java:425) at org.apache.cassandra.streaming.ConnectionHandler$IncomingMessageHandler.run(ConnectionHandler.java:300) at java.lang.Thread.run(Thread.java:744) followed by a few more (the reference counter got down to -3). Got the same kind of assertion failure on one other node (in a different data centre; there are 21 nodes altogether distributed over 4 data centres). I've attached the relevant part of the log. It starts quite a bit before the assertion failure at the first exception on this node (Cannot proceed on repair because a neighbor ... is dead), and finishes a few hours afterwards when the node was restarted. 
Edit: The following Reference counter assertion failures followed the 1st on a different file and have a different stack trace: ERROR [CompactionExecutor:382] 2014-05-17 01:17:53,157 CassandraDaemon.java (line 198) Exception in thread Thread[CompactionExecutor:382,1,main] java.lang.AssertionError: Reference counter -1 for /mnt/ssd1/cassandra/data/ldn_production/historical_accounts/ldn_production-historical_accounts-jb-83888-Data.db at org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1107) at org.apache.cassandra.io.sstable.SSTableReader.releaseReferences(SSTableReader.java:1429) at org.apache.cassandra.db.compaction.CompactionController.close(CompactionController.java:207) at org.apache.cassandra.db.compaction.CompactionTask.runWith(CompactionTask.java:220) at org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28) at org.apache.cassandra.db.compaction.CompactionTask.executeInternal(CompactionTask.java:60) at org.apache.cassandra.db.compaction.AbstractCompactionTask.execute(AbstractCompactionTask.java:59) at
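The double-close failure mode described above, where a second closeSession decrements a reference that was already released, driving the counter negative so cleanup is skipped and later releases assert, can be sketched as follows (illustrative names, not the SSTableReader code):

```python
import threading

class Ref:
    """Minimal sketch of a reference-counted resource with an idempotent
    close, mirroring the v2 fix to StreamSession.closeSession()."""

    def __init__(self):
        self.count = 1            # the owner's initial reference
        self.cleaned_up = False   # stand-in for page-cache drop / file delete
        self._closed = False
        self._lock = threading.Lock()

    def release(self):
        with self._lock:
            # Mirrors the "Reference counter -1" assertion in the logs.
            assert self.count > 0, "Reference counter %d" % (self.count - 1)
            self.count -= 1
            if self.count == 0:
                self.cleaned_up = True

    def close_session(self):
        # Idempotence guard: only the first close releases the reference,
        # so a second socket closing the session cannot push it negative.
        with self._lock:
            if self._closed:
                return
            self._closed = True
        self.release()
```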
[jira] [Commented] (CASSANDRA-7120) Bad paging state returned for prepared statements for last page
[ https://issues.apache.org/jira/browse/CASSANDRA-7120?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006128#comment-14006128 ] Tyler Hobbs commented on CASSANDRA-7120: +1 Bad paging state returned for prepared statements for last page --- Key: CASSANDRA-7120 URL: https://issues.apache.org/jira/browse/CASSANDRA-7120 Project: Cassandra Issue Type: Bug Components: Core Reporter: Tyler Hobbs Assignee: Tyler Hobbs Fix For: 2.1 rc1 Attachments: 7120-alternative.txt, 7120.txt When executing a paged query with a prepared statement, a non-null paging state is sometimes being returned for the final page, causing an endless paging loop. Specifically, this is the schema being used: {noformat} CREATE KEYSPACE test3rf WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '3'}; USE test3rf; CREATE TABLE test3rf.test ( k int PRIMARY KEY, v int ) {noformat} The inserts are like so: {noformat} INSERT INTO test3rf.test (k, v) VALUES (?, 0) {noformat} With values from [0, 99] used for k. The query is {{SELECT * FROM test3rf.test}} with a fetch size of 3. The final page returns the row with k=3, and the paging state is {{000400420004000176007fa2}}. This matches the paging state from three pages earlier. When executing this with a non-prepared statement, no paging state is returned for this page. This problem doesn't happen with the 2.0 branch. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7231) Support more concurrent requests per native transport connection
[ https://issues.apache.org/jira/browse/CASSANDRA-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006156#comment-14006156 ] Aleksey Yeschenko commented on CASSANDRA-7231: -- [~thobbs] Yes. But I guess I favor the no-flag approach enough now to put up a patch myself. Patch + review coming later tonight. Support more concurrent requests per native transport connection Key: CASSANDRA-7231 URL: https://issues.apache.org/jira/browse/CASSANDRA-7231 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Sylvain Lebresne Priority: Minor Fix For: 2.1.0 Attachments: 7231.txt Right now we only support 127 concurrent requests against a given native transport connection. This causes us to waste file handles opening multiple connections, increases driver complexity and dilutes writes across multiple connections so that batching cannot easily be performed. I propose raising this limit substantially, to somewhere in the region of 16-64K, and that this is a good time to do it since we're already bumping the protocol version. -- This message was sent by Atlassian JIRA (v6.2#6252)
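For context on the 127 figure: in protocol v1/v2 the frame header carries the stream id in a single signed byte, and negative ids are reserved for server-initiated events, leaving ids 0 through 127 for in-flight client requests. Widening the field (the v3 protocol ultimately used two bytes) raises the ceiling accordingly; a quick sketch:

```python
import struct

def max_client_stream_id(id_bytes):
    # Signed field; negative ids are reserved for server events, so
    # clients can use 0 .. 2**(8 * id_bytes - 1) - 1.
    return 2 ** (8 * id_bytes - 1) - 1

# A one-byte signed stream id tops out at 127; 128 no longer fits.
struct.pack(">b", max_client_stream_id(1))
try:
    struct.pack(">b", max_client_stream_id(1) + 1)
    overflowed = False
except struct.error:
    overflowed = True
```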
[jira] [Commented] (CASSANDRA-7231) Support more concurrent requests per native transport connection
[ https://issues.apache.org/jira/browse/CASSANDRA-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006152#comment-14006152 ] Tyler Hobbs commented on CASSANDRA-7231: I also slightly favor the non-flag approach, but not enough to demand a different patch. [~iamaleksey] you got review on this one? Support more concurrent requests per native transport connection Key: CASSANDRA-7231 URL: https://issues.apache.org/jira/browse/CASSANDRA-7231 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Sylvain Lebresne Priority: Minor Fix For: 2.1.0 Attachments: 7231.txt Right now we only support 127 concurrent requests against a given native transport connection. This causes us to waste file handles opening multiple connections, increases driver complexity and dilutes writes across multiple connections so that batching cannot easily be performed. I propose raising this limit substantially, to somewhere in the region of 16-64K, and that this is a good time to do it since we're already bumping the protocol version. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-7287) Pig CqlStorage test fails with IAE
Brandon Williams created CASSANDRA-7287: --- Summary: Pig CqlStorage test fails with IAE Key: CASSANDRA-7287 URL: https://issues.apache.org/jira/browse/CASSANDRA-7287 Project: Cassandra Issue Type: Bug Components: Hadoop, Tests Reporter: Brandon Williams Assignee: Alex Liu Fix For: 2.1 rc1 {noformat} [junit] java.lang.IllegalArgumentException [junit] at java.nio.Buffer.limit(Buffer.java:267) [junit] at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:542) [junit] at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:117) [junit] at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:97) [junit] at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:28) [junit] at org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:48) [junit] at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:66) [junit] at org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cassandraToObj(AbstractCassandraStorage.java:792) [junit] at org.apache.cassandra.hadoop.pig.CqlStorage.cqlColumnToObj(CqlStorage.java:195) [junit] at org.apache.cassandra.hadoop.pig.CqlStorage.getNext(CqlStorage.java:118) {noformat} I'm guessing this is caused by CqlStorage passing an empty BB to BBU, but I don't know if it's pig that's broken or is a deeper issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (CASSANDRA-6484) cassandra-shuffle not working with authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko reassigned CASSANDRA-6484: Assignee: Aleksey Yeschenko cassandra-shuffle not working with authentication - Key: CASSANDRA-6484 URL: https://issues.apache.org/jira/browse/CASSANDRA-6484 Project: Cassandra Issue Type: Improvement Components: Tools Environment: cassandra 2.0.3 Reporter: Gibheer Assignee: Aleksey Yeschenko Priority: Minor Labels: lhf Fix For: 2.0.9 Attachments: 6484.txt, login.patch When enabling authentication for a cassandra cluster, the tool cassandra-shuffle is unable to connect. The reason is that cassandra-shuffle doesn't take any parameters for username and password for the thrift connection. To solve that problem, parameters for username and password should be added. It should also be able to interpret cqlshrc or a separate file with authentication data. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-6484) cassandra-shuffle not working with authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-6484: - Attachment: 6484.txt cassandra-shuffle not working with authentication - Key: CASSANDRA-6484 URL: https://issues.apache.org/jira/browse/CASSANDRA-6484 Project: Cassandra Issue Type: Improvement Components: Tools Environment: cassandra 2.0.3 Reporter: Gibheer Priority: Minor Labels: lhf Fix For: 2.0.9 Attachments: 6484.txt, login.patch When enabling authentication for a cassandra cluster, the tool cassandra-shuffle is unable to connect. The reason is that cassandra-shuffle doesn't take any parameters for username and password for the thrift connection. To solve that problem, parameters for username and password should be added. It should also be able to interpret cqlshrc or a separate file with authentication data. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-6484) cassandra-shuffle not working with authentication
[ https://issues.apache.org/jira/browse/CASSANDRA-6484?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-6484: - Reviewer: Brandon Williams (was: Aleksey Yeschenko) cassandra-shuffle not working with authentication - Key: CASSANDRA-6484 URL: https://issues.apache.org/jira/browse/CASSANDRA-6484 Project: Cassandra Issue Type: Improvement Components: Tools Environment: cassandra 2.0.3 Reporter: Gibheer Assignee: Aleksey Yeschenko Priority: Minor Labels: lhf Fix For: 2.0.9 Attachments: 6484.txt, login.patch When enabling authentication for a cassandra cluster, the tool cassandra-shuffle is unable to connect. The reason is that cassandra-shuffle doesn't take any parameters for username and password for the thrift connection. To solve that problem, parameters for username and password should be added. It should also be able to interpret cqlshrc or a separate file with authentication data. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6563) TTL histogram compactions not triggered at high Estimated droppable tombstones rate
[ https://issues.apache.org/jira/browse/CASSANDRA-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006208#comment-14006208 ] Marcus Eriksson commented on CASSANDRA-6563: bq. Should I file a new issue for this fix? yes. Thanks for the patch, could you rebase it on 2.0? TTL histogram compactions not triggered at high Estimated droppable tombstones rate - Key: CASSANDRA-6563 URL: https://issues.apache.org/jira/browse/CASSANDRA-6563 Project: Cassandra Issue Type: Bug Components: Core Environment: 1.2.12ish Reporter: Chris Burroughs Assignee: Paulo Motta Fix For: 1.2.17, 2.0.8 Attachments: 1.2.16-CASSANDRA-6563-v2.txt, 1.2.16-CASSANDRA-6563-v3.txt, 1.2.16-CASSANDRA-6563.txt, 2.0.7-CASSANDRA-6563.txt, patch-v1-iostat.png, patch-v1-range1.png, patch-v2-range3.png, patched-droppadble-ratio.png, patched-storage-load.png, patched1-compacted-bytes.png, patched2-compacted-bytes.png, unpatched-droppable-ratio.png, unpatched-storage-load.png, unpatched1-compacted-bytes.png, unpatched2-compacted-bytes.png I have several column families in a largish cluster where virtually all columns are written with a (usually the same) TTL. My understanding of CASSANDRA-3442 is that sstables that have a high (> 20%) estimated percentage of droppable tombstones should be individually compacted. This does not appear to be occurring with size tiered compaction. 
Example from one node: {noformat}
$ ll /data/sstables/data/ks/Cf/*Data.db
-rw-rw-r-- 31 cassandra cassandra 26651211757 Nov 26 22:59 /data/sstables/data/ks/Cf/ks-Cf-ic-295562-Data.db
-rw-rw-r-- 31 cassandra cassandra  6272641818 Nov 27 02:51 /data/sstables/data/ks/Cf/ks-Cf-ic-296121-Data.db
-rw-rw-r-- 31 cassandra cassandra  1814691996 Dec  4 21:50 /data/sstables/data/ks/Cf/ks-Cf-ic-320449-Data.db
-rw-rw-r-- 30 cassandra cassandra 10909061157 Dec 11 17:31 /data/sstables/data/ks/Cf/ks-Cf-ic-340318-Data.db
-rw-rw-r-- 29 cassandra cassandra   459508942 Dec 12 10:37 /data/sstables/data/ks/Cf/ks-Cf-ic-342259-Data.db
-rw-rw-r--  1 cassandra cassandra      336908 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342307-Data.db
-rw-rw-r--  1 cassandra cassandra     2063935 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342309-Data.db
-rw-rw-r--  1 cassandra cassandra         409 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342314-Data.db
-rw-rw-r--  1 cassandra cassandra    31180007 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342319-Data.db
-rw-rw-r--  1 cassandra cassandra     2398345 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342322-Data.db
-rw-rw-r--  1 cassandra cassandra       21095 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342331-Data.db
-rw-rw-r--  1 cassandra cassandra       81454 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342335-Data.db
-rw-rw-r--  1 cassandra cassandra     1063718 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342339-Data.db
-rw-rw-r--  1 cassandra cassandra      127004 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342344-Data.db
-rw-rw-r--  1 cassandra cassandra      146785 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342346-Data.db
-rw-rw-r--  1 cassandra cassandra      697338 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342351-Data.db
-rw-rw-r--  1 cassandra cassandra     3921428 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342367-Data.db
-rw-rw-r--  1 cassandra cassandra      240332 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342370-Data.db
-rw-rw-r--  1 cassandra cassandra       45669 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342374-Data.db
-rw-rw-r--  1 cassandra cassandra    53127549 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342375-Data.db
-rw-rw-r-- 16 cassandra cassandra 12466853166 Dec 25 22:40 /data/sstables/data/ks/Cf/ks-Cf-ic-396473-Data.db
-rw-rw-r-- 12 cassandra cassandra  3903237198 Dec 29 19:42 /data/sstables/data/ks/Cf/ks-Cf-ic-408926-Data.db
-rw-rw-r--  7 cassandra cassandra  3692260987 Jan  3 08:25 /data/sstables/data/ks/Cf/ks-Cf-ic-427733-Data.db
-rw-rw-r--  4 cassandra cassandra  3971403602 Jan  6 20:50 /data/sstables/data/ks/Cf/ks-Cf-ic-437537-Data.db
-rw-rw-r--  3 cassandra cassandra  1007832224 Jan  7 15:19 /data/sstables/data/ks/Cf/ks-Cf-ic-440331-Data.db
-rw-rw-r--  2 cassandra cassandra   896132537 Jan  8 11:05 /data/sstables/data/ks/Cf/ks-Cf-ic-447740-Data.db
-rw-rw-r--  1 cassandra cassandra   963039096 Jan  9 04:59 /data/sstables/data/ks/Cf/ks-Cf-ic-449425-Data.db
-rw-rw-r--  1 cassandra cassandra   232168351 Jan  9 10:14 /data/sstables/data/ks/Cf/ks-Cf-ic-450287-Data.db
-rw-rw-r--  1 cassandra cassandra    73126319 Jan  9 11:28
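The trigger being discussed (from CASSANDRA-3442) is an estimate of how many of an sstable's tombstones are already past gc_grace and hence droppable. A sketch of that estimate, with illustrative names rather than Cassandra's actual metadata API:

```python
import time

def estimated_droppable_ratio(deletion_times, gc_grace_seconds, now=None):
    # Fraction of tombstones whose local deletion time plus gc_grace has
    # already passed; a single-sstable tombstone compaction is considered
    # when this exceeds the tombstone threshold (20% by default).
    now = time.time() if now is None else now
    if not deletion_times:
        return 0.0
    droppable = sum(1 for t in deletion_times if t + gc_grace_seconds <= now)
    return droppable / float(len(deletion_times))
```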
[jira] [Created] (CASSANDRA-7288) Exception during compaction
Julien Anguenot created CASSANDRA-7288: -- Summary: Exception during compaction Key: CASSANDRA-7288 URL: https://issues.apache.org/jira/browse/CASSANDRA-7288 Project: Cassandra Issue Type: Bug Reporter: Julien Anguenot Attachments: compaction_exception.txt Sometimes Cassandra nodes (in a multi datacenter deployment) are throwing errors during compaction. (see attached stack trace) Let me know what additional information I could provide. Thank you. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-7279: -- Attachment: 7279-trunk.txt MultiSliceTest.test_with_overlap* unit tests failing in trunk - Key: CASSANDRA-7279 URL: https://issues.apache.org/jira/browse/CASSANDRA-7279 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Michael Shuler Assignee: T Jake Luciani Priority: Minor Fix For: 2.1 rc1 Attachments: 7279-trunk.txt Example: https://cassci.datastax.com/job/trunk_utest/623/testReport/org.apache.cassandra.thrift/MultiSliceTest/ -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7010) bootstrap_test simple_bootstrap_test dtest fails in 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006314#comment-14006314 ] Michael Shuler commented on CASSANDRA-7010: --- I didn't have log artifact archiving enabled when I started this ticket, so don't have one with 0, and I'm not getting any 0 errors now. Out of ~30 loops over this test, I got 4 failures on {{assert_almost_equal(size1, size2)}} with: {noformat} AssertionError: values not within 16.00% of the max: (84186, 104847) AssertionError: values not within 16.00% of the max: (83229, 100477) AssertionError: values not within 16.00% of the max: (100052, 83828) AssertionError: values not within 16.00% of the max: (103188, 80631) {noformat} It does look like bumping it up a bit might be reasonable. bootstrap_test simple_bootstrap_test dtest fails in 2.1 --- Key: CASSANDRA-7010 URL: https://issues.apache.org/jira/browse/CASSANDRA-7010 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Michael Shuler Assignee: Marcus Eriksson Fix For: 2.1 rc1 Attachments: 7010.patch I patched ccm with https://github.com/pcmanus/ccm/pull/109 and got an error from simple_bootstrap: {noformat} == FAIL: simple_bootstrap_test (bootstrap_test.TestBootstrap) -- Traceback (most recent call last): File /home/mshuler/git/cassandra-dtest/bootstrap_test.py, line 58, in simple_bootstrap_test assert_almost_equal(initial_size, 2 * size1) File /home/mshuler/git/cassandra-dtest/assertions.py, line 26, in assert_almost_equal assert vmin > vmax * (1.0 - error), "values not within %.2f%% of the max: %s" % (error * 100, args) AssertionError: values not within 16.00% of the max: (0, 186396) {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
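For reference, the dtest helper behaves like this minimal re-implementation, reconstructed from the quoted assertion and the failure messages (the real helper in assertions.py may differ in detail):

```python
def assert_almost_equal(*args, **kwargs):
    # Default 16% tolerance, matching the "within 16.00% of the max"
    # messages in the logs above.
    error = kwargs.get("error", 0.16)
    vmax, vmin = max(args), min(args)
    assert vmin > vmax * (1.0 - error), \
        "values not within %.2f%% of the max: %s" % (error * 100, args)
```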
[jira] [Updated] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-7279: -- Attachment: (was: 7279-trunk.txt) MultiSliceTest.test_with_overlap* unit tests failing in trunk - Key: CASSANDRA-7279 URL: https://issues.apache.org/jira/browse/CASSANDRA-7279 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Michael Shuler Assignee: T Jake Luciani Priority: Minor Fix For: 2.1 rc1 Example: https://cassci.datastax.com/job/trunk_utest/623/testReport/org.apache.cassandra.thrift/MultiSliceTest/ -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-7279: -- Attachment: 7279-trunk.txt Attached patch to sort and crop overlapping slices. I also had to change AtomicBTreeColumns to deal with excluding a point when the finish of the previous slice == the start of the next slice. MultiSliceTest.test_with_overlap* unit tests failing in trunk - Key: CASSANDRA-7279 URL: https://issues.apache.org/jira/browse/CASSANDRA-7279 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Michael Shuler Assignee: T Jake Luciani Priority: Minor Fix For: 2.1 rc1 Attachments: 7279-trunk.txt Example: https://cassci.datastax.com/job/trunk_utest/623/testReport/org.apache.cassandra.thrift/MultiSliceTest/ -- This message was sent by Atlassian JIRA (v6.2#6252)
[6/6] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Conflicts: CHANGES.txt src/java/org/apache/cassandra/cql3/Cql.g src/java/org/apache/cassandra/cql3/Lists.java src/java/org/apache/cassandra/cql3/QueryProcessor.java src/java/org/apache/cassandra/cql3/Relation.java src/java/org/apache/cassandra/cql3/ResultSet.java src/java/org/apache/cassandra/cql3/statements/BatchStatement.java src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java src/java/org/apache/cassandra/cql3/statements/Restriction.java src/java/org/apache/cassandra/cql3/statements/SelectStatement.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf521900 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf521900 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf521900 Branch: refs/heads/cassandra-2.1 Commit: bf5219000be9c03daa1dc4fb420b031f6ffec01d Parents: 9bd3887 4349638 Author: Tyler Hobbs ty...@datastax.com Authored: Thu May 22 14:19:42 2014 -0500 Committer: Tyler Hobbs ty...@datastax.com Committed: Thu May 22 14:19:42 2014 -0500 -- CHANGES.txt |1 + .../apache/cassandra/cql3/AbstractMarker.java | 10 +- .../org/apache/cassandra/cql3/Constants.java|6 + src/java/org/apache/cassandra/cql3/Cql.g| 109 +- src/java/org/apache/cassandra/cql3/Lists.java | 13 +- .../cassandra/cql3/MultiColumnRelation.java | 144 +++ .../org/apache/cassandra/cql3/Relation.java | 104 +- .../cassandra/cql3/SingleColumnRelation.java| 95 ++ src/java/org/apache/cassandra/cql3/Term.java| 10 + src/java/org/apache/cassandra/cql3/Tuples.java | 349 ++ .../cql3/statements/ModificationStatement.java | 20 +- .../cql3/statements/MultiColumnRestriction.java | 137 +++ .../cassandra/cql3/statements/Restriction.java | 395 +-- .../cql3/statements/SelectStatement.java| 805 - .../statements/SingleColumnRestriction.java | 413 +++ .../cassandra/db/composites/CBuilder.java |2 + .../cassandra/db/composites/Composites.java |2 + 
.../cassandra/db/composites/CompoundCType.java | 13 + .../cassandra/db/composites/SimpleCType.java| 10 + .../apache/cassandra/db/marshal/TupleType.java | 279 + .../cassandra/cql3/MultiColumnRelationTest.java | 1114 ++ 21 files changed, 3263 insertions(+), 768 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf521900/CHANGES.txt -- diff --cc CHANGES.txt index 55fc400,c6c51c3..7ab411b --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -64,7 -35,24 +64,8 @@@ Merged from 2.0 * Fix 2ndary index queries with DESC clustering order (CASSANDRA-6950) * Invalid key cache entries on DROP (CASSANDRA-6525) * Fix flapping RecoveryManagerTest (CASSANDRA-7084) + * Add missing iso8601 patterns for date strings (6973) ++ * Support selecting multiple rows in a partition using IN (CASSANDRA-6875) Merged from 1.2: * Add Cloudstack snitch (CASSANDRA-7147) * Update system.peers correctly when relocating tokens (CASSANDRA-7126) http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf521900/src/java/org/apache/cassandra/cql3/AbstractMarker.java -- diff --cc src/java/org/apache/cassandra/cql3/AbstractMarker.java index 2b9c6c9,4329ed9..0b59ed4 --- a/src/java/org/apache/cassandra/cql3/AbstractMarker.java +++ b/src/java/org/apache/cassandra/cql3/AbstractMarker.java @@@ -99,10 -103,10 +103,10 @@@ public abstract class AbstractMarker ex } @Override -public AbstractMarker prepare(ColumnSpecification receiver) throws InvalidRequestException +public AbstractMarker prepare(String keyspace, ColumnSpecification receiver) throws InvalidRequestException { if (receiver.type instanceof CollectionType) - throw new InvalidRequestException("Invalid IN relation on collection column"); + throw new InvalidRequestException("Collection columns do not support IN relations"); return new Lists.Marker(bindIndex, makeInReceiver(receiver)); } http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf521900/src/java/org/apache/cassandra/cql3/Constants.java -- 
http://git-wip-us.apache.org/repos/asf/cassandra/blob/bf521900/src/java/org/apache/cassandra/cql3/Cql.g -- diff --cc src/java/org/apache/cassandra/cql3/Cql.g index 4c1f2dc,ceb2bde..57b61a5 ---
[1/3] Support multi-row selects within a partition using IN
Repository: cassandra Updated Branches: refs/heads/cassandra-2.0 263563227 - 43496384d http://git-wip-us.apache.org/repos/asf/cassandra/blob/43496384/test/unit/org/apache/cassandra/cql3/MultiColumnRelationTest.java -- diff --git a/test/unit/org/apache/cassandra/cql3/MultiColumnRelationTest.java b/test/unit/org/apache/cassandra/cql3/MultiColumnRelationTest.java new file mode 100644 index 000..b728cba --- /dev/null +++ b/test/unit/org/apache/cassandra/cql3/MultiColumnRelationTest.java @@ -0,0 +1,1112 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * License); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, software + * distributed under the License is distributed on an AS IS BASIS, + * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. + * See the License for the specific language governing permissions and + * limitations under the License. 
+ */ +package org.apache.cassandra.cql3; + +import org.apache.cassandra.SchemaLoader; +import org.apache.cassandra.db.ConsistencyLevel; +import org.apache.cassandra.db.marshal.*; +import org.apache.cassandra.exceptions.InvalidRequestException; +import org.apache.cassandra.exceptions.RequestExecutionException; +import org.apache.cassandra.exceptions.RequestValidationException; +import org.apache.cassandra.exceptions.SyntaxException; +import org.apache.cassandra.gms.Gossiper; +import org.apache.cassandra.service.ClientState; +import org.apache.cassandra.service.QueryState; +import org.apache.cassandra.transport.messages.ResultMessage; +import org.apache.cassandra.utils.ByteBufferUtil; +import org.apache.cassandra.utils.MD5Digest; +import org.junit.AfterClass; +import org.junit.BeforeClass; +import org.junit.Test; +import org.slf4j.Logger; +import org.slf4j.LoggerFactory; + +import java.nio.ByteBuffer; +import java.util.*; + +import static org.apache.cassandra.cql3.QueryProcessor.process; +import static org.apache.cassandra.cql3.QueryProcessor.processInternal; +import static org.junit.Assert.assertTrue; +import static org.junit.Assert.assertEquals; +import static com.google.common.collect.Lists.newArrayList; +import static org.junit.Assert.fail; + +public class MultiColumnRelationTest +{ +private static final Logger logger = LoggerFactory.getLogger(MultiColumnRelationTest.class); +static ClientState clientState; +static String keyspace = "multi_column_relation_test"; + +@BeforeClass +public static void setUpClass() throws Throwable +{ +SchemaLoader.loadSchema(); +executeSchemaChange("CREATE KEYSPACE IF NOT EXISTS %s WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}"); +executeSchemaChange("CREATE TABLE IF NOT EXISTS %s.single_partition (a int PRIMARY KEY, b int)"); +executeSchemaChange("CREATE TABLE IF NOT EXISTS %s.compound_partition (a int, b int, c int, PRIMARY KEY ((a, b)))"); +executeSchemaChange("CREATE TABLE IF NOT EXISTS %s.single_clustering (a int, 
b int, c int, PRIMARY KEY (a, b))"); +executeSchemaChange("CREATE TABLE IF NOT EXISTS %s.multiple_clustering (a int, b int, c int, d int, PRIMARY KEY (a, b, c, d))"); +executeSchemaChange("CREATE TABLE IF NOT EXISTS %s.multiple_clustering_reversed (a int, b int, c int, d int, PRIMARY KEY (a, b, c, d)) WITH CLUSTERING ORDER BY (b DESC, c ASC, d DESC)"); +clientState = ClientState.forInternalCalls(); +} + +@AfterClass +public static void stopGossiper() +{ +Gossiper.instance.stop(); +} + +private static void executeSchemaChange(String query) throws Throwable +{ +try +{ +process(String.format(query, keyspace), ConsistencyLevel.ONE); +} catch (RuntimeException exc) +{ +throw exc.getCause(); +} +} + +private static UntypedResultSet execute(String query) throws Throwable +{ +try +{ +return processInternal(String.format(query, keyspace)); +} catch (RuntimeException exc) +{ +if (exc.getCause() != null) +throw exc.getCause(); +throw exc; +} +} + +private MD5Digest prepare(String query) throws RequestValidationException +{ +ResultMessage.Prepared prepared = QueryProcessor.prepare(String.format(query, keyspace), clientState, false); +return prepared.statementId; +} + +private UntypedResultSet executePrepared(MD5Digest statementId, QueryOptions options)
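The execute and executeSchemaChange helpers above share a small pattern worth noting: QueryProcessor's internal entry points wrap checked failures in a RuntimeException, and the test helpers rethrow the original cause so JUnit reports the real error rather than the wrapper. A minimal self-contained sketch of that pattern follows; the class and method names here are invented for illustration and are not Cassandra APIs.

```java
// Illustrative sketch of the cause-unwrapping used by the test helpers above.
// "CauseUnwrap" and "run" are hypothetical names, not part of the Cassandra source.
public class CauseUnwrap
{
    public static void run(Runnable task) throws Throwable
    {
        try
        {
            task.run();
        }
        catch (RuntimeException exc)
        {
            // Surface the wrapped checked exception when there is one,
            // otherwise rethrow the RuntimeException itself.
            if (exc.getCause() != null)
                throw exc.getCause();
            throw exc;
        }
    }
}
```

The design point is that assertion failures and expected InvalidRequestExceptions propagate with their original types, which is what lets the tests catch specific exception classes.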
[1/6] Support multi-row selects within a partition using IN
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 9bd388780 - bf5219000 http://git-wip-us.apache.org/repos/asf/cassandra/blob/43496384/test/unit/org/apache/cassandra/cql3/MultiColumnRelationTest.java
[6/7] git commit: Merge branch 'cassandra-2.0' into cassandra-2.1
Merge branch 'cassandra-2.0' into cassandra-2.1 Conflicts: CHANGES.txt src/java/org/apache/cassandra/cql3/Cql.g src/java/org/apache/cassandra/cql3/Lists.java src/java/org/apache/cassandra/cql3/QueryProcessor.java src/java/org/apache/cassandra/cql3/Relation.java src/java/org/apache/cassandra/cql3/ResultSet.java src/java/org/apache/cassandra/cql3/statements/BatchStatement.java src/java/org/apache/cassandra/cql3/statements/ModificationStatement.java src/java/org/apache/cassandra/cql3/statements/Restriction.java src/java/org/apache/cassandra/cql3/statements/SelectStatement.java Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/bf521900 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/bf521900 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/bf521900 Branch: refs/heads/trunk Commit: bf5219000be9c03daa1dc4fb420b031f6ffec01d Parents: 9bd3887 4349638 Author: Tyler Hobbs ty...@datastax.com Authored: Thu May 22 14:19:42 2014 -0500 Committer: Tyler Hobbs ty...@datastax.com Committed: Thu May 22 14:19:42 2014 -0500
[1/7] Support multi-row selects within a partition using IN
Repository: cassandra Updated Branches: refs/heads/trunk 864865da9 - e5ab470d5 http://git-wip-us.apache.org/repos/asf/cassandra/blob/43496384/test/unit/org/apache/cassandra/cql3/MultiColumnRelationTest.java
[7/7] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/e5ab470d Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/e5ab470d Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/e5ab470d Branch: refs/heads/trunk Commit: e5ab470d527c86cfa8347739eeb76c9ef1b5aa9b Parents: 864865d bf52190 Author: Tyler Hobbs ty...@datastax.com Authored: Thu May 22 14:20:49 2014 -0500 Committer: Tyler Hobbs ty...@datastax.com Committed: Thu May 22 14:20:49 2014 -0500 -- CHANGES.txt |1 + .../apache/cassandra/cql3/AbstractMarker.java | 10 +- .../org/apache/cassandra/cql3/Constants.java|6 + src/java/org/apache/cassandra/cql3/Cql.g| 109 +- src/java/org/apache/cassandra/cql3/Lists.java | 13 +- .../cassandra/cql3/MultiColumnRelation.java | 144 +++ .../org/apache/cassandra/cql3/Relation.java | 104 +- .../cassandra/cql3/SingleColumnRelation.java| 95 ++ src/java/org/apache/cassandra/cql3/Term.java| 10 + src/java/org/apache/cassandra/cql3/Tuples.java | 349 ++ .../cql3/statements/ModificationStatement.java | 20 +- .../cql3/statements/MultiColumnRestriction.java | 137 +++ .../cassandra/cql3/statements/Restriction.java | 395 +-- .../cql3/statements/SelectStatement.java| 805 - .../statements/SingleColumnRestriction.java | 413 +++ .../cassandra/db/composites/CBuilder.java |2 + .../cassandra/db/composites/Composites.java |2 + .../cassandra/db/composites/CompoundCType.java | 13 + .../cassandra/db/composites/SimpleCType.java| 10 + .../apache/cassandra/db/marshal/TupleType.java | 279 + .../cassandra/cql3/MultiColumnRelationTest.java | 1114 ++ 21 files changed, 3263 insertions(+), 768 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5ab470d/CHANGES.txt -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/e5ab470d/src/java/org/apache/cassandra/cql3/Cql.g --
[jira] [Updated] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-7279: -- Attachment: (was: 7279-trunk.txt) MultiSliceTest.test_with_overlap* unit tests failing in trunk - Key: CASSANDRA-7279 URL: https://issues.apache.org/jira/browse/CASSANDRA-7279 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Michael Shuler Assignee: T Jake Luciani Priority: Minor Fix For: 2.1 rc1 Attachments: 7279-trunk.txt Example: https://cassci.datastax.com/job/trunk_utest/623/testReport/org.apache.cassandra.thrift/MultiSliceTest/ -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-7279: -- Attachment: 7279-trunk.txt MultiSliceTest.test_with_overlap* unit tests failing in trunk - Key: CASSANDRA-7279 URL: https://issues.apache.org/jira/browse/CASSANDRA-7279 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Michael Shuler Assignee: T Jake Luciani Priority: Minor Fix For: 2.1 rc1 Attachments: 7279-trunk.txt Example: https://cassci.datastax.com/job/trunk_utest/623/testReport/org.apache.cassandra.thrift/MultiSliceTest/ -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7279) MultiSliceTest.test_with_overlap* unit tests failing in trunk
[ https://issues.apache.org/jira/browse/CASSANDRA-7279?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006397#comment-14006397 ] Michael Shuler commented on CASSANDRA-7279: --- Applied the patch to trunk and tested out the modified tests (haven't run a full 'ant test') - MultiSliceTest, QueryPagerTest, RangeTombstoneTest pass, but ColumnFamilyStoreTest failed with: {noformat} test: [echo] running unit tests [junit] WARNING: multiple versions of ant detected in path for junit [junit] jar:file:/usr/share/ant/lib/ant.jar!/org/apache/tools/ant/Project.class [junit] and jar:file:/home/mshuler/git/cassandra/build/lib/jars/ant-1.6.5.jar!/org/apache/tools/ant/Project.class [junit] Testsuite: org.apache.cassandra.db.ColumnFamilyStoreTest [junit] Tests run: 35, Failures: 8, Errors: 0, Skipped: 0, Time elapsed: 17.085 sec [junit] [junit] - Standard Output --- [junit] ERROR 20:07:21 Unable to delete build/test/cassandra/data/Keyspace1/Indexed2-a7e63420e1ec11e3a5c09b2001e5c823/Keyspace1-Indexed2.birthdate_index-ka-1-Data.db (it will be removed on server restart; we'll also retry after GC) [junit] ERROR 20:07:21 Unable to delete build/test/cassandra/data/Keyspace1/Indexed2-a7e63420e1ec11e3a5c09b2001e5c823/Keyspace1-Indexed2.birthdate_index-ka-1-Data.db (it will be removed on server restart; we'll also retry after GC) [junit] ERROR 20:07:23 Missing component: build/test/cassandra/data/Keyspace1/Standard3-a7e60d12e1ec11e3a5c09b2001e5c823/Keyspace1-Standard3-ka-1-Summary.db [junit] ERROR 20:07:23 Missing component: build/test/cassandra/data/Keyspace1/Standard3-a7e60d12e1ec11e3a5c09b2001e5c823/Keyspace1-Standard3-ka-1-Summary.db [junit] ERROR 20:07:23 Missing component: build/test/cassandra/data/Keyspace1/Standard4-a7e60d13e1ec11e3a5c09b2001e5c823/Keyspace1-Standard4-ka-3-Summary.db [junit] ERROR 20:07:23 Missing component: build/test/cassandra/data/Keyspace1/Standard4-a7e60d13e1ec11e3a5c09b2001e5c823/Keyspace1-Standard4-ka-3-Summary.db 
[junit] - --- [junit] Testcase: testMultiRangeSomeEmptyNoIndex(org.apache.cassandra.db.ColumnFamilyStoreTest): FAILED [junit] Columns did not match. Expected: [colI, colD, colC, colA] but got:[] [junit] junit.framework.AssertionFailedError: Columns did not match. Expected: [colI, colD, colC, colA] but got:[] [junit] at org.apache.cassandra.db.ColumnFamilyStoreTest.findRowGetSlicesAndAssertColsFound(ColumnFamilyStoreTest.java:2101) [junit] at org.apache.cassandra.db.ColumnFamilyStoreTest.testMultiRangeSomeEmptyNoIndex(ColumnFamilyStoreTest.java:1419) [junit] [junit] [junit] Testcase: testMultiRangeSomeEmptyIndexed(org.apache.cassandra.db.ColumnFamilyStoreTest): FAILED [junit] Columns did not match. Expected: [colI, colD, colC, colA] but got:[colD, colC] [junit] junit.framework.AssertionFailedError: Columns did not match. Expected: [colI, colD, colC, colA] but got:[colD, colC] [junit] at org.apache.cassandra.db.ColumnFamilyStoreTest.findRowGetSlicesAndAssertColsFound(ColumnFamilyStoreTest.java:2101) [junit] at org.apache.cassandra.db.ColumnFamilyStoreTest.testMultiRangeSomeEmptyIndexed(ColumnFamilyStoreTest.java:1468) [junit] [junit] [junit] Testcase: testMultiRangeContiguousNoIndex(org.apache.cassandra.db.ColumnFamilyStoreTest): FAILED [junit] Columns did not match. Expected: [colI, colG, colF, colE, colD, colC, colA] but got:[] [junit] junit.framework.AssertionFailedError: Columns did not match. Expected: [colI, colG, colF, colE, colD, colC, colA] but got:[] [junit] at org.apache.cassandra.db.ColumnFamilyStoreTest.findRowGetSlicesAndAssertColsFound(ColumnFamilyStoreTest.java:2101) [junit] at org.apache.cassandra.db.ColumnFamilyStoreTest.testMultiRangeContiguousNoIndex(ColumnFamilyStoreTest.java:1517) [junit] [junit] [junit] Testcase: testMultiRangeContiguousIndexed(org.apache.cassandra.db.ColumnFamilyStoreTest): FAILED [junit] Columns did not match. 
Expected: [colI, colG, colF, colE, colD, colC, colA] but got:[colG, colF, colE, colD, colC] [junit] junit.framework.AssertionFailedError: Columns did not match. Expected: [colI, colG, colF, colE, colD, colC, colA] but got:[colG, colF, colE, colD, colC] [junit] at org.apache.cassandra.db.ColumnFamilyStoreTest.findRowGetSlicesAndAssertColsFound(ColumnFamilyStoreTest.java:2101) [junit] at org.apache.cassandra.db.ColumnFamilyStoreTest.testMultiRangeContiguousIndexed(ColumnFamilyStoreTest.java:1567) [junit] [junit] [junit] Testcase: testMultiRangeIndexed(org.apache.cassandra.db.ColumnFamilyStoreTest): FAILED [junit] Columns did not match. Expected: [colI, colG, colE, colD, colC, colA] but got:[colG, colE, colD,
buildbot failure in ASF Buildbot on cassandra-2.0
The Buildbot has detected a new failure on builder cassandra-2.0 while building cassandra. Full details are available at: http://ci.apache.org/builders/cassandra-2.0/builds/24 Buildbot URL: http://ci.apache.org/ Buildslave for this Build: portunus_ubuntu Build Reason: scheduler Build Source Stamp: [branch cassandra-2.0] 43496384d404f2fa0af943003f2dc8fdfced4073 Blamelist: Tyler Hobbs ty...@datastax.com BUILD FAILED: failed shell sincerely, -The Buildbot
[jira] [Commented] (CASSANDRA-7271) Bulk data loading in Cassandra causing OOM
[ https://issues.apache.org/jira/browse/CASSANDRA-7271?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006440#comment-14006440 ] Michael Shuler commented on CASSANDRA-7271: --- For a purely functional example using the tools I know, cqlsh has no problem loading data from a CSV - I'm using trunk, at the moment, but this works the same on all versions. {noformat} mshuler@hana:~/tmp/7271/BulkLoadFiles$ ls 7271.java cassandra.yaml cql.txt my.test.cql test.csv mshuler@hana:~/tmp/7271/BulkLoadFiles$ cat test.csv ; echo 1,2 3,4 mshuler@hana:~/tmp/7271/BulkLoadFiles$ cat my.test.cql CREATE KEYSPACE test WITH replication = {'class': 'SimpleStrategy', 'replication_factor': '1'}; CREATE TABLE test.test ( col1 int PRIMARY KEY, col2 int ); COPY test.test (col1 , col2) FROM 'test.csv'; SELECT * from test.test; mshuler@hana:~/tmp/7271/BulkLoadFiles$ echo "SOURCE 'my.test.cql';" | cqlsh 2 rows imported in 0.007 seconds. col1 | col2 --+-- 1 |2 3 |4 (2 rows) {noformat} So that said, I'm not familiar with gradle, read through 7218 a bit, and the java snippet isn't quite enough for me to go on, personally, but I'll see if I can get some help with that from the devs. Bulk data loading in Cassandra causing OOM -- Key: CASSANDRA-7271 URL: https://issues.apache.org/jira/browse/CASSANDRA-7271 Project: Cassandra Issue Type: Bug Reporter: Prasanth Gullapalli Assignee: Michael Shuler Attachments: BulkLoadFiles.zip I am trying to load data from a csv file into Cassandra table using SSTableSimpleUnsortedWriter. As the latest maven cassandra dependencies have some issues with it, I have taken the _next_ beta (rc) version cut as suggested in CASSANDRA-7218. 
But after taking it, I am facing issues with bulk data loading. Here is the piece of code which loads data: {code:java} public void loadData(TableDefinition tableDefinition, InputStream csvInputStream){ createDataInDBFormat(tableDefinition, csvInputStream); Path dbFilePath = Paths.get(TEMP_DIR, keyspace, tableDefinition.getName()); //BulkLoader.main(new String[]{"-d", "localhost", dbFilePath.toUri().getPath()}); try { JMXServiceURL jmxUrl = new JMXServiceURL(String.format( "service:jmx:rmi:///jndi/rmi://%s:%d/jmxrmi", cassandraHost, cassandraJMXPort)); JMXConnector connector = JMXConnectorFactory.connect(jmxUrl, new HashMap<String, Object>()); MBeanServerConnection mbeanServerConn = connector.getMBeanServerConnection(); ObjectName name = new ObjectName("org.apache.cassandra.db:type=StorageService"); StorageServiceMBean storageBean = JMX.newMBeanProxy(mbeanServerConn, name, StorageServiceMBean.class); storageBean.bulkLoad(dbFilePath.toUri().getPath()); connector.close(); } catch (IOException | MalformedObjectNameException e) { e.printStackTrace(); } FileUtils.deleteQuietly(dbFilePath.toFile()); } private void createDataInDBFormat(TableDefinition tableDefinition, InputStream csvInputStream) { try(Reader reader = new InputStreamReader(csvInputStream)){ String tableName = tableDefinition.getName(); File directory = Paths.get(TEMP_DIR, keyspace, tableName).toFile(); directory.mkdirs(); String yamlPath = "file:\\" + CASSANDRA_HOME + File.separator + "conf" + File.separator + "cassandra.yaml"; System.setProperty("cassandra.config", yamlPath); SSTableSimpleUnsortedWriter writer = new SSTableSimpleUnsortedWriter( directory, new Murmur3Partitioner(), keyspace, tableName, AsciiType.instance, null, 10); long timestamp = System.currentTimeMillis() * 1000; CSVReader csvReader = new CSVReader(reader); String[] colValues = null; List<ColumnDefinition> columnDefinitions = tableDefinition.getColumnDefinitions(); while((colValues = csvReader.readNext()) != null){ if(colValues.length != 0){ 
writer.newRow(bytes(colValues[0])); for(int index = 1; index < colValues.length; index++){ ColumnDefinition columnDefinition = columnDefinitions.get(index); writer.addColumn(bytes(columnDefinition.getName()), bytes(colValues[index]), timestamp); } } } csvReader.close(); writer.close(); } catch (IOException e) { e.printStackTrace(); } } {code} On trying to run loadData, it is giving me the following exception: {code:xml} 11:23:18.035 [45742123@qtp-1703018180-0] ERROR com.adaequare.common.config.TransactionPerRequestFilter.doInTransactionWithoutResult 39 - Problem in executing request :
[jira] [Commented] (CASSANDRA-6572) Workload recording / playback
[ https://issues.apache.org/jira/browse/CASSANDRA-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006452#comment-14006452 ] Lyuben Todorov commented on CASSANDRA-6572: --- Commit [3bd3ddc|https://github.com/lyubent/cassandra/commit/3bd3ddc5f2b2068895f57a753b2ca2c7431c07b8]: # {{runFlush()}} is used in SS to force a flush should query recording be disabled / Cassandra get shut down, but I renamed it to {{forceFlush}} for clarity. # added an interface that gives us access to a statement's keyspace from CFMetaData. {{isSystemOrTraceKS}} now uses the keyspace from the CFMetaData; {{BatchStatement}}s are special-cased as they contain more than just a single keyspace (might possibly change this further along once we get to replaying of batches). # Updated QR to recycle QQs which get stored onto the CLQ. # Added a small optimisation: BBs used for creating the data array in {{QR#allocate(String)}} are now recycled; the largest buffer created is stored, and when a bigger one is required it gets allocated and replaces the older one. # Added forceFlush to the shutdown process. P.S. The more this develops, the more it seems like QR should be a singleton. I think it won't be a big change at all to modify it to be one; not that the current design is a problem, just an idea to consider. Workload recording / playback - Key: CASSANDRA-6572 URL: https://issues.apache.org/jira/browse/CASSANDRA-6572 Project: Cassandra Issue Type: New Feature Components: Core, Tools Reporter: Jonathan Ellis Assignee: Lyuben Todorov Fix For: 2.1.1 Attachments: 6572-trunk.diff Write sample mode gets us part way to testing new versions against a real world workload, but we need an easy way to test the query side as well. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (CASSANDRA-6572) Workload recording / playback
[ https://issues.apache.org/jira/browse/CASSANDRA-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006452#comment-14006452 ] Lyuben Todorov edited comment on CASSANDRA-6572 at 5/22/14 8:52 PM: Commit [3bd3ddc|https://github.com/lyubent/cassandra/commit/3bd3ddc5f2b2068895f57a753b2ca2c7431c07b8]: # {{runFlush()}} is used in SS to force a flush should query recording be disabled / Cassandra get shut down, but I renamed it to {{forceFlush}} for clarity. # added an interface that gives us access to a statement's keyspace from CFMetaData. {{isSystemOrTraceKS}} now uses the keyspace from the CFMetaData. Batch statements are special-cased as they contain more than just a single keyspace (might possibly change this further along once we get to replaying of batches). (separate commit [here|https://github.com/lyubent/cassandra/commit/1b10a60de35c2f5b69f8100edc551c93d4bbe027]) # Updated QR to recycle QQs which get stored onto the CLQ. # Added a small optimisation: BBs used for creating the data array in {{QR#allocate(String)}} are now recycled; the largest buffer created is stored, and when a bigger one is required it gets allocated and replaces the older one. # Added forceFlush to the shutdown process. P.S. The more this develops, the more it seems like QR should be a singleton. I think it won't be a big change at all to modify it to be one; not that the current design is a problem, just an idea to consider. was (Author: lyubent): Commit [3bd3ddc|https://github.com/lyubent/cassandra/commit/3bd3ddc5f2b2068895f57a753b2ca2c7431c07b8]: # {{runFlush()}} is used in SS to force a flush should the query recording be disabled / cassandra gets shut down, but I renamed it to {{forceFlush}} for clarity. # added an interface that gives us access to a statement's keyspace from CFMetaData.
{{isSystemOrTraceKS}} now uses the keyspace from the CFMetaData, {{BatchStatement}}s are special-cased as they contain more than just a single keyspace (might possibly change this further along once we get to replaying of batches). # Updated QR to recycle QQs which get stored onto the CLQ. # Added a small optimisation, BBs used for creating the data array in {{QR#allocate(String)}} are now recycled where the largest buffer created is stored, when a bigger one is required it gets allocated and replaces the older one. # Added forceFlush to the shutdown process. P.S. The more this develops, the more it seems like QR should be a singleton, I think it wont be a big change at all to modify it to be one, not that the current design is a problem, just an idea to consider. Workload recording / playback - Key: CASSANDRA-6572 URL: https://issues.apache.org/jira/browse/CASSANDRA-6572 Project: Cassandra Issue Type: New Feature Components: Core, Tools Reporter: Jonathan Ellis Assignee: Lyuben Todorov Fix For: 2.1.1 Attachments: 6572-trunk.diff Write sample mode gets us part way to testing new versions against a real world workload, but we need an easy way to test the query side as well. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (CASSANDRA-6523) Unable to contact any seeds! with multi-DC cluster and listen != broadcast address
[ https://issues.apache.org/jira/browse/CASSANDRA-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-6523: - Assignee: Brandon Williams Unable to contact any seeds! with multi-DC cluster and listen != broadcast address Key: CASSANDRA-6523 URL: https://issues.apache.org/jira/browse/CASSANDRA-6523 Project: Cassandra Issue Type: Bug Components: Core Environment: 1.2.13ish Reporter: Chris Burroughs Assignee: Brandon Williams New cluster: * Seeds: list of 6 internal IPs * listen address: internal ip * broadcast: external ip Two DC cluster, using GPFS where the external IPs are NATed. The cluster fails to start with "Unable to contact any seeds!" * Fail: Try to start a seed node * Fail: Try to start two seed nodes at the same time in the same DC * Success: Start two seed nodes at the same time in different DCs. Presumably related to CASSANDRA-5768 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7206) UDT - allow null / non-existant attributes
[ https://issues.apache.org/jira/browse/CASSANDRA-7206?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-7206: -- Reviewer: Aleksey Yeschenko UDT - allow null / non-existant attributes -- Key: CASSANDRA-7206 URL: https://issues.apache.org/jira/browse/CASSANDRA-7206 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Robert Stupp Assignee: Sylvain Lebresne Fix For: 2.1 rc1 Attachments: 7206.txt C* 2.1 CQL User-Defined-Types are really fine and useful. But they lack the possibility to omit attributes or set them to null. Would be great to have the possibility to create UDT instances with some attributes missing. Also changing the UDT definition (for example: {{alter type add new_attr}}) will break running applications that rely on the previous definition of the UDT. For example: {code} CREATE TYPE foo ( attr_one text, attr_two int ); CREATE TABLE bar ( id int PRIMARY KEY, comp foo ); {code} {code} INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra', attr_two: 2}); {code} works {code} INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra'}); {code} does not work {code} ALTER TYPE foo ADD attr_three timestamp; {code} {code} INSERT INTO bar (id, comp) VALUES (1, {attr_one: 'cassandra', attr_two: 2}); {code} will no longer work (missing attribute) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7287) Pig CqlStorage test fails with IAE
[ https://issues.apache.org/jira/browse/CASSANDRA-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006491#comment-14006491 ] Alex Liu commented on CASSANDRA-7287: - The change to Hadoop and Pig to use Cell completely breaks the code. Pig CqlStorage test fails with IAE -- Key: CASSANDRA-7287 URL: https://issues.apache.org/jira/browse/CASSANDRA-7287 Project: Cassandra Issue Type: Bug Components: Hadoop, Tests Reporter: Brandon Williams Assignee: Alex Liu Fix For: 2.1 rc1
{noformat}
[junit] java.lang.IllegalArgumentException
[junit] at java.nio.Buffer.limit(Buffer.java:267)
[junit] at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:542)
[junit] at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:117)
[junit] at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:97)
[junit] at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:28)
[junit] at org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:48)
[junit] at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:66)
[junit] at org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cassandraToObj(AbstractCassandraStorage.java:792)
[junit] at org.apache.cassandra.hadoop.pig.CqlStorage.cqlColumnToObj(CqlStorage.java:195)
[junit] at org.apache.cassandra.hadoop.pig.CqlStorage.getNext(CqlStorage.java:118)
{noformat}
I'm guessing this is caused by CqlStorage passing an empty BB to BBU, but I don't know if it's pig that's broken or if it's a deeper issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (CASSANDRA-7288) Exception during compaction
[ https://issues.apache.org/jira/browse/CASSANDRA-7288?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-7288: - Assignee: Marcus Eriksson Exception during compaction --- Key: CASSANDRA-7288 URL: https://issues.apache.org/jira/browse/CASSANDRA-7288 Project: Cassandra Issue Type: Bug Reporter: Julien Anguenot Assignee: Marcus Eriksson Attachments: compaction_exception.txt Sometimes Cassandra nodes (in a multi datacenter deployment) are throwing errors during compaction. (see attached stack trace) Let me know what additional information I could provide. Thank you. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-7289) cqlsh support for null values in UDT
Mikhail Stepura created CASSANDRA-7289: -- Summary: cqlsh support for null values in UDT Key: CASSANDRA-7289 URL: https://issues.apache.org/jira/browse/CASSANDRA-7289 Project: Cassandra Issue Type: Sub-task Components: Tools Reporter: Mikhail Stepura Assignee: Mikhail Stepura -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7287) Fix broken Pig and Hadoop due to the change to use Cell
[ https://issues.apache.org/jira/browse/CASSANDRA-7287?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Alex Liu updated CASSANDRA-7287: Summary: Fix broken Pig and Hadoop due to the change to use Cell (was: Pig CqlStorage test fails with IAE) Fix broken Pig and Hadoop due to the change to use Cell --- Key: CASSANDRA-7287 URL: https://issues.apache.org/jira/browse/CASSANDRA-7287 Project: Cassandra Issue Type: Bug Components: Hadoop, Tests Reporter: Brandon Williams Assignee: Alex Liu Fix For: 2.1 rc1
{noformat}
[junit] java.lang.IllegalArgumentException
[junit] at java.nio.Buffer.limit(Buffer.java:267)
[junit] at org.apache.cassandra.utils.ByteBufferUtil.readBytes(ByteBufferUtil.java:542)
[junit] at org.apache.cassandra.serializers.CollectionSerializer.readValue(CollectionSerializer.java:117)
[junit] at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:97)
[junit] at org.apache.cassandra.serializers.MapSerializer.deserializeForNativeProtocol(MapSerializer.java:28)
[junit] at org.apache.cassandra.serializers.CollectionSerializer.deserialize(CollectionSerializer.java:48)
[junit] at org.apache.cassandra.db.marshal.AbstractType.compose(AbstractType.java:66)
[junit] at org.apache.cassandra.hadoop.pig.AbstractCassandraStorage.cassandraToObj(AbstractCassandraStorage.java:792)
[junit] at org.apache.cassandra.hadoop.pig.CqlStorage.cqlColumnToObj(CqlStorage.java:195)
[junit] at org.apache.cassandra.hadoop.pig.CqlStorage.getNext(CqlStorage.java:118)
{noformat}
I'm guessing this is caused by CqlStorage passing an empty BB to BBU, but I don't know if it's pig that's broken or if it's a deeper issue. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7010) bootstrap_test simple_bootstrap_test dtest fails in 2.1
[ https://issues.apache.org/jira/browse/CASSANDRA-7010?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-7010: -- Labels: qa-resolved (was: ) bootstrap_test simple_bootstrap_test dtest fails in 2.1 --- Key: CASSANDRA-7010 URL: https://issues.apache.org/jira/browse/CASSANDRA-7010 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Michael Shuler Assignee: Marcus Eriksson Labels: qa-resolved Fix For: 2.1 rc1 Attachments: 7010.patch I patched ccm with https://github.com/pcmanus/ccm/pull/109 and got an error from simple_bootstrap:
{noformat}
==
FAIL: simple_bootstrap_test (bootstrap_test.TestBootstrap)
--
Traceback (most recent call last):
  File "/home/mshuler/git/cassandra-dtest/bootstrap_test.py", line 58, in simple_bootstrap_test
    assert_almost_equal(initial_size, 2 * size1)
  File "/home/mshuler/git/cassandra-dtest/assertions.py", line 26, in assert_almost_equal
    assert vmin > vmax * (1.0 - error), "values not within %.2f%% of the max: %s" % (error * 100, args)
AssertionError: values not within 16.00% of the max: (0, 186396)
{noformat}
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6523) Unable to contact any seeds! with multi-DC cluster and listen != broadcast address
[ https://issues.apache.org/jira/browse/CASSANDRA-6523?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006502#comment-14006502 ] Brandon Williams commented on CASSANDRA-6523: - It's comparing the actual IP address it received contact from to the seed list, so if it's being contacted by the broadcast address, but the seed is listed with the internal address, there won't be a match. So, you'd probably want a different seed list for DC 1 and DC 2, with the listen/broadcast switched for the seeds that aren't local. Unable to contact any seeds! with multi-DC cluster and listen != broadcast address Key: CASSANDRA-6523 URL: https://issues.apache.org/jira/browse/CASSANDRA-6523 Project: Cassandra Issue Type: Bug Components: Core Environment: 1.2.13ish Reporter: Chris Burroughs Assignee: Brandon Williams New cluster: * Seeds: list of 6 internal IPs * listen address: internal ip * broadcast: external ip Two DC cluster, using GPFS where the external IPs are NATed. The cluster fails to start with "Unable to contact any seeds!" * Fail: Try to start a seed node * Fail: Try to start two seed nodes at the same time in the same DC * Success: Start two seed nodes at the same time in different DCs. Presumably related to CASSANDRA-5768 -- This message was sent by Atlassian JIRA (v6.2#6252)
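Brandon's explanation boils down to a literal membership test on the contacting address. A minimal sketch of that idea (the `SeedCheck` class and the addresses are illustrative assumptions, not Cassandra's actual Gossiper code):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustrative sketch: the address a node actually makes contact from is
// compared verbatim against the configured seed list, so a NATed node that
// contacts us via its broadcast (external) address never matches a seed
// entry that lists its internal address.
public class SeedCheck {
    static final Set<String> seeds =
        new HashSet<>(Arrays.asList("10.0.0.1", "10.0.0.2"));

    // True only if the contacting address appears verbatim in the seed list.
    static boolean isSeedContact(String contactingAddress) {
        return seeds.contains(contactingAddress);
    }

    public static void main(String[] args) {
        // Node listens on 10.0.0.1 internally but is NATed to 198.51.100.1.
        System.out.println(isSeedContact("10.0.0.1"));     // internal address matches
        System.out.println(isSeedContact("198.51.100.1")); // broadcast address does not
    }
}
```

This is why a per-DC seed list, with listen/broadcast swapped for non-local seeds, sidesteps the mismatch.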
[jira] [Commented] (CASSANDRA-7144) CassandraDaemon RowMutation exception
[ https://issues.apache.org/jira/browse/CASSANDRA-7144?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006517#comment-14006517 ] Tyler Hobbs commented on CASSANDRA-7144: [~jason.punyon] [~nesnub] can you provide any other details about the writes? I presume you were using prepared statements? How large were the batches? Roughly what do the individual inserts (within the batches or otherwise) look like? CassandraDaemon RowMutation exception - Key: CASSANDRA-7144 URL: https://issues.apache.org/jira/browse/CASSANDRA-7144 Project: Cassandra Issue Type: Bug Components: Core Environment: Ubuntu 12.04 w/ Oracle JVM, 5 nodes cluster. Nodes 2GB / 2 Cores in DigitalOcean. Reporter: Maxime Lamothe-Brassard Assignee: Tyler Hobbs First time reporting a bug here, apologies if I'm not posting it in the right space. At what seem like random intervals, on random nodes and in random situations, I will get the following exception. After this, the hinted handoffs start timing out and the node stops participating in the cluster. I started seeing these after switching to the Cassandra Python-Driver from the Python-CQL driver. 
{noformat} ERROR [WRITE-/10.128.180.108] 2014-05-03 13:45:12,843 CassandraDaemon.java (line 198) Exception in thread Thread[WRITE-/10.128.180.108,5,main] java.lang.AssertionError at org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:271) at org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:259) at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:120) at org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:251) at org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:203) at org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:151) ERROR [WRITE-/10.128.194.70] 2014-05-03 13:45:12,843 CassandraDaemon.java (line 198) Exception in thread Thread[WRITE-/10.128.194.70,5,main] java.lang.AssertionError at org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:271) at org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:259) at org.apache.cassandra.net.MessageOut.serialize(MessageOut.java:120) at org.apache.cassandra.net.OutboundTcpConnection.writeInternal(OutboundTcpConnection.java:251) at org.apache.cassandra.net.OutboundTcpConnection.writeConnected(OutboundTcpConnection.java:203) at org.apache.cassandra.net.OutboundTcpConnection.run(OutboundTcpConnection.java:151) ERROR [MutationStage:118] 2014-05-03 13:45:15,048 CassandraDaemon.java (line 198) Exception in thread Thread[MutationStage:118,5,main] java.lang.AssertionError at org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:271) at org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:259) at org.apache.cassandra.utils.FBUtilities.serialize(FBUtilities.java:654) at org.apache.cassandra.db.HintedHandOffManager.hintFor(HintedHandOffManager.java:137) at 
org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:908) at org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:881) at org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:1981) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:744) ERROR [MutationStage:117] 2014-05-03 13:45:15,048 CassandraDaemon.java (line 198) Exception in thread Thread[MutationStage:117,5,main] java.lang.AssertionError at org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:271) at org.apache.cassandra.db.RowMutation$RowMutationSerializer.serialize(RowMutation.java:259) at org.apache.cassandra.utils.FBUtilities.serialize(FBUtilities.java:654) at org.apache.cassandra.db.HintedHandOffManager.hintFor(HintedHandOffManager.java:137) at org.apache.cassandra.service.StorageProxy.writeHintForMutation(StorageProxy.java:908) at org.apache.cassandra.service.StorageProxy$6.runMayThrow(StorageProxy.java:881) at org.apache.cassandra.service.StorageProxy$HintRunnable.run(StorageProxy.java:1981) at
[jira] [Updated] (CASSANDRA-7231) Support more concurrent requests per native transport connection
[ https://issues.apache.org/jira/browse/CASSANDRA-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-7231: - Attachment: v1-doc-fixes.txt Support more concurrent requests per native transport connection Key: CASSANDRA-7231 URL: https://issues.apache.org/jira/browse/CASSANDRA-7231 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Sylvain Lebresne Priority: Minor Fix For: 2.1.0 Attachments: 7231.txt, v1-doc-fixes.txt Right now we only support 127 concurrent requests against a given native transport connection. This causes us to waste file handles opening multiple connections, increases driver complexity and dilutes writes across multiple connections so that batching cannot easily be performed. I propose raising this limit substantially, to somewhere in the region of 16-64K, and that this is a good time to do it since we're already bumping the protocol version. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-6572) Workload recording / playback
[ https://issues.apache.org/jira/browse/CASSANDRA-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006555#comment-14006555 ] Benedict commented on CASSANDRA-6572: - bq. Added a small optimisation, BBs used for creating the data array in QR#allocate(String) are now recycled where the largest buffer created is stored, when a bigger one is required it gets allocated and replaces the older one. Instead of this, it is better to simply write the data directly to the QQ; you can wrap the appropriate range in a ByteBuffer and return the ByteBuffer from the allocate() method (null indicating there wasn't enough room). # Your QQ recycling code is broken. This should only be performed by the flushing thread, once we know the flush has finished, or if we fail to swap the QQ (i.e. if line 85 returns false) # you should use getAndSet() on line 87, the result being the value pos should take # I wouldn't introduce a special interface for AccessibleKeyspace, and I wouldn't consider batch statements to be magically safe. Batch mutations have a set of CFs they maintain of their modifications; you can check these for system keyspaces, and if _any_ are present avoid logging the whole batch. We don't want to stuff up the system keyspaces somehow on replay. I would suggest potentially introducing a boolean method to CQLStatement that returns true if the statement is safe to log; there are a small number of abstract classes that would need implementations (AuthStmt, CFStmt, ModStmt). It occurs to me, on top of logging the thread id as Jonathan suggests, we also need to log the client session id as well, else USE statements could cause absolute chaos. This means we may need to log disconnect of client sessions as well as a special record to permit state cleanup. 
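The allocate-by-wrapping idea Benedict describes might look roughly like this, under the assumption that the query queue is backed by a plain byte array (`QueueBuffer` and its fields are illustrative, not the actual QQ implementation from the patch):

```java
import java.nio.ByteBuffer;

// Illustrative sketch: instead of copying through a recycled scratch buffer,
// allocate() reserves a range of the queue's backing array and hands back a
// ByteBuffer view over it, so the caller writes straight into the queue.
public class QueueBuffer {
    private final byte[] data;
    private int pos = 0;

    QueueBuffer(int capacity) { data = new byte[capacity]; }

    // Reserve len bytes and return a ByteBuffer wrapping them,
    // or null if the queue does not have enough room left
    // (signalling the caller to swap in a fresh queue).
    ByteBuffer allocate(int len) {
        if (pos + len > data.length)
            return null;
        ByteBuffer slice = ByteBuffer.wrap(data, pos, len);
        pos += len;
        return slice;
    }

    public static void main(String[] args) {
        QueueBuffer q = new QueueBuffer(8);
        ByteBuffer a = q.allocate(6);
        a.put((byte) 1);                           // written directly into the queue's array
        System.out.println(q.allocate(6) == null); // only 2 bytes left, so no room
    }
}
```

(A real implementation would also need the atomic position bump and the flush-thread-only recycling Benedict calls out above.)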
Workload recording / playback - Key: CASSANDRA-6572 URL: https://issues.apache.org/jira/browse/CASSANDRA-6572 Project: Cassandra Issue Type: New Feature Components: Core, Tools Reporter: Jonathan Ellis Assignee: Lyuben Todorov Fix For: 2.1.1 Attachments: 6572-trunk.diff Write sample mode gets us part way to testing new versions against a real world workload, but we need an easy way to test the query side as well. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7231) Support more concurrent requests per native transport connection
[ https://issues.apache.org/jira/browse/CASSANDRA-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006557#comment-14006557 ] Aleksey Yeschenko commented on CASSANDRA-7231: -- Okay. As far as the code goes, I only see two issues: - the check in Header constructor is broken for server-initiated events (-1 & 0xFF == 0xFF) - in decode(), 'streamId = (buffer.getByte(bodyStart) & 0xFF);' should be 'streamId |= buffer.getByte(bodyStart);' Also attached a patch with a few doc fixes. Really though, I do feel bad for flip-flopping like this (suggesting the fixed 2-byte version first, then agreeing with flag-based approach, and then pushing for fixed 2-byte version again), but, in my defense, I was assuming a different flag-based approach. Namely, that with EXTENDED_STREAM_ID flag set, we'd just have a 2-byte id in the 9-byte header, and simply have writeShort()/readShort() calls instead of writeByte()/readByte() in encode()/decode(), respectively. I'd still have a minor preference for a fixed 2-byte stream id, but very, very minor in that case. But the approach in this v1 I don't like - it's a bit too complex and hackish. If you don't feel like cooking up another patch for this, I'll do it, with either approach (1 - fixed 2-bytes or 2 - 8/9 bytes header depending on the flag, described above), assuming that you don't disagree on principle [~slebresne] [~thobbs] Support more concurrent requests per native transport connection Key: CASSANDRA-7231 URL: https://issues.apache.org/jira/browse/CASSANDRA-7231 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Sylvain Lebresne Priority: Minor Fix For: 2.1.0 Attachments: 7231.txt, v1-doc-fixes.txt Right now we only support 127 concurrent requests against a given native transport connection. 
This causes us to waste file handles opening multiple connections, increases driver complexity and dilutes writes across multiple connections so that batching cannot easily be performed. I propose raising this limit substantially, to somewhere in the region of 16-64K, and that this is a good time to do it since we're already bumping the protocol version. -- This message was sent by Atlassian JIRA (v6.2#6252)
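As an aside on the sign-extension issue discussed above: Java bytes are signed, so reading a stream id back out of a buffer needs `& 0xFF` masking, which is also why `-1 & 0xFF` collides with `0xFF`. A minimal sketch of a fixed 2-byte stream id round trip (hypothetical helper, not the protocol code itself):

```java
// Minimal illustration of 2-byte stream id encode/decode and why the
// & 0xFF mask matters: without it, a negative byte read back from the
// buffer would sign-extend and corrupt the reassembled id.
public class StreamIdDemo {
    // Encode a stream id into two bytes, big-endian.
    static byte[] encode(int streamId) {
        return new byte[] { (byte) (streamId >> 8), (byte) streamId };
    }

    // Decode the two bytes back; masking each byte with 0xFF
    // prevents sign extension before the shift and OR.
    static int decode(byte[] b) {
        return ((b[0] & 0xFF) << 8) | (b[1] & 0xFF);
    }

    public static void main(String[] args) {
        System.out.println(decode(encode(300)));  // round-trips ids above 127
        System.out.println((-1 & 0xFF) == 0xFF);  // the collision behind the Header check
    }
}
```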
[jira] [Updated] (CASSANDRA-6563) TTL histogram compactions not triggered at high Estimated droppable tombstones rate
[ https://issues.apache.org/jira/browse/CASSANDRA-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Paulo Motta updated CASSANDRA-6563: --- Attachment: 2.0-CASSANDRA-6563-v3.txt TTL histogram compactions not triggered at high Estimated droppable tombstones rate - Key: CASSANDRA-6563 URL: https://issues.apache.org/jira/browse/CASSANDRA-6563 Project: Cassandra Issue Type: Bug Components: Core Environment: 1.2.12ish Reporter: Chris Burroughs Assignee: Paulo Motta Fix For: 1.2.17, 2.0.9 Attachments: 1.2.16-CASSANDRA-6563-v2.txt, 1.2.16-CASSANDRA-6563-v3.txt, 1.2.16-CASSANDRA-6563.txt, 2.0-CASSANDRA-6563-v3.txt, 2.0.7-CASSANDRA-6563.txt, patch-v1-iostat.png, patch-v1-range1.png, patch-v2-range3.png, patched-droppadble-ratio.png, patched-storage-load.png, patched1-compacted-bytes.png, patched2-compacted-bytes.png, unpatched-droppable-ratio.png, unpatched-storage-load.png, unpatched1-compacted-bytes.png, unpatched2-compacted-bytes.png I have several column families in a largish cluster where virtually all columns are written with a (usually the same) TTL. My understanding of CASSANDRA-3442 is that sstables that have a high (> 20%) estimated percentage of droppable tombstones should be individually compacted. This does not appear to be occurring with size tiered compaction. 
Example from one node: {noformat} $ ll /data/sstables/data/ks/Cf/*Data.db -rw-rw-r-- 31 cassandra cassandra 26651211757 Nov 26 22:59 /data/sstables/data/ks/Cf/ks-Cf-ic-295562-Data.db -rw-rw-r-- 31 cassandra cassandra 6272641818 Nov 27 02:51 /data/sstables/data/ks/Cf/ks-Cf-ic-296121-Data.db -rw-rw-r-- 31 cassandra cassandra 1814691996 Dec 4 21:50 /data/sstables/data/ks/Cf/ks-Cf-ic-320449-Data.db -rw-rw-r-- 30 cassandra cassandra 10909061157 Dec 11 17:31 /data/sstables/data/ks/Cf/ks-Cf-ic-340318-Data.db -rw-rw-r-- 29 cassandra cassandra 459508942 Dec 12 10:37 /data/sstables/data/ks/Cf/ks-Cf-ic-342259-Data.db -rw-rw-r-- 1 cassandra cassandra 336908 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342307-Data.db -rw-rw-r-- 1 cassandra cassandra 2063935 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342309-Data.db -rw-rw-r-- 1 cassandra cassandra 409 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342314-Data.db -rw-rw-r-- 1 cassandra cassandra31180007 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342319-Data.db -rw-rw-r-- 1 cassandra cassandra 2398345 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342322-Data.db -rw-rw-r-- 1 cassandra cassandra 21095 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342331-Data.db -rw-rw-r-- 1 cassandra cassandra 81454 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342335-Data.db -rw-rw-r-- 1 cassandra cassandra 1063718 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342339-Data.db -rw-rw-r-- 1 cassandra cassandra 127004 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342344-Data.db -rw-rw-r-- 1 cassandra cassandra 146785 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342346-Data.db -rw-rw-r-- 1 cassandra cassandra 697338 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342351-Data.db -rw-rw-r-- 1 cassandra cassandra 3921428 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342367-Data.db -rw-rw-r-- 1 cassandra cassandra 240332 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342370-Data.db -rw-rw-r-- 1 cassandra cassandra 45669 Dec 12 
12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342374-Data.db -rw-rw-r-- 1 cassandra cassandra53127549 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342375-Data.db -rw-rw-r-- 16 cassandra cassandra 12466853166 Dec 25 22:40 /data/sstables/data/ks/Cf/ks-Cf-ic-396473-Data.db -rw-rw-r-- 12 cassandra cassandra 3903237198 Dec 29 19:42 /data/sstables/data/ks/Cf/ks-Cf-ic-408926-Data.db -rw-rw-r-- 7 cassandra cassandra 3692260987 Jan 3 08:25 /data/sstables/data/ks/Cf/ks-Cf-ic-427733-Data.db -rw-rw-r-- 4 cassandra cassandra 3971403602 Jan 6 20:50 /data/sstables/data/ks/Cf/ks-Cf-ic-437537-Data.db -rw-rw-r-- 3 cassandra cassandra 1007832224 Jan 7 15:19 /data/sstables/data/ks/Cf/ks-Cf-ic-440331-Data.db -rw-rw-r-- 2 cassandra cassandra 896132537 Jan 8 11:05 /data/sstables/data/ks/Cf/ks-Cf-ic-447740-Data.db -rw-rw-r-- 1 cassandra cassandra 963039096 Jan 9 04:59 /data/sstables/data/ks/Cf/ks-Cf-ic-449425-Data.db -rw-rw-r-- 1 cassandra cassandra 232168351 Jan 9 10:14 /data/sstables/data/ks/Cf/ks-Cf-ic-450287-Data.db -rw-rw-r-- 1 cassandra cassandra73126319 Jan 9 11:28 /data/sstables/data/ks/Cf/ks-Cf-ic-450307-Data.db -rw-rw-r-- 1 cassandra cassandra40921916 Jan 9 12:08
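The trigger under discussion (from CASSANDRA-3442) is essentially a ratio threshold per sstable. A minimal sketch, using the 20% figure from the description (the `TombstoneCheck` helper is illustrative, not the actual compaction-strategy code, which also has to account for overlapping ranges):

```java
// Illustrative sketch of the single-sstable tombstone-compaction trigger:
// an sstable whose estimated droppable-tombstone ratio exceeds the
// threshold (20% in the description above) becomes a candidate for an
// individual compaction, independent of size-tiered bucketing.
public class TombstoneCheck {
    static final double TOMBSTONE_THRESHOLD = 0.2;

    // droppableRatio is the "Estimated droppable tombstones" figure,
    // i.e. droppable tombstones over estimated columns.
    static boolean worthCompacting(double droppableRatio) {
        return droppableRatio > TOMBSTONE_THRESHOLD;
    }

    public static void main(String[] args) {
        System.out.println(worthCompacting(0.35)); // over threshold: candidate
        System.out.println(worthCompacting(0.05)); // under threshold: skip
    }
}
```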
[jira] [Commented] (CASSANDRA-6563) TTL histogram compactions not triggered at high Estimated droppable tombstones rate
[ https://issues.apache.org/jira/browse/CASSANDRA-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14006597#comment-14006597 ] Paulo Motta commented on CASSANDRA-6563: 2.0 patch here: https://issues.apache.org/jira/secure/attachment/12646410/2.0-CASSANDRA-6563-v3.txt Comments: * Due to the improvements to LCS in 2.0 I had to increase the TTL, sleep times and number of rows in testUncheckedTombstoneCompactionLeveledCompaction() to create an environment with overlapping-range sstables in multiple levels. Since this was mostly an educational test to show the benefits of the patch to LCS, and it was not totally deterministic, I decided to remove it in order not to burden the test suite. The main patch functionality is already well tested in testUncheckedTombstoneSizeTieredCompaction(). TTL histogram compactions not triggered at high Estimated droppable tombstones rate - Key: CASSANDRA-6563 URL: https://issues.apache.org/jira/browse/CASSANDRA-6563 Project: Cassandra Issue Type: Bug Components: Core Environment: 1.2.12ish Reporter: Chris Burroughs Assignee: Paulo Motta Fix For: 1.2.17, 2.0.9 Attachments: 1.2.16-CASSANDRA-6563-v2.txt, 1.2.16-CASSANDRA-6563-v3.txt, 1.2.16-CASSANDRA-6563.txt, 2.0-CASSANDRA-6563-v3.txt, 2.0.7-CASSANDRA-6563.txt, patch-v1-iostat.png, patch-v1-range1.png, patch-v2-range3.png, patched-droppadble-ratio.png, patched-storage-load.png, patched1-compacted-bytes.png, patched2-compacted-bytes.png, unpatched-droppable-ratio.png, unpatched-storage-load.png, unpatched1-compacted-bytes.png, unpatched2-compacted-bytes.png I have several column families in a largish cluster where virtually all columns are written with a (usually the same) TTL. My understanding of CASSANDRA-3442 is that sstables that have a high (> 20%) estimated percentage of droppable tombstones should be individually compacted. This does not appear to be occurring with size tiered compaction. 
Example from one node:
{noformat}
$ ll /data/sstables/data/ks/Cf/*Data.db
-rw-rw-r-- 31 cassandra cassandra 26651211757 Nov 26 22:59 /data/sstables/data/ks/Cf/ks-Cf-ic-295562-Data.db
-rw-rw-r-- 31 cassandra cassandra  6272641818 Nov 27 02:51 /data/sstables/data/ks/Cf/ks-Cf-ic-296121-Data.db
-rw-rw-r-- 31 cassandra cassandra  1814691996 Dec  4 21:50 /data/sstables/data/ks/Cf/ks-Cf-ic-320449-Data.db
-rw-rw-r-- 30 cassandra cassandra 10909061157 Dec 11 17:31 /data/sstables/data/ks/Cf/ks-Cf-ic-340318-Data.db
-rw-rw-r-- 29 cassandra cassandra   459508942 Dec 12 10:37 /data/sstables/data/ks/Cf/ks-Cf-ic-342259-Data.db
-rw-rw-r--  1 cassandra cassandra      336908 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342307-Data.db
-rw-rw-r--  1 cassandra cassandra     2063935 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342309-Data.db
-rw-rw-r--  1 cassandra cassandra         409 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342314-Data.db
-rw-rw-r--  1 cassandra cassandra    31180007 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342319-Data.db
-rw-rw-r--  1 cassandra cassandra     2398345 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342322-Data.db
-rw-rw-r--  1 cassandra cassandra       21095 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342331-Data.db
-rw-rw-r--  1 cassandra cassandra       81454 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342335-Data.db
-rw-rw-r--  1 cassandra cassandra     1063718 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342339-Data.db
-rw-rw-r--  1 cassandra cassandra      127004 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342344-Data.db
-rw-rw-r--  1 cassandra cassandra      146785 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342346-Data.db
-rw-rw-r--  1 cassandra cassandra      697338 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342351-Data.db
-rw-rw-r--  1 cassandra cassandra     3921428 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342367-Data.db
-rw-rw-r--  1 cassandra cassandra      240332 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342370-Data.db
-rw-rw-r--  1 cassandra cassandra       45669 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342374-Data.db
-rw-rw-r--  1 cassandra cassandra    53127549 Dec 12 12:03 /data/sstables/data/ks/Cf/ks-Cf-ic-342375-Data.db
-rw-rw-r-- 16 cassandra cassandra 12466853166 Dec 25 22:40 /data/sstables/data/ks/Cf/ks-Cf-ic-396473-Data.db
-rw-rw-r-- 12 cassandra cassandra  3903237198 Dec 29 19:42 /data/sstables/data/ks/Cf/ks-Cf-ic-408926-Data.db
-rw-rw-r--  7 cassandra cassandra  3692260987 Jan  3 08:25 /data/sstables/data/ks/Cf/ks-Cf-ic-427733-Data.db
-rw-rw-r--  4 cassandra cassandra  3971403602 Jan  6 20:50
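The single-SSTable tombstone compaction described above (from CASSANDRA-3442) can be sketched as follows. This is a minimal illustration, not Cassandra's actual compaction-strategy API: the `SSTable` record, field names, and `candidates` helper are all hypothetical, and only the 20% default threshold comes from the ticket.

```java
import java.util.ArrayList;
import java.util.List;

public class TombstoneCheck {
    // Default tombstone_threshold: SSTables whose estimated droppable
    // tombstone ratio exceeds 20% become single-SSTable compaction candidates.
    static final double TOMBSTONE_THRESHOLD = 0.20;

    // Hypothetical stand-in for an SSTable's tombstone statistics.
    record SSTable(String name, long droppableTombstones, long totalCells) {
        double droppableRatio() {
            return totalCells == 0 ? 0.0 : (double) droppableTombstones / totalCells;
        }
    }

    // Select SSTables eligible for an individual tombstone compaction.
    static List<SSTable> candidates(List<SSTable> sstables) {
        List<SSTable> out = new ArrayList<>();
        for (SSTable s : sstables)
            if (s.droppableRatio() > TOMBSTONE_THRESHOLD)
                out.add(s);
        return out;
    }

    public static void main(String[] args) {
        List<SSTable> tables = List.of(
            new SSTable("ks-Cf-ic-295562", 900, 1000),  // 90% droppable
            new SSTable("ks-Cf-ic-296121",  50, 1000)); // 5% droppable
        System.out.println(candidates(tables)); // only the first table qualifies
    }
}
```

The bug in this ticket was that such candidates were not actually being compacted under size-tiered compaction despite exceeding the threshold; the selection check itself is the uncontroversial part.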
[jira] [Comment Edited] (CASSANDRA-6563) TTL histogram compactions not triggered at high Estimated droppable tombstones rate
[ https://issues.apache.org/jira/browse/CASSANDRA-6563?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14006597#comment-14006597 ] Paulo Motta edited comment on CASSANDRA-6563 at 5/22/14 11:23 PM:
2.0 patch here: https://issues.apache.org/jira/secure/attachment/12646410/2.0-CASSANDRA-6563-v3.txt
Comments:
* Due to the improvements to LCS in 2.0 I had to increase the TTL, sleep times and number of rows in testUncheckedTombstoneCompactionLeveledCompaction() to create an environment with overlapping-range sstables in multiple levels. Since this was mostly an educational test to show the benefits of the patch to LCS, and it was not totally deterministic, I decided to remove it in order not to burden the test suite. The main patch functionality is already well tested in testUncheckedTombstoneSizeTieredCompaction().
was (Author: pauloricardomg):
2.0 patch here: https://issues.apache.org/jira/secure/attachment/12646410/2.0-CASSANDRA-6563-v3.txt * Comments: * Due to the improvements to LCS in 2.0 I had to increase the TTL, sleep times and number of rows in testUncheckedTombstoneCompactionLeveledCompaction() to create an environment with overlapping-range sstables in multiple levels. Since this was mostly an educational test to show the benefits of the patch to LCS, and it was not totally deterministic, I decided to remove it in order not to burden the test suite. The main patch functionality is already well tested in testUncheckedTombstoneSizeTieredCompaction().
[jira] [Commented] (CASSANDRA-7231) Support more concurrent requests per native transport connection
[ https://issues.apache.org/jira/browse/CASSANDRA-7231?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14006603#comment-14006603 ] Tyler Hobbs commented on CASSANDRA-7231:
It looks like there's one other bug in the patch:
* In decode(), bodyLength is decremented if EXTENDED_STREAM_ID is set. That should remain unchanged, because the streamId byte isn't included in the body length. However, I believe frameLength _should_ be incremented there.
bq. But the approach in this v1 I don't like - it's a bit too complex and hackish. If you don't feel like cooking up another patch for this, I'll do it, with either approach (1 - fixed 2-bytes or 2 - 8/9 bytes header depending on the flag, described above), assuming that you don't disagree on principle
+1 (I still prefer the fixed 2 bytes)
Support more concurrent requests per native transport connection Key: CASSANDRA-7231 URL: https://issues.apache.org/jira/browse/CASSANDRA-7231 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Benedict Assignee: Sylvain Lebresne Priority: Minor Fix For: 2.1.0 Attachments: 7231.txt, v1-doc-fixes.txt
Right now we only support 127 concurrent requests against a given native transport connection. This causes us to waste file handles opening multiple connections, increases driver complexity, and dilutes writes across multiple connections so that batching cannot easily be performed. I propose raising this limit substantially, to somewhere in the region of 16-64K, and that this is a good time to do it since we're already bumping the protocol version. -- This message was sent by Atlassian JIRA (v6.2#6252)
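The "fixed 2 bytes" option preferred in the comment above can be sketched as a header decode where the stream id is always a signed 16-bit field, and where the body length counts only the body, never the header bytes. This is an illustrative sketch, not Cassandra's actual Frame.Decoder; the class name, field layout, and opcode value are assumptions for the example.

```java
import java.nio.ByteBuffer;

public class FrameHeader {
    final int version, flags, streamId, opcode, bodyLength;

    FrameHeader(int version, int flags, int streamId, int opcode, int bodyLength) {
        this.version = version;
        this.flags = flags;
        this.streamId = streamId;
        this.opcode = opcode;
        this.bodyLength = bodyLength;
    }

    // Decode a 9-byte header: version, flags, fixed 2-byte stream id, opcode,
    // 4-byte body length. Widening the stream id to 16 bits allows ~32K
    // concurrent requests per connection instead of 127.
    static FrameHeader decode(ByteBuffer buf) {
        int version = buf.get() & 0x7F;   // high bit is the request/response direction
        int flags   = buf.get() & 0xFF;
        int streamId = buf.getShort();    // fixed 2 bytes, no conditional width
        int opcode  = buf.get() & 0xFF;
        int bodyLength = buf.getInt();    // body only; header bytes are not counted
        return new FrameHeader(version, flags, streamId, opcode, bodyLength);
    }

    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(9);
        buf.put((byte) 3).put((byte) 0).putShort((short) 300).put((byte) 7).putInt(42);
        buf.flip();
        FrameHeader h = decode(buf);
        System.out.println(h.streamId + " " + h.bodyLength); // 300 42
    }
}
```

With a fixed-width stream id there is no EXTENDED_STREAM_ID flag to branch on, which avoids exactly the bodyLength/frameLength bookkeeping bug pointed out in the comment.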
[jira] [Created] (CASSANDRA-7290) Compaction strategy is not reloaded when compaction strategy options is updated
Paulo Motta created CASSANDRA-7290: -- Summary: Compaction strategy is not reloaded when compaction strategy options are updated Key: CASSANDRA-7290 URL: https://issues.apache.org/jira/browse/CASSANDRA-7290 Project: Cassandra Issue Type: Bug Components: Core Reporter: Paulo Motta Fix For: 1.2.17, 2.0.9
The AbstractCompactionStrategy constructor receives an options map:
{code:java}
protected AbstractCompactionStrategy(ColumnFamilyStore cfs, Map<String, String> options)
{
    assert cfs != null;
    this.cfs = cfs;
    this.options = options;
    ...
{code}
This map is currently the same reference as CFMetadata.compactionStrategyOptions, so ColumnFamilyStore.reload() does not reload the compaction strategy when a compaction strategy option changes:
{code:java}
private void maybeReloadCompactionStrategy()
{
    // Check if there is a need for reloading
    if (metadata.compactionStrategyClass.equals(compactionStrategy.getClass()) && metadata.compactionStrategyOptions.equals(compactionStrategy.options))
        // metadata.compactionStrategyOptions == compactionStrategy.options, so compaction is never reloaded
        return;
{code}
I spotted this in my test, when I changed the value of unchecked_tombstone_compaction from false to true and calling ColumnFamilyStore.reload() did not reload the compaction strategy. I don't know if ColumnFamilyStore.reload() is only called during tests, or also whenever the schema changes. In order to fix the bug, I made AbstractCompactionStrategy.options an ImmutableMap, so that if CFMetadata.compactionStrategyOptions is updated, ColumnFamilyStore.maybeReloadCompactionStrategy() will actually reload the compaction strategy:
{code:java}
protected AbstractCompactionStrategy(ColumnFamilyStore cfs, Map<String, String> options)
{
    assert cfs != null;
    this.cfs = cfs;
    this.options = ImmutableMap.copyOf(options);
    ...
{code}
-- This message was sent by Atlassian JIRA (v6.2#6252)
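The aliasing bug described above can be reproduced in isolation: when the strategy holds the *same* map object as the schema metadata, the equals() check is trivially true and the reload is skipped. A standalone sketch (using java.util.Map.copyOf in place of Guava's ImmutableMap.copyOf, which behaves the same way for this purpose; the variable names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ReloadCheck {
    public static void main(String[] args) {
        // Stand-in for CFMetadata.compactionStrategyOptions.
        Map<String, String> metadataOptions = new HashMap<>();
        metadataOptions.put("unchecked_tombstone_compaction", "false");

        // Buggy: the strategy aliases the metadata map directly.
        Map<String, String> aliased = metadataOptions;
        // Fixed: the strategy snapshots the options at construction time.
        Map<String, String> snapshot = Map.copyOf(metadataOptions);

        // A schema change mutates the metadata map in place.
        metadataOptions.put("unchecked_tombstone_compaction", "true");

        // The equals() check that gates maybeReloadCompactionStrategy():
        System.out.println(metadataOptions.equals(aliased));  // true  -> reload skipped
        System.out.println(metadataOptions.equals(snapshot)); // false -> strategy reloaded
    }
}
```

Copying the map turns the reference comparison back into a genuine value comparison, which is exactly what the one-line ImmutableMap.copyOf fix achieves.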