[jira] [Updated] (CASSANDRA-7605) compactionstats reports incorrect byte values
[ https://issues.apache.org/jira/browse/CASSANDRA-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Peter Haggerty updated CASSANDRA-7605: -- Attachment: CASSANDRA-7605.txt compactionstats reports incorrect byte values - Key: CASSANDRA-7605 URL: https://issues.apache.org/jira/browse/CASSANDRA-7605 Project: Cassandra Issue Type: Bug Components: Core Environment: 2.0.9, Java 1.7.0_55 Reporter: Peter Haggerty Attachments: CASSANDRA-7605.txt The output of nodetool compactionstats (while a compaction is running) is incorrect. The output from nodetool compactionhistory and the log both match, and they disagree with the output from compactionstats. What nodetool said during the compaction was almost certainly wrong given the sizes of the files on disk:
   completed         total  unit  progress
144713163589  146631071165  bytes    98.69%
nodetool compactionhistory and the log both report the same values for that compaction: 52,596,321,269 bytes to 38,575,881,134 bytes. The compactionhistory/log values make much more sense given the size of the files on disk. -- This message was sent by Atlassian JIRA (v6.2#6252)
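The fused figures in the report above can be separated arithmetically: nodetool prints progress as completed/total, and splitting the 24-digit run into 144713163589 and 146631071165 reproduces the reported 98.69%. A quick, purely illustrative Java check:

```java
// Sanity check for the column split assumed above: nodetool computes
// progress = completed / total. With completed = 144713163589 and
// total = 146631071165, the reported 98.69% is reproduced.
public class ProgressCheck {
    public static void main(String[] args) {
        long completed = 144_713_163_589L;
        long total = 146_631_071_165L;
        double progress = 100.0 * completed / total;
        System.out.printf("%.2f%%%n", progress); // prints 98.69%
    }
}
```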
[jira] [Updated] (CASSANDRA-7590) java.lang.AssertionError when using DROP INDEX IF EXISTS on non-existing index
[ https://issues.apache.org/jira/browse/CASSANDRA-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-7590: Attachment: 7590-drop-index-npe-assert-ks.txt bq. keep the behavior of throwing an IRE Previous behavior was to throw an NPE if KS did not exist. ;) It should have been checked using {{hasColumnFamilyAccess}} in {{checkAccess}}, but {{findIndexedCF}} failed. Also changed the exception to {{KeyspaceNotDefinedException}} as in {{ThriftValidation#validateKeyspace}}. java.lang.AssertionError when using DROP INDEX IF EXISTS on non-existing index -- Key: CASSANDRA-7590 URL: https://issues.apache.org/jira/browse/CASSANDRA-7590 Project: Cassandra Issue Type: Bug Components: Core Reporter: Hanh Dang Assignee: Robert Stupp Priority: Minor Labels: cql Fix For: 2.1.0 Attachments: 7590-drop-index-npe-assert-ks.txt, 7590-drop-index-npe-assert.txt To reproduce: cqlsh CREATE KEYSPACE test WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':1}; cqlsh USE test; cqlsh:test DROP INDEX IF EXISTS fake_index; ErrorMessage code= [Server error] message=java.lang.AssertionError -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7590) java.lang.AssertionError when using DROP INDEX IF EXISTS on non-existing index
[ https://issues.apache.org/jira/browse/CASSANDRA-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-7590: Attachment: 7590-drop-index-npe-assert.txt But for example {{DROP TABLE IF EXISTS}} with non-existing KS does not complain. {noformat} cqlsh drop table fool.nonex; code=2200 [Invalid query] message=Keyspace fool does not exist cqlsh drop table if exists fool.nonex; cqlsh {noformat} The patches are 7590-drop-index-npe-assert.txt ignoring non-existing KS. The patches are 7590-drop-index-npe-assert-ks.txt not ignoring non-existing KS. java.lang.AssertionError when using DROP INDEX IF EXISTS on non-existing index -- Key: CASSANDRA-7590 URL: https://issues.apache.org/jira/browse/CASSANDRA-7590 Project: Cassandra Issue Type: Bug Components: Core Reporter: Hanh Dang Assignee: Robert Stupp Priority: Minor Labels: cql Fix For: 2.1.0 Attachments: 7590-drop-index-npe-assert-ks.txt, 7590-drop-index-npe-assert.txt To reproduce: cqlsh CREATE KEYSPACE test WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':1}; cqlsh USE test; cqlsh:test DROP INDEX IF EXISTS fake_index; ErrorMessage code= [Server error] message=java.lang.AssertionError -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (CASSANDRA-7590) java.lang.AssertionError when using DROP INDEX IF EXISTS on non-existing index
[ https://issues.apache.org/jira/browse/CASSANDRA-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14072901#comment-14072901 ] Robert Stupp edited comment on CASSANDRA-7590 at 7/24/14 6:57 AM: -- But for example {{DROP TABLE IF EXISTS}} with non-existing KS does not complain. {noformat} cqlsh drop table fool.nonex; code=2200 [Invalid query] message=Keyspace fool does not exist cqlsh drop table if exists fool.nonex; cqlsh {noformat} The patches are 7590-drop-index-npe-assert.txt (ignores a non-existing KS) and 7590-drop-index-npe-assert-ks.txt (does not ignore a non-existing KS). was (Author: snazy): But for example {{DROP TABLE IF EXISTS}} with non-existing KS does not complain. {noformat} cqlsh drop table fool.nonex; code=2200 [Invalid query] message=Keyspace fool does not exist cqlsh drop table if exists fool.nonex; cqlsh {noformat} The patches are 7590-drop-index-npe-assert.txt ignoring non-existing KS. The patches are 7590-drop-index-npe-assert-ks.txt not ignoring non-existing KS. java.lang.AssertionError when using DROP INDEX IF EXISTS on non-existing index -- Key: CASSANDRA-7590 URL: https://issues.apache.org/jira/browse/CASSANDRA-7590 Project: Cassandra Issue Type: Bug Components: Core Reporter: Hanh Dang Assignee: Robert Stupp Priority: Minor Labels: cql Fix For: 2.1.0 Attachments: 7590-drop-index-npe-assert-ks.txt, 7590-drop-index-npe-assert.txt To reproduce: cqlsh CREATE KEYSPACE test WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':1}; cqlsh USE test; cqlsh:test DROP INDEX IF EXISTS fake_index; ErrorMessage code= [Server error] message=java.lang.AssertionError -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7590) java.lang.AssertionError when using DROP INDEX IF EXISTS on non-existing index
[ https://issues.apache.org/jira/browse/CASSANDRA-7590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-7590: Attachment: (was: 7590-drop-index-npe-assert.txt) java.lang.AssertionError when using DROP INDEX IF EXISTS on non-existing index -- Key: CASSANDRA-7590 URL: https://issues.apache.org/jira/browse/CASSANDRA-7590 Project: Cassandra Issue Type: Bug Components: Core Reporter: Hanh Dang Assignee: Robert Stupp Priority: Minor Labels: cql Fix For: 2.1.0 Attachments: 7590-drop-index-npe-assert-ks.txt, 7590-drop-index-npe-assert.txt To reproduce: cqlsh CREATE KEYSPACE test WITH REPLICATION = {'class':'SimpleStrategy', 'replication_factor':1}; cqlsh USE test; cqlsh:test DROP INDEX IF EXISTS fake_index; ErrorMessage code= [Server error] message=java.lang.AssertionError -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-7606) Add IF [NOT] EXISTS to CREATE/DROP trigger
Robert Stupp created CASSANDRA-7606: --- Summary: Add IF [NOT] EXISTS to CREATE/DROP trigger Key: CASSANDRA-7606 URL: https://issues.apache.org/jira/browse/CASSANDRA-7606 Project: Cassandra Issue Type: Improvement Reporter: Robert Stupp All CREATE/DROP statements support IF [NOT] EXISTS - except CREATE/DROP trigger. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7604) Test coverage for conditional DDL statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14072917#comment-14072917 ] Robert Stupp commented on CASSANDRA-7604: - Also * {{CREATE USER IF NOT EXISTS}} * {{DROP USER IF EXISTS}} Not sure (not implemented yet - CASSANDRA-7606) : * {{CREATE TRIGGER IF NOT EXISTS}} * {{DROP TRIGGER IF EXISTS}} Test coverage for conditional DDL statements Key: CASSANDRA-7604 URL: https://issues.apache.org/jira/browse/CASSANDRA-7604 Project: Cassandra Issue Type: Test Components: Tests Reporter: Tyler Hobbs Assignee: Ryan McGuire We only have minimal test coverage of {{IF \[NOT\] EXISTS}} conditions for DDL statements. I think dtests are the right place to add those tests. We need to cover: * {{CREATE KEYSPACE IF NOT EXISTS}} * {{DROP KEYSPACE IF EXISTS}} * {{CREATE TABLE IF NOT EXISTS}} * {{DROP TABLE IF EXISTS}} * {{CREATE INDEX IF NOT EXISTS}} * {{DROP INDEX IF EXISTS}} * {{CREATE TYPE IF NOT EXISTS}} * {{DROP TYPE IF EXISTS}} The tests should also ensure that InvalidRequestExceptions are thrown if, for example, you try to drop an index from a keyspace that doesn't exist (regardless of whether {{IF EXISTS}} is used). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)
[ https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14072935#comment-14072935 ] Robert Stupp commented on CASSANDRA-7438: - my username on github is snazy Do you know {{org.codehaus.mojo:native-maven-plugin}}? It allows JNI compilation on almost all platforms directly from Maven and does not interfere with SWIG - have used it on OSX, Linux and Win. Serializing Row cache alternative (Fully off heap) -- Key: CASSANDRA-7438 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438 Project: Cassandra Issue Type: Improvement Components: Core Environment: Linux Reporter: Vijay Assignee: Vijay Labels: performance Fix For: 3.0 Attachments: 0001-CASSANDRA-7438.patch Currently SerializingCache is partially off heap, keys are still stored in JVM heap as BB, * There is a higher GC costs for a reasonably big cache. * Some users have used the row cache efficiently in production for better results, but this requires careful tunning. * Overhead in Memory for the cache entries are relatively high. So the proposal for this ticket is to move the LRU cache logic completely off heap and use JNI to interact with cache. We might want to ensure that the new implementation match the existing API's (ICache), and the implementation needs to have safe memory access, low overhead in memory and less memcpy's (As much as possible). We might also want to make this cache configurable. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)
[ https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14072935#comment-14072935 ] Robert Stupp edited comment on CASSANDRA-7438 at 7/24/14 7:36 AM: -- my username on github is snazy Do you know {{org.codehaus.mojo:native-maven-plugin}}? It allows JNI compilation on almost all platforms directly from Maven and does not interfere with SWIG - have used it on OSX, Linux, Win and Solaris. was (Author: snazy): my username on github is snazy Do you know {{org.codehaus.mojo:native-maven-plugin}}? It allows JNI compilation on almost all platforms directly from Maven and does not interfere with SWIG - have used it on OSX, Linux and Win. Serializing Row cache alternative (Fully off heap) -- Key: CASSANDRA-7438 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438 Project: Cassandra Issue Type: Improvement Components: Core Environment: Linux Reporter: Vijay Assignee: Vijay Labels: performance Fix For: 3.0 Attachments: 0001-CASSANDRA-7438.patch Currently SerializingCache is partially off heap, keys are still stored in JVM heap as BB, * There is a higher GC costs for a reasonably big cache. * Some users have used the row cache efficiently in production for better results, but this requires careful tunning. * Overhead in Memory for the cache entries are relatively high. So the proposal for this ticket is to move the LRU cache logic completely off heap and use JNI to interact with cache. We might want to ensure that the new implementation match the existing API's (ICache), and the implementation needs to have safe memory access, low overhead in memory and less memcpy's (As much as possible). We might also want to make this cache configurable. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (CASSANDRA-7604) Test coverage for conditional DDL statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14072917#comment-14072917 ] Robert Stupp edited comment on CASSANDRA-7604 at 7/24/14 7:42 AM: -- Also * {{CREATE USER IF NOT EXISTS}} * {{DROP USER IF EXISTS}} Not sure (not implemented yet - CASSANDRA-7606) : * {{CREATE TRIGGER IF NOT EXISTS}} * {{DROP TRIGGER IF EXISTS}} was (Author: snazy): Also * {{CREATE USER IF NOT EXISTS}} * {{DROP USER IF EXISTS}} Not sure (not implemented yet - CASSANDRA-7606) : * {{CREATE USER IF NOT EXISTS}} * {{DROP USER IF EXISTS}} Test coverage for conditional DDL statements Key: CASSANDRA-7604 URL: https://issues.apache.org/jira/browse/CASSANDRA-7604 Project: Cassandra Issue Type: Test Components: Tests Reporter: Tyler Hobbs Assignee: Ryan McGuire We only have minimal test coverage of {{IF \[NOT\] EXISTS}} conditions for DDL statements. I think dtests are the right place to add those tests. We need to cover: * {{CREATE KEYSPACE IF NOT EXISTS}} * {{DROP KEYSPACE IF EXISTS}} * {{CREATE TABLE IF NOT EXISTS}} * {{DROP TABLE IF EXISTS}} * {{CREATE INDEX IF NOT EXISTS}} * {{DROP INDEX IF EXISTS}} * {{CREATE TYPE IF NOT EXISTS}} * {{DROP TYPE IF EXISTS}} The tests should also ensure that InvalidRequestExceptions are thrown if, for example, you try to drop an index from a keyspace that doesn't exist (regardless of whether {{IF EXISTS}} is used). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)
[ https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14072950#comment-14072950 ] Benedict commented on CASSANDRA-7438: - bq. Not sure what we are talking about, this == lurc? if yes the RB is fronting the queue so we don't need a global lock. I was referring to [~rst...@pironet-ndh.com]'s assertion of the need for some kind of memory management - my only point was that you use no tools that aren't available through unsafe/NativeAllocator. Serializing Row cache alternative (Fully off heap) -- Key: CASSANDRA-7438 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438 Project: Cassandra Issue Type: Improvement Components: Core Environment: Linux Reporter: Vijay Assignee: Vijay Labels: performance Fix For: 3.0 Attachments: 0001-CASSANDRA-7438.patch Currently SerializingCache is partially off heap, keys are still stored in JVM heap as BB, * There is a higher GC costs for a reasonably big cache. * Some users have used the row cache efficiently in production for better results, but this requires careful tunning. * Overhead in Memory for the cache entries are relatively high. So the proposal for this ticket is to move the LRU cache logic completely off heap and use JNI to interact with cache. We might want to ensure that the new implementation match the existing API's (ICache), and the implementation needs to have safe memory access, low overhead in memory and less memcpy's (As much as possible). We might also want to make this cache configurable. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-7607) Test coverage for authorization in DDL DML statements
Robert Stupp created CASSANDRA-7607: --- Summary: Test coverage for authorization in DDL DML statements Key: CASSANDRA-7607 URL: https://issues.apache.org/jira/browse/CASSANDRA-7607 Project: Cassandra Issue Type: Test Reporter: Robert Stupp Similar to CASSANDRA-7604 Check that the statements perform proper authorization (allow / reject): * {{CREATE KEYSPACE}} * {{ALTER KEYSPACE}} * {{DROP KEYSPACE}} * {{CREATE TABLE}} * {{ALTER TABLE}} * {{DROP TABLE}} * {{CREATE TYPE}} * {{ALTER TYPE}} * {{DROP TYPE}} * {{CREATE INDEX}} * {{DROP INDEX}} * {{CREATE TRIGGER}} * {{DROP TRIGGER}} * {{CREATE USER}} * {{ALTER USER}} * {{DROP USER}} * {{TRUNCATE}} * {{GRANT}} * {{REVOKE}} * {{SELECT}} * {{UPDATE}} * {{DELETE}} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7604) Test coverage for conditional DDL statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7604?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14072958#comment-14072958 ] Robert Stupp commented on CASSANDRA-7604: - Note: for consistent behavior, {{KeyspaceNotDefinedException}} should be thrown for non-existing KS (as in {{ThriftValidation#validateKeyspace}}) Also created CASSANDRA-7607 to cover authorization in unit tests Test coverage for conditional DDL statements Key: CASSANDRA-7604 URL: https://issues.apache.org/jira/browse/CASSANDRA-7604 Project: Cassandra Issue Type: Test Components: Tests Reporter: Tyler Hobbs Assignee: Ryan McGuire We only have minimal test coverage of {{IF \[NOT\] EXISTS}} conditions for DDL statements. I think dtests are the right place to add those tests. We need to cover: * {{CREATE KEYSPACE IF NOT EXISTS}} * {{DROP KEYSPACE IF EXISTS}} * {{CREATE TABLE IF NOT EXISTS}} * {{DROP TABLE IF EXISTS}} * {{CREATE INDEX IF NOT EXISTS}} * {{DROP INDEX IF EXISTS}} * {{CREATE TYPE IF NOT EXISTS}} * {{DROP TYPE IF EXISTS}} The tests should also ensure that InvalidRequestExceptions are thrown if, for example, you try to drop an index from a keyspace that doesn't exist (regardless of whether {{IF EXISTS}} is used). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-7608) StressD can't create keyspaces with Write Command
Russell Alexander Spitzer created CASSANDRA-7608: Summary: StressD can't create keyspaces with Write Command Key: CASSANDRA-7608 URL: https://issues.apache.org/jira/browse/CASSANDRA-7608 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Russell Alexander Spitzer Priority: Minor It is impossible to run the default stress command via the daemon: ./stress write Because the column names are HeapByteBuffers, they get ignored during serialization (no error is thrown), and then when the object is deserialized on the server, settings.columns.names is null. This leads to a null pointer on the daemon for what would have worked had it run locally. Settings object on the local machine {code} columns = {org.apache.cassandra.stress.settings.SettingsColumn@1465} maxColumnsPerKey = 5 names = {java.util.Arrays$ArrayList@1471} size = 5 [0] = {java.nio.HeapByteBuffer@1478}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [1] = {java.nio.HeapByteBuffer@1483}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [2] = {java.nio.HeapByteBuffer@1484}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [3] = {java.nio.HeapByteBuffer@1485}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [4] = {java.nio.HeapByteBuffer@1486}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] {code} Settings object on the StressD machine {code} columns = {org.apache.cassandra.stress.settings.SettingsColumn@810} maxColumnsPerKey = 5 names = null {code} This leads to the null pointer in {code} Exception in thread Thread-1 java.lang.NullPointerException at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesThrift(SettingsSchema.java:94) at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:67) at org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:193) at org.apache.cassandra.stress.StressAction.run(StressAction.java:59) at java.lang.Thread.run(Thread.java:745) {code} Which refers to {code} for (int i = 0; i < settings.columns.names.size(); i++) standardCfDef.addToColumn_metadata(new ColumnDef(settings.columns.names.get(i), BytesType)); {code} Possible solution: Just use the settings.columns.namestr and convert them to byte buffers at this point in the code. -- This message was sent by Atlassian JIRA (v6.2#6252)
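The proposed workaround can be sketched in isolation. This is a hypothetical helper, not the actual stress code: it rebuilds the ByteBuffer column names from their string counterparts after deserialization, since the HeapByteBuffers are silently dropped when the settings object travels to the daemon.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: regenerate ByteBuffer column names from strings
// on the daemon side, so deserialized settings never carry names = null.
public class ColumnNameRebuild {
    public static List<ByteBuffer> toByteBuffers(List<String> nameStrings) {
        List<ByteBuffer> names = new ArrayList<>(nameStrings.size());
        for (String s : nameStrings)
            names.add(ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8)));
        return names;
    }

    public static void main(String[] args) {
        List<ByteBuffer> names =
            toByteBuffers(java.util.Arrays.asList("C0", "C1"));
        System.out.println(names.size());            // prints 2
        System.out.println(names.get(0).remaining()); // prints 2 (lim=2, as in the debugger dump)
    }
}
```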
[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)
[ https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14072970#comment-14072970 ] Robert Stupp commented on CASSANDRA-7438: - bq. no tools that aren't available through unsafe/NativeAllocator It's plain old {{malloc}}/{{free}}. But regardless of whether the JVM's native allocator or malloc is used - it needs to be thoroughly tested to ensure that there are no memory leaks. I'm not sure if that is possible with JNI code; we should also check different C malloc implementations - there are differences. Altogether, IMO this one (lruc) is a good start to prevent GC - we can fully optimize it later. Maybe it's possible to create a pure Java off-heap solution without the annoying JNI safepoint. Ah - and we should check lruc against both Java 7 and Java 8 - I think there are optimizations in Java 8 regarding JNI (or at least there were some discussions - did not follow that). Serializing Row cache alternative (Fully off heap) -- Key: CASSANDRA-7438 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438 Project: Cassandra Issue Type: Improvement Components: Core Environment: Linux Reporter: Vijay Assignee: Vijay Labels: performance Fix For: 3.0 Attachments: 0001-CASSANDRA-7438.patch Currently SerializingCache is partially off heap, keys are still stored in JVM heap as BB, * There is a higher GC costs for a reasonably big cache. * Some users have used the row cache efficiently in production for better results, but this requires careful tunning. * Overhead in Memory for the cache entries are relatively high. So the proposal for this ticket is to move the LRU cache logic completely off heap and use JNI to interact with cache. We might want to ensure that the new implementation match the existing API's (ICache), and the implementation needs to have safe memory access, low overhead in memory and less memcpy's (As much as possible). We might also want to make this cache configurable. 
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)
[ https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14072974#comment-14072974 ] Benedict commented on CASSANDRA-7438: - I should note I'm not here in any way worried about the JNI costs or performance of the map - Vijay's numbers demonstrate it performs well enough if it's also _correct_. My concern stems only from the lack of expertise for curating a fully native solution. Do we have any prominent community members willing and capable of vetting the code? Serializing Row cache alternative (Fully off heap) -- Key: CASSANDRA-7438 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438 Project: Cassandra Issue Type: Improvement Components: Core Environment: Linux Reporter: Vijay Assignee: Vijay Labels: performance Fix For: 3.0 Attachments: 0001-CASSANDRA-7438.patch Currently SerializingCache is partially off heap, keys are still stored in JVM heap as BB, * There is a higher GC costs for a reasonably big cache. * Some users have used the row cache efficiently in production for better results, but this requires careful tunning. * Overhead in Memory for the cache entries are relatively high. So the proposal for this ticket is to move the LRU cache logic completely off heap and use JNI to interact with cache. We might want to ensure that the new implementation match the existing API's (ICache), and the implementation needs to have safe memory access, low overhead in memory and less memcpy's (As much as possible). We might also want to make this cache configurable. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)
[ https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14072978#comment-14072978 ] Benedict commented on CASSANDRA-7438: - I should note that, for these reasons, I would personally prefer to *eventually* replace it with a pure Java version using unsafe, anyway, since in the long run we don't have many C-proficient members capable of maintaining it even if we find one willing to review. Serializing Row cache alternative (Fully off heap) -- Key: CASSANDRA-7438 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438 Project: Cassandra Issue Type: Improvement Components: Core Environment: Linux Reporter: Vijay Assignee: Vijay Labels: performance Fix For: 3.0 Attachments: 0001-CASSANDRA-7438.patch Currently SerializingCache is partially off heap, keys are still stored in JVM heap as BB, * There is a higher GC costs for a reasonably big cache. * Some users have used the row cache efficiently in production for better results, but this requires careful tunning. * Overhead in Memory for the cache entries are relatively high. So the proposal for this ticket is to move the LRU cache logic completely off heap and use JNI to interact with cache. We might want to ensure that the new implementation match the existing API's (ICache), and the implementation needs to have safe memory access, low overhead in memory and less memcpy's (As much as possible). We might also want to make this cache configurable. -- This message was sent by Atlassian JIRA (v6.2#6252)
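A minimal sketch of the pure-Java direction Benedict describes: raw off-heap allocate/write/read/free through {{sun.misc.Unsafe}} (obtained via reflection; straightforward on JDK 8, still reachable via the jdk.unsupported module on later JDKs), with no JNI involved. This only illustrates the primitive; a real row cache would layer an LRU structure and reference counting on top.

```java
import java.lang.reflect.Field;
import sun.misc.Unsafe;

// Illustrative only: off-heap memory managed from pure Java via Unsafe.
// The memory is invisible to the GC, and freeing it is our responsibility.
public class OffHeapSketch {
    public static final Unsafe UNSAFE;
    static {
        try {
            Field f = Unsafe.class.getDeclaredField("theUnsafe");
            f.setAccessible(true);
            UNSAFE = (Unsafe) f.get(null);
        } catch (ReflectiveOperationException e) {
            throw new AssertionError(e);
        }
    }

    public static void main(String[] args) {
        long addr = UNSAFE.allocateMemory(8); // 8 off-heap bytes
        try {
            UNSAFE.putLong(addr, 42L);
            System.out.println(UNSAFE.getLong(addr)); // prints 42
        } finally {
            UNSAFE.freeMemory(addr); // manual free: a leak here is a leak for good
        }
    }
}
```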
[jira] [Created] (CASSANDRA-7609) CSV import is taking huge time in CQL
akshay created CASSANDRA-7609: - Summary: CSV import is taking huge time in CQL Key: CASSANDRA-7609 URL: https://issues.apache.org/jira/browse/CASSANDRA-7609 Project: Cassandra Issue Type: Bug Components: Tools Environment: Ubuntu OS Reporter: akshay Priority: Critical Fix For: 2.0.9 Hello, I am trying the COPY command in Cassandra to import a CSV file into the DB. The import is taking a huge amount of time; any suggestions to improve it? id,a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z 100,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26 101,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26 -- -- there are ~50K lines in this file, size is ~5 MB. I have created the table as per below: create table csldata4 ( id int PRIMARY KEY,a int , b int, c int, d int, e int, f int, g int, h int,i int, j int, k int, l int,m int, n int, o int, p int, q int, r int, s int, t int, u int, v int, w int, x int, y int , z int); Copy Command: COPY csldata4 (id , a , b , c , d , e , f , g , h , i , j , k , l , m , n , o , p , q , r , s , t , u , v , w , x , y , z ) FROM 'csldata1.csv' WITH HEADER=TRUE; The issue here is that the import takes a huge amount of time: cqlsh:mykeyspace COPY csldata (id , a , b , c , d , e , f , g , h , i , j , k , l , m , n , o , p , q , r , s , t , u , v , w , x , y , z ) FROM 'csldata1.csv' WITH HEADER=TRUE; 66215 rows imported in 1 minute and 31.044 seconds. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073058#comment-14073058 ] Lyuben Todorov commented on CASSANDRA-7597: --- It's better to fail fast when a bad config is detected, as it's hard to interpret what the config was supposed to be and attempt to fix it. My argument for -1 here is that logging exceptions without erroring out preemptively will lead to further problems (further exceptions and a stacktrace that might be misleading) and eventually a crash. I'll give an example just to be clear. A missing {{commitlog_sync}} option from cassandra.yaml would generate this error when trying to start the server: {noformat} ERROR 09:58:33 Fatal configuration error org.apache.cassandra.exceptions.ConfigurationException: Missing required directive CommitLogSync at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:162) ~[main/:na] at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:128) ~[main/:na] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:109) [main/:na] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:454) [main/:na] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:543) [main/:na] Missing required directive CommitLogSync Fatal configuration error; unable to start. See log for stacktrace. 
{noformat} I think that error is much clearer than what would happen if we just log the exception and allow cassandra to continue; the below is the final exception that is displayed: {noformat} ERROR 09:59:45 Exception encountered during startup java.lang.NullPointerException: null at java.util.Arrays$ArrayList.<init>(Arrays.java:2842) ~[na:1.7.0_60] at java.util.Arrays.asList(Arrays.java:2828) ~[na:1.7.0_60] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:192) [main/:na] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:454) [main/:na] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:543) [main/:na] java.lang.NullPointerException at java.util.Arrays$ArrayList.<init>(Arrays.java:2842) at java.util.Arrays.asList(Arrays.java:2828) at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:192) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:454) at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:543) Exception encountered during startup: null {noformat} Finally, removing the {{System.exit(1|-1)}} and throwing exceptions instead generates a similar stacktrace to what we do now, except it's not as clean: {noformat} ERROR 10:07:59 Fatal configuration error org.apache.cassandra.exceptions.ConfigurationException: Missing required directive CommitLogSync at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:164) ~[main/:na] at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:128) ~[main/:na] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:109) [main/:na] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:454) [main/:na] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:543) [main/:na] ERROR 10:07:59 Exception encountered during startup java.lang.ExceptionInInitializerError: null at 
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:109) [main/:na] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:454) [main/:na] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:543) [main/:na] Caused by: java.lang.RuntimeException: org.apache.cassandra.exceptions.ConfigurationException: Missing required directive CommitLogSync at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:136) ~[main/:na] ... 3 common frames omitted Caused by: org.apache.cassandra.exceptions.ConfigurationException: Missing required directive CommitLogSync at org.apache.cassandra.config.DatabaseDescriptor.applyConfig(DatabaseDescriptor.java:164) ~[main/:na] at org.apache.cassandra.config.DatabaseDescriptor.<clinit>(DatabaseDescriptor.java:128) ~[main/:na] ... 3 common frames omitted java.lang.ExceptionInInitializerError at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:109) at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:454) at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:543) Caused by: java.lang.RuntimeException:
[jira] [Commented] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073067#comment-14073067 ] Robert Stupp commented on CASSANDRA-7597: - And even the disk failure policy might cause a {{System.exit()}} for a good reason; there are ticket(s) to shut down non-gracefully on OOM (which is also a good reason - definitely better than corrupting data). In the case of just loading configuration, it might be simple: add a new function - something like {{loadConfigNoExit()}} - that returns a boolean or throws an exception to indicate an invalid configuration. But I don't see a viable alternative for the other usages of System.exit(). System.exit() calls should be removed from DatabaseDescriptor - Key: CASSANDRA-7597 URL: https://issues.apache.org/jira/browse/CASSANDRA-7597 Project: Cassandra Issue Type: Bug Environment: Cassandra 2.0.9 (earlier version should be affected as well) Reporter: Pavel Sakun We're using SSTableSimpleUnsortedWriter API to generate SSTable to be loaded into cassandra. In case of any issue with config DatabaseDescriptor calls System.exit() which is apparently not the thing you expect while using API. Test case is simple: System.setProperty( cassandra.config, ); new YamlConfigurationLoader().loadConfig(); Thread.sleep( 5000 ); System.out.println(We're still alive); // this will never be called -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073074#comment-14073074 ] Benedict commented on CASSANDRA-7597: - I think there's no reason not to make it possible to load DatabaseDescriptor without exiting or erroring, while providing some accessor to indicate whether it was valid; obviously CassandraDaemon needs to retain the current behaviour, but that's not a major obstacle. I don't think there's any mention of any other sysexit calls, so that shouldn't be a problem.
[jira] [Commented] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073081#comment-14073081 ] Robert Stupp commented on CASSANDRA-7597: - [~benedict] so add a new function or change the current behaviour? Exception or boolean? I think something like an unchecked {{InvalidDatabaseConfigurationException}} would be ok. CassandraDaemon could then call System.exit on its own (if necessary).
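A minimal, self-contained sketch of the exception-instead-of-exit approach being discussed (the class and method bodies here are hypothetical stand-ins, not the actual Cassandra API): the loader throws an unchecked exception on invalid config, and only the daemon entry point decides whether to exit the VM.

```java
// Hypothetical sketch: config loading throws instead of calling System.exit.
public class ConfigSketch {
    static class InvalidDatabaseConfigurationException extends RuntimeException {
        InvalidDatabaseConfigurationException(String msg) { super(msg); }
    }

    // Stand-in for a loadConfigNoExit()-style entry point.
    static void loadConfig(String commitLogSync) {
        if (commitLogSync == null)
            throw new InvalidDatabaseConfigurationException("Missing required directive CommitLogSync");
        // ... apply remaining settings ...
    }

    public static void main(String[] args) {
        try {
            loadConfig(null); // simulate a broken cassandra.yaml
        } catch (InvalidDatabaseConfigurationException e) {
            // A daemon would log and System.exit(1) here; an embedding client
            // tool (e.g. an sstable writer) can recover instead of having its
            // VM killed.
            System.out.println("invalid config: " + e.getMessage());
        }
        System.out.println("We're still alive");
    }
}
```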
[jira] [Commented] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073082#comment-14073082 ] Benedict commented on CASSANDRA-7597: - I have no preference. So long as it works :)
[jira] [Commented] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073091#comment-14073091 ] Robert Stupp commented on CASSANDRA-7597: - Branch cassandra-2.1 ok?
[jira] [Commented] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073092#comment-14073092 ] Benedict commented on CASSANDRA-7597: - Sure
[jira] [Updated] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-7597: Attachment: 7597.txt
Oops - not that easy. A static initializer in DatabaseDescriptor loads and applies the config and calls System.exit on failure. I implemented a workaround for these use cases and left the default behavior untouched. Workaround as implemented in the attached patch:
# Call {{org.apache.cassandra.config.Config.setIsExitOnInvalidConfig(false)}}
# Check the return value of {{org.apache.cassandra.config.DatabaseDescriptor.getInvalidConfigFailure()}} - it returns {{null}} if the config has been successfully applied; otherwise it returns the failure.
Also changed the {{catch (Exception e)}} to {{catch (Throwable e)}} in the static initializer to catch really everything - even runtime exceptions. This is not a good thing and I'm not sure whether to commit this one. [~benedict] ?
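A self-contained sketch of the workaround protocol described in the patch comment. The flag and accessor names mirror the comment; the class bodies are simplified stand-ins for the real classes in {{org.apache.cassandra.config}}, not the actual implementation.

```java
// Simplified stand-ins; only the flag/accessor protocol matters here.
class Config {
    private static volatile boolean exitOnInvalidConfig = true;
    static void setIsExitOnInvalidConfig(boolean b) { exitOnInvalidConfig = b; }
    static boolean isExitOnInvalidConfig() { return exitOnInvalidConfig; }
}

class DatabaseDescriptor {
    private static Throwable invalidConfigFailure;

    static {
        try {
            applyConfig();
        } catch (Throwable e) {           // catch really everything, even runtime exceptions
            if (Config.isExitOnInvalidConfig())
                System.exit(1);           // unchanged default behavior for the daemon
            invalidConfigFailure = e;     // recorded for embedding tools to inspect
        }
    }

    private static void applyConfig() {
        // simulate a broken cassandra.yaml
        throw new RuntimeException("Missing required directive CommitLogSync");
    }

    /** Returns null if the config was applied successfully, else the failure. */
    static Throwable getInvalidConfigFailure() { return invalidConfigFailure; }
}
```

An embedding tool would call {{Config.setIsExitOnInvalidConfig(false)}} before its first reference to DatabaseDescriptor (since the static initializer runs on first class use), then check {{getInvalidConfigFailure()}} for {{null}}.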
[jira] [Comment Edited] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073118#comment-14073118 ] Robert Stupp edited comment on CASSANDRA-7597 at 7/24/14 11:46 AM: --- Oops - not that easy. A static initializer in DatabaseDescriptor loads and applies the config and calls System.exit on failure. I implemented a workaround for these use cases and left the default behavior untouched. Workaround as implemented in the attached patch:
# Call {{org.apache.cassandra.config.Config.setIsExitOnInvalidConfig(false)}}
# Check the return value of {{org.apache.cassandra.config.DatabaseDescriptor.getInvalidConfigFailure()}} - it returns {{null}} if the config has been successfully applied; otherwise it returns the failure.
Also changed the {{catch (Exception e)}} to {{catch (Throwable e)}} in the static initializer to catch really everything - even runtime exceptions. This is not a good thing and I'm not sure whether to commit this one. [~benedict] ?
[jira] [Commented] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073125#comment-14073125 ] Benedict commented on CASSANDRA-7597: - Hmm. I think I would prefer to stop it being set up by a static initializer - we don't need it to be one, since we don't impose final on any of the values. Of course the problem there is that we have to track down all of the first calls to DatabaseDescriptor and ensure they are preceded by a call to our new initializer. In which case I would prefer to bump this to trunk only.
[jira] [Comment Edited] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073118#comment-14073118 ] Robert Stupp edited comment on CASSANDRA-7597 at 7/24/14 11:50 AM: --- Oops - not that easy. A static initializer in DatabaseDescriptor loads and applies the config and calls System.exit on failure. I implemented a workaround for these use cases and left the default behavior untouched. Workaround as implemented in the attached patch:
# Call {{org.apache.cassandra.config.Config.setIsExitOnInvalidConfig(false)}}
# Check the return value of {{org.apache.cassandra.config.DatabaseDescriptor.getInvalidConfigFailure()}} - it returns {{null}} if the config has been successfully applied; otherwise it returns the failure.
Also changed the {{catch (Exception e)}} to {{catch (Throwable e)}} in the static initializer to catch really everything - even runtime exceptions. This is not a good thing and I'm not sure whether to commit this one. [~benedict] ?
EDIT: The DatabaseDescriptor class must not be used anywhere until the {{Config.set...}} call has been made - otherwise the VM would initialize DatabaseDescriptor, which would load the configuration and exit the VM. I'm not sold on this one.
[jira] [Commented] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073128#comment-14073128 ] Robert Stupp commented on CASSANDRA-7597: - bq. I would prefer to bump this to trunk only
+1. I suggest resolving this as Won't Fix. Have created CASSANDRA-7610.
[jira] [Resolved] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp resolved CASSANDRA-7597. - Resolution: Won't Fix Proper solution via CASSANDRA-7610
[jira] [Created] (CASSANDRA-7610) Remove static initializer in DatabaseDescriptor
Robert Stupp created CASSANDRA-7610: --- Summary: Remove static initializer in DatabaseDescriptor Key: CASSANDRA-7610 URL: https://issues.apache.org/jira/browse/CASSANDRA-7610 Project: Cassandra Issue Type: Improvement Components: Core Reporter: Robert Stupp
As discussed in CASSANDRA-7597, it's difficult to properly react to invalid configuration values in a client tool that uses Cassandra code (here: an sstable loader). The reason is that the static initializer in DatabaseDescriptor calls System.exit in case of configuration failures. Recommend implementing some loadAndApplyConfig method on DatabaseDescriptor, removing the static initializer, and letting the calling code react accordingly (print error, exit VM). All direct and indirect uses of DatabaseDescriptor must be caught to solve this ticket - so it's not a 2.x ticket.
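A hedged sketch of the direction this ticket recommends: an explicit, idempotent loadAndApplyConfig call replacing the static initializer. All names and bodies here are illustrative, not the eventual Cassandra implementation.

```java
// Illustrative only: explicit initialization instead of a static initializer.
public class DatabaseDescriptorSketch {
    private static boolean applied;

    /**
     * Loads and applies the configuration exactly once. Throws on invalid
     * config instead of calling System.exit; the caller (daemon or client
     * tool) decides how to react.
     */
    public static synchronized void loadAndApplyConfig(String commitLogSync) {
        if (applied)
            return;                       // idempotent: safe to call from every entry point
        if (commitLogSync == null)
            throw new IllegalArgumentException("Missing required directive CommitLogSync");
        // ... apply the remaining settings ...
        applied = true;
    }

    public static boolean isApplied() { return applied; }
}
```

Every first use of DatabaseDescriptor would need to be preceded by this call, which is exactly the tracking-down work that makes the change too invasive for a 2.x branch.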
[jira] [Reopened] (CASSANDRA-7597) System.exit() calls should be removed from DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict reopened CASSANDRA-7597: - Assignee: Robert Stupp
[jira] [Resolved] (CASSANDRA-7610) Remove static initializer in DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7610?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict resolved CASSANDRA-7610. - Resolution: Duplicate
In general we prefer to transform a ticket if there is a simple change in scope: simply update the description with the updated goal given the new information. It makes it easier to track the discussion in future, and reduces the number of extraneous tickets in the bug tracker.
[jira] [Commented] (CASSANDRA-6572) Workload recording / playback
[ https://issues.apache.org/jira/browse/CASSANDRA-6572?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073141#comment-14073141 ] Lyuben Todorov commented on CASSANDRA-6572: ---
bq. It looks to me like you need some way to share the statement preparation across threads, as it can be used by any thread (and across log segments) once prepared. Probably easiest to do it during parsing of the log file
Seems simple enough; creating a concurrent map that is shared across a WorkloadReplayer should do the job. The problem with doing it whilst parsing the log is that the statement might be for a ks / cf that isn't yet created.
bq. We also have an issue with replay potentially over-parallelizing, and also potentially OOMing, as you're submitting straight to a thread pool after parsing each file. So there's nothing stopping us racing ahead and reading all of the log files (you have an unbounded queue)
A possible solution is to move the multimap to the class level rather than having {{WP#read}} create one each time it's called (again, per WorkloadReplayer, which is fine since we should only have one per replay). Then every time a read is completed we submit the collection of {{QuerylogSegments}} to be replayed, empty the map, and populate it again if the same thread-id is met in {{WP#read}}. The tricky part is submitting the same thread-id only once we know the executor doesn't have a task with the same thread-id already running.
bq. Also, we're still replaying based on offset from last query, which means we will skew very quickly. We should be fixing an epoch (in nanos) such that you have a log epoch of L, and queries are run at T=L+X; when re-run we have a replay epoch of R, and we run queries at R+X
It's on the todo list.
Workload recording / playback - Key: CASSANDRA-6572 URL: https://issues.apache.org/jira/browse/CASSANDRA-6572 Project: Cassandra Issue Type: New Feature Components: Core, Tools Reporter: Jonathan Ellis Assignee: Lyuben Todorov Fix For: 2.1.1 Attachments: 6572-trunk.diff Write sample mode gets us part way to testing new versions against a real world workload, but we need an easy way to test the query side as well. -- This message was sent by Atlassian JIRA (v6.2#6252)
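The epoch-anchored timing discussed in the comment above can be sketched as follows. This is a minimal illustration of the T = L + X / R + X scheme; {{ReplayClock}} is a hypothetical helper, not part of the attached patch.

```java
// Epoch-anchored replay: a query logged at T = L + X is replayed at R + X,
// so scheduling error never accumulates the way offset-from-last-query
// timing does.
public class ReplayClock {
    private final long logEpochNanos;    // L: epoch fixed when recording started
    private final long replayEpochNanos; // R: epoch fixed when replay started

    public ReplayClock(long logEpochNanos, long replayEpochNanos) {
        this.logEpochNanos = logEpochNanos;
        this.replayEpochNanos = replayEpochNanos;
    }

    /** Absolute replay time for a query originally run at logTimeNanos. */
    public long replayTimeFor(long logTimeNanos) {
        return replayEpochNanos + (logTimeNanos - logEpochNanos);
    }
}
```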
[jira] [Updated] (CASSANDRA-7597) Remove static initializer in DatabaseDescriptor
[ https://issues.apache.org/jira/browse/CASSANDRA-7597?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Robert Stupp updated CASSANDRA-7597:
Description: As discussed below, it's difficult to properly react to invalid configuration values in a client tool that uses Cassandra code (here: an sstable loader). The reason is that the static initializer in DatabaseDescriptor calls System.exit in case of configuration failures. Recommend implementing some loadAndApplyConfig method on DatabaseDescriptor, removing the static initializer, and letting the calling code react accordingly (print error, exit VM). All direct and indirect uses of DatabaseDescriptor must be caught to solve this ticket - so this is not a 2.1 ticket.
Issue Type: Improvement (was: Bug)
Summary: Remove static initializer in DatabaseDescriptor (was: System.exit() calls should be removed from DatabaseDescriptor)
[jira] [Updated] (CASSANDRA-7511) Commit log grows infinitely after truncate
[ https://issues.apache.org/jira/browse/CASSANDRA-7511?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Benedict updated CASSANDRA-7511: Attachment: 7511-2.0-v2.txt
Attaching an updated patch for 2.0. For 2.1, I would like to modify the behaviour of truncate and have it rely explicitly on ReplayPosition, since this is considerably safer. However, whilst more correct, it has its own challenges, as we do not have an easy way to find the ReplayPosition relating to a hint/batchlog entry - but we *could* serialize these alongside, which is the most correct thing to do.
Commit log grows infinitely after truncate -- Key: CASSANDRA-7511 URL: https://issues.apache.org/jira/browse/CASSANDRA-7511 Project: Cassandra Issue Type: Bug Environment: CentOS 6.5, Oracle Java 7u60, C* 2.0.6, 2.0.9, including earlier 1.0.* versions. Reporter: Viktor Jevdokimov Assignee: Benedict Priority: Minor Labels: commitlog Fix For: 2.0.10 Attachments: 7511-2.0-v2.txt, 7511.txt
The commit log grows infinitely after a CF truncate operation via cassandra-cli, regardless of whether the CF receives writes thereafter. CFs could be non-CQL Standard and Super column type. Creation of snapshots after truncate is turned off. The commit log may start to grow promptly or later, on a few nodes only or on all nodes at once. Nothing special in the system log. No idea how to reproduce. After a rolling restart, commit logs are cleared and back to normal. Just annoying to do a rolling restart after each truncate.
[jira] [Commented] (CASSANDRA-7603) cqlsh fails to decode blobs
[ https://issues.apache.org/jira/browse/CASSANDRA-7603?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073192#comment-14073192 ] Aleksey Yeschenko commented on CASSANDRA-7603: -- +1
cqlsh fails to decode blobs --- Key: CASSANDRA-7603 URL: https://issues.apache.org/jira/browse/CASSANDRA-7603 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Tyler Hobbs Assignee: Tyler Hobbs Fix For: 2.1.0 Attachments: 7603.txt
As described in [PYTHON-101|https://datastax-oss.atlassian.net/browse/PYTHON-101], cqlsh patches the python driver's deserialization method for BytesType for formatting purposes. When the bundled driver was upgraded to 2.1, the patch was not updated to match, resulting in an error like this:
{noformat}
cqlsh:unicode> select * from test;
Traceback (most recent call last):
  File "/home/ryan/.ccm/repository/git_cassandra-2.1/bin/cqlsh", line 908, in perform_simple_statement
    rows = self.session.execute(statement, trace=self.tracing_enabled)
  File "/home/ryan/.ccm/repository/git_cassandra-2.1/bin/../lib/cassandra-driver-internal-only-2.1.0b1.post.zip/cassandra-driver-2.1.0b1.post/cassandra/cluster.py", line 1186, in execute
    result = future.result(timeout)
  File "/home/ryan/.ccm/repository/git_cassandra-2.1/bin/../lib/cassandra-driver-internal-only-2.1.0b1.post.zip/cassandra-driver-2.1.0b1.post/cassandra/cluster.py", line 2610, in result
    raise self._final_exception
TypeError: validate() takes exactly 1 argument (2 given)
{noformat}
[jira] [Commented] (CASSANDRA-7601) Data loss after nodetool taketoken
[ https://issues.apache.org/jira/browse/CASSANDRA-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073217#comment-14073217 ] T Jake Luciani commented on CASSANDRA-7601: --- This test is specifically for >= 2.1 (@since('2.1') in the code). Can you try changing that to 2.0 and running it?
Data loss after nodetool taketoken -- Key: CASSANDRA-7601 URL: https://issues.apache.org/jira/browse/CASSANDRA-7601 Project: Cassandra Issue Type: Bug Components: Core, Tests Environment: Mac OSX Mavericks. Ubuntu 14.04 Reporter: Philip Thompson Priority: Minor Attachments: consistent_bootstrap_test.py, taketoken.tar.gz
The dtest consistent_bootstrap_test.py:TestBootstrapConsistency.consistent_reads_after_relocate_test is failing on HEAD of the git branches 2.1 and 2.1.0. It is passing on 1.2 and 2.0. The test performs the following actions:
- Create a cluster of 3 nodes
- Create a keyspace with RF 2
- Take node 3 down
- Write 980 rows to node 2 with CL ONE
- Flush node 2
- Bring node 3 back up
- Run nodetool taketoken on node 3 to transfer 80% of node 1's tokens to node 3
- Check for data loss
When the check for data loss is performed, only ~725 rows can be read via CL ALL.
[jira] [Assigned] (CASSANDRA-7605) compactionstats reports incorrect byte values
[ https://issues.apache.org/jira/browse/CASSANDRA-7605?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-7605: - Assignee: Marcus Eriksson
compactionstats reports incorrect byte values - Key: CASSANDRA-7605 URL: https://issues.apache.org/jira/browse/CASSANDRA-7605 Project: Cassandra Issue Type: Bug Components: Core Environment: 2.0.9, Java 1.7.0_55 Reporter: Peter Haggerty Assignee: Marcus Eriksson Attachments: CASSANDRA-7605.txt
The output of nodetool compactionstats (while a compaction is running) is incorrect. The output from nodetool compactionhistory and the log both match, and they disagree with the output from compactionstats. What nodetool said during the compaction was almost certainly wrong given the sizes of files on disk:
{noformat}
   completed          total   unit   progress
144713163589   146631071165   bytes    98.69%
{noformat}
nodetool compactionhistory and the log both report the same values for that compaction: 52,596,321,269 bytes to 38,575,881,134. The compactionhistory/log values make much more sense given the size of the files on disk.
[jira] [Updated] (CASSANDRA-7609) CSV import is taking huge time in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-7609: -- Priority: Minor (was: Critical) CSV import is taking huge time in CQL - Key: CASSANDRA-7609 URL: https://issues.apache.org/jira/browse/CASSANDRA-7609 Project: Cassandra Issue Type: Bug Components: Tools Environment: Ubuntu OS Reporter: akshay Priority: Minor Fix For: 2.0.9 Hello, I am trying the COPY command in Cassandra to import a CSV file into the DB. The import is taking a huge amount of time; any suggestions to improve it? id,a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z 100,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26 101,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26 -- -- there are ~50K lines in this file, size is ~5 MB. I have created the table as per below: create table csldata4 ( id int PRIMARY KEY, a int, b int, c int, d int, e int, f int, g int, h int, i int, j int, k int, l int, m int, n int, o int, p int, q int, r int, s int, t int, u int, v int, w int, x int, y int, z int); Copy Command: COPY csldata4 (id, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z) FROM 'csldata1.csv' WITH HEADER=TRUE; The issue here is that it's taking a huge amount of time to import: cqlsh:mykeyspace> COPY csldata (id, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z) FROM 'csldata1.csv' WITH HEADER=TRUE; 66215 rows imported in 1 minute and 31.044 seconds. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (CASSANDRA-7609) CSV import is taking huge time in CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-7609?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-7609. --- Resolution: Duplicate CSV import is taking huge time in CQL - Key: CASSANDRA-7609 URL: https://issues.apache.org/jira/browse/CASSANDRA-7609 Project: Cassandra Issue Type: Bug Components: Tools Environment: Ubuntu OS Reporter: akshay Priority: Critical Fix For: 2.0.9 Hello, I am trying the COPY command in Cassandra to import a CSV file into the DB. The import is taking a huge amount of time; any suggestions to improve it? id,a,b,c,d,e,f,g,h,i,j,k,l,m,n,o,p,q,r,s,t,u,v,w,x,y,z 100,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26 101,1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26 -- -- there are ~50K lines in this file, size is ~5 MB. I have created the table as per below: create table csldata4 ( id int PRIMARY KEY, a int, b int, c int, d int, e int, f int, g int, h int, i int, j int, k int, l int, m int, n int, o int, p int, q int, r int, s int, t int, u int, v int, w int, x int, y int, z int); Copy Command: COPY csldata4 (id, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z) FROM 'csldata1.csv' WITH HEADER=TRUE; The issue here is that it's taking a huge amount of time to import: cqlsh:mykeyspace> COPY csldata (id, a, b, c, d, e, f, g, h, i, j, k, l, m, n, o, p, q, r, s, t, u, v, w, x, y, z) FROM 'csldata1.csv' WITH HEADER=TRUE; 66215 rows imported in 1 minute and 31.044 seconds. -- This message was sent by Atlassian JIRA (v6.2#6252)
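For context on the timing quoted in the report: 66,215 rows in 1 minute 31.044 seconds works out to roughly 727 rows/s. A quick check of that arithmetic (illustrative code, not part of cqlsh):

```java
// Rows-per-second for the COPY run quoted in the report above.
public class CopyThroughput {
    static long rowsPerSecond(long rows, double seconds) {
        return Math.round(rows / seconds);
    }

    public static void main(String[] args) {
        // 1 minute 31.044 seconds = 91.044 s
        System.out.println(rowsPerSecond(66215, 91.044)); // ~727 rows/s
    }
}
```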
[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)
[ https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073233#comment-14073233 ] Jonathan Ellis commented on CASSANDRA-7438: --- I'm not wild about taking on the complexity of building and distributing native libraries if we have a reasonable alternative. [~vijay2...@yahoo.com] what do we win with the native approach over using java unsafe? Serializing Row cache alternative (Fully off heap) -- Key: CASSANDRA-7438 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438 Project: Cassandra Issue Type: Improvement Components: Core Environment: Linux Reporter: Vijay Assignee: Vijay Labels: performance Fix For: 3.0 Attachments: 0001-CASSANDRA-7438.patch Currently SerializingCache is partially off heap; keys are still stored in the JVM heap as BBs. * There are higher GC costs for a reasonably big cache. * Some users have used the row cache efficiently in production for better results, but this requires careful tuning. * The memory overhead for the cache entries is relatively high. So the proposal for this ticket is to move the LRU cache logic completely off heap and use JNI to interact with the cache. We might want to ensure that the new implementation matches the existing APIs (ICache), and the implementation needs to have safe memory access, low memory overhead, and as few memcpys as possible. We might also want to make this cache configurable. -- This message was sent by Atlassian JIRA (v6.2#6252)
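A minimal sketch of the pure-Java alternative being weighed against JNI here: serialized entries can already live outside the garbage-collected heap in a direct ByteBuffer (or via sun.misc.Unsafe). This illustrates the idea only; it is not Cassandra's SerializingCache code, and a real cache would also manage eviction and explicit freeing of native memory:

```java
import java.nio.ByteBuffer;

// Off-heap storage of a serialized cache entry without JNI.
// Illustrative only, not Cassandra's implementation.
public class OffHeapEntry {
    private final ByteBuffer buf;

    OffHeapEntry(byte[] serialized) {
        // allocateDirect places the bytes outside the GC-managed heap
        buf = ByteBuffer.allocateDirect(serialized.length);
        buf.put(serialized);
        buf.flip();
    }

    byte[] read() {
        byte[] out = new byte[buf.remaining()];
        buf.duplicate().get(out); // duplicate() so reads don't move the position
        return out;
    }

    public static void main(String[] args) {
        OffHeapEntry e = new OffHeapEntry("row-data".getBytes());
        System.out.println(new String(e.read())); // prints "row-data"
    }
}
```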
[jira] [Commented] (CASSANDRA-7601) Data loss after nodetool taketoken
[ https://issues.apache.org/jira/browse/CASSANDRA-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073251#comment-14073251 ] Philip Thompson commented on CASSANDRA-7601: Wow, sorry about that Jake. I didn't run it on 1.2 and 2.0, I only checked to see if it was in the list of failing tests on cassci. My mistake. If I switch to @since('2.0') and update the taketoken syntax, the test still fails because of data loss. Data loss after nodetool taketoken -- Key: CASSANDRA-7601 URL: https://issues.apache.org/jira/browse/CASSANDRA-7601 Project: Cassandra Issue Type: Bug Components: Core, Tests Environment: Mac OSX Mavericks. Ubuntu 14.04 Reporter: Philip Thompson Priority: Minor Attachments: consistent_bootstrap_test.py, taketoken.tar.gz The dtest consistent_bootstrap_test.py:TestBootstrapConsistency.consistent_reads_after_relocate_test is failing on HEAD of the git branches 2.1 and 2.1.0. It is passing on 1.2 and 2.0. The test performs the following actions: - Create a cluster of 3 nodes - Create a keyspace with RF 2 - Take node 3 down - Write 980 rows to node 2 with CL ONE - Flush node 2 - Bring node 3 back up - Run nodetool taketoken on node 3 to transfer 80% of node 1's tokens to node 3 - Check for data loss When the check for data loss is performed, only ~725 rows can be read via CL ALL. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7601) Data loss after nodetool taketoken
[ https://issues.apache.org/jira/browse/CASSANDRA-7601?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Philip Thompson updated CASSANDRA-7601: --- Description: The dtest consistent_bootstrap_test.py:TestBootstrapConsistency.consistent_reads_after_relocate_test is failing on HEAD of the git branches 2.1 and 2.1.0. The test performs the following actions: - Create a cluster of 3 nodes - Create a keyspace with RF 2 - Take node 3 down - Write 980 rows to node 2 with CL ONE - Flush node 2 - Bring node 3 back up - Run nodetool taketoken on node 3 to transfer 80% of node 1's tokens to node 3 - Check for data loss When the check for data loss is performed, only ~725 rows can be read via CL ALL. was: The dtest consistent_bootstrap_test.py:TestBootstrapConsistency.consistent_reads_after_relocate_test is failing on HEAD of the git branches 2.1 and 2.1.0. It is passing on 1.2 and 2.0. The test performs the following actions: - Create a cluster of 3 nodes - Create a keyspace with RF 2 - Take node 3 down - Write 980 rows to node 2 with CL ONE - Flush node 2 - Bring node 3 back up - Run nodetool taketoken on node 3 to transfer 80% of node 1's tokens to node 3 - Check for data loss When the check for data loss is performed, only ~725 rows can be read via CL ALL. Data loss after nodetool taketoken -- Key: CASSANDRA-7601 URL: https://issues.apache.org/jira/browse/CASSANDRA-7601 Project: Cassandra Issue Type: Bug Components: Core, Tests Environment: Mac OSX Mavericks. Ubuntu 14.04 Reporter: Philip Thompson Priority: Minor Attachments: consistent_bootstrap_test.py, taketoken.tar.gz The dtest consistent_bootstrap_test.py:TestBootstrapConsistency.consistent_reads_after_relocate_test is failing on HEAD of the git branches 2.1 and 2.1.0. 
The test performs the following actions: - Create a cluster of 3 nodes - Create a keyspace with RF 2 - Take node 3 down - Write 980 rows to node 2 with CL ONE - Flush node 2 - Bring node 3 back up - Run nodetool taketoken on node 3 to transfer 80% of node 1's tokens to node 3 - Check for data loss When the check for data loss is performed, only ~725 rows can be read via CL ALL. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-4206) AssertionError: originally calculated column size of 629444349 but now it is 588008950
[ https://issues.apache.org/jira/browse/CASSANDRA-4206?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073259#comment-14073259 ] mat gomes commented on CASSANDRA-4206: -- Same issue on v1.2.12 and 1.2.18: Error occurred during compaction java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 116397997 but now it is 116398382 ... AssertionError: originally calculated column size of 629444349 but now it is 588008950 -- Key: CASSANDRA-4206 URL: https://issues.apache.org/jira/browse/CASSANDRA-4206 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 1.0.9 Environment: Debian Squeeze Linux, kernel 2.6.32, sun-java6-bin 6.26-0squeeze1 Reporter: Patrik Modesto I've a 4-node cluster of Cassandra 1.0.9. There is a rfTest3 keyspace with RF=3 and one CF with two secondary indexes. I'm importing data into this CF using a Hadoop MapReduce job; each row has fewer than 10 columns. From JMX: MaxRowSize: 1597 MeanRowSize: 369 And there are some tens of millions of rows. It's write-heavy usage and there is big pressure on each node; there are quite a few dropped mutations on each node.
After ~12 hours of inserting I see these assertion exceptions on 3 out of 4 nodes: {noformat} ERROR 06:25:40,124 Fatal exception in thread Thread[HintedHandoff:1,1,main] java.lang.RuntimeException: java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 629444349 but now it is 588008950 at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpointInternal(HintedHandOffManager.java:388) at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpoint(HintedHandOffManager.java:256) at org.apache.cassandra.db.HintedHandOffManager.access$300(HintedHandOffManager.java:84) at org.apache.cassandra.db.HintedHandOffManager$3.runMayThrow(HintedHandOffManager.java:437) at org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:30) at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908) at java.lang.Thread.run(Thread.java:662) Caused by: java.util.concurrent.ExecutionException: java.lang.AssertionError: originally calculated column size of 629444349 but now it is 588008950 at java.util.concurrent.FutureTask$Sync.innerGet(FutureTask.java:222) at java.util.concurrent.FutureTask.get(FutureTask.java:83) at org.apache.cassandra.db.HintedHandOffManager.deliverHintsToEndpointInternal(HintedHandOffManager.java:384) ... 
7 more Caused by: java.lang.AssertionError: originally calculated column size of 629444349 but now it is 588008950 at org.apache.cassandra.db.compaction.LazilyCompactedRow.write(LazilyCompactedRow.java:124) at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:160) at org.apache.cassandra.db.compaction.CompactionTask.execute(CompactionTask.java:161) at org.apache.cassandra.db.compaction.CompactionManager$7.call(CompactionManager.java:380) at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303) at java.util.concurrent.FutureTask.run(FutureTask.java:138) ... 3 more {noformat} Few lines regarding Hints from the output.log: {noformat} INFO 06:21:26,202 Compacting large row system/HintsColumnFamily:7000 (1712834057 bytes) incrementally INFO 06:22:52,610 Compacting large row system/HintsColumnFamily:1000 (2616073981 bytes) incrementally INFO 06:22:59,111 flushing high-traffic column family CFS(Keyspace='system', ColumnFamily='HintsColumnFamily') (estimated 305147360 bytes) INFO 06:22:59,813 Enqueuing flush of Memtable-HintsColumnFamily@833933926(3814342/305147360 serialized/live bytes, 7452 ops) INFO 06:22:59,814 Writing Memtable-HintsColumnFamily@833933926(3814342/305147360 serialized/live bytes, 7452 ops) {noformat} I think the problem may be somehow connected to an IntegerType secondary index. I had a different problem with CF with two secondary indexes, the first UTF8Type, the second IntegerType. After a few hours of inserting data in the afternoon and midnight repair+compact, the next day I couldn't find any row using the IntegerType secondary index. The output was like this: {noformat} [default@rfTest3] get IndexTest where col1 = '3230727:http://zaskolak.cz/download.php'; --- RowKey: 3230727:8383582:http://zaskolak.cz/download.php = (column=col1,
[jira] [Updated] (CASSANDRA-6066) LHF 2i performance improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6066: -- Priority: Major (was: Minor) LHF 2i performance improvements --- Key: CASSANDRA-6066 URL: https://issues.apache.org/jira/browse/CASSANDRA-6066 Project: Cassandra Issue Type: Improvement Reporter: Aleksey Yeschenko Assignee: Lyuben Todorov Labels: performance Fix For: 2.1.1 We should perform more aggressive paging over the index partition (costs us nothing) and also fetch the rows from the base table in one slice query (at least the ones belonging to the same partition). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-6066) LHF 2i performance improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-6066?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-6066: -- Fix Version/s: (was: 2.0.10) 2.1.1 LHF 2i performance improvements --- Key: CASSANDRA-6066 URL: https://issues.apache.org/jira/browse/CASSANDRA-6066 Project: Cassandra Issue Type: Improvement Reporter: Aleksey Yeschenko Assignee: Lyuben Todorov Priority: Minor Labels: performance Fix For: 2.1.1 We should perform more aggressive paging over the index partition (costs us nothing) and also fetch the rows from the base table in one slice query (at least the ones belonging to the same partition). -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7232) Enable live replay of commit logs
[ https://issues.apache.org/jira/browse/CASSANDRA-7232?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073280#comment-14073280 ] Jonathan Ellis commented on CASSANDRA-7232: --- Is this really 2.0-appropriate at this point? Enable live replay of commit logs - Key: CASSANDRA-7232 URL: https://issues.apache.org/jira/browse/CASSANDRA-7232 Project: Cassandra Issue Type: Improvement Components: Tools Reporter: Patrick McFadin Assignee: Lyuben Todorov Priority: Minor Fix For: 2.0.10 Attachments: 0001-Expose-CommitLog-recover-to-JMX-add-nodetool-cmd-for.patch, 0001-TRUNK-JMX-and-nodetool-cmd-for-commitlog-replay.patch Replaying commit logs takes a restart but restoring sstables can be an online operation with refresh. In order to restore a point-in-time without a restart, the node needs to live replay the commit logs from JMX and a nodetool command. nodetool refreshcommitlogs keyspace table -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7608) StressD can't create keyspaces with Write Command
[ https://issues.apache.org/jira/browse/CASSANDRA-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Russell Alexander Spitzer updated CASSANDRA-7608: - Summary: StressD can't create keyspaces with Write Command (was: StressD doesn't can't create keyspaces with Write Command) StressD can't create keyspaces with Write Command - Key: CASSANDRA-7608 URL: https://issues.apache.org/jira/browse/CASSANDRA-7608 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Russell Alexander Spitzer Priority: Minor It is impossible to run the default stress command via the daemon: ./stress write. This is because the column names are HeapByteBuffers, so they get ignored during serialization (no error is thrown), and then when the object is deserialized on the server, settings.columns.names is null. This leads to a null pointer on the daemon for what would have worked had it run locally. Settings object on the local machine {code} columns = {org.apache.cassandra.stress.settings.SettingsColumn@1465} maxColumnsPerKey = 5 names = {java.util.Arrays$ArrayList@1471} size = 5 [0] = {java.nio.HeapByteBuffer@1478}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [1] = {java.nio.HeapByteBuffer@1483}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [2] = {java.nio.HeapByteBuffer@1484}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [3] = {java.nio.HeapByteBuffer@1485}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [4] = {java.nio.HeapByteBuffer@1486}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] {code} Settings object on the StressD machine {code} columns = {org.apache.cassandra.stress.settings.SettingsColumn@810} maxColumnsPerKey = 5 names = null {code} This leads to the null pointer in {code} Exception in thread "Thread-1" java.lang.NullPointerException at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesThrift(SettingsSchema.java:94) at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:67) at 
org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:193) at org.apache.cassandra.stress.StressAction.run(StressAction.java:59) at java.lang.Thread.run(Thread.java:745) {code} Which refers to {code} for (int i = 0; i < settings.columns.names.size(); i++) standardCfDef.addToColumn_metadata(new ColumnDef(settings.columns.names.get(i), BytesType)); {code} Possible solution: Just use the settings.columns.namestr and convert them to byte buffers at this point in the code. -- This message was sent by Atlassian JIRA (v6.2#6252)
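A sketch of the possible solution proposed above: rebuild the ByteBuffer column names on the daemon side from the string form that does survive serialization, so createKeySpacesThrift has non-null names to work with. The class and method names here are illustrative, not the actual stress-tool API:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Convert string column names back into ByteBuffers at the point where
// the column metadata is built. Illustrative helper only.
public class ColumnNames {
    static List<ByteBuffer> toBuffers(List<String> nameStrings) {
        List<ByteBuffer> names = new ArrayList<>(nameStrings.size());
        for (String s : nameStrings)
            names.add(ByteBuffer.wrap(s.getBytes(StandardCharsets.UTF_8)));
        return names;
    }

    public static void main(String[] args) {
        List<ByteBuffer> names = toBuffers(Arrays.asList("C0", "C1"));
        System.out.println(names.size()); // prints 2
    }
}
```

Strings serialize cleanly with Java serialization, so carrying them across the wire and deferring the ByteBuffer conversion sidesteps the silently-dropped HeapByteBuffer fields entirely.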
[jira] [Commented] (CASSANDRA-7312) sstable2json fails with FSReadError
[ https://issues.apache.org/jira/browse/CASSANDRA-7312?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073306#comment-14073306 ] Charles Sibbald commented on CASSANDRA-7312: Scrub did not address the issue; it deleted all the 'bad rows' in the file and effectively output an empty file. However, restoring the same sstable files to a newly built cluster works, and they are ingested without issues. sstable2json fails with FSReadError --- Key: CASSANDRA-7312 URL: https://issues.apache.org/jira/browse/CASSANDRA-7312 Project: Cassandra Issue Type: Bug Components: Core Reporter: Charles Sibbald {code} /apps/cassandra/bin/sstable2json /apps/data/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-ic-32-Data.db /tmp/test1.json ERROR 13:08:07,047 Error in ThreadPoolExecutor FSReadError in /apps/data/cassandra/data/mykeyspace/householdparentalcontrol/mykeyspace-householdparentalcontrol-ic-32-Index.db at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:200) at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:168) at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:417) at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:209) at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:157) at org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:273) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: Channel not open for writing - cannot extend file to required size at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:851) at 
org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:192) ... 10 more Exception in thread main FSReadError in /apps/data/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-ic-32-Index.db at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:200) at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:168) at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:417) at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:209) at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:157) at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:147) at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:139) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:422) at org.apache.cassandra.tools.SSTableExport.export(SSTableExport.java:435) at org.apache.cassandra.tools.SSTableExport.main(SSTableExport.java:517) Caused by: java.io.IOException: Channel not open for writing - cannot extend file to required size at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:851) at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:192) ... 
9 more {code} {code} FSReadError in /apps/data/cassandra/data/mykeyspace/mycolumnfamily/mykeyspace-mycolumnfamily-ic-32-Index.db at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.createSegments(MmappedSegmentedFile.java:200) at org.apache.cassandra.io.util.MmappedSegmentedFile$Builder.complete(MmappedSegmentedFile.java:168) at org.apache.cassandra.io.sstable.SSTableReader.load(SSTableReader.java:417) at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:209) at org.apache.cassandra.io.sstable.SSTableReader.open(SSTableReader.java:157) at org.apache.cassandra.io.sstable.SSTableReader$1.run(SSTableReader.java:273) at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471) at java.util.concurrent.FutureTask.run(FutureTask.java:262) at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: java.io.IOException: Channel not open for writing - cannot extend file to required size at
[jira] [Commented] (CASSANDRA-7560) 'nodetool repair -pr' leads to indefinitely hanging AntiEntropySession
[ https://issues.apache.org/jira/browse/CASSANDRA-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1407#comment-1407 ] Yuki Morishita commented on CASSANDRA-7560: --- From the jstack logs, it looks like repair session on coordinator node is waiting for validations (merkle trees), but none of the logs show ValidationExecutor running. By default, repair takes snapshot before validating, so it is possible that snapshotting is taking longer on replica node. One possible 'hang' point is snapshot time out. Coordinator waits snapshot response for rpc_timeout millisec, and after that, response handler can be removed. This is addressed in CASSANDRA-6747, and fixed for 2.1.0. You can try temporarily set rpc_timeout longer and see if that solves the problem. 'nodetool repair -pr' leads to indefinitely hanging AntiEntropySession -- Key: CASSANDRA-7560 URL: https://issues.apache.org/jira/browse/CASSANDRA-7560 Project: Cassandra Issue Type: Bug Components: Core Reporter: Vladimir Avram Attachments: cassandra_daemon.log, cassandra_daemon_rep1.log, cassandra_daemon_rep2.log, nodetool_command.log Running {{nodetool repair -pr}} will sometimes hang on one of the resulting AntiEntropySessions. The system logs will show the repair command starting {noformat} INFO [Thread-3079] 2014-07-15 02:22:56,514 StorageService.java (line 2569) Starting repair command #1, repairing 256 ranges for keyspace x {noformat} You can then see a few AntiEntropySessions completing with: {noformat} INFO [AntiEntropySessions:2] 2014-07-15 02:28:12,766 RepairSession.java (line 282) [repair #eefb3c30-0bc6-11e4-83f7-a378978d0c49] session completed successfully {noformat} Finally we reach an AntiEntropySession at some point that hangs just before requesting the merkle trees for the next column family in line for repair. 
So we first see the previous CF being finished and the whole repair sessions hangs here with no visible progress or errors on this or any of the related nodes. {noformat} INFO [AntiEntropyStage:1] 2014-07-15 02:38:20,325 RepairSession.java (line 221) [repair #8f85c1b0-0bc8-11e4-83f7-a378978d0c49] previous_cf is fully synced {noformat} Notes: * Single DC 6 node cluster with an average load of 86 GB per node. * This appears to be random; it does not always happen on the same CF or on the same session. -- This message was sent by Atlassian JIRA (v6.2#6252)
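The workaround Yuki suggests above can be sketched as a cassandra.yaml change. The exact property depends on version (older releases used a single rpc_timeout_in_ms; 1.2+ splits it into per-operation settings, with request_timeout_in_ms as the catch-all); the value below is an example only, not a recommendation:

```yaml
# cassandra.yaml: temporarily raise the general request timeout so that
# snapshot responses during repair are not timed out and their response
# handlers removed (example value; the usual default is 10000 ms)
request_timeout_in_ms: 60000
```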
[jira] [Comment Edited] (CASSANDRA-7560) 'nodetool repair -pr' leads to indefinitely hanging AntiEntropySession
[ https://issues.apache.org/jira/browse/CASSANDRA-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1407#comment-1407 ] Yuki Morishita edited comment on CASSANDRA-7560 at 7/24/14 4:18 PM: From the jstack logs, it looks like repair session on coordinator node is waiting for validations (merkle trees), but none of the logs show ValidationExecutor running. By default, repair takes snapshot before validating, so it is possible that snapshotting is taking longer on replica node. One possible 'hang' point is snapshot time out. Coordinator waits snapshot response for rpc_timeout millisec, and after that, response handler can be removed. -This is addressed in CASSANDRA-6747, and fixed for 2.1.0.- edit: actually, it is not solving the problem. we need to handle timeouts described here. You can try temporarily set rpc_timeout longer and see if that solves the problem. was (Author: yukim): From the jstack logs, it looks like repair session on coordinator node is waiting for validations (merkle trees), but none of the logs show ValidationExecutor running. By default, repair takes snapshot before validating, so it is possible that snapshotting is taking longer on replica node. One possible 'hang' point is snapshot time out. Coordinator waits snapshot response for rpc_timeout millisec, and after that, response handler can be removed. This is addressed in CASSANDRA-6747, and fixed for 2.1.0. You can try temporarily set rpc_timeout longer and see if that solves the problem. 'nodetool repair -pr' leads to indefinitely hanging AntiEntropySession -- Key: CASSANDRA-7560 URL: https://issues.apache.org/jira/browse/CASSANDRA-7560 Project: Cassandra Issue Type: Bug Components: Core Reporter: Vladimir Avram Attachments: cassandra_daemon.log, cassandra_daemon_rep1.log, cassandra_daemon_rep2.log, nodetool_command.log Running {{nodetool repair -pr}} will sometimes hang on one of the resulting AntiEntropySessions. 
The system logs will show the repair command starting {noformat} INFO [Thread-3079] 2014-07-15 02:22:56,514 StorageService.java (line 2569) Starting repair command #1, repairing 256 ranges for keyspace x {noformat} You can then see a few AntiEntropySessions completing with: {noformat} INFO [AntiEntropySessions:2] 2014-07-15 02:28:12,766 RepairSession.java (line 282) [repair #eefb3c30-0bc6-11e4-83f7-a378978d0c49] session completed successfully {noformat} Finally we reach an AntiEntropySession at some point that hangs just before requesting the merkle trees for the next column family in line for repair. So we first see the previous CF being finished and the whole repair sessions hangs here with no visible progress or errors on this or any of the related nodes. {noformat} INFO [AntiEntropyStage:1] 2014-07-15 02:38:20,325 RepairSession.java (line 221) [repair #8f85c1b0-0bc8-11e4-83f7-a378978d0c49] previous_cf is fully synced {noformat} Notes: * Single DC 6 node cluster with an average load of 86 GB per node. * This appears to be random; it does not always happen on the same CF or on the same session. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-7611) incomplete CREATE/DROP USER help and tab completion
Kristine Hahn created CASSANDRA-7611: Summary: incomplete CREATE/DROP USER help and tab completion Key: CASSANDRA-7611 URL: https://issues.apache.org/jira/browse/CASSANDRA-7611 Project: Cassandra Issue Type: Bug Components: Documentation & website Reporter: Kristine Hahn Priority: Trivial IF NOT EXISTS/IF EXISTS do not appear in the online help or tab completion. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Comment Edited] (CASSANDRA-7560) 'nodetool repair -pr' leads to indefinitely hanging AntiEntropySession
[ https://issues.apache.org/jira/browse/CASSANDRA-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=1407#comment-1407 ] Yuki Morishita edited comment on CASSANDRA-7560 at 7/24/14 4:33 PM: From the jstack logs, it looks like repair session on coordinator node is waiting for validations (merkle trees), but none of the logs show ValidationExecutor running. By default, repair takes snapshot before validating, so it is possible that snapshotting is taking longer on replica node. One possible 'hang' point is snapshot time out. Coordinator waits snapshot response for rpc_timeout millisec, and after that, response handler can be removed. -This is addressed in CASSANDRA-6747, and fixed for 2.1.0.- -edit: actually, it is not solving the problem. we need to handle timeouts described here.- edit: edit: CASSANDRA-6747 handles timeout also, but the reason we put that to 2.1.0 is that we needed protocol change. It is possible that we can backport only timeout part. You can try temporarily set rpc_timeout longer and see if that solves the problem. was (Author: yukim): From the jstack logs, it looks like repair session on coordinator node is waiting for validations (merkle trees), but none of the logs show ValidationExecutor running. By default, repair takes snapshot before validating, so it is possible that snapshotting is taking longer on replica node. One possible 'hang' point is snapshot time out. Coordinator waits snapshot response for rpc_timeout millisec, and after that, response handler can be removed. -This is addressed in CASSANDRA-6747, and fixed for 2.1.0.- edit: actually, it is not solving the problem. we need to handle timeouts described here. You can try temporarily set rpc_timeout longer and see if that solves the problem. 
'nodetool repair -pr' leads to indefinitely hanging AntiEntropySession -- Key: CASSANDRA-7560 URL: https://issues.apache.org/jira/browse/CASSANDRA-7560 Project: Cassandra Issue Type: Bug Components: Core Reporter: Vladimir Avram Attachments: cassandra_daemon.log, cassandra_daemon_rep1.log, cassandra_daemon_rep2.log, nodetool_command.log Running {{nodetool repair -pr}} will sometimes hang on one of the resulting AntiEntropySessions. The system logs will show the repair command starting {noformat} INFO [Thread-3079] 2014-07-15 02:22:56,514 StorageService.java (line 2569) Starting repair command #1, repairing 256 ranges for keyspace x {noformat} You can then see a few AntiEntropySessions completing with: {noformat} INFO [AntiEntropySessions:2] 2014-07-15 02:28:12,766 RepairSession.java (line 282) [repair #eefb3c30-0bc6-11e4-83f7-a378978d0c49] session completed successfully {noformat} Finally we reach an AntiEntropySession at some point that hangs just before requesting the merkle trees for the next column family in line for repair. So we first see the previous CF being finished and the whole repair sessions hangs here with no visible progress or errors on this or any of the related nodes. {noformat} INFO [AntiEntropyStage:1] 2014-07-15 02:38:20,325 RepairSession.java (line 221) [repair #8f85c1b0-0bc8-11e4-83f7-a378978d0c49] previous_cf is fully synced {noformat} Notes: * Single DC 6 node cluster with an average load of 86 GB per node. * This appears to be random; it does not always happen on the same CF or on the same session. -- This message was sent by Atlassian JIRA (v6.2#6252)
[1/3] git commit: remove errant code block
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 90b3acadc - 7f571b57a refs/heads/trunk 7cb78e12f - ae93155c7 remove errant code block Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f571b57 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f571b57 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f571b57 Branch: refs/heads/cassandra-2.1 Commit: 7f571b57abe6e93493a1406a582d5b0ababc0118 Parents: 90b3aca Author: Brandon Williams brandonwilli...@apache.org Authored: Thu Jul 24 11:48:55 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Thu Jul 24 11:48:55 2014 -0500 -- src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java | 2 -- 1 file changed, 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f571b57/src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java -- diff --git a/src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java b/src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java index 68249f9..51d473e 100644 --- a/src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java +++ b/src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java @@ -127,8 +127,6 @@ public class CqlNativeStorage extends CqlStorage CqlConfigHelper.setInputMaxConnections(conf, nativeMaxConnections); if (nativeMinSimultReqs != null) CqlConfigHelper.setInputMinSimultReqPerConnections(conf, nativeMinSimultReqs); -if (nativeMinSimultReqs != null) -CqlConfigHelper.setInputMaxSimultReqPerConnections(conf, nativeMaxSimultReqs); if (nativeConnectionTimeout != null) CqlConfigHelper.setInputNativeConnectionTimeout(conf, nativeConnectionTimeout); if (nativeReadConnectionTimeout != null)
[3/3] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/ae93155c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/ae93155c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/ae93155c Branch: refs/heads/trunk Commit: ae93155c76aa3c04f34717859998c855727b59f3 Parents: 7cb78e1 7f571b5 Author: Brandon Williams brandonwilli...@apache.org Authored: Thu Jul 24 11:49:05 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Thu Jul 24 11:49:05 2014 -0500 -- src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java | 2 -- 1 file changed, 2 deletions(-) --
[2/3] git commit: remove errant code block
remove errant code block Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/7f571b57 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/7f571b57 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/7f571b57 Branch: refs/heads/trunk Commit: 7f571b57abe6e93493a1406a582d5b0ababc0118 Parents: 90b3aca Author: Brandon Williams brandonwilli...@apache.org Authored: Thu Jul 24 11:48:55 2014 -0500 Committer: Brandon Williams brandonwilli...@apache.org Committed: Thu Jul 24 11:48:55 2014 -0500 -- src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java | 2 -- 1 file changed, 2 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/7f571b57/src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java -- diff --git a/src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java b/src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java index 68249f9..51d473e 100644 --- a/src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java +++ b/src/java/org/apache/cassandra/hadoop/pig/CqlNativeStorage.java @@ -127,8 +127,6 @@ public class CqlNativeStorage extends CqlStorage CqlConfigHelper.setInputMaxConnections(conf, nativeMaxConnections); if (nativeMinSimultReqs != null) CqlConfigHelper.setInputMinSimultReqPerConnections(conf, nativeMinSimultReqs); -if (nativeMinSimultReqs != null) -CqlConfigHelper.setInputMaxSimultReqPerConnections(conf, nativeMaxSimultReqs); if (nativeConnectionTimeout != null) CqlConfigHelper.setInputNativeConnectionTimeout(conf, nativeConnectionTimeout); if (nativeReadConnectionTimeout != null)
[jira] [Commented] (CASSANDRA-7438) Serializing Row cache alternative (Fully off heap)
[ https://issues.apache.org/jira/browse/CASSANDRA-7438?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073380#comment-14073380 ] Jonathan Ellis commented on CASSANDRA-7438: --- Are you comfortable reviewing, [~snazy]? Serializing Row cache alternative (Fully off heap) -- Key: CASSANDRA-7438 URL: https://issues.apache.org/jira/browse/CASSANDRA-7438 Project: Cassandra Issue Type: Improvement Components: Core Environment: Linux Reporter: Vijay Assignee: Vijay Labels: performance Fix For: 3.0 Attachments: 0001-CASSANDRA-7438.patch Currently SerializingCache is partially off heap; keys are still stored in the JVM heap as ByteBuffers. * There are higher GC costs for a reasonably big cache. * Some users have used the row cache efficiently in production for better results, but this requires careful tuning. * The memory overhead of the cache entries is relatively high. So the proposal for this ticket is to move the LRU cache logic completely off heap and use JNI to interact with the cache. We might want to ensure that the new implementation matches the existing APIs (ICache), and the implementation needs to have safe memory access, low memory overhead, and fewer memcpys (as much as possible). We might also want to make this cache configurable. -- This message was sent by Atlassian JIRA (v6.2#6252)
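As an illustration of the direction proposed in CASSANDRA-7438 — keeping both keys and values off the JVM heap so the garbage collector never scans them — here is a toy sketch using direct ByteBuffers rather than JNI. The class name OffHeapLruCache is invented for this sketch; it is not the ticket's ICache implementation, and it deliberately simplifies collision handling (a hash collision simply overwrites the previous entry):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.Map;

/** Toy LRU cache: key and value bytes live in direct (off-heap) buffers,
 *  so only one small index entry per key stays on the JVM heap. */
public class OffHeapLruCache {
    private final LinkedHashMap<Integer, ByteBuffer> index;

    public OffHeapLruCache(final int capacity) {
        // accessOrder=true gives LRU iteration order; the eldest entry is evicted on overflow.
        index = new LinkedHashMap<Integer, ByteBuffer>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Integer, ByteBuffer> eldest) {
                return size() > capacity;
            }
        };
    }

    public void put(byte[] key, byte[] value) {
        // Entry layout, entirely off-heap: [keyLen][keyBytes][valueLen][valueBytes]
        ByteBuffer buf = ByteBuffer.allocateDirect(8 + key.length + value.length);
        buf.putInt(key.length).put(key).putInt(value.length).put(value);
        index.put(Arrays.hashCode(key), buf);
    }

    public byte[] get(byte[] key) {
        ByteBuffer buf = index.get(Arrays.hashCode(key));
        if (buf == null)
            return null;
        ByteBuffer r = buf.duplicate();
        r.rewind();
        byte[] storedKey = new byte[r.getInt()];
        r.get(storedKey);
        if (!Arrays.equals(storedKey, key))
            return null; // hash collision with a different key
        byte[] value = new byte[r.getInt()];
        r.get(value);
        return value;
    }

    public static void main(String[] args) {
        OffHeapLruCache cache = new OffHeapLruCache(2);
        cache.put("k1".getBytes(StandardCharsets.UTF_8), "v1".getBytes(StandardCharsets.UTF_8));
        cache.put("k2".getBytes(StandardCharsets.UTF_8), "v2".getBytes(StandardCharsets.UTF_8));
        cache.get("k1".getBytes(StandardCharsets.UTF_8));  // touch k1 so k2 becomes LRU
        cache.put("k3".getBytes(StandardCharsets.UTF_8), "v3".getBytes(StandardCharsets.UTF_8));
        System.out.println(cache.get("k2".getBytes(StandardCharsets.UTF_8)) == null); // k2 evicted
    }
}
```

A real implementation would additionally need deterministic freeing of the native memory (direct buffers lean on the GC for release, which defeats part of the purpose), safe concurrent access, and a proper off-heap hash index — which is where the proposed JNI-backed cache comes in.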
[jira] [Updated] (CASSANDRA-7578) Give read access to system.schema_usertypes to all authenticated users
[ https://issues.apache.org/jira/browse/CASSANDRA-7578?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-7578: - Fix Version/s: (was: 2.1.1) 2.1.0 Give read access to system.schema_usertypes to all authenticated users -- Key: CASSANDRA-7578 URL: https://issues.apache.org/jira/browse/CASSANDRA-7578 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Mike Adamson Assignee: Mike Adamson Priority: Minor Fix For: 2.1.0 Attachments: 7578.patch When I try to log in to cqlsh as a non-superuser I get the following error: {noformat} ErrorMessage code=2100 [Unauthorized] message=User mike has no SELECT permission on table system.schema_usertypes or any of its parents {noformat} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (CASSANDRA-7582) 2.1 multi-dc upgrade errors
[ https://issues.apache.org/jira/browse/CASSANDRA-7582?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Ryan McGuire reassigned CASSANDRA-7582: --- Assignee: Benedict (was: Russ Hatch) 2.1 multi-dc upgrade errors --- Key: CASSANDRA-7582 URL: https://issues.apache.org/jira/browse/CASSANDRA-7582 Project: Cassandra Issue Type: Bug Components: Core Reporter: Ryan McGuire Assignee: Benedict Priority: Critical Fix For: 2.1.0 Multi-dc upgrade [was working from 2.0 - 2.1 fairly recently|http://cassci.datastax.com/job/cassandra_upgrade_dtest/55/testReport/upgrade_through_versions_test/TestUpgrade_from_cassandra_2_0_latest_tag_to_cassandra_2_1_HEAD/], but is currently failing. Running upgrade_through_versions_test.py:TestUpgrade_from_cassandra_2_0_HEAD_to_cassandra_2_1_HEAD.bootstrap_multidc_test I get the following errors when starting 2.1 upgraded from 2.0: {code} ERROR [main] 2014-07-21 23:54:20,862 CommitLog.java:143 - Commit log replay failed due to replaying a mutation for a missing table. 
This error can be ignored by providing -Dcassandra.commitlog.stop_on_missing_tables=false on the command line ERROR [main] 2014-07-21 23:54:20,869 CassandraDaemon.java:474 - Exception encountered during startup java.lang.RuntimeException: org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find cfId=a1b676f3-0c5d-3276-bfd5-07cf43397004 at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:300) [main/:na] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457) [main/:na] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) [main/:na] Caused by: org.apache.cassandra.db.UnknownColumnFamilyException: Couldn't find cfId=a1b676f3-0c5d-3276-bfd5-07cf43397004 at org.apache.cassandra.db.ColumnFamilySerializer.deserializeCfId(ColumnFamilySerializer.java:164) ~[main/:na] at org.apache.cassandra.db.ColumnFamilySerializer.deserialize(ColumnFamilySerializer.java:97) ~[main/:na] at org.apache.cassandra.db.Mutation$MutationSerializer.deserializeOneCf(Mutation.java:353) ~[main/:na] at org.apache.cassandra.db.Mutation$MutationSerializer.deserialize(Mutation.java:333) ~[main/:na] at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:365) ~[main/:na] at org.apache.cassandra.db.commitlog.CommitLogReplayer.recover(CommitLogReplayer.java:98) ~[main/:na] at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:137) ~[main/:na] at org.apache.cassandra.db.commitlog.CommitLog.recover(CommitLog.java:115) ~[main/:na] {code} -- This message was sent by Atlassian JIRA (v6.2#6252)
[3/3] git commit: Fix native protocol drop user type notification
Fix native protocol drop user type notification patch by Aleksey Yeschenko; reviewed by Tyler Hobbs for CASSANDRA-7571 Conflicts: CHANGES.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33719e75 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33719e75 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33719e75 Branch: refs/heads/cassandra-2.1.0 Commit: 33719e759aa600149b4b1286c63e4f895a920b4e Parents: 4f6e108 Author: Aleksey Yeschenko alek...@apache.org Authored: Sat Jul 19 00:19:34 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:36:50 2014 +0300 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/DefsTables.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/33719e75/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 7117a77..3776319 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.0-final + * Fix native protocol drop user type notification (CASSANDRA-7571) * Give read access to system.schema_usertypes to all authenticated users (CASSANDRA-7578) * Fix cqlsh display when zero rows are returned (CASSANDRA-7580) http://git-wip-us.apache.org/repos/asf/cassandra/blob/33719e75/src/java/org/apache/cassandra/db/DefsTables.java -- diff --git a/src/java/org/apache/cassandra/db/DefsTables.java b/src/java/org/apache/cassandra/db/DefsTables.java index 98ced8d..96327c4 100644 --- a/src/java/org/apache/cassandra/db/DefsTables.java +++ b/src/java/org/apache/cassandra/db/DefsTables.java @@ -537,7 +537,7 @@ public class DefsTables ksm.userTypes.removeType(ut); if (!StorageService.instance.isClientMode()) -MigrationManager.instance.notifyUpdateUserType(ut); +MigrationManager.instance.notifyDropUserType(ut); } private static KSMetaData makeNewKeyspaceDefinition(KSMetaData ksm, CFMetaData toExclude)
[1/3] git commit: Give read access to system.schema_usertypes to all authenticated users
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1.0 c7b7a2493 - 33719e759 Give read access to system.schema_usertypes to all authenticated users patch by Mike Adamson; reviewed by Aleksey Yeschenko for CASSANDRA-7578 Conflicts: CHANGES.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/465d520c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/465d520c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/465d520c Branch: refs/heads/cassandra-2.1.0 Commit: 465d520cc2053abbc08c5b272c820de35c9392fa Parents: c7b7a24 Author: Mike Adamson madam...@datastax.com Authored: Mon Jul 21 14:36:01 2014 +0100 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:30:07 2014 +0300 -- CHANGES.txt| 3 +++ src/java/org/apache/cassandra/service/ClientState.java | 3 ++- 2 files changed, 5 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/465d520c/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index caef095..78b0186 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,7 +1,10 @@ 2.1.0-final + * Give read access to system.schema_usertypes to all authenticated users + (CASSANDRA-7578) * Fix cqlsh display when zero rows are returned (CASSANDRA-7580) * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572) + 2.1.0-rc4 * Fix word count hadoop example (CASSANDRA-7200) * Updated memtable_cleanup_threshold and memtable_flush_writers defaults http://git-wip-us.apache.org/repos/asf/cassandra/blob/465d520c/src/java/org/apache/cassandra/service/ClientState.java -- diff --git a/src/java/org/apache/cassandra/service/ClientState.java b/src/java/org/apache/cassandra/service/ClientState.java index be3b895..c0396cb 100644 --- a/src/java/org/apache/cassandra/service/ClientState.java +++ b/src/java/org/apache/cassandra/service/ClientState.java @@ -67,7 +67,8 @@ public class ClientState SystemKeyspace.PEERS_CF, 
SystemKeyspace.SCHEMA_KEYSPACES_CF, SystemKeyspace.SCHEMA_COLUMNFAMILIES_CF, - SystemKeyspace.SCHEMA_COLUMNS_CF }; + SystemKeyspace.SCHEMA_COLUMNS_CF, + SystemKeyspace.SCHEMA_USER_TYPES_CF}; for (String cf : cfs) READABLE_SYSTEM_RESOURCES.add(DataResource.columnFamily(Keyspace.SYSTEM_KS, cf));
[2/3] git commit: Fix ReversedType(DateType) mapping to native protocol
Fix ReversedType(DateType) mapping to native protocol patch by Karl Rieb; reviewed by Aleksey Yeschenko for CASSANDRA-7576 Conflicts: CHANGES.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4f6e108e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4f6e108e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4f6e108e Branch: refs/heads/cassandra-2.1.0 Commit: 4f6e108e146f5226a6a20ad83fb5c46c7d18d046 Parents: 465d520 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jul 22 01:42:24 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:33:49 2014 +0300 -- CHANGES.txt | 2 ++ src/java/org/apache/cassandra/transport/DataType.java | 3 ++- 2 files changed, 4 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4f6e108e/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 78b0186..7117a77 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -3,6 +3,8 @@ (CASSANDRA-7578) * Fix cqlsh display when zero rows are returned (CASSANDRA-7580) * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572) +Merged from 2.0: + * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576) 2.1.0-rc4 http://git-wip-us.apache.org/repos/asf/cassandra/blob/4f6e108e/src/java/org/apache/cassandra/transport/DataType.java -- diff --git a/src/java/org/apache/cassandra/transport/DataType.java b/src/java/org/apache/cassandra/transport/DataType.java index 0710645..a45d7ce 100644 --- a/src/java/org/apache/cassandra/transport/DataType.java +++ b/src/java/org/apache/cassandra/transport/DataType.java @@ -202,8 +202,9 @@ public enum DataType implements OptionCodec.CodecableDataType // shouldn't have to care about it. 
if (type instanceof ReversedType) type = ((ReversedType)type).baseType; + // For compatibility sake, we still return DateType as the timestamp type in resultSet metadata (#5723) -else if (type instanceof DateType) +if (type instanceof DateType) type = TimestampType.instance; DataType dt = dataTypeMap.get(type);
[jira] [Updated] (CASSANDRA-7576) DateType columns not properly converted to TimestampType when in ReversedType columns.
[ https://issues.apache.org/jira/browse/CASSANDRA-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-7576: - Fix Version/s: (was: 2.1.1) 2.1.0 DateType columns not properly converted to TimestampType when in ReversedType columns. -- Key: CASSANDRA-7576 URL: https://issues.apache.org/jira/browse/CASSANDRA-7576 Project: Cassandra Issue Type: Bug Components: Core Reporter: Karl Rieb Assignee: Karl Rieb Fix For: 2.0.10, 2.1.0 Attachments: DataType_CASSANDRA_7576.patch Original Estimate: 0.25h Remaining Estimate: 0.25h The {{org.apache.cassandra.transport.DataType.fromType(AbstractType)}} method has a bug that prevents sending the correct Protocol ID for reversed {{DateType}} columns. This results in clients receiving Protocol ID {{0}}, which maps to a {{CUSTOM}} type, for timestamp columns that are clustered in reverse order. Some clients can handle this properly since they recognize the {{org.apache.cassandra.db.marshal.DateType}} marshaling type, however the native Datastax java-driver does not. It will produce errors like the one below when trying to prepare queries against such tables: {noformat} com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for value 2 of CQL type 'org.apache.cassandra.db.marshal.DateType', expecting class java.nio.ByteBuffer but class java.util.Date provided at com.datastax.driver.core.BoundStatement.bind(BoundStatement.java:190) at com.datastax.driver.core.DefaultPreparedStatement.bind(DefaultPreparedStatement.java:103) {noformat} On the Cassandra side, there is a check for {{DateType}} columns that is supposed to convert these columns to TimestampType. However, the check is skipped when the column is also reversed. Specifically: {code:title=DataType.java|borderStyle=solid} public static Pair<DataType, Object> fromType(AbstractType type) { // For CQL3 clients, ReversedType is an implementation detail and they // shouldn't have to care about it. 
if (type instanceof ReversedType) type = ((ReversedType)type).baseType; // For compatibility sake, we still return DateType as the timestamp type in resultSet metadata (#5723) else if (type instanceof DateType) type = TimestampType.instance; // ... {code} The *else if* should be changed to just an *if*, like so: {code:title=DataType.java|borderStyle=solid} public static Pair<DataType, Object> fromType(AbstractType type) { // For CQL3 clients, ReversedType is an implementation detail and they // shouldn't have to care about it. if (type instanceof ReversedType) type = ((ReversedType)type).baseType; // For compatibility sake, we still return DateType as the timestamp type in resultSet metadata (#5723) if (type instanceof DateType) type = TimestampType.instance; // ... {code} This bug is preventing us from upgrading our 1.2.11 cluster to 2.0.9 because our clients keep throwing exceptions trying to read or write data to tables with reversed timestamp columns. This issue can be reproduced by creating a CQL table in Cassandra 1.2.11 that clusters on a timestamp in reverse, then upgrading the node to 2.0.9. When querying the metadata for the table, the node will return Protocol ID 0 (CUSTOM) instead of Protocol ID 11 (TIMESTAMP). -- This message was sent by Atlassian JIRA (v6.2#6252)
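The control-flow mistake in CASSANDRA-7576 is easy to demonstrate outside Cassandra with local stub types (the class names below mirror the marshal types, but they are stubs defined inside the sketch, not the real org.apache.cassandra.db.marshal classes):

```java
public class ElseIfBugDemo {
    // Local stubs standing in for org.apache.cassandra.db.marshal types.
    static class AbstractType {}
    static class DateType extends AbstractType {}
    static class TimestampType extends AbstractType {
        static final TimestampType instance = new TimestampType();
    }
    static class ReversedType extends AbstractType {
        final AbstractType baseType;
        ReversedType(AbstractType baseType) { this.baseType = baseType; }
    }

    // Buggy version: 'else if' skips the DateType check whenever a ReversedType was just unwrapped.
    static AbstractType unwrapBuggy(AbstractType type) {
        if (type instanceof ReversedType)
            type = ((ReversedType) type).baseType;
        else if (type instanceof DateType)
            type = TimestampType.instance;
        return type;
    }

    // Fixed version: unconditional second check, as in the CASSANDRA-7576 patch.
    static AbstractType unwrapFixed(AbstractType type) {
        if (type instanceof ReversedType)
            type = ((ReversedType) type).baseType;
        if (type instanceof DateType)
            type = TimestampType.instance;
        return type;
    }

    public static void main(String[] args) {
        AbstractType reversedDate = new ReversedType(new DateType());
        System.out.println(unwrapBuggy(reversedDate) instanceof DateType);      // true: left as DateType
        System.out.println(unwrapFixed(reversedDate) instanceof TimestampType); // true: converted
    }
}
```

With the buggy unwrap, a ReversedType(DateType) column falls out of the method still typed as DateType, which the protocol layer can only encode as CUSTOM (ID 0); the unconditional second `if` restores the TIMESTAMP (ID 11) mapping.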
[jira] [Updated] (CASSANDRA-7571) DefsTables#dropType() calls MM#notifyUpdateUserType() instead of MM#notifyDropUserType().
[ https://issues.apache.org/jira/browse/CASSANDRA-7571?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Aleksey Yeschenko updated CASSANDRA-7571: - Fix Version/s: (was: 2.1.1) 2.1.0 DefsTables#dropType() calls MM#notifyUpdateUserType() instead of MM#notifyDropUserType(). - Key: CASSANDRA-7571 URL: https://issues.apache.org/jira/browse/CASSANDRA-7571 Project: Cassandra Issue Type: Bug Reporter: Aleksey Yeschenko Assignee: Aleksey Yeschenko Priority: Trivial Fix For: 2.1.0 Attachments: 7571.txt Subject. This shouldn't affect the drivers much, so trivial/2.1.1, but should be fixed. -- This message was sent by Atlassian JIRA (v6.2#6252)
[4/4] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51dfe782 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51dfe782 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51dfe782 Branch: refs/heads/cassandra-2.1 Commit: 51dfe78218f33a137eb858b43e9a8462d650c42f Parents: 7f571b5 33719e7 Author: Aleksey Yeschenko alek...@apache.org Authored: Thu Jul 24 20:38:50 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:39:58 2014 +0300 -- CHANGES.txt | 9 ++--- 1 file changed, 6 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/51dfe782/CHANGES.txt -- diff --cc CHANGES.txt index adb4142,3776319..d9f82f2 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,14 -1,4 +1,17 @@@ +2.1.1 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454) + * Add listen_interface and rpc_interface options (CASSANDRA-7417) + * Fail to start if commit log replay detects a problem (CASSANDRA-7125) + * Improve schema merge performance (CASSANDRA-7444) + * Adjust MT depth based on # of partition validating (CASSANDRA-5263) + * Optimise NativeCell comparisons (CASSANDRA-6755) + * Configurable client timeout for cqlsh (CASSANDRA-7516) + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111) +Merged from 2.0: + * Catch errors when the JVM pulls the rug out from GCInspector (CASSANDRA-5345) ++ ++ + 2.1.0-final * Fix native protocol drop user type notification (CASSANDRA-7571) * Give read access to system.schema_usertypes to all authenticated users (CASSANDRA-7578)
[1/4] git commit: Give read access to system.schema_usertypes to all authenticated users
Repository: cassandra Updated Branches: refs/heads/cassandra-2.1 7f571b57a - 51dfe7821 Give read access to system.schema_usertypes to all authenticated users patch by Mike Adamson; reviewed by Aleksey Yeschenko for CASSANDRA-7578 Conflicts: CHANGES.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/465d520c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/465d520c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/465d520c Branch: refs/heads/cassandra-2.1 Commit: 465d520cc2053abbc08c5b272c820de35c9392fa Parents: c7b7a24 Author: Mike Adamson madam...@datastax.com Authored: Mon Jul 21 14:36:01 2014 +0100 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:30:07 2014 +0300 -- CHANGES.txt| 3 +++ src/java/org/apache/cassandra/service/ClientState.java | 3 ++- 2 files changed, 5 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/465d520c/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index caef095..78b0186 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,7 +1,10 @@ 2.1.0-final + * Give read access to system.schema_usertypes to all authenticated users + (CASSANDRA-7578) * Fix cqlsh display when zero rows are returned (CASSANDRA-7580) * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572) + 2.1.0-rc4 * Fix word count hadoop example (CASSANDRA-7200) * Updated memtable_cleanup_threshold and memtable_flush_writers defaults http://git-wip-us.apache.org/repos/asf/cassandra/blob/465d520c/src/java/org/apache/cassandra/service/ClientState.java -- diff --git a/src/java/org/apache/cassandra/service/ClientState.java b/src/java/org/apache/cassandra/service/ClientState.java index be3b895..c0396cb 100644 --- a/src/java/org/apache/cassandra/service/ClientState.java +++ b/src/java/org/apache/cassandra/service/ClientState.java @@ -67,7 +67,8 @@ public class ClientState SystemKeyspace.PEERS_CF, 
SystemKeyspace.SCHEMA_KEYSPACES_CF, SystemKeyspace.SCHEMA_COLUMNFAMILIES_CF, - SystemKeyspace.SCHEMA_COLUMNS_CF }; + SystemKeyspace.SCHEMA_COLUMNS_CF, + SystemKeyspace.SCHEMA_USER_TYPES_CF}; for (String cf : cfs) READABLE_SYSTEM_RESOURCES.add(DataResource.columnFamily(Keyspace.SYSTEM_KS, cf));
[2/4] git commit: Fix ReversedType(DateType) mapping to native protocol
Fix ReversedType(DateType) mapping to native protocol patch by Karl Rieb; reviewed by Aleksey Yeschenko for CASSANDRA-7576 Conflicts: CHANGES.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4f6e108e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4f6e108e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4f6e108e Branch: refs/heads/cassandra-2.1 Commit: 4f6e108e146f5226a6a20ad83fb5c46c7d18d046 Parents: 465d520 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jul 22 01:42:24 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:33:49 2014 +0300 -- CHANGES.txt | 2 ++ src/java/org/apache/cassandra/transport/DataType.java | 3 ++- 2 files changed, 4 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4f6e108e/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 78b0186..7117a77 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -3,6 +3,8 @@ (CASSANDRA-7578) * Fix cqlsh display when zero rows are returned (CASSANDRA-7580) * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572) +Merged from 2.0: + * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576) 2.1.0-rc4 http://git-wip-us.apache.org/repos/asf/cassandra/blob/4f6e108e/src/java/org/apache/cassandra/transport/DataType.java -- diff --git a/src/java/org/apache/cassandra/transport/DataType.java b/src/java/org/apache/cassandra/transport/DataType.java index 0710645..a45d7ce 100644 --- a/src/java/org/apache/cassandra/transport/DataType.java +++ b/src/java/org/apache/cassandra/transport/DataType.java @@ -202,8 +202,9 @@ public enum DataType implements OptionCodec.CodecableDataType // shouldn't have to care about it. 
if (type instanceof ReversedType) type = ((ReversedType)type).baseType; + // For compatibility sake, we still return DateType as the timestamp type in resultSet metadata (#5723) -else if (type instanceof DateType) +if (type instanceof DateType) type = TimestampType.instance; DataType dt = dataTypeMap.get(type);
[2/5] git commit: Fix ReversedType(DateType) mapping to native protocol
Fix ReversedType(DateType) mapping to native protocol patch by Karl Rieb; reviewed by Aleksey Yeschenko for CASSANDRA-7576 Conflicts: CHANGES.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/4f6e108e Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/4f6e108e Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/4f6e108e Branch: refs/heads/trunk Commit: 4f6e108e146f5226a6a20ad83fb5c46c7d18d046 Parents: 465d520 Author: Aleksey Yeschenko alek...@apache.org Authored: Tue Jul 22 01:42:24 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:33:49 2014 +0300 -- CHANGES.txt | 2 ++ src/java/org/apache/cassandra/transport/DataType.java | 3 ++- 2 files changed, 4 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/4f6e108e/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 78b0186..7117a77 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -3,6 +3,8 @@ (CASSANDRA-7578) * Fix cqlsh display when zero rows are returned (CASSANDRA-7580) * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572) +Merged from 2.0: + * Fix ReversedType(DateType) mapping to native protocol (CASSANDRA-7576) 2.1.0-rc4 http://git-wip-us.apache.org/repos/asf/cassandra/blob/4f6e108e/src/java/org/apache/cassandra/transport/DataType.java -- diff --git a/src/java/org/apache/cassandra/transport/DataType.java b/src/java/org/apache/cassandra/transport/DataType.java index 0710645..a45d7ce 100644 --- a/src/java/org/apache/cassandra/transport/DataType.java +++ b/src/java/org/apache/cassandra/transport/DataType.java @@ -202,8 +202,9 @@ public enum DataType implements OptionCodec.CodecableDataType // shouldn't have to care about it. 
if (type instanceof ReversedType) type = ((ReversedType)type).baseType; + // For compatibility sake, we still return DateType as the timestamp type in resultSet metadata (#5723) -else if (type instanceof DateType) +if (type instanceof DateType) type = TimestampType.instance; DataType dt = dataTypeMap.get(type);
[3/4] git commit: Fix native protocol drop user type notification
Fix native protocol drop user type notification patch by Aleksey Yeschenko; reviewed by Tyler Hobbs for CASSANDRA-7571 Conflicts: CHANGES.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33719e75 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33719e75 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33719e75 Branch: refs/heads/cassandra-2.1 Commit: 33719e759aa600149b4b1286c63e4f895a920b4e Parents: 4f6e108 Author: Aleksey Yeschenko alek...@apache.org Authored: Sat Jul 19 00:19:34 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:36:50 2014 +0300 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/DefsTables.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/33719e75/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 7117a77..3776319 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.0-final + * Fix native protocol drop user type notification (CASSANDRA-7571) * Give read access to system.schema_usertypes to all authenticated users (CASSANDRA-7578) * Fix cqlsh display when zero rows are returned (CASSANDRA-7580) http://git-wip-us.apache.org/repos/asf/cassandra/blob/33719e75/src/java/org/apache/cassandra/db/DefsTables.java -- diff --git a/src/java/org/apache/cassandra/db/DefsTables.java b/src/java/org/apache/cassandra/db/DefsTables.java index 98ced8d..96327c4 100644 --- a/src/java/org/apache/cassandra/db/DefsTables.java +++ b/src/java/org/apache/cassandra/db/DefsTables.java @@ -537,7 +537,7 @@ public class DefsTables ksm.userTypes.removeType(ut); if (!StorageService.instance.isClientMode()) -MigrationManager.instance.notifyUpdateUserType(ut); +MigrationManager.instance.notifyDropUserType(ut); } private static KSMetaData makeNewKeyspaceDefinition(KSMetaData ksm, CFMetaData toExclude)
[jira] [Commented] (CASSANDRA-7576) DateType columns not properly converted to TimestampType when in ReversedType columns.
[ https://issues.apache.org/jira/browse/CASSANDRA-7576?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073423#comment-14073423 ] Aleksey Yeschenko commented on CASSANDRA-7576: -- [~krieb] I know it wasn't a big deal to you, but anyway - when cherry-picking the patch back to 2.1.0, I did correct the name to yours in 'patch by' (: DateType columns not properly converted to TimestampType when in ReversedType columns. -- Key: CASSANDRA-7576 URL: https://issues.apache.org/jira/browse/CASSANDRA-7576 Project: Cassandra Issue Type: Bug Components: Core Reporter: Karl Rieb Assignee: Karl Rieb Fix For: 2.0.10, 2.1.0 Attachments: DataType_CASSANDRA_7576.patch Original Estimate: 0.25h Remaining Estimate: 0.25h The {{org.apache.cassandra.transport.DataType.fromType(AbstractType)}} method has a bug that prevents sending the correct Protocol ID for reversed {{DateType}} columns. This results in clients receiving Protocol ID {{0}}, which maps to a {{CUSTOM}} type, for timestamp columns that are clustered in reverse order. Some clients can handle this properly since they recognize the {{org.apache.cassandra.db.marshal.DateType}} marshaling type, however the native Datastax java-driver does not. It will produce errors like the one below when trying to prepare queries against such tables: {noformat} com.datastax.driver.core.exceptions.InvalidTypeException: Invalid type for value 2 of CQL type 'org.apache.cassandra.db.marshal.DateType', expecting class java.nio.ByteBuffer but class java.util.Date provided at com.datastax.driver.core.BoundStatement.bind(BoundStatement.java:190) at com.datastax.driver.core.DefaultPreparedStatement.bind(DefaultPreparedStatement.java:103) {noformat} On the Cassandra side, there is a check for {{DateType}} columns that is supposed to convert these columns to TimestampType. However, the check is skipped when the column is also reversed. 
Specifically: {code:title=DataType.java|borderStyle=solid}
public static Pair<DataType, Object> fromType(AbstractType type)
{
    // For CQL3 clients, ReversedType is an implementation detail and they
    // shouldn't have to care about it.
    if (type instanceof ReversedType)
        type = ((ReversedType)type).baseType;
    // For compatibility sake, we still return DateType as the timestamp type in resultSet metadata (#5723)
    else if (type instanceof DateType)
        type = TimestampType.instance;
    // ...
{code} The *else if* should be changed to just an *if*, like so: {code:title=DataType.java|borderStyle=solid}
public static Pair<DataType, Object> fromType(AbstractType type)
{
    // For CQL3 clients, ReversedType is an implementation detail and they
    // shouldn't have to care about it.
    if (type instanceof ReversedType)
        type = ((ReversedType)type).baseType;
    // For compatibility sake, we still return DateType as the timestamp type in resultSet metadata (#5723)
    if (type instanceof DateType)
        type = TimestampType.instance;
    // ...
{code} This bug is preventing us from upgrading our 1.2.11 cluster to 2.0.9 because our clients keep throwing exceptions trying to read or write data to tables with reversed timestamp columns. This issue can be reproduced by creating a CQL table in Cassandra 1.2.11 that clusters on a timestamp in reverse, then upgrading the node to 2.0.9. When querying the metadata for the table, the node will return Protocol ID 0 (CUSTOM) instead of Protocol ID 11 (TIMESTAMP). -- This message was sent by Atlassian JIRA (v6.2#6252)
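The control-flow issue in the ticket can be illustrated outside of Cassandra with a minimal, self-contained sketch. The string markers below are hypothetical stand-ins for the real `AbstractType` hierarchy; the point is only that an `else if` chained after the ReversedType-unwrapping branch can never fire for a reversed DateType column, while an independent `if` can:

```java
// Minimal sketch (hypothetical stand-in types, not Cassandra classes)
// showing why `else if` skips the DateType -> TimestampType conversion
// once the ReversedType branch has already fired.
public class ElseIfDemo {
    static final String REVERSED_DATE = "ReversedType(DateType)";
    static final String DATE = "DateType";
    static final String TIMESTAMP = "TimestampType";

    // Buggy shape: the DateType branch is chained with `else if`,
    // so it is never evaluated after unwrapping a reversed column.
    static String resolveBuggy(String type) {
        if (type.equals(REVERSED_DATE))
            type = DATE;          // unwrap ReversedType
        else if (type.equals(DATE))
            type = TIMESTAMP;     // unreachable for reversed columns
        return type;
    }

    // Fixed shape: an independent `if` converts the unwrapped base type.
    static String resolveFixed(String type) {
        if (type.equals(REVERSED_DATE))
            type = DATE;
        if (type.equals(DATE))
            type = TIMESTAMP;
        return type;
    }

    public static void main(String[] args) {
        System.out.println(resolveBuggy(REVERSED_DATE)); // DateType (wrong)
        System.out.println(resolveFixed(REVERSED_DATE)); // TimestampType
    }
}
```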
[jira] [Updated] (CASSANDRA-7560) 'nodetool repair -pr' leads to indefinitely hanging AntiEntropySession
[ https://issues.apache.org/jira/browse/CASSANDRA-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita updated CASSANDRA-7560: -- Attachment: 0001-backport-CASSANDRA-6747.patch Attaching CASSANDRA-6747 backport. It turns out that, since the logic uses a custom message property and does not bump the messaging version, we are able to backport the whole feature to the 2.0 branch. 'nodetool repair -pr' leads to indefinitely hanging AntiEntropySession -- Key: CASSANDRA-7560 URL: https://issues.apache.org/jira/browse/CASSANDRA-7560 Project: Cassandra Issue Type: Bug Components: Core Reporter: Vladimir Avram Fix For: 2.0.10 Attachments: 0001-backport-CASSANDRA-6747.patch, cassandra_daemon.log, cassandra_daemon_rep1.log, cassandra_daemon_rep2.log, nodetool_command.log Running {{nodetool repair -pr}} will sometimes hang on one of the resulting AntiEntropySessions. The system logs will show the repair command starting {noformat} INFO [Thread-3079] 2014-07-15 02:22:56,514 StorageService.java (line 2569) Starting repair command #1, repairing 256 ranges for keyspace x {noformat} You can then see a few AntiEntropySessions completing with: {noformat} INFO [AntiEntropySessions:2] 2014-07-15 02:28:12,766 RepairSession.java (line 282) [repair #eefb3c30-0bc6-11e4-83f7-a378978d0c49] session completed successfully {noformat} Finally we reach an AntiEntropySession at some point that hangs just before requesting the merkle trees for the next column family in line for repair. So we first see the previous CF being finished, and then the whole repair session hangs here with no visible progress or errors on this or any of the related nodes. {noformat} INFO [AntiEntropyStage:1] 2014-07-15 02:38:20,325 RepairSession.java (line 221) [repair #8f85c1b0-0bc8-11e4-83f7-a378978d0c49] previous_cf is fully synced {noformat} Notes: * Single DC 6 node cluster with an average load of 86 GB per node. * This appears to be random; it does not always happen on the same CF or on the same session.
-- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Assigned] (CASSANDRA-7560) 'nodetool repair -pr' leads to indefinitely hanging AntiEntropySession
[ https://issues.apache.org/jira/browse/CASSANDRA-7560?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Yuki Morishita reassigned CASSANDRA-7560: - Assignee: Yuki Morishita 'nodetool repair -pr' leads to indefinitely hanging AntiEntropySession -- Key: CASSANDRA-7560 URL: https://issues.apache.org/jira/browse/CASSANDRA-7560 Project: Cassandra Issue Type: Bug Components: Core Reporter: Vladimir Avram Assignee: Yuki Morishita Fix For: 2.0.10 Attachments: 0001-backport-CASSANDRA-6747.patch, cassandra_daemon.log, cassandra_daemon_rep1.log, cassandra_daemon_rep2.log, nodetool_command.log Running {{nodetool repair -pr}} will sometimes hang on one of the resulting AntiEntropySessions. The system logs will show the repair command starting {noformat} INFO [Thread-3079] 2014-07-15 02:22:56,514 StorageService.java (line 2569) Starting repair command #1, repairing 256 ranges for keyspace x {noformat} You can then see a few AntiEntropySessions completing with: {noformat} INFO [AntiEntropySessions:2] 2014-07-15 02:28:12,766 RepairSession.java (line 282) [repair #eefb3c30-0bc6-11e4-83f7-a378978d0c49] session completed successfully {noformat} Finally we reach an AntiEntropySession at some point that hangs just before requesting the merkle trees for the next column family in line for repair. So we first see the previous CF being finished, and then the whole repair session hangs here with no visible progress or errors on this or any of the related nodes. {noformat} INFO [AntiEntropyStage:1] 2014-07-15 02:38:20,325 RepairSession.java (line 221) [repair #8f85c1b0-0bc8-11e4-83f7-a378978d0c49] previous_cf is fully synced {noformat} Notes: * Single DC 6 node cluster with an average load of 86 GB per node. * This appears to be random; it does not always happen on the same CF or on the same session. -- This message was sent by Atlassian JIRA (v6.2#6252)
[5/5] git commit: Merge branch 'cassandra-2.1' into trunk
Merge branch 'cassandra-2.1' into trunk Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/16a3a378 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/16a3a378 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/16a3a378 Branch: refs/heads/trunk Commit: 16a3a37874198884611de48b372672c91c154ccb Parents: ae93155 51dfe78 Author: Aleksey Yeschenko alek...@apache.org Authored: Thu Jul 24 20:40:45 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:40:45 2014 +0300 -- CHANGES.txt | 9 ++--- 1 file changed, 6 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/16a3a378/CHANGES.txt --
[1/5] git commit: Give read access to system.schema_usertypes to all authenticated users
Repository: cassandra Updated Branches: refs/heads/trunk ae93155c7 - 16a3a3787 Give read access to system.schema_usertypes to all authenticated users patch by Mike Adamson; reviewed by Aleksey Yeschenko for CASSANDRA-7578 Conflicts: CHANGES.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/465d520c Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/465d520c Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/465d520c Branch: refs/heads/trunk Commit: 465d520cc2053abbc08c5b272c820de35c9392fa Parents: c7b7a24 Author: Mike Adamson madam...@datastax.com Authored: Mon Jul 21 14:36:01 2014 +0100 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:30:07 2014 +0300 -- CHANGES.txt| 3 +++ src/java/org/apache/cassandra/service/ClientState.java | 3 ++- 2 files changed, 5 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/465d520c/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index caef095..78b0186 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,7 +1,10 @@ 2.1.0-final + * Give read access to system.schema_usertypes to all authenticated users + (CASSANDRA-7578) * Fix cqlsh display when zero rows are returned (CASSANDRA-7580) * Get java version correctly when JAVA_TOOL_OPTIONS is set (CASSANDRA-7572) + 2.1.0-rc4 * Fix word count hadoop example (CASSANDRA-7200) * Updated memtable_cleanup_threshold and memtable_flush_writers defaults http://git-wip-us.apache.org/repos/asf/cassandra/blob/465d520c/src/java/org/apache/cassandra/service/ClientState.java -- diff --git a/src/java/org/apache/cassandra/service/ClientState.java b/src/java/org/apache/cassandra/service/ClientState.java index be3b895..c0396cb 100644 --- a/src/java/org/apache/cassandra/service/ClientState.java +++ b/src/java/org/apache/cassandra/service/ClientState.java @@ -67,7 +67,8 @@ public class ClientState SystemKeyspace.PEERS_CF, 
SystemKeyspace.SCHEMA_KEYSPACES_CF, SystemKeyspace.SCHEMA_COLUMNFAMILIES_CF, - SystemKeyspace.SCHEMA_COLUMNS_CF }; + SystemKeyspace.SCHEMA_COLUMNS_CF, + SystemKeyspace.SCHEMA_USER_TYPES_CF}; for (String cf : cfs) READABLE_SYSTEM_RESOURCES.add(DataResource.columnFamily(Keyspace.SYSTEM_KS, cf));
[3/5] git commit: Fix native protocol drop user type notification
Fix native protocol drop user type notification patch by Aleksey Yeschenko; reviewed by Tyler Hobbs for CASSANDRA-7571 Conflicts: CHANGES.txt Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/33719e75 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/33719e75 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/33719e75 Branch: refs/heads/trunk Commit: 33719e759aa600149b4b1286c63e4f895a920b4e Parents: 4f6e108 Author: Aleksey Yeschenko alek...@apache.org Authored: Sat Jul 19 00:19:34 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:36:50 2014 +0300 -- CHANGES.txt | 1 + src/java/org/apache/cassandra/db/DefsTables.java | 2 +- 2 files changed, 2 insertions(+), 1 deletion(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/33719e75/CHANGES.txt -- diff --git a/CHANGES.txt b/CHANGES.txt index 7117a77..3776319 100644 --- a/CHANGES.txt +++ b/CHANGES.txt @@ -1,4 +1,5 @@ 2.1.0-final + * Fix native protocol drop user type notification (CASSANDRA-7571) * Give read access to system.schema_usertypes to all authenticated users (CASSANDRA-7578) * Fix cqlsh display when zero rows are returned (CASSANDRA-7580) http://git-wip-us.apache.org/repos/asf/cassandra/blob/33719e75/src/java/org/apache/cassandra/db/DefsTables.java -- diff --git a/src/java/org/apache/cassandra/db/DefsTables.java b/src/java/org/apache/cassandra/db/DefsTables.java index 98ced8d..96327c4 100644 --- a/src/java/org/apache/cassandra/db/DefsTables.java +++ b/src/java/org/apache/cassandra/db/DefsTables.java @@ -537,7 +537,7 @@ public class DefsTables ksm.userTypes.removeType(ut); if (!StorageService.instance.isClientMode()) -MigrationManager.instance.notifyUpdateUserType(ut); +MigrationManager.instance.notifyDropUserType(ut); } private static KSMetaData makeNewKeyspaceDefinition(KSMetaData ksm, CFMetaData toExclude)
[4/5] git commit: Merge branch 'cassandra-2.1.0' into cassandra-2.1
Merge branch 'cassandra-2.1.0' into cassandra-2.1 Project: http://git-wip-us.apache.org/repos/asf/cassandra/repo Commit: http://git-wip-us.apache.org/repos/asf/cassandra/commit/51dfe782 Tree: http://git-wip-us.apache.org/repos/asf/cassandra/tree/51dfe782 Diff: http://git-wip-us.apache.org/repos/asf/cassandra/diff/51dfe782 Branch: refs/heads/trunk Commit: 51dfe78218f33a137eb858b43e9a8462d650c42f Parents: 7f571b5 33719e7 Author: Aleksey Yeschenko alek...@apache.org Authored: Thu Jul 24 20:38:50 2014 +0300 Committer: Aleksey Yeschenko alek...@apache.org Committed: Thu Jul 24 20:39:58 2014 +0300 -- CHANGES.txt | 9 ++--- 1 file changed, 6 insertions(+), 3 deletions(-) -- http://git-wip-us.apache.org/repos/asf/cassandra/blob/51dfe782/CHANGES.txt -- diff --cc CHANGES.txt index adb4142,3776319..d9f82f2 --- a/CHANGES.txt +++ b/CHANGES.txt @@@ -1,14 -1,4 +1,17 @@@ +2.1.1 + * Pig support for hadoop CqlInputFormat (CASSANDRA-6454) + * Add listen_interface and rpc_interface options (CASSANDRA-7417) + * Fail to start if commit log replay detects a problem (CASSANDRA-7125) + * Improve schema merge performance (CASSANDRA-7444) + * Adjust MT depth based on # of partition validating (CASSANDRA-5263) + * Optimise NativeCell comparisons (CASSANDRA-6755) + * Configurable client timeout for cqlsh (CASSANDRA-7516) + * Include snippet of CQL query near syntax error in messages (CASSANDRA-7111) +Merged from 2.0: + * Catch errors when the JVM pulls the rug out from GCInspector (CASSANDRA-5345) ++ ++ + 2.1.0-final * Fix native protocol drop user type notification (CASSANDRA-7571) * Give read access to system.schema_usertypes to all authenticated users (CASSANDRA-7578)
[jira] [Created] (CASSANDRA-7612) java.io.IOException: Connection reset by peer
Julien Anguenot created CASSANDRA-7612: -- Summary: java.io.IOException: Connection reset by peer Key: CASSANDRA-7612 URL: https://issues.apache.org/jira/browse/CASSANDRA-7612 Project: Cassandra Issue Type: Bug Reporter: Julien Anguenot Priority: Minor Attachments: cassandra_ioexception_209.txt Exception thrown by all nodes in a multi-DC 2.0.9 cluster on a regular basis. (2 or 3 times a day) Let me know if I can provide more information. Thank you. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7608) StressD can't create keyspaces with Write Command
[ https://issues.apache.org/jira/browse/CASSANDRA-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-7608: -- Fix Version/s: 2.1.1 StressD can't create keyspaces with Write Command - Key: CASSANDRA-7608 URL: https://issues.apache.org/jira/browse/CASSANDRA-7608 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Russell Alexander Spitzer Priority: Minor Fix For: 2.1.1 It is impossible to run the default stress command via the daemon: ./stress write. Because the column names are HeapByteBuffers, they get ignored during serialization (no error is thrown), and when the object is deserialized on the server, settings.columns.names is null. This leads to a null pointer on the daemon for what would have worked had it run locally. Settings object on the Local machine {code} columns = {org.apache.cassandra.stress.settings.SettingsColumn@1465} maxColumnsPerKey = 5 names = {java.util.Arrays$ArrayList@1471} size = 5 [0] = {java.nio.HeapByteBuffer@1478}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [1] = {java.nio.HeapByteBuffer@1483}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [2] = {java.nio.HeapByteBuffer@1484}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [3] = {java.nio.HeapByteBuffer@1485}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [4] = {java.nio.HeapByteBuffer@1486}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] {code} Settings object on the StressD Machine {code} columns = {org.apache.cassandra.stress.settings.SettingsColumn@810} maxColumnsPerKey = 5 names = null {code} This leads to the null pointer in {code} Exception in thread Thread-1 java.lang.NullPointerException at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesThrift(SettingsSchema.java:94) at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:67) at org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:193) at org.apache.cassandra.stress.StressAction.run(StressAction.java:59) at java.lang.Thread.run(Thread.java:745) {code} Which refers to {code} for (int i = 0; i < settings.columns.names.size(); i++) standardCfDef.addToColumn_metadata(new ColumnDef(settings.columns.names.get(i), "BytesType")); {code} Possible solution: just use the settings.columns.namestr and convert them to byte buffers at this point in the code. -- This message was sent by Atlassian JIRA (v6.2#6252)
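The proposed fix above (keep the names as strings across the wire and wrap them into ByteBuffers only where the Thrift ColumnDef is built) can be sketched as below. This is a hedged illustration, not the actual stress code: `toByteBuffers` and `nameStrings` are hypothetical names standing in for the conversion that would happen at the point flagged in the ticket.

```java
// Hedged sketch of the proposed fix: carry column names as plain strings
// (which serialize fine) and wrap them into ByteBuffers only at the point
// where Thrift column metadata is built. Names here are illustrative.
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class ColumnNameWrap {
    // Convert a list of string column names into UTF-8 ByteBuffers.
    static List<ByteBuffer> toByteBuffers(List<String> nameStrings) {
        List<ByteBuffer> out = new ArrayList<>();
        for (String name : nameStrings)
            out.add(ByteBuffer.wrap(name.getBytes(StandardCharsets.UTF_8)));
        return out;
    }

    public static void main(String[] args) {
        List<ByteBuffer> bufs = toByteBuffers(Arrays.asList("C0", "C1"));
        System.out.println(bufs.size()); // 2
    }
}
```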
[jira] [Assigned] (CASSANDRA-7608) StressD can't create keyspaces with Write Command
[ https://issues.apache.org/jira/browse/CASSANDRA-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Russell Alexander Spitzer reassigned CASSANDRA-7608: Assignee: Russell Alexander Spitzer StressD can't create keyspaces with Write Command - Key: CASSANDRA-7608 URL: https://issues.apache.org/jira/browse/CASSANDRA-7608 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Russell Alexander Spitzer Assignee: Russell Alexander Spitzer Priority: Minor Fix For: 2.1.1 It is impossible to run the default stress command via the daemon: ./stress write. Because the column names are HeapByteBuffers, they get ignored during serialization (no error is thrown), and when the object is deserialized on the server, settings.columns.names is null. This leads to a null pointer on the daemon for what would have worked had it run locally. Settings object on the Local machine {code} columns = {org.apache.cassandra.stress.settings.SettingsColumn@1465} maxColumnsPerKey = 5 names = {java.util.Arrays$ArrayList@1471} size = 5 [0] = {java.nio.HeapByteBuffer@1478}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [1] = {java.nio.HeapByteBuffer@1483}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [2] = {java.nio.HeapByteBuffer@1484}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [3] = {java.nio.HeapByteBuffer@1485}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [4] = {java.nio.HeapByteBuffer@1486}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] {code} Settings object on the StressD Machine {code} columns = {org.apache.cassandra.stress.settings.SettingsColumn@810} maxColumnsPerKey = 5 names = null {code} This leads to the null pointer in {code} Exception in thread Thread-1 java.lang.NullPointerException at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesThrift(SettingsSchema.java:94) at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:67) at org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:193) at org.apache.cassandra.stress.StressAction.run(StressAction.java:59) at java.lang.Thread.run(Thread.java:745) {code} Which refers to {code} for (int i = 0; i < settings.columns.names.size(); i++) standardCfDef.addToColumn_metadata(new ColumnDef(settings.columns.names.get(i), "BytesType")); {code} Possible solution: just use the settings.columns.namestr and convert them to byte buffers at this point in the code. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7612) java.io.IOException: Connection reset by peer
[ https://issues.apache.org/jira/browse/CASSANDRA-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073437#comment-14073437 ] Michael Shuler commented on CASSANDRA-7612: --- Is there a particular way to reproduce this error? Is it between nodes in the same datacenter or over the network between data centers? Is there something interesting that happens around this time, or do the nodes recover and keep running fine? Networks are flaky. Connection reset by peer is common among all applications that actually communicate over a network - how they deal with the dropped connection, and subsequently reconnect to continue on working - that's the key. Cassandra should do the right thing here - does it? java.io.IOException: Connection reset by peer - Key: CASSANDRA-7612 URL: https://issues.apache.org/jira/browse/CASSANDRA-7612 Project: Cassandra Issue Type: Bug Reporter: Julien Anguenot Priority: Minor Attachments: cassandra_ioexception_209.txt Exception thrown by all nodes in a multi-DC 2.0.9 cluster on a regular basis. (2 or 3 times a day) Let me know if I can provide more information. Thank you. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7608) StressD can't create keyspaces with Write Command
[ https://issues.apache.org/jira/browse/CASSANDRA-7608?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073442#comment-14073442 ] Russell Alexander Spitzer commented on CASSANDRA-7608: -- Looks like there are actually a bunch of uses of columns.names, so I'll switch it to a byte array. StressD can't create keyspaces with Write Command - Key: CASSANDRA-7608 URL: https://issues.apache.org/jira/browse/CASSANDRA-7608 Project: Cassandra Issue Type: Bug Components: Tools Reporter: Russell Alexander Spitzer Assignee: Russell Alexander Spitzer Priority: Minor Fix For: 2.1.1 It is impossible to run the default stress command via the daemon: ./stress write. Because the column names are HeapByteBuffers, they get ignored during serialization (no error is thrown), and when the object is deserialized on the server, settings.columns.names is null. This leads to a null pointer on the daemon for what would have worked had it run locally. Settings object on the Local machine {code} columns = {org.apache.cassandra.stress.settings.SettingsColumn@1465} maxColumnsPerKey = 5 names = {java.util.Arrays$ArrayList@1471} size = 5 [0] = {java.nio.HeapByteBuffer@1478}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [1] = {java.nio.HeapByteBuffer@1483}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [2] = {java.nio.HeapByteBuffer@1484}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [3] = {java.nio.HeapByteBuffer@1485}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] [4] = {java.nio.HeapByteBuffer@1486}java.nio.HeapByteBuffer[pos=0 lim=2 cap=2] {code} Settings object on the StressD Machine {code} columns = {org.apache.cassandra.stress.settings.SettingsColumn@810} maxColumnsPerKey = 5 names = null {code} This leads to the null pointer in {code} Exception in thread Thread-1 java.lang.NullPointerException at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpacesThrift(SettingsSchema.java:94) at org.apache.cassandra.stress.settings.SettingsSchema.createKeySpaces(SettingsSchema.java:67) at org.apache.cassandra.stress.settings.StressSettings.maybeCreateKeyspaces(StressSettings.java:193) at org.apache.cassandra.stress.StressAction.run(StressAction.java:59) at java.lang.Thread.run(Thread.java:745) {code} Which refers to {code} for (int i = 0; i < settings.columns.names.size(); i++) standardCfDef.addToColumn_metadata(new ColumnDef(settings.columns.names.get(i), "BytesType")); {code} Possible solution: just use the settings.columns.namestr and convert them to byte buffers at this point in the code. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7542) Reduce CAS contention
[ https://issues.apache.org/jira/browse/CASSANDRA-7542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073445#comment-14073445 ] Benedict commented on CASSANDRA-7542: - I decided to have a quick crack at this to see if I could get an initial patch done for you to take for a spin. It's available [here|https://github.com/belliottsmith/cassandra/tree/7542-cascontend] I find on my local machine that it speeds up the paxos dtest by about 20%, which may be a little conservative for improvement as it takes a little while for it to get an idea of how long a paxos round takes. There are some improvements that should probably be made before rolling it out generally, such as tracking latency for each token range independently, and we should probably track the level of recent contention so that we can exponentially backoff if we're too aggressive (this might permit us to be a little _more_ aggressive in the typical case). On my box, it's worth noting this patch brings the average wait time due to contention down to around 15ms, instead of 50ms. Since this is more than a 20% decline, there is probably some more tuning to be done besides to get improved throughput. This only explores two of the potential ideas: reducing intra-node competition, and reducing sleep interval. Reduce CAS contention - Key: CASSANDRA-7542 URL: https://issues.apache.org/jira/browse/CASSANDRA-7542 Project: Cassandra Issue Type: Improvement Reporter: sankalp kohli Assignee: Benedict Fix For: 2.0.10 CAS updates on same CQL partition can lead to heavy contention inside C*. I am looking for simple ways(no algorithmic changes) to reduce contention as the penalty of it is high in terms of latency, specially for reads. We can put some sort of synchronization on CQL partition at StorageProxy level. This will reduce contention at least for all requests landing on one box for same partition. 
Here is an example of why it will help: 1) Say 1 write and 2 read CAS requests for the same partition key are sent to C* in parallel. 2) Since the client is token-aware, it sends these 3 requests to the same C* instance A. (Let's assume that all 3 requests go to the same instance A.) 3) In this C* instance A, all 3 CAS requests will contend with each other in Paxos. (This is bad.) To improve the contention in 3), what I am proposing is to add a lock on the partition key, similar to what we do in PaxosState.java, to serialize these 3 requests. This will remove the contention and improve performance, as these 3 requests will not collide with each other. Another improvement we can do in the client is to pick a deterministic live replica for a given partition doing CAS. -- This message was sent by Atlassian JIRA (v6.2#6252)
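The per-partition lock idea proposed above can be sketched with standard library primitives. This is a minimal illustration of lock striping keyed by partition key, not the actual StorageProxy or PaxosState code; the class and method names are hypothetical:

```java
// Hedged sketch: serialize CAS rounds that target the same partition key
// on one node with a per-key lock, so they don't contend inside Paxos.
// Note: a production version would also bound or reclaim the lock map.
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.locks.ReentrantLock;

public class PartitionCasLocks {
    private final ConcurrentHashMap<String, ReentrantLock> locks = new ConcurrentHashMap<>();

    // Runs `casRound` while holding the lock for `partitionKey`; requests
    // for different partition keys proceed concurrently.
    public void withPartitionLock(String partitionKey, Runnable casRound) {
        ReentrantLock lock = locks.computeIfAbsent(partitionKey, k -> new ReentrantLock());
        lock.lock();
        try {
            casRound.run();
        } finally {
            lock.unlock();
        }
    }

    public static void main(String[] args) {
        PartitionCasLocks locks = new PartitionCasLocks();
        int[] counter = {0};
        locks.withPartitionLock("pk1", () -> counter[0]++);
        System.out.println(counter[0]); // 1
    }
}
```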
[jira] [Updated] (CASSANDRA-7606) Add IF [NOT] EXISTS to CREATE/DROP trigger
[ https://issues.apache.org/jira/browse/CASSANDRA-7606?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-7606: -- Fix Version/s: 2.1.1 Add IF [NOT] EXISTS to CREATE/DROP trigger -- Key: CASSANDRA-7606 URL: https://issues.apache.org/jira/browse/CASSANDRA-7606 Project: Cassandra Issue Type: Improvement Reporter: Robert Stupp Fix For: 2.1.1 All CREATE/DROP statements support IF [NOT] EXISTS - except CREATE/DROP trigger. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7607) Test coverage for authorization in DDL DML statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-7607: -- Labels: unit-test (was: ) Test coverage for authorization in DDL DML statements --- Key: CASSANDRA-7607 URL: https://issues.apache.org/jira/browse/CASSANDRA-7607 Project: Cassandra Issue Type: Test Components: Tests Reporter: Robert Stupp Labels: lhf, unit-test Fix For: 2.0.10, 2.1.1 Similar to CASSANDRA-7604 Check that the statements perform proper authorization (allow / reject): * {{CREATE KEYSPACE}} * {{ALTER KEYSPACE}} * {{DROP KEYSPACE}} * {{CREATE TABLE}} * {{ALTER TABLE}} * {{DROP TABLE}} * {{CREATE TYPE}} * {{ALTER TYPE}} * {{DROP TYPE}} * {{CREATE INDEX}} * {{DROP INDEX}} * {{CREATE TRIGGER}} * {{DROP TRIGGER}} * {{CREATE USER}} * {{ALTER USER}} * {{DROP USER}} * {{TRUNCATE}} * {{GRANT}} * {{REVOKE}} * {{SELECT}} * {{UPDATE}} * {{DELETE}} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7607) Test coverage for authorization in DDL DML statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-7607: -- Fix Version/s: 2.1.1 2.0.10 Test coverage for authorization in DDL DML statements --- Key: CASSANDRA-7607 URL: https://issues.apache.org/jira/browse/CASSANDRA-7607 Project: Cassandra Issue Type: Test Components: Tests Reporter: Robert Stupp Labels: lhf, unit-test Fix For: 2.0.10, 2.1.1 Similar to CASSANDRA-7604 Check that the statements perform proper authorization (allow / reject): * {{CREATE KEYSPACE}} * {{ALTER KEYSPACE}} * {{DROP KEYSPACE}} * {{CREATE TABLE}} * {{ALTER TABLE}} * {{DROP TABLE}} * {{CREATE TYPE}} * {{ALTER TYPE}} * {{DROP TYPE}} * {{CREATE INDEX}} * {{DROP INDEX}} * {{CREATE TRIGGER}} * {{DROP TRIGGER}} * {{CREATE USER}} * {{ALTER USER}} * {{DROP USER}} * {{TRUNCATE}} * {{GRANT}} * {{REVOKE}} * {{SELECT}} * {{UPDATE}} * {{DELETE}} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7607) Test coverage for authorization in DDL DML statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-7607: -- Component/s: Tests Test coverage for authorization in DDL DML statements --- Key: CASSANDRA-7607 URL: https://issues.apache.org/jira/browse/CASSANDRA-7607 Project: Cassandra Issue Type: Test Components: Tests Reporter: Robert Stupp Labels: lhf, unit-test Fix For: 2.0.10, 2.1.1 Similar to CASSANDRA-7604 Check that the statements perform proper authorization (allow / reject): * {{CREATE KEYSPACE}} * {{ALTER KEYSPACE}} * {{DROP KEYSPACE}} * {{CREATE TABLE}} * {{ALTER TABLE}} * {{DROP TABLE}} * {{CREATE TYPE}} * {{ALTER TYPE}} * {{DROP TYPE}} * {{CREATE INDEX}} * {{DROP INDEX}} * {{CREATE TRIGGER}} * {{DROP TRIGGER}} * {{CREATE USER}} * {{ALTER USER}} * {{DROP USER}} * {{TRUNCATE}} * {{GRANT}} * {{REVOKE}} * {{SELECT}} * {{UPDATE}} * {{DELETE}} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7607) Test coverage for authorization in DDL DML statements
[ https://issues.apache.org/jira/browse/CASSANDRA-7607?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-7607: -- Labels: lhf unit-test (was: unit-test) Test coverage for authorization in DDL DML statements --- Key: CASSANDRA-7607 URL: https://issues.apache.org/jira/browse/CASSANDRA-7607 Project: Cassandra Issue Type: Test Components: Tests Reporter: Robert Stupp Labels: lhf, unit-test Fix For: 2.0.10, 2.1.1 Similar to CASSANDRA-7604 Check that the statements perform proper authorization (allow / reject): * {{CREATE KEYSPACE}} * {{ALTER KEYSPACE}} * {{DROP KEYSPACE}} * {{CREATE TABLE}} * {{ALTER TABLE}} * {{DROP TABLE}} * {{CREATE TYPE}} * {{ALTER TYPE}} * {{DROP TYPE}} * {{CREATE INDEX}} * {{DROP INDEX}} * {{CREATE TRIGGER}} * {{DROP TRIGGER}} * {{CREATE USER}} * {{ALTER USER}} * {{DROP USER}} * {{TRUNCATE}} * {{GRANT}} * {{REVOKE}} * {{SELECT}} * {{UPDATE}} * {{DELETE}} -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7575) Custom 2i validation
[ https://issues.apache.org/jira/browse/CASSANDRA-7575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14073464#comment-14073464 ] Michael Shuler commented on CASSANDRA-7575: --- [~jbellis] do you think this feature should be targeted for 2.1.x or 3.0? Custom 2i validation Key: CASSANDRA-7575 URL: https://issues.apache.org/jira/browse/CASSANDRA-7575 Project: Cassandra Issue Type: Improvement Components: API Reporter: Andrés de la Peña Priority: Minor Labels: 2i, cql3, secondaryIndex, secondary_index, select Attachments: 2i_validation.patch There are several projects using custom secondary indexes as an extension point to integrate C* with other systems such as Solr or Lucene. The usual approach is to embed third party indexing queries in CQL clauses. For example, [DSE Search|http://www.datastax.com/what-we-offer/products-services/datastax-enterprise] embeds Solr syntax this way: {code} SELECT title FROM solr WHERE solr_query='title:natio*'; {code} [Stratio platform|https://github.com/Stratio/stratio-cassandra] embeds custom JSON syntax for searching in Lucene indexes: {code} SELECT * FROM tweets WHERE lucene='{ filter : { type: range, field: time, lower: 2014/04/25, upper: 2014/04/1 }, query : { type: phrase, field: body, values: [big, data] }, sort : {fields: [ {field:time, reverse:true} ] } }'; {code} Tuplejump [Stargate|http://tuplejump.github.io/stargate/] also uses Stratio's open source JSON syntax: {code} SELECT name,company FROM PERSON WHERE stargate ='{ filter: { type: range, field: company, lower: a, upper: p }, sort:{ fields: [{field:name,reverse:true}] } }'; {code} These syntaxes are validated by the corresponding 2i implementation. This validation is done behind the StorageProxy command distribution, so, as far as I know, there is no way to give rich feedback about syntax errors to CQL users. I'm uploading a patch with some changes trying to improve this.
I propose adding an empty validation method to SecondaryIndexSearcher that can be overridden by custom 2i implementations: {code}
public void validate(List<IndexExpression> clause) {}
{code} And call it from SelectStatement#getRangeCommand: {code}
ColumnFamilyStore cfs = Keyspace.open(keyspace()).getColumnFamilyStore(columnFamily());
for (SecondaryIndexSearcher searcher : cfs.indexManager.getIndexSearchersForQuery(expressions))
{
    try
    {
        searcher.validate(expressions);
    }
    catch (RuntimeException e)
    {
        String exceptionMessage = e.getMessage();
        if (exceptionMessage != null && !exceptionMessage.trim().isEmpty())
            throw new InvalidRequestException("Invalid index expression: " + e.getMessage());
        else
            throw new InvalidRequestException("Invalid index expression");
    }
}
{code} In this way C* allows custom 2i implementations to give feedback about syntax errors. We are currently using these changes in a fork with no problems. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7613) cqlsh exceptions
[ https://issues.apache.org/jira/browse/CASSANDRA-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-7613: -- Description: cqlsh isn't working for me in 2.1 cqlsh exceptions - Key: CASSANDRA-7613 URL: https://issues.apache.org/jira/browse/CASSANDRA-7613 Project: Cassandra Issue Type: Bug Reporter: T Jake Luciani Fix For: 2.1.0 cqlsh isn't working for me in 2.1 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Created] (CASSANDRA-7613) cqlsh exceptions
T Jake Luciani created CASSANDRA-7613: - Summary: cqlsh exceptions Key: CASSANDRA-7613 URL: https://issues.apache.org/jira/browse/CASSANDRA-7613 Project: Cassandra Issue Type: Bug Reporter: T Jake Luciani Fix For: 2.1.0 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7613) cqlsh exceptions
[ https://issues.apache.org/jira/browse/CASSANDRA-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani updated CASSANDRA-7613: -- Description: cqlsh isn't working for me in 2.1 when executing a query with TRACING ON; I get the following error myformat_colname() takes exactly 3 arguments (2 given) was:cqlsh isn't working for me in 2.1 cqlsh exceptions - Key: CASSANDRA-7613 URL: https://issues.apache.org/jira/browse/CASSANDRA-7613 Project: Cassandra Issue Type: Bug Reporter: T Jake Luciani Fix For: 2.1.0 cqlsh isn't working for me in 2.1 when executing a query with TRACING ON; I get the following error myformat_colname() takes exactly 3 arguments (2 given) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (CASSANDRA-7573) Cassandra 2.0.9 with IBM Java 7 crashes during start up
[ https://issues.apache.org/jira/browse/CASSANDRA-7573?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler resolved CASSANDRA-7573. --- Resolution: Not a Problem Thanks for testing out IBM's JVM. We recommend Oracle's JVM, and that's where most testing is done, along with some OpenJDK testing. Looking at your lz4 bug report, this really doesn't look like a Cassandra bug. Cassandra 2.0.9 with IBM Java 7 crashes during start up --- Key: CASSANDRA-7573 URL: https://issues.apache.org/jira/browse/CASSANDRA-7573 Project: Cassandra Issue Type: Bug Environment: *Operating system: Linux AMD64, Red Hat Enterprise Linux Workstation release 6.5 (Santiago) 2.6.32-431.21.1.el6.x86_64 *JDK full version: pxa6470_27sr1-20140411_01(SR1) and pxa6470sr7-20140410_01(SR7) java version 1.7.0 Java(TM) SE Runtime Environment (build pxa6470_27sr1-20140411_01(SR1)) IBM J9 VM (build 2.7, JRE 1.7.0 Linux amd64-64 Compressed References 20140410_195893 (JIT enabled, AOT enabled) J9VM - R27_Java727_SR1_20140410_1931_B195893 JIT - tr.r13.java_20140410_61421 GC - R27_Java727_SR1_20140410_1931_B195893_CMPRSS J9CL - 20140410_195893) JCL - 20140409_01 based on Oracle 7u55-b13 Reporter: Jason Plurad Cassandra crashes during start up when using IBM Java 7 and generates a core dump. A workaround is to avoid the JNI native compressor by changing LZ4Factory.fastestInstance() to LZ4Factory.fastestJavaInstance() in src/java/org/apache/cassandra/transport/FrameCompressor.java and src/java/org/apache/cassandra/io/compress/LZ4Compressor.java I also opened an issue against lz4-java at https://github.com/jpountz/lz4-java/issues/42 -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (CASSANDRA-7612) java.io.IOException: Connection reset by peer
[ https://issues.apache.org/jira/browse/CASSANDRA-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Julien Anguenot resolved CASSANDRA-7612. Resolution: Not a Problem java.io.IOException: Connection reset by peer - Key: CASSANDRA-7612 URL: https://issues.apache.org/jira/browse/CASSANDRA-7612 Project: Cassandra Issue Type: Bug Reporter: Julien Anguenot Priority: Minor Attachments: cassandra_ioexception_209.txt Exception thrown by all nodes in a multi-DC 2.0.9 cluster on a regular basis. (2 or 3 times a day) Let me know if I can provide more information. Thank you. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7612) java.io.IOException: Connection reset by peer
[ https://issues.apache.org/jira/browse/CASSANDRA-7612?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073481#comment-14073481 ] Julien Anguenot commented on CASSANDRA-7612: Reading your comment about the network issue, I just found what is causing this: when a DataStax cluster session at the app level is killed ungracefully, it does indeed throw this error on the Cassandra side, which makes total sense. I should have thought of that earlier. We just migrated our application from Astyanax to the DataStax CQL Java driver (at the same time as upgrading to 2.0.9), and this stack trace did not use to show up in our monitoring logs when our CI builders kicked in throughout the day re-deploying our application nodes. Sorry about the noise, and thank you very much for your answer. Closing this. java.io.IOException: Connection reset by peer - Key: CASSANDRA-7612 URL: https://issues.apache.org/jira/browse/CASSANDRA-7612 Project: Cassandra Issue Type: Bug Reporter: Julien Anguenot Priority: Minor Attachments: cassandra_ioexception_209.txt Exception thrown by all nodes in a multi-DC 2.0.9 cluster on a regular basis (2 or 3 times a day). Let me know if I can provide more information. Thank you. -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Resolved] (CASSANDRA-7613) cqlsh exceptions
[ https://issues.apache.org/jira/browse/CASSANDRA-7613?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] T Jake Luciani resolved CASSANDRA-7613. --- Resolution: Duplicate Seems to be a dup of CASSANDRA-7603 cqlsh exceptions - Key: CASSANDRA-7613 URL: https://issues.apache.org/jira/browse/CASSANDRA-7613 Project: Cassandra Issue Type: Bug Reporter: T Jake Luciani Fix For: 2.1.0 cqlsh isn't working for me in 2.1 when executing a query with TRACING ON; I get the following error myformat_colname() takes exactly 3 arguments (2 given) -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Commented] (CASSANDRA-7568) Replacing a dead node using replace_address fails
[ https://issues.apache.org/jira/browse/CASSANDRA-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14073495#comment-14073495 ] Michael Shuler commented on CASSANDRA-7568: --- It looks like this particular error was corrected, but in reproducing this, I found a hardcoded assertion of 256 tokens when vnodes are enabled, so this should be corrected to use $NUM_TOKENS: {noformat} $ export MAX_HEAP_SIZE=1G; export HEAP_NEWSIZE=256M; NUM_TOKENS=32 PRINT_DEBUG=true KEEP_TEST_DIR=true nosetests --nocapture --nologcapture --verbosity=3 replace_address_test.py nose.config: INFO: Ignoring files matching ['^\\.', '^_', '^setup\\.py$'] replace_active_node_test (replace_address_test.TestReplaceAddress) ... cluster ccm directory: /tmp/dtest-eAEKKO Starting cluster with 3 nodes. Inserting Data... Created keyspaces. Sleeping 1s for propagation. total,interval_op_rate,interval_key_rate,latency,95th,99.9th,elapsed_time 1,1000,1000,17.3,128.5,518.0,8 Total operation time : 00:00:08 END Starting node 4 to replace active node 3 ok replace_nonexistent_node_test (replace_address_test.TestReplaceAddress) ... cluster ccm directory: /tmp/dtest-2siZVd Starting cluster with 3 nodes. Inserting Data... Created keyspaces. Sleeping 1s for propagation. total,interval_op_rate,interval_key_rate,latency,95th,99.9th,elapsed_time 1,1000,1000,20.8,145.0,360.4,9 Total operation time : 00:00:09 END Start node 4 and replace an address with no node ok replace_stopped_node_test (replace_address_test.TestReplaceAddress) ... cluster ccm directory: /tmp/dtest-M5WycR Starting cluster with 3 nodes. Inserting Data... Created keyspaces. Sleeping 1s for propagation. total,interval_op_rate,interval_key_rate,latency,95th,99.9th,elapsed_time 1,1000,1000,18.5,120.5,319.7,8 Total operation time : 00:00:08 END Stopping node 3. Testing node stoppage (query should fail). Starting node 4 to replace node 3 Verifying querying works again. 
Verifying tokens migrated successfully (' WARN [main] 2014-07-24 13:30:00,043 TokenMetadata.java (line 201) Token -3570582696082918770 changing ownership from /127.0.0.3 to /127.0.0.4\n', <_sre.SRE_Match object at 0x7f6e4c23a8b8>) FAIL == FAIL: replace_stopped_node_test (replace_address_test.TestReplaceAddress) -- Traceback (most recent call last): File "/home/mshuler/git/cassandra-dtest/replace_address_test.py", line 78, in replace_stopped_node_test self.assertEqual(len(movedTokensList), 256) AssertionError: 32 != 256 -- Ran 3 tests in 252.630s FAILED (failures=1) {noformat} Replacing a dead node using replace_address fails - Key: CASSANDRA-7568 URL: https://issues.apache.org/jira/browse/CASSANDRA-7568 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Ala' Alkhaldi Priority: Minor Failed assertion {code} ERROR [main] 2014-07-17 10:24:21,171 CassandraDaemon.java:474 - Exception encountered during startup java.lang.AssertionError: Expected 1 endpoint but found 0 at org.apache.cassandra.dht.RangeStreamer.getAllRangesWithStrictSourcesFor(RangeStreamer.java:222) ~[main/:na] at org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:131) ~[main/:na] at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:72) ~[main/:na] at org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1049) ~[main/:na] at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:811) ~[main/:na] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:626) ~[main/:na] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:511) ~[main/:na] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) [main/:na] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457) [main/:na] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) [main/:na] {code} To replicate the bug run the 
replace_address_test.replace_stopped_node_test dtest -- This message was sent by Atlassian JIRA (v6.2#6252)
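A minimal sketch of the $NUM_TOKENS fix suggested in the comment above, in the dtest's own language. The function and variable names are illustrative, not the actual replace_address_test code; the assumption is that the test can read the same NUM_TOKENS environment variable used when the run was started, falling back to the current default of 256.

```python
import os


def expected_num_tokens(default=256):
    # Derive the expected vnode count from the environment instead of
    # hardcoding 256, so the assertion matches however many tokens the
    # test run was actually started with (e.g. NUM_TOKENS=32 above).
    return int(os.environ.get("NUM_TOKENS", default))


def check_moved_tokens(moved_tokens_list):
    # Replacement for the hardcoded `self.assertEqual(len(movedTokensList), 256)`.
    expected = expected_num_tokens()
    assert len(moved_tokens_list) == expected, \
        "%d != %d" % (len(moved_tokens_list), expected)
```

With this change the failing run above (NUM_TOKENS=32) would assert 32 moved tokens rather than 256.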
[jira] [Updated] (CASSANDRA-7568) Replacing a dead node using replace_address fails
[ https://issues.apache.org/jira/browse/CASSANDRA-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-7568: -- Component/s: (was: Core) Tests Replacing a dead node using replace_address fails - Key: CASSANDRA-7568 URL: https://issues.apache.org/jira/browse/CASSANDRA-7568 Project: Cassandra Issue Type: Bug Components: Tests Reporter: Ala' Alkhaldi Assignee: Shawn Kumar Priority: Minor Failed assertion {code} ERROR [main] 2014-07-17 10:24:21,171 CassandraDaemon.java:474 - Exception encountered during startup java.lang.AssertionError: Expected 1 endpoint but found 0 at org.apache.cassandra.dht.RangeStreamer.getAllRangesWithStrictSourcesFor(RangeStreamer.java:222) ~[main/:na] at org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:131) ~[main/:na] at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:72) ~[main/:na] at org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1049) ~[main/:na] at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:811) ~[main/:na] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:626) ~[main/:na] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:511) ~[main/:na] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) [main/:na] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457) [main/:na] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) [main/:na] {code} To replicate the bug run the replace_address_test.replace_stopped_node_test dtest -- This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (CASSANDRA-7568) Replacing a dead node using replace_address fails
[ https://issues.apache.org/jira/browse/CASSANDRA-7568?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Michael Shuler updated CASSANDRA-7568: -- Issue Type: Test (was: Bug) Replacing a dead node using replace_address fails - Key: CASSANDRA-7568 URL: https://issues.apache.org/jira/browse/CASSANDRA-7568 Project: Cassandra Issue Type: Test Components: Tests Reporter: Ala' Alkhaldi Assignee: Shawn Kumar Priority: Minor Failed assertion {code} ERROR [main] 2014-07-17 10:24:21,171 CassandraDaemon.java:474 - Exception encountered during startup java.lang.AssertionError: Expected 1 endpoint but found 0 at org.apache.cassandra.dht.RangeStreamer.getAllRangesWithStrictSourcesFor(RangeStreamer.java:222) ~[main/:na] at org.apache.cassandra.dht.RangeStreamer.addRanges(RangeStreamer.java:131) ~[main/:na] at org.apache.cassandra.dht.BootStrapper.bootstrap(BootStrapper.java:72) ~[main/:na] at org.apache.cassandra.service.StorageService.bootstrap(StorageService.java:1049) ~[main/:na] at org.apache.cassandra.service.StorageService.joinTokenRing(StorageService.java:811) ~[main/:na] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:626) ~[main/:na] at org.apache.cassandra.service.StorageService.initServer(StorageService.java:511) ~[main/:na] at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:338) [main/:na] at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:457) [main/:na] at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:546) [main/:na] {code} To replicate the bug run the replace_address_test.replace_stopped_node_test dtest -- This message was sent by Atlassian JIRA (v6.2#6252)