[jira] [Commented] (CASSANDRA-3031) Add 4 byte integer type
[ https://issues.apache.org/jira/browse/CASSANDRA-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096625#comment-13096625 ]

Radim Kolar commented on CASSANDRA-3031:
----------------------------------------

With this patch applied, new CFs created via cqlsh use the new org.apache.cassandra.db.marshal.IntType for int columns. Old applications using CFs created with the old int type are unaffected. In Python, the type returned to old apps is int or long depending on the value inserted. The patch is fully backward compatible; I haven't discovered any incompatibility yet.

In CQL we should have these types: int (new IntType), long (LongType), and number/varint (IntegerType). Better to make these changes early, while CQL is not yet widely used.

IntType is needed:
* for type safety: applications loading data from Cassandra into an int variable can be sure the input does not overflow. Unlike other NoSQL databases, Cassandra can offer optional type safety, and a 4-byte int type is currently missing.
* to resolve compatibility issues with applications writing fixed-size 4-byte integers to IntegerType columns (most Hector apps do this). Such integers cannot be manipulated with cassandra-cli or read back by applications in Python or PHP.

Add 4 byte integer type
-----------------------

Key: CASSANDRA-3031
URL: https://issues.apache.org/jira/browse/CASSANDRA-3031
Project: Cassandra
Issue Type: Improvement
Components: Core
Affects Versions: 0.8.4
Environment: any
Reporter: Radim Kolar
Priority: Minor
Labels: hector, lhf
Fix For: 1.0
Attachments: apache-cassandra-0.8.4-SNAPSHOT.jar, src.diff, test.diff

Cassandra currently lacks support for a 4-byte fixed-size integer data type. The Java API Hector and the C++ client libcassandra like to serialize integers as 4 bytes in network order. The problem is that you can't use cassandra-cli to manipulate rows stored that way, and compatibility with other applications following Cassandra's integer encoding standard is problematic too. Because adding a new datatype/validator is fairly simple, I recommend adding an int4 data type.

Compatibility with Hector is important because it is the most used Java Cassandra API and a lot of applications use it. This problem has been discussed several times already:
http://comments.gmane.org/gmane.comp.db.hector.user/2125
https://issues.apache.org/jira/browse/CASSANDRA-2585

It would be nice to have compatibility with cassandra-cli and other applications without rewriting Hector apps.

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira
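To make the encoding mismatch concrete, here is a minimal Python sketch (the helper names are hypothetical, not Cassandra APIs) contrasting the fixed 4-byte network-order form that Hector writes with the variable-length two's-complement form that IntegerType stores:

```python
import struct

def encode_fixed32(n):
    """4-byte big-endian (network order) two's complement, as Hector writes ints."""
    return struct.pack(">i", n)

def encode_varint(n):
    """Minimal-length big-endian two's complement, as IntegerType stores values."""
    nbytes = n.bit_length() // 8 + 1
    return n.to_bytes(nbytes, "big", signed=True)

def decode_varint(b):
    """IntegerType-style decode: accepts any length."""
    return int.from_bytes(b, "big", signed=True)

# The same value has different wire lengths under the two encodings.
assert encode_fixed32(1) == b"\x00\x00\x00\x01"
assert encode_varint(1) == b"\x01"

# A variable-length reader happily accepts a fixed 4-byte value...
assert decode_varint(encode_fixed32(1)) == 1

# ...but a fixed-width reader rejects the 1-byte form, which is why
# cassandra-cli and Python/PHP clients disagree with Hector apps.
try:
    struct.unpack(">i", encode_varint(1))
except struct.error:
    pass  # unpack requires exactly 4 bytes
```

Note the asymmetry: the varint decoder reads fixed 4-byte values numerically correctly, but re-encoding the same number yields a different byte string, so byte-level comparisons and display break even when the numeric value survives.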
[jira] [Commented] (CASSANDRA-3124) java heap limit for nodetool
[ https://issues.apache.org/jira/browse/CASSANDRA-3124?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096636#comment-13096636 ]

Zenek Kraweznik commented on CASSANDRA-3124:
--------------------------------------------

I have over 4 GB of memory free. nodetool requires at most 32 MB of memory to run, and it should be limited to that value regardless of the default limits.

java heap limit for nodetool
----------------------------

Key: CASSANDRA-3124
URL: https://issues.apache.org/jira/browse/CASSANDRA-3124
Project: Cassandra
Issue Type: Improvement
Components: Core, Tools
Affects Versions: 0.8.1, 0.8.2, 0.8.3, 0.8.4
Environment: not important
Reporter: Zenek Kraweznik
Priority: Minor

By default (from the Debian package):

# nodetool
Error occurred during initialization of VM
Could not reserve enough space for object heap
Could not create the Java virtual machine.
#

and:

--- /usr/bin/nodetool.old 2011-09-02 14:15:14.228152799 +0200
+++ /usr/bin/nodetool 2011-09-02 14:14:28.745154552 +0200
@@ -55,7 +55,7 @@
         ;;
 esac

-$JAVA -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
+$JAVA -Xmx32m -cp $CLASSPATH -Dstorage-config=$CASSANDRA_CONF \
         -Dlog4j.configuration=log4j-tools.properties \
         org.apache.cassandra.tools.NodeCmd $@

After every upgrade I had to add the limit manually. I think it's a good idea to add it by default ;)
[jira] [Updated] (CASSANDRA-3133) nodetool netstats doesn't show streams during decommission
[ https://issues.apache.org/jira/browse/CASSANDRA-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Zenek Kraweznik updated CASSANDRA-3133:
---------------------------------------
Component/s: Tools

nodetool netstats doesn't show streams during decommission
----------------------------------------------------------

Key: CASSANDRA-3133
URL: https://issues.apache.org/jira/browse/CASSANDRA-3133
Project: Cassandra
Issue Type: Bug
Components: Tools
Affects Versions: 0.8.4
Environment: Debian 6.0.2.1 (squeeze), Java 1.6.26 (Sun, non-free packages)
Reporter: Zenek Kraweznik

nodetool netstats is not showing the files being transferred during decommission.
[jira] [Created] (CASSANDRA-3133) nodetool netstats doesn't show streams during decommission
nodetool netstats doesn't show streams during decommission
----------------------------------------------------------

Key: CASSANDRA-3133
URL: https://issues.apache.org/jira/browse/CASSANDRA-3133
Project: Cassandra
Issue Type: Bug
Affects Versions: 0.8.4
Environment: Debian 6.0.2.1 (squeeze), Java 1.6.26 (Sun, non-free packages)
Reporter: Zenek Kraweznik

nodetool netstats is not showing the files being transferred during decommission.
[jira] [Commented] (CASSANDRA-3118) nodetool can not decommission a node
[ https://issues.apache.org/jira/browse/CASSANDRA-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096663#comment-13096663 ]

deng commented on CASSANDRA-3118:
---------------------------------

Does this mean that the problem cannot be resolved?

nodetool can not decommission a node
------------------------------------

Key: CASSANDRA-3118
URL: https://issues.apache.org/jira/browse/CASSANDRA-3118
Project: Cassandra
Issue Type: Bug
Components: Tools
Affects Versions: 0.8.4
Environment: Cassandra 0.8.4
Reporter: deng
Fix For: 0.8.5
Attachments: 3118-debug.txt

When I use nodetool ring I get the result below. I then want to decommission the 100.86.17.90 node, but I get this error:

[root@ip bin]# ./nodetool -h10.86.12.225 ring
Address        DC          Rack   Status  State    Load       Owns    Token
                                                                      154562542458917734942660802527609328132
100.86.17.90   datacenter1 rack1  Up      Leaving  1.08 MB    11.21%  3493450320433654773610109291263389161
100.86.12.225  datacenter1 rack1  Up      Normal   558.25 MB  14.25%  27742979166206700793970535921354744095
100.86.12.224  datacenter1 rack1  Up      Normal   5.01 GB    6.58%   38945137636148605752956920077679425910

ERROR:
[root@ip bin]# ./nodetool -h100.86.17.90 decommission
Exception in thread "main" java.lang.UnsupportedOperationException
        at java.util.AbstractList.remove(AbstractList.java:144)
        at java.util.AbstractList$Itr.remove(AbstractList.java:360)
        at java.util.AbstractCollection.removeAll(AbstractCollection.java:337)
        at org.apache.cassandra.service.StorageService.calculatePendingRanges(StorageService.java:1041)
        at org.apache.cassandra.service.StorageService.calculatePendingRanges(StorageService.java:1006)
        at org.apache.cassandra.service.StorageService.handleStateLeaving(StorageService.java:877)
        at org.apache.cassandra.service.StorageService.onChange(StorageService.java:732)
        at org.apache.cassandra.gms.Gossiper.doNotifications(Gossiper.java:839)
        at org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:986)
        at org.apache.cassandra.service.StorageService.startLeaving(StorageService.java:1836)
        at org.apache.cassandra.service.StorageService.decommission(StorageService.java:1855)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
        at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
        at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
        at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
        at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
        at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1426)
        at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
        at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1264)
        at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1359)
        at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
        at sun.rmi.transport.Transport$1.run(Transport.java:159)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
        at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
        at
[jira] [Commented] (CASSANDRA-3118) nodetool can not decommission a node
[ https://issues.apache.org/jira/browse/CASSANDRA-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096669#comment-13096669 ]

Jonathan Ellis commented on CASSANDRA-3118:
-------------------------------------------

Not without you providing more information, such as applying the patch I attached and posting the results with log level debug.
[jira] [Commented] (CASSANDRA-3031) Add 4 byte integer type
[ https://issues.apache.org/jira/browse/CASSANDRA-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096674#comment-13096674 ]

Eric Evans commented on CASSANDRA-3031:
---------------------------------------

{quote}
with this patch applied new CF created via cqlsh are using new org.apache.cassandra.db.marshal.IntType type for int columns. Old applications using CF created with old int type are unaffected. In python returned type to old apps is int or long depending on value inserted. Patch is fully backward compatible, i havent discovered any incompatibility yet.
{quote}

It is true that drivers should do the right thing insofar as they will continue to see LongType schema and deserialize the 8-byte values to whatever makes sense. However, an expectation is still being broken, and it's hard to imagine all of the cases where this will blow up in someone's face. Imagine someone who uses a schema written as CQL to set up new nodes. Depending on the version it is applied to, they're going to get entirely different and incompatible schemas.
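Eric Evans's schema concern can be sketched in a few lines of Python (hypothetical helpers, not driver code): the same CQL int column maps to 8-byte LongType on one version and 4-byte IntType on another, so values written under one schema will not deserialize under the other:

```python
import struct

def encode_longtype(n):
    """LongType: fixed 8-byte big-endian two's complement."""
    return struct.pack(">q", n)

def encode_inttype(n):
    """Proposed IntType: fixed 4-byte big-endian two's complement."""
    return struct.pack(">i", n)

def decode_longtype(b):
    """A LongType deserializer insists on exactly 8 bytes."""
    if len(b) != 8:
        raise ValueError("LongType value must be 8 bytes, got %d" % len(b))
    return struct.unpack(">q", b)[0]

# Same logical value, incompatible stored representations:
assert encode_longtype(1) == b"\x00\x00\x00\x00\x00\x00\x00\x01"
assert encode_inttype(1) == b"\x00\x00\x00\x01"

# Data written under an IntType schema is unreadable through a LongType schema.
try:
    decode_longtype(encode_inttype(1))
except ValueError:
    pass
```

So applying the same CQL schema file to pre-patch and post-patch versions would produce columns whose stored values differ in width, which is the incompatibility being flagged.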
[jira] [Commented] (CASSANDRA-3118) nodetool can not decommission a node
[ https://issues.apache.org/jira/browse/CASSANDRA-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096685#comment-13096685 ]

deng commented on CASSANDRA-3118:
---------------------------------

My cluster is being upgraded from Cassandra 0.7.5 to 0.8.4. I deleted all the servers and data and installed a fresh Cassandra 0.8.4 on the cluster, but this time the error is different. I set a username and password for logging in to the cluster; I do not know whether the cause is the auth. Do you test decommission with auth? The error is like this:

[root@devapp3 bin]# ./nodetool -h100.86.12.225 -p 9160 decommission
Error connection to remote JMX agent!
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is: java.net.SocketException: Connection reset]
        at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:340)
        at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
        at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:140)
        at org.apache.cassandra.tools.NodeProbe.init(NodeProbe.java:110)
        at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:580)
Caused by: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is: java.net.SocketException: Connection reset]
        at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:101)
        at com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java:185)
        at javax.naming.InitialContext.lookup(InitialContext.java:392)
        at javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.java:1888)
        at javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java:1858)
        at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:257)
        ... 4 more
Caused by: java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is: java.net.SocketException: Connection reset
        at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:286)
        at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
        at sun.rmi.server.UnicastRef.newCall(UnicastRef.java:322)
        at sun.rmi.registry.RegistryImpl_Stub.lookup(Unknown Source)
        at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:97)
        ... 9 more
Caused by: java.net.SocketException: Connection reset
        at java.net.SocketInputStream.read(SocketInputStream.java:168)
        at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
        at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
        at java.io.DataInputStream.readByte(DataInputStream.java:248)
        at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:228)
        ... 13 more

[root@devapp3 bin]# ./nodetool -h100.86.12.225 -uUser -pw App -p 9160 decommission
Error connection to remote JMX agent!
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is: java.net.SocketException: Connection reset]
        at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:340)
        at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
        at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:140)
        at org.apache.cassandra.tools.NodeProbe.init(NodeProbe.java:96)
        at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:580)
Caused by: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is: java.net.SocketException: Connection reset]
        at com.sun.jndi.rmi.registry.RegistryContext.lookup(RegistryContext.java:101)
        at com.sun.jndi.toolkit.url.GenericURLContext.lookup(GenericURLContext.java:185)
        at javax.naming.InitialContext.lookup(InitialContext.java:392)
        at javax.management.remote.rmi.RMIConnector.findRMIServerJNDI(RMIConnector.java:1888)
        at javax.management.remote.rmi.RMIConnector.findRMIServer(RMIConnector.java:1858)
        at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:257)
        ... 4 more
Caused by: java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is: java.net.SocketException: Connection reset
        at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:286)
        at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
        at
[jira] [Commented] (CASSANDRA-3118) nodetool can not decommission a node
[ https://issues.apache.org/jira/browse/CASSANDRA-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096694#comment-13096694 ]

deng commented on CASSANDRA-3118:
---------------------------------

I ran a test with ring: if I add -p, it does not work:

[root@devapp3 bin]# ./nodetool -h100.86.12.224 ring
Address        DC          Rack   Status  State   Load       Owns    Token
                                                                     154562542458917734942660802527609328132
100.86.12.225  datacenter1 rack1  Up      Normal  558.25 MB  25.46%  27742979166206700793970535921354744095
100.86.12.224  datacenter1 rack1  Up      Normal  5.01 GB    6.58%   38945137636148605752956920077679425910
72.28.16.127   datacenter1 rack1  Up      Normal  4.83 GB    38.87%  105086686663776022032160538278345356251

[root@devapp3 bin]# ./nodetool -h100.86.12.224 -p 9160 ring
Error connection to remote JMX agent!
java.io.IOException: Failed to retrieve RMIServer stub: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is: java.net.SocketException: Connection reset]
        at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:340)
        at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
        at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:140)
        at org.apache.cassandra.tools.NodeProbe.init(NodeProbe.java:110)
        at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:580)
Caused by: javax.naming.CommunicationException [Root exception is java.rmi.ConnectIOException: error during JRMP connection establishment; nested exception is:
[jira] [Commented] (CASSANDRA-3118) nodetool can not decommission a node
[ https://issues.apache.org/jira/browse/CASSANDRA-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096697#comment-13096697 ]

deng commented on CASSANDRA-3118:
---------------------------------

When I test decommission without adding a port or username and password, errors occur. I then checked the configuration of the 100.86.12.225 node: listen_address and rpc_address are both set to 100.86.12.225. In addition, I ran the command on the 100.86.12.224 node.

[root@devapp3 bin]# ./nodetool -h100.86.12.225 -uAppUser -pw EquityApp decommission
Error connection to remote JMX agent!
java.rmi.ConnectException: Connection refused to host: 127.0.0.1; nested exception is: java.net.ConnectException: Connection refused
        at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:601)
        at sun.rmi.transport.tcp.TCPChannel.createConnection(TCPChannel.java:198)
        at sun.rmi.transport.tcp.TCPChannel.newConnection(TCPChannel.java:184)
        at sun.rmi.server.UnicastRef.invoke(UnicastRef.java:110)
        at javax.management.remote.rmi.RMIServerImpl_Stub.newClient(Unknown Source)
        at javax.management.remote.rmi.RMIConnector.getConnection(RMIConnector.java:2329)
        at javax.management.remote.rmi.RMIConnector.connect(RMIConnector.java:279)
        at javax.management.remote.JMXConnectorFactory.connect(JMXConnectorFactory.java:248)
        at org.apache.cassandra.tools.NodeProbe.connect(NodeProbe.java:140)
        at org.apache.cassandra.tools.NodeProbe.init(NodeProbe.java:96)
        at org.apache.cassandra.tools.NodeCmd.main(NodeCmd.java:580)
Caused by: java.net.ConnectException: Connection refused
        at java.net.PlainSocketImpl.socketConnect(Native Method)
        at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:351)
        at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:213)
        at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:200)
        at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
        at java.net.Socket.connect(Socket.java:529)
        at java.net.Socket.connect(Socket.java:478)
        at java.net.Socket.<init>(Socket.java:375)
        at java.net.Socket.<init>(Socket.java:189)
        at sun.rmi.transport.proxy.RMIDirectSocketFactory.createSocket(RMIDirectSocketFactory.java:22)
        at sun.rmi.transport.proxy.RMIMasterSocketFactory.createSocket(RMIMasterSocketFactory.java:128)
        at sun.rmi.transport.tcp.TCPEndpoint.newSocket(TCPEndpoint.java:595)
        ... 10 more
[jira] [Commented] (CASSANDRA-3118) nodetool can not decommission a node
[ https://issues.apache.org/jira/browse/CASSANDRA-3118?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13096704#comment-13096704 ]

deng commented on CASSANDRA-3118:
---------------------------------

I set the parameter JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=100.86.12.225" and ran decommission; the error is like the first error:

[root@devapp3 bin]# ./nodetool -h100.86.12.225 decommission
Exception in thread "main" java.lang.UnsupportedOperationException
        at java.util.AbstractList.remove(AbstractList.java:144)
        at java.util.AbstractList$Itr.remove(AbstractList.java:360)
        at java.util.AbstractCollection.removeAll(AbstractCollection.java:337)
        at org.apache.cassandra.service.StorageService.calculatePendingRanges(StorageService.java:1041)
        at org.apache.cassandra.service.StorageService.calculatePendingRanges(StorageService.java:1006)
        at org.apache.cassandra.service.StorageService.handleStateLeaving(StorageService.java:877)
        at org.apache.cassandra.service.StorageService.onChange(StorageService.java:732)
        at org.apache.cassandra.gms.Gossiper.doNotifications(Gossiper.java:839)
        at org.apache.cassandra.gms.Gossiper.addLocalApplicationState(Gossiper.java:986)
        at org.apache.cassandra.service.StorageService.startLeaving(StorageService.java:1836)
        at org.apache.cassandra.service.StorageService.decommission(StorageService.java:1855)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:93)
        at com.sun.jmx.mbeanserver.StandardMBeanIntrospector.invokeM2(StandardMBeanIntrospector.java:27)
        at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
        at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
        at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
        at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
        at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
        at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
        at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
        at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
        at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
        at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
        at sun.rmi.transport.Transport$1.run(Transport.java:159)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
        at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
        at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)
[jira] [Commented] (CASSANDRA-1497) Add input support for Hadoop Streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096754#comment-13096754 ] Jeremy Hanna commented on CASSANDRA-1497: - I don't think anyone was ever particularly against allowing hadoop streaming functionality. I think there just wasn't the interest for a while. On the input side, it will also require CASSANDRA-2799 which should be trivial. Add input support for Hadoop Streaming -- Key: CASSANDRA-1497 URL: https://issues.apache.org/jira/browse/CASSANDRA-1497 Project: Cassandra Issue Type: New Feature Components: Hadoop Reporter: Jeremy Hanna Attachments: 0001-An-updated-avro-based-input-streaming-solution.patch related to CASSANDRA-1368 - create similar functionality for input streaming. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-1497) Add input support for Hadoop Streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096766#comment-13096766 ] Brandyn White commented on CASSANDRA-1497: -- Good point, it'll be easier to update the Cassandra Hadoop API than Hadoop streaming. Add input support for Hadoop Streaming -- Key: CASSANDRA-1497 URL: https://issues.apache.org/jira/browse/CASSANDRA-1497 Project: Cassandra Issue Type: New Feature Components: Hadoop Reporter: Jeremy Hanna Attachments: 0001-An-updated-avro-based-input-streaming-solution.patch related to CASSANDRA-1368 - create similar functionality for input streaming. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Issue Comment Edited] (CASSANDRA-1497) Add input support for Hadoop Streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096766#comment-13096766 ] Brandyn White edited comment on CASSANDRA-1497 at 9/3/11 10:02 PM: --- Good point. It'll be easier to update the Cassandra Hadoop API to support the old-style Hadoop interface. After that we can add in the Cassandra IO and command line switches with a small patch. was (Author: bwhite): Good point, it'll be easier to update the Cassandra Hadoop API than Hadoop streaming. Add input support for Hadoop Streaming -- Key: CASSANDRA-1497 URL: https://issues.apache.org/jira/browse/CASSANDRA-1497 Project: Cassandra Issue Type: New Feature Components: Hadoop Reporter: Jeremy Hanna Attachments: 0001-An-updated-avro-based-input-streaming-solution.patch related to CASSANDRA-1368 - create similar functionality for input streaming. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time
[ https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jérémy Sevellec updated CASSANDRA-2961: --- Comment: was deleted (was: here is the patch) Expire dead gossip states based on time --- Key: CASSANDRA-2961 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961 Project: Cassandra Issue Type: Improvement Affects Versions: 1.0 Reporter: Brandon Williams Labels: patch Fix For: 1.0 Currently dead states are held until aVeryLongTime, 3 days. The problem is that if a node reboots within this period, it begins a new 3 days and will repopulate the ring with the dead state. While mostly harmless, perpetuating the state forever is at least wasting a small amount of bandwidth. Instead, we can expire states based on a ttl, which will require that the cluster be loosely time synced; within the quarantine period of 60s. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time
[ https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jérémy Sevellec updated CASSANDRA-2961: --- Attachment: trunk-2961.patch Here is the patch Expire dead gossip states based on time --- Key: CASSANDRA-2961 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961 Project: Cassandra Issue Type: Improvement Affects Versions: 1.0 Reporter: Brandon Williams Labels: patch Fix For: 1.0 Attachments: trunk-2961.patch Currently dead states are held until aVeryLongTime, 3 days. The problem is that if a node reboots within this period, it begins a new 3 days and will repopulate the ring with the dead state. While mostly harmless, perpetuating the state forever is at least wasting a small amount of bandwidth. Instead, we can expire states based on a ttl, which will require that the cluster be loosely time synced; within the quarantine period of 60s. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Updated] (CASSANDRA-2961) Expire dead gossip states based on time
[ https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jérémy Sevellec updated CASSANDRA-2961: --- Comment: was deleted (was: Here is the patch) Expire dead gossip states based on time --- Key: CASSANDRA-2961 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961 Project: Cassandra Issue Type: Improvement Affects Versions: 1.0 Reporter: Brandon Williams Labels: patch Fix For: 1.0 Attachments: trunk-2961.patch Currently dead states are held until aVeryLongTime, 3 days. The problem is that if a node reboots within this period, it begins a new 3 days and will repopulate the ring with the dead state. While mostly harmless, perpetuating the state forever is at least wasting a small amount of bandwidth. Instead, we can expire states based on a ttl, which will require that the cluster be loosely time synced; within the quarantine period of 60s. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Commented] (CASSANDRA-2961) Expire dead gossip states based on time
[ https://issues.apache.org/jira/browse/CASSANDRA-2961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096789#comment-13096789 ] Jérémy Sevellec commented on CASSANDRA-2961: In addition, I have included a test library (Hamcrest) to simplify writing assertions in JUnit. Expire dead gossip states based on time --- Key: CASSANDRA-2961 URL: https://issues.apache.org/jira/browse/CASSANDRA-2961 Project: Cassandra Issue Type: Improvement Affects Versions: 1.0 Reporter: Brandon Williams Labels: patch Fix For: 1.0 Attachments: trunk-2961.patch Currently dead states are held until aVeryLongTime, 3 days. The problem is that if a node reboots within this period, it begins a new 3 days and will repopulate the ring with the dead state. While mostly harmless, perpetuating the state forever is at least wasting a small amount of bandwidth. Instead, we can expire states based on a ttl, which will require that the cluster be loosely time synced, within the quarantine period of 60s.
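The expiry scheme CASSANDRA-2961 describes can be sketched as follows. This is an illustrative model, not Cassandra's actual Gossiper internals; `DeadState`, `expire_at`, and `expire_dead_states` are hypothetical names.

```python
import time

# Clocks across the cluster are assumed loosely synced, within the
# 60s quarantine period mentioned in the ticket.
QUARANTINE_DELAY = 60

class DeadState:
    """A dead endpoint's gossip state, stamped with an absolute expiry time."""
    def __init__(self, endpoint, expire_at):
        self.endpoint = endpoint
        self.expire_at = expire_at

def expire_dead_states(states, now=None):
    """Drop states whose ttl has elapsed. Because expire_at is an absolute
    timestamp carried with the state, a node that reboots does not restart
    the countdown -- unlike the fixed aVeryLongTime window."""
    now = time.time() if now is None else now
    return [s for s in states if s.expire_at > now]
```

The key difference from the status quo is that the deadline travels with the state instead of being recomputed from local uptime, which is why a reboot can no longer repopulate the ring.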
[jira] [Commented] (CASSANDRA-1497) Add input support for Hadoop Streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096800#comment-13096800 ] Jonathan Ellis commented on CASSANDRA-1497: --- I'd recommend creating a new ticket for this since it's a completely different approach than the old one. I'm not familiar with TypedBytes, but now that Cassandra's AbstractTypes have to/from string support (getString/fromString), that would probably be the natural way to go for us. Add input support for Hadoop Streaming -- Key: CASSANDRA-1497 URL: https://issues.apache.org/jira/browse/CASSANDRA-1497 Project: Cassandra Issue Type: New Feature Components: Hadoop Reporter: Jeremy Hanna Attachments: 0001-An-updated-avro-based-input-streaming-solution.patch related to CASSANDRA-1368 - create similar functionality for input streaming.
[jira] [Commented] (CASSANDRA-3133) nodetool netstats doesn't show streams during decommission
[ https://issues.apache.org/jira/browse/CASSANDRA-3133?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096801#comment-13096801 ] Jonathan Ellis commented on CASSANDRA-3133: --- Can you be more specific as to what you observed? nodetool netstats doesn't show streams during decommission -- Key: CASSANDRA-3133 URL: https://issues.apache.org/jira/browse/CASSANDRA-3133 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 0.8.4 Environment: debian 6.0.2.1 (squeeze), java 1.6.26 (sun, non-free packages). Reporter: Zenek Kraweznik nodetool netstats is not showing transferred files during decommission
[jira] [Commented] (CASSANDRA-3031) Add 4 byte integer type
[ https://issues.apache.org/jira/browse/CASSANDRA-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096802#comment-13096802 ] Jonathan Ellis commented on CASSANDRA-3031: --- I suppose we could provide a conversion tool that converts int -> long in cql scripts. I don't understand the claim that we need this for Hector compatibility, though. My understanding is that our varint would be just fine with 32-bit ints -- since the length is part of the byte[] encoding, we don't have to do clever things like Hadoop's vint does (http://hadoop.apache.org/common/docs/current/api/org/apache/hadoop/io/WritableUtils.html). Add 4 byte integer type --- Key: CASSANDRA-3031 URL: https://issues.apache.org/jira/browse/CASSANDRA-3031 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.8.4 Environment: any Reporter: Radim Kolar Priority: Minor Labels: hector, lhf Fix For: 1.0 Attachments: apache-cassandra-0.8.4-SNAPSHOT.jar, src.diff, test.diff Cassandra currently lacks support for a 4-byte fixed-size integer data type. The Java API Hector and the C library libcassandra like to serialize integers as 4 bytes in network order. The problem is that you can't use cassandra-cli to manipulate stored rows, and compatibility with other applications using an API that follows the Cassandra integer encoding standard is problematic too. Because adding a new datatype/validator is fairly simple, I recommend adding an int4 data type. Compatibility with Hector is important because it is the most used Java Cassandra API and a lot of applications use it. This problem was discussed several times already: http://comments.gmane.org/gmane.comp.db.hector.user/2125 https://issues.apache.org/jira/browse/CASSANDRA-2585 It would be nice to have compatibility with cassandra-cli and other applications without rewriting Hector apps.
[jira] [Commented] (CASSANDRA-3031) Add 4 byte integer type
[ https://issues.apache.org/jira/browse/CASSANDRA-3031?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096804#comment-13096804 ] Jonathan Ellis commented on CASSANDRA-3031: --- Perhaps we should make int default to varint, and add int4 and int8 types for when you really want fixed width? Defaulting to fixed seems archaic. Add 4 byte integer type --- Key: CASSANDRA-3031 URL: https://issues.apache.org/jira/browse/CASSANDRA-3031 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.8.4 Environment: any Reporter: Radim Kolar Priority: Minor Labels: hector, lhf Fix For: 1.0 Attachments: apache-cassandra-0.8.4-SNAPSHOT.jar, src.diff, test.diff Cassandra currently lacks support for a 4-byte fixed-size integer data type. The Java API Hector and the C library libcassandra like to serialize integers as 4 bytes in network order. The problem is that you can't use cassandra-cli to manipulate stored rows, and compatibility with other applications using an API that follows the Cassandra integer encoding standard is problematic too. Because adding a new datatype/validator is fairly simple, I recommend adding an int4 data type. Compatibility with Hector is important because it is the most used Java Cassandra API and a lot of applications use it. This problem was discussed several times already: http://comments.gmane.org/gmane.comp.db.hector.user/2125 https://issues.apache.org/jira/browse/CASSANDRA-2585 It would be nice to have compatibility with cassandra-cli and other applications without rewriting Hector apps.
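The distinction the CASSANDRA-3031 discussion turns on is the byte layout: Hector-style clients write a fixed 4-byte network-order value, while IntegerType (varint) canonically stores a minimal-length two's-complement value, the shape of Java's BigInteger.toByteArray(). A sketch of both layouts in Python (illustrative, not Cassandra code):

```python
import struct

def encode_int32(v):
    """Fixed-width: always 4 bytes, big-endian (network order),
    as Hector-style clients serialize ints."""
    return struct.pack('>i', v)

def encode_varint(v):
    """Minimal-length two's-complement, big-endian. The byte[] length
    itself tells the decoder how wide the value is, which is why no
    Hadoop-style vint continuation bits are needed."""
    if v == 0:
        return b'\x00'
    n = (v.bit_length() + 8) // 8  # room for the sign bit, rounded up
    return v.to_bytes(n, 'big', signed=True)
```

Both `00 00 00 2a` and `2a` decode to 42 under BigInteger semantics, but round-tripping differs: a value cassandra-cli writes back as the one-byte `2a` cannot be read by a client that insists on exactly 4 bytes, which is the interoperability complaint behind this ticket.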
[jira] [Created] (CASSANDRA-3134) Patch Hadoop Streaming Source to Support Cassandra IO
Patch Hadoop Streaming Source to Support Cassandra IO - Key: CASSANDRA-3134 URL: https://issues.apache.org/jira/browse/CASSANDRA-3134 Project: Cassandra Issue Type: New Feature Components: Hadoop Reporter: Brandyn White Priority: Minor (text is a repost from [CASSANDRA-1497|https://issues.apache.org/jira/browse/CASSANDRA-1497]) I'm the author of the Hadoopy http://bwhite.github.com/hadoopy/ python library and I'm interested in taking another stab at streaming support. Hadoopy and Dumbo both use the TypedBytes format that is in CDH for communication with the streaming jar. A simple way to get this to work is to modify the streaming code (make hadoop-cassandra-streaming.jar) so that it uses the same TypedBytes communication with streaming programs, while the actual job IO uses the Cassandra IO. The user would have the exact same streaming interface, but would specify the keyspace, etc. using environment variables. The benefits of this are 1. Easy implementation: take the Cloudera-patched version of streaming, change the IO, and add environment variable reading. 2. Client side only: as the streaming jar is included in the job, no server-side changes are required. 3. Simple maintenance: if the Hadoop Cassandra interface changes, this would require the same simple fixup as any other Hadoop job. 4. The TypedBytes format supports all of the necessary Cassandra types (https://issues.apache.org/jira/browse/HADOOP-5450) 5. Compatible with existing streaming libraries: Hadoopy and Dumbo would only need to know the path of this new streaming jar 6. No need for Avro The negatives of this are 1. Duplicative code: this would be a dupe and patch of the streaming jar. This can be stored itself as a patch. 2. I'd have to check, but this solution should work on a stock Hadoop (cluster side), though it requires TypedBytes (client side), which can be included in the jar. I can code this up but I wanted to get some feedback from the community first.
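For reference, the TypedBytes wire format discussed in CASSANDRA-3134 is a one-byte type code followed by the value. A minimal sketch of two of the encoders; the type codes follow my reading of the HADOOP-1722 format, so treat the details as an assumption rather than a spec quote:

```python
import struct

def tb_int(v):
    """TypedBytes int: type code 3, then the value as 4 bytes big-endian."""
    return struct.pack('>bi', 3, v)

def tb_bytes(b):
    """TypedBytes raw bytes: type code 0, then a 4-byte length prefix and
    the payload -- a natural carrier for Cassandra row keys and values."""
    return struct.pack('>bi', 0, len(b)) + b
```

This framing is what lets Hadoopy and Dumbo talk to the streaming jar without caring what produced the records, which is why only the jar's IO side needs patching.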
[jira] [Commented] (CASSANDRA-1497) Add input support for Hadoop Streaming
[ https://issues.apache.org/jira/browse/CASSANDRA-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096805#comment-13096805 ] Brandyn White commented on CASSANDRA-1497: -- Made a new ticket here [CASSANDRA-3134|https://issues.apache.org/jira/browse/CASSANDRA-3134] Add input support for Hadoop Streaming -- Key: CASSANDRA-1497 URL: https://issues.apache.org/jira/browse/CASSANDRA-1497 Project: Cassandra Issue Type: New Feature Components: Hadoop Reporter: Jeremy Hanna Attachments: 0001-An-updated-avro-based-input-streaming-solution.patch related to CASSANDRA-1368 - create similar functionality for input streaming. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
[jira] [Resolved] (CASSANDRA-3132) Import comparator for super column
[ https://issues.apache.org/jira/browse/CASSANDRA-3132?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-3132. --- Resolution: Invalid comparator is used to decode supercolumn names, and the [map, not row] key we are passing to stringAsType represents a supercolumn name, so using metadata.comparator is correct here. Import comparator for super column -- Key: CASSANDRA-3132 URL: https://issues.apache.org/jira/browse/CASSANDRA-3132 Project: Cassandra Issue Type: Bug Components: Tools Affects Versions: 0.8.4 Reporter: Zhong Li The method addToSuperCF in the class org.apache.cassandra.tools.SSTableImport uses key comparator to process super column name. It should use subcolumnComparator. -AbstractType comparator = metaData.comparator; +AbstractType comparator = metaData.subcolumnComparator; -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
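The resolution's reasoning can be mirrored in a small sketch (a hypothetical shape, not the actual SSTableImport code): in a super column family row, the top-level keys of the imported map are supercolumn names, so the CF comparator decodes them; only the nested keys are subcolumn names.

```python
def add_to_super_cf(row, comparator, subcolumn_comparator):
    """row maps supercolumn name -> {subcolumn name -> value}.
    The top-level key is a supercolumn name, hence the CF comparator;
    only the nested keys go through the subcolumn comparator."""
    out = {}
    for sc_name, subcolumns in row.items():
        decoded_sc = comparator(sc_name)  # supercolumn name, per the resolution
        out[decoded_sc] = {subcolumn_comparator(sub): value
                           for sub, value in subcolumns.items()}
    return out
```

With, say, a UTF8-like comparator and a LongType-like subcomparator, swapping the two (as the report proposed) would decode supercolumn names with the wrong type, which is why the ticket was resolved Invalid.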
[jira] [Commented] (CASSANDRA-2434) node bootstrapping can violate consistency
[ https://issues.apache.org/jira/browse/CASSANDRA-2434?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096807#comment-13096807 ] Jonathan Ellis commented on CASSANDRA-2434: --- I'm okay with either A or B. bq. I would say we simply can't do anything that might violate the consistency guarantee without explicit permission from the user I'm not sure I understand: are you saying that B would violate this, or just that the status quo does? node bootstrapping can violate consistency -- Key: CASSANDRA-2434 URL: https://issues.apache.org/jira/browse/CASSANDRA-2434 Project: Cassandra Issue Type: Bug Reporter: Peter Schuller Assignee: paul cannon Fix For: 1.1 Attachments: 2434.patch.txt My reading (a while ago) of the code indicates that there is no logic involved during bootstrapping that avoids consistency level violations. If I recall correctly it just grabs neighbors that are currently up. There are at least two issues I have with this behavior: * If I have a cluster where I have applications relying on QUORUM with RF=3, and bootstrapping completes based on only one node, I have just violated the supposedly guaranteed consistency semantics of the cluster. * Nodes can flap up and down at any time, so even if a human takes care to look at which nodes are up and thinks about it carefully before bootstrapping, there's no guarantee. A complication is that it depends on the use-case whether this is an issue (if all you ever do is at CL.ONE, it's fine); even in a cluster which is otherwise used for QUORUM operations you may wish to accept less-than-quorum nodes during bootstrap in various emergency situations. A potential easy fix is to have bootstrap take an argument which is the number of hosts to bootstrap from, or to assume QUORUM if none is given. (A related concern is bootstrapping across data centers.
You may *want* to bootstrap to a local node and then do a repair to avoid sending loads of data across DCs while still achieving consistency. Or even if you don't care about the consistency issues, I don't think there is currently a way to bootstrap from local nodes only.) Thoughts?
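The arithmetic behind the RF=3 example above is simply quorum = floor(RF/2) + 1, so completing a bootstrap from a single source leaves QUORUM clients exposed. A sketch of the check the ticket's "easy fix" asks bootstrap to perform; the function names are illustrative, not Cassandra's:

```python
def quorum(rf):
    """Replicas that must respond for a QUORUM operation at replication factor rf."""
    return rf // 2 + 1

def bootstrap_sources_ok(live_sources, rf, required=None):
    """Proposed fix: proceed only when at least `required` source replicas
    are up, defaulting to a quorum. An explicit `required` covers the
    emergency less-than-quorum case the ticket mentions."""
    required = quorum(rf) if required is None else required
    return live_sources >= required
```

With RF=3, quorum is 2, so bootstrapping from one live neighbor would be refused by default but can still be forced with required=1.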
[jira] [Commented] (CASSANDRA-2474) CQL support for compound columns
[ https://issues.apache.org/jira/browse/CASSANDRA-2474?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096808#comment-13096808 ] Jonathan Ellis commented on CASSANDRA-2474: --- Remember that the ideal for CQL is to have SELECT x, y, z and get back exactly columns x, y, and z. The farther we get from that the more conceptual debt we incur. bq. SELECT name AS (tweet_id, username, location), value AS body This is probably acceptable because many people are familiar with destructuring assignment and the intuition from that is accurate here: you have a single [cassandra] column, that gets destructured into multiple [resultset] columns. bq. SELECT name as (tweet_id, username|body) But this doesn't match intuition for either SQL, or destructuring assignment. Because what you're doing is actually turning multiple cassandra columns, into a single row. I think the component syntax does a better job of describing this -- you use componentX in the composite tree until you get to the parent of the named fields, and then you can use those names directly. I think we should go with the component syntax for now (since it can handle both sparse and dense) and consider adding the destructuring syntax for dense encodings later. CQL support for compound columns Key: CASSANDRA-2474 URL: https://issues.apache.org/jira/browse/CASSANDRA-2474 Project: Cassandra Issue Type: Sub-task Components: API, Core Reporter: Eric Evans Assignee: Pavel Yaskevich Labels: cql Fix For: 1.0 Attachments: screenshot-1.jpg, screenshot-2.jpg For the most part, this boils down to supporting the specification of compound column names (the CQL syntax is colon-delimted terms), and then teaching the decoders (drivers) to create structures from the results. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
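The destructuring intuition endorsed above for the first form can be shown with a toy sketch (hypothetical helper, only to illustrate one composite column name fanning out into several resultset columns):

```python
def destructure(composite_name, field_names):
    """One [cassandra] composite column name becomes several [resultset]
    columns -- the reading behind
    'SELECT name AS (tweet_id, username, location)'."""
    return dict(zip(field_names, composite_name))
```

The rejected second form would need the inverse operation, several cassandra columns collapsing into one row, which is exactly why it doesn't match the destructuring intuition.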
[jira] [Commented] (CASSANDRA-3134) Patch Hadoop Streaming Source to Support Cassandra IO
[ https://issues.apache.org/jira/browse/CASSANDRA-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096809#comment-13096809 ] Jeremy Hanna commented on CASSANDRA-3134: - fwiw - it might be simpler but not sure that you necessarily need CDH's streaming jar. Could HADOOP-1722 be backported to 0.20.203 by itself? That would allow it to be seamlessly integrated into Brisk as well. btw, this sounds great - both streaming support as well as seamless support in hadoopy and dumbo. Patch Hadoop Streaming Source to Support Cassandra IO - Key: CASSANDRA-3134 URL: https://issues.apache.org/jira/browse/CASSANDRA-3134 Project: Cassandra Issue Type: New Feature Components: Hadoop Reporter: Brandyn White Priority: Minor Labels: hadoop, hadoop_examples_streaming Original Estimate: 504h Remaining Estimate: 504h (text is a repost from [CASSANDRA-1497|https://issues.apache.org/jira/browse/CASSANDRA-1497]) I'm the author of the Hadoopy http://bwhite.github.com/hadoopy/ python library and I'm interested in taking another stab at streaming support. Hadoopy and Dumbo both use the TypedBytes format that is in CDH for communication with the streaming jar. A simple way to get this to work is modify the streaming code (make hadoop-cassandra-streaming.jar) so that it uses the same TypedBytes communication with streaming programs, but the actual job IO is using the Cassandra IO. The user would have the exact same streaming interface, but the user would specify the keyspace, etc using environmental variables. The benefits of this are 1. Easy implementation: Take the cloudera-patched version of streaming and change the IO, add environmental variable reading. 2. Only Client side: As the streaming jar is included in the job, no server side changes are required. 3. Simple maintenance: If the Hadoop Cassandra interface changes, then this would require the same simple fixup as any other Hadoop job. 4. 
The TypedBytes format supports all of the necessary Cassandra types (https://issues.apache.org/jira/browse/HADOOP-5450) 5. Compatible with existing streaming libraries: Hadoopy and Dumbo would only need to know the path of this new streaming jar 6. No need for Avro The negatives of this are 1. Duplicative code: this would be a dupe and patch of the streaming jar. This can be stored itself as a patch. 2. I'd have to check, but this solution should work on a stock Hadoop (cluster side), though it requires TypedBytes (client side), which can be included in the jar. I can code this up but I wanted to get some feedback from the community first.
[jira] [Commented] (CASSANDRA-3134) Patch Hadoop Streaming Source to Support Cassandra IO
[ https://issues.apache.org/jira/browse/CASSANDRA-3134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13096810#comment-13096810 ] Jonathan Ellis commented on CASSANDRA-3134: --- Also re-post: I'm not familiar with TypedBytes, but now that Cassandra's AbstractTypes have to/from string support (getString/fromString), that would probably be the natural way to go for us. Patch Hadoop Streaming Source to Support Cassandra IO - Key: CASSANDRA-3134 URL: https://issues.apache.org/jira/browse/CASSANDRA-3134 Project: Cassandra Issue Type: New Feature Components: Hadoop Reporter: Brandyn White Priority: Minor Labels: hadoop, hadoop_examples_streaming Original Estimate: 504h Remaining Estimate: 504h (text is a repost from [CASSANDRA-1497|https://issues.apache.org/jira/browse/CASSANDRA-1497]) I'm the author of the Hadoopy http://bwhite.github.com/hadoopy/ python library and I'm interested in taking another stab at streaming support. Hadoopy and Dumbo both use the TypedBytes format that is in CDH for communication with the streaming jar. A simple way to get this to work is to modify the streaming code (make hadoop-cassandra-streaming.jar) so that it uses the same TypedBytes communication with streaming programs, while the actual job IO uses the Cassandra IO. The user would have the exact same streaming interface, but would specify the keyspace, etc. using environment variables. The benefits of this are 1. Easy implementation: take the Cloudera-patched version of streaming, change the IO, and add environment variable reading. 2. Client side only: as the streaming jar is included in the job, no server-side changes are required. 3. Simple maintenance: if the Hadoop Cassandra interface changes, this would require the same simple fixup as any other Hadoop job. 4. The TypedBytes format supports all of the necessary Cassandra types (https://issues.apache.org/jira/browse/HADOOP-5450) 5.
Compatible with existing streaming libraries: Hadoopy and Dumbo would only need to know the path of this new streaming jar 6. No need for Avro The negatives of this are 1. Duplicative code: this would be a dupe and patch of the streaming jar. This can be stored itself as a patch. 2. I'd have to check, but this solution should work on a stock Hadoop (cluster side), though it requires TypedBytes (client side), which can be included in the jar. I can code this up but I wanted to get some feedback from the community first.