[jira] [Commented] (CASSANDRA-2790) SimpleStrategy enforces endpoints = replicas when reading with ConsistencyLevel.ONE
[ https://issues.apache.org/jira/browse/CASSANDRA-2790?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13051462#comment-13051462 ] Brandon Williams commented on CASSANDRA-2790: --- I think I may have this solved in CASSANDRA-2129; could you try the patch there? SimpleStrategy enforces endpoints = replicas when reading with ConsistencyLevel.ONE Key: CASSANDRA-2790 URL: https://issues.apache.org/jira/browse/CASSANDRA-2790 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.6 Environment: Linux 2.6.32-31-generic #61-Ubuntu SMP / Java HotSpot(TM) 64-Bit Server VM (build 14.2-b01, mixed mode) Reporter: Ivan Gorgiev We use a replication factor of 3 across our system, but in one case, on application bootstrap, we read a stored value with a local (in-process) call to StorageProxy.read(commands, ConsistencyLevel.ONE). This results in the following exception from SimpleStrategy: replication factor 3 exceeds number of endpoints 1. Shouldn't such a read operation always succeed, as there is a guaranteed single Cassandra endpoint, namely the one processing the request? This code worked with Cassandra 0.6.1 before we upgraded to 0.7.6. -- This message is automatically generated by JIRA. For more information on JIRA, see: http://www.atlassian.com/software/jira
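The reported failure mode can be sketched with a minimal model. This is hypothetical illustration code, not Cassandra's actual classes: the point is only that in 0.7 the replication strategy computes the full natural-endpoint set first and throws when the ring has fewer endpoints than the replication factor, before the consistency level is ever consulted, so even a CL.ONE read against a single-node ring with RF 3 fails.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical minimal model (not Cassandra's actual code) of the check that
// produces "replication factor N exceeds number of endpoints M". The replica
// set is computed before the consistency level is applied, so CL.ONE cannot
// rescue the read.
public class SimpleStrategyModel {
    public static List<String> calculateNaturalEndpoints(List<String> ringEndpoints, int replicationFactor) {
        if (replicationFactor > ringEndpoints.size()) {
            // mirrors the error message in the report above
            throw new IllegalStateException("replication factor " + replicationFactor
                    + " exceeds number of endpoints " + ringEndpoints.size());
        }
        return new ArrayList<String>(ringEndpoints.subList(0, replicationFactor));
    }
}
```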
[jira] [Issue Comment Edited] (CASSANDRA-2388) ColumnFamilyRecordReader fails for a given split because a host is down, even if records could reasonably be read from other replica.
[ https://issues.apache.org/jira/browse/CASSANDRA-2388?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13049724#comment-13049724 ] Mck SembWever edited comment on CASSANDRA-2388 at 6/18/11 8:32 AM: --- bq. [snip] One possibility is to use the ip octets like the RackInferringSnitch. In our usecase we have three nodes defined via PropertyFileSnitch:
{noformat}
152.90.241.22=DC1:RAC1 #node1
152.90.241.23=DC2:RAC1 #node2
152.90.241.24=DC1:RAC1 #node3
{noformat}
The only way to infer here is that even addresses belong to one dc and odd to the other, which is not how RackInferringSnitch works. When we make the connection through the other (node2) endpoint, taking the rack-inferring approach, 152.90. will say it's in DC2. (Again) this is the wrong DC and will return itself as a valid endpoint. Step (3) seems to me to be too specific to be included here. If I go only with steps (1), (2), and (4) we get this code:
{noformat}
public String[] sort_endpoints_by_proximity(String endpoint, String[] endpoints, boolean restrictToSameDC)
        throws TException, InvalidRequestException
{
    try
    {
        List<String> results = new ArrayList<String>();
        InetAddress address = InetAddress.getByName(endpoint);
        boolean endpointValid = null != Gossiper.instance.getEndpointStateForEndpoint(address);
        String datacenter = DatabaseDescriptor.getEndpointSnitch()
                                              .getDatacenter(endpointValid ? address : FBUtilities.getLocalAddress());
        List<InetAddress> addresses = new ArrayList<InetAddress>();
        for (String ep : endpoints)
        {
            addresses.add(InetAddress.getByName(ep));
        }
        DatabaseDescriptor.getEndpointSnitch().sortByProximity(address, addresses);
        for (InetAddress ep : addresses)
        {
            String dc = DatabaseDescriptor.getEndpointSnitch().getDatacenter(ep);
            if (FailureDetector.instance.isAlive(ep) && (!restrictToSameDC || datacenter.equals(dc)))
            {
                results.add(ep.getHostName());
            }
        }
        return results.toArray(new String[results.size()]);
    }
    catch (UnknownHostException e)
    {
        throw new InvalidRequestException(e.getMessage());
    }
}
{noformat}
I'm happy with this (except that {{Gossiper.instance.getEndpointStateForEndpoint(address)}} is only my guess at how to tell whether an endpoint is valid as such).
[jira] [Created] (CASSANDRA-2792) Bootstrapping node stalls. Bootstrapper thinks it is still streaming some sstables. The source nodes do not. Caused by IllegalStateException on source nodes.
Bootstrapping node stalls. Bootstrapper thinks it is still streaming some sstables. The source nodes do not. Caused by IllegalStateException on source nodes. -- Key: CASSANDRA-2792 URL: https://issues.apache.org/jira/browse/CASSANDRA-2792 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.6 Environment: Ubuntu Reporter: Dominic Williams Fix For: 0.7.7 I am bootstrapping a node into a 4-node cluster with RF 3 (1 node is currently down due to sstable issues, but the cluster is running without issues). There are two keyspaces, FightMyMonster and FMM_Studio. The first keyspace streams successfully and the whole operation is probably at 99% when it stalls on some sstables in the much smaller FMM_Studio keyspace. Netstats on the bootstrapping node reports it is still streaming:
Mode: Bootstrapping
Not sending any streams.
Streaming from: /192.168.1.4
 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-101-Data.db sections=1 progress=0/76453 - 0%
 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-103-Data.db sections=1 progress=0/90475 - 0%
 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-102-Data.db sections=1 progress=0/4304182 - 0%
Streaming from: /192.168.1.3
 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-158-Data.db sections=2 progress=0/146990 - 0%
 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/AuthorClasses-f-81-Data.db sections=1 progress=0/3992 - 0%
 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/Studio-f-70-Data.db sections=1 progress=0/1776 - 0%
 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-159-Data.db sections=2 progress=0/136829 - 0%
 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-157-Data.db sections=2 progress=0/5779597 - 0%
 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/AuthorClasses-f-82-Data.db sections=1 progress=0/161 - 0%
 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/Studio-f-71-Data.db sections=1 progress=0/135 - 0%
Pool Name   Active   Pending   Completed
Commands    n/a      0         334
Responses   n/a      0         421957
However, running netstats on the source nodes reports they are not streaming:
Mode: Normal
Nothing streaming to /192.168.1.9
Not receiving any streams.
Pool Name   Active   Pending   Completed
Commands    n/a      0         1949476
Responses   n/a      1         1778768
Examination of the logs on the source nodes shows an IllegalStateException that has likely interrupted/broken the streaming process.
17 22:27:05,924 StreamOut.java (line 126) Beginning transfer to /192.168.1.9
INFO [StreamStage:1] 2011-06-17 22:27:05,925 StreamOut.java (line 100) Flushing memtables for FMM_Studio...
INFO [StreamStage:1] 2011-06-17 22:27:06,004 StreamOut.java (line 173) Stream context metadata [/var/opt/cassandra/data/FMM_Studio/Classes-f-107-Data.db sections=1 progress=0/1585378 - 0%, /var/opt/cassandra/data/FMM_Studio/PartsData-f-100-Data.db sections=1 progress=0/76453 - 0%, /var/opt/cassandra/data/FMM_Studio/PartsData-f-98-Data.db sections=1 progress=0/4309514 - 0%, /var/opt/cassandra/data/FMM_Studio/PartsData-f-99-Data.db sections=1 progress=0/90475 - 0%], 11 sstables.
INFO [StreamStage:1] 2011-06-17 22:27:06,005 StreamOutSession.java (line 174) Streaming to /192.168.1.9
INFO [StreamStage:1] 2011-06-17 22:27:06,006 StreamOut.java (line 126) Beginning transfer to /192.168.1.9
INFO [StreamStage:1] 2011-06-17 22:27:06,007 StreamOut.java (line 100) Flushing memtables for FightMyMonster...
INFO [StreamStage:1] 2011-06-17 22:27:06,007 ColumnFamilyStore.java (line 1065) Enqueuing flush of Memtable-MonsterMarket_1@1054909557(338 bytes, 24 operations)
INFO [StreamStage:1] 2011-06-17 22:27:06,007 ColumnFamilyStore.java (line 1065) Enqueuing flush of Memtable-UserFights@239934867(1124836 bytes, 965 operations)
INFO [FlushWriter:409] 2011-06-17 22:27:06,007 Memtable.java (line 157) Writing Memtable-MonsterMarket_1@1054909557(338 bytes, 24 operations)
INFO [StreamStage:1] 2011-06-17 22:27:06,007 ColumnFamilyStore.java (line 1065) Enqueuing flush of Memtable-Users_CisIndex@1758504250(242 bytes, 8 operations)
INFO [StreamStage:1] 2011-06-17 22:27:06,008 ColumnFamilyStore.java (line 1065) Enqueuing flush of Memtable-Tribes@1510979736(18318 bytes, 703 operations)
INFO [StreamStage:1] 2011-06-17 22:27:06,008 ColumnFamilyStore.java (line 1065) Enqueuing flush of Memtable-ColumnViews_TimeUUID@864545260(2073 bytes, 63 operations)
INFO [StreamStage:1] 2011-06-17 22:27:06,008 ColumnFamilyStore.java (line 1065) Enqueuing flush of
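The asymmetry in the report, with the bootstrapping node showing files stuck at 0% while the sources show nothing streaming, can be sketched with a small model. This is hypothetical illustration code, not the actual StreamInSession implementation: the receiver only clears a pending file when bytes arrive or the source signals completion, so a source that aborts with an exception and sends neither leaves the receiver waiting forever.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of receiver-side stream bookkeeping. If the source dies
// mid-session without a completion or failure message, pendingBytes never
// drains and the bootstrap appears stalled at 0% indefinitely.
public class StreamSessionModel {
    private final Map<String, Long> pendingBytes = new HashMap<String, Long>();

    public void expectFile(String path, long totalBytes) {
        pendingBytes.put(path, totalBytes);
    }

    public void bytesReceived(String path, long n) {
        Long remaining = pendingBytes.get(path);
        if (remaining == null) {
            return; // unknown or already-completed file
        }
        long left = remaining - n;
        if (left <= 0) {
            pendingBytes.remove(path); // file fully received
        } else {
            pendingBytes.put(path, left);
        }
    }

    // What netstats effectively reports: any file still pending means "streaming".
    public boolean stillStreaming() {
        return !pendingBytes.isEmpty();
    }
}
```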
[jira] [Updated] (CASSANDRA-2530) Additional AbstractType data type definitions to enrich CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rick Shaw updated CASSANDRA-2530: --- Attachment: rebase-cql-and-ccfs-term-v1.txt Will do. Sorry, I am a bit new to the patching business. Additional AbstractType data type definitions to enrich CQL --- Key: CASSANDRA-2530 URL: https://issues.apache.org/jira/browse/CASSANDRA-2530 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.8.0 beta 2 Reporter: Rick Shaw Priority: Trivial Labels: cql Attachments: patch-to-add-4-new-AbstractTypes-and-CQL-support-v4.txt, patch-to-add-4-new-AbstractTypes-and-CQL-support-v5.txt, rebase-cql-and-ccfs-term-v1.txt Provide 5 additional Datatypes: ByteType, DateType, BooleanType, FloatType, DoubleType.
[jira] [Created] (CASSANDRA-2793) SSTable Corrupt (negative) value length encountered exception blocks compaction.
SSTable Corrupt (negative) value length encountered exception blocks compaction. -- Key: CASSANDRA-2793 URL: https://issues.apache.org/jira/browse/CASSANDRA-2793 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.6 Environment: Ubuntu Reporter: Dominic Williams Fix For: 0.7.7 A node was consistently experiencing high CPU load. Examination of the logs showed that compaction of an sstable was failing with an error:
INFO [CompactionExecutor:1] 2011-06-17 00:18:51,676 CompactionManager.java (line 395) Compacting [SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-6993-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-6994-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-6995-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-6996-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-6998-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7000-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7002-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7004-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7006-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7008-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7010-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7012-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7014-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7016-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7018-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7020-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7022-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7024-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7026-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7028-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7030-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7032-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7034-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7036-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7038-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7040-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7042-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7044-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7046-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7048-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7050-Data.db'), SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7052-Data.db')]
ERROR [CompactionExecutor:1] 2011-06-17 00:19:21,446 AbstractCassandraDaemon.java (line 114) Fatal exception in thread Thread[CompactionExecutor:1,1,main]
java.io.IOError: java.io.IOException: Corrupt (negative) value length encountered
 at org.apache.cassandra.io.util.ColumnIterator.deserializeNext(ColumnSortedMap.java:252)
 at org.apache.cassandra.io.util.ColumnIterator.next(ColumnSortedMap.java:268)
 at org.apache.cassandra.io.util.ColumnIterator.next(ColumnSortedMap.java:227)
 at java.util.concurrent.ConcurrentSkipListMap.buildFromSorted(ConcurrentSkipListMap.java:1493)
 at java.util.concurrent.ConcurrentSkipListMap.<init>(ConcurrentSkipListMap.java:1443)
 at org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java:379)
 at org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java:362)
 at org.apache.cassandra.db.SuperColumnSerializer.deserialize(SuperColumn.java:322)
 at org.apache.cassandra.db.ColumnFamilySerializer.deserializeColumns(ColumnFamilySerializer.java:129)
 at org.apache.cassandra.io.sstable.SSTableIdentityIterator.getColumnFamilyWithColumns(SSTableIdentityIterator.java:201)
 at org.apache.cassandra.io.PrecompactedRow.<init>(PrecompactedRow.java:78)
 at
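The error in the trace comes from length-prefixed framing: a column value is stored as a length followed by that many bytes, so a corrupted prefix can deserialize as a negative int, and the reader has no choice but to abort, which in 0.7 kills the whole compaction. A minimal sketch of that framing check (a hypothetical helper, not the actual ColumnIterator code):

```java
import java.io.DataInputStream;
import java.io.IOException;

// Hypothetical sketch of reading one length-prefixed value. A negative length
// can only mean on-disk corruption, so the read is aborted with the same
// message seen in the stack trace above.
public class ValueReader {
    public static byte[] readValue(DataInputStream in) throws IOException {
        int length = in.readInt(); // 4-byte big-endian length prefix
        if (length < 0) {
            throw new IOException("Corrupt (negative) value length encountered");
        }
        byte[] value = new byte[length];
        in.readFully(value);
        return value;
    }
}
```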
[jira] [Created] (CASSANDRA-2794) Scrub fails on system with blocked compaction with java.io.FileNotFoundException / Too many open files
Scrub fails on system with blocked compaction with java.io.FileNotFoundException / Too many open files -- Key: CASSANDRA-2794 URL: https://issues.apache.org/jira/browse/CASSANDRA-2794 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.6 Environment: Ubuntu Reporter: Dominic Williams Fix For: 0.7.7 A node is suffering from CASSANDRA-2793. Scrub is run to try and fix the problem. Although ulimit shows unlimited file handles allowed, scrub eventually fails with java.io.FileNotFoundException / Too many open files:
INFO [CompactionExecutor:1] 2011-06-17 00:46:26,115 CompactionManager.java (line 511) Scrubbing SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7018-Data.db')
INFO [CompactionExecutor:1] 2011-06-17 00:46:26,225 CompactionManager.java (line 652) Scrub of SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7018-Data.db') complete: 275 rows in new sstable and 0 empty (tombstoned) rows dropped
INFO [CompactionExecutor:1] 2011-06-17 00:46:26,226 CompactionManager.java (line 511) Scrubbing SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7580-Data.db')
INFO [CompactionExecutor:1] 2011-06-17 00:46:28,383 CompactionManager.java (line 652) Scrub of SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7580-Data.db') complete: 297 rows in new sstable and 0 empty (tombstoned) rows dropped
INFO [CompactionExecutor:1] 2011-06-17 00:46:28,384 CompactionManager.java (line 511) Scrubbing SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7574-Data.db')
INFO [CompactionExecutor:1] 2011-06-17 00:46:29,300 CompactionManager.java (line 652) Scrub of SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7574-Data.db') complete: 347 rows in new sstable and 0 empty (tombstoned) rows dropped
INFO [CompactionExecutor:1] 2011-06-17 00:46:29,300 CompactionManager.java (line 511) Scrubbing SSTableReader(path='/var/opt/cassandra/data/FightMyMonster/UserMonsters-f-7010-Data.db')
ERROR [CompactionExecutor:1] 2011-06-17 00:46:29,374 AbstractCassandraDaemon.java (line 114) Fatal exception in thread Thread[CompactionExecutor:1,1,main]
java.io.FileNotFoundException: /var/opt/cassandra/data/FightMyMonster/UserMonsters-tmp-f-7823-Data.db (Too many open files)
 at java.io.RandomAccessFile.open(Native Method)
 at java.io.RandomAccessFile.<init>(RandomAccessFile.java:212)
 at org.apache.cassandra.io.util.BufferedRandomAccessFile.<init>(BufferedRandomAccessFile.java:113)
 at org.apache.cassandra.io.sstable.SSTableWriter.<init>(SSTableWriter.java:78)
 at org.apache.cassandra.db.ColumnFamilyStore.createCompactionWriter(ColumnFamilyStore.java:2243)
 at org.apache.cassandra.db.CompactionManager.maybeCreateWriter(CompactionManager.java:794)
 at org.apache.cassandra.db.CompactionManager.doScrub(CompactionManager.java:534)
 at org.apache.cassandra.db.CompactionManager.access$600(CompactionManager.java:56)
 at org.apache.cassandra.db.CompactionManager$3.call(CompactionManager.java:195)
 at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
 at java.util.concurrent.FutureTask.run(FutureTask.java:138)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
This error eventually takes over:
ERROR [ReadStage:328] 2011-06-17 00:46:38,634 AbstractCassandraDaemon.java (line 114) Fatal exception in thread Thread[ReadStage:328,5,main]
java.io.IOError: java.io.FileNotFoundException: /var/opt/cassandra/data/FightMyMonster/MonsterMarket_3-f-3466-Data.db (Too many open files)
 at org.apache.cassandra.io.sstable.SSTableScanner.<init>(SSTableScanner.java:78)
 at org.apache.cassandra.io.sstable.SSTableReader.getScanner(SSTableReader.java:553)
 at org.apache.cassandra.db.RowIteratorFactory.getIterator(RowIteratorFactory.java:95)
 at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1442)
 at org.apache.cassandra.service.RangeSliceVerbHandler.doVerb(RangeSliceVerbHandler.java:49)
 at org.apache.cassandra.net.MessageDeliveryTask.run(MessageDeliveryTask.java:72)
 at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
 at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
 at java.lang.Thread.run(Thread.java:619)
Caused by: java.io.FileNotFoundException: /var/opt/cassandra/data/FightMyMonster/MonsterMarket_3-f-3466-Data.db (Too many open files)
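"Too many open files" here is a per-process file-descriptor limit being hit, not a disk problem: each scrub pass that opens readers or writers without closing them on the error path leaks descriptors until open() itself fails. A hypothetical sketch of the defensive pattern (close in finally, which is what a leak fix amounts to; this is not the actual scrub code):

```java
import java.io.IOException;
import java.io.RandomAccessFile;

// Hypothetical sketch: any code path that opens a file must release the
// descriptor even when the work in between throws, otherwise repeated
// failures (like the corrupt sstable in CASSANDRA-2793) exhaust the limit.
public class ScrubSketch {
    public static long fileLength(String path) throws IOException {
        RandomAccessFile raf = new RandomAccessFile(path, "r");
        try {
            return raf.length(); // stand-in for real per-sstable work
        } finally {
            raf.close(); // runs even on the exception path, so no descriptor leaks
        }
    }
}
```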
[jira] [Resolved] (CASSANDRA-2794) Scrub fails on system with blocked compaction with java.io.FileNotFoundException / Too many open files
[ https://issues.apache.org/jira/browse/CASSANDRA-2794?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis resolved CASSANDRA-2794. --- Resolution: Duplicate Fix Version/s: (was: 0.7.7) Scrub leaking file handles was fixed for 0.7.7 in CASSANDRA-2669. Scrub fails on system with blocked compaction with java.io.FileNotFoundException / Too many open files -- Key: CASSANDRA-2794 URL: https://issues.apache.org/jira/browse/CASSANDRA-2794 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.6 Environment: Ubuntu Reporter: Dominic Williams
[jira] [Commented] (CASSANDRA-2530) Additional AbstractType data type definitions to enrich CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13051587#comment-13051587 ] Jonathan Ellis commented on CASSANDRA-2530: --- Still getting patch failures on the 0.8 branch:
{noformat}
form:svn-0.8 jonathan$ patch -p0 < patch-to-add-4-new-AbstractTypes-and-CQL-support-v5.txt
patching file src/java/org/apache/cassandra/cql/Cql.g
Hunk #1 succeeded at 296 with fuzz 2 (offset 5 lines).
Hunk #2 FAILED at 397.
1 out of 2 hunks FAILED -- saving rejects to file src/java/org/apache/cassandra/cql/Cql.g.rej
patching file src/java/org/apache/cassandra/cql/CreateColumnFamilyStatement.java
Hunk #1 FAILED at 71.
1 out of 1 hunk FAILED -- saving rejects to file src/java/org/apache/cassandra/cql/CreateColumnFamilyStatement.java.rej
patching file src/java/org/apache/cassandra/cql/Term.java
patching file src/java/org/apache/cassandra/db/marshal/BooleanType.java
patching file src/java/org/apache/cassandra/db/marshal/DateType.java
patching file src/java/org/apache/cassandra/db/marshal/DoubleType.java
patching file src/java/org/apache/cassandra/db/marshal/FloatType.java
{noformat}
Is this against something else?
[jira] [Updated] (CASSANDRA-2530) Additional AbstractType data type definitions to enrich CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis updated CASSANDRA-2530: --- Comment: was deleted (was: Still getting patch failures on the 0.8 branch: [snip] Is this against something else?)
[jira] [Assigned] (CASSANDRA-2792) Bootstrapping node stalls. Bootstrapper thinks it is still streaming some sstables. The source nodes do not. Caused by IllegalStateException on source nodes.
[ https://issues.apache.org/jira/browse/CASSANDRA-2792?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Jonathan Ellis reassigned CASSANDRA-2792: - Assignee: Sylvain Lebresne Bootstrapping node stalls. Bootstrapper thinks it is still streaming some sstables. The source nodes do not. Caused by IllegalStateException on source nodes. - Key: CASSANDRA-2792 URL: https://issues.apache.org/jira/browse/CASSANDRA-2792 Project: Cassandra Issue Type: Bug Components: Core Affects Versions: 0.7.6 Environment: Ubuntu Reporter: Dominic Williams Assignee: Sylvain Lebresne Fix For: 0.7.7 Original Estimate: 4h Remaining Estimate: 4h I am bootstrapping a node into a 4 node cluster with RF3 (1 node is currently down due to sstable issues, but the cluster is running without issues). There are two keyspaces FightMyMonster and FMM_Studio. The first keyspace successfully streams and the whole operation is probably at 99% when it stalls on some sstables in the much smaller FMM_Studio keyspace. Netstats on the bootstrapping node reports it is still streaming: Mode: Bootstrapping Not sending any streams. 
Streaming from: /192.168.1.4 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-101-Data.db sections=1 progress=0/76453 - 0% FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-103-Data.db sections=1 progress=0/90475 - 0% FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-102-Data.db sections=1 progress=0/4304182 - 0% Streaming from: /192.168.1.3 FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-158-Data.db sections=2 progress=0/146990 - 0% FMM_Studio: /var/opt/cassandra/data/FMM_Studio/AuthorClasses-f-81-Data.db sections=1 progress=0/3992 - 0% FMM_Studio: /var/opt/cassandra/data/FMM_Studio/Studio-f-70-Data.db sections=1 progress=0/1776 - 0% FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-159-Data.db sections=2 progress=0/136829 - 0% FMM_Studio: /var/opt/cassandra/data/FMM_Studio/PartsData-f-157-Data.db sections=2 progress=0/5779597 - 0% FMM_Studio: /var/opt/cassandra/data/FMM_Studio/AuthorClasses-f-82-Data.db sections=1 progress=0/161 - 0% FMM_Studio: /var/opt/cassandra/data/FMM_Studio/Studio-f-71-Data.db sections=1 progress=0/135 - 0% Pool Name Active Pending Completed Commands n/a 0 334 Responses n/a 0 421957 However, running netstats on the source nodes reports they are not streaming: Mode: Normal Nothing streaming to /192.168.1.9 Not receiving any streams. Pool Name Active Pending Completed Commands n/a 0 1949476 Responses n/a 1 1778768 Examination of the logs on the source nodes shows an IllegalStateException that has likely interrupted/broken the streaming process. 17 22:27:05,924 StreamOut.java (line 126) Beginning transfer to /192.168.1.9 INFO [StreamStage:1] 2011-06-17 22:27:05,925 StreamOut.java (line 100) Flushing memtables for FMM_Studio... 
INFO [StreamStage:1] 2011-06-17 22:27:06,004 StreamOut.java (line 173) Stream context metadata [/var/opt/cassandra/data/FMM_Studio/Classes-f-107-Data.db sections=1 progress=0/1585378 - 0%, /var/opt/cassandra/data/FMM_Studio/PartsData-f-100-Data.db sections=1 progress=0/76453 - 0%, /var/opt/cassandra/data/FMM_Studio/PartsData-f-98-Data.db sections=1 progress=0/4309514 - 0%, /var/opt/cassandra/data/FMM_Studio/PartsData-f-99-Data.db sections=1 progress=0/90475 - 0%], 11 sstables. INFO [StreamStage:1] 2011-06-17 22:27:06,005 StreamOutSession.java (line 174) Streaming to /192.168.1.9 INFO [StreamStage:1] 2011-06-17 22:27:06,006 StreamOut.java (line 126) Beginning transfer to /192.168.1.9 INFO [StreamStage:1] 2011-06-17 22:27:06,007 StreamOut.java (line 100) Flushing memtables for FightMyMonster... INFO [StreamStage:1] 2011-06-17 22:27:06,007 ColumnFamilyStore.java (line 1065) Enqueuing flush of Memtable-MonsterMarket_1@1054909557(338 bytes, 24 operations) INFO [StreamStage:1] 2011-06-17 22:27:06,007 ColumnFamilyStore.java (line 1065) Enqueuing flush of Memtable-UserFights@239934867(1124836 bytes, 965 operations) INFO [FlushWriter:409] 2011-06-17 22:27:06,007 Memtable.java (line 157) Writing Memtable-MonsterMarket_1@1054909557(338 bytes, 24 operations) INFO [StreamStage:1] 2011-06-17 22:27:06,007 ColumnFamilyStore.java (line 1065) Enqueuing flush of Memtable-Users_CisIndex@1758504250(242 bytes, 8 operations) INFO [StreamStage:1] 2011-06-17
[jira] [Commented] (CASSANDRA-2787) java agent option missing in cassandra.bat file
[ https://issues.apache.org/jira/browse/CASSANDRA-2787?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13051589#comment-13051589 ] Jonathan Ellis commented on CASSANDRA-2787: --- Really, forward slashes? java agent option missing in cassandra.bat file --- Key: CASSANDRA-2787 URL: https://issues.apache.org/jira/browse/CASSANDRA-2787 Project: Cassandra Issue Type: Bug Components: Packaging Affects Versions: 0.8.0 Reporter: rene kochen Priority: Minor This option must be included in cassandra.bat: -javaagent:%CASSANDRA_HOME%/lib/jamm-0.2.2.jar Otherwise you see the following warnings in cassandra log: WARN 12:02:32,478 MemoryMeter uninitialized (jamm not specified as java agent); assuming liveRatio of 10.0. Usually this means cassandra-env.sh disabled jamm because you are using a buggy JRE; upgrade to the Sun JRE instead
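A sketch of the cassandra.bat change the report is asking for (the surrounding option-building lines vary by release, and the backslash path style here addresses the "forward slashes?" review comment; treat this as illustrative, not the verbatim file):

```bat
rem Load jamm as a java agent so MemoryMeter can measure live memtable sizes
rem (without it, Cassandra falls back to an assumed liveRatio of 10.0).
set JAVA_OPTS=%JAVA_OPTS% -javaagent:"%CASSANDRA_HOME%\lib\jamm-0.2.2.jar"
```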
svn commit: r1137231 [3/3] - in /cassandra/drivers/java: ./ src/org/apache/cassandra/cql/jdbc/
Modified: cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraStatement.java URL: http://svn.apache.org/viewvc/cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraStatement.java?rev=1137231&r1=1137230&r2=1137231&view=diff == --- cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraStatement.java (original) +++ cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraStatement.java Sat Jun 18 19:19:13 2011 @@ -20,16 +20,22 @@ */ package org.apache.cassandra.cql.jdbc; +import static org.apache.cassandra.cql.jdbc.Utils.*; + import java.sql.Connection; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; +import java.sql.SQLFeatureNotSupportedException; +import java.sql.SQLNonTransientConnectionException; +import java.sql.SQLRecoverableException; +import java.sql.SQLSyntaxErrorException; +import java.sql.SQLTransientConnectionException; import java.sql.SQLWarning; import java.sql.Statement; import java.util.regex.Pattern; import org.apache.cassandra.thrift.CqlResult; -import org.apache.cassandra.thrift.CqlResultType; import org.apache.cassandra.thrift.InvalidRequestException; import org.apache.cassandra.thrift.SchemaDisagreementException; import org.apache.cassandra.thrift.TimedOutException; @@ -40,555 +46,373 @@ import org.apache.thrift.TException; * Cassandra statement: implementation class for {@link PreparedStatement}. */ -class CassandraStatement implements Statement +class CassandraStatement extends AbstractStatement implements Statement { protected static final Pattern UpdatePattern = Pattern.compile("UPDATE .*", Pattern.CASE_INSENSITIVE); - + /** The connection. */ -protected final org.apache.cassandra.cql.jdbc.Connection connection; +protected org.apache.cassandra.cql.jdbc.Connection connection; /** The cql. 
*/ -protected final String cql; +protected String cql; + +protected int fetchDirection = ResultSet.FETCH_FORWARD; + +protected int fetchSize = 0; -/** - * Constructor using fields. - * @param con cassandra connection. - */ -CassandraStatement(org.apache.cassandra.cql.jdbc.Connection con) +protected int maxFieldSize = 0; + +protected int maxRows = 0; + +protected int resultSetType = ResultSet.TYPE_FORWARD_ONLY; + +protected int resultSetConcurrency = ResultSet.TYPE_FORWARD_ONLY; + +protected int resultSetHoldability = ResultSet.HOLD_CURSORS_OVER_COMMIT; + +protected ResultSet currentResultSet = null; + +protected int updateCount = -1; + +protected boolean escapeProcessing = true; + +CassandraStatement(org.apache.cassandra.cql.jdbc.Connection con) throws SQLException { this(con, null); } -/** - * Constructor using fields. - * - * @param con cassandra connection - * @param cql the cql - */ -CassandraStatement(org.apache.cassandra.cql.jdbc.Connection con, String cql) +CassandraStatement(org.apache.cassandra.cql.jdbc.Connection con, String cql) throws SQLException { this.connection = con; this.cql = cql; } - -/** - * @param iface - * @return - * @throws SQLException - */ -public boolean isWrapperFor(Class<?> iface) throws SQLException +CassandraStatement(org.apache.cassandra.cql.jdbc.Connection con, String cql, int resultSetType, int resultSetConcurrency) throws SQLException { -throw new UnsupportedOperationException("method not supported"); +this(con,cql,resultSetType,resultSetConcurrency, ResultSet.HOLD_CURSORS_OVER_COMMIT); } -/** - * @param <T> - * @param iface - * @return - * @throws SQLException - */ -public <T> T unwrap(Class<T> iface) throws SQLException +CassandraStatement(org.apache.cassandra.cql.jdbc.Connection con, String cql, int resultSetType, int resultSetConcurrency, + int resultSetHoldability) throws SQLException { -throw new UnsupportedOperationException("method not supported"); -} +this.connection = con; +this.cql = cql; + +if (!(resultSetType == ResultSet.TYPE_FORWARD_ONLY + || resultSetType == ResultSet.TYPE_SCROLL_INSENSITIVE + || resultSetType == ResultSet.TYPE_SCROLL_SENSITIVE)) throw new SQLSyntaxErrorException(BAD_TYPE_RSET); +this.resultSetType = resultSetType; + +if (!(resultSetConcurrency == ResultSet.CONCUR_READ_ONLY + || resultSetConcurrency == ResultSet.CONCUR_UPDATABLE )) throw new SQLSyntaxErrorException(BAD_TYPE_RSET); +this.resultSetConcurrency = resultSetConcurrency; + +if (!(resultSetHoldability == ResultSet.HOLD_CURSORS_OVER_COMMIT + || resultSetHoldability == ResultSet.CLOSE_CURSORS_AT_COMMIT)) throw new
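The constructor in the diff above validates each ResultSet characteristic against its small set of legal int constants and fails fast otherwise. A minimal standalone sketch of that pattern (the class and method names here are hypothetical, not part of the driver):

```java
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.SQLSyntaxErrorException;

// Sketch of the argument-validation pattern used by the new CassandraStatement
// constructor: each JDBC ResultSet characteristic is an int that must match one
// of a few constants, and anything else raises SQLSyntaxErrorException.
class ResultSetParamCheck
{
    static void check(int type, int concurrency, int holdability) throws SQLException
    {
        if (!(type == ResultSet.TYPE_FORWARD_ONLY
              || type == ResultSet.TYPE_SCROLL_INSENSITIVE
              || type == ResultSet.TYPE_SCROLL_SENSITIVE))
            throw new SQLSyntaxErrorException("bad ResultSet type: " + type);

        if (!(concurrency == ResultSet.CONCUR_READ_ONLY
              || concurrency == ResultSet.CONCUR_UPDATABLE))
            throw new SQLSyntaxErrorException("bad ResultSet concurrency: " + concurrency);

        if (!(holdability == ResultSet.HOLD_CURSORS_OVER_COMMIT
              || holdability == ResultSet.CLOSE_CURSORS_AT_COMMIT))
            throw new SQLSyntaxErrorException("bad ResultSet holdability: " + holdability);
    }
}
```

Validating in the constructor means a statement object can never exist with an illegal combination, which is what lets the rest of the driver trust those fields.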
svn commit: r1137231 [1/3] - in /cassandra/drivers/java: ./ src/org/apache/cassandra/cql/jdbc/
Author: jbellis Date: Sat Jun 18 19:19:13 2011 New Revision: 1137231 URL: http://svn.apache.org/viewvc?rev=1137231&view=rev Log: improve jdbc semantics patch by Rick Shaw; reviewed by jbellis for CASSANDRA-2754 Added: cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractCassandraConnection.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractStatement.java Modified: cassandra/drivers/java/CHANGES.txt cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractResultSet.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/CResultSet.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraConnection.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraDriver.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraPreparedStatement.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraResultSet.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/CassandraStatement.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/ColumnDecoder.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/Connection.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/TypedColumn.java cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/Utils.java Modified: cassandra/drivers/java/CHANGES.txt URL: http://svn.apache.org/viewvc/cassandra/drivers/java/CHANGES.txt?rev=1137231&r1=1137230&r2=1137231&view=diff == --- cassandra/drivers/java/CHANGES.txt (original) +++ cassandra/drivers/java/CHANGES.txt Sat Jun 18 19:19:13 2011 @@ -1,2 +1,2 @@ 1.0.4 - * improve JDBC spec compliance (CASSANDRA-2720) + * improve JDBC spec compliance (CASSANDRA-2720, 2754) Added: cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractCassandraConnection.java URL: http://svn.apache.org/viewvc/cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractCassandraConnection.java?rev=1137231&view=auto == --- 
cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractCassandraConnection.java (added) +++ cassandra/drivers/java/src/org/apache/cassandra/cql/jdbc/AbstractCassandraConnection.java Sat Jun 18 19:19:13 2011 @@ -0,0 +1,258 @@ +/* + * + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. 
+ * + */ +package org.apache.cassandra.cql.jdbc; + +import java.sql.Array; +import java.sql.Blob; +import java.sql.CallableStatement; +import java.sql.Clob; +import java.sql.NClob; +import java.sql.PreparedStatement; +import java.sql.SQLException; +import java.sql.SQLFeatureNotSupportedException; +import java.sql.SQLXML; +import java.sql.Savepoint; +import java.sql.Struct; +import java.util.Map; + +public class AbstractCassandraConnection +{ +protected static final String NOT_SUPPORTED = "the Cassandra implementation does not support this method"; + +public Array createArrayOf(String arg0, Object[] arg1) throws SQLException +{ +throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +} + +public Blob createBlob() throws SQLException +{ +throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +} + +public Clob createClob() throws SQLException +{ +throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +} + +public NClob createNClob() throws SQLException +{ +throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +} + +public SQLXML createSQLXML() throws SQLException +{ +throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +} + +public Struct createStruct(String arg0, Object[] arg1) throws SQLException +{ +throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +} + +public Map<String, Class<?>> getTypeMap() throws SQLException +{ +throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +} + +public CallableStatement prepareCall(String arg0) throws SQLException +{ +throw new SQLFeatureNotSupportedException(NOT_SUPPORTED); +} + +public CallableStatement
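The new AbstractCassandraConnection above collects the optional JDBC methods the driver cannot implement into one base class that throws SQLFeatureNotSupportedException, so concrete classes only override what they actually support. A miniature sketch of that pattern (class names here are hypothetical):

```java
import java.sql.Blob;
import java.sql.SQLException;
import java.sql.SQLFeatureNotSupportedException;

// Base class stubs every unsupported optional JDBC method with the
// spec-mandated exception; subclasses override only what they implement.
abstract class UnsupportedBase
{
    protected static final String NOT_SUPPORTED =
        "the Cassandra implementation does not support this method";

    public Blob createBlob() throws SQLException
    {
        throw new SQLFeatureNotSupportedException(NOT_SUPPORTED);
    }
}

// A concrete class inherits the stubs for free.
class DemoConnection extends UnsupportedBase
{
}
```

Throwing SQLFeatureNotSupportedException (rather than a bare UnsupportedOperationException) is what the JDBC spec expects for optional features, which is the compliance point of this commit.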
[jira] [Updated] (CASSANDRA-2530) Additional AbstractType data type definitions to enrich CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rick Shaw updated CASSANDRA-2530: - Attachment: (was: patch-to-add-4-new-AbstractTypes-and-CQL-support-v5.txt) Additional AbstractType data type definitions to enrich CQL --- Key: CASSANDRA-2530 URL: https://issues.apache.org/jira/browse/CASSANDRA-2530 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.8.0 beta 2 Reporter: Rick Shaw Priority: Trivial Labels: cql Attachments: rebase-for-new-abstracttypes-and cql-stuff-v1.txt Provide 5 additional Datatypes: ByteType, DateType, BooleanType, FloatType, DoubleType.
[jira] [Updated] (CASSANDRA-2530) Additional AbstractType data type definitions to enrich CQL
[ https://issues.apache.org/jira/browse/CASSANDRA-2530?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ] Rick Shaw updated CASSANDRA-2530: - Attachment: (was: patch-to-add-4-new-AbstractTypes-and-CQL-support-v4.txt) Additional AbstractType data type definitions to enrich CQL --- Key: CASSANDRA-2530 URL: https://issues.apache.org/jira/browse/CASSANDRA-2530 Project: Cassandra Issue Type: Improvement Components: Core Affects Versions: 0.8.0 beta 2 Reporter: Rick Shaw Priority: Trivial Labels: cql Attachments: rebase-for-new-abstracttypes-and cql-stuff-v1.txt Provide 5 additional Datatypes: ByteType, DateType, BooleanType, FloatType, DoubleType.
[jira] [Issue Comment Edited] (CASSANDRA-2754) Consolidating Ticket for JDBC Semantic Improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13051596#comment-13051596 ] Rick Shaw edited comment on CASSANDRA-2754 at 6/18/11 9:03 PM: --- the property: {{cassandra.dir}} in {{build.properties.default}} takes a good guess as to how one checked out the sources. The intention is for the user to add a personalized {{build.properties}} that states their overriding intentions for all properties. I will be glad to set the default value to something else that is a better guess at where the C* home is located. Perhaps if we added an empty {{build.properties}} file, save for a one line comment stating the file's purpose? was (Author: ardot): the property: {{cassandra.dir}} in {{build.properties.default}} takes a good guess as to how one checked out the sources. The intention is for the user to add a personalized {{build.properties}} that states their overriding intentions for all properties. I will be glad to set the default value to something else that is a better guess at where the C* home is located. Perhaps if we added an empty {{build.properties}} file, save for a on line comment stating the file's purpose? Consolidating Ticket for JDBC Semantic Improvements --- Key: CASSANDRA-2754 URL: https://issues.apache.org/jira/browse/CASSANDRA-2754 Project: Cassandra Issue Type: Improvement Affects Versions: 0.8.0 Reporter: Rick Shaw Assignee: Rick Shaw Priority: Minor Labels: CQL, JDBC Fix For: 0.8.2 Attachments: jdbc-consoidated-v1.txt, jdbc-consoidated-v2.txt First round of improved semantics for the JDBC Suite require a coordinated patch that covers multiple Classes in o.a.c.cql.jdbc package. This ticket is meant to house the consolidated patch and to organize the multiple existing tickets as sub-tickets relating to this topic.
[jira] [Commented] (CASSANDRA-2754) Consolidating Ticket for JDBC Semantic Improvements
[ https://issues.apache.org/jira/browse/CASSANDRA-2754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13051596#comment-13051596 ] Rick Shaw commented on CASSANDRA-2754: -- the property: {{cassandra.dir}} in {{build.properties.default}} takes a good guess as to how one checked out the sources. The intention is for the user to add a personalized {{build.properties}} that states their overriding intentions for all properties. I will be glad to set the default value to something else that is a better guess at where the C* home is located. Perhaps if we added an empty {{build.properties}} file, save for a one line comment stating the file's purpose? Consolidating Ticket for JDBC Semantic Improvements --- Key: CASSANDRA-2754 URL: https://issues.apache.org/jira/browse/CASSANDRA-2754 Project: Cassandra Issue Type: Improvement Affects Versions: 0.8.0 Reporter: Rick Shaw Assignee: Rick Shaw Priority: Minor Labels: CQL, JDBC Fix For: 0.8.2 Attachments: jdbc-consoidated-v1.txt, jdbc-consoidated-v2.txt First round of improved semantics for the JDBC Suite require a coordinated patch that covers multiple Classes in o.a.c.cql.jdbc package. This ticket is meant to house the consolidated patch and to organize the multiple existing tickets as sub-tickets relating to this topic.
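The override mechanism described in the comment above (a checked-in {{build.properties.default}} whose values a personal {{build.properties}} shadows) would look roughly like this; the path is a made-up example, not the actual default:

```properties
# build.properties -- personal overrides, never checked in.
# Any property set here shadows the same key in build.properties.default.
# cassandra.dir: where your Cassandra checkout lives (example path)
cassandra.dir=/home/me/src/cassandra
```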