[ https://issues.apache.org/jira/browse/CASSANDRA-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14287332#comment-14287332 ]

Benedict commented on CASSANDRA-8619:
-------------------------------------

bq. (I'm not sure we need the columnFamily = null line but it doesn't hurt either).

We don't, but it made me slightly more comfortable, for no great reason. If somebody is misusing the API by adding concurrently with a sync, this may cause an NPE to be thrown and alert them to the misuse. Not a great reason, but I didn't like handing the buffer to another thread whilst its contents were still reachable from the adding thread; not that nulling the reference comes anywhere close to guaranteeing they aren't. I deleted/inserted that line a couple of times before leaving it in, so I'm also happy to remove it again.
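
To illustrate the hand-off I mean, here is a stripped-down sketch (the names are invented and this is not the actual patch; currentRow stands in for the columnFamily field):

{noformat}
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Stripped-down illustration only; the real writer is considerably more involved.
class UnsortedBufferWriter
{
    private final BlockingQueue<Map<Long, List<String>>> writeQueue = new LinkedBlockingQueue<>();
    private Map<Long, List<String>> buffer = new TreeMap<>();
    private List<String> currentRow; // plays the role of the columnFamily field

    void newRow(long key)
    {
        currentRow = buffer.get(key);
        if (currentRow == null)
            buffer.put(key, currentRow = new ArrayList<String>());
    }

    void addColumn(String value)
    {
        // If sync() has just handed the buffer off, currentRow is null, so a caller that
        // keeps adding concurrently gets an immediate NPE instead of silently mutating a
        // map the disk-writer thread may be iterating (the CME reported in this ticket).
        currentRow.add(value);
    }

    void sync() throws InterruptedException
    {
        writeQueue.put(buffer);   // hand the filled buffer to the disk-writer thread
        buffer = new TreeMap<>(); // fresh buffer for subsequent rows
        currentRow = null;        // the "columnFamily = null" line under discussion
    }
}
{noformat}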

bq. And further note that the addColumn in BufferWriter doesn't override the one of SSUW.

Perhaps it _should_, to throw UnsupportedOperationException and make it clearer? Not that it matters tremendously; it's neither the cleanest nor the ugliest code in the tree.
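
i.e. something along these lines (the signatures here are simplified stand-ins, not the real API):

{noformat}
// Sketch only; SimpleWriter/BufferWriter/addColumn are simplified stand-ins for the real classes.
class SimpleWriter
{
    void addColumn(String name, String value)
    {
        // buffered implementation elided
    }
}

class BufferWriter extends SimpleWriter
{
    @Override
    void addColumn(String name, String value)
    {
        // fail loudly rather than silently inheriting behaviour that isn't meant to be used here
        throw new UnsupportedOperationException("addColumn is not supported by BufferWriter");
    }
}
{noformat}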


> using CQLSSTableWriter gives ConcurrentModificationException
> ------------------------------------------------------------
>
>                 Key: CASSANDRA-8619
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-8619
>             Project: Cassandra
>          Issue Type: Bug
>          Components: Tools
>         Environment: sun jdk 7
> linux - ubuntu
>            Reporter: Igor Berman
>            Assignee: Benedict
>             Fix For: 2.1.3, 2.0.13
>
>         Attachments: 8619, TimeSeriesCassandraLoaderTest.java
>
>
> Using CQLSSTableWriter gives ConcurrentModificationException.
> I'm trying to load many time series into Cassandra 2.0.11-1
> using 'org.apache.cassandra:cassandra-all:2.0.11'.
> {noformat}
> java.util.ConcurrentModificationException
>       at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1115)
>       at java.util.TreeMap$ValueIterator.next(TreeMap.java:1160)
>       at org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:126)
>       at org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:202)
>       at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:187)
>       at org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:215)
> schema:
> CREATE TABLE test.sample (ts_id bigint, yr int, t timestamp, v double, tgs set<varchar>,
>     PRIMARY KEY ((ts_id, yr), t))
>     WITH CLUSTERING ORDER BY (t DESC)
>     AND COMPRESSION = {'sstable_compression': 'LZ4Compressor'};
> statement:
> INSERT INTO test.sample (ts_id, yr, t, v) VALUES (?, ?, ?, ?)
> {noformat}
> With .withBufferSizeInMB(128) it happens more often than with .withBufferSizeInMB(256).
> The code is based on http://planetcassandra.org/blog/using-the-cassandra-bulk-loader-updated/
> and calls:
> writer.addRow(tsId, year, new Date(time), value);
> Any suggestions would be highly appreciated.
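
For reference, the loading pattern described above boils down to roughly the following sketch; the schema, insert statement, buffer size and addRow call are taken from the report, while the output directory, partitioner and the generated values are assumptions:

{noformat}
import java.io.File;
import java.util.Date;

import org.apache.cassandra.dht.Murmur3Partitioner;
import org.apache.cassandra.io.sstable.CQLSSTableWriter;

// Rough reconstruction of the reporter's loader, based on the description above and the
// linked blog post; paths and generated values are placeholders.
public class TimeSeriesLoader
{
    private static final String SCHEMA =
        "CREATE TABLE test.sample (ts_id bigint, yr int, t timestamp, v double, tgs set<varchar>, " +
        "PRIMARY KEY ((ts_id, yr), t)) WITH CLUSTERING ORDER BY (t DESC)";
    private static final String INSERT =
        "INSERT INTO test.sample (ts_id, yr, t, v) VALUES (?, ?, ?, ?)";

    public static void main(String[] args) throws Exception
    {
        // Output directory must already exist, with keyspace/table subdirectories as in the blog example.
        File outputDir = new File("/tmp/sstables/test/sample");

        CQLSSTableWriter writer = CQLSSTableWriter.builder()
                                                  .inDirectory(outputDir)
                                                  .forTable(SCHEMA)
                                                  .using(INSERT)
                                                  .withPartitioner(new Murmur3Partitioner())
                                                  .withBufferSizeInMB(128)
                                                  .build();

        long tsId = 1L;
        int year = 2015;
        for (long time = 0; time < 100_000; time += 1_000)
            writer.addRow(tsId, year, new Date(time), Math.random()); // same call as in the report

        writer.close();
    }
}
{noformat}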



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
