[
https://issues.apache.org/jira/browse/CASSANDRA-8619?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14286295#comment-14286295
]
Sylvain Lebresne commented on CASSANDRA-8619:
---------------------------------------------
patch lgtm, +1 (I'm not sure we need the {{columnFamily = null}} line but it
doesn't hurt either).
bq. it looks to me like we may also be counting additions twice in the
BufferedWriter
I don't think that's the case, but I agree it's hard to follow (though the
comment on BufferedWriter tries to explain it). Basically, while BufferedWriter
extends SSUW, it bypasses some of its methods, in particular {{addColumn}},
which does the counting for SSUW. So counting is only done by the call in
BufferedWriter. Further note that the {{addColumn}} in BufferedWriter doesn't
override the one in SSUW. I agree that all this is not the cleanest code ever,
but it was a simple way to reuse the code of SSUW. I suspect that in the future
we might just want to deprecate and eventually remove SSUW (in favor of simply
CQLSSTableWriter), at which point we can clean that up.
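To make the counting path concrete, here is a minimal standalone sketch of the pattern described above; the class names are illustrative stand-ins, not the actual Cassandra code. The subclass defines an {{addColumn}} with a different signature, so it overloads rather than overrides the parent's method, and the parent's counting is never reached:

```java
// Hypothetical sketch: the child class adds its own addColumn-style method
// instead of overriding the parent's, so each addition is counted exactly once.
class SimpleUnsortedWriter {
    long count = 0;

    // Parent's addColumn counts each addition.
    void addColumn(String name, String value) {
        count++;
        // ... write the column ...
    }
}

class BufferedWriter extends SimpleUnsortedWriter {
    // Different signature: this does NOT override
    // SimpleUnsortedWriter.addColumn. Callers go through this method
    // directly, and it does the counting itself.
    void addColumn(String name, String value, long timestamp) {
        count++;
        // ... buffer the column ...
    }
}

public class CountingDemo {
    public static void main(String[] args) {
        BufferedWriter w = new BufferedWriter();
        w.addColumn("c1", "v1", 42L);
        System.out.println(w.count); // prints 1: counted once, not twice
    }
}
```

Since the parent's counting method is bypassed rather than wrapped, there is no double counting, at the cost of the non-obvious control flow the comment acknowledges.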
> using CQLSSTableWriter gives ConcurrentModificationException
> ------------------------------------------------------------
>
> Key: CASSANDRA-8619
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8619
> Project: Cassandra
> Issue Type: Bug
> Components: Tools
> Environment: sun jdk 7
> linux - ubuntu
> Reporter: Igor Berman
> Assignee: Benedict
> Fix For: 2.1.3, 2.0.13
>
> Attachments: 8619, TimeSeriesCassandraLoaderTest.java
>
>
> Using CQLSSTableWriter gives ConcurrentModificationException
> I'm trying to load many timeseries into cassandra 2.0.11-1
> using 'org.apache.cassandra:cassandra-all:2.0.11'
> {noformat}
> java.util.ConcurrentModificationException
> at java.util.TreeMap$PrivateEntryIterator.nextEntry(TreeMap.java:1115)
> at java.util.TreeMap$ValueIterator.next(TreeMap.java:1160)
> at org.apache.cassandra.db.ColumnIndex$Builder.build(ColumnIndex.java:126)
> at org.apache.cassandra.io.sstable.SSTableWriter.rawAppend(SSTableWriter.java:202)
> at org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:187)
> at org.apache.cassandra.io.sstable.SSTableSimpleUnsortedWriter$DiskWriter.run(SSTableSimpleUnsortedWriter.java:215)
> schema:
> CREATE TABLE test.sample (ts_id bigint, yr int, t timestamp, v double,
>   tgs set<varchar>, PRIMARY KEY ((ts_id, yr), t))
>   WITH CLUSTERING ORDER BY (t DESC)
>   AND COMPRESSION = {'sstable_compression': 'LZ4Compressor'};
> statement:
> INSERT INTO test.sample (ts_id, yr, t, v) VALUES (?, ?, ?, ?)
> {noformat}
> It happens more often with {{.withBufferSizeInMB(128)}} than with
> {{.withBufferSizeInMB(256)}}.
> The code is based on
> http://planetcassandra.org/blog/using-the-cassandra-bulk-loader-updated/
> {{writer.addRow(tsId, year, new Date(time), value);}}
> Any suggestions will be highly appreciated.
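The stack trace quoted above is the classic symptom of a {{TreeMap}} being iterated while it is structurally modified: its iterators are fail-fast. A minimal standalone reproduction of that failure mode (unrelated to Cassandra's actual buffer handoff, shown only to illustrate the exception):

```java
import java.util.Map;
import java.util.TreeMap;

public class CmeDemo {
    public static void main(String[] args) {
        Map<String, Integer> columns = new TreeMap<>();
        columns.put("a", 1);
        columns.put("b", 2);
        boolean threw = false;
        try {
            // Mutate the map mid-iteration, as can happen when one thread
            // keeps adding to a buffer another thread is draining.
            for (Integer v : columns.values()) {
                columns.put("c", 3); // structural modification
            }
        } catch (java.util.ConcurrentModificationException e) {
            threw = true;
        }
        System.out.println(threw); // prints true
    }
}
```

In the multi-threaded case (writer thread vs. the {{DiskWriter}} flush thread) the exception is intermittent rather than deterministic, which matches its sensitivity to buffer size.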
--
This message was sent by Atlassian JIRA
(v6.3.4#6332)