[jira] [Commented] (CASSANDRA-8530) Query on a secondary index creates huge CPU spike + unable to trace

2015-09-15 Thread Tom van den Berge (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-8530?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14745552#comment-14745552
 ] 

Tom van den Berge commented on CASSANDRA-8530:
--

Pavel,

I've been having similar problems since I started using vnodes, though I'm not 
sure vnodes are the cause. Are you using vnodes? Did you manage to find a 
solution or workaround for this problem?

Tom

> Query on a secondary index creates huge CPU spike + unable to trace
> ---
>
> Key: CASSANDRA-8530
> URL: https://issues.apache.org/jira/browse/CASSANDRA-8530
> Project: Cassandra
>  Issue Type: Bug
>  Components: API, Core
> Environment: CentOs 6.5 / Cassandra 2.1.2
>Reporter: Pavel Baranov
>
> After upgrading Cassandra from 2.0.10 to 2.1.2, we have been having all kinds 
> of issues, especially with performance.
> java version "1.7.0_65"
> Table creation:
> {noformat}
> tweets> desc table tweets;
> CREATE TABLE tweets.tweets (
> uname text,
> tweet_id bigint,
> tweet text,
> tweet_date timestamp,
> tweet_date_only text,
> uid bigint,
> PRIMARY KEY (uname, tweet_id)
> ) WITH CLUSTERING ORDER BY (tweet_id ASC)
> AND bloom_filter_fp_chance = 0.01
> AND caching = '{"keys":"ALL", "rows_per_partition":"NONE"}'
> AND comment = ''
> AND compaction = {'min_threshold': '10', 'class': 
> 'org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy', 
> 'max_threshold': '32'}
> AND compression = {'sstable_compression': 
> 'org.apache.cassandra.io.compress.LZ4Compressor'}
> AND dclocal_read_repair_chance = 0.0
> AND default_time_to_live = 0
> AND gc_grace_seconds = 864000
> AND max_index_interval = 2048
> AND memtable_flush_period_in_ms = 0
> AND min_index_interval = 128
> AND read_repair_chance = 0.1
> AND speculative_retry = '99.0PERCENTILE';
> CREATE INDEX tweets_tweet_date_only_idx ON tweets.tweets (tweet_date_only);
> CREATE INDEX tweets_uid ON tweets.tweets (uid);
> {noformat}
> With Cassandra 2.0.10 this query:
> {noformat}
> select uname from tweets where uid = 636732672 limit 1;
> {noformat}
> ran without any issues. After the upgrade, I see CPU spikes, and the load avg 
> goes from ~1 to ~13, especially if I execute the query over and over again.
> Doing "tracing on" does not work and just returns: 
> "Statement trace did not complete within 10 seconds"
> I've already done:
> - nodetool upgradesstables
> - recreated the indexes



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (CASSANDRA-10287) nodetool rebuild does not work with join_ring=false

2015-09-11 Thread Tom van den Berge (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-10287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14740431#comment-14740431
 ] 

Tom van den Berge commented on CASSANDRA-10287:
---

Cassandra 2.1.6.


> nodetool rebuild does not work with join_ring=false
> ---
>
> Key: CASSANDRA-10287
> URL: https://issues.apache.org/jira/browse/CASSANDRA-10287
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Tom van den Berge
>
> I'm setting up a new data center as described in 
> http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_add_dc_to_cluster_t.html,
>  with one change: I'm starting the new node with join_ring=false, because I 
> want to prevent reads from being routed to the new node.
> When starting nodetool rebuild as described in step 7b, the command returns 
> immediately. In the log file, I can see a message stating that streaming has 
> started, but then nothing happens, not even after a few hours of waiting.





[jira] [Commented] (CASSANDRA-9753) LOCAL_QUORUM reads can block cross-DC if there is a digest mismatch

2015-09-09 Thread Tom van den Berge (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9753?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=14737396#comment-14737396
 ] 

Tom van den Berge commented on CASSANDRA-9753:
--

I was setting up a new DC, and noticed that other DCs were sending read queries 
to this DC, even though clients did not connect to the new DC, and all queries 
were using LOCAL_* consistency. This bug proved to be the cause of this problem.

I was able to work around it by disabling speculative_retry on all tables:
{noformat}
alter table <table> with speculative_retry = 'NONE';
{noformat}

> LOCAL_QUORUM reads can block cross-DC if there is a digest mismatch
> ---
>
> Key: CASSANDRA-9753
> URL: https://issues.apache.org/jira/browse/CASSANDRA-9753
> Project: Cassandra
>  Issue Type: Bug
>  Components: Core
>Reporter: Richard Low
>
> When there is a digest mismatch during the initial read, a data read request 
> is sent to all replicas involved in the initial read. This can be more than 
> the initial blockFor if read repair was done and if speculative retry kicked 
> in. E.g. for RF 3 in two DCs, the number of reads could be 4: 2 for 
> LOCAL_QUORUM, 1 for read repair and 1 for speculative read if one replica was 
> slow. If there is then a digest mismatch, Cassandra will issue the data read 
> to all 4 and set blockFor=4. Now the read query is blocked on cross-DC 
> latency. The digest mismatch read blockFor should be capped at RF for the 
> local DC when using CL.LOCAL_*.
> You can reproduce this behaviour by creating a keyspace with 
> NetworkTopologyStrategy, RF 3 per DC, dc_local_read_repair=1.0 and ALWAYS for 
> speculative read. If you force a digest mismatch (e.g. by deleting a replica's 
> SSTables and restarting), you can see in tracing that it is blocking for 4 
> responses.
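The blockFor arithmetic described above can be sketched in a few lines of Python. This is a toy model, not Cassandra's actual code; `initial_contacts` and `digest_mismatch_block_for` are hypothetical names used only to illustrate the proposed cap.

```python
# Toy model of the scenario in this ticket: RF 3 per DC, LOCAL_QUORUM reads,
# plus global read repair and a speculative retry.

def initial_contacts(local_quorum, read_repair=False, speculative=False):
    """Replicas contacted by the initial read: the CL replicas, plus one for
    a global read repair and one for a speculative retry (possibly remote)."""
    contacts = local_quorum        # e.g. 2 of RF 3 for LOCAL_QUORUM
    if read_repair:
        contacts += 1              # extra data read for read repair
    if speculative:
        contacts += 1              # speculative read if one replica is slow
    return contacts

def digest_mismatch_block_for(contacts, local_rf, local_cl=True):
    """Pre-fix behaviour blocks on every contacted replica; the fix proposed
    here caps the count at the local DC's RF for LOCAL_* consistency levels."""
    return min(contacts, local_rf) if local_cl else contacts

contacts = initial_contacts(local_quorum=2, read_repair=True, speculative=True)
print(contacts)                                          # 4: blocks cross-DC
print(digest_mismatch_block_for(contacts, local_rf=3))   # 3: capped at local RF
```

With the cap, the digest-mismatch data read waits for at most the local RF (3) instead of all 4 contacted replicas, so the query no longer blocks on cross-DC latency.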





[jira] [Created] (CASSANDRA-10287) nodetool rebuild does not work with join_ring=false

2015-09-08 Thread Tom van den Berge (JIRA)
Tom van den Berge created CASSANDRA-10287:
-

 Summary: nodetool rebuild does not work with join_ring=false
 Key: CASSANDRA-10287
 URL: https://issues.apache.org/jira/browse/CASSANDRA-10287
 Project: Cassandra
  Issue Type: Bug
Reporter: Tom van den Berge


I'm setting up a new data center as described in 
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_add_dc_to_cluster_t.html,
 with one change: I'm starting the new node with join_ring=false, because I 
want to prevent reads from being routed to the new node.

When starting nodetool rebuild as described in step 7b, the command returns 
immediately. In the log file, I can see a message stating that streaming has 
started, but then nothing happens, not even after a few hours of waiting.





[jira] [Commented] (CASSANDRA-9582) MarshalException after upgrading to 2.1.6

2015-06-12 Thread Tom van den Berge (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14583088#comment-14583088
 ] 

Tom van den Berge commented on CASSANDRA-9582:
--

{code}
 keyspace_name | columnfamily_name | column_name        | component_index | index_name | index_options | index_type | type           | validator
---------------+-------------------+--------------------+-----------------+------------+---------------+------------+----------------+---------------------------------------------
     drillster |       InvoiceItem |            column1 |               0 |       null |          null |       null | clustering_key |    org.apache.cassandra.db.marshal.UUIDType
     drillster |       InvoiceItem |       currencyCode |            null |       null |          null |       null |        regular |    org.apache.cassandra.db.marshal.UTF8Type
     drillster |       InvoiceItem |        description |            null |       null |          null |       null |        regular |    org.apache.cassandra.db.marshal.UTF8Type
     drillster |       InvoiceItem |                key |            null |       null |          null |       null |  partition_key |   org.apache.cassandra.db.marshal.BytesType
     drillster |       InvoiceItem |         priceGross |            null |       null |          null |       null |        regular |    org.apache.cassandra.db.marshal.LongType
     drillster |       InvoiceItem |          priceNett |            null |       null |          null |       null |        regular |    org.apache.cassandra.db.marshal.LongType
     drillster |       InvoiceItem |           quantity |            null |       null |          null |       null |        regular | org.apache.cassandra.db.marshal.IntegerType
     drillster |       InvoiceItem |                sku |            null |       null |          null |       null |        regular |    org.apache.cassandra.db.marshal.UTF8Type
     drillster |       InvoiceItem |     unitPriceGross |            null |       null |          null |       null |        regular |    org.apache.cassandra.db.marshal.LongType
     drillster |       InvoiceItem |      unitPriceNett |            null |       null |          null |       null |        regular |    org.apache.cassandra.db.marshal.LongType
     drillster |       InvoiceItem |                vat |            null |       null |          null |       null |        regular |    org.apache.cassandra.db.marshal.LongType
     drillster |       InvoiceItem | vatRateBasisPoints |            null |       null |          null |       null |        regular | org.apache.cassandra.db.marshal.IntegerType
{code}

{code}
 keyspace_name               | drillster
 columnfamily_name           | InvoiceItem
 bloom_filter_fp_chance      | null
 caching                     | KEYS_ONLY
 column_aliases              | []
 comment                     |
 compaction_strategy_class   | org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy
 compaction_strategy_options | {}
 comparator                  | org.apache.cassandra.db.marshal.UUIDType
 compression_parameters      | {}
 default_time_to_live        | 0
 default_validator           | org.apache.cassandra.db.marshal.BytesType
 dropped_columns             | null
 gc_grace_seconds            | 864000
 index_interval              | 128
 is_dense                    | False
 key_aliases                 | []
 key_validator               | org.apache.cassandra.db.marshal.BytesType
 local_read_repair_chance    | 0
 max_compaction_threshold    | 32
 memtable_flush_period_in_ms | 0
 min_compaction_threshold    | 4
 populate_io_cache_on_flush  | False
 read_repair_chance          | 1
 replicate_on_write          | True
 speculative_retry           | 99.0PERCENTILE
 subcomparator               | org.apache.cassandra.db.marshal.UTF8Type
 type                        | Super
 value_alias                 | null
{code}


[jira] [Created] (CASSANDRA-9582) MarshalException after upgrading to 2.1.6

2015-06-11 Thread Tom van den Berge (JIRA)
Tom van den Berge created CASSANDRA-9582:


 Summary: MarshalException after upgrading to 2.1.6
 Key: CASSANDRA-9582
 URL: https://issues.apache.org/jira/browse/CASSANDRA-9582
 Project: Cassandra
  Issue Type: Bug
Reporter: Tom van den Berge


I've upgraded a node from 2.0.10 to 2.1.6. Before taking down the node, I ran 
nodetool upgradesstables and nodetool scrub.

When starting up the node with 2.1.6, I'm getting a MarshalException 
(stack trace included below). For some reason, it seems that C* is trying to 
convert the text value 'currencyCode' to a UUID, which it is not. I've had 
similar errors for two other columns as well; I could work around those by 
dropping the table involved, since it wasn't used anymore.

The only thing I could do was restore a snapshot and start up the old 
2.0.10 again.

The schema of the table (I've got only one table containing a column named 
'currencyCode') is:
CREATE TABLE InvoiceItem (
  key blob,
  column1 uuid,
  currencyCode text,
  description text,
  priceGross bigint,
  priceNett bigint,
  quantity varint,
  sku text,
  unitPriceGross bigint,
  unitPriceNett bigint,
  vat bigint,
  vatRateBasisPoints varint,
  PRIMARY KEY ((key), column1)
) WITH COMPACT STORAGE AND
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  index_interval=128 AND
  read_repair_chance=1.00 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  default_time_to_live=0 AND
  speculative_retry='99.0PERCENTILE' AND
  memtable_flush_period_in_ms=0 AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={};


The stack trace when starting up:

ERROR 13:51:57 Exception encountered during startup
org.apache.cassandra.serializers.MarshalException: unable to make version 1 UUID from 'currencyCode'
    at org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:188) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:242) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.config.ColumnDefinition.fromSchema(ColumnDefinition.java:397) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.config.CFMetaData.fromSchemaNoTriggers(CFMetaData.java:1750) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.config.CFMetaData.fromSchema(CFMetaData.java:1860) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.config.KSMetaData.deserializeColumnFamilies(KSMetaData.java:321) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.config.KSMetaData.fromSchema(KSMetaData.java:302) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.db.DefsTables.loadFromKeyspace(DefsTables.java:133) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:696) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.config.DatabaseDescriptor.loadSchemas(DatabaseDescriptor.java:672) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:293) [apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:536) [apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:625) [apache-cassandra-2.1.6.jar:2.1.6]
Caused by: org.apache.cassandra.serializers.MarshalException: unable to coerce 'currencyCode' to a  formatted date (long)
    at org.apache.cassandra.serializers.TimestampSerializer.dateStringToTimestamp(TimestampSerializer.java:111) ~[apache-cassandra-2.1.6.jar:2.1.6]
    at org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:184) ~[apache-cassandra-2.1.6.jar:2.1.6]
    ... 12 common frames omitted
Caused by: java.text.ParseException: Unable to parse the date: currencyCode
    at org.apache.commons.lang3.time.DateUtils.parseDateWithLeniency(DateUtils.java:336) ~[commons-lang3-3.1.jar:3.1]
    at org.apache.commons.lang3.time.DateUtils.parseDateStrictly(DateUtils.java:286) ~[commons-lang3-3.1.jar:3.1]
    at org.apache.cassandra.serializers.TimestampSerializer.dateStringToTimestamp(TimestampSerializer.java:107) ~[apache-cassandra-2.1.6.jar:2.1.6]
    ... 13 common frames omitted
org.apache.cassandra.serializers.MarshalException: unable to make version 1 UUID from 'currencyCode'
    at org.apache.cassandra.db.marshal.UUIDType.fromString(UUIDType.java:188)
    at org.apache.cassandra.db.marshal.AbstractCompositeType.fromString(AbstractCompositeType.java:242)
    at 

[jira] [Commented] (CASSANDRA-9582) MarshalException after upgrading to 2.1.6

2015-06-11 Thread Tom van den Berge (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-9582?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14582125#comment-14582125
 ] 

Tom van den Berge commented on CASSANDRA-9582:
--

This table was created as a super column family using cassandra-cli.
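The failure path in the stack trace above can be mimicked with a small Python sketch. This is illustrative only; `uuid_from_string` is a hypothetical stand-in for `UUIDType.fromString`, which first tries to parse a regular UUID and then falls back to date parsing (to build a time-based, version-1 UUID) before giving up.

```python
import uuid
from datetime import datetime

def uuid_from_string(s):
    """Stand-in for UUIDType.fromString: hex UUID first, then date fallback."""
    try:
        return uuid.UUID(s)                 # a regular hex-formatted UUID?
    except ValueError:
        pass
    try:
        ts = datetime.fromisoformat(s)      # simplified date fallback
    except ValueError:
        # both attempts fail for a plain column name like 'currencyCode',
        # matching the MarshalException wrapping a ParseException above
        raise ValueError(f"unable to make version 1 UUID from {s!r}")
    return uuid.uuid1()                     # real code derives the UUID from ts

try:
    uuid_from_string("currencyCode")
except ValueError as e:
    print(e)  # unable to make version 1 UUID from 'currencyCode'
```

This matches the trace: the composite comparator of the super column family hands the literal name 'currencyCode' to the UUID parser, which cannot interpret it either as a UUID or as a date.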


[jira] [Created] (CASSANDRA-7418) Cannot upgrade: Tried to create duplicate hard link

2014-06-19 Thread Tom van den Berge (JIRA)
Tom van den Berge created CASSANDRA-7418:


 Summary: Cannot upgrade: Tried to create duplicate hard link
 Key: CASSANDRA-7418
 URL: https://issues.apache.org/jira/browse/CASSANDRA-7418
 Project: Cassandra
  Issue Type: Bug
Reporter: Tom van den Berge
Priority: Critical


I'm trying to migrate a cluster from 1.2.14 to 2.0.8. When starting up 2.0.8, 
I'm seeing the following error in the logs:


 INFO 17:40:25,405 Snapshotting drillster, Account to pre-sstablemetamigration
ERROR 17:40:25,407 Exception encountered during startup
java.lang.RuntimeException: Tried to create duplicate hard link to /Users/tom/cassandra-data/data/drillster/Account/snapshots/pre-sstablemetamigration/drillster-Account-ic-65-Filter.db
    at org.apache.cassandra.io.util.FileUtils.createHardLink(FileUtils.java:75)
    at org.apache.cassandra.db.compaction.LegacyLeveledManifest.snapshotWithoutCFS(LegacyLeveledManifest.java:129)
    at org.apache.cassandra.db.compaction.LegacyLeveledManifest.migrateManifests(LegacyLeveledManifest.java:91)
    at org.apache.cassandra.db.compaction.LeveledManifest.maybeMigrateManifests(LeveledManifest.java:617)
    at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:274)
    at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:496)
    at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:585)


As a result, I can't start up 2.0.8, and switching back to 1.2.14 also doesn't 
work.





[jira] [Updated] (CASSANDRA-7418) Cannot upgrade: Tried to create duplicate hard link

2014-06-19 Thread Tom van den Berge (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom van den Berge updated CASSANDRA-7418:
-

Environment: OSX






[jira] [Commented] (CASSANDRA-7418) Cannot upgrade: Tried to create duplicate hard link

2014-06-19 Thread Tom van den Berge (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-7418?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=14037783#comment-14037783
 ] 

Tom van den Berge commented on CASSANDRA-7418:
--

You are right! Removing all pre-sstablemetamigration snapshots solved it. 
Thanks a lot!
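The underlying failure mode is easy to reproduce outside Cassandra: a hard link cannot be created where a link already exists, so a re-run of the snapshot step fails until the stale snapshot is removed. A minimal Python sketch (the `snapshot_file` helper is hypothetical, not Cassandra's code):

```python
import os
import tempfile

def snapshot_file(src, snap_dir):
    """Hard-link src into snap_dir, as the legacy manifest migration does."""
    os.makedirs(snap_dir, exist_ok=True)
    os.link(src, os.path.join(snap_dir, os.path.basename(src)))

with tempfile.TemporaryDirectory() as data_dir:
    sstable = os.path.join(data_dir, "drillster-Account-ic-65-Filter.db")
    open(sstable, "w").close()
    snap = os.path.join(data_dir, "snapshots", "pre-sstablemetamigration")

    snapshot_file(sstable, snap)        # first startup attempt: snapshot created
    try:
        snapshot_file(sstable, snap)    # second attempt: link already exists
    except FileExistsError:
        print("Tried to create duplicate hard link")

    # the workaround from this ticket: remove the old snapshot, then retry
    os.remove(os.path.join(snap, os.path.basename(sstable)))
    snapshot_file(sstable, snap)        # now the migration step succeeds
```

This is why deleting the pre-sstablemetamigration snapshots left over from the earlier failed startup let the upgrade proceed.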









[jira] [Resolved] (CASSANDRA-7418) Cannot upgrade: Tried to create duplicate hard link

2014-06-19 Thread Tom van den Berge (JIRA)

 [ 
https://issues.apache.org/jira/browse/CASSANDRA-7418?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Tom van den Berge resolved CASSANDRA-7418.
--

Resolution: Not a Problem






[jira] [Commented] (CASSANDRA-5706) OOM while loading key cache at startup

2013-10-18 Thread Tom van den Berge (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-5706?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13798919#comment-13798919
 ] 

Tom van den Berge commented on CASSANDRA-5706:
--

I'm having this problem again since I've upgraded to 1.2.10.

 OOM while loading key cache at startup
 --

 Key: CASSANDRA-5706
 URL: https://issues.apache.org/jira/browse/CASSANDRA-5706
 Project: Cassandra
  Issue Type: Bug
Affects Versions: 1.2.0
Reporter: Fabien Rousseau
Assignee: Fabien Rousseau
 Fix For: 1.2.7

 Attachments: 5706-OOM-while-loading-key-cache-at-startup.patch, 
 5706-v2-txt


 Steps to reproduce it:
  - have a heap of 1 GB
  - have a saved key cache without the corresponding SSTables
 Looking at KeyCacheSerializer.serialize: it always writes a Boolean.
 Looking at KeyCacheSerializer.deserialize: no Boolean is read if the SSTable 
 is missing...
 In case of a promoted index, RowIndexEntry.serializer.skip(...) should be 
 called rather than RowIndexEntry.serializer.skipPromotedIndex(...) (again, for 
 symmetry between serialization and deserialization).
 Attached is a proposed patch.
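The serialize/deserialize asymmetry described above can be illustrated with a toy Python sketch. This is not Cassandra's key-cache format; `serialize` and `deserialize` are simplified stand-ins showing the mechanism: once the reader skips a flag byte the writer always emits, every later field is read at the wrong offset, producing garbage values (which, for length fields, can trigger huge allocations and an OOM).

```python
import io
import struct

def serialize(sizes):
    """Writer side: a flag byte is ALWAYS written before each entry,
    mirroring KeyCacheSerializer.serialize (here the flag is always False)."""
    buf = io.BytesIO()
    for size in sizes:
        buf.write(struct.pack(">?i", False, size))
    return buf.getvalue()

def deserialize(data, read_flag):
    """Reader side. The correct reader consumes the flag byte for every
    entry; the buggy one skips it and goes out of sync with the stream."""
    buf, out = io.BytesIO(data), []
    try:
        while buf.tell() < len(data):
            if read_flag:
                struct.unpack(">?", buf.read(1))       # consume the flag
            out.append(struct.unpack(">i", buf.read(4))[0])
    except struct.error:
        out.append("stream desync")                    # ran off the entry boundary
    return out

data = serialize([1000, 2000])
print(deserialize(data, read_flag=True))   # [1000, 2000]
print(deserialize(data, read_flag=False))  # misaligned garbage, then desync
```

The patch restores the symmetry by making the reader consume (or skip) exactly what the writer emitted.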





[jira] [Commented] (CASSANDRA-4785) Secondary Index Sporadically Doesn't Return Rows

2013-10-07 Thread Tom van den Berge (JIRA)

[ 
https://issues.apache.org/jira/browse/CASSANDRA-4785?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=13788480#comment-13788480
 ] 

Tom van den Berge commented on CASSANDRA-4785:
--

I'm seeing this problem too (Cassandra 1.2.3), but my CF has caching 
KEYS_ONLY. It only happens to specific rows in the CF, not all of them. Also, 
it only happens on one single node in my 2-node cluster (replication factor 2).

Storing the indexed value again solves the problem for that particular row, but 
I've seen this problem happen several times now, even on the same rows, also 
after having fixed it as just described. I'm not 100% sure, but I think the 
problem occurred again after having rebuilt the node.

 Secondary Index Sporadically Doesn't Return Rows
 

 Key: CASSANDRA-4785
 URL: https://issues.apache.org/jira/browse/CASSANDRA-4785
 Project: Cassandra
  Issue Type: Bug
  Components: Core
Affects Versions: 1.1.5, 1.1.6
 Environment: Ubuntu 10.04
 Java 6 Sun
 Cassandra 1.1.5 upgraded from 1.1.2 - 1.1.3 - 1.1.5
Reporter: Arya Goudarzi
Assignee: Sam Tunnicliffe
 Attachments: entity_aliases.txt, repro.py


 I have a ColumnFamily with caching = ALL. I have 2 secondary indexes on it. I 
 have noticed if I query using the secondary index in the where clause, 
 sometimes I get the results and sometimes I don't. Until 2 weeks ago, the 
 caching option on this CF was set to NONE. So, I suspect something happened 
 in secondary index caching scheme. 
 Here are things I tried:
 1. I rebuild indexes for that CF on all nodes;
 2. I set the caching to KEYS_ONLY and rebuild the index again;
 3. I set the caching to NONE and rebuild the index again;
 None of the above helped. I suppose the caching still exists, as this behavior 
 looks like a cache mismatch.
 I did a bit of research and found CASSANDRA-4197, which could be related.
 Please advise.


