[jira] [Updated] (CASSANDRA-13109) Lightweight transactions temporarily fail after upgrade from 2.1 to 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jeremy Hanna updated CASSANDRA-13109:
-------------------------------------
    Labels: LWT  (was: )

> Lightweight transactions temporarily fail after upgrade from 2.1 to 3.0
> ------------------------------------------------------------------------
>
>                 Key: CASSANDRA-13109
>                 URL: https://issues.apache.org/jira/browse/CASSANDRA-13109
>             Project: Cassandra
>          Issue Type: Bug
>            Reporter: Samuel Klock
>            Assignee: Samuel Klock
>            Priority: Major
>              Labels: LWT
>             Fix For: 3.0.11, 3.11.0
>
>         Attachments: 13109-3.0.txt
>
>
> We've observed this upgrading from 2.1.15 to 3.0.8 and from 2.1.16 to 3.0.10: some lightweight transactions executed on upgraded nodes fail with a read failure. The following conditions seem relevant to triggering it:
> * The transaction must be conditioned on the current value of at least one column; e.g., {{IF NOT EXISTS}} transactions don't seem to be affected.
> * A collection column (in our case, a map) should be defined on the table on which the transaction is executed.
> * The transaction should be executed before the node's sstables are upgraded. The failure does not occur after the sstables have been upgraded (whether via {{nodetool upgradesstables}} or effectively via compaction).
> * Upgraded nodes seem to be able to participate in lightweight transactions as long as they're not the coordinator.
> * The values in the row being manipulated by the transaction must have been consistently manipulated by lightweight transactions (perhaps the existence of Paxos state for the partition is somehow relevant?).
> * In 3.0.10, it _seems_ to be necessary to have the partition split across multiple legacy sstables. This was not necessary to reproduce the bug in 3.0.8 or 3.0.9.
> For applications affected by this bug, a possible workaround is to prevent nodes being upgraded from coordinating requests until their sstables have been upgraded.
> We're able to reproduce this when upgrading from 2.1.16 to 3.0.10 with the following steps on a single-node cluster, using a mostly pristine {{cassandra.yaml}} from the source distribution.
> # Start Cassandra 2.1.16 on the node.
> # Create a table with a collection column and insert some data into it:
> {code:sql}
> CREATE KEYSPACE test WITH REPLICATION = {'class': 'SimpleStrategy', 'replication_factor': 1};
> CREATE TABLE test.test (key TEXT PRIMARY KEY, cas_target TEXT, some_collection MAP<TEXT, TEXT>);
> INSERT INTO test.test (key, cas_target, some_collection) VALUES ('key', 'value', {}) IF NOT EXISTS;
> {code}
> # Flush the row to an sstable: {{nodetool flush}}.
> # Update the row:
> {code:sql}
> UPDATE test.test SET cas_target = 'newvalue', some_collection = {} WHERE key = 'key' IF cas_target = 'value';
> {code}
> # Drain the node: {{nodetool drain}}.
> # Stop the node, upgrade to 3.0.10, and start the node.
> # Attempt to update the row again:
> {code:sql}
> UPDATE test.test SET cas_target = 'lastvalue' WHERE key = 'key' IF cas_target = 'newvalue';
> {code}
> Using {{cqlsh}}, if the error is reproduced, the following output will be returned:
> {noformat}
> $ ./cqlsh <<< "UPDATE test.test SET cas_target = 'newvalue', some_collection = {} WHERE key = 'key' IF cas_target = 'value';"  (start: 2016-12-22 10:14:27 EST)
> :2:ReadFailure: Error from server: code=1300 [Replica(s) failed to execute read] message="Operation failed - received 0 responses and 1 failures" info={'failures': 1, 'received_responses': 0, 'required_responses': 1, 'consistency': 'QUORUM'}
> {noformat}
> and the following stack trace will be present in the system log:
> {noformat}
> WARN  15:14:28 Uncaught exception on thread Thread[SharedPool-Worker-10,10,main]: {}
> java.lang.RuntimeException: java.lang.NullPointerException
> 	at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:2476) ~[main/:na]
> 	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_101]
> 	at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$FutureTask.run(AbstractLocalAwareExecutorService.java:164) ~[main/:na]
> 	at org.apache.cassandra.concurrent.AbstractLocalAwareExecutorService$LocalSessionFutureTask.run(AbstractLocalAwareExecutorService.java:136) [main/:na]
> 	at org.apache.cassandra.concurrent.SEPWorker.run(SEPWorker.java:105) [main/:na]
> 	at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
> Caused by: java.lang.NullPointerException: null
> 	at org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:617) ~[main/:na]
> 	at org.apache.cassandra.db.rows.Row$Merger$ColumnDataReducer.getReduced(Row.java:569) ~[main/:na]
> 	at org.apache.cassandra.utils.MergeIterator$ManyToOne.consume(MergeIterator.java:220)
> {noformat}
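The workaround described above (keep partially upgraded nodes from coordinating until their sstables are upgraded) can be approximated client-side by recognizing this failure signature and retrying through a different coordinator. A minimal sketch in plain Python: the helper names are hypothetical and this is not Cassandra or driver code, just a parser for the {{cqlsh}} error line quoted in the description and the retry decision it suggests.

```python
import ast
import re

# The ReadFailure line quoted in the description, reproduced as test data.
CQLSH_ERROR = (
    "ReadFailure: Error from server: code=1300 "
    "[Replica(s) failed to execute read] "
    'message="Operation failed - received 0 responses and 1 failures" '
    "info={'failures': 1, 'received_responses': 0, 'required_responses': 1, "
    "'consistency': 'QUORUM'}"
)


def parse_read_failure(line):
    """Extract the server error code and the info payload from a cqlsh
    ReadFailure line. Returns (code, info_dict), or None if the line is
    not a ReadFailure report."""
    m = re.search(r"ReadFailure: Error from server: code=(\w+).*info=({.*})", line)
    if m is None:
        return None
    # cqlsh renders the info payload as a Python-literal dict.
    return m.group(1), ast.literal_eval(m.group(2))


def should_retry_elsewhere(code, info):
    """Heuristic matching the workaround: a ReadFailure in which every
    contacted replica failed (0 responses received) during an LWT is worth
    retrying through a coordinator whose sstables are already upgraded."""
    return (
        code == "1300"
        and info.get("received_responses") == 0
        and info.get("failures", 0) > 0
    )
```

For the quoted error, `parse_read_failure` yields code `1300` with the QUORUM info dict, and `should_retry_elsewhere` returns true, so an affected client would reissue the conditional update against a fully upgraded node rather than surfacing the failure.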
[jira] [Updated] (CASSANDRA-13109) Lightweight transactions temporarily fail after upgrade from 2.1 to 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sylvain Lebresne updated CASSANDRA-13109:
-----------------------------------------
       Resolution: Fixed
    Fix Version/s: 3.11.0
                   3.0.11
    Reproduced In: 3.0.10, 3.0.9, 3.0.8  (was: 3.0.8, 3.0.9, 3.0.10)
           Status: Resolved  (was: Patch Available)

CI was clean so committed, thanks.
[jira] [Updated] (CASSANDRA-13109) Lightweight transactions temporarily fail after upgrade from 2.1 to 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Joshua McKenzie updated CASSANDRA-13109:
----------------------------------------
    Reproduced In: 3.0.10, 3.0.9, 3.0.8  (was: 3.0.8, 3.0.9, 3.0.10)
         Reviewer: Sylvain Lebresne
[jira] [Updated] (CASSANDRA-13109) Lightweight transactions temporarily fail after upgrade from 2.1 to 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Samuel Klock updated CASSANDRA-13109:
-------------------------------------
    Attachment: 13109-3.0.txt

Attaching the patch.
[jira] [Updated] (CASSANDRA-13109) Lightweight transactions temporarily fail after upgrade from 2.1 to 3.0
[ https://issues.apache.org/jira/browse/CASSANDRA-13109?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Samuel Klock updated CASSANDRA-13109:
-------------------------------------
    Reproduced In: 3.0.10, 3.0.9, 3.0.8  (was: 3.0.8, 3.0.9, 3.0.10)
           Status: Patch Available  (was: Open)