[jira] [Assigned] (CASSANDRA-13978) dtest failure in cqlsh_tests/cqlsh_tests.py:TestCqlsh.test_pep8_compliance due to deprecation warning

2017-11-19 Thread Mahdi Mohammadi (JIRA)

 [ https://issues.apache.org/jira/browse/CASSANDRA-13978?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mahdi Mohammadi reassigned CASSANDRA-13978:
---

Assignee: Mahdi Mohammadi

> dtest failure in cqlsh_tests/cqlsh_tests.py:TestCqlsh.test_pep8_compliance due to deprecation warning
> -
>
> Key: CASSANDRA-13978
> URL: https://issues.apache.org/jira/browse/CASSANDRA-13978
> Project: Cassandra
>  Issue Type: Test
>  Components: Testing
>Reporter: Joel Knighton
>Assignee: Mahdi Mohammadi
>Priority: Trivial
>  Labels: lhf, low-hanging-fruit
>
> The dtest {{cqlsh_tests/cqlsh_tests.py:TestCqlsh.test_pep8_compliance}} fails on all branches with pep8 package version 1.7.1. The pep8 package has been deprecated and renamed to pycodestyle.
> {code}
> ======================================================================
> FAIL: test_pep8_compliance (cqlsh_tests.cqlsh_tests.TestCqlsh)
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/home/jkni/projects/cassandra-dtest/tools/decorators.py", line 48, in wrapped
>     f(obj)
>   File "/home/jkni/projects/cassandra-dtest/cqlsh_tests/cqlsh_tests.py", line 68, in test_pep8_compliance
>     self.assertEqual(len(stderr), 0, stderr)
> AssertionError: /home/jkni/projects/cassandra-dtest/venv/lib/python2.7/site-packages/pep8.py:2124: UserWarning: pep8 has been renamed to pycodestyle (GitHub issue #466)
> Use of the pep8 tool will be removed in a future release.
> Please install and use `pycodestyle` instead.
> $ pip install pycodestyle
> $ pycodestyle ...
>   '\n\n'
> {code}
> We should update this dependency from pep8 to pycodestyle. With this change, several new errors are thrown. I don't know whether these are new checks that we should choose to ignore, false positives due to new behavior, or previously missed violations that are now correctly caught. If they were previously missed, we'll need to fix them in cqlsh on some branches.
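> For illustration, a minimal sketch of what the updated check could look like using pycodestyle's {{StyleGuide}} API, which is a drop-in rename of pep8's (the path below is a placeholder, not the dtest's actual target):
> {code}
> import pycodestyle
>
> # pycodestyle.StyleGuide mirrors the old pep8.StyleGuide interface,
> # so the migration is mostly an import/name swap.
> style = pycodestyle.StyleGuide()
> report = style.check_files(['path/to/checked/sources'])
> assert report.total_errors == 0, '%d PEP 8 violations' % report.total_errors
> {code}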






[jira] [Created] (CASSANDRA-14061) trunk eclipse-warnings

2017-11-19 Thread Jay Zhuang (JIRA)
Jay Zhuang created CASSANDRA-14061:
--

 Summary: trunk eclipse-warnings
 Key: CASSANDRA-14061
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14061
 Project: Cassandra
  Issue Type: Bug
  Components: Testing
Reporter: Jay Zhuang
Priority: Minor


{noformat}
eclipse-warnings:
    [mkdir] Created dir: /home/ubuntu/cassandra/build/ecj
     [echo] Running Eclipse Code Analysis.  Output logged to /home/ubuntu/cassandra/build/ecj/eclipse_compiler_checks.txt
     [java] ----------
     [java] 1. ERROR in /home/ubuntu/cassandra/src/java/org/apache/cassandra/io/sstable/SSTableIdentityIterator.java (at line 59)
     [java] 	return new SSTableIdentityIterator(sstable, key, partitionLevelDeletion, file.getPath(), iterator);
     [java] 	                                                                                         ^^^^^^^^
     [java] Potential resource leak: 'iterator' may not be closed at this location
     [java] ----------
     [java] 2. ERROR in /home/ubuntu/cassandra/src/java/org/apache/cassandra/io/sstable/SSTableIdentityIterator.java (at line 79)
     [java] 	return new SSTableIdentityIterator(sstable, key, partitionLevelDeletion, dfile.getPath(), iterator);
     [java] 	                                                                                          ^^^^^^^^
     [java] Potential resource leak: 'iterator' may not be closed at this location
     [java] ----------
     [java] 2 problems (2 errors)
{noformat}






[jira] [Updated] (CASSANDRA-14060) Separate CorruptSSTableException and FSError handling policies

2017-11-19 Thread Jay Zhuang (JIRA)

 [ https://issues.apache.org/jira/browse/CASSANDRA-14060?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jay Zhuang updated CASSANDRA-14060:
---
Description: 
Currently, if [{{disk_failure_policy}}|https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L230] is set to {{stop}} (the default), StorageService will shut down on {{FSError}}, but not on {{CorruptSSTableException}} [DefaultFSErrorHandler.java:40|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/DefaultFSErrorHandler.java#L40].

But the {{die}} policy behaves differently: the JVM is killed for both {{FSError}} and {{CorruptSSTableException}} [JVMStabilityInspector.java:63|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/JVMStabilityInspector.java#L63]:

||{{disk_failure_policy}}|| on {{FSError}} || on {{CorruptSSTableException}} ||
|{{stop}}| (/) stop | (x) not stop |
|{{die}}| (/) die | (/) die |

We see {{CorruptSSTableException}} from time to time in production, but mostly it is *not* caused by a disk issue. So I would suggest a separate policy for {{CorruptSSTableException}}.
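To make the suggestion concrete, a hypothetical sketch of the configuration split ({{corrupted_sstable_policy}} is an invented name for illustration, not an existing option; {{disk_failure_policy}} and its values are the real setting):
{code}
# existing option: governs FSError (disk) failures
disk_failure_policy: stop

# hypothetical new option: governs CorruptSSTableException separately,
# so a single corrupt SSTable need not follow the disk policy
corrupted_sstable_policy: stop
{code}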

  was:
Currently, if [{{disk_failure_policy}}|https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L230] is set to {{stop}} (the default), StorageService will shut down on {{FSError}}, but not on {{CorruptSSTableException}} [DefaultFSErrorHandler.java:40|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/DefaultFSErrorHandler.java#L40].

But the {{die}} policy behaves differently: the JVM is killed for both {{FSError}} and {{CorruptSSTableException}} [JVMStabilityInspector.java:63|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/JVMStabilityInspector.java#L63]:

||{{dis_failure_policy}}|| on {{FSError}} || on {{CorruptSSTableException}} ||
|{{stop}}| (/) stop | (x) not stop |
|{{die}}| (/) die | (/) die |

We see {{CorruptSSTableException}} from time to time in production, but mostly it is *not* caused by a disk issue. So I would suggest a separate policy for {{CorruptSSTableException}}.


> Separate CorruptSSTableException and FSError handling policies
> --
>
> Key: CASSANDRA-14060
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14060
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Configuration
>Reporter: Jay Zhuang
>Assignee: Jay Zhuang
>Priority: Minor
>
> Currently, if [{{disk_failure_policy}}|https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L230] is set to {{stop}} (the default), StorageService will shut down on {{FSError}}, but not on {{CorruptSSTableException}} [DefaultFSErrorHandler.java:40|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/DefaultFSErrorHandler.java#L40].
> But the {{die}} policy behaves differently: the JVM is killed for both {{FSError}} and {{CorruptSSTableException}} [JVMStabilityInspector.java:63|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/JVMStabilityInspector.java#L63]:
> ||{{disk_failure_policy}}|| on {{FSError}} || on {{CorruptSSTableException}} ||
> |{{stop}}| (/) stop | (x) not stop |
> |{{die}}| (/) die | (/) die |
> We see {{CorruptSSTableException}} from time to time in production, but mostly it is *not* caused by a disk issue. So I would suggest a separate policy for {{CorruptSSTableException}}.






[jira] [Comment Edited] (CASSANDRA-14012) Document gossip protocol

2017-11-19 Thread JIRA

[ https://issues.apache.org/jira/browse/CASSANDRA-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16258811#comment-16258811 ]

Jörn Heissler edited comment on CASSANDRA-14012 at 11/20/17 4:39 AM:
-

tl;dr: I used a local DNS name as the broadcast address, and the DNS resolver answered with 127.0.1.1.

And here's the story of me being stupid:

My Cassandra configuration looks like this:
{noformat}
broadcast_rpc_address: cassandra1
{noformat}

I tried to join this node to an existing cluster, replacing an old "cassandra1" with a different IP address. It didn't work; the logs indicated some kind of communication problem, nothing helpful.

So I started a network sniffer (ngrep) to analyze the problem. The new node connected to the seeds and sent some messages, including the bytes {{7f 00 01 01}} along with lots of other bytes. I didn't know the protocol, so I couldn't make much sense of any of this. Those 4 bytes didn't draw my attention either (in retrospect they should have...). What I was able to guess was that my new node sent the same message multiple times but never got a response, aside from an initial connection ack of some kind. I also looked at the communication between two old nodes: gossip connections appear to be unidirectional. For two nodes to communicate, two connections are needed, each established by the respective sender.
So I was wondering whether the initial connect to the seed would also be two unidirectional connections and, if so, how the seed node would learn the IP address of my new node. I couldn't locate the new IP address in the packet dump, which I thought strange.
I asked on IRC, and they suggested it could be related to my broadcast address.

Some time later I ran tcpdump to verify whether Cassandra would try to resolve the broadcast address. My DNS resolver answered with two addresses, 127.0.1.1 and the real one. The problem was such an entry in /etc/hosts and dnsmasq picking it up. I fixed it, and Cassandra joined my new node to the cluster. And my Cassandra nodes lived happily ever after.

For clarity, {{is a bad broadcast address}} isn't printed anywhere; that's only how I describe this issue.
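By the way, the byte-to-address mapping is easy to verify; a minimal sketch in plain Python (nothing Cassandra-specific):
{code}
import socket

# 7f 00 01 01 is the packed big-endian form of the IPv4 address 127.0.1.1,
# which is why those four bytes in the sniffed gossip traffic were the clue.
print(socket.inet_ntoa(b'\x7f\x00\x01\x01'))  # prints: 127.0.1.1
{code}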


was (Author: wulf4096):
tl;dr: I used a local DNS name as the broadcast address, and the DNS resolver answered with 127.0.1.1.

And here's the story of me being stupid:

My Cassandra configuration looks like this:
{noformat}
broadcast_rpc_address: cassandra1
{noformat}

I tried to join this node to an existing cluster, replacing an old "cassandra1" with a different IP address. It didn't work; the logs indicated some kind of communication problem, nothing helpful.

So I started a network sniffer (ngrep) to analyze the problem. The new node connected to the seeds and sent some messages, including the bytes {noformat}7f 00 01 01{noformat} along with lots of other bytes. I didn't know the protocol, so I couldn't make much sense of any of this. Those 4 bytes didn't draw my attention either (in retrospect they should have...). What I was able to guess was that my new node sent the same message multiple times but never got a response, aside from an initial connection ack of some kind. I also looked at the communication between two old nodes: gossip connections appear to be unidirectional. For two nodes to communicate, two connections are needed, each established by the respective sender.
So I was wondering whether the initial connect to the seed would also be two unidirectional connections and, if so, how the seed node would learn the IP address of my new node. I couldn't locate the new IP address in the packet dump, which I thought strange.
I asked on IRC, and they suggested it could be related to my broadcast address.

Some time later I ran tcpdump to verify whether Cassandra would try to resolve the broadcast address. My DNS resolver answered with two addresses, 127.0.1.1 and the real one. The problem was such an entry in /etc/hosts and dnsmasq picking it up. I fixed it, and Cassandra joined my new node to the cluster. And my Cassandra nodes lived happily ever after.

For clarity, `is a bad broadcast address` isn't printed anywhere; that's only how I describe this issue.

> Document gossip protocol
> 
>
> Key: CASSANDRA-14012
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14012
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jörn Heissler
>Priority: Minor
>  Labels: Documentation
>
> I had an issue today with two nodes not communicating with each other; there was a flaw in my configuration (a wrong broadcast address).
> I saw a little bit of traffic on port 7000, but I couldn't understand it for lack of documentation.
> With documentation I would have understood my issue very quickly (7f 00 01 01 is a bad broadcast address!). But I didn't recognize those 4 bytes as the broadcast address.
> Could you please document the gossip protocol?

[jira] [Commented] (CASSANDRA-14012) Document gossip protocol

2017-11-19 Thread JIRA

[ https://issues.apache.org/jira/browse/CASSANDRA-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16258811#comment-16258811 ]

Jörn Heissler commented on CASSANDRA-14012:
---

tl;dr: I used a local DNS name as the broadcast address, and the DNS resolver answered with 127.0.1.1.

And here's the story of me being stupid:

My Cassandra configuration looks like this:
{noformat}
broadcast_rpc_address: cassandra1
{noformat}

I tried to join this node to an existing cluster, replacing an old "cassandra1" with a different IP address. It didn't work; the logs indicated some kind of communication problem, nothing helpful.

So I started a network sniffer (ngrep) to analyze the problem. The new node connected to the seeds and sent some messages, including the bytes {noformat}7f 00 01 01{noformat} along with lots of other bytes. I didn't know the protocol, so I couldn't make much sense of any of this. Those 4 bytes didn't draw my attention either (in retrospect they should have...). What I was able to guess was that my new node sent the same message multiple times but never got a response, aside from an initial connection ack of some kind. I also looked at the communication between two old nodes: gossip connections appear to be unidirectional. For two nodes to communicate, two connections are needed, each established by the respective sender.
So I was wondering whether the initial connect to the seed would also be two unidirectional connections and, if so, how the seed node would learn the IP address of my new node. I couldn't locate the new IP address in the packet dump, which I thought strange.
I asked on IRC, and they suggested it could be related to my broadcast address.

Some time later I ran tcpdump to verify whether Cassandra would try to resolve the broadcast address. My DNS resolver answered with two addresses, 127.0.1.1 and the real one. The problem was such an entry in /etc/hosts and dnsmasq picking it up. I fixed it, and Cassandra joined my new node to the cluster. And my Cassandra nodes lived happily ever after.

For clarity, {noformat}is a bad broadcast address{noformat} isn't printed anywhere; that's only how I describe this issue.

> Document gossip protocol
> 
>
> Key: CASSANDRA-14012
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14012
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jörn Heissler
>Priority: Minor
>  Labels: Documentation
>
> I had an issue today with two nodes not communicating with each other; there was a flaw in my configuration (a wrong broadcast address).
> I saw a little bit of traffic on port 7000, but I couldn't understand it for lack of documentation.
> With documentation I would have understood my issue very quickly (7f 00 01 01 is a bad broadcast address!). But I didn't recognize those 4 bytes as the broadcast address.
> Could you please document the gossip protocol?






[jira] [Comment Edited] (CASSANDRA-14012) Document gossip protocol

2017-11-19 Thread JIRA

[ https://issues.apache.org/jira/browse/CASSANDRA-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16258811#comment-16258811 ]

Jörn Heissler edited comment on CASSANDRA-14012 at 11/20/17 4:38 AM:
-

tl;dr: I used a local DNS name as the broadcast address, and the DNS resolver answered with 127.0.1.1.

And here's the story of me being stupid:

My Cassandra configuration looks like this:
{noformat}
broadcast_rpc_address: cassandra1
{noformat}

I tried to join this node to an existing cluster, replacing an old "cassandra1" with a different IP address. It didn't work; the logs indicated some kind of communication problem, nothing helpful.

So I started a network sniffer (ngrep) to analyze the problem. The new node connected to the seeds and sent some messages, including the bytes {noformat}7f 00 01 01{noformat} along with lots of other bytes. I didn't know the protocol, so I couldn't make much sense of any of this. Those 4 bytes didn't draw my attention either (in retrospect they should have...). What I was able to guess was that my new node sent the same message multiple times but never got a response, aside from an initial connection ack of some kind. I also looked at the communication between two old nodes: gossip connections appear to be unidirectional. For two nodes to communicate, two connections are needed, each established by the respective sender.
So I was wondering whether the initial connect to the seed would also be two unidirectional connections and, if so, how the seed node would learn the IP address of my new node. I couldn't locate the new IP address in the packet dump, which I thought strange.
I asked on IRC, and they suggested it could be related to my broadcast address.

Some time later I ran tcpdump to verify whether Cassandra would try to resolve the broadcast address. My DNS resolver answered with two addresses, 127.0.1.1 and the real one. The problem was such an entry in /etc/hosts and dnsmasq picking it up. I fixed it, and Cassandra joined my new node to the cluster. And my Cassandra nodes lived happily ever after.

For clarity, `is a bad broadcast address` isn't printed anywhere; that's only how I describe this issue.


was (Author: wulf4096):
tl;dr: I used a local DNS name as the broadcast address, and the DNS resolver answered with 127.0.1.1.

And here's the story of me being stupid:

My Cassandra configuration looks like this:
{noformat}
broadcast_rpc_address: cassandra1
{noformat}

I tried to join this node to an existing cluster, replacing an old "cassandra1" with a different IP address. It didn't work; the logs indicated some kind of communication problem, nothing helpful.

So I started a network sniffer (ngrep) to analyze the problem. The new node connected to the seeds and sent some messages, including the bytes {noformat}7f 00 01 01{noformat} along with lots of other bytes. I didn't know the protocol, so I couldn't make much sense of any of this. Those 4 bytes didn't draw my attention either (in retrospect they should have...). What I was able to guess was that my new node sent the same message multiple times but never got a response, aside from an initial connection ack of some kind. I also looked at the communication between two old nodes: gossip connections appear to be unidirectional. For two nodes to communicate, two connections are needed, each established by the respective sender.
So I was wondering whether the initial connect to the seed would also be two unidirectional connections and, if so, how the seed node would learn the IP address of my new node. I couldn't locate the new IP address in the packet dump, which I thought strange.
I asked on IRC, and they suggested it could be related to my broadcast address.

Some time later I ran tcpdump to verify whether Cassandra would try to resolve the broadcast address. My DNS resolver answered with two addresses, 127.0.1.1 and the real one. The problem was such an entry in /etc/hosts and dnsmasq picking it up. I fixed it, and Cassandra joined my new node to the cluster. And my Cassandra nodes lived happily ever after.

For clarity, {noformat}is a bad broadcast address{noformat} isn't printed anywhere; that's only how I describe this issue.

> Document gossip protocol
> 
>
> Key: CASSANDRA-14012
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14012
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jörn Heissler
>Priority: Minor
>  Labels: Documentation
>
> I had an issue today with two nodes not communicating with each other; there was a flaw in my configuration (a wrong broadcast address).
> I saw a little bit of traffic on port 7000, but I couldn't understand it for lack of documentation.
> With documentation I would have understood my issue very quickly (7f 00 01 01 is a bad broadcast address!). But I didn't recognize those 4 bytes as the broadcast address.
> Could you please document the gossip protocol?

[jira] [Created] (CASSANDRA-14060) Separate CorruptSSTableException and FSError handling policies

2017-11-19 Thread Jay Zhuang (JIRA)
Jay Zhuang created CASSANDRA-14060:
--

 Summary: Separate CorruptSSTableException and FSError handling policies
 Key: CASSANDRA-14060
 URL: https://issues.apache.org/jira/browse/CASSANDRA-14060
 Project: Cassandra
  Issue Type: Improvement
  Components: Configuration
Reporter: Jay Zhuang
Assignee: Jay Zhuang
Priority: Minor


Currently, if [{{disk_failure_policy}}|https://github.com/apache/cassandra/blob/trunk/conf/cassandra.yaml#L230] is set to {{stop}} (the default), StorageService will shut down on {{FSError}}, but not on {{CorruptSSTableException}} [DefaultFSErrorHandler.java:40|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/service/DefaultFSErrorHandler.java#L40].

But the {{die}} policy behaves differently: the JVM is killed for both {{FSError}} and {{CorruptSSTableException}} [JVMStabilityInspector.java:63|https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/utils/JVMStabilityInspector.java#L63]:

||{{dis_failure_policy}}|| on {{FSError}} || on {{CorruptSSTableException}} ||
|{{stop}}| (/) stop | (x) not stop |
|{{die}}| (/) die | (/) die |

We see {{CorruptSSTableException}} from time to time in production, but mostly it is *not* caused by a disk issue. So I would suggest a separate policy for {{CorruptSSTableException}}.






[jira] [Updated] (CASSANDRA-14056) Many dtests fail with ConfigurationException: offheap_objects are not available in 3.0 when OFFHEAP_MEMTABLES="true"

2017-11-19 Thread Kurt Greaves (JIRA)

 [ https://issues.apache.org/jira/browse/CASSANDRA-14056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Kurt Greaves updated CASSANDRA-14056:
-
Description: 
Tons of dtests are running when they shouldn't, as it looks like this path is no longer supported. We need to add a bunch of missing logic to fully support running dtests with off-heap memtables enabled (via the OFFHEAP_MEMTABLES="true" environment variable).

{code}
[node2 ERROR] java.lang.ExceptionInInitializerError
        at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:394)
        at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:361)
        at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:577)
        at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:554)
        at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
        at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:305)
        at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
        at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
        at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:887)
        at org.apache.cassandra.service.StartupChecks$9.execute(StartupChecks.java:354)
        at org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:110)
        at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:179)
        at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
        at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697)
Caused by: org.apache.cassandra.exceptions.ConfigurationException: offheap_objects are not available in 3.0. They will be re-introduced in a future release, see https://issues.apache.org/jira/browse/CASSANDRA-9472 for details
        at org.apache.cassandra.config.DatabaseDescriptor.getMemtableAllocatorPool(DatabaseDescriptor.java:1907)
        at org.apache.cassandra.db.Memtable.<init>(Memtable.java:65)
        ... 14 more
{code}
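A minimal sketch of the sort of guard that's missing, assuming the dtest framework can see the requested cluster version up front (the helper names here are invented for illustration, not the framework's real API):
{code}
import os
import unittest

def offheap_memtables_requested():
    # dtests opt in to off-heap memtables via this environment variable
    return os.environ.get('OFFHEAP_MEMTABLES', '').lower() == 'true'

def skip_unless_offheap_supported(cluster_version):
    # offheap_objects were removed in 3.0 (to be re-introduced later, see
    # CASSANDRA-9472); skip up front instead of letting nodes crash at startup.
    if offheap_memtables_requested() and cluster_version >= (3, 0):
        raise unittest.SkipTest('offheap_objects are not available in 3.0')
{code}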

  was:
Tons of dtests are running when they shouldn't, as it looks like this path is no longer supported. We need to add a bunch of missing logic to fully support running dtests with off-heap memtables enabled (via the OFFHEAP_MEMTABLES="true" environment variable).

[node2 ERROR] java.lang.ExceptionInInitializerError
        at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:394)
        at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:361)
        at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:577)
        at org.apache.cassandra.db.ColumnFamilyStore.createColumnFamilyStore(ColumnFamilyStore.java:554)
        at org.apache.cassandra.db.Keyspace.initCf(Keyspace.java:368)
        at org.apache.cassandra.db.Keyspace.<init>(Keyspace.java:305)
        at org.apache.cassandra.db.Keyspace.open(Keyspace.java:129)
        at org.apache.cassandra.db.Keyspace.open(Keyspace.java:106)
        at org.apache.cassandra.db.SystemKeyspace.checkHealth(SystemKeyspace.java:887)
        at org.apache.cassandra.service.StartupChecks$9.execute(StartupChecks.java:354)
        at org.apache.cassandra.service.StartupChecks.verify(StartupChecks.java:110)
        at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:179)
        at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:569)
        at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:697)
Caused by: org.apache.cassandra.exceptions.ConfigurationException: offheap_objects are not available in 3.0. They will be re-introduced in a future release, see https://issues.apache.org/jira/browse/CASSANDRA-9472 for details
        at org.apache.cassandra.config.DatabaseDescriptor.getMemtableAllocatorPool(DatabaseDescriptor.java:1907)
        at org.apache.cassandra.db.Memtable.<init>(Memtable.java:65)
        ... 14 more


> Many dtests fail with ConfigurationException: offheap_objects are not available in 3.0 when OFFHEAP_MEMTABLES="true"
> 
>
> Key: CASSANDRA-14056
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14056
> Project: Cassandra
>  Issue Type: Bug
>  Components: Testing
>Reporter: Michael Kjellman
>
> Tons of dtests are running when they shouldn't, as it looks like this path is no longer supported. We need to add a bunch of missing logic to fully support running dtests with off-heap memtables enabled (via the OFFHEAP_MEMTABLES="true" environment variable).
> {code}[node2 ERROR] java.lang.ExceptionInInitializerError
>   at org.apache.cassandra.db.ColumnFamilyStore.<init>(ColumnFamilyStore.java:394)

[jira] [Commented] (CASSANDRA-14012) Document gossip protocol

2017-11-19 Thread Kurt Greaves (JIRA)

[ https://issues.apache.org/jira/browse/CASSANDRA-14012?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16258779#comment-16258779 ]

Kurt Greaves commented on CASSANDRA-14012:
--

Where did you see {{7f 00 01 01 is a bad broadcast address!}}?

What was your configuration?

Documenting the whole protocol is a pretty large task, which should be done too, but to start with it would probably be wise to at least document the specific issue you saw.

> Document gossip protocol
> 
>
> Key: CASSANDRA-14012
> URL: https://issues.apache.org/jira/browse/CASSANDRA-14012
> Project: Cassandra
>  Issue Type: Improvement
>Reporter: Jörn Heissler
>Priority: Minor
>  Labels: Documentation
>
> I had an issue today with two nodes not communicating with each other; there was a flaw in my configuration (a wrong broadcast address).
> I saw a little bit of traffic on port 7000, but I couldn't understand it for lack of documentation.
> With documentation I would have understood my issue very quickly (7f 00 01 01 is a bad broadcast address!). But I didn't recognize those 4 bytes as the broadcast address.
> Could you please document the gossip protocol?


