[jira] [Commented] (CASSANDRA-19596) IntervalTree build throughput is low enough to be a bottleneck

2024-08-15 Thread Yuqi Yan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19596?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17874062#comment-17874062
 ] 

Yuqi Yan commented on CASSANDRA-19596:
--

We encountered the exact same issue; this inefficiency left mutation tasks 
stuck for more than 10 minutes waiting on a memtable flush.

Do you think it's possible to support inserts on the IntervalTree as well? I'm 
wondering why we have to build a new tree every time the View is updated.
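To illustrate the cost asymmetry behind that question, here is a toy sketch (not Cassandra's actual IntervalTree code) modeling the index as a sorted list of intervals: a rebuild-per-update copies and re-sorts everything, while an incremental insert only pays for a binary search plus a shift.

```python
# Toy illustration of rebuild-per-update vs. incremental insert.
# This is NOT Cassandra's IntervalTree; it is a sorted list of
# (start, end) tuples, used only to show the cost difference.
import bisect

def rebuild(intervals, new):
    """Immutable-style update: copy everything and re-sort, O(n log n)."""
    return sorted(intervals + [new])

def insert(intervals, new):
    """Incremental update: O(log n) search plus an O(n) shift, in place."""
    bisect.insort(intervals, new)
    return intervals

tree = []
for iv in [(5, 9), (1, 3), (7, 12)]:
    tree = rebuild(tree, iv)      # what a rebuild-per-update scheme does
assert tree == [(1, 3), (5, 9), (7, 12)]

tree2 = []
for iv in [(5, 9), (1, 3), (7, 12)]:
    insert(tree2, iv)             # what an insert API would do instead
assert tree2 == tree
```

Both paths end with the same index; the difference is that the rebuild path redoes the full sort for every update, which is the wasted work the ticket describes when most rebuilt trees are thrown away.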

> IntervalTree build throughput is low enough to be a bottleneck
> --
>
> Key: CASSANDRA-19596
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19596
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Local/Compaction, Local/SSTable
>Reporter: Ariel Weisberg
>Assignee: Ariel Weisberg
>Priority: Normal
> Fix For: 5.x
>
> Attachments: ci_summary.html
>
>
> With several terabytes of data and 8 compactors, it's possible for the 
> compactors to spend a lot of time blocked waiting on IntervalTrees to be 
> built.
> There is also a lot of wasted CPU because the tree is updated 
> optimistically, so most of the rebuilt trees end up being thrown away.
> This can end up being quite painful because it can also block memtable 
> flushing, and then a single slow CFS can block unrelated CFS because the 
> memtable post-flush executor is single-threaded and shared across all CFS. 
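The "single slow CFS blocks unrelated CFS" effect can be sketched with a one-worker executor (a toy model, not Cassandra's actual post-flush executor): a slow task submitted first delays an unrelated fast task behind it.

```python
# Toy model of a single-threaded post-flush executor shared across
# all tables: one slow task head-of-line blocks unrelated tasks.
from concurrent.futures import ThreadPoolExecutor
import time

post_flush = ThreadPoolExecutor(max_workers=1)  # shared, single worker

completed = []

def post_flush_task(table, duration):
    time.sleep(duration)
    completed.append(table)

post_flush.submit(post_flush_task, "slow_table", 0.2)  # slow CFS flushes first
fast = post_flush.submit(post_flush_task, "fast_table", 0.0)
fast.result()  # the fast table must wait behind the slow one

assert completed == ["slow_table", "fast_table"]
```

With a single worker, tasks run strictly in submission order, so the fast table's flush cannot complete until the slow table's task finishes.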



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Updated] (CASSANDRA-19703) Newly inserted prepared statements got evicted too early from cache that leads to race condition

2024-06-13 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19703:
-
Description: 
We're upgrading from Cassandra 4.0 to Cassandra 4.1.3, and the 
system.prepared_statements table started growing to gigabytes in size after 
the upgrade. This slows down node startup significantly while it runs 
preloadPreparedStatements.

I can't share the exact logs, but the race condition looks like this:
 # [Thread 1] Receives a prepare request for S1; attempts to get S1 from the cache
 # [Thread 1] Cache miss; puts S1 into the cache
 # [Thread 1] Attempts to write S1 into the local table
 # [Thread 2] Receives a prepare request for S2; attempts to get S2 from the cache
 # [Thread 2] Cache miss; puts S2 into the cache
 # [Thread 2] Cache is full; evicts S1 from the cache
 # [Thread 2] Attempts to delete S1 from the local table
 # [Thread 2] Tombstone inserted for S1; delete finished
 # [Thread 1] Record inserted for S1; write finished

Thread 2 inserted the tombstone for S1 before Thread 1 was able to insert the 
record into the table. Hence the data will never be removed, because the 
later insert has a newer write time than the tombstone.
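The outcome above follows from last-write-wins timestamp reconciliation; a minimal model (illustrative names, not Cassandra's storage-engine API) shows the tombstone losing to the later insert:

```python
# Minimal last-write-wins model: for each key, the write with the
# newest timestamp wins. This is why Thread 2's earlier tombstone
# cannot remove the record that Thread 1 writes afterwards.
TOMBSTONE = object()

def apply(table, key, value, write_time):
    """Keep whichever write for `key` carries the newest timestamp."""
    current = table.get(key)
    if current is None or write_time > current[1]:
        table[key] = (value, write_time)

table = {}
apply(table, "S1", TOMBSTONE, write_time=1)      # Thread 2's delete lands first
apply(table, "S1", "SELECT ...", write_time=2)   # Thread 1's insert lands later

value, ts = table["S1"]
assert value == "SELECT ..."   # the row survives; the tombstone is shadowed
```

Replaying the writes in the opposite order (insert first, tombstone second) would remove the row, which is why the bug only appears when eviction races ahead of the pending table write.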

Whether this happens depends on how the cache decides which entry to evict 
next when it's full. We noticed that in 4.1.3 Caffeine was upgraded to 2.9.2 
(CASSANDRA-15153).

 

I did some research into Caffeine's commit history. This commit appears to 
cause entries to be evicted too early: "Eagerly evict an entry if it is too 
large to fit in the cache" (Feb 2021), present since 2.9.0: 
[https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]

It was later fixed by "Improve eviction when overflow or the weight is 
oversized" (Aug 2022), present since 3.1.2: 
[https://github.com/ben-manes/caffeine/commit/25b7d17b1a246a63e4991d4902a2ecf24e86d234]
{quote}Previously an attempt to centralize evictions into one code path led to 
a suboptimal approach 
([{{464bc19}}|https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]
). This tried to move those entries into the LRU position for early eviction, 
but was confusing and could too aggressively evict something that is desirable 
to keep.
{quote}
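A toy weight-bounded LRU (loosely modeled; this is not Caffeine's actual implementation) shows how overflow handling can evict an entry inserted only moments earlier, which is the precondition for the race above:

```python
# Toy weight-bounded LRU cache. When total weight overflows the bound,
# the least-recently-inserted entry is evicted -- which here is S1,
# whose backing table write may still be in flight.
from collections import OrderedDict

class WeightedLru:
    def __init__(self, max_weight):
        self.max_weight = max_weight
        self.entries = OrderedDict()   # key -> weight, in insertion order
        self.evicted = []

    def put(self, key, weight):
        self.entries[key] = weight
        while sum(self.entries.values()) > self.max_weight:
            victim, _ = self.entries.popitem(last=False)  # evict oldest
            self.evicted.append(victim)

cache = WeightedLru(max_weight=10)
cache.put("S1", weight=6)   # Thread 1 caches S1; table write still pending
cache.put("S2", weight=6)   # Thread 2's insert overflows the cache

assert cache.evicted == ["S1"]   # S1 evicted before Thread 1's write lands
```

The Caffeine fix quoted above addresses exactly this kind of over-eager victim selection: entries forced into the LRU position for early eviction could be evicted even when they were desirable to keep.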
 

I upgraded Caffeine to 3.1.8 (the same version as the 5.0 trunk) and the 
issue is gone, but I believe that version is not compatible with Java 8.

I'm not 100% sure this is the root cause, or what the correct fix is here. 
I'd appreciate it if anyone could take a look, thanks.

 

 

[jira] [Updated] (CASSANDRA-19703) Newly inserted prepared statements got evicted too early from cache that leads to race condition

2024-06-13 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19703:
-
Fix Version/s: 4.1.x

> Newly inserted prepared statements got evicted too early from cache that 
> leads to race condition
> 
>
> Key: CASSANDRA-19703
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19703
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuqi Yan
>Priority: Normal
> Fix For: 4.1.x
>
>





[jira] [Updated] (CASSANDRA-19703) Newly inserted prepared statements got evicted too early from cache that leads to race condition

2024-06-13 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19703:
-
Description: 
We're upgrading from Cassandra 4.0 to Cassandra 4.1.3 and 
system.prepared_statements table size start growing to GB size after upgrade.

I can't share the exact log but it's a race condition like this:
 # [Thread 1] Receives a prepared request for S1. Attempts to get S1 in cache
 # [Thread 1] Cache miss, put this S1 into cache
 # [Thread 1] Attempts to write S1 into local table
 # [Thread 2] Receives a prepared request for S2. Attempts to get S2 in cache
 # [Thread 2] Cache miss, put this S2 into cache
 # [Thread 2] Cache is full, evicting S1 from cache
 # [Thread 2] Attempts to delete S1 from local table
 # [Thread 2] Tombstone inserted for S1, delete finished
 # [Thread 1] Record inserted for S1, write finished

Thread 2 inserted a tombstone for S1 earlier than Thread 1 was able to insert 
the record in the table. Hence the data will not be removed because the later 
insert has newer write time than the tombstone.

Whether this would happen or not depends on how the cache decides what’s the 
next entry to evict when it’s full. We noticed that in 4.1.3 Caffeine was 
upgraded to 2.9.2 CASSANDRA-15153

 

I did a small research in Caffeine commits. It seems this commit was causing 
the entry got evicted to early: Eagerly evict an entry if it too large to fit 
in the cache(Feb 2021), available after 2.9.0: 
[https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]

 

And later fixed in: Improve eviction when overflow or the weight is 
oversized(Aug 2022), available after 3.1.2: 
[https://github.com/ben-manes/caffeine/commit/25b7d17b1a246a63e4991d4902a2ecf24e86d234]
{quote}Previously an attempt to centralize evictions into one code path led to 
a suboptimal approach 
([{{464bc19}}|https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]
). This tried to move those entries into the LRU position for early eviction, 
but was confusing and could too aggressively evict something that is desirable 
to keep.
{quote}
 

I upgrade the Caffeine to 3.1.8 (same as 5.0 trunk) and this issue is gone. But 
I think this version is not compatible with Java 8.

I'm not 100% sure if this is the root cause and what's the correct fix here. 
Would appreciate if anyone can have a look, thanks

 

 

  was:
We're upgrading from Cassandra 4.0 to Cassandra 4.1.3 and 
system.prepared_statements table size start growing to GB size after upgrade.

I can't share the exact log but it's a race condition like this:
 # [Thread 1] Receives a prepared request for S1. Attempts to get S1 in cache
 # [Thread 1] Cache miss, put this S1 into cache
 # [Thread 1] Attempts to write S1 into local table
 # [Thread 2] Receives a prepared request for S2. Attempts to get S2 in cache
 # [Thread 2] Cache miss, put this S2 into cache
 # [Thread 2] Cache is full, evicting S1 from cache
 # [Thread 2] Attempts to delete S1 from local table
 # [Thread 2] Tombstone inserted for S1, delete finished
 # [Thread 1] Record inserted for S1, write finished

Thread 2 inserted a tombstone for S1 earlier than Thread 1 was able to insert 
the record in the table. Hence the data will not be removed because the later 
insert has newer write time than the tombstone.

Whether this would happen or not depends on how the cache decides what’s the 
next entry to evict when it’s full. We noticed that in 4.1.3 Caffeine was 
upgraded to 2.9.2 CASSANDRA-15153

 

I did a small research in Caffeine commits. It seems this commit was causing 
the entry got evicted to early: Eagerly evict an entry if it too large to fit 
in the cache(Feb 2021), available after 2.9.0: 
[https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]

 

And later fixed in: Improve eviction when overflow or the weight is 
oversized(Aug 2022), available after 3.1.2: 
[https://github.com/ben-manes/caffeine/commit/25b7d17b1a246a63e4991d4902a2ecf24e86d234]
{quote}Previously an attempt to centralize evictions into one code path led to 
a suboptimal approach 
([{{464bc19}}|https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]
). This tried to move those entries into the LRU position for early eviction, 
but was confusing and could too aggressively evict something that is desirable 
to keep.
{quote}
 

I'm not 100% sure if this is the root cause and what's the correct fix here. 
But would appreciate if anyone can have a look

 

 


> Newly inserted prepared statements got evicted too early from cache that 
> leads to race condition
> 
>
> Key: CASSANDRA-19703
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19703
> Project: Cassandra
>  

[jira] [Updated] (CASSANDRA-19703) Newly inserted prepared statements got evicted too early from cache that leads to race condition

2024-06-13 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19703:
-
Description: 
We're upgrading from Cassandra 4.0 to Cassandra 4.1.3 and 
system.prepared_statements table size start growing to GB size after upgrade.

I can't share the exact log but it's a race condition like this:
 # [Thread 1] Receives a prepared request for S1. Attempts to get S1 in cache
 # [Thread 1] Cache miss, put this S1 into cache
 # [Thread 1] Attempts to write S1 into local table
 # [Thread 2] Receives a prepared request for S2. Attempts to get S2 in cache
 # [Thread 2] Cache miss, put this S2 into cache
 # [Thread 2] Cache is full, evicting S1 from cache
 # [Thread 2] Attempts to delete S1 from local table
 # [Thread 2] Tombstone inserted for S1, delete finished
 # [Thread 1] Record inserted for S1, write finished

Thread 2 inserted a tombstone for S1 earlier than Thread 1 was able to insert 
the record in the table. Hence the data will not be removed because the later 
insert has newer write time than the tombstone.

Whether this would happen or not depends on how the cache decides what’s the 
next entry to evict when it’s full. We noticed that in 4.1.3 Caffeine was 
upgraded to 2.9.2 CASSANDRA-15153

 

I did a small research in Caffeine commits. It seems this commit was causing 
the entry got evicted to early: Eagerly evict an entry if it too large to fit 
in the cache(Feb 2021), available after 2.9.0: 
[https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]

 

And later fixed in: Improve eviction when overflow or the weight is 
oversized(Aug 2022), available after 3.1.2: 
[https://github.com/ben-manes/caffeine/commit/25b7d17b1a246a63e4991d4902a2ecf24e86d234]
{quote}Previously an attempt to centralize evictions into one code path led to 
a suboptimal approach 
([{{464bc19}}|https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]
). This tried to move those entries into the LRU position for early eviction, 
but was confusing and could too aggressively evict something that is desirable 
to keep.
{quote}
 

I'm not 100% sure if this is the root cause and what's the correct fix here. 
But would appreciate if anyone can have a look

 

 

  was:
We're upgrading from Cassandra 4.0 to Cassandra 4.1.3 and 
system.prepared_statements table size start growing to GB size after upgrade.

I can't share the exact log but it's a race condition like this:
 # [Thread 1] Receives a prepared request for S1. Attempts to get S1 in cache
 # [Thread 1] Cache miss, put this S1 into cache
 # [Thread 1] Attempts to write S1 into local table
 # [Thread 2] Receives a prepared request for S2. Attempts to get S2 in cache
 # [Thread 2] Cache miss, put this S2 into cache
 # [Thread 2] Cache is full, evicting S1 from cache
 # [Thread 2] Attempts to delete S1 from local table
 # [Thread 2] Tombstone inserted for S1, delete finished
 # [Thread 1] Record inserted for S1, write finished

Thread 2 inserted a tombstone for S1 earlier than Thread 1 was able to insert 
the record in the table. Hence the data will not be removed because the later 
insert has newer write time than the tombstone.

Whether this would happen or not depends on how the cache decides what’s the 
next entry to evict when it’s full. We noticed that in 4.1.3 Caffeine was 
upgraded to 2.9.2 CASSANDRA-15153

 

I did a small research in Caffeine commits. It seems this commit was causing 
the entry got evicted to early: Eagerly evict an entry if it too large to fit 
in the cache(Feb 2021), available after 2.9.0: 
[https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]

 

And later fixed in: Improve eviction when overflow or the weight is 
oversized(Aug 2022), available after 3.1.2: 
[https://github.com/ben-manes/caffeine/commit/25b7d17b1a246a63e4991d4902a2ecf24e86d234]
{quote}Previously an attempt to centralize evictions into one code path led to 
a suboptimal approach 
([{{464bc19}}|https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]
). This tried to move those entries into the LRU position for early eviction, 
but was confusing and could too aggressively evict something that is desirable 
to keep.
{quote}
 

I'm not sure what's the correct fix here. But would appreciate if anyone can 
have a look

 

 


> Newly inserted prepared statements got evicted too early from cache that 
> leads to race condition
> 
>
> Key: CASSANDRA-19703
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19703
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuqi Yan
>Priority: Normal
>
> We're upgrading from Cassandra 4.0 to Cassandra 4.1.3 and 
> system.prepared_statements

[jira] [Updated] (CASSANDRA-19703) Newly inserted prepared statements got evicted too early from cache that leads to race condition

2024-06-13 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19703:
-
Description: 
We're upgrading from Cassandra 4.0 to Cassandra 4.1.3 and 
system.prepared_statements table size start growing to GB size after upgrade.

I can't share the exact log but it's a race condition like this:
 # [Thread 1] Receives a prepared request for S1. Attempts to get S1 in cache
 # [Thread 1] Cache miss, put this S1 into cache
 # [Thread 1] Attempts to write S1 into local table
 # [Thread 2] Receives a prepared request for S2. Attempts to get S2 in cache
 # [Thread 2] Cache miss, put this S2 into cache
 # [Thread 2] Cache is full, evicting S1 from cache
 # [Thread 2] Attempts to delete S1 from local table
 # [Thread 2] Tombstone inserted for S1, delete finished
 # [Thread 1] Record inserted for S1, write finished

Thread 2 inserted a tombstone for S1 earlier than Thread 1 was able to insert 
the record in the table. Hence the data will not be removed because the later 
insert has newer write time than the tombstone.

Whether this would happen or not depends on how the cache decides what’s the 
next entry to evict when it’s full. We noticed that in 4.1.3 Caffeine was 
upgraded to 2.9.2 CASSANDRA-15153

 

I did a small research in Caffeine commits. It seems this commit was causing 
the entry got evicted to early: Eagerly evict an entry if it too large to fit 
in the cache(Feb 2021), available after 2.9.0: 
[https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]

 

And later fixed in: "Improve eviction when overflow or the weight is 
oversized" (Aug 2022), available after 3.1.2: 
[https://github.com/ben-manes/caffeine/commit/25b7d17b1a246a63e4991d4902a2ecf24e86d234]
{quote}Previously an attempt to centralize evictions into one code path led to 
a suboptimal approach 
([{{464bc19}}|https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]
). This tried to move those entries into the LRU position for early eviction, 
but was confusing and could too aggressively evict something that is desirable 
to keep.
{quote}
 

I'm not sure what the correct fix is here, but I would appreciate it if anyone 
could take a look.

 

 

  was:
We're upgrading from Cassandra 4.0 to Cassandra 4.1.3, and the 
system.prepared_statements table size started growing to gigabytes after the upgrade.

I can't share the exact log but it's a race condition like this:
 # [Thread 1] Receives a prepared request for S1. Attempts to get S1 in cache
 # [Thread 1] Cache miss, put this S1 into cache
 # [Thread 1] Attempts to write S1 into local table
 # [Thread 2] Receives a prepared request for S2. Attempts to get S2 in cache
 # [Thread 2] Cache miss, put this S2 into cache
 # [Thread 2] Cache is full, evicting S1 from cache
 # [Thread 2] Attempts to delete S1 from local table
 # [Thread 2] Tombstone inserted for S1, delete finished
 # [Thread 1] Record inserted for S1, write finished

Thread 2 inserted its tombstone for S1 before Thread 1 was able to insert the 
record into the table. Hence the data will never be removed, because the later 
insert has a newer write time than the tombstone.

Whether this happens depends on how the cache decides which entry to evict 
next when it is full. We noticed that in 4.1.3 Caffeine was upgraded to 2.9.2 
(https://issues.apache.org/jira/browse/CASSANDRA-15153).

 

I did some research into the Caffeine commits. It seems this commit caused the 
entry to be evicted too early: "Eagerly evict an entry if it is too large to fit 
in the cache" (Feb 2021), available after 2.9.0: 
[https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]

 

And later fixed in: "Improve eviction when overflow or the weight is 
oversized" (Aug 2022), available after 3.1.2: 
[https://github.com/ben-manes/caffeine/commit/25b7d17b1a246a63e4991d4902a2ecf24e86d234]
{quote}Previously an attempt to centralize evictions into one code path led to 
a suboptimal approach 
([{{464bc19}}|https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]
). This tried to move those entries into the LRU position for early eviction, 
but was confusing and could too aggressively evict something that is desirable 
to keep.{quote}
 

I'm not sure what the correct fix is here, but I would appreciate it if anyone 
could take a look.

 

 


> Newly inserted prepared statements got evicted too early from cache that 
> leads to race condition
> 
>
> Key: CASSANDRA-19703
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19703
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuqi Yan
>Priority: Normal
>
> We're upgrading from Cassandra 4.0 to Cassandra 4.1.3 and 
> system.prepared_statemen

[jira] [Updated] (CASSANDRA-19703) Newly inserted prepared statements got evicted too early from cache that leads to race condition

2024-06-13 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19703?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19703:
-
Impacts:   (was: None)

> Newly inserted prepared statements got evicted too early from cache that 
> leads to race condition
> 
>
> Key: CASSANDRA-19703
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19703
> Project: Cassandra
>  Issue Type: Bug
>Reporter: Yuqi Yan
>Priority: Normal
>
> We're upgrading from Cassandra 4.0 to Cassandra 4.1.3, and the 
> system.prepared_statements table size started growing to gigabytes after the upgrade.
> I can't share the exact log but it's a race condition like this:
>  # [Thread 1] Receives a prepared request for S1. Attempts to get S1 in cache
>  # [Thread 1] Cache miss, put this S1 into cache
>  # [Thread 1] Attempts to write S1 into local table
>  # [Thread 2] Receives a prepared request for S2. Attempts to get S2 in cache
>  # [Thread 2] Cache miss, put this S2 into cache
>  # [Thread 2] Cache is full, evicting S1 from cache
>  # [Thread 2] Attempts to delete S1 from local table
>  # [Thread 2] Tombstone inserted for S1, delete finished
>  # [Thread 1] Record inserted for S1, write finished
> Thread 2 inserted its tombstone for S1 before Thread 1 was able to insert the 
> record into the table. Hence the data will never be removed, because the later 
> insert has a newer write time than the tombstone.
> Whether this happens depends on how the cache decides which entry to evict 
> next when it is full. We noticed that in 4.1.3 Caffeine was upgraded to 2.9.2 
> (https://issues.apache.org/jira/browse/CASSANDRA-15153).
>  
> I did some research into the Caffeine commits. It seems this commit caused the 
> entry to be evicted too early: "Eagerly evict an entry if it is too large to fit 
> in the cache" (Feb 2021), available after 2.9.0: 
> [https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]
>  
> And later fixed in: "Improve eviction when overflow or the weight is 
> oversized" (Aug 2022), available after 3.1.2: 
> [https://github.com/ben-manes/caffeine/commit/25b7d17b1a246a63e4991d4902a2ecf24e86d234]
> {quote}Previously an attempt to centralize evictions into one code path led 
> to a suboptimal approach 
> ([{{464bc19}}|https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]
> ). This tried to move those entries into the LRU position for early eviction, 
> but was confusing and could too aggressively evict something that is 
> desirable to keep.{quote}
>  
> I'm not sure what the correct fix is here, but I would appreciate it if anyone 
> could take a look.
>  
>  



--
This message was sent by Atlassian Jira
(v8.20.10#820010)

-
To unsubscribe, e-mail: commits-unsubscr...@cassandra.apache.org
For additional commands, e-mail: commits-h...@cassandra.apache.org



[jira] [Created] (CASSANDRA-19703) Newly inserted prepared statements got evicted too early from cache that leads to race condition

2024-06-13 Thread Yuqi Yan (Jira)
Yuqi Yan created CASSANDRA-19703:


 Summary: Newly inserted prepared statements got evicted too early 
from cache that leads to race condition
 Key: CASSANDRA-19703
 URL: https://issues.apache.org/jira/browse/CASSANDRA-19703
 Project: Cassandra
  Issue Type: Bug
Reporter: Yuqi Yan


We're upgrading from Cassandra 4.0 to Cassandra 4.1.3, and the 
system.prepared_statements table size started growing to gigabytes after the upgrade.

I can't share the exact log but it's a race condition like this:
 # [Thread 1] Receives a prepared request for S1. Attempts to get S1 in cache
 # [Thread 1] Cache miss, put this S1 into cache
 # [Thread 1] Attempts to write S1 into local table
 # [Thread 2] Receives a prepared request for S2. Attempts to get S2 in cache
 # [Thread 2] Cache miss, put this S2 into cache
 # [Thread 2] Cache is full, evicting S1 from cache
 # [Thread 2] Attempts to delete S1 from local table
 # [Thread 2] Tombstone inserted for S1, delete finished
 # [Thread 1] Record inserted for S1, write finished

Thread 2 inserted its tombstone for S1 before Thread 1 was able to insert the 
record into the table. Hence the data will never be removed, because the later 
insert has a newer write time than the tombstone.

Whether this happens depends on how the cache decides which entry to evict 
next when it is full. We noticed that in 4.1.3 Caffeine was upgraded to 2.9.2 
(https://issues.apache.org/jira/browse/CASSANDRA-15153).

 

I did some research into the Caffeine commits. It seems this commit caused the 
entry to be evicted too early: "Eagerly evict an entry if it is too large to fit 
in the cache" (Feb 2021), available after 2.9.0: 
[https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]

 

And later fixed in: "Improve eviction when overflow or the weight is 
oversized" (Aug 2022), available after 3.1.2: 
[https://github.com/ben-manes/caffeine/commit/25b7d17b1a246a63e4991d4902a2ecf24e86d234]
{quote}Previously an attempt to centralize evictions into one code path led to 
a suboptimal approach 
([{{464bc19}}|https://github.com/ben-manes/caffeine/commit/464bc1914368c47a0203517fda2151fbedaf568b]
). This tried to move those entries into the LRU position for early eviction, 
but was confusing and could too aggressively evict something that is desirable 
to keep.{quote}
 

I'm not sure what the correct fix is here, but I would appreciate it if anyone 
could take a look.

 

 






[jira] [Commented] (CASSANDRA-19556) Guardrail to block DDL/DCL queries

2024-05-08 Thread Yuqi Yan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17844801#comment-17844801
 ] 

Yuqi Yan commented on CASSANDRA-19556:
--

[~smiklosovic] thanks for posting this on the ML and for the follow-ups here. 
Yes, that sounds good to me.

> Guardrail to block DDL/DCL queries
> --
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 1.5h
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
> being created (e.g. when doing a live upgrade).
> For the DDL guardrail, the current implementation won't block the query if it's 
> a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking whether 
> it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
> EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
> causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  
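
The trade-off described above — blocking every DDL statement versus letting 
schema no-ops through — can be sketched like this (hypothetical names and a 
simplified existence check; this is not the actual guardrail patch):

```java
// Hypothetical sketch (names and existence check are assumptions, not
// Cassandra's real guardrail code) of the two placements discussed:
// blocking every DDL statement up front vs. checking only after the no-op
// test, so CREATE ... IF NOT EXISTS on an existing table passes.
import java.util.Set;

public class DdlGuardrailSketch {
    static boolean ddlBlocked = true;                 // guardrail toggle
    static Set<String> existingTables = Set.of("ks.users");

    // Placement A: reject any DDL while the guardrail is on, even no-ops
    // (breaks clients that always run CREATE ... IF NOT EXISTS at startup).
    static boolean allowStrict(String table, boolean ifNotExists) {
        return !ddlBlocked;
    }

    // Placement B: run the existence check first (as in apply()), and only
    // guard statements that would actually change the schema.
    static boolean allowNoOpAware(String table, boolean ifNotExists) {
        boolean noOp = ifNotExists && existingTables.contains(table);
        return noOp || !ddlBlocked;
    }

    public static void main(String[] args) {
        System.out.println(allowStrict("ks.users", true));     // false: startup would fail
        System.out.println(allowNoOpAware("ks.users", true));  // true: no-op passes
        System.out.println(allowNoOpAware("ks.orders", true)); // false: real DDL blocked
    }
}
```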






[jira] [Comment Edited] (CASSANDRA-19556) Guardrail to block DDL/DCL queries

2024-05-01 Thread Yuqi Yan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842576#comment-17842576
 ] 

Yuqi Yan edited comment on CASSANDRA-19556 at 5/1/24 7:50 AM:
--

Okay, I just realized there is a similar feature already added in 5.0: 
CASSANDRA-17495

That implementation also adds these checks for a few subclasses of 
AlterTableStatement in apply(). I guess with 17495 we don't need the DDL 
guardrail, but we can still have this DCL guardrail. wdyt? [~smiklosovic] 


was (Author: JIRAUSER301388):
Okay, I just realized there is a similar feature already added in 5.0: 
https://issues.apache.org/jira/browse/CASSANDRA-17495

That implementation also adds these checks for a few subclasses of 
AlterTableStatement in apply(). I guess with 17495 we don't need the DDL 
guardrail, but we can still have this DCL guardrail. wdyt? [~smiklosovic] 

> Guardrail to block DDL/DCL queries
> --
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
> being created (e.g. when doing a live upgrade).
> For the DDL guardrail, the current implementation won't block the query if it's 
> a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking whether 
> it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
> EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
> causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  






[jira] [Comment Edited] (CASSANDRA-19556) Guardrail to block DDL/DCL queries

2024-05-01 Thread Yuqi Yan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842576#comment-17842576
 ] 

Yuqi Yan edited comment on CASSANDRA-19556 at 5/1/24 7:49 AM:
--

Okay, I just realized there is a similar feature already added in 5.0: 
https://issues.apache.org/jira/browse/CASSANDRA-17495

That implementation also adds these checks for a few subclasses of 
AlterTableStatement in apply(). I guess with 17495 we don't need the DDL 
guardrail, but we can still have this DCL guardrail. wdyt? [~smiklosovic] 


was (Author: JIRAUSER301388):
Okay, I just realized there is a similar feature already added in 5.0: 
https://issues.apache.org/jira/browse/CASSANDRA-17495

That implementation also adds these checks for these statements. I guess with 
17495 we don't need the DDL guardrail, but we can still have this DCL 
guardrail. wdyt? [~smiklosovic] 

> Guardrail to block DDL/DCL queries
> --
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
> being created (e.g. when doing a live upgrade).
> For the DDL guardrail, the current implementation won't block the query if it's 
> a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking whether 
> it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
> EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
> causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  






[jira] [Commented] (CASSANDRA-19556) Guardrail to block DDL/DCL queries

2024-05-01 Thread Yuqi Yan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17842576#comment-17842576
 ] 

Yuqi Yan commented on CASSANDRA-19556:
--

Okay, I just realized there is a similar feature already added in 5.0: 
https://issues.apache.org/jira/browse/CASSANDRA-17495

That implementation also adds these checks for these statements. I guess with 
17495 we don't need the DDL guardrail, but we can still have this DCL 
guardrail. wdyt? [~smiklosovic] 

> Guardrail to block DDL/DCL queries
> --
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 1h 20m
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
> being created (e.g. when doing a live upgrade).
> For the DDL guardrail, the current implementation won't block the query if it's 
> a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking whether 
> it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
> EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
> causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  






[jira] [Updated] (CASSANDRA-19556) Guardrail to block DDL/DCL queries

2024-04-29 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19556:
-
Fix Version/s: 5.x
  Description: 
Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
being created (e.g. when doing a live upgrade).

For the DDL guardrail, the current implementation won't block the query if it's 
a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
guardrail check is added in apply() right after all the existence checks).

I don't have a preference between blocking every DDL query and checking whether 
it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
causing startup to fail.

 

4.1 PR: [https://github.com/apache/cassandra/pull/3248]

trunk PR: [https://github.com/apache/cassandra/pull/3275]

 

  was:
Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
being created (e.g. when doing a live upgrade).

For the DDL guardrail, the current implementation won't block the query if it's 
a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
guardrail check is added in apply() right after all the existence checks).

I don't have a preference between blocking every DDL query and checking whether 
it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
causing startup to fail.

 

4.1 PR: [https://github.com/apache/cassandra/pull/3248]

 


> Guardrail to block DDL/DCL queries
> --
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 5.x
>
>  Time Spent: 1h 10m
>  Remaining Estimate: 0h
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
> being created (e.g. when doing a live upgrade).
> For the DDL guardrail, the current implementation won't block the query if it's 
> a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking whether 
> it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
> EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
> causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
> trunk PR: [https://github.com/apache/cassandra/pull/3275]
>  






[jira] [Commented] (CASSANDRA-19552) Nodetool to get/set guardrails configurations

2024-04-11 Thread Yuqi Yan (Jira)


[ 
https://issues.apache.org/jira/browse/CASSANDRA-19552?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17836307#comment-17836307
 ] 

Yuqi Yan commented on CASSANDRA-19552:
--

[~smiklosovic] thanks, I'm quite new to the community and wasn't aware of this 
trend. Yes, I agree we should take the vtable approach for 5.x. Looking at your 
patch, it seems we don't have mutable vtables in 4.1 yet.

> Nodetool to get/set guardrails configurations
> -
>
> Key: CASSANDRA-19552
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19552
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 4.1.x
>
>
> Currently guardrails are only configurable through JMX / cassandra.yaml
> This provides a nodetool command to interact with all the getters/setters for 
> guardrails.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3243]
> trunk PR: [https://github.com/apache/cassandra/pull/3244]






[jira] [Updated] (CASSANDRA-19556) Guardrail to block DDL/DCL queries

2024-04-11 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19556:
-
Description: 
Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
being created (e.g. when doing a live upgrade).

For the DDL guardrail, the current implementation won't block the query if it's 
a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
guardrail check is added in apply() right after all the existence checks).

I don't have a preference between blocking every DDL query and checking whether 
it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
causing startup to fail.

 

4.1 PR: [https://github.com/apache/cassandra/pull/3248]

 

  was:
Sometimes we want to block DDL/DCL queries to stop new schemas from being 
created (e.g. when doing a live upgrade).

For the DDL guardrail, the current implementation won't block the query if it's 
a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
guardrail check is added in apply() right after all the existence checks).

I don't have a preference between blocking every DDL query and checking whether 
it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
causing startup to fail.

 

4.1 PR: [https://github.com/apache/cassandra/pull/3248]

 


> Guardrail to block DDL/DCL queries
> --
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
>
> Sometimes we want to block DDL/DCL queries to stop new schemas or roles from 
> being created (e.g. when doing a live upgrade).
> For the DDL guardrail, the current implementation won't block the query if it's 
> a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking whether 
> it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
> EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
> causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
>  






[jira] [Updated] (CASSANDRA-19556) Guardrail to block DDL/DCL queries

2024-04-11 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19556?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19556:
-
Description: 
Sometimes we want to block DDL/DCL queries to stop new schemas from being 
created (e.g. when doing a live upgrade).

For the DDL guardrail, the current implementation won't block the query if it's 
a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
guardrail check is added in apply() right after all the existence checks).

I don't have a preference between blocking every DDL query and checking whether 
it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
causing startup to fail.

 

4.1 PR: [https://github.com/apache/cassandra/pull/3248]

 

  was:
Sometimes we want to block DDL/DCL queries to stop new schemas from being 
created (e.g. when doing a live upgrade).

For the DDL guardrail, the current implementation won't block the query if it's 
a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
guardrail check is added in apply() right after all the existence checks).

I don't have a preference between blocking every DDL query and checking whether 
it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
causing startup to fail.


> Guardrail to block DDL/DCL queries
> --
>
> Key: CASSANDRA-19556
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19556
> Project: Cassandra
>  Issue Type: New Feature
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
>
> Sometimes we want to block DDL/DCL queries to stop new schemas from being 
> created (e.g. when doing a live upgrade).
> For the DDL guardrail, the current implementation won't block the query if it's 
> a no-op (e.g. CREATE TABLE ... IF NOT EXISTS when the table already exists; the 
> guardrail check is added in apply() right after all the existence checks).
> I don't have a preference between blocking every DDL query and checking whether 
> it's a no-op here. It's just that some of our users always run CREATE ... IF NOT 
> EXISTS ... at startup, which is a no-op but would be blocked by this guardrail, 
> causing startup to fail.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3248]
>  






[jira] [Updated] (CASSANDRA-19552) Nodetool to get/set guardrails configurations

2024-04-10 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19552:
-
Fix Version/s: 4.1.x

> Nodetool to get/set guardrails configurations
> -
>
> Key: CASSANDRA-19552
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19552
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
> Fix For: 4.1.x
>
>
> Currently guardrails are only configurable through JMX / cassandra.yaml
> This provides a nodetool command to interact with all the getters/setters for 
> guardrails.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3243]
> trunk PR: [https://github.com/apache/cassandra/pull/3244]






[jira] [Updated] (CASSANDRA-19552) Nodetool to get/set guardrails configurations

2024-04-10 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19552?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19552:
-
Description: 
Currently guardrails are only configurable through JMX / cassandra.yaml

This provides a nodetool command to interact with all the getters/setters for 
guardrails.

 

4.1 PR: [https://github.com/apache/cassandra/pull/3243]

trunk PR: [https://github.com/apache/cassandra/pull/3244]

  was:
Currently guardrails are only configurable through JMX / cassandra.yaml

This provides a nodetool command to interact with all the getters/setters for 
guardrails.


> Nodetool to get/set guardrails configurations
> -
>
> Key: CASSANDRA-19552
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19552
> Project: Cassandra
>  Issue Type: Improvement
>  Components: Feature/Guardrails
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
>
> Currently guardrails are only configurable through JMX / cassandra.yaml
> This provides a nodetool command to interact with all the getters/setters for 
> guardrails.
>  
> 4.1 PR: [https://github.com/apache/cassandra/pull/3243]
> trunk PR: [https://github.com/apache/cassandra/pull/3244]






[jira] [Created] (CASSANDRA-19552) Nodetool to get/set guardrails configurations

2024-04-10 Thread Yuqi Yan (Jira)
Yuqi Yan created CASSANDRA-19552:


 Summary: Nodetool to get/set guardrails configurations
 Key: CASSANDRA-19552
 URL: https://issues.apache.org/jira/browse/CASSANDRA-19552
 Project: Cassandra
  Issue Type: Improvement
  Components: Feature/Guardrails
Reporter: Yuqi Yan
Assignee: Yuqi Yan


Currently guardrails are only configurable through JMX / cassandra.yaml

This provides a nodetool command to interact with all the getters/setters for 
guardrails.






[jira] [Updated] (CASSANDRA-19378) JMXStandardsTest fails when running with codecoverage

2024-04-10 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19378:
-
Source Control Link: https://github.com/apache/cassandra/pull/3242  (was: 
https://github.com/apache/cassandra/pull/3092)

> JMXStandardsTest fails when running with codecoverage
> -
>
> Key: CASSANDRA-19378
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19378
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
>  Time Spent: 10m
>  Remaining Estimate: 0h
>
> JMXStandardsTest has been part of the unit tests since 4.1. I recently ran this 
> test with code coverage and it fails:
> {code:java}
> ant codecoverage -Dtaskname=testsome 
> -Dtest.name=org.apache.cassandra.tools.JMXStandardsTest {code}
> {code:java}
> [junit-timeout] -  --- 
> [junit-timeout] Testcase: 
> interfaces(org.apache.cassandra.tools.JMXStandardsTest):      FAILED 
> [junit-timeout] Errors detected while validating MBeans [junit-timeout] Error 
> at signature parameter; type java.lang.invoke.MethodHandles.Lookup is not in 
> the supported set of types, method method 'private static boolean[] 
> org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
>  [junit-timeout] Error at signature parameter; type java.lang.Class is not in 
> the supported set of types, method method 'private static boolean[] 
> org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
>  [junit-timeout] junit.framework.AssertionFailedError: Errors detected while 
> validating MBeans [junit-timeout] Error at signature parameter; type 
> java.lang.invoke.MethodHandles.Lookup is not in the supported set of types, 
> method method 'private static boolean[] 
> org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
>  [junit-timeout] Error at signature parameter; type java.lang.Class is not in 
> the supported set of types, method method 'private static boolean[] 
> org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
>  [junit-timeout]         at 
> org.apache.cassandra.tools.JMXStandardsTest.interfaces(JMXStandardsTest.java:156)
>  [junit-timeout]         at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) [junit-timeout]         at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  [junit-timeout]         at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
> This `$jacocoInit` method was included in the validation, but it should be excluded
>  



--



[jira] [Updated] (CASSANDRA-19378) JMXStandardsTest fails when running with codecoverage

2024-02-07 Thread Yuqi Yan (Jira)


 [ 
https://issues.apache.org/jira/browse/CASSANDRA-19378?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Yuqi Yan updated CASSANDRA-19378:
-
Source Control Link: https://github.com/apache/cassandra/pull/3092

> JMXStandardsTest fails when running with codecoverage
> -
>
> Key: CASSANDRA-19378
> URL: https://issues.apache.org/jira/browse/CASSANDRA-19378
> Project: Cassandra
>  Issue Type: Bug
>  Components: Test/unit
>Reporter: Yuqi Yan
>Assignee: Yuqi Yan
>Priority: Normal
>
> JMXStandardsTest has been part of the unit tests since 4.1. I recently ran this test 
> with codecoverage and it fails:
> {code:java}
> ant codecoverage -Dtaskname=testsome 
> -Dtest.name=org.apache.cassandra.tools.JMXStandardsTest {code}
> {code:java}
> [junit-timeout] -  --- 
> [junit-timeout] Testcase: 
> interfaces(org.apache.cassandra.tools.JMXStandardsTest):      FAILED 
> [junit-timeout] Errors detected while validating MBeans [junit-timeout] Error 
> at signature parameter; type java.lang.invoke.MethodHandles.Lookup is not in 
> the supported set of types, method method 'private static boolean[] 
> org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
>  [junit-timeout] Error at signature parameter; type java.lang.Class is not in 
> the supported set of types, method method 'private static boolean[] 
> org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
>  [junit-timeout] junit.framework.AssertionFailedError: Errors detected while 
> validating MBeans [junit-timeout] Error at signature parameter; type 
> java.lang.invoke.MethodHandles.Lookup is not in the supported set of types, 
> method method 'private static boolean[] 
> org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
>  [junit-timeout] Error at signature parameter; type java.lang.Class is not in 
> the supported set of types, method method 'private static boolean[] 
> org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
>  [junit-timeout]         at 
> org.apache.cassandra.tools.JMXStandardsTest.interfaces(JMXStandardsTest.java:156)
>  [junit-timeout]         at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native 
> Method) [junit-timeout]         at 
> java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>  [junit-timeout]         at 
> java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> {code}
> This `$jacocoInit` method was included in the validation, but it should be excluded
>  



--



[jira] [Created] (CASSANDRA-19378) JMXStandardsTest fails when running with codecoverage

2024-02-07 Thread Yuqi Yan (Jira)
Yuqi Yan created CASSANDRA-19378:


 Summary: JMXStandardsTest fails when running with codecoverage
 Key: CASSANDRA-19378
 URL: https://issues.apache.org/jira/browse/CASSANDRA-19378
 Project: Cassandra
  Issue Type: Bug
  Components: Test/unit
Reporter: Yuqi Yan
Assignee: Yuqi Yan


JMXStandardsTest has been part of the unit tests since 4.1. I recently ran this test with 
codecoverage and it fails:
{code:java}
ant codecoverage -Dtaskname=testsome 
-Dtest.name=org.apache.cassandra.tools.JMXStandardsTest {code}
{code:java}
[junit-timeout] -  --- [junit-timeout] 
Testcase: interfaces(org.apache.cassandra.tools.JMXStandardsTest):      FAILED 
[junit-timeout] Errors detected while validating MBeans [junit-timeout] Error 
at signature parameter; type java.lang.invoke.MethodHandles.Lookup is not in 
the supported set of types, method method 'private static boolean[] 
org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
 [junit-timeout] Error at signature parameter; type java.lang.Class is not in 
the supported set of types, method method 'private static boolean[] 
org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
 [junit-timeout] junit.framework.AssertionFailedError: Errors detected while 
validating MBeans [junit-timeout] Error at signature parameter; type 
java.lang.invoke.MethodHandles.Lookup is not in the supported set of types, 
method method 'private static boolean[] 
org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
 [junit-timeout] Error at signature parameter; type java.lang.Class is not in 
the supported set of types, method method 'private static boolean[] 
org.apache.cassandra.service.StorageServiceMBean.$jacocoInit(java.lang.invoke.MethodHandles$Lookup,java.lang.String,java.lang.Class)'
 [junit-timeout]         at 
org.apache.cassandra.tools.JMXStandardsTest.interfaces(JMXStandardsTest.java:156)
 [junit-timeout]         at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method) 
[junit-timeout]         at 
java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
 [junit-timeout]         at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
{code}
This `$jacocoInit` method was included in the validation, but it should be excluded.
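The `$jacocoInit` member is a synthetic method injected by the JaCoCo agent at instrumentation time, and reflection exposes it alongside the real MBean methods. One way to keep such members out of the validation is to skip anything flagged synthetic. A minimal sketch (the class and interface names here are hypothetical, not from the Cassandra source; only `Method.isSynthetic()` is the real JDK API):
{code:java}
import java.lang.reflect.Method;
import java.util.ArrayList;
import java.util.List;

public class SyntheticMethodFilter {
    // Stand-in for an MBean interface under validation.
    interface ExampleMBean {
        String getStatus();
    }

    // Collect only non-synthetic declared methods, so compiler- or
    // agent-generated members such as JaCoCo's $jacocoInit are skipped.
    static List<String> publicApiMethods(Class<?> iface) {
        List<String> names = new ArrayList<>();
        for (Method m : iface.getDeclaredMethods()) {
            if (!m.isSynthetic()) {
                names.add(m.getName());
            }
        }
        return names;
    }

    public static void main(String[] args) {
        System.out.println(publicApiMethods(ExampleMBean.class));
    }
}
{code}
Without instrumentation the filter is a no-op; under JaCoCo it would drop the injected `$jacocoInit` method before the signature checks run.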

 



--