[
https://issues.apache.org/jira/browse/CALCITE-2521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16599853#comment-16599853
]
Julian Hyde commented on CALCITE-2521:
--------------------------------------
It is not useless. It was constructed to stress-test the metadata cache, and it
does its job very well. The metadata cache is complex: it uses code generation,
has very compact data structures, and has to play nicely with GC and exceptions
in order to avoid memory leaks.
CALCITE-1808 was a complex bug to solve, and it caused a serious runtime issue
in Hive. This test case prevents something similar from happening again.
20 seconds per CI run is well worth paying.
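To illustrate the property the test stresses (this is only a self-contained sketch, not Calcite's actual implementation, which generates handler classes with Janino): a metadata-handler cache must be size-bounded so that repeatedly creating providers cannot exhaust the heap. A minimal bounded cache can be built on {{java.util.LinkedHashMap}} by overriding {{removeEldestEntry}}:

{code:java}
import java.util.LinkedHashMap;
import java.util.Map;

/** Illustrative sketch only -- not JaninoRelMetadataProvider's real cache.
 * Shows the invariant the test exercises: no matter how many entries are
 * inserted, the cache never grows past its configured maximum size. */
public class BoundedCacheSketch {
  /** Returns an access-ordered map that evicts its eldest entry once
   * {@code maxSize} is exceeded. */
  static <K, V> Map<K, V> boundedCache(final int maxSize) {
    return new LinkedHashMap<K, V>(16, 0.75f, true) {
      @Override protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > maxSize;
      }
    };
  }

  public static void main(String[] args) {
    Map<Integer, String> cache = boundedCache(3);
    for (int i = 0; i < 10; i++) {
      cache.put(i, "handler-" + i);
    }
    // Eviction keeps the cache at its bound; size() is 3, not 10.
    System.out.println(cache.size());
  }
}
{code}

Under this reading, an assertion such as "cache size never exceeds the configured maximum" is checkable directly; the Calcite test instead relies on the JVM not running out of memory over many iterations.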
> RelMetadataTest#testMetadataHandlerCacheLimit takes 8-20 seconds and it looks
> useless
> -------------------------------------------------------------------------------------
>
> Key: CALCITE-2521
> URL: https://issues.apache.org/jira/browse/CALCITE-2521
> Project: Calcite
> Issue Type: Bug
> Components: core
> Affects Versions: 1.17.0
> Reporter: Vladimir Sitnikov
> Assignee: Julian Hyde
> Priority: Major
>
> org.apache.calcite.test.RelMetadataTest#testMetadataHandlerCacheLimit was
> introduced in CALCITE-1808
> Travis takes 17 seconds to execute RelMetadataTest
> {noformat}
> [INFO] Running org.apache.calcite.test.RelMetadataTest
> [WARNING] Tests run: 140, Failures: 0, Errors: 0, Skipped: 6, Time elapsed: 17.318 s - in
> {noformat}
> {code:java}
> /** Test case for
>  * <a href="https://issues.apache.org/jira/browse/CALCITE-1808">[CALCITE-1808]
>  * JaninoRelMetadataProvider loading cache might cause
>  * OutOfMemoryError</a>. */
> @Test public void testMetadataHandlerCacheLimit() {
>   Assume.assumeTrue("If cache size is too large, this test may fail and the "
>           + "test won't be to blame",
>       SaffronProperties.INSTANCE.metadataHandlerCacheMaximumSize().get()
>           < 10_000);
>   final int iterationCount = 2_000;
>   final RelNode rel = convertSql("select * from emp");
>   final RelMetadataProvider metadataProvider =
>       rel.getCluster().getMetadataProvider();
>   final RelOptPlanner planner = rel.getCluster().getPlanner();
>   for (int i = 0; i < iterationCount; i++) {
>     RelMetadataQuery.THREAD_PROVIDERS.set(
>         JaninoRelMetadataProvider.of(
>             new CachingRelMetadataProvider(metadataProvider, planner)));
>     final RelMetadataQuery mq = RelMetadataQuery.instance();
>     final Double result = mq.getRowCount(rel);
>     assertThat(result, within(14d, 0.1d));
>   }
> }
> {code}
> In fact, it creates 2,000 metadata providers, and it does take noticeable
> time (e.g. 8 seconds on my notebook).
> I suggest removing the test, as it spends noticeable time and it never
> reproduces an "out of memory" error.
> Technically speaking, the test tries to validate that the
> {{org.apache.calcite.rel.metadata.JaninoRelMetadataProvider}} cache is
> bounded; however, there are no assertions.
> An alternative option would be to significantly increase the number of
> iterations and disable the test by default.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)