[
https://issues.apache.org/jira/browse/CALCITE-2521?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16600376#comment-16600376
]
Vladimir Sitnikov commented on CALCITE-2521:
--------------------------------------------
{quote} stress-test the metadata cache, and it does its job very well{quote}
How can you tell?
The test generates exactly the same code for exactly the same input values, so it
does not reproduce the original issue.
{quote} This test case prevents something similar from happening again{quote}
1) Does the test reproduce the error if I revert the code change of CALCITE-1808?
No. The test still passes.
2) The cache in CALCITE-1808 did contain 100'000+ entries. Did you investigate
what those entries were? It doesn't look like you did.
That is: the issue might have been caused by a metadata provider in Hive that
lacked equals/hashCode. You did add a workaround of limiting the cache size
(which is good anyway), and you did add a test that verifies NOTHING.
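For illustration of the suspected failure mode (this is a hypothetical key class, not Calcite's or Hive's actual code): a key type that does not override equals/hashCode falls back to identity semantics, so a HashMap-backed cache grows by one entry per lookup even when the keys are logically identical:

```java
import java.util.HashMap;
import java.util.Map;

public class CacheKeyDemo {
  // Hypothetical provider key that, like the suspected Hive provider,
  // does not override equals()/hashCode().
  static final class ProviderKey {
    final String name;
    ProviderKey(String name) {
      this.name = name;
    }
  }

  public static void main(String[] args) {
    Map<ProviderKey, String> cache = new HashMap<>();
    for (int i = 0; i < 100_000; i++) {
      // Logically identical keys, but identity-based hashing means
      // every insertion creates a brand-new entry.
      cache.put(new ProviderKey("rowCount"), "cached handler");
    }
    System.out.println(cache.size()); // 100000, not 1
  }
}
```

With equals/hashCode overridden on the key, the same loop would leave a single entry; without them, the cache balloons exactly as reported in CALCITE-1808.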
{quote}20 seconds per CI run is well worth paying.{quote}
On my machine it takes 10 seconds. I still do not want to pay that time,
especially when I know the test adds no value.
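For the record, the kind of assertion the test lacks can be sketched without Calcite's internals (JaninoRelMetadataProvider does not expose its cache size, so this uses a stand-in LRU cache built on java.util.LinkedHashMap):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCacheDemo {
  static final int MAX_SIZE = 1_000;

  public static void main(String[] args) {
    // LRU cache bounded at MAX_SIZE, standing in for the handler cache.
    Map<Integer, String> cache =
        new LinkedHashMap<Integer, String>(16, 0.75f, true) {
          @Override protected boolean removeEldestEntry(
              Map.Entry<Integer, String> eldest) {
            return size() > MAX_SIZE;
          }
        };
    for (int i = 0; i < 10_000; i++) {
      cache.put(i, "handler-" + i);
    }
    // The assertion the original test never makes: after far more
    // insertions than the limit, the cache is still bounded.
    if (cache.size() > MAX_SIZE) {
      throw new AssertionError("cache exceeded bound: " + cache.size());
    }
    System.out.println(cache.size()); // 1000
  }
}
```

A test structured this way fails the moment the bound stops being enforced, which is what "verifying the cache is bounded" should mean.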
> RelMetadataTest#testMetadataHandlerCacheLimit takes 8-20 seconds and it looks
> useless
> -------------------------------------------------------------------------------------
>
> Key: CALCITE-2521
> URL: https://issues.apache.org/jira/browse/CALCITE-2521
> Project: Calcite
> Issue Type: Bug
> Components: core
> Affects Versions: 1.17.0
> Reporter: Vladimir Sitnikov
> Assignee: Julian Hyde
> Priority: Major
>
> org.apache.calcite.test.RelMetadataTest#testMetadataHandlerCacheLimit was
> introduced in CALCITE-1808
> Travis takes 17 seconds to execute RelMetadataTest
> {noformat}
> [INFO] Running org.apache.calcite.test.RelMetadataTest
> [WARNING] Tests run: 140, Failures: 0, Errors: 0, Skipped: 6, Time elapsed:
> 17.318 s - in {noformat}
> {code:java}
> /** Test case for
>  * <a href="https://issues.apache.org/jira/browse/CALCITE-1808">[CALCITE-1808]
>  * JaninoRelMetadataProvider loading cache might cause
>  * OutOfMemoryError</a>. */
> @Test public void testMetadataHandlerCacheLimit() {
>   Assume.assumeTrue("If cache size is too large, this test may fail and the "
>           + "test won't be to blame",
>       SaffronProperties.INSTANCE.metadataHandlerCacheMaximumSize().get()
>           < 10_000);
>   final int iterationCount = 2_000;
>   final RelNode rel = convertSql("select * from emp");
>   final RelMetadataProvider metadataProvider =
>       rel.getCluster().getMetadataProvider();
>   final RelOptPlanner planner = rel.getCluster().getPlanner();
>   for (int i = 0; i < iterationCount; i++) {
>     RelMetadataQuery.THREAD_PROVIDERS.set(
>         JaninoRelMetadataProvider.of(
>             new CachingRelMetadataProvider(metadataProvider, planner)));
>     final RelMetadataQuery mq = RelMetadataQuery.instance();
>     final Double result = mq.getRowCount(rel);
>     assertThat(result, within(14d, 0.1d));
>   }
> }
> {code}
> In fact, it creates 2,000 metadata providers, and that takes noticeable time
> (e.g. 8 seconds on my notebook).
> I suggest removing the test, as it spends noticeable time and never
> reproduces the "out of memory" condition.
> Technically speaking, the test tries to validate that the
> {{org.apache.calcite.rel.metadata.JaninoRelMetadataProvider}} cache is
> bounded; however, there are no assertions.
> An alternative option would be to significantly increase the number of
> iterations and disable the test by default.
--
This message was sent by Atlassian JIRA
(v7.6.3#76005)