[
https://issues.apache.org/jira/browse/IGNITE-8100?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16566437#comment-16566437
]
Vladimir Ozerov commented on IGNITE-8100:
-----------------------------------------
[~ilyak], [~pkouznet], [~tledkov-gridgain],
The proposed patch fixes the original problem, but its implementation may lead to
massive overhead on the whole cluster. Caches are not started on client nodes
until they are needed. When a client node needs to start a cache, it sends a
discovery message; server nodes then register this client, and it participates
in all further exchanges, making them heavier.
At the same time there is a high chance that the user will never use this cache
and only needs to fetch metadata. This is especially true for various third-party
tools, which may eagerly list all database objects in the UI. We definitely do not
want to start potentially dozens of caches on a client just for that. In
addition, the current implementation starts caches one by one, which may take a
very long time.
The correct implementation should work as follows:
# When a node is started, we should collect all known cache descriptors into a
local collection under a *write lock*
# When a dynamic cache change request is received (i.e. a cache is started or
stopped), we should update this collection under the *write lock* in the
*discovery thread*
# Schema changes ({{CREATE/DROP INDEX}}, {{ALTER TABLE}}) should be processed
in the same way as point 2
# When a metadata request from the JDBC driver is received, we should iterate over
the collected descriptors and construct the result (e.g. the list of schemas, list
of indexes, etc.) under a *read lock*
The lock is necessary because the descriptors may be changed from the discovery
thread, leading to inconsistent JDBC results. Alternatively, you may want to
employ a copy-on-write technique (see the sketch below).
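Below is a minimal sketch of that locking scheme. It assumes a hypothetical
{{JdbcMetadataRegistry}} class and a simplified cache-name-to-schema map instead
of the real descriptors ({{DynamicCacheDescriptor}} / {{GridQueryTypeDescriptor}});
it only illustrates points 1, 2 and 4 above, not actual Ignite internals:
{noformat}
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.locks.ReadWriteLock;
import java.util.concurrent.locks.ReentrantReadWriteLock;
import java.util.regex.Pattern;

/**
 * Hypothetical per-node registry of cache/schema metadata.
 * Writers: discovery thread (cache start/stop, schema changes).
 * Readers: JDBC/ODBC metadata requests.
 */
public class JdbcMetadataRegistry {
    /** Write lock in discovery thread, read lock for metadata requests. */
    private final ReadWriteLock lock = new ReentrantReadWriteLock();

    /** Cache name -> schema name (simplified stand-in for cache descriptors). */
    private final Map<String, String> schemaByCache = new HashMap<>();

    /** Collects all known descriptors on node start (point 1). */
    public void init(Map<String, String> knownCaches) {
        lock.writeLock().lock();

        try {
            schemaByCache.putAll(knownCaches);
        }
        finally {
            lock.writeLock().unlock();
        }
    }

    /** Called from the discovery thread on dynamic cache start (point 2). */
    public void onCacheStarted(String cacheName, String schemaName) {
        lock.writeLock().lock();

        try {
            schemaByCache.put(cacheName, schemaName);
        }
        finally {
            lock.writeLock().unlock();
        }
    }

    /** Called from the discovery thread on cache stop (point 2). */
    public void onCacheStopped(String cacheName) {
        lock.writeLock().lock();

        try {
            schemaByCache.remove(cacheName);
        }
        finally {
            lock.writeLock().unlock();
        }
    }

    /** Serves a JDBC getSchemas request from local descriptors (point 4). */
    public List<String> schemas(Pattern schemaPtrn) {
        lock.readLock().lock();

        try {
            List<String> res = new ArrayList<>();

            for (String schema : schemaByCache.values()) {
                if ((schemaPtrn == null || schemaPtrn.matcher(schema).matches())
                    && !res.contains(schema))
                    res.add(schema);
            }

            return res;
        }
        finally {
            lock.readLock().unlock();
        }
    }
}
{noformat}
With a copy-on-write variant, the map would instead be an immutable snapshot
replaced atomically on every change in the discovery thread, so the metadata
path would need no read lock at all.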
> jdbc getSchemas method could miss schemas for not started remote caches
> -----------------------------------------------------------------------
>
> Key: IGNITE-8100
> URL: https://issues.apache.org/jira/browse/IGNITE-8100
> Project: Ignite
> Issue Type: Bug
> Reporter: Pavel Kuznetsov
> Assignee: Ilya Kasnacheev
> Priority: Major
>
> On the JDBC side we have
> org.apache.ignite.internal.jdbc.thin.JdbcThinDatabaseMetadata#getSchemas(java.lang.String,
> java.lang.String)
> on the server side result is constructed by this:
> {noformat}
> for (String cacheName : ctx.cache().publicCacheNames()) {
>     for (GridQueryTypeDescriptor table : ctx.query().types(cacheName)) {
>         if (matches(table.schemaName(), schemaPtrn))
>             schemas.add(table.schemaName());
>     }
> }
> {noformat}
> see
> org.apache.ignite.internal.processors.odbc.jdbc.JdbcRequestHandler#getSchemas
> If we haven't started a cache (with a table) on some remote node, we will miss
> that schema.