[ https://issues.apache.org/jira/browse/PHOENIX-7251?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17824514#comment-17824514 ]

ASF GitHub Bot commented on PHOENIX-7251:
-----------------------------------------

palashc commented on PR #1845:
URL: https://github.com/apache/phoenix/pull/1845#issuecomment-1984251635

   > Lets see why this is breaking our test.

   Creating a connection on the server side fails with a malformed connection URL exception:
   
   ```
   Caused by: java.sql.SQLException: ERROR 102 (08001): Malformed connection url. Quorum not specified and hbase.client.zookeeper.quorum is not set in configuration : jdbc:phoenix
        at org.apache.phoenix.exception.SQLExceptionCode$Factory$1.newException(SQLExceptionCode.java:656)
        at org.apache.phoenix.exception.SQLExceptionInfo.buildException(SQLExceptionInfo.java:229)
        at org.apache.phoenix.jdbc.ConnectionInfo.getMalFormedUrlException(ConnectionInfo.java:82)
        at org.apache.phoenix.jdbc.ZKConnectionInfo$Builder.normalize(ZKConnectionInfo.java:322)
        at org.apache.phoenix.jdbc.ZKConnectionInfo$Builder.create(ZKConnectionInfo.java:175)
        at org.apache.phoenix.jdbc.ConnectionInfo.create(ConnectionInfo.java:174)
        at org.apache.phoenix.jdbc.ConnectionInfo.createNoLogin(ConnectionInfo.java:119)
        at org.apache.phoenix.util.QueryUtil.getConnectionUrl(QueryUtil.java:454)
        at org.apache.phoenix.util.QueryUtil.getConnectionUrl(QueryUtil.java:438)
        at org.apache.phoenix.util.QueryUtil.getConnection(QueryUtil.java:429)
        at org.apache.phoenix.util.QueryUtil.getConnectionOnServer(QueryUtil.java:410)
        at org.apache.phoenix.cache.ServerMetadataCacheImpl.getConnection(ServerMetadataCacheImpl.java:162)
        at org.apache.phoenix.end2end.ServerMetadataCacheTestImpl.getConnection(ServerMetadataCacheTestImpl.java:87)
        at org.apache.phoenix.cache.ServerMetadataCacheImpl.getLastDDLTimestampForTable(ServerMetadataCacheImpl.java:134)
        at org.apache.phoenix.coprocessor.VerifyLastDDLTimestamp.verifyLastDDLTimestamp(VerifyLastDDLTimestamp.java:57)
        at org.apache.phoenix.coprocessor.PhoenixRegionServerEndpoint.validateLastDDLTimestamp(PhoenixRegionServerEndpoint.java:76)
   ```
   
   
   So I think we do need to read all site files? 
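For context, the quorum the exception complains about is normally supplied to the server-side `Configuration` via the HBase site files rather than the bare `jdbc:phoenix` URL. A minimal, illustrative hbase-site.xml fragment (the host names are placeholders, and the property name is taken from the error message above):

```xml
<!-- hbase-site.xml: illustrative fragment only; hosts are placeholders -->
<configuration>
  <property>
    <name>hbase.client.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
</configuration>
```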




> Refactor server-side code to support multiple ServerMetadataCache for HA tests
> ------------------------------------------------------------------------------
>
>                 Key: PHOENIX-7251
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-7251
>             Project: Phoenix
>          Issue Type: Sub-task
>            Reporter: Palash Chauhan
>            Assignee: Palash Chauhan
>            Priority: Major
>
> In the metadata caching re-design, `ServerMetadataCache` is required to be a 
> singleton in the implementation. This affects tests for the HA use case 
> because the coprocessors on the two clusters end up using the same 
> `ServerMetadataCache`. All tests which execute queries while one of the 
> clusters is unavailable will fail.
> We can refactor the implementation in the following way to support HA test 
> cases:
> 1. Create a `ServerMetadataCache` interface and keep the current 
> implementation as `ServerMetadataCacheImpl` for all other tests. This would 
> remain a singleton.
> 2. Implement `ServerMetadataCacheHAImpl` with a map of instances keyed on 
> config.
> 3. Extend `PhoenixRegionServerEndpoint` and use `ServerMetadataCacheHAImpl`. 
> 4. In HA tests, load this new endpoint on the region servers. 
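The four steps above can be sketched in plain Java. The interface and class names follow the description, but the method signatures, the `String` cluster key (standing in for "keyed on config"), and the cache internals are illustrative assumptions, not Phoenix's actual API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Step 1: a ServerMetadataCache interface; the one method mirrors the call
// seen in the stack trace, but its signature here is an assumption.
interface ServerMetadataCache {
    Long getLastDDLTimestampForTable(String tableName);
}

// The current implementation stays a singleton for all non-HA tests.
class ServerMetadataCacheImpl implements ServerMetadataCache {
    private static volatile ServerMetadataCacheImpl instance;
    private final Map<String, Long> lastDDLTimestamps = new ConcurrentHashMap<>();

    ServerMetadataCacheImpl() {}  // package-private so the HA impl can create instances

    public static ServerMetadataCacheImpl getInstance() {
        if (instance == null) {
            synchronized (ServerMetadataCacheImpl.class) {
                if (instance == null) {
                    instance = new ServerMetadataCacheImpl();
                }
            }
        }
        return instance;
    }

    @Override
    public Long getLastDDLTimestampForTable(String tableName) {
        return lastDDLTimestamps.get(tableName);
    }

    public void setLastDDLTimestampForTable(String tableName, long ts) {
        lastDDLTimestamps.put(tableName, ts);
    }
}

// Step 2: the HA variant keeps one cache per cluster, keyed on something
// derived from that cluster's config (a plain String key here), so the
// coprocessors of the two clusters no longer share state.
class ServerMetadataCacheHAImpl {
    private static final Map<String, ServerMetadataCache> INSTANCES =
            new ConcurrentHashMap<>();

    static ServerMetadataCache getInstance(String clusterConfigKey) {
        return INSTANCES.computeIfAbsent(clusterConfigKey,
                k -> new ServerMetadataCacheImpl());
    }
}

// Steps 3-4 (not shown): a PhoenixRegionServerEndpoint subclass would call
// ServerMetadataCacheHAImpl.getInstance(...) instead of the singleton, and
// HA tests would load that subclass on the region servers.
```

The map-of-instances approach keeps the singleton contract intact for existing tests while letting each mini-cluster in an HA test resolve its own cache.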



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
