[ https://issues.apache.org/jira/browse/TRAFODION-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127815#comment-16127815 ]
ASF GitHub Bot commented on TRAFODION-2617:
-------------------------------------------
Github user selvaganesang commented on a diff in the pull request:
https://github.com/apache/incubator-trafodion/pull/1206#discussion_r133294368
--- Diff: core/sql/src/main/java/org/trafodion/sql/HBaseClient.java ---
@@ -1395,6 +1422,169 @@ private boolean estimateRowCountBody(String tblName, int partialRowSize,
return true;
}
+ // Similar to estimateRowCount, except that the implementation
+ // uses a coprocessor. This is necessary when HBase encryption is
+ // in use, because the Trafodion ID does not have the proper
+ // authorization to the KeyStore file used by HBase.
+ public boolean estimateRowCountViaCoprocessor(String tblName, int partialRowSize,
+ int numCols, int retryLimitMilliSeconds, long[] rc)
+ throws ServiceException, IOException {
+ if (logger.isDebugEnabled()) {
+ logger.debug("HBaseClient.estimateRowCountViaCoprocessor(" + tblName + ") called.");
+ logger.debug("numCols = " + numCols + ", partialRowSize = " + partialRowSize);
+ }
+
+ boolean retcode = true;
+ rc[0] = 0;
+
+ HConnection connection = null;
+ HTableInterface table = null;
+ connection = HConnectionManager.createConnection(config);
+ table = connection.getTable(tblName);
+
+ int putKVsSampled = 0;
+ int nonPutKVsSampled = 0;
+ int missingKVsCount = 0;
+ long totalEntries = 0; // KeyValues in all HFiles for table
+ long totalSizeBytes = 0; // Size of all HFiles for table
+
+ final int finalNumCols = numCols;
+
+ Batch.Call<TrxRegionService, TrafEstimateRowCountResponse> callable =
+ new Batch.Call<TrxRegionService, TrafEstimateRowCountResponse>() {
+ ServerRpcController controller = new ServerRpcController();
+ BlockingRpcCallback<TrafEstimateRowCountResponse> rpcCallback =
+ new BlockingRpcCallback<TrafEstimateRowCountResponse>();
+
+ @Override
+ public TrafEstimateRowCountResponse call(TrxRegionService instance) throws IOException {
+ if (logger.isDebugEnabled()) logger.debug("call method for TrxRegionService was called");
+
+ // one of these God-awful long type identifiers common in Java/Maven environments...
+ org.apache.hadoop.hbase.coprocessor.transactional.generated.TrxRegionProtos.TrafEstimateRowCountRequest.Builder
+ builder = TrafEstimateRowCountRequest.newBuilder();
+ builder.setNumCols(finalNumCols);
+
+ instance.trafEstimateRowCount(controller, builder.build(), rpcCallback);
+ TrafEstimateRowCountResponse response = rpcCallback.get();
+ if (logger.isDebugEnabled()) {
+ if (response == null)
+ logger.debug("response was null");
+ else
+ logger.debug("response was non-null");
+ if (controller.failed())
+ logger.debug("controller.failed() is true");
+ else
+ logger.debug("controller.failed() is false");
+ if (controller.errorText() != null)
+ logger.debug("controller.errorText() is " + controller.errorText());
+ else
+ logger.debug("controller.errorText() is null");
+ IOException ioe = controller.getFailedOn();
+ if (ioe != null)
+ logger.debug("controller.getFailedOn() returned " + ioe.getMessage());
+ else
+ logger.debug("controller.getFailedOn() returned null");
+ }
+ return response;
+ }
+ };
+
+ Map<byte[], TrafEstimateRowCountResponse> result = null;
+ try {
+ result = table.coprocessorService(TrxRegionService.class, null, null, callable);
+ } catch (Throwable e) {
+ throw new IOException("Exception from coprocessorService caught in estimateRowCountViaCoprocessor", e);
--- End diff ---
It is good to throw an exception, but I think the earlier estimate returned some default values if there was any issue with getting estimates. It would be better to retain the same semantics to avoid surprises.
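To illustrate the suggestion, here is a minimal, self-contained sketch of the fallback semantics being proposed (the class and method names below are hypothetical, not from the patch): on any coprocessor failure, keep a default estimate in `rc[0]` and signal the caller through the return value rather than propagating the exception, matching the earlier `estimateRowCount` behavior.

```java
// Hypothetical sketch of the reviewer's suggestion: on coprocessor failure,
// fall back to a default estimate instead of throwing, so callers see the
// same semantics as the earlier estimateRowCount path.
public class RowCountFallbackSketch {

    // Assumed default; the patch initializes rc[0] to 0 before the call.
    static final long DEFAULT_ROW_COUNT = 0L;

    // Stand-in for the coprocessor invocation; here it always fails,
    // simulating e.g. an exception thrown by coprocessorService.
    static long callCoprocessor() throws Exception {
        throw new Exception("coprocessor unavailable");
    }

    // Fills rc[0] with the estimate. On failure, leaves the default in
    // place and returns false so the caller can tell the estimate is a
    // fallback rather than having to handle a surprise exception.
    static boolean estimateRowCount(long[] rc) {
        rc[0] = DEFAULT_ROW_COUNT;
        try {
            rc[0] = callCoprocessor();
            return true;
        } catch (Exception e) {
            return false; // retain the old semantics: no exception escapes
        }
    }

    public static void main(String[] args) {
        long[] rc = new long[1];
        boolean ok = estimateRowCount(rc);
        System.out.println(ok + " " + rc[0]); // prints "false 0"
    }
}
```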
> Error 9252 during update statistics of an encrypted Trafodion table
> -------------------------------------------------------------------
>
> Key: TRAFODION-2617
> URL: https://issues.apache.org/jira/browse/TRAFODION-2617
> Project: Apache Trafodion
> Issue Type: Bug
> Components: sql-cmp
> Affects Versions: 2.1-incubating
> Environment: Any, HBase encryption is enabled for the table.
> Reporter: Hans Zeller
> Assignee: David Wayne Birdsall
>
> Anu tried an update statistics command for a table that is using HBase
> encryption. That failed with the following stack trace, as printed:
> >>update statistics for table t on every column sample;
> ..
> *** ERROR[9252] Unable to get row count estimate: Error code 68, detail 4.
> Exception info (if any):
> Instead of showing the exception info printed to stdout, I'm showing the
> contents of the ulog file:
> UPDATE STATISTICS
> =====================================================================
> [Wed 17 May 2017 10:38:30 PM UTC] update statistics for table t on every
> column sample;
> [Wed 17 May 2017 10:38:30 PM UTC] :BEGIN UpdateStats()
> [Wed 17 May 2017 10:38:30 PM UTC] :| BEGIN Setup CQDs prior to parsing
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT QUERY_CACHE '0'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT CACHE_HISTOGRAMS 'OFF'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT USTAT_MODIFY_DEFAULT_UEC '0.05'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT OUTPUT_DATE_FORMAT 'ANSI'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT HIST_MISSING_STATS_WARNING_LEVEL '0'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT USTAT_AUTOMATION_INTERVAL '0'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT MV_ALLOW_SELECT_SYSTEM_ADDED_COLUMNS 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT HIST_ON_DEMAND_STATS_SIZE '0'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT ISOLATION_LEVEL 'READ COMMITTED'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT ALLOW_DML_ON_NONAUDITED_TABLE 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT MV_ALLOW_SELECT_SYSTEM_ADDED_COLUMNS 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT 'OFF'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT CAT_ERROR_ON_NOTNULL_STOREBY 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT WMS_CHILD_QUERY_MONITORING 'OFF'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT WMS_QUERY_MONITORING 'OFF'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT TRAF_TINYINT_RETURN_VALUES 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT TRAF_BOOLEAN_IO 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT TRAF_LARGEINT_UNSIGNED_IO 'ON'
> [Wed 17 May 2017 10:38:31 PM UTC] CONTROL QUERY DEFAULT TRAF_ALLOW_RESERVED_COLNAMES 'ON'
> [Wed 17 May 2017 10:38:31 PM UTC] CONTROL QUERY DEFAULT TRAF_BLOB_AS_VARCHAR 'OFF'
> [Wed 17 May 2017 10:38:31 PM UTC] CONTROL QUERY DEFAULT TRAF_CLOB_AS_VARCHAR 'OFF'
> [Wed 17 May 2017 10:38:31 PM UTC] :| END Setup CQDs prior to parsing elapsed time (00:00:00.420)
> [Wed 17 May 2017 10:38:31 PM UTC] :| BEGIN Parse statement
> [Wed 17 May 2017 10:38:31 PM UTC] call HSHbaseTableDef::objExists
> [Wed 17 May 2017 10:38:31 PM UTC] naTbl_->objectUid() is 6001738912217799228
> [Wed 17 May 2017 10:38:31 PM UTC] CONTROL QUERY DEFAULT DISPLAY_DIVISION_BY_COLUMNS RESET
> [Wed 17 May 2017 10:38:31 PM UTC] CHECK SCHEMA VERSION FOR TABLE: XXXXXXXXXXXX
> [Wed 17 May 2017 10:38:31 PM UTC] UpdateStats: TABLE: XXXXXXXXXXXX; SCHEMA VERSION: 2600; AUTOMATION INTERVAL: 0
> [Wed 17 May 2017 10:38:31 PM UTC] KEY:
> (_SALT_,PATH_ID,NAME_ID)
> [Wed 17 May 2017 10:38:31 PM UTC] GroupExists: argument: colSet
> [Wed 17 May 2017 10:38:31 PM UTC] colSet[0]: :_SALT_: 12
> [Wed 17 May 2017 10:38:31 PM UTC] colSet[1]: :PATH_ID: 0
> [Wed 17 May 2017 10:38:31 PM UTC] colSet[2]: :NAME_ID: 1
> [Wed 17 May 2017 10:38:31 PM UTC] KEY: (_SALT_,PATH_ID)
> [Wed 17 May 2017 10:38:31 PM UTC] GroupExists: argument: colSet
> [Wed 17 May 2017 10:38:31 PM UTC] colSet[0]: :_SALT_: 12
> [Wed 17 May 2017 10:38:31 PM UTC] colSet[1]: :PATH_ID: 0
> [Wed 17 May 2017 10:38:31 PM UTC] GroupExists: mgroup->colSet
> [Wed 17 May 2017 10:38:31 PM UTC] colSet[0]: :_SALT_: 12
> [Wed 17 May 2017 10:38:31 PM UTC] colSet[1]: :PATH_ID: 0
> [Wed 17 May 2017 10:38:31 PM UTC] colSet[2]: :NAME_ID: 1
> [Wed 17 May 2017 10:38:31 PM UTC] :| END Parse statement elapsed time (00:00:00.930)
> [Wed 17 May 2017 10:38:31 PM UTC] USTAT_CQDS_ALLOWED_FOR_SPAWNED_COMPILERS size of (0) is not acceptable
> [Wed 17 May 2017 10:38:31 PM UTC] :| BEGIN Initialize environment
> [Wed 17 May 2017 10:38:31 PM UTC] Creating histogram tables for schema TRAFODION.XXXXXXX on demand.
> [Wed 17 May 2017 10:38:31 PM UTC] :| | BEGIN Create histogram tables
> [Wed 17 May 2017 10:38:31 PM UTC] BEGIN WORK
> [Wed 17 May 2017 10:38:32 PM UTC] BEGINWORK(Create histogram tables.)
> [Wed 17 May 2017 10:38:32 PM UTC] Transaction started: 2017-05-17 22:38:32.007401
> [Wed 17 May 2017 10:38:33 PM UTC] :| | END Create histogram tables elapsed time (00:00:01.090)
> [Wed 17 May 2017 10:38:33 PM UTC] COMMIT WORK
> [Wed 17 May 2017 10:38:33 PM UTC] COMMITWORK()
> [Wed 17 May 2017 10:38:33 PM UTC] Transaction committed: 2017-05-17 22:38:33.099332
> [Wed 17 May 2017 10:38:33 PM UTC] :| | BEGIN getRowCount()
> [Wed 17 May 2017 10:38:33 PM UTC] :| | END getRowCount() elapsed time (00:00:00.065)
> [Wed 17 May 2017 10:38:33 PM UTC] currentRowCountIsEstimate_=1 from getRowCount()
> [Wed 17 May 2017 10:38:33 PM UTC] errorCode=68, breadCrumb=4
> [Wed 17 May 2017 10:38:33 PM UTC] JNI exception info:
> [Wed 17 May 2017 10:38:33 PM UTC] org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file hdfs://ip-172-31-65-71.ec2.internal:8020/apps/hbase/data/data/default/XXXXXXXXXX/00c6a0e9c39b98bd04f188647bd50253/#1/033b3f07b7c84725b5bc9e7aaf75eb54
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:481)
> org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:524)
> org.trafodion.sql.HBaseClient.estimateRowCountBody(HBaseClient.java:1302)
> org.trafodion.sql.HBaseClient.estimateRowCount(HBaseClient.java:1207)
> Caused by: java.lang.RuntimeException: java.lang.RuntimeException: java.io.FileNotFoundException: /etc/hbase/conf/hbase.jks (Permission denied)
> org.apache.hadoop.hbase.io.crypto.Encryption.getKeyProvider(Encryption.java:560)
> org.apache.hadoop.hbase.io.crypto.Encryption.getSecretKeyForSubject(Encryption.java:427)
> org.apache.hadoop.hbase.io.crypto.Encryption.decryptWithSubjectKey(Encryption.java:474)
> org.apache.hadoop.hbase.security.EncryptionUtil.getUnwrapKey(EncryptionUtil.java:129)
> org.apache.hadoop.hbase.security.EncryptionUtil.unwrapKey(EncryptionUtil.java:122)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.createHFileContext(HFileReaderV3.java:107)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:130)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:77)
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:471)
> org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:524)
> org.trafodion.sql.HBaseClient.estimateRowCountBody(HBaseClient.java:1302)
> org.trafodion.sql.HBaseClient.estimateRowCount(HBaseClient.java:1207)
> Caused by: java.lang.RuntimeException: java.io.FileNotFoundException: /etc/hbase/conf/hbase.jks (Permission denied)
> org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider.init(KeyStoreKeyProvider.java:153)
> org.apache.hadoop.hbase.io.crypto.Encryption.getKeyProvider(Encryption.java:553)
> org.apache.hadoop.hbase.io.crypto.Encryption.getSecretKeyForSubject(Encryption.java:427)
> org.apache.hadoop.hbase.io.crypto.Encryption.decryptWithSubjectKey(Encryption.java:474)
> org.apache.hadoop.hbase.security.EncryptionUtil.getUnwrapKey(EncryptionUtil.java:129)
> org.apache.hadoop.hbase.security.EncryptionUtil.unwrapKey(EncryptionUtil.java:122)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.createHFileContext(HFileReaderV3.java:107)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:130)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:77)
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:471)
> org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:524)
> org.trafodion.sql.HBaseClient.estimateRowCountBody(HBaseClient.java:1302)
> org.trafodion.sql.HBaseClient.estimateRowCount(HBaseClient.java:1207)
> Caused by: java.io.FileNotFoundException: /etc/hbase/conf/hbase.jks (Permission denied)
> java.io.FileInputStream.open0(Native Method)
> java.io.FileInputStream.open(FileInputStream.java:195)
> java.io.FileInputStream.<init>(FileInputStream.java:138)
> org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider.load(KeyStoreKeyProvider.java:124)
> org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider.init(KeyStoreKeyProvider.java:147)
> org.apache.hadoop.hbase.io.crypto.Encryption.getKeyProvider(Encryption.java:553)
> org.apache.hadoop.hbase.io.crypto.Encryption.getSecretKeyForSubject(Encryption.java:427)
> org.apache.hadoop.hbase.io.crypto.Encryption.decryptWithSubjectKey(Encryption.java:474)
> org.apache.hadoop.hbase.security.EncryptionUtil.getUnwrapKey(EncryptionUtil.java:129)
> org.apache.hadoop.hbase.security.EncryptionUtil.unwrapKey(EncryptionUtil.java:122)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.createHFileContext(HFileReaderV3.java:107)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:130)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:77)
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:471)
> org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:524)
> org.trafodion.sql.HBaseClient.estimateRowCountBody(HBaseClient.java:1302)
> org.trafodion.sql.HBaseClient.estimateRowCount(HBaseClient.java:1207)
> [Wed 17 May 2017 10:38:33 PM UTC] :| END Initialize environment elapsed time (00:00:01.169)
> [Wed 17 May 2017 10:38:33 PM UTC] *** ERROR[-1] in hs_update:445
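The root cause in the trace above is that /etc/hbase/conf/hbase.jks (the HBase keystore) is not readable by the ID running the Trafodion client, which is exactly why the patch routes the estimate through a coprocessor running in the region server. A quick diagnostic sketch (the class name and the `describe` helper below are ours; only the default path comes from the stack trace):

```java
import java.io.File;

// Diagnostic sketch: reports whether a keystore path exists and is
// readable by the current OS user. Run as the Trafodion ID to reproduce
// the "(Permission denied)" condition from the stack trace above.
public class KeystoreAccessCheck {

    // Helper (hypothetical name) summarizing access to a file.
    static String describe(File f) {
        return "exists=" + f.exists() + " readable=" + f.canRead();
    }

    public static void main(String[] args) {
        // Default path taken from the stack trace; pass another path to override.
        String path = args.length > 0 ? args[0] : "/etc/hbase/conf/hbase.jks";
        System.out.println(path + ": " + describe(new File(path)));
    }
}
```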
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)