[ https://issues.apache.org/jira/browse/TRAFODION-2617?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16127926#comment-16127926 ]

ASF GitHub Bot commented on TRAFODION-2617:
-------------------------------------------

Github user DaveBirdsall commented on a diff in the pull request:

    
https://github.com/apache/incubator-trafodion/pull/1206#discussion_r133313356
  
    --- Diff: core/sql/sqlcomp/nadefaults.cpp ---
    @@ -1759,6 +1763,7 @@ SDDkwd__(EXE_DIAGNOSTIC_EVENTS,               "OFF"),
      // exposure.
      DDkwd__(HBASE_DELETE_COSTING,                          "ON"),
      DDflt0_(HBASE_DOP_PARALLEL_SCANNER,             "0."),
    + DDkwd__(HBASE_ESTIMATE_ROW_COUNT_VIA_COPROCESSOR,   "OFF"),
    --- End diff ---
    
    No. In my tests, there was a small performance penalty for using the coprocessor. We must have the coprocessor if HBase encryption is being used, but otherwise we can just use the client code.

    Your next question might be: why is there a performance penalty? I looked into that a little bit. I think it is because we do dynamic coprocessor loading. It may be that the penalty is not real: this coprocessor is packaged with the DTM coprocessors. I think they all get loaded together. Assuming this is true, we are simply paying this penalty earlier (at estimate row count time) rather than at transaction start time.
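
    As a minimal sketch (assuming the new CQD takes the same CONTROL QUERY DEFAULT keyword syntax as the other settings shown in the ulog below), a session working against an encrypted table could switch row count estimation to the coprocessor path before gathering statistics:

        CONTROL QUERY DEFAULT HBASE_ESTIMATE_ROW_COUNT_VIA_COPROCESSOR 'ON';
        update statistics for table t on every column sample;

    With the default left 'OFF', the client-side code path is used and the small loading penalty described above is avoided.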


> Error 9252 during update statistics of an encrypted Trafodion table
> -------------------------------------------------------------------
>
>                 Key: TRAFODION-2617
>                 URL: https://issues.apache.org/jira/browse/TRAFODION-2617
>             Project: Apache Trafodion
>          Issue Type: Bug
>          Components: sql-cmp
>    Affects Versions: 2.1-incubating
>         Environment: Any, HBase encryption is enabled for the table.
>            Reporter: Hans Zeller
>            Assignee: David Wayne Birdsall
>
> Anu tried an update statistics command for a table that is using HBase 
> encryption. It failed with the following error, as printed to stdout: 
> >>update statistics for table t on every column sample;
> ..
> *** ERROR[9252] Unable to get row count estimate: Error code 68, detail 4. 
> Exception info (if any): 
> Instead of showing the exception info printed to stdout, I'm showing the 
> contents of the ulog file:
> UPDATE STATISTICS
> =====================================================================
> [Wed 17 May 2017 10:38:30 PM UTC] update statistics for table t on every 
> column sample;
> [Wed 17 May 2017 10:38:30 PM UTC] :BEGIN UpdateStats()
> [Wed 17 May 2017 10:38:30 PM UTC] :|  BEGIN Setup CQDs prior to parsing
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT QUERY_CACHE '0'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT CACHE_HISTOGRAMS 'OFF'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> USTAT_MODIFY_DEFAULT_UEC '0.05'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT OUTPUT_DATE_FORMAT 
> 'ANSI'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> HIST_MISSING_STATS_WARNING_LEVEL '0'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> USTAT_AUTOMATION_INTERVAL '0'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> MV_ALLOW_SELECT_SYSTEM_ADDED_COLUMNS 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> HIST_ON_DEMAND_STATS_SIZE '0'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT ISOLATION_LEVEL 'READ 
> COMMITTED'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> ALLOW_DML_ON_NONAUDITED_TABLE 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> MV_ALLOW_SELECT_SYSTEM_ADDED_COLUMNS 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> ALLOW_NULLABLE_UNIQUE_KEY_CONSTRAINT 'OFF'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> CAT_ERROR_ON_NOTNULL_STOREBY 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> WMS_CHILD_QUERY_MONITORING 'OFF'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT WMS_QUERY_MONITORING 
> 'OFF'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> TRAF_TINYINT_RETURN_VALUES 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT TRAF_BOOLEAN_IO 'ON'
> [Wed 17 May 2017 10:38:30 PM UTC] CONTROL QUERY DEFAULT 
> TRAF_LARGEINT_UNSIGNED_IO 'ON'
> [Wed 17 May 2017 10:38:31 PM UTC] CONTROL QUERY DEFAULT 
> TRAF_ALLOW_RESERVED_COLNAMES 'ON'
> [Wed 17 May 2017 10:38:31 PM UTC] CONTROL QUERY DEFAULT TRAF_BLOB_AS_VARCHAR 
> 'OFF'
> [Wed 17 May 2017 10:38:31 PM UTC] CONTROL QUERY DEFAULT TRAF_CLOB_AS_VARCHAR 
> 'OFF'
> [Wed 17 May 2017 10:38:31 PM UTC] :|  END   Setup CQDs prior to parsing 
> elapsed time (00:00:00.420)
> [Wed 17 May 2017 10:38:31 PM UTC] :|  BEGIN Parse statement
> [Wed 17 May 2017 10:38:31 PM UTC] call HSHbaseTableDef::objExists
> [Wed 17 May 2017 10:38:31 PM UTC] naTbl_->objectUid() is 6001738912217799228
> [Wed 17 May 2017 10:38:31 PM UTC] CONTROL QUERY DEFAULT 
> DISPLAY_DIVISION_BY_COLUMNS RESET
> [Wed 17 May 2017 10:38:31 PM UTC] 
> CHECK SCHEMA VERSION FOR TABLE: XXXXXXXXXXXX
> [Wed 17 May 2017 10:38:31 PM UTC] 
> UpdateStats: TABLE: XXXXXXXXXXXX; SCHEMA VERSION: 2600; AUTOMATION INTERVAL: 0
> [Wed 17 May 2017 10:38:31 PM UTC]             KEY:            
> (_SALT_,PATH_ID,NAME_ID)
> [Wed 17 May 2017 10:38:31 PM UTC] GroupExists: argument: colSet
> [Wed 17 May 2017 10:38:31 PM UTC]             colSet[0]: :_SALT_: 12
> [Wed 17 May 2017 10:38:31 PM UTC]             colSet[1]: :PATH_ID: 0
> [Wed 17 May 2017 10:38:31 PM UTC]             colSet[2]: :NAME_ID: 1
> [Wed 17 May 2017 10:38:31 PM UTC]             KEY:            (_SALT_,PATH_ID)
> [Wed 17 May 2017 10:38:31 PM UTC] GroupExists: argument: colSet
> [Wed 17 May 2017 10:38:31 PM UTC]             colSet[0]: :_SALT_: 12
> [Wed 17 May 2017 10:38:31 PM UTC]             colSet[1]: :PATH_ID: 0
> [Wed 17 May 2017 10:38:31 PM UTC] GroupExists: mgroup->colSet
> [Wed 17 May 2017 10:38:31 PM UTC]             colSet[0]: :_SALT_: 12
> [Wed 17 May 2017 10:38:31 PM UTC]             colSet[1]: :PATH_ID: 0
> [Wed 17 May 2017 10:38:31 PM UTC]             colSet[2]: :NAME_ID: 1
> [Wed 17 May 2017 10:38:31 PM UTC] :|  END   Parse statement elapsed time 
> (00:00:00.930)
> [Wed 17 May 2017 10:38:31 PM UTC] 
> USTAT_CQDS_ALLOWED_FOR_SPAWNED_COMPILERS size of (0) is not acceptable
> [Wed 17 May 2017 10:38:31 PM UTC] :|  BEGIN Initialize environment
> [Wed 17 May 2017 10:38:31 PM UTC] Creating histogram tables for schema 
> TRAFODION.XXXXXXX on demand.
> [Wed 17 May 2017 10:38:31 PM UTC] :|  |  BEGIN Create histogram tables
> [Wed 17 May 2017 10:38:31 PM UTC] BEGIN WORK
> [Wed 17 May 2017 10:38:32 PM UTC] BEGINWORK(Create histogram tables.)
> [Wed 17 May 2017 10:38:32 PM UTC] Transaction started: 2017-05-17 
> 22:38:32.007401
> [Wed 17 May 2017 10:38:33 PM UTC] :|  |  END   Create histogram tables 
> elapsed time (00:00:01.090)
> [Wed 17 May 2017 10:38:33 PM UTC] COMMIT WORK
> [Wed 17 May 2017 10:38:33 PM UTC] COMMITWORK()
> [Wed 17 May 2017 10:38:33 PM UTC] Transaction committed: 2017-05-17 
> 22:38:33.099332
> [Wed 17 May 2017 10:38:33 PM UTC] :|  |  BEGIN getRowCount()
> [Wed 17 May 2017 10:38:33 PM UTC] :|  |  END   getRowCount() elapsed time 
> (00:00:00.065)
> [Wed 17 May 2017 10:38:33 PM UTC]     currentRowCountIsEstimate_=1 from 
> getRowCount()
> [Wed 17 May 2017 10:38:33 PM UTC]     errorCode=68, breadCrumb=4
> [Wed 17 May 2017 10:38:33 PM UTC]     JNI exception info:
> [Wed 17 May 2017 10:38:33 PM UTC] 
> org.apache.hadoop.hbase.io.hfile.CorruptHFileException: Problem reading HFile Trailer from file hdfs://ip-172-31-65-71.ec2.internal:8020/apps/hbase/data/data/default/XXXXXXXXXX/00c6a0e9c39b98bd04f188647bd50253/#1/033b3f07b7c84725b5bc9e7aaf75eb54
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:481)
> org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:524)
> org.trafodion.sql.HBaseClient.estimateRowCountBody(HBaseClient.java:1302)
> org.trafodion.sql.HBaseClient.estimateRowCount(HBaseClient.java:1207)
> Caused by java.lang.RuntimeException: java.lang.RuntimeException: java.io.FileNotFoundException: /etc/hbase/conf/hbase.jks (Permission denied)
> org.apache.hadoop.hbase.io.crypto.Encryption.getKeyProvider(Encryption.java:560)
> org.apache.hadoop.hbase.io.crypto.Encryption.getSecretKeyForSubject(Encryption.java:427)
> org.apache.hadoop.hbase.io.crypto.Encryption.decryptWithSubjectKey(Encryption.java:474)
> org.apache.hadoop.hbase.security.EncryptionUtil.getUnwrapKey(EncryptionUtil.java:129)
> org.apache.hadoop.hbase.security.EncryptionUtil.unwrapKey(EncryptionUtil.java:122)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.createHFileContext(HFileReaderV3.java:107)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:130)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:77)
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:471)
> org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:524)
> org.trafodion.sql.HBaseClient.estimateRowCountBody(HBaseClient.java:1302)
> org.trafodion.sql.HBaseClient.estimateRowCount(HBaseClient.java:1207)
> Caused by java.lang.RuntimeException: java.io.FileNotFoundException: /etc/hbase/conf/hbase.jks (Permission denied)
> org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider.init(KeyStoreKeyProvider.java:153)
> org.apache.hadoop.hbase.io.crypto.Encryption.getKeyProvider(Encryption.java:553)
> org.apache.hadoop.hbase.io.crypto.Encryption.getSecretKeyForSubject(Encryption.java:427)
> org.apache.hadoop.hbase.io.crypto.Encryption.decryptWithSubjectKey(Encryption.java:474)
> org.apache.hadoop.hbase.security.EncryptionUtil.getUnwrapKey(EncryptionUtil.java:129)
> org.apache.hadoop.hbase.security.EncryptionUtil.unwrapKey(EncryptionUtil.java:122)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.createHFileContext(HFileReaderV3.java:107)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:130)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:77)
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:471)
> org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:524)
> org.trafodion.sql.HBaseClient.estimateRowCountBody(HBaseClient.java:1302)
> org.trafodion.sql.HBaseClient.estimateRowCount(HBaseClient.java:1207)
> Caused by java.io.FileNotFoundException: /etc/hbase/conf/hbase.jks (Permission denied)
> java.io.FileInputStream.open0(Native Method)
> java.io.FileInputStream.open(FileInputStream.java:195)
> java.io.FileInputStream.<init>(FileInputStream.java:138)
> org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider.load(KeyStoreKeyProvider.java:124)
> org.apache.hadoop.hbase.io.crypto.KeyStoreKeyProvider.init(KeyStoreKeyProvider.java:147)
> org.apache.hadoop.hbase.io.crypto.Encryption.getKeyProvider(Encryption.java:553)
> org.apache.hadoop.hbase.io.crypto.Encryption.getSecretKeyForSubject(Encryption.java:427)
> org.apache.hadoop.hbase.io.crypto.Encryption.decryptWithSubjectKey(Encryption.java:474)
> org.apache.hadoop.hbase.security.EncryptionUtil.getUnwrapKey(EncryptionUtil.java:129)
> org.apache.hadoop.hbase.security.EncryptionUtil.unwrapKey(EncryptionUtil.java:122)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.createHFileContext(HFileReaderV3.java:107)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV2.<init>(HFileReaderV2.java:130)
> org.apache.hadoop.hbase.io.hfile.HFileReaderV3.<init>(HFileReaderV3.java:77)
> org.apache.hadoop.hbase.io.hfile.HFile.pickReaderVersion(HFile.java:471)
> org.apache.hadoop.hbase.io.hfile.HFile.createReader(HFile.java:524)
> org.trafodion.sql.HBaseClient.estimateRowCountBody(HBaseClient.java:1302)
> org.trafodion.sql.HBaseClient.estimateRowCount(HBaseClient.java:1207)
> [Wed 17 May 2017 10:38:33 PM UTC] :|  END   Initialize environment elapsed 
> time (00:00:01.169)
> [Wed 17 May 2017 10:38:33 PM UTC] *** ERROR[-1] in hs_update:445


