[jira] [Work logged] (HIVE-25652) Add constraints in result of “SHOW CREATE TABLE ”

2021-11-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25652?focusedWorklogId=672973&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-672973
 ]

ASF GitHub Bot logged work on HIVE-25652:
-

Author: ASF GitHub Bot
Created on: 02/Nov/21 05:42
Start Date: 02/Nov/21 05:42
Worklog Time Spent: 10m 
  Work Description: kasakrisz commented on a change in pull request #2752:
URL: https://github.com/apache/hive/pull/2752#discussion_r740720412



##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/DDLPlanUtils.java
##
@@ -800,19 +809,166 @@ private String getExternal(Table table) {
     return table.getTableType() == TableType.EXTERNAL_TABLE ? "EXTERNAL " : "";
   }
 
-  private String getColumns(Table table) {
-    List<String> columnDescs = new ArrayList<String>();
+  private String getColumns(Table table) throws HiveException {
+    List<String> columnDescs = new ArrayList<>();
+    List<String> columns = table.getCols().stream().map(FieldSchema::getName).collect(Collectors.toList());
+    Set<String> notNullColumns = Collections.emptySet();
+    if (NotNullConstraint.isNotEmpty(table.getNotNullConstraint())) {
+      notNullColumns = new HashSet<>(table.getNotNullConstraint().getNotNullConstraints().values());
+    }
+
+    Map<String, String> columnDefaultValueMap = Collections.emptyMap();
+    if (DefaultConstraint.isNotEmpty(table.getDefaultConstraint())) {
+      columnDefaultValueMap = table.getDefaultConstraint().getColNameToDefaultValueMap();
+    }
+
+    List<SQLCheckConstraint> sqlCheckConstraints;
+    try {
+      sqlCheckConstraints = Hive.get().getCheckConstraintList(table.getDbName(), table.getTableName());
+    } catch (NoSuchObjectException e) {
+      throw new HiveException(e);
+    }
+    Map<String, SQLCheckConstraint> columnCheckConstraintsMap = sqlCheckConstraints.stream()
+      .filter(SQLCheckConstraint::isSetColumn_name)
+      .collect(Collectors.toMap(SQLCheckConstraint::getColumn_name, Function.identity()));
+    List<SQLCheckConstraint> tableCheckConstraints = sqlCheckConstraints.stream()
+      .filter(cc -> !cc.isSetColumn_name())
+      .collect(Collectors.toList());
+
     for (FieldSchema column : table.getCols()) {
       String columnType = formatType(TypeInfoUtils.getTypeInfoFromTypeString(column.getType()));
-      String columnDesc = "  `" + column.getName() + "` " + columnType;
+      String columnName = column.getName();
+      StringBuilder columnDesc = new StringBuilder();
+      columnDesc.append("  `").append(columnName).append("` ").append(columnType);
+      if (notNullColumns.contains(columnName)) {
+        columnDesc.append(" NOT NULL");
+      }
+      if (columnDefaultValueMap.containsKey(columnName)) {
+        columnDesc.append(" DEFAULT ").append(columnDefaultValueMap.get(columnName));
+      }
+      if (columnCheckConstraintsMap.containsKey(columnName)) {
+        columnDesc.append(getColumnCheckConstraintDesc(columnCheckConstraintsMap.get(columnName), columns));
+      }
       if (column.getComment() != null) {
-        columnDesc += " COMMENT '" + HiveStringUtils.escapeHiveCommand(column.getComment()) + "'";
+        columnDesc.append(" COMMENT '").append(HiveStringUtils.escapeHiveCommand(column.getComment())).append("'");
       }
-      columnDescs.add(columnDesc);
+      columnDescs.add(columnDesc.toString());
     }
+    String pkDesc = getPrimaryKeyDesc(table);
+    if (pkDesc != null) {
+      columnDescs.add(pkDesc);
+    }
+    columnDescs.addAll(getForeignKeyDesc(table));
+    columnDescs.addAll(getTableCheckConstraintDesc(tableCheckConstraints, columns));
     return StringUtils.join(columnDescs, ", \n");
   }
 
+  private List<String> getTableCheckConstraintDesc(List<SQLCheckConstraint> tableCheckConstraints,
+                                                   List<String> columns) {
+    List<String> ccDescs = new ArrayList<>();
+    for (SQLCheckConstraint constraint : tableCheckConstraints) {
+      String enable = constraint.isEnable_cstr() ? " enable" : " disable";
+      String validate = constraint.isValidate_cstr() ? " validate" : " novalidate";
+      String rely = constraint.isRely_cstr() ? " rely" : " norely";
+      String expression = getCheckExpressionWithBackticks(columns, constraint);
+      ccDescs.add("  constraint " + constraint.getDc_name() + " CHECK(" + expression +
+        ")" + enable + validate + rely);
+    }
+    return ccDescs;
+  }
+
+  private String getCheckExpressionWithBackticks(List<String> columns, SQLCheckConstraint constraint) {
+    TreeMap<Integer, String> indexToCols = new TreeMap<>();
+    String expression = constraint.getCheck_expression();
+    for (String col : columns) {
+      int idx = expression.indexOf(col);
+      if (idx == -1) {
+        continue;
+      }
+      indexToCols.put(idx, col);
+      while (idx + col.length() < expression.length()) {
+        idx = expression.indexOf(col, idx + col.length());
+        if (idx == -1) {
+          break;
+        }
+        indexToCols.put(idx, col);
+      }
+    }

Review comment:
   Is 
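The occurrence-scanning logic in `getCheckExpressionWithBackticks` above can be sketched standalone. The class and method below are hypothetical (not part of Hive); they record every index where a column name appears in the check expression, then rebuild the expression with each occurrence wrapped in backticks:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

public class BacktickSketch {
    // Hypothetical standalone version of the patch's scan: collect the start
    // index of every occurrence of every column name, ordered by position.
    public static String quoteColumns(String expression, List<String> columns) {
        TreeMap<Integer, String> indexToCols = new TreeMap<>();
        for (String col : columns) {
            int idx = expression.indexOf(col);
            while (idx != -1) {
                indexToCols.put(idx, col);
                idx = expression.indexOf(col, idx + col.length());
            }
        }
        // Rebuild the expression, wrapping each recorded occurrence in backticks.
        StringBuilder sb = new StringBuilder();
        int last = 0;
        for (Map.Entry<Integer, String> e : indexToCols.entrySet()) {
            sb.append(expression, last, e.getKey())
              .append('`').append(e.getValue()).append('`');
            last = e.getKey() + e.getValue().length();
        }
        return sb.append(expression.substring(last)).toString();
    }

    public static void main(String[] args) {
        System.out.println(quoteColumns("col1 > 0 AND col2 < 10",
                Arrays.asList("col1", "col2")));
        // prints: `col1` > 0 AND `col2` < 10
    }
}
```

One caveat of this string-search approach: with overlapping names such as `col` and `col1`, plain `indexOf` matching also hits `col` inside `col1`, so raw substring scanning can mis-quote identifiers; an AST-based rewrite avoids that.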


[jira] [Work logged] (HIVE-18920) CBO: Initialize the Janino providers ahead of 1st query

2021-11-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-18920?focusedWorklogId=672913&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-672913
 ]

ASF GitHub Bot logged work on HIVE-18920:
-

Author: ASF GitHub Bot
Created on: 02/Nov/21 00:10
Start Date: 02/Nov/21 00:10
Worklog Time Spent: 10m 
  Work Description: github-actions[bot] closed pull request #2596:
URL: https://github.com/apache/hive/pull/2596


   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 672913)
Time Spent: 40m  (was: 0.5h)

> CBO: Initialize the Janino providers ahead of 1st query
> ---
>
> Key: HIVE-18920
> URL: https://issues.apache.org/jira/browse/HIVE-18920
> Project: Hive
>  Issue Type: Bug
>  Components: CBO
>Reporter: Gopal Vijayaraghavan
>Assignee: Jesus Camacho Rodriguez
>Priority: Major
>  Labels: pull-request-available
> Fix For: 4.0.0
>
> Attachments: HIVE-18920.01.patch, HIVE-18920.02.patch, 
> HIVE-18920.patch
>
>  Time Spent: 40m
>  Remaining Estimate: 0h
>
> Hive Calcite metadata providers are compiled when the 1st query comes in.
> If a second query arrives before the 1st one has built a metadata provider, 
> it will also try to do the same thing, because the cache is not populated yet.
> With 1024 concurrent users, it takes 6 minutes for the 1st query to finish 
> fighting all the other queries which are trying to load that cache.
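The fix direction in the title, initializing the providers ahead of the first query, is a standard warm-up pattern for exactly this lazy-initialization stampede. A minimal sketch (not Hive's actual code; the class and names are illustrative) of funneling concurrent callers onto a single expensive initialization:

```java
import java.util.concurrent.FutureTask;

public class ProviderWarmup {
    // Stand-in for the expensive Janino/Calcite metadata-provider compilation.
    // A single FutureTask guarantees the work runs at most once; concurrent
    // callers block on the same result instead of each recompiling.
    private static final FutureTask<String> PROVIDER = new FutureTask<>(() -> {
        Thread.sleep(50); // simulate compilation cost
        return "compiled-provider";
    });

    // Invoked once at service startup, before the first query arrives.
    public static void warmUp() {
        new Thread(PROVIDER, "provider-warmup").start();
    }

    // Query path: blocks until the one-time build finishes, then is cheap.
    public static String getProvider() throws Exception {
        PROVIDER.run(); // no-op if the task already ran or is running
        return PROVIDER.get();
    }
}
```

With this shape, 1024 concurrent first queries wait on one compilation rather than racing to populate the cache independently.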



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-287) support count(*) and count distinct on multiple columns

2021-11-01 Thread Ameliaemma (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-287?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436833#comment-17436833
 ] 

Ameliaemma commented on HIVE-287:
-


> support count(*) and count distinct on multiple columns
> ---
>
> Key: HIVE-287
> URL: https://issues.apache.org/jira/browse/HIVE-287
> Project: Hive
>  Issue Type: Bug
>  Components: Query Processor
>Affects Versions: 0.6.0
>Reporter: Namit Jain
>Assignee: Arvind Prabhakar
>Priority: Major
> Fix For: 0.6.0
>
> Attachments: HIVE-287-1.patch, HIVE-287-2.patch, HIVE-287-3.patch, 
> HIVE-287-4.patch, HIVE-287-5-branch-0.6.patch, HIVE-287-5-trunk.patch, 
> HIVE-287-6-branch-0.6.patch, HIVE-287-6-trunk.patch
>
>
> The following query does not work:
> select count(distinct col1, col2) from Tbl
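Semantically, a multi-column distinct count is a count over distinct tuples, ignoring rows where any listed column is NULL. A small Java sketch of the intended semantics (illustrative only, not Hive's implementation):

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class DistinctCount {
    // count(distinct col1, col2): number of distinct (col1, col2) pairs,
    // skipping rows where either value is NULL, matching SQL semantics.
    public static long countDistinct(List<String[]> rows) {
        Set<List<String>> seen = new HashSet<>();
        for (String[] row : rows) {
            if (row[0] == null || row[1] == null) {
                continue; // a NULL in any column excludes the row
            }
            seen.add(Arrays.asList(row[0], row[1]));
        }
        return seen.size();
    }
}
```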



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-24852) Add support for Snapshots during external table replication

2021-11-01 Thread Ayush Saxena (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436773#comment-17436773
 ] 

Ayush Saxena commented on HIVE-24852:
-

Hey [~ste...@apache.org], quite surprised to see you here. :)
{quote}1. Does this downgrade properly when the destination FS is not hdfs?
{quote}
This feature isn't enabled by default; you need to enable it via a config. So, if the target doesn't support snapshots and you enable the use of snapshots for external tables, the replication will fail with a non-recoverable error. The admin then has to disable the config and restart the replication, and everything works normally after that.

If I understand your question correctly, you mean: earlier the replication flow was Cluster-A -> Cluster-B, both HDFS (on-prem to on-prem replication), and later Cluster-B migrates from HDFS to some other FS which doesn't support snapshots. Does that still work?

--> In that case you can simply turn off the use of snapshots for replication, and it will fall back to the normal mode of replication. We clean up the snapshots and start doing a normal distcp. AFAIK there is no limitation on the DistCp side that a directory copied using -diff cannot be copied again using a normal distcp -update -delete.

{quote}2. has anyone discussed with the HDFS team the possibility of providing an interface in hadoop-common for this?
{quote}
At least I haven't. Maybe [~aasha] or [~anishek] can throw some light on that.

But just out of curiosity, how could we manage this through {{hadoop-common}}? Something like adding a copy method in {{FileUtils}}? We would still have to handle creation & deletion of snapshots ourselves, right? During dump, create snapshots on the source cluster, then post-copy delete & recreate the snapshots on the target cluster, and so on. Operations on the source cluster are done as part of the DUMP policy running on the {{Source}} cluster, and operations on the {{Target}} cluster as part of the LOAD policy running on the target cluster, both running independently on different clusters at different times (_in a synchronised manner_). So what could we extract to {{hadoop-common}}? If you can share some pointers, I can give it a try.

 

> Add support for Snapshots during external table replication
> ---
>
> Key: HIVE-24852
> URL: https://issues.apache.org/jira/browse/HIVE-24852
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
>  Labels: pull-request-available
> Attachments: Design Doc HDFS Snapshots for External Table 
> Replication-01.pdf, Design Doc HDFS Snapshots for External Table 
> Replication-02.pdf
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Add support for use of snapshot diff for external table replication.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Commented] (HIVE-24852) Add support for Snapshots during external table replication

2021-11-01 Thread Steve Loughran (Jira)


[ 
https://issues.apache.org/jira/browse/HIVE-24852?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17436757#comment-17436757
 ] 

Steve Loughran commented on HIVE-24852:
---

# Does this downgrade properly when the destination FS is not hdfs?
# has anyone discussed with the HDFS team the possibility of providing an 
interface in hadoop-common for this?



> Add support for Snapshots during external table replication
> ---
>
> Key: HIVE-24852
> URL: https://issues.apache.org/jira/browse/HIVE-24852
> Project: Hive
>  Issue Type: Improvement
>Reporter: Ayush Saxena
>Assignee: Ayush Saxena
>Priority: Critical
>  Labels: pull-request-available
> Attachments: Design Doc HDFS Snapshots for External Table 
> Replication-01.pdf, Design Doc HDFS Snapshots for External Table 
> Replication-02.pdf
>
>  Time Spent: 7h
>  Remaining Estimate: 0h
>
> Add support for use of snapshot diff for external table replication.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25652) Add constraints in result of “SHOW CREATE TABLE ”

2021-11-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25652?focusedWorklogId=672576&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-672576
 ]

ASF GitHub Bot logged work on HIVE-25652:
-

Author: ASF GitHub Bot
Created on: 01/Nov/21 08:24
Start Date: 01/Nov/21 08:24
Worklog Time Spent: 10m 
  Work Description: soumyakanti3578 commented on a change in pull request 
#2752:
URL: https://github.com/apache/hive/pull/2752#discussion_r740031279



##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/DDLPlanUtils.java
##
@@ -800,19 +804,59 @@ private String getExternal(Table table) {
     return table.getTableType() == TableType.EXTERNAL_TABLE ? "EXTERNAL " : "";
   }
 
-  private String getColumns(Table table) {
-    List<String> columnDescs = new ArrayList<String>();
+  private String getColumns(Table table) throws HiveException {
+    List<String> columnDescs = new ArrayList<>();
+    Set<String> notNullColumns = null;
+    if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+      notNullColumns = new HashSet<>(table.getNotNullConstraint().getNotNullConstraints().values());
+    }
+
+    Map<String, String> columnDefaultValueMap = null;
+    if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+      columnDefaultValueMap = table.getDefaultConstraint().getColNameToDefaultValueMap();
+    }
     for (FieldSchema column : table.getCols()) {
       String columnType = formatType(TypeInfoUtils.getTypeInfoFromTypeString(column.getType()));
-      String columnDesc = "  `" + column.getName() + "` " + columnType;
+      String columnName = column.getName();
+      StringBuilder columnDesc = new StringBuilder();
+      columnDesc.append("  `").append(columnName).append("` ").append(columnType);

Review comment:
   I just pushed a change that adds the backticks to the columns! It seems to be working fine, but please do let me know if we need to parse it to an AST instead! :)




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 672576)
Time Spent: 1h  (was: 50m)

> Add constraints in result of “SHOW CREATE TABLE ”
> -
>
> Key: HIVE-25652
> URL: https://issues.apache.org/jira/browse/HIVE-25652
> Project: Hive
>  Issue Type: Improvement
>Reporter: Soumyakanti Das
>Assignee: Soumyakanti Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 1h
>  Remaining Estimate: 0h
>
> Currently show create table doesn’t pull any constraint info like not null, 
> defaults, primary key.
> Example:
> Create table
>  
> {code:java}
> CREATE TABLE TEST(
>   col1 varchar(100) NOT NULL COMMENT "comment for column 1",
>   col2 timestamp DEFAULT CURRENT_TIMESTAMP() COMMENT "comment for column 2",
>   col3 decimal,
>   col4 varchar(512) NOT NULL,
>   col5 varchar(100),
>   primary key(col1, col2) disable novalidate)
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
> {code}
> Currently {{SHOW CREATE TABLE TEST}} doesn't show the column constraints.
> {code:java}
> CREATE TABLE `test`(
>   `col1` varchar(100) COMMENT 'comment for column 1', 
>   `col2` timestamp COMMENT 'comment for column 2', 
>   `col3` decimal(10,0), 
>   `col4` varchar(512), 
>   `col5` varchar(100))
> ROW FORMAT SERDE 
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde' 
> STORED AS INPUTFORMAT 
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat' 
> OUTPUTFORMAT 
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
> {code}



--
This message was sent by Atlassian Jira
(v8.3.4#803005)


[jira] [Work logged] (HIVE-25652) Add constraints in result of “SHOW CREATE TABLE ”

2021-11-01 Thread ASF GitHub Bot (Jira)


 [ 
https://issues.apache.org/jira/browse/HIVE-25652?focusedWorklogId=672562&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-672562
 ]

ASF GitHub Bot logged work on HIVE-25652:
-

Author: ASF GitHub Bot
Created on: 01/Nov/21 07:16
Start Date: 01/Nov/21 07:16
Worklog Time Spent: 10m 
  Work Description: kasakrisz commented on a change in pull request #2752:
URL: https://github.com/apache/hive/pull/2752#discussion_r73468



##
File path: ql/src/java/org/apache/hadoop/hive/ql/exec/DDLPlanUtils.java
##
@@ -800,19 +804,59 @@ private String getExternal(Table table) {
     return table.getTableType() == TableType.EXTERNAL_TABLE ? "EXTERNAL " : "";
   }
 
-  private String getColumns(Table table) {
-    List<String> columnDescs = new ArrayList<String>();
+  private String getColumns(Table table) throws HiveException {
+    List<String> columnDescs = new ArrayList<>();
+    Set<String> notNullColumns = null;
+    if (NotNullConstraint.isNotNullConstraintNotEmpty(table.getNotNullConstraint())) {
+      notNullColumns = new HashSet<>(table.getNotNullConstraint().getNotNullConstraints().values());
+    }
+
+    Map<String, String> columnDefaultValueMap = null;
+    if (DefaultConstraint.isCheckConstraintNotEmpty(table.getDefaultConstraint())) {
+      columnDefaultValueMap = table.getDefaultConstraint().getColNameToDefaultValueMap();
+    }
     for (FieldSchema column : table.getCols()) {
       String columnType = formatType(TypeInfoUtils.getTypeInfoFromTypeString(column.getType()));
-      String columnDesc = "  `" + column.getName() + "` " + columnType;
+      String columnName = column.getName();
+      StringBuilder columnDesc = new StringBuilder();
+      columnDesc.append("  `").append(columnName).append("` ").append(columnType);

Review comment:
   Without backticks, identifiers containing special characters like spaces cannot be handled. See the create table example in `quotedid_basic.q`; those characters can be used in column names too.
   
   Yes, it requires a more complex solution, but in the end we can print `CREATE TABLE` statements that can be executed by simply copy-pasting.
   
   Probably the expression in the check constraint must be parsed to an AST to collect the identifiers and replace them with their escaped versions.
   1. Example of how to build the AST:
   https://github.com/apache/hive/blob/8a8e03d02003aa3543f46f595b4425fd8c156ad9/ql/src/java/org/apache/hadoop/hive/ql/parse/type/ConstraintExprGenerator.java#L180
   2. The AST needs to be traversed to replace the identifiers.
   3. Print the altered expression from the AST. You need a TokenRewriteStream to do this, and currently `parseExpression` doesn't return it, so more code needs to be changed:
   https://github.com/apache/hive/blob/8a8e03d02003aa3543f46f595b4425fd8c156ad9/ql/src/java/org/apache/hadoop/hive/ql/parse/SemanticAnalyzer.java#L15010
   
   This seems to be a bigger change; feel free to tackle it in a separate jira.




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: gitbox-unsubscr...@hive.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
---

Worklog Id: (was: 672562)
Time Spent: 50m  (was: 40m)

> Add constraints in result of “SHOW CREATE TABLE ”
> -
>
> Key: HIVE-25652
> URL: https://issues.apache.org/jira/browse/HIVE-25652
> Project: Hive
>  Issue Type: Improvement
>Reporter: Soumyakanti Das
>Assignee: Soumyakanti Das
>Priority: Major
>  Labels: pull-request-available
>  Time Spent: 50m
>  Remaining Estimate: 0h
>
> Currently show create table doesn’t pull any constraint info like not null, 
> defaults, primary key.
> Example:
> Create table
>  
> {code:java}
> CREATE TABLE TEST(
>   col1 varchar(100) NOT NULL COMMENT "comment for column 1",
>   col2 timestamp DEFAULT CURRENT_TIMESTAMP() COMMENT "comment for column 2",
>   col3 decimal,
>   col4 varchar(512) NOT NULL,
>   col5 varchar(100),
>   primary key(col1, col2) disable novalidate)
> ROW FORMAT SERDE
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
> STORED AS INPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
> OUTPUTFORMAT
>   'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat';
> {code}
> Currently {{SHOW CREATE TABLE TEST}} doesn't show the column constraints.
> {code:java}
> CREATE TABLE `test`(
>   `col1` varchar(100) COMMENT 'comment for column 1', 
>   `col2` timestamp COMMENT 'comment for column 2', 
>   `col3` decimal(10,0), 
>   `col4` varchar(512), 
>   `col5` varchar(100))
> ROW FORMAT SERDE 
>   'org.apache.hadoop.hive.ql.io.orc.OrcSerde' 
> STORED AS