ajantha-bhat commented on a change in pull request #3936:
URL: https://github.com/apache/carbondata/pull/3936#discussion_r490834840
##########
File path:
integration/hive/src/main/java/org/apache/carbondata/hive/MapredCarbonInputFormat.java
##########
@@ -154,8 +154,16 @@ private static CarbonTable getCarbonTable(Configuration configuration, String pa
} else {
carbonInputFormat = new CarbonFileInputFormat<>();
}
- List<org.apache.hadoop.mapreduce.InputSplit> splitList =
- carbonInputFormat.getSplits(jobContext);
+ List<org.apache.hadoop.mapreduce.InputSplit> splitList;
+ try {
+ splitList = carbonInputFormat.getSplits(jobContext);
+ } catch (IOException ex) {
+ if (ex.getMessage().contains("No Index files are present in the table location :")) {
Review comment:
I see that Hive supports only non-transactional tables. Non-transactional tables are often created from a path provided by the user, so if that path contains no carbon files at query time, it is a must to throw an exception. If an external table is created on a wrong path, the query currently returns 0 rows and the user will not know what went wrong.
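A minimal sketch of the fail-fast behavior suggested above: rethrow a descriptive exception instead of swallowing it and returning empty splits. The class and method names here are illustrative, not the PR's actual code.

```java
import java.io.IOException;
import java.util.List;

// Sketch: surface a missing-carbon-files error to the user instead of
// silently answering the query with 0 rows. Names are hypothetical.
public class SplitFetchSketch {

  // Stand-in for carbonInputFormat.getSplits(jobContext).
  static List<String> getSplits(String tablePath) throws IOException {
    throw new IOException("No Index files are present in the table location : " + tablePath);
  }

  static List<String> fetchSplits(String tablePath) throws IOException {
    try {
      return getSplits(tablePath);
    } catch (IOException ex) {
      String msg = ex.getMessage();
      if (msg != null && msg.contains("No Index files are present in the table location :")) {
        // Fail fast: an external table pointed at a wrong/empty path should
        // error out, not look like an empty table.
        throw new IOException("No carbon files found under table path: " + tablePath, ex);
      }
      throw ex;
    }
  }

  public static void main(String[] args) {
    try {
      fetchSplits("/warehouse/wrong_path");
      System.out.println("splits fetched");
    } catch (IOException ex) {
      System.out.println("query failed fast: " + ex.getMessage());
    }
  }
}
```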
##########
File path:
integration/hive/src/main/java/org/apache/carbondata/hive/util/HiveCarbonUtil.java
##########
@@ -171,15 +171,33 @@ public static CarbonTable getCarbonTable(Configuration tableProperties) throws S
columns = columns + "," + partitionColumns;
columnTypes = columnTypes + ":" + partitionColumnTypes;
}
- String[] columnTypeArray = HiveCarbonUtil.splitSchemaStringToArray(columnTypes);
-
+ String[][] validatedColumnsAndTypes = validateColumnsAndTypes(columns, columnTypes);
CarbonTable carbonTable = CarbonTable.buildFromTableInfo(
HiveCarbonUtil.getTableInfo(tableName, databaseName, tablePath,
- sortColumns, columns.split(","), columnTypeArray, new ArrayList<>()));
+ sortColumns, validatedColumnsAndTypes[0],
+ validatedColumnsAndTypes[1], new ArrayList<>()));
carbonTable.setTransactionalTable(false);
return carbonTable;
}
+ private static String[][] validateColumnsAndTypes(String columns, String columnTypes) {
+ String[] columnTypeArray = HiveCarbonUtil.splitSchemaStringToArray(columnTypes);
Review comment:
What are these changes for? I didn't see anything about this in the description. Please add comments.
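For context on what a `validateColumnsAndTypes`-style helper could be checking, here is a hedged sketch: verifying that the comma-separated column list and the colon-separated type list line up one-to-one before the table schema is built. This is an assumption about the intent, not the PR's actual implementation; the naive `split(":")` here would also need bracket awareness for nested types in real code.

```java
import java.util.Arrays;

// Hypothetical sketch of a column/type consistency check. Names mirror the
// diff but the logic is illustrative only.
public class ValidateColumnsSketch {

  static String[][] validateColumnsAndTypes(String columns, String columnTypes) {
    String[] columnArray = columns.split(",");
    // NOTE: real code must respect '<...>' nesting (e.g. array<struct<...>>).
    String[] typeArray = columnTypes.split(":");
    if (columnArray.length != typeArray.length) {
      throw new IllegalArgumentException(
          "column/type count mismatch: " + columnArray.length + " vs " + typeArray.length);
    }
    return new String[][] { columnArray, typeArray };
  }

  public static void main(String[] args) {
    String[][] result = validateColumnsAndTypes("id,name", "int:string");
    System.out.println(Arrays.toString(result[0]) + " " + Arrays.toString(result[1]));
  }
}
```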
##########
File path:
integration/hive/src/main/java/org/apache/carbondata/hive/util/HiveCarbonUtil.java
##########
@@ -303,8 +321,15 @@ public void commitDropTable(Table table, boolean b) throws MetaException {
List<String> tokens = new ArrayList();
StringBuilder stack = new StringBuilder();
int openingCount = 0;
- for (int i = 0; i < schema.length(); i++) {
- if (schema.charAt(i) == '<') {
+ int length = schema.length();
Review comment:
Same comment as above: please explain these changes in the description or add code comments.
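The diff above hoists `schema.length()` out of the loop that counts `<` brackets. As a standalone illustration of what that bracket-counting loop accomplishes, here is a minimal, self-contained sketch (not CarbonData's actual implementation) of splitting a Hive schema string on `:` while ignoring separators nested inside `<...>`:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch: split "int:array<struct<a:int,b:string>>:string" into top-level
// type tokens, treating ':' inside '<...>' as part of a nested type.
public class SchemaSplitSketch {

  static String[] splitSchemaStringToArray(String schema) {
    List<String> tokens = new ArrayList<>();
    StringBuilder current = new StringBuilder();
    int openingCount = 0;             // current '<...>' nesting depth
    int length = schema.length();     // hoisted out of the loop, as in the diff
    for (int i = 0; i < length; i++) {
      char c = schema.charAt(i);
      if (c == '<') {
        openingCount++;
      } else if (c == '>') {
        openingCount--;
      }
      if (c == ':' && openingCount == 0) {
        tokens.add(current.toString());   // top-level separator: emit a token
        current.setLength(0);
      } else {
        current.append(c);
      }
    }
    tokens.add(current.toString());
    return tokens.toArray(new String[0]);
  }

  public static void main(String[] args) {
    for (String p : splitSchemaStringToArray("int:array<struct<a:int,b:string>>:string")) {
      System.out.println(p);
    }
    // prints:
    // int
    // array<struct<a:int,b:string>>
    // string
  }
}
```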
##########
File path:
integration/hive/src/test/java/org/apache/carbondata/hive/HiveTestUtils.java
##########
@@ -88,7 +86,6 @@ public boolean checkAnswer(ResultSet actual, ResultSet expected) throws SQLExcep
}
Collections.sort(expectedValuesList);Collections.sort(actualValuesList);
Assert.assertArrayEquals(expectedValuesList.toArray(), actualValuesList.toArray());
- Assert.assertTrue(rowCountExpected > 0);
Review comment:
Why was this validation removed? Without it, a query that wrongly returns 0 rows would still pass the comparison (empty equals empty).
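To illustrate the concern: two empty result sets compare equal, so a `checkAnswer`-style helper needs a non-empty guard or a broken query silently passes. A minimal sketch using plain `AssertionError` instead of JUnit, with hypothetical names:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Sketch: sort-and-compare two result lists, then reject the degenerate
// empty-vs-empty case that would make the test prove nothing.
public class CheckAnswerSketch {

  static boolean checkAnswer(List<String> actual, List<String> expected) {
    List<String> a = new ArrayList<>(actual);
    List<String> e = new ArrayList<>(expected);
    Collections.sort(a);
    Collections.sort(e);
    if (!a.equals(e)) {
      throw new AssertionError("result mismatch: " + a + " vs " + e);
    }
    if (e.isEmpty()) {
      // This is the guard the removed assertTrue(rowCountExpected > 0) provided.
      throw new AssertionError("expected result set is empty; test proves nothing");
    }
    return true;
  }

  public static void main(String[] args) {
    System.out.println(checkAnswer(Arrays.asList("b", "a"), Arrays.asList("a", "b")));
    try {
      checkAnswer(Collections.<String>emptyList(), Collections.<String>emptyList());
    } catch (AssertionError err) {
      System.out.println("caught: " + err.getMessage());
    }
  }
}
```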
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]