wchevreuil commented on a change in pull request #72:
URL: https://github.com/apache/hbase-connectors/pull/72#discussion_r536191045
##########
File path:
spark/hbase-spark/src/main/scala/org/apache/hadoop/hbase/spark/DefaultSource.scala
##########
@@ -150,12 +151,10 @@ case class HBaseRelation (
def createTable() {
val numReg = parameters.get(HBaseTableCatalog.newTable).map(x =>
x.toInt).getOrElse(0)
- val startKey = Bytes.toBytes(
- parameters.get(HBaseTableCatalog.regionStart)
- .getOrElse(HBaseTableCatalog.defaultRegionStart))
- val endKey = Bytes.toBytes(
- parameters.get(HBaseTableCatalog.regionEnd)
- .getOrElse(HBaseTableCatalog.defaultRegionEnd))
+ val startKey = parameters.get(HBaseTableCatalog.regionStart)
+ .getOrElse(HBaseTableCatalog.defaultRegionStart).getBytes(StandardCharsets.ISO_8859_1)
Review comment:
I'm not sure it is a good idea to use an encoding different from the
default used by the `Bytes` util converter (StandardCharsets.UTF_8). Many
pieces of HBase code rely on the `Bytes` converter, so comparisons may become
inconsistent.
Also, I'm not sure I understand why you are using a different converter
here. Can you elaborate on the issue you are hitting with the builtin
`Bytes` converter?
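To illustrate the concern, here is a minimal sketch (not from the PR). It models `Bytes.toBytes(String)` with plain UTF-8 encoding, which is what the HBase `Bytes` utility uses internally, and shows that ISO-8859-1 can produce a different byte array for a non-ASCII key, so byte-level comparisons against UTF-8-encoded data would diverge:

```scala
import java.nio.charset.StandardCharsets
import java.util.Arrays

object EncodingMismatch extends App {
  // Hypothetical region boundary containing a non-ASCII character.
  val key = "caf\u00e9" // "café"

  // HBase's Bytes.toBytes(String) encodes with UTF-8; modeled here
  // with String.getBytes to keep the example dependency-free.
  val utf8Bytes = key.getBytes(StandardCharsets.UTF_8)      // 'é' -> 2 bytes
  val isoBytes  = key.getBytes(StandardCharsets.ISO_8859_1) // 'é' -> 1 byte

  println(utf8Bytes.length)                   // 5
  println(isoBytes.length)                    // 4
  println(Arrays.equals(utf8Bytes, isoBytes)) // false
}
```

For pure-ASCII keys the two encodings agree, which may be why the difference went unnoticed, but any multi-byte character makes the split keys incomparable with data written through `Bytes`.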
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
For queries about this service, please contact Infrastructure at:
[email protected]