[jira] [Updated] (KYLIN-3115) Incompatible RowKeySplitter initialize between build and merge job

2017-12-17 Thread Wang, Gang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang, Gang updated KYLIN-3115:
--
Description: 
In class NDCuboidBuilder:
public NDCuboidBuilder(CubeSegment cubeSegment) {
    this.cubeSegment = cubeSegment;
    this.rowKeySplitter = new RowKeySplitter(cubeSegment, 65, 256);
    this.rowKeyEncoderProvider = new RowKeyEncoderProvider(cubeSegment);
}
which creates byte arrays of length 256 to hold the row key column bytes.

However, in class MergeCuboidMapper it is initialized with length 255:
rowKeySplitter = new RowKeySplitter(sourceCubeSegment, 65, 255);

So, if a dimension uses fixed-length encoding with the max length set to 256, 
the cube build job will succeed, but the merge job will always fail, because 
of MergeCuboidMapper.doMap:
public void doMap(Text key, Text value, Context context) throws IOException, InterruptedException {
    long cuboidID = rowKeySplitter.split(key.getBytes());
    Cuboid cuboid = Cuboid.findForMandatory(cubeDesc, cuboidID);

In doMap, RowKeySplitter.split(byte[] bytes) is invoked, which copies each row key column into its split buffer:
for (int i = 0; i < cuboid.getColumns().size(); i++) {
    splitOffsets[i] = offset;
    TblColRef col = cuboid.getColumns().get(i);
    int colLength = colIO.getColumnLength(col);
    SplittedBytes split = this.splitBuffers[this.bufferSize++];
    split.length = colLength;
    System.arraycopy(bytes, offset, split.value, 0, colLength);
    offset += colLength;
}
System.arraycopy will throw an IndexOutOfBoundsException when a column value 
is 256 bytes long and is copied into a byte array of length 255.
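The failure mode can be reproduced in isolation with a minimal sketch (the class and method names here are my own illustration, not the actual Kylin classes): copying a 256-byte column into a 255-byte buffer throws, while a 256-byte buffer succeeds.

```java
// Minimal standalone reproduction of the buffer mismatch (hypothetical names,
// not the Kylin source): the build path allocates 256-byte split buffers,
// the merge path only 255.
public class SplitBufferDemo {
    static byte[] copyColumn(byte[] rowKey, int offset, int colLength, int bufferLength) {
        byte[] buffer = new byte[bufferLength];
        // Throws IndexOutOfBoundsException when colLength > bufferLength.
        System.arraycopy(rowKey, offset, buffer, 0, colLength);
        return buffer;
    }

    public static void main(String[] args) {
        byte[] rowKey = new byte[300]; // stand-in for a serialized row key
        copyColumn(rowKey, 0, 256, 256); // build job's buffer size: succeeds
        try {
            copyColumn(rowKey, 0, 256, 255); // merge job's buffer size: fails
        } catch (IndexOutOfBoundsException e) {
            System.out.println("merge-side copy failed: " + e);
        }
    }
}
```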

The same incompatibility occurs in class FilterRecommendCuboidDataMapper, 
which initializes its RowKeySplitter as:
rowKeySplitter = new RowKeySplitter(originalSegment, 65, 255);

I think the better way is to always set the max split length to 256. 
Dimensions encoded with fixed length 256 are actually pretty common in our 
production: the Hive type varchar(256) is widespread, and users without much 
Kylin knowledge tend to choose fixed-length encoding for such dimensions and 
set the max length to 256.
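One way to express the proposal above (a sketch only; the class and constant names are mine, not taken from any Kylin patch) is a single shared constant, so the build, merge, and filter jobs can never disagree on the split buffer length:

```java
// Sketch of the proposal: one shared constant for the split buffer length.
// RowKeyConstants and MAX_SPLIT_LENGTH are hypothetical names.
public final class RowKeyConstants {
    public static final int MAX_SPLIT_LENGTH = 256;

    private RowKeyConstants() {} // no instances; constants only
}
// NDCuboidBuilder:                 new RowKeySplitter(cubeSegment, 65, RowKeyConstants.MAX_SPLIT_LENGTH)
// MergeCuboidMapper:               new RowKeySplitter(sourceCubeSegment, 65, RowKeyConstants.MAX_SPLIT_LENGTH)
// FilterRecommendCuboidDataMapper: new RowKeySplitter(originalSegment, 65, RowKeyConstants.MAX_SPLIT_LENGTH)
```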


> Incompatible RowKeySplitter initialize between build and merge job
> --
>
> Key: KYLIN-3115
> URL: https://issues.apache.org/jira/browse/KYLIN-3115
> Project: Kylin
>  Issue Type: Bug
>Reporter: Wang, Gang
>Assignee: Wang, Gang
>Priority: Minor
>

[jira] [Updated] (KYLIN-3115) Incompatible RowKeySplitter initialize between build and merge job

2017-12-17 Thread Wang, Gang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang, Gang updated KYLIN-3115:
--
Priority: Minor  (was: Major)




--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Assigned] (KYLIN-3115) Incompatible RowKeySplitter initialize between build and merge job

2017-12-17 Thread Wang, Gang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3115?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang, Gang reassigned KYLIN-3115:
-

Assignee: Wang, Gang



[jira] [Created] (KYLIN-3115) Incompatible RowKeySplitter initialize between build and merge job

2017-12-17 Thread Wang, Gang (JIRA)
Wang, Gang created KYLIN-3115:
-

 Summary: Incompatible RowKeySplitter initialize between build and 
merge job
 Key: KYLIN-3115
 URL: https://issues.apache.org/jira/browse/KYLIN-3115
 Project: Kylin
  Issue Type: Bug
Reporter: Wang, Gang




[jira] [Comment Edited] (KYLIN-2956) building trie dictionary blocked on value of length over 4095

2017-12-17 Thread Wang, Gang (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294449#comment-16294449
 ] 

Wang, Gang edited comment on KYLIN-2956 at 12/18/17 2:16 AM:
-

I think when building trie dictionary, 32767 is too huge as the value length 
limit, 8191 should make sense. Fix as '0xE000'.
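The effect of each candidate mask can be checked with a small sketch (my own illustration, not the Kylin source): since lengths grow upward from zero, the first length rejected by a check of the form (length & mask) == 0 is the mask's lowest set bit, so the usable maximum is one below that bit.

```java
// Illustration of the mask arithmetic discussed above (not Kylin source code).
// A check of the form (length & mask) == 0 first fails at the mask's lowest
// set bit, so the largest usable length is one below that bit.
public class LengthMaskDemo {
    static boolean passes(int length, int mask) {
        return (length & mask) == 0;
    }

    static int maxUsableLength(int mask) {
        return Integer.lowestOneBit(mask) - 1;
    }

    public static void main(String[] args) {
        System.out.println(maxUsableLength(0x7000)); // 4095, the current limit
        System.out.println(maxUsableLength(0x8000)); // 32767
        System.out.println(maxUsableLength(0xE000)); // 8191, the proposed limit
    }
}
```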


was (Author: gwang3):
I think when building trie dictionary, 32767 is too huge as the value length 
limit, 8191 should make length. Fix as '0xE000'.

> building trie dictionary blocked on value of length over 4095 
> --
>
> Key: KYLIN-2956
> URL: https://issues.apache.org/jira/browse/KYLIN-2956
> Project: Kylin
>  Issue Type: Bug
>  Components: General
>Reporter: Wang, Gang
>Assignee: Wang, Gang
> Attachments: 
> 0001-KYLIN-2956-building-trie-dictionary-blocked-on-value.patch
>
>
> In the new release, Kylin will check the value length when building trie 
> dictionary, in class TrieDictionaryBuilder method buildTrieBytes, through 
> method:
> private void positiveShortPreCheck(int i, String fieldName) {
> if (!BytesUtil.isPositiveShort(i)) {
> throw new IllegalStateException(fieldName + " is not positive short, 
> usually caused by too long dict value.");
> }
> }
> public static boolean isPositiveShort(int i) {
> return (i & 0x7000) == 0;
> }
> And 0x7000 in binary is 0111 0000 0000 0000, so the value length must be 
> less than 0001 0000 0000 0000 (4096), i.e. at most 4095 in decimal.
> I wonder why it is 0x7000. Is 0x8000 (1000 0000 0000 0000), supporting a 
> max length of 0111 1111 1111 1111 (32767), 
> what you wanted? 
> Or, as 32767 may be too large, I prefer 0xE000 (1110 0000 0000 0000), 
> supporting a max length of 0001 1111 1111 1111 (8191).
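The mask arithmetic can be checked with a small sketch (a stand-alone mirror of BytesUtil.isPositiveShort, parameterized by the mask; not Kylin's actual code):

```java
public class LengthMaskDemo {
    // Stand-in for BytesUtil.isPositiveShort, with the mask made a parameter
    // so the current 0x7000 and the proposed 0xE000 can be compared.
    public static boolean passes(int length, int mask) {
        return (length & mask) == 0;
    }

    public static void main(String[] args) {
        // Current mask 0x7000 (0111 0000 0000 0000): lengths >= 4096 are rejected.
        System.out.println(passes(4095, 0x7000)); // true
        System.out.println(passes(4096, 0x7000)); // false
        // Proposed mask 0xE000 (1110 0000 0000 0000): the limit rises to 8191.
        System.out.println(passes(8191, 0xE000)); // true
        System.out.println(passes(8192, 0xE000)); // false
    }
}
```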





[jira] [Updated] (KYLIN-2956) building trie dictionary blocked on value of length over 4095

2017-12-17 Thread Wang, Gang (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-2956?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Wang, Gang updated KYLIN-2956:
--
Attachment: 0001-KYLIN-2956-building-trie-dictionary-blocked-on-value.patch

I think when building a trie dictionary, 32767 is too large as the value length 
limit; 8191 should make sense. Fix as '0xE000'.

> building trie dictionary blocked on value of length over 4095 
> --
>
> Key: KYLIN-2956
> URL: https://issues.apache.org/jira/browse/KYLIN-2956
> Project: Kylin
>  Issue Type: Bug
>  Components: General
>Reporter: Wang, Gang
>Assignee: Wang, Gang
> Attachments: 
> 0001-KYLIN-2956-building-trie-dictionary-blocked-on-value.patch
>
>
> In the new release, Kylin will check the value length when building trie 
> dictionary, in class TrieDictionaryBuilder method buildTrieBytes, through 
> method:
> private void positiveShortPreCheck(int i, String fieldName) {
> if (!BytesUtil.isPositiveShort(i)) {
> throw new IllegalStateException(fieldName + " is not positive short, 
> usually caused by too long dict value.");
> }
> }
> public static boolean isPositiveShort(int i) {
> return (i & 0x7000) == 0;
> }
> And 0x7000 in binary is 0111 0000 0000 0000, so the value length must be 
> less than 0001 0000 0000 0000 (4096), i.e. at most 4095 in decimal.
> I wonder why it is 0x7000. Is 0x8000 (1000 0000 0000 0000), supporting a 
> max length of 0111 1111 1111 1111 (32767), 
> what you wanted? 
> Or, as 32767 may be too large, I prefer 0xE000 (1110 0000 0000 0000), 
> supporting a max length of 0001 1111 1111 1111 (8191).





[jira] [Commented] (KYLIN-3069) Add proper time zone support to the WebUI instead of GMT/PST kludge

2017-12-17 Thread peng.jianhua (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294426#comment-16294426
 ] 

peng.jianhua commented on KYLIN-3069:
-

Ok. Thanks.

> Add proper time zone support to the WebUI instead of GMT/PST kludge
> ---
>
> Key: KYLIN-3069
> URL: https://issues.apache.org/jira/browse/KYLIN-3069
> Project: Kylin
>  Issue Type: Bug
>  Components: Web 
>Affects Versions: v2.2.0
> Environment: HDP 2.5.3, Kylin 2.2.0
>Reporter: Vsevolod Ostapenko
>Assignee: peng.jianhua
>Priority: Minor
> Attachments: 
> 0001-KYLIN-3069-Add-proper-time-zone-support-to-the-WebUI.patch, Screen Shot 
> 2017-12-05 at 10.01.39 PM.png, kylin_pic1.png, kylin_pic2.png, kylin_pic3.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Time zone handling logic in the WebUI is a kludge, coded to parse only 
> "GMT-N" time zone specifications and defaulting to PST, if parsing is not 
> successful (kylin/webapp/app/js/filters/filter.js)
> Integrating moment and moment time zone (http://momentjs.com/timezone/docs/) 
> into the product, would allow correct time zone handling.
> For users who happen to reside in geographical locations that observe 
> daylight saving time, usage of the GMT-N format is very inconvenient, 
> and the info reported by the UI in various places is perplexing.
> Needless to say that the GMT moniker itself is long deprecated.





[jira] [Commented] (KYLIN-3069) Add proper time zone support to the WebUI instead of GMT/PST kludge

2017-12-17 Thread Zhixiong Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3069?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294103#comment-16294103
 ] 

Zhixiong Chen commented on KYLIN-3069:
--

Hi, [~peng.jianhua]
Could you follow Vsevolod's suggestions to modify your patch?

> Add proper time zone support to the WebUI instead of GMT/PST kludge
> ---
>
> Key: KYLIN-3069
> URL: https://issues.apache.org/jira/browse/KYLIN-3069
> Project: Kylin
>  Issue Type: Bug
>  Components: Web 
>Affects Versions: v2.2.0
> Environment: HDP 2.5.3, Kylin 2.2.0
>Reporter: Vsevolod Ostapenko
>Assignee: peng.jianhua
>Priority: Minor
> Attachments: 
> 0001-KYLIN-3069-Add-proper-time-zone-support-to-the-WebUI.patch, Screen Shot 
> 2017-12-05 at 10.01.39 PM.png, kylin_pic1.png, kylin_pic2.png, kylin_pic3.png
>
>   Original Estimate: 168h
>  Remaining Estimate: 168h
>
> Time zone handling logic in the WebUI is a kludge, coded to parse only 
> "GMT-N" time zone specifications and defaulting to PST, if parsing is not 
> successful (kylin/webapp/app/js/filters/filter.js)
> Integrating moment and moment time zone (http://momentjs.com/timezone/docs/) 
> into the product, would allow correct time zone handling.
> For users who happen to reside in geographical locations that observe 
> daylight saving time, usage of the GMT-N format is very inconvenient, 
> and the info reported by the UI in various places is perplexing.
> Needless to say that the GMT moniker itself is long deprecated.





[jira] [Commented] (KYLIN-3061) When we cancel the Topic modification for 'Kafka Setting' of streaming table, the 'Cancel' operation will make a mistake.

2017-12-17 Thread Zhixiong Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3061?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294099#comment-16294099
 ] 

Zhixiong Chen commented on KYLIN-3061:
--

OK. It's my mistake.
Then it's fine by me.
I will merge it into KYLIN master.

> When we cancel the Topic modification for 'Kafka Setting' of streaming table, 
> the 'Cancel' operation will make a mistake.
> -
>
> Key: KYLIN-3061
> URL: https://issues.apache.org/jira/browse/KYLIN-3061
> Project: Kylin
>  Issue Type: Bug
>  Components: Web 
>Affects Versions: v2.3.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
>  Labels: patch
> Attachments: 
> 0001-KYLIN-3061-When-we-cancel-the-Topic-modification-for.patch, MG140.jpeg, 
> MG141.jpeg, MG146.jpeg, after_repaired.png, can_add_the_same_record.png, 
> cancel_is_not_right.png, edit_kafka_setting.png, edit_streaming_table.png
>
>
> There are two bugs in this issue.
> First
> 1. Choose one streaming table, choose the 'Streaming Cluster' tab, then click 
> 'Edit' button, refer to [^edit_streaming_table.png]
> 2. The edit page will be opened, then you can edit the value of topic of 
> 'Kafka Setting'. As long as you modify the ID, Host, Port value, the original 
> value tag will follow changes, refer to [^edit_kafka_setting.png];
> 3. When you click 'cancel' button, you will find the old values have been 
> changed to the new values, and if you click the 'submit' button, you will 
> also find the values to be canceled will be submitted, refer to 
> [^cancel_is_not_right.png]
> But I think the correct way should be that 'cancel' button will not change 
> any value.
> Second
> The following code in streamingConfig.js has a bug: even if the 
> "cluster.newBroker" object has exactly the same attribute values as an 
> element of the "cluster.brokers" array, this "if" check still treats it as 
> new. That is because indexOf compares object references, and two objects 
> with identical attribute values may live at different storage addresses. The 
> result is that you can add several identical records, like 
> [^can_add_the_same_record.png].
> {code:java}
>   $scope.saveNewBroker = function(cluster) {
> if (cluster.brokers.indexOf(cluster.newBroker) === -1) {
> ..
> }
>   }
> {code}
> So I have repaired these two bugs, please check the patch, thanks!
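The same pitfall exists in Java with List.indexOf when a class does not override equals(): lookup falls back to reference identity, just as JavaScript's Array.prototype.indexOf compares object references. A minimal illustration, using a hypothetical Broker class (not Kylin's actual model):

```java
import java.util.ArrayList;
import java.util.List;

public class IndexOfDemo {
    // Hypothetical broker record WITHOUT equals()/hashCode(): List.indexOf
    // then falls back to reference identity, mirroring the JS bug.
    public static class Broker {
        final String host;
        final int port;
        public Broker(String host, int port) { this.host = host; this.port = port; }
    }

    // Adds one broker, then looks up a second object with identical field
    // values; returns the indexOf result.
    public static int lookupDuplicate() {
        List<Broker> brokers = new ArrayList<>();
        brokers.add(new Broker("node1", 9092));
        return brokers.indexOf(new Broker("node1", 9092)); // not found
    }

    public static void main(String[] args) {
        // A duplicate check written as indexOf(...) == -1 lets the same
        // record in twice, because the lookup below yields -1.
        System.out.println(lookupDuplicate()); // -1
    }
}
```

The fix on the JS side is to compare attribute values (e.g. ID, host, port) rather than object references.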





[jira] [Resolved] (KYLIN-3076) Make kylin remember the choices we have made in the "Monitor>Jobs" page

2017-12-17 Thread Zhixiong Chen (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Zhixiong Chen resolved KYLIN-3076.
--
   Resolution: Fixed
Fix Version/s: v2.3.0

> Make kylin remember the choices we have made in the "Monitor>Jobs" page
> ---
>
> Key: KYLIN-3076
> URL: https://issues.apache.org/jira/browse/KYLIN-3076
> Project: Kylin
>  Issue Type: Improvement
>  Components: Web 
>Affects Versions: v2.3.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
> Fix For: v2.3.0
>
> Attachments: 
> 0001-KYLIN-3076-Make-kylin-remember-the-choices-we-have-m.patch
>
>
> In the "Monitor>Jobs" page, the default setting is to list all the jobs from 
> the last week. If we only want to see the jobs whose cube is named "cube1" 
> and which finished successfully in the last month, we have to type "cube1" in 
> the filter box, select "LAST ONE MONTH" in the drop-down box, check the 
> "FINISHED" item in the job status checkbox, and then click the "refresh" 
> button; the results we want will then come out. But when we leave this page 
> for a moment, for example by clicking the "MODEL" menu to see the model 
> information and then clicking the "MONITOR" menu to return to the monitor 
> page, we find that all the choices we made are gone, and we have to make 
> them again to filter out the jobs we want.
> We hope that Kylin can remember the choices we have made in the 
> "Monitor>Jobs" page, so we don't have to make the same choices every time 
> we enter this page.





[jira] [Commented] (KYLIN-3076) Make kylin remember the choices we have made in the "Monitor>Jobs" page

2017-12-17 Thread Zhixiong Chen (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294094#comment-16294094
 ] 

Zhixiong Chen commented on KYLIN-3076:
--

It is fine to me.
Thanks! Jianhua.

> Make kylin remember the choices we have made in the "Monitor>Jobs" page
> ---
>
> Key: KYLIN-3076
> URL: https://issues.apache.org/jira/browse/KYLIN-3076
> Project: Kylin
>  Issue Type: Improvement
>  Components: Web 
>Affects Versions: v2.3.0
>Reporter: peng.jianhua
>Assignee: peng.jianhua
> Attachments: 
> 0001-KYLIN-3076-Make-kylin-remember-the-choices-we-have-m.patch
>
>
> In the "Monitor>Jobs" page, the default setting is to list all the jobs from 
> the last week. If we only want to see the jobs whose cube is named "cube1" 
> and which finished successfully in the last month, we have to type "cube1" in 
> the filter box, select "LAST ONE MONTH" in the drop-down box, check the 
> "FINISHED" item in the job status checkbox, and then click the "refresh" 
> button; the results we want will then come out. But when we leave this page 
> for a moment, for example by clicking the "MODEL" menu to see the model 
> information and then clicking the "MONITOR" menu to return to the monitor 
> page, we find that all the choices we made are gone, and we have to make 
> them again to filter out the jobs we want.
> We hope that Kylin can remember the choices we have made in the 
> "Monitor>Jobs" page, so we don't have to make the same choices every time 
> we enter this page.





[jira] [Resolved] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Dong Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Li resolved KYLIN-3111.

   Resolution: Fixed
Fix Version/s: v2.3.0

Merged to master branch, thanks Chao!

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chao Long
> Fix For: v2.3.0
>
> Attachments: 
> KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.
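A minimal sketch of the shape of the fix, using a stand-in FakeAdmin rather than the real HBase Admin API: the table operations go inside try, and close() moves to finally, so the handle is released even when an operation throws.

```java
public class FinallyCloseDemo {
    // Stand-in for the HBase Admin handle (not the real client API).
    public static class FakeAdmin {
        boolean closed = false;
        void modifyTable() { throw new RuntimeException("region server down"); }
        void close() { closed = true; }
    }

    // Mirrors the corrected control flow: even though modifyTable() throws,
    // the finally block still closes the handle.
    public static FakeAdmin alterMetadata() {
        FakeAdmin admin = new FakeAdmin();
        try {
            admin.modifyTable(); // may throw, as in HtableAlterMetadataCLI
        } catch (RuntimeException e) {
            // swallowed for the demo; real code would log or rethrow
        } finally {
            admin.close();       // now runs on every path
        }
        return admin;
    }

    public static void main(String[] args) {
        System.out.println(alterMetadata().closed); // true: closed despite the exception
    }
}
```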





[jira] [Commented] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Chao Long (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294053#comment-16294053
 ] 

Chao Long commented on KYLIN-3111:
--

Thanks [~lidong_sjtu] for your suggestions!
I have updated the commit information and attached a new patch!

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chao Long
> Attachments: 
> KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.





[jira] [Issue Comment Deleted] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Chao Long (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Long updated KYLIN-3111:
-
Comment: was deleted

(was: Thanks Dong Li for your suggestions!
I have updated the commit infomation and attached a new patch!)

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chao Long
> Attachments: 
> KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.





[jira] [Commented] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Chao Long (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294052#comment-16294052
 ] 

Chao Long commented on KYLIN-3111:
--

Thanks Dong Li for your suggestions!
I have updated the commit information and attached a new patch!

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chao Long
> Attachments: 
> KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.





[jira] [Updated] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Chao Long (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Long updated KYLIN-3111:
-
Attachment: 
KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chao Long
> Attachments: 
> KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.





[jira] [Updated] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Chao Long (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Long updated KYLIN-3111:
-
Attachment: (was: 
KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch)

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chao Long
> Attachments: 
> KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.





[jira] [Updated] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Chao Long (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Long updated KYLIN-3111:
-
Attachment: (was: KYLIN-3111.patch)

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chao Long
> Attachments: 
> KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.





[jira] [Updated] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Chao Long (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chao Long updated KYLIN-3111:
-
Attachment: 
KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chao Long
> Attachments: 
> KYLIN-3111-Place-close-of-admin-instance-into-finally-block.patch, 
> KYLIN-3111.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.





[jira] [Commented] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Dong Li (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294038#comment-16294038
 ] 

Dong Li commented on KYLIN-3111:


Thanks [~Wayne0101] for your contribution!

Two suggestions for this patch:
1. The commit message should follow the common pattern, for example: "KYLIN-3111 
X". Please refer to other commits and update your message.
2. Your author information (email, name) is not included in this patch. Please 
update your git config so that your contribution can be recognized directly.

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chao Long
> Attachments: KYLIN-3111.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.





[jira] [Assigned] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Dong Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Li reassigned KYLIN-3111:
--

Assignee: Dong Li

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Dong Li
> Attachments: KYLIN-3111.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.





[jira] [Assigned] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Dong Li (JIRA)

 [ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Dong Li reassigned KYLIN-3111:
--

Assignee: Chao Long  (was: Dong Li)

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
>Assignee: Chao Long
> Attachments: KYLIN-3111.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.





[jira] [Commented] (KYLIN-3111) Close of Admin instance should be placed in finally block

2017-12-17 Thread Chao Long (JIRA)

[ 
https://issues.apache.org/jira/browse/KYLIN-3111?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16294037#comment-16294037
 ] 

Chao Long commented on KYLIN-3111:
--

I hope to fix this issue.

> Close of Admin instance should be placed in finally block
> -
>
> Key: KYLIN-3111
> URL: https://issues.apache.org/jira/browse/KYLIN-3111
> Project: Kylin
>  Issue Type: Bug
>Reporter: Ted Yu
> Attachments: KYLIN-3111.patch
>
>
> Looking at the code in DeployCoprocessorCLI.java and 
> HtableAlterMetadataCLI.java , I see that close of Admin instance is without 
> finally block:
> {code}
> hbaseAdmin.disableTable(table.getTableName());
> table.setValue(metadataKey, metadataValue);
> hbaseAdmin.modifyTable(table.getTableName(), table);
> hbaseAdmin.enableTable(table.getTableName());
> hbaseAdmin.close();
> {code}
> If any exception is thrown in the operations prior to the close(), the 
> close() would be skipped, leading to resource leak.


