[jira] [Updated] (HIVE-8458) Potential null dereference in Utilities#clearWork()
[ https://issues.apache.org/jira/browse/HIVE-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-8458:
    Attachment: HIVE-8458_001.patch

I changed the null-check logic from && to ||, so the method now returns early when either plan path is null.

Potential null dereference in Utilities#clearWork()

    Key: HIVE-8458
    URL: https://issues.apache.org/jira/browse/HIVE-8458
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Priority: Minor
    Attachments: HIVE-8458_001.patch

{code}
Path mapPath = getPlanPath(conf, MAP_PLAN_NAME);
Path reducePath = getPlanPath(conf, REDUCE_PLAN_NAME);
// if the plan path hasn't been initialized just return, nothing to clean.
if (mapPath == null && reducePath == null) {
  return;
}
try {
  FileSystem fs = mapPath.getFileSystem(conf);
{code}

If mapPath is null but reducePath is not null, the getFileSystem() call would produce an NPE.

--
This message was sent by Atlassian JIRA (v6.3.4#6332)
[jira] [Updated] (HIVE-8458) Potential null dereference in Utilities#clearWork()
[ https://issues.apache.org/jira/browse/HIVE-8458?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-8458:
    Affects Version/s: 0.13.1
    Status: Patch Available (was: Open)

Potential null dereference in Utilities#clearWork()

    Key: HIVE-8458
    URL: https://issues.apache.org/jira/browse/HIVE-8458
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: Ted Yu
    Priority: Minor
    Attachments: HIVE-8458_001.patch

{code}
Path mapPath = getPlanPath(conf, MAP_PLAN_NAME);
Path reducePath = getPlanPath(conf, REDUCE_PLAN_NAME);
// if the plan path hasn't been initialized just return, nothing to clean.
if (mapPath == null && reducePath == null) {
  return;
}
try {
  FileSystem fs = mapPath.getFileSystem(conf);
{code}

If mapPath is null but reducePath is not null, the getFileSystem() call would produce an NPE.
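The effect of the guard change discussed in HIVE-8458 can be sketched as follows. This is a hypothetical stand-in (the class and method names are illustrative, not from the patch): with ||, the method bails out as soon as either plan path is null, so the later mapPath.getFileSystem(conf) can never see a null mapPath.

```java
// Hypothetical sketch of the clearWork() guard after the && -> || change.
public class ClearWorkGuard {
    // Returns true when clean-up should be skipped entirely: with ||, a
    // null mapPath (or reducePath) short-circuits before any dereference.
    public static boolean shouldSkip(Object mapPath, Object reducePath) {
        return mapPath == null || reducePath == null;
    }
}
```

With the original &&, the mixed case (mapPath null, reducePath non-null) fell through to the dereference; with || it is caught.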
[jira] [Updated] (HIVE-8342) Potential null dereference in ColumnTruncateMapper#jobClose()
[ https://issues.apache.org/jira/browse/HIVE-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-8342:
    Attachment: HIVE-8342_002.patch

I changed the thrown exception from a NullPointerException to a HiveException. How about that?

Potential null dereference in ColumnTruncateMapper#jobClose()

    Key: HIVE-8342
    URL: https://issues.apache.org/jira/browse/HIVE-8342
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor
    Attachments: HIVE-8342_001.patch, HIVE-8342_002.patch

{code}
Utilities.mvFileToFinalPath(outputPath, job, success, LOG, dynPartCtx, null, reporter);
{code}

Utilities.mvFileToFinalPath() calls createEmptyBuckets() where conf is dereferenced:

{code}
boolean isCompressed = conf.getCompressed();
TableDesc tableInfo = conf.getTableInfo();
{code}
[jira] [Assigned] (HIVE-8342) Potential null dereference in ColumnTruncateMapper#jobClose()
[ https://issues.apache.org/jira/browse/HIVE-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho reassigned HIVE-8342:
    Assignee: skrho

Potential null dereference in ColumnTruncateMapper#jobClose()

    Key: HIVE-8342
    URL: https://issues.apache.org/jira/browse/HIVE-8342
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor

{code}
Utilities.mvFileToFinalPath(outputPath, job, success, LOG, dynPartCtx, null, reporter);
{code}

Utilities.mvFileToFinalPath() calls createEmptyBuckets() where conf is dereferenced:

{code}
boolean isCompressed = conf.getCompressed();
TableDesc tableInfo = conf.getTableInfo();
{code}
[jira] [Updated] (HIVE-8342) Potential null dereference in ColumnTruncateMapper#jobClose()
[ https://issues.apache.org/jira/browse/HIVE-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-8342:
    Attachment: HIVE-8342_001.patch

jobClose() passes null for the FileSinkDesc parameter when it calls Utilities.mvFileToFinalPath(), and mvFileToFinalPath() in turn calls createEmptyBuckets(). At first I added a null check for the FileSinkDesc object inside createEmptyBuckets(), but since this utility class can be called from anywhere, I changed the logic to throw an exception instead when the object is null. How about that?

Potential null dereference in ColumnTruncateMapper#jobClose()

    Key: HIVE-8342
    URL: https://issues.apache.org/jira/browse/HIVE-8342
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor
    Attachments: HIVE-8342_001.patch

{code}
Utilities.mvFileToFinalPath(outputPath, job, success, LOG, dynPartCtx, null, reporter);
{code}

Utilities.mvFileToFinalPath() calls createEmptyBuckets() where conf is dereferenced:

{code}
boolean isCompressed = conf.getCompressed();
TableDesc tableInfo = conf.getTableInfo();
{code}
[jira] [Updated] (HIVE-8342) Potential null dereference in ColumnTruncateMapper#jobClose()
[ https://issues.apache.org/jira/browse/HIVE-8342?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-8342:
    Status: Patch Available (was: Open)

Potential null dereference in ColumnTruncateMapper#jobClose()

    Key: HIVE-8342
    URL: https://issues.apache.org/jira/browse/HIVE-8342
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor
    Attachments: HIVE-8342_001.patch

{code}
Utilities.mvFileToFinalPath(outputPath, job, success, LOG, dynPartCtx, null, reporter);
{code}

Utilities.mvFileToFinalPath() calls createEmptyBuckets() where conf is dereferenced:

{code}
boolean isCompressed = conf.getCompressed();
TableDesc tableInfo = conf.getTableInfo();
{code}
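The fail-fast idea discussed in HIVE-8342 can be sketched like this. It is a hypothetical illustration, not the patch: FileSinkLike stands in for the real FileSinkDesc, and the point is that a widely used utility should reject a null config with a clear exception rather than dereference it.

```java
// Hypothetical sketch: reject a null FileSinkDesc-style config up front
// instead of letting conf.getCompressed() throw a bare NPE later.
public class EmptyBucketsGuard {
    // Stand-in for org.apache.hadoop.hive.ql.plan.FileSinkDesc.
    public interface FileSinkLike {
        boolean getCompressed();
    }

    public static boolean isCompressed(FileSinkLike conf) {
        if (conf == null) {
            throw new IllegalStateException("FileSinkDesc is null; cannot create empty buckets");
        }
        return conf.getCompressed();
    }
}
```

A caller passing null gets an exception naming the problem, instead of an NPE deep inside createEmptyBuckets().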
[jira] [Assigned] (HIVE-7996) Potential resource leak in HiveBurnInClient
[ https://issues.apache.org/jira/browse/HIVE-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho reassigned HIVE-7996:
    Assignee: skrho

Potential resource leak in HiveBurnInClient

    Key: HIVE-7996
    URL: https://issues.apache.org/jira/browse/HIVE-7996
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor

In createTables() and runQueries(), Statement stmt is not closed upon return. In main(), Connection con is not closed upon exit.
[jira] [Commented] (HIVE-7996) Potential resource leak in HiveBurnInClient
[ https://issues.apache.org/jira/browse/HIVE-7996?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14135139#comment-14135139 ]

skrho commented on HIVE-7996:

Hello Ted Yu, which class needs the fix? Where should I look?

Potential resource leak in HiveBurnInClient

    Key: HIVE-7996
    URL: https://issues.apache.org/jira/browse/HIVE-7996
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor

In createTables() and runQueries(), Statement stmt is not closed upon return. In main(), Connection con is not closed upon exit.
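The leak pattern in HIVE-7996 (handles not closed on every exit path) is what try-with-resources was designed for. A generic sketch, not the HiveBurnInClient code itself; Handle stands in for JDBC's Statement and Connection, and the list just records close order so it can be observed:

```java
// Sketch: try-with-resources closes Statement/Connection-style handles on
// every exit path, in reverse order of declaration.
import java.util.ArrayList;
import java.util.List;

public class CloseDemo {
    public static final List<String> closed = new ArrayList<>();

    // Stand-in for a java.sql.Connection or Statement.
    public static class Handle implements AutoCloseable {
        private final String name;
        public Handle(String name) { this.name = name; }
        @Override public void close() { closed.add(name); }
    }

    public static void run() {
        // Both handles are closed even if the body returns or throws.
        try (Handle con = new Handle("con"); Handle stmt = new Handle("stmt")) {
            // createTables() / runQueries() work would happen here.
        }
    }
}
```

Note the reverse close order (stmt before con), which matches the JDBC requirement that statements close before their connection.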
[jira] [Updated] (HIVE-7305) Return value from in.read() is ignored in SerializationUtils#readLongLE()
[ https://issues.apache.org/jira/browse/HIVE-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7305:
    Attachment: HIVE-7305_001.patch

I added null-check and size-check logic. Please review my patch.

Return value from in.read() is ignored in SerializationUtils#readLongLE()

    Key: HIVE-7305
    URL: https://issues.apache.org/jira/browse/HIVE-7305
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Priority: Minor
    Attachments: HIVE-7305_001.patch

{code}
long readLongLE(InputStream in) throws IOException {
  in.read(readBuffer, 0, 8);
  return (((readBuffer[0] & 0xff) << 0)
      + ((readBuffer[1] & 0xff) << 8)
{code}

The return value from read() may indicate fewer than 8 bytes read. The return value should be checked.
[jira] [Updated] (HIVE-7305) Return value from in.read() is ignored in SerializationUtils#readLongLE()
[ https://issues.apache.org/jira/browse/HIVE-7305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7305:
    Assignee: skrho
    Status: Patch Available (was: Open)

Return value from in.read() is ignored in SerializationUtils#readLongLE()

    Key: HIVE-7305
    URL: https://issues.apache.org/jira/browse/HIVE-7305
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor
    Attachments: HIVE-7305_001.patch

{code}
long readLongLE(InputStream in) throws IOException {
  in.read(readBuffer, 0, 8);
  return (((readBuffer[0] & 0xff) << 0)
      + ((readBuffer[1] & 0xff) << 8)
{code}

The return value from read() may indicate fewer than 8 bytes read. The return value should be checked.
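The standard remedy for HIVE-7305's bug is to loop until all requested bytes arrive, failing loudly on a short stream. A sketch of that idea (the method name and exact error message are illustrative, not the attached patch):

```java
// Sketch: read exactly len bytes or throw, instead of trusting a single
// read() call to fill the buffer.
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

public class ReadFully {
    public static void readFully(InputStream in, byte[] buf, int len) throws IOException {
        int off = 0;
        while (off < len) {
            // read() may return fewer bytes than requested, or -1 at EOF.
            int n = in.read(buf, off, len - off);
            if (n < 0) {
                throw new EOFException("expected " + len + " bytes but got " + off);
            }
            off += n;
        }
    }
}
```

readLongLE() would call this for its 8-byte buffer before assembling the long, so a short read can no longer yield garbage bytes.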
[jira] [Commented] (HIVE-7180) BufferedReader is not closed in MetaStoreSchemaInfo ctor
[ https://issues.apache.org/jira/browse/HIVE-7180?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14133512#comment-14133512 ]

skrho commented on HIVE-7180:

Sorry, I don't know about that. What should I do next? Assign it to someone, or something else?

BufferedReader is not closed in MetaStoreSchemaInfo ctor

    Key: HIVE-7180
    URL: https://issues.apache.org/jira/browse/HIVE-7180
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7180.patch, HIVE-7180_001.patch

Here is related code:

{code}
BufferedReader bfReader =
    new BufferedReader(new FileReader(upgradeListFile));
String currSchemaVersion;
while ((currSchemaVersion = bfReader.readLine()) != null) {
  upgradeOrderList.add(currSchemaVersion.trim());
{code}

BufferedReader / FileReader should be closed upon return from the ctor.
[jira] [Assigned] (HIVE-7180) BufferedReader is not closed in MetaStoreSchemaInfo ctor
[ https://issues.apache.org/jira/browse/HIVE-7180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho reassigned HIVE-7180:
    Assignee: skrho (was: Swarnim Kulkarni)

BufferedReader is not closed in MetaStoreSchemaInfo ctor

    Key: HIVE-7180
    URL: https://issues.apache.org/jira/browse/HIVE-7180
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7180.patch

Here is related code:

{code}
BufferedReader bfReader =
    new BufferedReader(new FileReader(upgradeListFile));
String currSchemaVersion;
while ((currSchemaVersion = bfReader.readLine()) != null) {
  upgradeOrderList.add(currSchemaVersion.trim());
{code}

BufferedReader / FileReader should be closed upon return from the ctor.
[jira] [Updated] (HIVE-7180) BufferedReader is not closed in MetaStoreSchemaInfo ctor
[ https://issues.apache.org/jira/browse/HIVE-7180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7180:
    Attachment: HIVE-7180_001.patch

Here is my new patch. Please review my source.

BufferedReader is not closed in MetaStoreSchemaInfo ctor

    Key: HIVE-7180
    URL: https://issues.apache.org/jira/browse/HIVE-7180
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7180.patch, HIVE-7180_001.patch

Here is related code:

{code}
BufferedReader bfReader =
    new BufferedReader(new FileReader(upgradeListFile));
String currSchemaVersion;
while ((currSchemaVersion = bfReader.readLine()) != null) {
  upgradeOrderList.add(currSchemaVersion.trim());
{code}

BufferedReader / FileReader should be closed upon return from the ctor.
[jira] [Updated] (HIVE-7180) BufferedReader is not closed in MetaStoreSchemaInfo ctor
[ https://issues.apache.org/jira/browse/HIVE-7180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7180:
    Status: Patch Available (was: Open)

BufferedReader is not closed in MetaStoreSchemaInfo ctor

    Key: HIVE-7180
    URL: https://issues.apache.org/jira/browse/HIVE-7180
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: Ted Yu
    Assignee: skrho
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7180.patch, HIVE-7180_001.patch

Here is related code:

{code}
BufferedReader bfReader =
    new BufferedReader(new FileReader(upgradeListFile));
String currSchemaVersion;
while ((currSchemaVersion = bfReader.readLine()) != null) {
  upgradeOrderList.add(currSchemaVersion.trim());
{code}

BufferedReader / FileReader should be closed upon return from the ctor.
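The HIVE-7180 loop can be made leak-free with try-with-resources while keeping the exact reading logic from the snippet. A sketch under illustrative names (the real code lives in the MetaStoreSchemaInfo constructor and reads from a file, not a Reader parameter):

```java
// Sketch: same line-reading loop as the ctor snippet, with the reader
// managed by try-with-resources so it is closed on every path.
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.util.ArrayList;
import java.util.List;

public class UpgradeListReader {
    public static List<String> readVersions(Reader source) throws IOException {
        List<String> upgradeOrderList = new ArrayList<>();
        try (BufferedReader bfReader = new BufferedReader(source)) {
            String currSchemaVersion;
            while ((currSchemaVersion = bfReader.readLine()) != null) {
                upgradeOrderList.add(currSchemaVersion.trim());
            }
        } // bfReader (and the wrapped Reader) closed here, even on IOException
        return upgradeOrderList;
    }
}
```

Closing the BufferedReader also closes the wrapped FileReader, which is exactly what the report asks for.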
[jira] [Updated] (HIVE-7862) close of InputStream in Utils#copyToZipStream() should be placed in finally block
[ https://issues.apache.org/jira/browse/HIVE-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7862:
    Attachment: HIVE-7862_001.txt

Here is the patch. I added a try/catch/finally statement, and the InputStream and ZipOutputStream objects are now closed in the finally block.

close of InputStream in Utils#copyToZipStream() should be placed in finally block

    Key: HIVE-7862
    URL: https://issues.apache.org/jira/browse/HIVE-7862
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Priority: Minor
    Attachments: HIVE-7862_001.txt

In accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/Utils.java, line 278:

{code}
private static void copyToZipStream(InputStream is, ZipEntry entry, ZipOutputStream zos)
    throws IOException {
  zos.putNextEntry(entry);
  byte[] arr = new byte[4096];
  int read = is.read(arr);
  while (read != -1) {
    zos.write(arr, 0, read);
    read = is.read(arr);
  }
  is.close();
{code}

If read() throws IOException, is would be left unclosed.
[jira] [Updated] (HIVE-7862) close of InputStream in Utils#copyToZipStream() should be placed in finally block
[ https://issues.apache.org/jira/browse/HIVE-7862?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7862:
    Labels: patch (was: )
    Affects Version/s: 0.13.0
    Status: Patch Available (was: Open)

close of InputStream in Utils#copyToZipStream() should be placed in finally block

    Key: HIVE-7862
    URL: https://issues.apache.org/jira/browse/HIVE-7862
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.0
    Reporter: Ted Yu
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7862_001.txt

In accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/Utils.java, line 278:

{code}
private static void copyToZipStream(InputStream is, ZipEntry entry, ZipOutputStream zos)
    throws IOException {
  zos.putNextEntry(entry);
  byte[] arr = new byte[4096];
  int read = is.read(arr);
  while (read != -1) {
    zos.write(arr, 0, read);
    read = is.read(arr);
  }
  is.close();
{code}

If read() throws IOException, is would be left unclosed.
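The HIVE-7862 fix can be sketched by keeping the copy loop identical and moving only the close into finally. This is a sketch of the idea, not the attached patch:

```java
// Sketch: same copy loop as Utils#copyToZipStream, with is.close() moved
// into finally so the InputStream is released even if read()/write() throws.
import java.io.IOException;
import java.io.InputStream;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class ZipCopy {
    public static void copyToZipStream(InputStream is, ZipEntry entry, ZipOutputStream zos)
            throws IOException {
        try {
            zos.putNextEntry(entry);
            byte[] arr = new byte[4096];
            int read = is.read(arr);
            while (read != -1) {
                zos.write(arr, 0, read);
                read = is.read(arr);
            }
        } finally {
            is.close(); // runs on both the normal and the exceptional path
        }
    }
}
```

The zos is deliberately left open here: the caller owns it and may add more entries.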
[jira] [Updated] (HIVE-7306) Ineffective null check in GenericUDAFAverage#GenericUDAFAverageEvaluatorDouble#getNextResult()
[ https://issues.apache.org/jira/browse/HIVE-7306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7306:
    Attachment: HIVE-7306.patch

Here is the patch. I changed the null-check logic using an if/else statement. Please review my source and give me a chance to contribute.

Ineffective null check in GenericUDAFAverage#GenericUDAFAverageEvaluatorDouble#getNextResult()

    Key: HIVE-7306
    URL: https://issues.apache.org/jira/browse/HIVE-7306
    Project: Hive
    Issue Type: Bug
    Reporter: Ted Yu
    Priority: Minor
    Attachments: HIVE-7306.patch

{code}
Object[] o = ss.intermediateVals.remove(0);
Double d = o == null ? 0.0 : (Double) o[0];
r = r == null ? null : r - d;
cnt = cnt - ((Long) o[1]);
{code}

Array o is accessed without a null check in the last line above.
[jira] [Updated] (HIVE-7306) Ineffective null check in GenericUDAFAverage#GenericUDAFAverageEvaluatorDouble#getNextResult()
[ https://issues.apache.org/jira/browse/HIVE-7306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7306:
    Labels: patch (was: )
    Affects Version/s: 0.13.1
    Status: Patch Available (was: Open)

Ineffective null check in GenericUDAFAverage#GenericUDAFAverageEvaluatorDouble#getNextResult()

    Key: HIVE-7306
    URL: https://issues.apache.org/jira/browse/HIVE-7306
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: Ted Yu
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7306.patch

{code}
Object[] o = ss.intermediateVals.remove(0);
Double d = o == null ? 0.0 : (Double) o[0];
r = r == null ? null : r - d;
cnt = cnt - ((Long) o[1]);
{code}

Array o is accessed without a null check in the last line above.
[jira] [Updated] (HIVE-7306) Ineffective null check in GenericUDAFAverage#GenericUDAFAverageEvaluatorDouble#getNextResult()
[ https://issues.apache.org/jira/browse/HIVE-7306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7306:
    Attachment: (was: HIVE-7306.patch)

Ineffective null check in GenericUDAFAverage#GenericUDAFAverageEvaluatorDouble#getNextResult()

    Key: HIVE-7306
    URL: https://issues.apache.org/jira/browse/HIVE-7306
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: Ted Yu
    Priority: Minor
    Labels: patch

{code}
Object[] o = ss.intermediateVals.remove(0);
Double d = o == null ? 0.0 : (Double) o[0];
r = r == null ? null : r - d;
cnt = cnt - ((Long) o[1]);
{code}

Array o is accessed without a null check in the last line above.
[jira] [Updated] (HIVE-7306) Ineffective null check in GenericUDAFAverage#GenericUDAFAverageEvaluatorDouble#getNextResult()
[ https://issues.apache.org/jira/browse/HIVE-7306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7306:
    Attachment: HIVE-7306.patch

Here is my patch. I moved the null check so that it actually takes effect.

Ineffective null check in GenericUDAFAverage#GenericUDAFAverageEvaluatorDouble#getNextResult()

    Key: HIVE-7306
    URL: https://issues.apache.org/jira/browse/HIVE-7306
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: Ted Yu
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7306.patch

{code}
Object[] o = ss.intermediateVals.remove(0);
Double d = o == null ? 0.0 : (Double) o[0];
r = r == null ? null : r - d;
cnt = cnt - ((Long) o[1]);
{code}

Array o is accessed without a null check in the last line above.
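An effective version of the HIVE-7306 check tests o once and keeps both element accesses inside the guarded branch. A hypothetical sketch (method and class names are illustrative, and only the count-adjustment part of the snippet is modeled):

```java
// Sketch: decide once whether o is null, so o[1] is only read when o is
// known non-null, unlike the original which guarded o[0] but not o[1].
public class AvgWindow {
    public static long adjustCount(Object[] o, long cnt) {
        if (o == null) {
            return cnt;               // nothing buffered: leave count alone
        }
        return cnt - (Long) o[1];     // safe: o is known non-null here
    }
}
```

The original ternary on o[0] gave a false sense of safety because the very next statement dereferenced o unconditionally.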
[jira] [Updated] (HIVE-7180) BufferedReader is not closed in MetaStoreSchemaInfo ctor
[ https://issues.apache.org/jira/browse/HIVE-7180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7180:
    Labels: patch (was: )
    Affects Version/s: 0.13.1
    Status: Patch Available (was: Open)

BufferedReader is not closed in MetaStoreSchemaInfo ctor

    Key: HIVE-7180
    URL: https://issues.apache.org/jira/browse/HIVE-7180
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: Ted Yu
    Assignee: Swarnim Kulkarni
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7180.patch

Here is related code:

{code}
BufferedReader bfReader =
    new BufferedReader(new FileReader(upgradeListFile));
String currSchemaVersion;
while ((currSchemaVersion = bfReader.readLine()) != null) {
  upgradeOrderList.add(currSchemaVersion.trim());
{code}

BufferedReader / FileReader should be closed upon return from the ctor.
[jira] [Updated] (HIVE-7180) BufferedReader is not closed in MetaStoreSchemaInfo ctor
[ https://issues.apache.org/jira/browse/HIVE-7180?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7180:
    Attachment: HIVE-7180.patch

Here is my patch. I moved the BufferedReader declaration and added a close of the BufferedReader in a finally block. Please review my source and give me a chance to contribute.

BufferedReader is not closed in MetaStoreSchemaInfo ctor

    Key: HIVE-7180
    URL: https://issues.apache.org/jira/browse/HIVE-7180
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: Ted Yu
    Assignee: Swarnim Kulkarni
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7180.patch

Here is related code:

{code}
BufferedReader bfReader =
    new BufferedReader(new FileReader(upgradeListFile));
String currSchemaVersion;
while ((currSchemaVersion = bfReader.readLine()) != null) {
  upgradeOrderList.add(currSchemaVersion.trim());
{code}

BufferedReader / FileReader should be closed upon return from the ctor.
[jira] [Created] (HIVE-7928) There is no catch statement in Utils#updateMap
skrho created HIVE-7928:
    Summary: There is no catch statement in Utils#updateMap
    Key: HIVE-7928
    URL: https://issues.apache.org/jira/browse/HIVE-7928
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: skrho
    Priority: Minor

There is no catch statement in the Utils class (accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/Utils.java, line 148). Without a catch statement, we cannot tell why an exception happened. I suggest adding a catch statement and rethrowing the exception.
[jira] [Updated] (HIVE-7928) There is no catch statement in Utils#updateMap
[ https://issues.apache.org/jira/browse/HIVE-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7928:
    Attachment: HIVE-7928_001.patch

I added a catch statement that rethrows as a RuntimeException, so an error message is raised when an exception occurs.

There is no catch statement in Utils#updateMap

    Key: HIVE-7928
    URL: https://issues.apache.org/jira/browse/HIVE-7928
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: skrho
    Priority: Minor
    Attachments: HIVE-7928_001.patch

There is no catch statement in the Utils class (accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/Utils.java, line 148). Without a catch statement, we cannot tell why an exception happened. I suggest adding a catch statement and rethrowing the exception.
[jira] [Updated] (HIVE-7928) There is no catch statement in Utils#updateMap
[ https://issues.apache.org/jira/browse/HIVE-7928?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7928:
    Status: Patch Available (was: Open)

There is no catch statement in Utils#updateMap

    Key: HIVE-7928
    URL: https://issues.apache.org/jira/browse/HIVE-7928
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: skrho
    Priority: Minor
    Attachments: HIVE-7928_001.patch

There is no catch statement in the Utils class (accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/Utils.java, line 148). Without a catch statement, we cannot tell why an exception happened. I suggest adding a catch statement and rethrowing the exception.
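The catch-and-rethrow idea from HIVE-7928 can be sketched generically. This is not the actual Utils#updateMap body; the reflective field lookup below is just an illustrative stand-in for failure-prone work, and the point is that the catch block rethrows with context instead of letting the cause stay opaque:

```java
// Sketch: wrap failure-prone work in a catch block and rethrow as a
// RuntimeException that carries both a message and the original cause.
public class UpdateMapGuard {
    // Illustrative helper class with a readable field.
    public static class Sample {
        public int x = 3;
    }

    public static Object readField(Object target, String fieldName) {
        try {
            java.lang.reflect.Field f = target.getClass().getDeclaredField(fieldName);
            f.setAccessible(true);
            return f.get(target);
        } catch (ReflectiveOperationException e) {
            // Preserve the cause so stack traces explain what went wrong.
            throw new RuntimeException("lookup failed for field " + fieldName, e);
        }
    }
}
```

Chaining the original exception as the cause is the key detail: the failure is no longer silent, and the stack trace still shows where it started.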
[jira] [Created] (HIVE-7929) close of ZipOutputStream in Utils#jarDir() should be placed in finally block
skrho created HIVE-7929:
    Summary: close of ZipOutputStream in Utils#jarDir() should be placed in finally block
    Key: HIVE-7929
    URL: https://issues.apache.org/jira/browse/HIVE-7929
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: skrho
    Priority: Minor

In accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/Utils.java, line 308:

{code}
zos.closeEntry();
zipDir(dir, relativePath, zos, true);
zos.close();
{code}

If an exception is thrown, the ZipOutputStream would be left unclosed.
[jira] [Updated] (HIVE-7929) close of ZipOutputStream in Utils#jarDir() should be placed in finally block
[ https://issues.apache.org/jira/browse/HIVE-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7929:
    Attachment: HIVE-7929_001.patch

Here is the patch. I added a try/catch/finally statement, and the ZipOutputStream object is now closed in the finally block. Please review my source code. Thank you.

close of ZipOutputStream in Utils#jarDir() should be placed in finally block

    Key: HIVE-7929
    URL: https://issues.apache.org/jira/browse/HIVE-7929
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: skrho
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7929_001.patch

In accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/Utils.java, line 308:

{code}
zos.closeEntry();
zipDir(dir, relativePath, zos, true);
zos.close();
{code}

If an exception is thrown, the ZipOutputStream would be left unclosed.
[jira] [Updated] (HIVE-7929) close of ZipOutputStream in Utils#jarDir() should be placed in finally block
[ https://issues.apache.org/jira/browse/HIVE-7929?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-7929:
    Labels: patch (was: )
    Status: Patch Available (was: Open)

close of ZipOutputStream in Utils#jarDir() should be placed in finally block

    Key: HIVE-7929
    URL: https://issues.apache.org/jira/browse/HIVE-7929
    Project: Hive
    Issue Type: Bug
    Affects Versions: 0.13.1
    Reporter: skrho
    Priority: Minor
    Labels: patch
    Attachments: HIVE-7929_001.patch

In accumulo-handler/src/java/org/apache/hadoop/hive/accumulo/Utils.java, line 308:

{code}
zos.closeEntry();
zipDir(dir, relativePath, zos, true);
zos.close();
{code}

If an exception is thrown, the ZipOutputStream would be left unclosed.
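The HIVE-7929 fix shape can be sketched like this. It is a hypothetical stand-in, not the jarDir() patch: the entry-writing work (where zipDir() would run) sits inside try, and the ZipOutputStream close moves into finally so a failure cannot leak the stream.

```java
// Sketch: close the ZipOutputStream in finally so it is released even if
// the entry-writing work (zipDir in the real code) throws.
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.ZipEntry;
import java.util.zip.ZipOutputStream;

public class JarDirSketch {
    public static byte[] jarBytes() throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ZipOutputStream zos = new ZipOutputStream(bos);
        try {
            zos.putNextEntry(new ZipEntry("META-INF/"));
            zos.closeEntry();
            // zipDir(dir, relativePath, zos, true) would run here.
        } finally {
            zos.close(); // flushes the central directory and releases zos
        }
        return bos.toByteArray();
    }
}
```

Closing in finally also matters for correctness of the archive itself, since ZipOutputStream.close() writes the central directory.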
[jira] [Updated] (HIVE-6987) Metastore qop settings won't work with Hadoop-2.4
[ https://issues.apache.org/jira/browse/HIVE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-6987:
    Attachment: HIVE-6987.txt

Please review my patch and give me a chance to contribute.

Metastore qop settings won't work with Hadoop-2.4

    Key: HIVE-6987
    URL: https://issues.apache.org/jira/browse/HIVE-6987
    Project: Hive
    Issue Type: Bug
    Components: Metastore
    Affects Versions: 0.14.0
    Reporter: Vaibhav Gumashta
    Labels: patch
    Fix For: 0.14.0
    Attachments: HIVE-6987.txt

[HADOOP-10211|https://issues.apache.org/jira/browse/HADOOP-10211] made a backward-incompatible change due to which the following Hive call returns a null map:

{code}
Map<String, String> hadoopSaslProps = ShimLoader.getHadoopThriftAuthBridge().
    getHadoopSaslProperties(conf);
{code}

The metastore uses the underlying hadoop.rpc.protection values to set the qop between the metastore client and server.

--
This message was sent by Atlassian JIRA (v6.2#6252)
[jira] [Updated] (HIVE-6987) Metastore qop settings won't work with Hadoop-2.4
[ https://issues.apache.org/jira/browse/HIVE-6987?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

skrho updated HIVE-6987:
    Labels: patch (was: )
    Status: Patch Available (was: Open)

Metastore qop settings won't work with Hadoop-2.4

    Key: HIVE-6987
    URL: https://issues.apache.org/jira/browse/HIVE-6987
    Project: Hive
    Issue Type: Bug
    Components: Metastore
    Affects Versions: 0.14.0
    Reporter: Vaibhav Gumashta
    Labels: patch
    Fix For: 0.14.0
    Attachments: HIVE-6987.txt

[HADOOP-10211|https://issues.apache.org/jira/browse/HADOOP-10211] made a backward-incompatible change due to which the following Hive call returns a null map:

{code}
Map<String, String> hadoopSaslProps = ShimLoader.getHadoopThriftAuthBridge().
    getHadoopSaslProperties(conf);
{code}

The metastore uses the underlying hadoop.rpc.protection values to set the qop between the metastore client and server.
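One defensive way to cope with the null map described in HIVE-6987 can be sketched as follows. This is an illustrative guard only, not the shim-layer fix Hive actually needs: it shows a caller treating a null result from the auth bridge as "no SASL properties" rather than dereferencing it later.

```java
// Illustrative guard: normalize a possibly-null SASL properties map to an
// empty map, so downstream qop configuration code never sees null.
import java.util.Collections;
import java.util.Map;

public class SaslPropsGuard {
    public static Map<String, String> orEmpty(Map<String, String> props) {
        return props == null ? Collections.<String, String>emptyMap() : props;
    }
}
```

The real resolution has to come from the shim layer adapting to the HADOOP-10211 API change; this guard only prevents the immediate NPE.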