[jira] [Updated] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2018-02-09 Thread Josh Mahonin (JIRA)

 [ https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Mahonin updated PHOENIX-4056:
--
Fix Version/s: 4.14.0
   5.0.0

> java.lang.IllegalArgumentException: Can not create a Path from an empty string
> --
>
> Key: PHOENIX-4056
> URL: https://issues.apache.org/jira/browse/PHOENIX-4056
> Project: Phoenix
>  Issue Type: Bug
>Affects Versions: 4.11.0
> Environment: CDH5.12
> Phoenix:4.11
> HBase:1.2
> Spark: 2.2.0
> phoenix-spark.version:4.11.0-HBase-1.2
>Reporter: Jepson
>Priority: Major
>  Labels: features, patch, test
> Fix For: 5.0.0, 4.14.0
>
> Attachments: PHOENIX-4056.patch, PHOENIX-4056_v2.patch, 
> PHOENIX-4056_v3.patch
>
>   Original Estimate: 12h
>  Remaining Estimate: 12h
>
> 1. Use this configuration on both the server and the client (Scala project):
>   <property>
>     <name>phoenix.schema.isNamespaceMappingEnabled</name>
>     <value>true</value>
>   </property>
>   <property>
>     <name>phoenix.schema.mapSystemTablesToNamespace</name>
>     <value>true</value>
>   </property>
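> A quick way to check that the Spark side actually sees these flags; a sketch, assuming hbase-site.xml with the settings above is on the driver's classpath (HBaseConfiguration.create() loads it from there):
> {code:java}
> import org.apache.hadoop.hbase.HBaseConfiguration
>
> // If either of these prints null, the namespace settings never reached
> // the Spark driver, and the client and server configurations disagree.
> val conf = HBaseConfiguration.create()
> println(conf.get("phoenix.schema.isNamespaceMappingEnabled"))
> println(conf.get("phoenix.schema.mapSystemTablesToNamespace"))
> {code}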
> 2. The code:
> {code:java}
> resultDF.write
>  .format("org.apache.phoenix.spark")
>  .mode(SaveMode.Overwrite)
>  .option("table", "JYDW.ADDRESS_ORDERCOUNT")
>  .option("zkUrl","192.168.1.40,192.168.1.41,192.168.1.42:2181")
>  .save()
> {code}
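> An equivalent write that hands the namespace-mapping flags to Phoenix explicitly instead of relying on classpath configuration; a sketch, assuming the phoenix-spark 4.x DataFrameFunctions.saveToPhoenix(tableName, conf, zkUrl, tenantId) signature:
> {code:java}
> import org.apache.hadoop.conf.Configuration
> import org.apache.hadoop.hbase.HBaseConfiguration
> import org.apache.phoenix.spark._  // adds saveToPhoenix to DataFrame
>
> // Carry the namespace-mapping flags in the job's own configuration so the
> // Phoenix client inside Spark matches the server-side settings.
> val conf: Configuration = HBaseConfiguration.create()
> conf.set("phoenix.schema.isNamespaceMappingEnabled", "true")
> conf.set("phoenix.schema.mapSystemTablesToNamespace", "true")
>
> resultDF.saveToPhoenix("JYDW.ADDRESS_ORDERCOUNT", conf,
>   zkUrl = Some("192.168.1.40,192.168.1.41,192.168.1.42:2181"))
> {code}
> (phoenix-spark writes are always UPSERTs, so SaveMode.Overwrite has no separate effect on this path.)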
> 3. The following error is thrown; please help fix it, thank you:
> 17/08/02 01:07:25 INFO DAGScheduler: Job 6 finished: runJob at SparkHadoopMapReduceWriter.scala:88, took 7.990715 s
> 17/08/02 01:07:25 ERROR SparkHadoopMapReduceWriter: Aborting job job_20170802010717_0079.
> {color:#59afe1}*java.lang.IllegalArgumentException: Can not create a Path from an empty string*{color}
>   at org.apache.hadoop.fs.Path.checkPathArg(Path.java:126)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:134)
>   at org.apache.hadoop.fs.Path.<init>(Path.java:88)
>   at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.absPathStagingDir(HadoopMapReduceCommitProtocol.scala:58)
>   at org.apache.spark.internal.io.HadoopMapReduceCommitProtocol.commitJob(HadoopMapReduceCommitProtocol.scala:132)
>   at org.apache.spark.internal.io.SparkHadoopMapReduceWriter$.write(SparkHadoopMapReduceWriter.scala:101)
>   at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply$mcV$sp(PairRDDFunctions.scala:1085)
>   at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopDataset$1.apply(PairRDDFunctions.scala:1085)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopDataset(PairRDDFunctions.scala:1084)
>   at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply$mcV$sp(PairRDDFunctions.scala:1003)
>   at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at org.apache.spark.rdd.PairRDDFunctions$$anonfun$saveAsNewAPIHadoopFile$2.apply(PairRDDFunctions.scala:994)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
>   at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:112)
>   at org.apache.spark.rdd.RDD.withScope(RDD.scala:362)
>   at org.apache.spark.rdd.PairRDDFunctions.saveAsNewAPIHadoopFile(PairRDDFunctions.scala:994)
>   at org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:59)
>   at org.apache.phoenix.spark.DataFrameFunctions.saveToPhoenix(DataFrameFunctions.scala:28)
>   at org.apache.phoenix.spark.DefaultSource.createRelation(DefaultSource.scala:47)
>   at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:472)
>   at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
>   at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
>   at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
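> The trace shows DataFrameFunctions.saveToPhoenix (DataFrameFunctions.scala:59) going through saveAsNewAPIHadoopFile, after which Spark 2.2's HadoopMapReduceCommitProtocol builds a Hadoop Path from the job's output directory; an empty output path string is what aborts the commit. A minimal sketch of just that failure, assuming only hadoop-common on the classpath:
> {code:java}
> import org.apache.hadoop.fs.Path
>
> object EmptyPathRepro extends App {
>   // Path.checkPathArg rejects empty strings; this is the same
>   // IllegalArgumentException that aborts the Spark job above.
>   new Path("")  // throws: Can not create a Path from an empty string
> }
> {code}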

[jira] [Updated] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2018-02-09 Thread Stepan Migunov (JIRA)

 [ https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stepan Migunov updated PHOENIX-4056:

Attachment: PHOENIX-4056_v3.patch


[jira] [Updated] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2018-02-08 Thread Stepan Migunov (JIRA)

 [ https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stepan Migunov updated PHOENIX-4056:

Attachment: PHOENIX-4056_v2.patch


[jira] [Updated] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2018-02-06 Thread Stepan Migunov (JIRA)

 [ https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stepan Migunov updated PHOENIX-4056:

Attachment: PHOENIX-4056.patch


[jira] [Updated] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2017-09-22 Thread Josh Elser (JIRA)

 [ https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Josh Elser updated PHOENIX-4056:

Fix Version/s: (was: 4.11.0)


[jira] [Updated] (PHOENIX-4056) java.lang.IllegalArgumentException: Can not create a Path from an empty string

2017-08-02 Thread Jepson (JIRA)

 [ https://issues.apache.org/jira/browse/PHOENIX-4056?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Jepson updated PHOENIX-4056:

Environment: 
CDH5.12
Phoenix:4.11
HBase:1.2
Spark: 2.2.0

phoenix-spark.version:4.11.0-HBase-1.2

  was:
CDH5.12
Phoenix:4.11
HBase:1.2

phoenix-spark.version:4.11.0-HBase-1.2

