[ https://issues.apache.org/jira/browse/HADOOP-16360?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Ladislav Jech updated HADOOP-16360:
-----------------------------------
    Issue Type: Bug  (was: Improvement)

> java.lang.NullPointerException: null uri host. This can be caused by unencoded / in the password string
> -------------------------------------------------------------------------------------------------------
>
>                 Key: HADOOP-16360
>                 URL: https://issues.apache.org/jira/browse/HADOOP-16360
>             Project: Hadoop Common
>          Issue Type: Bug
>            Reporter: Ladislav Jech
>            Priority: Blocker
>
> I am experiencing a very old issue that is now appearing again on a Cloudera 6.2 cluster. I use the following libraries with my PySpark job:
>  * /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hadoop/hadoop-common-3.0.0-cdh6.2.0.jar
>  * /opt/cloudera/parcels/CDH-6.2.0-1.cdh6.2.0.p0.967373/lib/hadoop/hadoop-aws-3.0.0-cdh6.2.0.jar
> While trying to write a DataFrame to S3 as CSV, I get the following error:
> {code:java}
> java.lang.NullPointerException: null uri host. This can be caused by unencoded / in the password string
>       at java.util.Objects.requireNonNull(Objects.java:228)
>       at org.apache.hadoop.fs.s3native.S3xLoginHelper.buildFSURI(S3xLoginHelper.java:69)
>       at org.apache.hadoop.fs.s3a.S3AFileSystem.setUri(S3AFileSystem.java:467)
>       at org.apache.hadoop.fs.s3a.S3AFileSystem.initialize(S3AFileSystem.java:234)
>       at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3288)
>       at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:123)
>       at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3337)
>       at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3305)
>       at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:476)
>       at org.apache.hadoop.fs.Path.getFileSystem(Path.java:361)
>       at org.apache.spark.sql.execution.datasources.DataSource.planForWritingFileFormat(DataSource.scala:423)
>       at org.apache.spark.sql.execution.datasources.DataSource.planForWriting(DataSource.scala:523)
>       at org.apache.spark.sql.DataFrameWriter.saveToV1Source(DataFrameWriter.scala:281)
>       at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:270)
>       at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:228)
>       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
>       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>       at java.lang.reflect.Method.invoke(Method.java:498)
>       at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
>       at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
>       at py4j.Gateway.invoke(Gateway.java:282)
>       at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
>       at py4j.commands.CallCommand.execute(CallCommand.java:79)
>       at py4j.GatewayConnection.run(GatewayConnection.java:238)
>       at java.lang.Thread.run(Thread.java:748)
> {code}
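>
> For reference, the "unencoded / in the password string" part of the message refers to secrets embedded inline in the URI (s3a://KEY:SECRET@bucket/...): a slash in the secret cuts the authority short, the host comes back null, and the same NPE is raised. If credentials were embedded that way, the usual approach would be to URL-encode the secret first. A rough standalone sketch of that encoding, with a made-up secret:
> {code:java}
> import java.net.URLEncoder;
>
> public class EncodedSecretSketch {
>     public static void main(String[] args) throws Exception {
>         // Made-up secret key containing '/', which breaks s3a://KEY:SECRET@bucket/... URIs
>         // unless it is URL-encoded first ('/' becomes %2F, '+' becomes %2B).
>         String secret = "abc/def+ghi";
>         System.out.println(URLEncoder.encode(secret, "UTF-8")); // abc%2Fdef%2Bghi
>     }
> }
> {code}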
> My code doesn't put the secret key in the S3 path at all; it sets the credentials through the Hadoop configuration as follows:
> {code:python}
> import time
>
> from pyspark.sql import SparkSession, SQLContext
>
> sparkSession = SparkSession.builder.getOrCreate()
> sparkContext = sparkSession.sparkContext
>
> # S3 credentials and endpoint for the s3a / s3 / s3n connectors
> # sparkContext._jsc.hadoopConfiguration().set("fs.s3a.multipart.size", "1000000")
> sparkContext._jsc.hadoopConfiguration().set("fs.s3a.access.key", AWS_ACCESS_KEY_ID)
> sparkContext._jsc.hadoopConfiguration().set("fs.s3a.secret.key", AWS_SECRET_ACCESS_KEY)
> sparkContext._jsc.hadoopConfiguration().set("fs.s3a.endpoint", AWS_HOST_BASE)
> sparkContext._jsc.hadoopConfiguration().set("fs.s3.access.key", AWS_ACCESS_KEY_ID)
> sparkContext._jsc.hadoopConfiguration().set("fs.s3.secret.key", AWS_SECRET_ACCESS_KEY)
> sparkContext._jsc.hadoopConfiguration().set("fs.s3.endpoint", AWS_HOST_BASE)
> sparkContext._jsc.hadoopConfiguration().set("fs.s3n.access.key", AWS_ACCESS_KEY_ID)
> sparkContext._jsc.hadoopConfiguration().set("fs.s3n.secret.key", AWS_SECRET_ACCESS_KEY)
> sparkContext._jsc.hadoopConfiguration().set("fs.s3n.endpoint", AWS_HOST_BASE)
>
> sqlContext = SQLContext(sparkSession.sparkContext)
>
> # log4j = sparkContext._jvm.org.apache.log4j  # pylint: disable=W0212
> logger = sparkContext._jvm.org.apache.log4j.LogManager.getLogger("OracleToS3")
> # logger = log4j.LogManager.getLogger(__name__)
> sparkContext.setLogLevel('INFO')
>
> logger.info("Going to process Oracle tables...")
> for table in ADDCSource.table_list:
>     logger.info("Reading oracle table into dataframe")
>     oracle_table = sparkSession.read \
>         .format("jdbc") \
>         .option("url", ADDCSource.jdbc_string) \
>         .option("dbtable", table) \
>         .option("user", ADDCSource.user) \
>         .option("password", ADDCSource.password) \
>         .option("driver", "oracle.jdbc.driver.OracleDriver") \
>         .load()
>     # Display schema
>     logger.info("Display table schema")
>     oracle_table.show()
>     logger.info("Display table top 5")
>     oracle_table.head(5)
>     output_file = "s3a://ADDC_ELICTRICITY_201906/" + "11/" + table + "_" + time.strftime("%Y%m%d_%H%M%S") + ".csv"
>     logger.info("Writing table into S3 to file: " + output_file)
>     oracle_table \
>         .repartition(1) \
>         .write \
>         .mode("overwrite") \
>         .format("csv") \
>         .option("header", "true") \
>         .save(output_file)
> {code}
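>
> As far as I can tell from the trace, the failing check is the null host of the target URI rather than anything in the credentials: the bucket name ADDC_ELICTRICITY_201906 contains underscores, which are not legal in hostnames, so java.net.URI parses the authority but getHost() returns null, and that is what S3xLoginHelper.buildFSURI rejects. A minimal standalone sketch of that parsing behaviour, just my reading of the stack trace (the object key is illustrative):
> {code:java}
> import java.net.URI;
> import java.util.Objects;
>
> public class NullUriHostSketch {
>     public static void main(String[] args) throws Exception {
>         // Underscores are not valid in hostnames, so the bucket is parsed only as a
>         // registry-based authority and getHost() returns null.
>         URI uri = new URI("s3a://ADDC_ELICTRICITY_201906/11/example.csv");
>         System.out.println("authority = " + uri.getAuthority()); // ADDC_ELICTRICITY_201906
>         System.out.println("host      = " + uri.getHost());      // null
>         // S3xLoginHelper.buildFSURI fails on that null host, producing the message in the trace above.
>         Objects.requireNonNull(uri.getHost(),
>             "null uri host. This can be caused by unencoded / in the password string");
>     }
> }
> {code}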



