OK, it turned out a bit less complicated than I thought :). I would be
interested to know whether creating a temporary table from the DF and pushing
the data into Hive is the best option here?
1. Prepare and clean up data with filter & map
2. Convert the RDD to DF
3. Create temporary table from DF
4. Use Hive database
5. Drop if exists and create ORC table in Hive database
6. Simple Insert/select from temporary table to Hive table
//
// Get a DF first based on Databricks CSV libraries
//
val df = sqlContext.read.format("com.databricks.spark.csv")
  .option("inferSchema", "true")
  .option("header", "true")
  .load("/data/stg/table2")
//
// Next filter out empty rows (last colum has to be > "" and get rid of "?"
special character. Also get rid of "," in money fields
// Example csv cell £,500.00 --> need to transform to plain 2500.00
//
val a = df.filter(col("Total") > "").map(x => (x.getString(0),x.getString(1),
x.getString(2).substring(1).replace(",", "").toDouble,
x.getString(3).substring(1).replace(",", "").toDouble,
x.getString(4).substring(1).replace(",", "").toDouble))
a.first
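//
// If a money cell can itself be empty or malformed even when "Total" is
// non-empty, a more defensive variant avoids the StringIndexOutOfBoundsException
// seen earlier. A sketch only; toMoney is a hypothetical helper, not part of
// the original code.
//
import scala.util.Try
// Strip the leading currency symbol and the thousand separators, falling
// back to 0.0 instead of throwing on null, empty or malformed cells
def toMoney(s: String): Double =
  Try(s.substring(1).replace(",", "").toDouble).getOrElse(0.0)
val b = df.filter(col("Total") > "").map(x =>
  (x.getString(0), x.getString(1),
   toMoney(x.getString(2)), toMoney(x.getString(3)), toMoney(x.getString(4))))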
//
// convert this RDD to DF and create a Spark temporary table
//
a.toDF.registerTempTable("tmp")
//
// Need to create and populate target ORC table t3 in database test in Hive
//
sql("use test")
//
// Drop and create table t3
//
sql("DROP TABLE IF EXISTS t3")
var sqltext : String = ""
sqltext = """
CREATE TABLE t3 (
INVOICENUMBER INT
,PAYMENTDATE timestamp
,NET DECIMAL(20,2)
,VAT DECIMAL(20,2)
,TOTAL DECIMAL(20,2)
)
COMMENT 'from csv file from excel sheet'
STORED AS ORC
TBLPROPERTIES ( "orc.compress"="ZLIB" )
"""
sql(sqltext)
//
// Put the data into the Hive table. The clean-up has already been done
//
sqltext = "INSERT INTO t3 SELECT * FROM tmp"
sql(sqltext)
sql("SELECT * FROM t3 ORDER BY 1").collect.foreach(println)
sys.exit()
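On the opening question: the temporary-table-plus-INSERT/SELECT route works,
but a shorter alternative is to write the DataFrame straight to an ORC table.
A sketch only, assuming Spark 1.4+ with a HiveContext, and noting that it
lets Spark derive the column types instead of the DECIMAL/timestamp
definitions above (the database-qualified name must also be accepted by your
Spark version):
a.toDF("INVOICENUMBER", "PAYMENTDATE", "NET", "VAT", "TOTAL")
  .write.mode("overwrite").format("orc").saveAsTable("test.t3")
The explicit CREATE TABLE route keeps full control over the Hive column types
and TBLPROPERTIES, which may be reason enough to prefer it here.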
Dr Mich Talebzadeh
LinkedIn
https://www.linkedin.com/profile/view?id=AAEAAAAWh2gBxianrbJd6zP6AcPCCdOABUrV8Pw
http://talebzadehmich.wordpress.com
From: Ted Yu [mailto:[email protected]]
Sent: 20 February 2016 12:33
To: Mich Talebzadeh <[email protected]>
Cc: Michał Zieliński <[email protected]>; user @spark
<[email protected]>
Subject: Re: Checking for null values when mapping
For #2, you can filter out rows whose first column has length 0.
Cheers
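A sketch of that filter, using the length function from
org.apache.spark.sql.functions (available from Spark 1.5; nonEmpty is an
illustrative name):
import org.apache.spark.sql.functions.{col, length}
val nonEmpty = df.filter(length(col("Invoice Number")) > 0)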
On Feb 20, 2016, at 6:59 AM, Mich Talebzadeh <[email protected]> wrote:
Thanks
So what I did was
scala> val df = sqlContext.read.format("com.databricks.spark.csv").
  option("inferSchema", "true").option("header", "true").load("/data/stg/table2")
df: org.apache.spark.sql.DataFrame = [Invoice Number: string, Payment date:
string, Net: string, VAT: string, Total: string]
scala> df.printSchema
root
|-- Invoice Number: string (nullable = true)
|-- Payment date: string (nullable = true)
|-- Net: string (nullable = true)
|-- VAT: string (nullable = true)
|-- Total: string (nullable = true)
So all the columns are Strings
Then I tried to exclude null rows by filtering on all columns not being null
and map the rest
scala> val a = df.filter(col("Invoice Number").isNotNull and col("Payment
date").isNotNull and col("Net").isNotNull and col("VAT").isNotNull and
col("Total").isNotNull).map(x =>
(x.getString(1),x.getString(2).substring(1).replace(",",
"").toDouble,x.getString(3).substring(1).replace(",", "").toDouble,
x.getString(4).substring(1).replace(",", "").toDouble))
a: org.apache.spark.rdd.RDD[(String, Double, Double, Double)] =
MapPartitionsRDD[176] at map at <console>:21
This still comes up with a "String index out of range" error:
16/02/20 11:50:51 ERROR Executor: Exception in task 0.0 in stage 12.0 (TID 18)
java.lang.StringIndexOutOfBoundsException: String index out of range: -1
My questions are:
1. Doing the map, map(x => (x.getString(1), ...) -- can I replace
x.getString(1) with the actual column name, say "Invoice Number", and so forth
for the other columns as well? (See the sketch after this list.)
2. It sounds like it crashes because of the empty rows among these at the end:
[421,02/10/2015,£1,187.50,£237.50,£1,425.00] \\ example good one
[,,,,] \\ bad one, empty one
[Net income,,£182,531.25,£14,606.25,£197,137.50]
[,,,,] \\ bad one, empty one
[year 2014,,£113,500.00,£0.00,£113,500.00]
[Year 2015,,£69,031.25,£14,606.25,£83,637.50]
3. Also, to clarify: I want to drop those two empty lines -> [,,,,] if I
can. Unfortunately, the drop() call does not work (see the sketch after this
list):
a.drop()
<console>:24: error: value drop is not a member of
org.apache.spark.rdd.RDD[(String, Double, Double, Double)]
a.drop()
^
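A sketch addressing questions 1 and 3 (the names here are illustrative, and
getAs by field name assumes a Spark version whose Row supports it):
// Q1: refer to fields by name rather than by position
val firstCol = df.map(x => x.getAs[String]("Invoice Number"))
// Q3: the [,,,,] rows hold empty strings, not nulls, so isNotNull keeps
// them, and RDDs have no drop(); filter them out before mapping instead
val nonEmptyRows = df.filter(col("Total") > "")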
Thanks again,
Dr Mich Talebzadeh