Yeah, this does look confusing. We are trying to improve error
reporting by catching issues like this at the end of the analysis phase
and giving more descriptive error messages. Part of the work can be found
here:
https://github.com/apache/spark/blob/0902a11940e550e85a53e110b490fe90e16ddaf4/sq
Cheng, you were right. It works when I remove the field from either one. I
should have checked the types beforehand. What confused me is that Spark
attempted the join and threw the error midway. It isn't quite there yet.
Thanks for the help.
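[Editor's note] The fix described above, removing the conflicting field from one side before joining, might look roughly like the sketch below. It uses the paths that appear later in this thread; the join key CustomerId is purely illustrative and not from the original code.

```scala
// Sketch only (Spark 1.4-era DataFrame API); the join key is hypothetical.
val bookings = sqlContext.load("/home/administrator/stageddata/Bookings")
val customers = sqlContext.load("/home/administrator/stageddata/Customerdetails")

// Drop PolicyType from one side so the joined result does not carry two
// PolicyType columns of conflicting types (string vs. int).
val customersFixed = customers.drop("PolicyType")
val joined = bookings.join(customersFixed,
  bookings("CustomerId") === customersFixed("CustomerId"))
```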
On Mon, Jun 8, 2015 at 8:29 PM Cheng Lian wrote:
I suspect that Bookings and Customerdetails both have a PolicyType
field, where one is a string and the other is an int.
Cheng
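[Editor's note] A quick way to confirm a mismatch like this is to print both schemas before joining. A minimal sketch, using the load paths from later in the thread; the commented types are only what this thread suggests, not verified output:

```scala
val bookings = sqlContext.load("/home/administrator/stageddata/Bookings")
val customers = sqlContext.load("/home/administrator/stageddata/Customerdetails")

// Compare the PolicyType entries in the two printed schemas,
// e.g. "PolicyType: integer" vs. "PolicyType: string".
bookings.printSchema()
customers.printSchema()
```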
On 6/8/15 9:15 PM, Bipin Nag wrote:
Hi Jeetendra, Cheng
I am using following code for joining
val Bookings = sqlContext.load("/home/administrator/stageddata/Bookings")
val Customerdetails =
sqlContext.load("/home/administrator/stageddata/Customerdetails")
val CD = Customerdetails.
where($"CreatedOn" > "2015-04-01 00:00:00.0").
When are you loading these Parquet files?
Can you please share the code where you pass the Parquet file to Spark?
On 8 June 2015 at 16:39, Cheng Lian wrote:
Are you appending the joined DataFrame whose PolicyType is string to an
existing Parquet file whose PolicyType is int? The exception indicates
that Parquet found a column with conflicting data types.
Cheng
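[Editor's note] If the column needs to be kept, another option is to cast one side to a common type so the DataFrame's schema matches the existing Parquet file before appending. A sketch only, assuming int is the desired type; the output path is illustrative and `joined` stands for the joined DataFrame discussed in this thread:

```scala
import org.apache.spark.sql.SaveMode
import org.apache.spark.sql.functions.col

// Cast PolicyType to int so the schema agrees with the existing
// Parquet data, then append.
val fixed = joined.withColumn("PolicyType", col("PolicyType").cast("int"))
fixed.save("/home/administrator/stageddata/output", "parquet", SaveMode.Append)
```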
On 6/8/15 5:29 PM, bipin wrote:
Hi, I get this error message when saving a table:
parqu