From: [mailto:ianoconn...@gmail.com] On Behalf Of Ian O'Connell
Sent: Wednesday, July 20, 2016 11:05 PM
To: Ravi Aggarwal
Cc: Ted Yu ; user
Subject: Re: OutOfMemory when doing joins in spark 2.0 while same code runs
fine in spark 1.5.2
Ravi, did your issue ever get solved for this?
I think I've be
> => {
>   val fieldCell = b.asInstanceOf[Cell]
>   a :+ new String(fieldCell.getQualifierArray).substring(
>     fieldCell.getQualifierOffset,
>     fieldCell.getQualifierLength + fieldCell.getQualifierOffset)
> }
> }
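The same extraction is usually written with HBase's Bytes helper, which decodes only the qualifier slice instead of building a String from the whole backing array. A minimal sketch, assuming the fold above accumulates a Seq[String]:

    import org.apache.hadoop.hbase.Cell
    import org.apache.hadoop.hbase.util.Bytes

    // Sketch: collect qualifier names from a row's cells. The enclosing
    // fold (`(a, b) => ...` above) is assumed to accumulate a Seq[String].
    def qualifiers(cells: Seq[Cell]): Seq[String] =
      cells.foldLeft(Seq.empty[String]) { (a, b) =>
        // Bytes.toString honors the cell's offset/length, so only the
        // qualifier bytes are decoded, not the entire backing array.
        a :+ Bytes.toString(b.getQualifierArray, b.getQualifierOffset, b.getQualifierLength)
      }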
sort-merge join.
Can we deduce anything from this?
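For what it's worth, the chosen join strategy can be confirmed and influenced like this; a minimal sketch with placeholder DataFrames, not the actual HBase-backed data:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("join-plan-check").getOrCreate()
    val left  = spark.range(1000000).toDF("id")
    val right = spark.range(1000000).toDF("id")

    // Look for SortMergeJoin vs BroadcastHashJoin in the printed plan.
    left.join(right, "id").explain()

    // -1 disables broadcast joins (forces sort-merge); a larger byte
    // value lets Spark broadcast the smaller side instead.
    spark.conf.set("spark.sql.autoBroadcastJoinThreshold", "-1")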
Thanks
Ravi
From: Ravi Aggarwal
Sent: Friday, June 10, 2016 12:31 PM
To: 'Ted Yu'
Cc: user
Subject: RE: OutOfMemory when doing joins in spark 2.0 while same code runs
fine in spark 1.5.2
Hi Ted,
Thanks for the reply.
Here is the code
        CatalystTypeConverters.convertToScala(
          Cast(Literal(value._2), colDataType).eval(),
          colDataType)
      }).toArray
      Row(recordFields: _*)
    }
    rowRdd
  }
}
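For readers missing the cut-off context: the fragment above is plausibly the tail of a buildScan that maps raw (column, value) pairs to Rows. A hedged reconstruction, with all variable and field names guessed (Cast, Literal and CatalystTypeConverters are Spark-internal Catalyst APIs):

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.Row
    import org.apache.spark.sql.catalyst.CatalystTypeConverters
    import org.apache.spark.sql.catalyst.expressions.{Cast, Literal}
    import org.apache.spark.sql.types.StructType

    // Guessed surrounding shape: each record is a Seq of (column name, raw
    // string value) pairs read from HBase.
    def toRows(raw: RDD[Seq[(String, String)]], schema: StructType): RDD[Row] = {
      val rowRdd = raw.map { record =>
        val recordFields = record.map { value =>
          val colDataType = schema(value._1).dataType
          // Cast the raw string literal to the column's Catalyst type,
          // then convert the Catalyst value back to a Scala value for Row.
          CatalystTypeConverters.convertToScala(
            Cast(Literal(value._2), colDataType).eval(),
            colDataType)
        }.toArray
        Row(recordFields: _*)
      }
      rowRdd
    }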
Thanks
Ravi
From: Ted Yu [mailto:yuzhih...@gmail.com]
Sent: Thursday, June 9, 2016 7:56 PM
To: Ravi Aggarwal
bq. Read data from hbase using custom DefaultSource (implemented using
TableScan)
Did you use the DefaultSource from hbase-spark module in hbase master
branch ?
If you wrote your own, mind sharing the related code?
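For comparison, a bare-bones custom DefaultSource implementing TableScan looks roughly like this; a generic sketch with dummy data, not the code in question:

    import org.apache.spark.rdd.RDD
    import org.apache.spark.sql.{Row, SQLContext}
    import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
    import org.apache.spark.sql.types.{StringType, StructField, StructType}

    class DefaultSource extends RelationProvider {
      override def createRelation(sqlContext: SQLContext,
                                  parameters: Map[String, String]): BaseRelation =
        new DummyRelation(sqlContext)
    }

    // Trivial relation returning one string column; a real HBase-backed
    // relation would build its RDD from a table scan instead.
    class DummyRelation(val sqlContext: SQLContext) extends BaseRelation with TableScan {
      override def schema: StructType =
        StructType(Seq(StructField("value", StringType)))

      override def buildScan(): RDD[Row] =
        sqlContext.sparkContext.parallelize(Seq(Row("a"), Row("b")))
    }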
Thanks
On Thu, Jun 9, 2016 at 2:53 AM, raaggarw wrote:
Hi,
I was trying to port my code from spark 1.5.2 to spark 2.0, however I faced
some OutOfMemory issues. On drilling down, I could see that the OOM is caused
by the join, because removing the join fixes the issue. I then created a small
spark-app to reproduce this:
(48 cores, 300gb ram - divided among 4 workers
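Since the snippet is cut off here, a hypothetical minimal repro of that kind of join (sizes and column names made up) might look like:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions.col

    object JoinRepro {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("join-oom-repro").getOrCreate()

        // Two wide ranges sharing a skewless join key; sizes are placeholders.
        val left  = spark.range(0, 100000000L)
          .select((col("id") % 1000000).as("key"), col("id").as("lval"))
        val right = spark.range(0, 100000000L)
          .select((col("id") % 1000000).as("key"), col("id").as("rval"))

        // The join of the kind that reportedly OOMs on 2.0 but not 1.5.2.
        val joined = left.join(right, "key")
        println(joined.count())
        spark.stop()
      }
    }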