Re: Spark reading from Hbase throws java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods

2020-02-23 Thread Jörn Franke
Yes, I fear you have to shade and create an uberjar.
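
A minimal sketch of what that shading could look like with sbt-assembly (assuming an sbt build; the relocation target below is illustrative, not a confirmed fix):

// build.sbt -- hypothetical shading of json4s with sbt-assembly, so the
// uberjar ships its own relocated json4s classes instead of clashing
// with the json4s version bundled on Spark's classpath.
assemblyShadeRules in assembly := Seq(
  ShadeRule.rename("org.json4s.**" -> "shaded.json4s.@1").inAll
)

After reassembling, spark-shell then only needs the single uberjar instead of the list of individual json4s jars.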

> On 17.02.2020, at 23:27, Mich Talebzadeh wrote:
> 
> 
> I stripped everything from the jar list. This is all I have
> 
> spark-shell --jars shc-core-1.1.1-2.1-s_2.11.jar, \
>   json4s-native_2.11-3.5.3.jar, \
>   json4s-jackson_2.11-3.5.3.jar, \
>   hbase-client-1.2.3.jar, \
>   hbase-common-1.2.3.jar
> 
> Now I still get the same error!
> 
> scala> val df = withCatalog(catalog)
> java.lang.NoSuchMethodError:
> org.json4s.jackson.JsonMethods$.parse(Lorg/json4s/JsonInput;Z)Lorg/json4s/JsonAST$JValue;
>   at org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog$.apply(HBaseTableCatalog.scala:257)
>   at org.apache.spark.sql.execution.datasources.hbase.HBaseRelation.<init>(HBaseRelation.scala:80)
>   at org.apache.spark.sql.execution.datasources.hbase.DefaultSource.createRelation(HBaseRelation.scala:51)
>   at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:318)
>   at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:223)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:211)
>   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:167)
>   at withCatalog(<console>:54)
> 
> Thanks
> 
> 
>> On Mon, 17 Feb 2020 at 21:37, Mich Talebzadeh  
>> wrote:
>> 
>> Many thanks both.
>> 
>> Let me check and confirm. 
>> 
>> regards,
>> 
>> Mich
>> 
>> 
>>> On Mon, 17 Feb 2020 at 21:33, Jörn Franke  wrote:
>>> Is there a reason why different Scala versions (it seems at least
>>> 2.10 and 2.11) are mixed? This never works.
>>> Do you accidentally include a dependency built against an old Scala
>>> version? E.g. the HBase datasource, maybe?
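>>>
>>> For example, in an sbt build, keeping everything on one Scala binary
>>> version is usually done by letting %% pick the suffix (a sketch;
>>> versions illustrative):
>>>
>>> scalaVersion := "2.11.12"  // matches Spark 2.4.x
>>> libraryDependencies ++= Seq(
>>>   // %% appends _2.11 automatically, so these jars cannot drift to a
>>>   // different Scala binary version than the build itself.
>>>   "org.json4s" %% "json4s-jackson" % "3.5.3",
>>>   "org.json4s" %% "json4s-native"  % "3.5.3"
>>> )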
>>> 
>>> 
> On 17.02.2020, at 22:15, Mich Talebzadeh wrote:
> 
 
 Thanks Muthu,
 
 
 I am using the following jar files for now in local mode, i.e.
 spark-shell_local --jars …..
 
 json4s-jackson_2.10-3.2.10.jar
 json4s_2.11-3.2.11.jar
 json4s-native_2.10-3.4.0.jar
 
 Which one is the incorrect one, please?
 
 Regards,
 
 Mich
 
 
 
 
 
> On Mon, 17 Feb 2020 at 20:28, Muthu Jayakumar  wrote:
> I suspect the Spark job somehow has an incorrect (newer) version of
> json4s on the classpath. json4s 3.5.3 is the highest version that can
> be used.
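>
> A quick way to check (a sketch; paste into the same spark-shell session):
>
> // Prints the jar that actually loaded JsonMethods -- the class whose
> // parse() overload the NoSuchMethodError says is missing.
> println(classOf[org.json4s.jackson.JsonMethods]
>   .getProtectionDomain.getCodeSource.getLocation)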
> 
> Thanks,
> Muthu
> 
>> On Mon, Feb 17, 2020, 06:43 Mich Talebzadeh  
>> wrote:
>> Hi,
>> 
>> Spark version 2.4.3
>> Hbase 1.2.7
>> 
>> Data is stored in HBase as JSON; an example of a row is shown below.
>> 
>> 
>> I am trying to read this table in Spark Scala
>> 
>> import org.apache.spark.sql.{SQLContext, _}
>> import org.apache.spark.sql.execution.datasources.hbase._
>> import org.apache.spark.{SparkConf, SparkContext}
>> import spark.sqlContext.implicits._
>> import org.json4s._
>> import org.json4s.jackson.JsonMethods._
>> import org.json4s.jackson.Serialization.{read => JsonRead}
>> import org.json4s.jackson.Serialization.{read, write}
>> def catalog = s"""{
>>   |"table":{"namespace":"trading", "name":"MARKETDATAHBASEBATCH",
>>   |"rowkey":"key",
>>   |"columns":{
>>   |"rowkey":{"cf":"rowkey", "col":"key", "type":"string"},
>>   |"ticker":{"cf":"PRICE_INFO", "col":"ticker", "type":"string"},
>>   |"timeissued":{"cf":"PRICE_INFO", "col":"timeissued", "type":"string"},
>>   |"price":{"cf":"PRICE_INFO", "col":"price", "type":"string"}
>>   |}
>>   |}""".stripMargin
>> def withCatalog(cat: String): 
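
For reference, the usual SHC version of the helper truncated above (a sketch based on the shc-core examples, not necessarily the poster's exact code):

import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.execution.datasources.hbase.HBaseTableCatalog

// Pass the JSON catalog to the SHC data source and load a DataFrame.
// The NoSuchMethodError above fires while SHC parses this catalog with
// json4s (HBaseTableCatalog.apply in the stack trace).
def withCatalog(cat: String): DataFrame =
  spark.sqlContext.read
    .options(Map(HBaseTableCatalog.tableCatalog -> cat))
    .format("org.apache.spark.sql.execution.datasources.hbase")
    .load()

val df = withCatalog(catalog)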

Re: Spark reading from Hbase throws java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods

2020-02-23 Thread Sean Busbey
Hi Mich!

Please try to keep your thread on a single mailing list. It's much easier
to have things show up on a new list if you give a brief summary of the
discussion and a pointer to the original thread (lists.apache.org is great
for this).

It looks like you're using "SHC", aka the "Spark HBase Connector". This is a
toolset from a third party and isn't associated with either the Apache
Spark or Apache HBase communities. You should address your concerns to the
provider of said tool.

If you are interested in reading/writing with HBase from Spark jobs, the
Apache HBase community provides its own integration through our "Apache
HBase Connectors" project.

The project's reference guide includes some examples of using the
integration:

http://hbase.apache.org/book.html#spark
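
For a flavor of that integration, a minimal read sketch with the hbase-spark
module (loosely adapted from the reference guide; the table name and column
mapping below are illustrative):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("hbase-spark-read").getOrCreate()

// The mapping format is "<field> <type> <cf>:<qualifier>", with ":key"
// marking the row key. Depending on the version, you may instead need to
// construct an HBaseContext up front rather than setting the option below.
val df = spark.read
  .format("org.apache.hadoop.hbase.spark")
  .option("hbase.columns.mapping",
    "key STRING :key, ticker STRING PRICE_INFO:ticker, price STRING PRICE_INFO:price")
  .option("hbase.table", "MARKETDATAHBASEBATCH")
  .option("hbase.spark.use.hbasecontext", false)
  .load()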

And the bits are available from our download page:

http://hbase.apache.org/downloads.html

The current documentation for deployment is thin, but if you can bring
specific questions about it to the user@hbase mailing list, that will help
push along improving it.



Re: Spark reading from Hbase throws java.lang.NoSuchMethodError: org.json4s.jackson.JsonMethods

2020-02-23 Thread Mich Talebzadeh
Hi,

Does anyone have any more suggestions for the error I reported earlier in this thread, please?

Thanks,

Mich




Re: [Spark SQL] NegativeArraySizeException When Parse InternalRow to DTO Field with Type Array[String]

2020-02-23 Thread Sandeep Patra
This might be due to the serializer being used.
This stackoverflow answer might help:
https://stackoverflow.com/questions/44414429/spark-negativearraysizeexception
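
If the serializer is the culprit, a hypothetical starting point is to pin it
explicitly and raise Kryo's buffer ceiling (settings illustrative, not a
confirmed fix for this trace):

import org.apache.spark.SparkConf

// Sketch: make the serializer explicit and give Kryo more buffer
// headroom, two knobs commonly tuned when Kryo-serialized data
// misbehaves at deserialization time.
val conf = new SparkConf()
  .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
  .set("spark.kryoserializer.buffer.max", "1g")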


[Spark SQL] NegativeArraySizeException When Parse InternalRow to DTO Field with Type Array[String]

2020-02-23 Thread Proust (Feng Guizhou) [Travel Search & Discovery]
Hi, Spark Users

I encounter the NegativeArraySizeException below when running Spark SQL. The
Catalyst-generated code for "apply2_19" and "apply1_11" is attached, along
with the related DTO.
It is difficult to understand how this problem could happen; please help if
you have any ideas.

I can see that https://issues.apache.org/jira/browse/SPARK-15062 ("Show on
DataFrame causes OutOfMemoryError, NegativeArraySizeException or segfault")
is maybe similar, but my data type is Array[String] and my Spark version is
2.1.2, both of which look fine with respect to that ticket.


java.lang.NegativeArraySizeException
  at org.apache.spark.unsafe.types.UTF8String.getBytes(UTF8String.java:229)
  at org.apache.spark.unsafe.types.UTF8String.toString(UTF8String.java:1005)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply1_11$(generated.java:2467)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply2_19$(generated.java:1475)
  at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificSafeProjection.apply(generated.java:3881)
  at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
  at scala.collection.Iterator$$anon$11.next(Iterator.scala:409)
  at scala.collection.Iterator$GroupedIterator.takeDestructively(Iterator.scala:1076)
  at scala.collection.Iterator$GroupedIterator.go(Iterator.scala:1091)
  at scala.collection.Iterator$GroupedIterator.fill(Iterator.scala:1128)
  at scala.collection.Iterator$GroupedIterator.hasNext(Iterator.scala:1132)
  at scala.collection.Iterator$class.foreach(Iterator.scala:893)
  at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
  at indexer.executor.TicketIndexerExecutorV2$$anonfun$indexData$2.apply(TicketIndexerExecutorV2.scala:101)
  at indexer.executor.TicketIndexerExecutorV2$$anonfun$indexData$2.apply(TicketIndexerExecutorV2.scala:95)
  at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
  at org.apache.spark.rdd.RDD$$anonfun$foreachPartition$1$$anonfun$apply$29.apply(RDD.scala:926)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1954)
  at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1954)
  at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
  at org.apache.spark.scheduler.Task.run(Task.scala:99)
  at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:325)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
  at java.lang.Thread.run(Thread.java:748)



case class VendorItemDto(
  rate_category_id: java.lang.Long,
  product_id: java.lang.Long,
  vendor_item_rate_category_id: String,
  vendor_item_id: java.lang.Long,
  name: String,
  item_name: String,
  vendor_item_order: java.lang.Integer,
  option_codes: Array[String],
  option_names: Array[String],
  feature_values_ids: Array[String],
  feature_values_names: Array[String],
  benefit_policy_type: String,
  benefit_discount_rate: java.lang.Double,
  benefit_discount_amount: java.lang.Long,
  available_stock: java.lang.Integer,
  sale_price: java.lang.Double,
  original_price: java.lang.Double,
  supply_price: java.lang.Double,
  period_id_set: Array[String],
  use_start_set: Array[String],
  use_end_set: Array[String]
)
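
The conversion that triggers the generated projection is presumably the usual
Dataset mapping, roughly as below (a sketch; the real input source is the
poster's and not shown, so the path here is illustrative):

import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("vendor-items").getOrCreate()
import spark.implicits._

// Hypothetical reproduction of the failing step: .as[VendorItemDto]
// compiles a SafeProjection (the apply1_11/apply2_19 above), and the
// exception fires in UTF8String.getBytes while decoding one of the
// Array[String] columns.
val items = spark.read.parquet("/path/to/vendor_items").as[VendorItemDto]
items.foreachPartition((it: Iterator[VendorItemDto]) => it.foreach(_ => ()))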


/* 2430 */   private void apply1_11(InternalRow i) {
/* 2431 */
/* 2432 */
/* 2433 */ boolean isNull222 = MapObjects_loopIsNull379;
/* 2434 */ ArrayData value222 = null;
/* 2435 */
/* 2436 */ if (!MapObjects_loopIsNull379) {
/* 2437 */
/* 2438 */   if (MapObjects_loopValue378.isNullAt(19)) {
/* 2439 */ isNull222 = true;
/* 2440 */   } else {
/* 2441 */ value222 = MapObjects_loopValue378.getArray(19);
/* 2442 */   }
/* 2443 */
/* 2444 */ }
/* 2445 */ ArrayData value221 = null;
/* 2446 */
/* 2447 */ if (!isNull222) {
/* 2448 */
/* 2449 */   java.lang.String[] convertedArray17 = null;
/* 2450 */   int dataLength17 = value222.numElements();
/* 2451 */   convertedArray17 = new java.lang.String[dataLength17];
/* 2452 */
/* 2453 */   int loopIndex17 = 0;
/* 2454 */   while