RE: java.lang.ArrayIndexOutOfBoundsException: 0 on Yarn Client

2015-07-28 Thread Manohar Reddy
Yes, got it.

Thanks Akhil.

Re: java.lang.ArrayIndexOutOfBoundsException: 0 on Yarn Client

2015-07-28 Thread Akhil Das
That happens when your batch duration is less than your processing time; you
need to set the StorageLevel to MEMORY_AND_DISK. If you are using the latest
version of Spark and are just exploring things, you can go with the Kafka
consumers that come with Spark itself. You will not have this issue with
KafkaUtils.directStream since it is not a receiver-based consumer.
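
For reference, a minimal sketch of the receiverless approach via
KafkaUtils.createDirectStream, assuming the spark-streaming-kafka module that
ships with Spark 1.4; the broker address, topic name, and batch interval are
placeholders, not values from your setup:

import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

import kafka.serializer.StringDecoder;
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaPairInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.kafka.KafkaUtils;

public class DirectStreamSketch {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("direct-stream-sketch");
        JavaStreamingContext jsc = new JavaStreamingContext(conf, Durations.seconds(10));

        // Direct (receiverless) consumer: executors pull records from Kafka on
        // demand, so there are no receiver blocks that can be evicted before
        // they are processed.
        Map<String, String> kafkaParams = new HashMap<String, String>();
        kafkaParams.put("metadata.broker.list", "broker1:9092"); // placeholder broker
        Set<String> topics = new HashSet<String>(Arrays.asList("mytopic")); // placeholder topic

        JavaPairInputDStream<String, String> stream = KafkaUtils.createDirectStream(
                jsc, String.class, String.class,
                StringDecoder.class, StringDecoder.class,
                kafkaParams, topics);

        stream.print(); // replace with your real processing
        jsc.start();
        jsc.awaitTermination();
    }
}

If you stay with the receiver-based ReceiverLauncher instead, the StorageLevel
change is a one-argument swap: pass StorageLevel.MEMORY_AND_DISK() rather than
StorageLevel.MEMORY_ONLY() in the launch(...) call.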

Thanks
Best Regards


RE: java.lang.ArrayIndexOutOfBoundsException: 0 on Yarn Client

2015-07-28 Thread Manohar Reddy
Thanks Akhil, that solved it, but below is the new stack trace.
Don't feel bad; I am looking into it myself, but if you have the answer at your fingertips, please share.

15/07/28 09:03:31 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 5.0 (TID 77, ip-10-252-7-70.us-west-2.compute.internal): java.lang.Exception: Could not compute split, block input-0-1438074176218 not found
        at org.apache.spark.rdd.BlockRDD.compute(BlockRDD.scala:51)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:35)
        at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:277)
        at org.apache.spark.rdd.RDD.iterator(RDD.scala:244)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
        at org.apache.spark.scheduler.Task.run(Task.scala:70)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:745)


Re: java.lang.ArrayIndexOutOfBoundsException: 0 on Yarn Client

2015-07-28 Thread Akhil Das
You need to trigger an action on your rowrdd for it to execute the map; you
can call rowrdd.count() for that.
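
To make the laziness concrete, here is a minimal standalone sketch (local mode
with made-up records, not your pipeline):

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class LazyMapSketch {
    public static void main(String[] args) {
        JavaSparkContext sc = new JavaSparkContext(
                new SparkConf().setAppName("lazy-map-sketch").setMaster("local[2]"));

        JavaRDD<String> lines = sc.parallelize(Arrays.asList("a|b|c", "d|e|f"));

        // map() is a lazy transformation: this body does NOT run yet, which is
        // why a println inside the map never appears until an action fires.
        JavaRDD<Integer> widths = lines.map(new Function<String, Integer>() {
            @Override
            public Integer call(String s) throws Exception {
                return s.split("\\|").length;
            }
        });

        // count() is an action: only now does Spark actually execute the map.
        System.out.println("rows: " + widths.count());

        sc.stop();
    }
}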

Thanks
Best Regards


RE: java.lang.ArrayIndexOutOfBoundsException: 0 on Yarn Client

2015-07-28 Thread Manohar Reddy
Hi Akhil,

Thanks for the reply. I found the root cause but don't know how to solve it.
The cause is below: this map function is never executed, and because of that
all my list fields stay empty.
Please let me know why this snippet of code might not execute; the map below
never runs.
JavaRDD<Row> rowrdd = rdd.map(new Function<MessageAndMetadata, Row>() {
    @Override
    public Row call(MessageAndMetadata arg0) throws Exception {
        System.out.println("inside thread map ca");
        String[] data = new String(arg0.getPayload()).split("\\|");
        int i = 0;
        for (String string : data) {
            if (i > 3) {
                if (i % 2 == 0) {
                    fields.add(DataTypes.createStructField(string, DataTypes.StringType, true));
                    System.out.println(string);
                } else {
                    listvalues.add(string);
                    System.out.println(string);
                }


Re: java.lang.ArrayIndexOutOfBoundsException: 0 on Yarn Client

2015-07-28 Thread Akhil Das
Put a try/catch inside your code and, inside the catch, print out the length
or the list itself that causes the ArrayIndexOutOfBounds. It may be that some
of your data is not well formed.
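
A minimal standalone sketch of that kind of defensive logging (the records and
the field index are made up for illustration):

import java.util.Arrays;

public class DefensiveSplitSketch {
    public static void main(String[] args) {
        // One well-formed and one malformed record (placeholders).
        for (String payload : Arrays.asList("h1|h2|h3|h4|v1|v2", "broken-record")) {
            String[] data = payload.split("\\|");
            try {
                // The same kind of positional access as the failing code.
                System.out.println("value at index 4: " + data[4]);
            } catch (ArrayIndexOutOfBoundsException e) {
                // Log the offending record and its length instead of failing the task.
                System.err.println("Malformed record with only " + data.length
                        + " fields: " + payload);
            }
        }
    }
}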

Thanks
Best Regards

On Mon, Jul 27, 2015 at 8:24 PM, Manohar753  wrote:

Hi Team,

Can somebody please help me figure out what I am doing wrong to get the
exception below while running my app on a YARN cluster with Spark 1.4?

I am receiving a Kafka stream, calling foreachRDD on it, and handing each RDD
to a new thread for processing. Please find the code snippet below.

JavaDStream<MessageAndMetadata> unionStreams = ReceiverLauncher.launch(
        jsc, props, numberOfReceivers, StorageLevel.MEMORY_ONLY());
unionStreams.foreachRDD(new Function2<JavaRDD<MessageAndMetadata>, Time, Void>() {

    @Override
    public Void call(JavaRDD<MessageAndMetadata> rdd, Time time) throws Exception {
        new ThreadParam(rdd).start();
        return null;
    }
});
#
public ThreadParam(JavaRDD<MessageAndMetadata> rdd) {
    this.rdd = rdd;
    // this.context=context;
}

public void run() {
    final List<StructField> fields = new ArrayList<StructField>();
    List<String> listvalues = new ArrayList<>();
    final List<String> meta = new ArrayList<>();

    JavaRDD<Row> rowrdd = rdd.map(new Function<MessageAndMetadata, Row>() {
        @Override
        public Row call(MessageAndMetadata arg0) throws Exception {
            String[] data = new String(arg0.getPayload()).split("\\|");
            int i = 0;
            List<StructField> fields = new ArrayList<StructField>();
            List<String> listvalues = new ArrayList<>();
            List<String> meta = new ArrayList<>();
            for (String string : data) {
                if (i > 3) {
                    if (i % 2 == 0) {
                        fields.add(DataTypes.createStructField(string, DataTypes.StringType, true));
                        // System.out.println(splitarr[i]);
                    } else {
                        listvalues.add(string);
                        // System.out.println(splitarr[i]);
                    }
                } else {
                    meta.add(string);
                }
                i++;
            }
            int size = listvalues.size();
            return RowFactory.create(
                    listvalues.get(25-25), listvalues.get(25-24), listvalues.get(25-23),
                    listvalues.get(25-22), listvalues.get(25-21), listvalues.get(25-20),
                    listvalues.get(25-19), listvalues.get(25-18), listvalues.get(25-17),
                    listvalues.get(25-16), listvalues.get(25-15), listvalues.get(25-14),
                    listvalues.get(25-13), listvalues.get(25-12), listvalues.get(25-11),
                    listvalues.get(25-10), listvalues.get(25-9), listvalues.get(25-8),
                    listvalues.get(25-7), listvalues.get(25-6), listvalues.get(25-5),
                    listvalues.get(25-4), listvalues.get(25-3), listvalues.get(25-2),
                    listvalues.get(25-1));
        }
    });

    SQLContext sqlContext = new SQLContext(rowrdd.context());
    StructType schema = DataTypes.createStructType(fields);
    System.out.println("before creating schema");
    DataFrame courseDf = sqlContext.createDataFrame(rowrdd, schema);
    courseDf.registerTempTable("course");
    courseDf.show();
    System.out.println("after creating schema");
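
As an aside, the chain of listvalues.get(25-25) through listvalues.get(25-1)
calls above can be collapsed, since RowFactory.create takes varargs; a sketch,
reusing listvalues from the snippet and assuming it holds at least 25 values:

    // Equivalent to passing listvalues.get(0) ... listvalues.get(24) one by one.
    Row row = RowFactory.create(listvalues.subList(0, 25).toArray());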

Below is the command used to run this, and after that the stack trace error:

MASTER=yarn-client /home/hadoop/spark/bin/spark-submit --class com.person.Consumer /mnt1/manohar/spark-load-from-db/targetpark-load-from-db-1.0-SNAPSHOT-jar-with-dependencies.jar

The error is:

15/07/27 14:45:01 WARN scheduler.TaskSetManager: Lost task 0.0 in stage 4.0 (TID 72, ip-10-252-7-73.us-west-2.compute.internal): java.lang.ArrayIndexOutOfBoundsException: 0
        at org.apache.spark.sql.catalyst.CatalystTypeConverters$.convertRowWithConverters(CatalystTypeConverters.scala:348)
        at org.apache.spark.sql.catalyst.CatalystTypeConverters$$anonfun$createToCatalystConverter$4.apply(CatalystTypeConverters.scala:180)
        at org.apache.spark.sql.SQLContext$$anonfun$9.apply(SQLContext.scala:488)
        at org.apache.spark.sql.SQLContext$$anonfun$9.apply(SQLContext.scala:488)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$11.next(Iterator.scala:328)
        at scala.collection.Iterator$$anon$10.next(Iterator.scala:312)