Re: Performance Question

2016-07-11 Thread Benjamin Kim
Todd,

It’s no problem to start over again, but a tool like that would be helpful. 
Gaps in data can be accommodated by just backfilling.

Thanks,
Ben
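
For reference, here is a minimal sketch of what such a backfill could look like using the Java client API that appears elsewhere in this thread, written in Scala. The master address, table name, column names, and row source are placeholders for illustration, not details from the thread.

    import org.kududb.client.{KuduClient, SessionConfiguration}

    // Placeholder rows standing in for data replayed from the original source (e.g. raw files).
    val replayedRows: Seq[(Long, String)] = Seq((1L, "a"), (2L, "b"))

    val client  = new KuduClient.KuduClientBuilder("kudu-master:7051").build()
    val table   = client.openTable("events")
    val session = client.newSession()
    session.setFlushMode(SessionConfiguration.FlushMode.AUTO_FLUSH_BACKGROUND)

    replayedRows.foreach { case (id, value) =>
      val insert = table.newInsert()
      val row    = insert.getRow
      row.addLong("id", id)
      row.addString("value", value)
      session.apply(insert)
    }

    session.flush()      // make sure buffered rows are written before shutting down
    session.close()
    client.shutdown()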

> On Jul 11, 2016, at 10:47 AM, Todd Lipcon  wrote:
> 
> On Mon, Jul 11, 2016 at 10:40 AM, Benjamin Kim wrote:
> Todd,
> 
> I had it at one replica. Do I have to recreate?
> 
> We don't currently have the ability to "accept data loss" on a tablet (or set 
> of tablets). If the machine is gone for good, then currently the only easy 
> way to recover is to recreate the table. If this sounds really painful, 
> though, maybe we can work up some kind of tool you could use to just recreate 
> the missing tablets (with those rows lost).
> 
> -Todd
> 
>> On Jul 11, 2016, at 10:37 AM, Todd Lipcon wrote:
>> 
>> Hey Ben,
>> 
>> Is the table that you're querying replicated? Or was it created with only 
>> one replica per tablet?
>> 
>> -Todd
>> 
>> On Mon, Jul 11, 2016 at 10:35 AM, Benjamin Kim wrote:
>> Over the weekend, a tablet server went down. It’s not coming back up. So, I 
>> decommissioned it and removed it from the cluster. Then, I restarted Kudu 
>> because I was getting a timeout exception trying to do counts on the table. 
>> Now, when I try again, I get the same error.
>> 
>> 16/07/11 17:32:36 WARN scheduler.TaskSetManager: Lost task 468.3 in stage 
>> 0.0 (TID 603, prod-dc1-datanode167.pdc1i.gradientx.com): 
>> com.stumbleupon.async.TimeoutException: Timed out after 3ms when joining 
>> Deferred@712342716(state=PAUSED, result=Deferred@1765902299, 
>> callback=passthrough -> scanner opened -> wakeup thread Executor task launch 
>> worker-2, errback=openScanner errback -> passthrough -> wakeup thread 
>> Executor task launch worker-2)
>> at com.stumbleupon.async.Deferred.doJoin(Deferred.java:1177)
>> at com.stumbleupon.async.Deferred.join(Deferred.java:1045)
>> at org.kududb.client.KuduScanner.nextRows(KuduScanner.java:57)
>> at org.kududb.spark.kudu.RowResultIteratorScala.hasNext(KuduRDD.scala:99)
>> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> at 
>> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:88)
>> at 
>> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
>> at 
>> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>> at 
>> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>> at org.apache.spark.scheduler.Task.run(Task.scala:89)
>> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>> at 
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at 
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>> 
>> Does anyone know how to recover from this?
>> 
>> Thanks,
>> Benjamin Kim
>> Data Solutions Architect
>> 
>> [a•mo•bee] (n.) the company defining digital marketing.
>> 
>> Mobile: +1 818 635 2900 
>> 3250 Ocean Park Blvd, Suite 200  |  Santa Monica, CA 90405  |  
>> www.amobee.com 
>>> On Jul 6, 2016, at 9:46 AM, Dan Burkert wrote:
>>> 
>>> 
>>> 
>>> On Wed, Jul 6, 2016 at 7:05 AM, Benjamin Kim wrote:
>>> Over the weekend, the row count is up to <500M. I will give it another few 
>>> days to get to 1B rows. I still get consistent times ~15s for doing row 
>>> counts despite the amount of data growing.
>>> 
>>> On another note, I got a solicitation email from SnappyData to evaluate 
>>> their product. They claim to be the “Spark Data Store” with tight 
>>> integration with Spark executors. They claim to be an OLTP and OLAP system 
>>> that runs as an in-memory data store first, then spills to disk. After going to 
>>> several Spark events, it would seem that this is the new “hot” area for 
>>> vendors. They all (MemSQL, Redis, Aerospike, Datastax, etc.) claim to be 
>>> the best "Spark Data Store”. I’m wondering if Kudu will become this too? 
>>> With the performance I’ve seen so far, it would seem that it can be a 
>>> contender. All that is needed is a hardened Spark connector package, I 
>>> would think. The next evaluation I will be conducting is to see if 
>>> SnappyData’s claims are valid by doing my own tests.

Re: Performance Question

2016-07-11 Thread Todd Lipcon
On Mon, Jul 11, 2016 at 10:40 AM, Benjamin Kim  wrote:

> Todd,
>
> I had it at one replica. Do I have to recreate?
>

We don't currently have the ability to "accept data loss" on a tablet (or
set of tablets). If the machine is gone for good, then currently the only
easy way to recover is to recreate the table. If this sounds really
painful, though, maybe we can work up some kind of tool you could use to
just recreate the missing tablets (with those rows lost).

-Todd
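
For anyone who does end up recreating the table, here is a rough sketch of that path with the Java client API used in this thread, this time with three replicas per tablet so the loss of a single tablet server is survivable. The table name, schema, and hash partitioning below are assumptions for illustration only.

    import scala.collection.JavaConverters._
    import org.kududb.{ColumnSchema, Schema, Type}
    import org.kududb.client.{CreateTableOptions, KuduClient}

    val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()

    // Hypothetical schema; substitute the real column definitions.
    val columns = List(
      new ColumnSchema.ColumnSchemaBuilder("id", Type.INT64).key(true).build(),
      new ColumnSchema.ColumnSchemaBuilder("value", Type.STRING).build()
    ).asJava
    val schema = new Schema(columns)

    if (client.tableExists("events")) {
      client.deleteTable("events")               // drops everything, including the healthy tablets
    }
    client.createTable("events", schema,
      new CreateTableOptions()
        .addHashPartitions(List("id").asJava, 4) // partitioning here is an assumption
        .setNumReplicas(3))                      // 3 replicas instead of 1
    client.shutdown()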

>
> On Jul 11, 2016, at 10:37 AM, Todd Lipcon  wrote:
>
> Hey Ben,
>
> Is the table that you're querying replicated? Or was it created with only
> one replica per tablet?
>
> -Todd
>
> On Mon, Jul 11, 2016 at 10:35 AM, Benjamin Kim  wrote:
>
>> Over the weekend, a tablet server went down. It’s not coming back up. So,
>> I decommissioned it and removed it from the cluster. Then, I restarted Kudu
>> because I was getting a timeout exception trying to do counts on the
>> table. Now, when I try again, I get the same error.
>>
>> 16/07/11 17:32:36 WARN scheduler.TaskSetManager: Lost task 468.3 in stage
>> 0.0 (TID 603, prod-dc1-datanode167.pdc1i.gradientx.com):
>> com.stumbleupon.async.TimeoutException: Timed out after 3ms when
>> joining Deferred@712342716(state=PAUSED, result=Deferred@1765902299,
>> callback=passthrough -> scanner opened -> wakeup thread Executor task
>> launch worker-2, errback=openScanner errback -> passthrough -> wakeup
>> thread Executor task launch worker-2)
>> at com.stumbleupon.async.Deferred.doJoin(Deferred.java:1177)
>> at com.stumbleupon.async.Deferred.join(Deferred.java:1045)
>> at org.kududb.client.KuduScanner.nextRows(KuduScanner.java:57)
>> at org.kududb.spark.kudu.RowResultIteratorScala.hasNext(KuduRDD.scala:99)
>> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> at
>> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:88)
>> at
>> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
>> at
>> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>> at
>> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
>> at
>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>> at
>> org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
>> at
>> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
>> at
>> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
>> at org.apache.spark.scheduler.Task.run(Task.scala:89)
>> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
>> at
>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
>> at java.lang.Thread.run(Thread.java:745)
>>
>> Does anyone know how to recover from this?
>>
>> Thanks,
>> *Benjamin Kim*
>> *Data Solutions Architect*
>>
>> [a•mo•bee] *(n.)* the company defining digital marketing.
>>
>> *Mobile: +1 818 635 2900*
>> 3250 Ocean Park Blvd, Suite 200  |  Santa Monica, CA 90405  |
>> www.amobee.com
>>
>> On Jul 6, 2016, at 9:46 AM, Dan Burkert  wrote:
>>
>>
>>
>> On Wed, Jul 6, 2016 at 7:05 AM, Benjamin Kim  wrote:
>>
>>> Over the weekend, the row count is up to <500M. I will give it another
>>> few days to get to 1B rows. I still get consistent times ~15s for doing row
>>> counts despite the amount of data growing.
>>>
>>> On another note, I got a solicitation email from SnappyData to evaluate
>>> their product. They claim to be the “Spark Data Store” with tight
>>> integration with Spark executors. They claim to be an OLTP and OLAP system
>>> that runs as an in-memory data store first, then spills to disk. After going to
>>> several Spark events, it would seem that this is the new “hot” area for
>>> vendors. They all (MemSQL, Redis, Aerospike, Datastax, etc.) claim to be
>>> the best "Spark Data Store”. I’m wondering if Kudu will become this too?
>>> With the performance I’ve seen so far, it would seem that it can be a
>>> contender. All that is needed is a hardened Spark connector package, I
>>> would think. The next evaluation I will be conducting is to see if
>>> SnappyData’s claims are valid by doing my own tests.
>>>
>>
>> It's hard to compare Kudu against any other data store without a lot of
>> analysis and thorough benchmarking, but it is certainly a goal of Kudu to
>> be a great platform for ingesting and analyzing data through Spark.  Up
>> till this point most of the Spark work has been community driven, but more
>> thorough integration testing of the Spark connector is going to be a focus going forward.

Re: Performance Question

2016-07-11 Thread Benjamin Kim
Todd,

I had it at one replica. Do I have to recreate?

Thanks,
Ben


> On Jul 11, 2016, at 10:37 AM, Todd Lipcon  wrote:
> 
> Hey Ben,
> 
> Is the table that you're querying replicated? Or was it created with only one 
> replica per tablet?
> 
> -Todd
> 
> On Mon, Jul 11, 2016 at 10:35 AM, Benjamin Kim wrote:
> Over the weekend, a tablet server went down. It’s not coming back up. So, I 
> decommissioned it and removed it from the cluster. Then, I restarted Kudu 
> because I was getting a timeout exception trying to do counts on the table. 
> Now, when I try again, I get the same error.
> 
> 16/07/11 17:32:36 WARN scheduler.TaskSetManager: Lost task 468.3 in stage 0.0 
> (TID 603, prod-dc1-datanode167.pdc1i.gradientx.com): 
> com.stumbleupon.async.TimeoutException: Timed out after 3ms when joining 
> Deferred@712342716(state=PAUSED, result=Deferred@1765902299, 
> callback=passthrough -> scanner opened -> wakeup thread Executor task launch 
> worker-2, errback=openScanner errback -> passthrough -> wakeup thread 
> Executor task launch worker-2)
> at com.stumbleupon.async.Deferred.doJoin(Deferred.java:1177)
> at com.stumbleupon.async.Deferred.join(Deferred.java:1045)
> at org.kududb.client.KuduScanner.nextRows(KuduScanner.java:57)
> at org.kududb.spark.kudu.RowResultIteratorScala.hasNext(KuduRDD.scala:99)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:88)
> at 
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
> at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
> at 
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
> at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
> at org.apache.spark.scheduler.Task.run(Task.scala:89)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> at 
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at 
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
> 
> Does anyone know how to recover from this?
> 
> Thanks,
> Benjamin Kim
> Data Solutions Architect
> 
> [a•mo•bee] (n.) the company defining digital marketing.
> 
> Mobile: +1 818 635 2900 
> 3250 Ocean Park Blvd, Suite 200  |  Santa Monica, CA 90405  |  www.amobee.com 
> 
>> On Jul 6, 2016, at 9:46 AM, Dan Burkert wrote:
>> 
>> 
>> 
>> On Wed, Jul 6, 2016 at 7:05 AM, Benjamin Kim wrote:
>> Over the weekend, the row count is up to <500M. I will give it another few 
>> days to get to 1B rows. I still get consistent times ~15s for doing row 
>> counts despite the amount of data growing.
>> 
>> On another note, I got a solicitation email from SnappyData to evaluate 
>> their product. They claim to be the “Spark Data Store” with tight 
>> integration with Spark executors. They claim to be an OLTP and OLAP system 
>> that runs as an in-memory data store first, then spills to disk. After going to 
>> several Spark events, it would seem that this is the new “hot” area for 
>> vendors. They all (MemSQL, Redis, Aerospike, Datastax, etc.) claim to be the 
>> best "Spark Data Store”. I’m wondering if Kudu will become this too? With 
>> the performance I’ve seen so far, it would seem that it can be a contender. 
>> All that is needed is a hardened Spark connector package, I would think. The 
>> next evaluation I will be conducting is to see if SnappyData’s claims are 
>> valid by doing my own tests.
>> 
>> It's hard to compare Kudu against any other data store without a lot of 
>> analysis and thorough benchmarking, but it is certainly a goal of Kudu to be 
>> a great platform for ingesting and analyzing data through Spark.  Up till 
>> this point most of the Spark work has been community driven, but more 
>> thorough integration testing of the Spark connector is going to be a focus 
>> going forward.
>> 
>> - Dan
>> 
>>  
>> Cheers,
>> Ben
>> 
>> 
>> 
>>> On Jun 15, 2016, at 12:47 AM, Todd Lipcon wrote:
>>> 
>>> Hi Benjamin,
>>> 
>>> What workload are you using for benchmarks? Using Spark or something more 
>>> custom? RDD, data frame, or SQL, etc.? Maybe you can share the schema and 
>>> some queries.

Re: Performance Question

2016-07-11 Thread Todd Lipcon
Hey Ben,

Is the table that you're querying replicated? Or was it created with only
one replica per tablet?

-Todd

On Mon, Jul 11, 2016 at 10:35 AM, Benjamin Kim  wrote:

> Over the weekend, a tablet server went down. It’s not coming back up. So,
> I decommissioned it and removed it from the cluster. Then, I restarted Kudu
> because I was getting a timeout exception trying to do counts on the
> table. Now, when I try again, I get the same error.
>
> 16/07/11 17:32:36 WARN scheduler.TaskSetManager: Lost task 468.3 in stage
> 0.0 (TID 603, prod-dc1-datanode167.pdc1i.gradientx.com):
> com.stumbleupon.async.TimeoutException: Timed out after 3ms when
> joining Deferred@712342716(state=PAUSED, result=Deferred@1765902299,
> callback=passthrough -> scanner opened -> wakeup thread Executor task
> launch worker-2, errback=openScanner errback -> passthrough -> wakeup
> thread Executor task launch worker-2)
> at com.stumbleupon.async.Deferred.doJoin(Deferred.java:1177)
> at com.stumbleupon.async.Deferred.join(Deferred.java:1045)
> at org.kududb.client.KuduScanner.nextRows(KuduScanner.java:57)
> at org.kududb.spark.kudu.RowResultIteratorScala.hasNext(KuduRDD.scala:99)
> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
> at
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:88)
> at
> org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
> at
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
> at
> org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
> at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
> at
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
> at
> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
> at org.apache.spark.scheduler.Task.run(Task.scala:89)
> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> at java.lang.Thread.run(Thread.java:745)
>
> Does anyone know how to recover from this?
>
> Thanks,
> *Benjamin Kim*
> *Data Solutions Architect*
>
> [a•mo•bee] *(n.)* the company defining digital marketing.
>
> *Mobile: +1 818 635 2900*
> 3250 Ocean Park Blvd, Suite 200  |  Santa Monica, CA 90405  |
> www.amobee.com
>
> On Jul 6, 2016, at 9:46 AM, Dan Burkert  wrote:
>
>
>
> On Wed, Jul 6, 2016 at 7:05 AM, Benjamin Kim  wrote:
>
>> Over the weekend, the row count is up to <500M. I will give it another
>> few days to get to 1B rows. I still get consistent times ~15s for doing row
>> counts despite the amount of data growing.
>>
>> On another note, I got a solicitation email from SnappyData to evaluate
>> their product. They claim to be the “Spark Data Store” with tight
>> integration with Spark executors. They claim to be an OLTP and OLAP system
>> that runs as an in-memory data store first, then spills to disk. After going to
>> several Spark events, it would seem that this is the new “hot” area for
>> vendors. They all (MemSQL, Redis, Aerospike, Datastax, etc.) claim to be
>> the best "Spark Data Store”. I’m wondering if Kudu will become this too?
>> With the performance I’ve seen so far, it would seem that it can be a
>> contender. All that is needed is a hardened Spark connector package, I
>> would think. The next evaluation I will be conducting is to see if
>> SnappyData’s claims are valid by doing my own tests.
>>
>
> It's hard to compare Kudu against any other data store without a lot of
> analysis and thorough benchmarking, but it is certainly a goal of Kudu to
> be a great platform for ingesting and analyzing data through Spark.  Up
> till this point most of the Spark work has been community driven, but more
> thorough integration testing of the Spark connector is going to be a focus
> going forward.
>
> - Dan
>
>
>
>> Cheers,
>> Ben
>>
>>
>>
>> On Jun 15, 2016, at 12:47 AM, Todd Lipcon  wrote:
>>
>> Hi Benjamin,
>>
>> What workload are you using for benchmarks? Using Spark or something more
>> custom? RDD, data frame, or SQL, etc.? Maybe you can share the schema and
>> some queries.
>>
>> Todd
>>
>> On Jun 15, 2016 8:10 AM, "Benjamin Kim"  wrote:
>>
>>> Hi Todd,
>>>
>>> Now that Kudu 0.9.0 is out, I have done some tests. Already, I am
>>> impressed. Compared to HBase, read and write performance are better. Write
>>> performance has the greatest improvement (> 4x), while read is > 1.5x.

Re: Performance Question

2016-07-11 Thread Benjamin Kim
Over the weekend, a tablet server went down. It’s not coming back up. So, I 
decommissioned it and removed it from the cluster. Then, I restarted Kudu 
because I was getting a timeout exception trying to do counts on the table. 
Now, when I try again, I get the same error.

16/07/11 17:32:36 WARN scheduler.TaskSetManager: Lost task 468.3 in stage 0.0 
(TID 603, 
prod-dc1-datanode167.pdc1i.gradientx.com):
 com.stumbleupon.async.TimeoutException: Timed out after 3ms when joining 
Deferred@712342716(state=PAUSED, result=Deferred@1765902299, 
callback=passthrough -> scanner opened -> wakeup thread Executor task launch 
worker-2, errback=openScanner errback -> passthrough -> wakeup thread Executor 
task launch worker-2)
at com.stumbleupon.async.Deferred.doJoin(Deferred.java:1177)
at com.stumbleupon.async.Deferred.join(Deferred.java:1045)
at org.kududb.client.KuduScanner.nextRows(KuduScanner.java:57)
at org.kududb.spark.kudu.RowResultIteratorScala.hasNext(KuduRDD.scala:99)
at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
at 
org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:88)
at 
org.apache.spark.sql.execution.aggregate.TungstenAggregate$$anonfun$doExecute$1$$anonfun$2.apply(TungstenAggregate.scala:86)
at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at 
org.apache.spark.rdd.RDD$$anonfun$mapPartitions$1$$anonfun$apply$20.apply(RDD.scala:710)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
at org.apache.spark.scheduler.Task.run(Task.scala:89)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:214)
at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Does anyone know how to recover from this?
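
For context, the count producing this trace goes through the kudu-spark DataSource visible in the stack frames (org.kududb.spark.kudu). A minimal sketch of that read path, assuming a Spark 1.6-era shell where sqlContext is already defined; the master address, table name, and exact option keys are assumptions:

    // Read the Kudu table through the DataSource seen in the stack trace above.
    val df = sqlContext.read
      .format("org.kududb.spark.kudu")
      .options(Map(
        "kudu.master" -> "kudu-master:7051",   // assumed option key and address
        "kudu.table"  -> "events"))            // assumed option key and table name
      .load()

    // A count scans every tablet; if a tablet has no live replica, the scanner
    // never opens and the Deferred join above eventually times out.
    df.count()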

Thanks,
Benjamin Kim
Data Solutions Architect

[a•mo•bee] (n.) the company defining digital marketing.

Mobile: +1 818 635 2900
3250 Ocean Park Blvd, Suite 200  |  Santa Monica, CA 90405  |  
www.amobee.com

On Jul 6, 2016, at 9:46 AM, Dan Burkert wrote:



On Wed, Jul 6, 2016 at 7:05 AM, Benjamin Kim wrote:
Over the weekend, the row count is up to <500M. I will give it another few days 
to get to 1B rows. I still get consistent times ~15s for doing row counts 
despite the amount of data growing.

On another note, I got a solicitation email from SnappyData to evaluate their 
product. They claim to be the “Spark Data Store” with tight integration with 
Spark executors. They claim to be an OLTP and OLAP system that runs as an 
in-memory data store first, then spills to disk. After going to several Spark events, 
it would seem that this is the new “hot” area for vendors. They all (MemSQL, 
Redis, Aerospike, Datastax, etc.) claim to be the best "Spark Data Store”. I’m 
wondering if Kudu will become this too? With the performance I’ve seen so far, 
it would seem that it can be a contender. All that is needed is a hardened 
Spark connector package, I would think. The next evaluation I will be 
conducting is to see if SnappyData’s claims are valid by doing my own tests.

It's hard to compare Kudu against any other data store without a lot of 
analysis and thorough benchmarking, but it is certainly a goal of Kudu to be a 
great platform for ingesting and analyzing data through Spark.  Up till this 
point most of the Spark work has been community driven, but more thorough 
integration testing of the Spark connector is going to be a focus going forward.

- Dan


Cheers,
Ben



On Jun 15, 2016, at 12:47 AM, Todd Lipcon wrote:


Hi Benjamin,

What workload are you using for benchmarks? Using Spark or something more 
custom? RDD, data frame, or SQL, etc.? Maybe you can share the schema and some 
queries.

Todd

On Jun 15, 2016 8:10 AM, "Benjamin Kim" wrote:
Hi Todd,

Now that Kudu 0.9.0 is out, I have done some tests. Already, I am impressed. 
Compared to HBase, read and write performance are better. Write performance has 
the greatest improvement (> 4x), while read is > 1.5x. Admittedly, these are only 
preliminary tests. Do you know of a way to really do some conclusive tests? I 
want to see if I can match your results on my 50-node cluster.

Thanks,
Ben
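
On the question of more conclusive tests, one low-effort way to make the comparison repeatable is to register the Kudu table as a temp table and time a fixed set of Spark SQL queries. This is only a sketch, assuming a spark-shell sqlContext and placeholder master, table, and column names, and is not a substitute for a dedicated benchmark tool.

    // Register the Kudu table for SQL and time a few fixed queries (Spark 1.6 API).
    val df = sqlContext.read
      .format("org.kududb.spark.kudu")
      .options(Map("kudu.master" -> "kudu-master:7051", "kudu.table" -> "events"))
      .load()
    df.registerTempTable("events")

    def timed[T](label: String)(body: => T): T = {
      val start  = System.nanoTime()
      val result = body
      println(f"$label%-20s ${(System.nanoTime() - start) / 1e9}%.1f s")
      result
    }

    timed("count(*)") {
      sqlContext.sql("SELECT COUNT(*) FROM events").collect()
    }
    timed("filtered group-by") {
      sqlContext.sql("SELECT value, COUNT(*) FROM events WHERE id > 1000000 GROUP BY value").collect()
    }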

On May 30,