Re: Performance Question

2016-06-28 Thread Todd Lipcon
Cool, thanks for the report, Ben. For what it's worth, I think there's
still some low-hanging fruit in the Spark connector for Kudu (for example,
I believe locality on reads is currently broken), so you can expect
performance to continue to improve in future versions. I'd also be
interested to see results on Kudu for a much larger dataset - my guess is that a
lot of the 6 seconds you're seeing is constant overhead from Spark job
setup, etc., given that performance didn't seem to get slower as you
went from 700K rows to 13M rows.

-Todd

On Tue, Jun 28, 2016 at 3:03 PM, Benjamin Kim  wrote:

> FYI.
>
> I did a quick-n-dirty performance test.
>
> First, the setup:
> QA cluster:
>
>- 15 data nodes
>   - 64GB memory each
>   - HBase is using 4GB of memory
>   - Kudu is using 1GB of memory
>- 1 HBase/Kudu master node
>   - 64GB memory
>   - HBase/Kudu master is using 1GB of memory each
>- 10Gb Ethernet
>
>
> Using Spark on both to load/read events data (84 columns per row), I was
> able to record performance for each. On the HBase side, I used the Phoenix
> 4.7 Spark plugin where DataFrames can be used directly. On the Kudu side, I
> used the Spark connector. I created an events table in Phoenix using the
> CREATE TABLE statement and created the equivalent in Kudu using the Spark
> method based off of a DataFrame schema.
>
> Here are the numbers for Phoenix/HBase.
> 1st run:
> > 715k rows
> - write: 2.7m
>
> > 715k rows in HBase table
> - read: 0.1s
> - count: 3.8s
> - aggregate: 61s
>
> 2nd run:
> > 5.2M rows
> - write: 11m
> * had 4 region servers go down, had to retry the 5.2M row write
>
> > 5.9M rows in HBase table
> - read: 8s
> - count: 3m
> - aggregate: 46s
>
> 3rd run:
> > 6.8M rows
> - write: 9.6m
>
> > 12.7M rows
> - read: 10s
> - count: 3m
> - aggregate: 44s
>
>
> Here are the numbers for Kudu.
> 1st run:
> > 715k rows
> - write: 18s
>
> > 715k rows in Kudu table
> - read: 0.2s
> - count: 18s
> - aggregate: 5s
>
> 2nd run:
> > 5.2M rows
> - write: 33s
>
> > 5.9M rows in Kudu table
> - read: 0.2s
> - count: 16s
> - aggregate: 6s
>
> 3rd run:
> > 6.8M rows
> - write: 27s
>
> > 12.7M rows in Kudu table
> - read: 0.2s
> - count: 16s
> - aggregate: 6s
>
> The Kudu results are impressive if you take these numbers as-is. Kudu is
> close to 18x faster at writing (UPSERT). Kudu is 30x faster at reading
> (HBase times increase as data size grows). Kudu is 7x faster at full row
> counts. Lastly, Kudu is 3x faster doing an aggregate query (count distinct
> event_id’s per user_id). *Remember that this is a small cluster, times are
> still respectable for both systems, HBase could have been configured
> better, and the HBase table could have been better tuned.
>
> Cheers,
> Ben
>
>
> On Jun 15, 2016, at 10:13 AM, Dan Burkert  wrote:
>
> When range partitioning, you add partition splits via the
> CreateTableOptions.addSplitRow method. You can find more about the
> different partitioning options in the schema design guide. We generally
> recommend sticking to hash partitioning if possible, since you don't have
> to determine your own split rows.
>
> - Dan
>
> On Wed, Jun 15, 2016 at 9:17 AM, Benjamin Kim  wrote:
>
>> Todd,
>>
>> I think the locality is not within our setup. We have the compute cluster
>> with Spark, YARN, etc. on its own, and we have the storage cluster with
>> HBase, Kudu, etc. on another. We beefed up the hardware specs on the
>> compute cluster and beefed up storage capacity on the storage cluster. We
>> got this setup idea from the Databricks folks. I do have a question. I
>> created the table to use range partition on columns. I see that if I use
>> hash partition I can set the number of splits, but how do I do that using
>> range (50 nodes * 10 = 500 splits)?
>>
>> Thanks,
>> Ben
>>
>>
>> On Jun 15, 2016, at 9:11 AM, Todd Lipcon  wrote:
>>
>> Awesome use case. One thing to keep in mind is that spark parallelism
>> will be limited by the number of tablets. So, you might want to split into
>> 10 or so buckets per node to get the best query throughput.
>>
>> Usually if you run top on some machines while running the query you can
>> see if it is fully utilizing the cores.
>>
>> Another known issue right now is that spark locality isn't working
>> properly on replicated tables so you will use a lot of network traffic. For
>> a perf test you might want to try a table with replication count 1
>> On Jun 15, 2016 5:26 PM, "Benjamin Kim"  wrote:
>>
>> Hi Todd,
>>
>> I did a simple test of our ad events. We stream using Spark Streaming
>> directly into HBase, and the Data Analysts/Scientists do some
>> insight/discovery work plus some reports generation. For the reports, we
>> use SQL, and the 

Re: Performance Question

2016-06-28 Thread Benjamin Kim
FYI.

I did a quick-n-dirty performance test.

First, the setup:
QA cluster:
15 data nodes
64GB memory each
HBase is using 4GB of memory
Kudu is using 1GB of memory
1 HBase/Kudu master node
64GB memory
HBase/Kudu master is using 1GB of memory each
10Gb Ethernet

Using Spark on both to load/read events data (84 columns per row), I was able 
to record performance for each. On the HBase side, I used the Phoenix 4.7 Spark 
plugin where DataFrames can be used directly. On the Kudu side, I used the 
Spark connector. I created an events table in Phoenix using the CREATE TABLE 
statement and created the equivalent in Kudu using the Spark method based off 
of a DataFrame schema.
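
Roughly, the two load/read paths described above look like the Spark (Scala) sketch below. The package names, option keys, and connection strings ("EVENTS", "zk1:2181", "kudu-master:7051") are illustrative assumptions, not taken from the actual test, and vary with the Phoenix (4.7) and kudu-spark versions in use; pre-1.0 Kudu releases used the org.kududb packages.

import org.apache.spark.sql.{DataFrame, SQLContext, SaveMode}
import org.apache.kudu.spark.kudu._   // org.kududb.spark.kudu in pre-1.0 releases

def writeAndRead(sqlContext: SQLContext, events: DataFrame): Unit = {
  // Phoenix/HBase path: the phoenix-spark plugin upserts a DataFrame into a
  // Phoenix table created beforehand with CREATE TABLE.
  events.write
    .format("org.apache.phoenix.spark")
    .mode(SaveMode.Overwrite)            // phoenix-spark requires Overwrite; rows are upserted
    .options(Map("table" -> "EVENTS", "zkUrl" -> "zk1:2181"))
    .save()

  val fromPhoenix = sqlContext.read
    .format("org.apache.phoenix.spark")
    .options(Map("table" -> "EVENTS", "zkUrl" -> "zk1:2181"))
    .load()

  // Kudu path: upsert through the Kudu Spark connector and read back as a DataFrame.
  val kuduMasters = "kudu-master:7051"
  val kuduContext = new KuduContext(kuduMasters)   // newer versions also take the SparkContext
  kuduContext.upsertRows(events, "events")

  val fromKudu = sqlContext.read
    .options(Map("kudu.master" -> kuduMasters, "kudu.table" -> "events"))
    .kudu

  // The "count" step of the benchmark would be a full count on either DataFrame.
  fromPhoenix.count()
  fromKudu.count()
}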

Here are the numbers for Phoenix/HBase.
1st run:
> 715k rows
- write: 2.7m

> 715k rows in HBase table
- read: 0.1s
- count: 3.8s
- aggregate: 61s

2nd run:
> 5.2M rows
- write: 11m
* had 4 region servers go down, had to retry the 5.2M row write

> 5.9M rows in HBase table
- read: 8s
- count: 3m
- aggregate: 46s

3rd run:
> 6.8M rows
- write: 9.6m

> 12.7M rows
- read: 10s
- count: 3m
- aggregate: 44s


Here are the numbers for Kudu.
1st run:
> 715k rows
- write: 18s

> 715k rows in Kudu table
- read: 0.2s
- count: 18s
- aggregate: 5s

2nd run:
> 5.2M rows
- write: 33s

> 5.9M rows in Kudu table
- read: 0.2s
- count: 16s
- aggregate: 6s

3rd run:
> 6.8M rows
- write: 27s

> 12.7M rows in Kudu table
- read: 0.2s
- count: 16s
- aggregate: 6s

The Kudu results are impressive if you take these numbers as-is. Kudu is close 
to 18x faster at writing (UPSERT). Kudu is 30x faster at reading (HBase times 
increase as data size grows). Kudu is 7x faster at full row counts. Lastly, 
Kudu is 3x faster doing an aggregate query (count distinct event_id’s per 
user_id). *Remember that this is a small cluster, times are still respectable for 
both systems, HBase could have been configured better, and the HBase table 
could have been better tuned.
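
The aggregate query itself isn't shown in the thread; in DataFrame terms it would presumably be something along these lines, where events is a DataFrame read from either store and the column names are the ones mentioned above:

import org.apache.spark.sql.functions.countDistinct

// Assumed shape of the aggregate benchmark: distinct event_ids per user_id.
val distinctEventsPerUser = events
  .groupBy("user_id")
  .agg(countDistinct("event_id").alias("distinct_events"))
distinctEventsPerUser.count()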

Cheers,
Ben


> On Jun 15, 2016, at 10:13 AM, Dan Burkert  wrote:
> 
> When range partitioning, you add partition splits via the 
> CreateTableOptions.addSplitRow method. You can find more about the different 
> partitioning options in the schema design guide. We generally recommend 
> sticking to hash partitioning if possible, since you don't have to determine 
> your own split rows.
> 
> - Dan
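
A minimal sketch of what Dan describes, using the Kudu Java client (called here from Scala); the column names, split values, and master address are hypothetical, and the client classes live under org.kududb.client in pre-1.0 releases rather than org.apache.kudu.client:

import org.apache.kudu.ColumnSchema.ColumnSchemaBuilder
import org.apache.kudu.client.{CreateTableOptions, KuduClient}
import org.apache.kudu.{Schema, Type}
import scala.collection.JavaConverters._

// Hypothetical minimal schema; only the primary-key column matters for splitting.
val columns = List(
  new ColumnSchemaBuilder("user_id", Type.STRING).key(true).build(),
  new ColumnSchemaBuilder("event_id", Type.STRING).build()
).asJava
val schema = new Schema(columns)

val client = new KuduClient.KuduClientBuilder("kudu-master:7051").build()

// Range partitioning on user_id: each addSplitRow call adds one boundary,
// so two split rows here yield three tablets.
val options = new CreateTableOptions()
  .setRangePartitionColumns(List("user_id").asJava)
Seq("h", "p").foreach { boundary =>
  val split = schema.newPartialRow()
  split.addString("user_id", boundary)
  options.addSplitRow(split)
}

client.createTable("events_range", schema, options)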
> 
> On Wed, Jun 15, 2016 at 9:17 AM, Benjamin Kim wrote:
> Todd,
> 
> I think the locality is not within our setup. We have the compute cluster 
> with Spark, YARN, etc. on its own, and we have the storage cluster with 
> HBase, Kudu, etc. on another. We beefed up the hardware specs on the compute 
> cluster and beefed up storage capacity on the storage cluster. We got this 
> setup idea from the Databricks folks. I do have a question. I created the 
> table to use range partition on columns. I see that if I use hash partition I 
> can set the number of splits, but how do I do that using range (50 nodes * 10 
> = 500 splits)?
> 
> Thanks,
> Ben
> 
> 
>> On Jun 15, 2016, at 9:11 AM, Todd Lipcon wrote:
>> 
>> Awesome use case. One thing to keep in mind is that spark parallelism will 
>> be limited by the number of tablets. So, you might want to split into 10 or 
>> so buckets per node to get the best query throughput.
>> 
>> Usually if you run top on some machines while running the query you can see 
>> if it is fully utilizing the cores.
>> 
>> Another known issue right now is that spark locality isn't working properly 
>> on replicated tables so you will use a lot of network traffic. For a perf 
>> test you might want to try a table with replication count 1
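
Putting Todd's two suggestions together (roughly 10 hash buckets per node and a single replica for a pure perf test) with the Spark-side table creation Ben mentioned might look something like the sketch below. The table name, key column, and node count are assumptions for illustration, and KuduContext.createTable's exact signature varies slightly across kudu-spark versions:

import org.apache.kudu.client.CreateTableOptions
import org.apache.kudu.spark.kudu.KuduContext
import org.apache.spark.sql.DataFrame
import scala.collection.JavaConverters._

// Create the Kudu table from the DataFrame's schema with enough tablets to
// keep all Spark cores busy, and one replica for benchmarking only.
def createPerfTable(kuduContext: KuduContext, events: DataFrame): Unit = {
  val tabletServers = 15                 // the QA cluster's data node count
  val options = new CreateTableOptions()
    .addHashPartitions(List("user_id").asJava, tabletServers * 10)   // ~10 buckets per node
    .setNumReplicas(1)                   // perf test only; keep 3 replicas in production
  kuduContext.createTable("events_perf", events.schema, Seq("user_id"), options)
}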
>> 
>> On Jun 15, 2016 5:26 PM, "Benjamin Kim" wrote:
>> Hi Todd,
>> 
>> I did a simple test of our ad events. We stream using Spark Streaming 
>> directly into HBase, and the Data Analysts/Scientists do some 
>> insight/discovery work plus some reports generation. For the reports, we use 
>> SQL, and for the deeper stuff, we use Spark. In Spark, our main data 
>> currency store of choice is DataFrames.
>> 
>> The schema is around 83 columns wide where most are of the string data type.
>> 
>> "event_type", "timestamp", "event_valid", "event_subtype", "user_ip", 
>> "user_id", "mappable_id",
>> "cookie_status", "profile_status", "user_status", "previous_timestamp", 
>> "user_agent", "referer",
>> "host_domain", "uri", "request_elapsed", "browser_languages", "acamp_id", 
>> "creative_id",
>> "location_id", “pcamp_id",
>> "pdomain_id", "continent_code", "country", "region", "dma", "city", "zip", 
>> "isp", "line_speed",
>> "gender", "year_of_birth", "behaviors_read", "behaviors_written", 
>> "key_value_pairs",