Re: Spark on Kudu

2016-06-15 Thread Benjamin Kim
Since I have created permanent tables using org.apache.spark.sql.jdbc and 
com.databricks.spark.csv with sqlContext, I was wondering if I can do the same 
with Kudu tables?

CREATE TABLE 
USING org.kududb.spark.kudu
OPTIONS ("kudu.master” "kudu_master","kudu.table” "kudu_tablename”)

Is this possible? By the way, the above didn’t work for me.
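
For reference, a minimal sketch of the DataFrame-based route that the rest of this thread shows working, registered as a temp table so it can be queried with SQL like the jdbc/csv tables above; the master address and table name are placeholders, and it assumes the kudu-spark connector jar is on the classpath.

val kuduDF = sqlContext.read
  .format("org.kududb.spark.kudu")
  .options(Map("kudu.master" -> "kudu_master", "kudu.table" -> "kudu_tablename"))
  .load()

// Register it so it can be queried with SQL.
kuduDF.registerTempTable("kudu_tablename")
sqlContext.sql("SELECT COUNT(*) FROM kudu_tablename").show()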

Thanks,
Ben

> On Jun 14, 2016, at 6:08 PM, Dan Burkert  wrote:
> 
> I'm not sure exactly what the semantics will be, but at least one of them 
> will be upsert.  These modes come from spark, and they were really designed 
> for file-backed storage and not table storage.  We may want to do append = 
> upsert, and overwrite = truncate + insert.  I think that may match the normal 
> spark semantics more closely.
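
A short sketch of how those save modes are selected on the write path today, using the df.write pattern that appears later in this thread; kuduMaster and tableName are placeholders, and the import path for the connector's implicits is an assumption.

import org.kududb.spark.kudu._

// Per this thread's description of the current connector: "append" issues
// UPDATEs (rows must already exist) and "overwrite" issues INSERTs; upsert
// is not wired into the connector yet.
df.write
  .options(Map("kudu.master" -> kuduMaster, "kudu.table" -> tableName))
  .mode("append")            // or .mode("overwrite")
  .kudu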
> 
> - Dan
> 
> On Tue, Jun 14, 2016 at 6:00 PM, Benjamin Kim  > wrote:
> Dan,
> 
> Thanks for the information. That would mean both “append” and “overwrite” 
> modes would be combined or not needed in the future.
> 
> Cheers,
> Ben
> 
>> On Jun 14, 2016, at 5:57 PM, Dan Burkert > > wrote:
>> 
>> Right now append uses an update Kudu operation, which requires the row 
>> already be present in the table. Overwrite maps to insert.  Kudu very 
>> recently got upsert support baked in, but it hasn't yet been integrated into 
>> the Spark connector.  So pretty soon these sharp edges will get a lot 
>> better, since upsert is the way to go for most spark workloads.
>> 
>> - Dan
>> 
>> On Tue, Jun 14, 2016 at 5:41 PM, Benjamin Kim > > wrote:
>> I tried to use the “append” mode, and it worked. Over 3.8 million rows in 
>> 64s. I would assume that now I can use the “overwrite” mode on existing 
>> data. Now, I have to find answers to these questions. What would happen if I 
>> “append” to the data in the Kudu table if the data already exists? What 
>> would happen if I “overwrite” existing data when the DataFrame has data in 
>> it that does not exist in the Kudu table? I need to evaluate the best way to 
>> simulate the UPSERT behavior we get from HBase, because that is what our use case requires.
>> 
>> Thanks,
>> Ben
>> 
>> 
>> 
>>> On Jun 14, 2016, at 5:05 PM, Benjamin Kim >> > wrote:
>>> 
>>> Hi,
>>> 
>>> Now, I’m getting this error when trying to write to the table.
>>> 
>>> import scala.collection.JavaConverters._
>>> val key_seq = Seq("my_id")
>>> val key_list = List("my_id").asJava
>>> kuduContext.createTable(tableName, df.schema, key_seq, new 
>>> CreateTableOptions().setNumReplicas(1).addHashPartitions(key_list, 100))
>>> 
>>> df.write
>>> .options(Map("kudu.master" -> kuduMaster,"kudu.table" -> tableName))
>>> .mode("overwrite")
>>> .kudu
>>> 
>>> java.lang.RuntimeException: failed to write 1000 rows from DataFrame to 
>>> Kudu; sample errors: Not found: key not found (error 0)Not found: key not 
>>> found (error 0)Not found: key not found (error 0)Not found: key not found 
>>> (error 0)Not found: key not found (error 0)
>>> 
>>> Does the key field need to be first in the DataFrame?
>>> 
>>> Thanks,
>>> Ben
>>> 
 On Jun 14, 2016, at 4:28 PM, Dan Burkert > wrote:
 
 
 
 On Tue, Jun 14, 2016 at 4:20 PM, Benjamin Kim > wrote:
 Dan,
 
 Thanks! It got further. Now, how do I set the Primary Key to be a 
 column(s) in the DataFrame and set the partitioning? Is it like this?
 
 kuduContext.createTable(tableName, df.schema, Seq("my_id"), new 
 CreateTableOptions().setNumReplicas(1).addHashPartitions("my_id"))
 
 java.lang.IllegalArgumentException: Table partitioning must be specified 
 using setRangePartitionColumns or addHashPartitions
 
 Yep.  The `Seq("my_id")` part of that call is specifying the set of 
 primary key columns, so in this case you have specified the single PK 
 column "my_id".  The `addHashPartitions` call adds hash partitioning to 
 the table, in this case over the column "my_id" (which is good, it must be 
 over one or more PK columns, so in this case "my_id" is the one and only 
 valid combination).  However, the call to `addHashPartitions` also takes 
 the number of buckets as the second param.  You shouldn't get the 
 IllegalArgumentException as long as you are specifying either 
 `addHashPartitions` or `setRangePartitionColumns`.
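
For reference, the corrected call would look like the working form that appears earlier in this thread, with the bucket count supplied as the second argument (100 buckets here is just an example value):

import scala.collection.JavaConverters._
import org.kududb.client.CreateTableOptions

kuduContext.createTable(
  tableName,
  df.schema,
  Seq("my_id"),                                      // primary key column(s)
  new CreateTableOptions()
    .setNumReplicas(1)
    .addHashPartitions(List("my_id").asJava, 100))   // 100 = number of hash buckets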
 
 - Dan
  
 
 Thanks,
 Ben
 
 
> On Jun 14, 2016, at 4:07 PM, Dan Burkert  > wrote:
> 
> Looks like we're missing an import statement in that example.  Could you 
> try:
> 
> import org.kududb.client._
> and try again?
> 
> - Dan
> 
> On Tue, Jun 14, 2016 at 4:01 PM, Benjamin Kim 

Re: Kudu QuickStart VM 0.9.0?

2016-06-15 Thread Jean-Daniel Cryans
It's up. It's a different filename since I also upgraded from 5.4.9 to
5.7.1, so you'll need to update your kudu-examples repo first.

J-D

On Wed, Jun 15, 2016 at 10:19 AM, Tom White  wrote:

> Thanks J-D.
>
> Tom
>
> On Wed, Jun 15, 2016 at 6:06 PM, Jean-Daniel Cryans 
> wrote:
> > Hey Tom,
> >
> > Yeah it's on me to update it, trying to get that done this week.
> >
> > J-D
> >
> > On Wed, Jun 15, 2016 at 10:04 AM, Tom White  wrote:
> >>
> >> Hi,
> >>
> >> I tried downloading the VM for the new release, but it looks like it's
> >> still on 0.7.0:
> >>
> >>
> >>
> https://github.com/cloudera/kudu-examples/commit/9a22e9f6280094f029c049a7776cce3458150e7f
> >>
> >> Are there plans to update it? I find it very useful for trying out Kudu.
> >>
> >> Thanks!
> >> Tom
> >
> >
>


Re: Performance Question

2016-06-15 Thread Dan Burkert
Adding partition splits when range partitioning is done via the
CreateTableOptions.addSplitRow method.
You can find more about the different partitioning options in the schema
design guide.
We generally recommend sticking to hash partitioning if possible, since you
don't have to determine your own split rows.
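
As a rough sketch of what explicit split rows could look like with the Java client API mentioned above; the key column name, its type, and the boundary values are made-up examples, and the package names may differ between Kudu versions.

import scala.collection.JavaConverters._
import org.kududb.Schema
import org.kududb.client.{CreateTableOptions, PartialRow}

// Range-partition on an assumed BIGINT key column "my_id"; each split row adds one
// boundary, so boundaries.size split rows yield boundaries.size + 1 tablets.
def rangeSplitOptions(kuduSchema: Schema, boundaries: Seq[Long]): CreateTableOptions = {
  val opts = new CreateTableOptions()
    .setNumReplicas(3)
    .setRangePartitionColumns(List("my_id").asJava)
  boundaries.foreach { bound =>
    val row: PartialRow = kuduSchema.newPartialRow()
    row.addLong("my_id", bound)        // split point value
    opts.addSplitRow(row)
  }
  opts
}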

- Dan

On Wed, Jun 15, 2016 at 9:17 AM, Benjamin Kim  wrote:

> Todd,
>
> I think locality does not apply to our setup anyway. We have the compute cluster
> with Spark, YARN, etc. on its own, and we have the storage cluster with
> HBase, Kudu, etc. on another. We beefed up the hardware specs on the
> compute cluster and beefed up storage capacity on the storage cluster. We
> got this setup idea from the Databricks folks. I do have a question. I
> created the table to use range partition on columns. I see that if I use
> hash partition I can set the number of splits, but how do I do that using
> range (50 nodes * 10 = 500 splits)?
>
> Thanks,
> Ben
>
>
> On Jun 15, 2016, at 9:11 AM, Todd Lipcon  wrote:
>
> Awesome use case. One thing to keep in mind is that spark parallelism will
> be limited by the number of tablets. So, you might want to split into 10 or
> so buckets per node to get the best query throughput.
>
> Usually if you run top on some machines while running the query you can
> see if it is fully utilizing the cores.
>
> Another known issue right now is that spark locality isn't working
> properly on replicated tables so you will use a lot of network traffic. For
> a perf test you might want to try a table with replication count 1
> On Jun 15, 2016 5:26 PM, "Benjamin Kim"  wrote:
>
> Hi Todd,
>
> I did a simple test of our ad events. We stream using Spark Streaming
> directly into HBase, and the Data Analysts/Scientists do some
> insight/discovery work plus some reports generation. For the reports, we
> use SQL, and for the deeper stuff, we use Spark. In Spark, our main data
> currency store of choice is DataFrames.
>
> The schema is around 83 columns wide where most are of the string data
> type.
>
> "event_type", "timestamp", "event_valid", "event_subtype", "user_ip",
> "user_id", "mappable_id",
> "cookie_status", "profile_status", "user_status", "previous_timestamp",
> "user_agent", "referer",
> "host_domain", "uri", "request_elapsed", "browser_languages", "acamp_id",
> "creative_id",
> "location_id", “pcamp_id",
> "pdomain_id", "continent_code", "country", "region", "dma", "city", "zip",
> "isp", "line_speed",
> "gender", "year_of_birth", "behaviors_read", "behaviors_written",
> "key_value_pairs", "acamp_candidates",
> "tag_format", "optimizer_name", "optimizer_version", "optimizer_ip",
> "pixel_id", “video_id",
> "video_network_id", "video_time_watched", "video_percentage_watched",
> "video_media_type",
> "video_player_iframed", "video_player_in_view", "video_player_width",
> "video_player_height",
> "conversion_valid_sale", "conversion_sale_amount",
> "conversion_commission_amount", "conversion_step",
> "conversion_currency", "conversion_attribution", "conversion_offer_id",
> "custom_info", "frequency",
> "recency_seconds", "cost", "revenue", “optimizer_acamp_id",
> "optimizer_creative_id", "optimizer_ecpm", "impression_id",
> "diagnostic_data",
> "user_profile_mapping_source", "latitude", "longitude", "area_code",
> "gmt_offset", "in_dst",
> "proxy_type", "mobile_carrier", "pop", "hostname", "profile_expires",
> "timestamp_iso", "reference_id",
> "identity_organization", "identity_method"
>
> Most queries are like counts of how many users use what browser, how many
> are unique users, etc. The part that scares most users is when it comes to
> joining this data with other dimension/3rd party events tables because of
> the sheer size of it.
>
> We do what most companies do, similar to what I saw in earlier
> presentations of Kudu. We dump data out of HBase into partitioned Parquet
> tables to make query performance manageable.
>
> I will coordinate with a data scientist today to do some tests. He is
> working on identity matching/record linking of users from 2 domains: US and
> Singapore, using probabilistic deduping algorithms. I will load the data
> from ad events from both countries, and let him run his process against
> this data in Kudu. I hope this will “wow” the team.
>
> Thanks,
> Ben
>
> On Jun 15, 2016, at 12:47 AM, Todd Lipcon  wrote:
>
> Hi Benjamin,
>
> What workload are you using for benchmarks? Using spark or something more
> custom? rdd or data frame or SQL, etc? Maybe you can share the schema and
> some queries
>
> Todd
>
> Todd
> On Jun 15, 2016 8:10 AM, "Benjamin Kim"  wrote:
>
>> Hi Todd,
>>
>> Now that Kudu 0.9.0 is out, I have done some tests. Already, I am
>> impressed. Compared to HBase, read and write 

Kudu QuickStart VM 0.9.0?

2016-06-15 Thread Tom White
Hi,

I tried downloading the VM for the new release, but it looks like it's
still on 0.7.0:

https://github.com/cloudera/kudu-examples/commit/9a22e9f6280094f029c049a7776cce3458150e7f

Are there plans to update it? I find it very useful for trying out Kudu.

Thanks!
Tom


Re: Performance Question

2016-06-15 Thread Benjamin Kim
Todd,

I think locality does not apply to our setup anyway. We have the compute cluster with 
Spark, YARN, etc. on its own, and we have the storage cluster with HBase, Kudu, 
etc. on another. We beefed up the hardware specs on the compute cluster and 
beefed up storage capacity on the storage cluster. We got this setup idea from 
the Databricks folks. I do have a question. I created the table to use range 
partition on columns. I see that if I use hash partition I can set the number 
of splits, but how do I do that using range (50 nodes * 10 = 500 splits)?

Thanks,
Ben

> On Jun 15, 2016, at 9:11 AM, Todd Lipcon  wrote:
> 
> Awesome use case. One thing to keep in mind is that spark parallelism will be 
> limited by the number of tablets. So, you might want to split into 10 or so 
> buckets per node to get the best query throughput.
> 
> Usually if you run top on some machines while running the query you can see 
> if it is fully utilizing the cores.
> 
> Another known issue right now is that spark locality isn't working properly 
> on replicated tables so you will use a lot of network traffic. For a perf 
> test you might want to try a table with replication count 1
> 
> On Jun 15, 2016 5:26 PM, "Benjamin Kim"  > wrote:
> Hi Todd,
> 
> I did a simple test of our ad events. We stream using Spark Streaming 
> directly into HBase, and the Data Analysts/Scientists do some 
> insight/discovery work plus some reports generation. For the reports, we use 
> SQL, and for the deeper stuff, we use Spark. In Spark, our main data 
> currency store of choice is DataFrames.
> 
> The schema is around 83 columns wide where most are of the string data type.
> 
> "event_type", "timestamp", "event_valid", "event_subtype", "user_ip", 
> "user_id", "mappable_id",
> "cookie_status", "profile_status", "user_status", "previous_timestamp", 
> "user_agent", "referer",
> "host_domain", "uri", "request_elapsed", "browser_languages", "acamp_id", 
> "creative_id",
> "location_id", “pcamp_id",
> "pdomain_id", "continent_code", "country", "region", "dma", "city", "zip", 
> "isp", "line_speed",
> "gender", "year_of_birth", "behaviors_read", "behaviors_written", 
> "key_value_pairs", "acamp_candidates",
> "tag_format", "optimizer_name", "optimizer_version", "optimizer_ip", 
> "pixel_id", “video_id",
> "video_network_id", "video_time_watched", "video_percentage_watched", 
> "video_media_type",
> "video_player_iframed", "video_player_in_view", "video_player_width", 
> "video_player_height",
> "conversion_valid_sale", "conversion_sale_amount", 
> "conversion_commission_amount", "conversion_step",
> "conversion_currency", "conversion_attribution", "conversion_offer_id", 
> "custom_info", "frequency",
> "recency_seconds", "cost", "revenue", “optimizer_acamp_id",
> "optimizer_creative_id", "optimizer_ecpm", "impression_id", "diagnostic_data",
> "user_profile_mapping_source", "latitude", "longitude", "area_code", 
> "gmt_offset", "in_dst",
> "proxy_type", "mobile_carrier", "pop", "hostname", "profile_expires", 
> "timestamp_iso", "reference_id",
> "identity_organization", "identity_method"
> 
> Most queries are like counts of how many users use what browser, how many are 
> unique users, etc. The part that scares most users is when it comes to 
> joining this data with other dimension/3rd party events tables because of 
> the sheer size of it.
> 
> We do what most companies do, similar to what I saw in earlier presentations 
> of Kudu. We dump data out of HBase into partitioned Parquet tables to make 
> query performance manageable.
> 
> I will coordinate with a data scientist today to do some tests. He is working 
> on identity matching/record linking of users from 2 domains: US and 
> Singapore, using probabilistic deduping algorithms. I will load the data from 
> ad events from both countries, and let him run his process against this data 
> in Kudu. I hope this will “wow” the team.
> 
> Thanks,
> Ben
> 
>> On Jun 15, 2016, at 12:47 AM, Todd Lipcon > > wrote:
>> 
>> Hi Benjamin,
>> 
>> What workload are you using for benchmarks? Using spark or something more 
>> custom? rdd or data frame or SQL, etc? Maybe you can share the schema and 
>> some queries
>> 
>> Todd
>> 
>> Todd
>> 
>> On Jun 15, 2016 8:10 AM, "Benjamin Kim" > > wrote:
>> Hi Todd,
>> 
>> Now that Kudu 0.9.0 is out, I have done some tests. Already, I am impressed. 
>> Compared to HBase, read and write performance are better. Write performance 
>> has the greatest improvement (> 4x), while read is > 1.5x. Admittedly, these are 
>> only preliminary tests. Do you know of a way to really do some conclusive 
>> tests? I want to see if I can match your results on my 50 node cluster.
>> 
>> Thanks,
>> Ben
>> 
>>> On May 30, 2016, at 10:33 AM, Todd Lipcon >> > wrote:
>>> 
>>> On Sat, May 28, 2016 

Re: Performance Question

2016-06-15 Thread Todd Lipcon
Awesome use case. One thing to keep in mind is that spark parallelism will
be limited by the number of tablets. So, you might want to split into 10 or
so buckets per node to get the best query throughput.

Usually if you run top on some machines while running the query you can see
if it is fully utilizing the cores.

Another known issue right now is that spark locality isn't working properly
on replicated tables so you will use a lot of network traffic. For a perf
test you might want to try a table with replication count 1
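
A sketch of a perf-test table following that advice (10 hash buckets per node and replication factor 1); the table name and key column are assumptions, and 50 nodes matches the cluster size mentioned elsewhere in this thread.

import scala.collection.JavaConverters._
import org.kududb.client.CreateTableOptions

val numNodes = 50
val bucketsPerNode = 10                          // ~10 tablets per node for Spark parallelism
kuduContext.createTable(
  "ad_events_perf_test",                         // hypothetical table name
  df.schema,
  Seq("impression_id"),                          // assumed primary key column
  new CreateTableOptions()
    .setNumReplicas(1)                           // replication 1 for the perf test only
    .addHashPartitions(List("impression_id").asJava, numNodes * bucketsPerNode))
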
On Jun 15, 2016 5:26 PM, "Benjamin Kim"  wrote:

Hi Todd,

I did a simple test of our ad events. We stream using Spark Streaming
directly into HBase, and the Data Analysts/Scientists do some
insight/discovery work plus some reports generation. For the reports, we
use SQL, and for the deeper stuff, we use Spark. In Spark, our main data
currency store of choice is DataFrames.

The schema is around 83 columns wide where most are of the string data type.

"event_type", "timestamp", "event_valid", "event_subtype", "user_ip",
"user_id", "mappable_id",
"cookie_status", "profile_status", "user_status", "previous_timestamp",
"user_agent", "referer",
"host_domain", "uri", "request_elapsed", "browser_languages", "acamp_id",
"creative_id",
"location_id", “pcamp_id",
"pdomain_id", "continent_code", "country", "region", "dma", "city", "zip",
"isp", "line_speed",
"gender", "year_of_birth", "behaviors_read", "behaviors_written",
"key_value_pairs", "acamp_candidates",
"tag_format", "optimizer_name", "optimizer_version", "optimizer_ip",
"pixel_id", “video_id",
"video_network_id", "video_time_watched", "video_percentage_watched",
"video_media_type",
"video_player_iframed", "video_player_in_view", "video_player_width",
"video_player_height",
"conversion_valid_sale", "conversion_sale_amount",
"conversion_commission_amount", "conversion_step",
"conversion_currency", "conversion_attribution", "conversion_offer_id",
"custom_info", "frequency",
"recency_seconds", "cost", "revenue", “optimizer_acamp_id",
"optimizer_creative_id", "optimizer_ecpm", "impression_id",
"diagnostic_data",
"user_profile_mapping_source", "latitude", "longitude", "area_code",
"gmt_offset", "in_dst",
"proxy_type", "mobile_carrier", "pop", "hostname", "profile_expires",
"timestamp_iso", "reference_id",
"identity_organization", "identity_method"

Most queries are like counts of how many users use what browser, how many
are unique users, etc. The part that scares most users is when it comes to
joining this data with other dimension/3rd party events tables because of
the sheer size of it.

We do what most companies do, similar to what I saw in earlier
presentations of Kudu. We dump data out of HBase into partitioned Parquet
tables to make query performance manageable.

I will coordinate with a data scientist today to do some tests. He is
working on identity matching/record linking of users from 2 domains: US and
Singapore, using probabilistic deduping algorithms. I will load the data
from ad events from both countries, and let him run his process against
this data in Kudu. I hope this will “wow” the team.

Thanks,
Ben

On Jun 15, 2016, at 12:47 AM, Todd Lipcon  wrote:

Hi Benjamin,

What workload are you using for benchmarks? Using spark or something more
custom? rdd or data frame or SQL, etc? Maybe you can share the schema and
some queries

Todd

Todd
On Jun 15, 2016 8:10 AM, "Benjamin Kim"  wrote:

> Hi Todd,
>
> Now that Kudu 0.9.0 is out, I have done some tests. Already, I am
> impressed. Compared to HBase, read and write performance are better. Write
> performance has the greatest improvement (> 4x), while read is > 1.5x.
> Admittedly, these are only preliminary tests. Do you know of a way to really do
> some conclusive tests? I want to see if I can match your results on my 50
> node cluster.
>
> Thanks,
> Ben
>
> On May 30, 2016, at 10:33 AM, Todd Lipcon  wrote:
>
> On Sat, May 28, 2016 at 7:12 AM, Benjamin Kim  wrote:
>
>> Todd,
>>
>> It sounds like Kudu can possibly top or match those numbers put out by
>> Aerospike. Do you have any performance statistics published or any
>> instructions on how to measure them myself as a good way to test? In addition,
>> this will be a test using Spark, so should I wait for Kudu version 0.9.0
>> where support will be built in?
>>
>
> We don't have a lot of benchmarks published yet, especially on the write
> side. I've found that thorough cross-system benchmarks are very difficult
> to do fairly and accurately, and often times users end up misguided if they
> pay too much attention to them :) So, given a finite number of developers
> working on Kudu, I think we've tended to spend more time on the project
> itself and less time focusing on "competition". I'm sure there are use
> cases where Kudu will beat out Aerospike, and probably use cases where
> Aerospike will beat Kudu as well.
>
> From my perspective, it would be 

Re: Performance Question

2016-06-15 Thread Benjamin Kim
Hi Todd,

I did a simple test of our ad events. We stream using Spark Streaming directly 
into HBase, and the Data Analysts/Scientists do some insight/discovery work 
plus some reports generation. For the reports, we use SQL, and for the deeper 
stuff, we use Spark. In Spark, our main data currency store of choice is 
DataFrames.

The schema is around 83 columns wide where most are of the string data type.

"event_type", "timestamp", "event_valid", "event_subtype", "user_ip", 
"user_id", "mappable_id",
"cookie_status", "profile_status", "user_status", "previous_timestamp", 
"user_agent", "referer",
"host_domain", "uri", "request_elapsed", "browser_languages", "acamp_id", 
"creative_id",
"location_id", “pcamp_id",
"pdomain_id", "continent_code", "country", "region", "dma", "city", "zip", 
"isp", "line_speed",
"gender", "year_of_birth", "behaviors_read", "behaviors_written", 
"key_value_pairs", "acamp_candidates",
"tag_format", "optimizer_name", "optimizer_version", "optimizer_ip", 
"pixel_id", “video_id",
"video_network_id", "video_time_watched", "video_percentage_watched", 
"video_media_type",
"video_player_iframed", "video_player_in_view", "video_player_width", 
"video_player_height",
"conversion_valid_sale", "conversion_sale_amount", 
"conversion_commission_amount", "conversion_step",
"conversion_currency", "conversion_attribution", "conversion_offer_id", 
"custom_info", "frequency",
"recency_seconds", "cost", "revenue", “optimizer_acamp_id",
"optimizer_creative_id", "optimizer_ecpm", "impression_id", "diagnostic_data",
"user_profile_mapping_source", "latitude", "longitude", "area_code", 
"gmt_offset", "in_dst",
"proxy_type", "mobile_carrier", "pop", "hostname", "profile_expires", 
"timestamp_iso", "reference_id",
"identity_organization", "identity_method"

Most queries are like counts of how many users use what browser, how many are 
unique users, etc. The part that scares most users is when it comes to joining 
this data with other dimension/3rd party events tables because of the sheer size of 
it.

We do what most companies do, similar to what I saw in earlier presentations of 
Kudu. We dump data out of HBase into partitioned Parquet tables to make query 
performance manageable.

I will coordinate with a data scientist today to do some tests. He is working 
on identity matching/record linking of users from 2 domains: US and Singapore, 
using probabilistic deduping algorithms. I will load the data from ad events 
from both countries, and let him run his process against this data in Kudu. I 
hope this will “wow” the team.

Thanks,
Ben

> On Jun 15, 2016, at 12:47 AM, Todd Lipcon  wrote:
> 
> Hi Benjamin,
> 
> What workload are you using for benchmarks? Using spark or something more 
> custom? rdd or data frame or SQL, etc? Maybe you can share the schema and 
> some queries
> 
> Todd
> 
> Todd
> 
> On Jun 15, 2016 8:10 AM, "Benjamin Kim"  > wrote:
> Hi Todd,
> 
> Now that Kudu 0.9.0 is out, I have done some tests. Already, I am impressed. 
> Compared to HBase, read and write performance are better. Write performance 
> has the greatest improvement (> 4x), while read is > 1.5x. Admittedly, these are 
> only preliminary tests. Do you know of a way to really do some conclusive 
> tests? I want to see if I can match your results on my 50 node cluster.
> 
> Thanks,
> Ben
> 
>> On May 30, 2016, at 10:33 AM, Todd Lipcon > > wrote:
>> 
>> On Sat, May 28, 2016 at 7:12 AM, Benjamin Kim > > wrote:
>> Todd,
>> 
>> It sounds like Kudu can possibly top or match those numbers put out by 
>> Aerospike. Do you have any performance statistics published or any 
>> instructions on how to measure them myself as a good way to test? In addition, 
>> this will be a test using Spark, so should I wait for Kudu version 0.9.0 
>> where support will be built in?
>> 
>> We don't have a lot of benchmarks published yet, especially on the write 
>> side. I've found that thorough cross-system benchmarks are very difficult to 
>> do fairly and accurately, and often times users end up misguided if they pay 
>> too much attention to them :) So, given a finite number of developers 
>> working on Kudu, I think we've tended to spend more time on the project 
>> itself and less time focusing on "competition". I'm sure there are use cases 
>> where Kudu will beat out Aerospike, and probably use cases where Aerospike 
>> will beat Kudu as well.
>> 
>> From my perspective, it would be great if you can share some details of your 
>> workload, especially if there are some areas you're finding Kudu lacking. 
>> Maybe we can spot some easy code changes we could make to improve 
>> performance, or suggest a tuning variable you could change.
>> 
>> -Todd
>> 
>> 
>>> On May 27, 2016, at 9:19 PM, Todd Lipcon >> > wrote:
>>> 
>>> On Fri, May 27, 2016 at 8:20 

Re: Performance Question

2016-06-15 Thread Todd Lipcon
Hi Benjamin,

What workload are you using for benchmarks? Using spark or something more
custom? rdd or data frame or SQL, etc? Maybe you can share the schema and
some queries

Todd

Todd
On Jun 15, 2016 8:10 AM, "Benjamin Kim"  wrote:

> Hi Todd,
>
> Now that Kudu 0.9.0 is out, I have done some tests. Already, I am
> impressed. Compared to HBase, read and write performance are better. Write
> performance has the greatest improvement (> 4x), while read is > 1.5x.
> Admittedly, these are only preliminary tests. Do you know of a way to really do
> some conclusive tests? I want to see if I can match your results on my 50
> node cluster.
>
> Thanks,
> Ben
>
> On May 30, 2016, at 10:33 AM, Todd Lipcon  wrote:
>
> On Sat, May 28, 2016 at 7:12 AM, Benjamin Kim  wrote:
>
>> Todd,
>>
>> It sounds like Kudu can possibly top or match those numbers put out by
>> Aerospike. Do you have any performance statistics published or any
>> instructions on how to measure them myself as a good way to test? In addition,
>> this will be a test using Spark, so should I wait for Kudu version 0.9.0
>> where support will be built in?
>>
>
> We don't have a lot of benchmarks published yet, especially on the write
> side. I've found that thorough cross-system benchmarks are very difficult
> to do fairly and accurately, and often times users end up misguided if they
> pay too much attention to them :) So, given a finite number of developers
> working on Kudu, I think we've tended to spend more time on the project
> itself and less time focusing on "competition". I'm sure there are use
> cases where Kudu will beat out Aerospike, and probably use cases where
> Aerospike will beat Kudu as well.
>
> From my perspective, it would be great if you can share some details of
> your workload, especially if there are some areas you're finding Kudu
> lacking. Maybe we can spot some easy code changes we could make to improve
> performance, or suggest a tuning variable you could change.
>
> -Todd
>
>
>> On May 27, 2016, at 9:19 PM, Todd Lipcon  wrote:
>>
>> On Fri, May 27, 2016 at 8:20 PM, Benjamin Kim  wrote:
>>
>>> Hi Mike,
>>>
>>> First of all, thanks for the link. It looks like an interesting read. I
>>> checked that Aerospike is currently at version 3.8.2.3, and in the article,
>>> they are evaluating version 3.5.4. The main thing that impressed me was
>>> their claim that they can beat Cassandra and HBase by 8x for writing and
>>> 25x for reading. Their big claim to fame is that Aerospike can write 1M
>>> records per second with only 50 nodes. I wanted to see if this is real.
>>>
>>
>> 1M records per second on 50 nodes is pretty doable by Kudu as well,
>> depending on the size of your records and the insertion order. I've been
>> playing with a ~70 node cluster recently and seen 1M+ writes/second
>> sustained, and bursting above 4M. These are 1KB rows with 11 columns, and
>> with pretty old HDD-only nodes. I think newer flash-based nodes could do
>> better.
>>
>>
>>>
>>> To answer your questions, we have a DMP with user profiles with many
>>> attributes. We create segmentation information off of these attributes to
>>> classify them. Then, we can target advertising appropriately for our sales
>>> department. Much of the data processing is for applying models on all, or at
>>> least most, of every profile’s attributes to find similarities (nearest
>>> neighbor/clustering) over a large number of rows when batch processing or a
>>> small subset of rows for quick online scoring. So, our use case is a
>>> typical advanced analytics scenario. We have tried HBase, but it doesn’t
>>> work well for these types of analytics.
>>>
>>> I read in the Aerospike release notes that they made many
>>> improvements for batch and scan operations.
>>>
>>> I wonder what your thoughts are for using Kudu for this.
>>>
>>
>> Sounds like a good Kudu use case to me. I've heard great things about
>> Aerospike for the low latency random access portion, but I've also heard
>> that it's _very_ expensive, and not particularly suited to the columnar
>> scan workload. Lastly, I think the Apache license of Kudu is much more
>> appealing than the AGPL3 used by Aerospike. But, that's not really a direct
>> answer to the performance question :)
>>
>>
>>>
>>> Thanks,
>>> Ben
>>>
>>>
>>> On May 27, 2016, at 6:21 PM, Mike Percy  wrote:
>>>
>>> Have you considered whether you have a scan heavy or a random access
>>> heavy workload? Have you considered whether you always access / update a
>>> whole row vs only a partial row? Kudu is a column store so has some
>>> awesome performance characteristics when you are doing a lot of scanning of
>>> just a couple of columns.
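
As an illustration of that kind of column-pruned scan, a sketch using the connector options that appear earlier in this digest; the master address and table name are placeholders, and the query mirrors the browser/unique-user counts described in this thread.

import org.apache.spark.sql.functions.countDistinct

// Only two of the ~83 columns are touched, which is where a columnar store shines.
val events = sqlContext.read
  .format("org.kududb.spark.kudu")
  .options(Map("kudu.master" -> "kudu_master", "kudu.table" -> "ad_events"))
  .load()

events.select("user_agent", "user_id")
  .groupBy("user_agent")
  .agg(countDistinct("user_id"))
  .show()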
>>>
>>> I don't know the answer to your question but if your concern is
>>> performance then I would be interested in seeing comparisons from a perf
>>> perspective on certain workloads.
>>>
>>> Finally, a year ago 

Re: Performance Question

2016-06-15 Thread Benjamin Kim
Hi Todd,

Now that Kudu 0.9.0 is out, I have done some tests. Already, I am impressed. 
Compared to HBase, read and write performance are better. Write performance has 
the greatest improvement (> 4x), while read is > 1.5x. Admittedly, these are only 
preliminary tests. Do you know of a way to really do some conclusive tests? I 
want to see if I can match your results on my 50 node cluster.

Thanks,
Ben

> On May 30, 2016, at 10:33 AM, Todd Lipcon  wrote:
> 
> On Sat, May 28, 2016 at 7:12 AM, Benjamin Kim  > wrote:
> Todd,
> 
> It sounds like Kudu can possibly top or match those numbers put out by 
> Aerospike. Do you have any performance statistics published or any 
> instructions on how to measure them myself as a good way to test? In addition, this 
> will be a test using Spark, so should I wait for Kudu version 0.9.0 where 
> support will be built in?
> 
> We don't have a lot of benchmarks published yet, especially on the write 
> side. I've found that thorough cross-system benchmarks are very difficult to 
> do fairly and accurately, and often times users end up misguided if they pay 
> too much attention to them :) So, given a finite number of developers working 
> on Kudu, I think we've tended to spend more time on the project itself and 
> less time focusing on "competition". I'm sure there are use cases where Kudu 
> will beat out Aerospike, and probably use cases where Aerospike will beat 
> Kudu as well.
> 
> From my perspective, it would be great if you can share some details of your 
> workload, especially if there are some areas you're finding Kudu lacking. 
> Maybe we can spot some easy code changes we could make to improve 
> performance, or suggest a tuning variable you could change.
> 
> -Todd
> 
> 
>> On May 27, 2016, at 9:19 PM, Todd Lipcon > > wrote:
>> 
>> On Fri, May 27, 2016 at 8:20 PM, Benjamin Kim > > wrote:
>> Hi Mike,
>> 
>> First of all, thanks for the link. It looks like an interesting read. I 
>> checked that Aerospike is currently at version 3.8.2.3, and in the article, 
>> they are evaluating version 3.5.4. The main thing that impressed me was 
>> their claim that they can beat Cassandra and HBase by 8x for writing and 25x 
>> for reading. Their big claim to fame is that Aerospike can write 1M records 
>> per second with only 50 nodes. I wanted to see if this is real.
>> 
>> 1M records per second on 50 nodes is pretty doable by Kudu as well, 
>> depending on the size of your records and the insertion order. I've been 
>> playing with a ~70 node cluster recently and seen 1M+ writes/second 
>> sustained, and bursting above 4M. These are 1KB rows with 11 columns, and 
>> with pretty old HDD-only nodes. I think newer flash-based nodes could do 
>> better.
>>  
>> 
>> To answer your questions, we have a DMP with user profiles with many 
>> attributes. We create segmentation information off of these attributes to 
>> classify them. Then, we can target advertising appropriately for our sales 
>> department. Much of the data processing is for applying models on all, or at
>> least most, of every profile’s attributes to find similarities (nearest 
>> neighbor/clustering) over a large number of rows when batch processing or a 
>> small subset of rows for quick online scoring. So, our use case is a typical 
>> advanced analytics scenario. We have tried HBase, but it doesn’t work well 
>> for these types of analytics.
>> 
>> I read in the Aerospike release notes that they made many improvements 
>> for batch and scan operations.
>> 
>> I wonder what your thoughts are for using Kudu for this.
>> 
>> Sounds like a good Kudu use case to me. I've heard great things about 
>> Aerospike for the low latency random access portion, but I've also heard 
>> that it's _very_ expensive, and not particularly suited to the columnar scan 
>> workload. Lastly, I think the Apache license of Kudu is much more appealing 
>> than the AGPL3 used by Aerospike. But, that's not really a direct answer to 
>> the performance question :)
>>  
>> 
>> Thanks,
>> Ben
>> 
>> 
>>> On May 27, 2016, at 6:21 PM, Mike Percy >> > wrote:
>>> 
>>> Have you considered whether you have a scan heavy or a random access heavy 
>>> workload? Have you considered whether you always access / update a whole 
>>> row vs only a partial row? Kudu is a column store so has some awesome 
>>> performance characteristics when you are doing a lot of scanning of just a 
>>> couple of columns.
>>> 
>>> I don't know the answer to your question but if your concern is performance 
>>> then I would be interested in seeing comparisons from a perf perspective on 
>>> certain workloads.
>>> 
>>> Finally, a year ago Aerospike did quite poorly in a Jepsen test: 
>>> https://aphyr.com/posts/324-jepsen-aerospike 
>>>