This is an automated email from the ASF dual-hosted git repository.

asdf2014 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/druid.git


The following commit(s) were added to refs/heads/master by this push:
     new 167c452  Update druid-vs-kudu.md (#11470)
167c452 is described below

commit 167c45260c76057b9856bd073661365663bd80f2
Author: benkrug <[email protected]>
AuthorDate: Wed Jul 21 07:58:14 2021 -0700

    Update druid-vs-kudu.md (#11470)
    
    small typo - "need" to "needed"
---
 docs/comparisons/druid-vs-kudu.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/comparisons/druid-vs-kudu.md b/docs/comparisons/druid-vs-kudu.md
index aca743d..b992a16 100644
--- a/docs/comparisons/druid-vs-kudu.md
+++ b/docs/comparisons/druid-vs-kudu.md
@@ -26,13 +26,13 @@ title: "Apache Druid vs Kudu"
 Kudu's storage format enables single row updates, whereas updates to existing Druid segments requires recreating the segment, so theoretically
 the process for updating old values should be higher latency in Druid. However, the requirements in Kudu for maintaining extra head space to store
 updates as well as organizing data by id instead of time has the potential to introduce some extra latency and accessing
-of data that is not need to answer a query at query time.
+of data that is not needed to answer a query at query time.
 
 Druid summarizes/rollups up data at ingestion time, which in practice reduces the raw data that needs to be
 stored significantly (up to 40 times on average), and increases performance of scanning raw data significantly.
 Druid segments also contain bitmap indexes for fast filtering, which Kudu does not currently support.
 Druid's segment architecture is heavily geared towards fast aggregates and filters, and for OLAP workflows. Appends are very
-fast in Druid, whereas updates of older data is higher latency. This is by design as the data Druid is good for is typically event data,
+fast in Druid, whereas updates of older data are higher latency. This is by design as the data Druid is good for is typically event data,
 and does not need to be updated too frequently. Kudu supports arbitrary primary keys with uniqueness constraints, and
 efficient lookup by ranges of those keys. Kudu chooses not to include the execution engine, but supports sufficient
 operations so as to allow node-local processing from the execution engines. This means that Kudu can support multiple frameworks on the same data (e.g., MR, Spark, and SQL).

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
