kishoreg commented on a change in pull request #4435: Onboarding best practices doc
URL: https://github.com/apache/incubator-pinot/pull/4435#discussion_r304615454
 
 

 ##########
 File path: docs/onboarding_best_practices.rst
 ##########
 @@ -0,0 +1,165 @@
+..
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..   http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+..
+
+.. _onboarding-best-practices:
+
+Onboarding Best Practices
+==========================
+
+Here's a checklist of things to consider before you begin modelling your data and onboarding it to Pinot.
+
+This has been split into two sections:
+
+1) Data Preparation
+2) Querying Pinot
+
+
+
+Data Preparation
+^^^^^^^^^^^^^^^^^
+These are the best practices and considerations to keep in mind when preparing your schema and data format.
+
+Considerations common to offline and realtime
+**********************************************
+
+1. Pre-aggregations
+###################
+Pre-aggregate the data as much as the application logic allows. This means **rolling up the metric values for the unique dimension and time column combinations**. This reduces the size of the data stored in Pinot and avoids repeating the same aggregations in Pinot for every query, improving query performance.
+
+- For offline, perform the aggregations in your data preparation Hadoop/Spark job.
+- For realtime, a Samza job can be used to aggregate data at intervals based on the freshness requirements of the use case.
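The roll-up described above can be sketched as follows. This is a minimal, illustrative example (not Pinot or Samza code): it assumes the raw data is a list of records and collapses them to one row per unique dimensions + time combination, summing the metrics.

```python
# Hypothetical roll-up sketch: collapse raw events into one row per unique
# (dimensions, time column) combination, summing the metric values.
from collections import defaultdict

def rollup(events, dimensions, time_col, metrics):
    """Aggregate metric values for each unique dimensions + time combination."""
    agg = defaultdict(lambda: [0] * len(metrics))
    for e in events:
        # The grouping key is all dimension values plus the time column value.
        key = tuple(e[d] for d in dimensions) + (e[time_col],)
        for i, m in enumerate(metrics):
            agg[key][i] += e[m]
    # Re-materialize each aggregated key/value pair as a flat record.
    return [
        dict(zip(dimensions + [time_col] + metrics, key + tuple(vals)))
        for key, vals in agg.items()
    ]

raw = [
    {"country": "US", "daysSinceEpoch": 18000, "clicks": 3},
    {"country": "US", "daysSinceEpoch": 18000, "clicks": 2},
    {"country": "IN", "daysSinceEpoch": 18000, "clicks": 5},
]
# Three raw rows roll up to two rows; the two US rows merge into one.
rolled = rollup(raw, ["country"], "daysSinceEpoch", ["clicks"])
```

In a real pipeline the same grouping would be expressed as a `groupBy` + `sum` in Spark, or windowed aggregation in Samza.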
+
+2. Time column
+###################
+Think about what **granularity of time column** is needed for your application. This will usually be *HOURS, DAYS, (sometimes 15 MINUTES)*. It is not recommended to have the time column in MILLISECONDS or SECONDS granularity. If the time granularity is in milliseconds, the cardinality of the time column becomes very high in realtime systems, causing the time column dictionary to get very big. Bucket your time column to the coarsest granularity possible (hoursSinceEpoch, daysSinceEpoch). A greater level of pre-aggregation can be achieved with a coarser time granularity.
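The bucketing suggested above is a simple integer division on the epoch timestamp. A minimal sketch (the helper names here are illustrative, not Pinot APIs):

```python
# Hypothetical helpers to bucket an epoch-milliseconds timestamp into the
# coarser hoursSinceEpoch / daysSinceEpoch granularities mentioned above.
MILLIS_PER_HOUR = 1000 * 60 * 60
MILLIS_PER_DAY = MILLIS_PER_HOUR * 24

def millis_to_hours_since_epoch(ts_millis):
    # Truncate to the containing hour bucket.
    return ts_millis // MILLIS_PER_HOUR

def millis_to_days_since_epoch(ts_millis):
    # Truncate to the containing day bucket.
    return ts_millis // MILLIS_PER_DAY

ts = 1_563_000_000_000  # an example epoch-millis timestamp
hours = millis_to_hours_since_epoch(ts)
days = millis_to_days_since_epoch(ts)
```

Every raw timestamp in the same hour (or day) collapses to the same bucket value, which keeps the time column's cardinality, and therefore its dictionary size, small.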
 
 Review comment:
   +1. We should just say pick the right granularity needed for your application and explain the trade-off. I would just remove ```It is not recommended to have the time column in MILLISECONDS or SECONDS granularity.```

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
[email protected]


With regards,
Apache Git Services

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
