This is an automated email from the ASF dual-hosted git repository.

xiangfu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
     new 221e73a  adding a banner to old docs for new doc link (#5205)
221e73a is described below

commit 221e73aa4614d74b91983edd81b9c21c168429cc
Author: Xiang Fu <fx19880...@gmail.com>
AuthorDate: Thu Apr 2 13:42:04 2020 -0700

    adding a banner to old docs for new doc link (#5205)
---
 docs/admin_guide.rst                 |   2 +
 docs/architecture.rst                |   2 +
 docs/batch_data_ingestion.rst        | 207 ++++++++++++++++++-----------------
 docs/client_api.rst                  |   2 +
 docs/code_modules.rst                |   2 +
 docs/contribution_guidelines.rst     |   2 +
 docs/customizations.rst              |   2 +
 docs/dev_env.rst                     |   2 +
 docs/dev_guide.rst                   |   2 +
 docs/extensions.rst                  |   2 +
 docs/getting_started.rst             |   4 +-
 docs/in_production.rst               |   2 +
 docs/index.rst                       |   4 +-
 docs/index_techniques.rst            |   2 +
 docs/intro.rst                       |   2 +
 docs/introduction.rst                |   2 +
 docs/pinot_hadoop.rst                |   2 +
 docs/pluggable_storage.rst           |   6 +-
 docs/pluggable_streams.rst           |   2 +
 docs/pql_examples.rst                |   2 +
 docs/record_reader.rst               |   2 +
 docs/schema.rst                      |   2 +
 docs/segment_fetcher.rst             |   2 +
 docs/star-tree/star-tree.rst         |   2 +
 docs/tableconfig_schema.rst          |   2 +
 docs/tuning_pinot.rst                |   2 +
 docs/tuning_realtime_performance.rst |   2 +
 docs/tuning_scatter_and_gather.rst   |   2 +
 docs/user_guide.rst                  |   2 +
 29 files changed, 164 insertions(+), 107 deletions(-)

diff --git a/docs/admin_guide.rst b/docs/admin_guide.rst
index 03c9fb0..332da99 100644
--- a/docs/admin_guide.rst
+++ b/docs/admin_guide.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 :orphan:
 
 ###########
diff --git a/docs/architecture.rst b/docs/architecture.rst
index 03ca092..0348aa3 100644
--- a/docs/architecture.rst
+++ b/docs/architecture.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _pinot-architecture-section:
 
 Architecture
diff --git a/docs/batch_data_ingestion.rst b/docs/batch_data_ingestion.rst
index c8aa0a6..f4c52b9 100644
--- a/docs/batch_data_ingestion.rst
+++ b/docs/batch_data_ingestion.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _batch-data-ingestion:
 
 Batch Data Ingestion
@@ -27,7 +29,7 @@ In practice, we need to run Pinot data ingestion as a pipeline or a scheduled jo
 Assuming `pinot-distribution` is already built, inside `examples` directory, you could find several sample table layouts.
 
 Table Layout
------------
+------------
 
 Usually each table deserves its own directory, like `airlineStats`.
 
@@ -46,116 +48,116 @@ Below is an example (also located at `examples/batch/airlineStats/ingestionJobSp
 
 .. code-block:: none
 
-# executionFrameworkSpec: Defines ingestion jobs to be running.
-executionFrameworkSpec:
+  # executionFrameworkSpec: Defines ingestion jobs to be running.
+  executionFrameworkSpec:
+
+    # name: execution framework name
+    name: 'standalone'
+
+    # segmentGenerationJobRunnerClassName: class name implements org.apache.pinot.spi.batch.ingestion.runner.SegmentGenerationJobRunner interface.
+    segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
+
+    # segmentTarPushJobRunnerClassName: class name implements org.apache.pinot.spi.batch.ingestion.runner.SegmentTarPushJobRunner interface.
+    segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
+
+    # segmentUriPushJobRunnerClassName: class name implements org.apache.pinot.spi.batch.ingestion.runner.SegmentUriPushJobRunner interface.
+    segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
+
+  # jobType: Pinot ingestion job type.
+  # Supported job types are:
+  #   'SegmentCreation'
+  #   'SegmentTarPush'
+  #   'SegmentUriPush'
+  #   'SegmentCreationAndTarPush'
+  #   'SegmentCreationAndUriPush'
+  jobType: SegmentCreationAndTarPush
+
+  # inputDirURI: Root directory of input data, expected to have scheme configured in PinotFS.
+  inputDirURI: 'examples/batch/airlineStats/rawdata'
+
+  # includeFileNamePattern: include file name pattern, supported glob pattern.
+  # Sample usage:
+  #   'glob:*.avro' will include all avro files just under the inputDirURI, not sub directories;
+  #   'glob:**\/*.avro' will include all the avro files under inputDirURI recursively.
+  includeFileNamePattern: 'glob:**/*.avro'
 
-  # name: execution framework name
-  name: 'standalone'
+  # excludeFileNamePattern: exclude file name pattern, supported glob pattern.
+  # Sample usage:
+  #   'glob:*.avro' will exclude all avro files just under the inputDirURI, not sub directories;
+  #   'glob:**\/*.avro' will exclude all the avro files under inputDirURI recursively.
+  # _excludeFileNamePattern: ''
 
-  # segmentGenerationJobRunnerClassName: class name implements org.apache.pinot.spi.batch.ingestion.runner.SegmentGenerationJobRunner interface.
-  segmentGenerationJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentGenerationJobRunner'
+  # outputDirURI: Root directory of output segments, expected to have scheme configured in PinotFS.
+  outputDirURI: 'examples/batch/airlineStats/segments'
 
-  # segmentTarPushJobRunnerClassName: class name implements org.apache.pinot.spi.batch.ingestion.runner.SegmentTarPushJobRunner interface.
-  segmentTarPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentTarPushJobRunner'
+  # overwriteOutput: Overwrite output segments if existed.
+  overwriteOutput: true
 
-  # segmentUriPushJobRunnerClassName: class name implements org.apache.pinot.spi.batch.ingestion.runner.SegmentUriPushJobRunner interface.
-  segmentUriPushJobRunnerClassName: 'org.apache.pinot.plugin.ingestion.batch.standalone.SegmentUriPushJobRunner'
+  # pinotFSSpecs: defines all related Pinot file systems.
+  pinotFSSpecs:
 
-# jobType: Pinot ingestion job type.
-# Supported job types are:
-#   'SegmentCreation'
-#   'SegmentTarPush'
-#   'SegmentUriPush'
-#   'SegmentCreationAndTarPush'
-#   'SegmentCreationAndUriPush'
-jobType: SegmentCreationAndTarPush
+    - # scheme: used to identify a PinotFS.
+      # E.g. local, hdfs, dbfs, etc
+      scheme: file
 
-# inputDirURI: Root directory of input data, expected to have scheme configured in PinotFS.
-inputDirURI: 'examples/batch/airlineStats/rawdata'
+      # className: Class name used to create the PinotFS instance.
+      # E.g.
+      #   org.apache.pinot.spi.filesystem.LocalPinotFS is used for local filesystem
+      #   org.apache.pinot.plugin.filesystem.AzurePinotFS is used for Azure Data Lake
+      #   org.apache.pinot.plugin.filesystem.HadoopPinotFS is used for HDFS
+      className: org.apache.pinot.spi.filesystem.LocalPinotFS
 
-# includeFileNamePattern: include file name pattern, supported glob pattern.
-# Sample usage:
-#   'glob:*.avro' will include all avro files just under the inputDirURI, not sub directories;
-#   'glob:**\/*.avro' will include all the avro files under inputDirURI recursively.
-includeFileNamePattern: 'glob:**/*.avro'
+  # recordReaderSpec: defines all record reader
+  recordReaderSpec:
 
-# excludeFileNamePattern: exclude file name pattern, supported glob pattern.
-# Sample usage:
-#   'glob:*.avro' will exclude all avro files just under the inputDirURI, not sub directories;
-#   'glob:**\/*.avro' will exclude all the avro files under inputDirURI recursively.
-# _excludeFileNamePattern: ''
+    # dataFormat: Record data format, e.g. 'avro', 'parquet', 'orc', 'csv', 'json', 'thrift' etc.
+    dataFormat: 'avro'
 
-# outputDirURI: Root directory of output segments, expected to have scheme configured in PinotFS.
-outputDirURI: 'examples/batch/airlineStats/segments'
+    # className: Corresponding RecordReader class name.
+    # E.g.
+    #   org.apache.pinot.plugin.inputformat.avro.AvroRecordReader
+    #   org.apache.pinot.plugin.inputformat.csv.CSVRecordReader
+    #   org.apache.pinot.plugin.inputformat.parquet.ParquetRecordReader
+    #   org.apache.pinot.plugin.inputformat.json.JsonRecordReader
+    #   org.apache.pinot.plugin.inputformat.orc.OrcRecordReader
+    #   org.apache.pinot.plugin.inputformat.thrift.ThriftRecordReader
+    className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'
 
-# overwriteOutput: Overwrite output segments if existed.
-overwriteOutput: true
+  # tableSpec: defines table name and where to fetch corresponding table config and table schema.
+  tableSpec:
 
-# pinotFSSpecs: defines all related Pinot file systems.
-pinotFSSpecs:
+    # tableName: Table name
+    tableName: 'airlineStats'
 
-  - # scheme: used to identify a PinotFS.
-    # E.g. local, hdfs, dbfs, etc
-    scheme: file
+    # schemaURI: defines where to read the table schema, supports PinotFS or HTTP.
+    # E.g.
+    #   hdfs://path/to/table_schema.json
+    #   http://localhost:9000/tables/myTable/schema
+    schemaURI: 'http://localhost:9000/tables/airlineStats/schema'
 
-    # className: Class name used to create the PinotFS instance.
+    # tableConfigURI: defines where to read the table config.
+    # Supports using PinotFS or HTTP.
     # E.g.
-    #   org.apache.pinot.spi.filesystem.LocalPinotFS is used for local filesystem
-    #   org.apache.pinot.plugin.filesystem.AzurePinotFS is used for Azure Data Lake
-    #   org.apache.pinot.plugin.filesystem.HadoopPinotFS is used for HDFS
-    className: org.apache.pinot.spi.filesystem.LocalPinotFS
-
-# recordReaderSpec: defines all record reader
-recordReaderSpec:
-
-  # dataFormat: Record data format, e.g. 'avro', 'parquet', 'orc', 'csv', 'json', 'thrift' etc.
-  dataFormat: 'avro'
-
-  # className: Corresponding RecordReader class name.
-  # E.g.
-  #   org.apache.pinot.plugin.inputformat.avro.AvroRecordReader
-  #   org.apache.pinot.plugin.inputformat.csv.CSVRecordReader
-  #   org.apache.pinot.plugin.inputformat.parquet.ParquetRecordReader
-  #   org.apache.pinot.plugin.inputformat.json.JsonRecordReader
-  #   org.apache.pinot.plugin.inputformat.orc.OrcRecordReader
-  #   org.apache.pinot.plugin.inputformat.thrift.ThriftRecordReader
-  className: 'org.apache.pinot.plugin.inputformat.avro.AvroRecordReader'
-
-# tableSpec: defines table name and where to fetch corresponding table config and table schema.
-tableSpec:
-
-  # tableName: Table name
-  tableName: 'airlineStats'
-
-  # schemaURI: defines where to read the table schema, supports PinotFS or HTTP.
-  # E.g.
-  #   hdfs://path/to/table_schema.json
-  #   http://localhost:9000/tables/myTable/schema
-  schemaURI: 'http://localhost:9000/tables/airlineStats/schema'
-
-  # tableConfigURI: defines where to read the table config.
-  # Supports using PinotFS or HTTP.
-  # E.g.
-  #   hdfs://path/to/table_config.json
-  #   http://localhost:9000/tables/myTable
-  # Note that the API to read Pinot table config directly from pinot controller contains a JSON wrapper.
-  # The real table config is the object under the field 'OFFLINE'.
-  tableConfigURI: 'http://localhost:9000/tables/airlineStats'
-
-# pinotClusterSpecs: defines the Pinot Cluster Access Point.
-pinotClusterSpecs:
-  - # controllerURI: used to fetch table/schema information and data push.
-    # E.g. http://localhost:9000
-    controllerURI: 'http://localhost:9000'
-
-# pushJobSpec: defines segment push job related configuration.
-pushJobSpec:
-
-  # pushAttempts: number of attempts for push job, default is 1, which means no retry.
-  pushAttempts: 2
-
-  # pushRetryIntervalMillis: retry wait Ms, default to 1 second.
-  pushRetryIntervalMillis: 1000
+    #   hdfs://path/to/table_config.json
+    #   http://localhost:9000/tables/myTable
+    # Note that the API to read Pinot table config directly from pinot 
controller contains a JSON wrapper.
+    # The real table config is the object under the field 'OFFLINE'.
+    tableConfigURI: 'http://localhost:9000/tables/airlineStats'
+
+  # pinotClusterSpecs: defines the Pinot Cluster Access Point.
+  pinotClusterSpecs:
+    - # controllerURI: used to fetch table/schema information and data push.
+      # E.g. http://localhost:9000
+      controllerURI: 'http://localhost:9000'
+
+  # pushJobSpec: defines segment push job related configuration.
+  pushJobSpec:
+
+    # pushAttempts: number of attempts for push job, default is 1, which means no retry.
+    pushAttempts: 2
+
+    # pushRetryIntervalMillis: retry wait Ms, default to 1 second.
+    pushRetryIntervalMillis: 1000
 
 Executing the job
 -----------------
@@ -179,10 +181,11 @@ Executing the job using Spark
 Below example is running in a spark local mode. You can download spark distribution and start it by running:
 
 .. code-block:: bash
-  wget http://apache-mirror.8birdsvideo.com/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz
-  tar xvf spark-2.4.4-bin-hadoop2.7.tgz
-  cd spark-2.4.4-bin-hadoop2.7
-  ./bin/spark-shell --master 'local[2]'
+
+  $ wget http://apache-mirror.8birdsvideo.com/spark/spark-2.4.4/spark-2.4.4-bin-hadoop2.7.tgz
+  $ tar xvf spark-2.4.4-bin-hadoop2.7.tgz
+  $ cd spark-2.4.4-bin-hadoop2.7
+  $ ./bin/spark-shell --master 'local[2]'
 
 Below command shows how to use `spark-submit` command to submit a spark job using pinot-all-${PINOT_VERSION}-jar-with-dependencies jar.
 
@@ -203,7 +206,7 @@ Please ensure parameter `PINOT_ROOT_DIR` and `PINOT_VERSION` are set properly.
 
 
 Executing the job using Hadoop
------------------------------
+------------------------------
 
 Below command shows how to use `hadoop jar` command to run a hadoop job using pinot-all-${PINOT_VERSION}-jar-with-dependencies jar.
 
diff --git a/docs/client_api.rst b/docs/client_api.rst
index 04e9bf0..4764c5c 100644
--- a/docs/client_api.rst
+++ b/docs/client_api.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 Executing queries via REST API on the Broker
 ============================================
 
diff --git a/docs/code_modules.rst b/docs/code_modules.rst
index ede8645..e855799 100644
--- a/docs/code_modules.rst
+++ b/docs/code_modules.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _code-modules:
 
 
diff --git a/docs/contribution_guidelines.rst b/docs/contribution_guidelines.rst
index 76414f8..2d7d4b9 100644
--- a/docs/contribution_guidelines.rst
+++ b/docs/contribution_guidelines.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 ***********************
 Contribution Guidelines
 ***********************
diff --git a/docs/customizations.rst b/docs/customizations.rst
index 2d0f69e..0f496bf 100644
--- a/docs/customizations.rst
+++ b/docs/customizations.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _customizing-pinot:
 
 Customizing Pinot
diff --git a/docs/dev_env.rst b/docs/dev_env.rst
index c2b602e..8b62e0f 100644
--- a/docs/dev_env.rst
+++ b/docs/dev_env.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _dev-setup:
 
 *********************
diff --git a/docs/dev_guide.rst b/docs/dev_guide.rst
index 7aa7285..79c2fd6 100644
--- a/docs/dev_guide.rst
+++ b/docs/dev_guide.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 :orphan:
 
 ###############
diff --git a/docs/extensions.rst b/docs/extensions.rst
index 678a924..109849b 100644
--- a/docs/extensions.rst
+++ b/docs/extensions.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 Extending Pinot
 ===============
 This section provides an overview of options to extend Pinot code to make Pinot work for environments not covered by default.
diff --git a/docs/getting_started.rst b/docs/getting_started.rst
index 3e44091..c4f974a 100644
--- a/docs/getting_started.rst
+++ b/docs/getting_started.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _getting-started:
 
 Getting Started
@@ -80,7 +82,7 @@ Pinot uses PQL, a SQL-like query language, to query data. Here are some sample q
 The full reference for the PQL query language is present in the :ref:`pql` section of the Pinot documentation.
 
 Trying out Streaming quickstart demo
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 Pinot can ingest data from streaming sources such as Kafka.
 
diff --git a/docs/in_production.rst b/docs/in_production.rst
index 2b1e06e..d237707 100644
--- a/docs/in_production.rst
+++ b/docs/in_production.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 Running Pinot in Production
 ===========================
 
diff --git a/docs/index.rst b/docs/index.rst
index 40426ab..e1dbafc 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -22,12 +22,12 @@
    You can adapt this file completely to your liking, but it should at least
    contain the root `toctree` directive.
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 #####
 Pinot
 #####
 
-.. note::  The documentation has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
-
 .. toctree::
    :maxdepth: 1
    :caption: Introduction
diff --git a/docs/index_techniques.rst b/docs/index_techniques.rst
index b5ab93a..c44028d 100644
--- a/docs/index_techniques.rst
+++ b/docs/index_techniques.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. TODO: add more details
 
 
diff --git a/docs/intro.rst b/docs/intro.rst
index 2dfdb69..b6212d5 100644
--- a/docs/intro.rst
+++ b/docs/intro.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 About Pinot
 ===========
 
diff --git a/docs/introduction.rst b/docs/introduction.rst
index 52c1984..8e1df67 100644
--- a/docs/introduction.rst
+++ b/docs/introduction.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 :orphan:
 
 ############
diff --git a/docs/pinot_hadoop.rst b/docs/pinot_hadoop.rst
index 53b71a1..5c045a9 100644
--- a/docs/pinot_hadoop.rst
+++ b/docs/pinot_hadoop.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _creating-segments:
 
 Creating Pinot segments
diff --git a/docs/pluggable_storage.rst b/docs/pluggable_storage.rst
index 482d65a..ed61892 100644
--- a/docs/pluggable_storage.rst
+++ b/docs/pluggable_storage.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _pluggable-storage:
 
 Pluggable Storage
@@ -38,7 +40,7 @@ In order to add a new type of storage backend (say, Amazon s3) implement the fol
 S3FS extends `PinotFS <https://github.com/apache/incubator-pinot/blob/master/pinot-common/src/main/java/org/apache/pinot/filesystem/PinotFS.java>`_
 
 Configurations for Realtime Tables
-^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 The example here uses the existing org.apache.pinot.filesystem.HadoopPinotFS to store realtime segments in an HDFS filesystem. In the Pinot controller config, add the following new configs:
 
 .. code-block:: none
@@ -61,7 +63,7 @@ In the Pinot controller config, add the following new configs:
 Note: currently there is a bug in the controller (`issue <https://github.com/apache/incubator-pinot/issues/3847>`_); for now you can cherry-pick the PR https://github.com/apache/incubator-pinot/pull/3849, which has already been tested, to fix the issue. The PR is under review now.
 
 Configurations for Offline Tables
-^^^^^^^^^^^^^^
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 These properties for the stream implementation are to be set in your controller and server configurations.
 
 In your controller and server configs, please set the FS class you would like to support: set pinot.controller.storage.factory.class.${YOUR_URI_SCHEME} to the full path of the FS class you would like to include.
diff --git a/docs/pluggable_streams.rst b/docs/pluggable_streams.rst
index ecaf9c1..8cc231f 100644
--- a/docs/pluggable_streams.rst
+++ b/docs/pluggable_streams.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _pluggable-streams:
 
 Pluggable Streams
diff --git a/docs/pql_examples.rst b/docs/pql_examples.rst
index 455e432..2558c58 100644
--- a/docs/pql_examples.rst
+++ b/docs/pql_examples.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _pql:
 
 PQL
diff --git a/docs/record_reader.rst b/docs/record_reader.rst
index 5948777..e02fbd2 100644
--- a/docs/record_reader.rst
+++ b/docs/record_reader.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 Record Reader
 =============
 
diff --git a/docs/schema.rst b/docs/schema.rst
index e65c2a3..a0ba761 100644
--- a/docs/schema.rst
+++ b/docs/schema.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _schema-section:
 
 Pinot Schema
diff --git a/docs/segment_fetcher.rst b/docs/segment_fetcher.rst
index a3184fd..78e843c 100644
--- a/docs/segment_fetcher.rst
+++ b/docs/segment_fetcher.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _segment-fetcher:
 
 Segment Fetchers
diff --git a/docs/star-tree/star-tree.rst b/docs/star-tree/star-tree.rst
index 3055885..630982e 100644
--- a/docs/star-tree/star-tree.rst
+++ b/docs/star-tree/star-tree.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 Star-Tree: A Specialized Index for Fast Aggregations
 ====================================================
 
diff --git a/docs/tableconfig_schema.rst b/docs/tableconfig_schema.rst
index e7e918f..7ebe804 100644
--- a/docs/tableconfig_schema.rst
+++ b/docs/tableconfig_schema.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _table-config-section:
 
 Table Config
diff --git a/docs/tuning_pinot.rst b/docs/tuning_pinot.rst
index 4a35601..31325ef 100644
--- a/docs/tuning_pinot.rst
+++ b/docs/tuning_pinot.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 .. _tuning-pinot:
 
 Tuning Pinot
diff --git a/docs/tuning_realtime_performance.rst b/docs/tuning_realtime_performance.rst
index 662b0ff..3c559d3 100644
--- a/docs/tuning_realtime_performance.rst
+++ b/docs/tuning_realtime_performance.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 Tuning Realtime Performance
 ===========================
 
diff --git a/docs/tuning_scatter_and_gather.rst b/docs/tuning_scatter_and_gather.rst
index fe4a723..5429644 100644
--- a/docs/tuning_scatter_and_gather.rst
+++ b/docs/tuning_scatter_and_gather.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 Optimizing Scatter and Gather
 =============================
 
diff --git a/docs/user_guide.rst b/docs/user_guide.rst
index 87f81d5..d14f5f4 100644
--- a/docs/user_guide.rst
+++ b/docs/user_guide.rst
@@ -17,6 +17,8 @@
 .. under the License.
 ..
 
+.. warning::  The documentation is not up-to-date and has moved to `Apache Pinot Docs <https://docs.pinot.apache.org/>`_.
+
 :orphan:
 
 ##########

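Beyond the warning banners, the substantive change in `docs/batch_data_ingestion.rst` is re-indenting the sample job-spec YAML by two spaces so it sits inside the body of the `.. code-block:: none` directive; reStructuredText only treats lines as directive content when they are indented relative to the directive marker. A minimal sketch of that transformation using the Python standard library (the snippet and its spec fragment are illustrative, not part of the commit):

```python
import textwrap

# A fragment of the ingestion job spec as it stood before the commit:
# flush-left, so Sphinx would not render it as the code-block's body.
spec = "executionFrameworkSpec:\n  name: 'standalone'\n"

# The commit's fix, in miniature: shift every line two spaces to the
# right so the YAML nests under the ".. code-block:: none" directive.
indented = textwrap.indent(spec, "  ")
print(indented)
```

Note that `textwrap.indent` leaves whitespace-only lines untouched by default, which matches the diff: blank lines inside the spec gain no trailing spaces.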
