This is an automated email from the ASF dual-hosted git repository.

akshayrai09 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-pinot.git


The following commit(s) were added to refs/heads/master by this push:
     new cb149be  [TE] Make a few internal documentation pages and mocks public (#5743)
cb149be is described below

commit cb149bed54b926b5841e7cc1919c5c3fc8dd8199
Author: Akshay Rai <ak...@linkedin.com>
AuthorDate: Fri Jul 24 09:22:17 2020 -0700

    [TE] Make a few internal documentation pages and mocks public (#5743)
---
 thirdeye/docs/alert_setup.rst                      |   2 +-
 thirdeye/docs/detection_pipeline_architecture.rst  | 165 ++++++++++++++++++
 .../docs/detection_pipeline_execution_flow.rst     | 192 +++++++++++++++++++++
 thirdeye/docs/index.rst                            |   2 +
 .../{alert_setup.rst => thirdeye_architecture.rst} |  13 +-
 .../{alert_setup.rst => thirdeye_ui_mocks.rst}     |  15 +-
 6 files changed, 372 insertions(+), 17 deletions(-)

diff --git a/thirdeye/docs/alert_setup.rst b/thirdeye/docs/alert_setup.rst
index bda73f0..be00ebe 100644
--- a/thirdeye/docs/alert_setup.rst
+++ b/thirdeye/docs/alert_setup.rst
@@ -29,4 +29,4 @@ Alert Setup
     advanced_config
     templates
     appendix
-    contribute_detection
\ No newline at end of file
+    contribute_detection
diff --git a/thirdeye/docs/detection_pipeline_architecture.rst 
b/thirdeye/docs/detection_pipeline_architecture.rst
new file mode 100644
index 0000000..0419b6c
--- /dev/null
+++ b/thirdeye/docs/detection_pipeline_architecture.rst
@@ -0,0 +1,165 @@
+..
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..   http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+..
+
+.. _detection-pipeline-architecture:
+
+######################################
+Detection Pipeline Architecture
+######################################
+
+This document summarizes the motivation and rationale behind the 2018 
refactoring of ThirdEye's anomaly framework library. We describe critical user, 
dev, and ops pain points, define a list of requirements and discuss our 
approach to building a scalable and robust detection framework for ThirdEye. We 
also discuss conscious design trade-offs made in order to facilitate future 
modification and maintenance of the system.
+
+Motivation
+########################
+ThirdEye has been adopted by numerous teams for production monitoring of 
business and system metrics, and ThirdEye's user base grows consistently. With 
this come challenges of scale and a demand for excellence in development and 
operation. ThirdEye has outgrown the assumptions and limits of its existing 
detection framework in multiple dimensions and we need to address technical 
debt in order to enable the on-boarding of new use-cases and continue to 
satisfy our existing customers. We exp [...]
+
+Lack of support for business rules and custom workflows
+***********************************************************
+Anomaly detection workflows have many common components such as the monitoring 
of different metrics and the drill-down into multiple dimensions of time 
series. However, each individual detection use-case usually comes with an 
additional set of business rules to integrate smoothly with established 
processes at LinkedIn. These typically include cut-off thresholds and fixed 
lists of sub-dimensions to monitor or ignore, but may extend to custom alerting 
workflows and the grouping of detected [...]
+
+Difficult debugging and modification
+**************************************
+The existing framework has grown over multiple generations of developers with different preferences, designs, and goals. This led to an inconsistent and undocumented approach to architecture and code design. Worse, existing design documentation is outdated and misleading. This is exacerbated by a lack of testing infrastructure, integration tests, and unit tests. Many assumptions about the behavior of the framework are implicit and only exist in the heads of past maintainers of platform  [...]
+
+Tight coupling prevents testing
+************************************
+Another property of the existing detection code base is tight coupling of 
individual components and the leaking of state in unexpected places, such as 
utility functions. It is not currently possible to unit test an individual 
piece, such as the implementation of an anomaly merger strategy, without 
setting up the entire system, from the caching and data layer, through scheduler and 
worker, to a populated database with pre-existing configuration and anomalies. 
The untangling of these dependencies  [...]
+
+Low performance and limitations to scaling
+********************************************
+The current implementation of detection, on-boarding, and tuning executes 
slowly. This is surprising given the modest amount of time series ingested and 
processed by ThirdEye, especially when considering that Pinot and Autometrics 
do the heavy lifting of data aggregation and filtering. This can be attributed 
to the serial retrieval of data points and redundant database access, part of 
which is a consequence of the many stacked generations of designs and code. The 
execution of on-boardin [...]
+
+Half-baked self-service and configuration capabilities
+********************************************************
+Configuration currently is specific to the chosen algorithm and no official 
front-end exists to manually modify detection settings. As a workaround, users 
access and modify configuration directly via a database admin tool, which comes 
with its own set of dangers and pitfalls. While the database supports JSON 
serialization, the detection algorithms currently use their own custom string 
serialization format that is inaccessible even to advanced users. Additionally, 
the feedback cycle for us [...]
+
+No ensemble detection or multi-metric dependencies
+******************************************************
+Some users prefer execution of rules and algorithms in parallel, using 
algorithms for regular detection but relying on additional rules as fail-safe 
against false negatives. Also, detection cannot easily incorporate information 
from multiple metrics without encoding this functionality directly into the 
algorithm. For example, site-wide impact filters for business metrics are 
currently part of each separate algorithm implementation rather than modular, 
re-usable components.
+
+No sandbox support
+**********************
+On-boarding, performance tuning, and re-configuration of alerts are processes 
that involve iterative back-testing and to some degree rely on human judgement. 
In order for users to experiment with detection settings, the execution of 
algorithms and evaluation of rules must be sandboxed from affecting the state 
of the production system. Similarly, integration testing of new components may 
require parallel execution of production and staging environments to build 
trust after invasive change [...]
+
+Requirements
+################
+An architecture is only as concise as its requirements. After running ThirdEye 
for metric monitoring in production for over a year, many original assumptions 
changed and new requirements came in. In the following we summarize the 
requirements we deem critical for moving ThirdEye's detection capabilities onto 
a robust and scalable new foundation.
+
+De-coupling of components
+**************************************
+Components of the detection framework must be separated to a degree that 
allows testing of individual units and sandboxed execution of detection 
workflows. Furthermore, contracts (interfaces) between components should be 
minimal and should not impose a structure modeled after specific 
use-cases.
+
+Full testability
+**************************************
+Every single part of the detection pipeline must be testable as a unit as well as in integration with others. This allows us to isolate problems in individual components and avoid regressions via dedicated tests. We must also provide test infrastructure to mock required components with simple implementations of existing interfaces. This testability requirement also serves as a verification step of our efforts to decouple components.
+
+Gradual migration via emulation of existing anomaly interface
+****************************************************************************
+ThirdEye has an existing user base that has built trust in existing detection 
methods and tweaked them to their needs, and hence, support for legacy 
algorithms via an emulation layer is a must-have. It is near impossible to 
ensure perfect consistency of legacy and emulated execution due to numerous 
undocumented behavioral quirks. Therefore, the emulation layer will be held to 
a minimum. Additionally, as we migrate users' workflows to newer 
implementations this support will be phased out.
+
+Simple algorithms should be simple to build, test, and configure
+*************************************************************************
+Simple algorithms and rules must be easy to implement, test, and configure. As 
a platform ThirdEye hosts different types of algorithms and continuously adds 
more. In order to scale development to both a larger team of developers and 
collaborators, development of custom workflows and algorithms must be kept as frictionless as possible.
+
+Support multiple metrics and data sources in single algorithm
+*******************************************************************
+Several use-cases require information from multiple metrics and 
metric-dimensions at once to reliably detect and classify anomalies. Our 
framework needs native support for this integration of data from multiple 
sources. This includes multiple metrics, as well as other sources such as 
external events, past anomalies, etc.
+
+Use-case specific workflows and logic
+**************************************
+Most detection use-cases bring their own domain-specific business logic. These 
processes must be encoded into ThirdEye's detection workflows in order to 
integrate with existing processes at LinkedIn and enable the on-boarding of 
additional users and teams. This integration of business logic should be 
possible via configuration options in the majority of cases, but will 
eventually require additional pluggable code to execute during detection and 
alerting workflows.
+
+Don't repeat yourself (code and component re-use)
+****************************************************
+With often similar but not perfectly equal workflows there is a temptation to 
copy code sequences for existing algorithms and re-use them for new 
implementations. This redundancy leads to code bloat and the duplication of 
mistakes and should be avoided to the maximum degree possible. Code re-use via building blocks and utilities designed to be stateless must be a priority.
+
+Consistent configuration of algorithms (top-level and nested)
+******************************************************************
+The mechanism for algorithm configuration should be uniform across different implementations. This should also hold for nested algorithms. As ThirdEye already uses JSON as its serialization format for database storage, configuration should be stored in a compatible way. While we note JSON is not the best choice for human-readable configuration, it is the straightforward choice given the existing metadata infrastructure.
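
To make this concrete, a nested configuration stored in the database-compatible JSON format might look like the following. This is an illustrative sketch only; the field names and pipeline type identifiers are hypothetical and do not reflect ThirdEye's actual schema.

```json
{
  "detectionName": "example_metric_alert",
  "pipelineType": "COMPOSITE",
  "properties": {
    "nested": [
      {
        "pipelineType": "RULE_DETECTION",
        "properties": { "comparison": "WoW", "threshold": 0.10 }
      },
      {
        "pipelineType": "ALGORITHM_DETECTION",
        "properties": { "algorithm": "sign_test" }
      }
    ]
  }
}
```

Note that the nested entries use the same `pipelineType`/`properties` shape as the top level, which is the uniformity requirement described above.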
+
+Stateless, semi-stateful, stateful execution
+**********************************************
+Algorithms can exist in multiple environments: a stateless sandbox environment, a semi-stateful sandbox environment that has been prepared with data such as pre-existing anomalies, and the production environment, in which the database keeps track of the results of previous executions. The algorithm implementation should be oblivious to the executing harness to the maximum extent possible.
+
+Interactive detection preview and performance tuning for users
+*********************************************************************
+As part of the on-boarding workflow and tuning procedure, ThirdEye allows 
users to tweak settings - either manually or via automated parameter search. 
This functionality should support interactive replay and preview of detection 
results in order to help our users build trust in and improve on the detection 
algorithm or detection rules. This is primarily a performance requirement as it 
demands execution of newly-generated detection rules at user-interactive 
speeds. Furthermore, this inter [...]
+
+Flow parallelism
+***********************
+Multiple independent detection flows must execute in parallel and without 
affecting each other's results. Sub-task level parallelism is out of scope.
+
+Architecture
+#################
+We split the architecture of ThirdEye into three layers: the execution layer, the framework layer, and the logic layer. The execution layer is responsible for tying the anomaly detection framework into ThirdEye's existing task execution framework and for providing configuration facilities. The framework layer provides an abstraction for algorithm development by providing a unified interface for data and configuration retrieval as well as utilities for common aspects involved in algorithm dev [...]
+
+
+.. image:: 
https://user-images.githubusercontent.com/4448437/88264885-670d6500-cc81-11ea-92da-b69073a69e03.png
+  :width: 500
+
+Execution layer
+**********************
+The execution layer ties in the detection framework with ThirdEye's existing 
task execution framework and data layer. ThirdEye triggers the execution of 
detection algorithms either time-based (cron job) or on-demand for testing or 
per on-boarding request from a user. The scheduled execution executes per cron 
schedule in a stateful manner such that the result of previous executions is 
available to the algorithm on every run. This component is especially important 
as it serves most product [...]
+
+Framework layer
+*********************
+The framework provides an abstraction over various data sources and 
configuration facilities in ThirdEye and presents a uniform layer to pipeline 
and algorithm developers. A major aspect of this is the Data Provider, which 
encapsulates time-series, anomaly, and meta-data access. Furthermore, there are 
helpers for configuration injection and utilities for common aspects of 
algorithm development, such as time-series transformations and the data frame 
API. The framework layer also manages t [...]
+
+Logic layer
+*********************
+The business logic layer builds on the framework's pipeline contract to 
implement detection algorithms and specialized pipelines that share 
functionality across groups of similar algorithms. A special aspect of the business logic layer is wrapper pipelines, which enable implementation and 
configuration of custom detection workflows, such as the exploration of 
individual dimensions or the domain-specific merging of anomalies with common 
dimensions. The framework pipelines supports this fu [...]
+
+Design decisions and trade-offs
+#####################################
+
+"Simple algorithms should be simple to build" vs "arbitrary workflows should 
be possible"
+***********************************************************************************************
+Our detection framework provides a layered pipeline API to balance simplicity 
and flexibility in algorithm and workflow development. We chose to provide two 
layers: the raw "DetectionPipeline" and the safer "StaticDetectionPipeline". 
The raw pipeline layer allows dynamic loading of data and iterative execution 
of nested code, which enables us to implement arbitrary workflows but comes at the cost of higher complexity and places the burden of performance optimization on the developer. The s [...]
+
+"De-coupling" vs "simple infrastructure"
+*******************************************
+Simplicity and testability stand at the core of the refactoring of the anomaly 
detection framework. De-coupling of components is strictly necessary to enable 
unit testing, however, a separation of the framework into dozens of individual 
components makes the writing of algorithms and test-cases confusing and 
difficult, especially as it introduces various partial-failure modes. The data 
provider shows this trade-off between loose coupling and one-stop simplicity: 
rather than registering in [...]
+
+"batch detection" vs "point-wise walk forward"
+*************************************************
+The detection pipeline contract was designed to handle time ranges rather than 
single timestamps. This enables batch operations and multi-period detection 
scenarios but offloads some complexity of implementing walk-forward analysis 
onto the maintainers of algorithms that perform point-wise anomaly detection. 
At present, this is mainly an issue with legacy detection algorithms 
and we address it by providing a specialized wrapper pipeline that contains a 
generic implementation of  [...]
+
+"complex algorithms" vs "performance and scalability"
+**********************************************************
+Our architecture currently does not enforce any structure on the algorithm 
implementation besides the specification of inputs and outputs. Specifically, 
there are no limits on the amount of data that can be requested from the 
provider. This enables algorithm maintainers to implement algorithms in 
non-scalable ways, such as re-training the detection model on long time ranges 
before each evaluation of the detection model. It also doesn't prevent the 
system (and its data sources) from mista [...]
+
+Another limitation here is the restriction of parallelism on a per-flow basis. 
Pipelines and algorithms can contain internal state during execution which is 
not stored in any external metadata store. This enables algorithm developers 
to create arbitrary logic, but restricts parallelism to a single serial thread 
of execution per flow in order to avoid the complexity of synchronization and 
distributed processing.
+
+"nesting and non-nesting configuration" vs "implicit coupling via property key 
injection"
+**********************************************************************************************
+There is a fundamental trade-off between separately configuring individual 
metric- or dimension-level alerts and setting up a single detector with 
overrides specific to a single sub-task of detection. Furthermore, this 
configuration may be injected from a wrapper pipeline down into a nested 
pipeline. We explicitly chose to use a single, all-encompassing configuration 
per detection use-case to allow consistent handling of related anomalies in a 
single flow, for example for merging or clus [...]
+
+"generalized configuration object" vs "static type safety of config"
+****************************************************************************
+Configuration of pipelines could be served as statically defined config 
classes or semi-structured (and dynamically typed) key-value maps. Static 
objects provide type-safety and would allow static checking of configuration 
correctness. However, they add overhead to pipeline development and code. The 
alternative delays error checking to runtime, i.e. only when the configured 
pipeline is instantiated and executed. This approach is more lightweight and 
flexible in terms of development. When [...]
+
+"atomic execution" vs "redundant computation"
+*************************************************
+Anomaly detection isn't a purely online process, i.e. detection sometimes changes its decisions about the past state of the world after detection has already run on this past time range. For example, a new but short outlier may be ignored by detection initially, but may be re-classified as an anomaly when the following data points are generated and show similar anomalous behavior. ThirdEye's legacy detection pipeline chose to store both "candidates" and "confirmed" anomalies in the data ba [...]
+
+"serial execution, custom and re-usable wrappers" vs "parallel execution 
pipeline parts"
+*********************************************************************************************
+Parallelism in ThirdEye operates at the job level, but not per task. This allows 
users to specify arbitrary flows of exploration, detection, merging, and 
grouping tasks as all the state is available in a single place during execution 
(see atomic execution above). The trade-off here comes from a limit to scaling 
of extremely large singular detection flows that cannot execute serially. This 
can be mitigated by splitting the job into multiple independent ones, 
effectively allowing the user to c [...]
+
+
diff --git a/thirdeye/docs/detection_pipeline_execution_flow.rst 
b/thirdeye/docs/detection_pipeline_execution_flow.rst
new file mode 100644
index 0000000..d08eac9
--- /dev/null
+++ b/thirdeye/docs/detection_pipeline_execution_flow.rst
@@ -0,0 +1,192 @@
+..
+.. Licensed to the Apache Software Foundation (ASF) under one
+.. or more contributor license agreements.  See the NOTICE file
+.. distributed with this work for additional information
+.. regarding copyright ownership.  The ASF licenses this file
+.. to you under the Apache License, Version 2.0 (the
+.. "License"); you may not use this file except in compliance
+.. with the License.  You may obtain a copy of the License at
+..
+..   http://www.apache.org/licenses/LICENSE-2.0
+..
+.. Unless required by applicable law or agreed to in writing,
+.. software distributed under the License is distributed on an
+.. "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+.. KIND, either express or implied.  See the License for the
+.. specific language governing permissions and limitations
+.. under the License.
+..
+
+.. _detection-pipeline-execution-flow:
+
+######################################
+Detection Pipeline Execution Flow
+######################################
+
+Background & Motivation
+###########################
+The current detection pipeline does not support business rules well. The user base of ThirdEye is growing constantly, and many anomaly detection use-cases come with an additional set of business rules. For example, the growth team wants to filter out anomalies whose site-wide impact is less than a certain threshold. They need to group anomalies across different dimensions. Some teams want to set up anomaly detection using threshold rules or want to use a fixed list of sub-dimensions to monito [...]
+
+Users also have no way to configure their business rules without reaching out to the pipeline maintainer to change the config JSON manually, which is not scalable. Some users, although they have their own specific inclusion/exclusion rules, still want to utilize the auto-tuned algorithm-based detection pipeline. This is not achievable in the current pipeline.
+
+Due to the limitations described above, we introduce the composite pipeline 
flow in the new detection pipeline framework to achieve the following goals:
+
+   - More flexibility in adding user-defined business rules to the pipeline
+
+   - User-friendly configuration of the detection rules
+
+   - Robustness and testability
+
+Design & Implementation
+##########################
+
+Composite pipeline flow
+*************************
+The pipeline is shown as follows.
+
+.. image:: 
https://user-images.githubusercontent.com/4448437/88265403-48f43480-cc82-11ea-9efe-1c30016a6669.png
+  :width: 500
+
+Dimension Exploration:
+========================
+Dimension drill-down for user-defined dimensions. For example, explore the country dimension where continent = Europe.
+
+Dimension Filter:
+=====================
+Filter out dimensions based on business logic criteria. For example, only explore dimensions whose contribution to the overall metric is > 5%.
+
+Rule Detection:
+==================
+User-specified rules for anomaly detection. For example, if the percentage change WoW is > 10%, fire an anomaly.
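
As an illustrative sketch only (the function and variable names here are hypothetical and not ThirdEye's actual rule API), such a WoW percentage-change rule could look like:

```python
WEEK = 7 * 24 * 3600  # seconds in one week

def wow_rule_detect(series, threshold=0.10):
    """Flag timestamps whose value moved more than `threshold` vs. one week earlier.

    `series` maps epoch-second timestamps to metric values. Returns a list of
    (timestamp, fractional_change) pairs for points exceeding the threshold.
    """
    anomalies = []
    for ts, value in sorted(series.items()):
        baseline = series.get(ts - WEEK)
        if baseline:  # skip points with a missing or zero baseline
            change = (value - baseline) / baseline
            if abs(change) > threshold:
                anomalies.append((ts, change))
    return anomalies
```

For example, a series containing `{0: 100.0, WEEK: 125.0}` yields a 25% WoW change at the second point, which exceeds the 10% threshold and fires an anomaly.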
+
+Algorithm Detection:
+=====================
+Existing anomaly detection algorithms, such as sign test, spline regression, etc.
+
+Algorithm alert filter:
+========================
+Existing auto-tune alert filters.
+
+Merger:
+=========
+For each dimension, merge anomalies based on time. See the more detailed discussion of the merging logic below.
+
+Rule filter:
+==============
+A user-defined exclusion filter that removes the anomalies users don't want to receive. For example, if within the anomaly time range the site-wide impact of this metric in this dimension is less than 5%, don't classify it as an anomaly.
+
+Grouper:
+==========
+Groups anomalies across different dimensions.
+
+The algorithm detection and alert filter stages provide backward compatibility with the existing anomaly function interface.
+
+For each stage, we provide interfaces so that it is pluggable. Users can provide any kind of business logic to customize the behavior of each stage. The details of the interfaces are listed in this page: Detection pipeline interfaces.
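
A minimal sketch of what such a pluggable stage contract could look like, under assumed names (the actual interfaces are documented on the Detection pipeline interfaces page and will differ):

```python
from abc import ABC, abstractmethod

class DetectionStage(ABC):
    """Hypothetical stage contract: each stage transforms a list of anomalies."""

    @abstractmethod
    def run(self, anomalies):
        """Consume anomalies from the previous stage; return the next list."""

class SiteWideImpactFilter(DetectionStage):
    """Example rule filter: drop anomalies below a site-wide impact threshold."""

    def __init__(self, min_impact=0.05):
        self.min_impact = min_impact

    def run(self, anomalies):
        return [a for a in anomalies if a["impact"] >= self.min_impact]

def run_pipeline(stages, anomalies):
    """Chain pluggable stages; each stage sees the previous stage's output."""
    for stage in stages:
        anomalies = stage.run(anomalies)
    return anomalies
```

Any business-specific logic can then be swapped in by implementing the same stage interface, without touching the surrounding pipeline.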
+
+Pros of this pipeline:
+========================
+* Users can define inclusion rules to detect anomalies.
+* Users won't receive the anomalies they explicitly filtered out if they set up the exclusion rule-based filter.
+* For each dimension, users won't see duplicated anomalies generated by the algorithm & rule pipelines for any time range, since they are merged based on time.
+
+
+
+Alternative Pipeline flow designs:
+*************************************
+1.
+
+.. image:: 
https://user-images.githubusercontent.com/4448437/88265408-4b568e80-cc82-11ea-83e7-a833663a68ed.png
+  :width: 500
+
+Pros of this pipeline:
+========================
+* Users can define inclusion rules to detect anomalies.
+* Users won't receive the anomalies they explicitly filtered out if they set up the exclusion rule-based filter.
+* Users won't see duplicated anomalies generated by the algorithm & rule pipelines, since they are merged based on time.
+
+Cons of this pipeline:
+========================
+* The algorithm alert filter might filter out anomalies generated by user-specified rules, i.e. users could miss anomalies they want to see.
+
+
+2.
+
+.. image:: 
https://user-images.githubusercontent.com/4448437/88265411-4e517f00-cc82-11ea-947a-04bee30ca08c.png
+  :width: 500
+
+Pros of this pipeline:
+========================
+* Users can define inclusion rules to detect anomalies.
+* Users won't see duplicated anomalies generated by the algorithm & rule pipelines, since they are merged based on time.
+
+Cons of this pipeline:
+========================
+* Users will still see the anomalies they set rules to explicitly filter out, because the anomalies generated by the algorithm detection pipeline are not filtered by the user's exclusion rules.
+
+As discussed above, we recommend the first design as the default. The detection framework itself retains the flexibility to execute different types of flows if this is needed later.
+
+
+
+Merging logic
+#################
+Merging happens either when merging anomalies within a single rule/algorithm detection flow or when merging anomalies generated by different flows. The merger's behavior differs slightly between these two cases.
+
+Merging only rule-detected anomalies or only algorithm-detected anomalies
+****************************************************************************
+Do time-based merging only. Do not keep the pre-merge anomalies.
+
+Merging both rule-detected anomalies and algorithm-detected anomalies
+**********************************************************************
+There will be 3 cases when merging two anomalies:
+
+.. image:: 
https://user-images.githubusercontent.com/4448437/88265414-501b4280-cc82-11ea-904e-83fd54e3a157.png
+  :width: 500
+
+Solution to case 2:
+=====================
+1. Merge all time intervals in both anomalies.
+-------------------------------------------------
+In this example, the merger will send A-D as the anomaly.
+
+Pros:
+======
+* Users will not receive duplicated anomalies for any specific range.
+* Improves the recall.
+
+Cons:
+======
+* Users will receive an extended anomaly range, i.e. a longer period to investigate.
+
+
+2. Only classify as an anomaly for the overlapped interval.
+-------------------------------------------------------------
+In this example, the merger will send C-B as the anomaly.
+
+Pros:
+======
+* Users will not receive duplicated anomalies for any specific range.
+* Improves the precision. The anomaly range is shortened, so users have a shorter period to investigate.
+
+Cons:
+======
+* Users could miss the anomaly period they explicitly set rules to detect, because the merger might chop off part of the anomaly period. This reduces recall.
+
+
+3. Don’t merge, send two anomalies.
+-----------------------------------------
+In this example, the merger will send A-B and C-D as two anomalies.
+
+Pros:
+======
+* Improves the recall
+
+Cons:
+======
+* Users will receive duplicated anomalies for a specific time range, in this example C-B.
+* Users have a higher investigation workload because there are more anomalies.
+
+
+
+As discussed above, we set the merger to behave like solution 1 by default, i.e. it merges the time periods. The merger keeps the pre-merge anomalies as child anomalies, which allows tracing back to the anomalies generated by different algorithms/rules.
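
The default behavior (solution 1, with the originals retained as children) can be sketched as follows; the dictionary shape is illustrative, not ThirdEye's actual anomaly data model:

```python
def merge_anomalies(anomalies):
    """Merge time-overlapping anomalies into one covering interval (solution 1).

    Each input is a dict with "start" and "end" timestamps. Overlapping inputs
    are merged into a single anomaly spanning both; the originals are kept in
    the merged anomaly's "children" list for trace-back.
    """
    merged = []
    for a in sorted(anomalies, key=lambda x: x["start"]):
        if merged and a["start"] <= merged[-1]["end"]:
            # Case 2 above: intervals overlap, so extend the covering interval.
            last = merged[-1]
            last["end"] = max(last["end"], a["end"])
            last["children"].append(a)
        else:
            merged.append({"start": a["start"], "end": a["end"], "children": [a]})
    return merged
```

For the A-D example above: anomalies A-B and C-D overlap on C-B, so the merger emits a single A-D anomaly with both originals attached as children.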
+
+
diff --git a/thirdeye/docs/index.rst b/thirdeye/docs/index.rst
index d17415a..175bfce 100644
--- a/thirdeye/docs/index.rst
+++ b/thirdeye/docs/index.rst
@@ -29,3 +29,5 @@ ThirdEye
    datasources
    caches
    alert_setup
+   thirdeye_architecture
+   thirdeye_ui_mocks
diff --git a/thirdeye/docs/alert_setup.rst 
b/thirdeye/docs/thirdeye_architecture.rst
similarity index 84%
copy from thirdeye/docs/alert_setup.rst
copy to thirdeye/docs/thirdeye_architecture.rst
index bda73f0..3232c52 100644
--- a/thirdeye/docs/alert_setup.rst
+++ b/thirdeye/docs/thirdeye_architecture.rst
@@ -17,16 +17,13 @@
 .. under the License.
 ..
 
-.. _alert-setup:
+.. _thirdeye-architecture:
 
-Alert Setup
-############
+ThirdEye Architecture
+##########################
 
 .. toctree::
     :maxdepth: 1
 
-    basic_config
-    advanced_config
-    templates
-    appendix
-    contribute_detection
\ No newline at end of file
+    detection_pipeline_architecture
+    detection_pipeline_execution_flow
diff --git a/thirdeye/docs/alert_setup.rst b/thirdeye/docs/thirdeye_ui_mocks.rst
similarity index 58%
copy from thirdeye/docs/alert_setup.rst
copy to thirdeye/docs/thirdeye_ui_mocks.rst
index bda73f0..7a1cd1a 100644
--- a/thirdeye/docs/alert_setup.rst
+++ b/thirdeye/docs/thirdeye_ui_mocks.rst
@@ -17,16 +17,15 @@
 .. under the License.
 ..
 
-.. _alert-setup:
+.. _thirdeye-ui-mocks:
 
-Alert Setup
-############
+ThirdEye UI Mocks
+##########################
 
 .. toctree::
     :maxdepth: 1
 
-    basic_config
-    advanced_config
-    templates
-    appendix
-    contribute_detection
\ No newline at end of file
+    :download:`Entity (multi-metrics) Monitoring <https://github.com/apache/incubator-pinot/files/4964811/ThirdEye_EntityMonitoring.pdf>`
+    :download:`Subscription group Management <https://github.com/apache/incubator-pinot/files/4964812/ThirdEye_SubscriptionGroups_Management.pdf>`
+    :download:`Suppress Alert <https://github.com/apache/incubator-pinot/files/4964813/ThirdEye_Suppress_Alerts.pdf>`
+    :download:`SLA monitoring <https://github.com/apache/incubator-pinot/files/4964810/ThirdEye_SLA_monitoring.pdf>`


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscr...@pinot.apache.org
For additional commands, e-mail: commits-h...@pinot.apache.org
