http://git-wip-us.apache.org/repos/asf/incubator-impala/blob/3be0f122/docs/topics/impala_resource_management.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_resource_management.xml 
b/docs/topics/impala_resource_management.xml
new file mode 100644
index 0000000..abc2e3d
--- /dev/null
+++ b/docs/topics/impala_resource_management.xml
@@ -0,0 +1,338 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
+<concept rev="1.2" id="resource_management">
+
+  <title>Resource Management for Impala</title>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala"/>
+      <data name="Category" value="YARN"/>
+      <data name="Category" value="Resource Management"/>
+      <data name="Category" value="Administrators"/>
+      <data name="Category" value="Developers"/>
+      <data name="Category" value="Data Analysts"/>
+    </metadata>
+  </prolog>
+
+  <conbody>
+
+    <note conref="../shared/impala_common.xml#common/impala_llama_obsolete"/>
+
+    <p>
+      You can limit the CPU and memory resources used by Impala to manage and
+      prioritize workloads on clusters that run jobs from many Hadoop components.
+    </p>
+
+    <p outputclass="toc inpage"/>
+  </conbody>
+
+  <concept audience="Cloudera" id="llama">
+  <!-- Hiding the whole concept now that Llama is desupported. -->
+
+    <title>The Llama Daemon</title>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Llama"/>
+    </metadata>
+  </prolog>
+
+    <conbody>
+
+<!-- Copied from 
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Installation-Guide/cdh_ig_install_llama.html
 - turn into a conref in both places. -->
+
+      <p>
+        Llama is a system that mediates resource management between Impala and 
Hadoop YARN. Llama enables
+        Impala to reserve, use, and release resource allocations in a Hadoop 
cluster. Llama is only required if
+        resource management is enabled in Impala.
+      </p>
+
+      <p id="p_b2y_msl_jp">
+        By default, YARN allocates resources incrementally as they are needed by
+        MapReduce jobs. Impala needs all resources
+        available at the same time, so that intermediate results can be 
exchanged between cluster nodes, and
+        queries do not stall partway through waiting for new resources to be 
allocated. Llama is the intermediary
+        process that ensures all requested resources are available before each 
Impala query actually begins.
+      </p>
+
+<!-- Original URL: 
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-Installation-Guide/cdh_ig_llama_installation.html
 -->
+
+<!-- Highly likely to be removed per 
https://jira.cloudera.com/browse/CDH-21352,
+     so commenting out for now. Feels like there should be some 'installing 
llama via CM' topic
+     even if it says not to worry.
+
+      <p>
+        For Llama installation instructions, see
+        <xref href="http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_llama_installation.html" scope="external" format="html">Llama
+        Installation</xref>.
+      </p>
+-->
+
+<!-- Original URL: 
http://www.cloudera.com/content/cloudera-content/cloudera-docs/CM5/latest/Cloudera-Manager-Managing-Clusters/cm_mc_impala_service.html
 -->
+
+      <p>
+        For management through Cloudera Manager, see
+        <xref href="http://www.cloudera.com/documentation/enterprise/latest/topics/admin_llama.html" scope="external" format="html">The
+        Impala Llama ApplicationMaster</xref>.
+      </p>
+    </conbody>
+  </concept>
+
+<!-- Hiding more content per MJ feedback.
+  <concept id="rm_resource_estimates">
+
+    <title>Controlling Resource Estimation Behavior</title>
+
+    <conbody>
+
+      <p>
+        By default, Impala consults the table statistics and column statistics 
for each table in a query, and uses
+        those figures to construct estimates of needed resources for each 
query. See
+        <xref href="impala_compute_stats.xml#compute_stats"/> for the 
statement to collect table and column
+        statistics for a table.
+      </p>
+
+      <p>
+        In <keyword keyref="impala25_full"/> and higher, the preferred way to 
avoid overcommitting memory in a high-concurrency
+        or multitenant scenario is to use Impala admission control together 
with dynamic resource pools.
+        You can specify a <uicontrol>Default Query Memory Limit</uicontrol> 
setting, with a different value for each
+        pool, and Impala uses that value to calculate how many queries can 
safely run within a specified
+        cluster-wide aggregate memory size.
+        See <xref href="impala_admission.xml#admission_control"/> for details.
+      </p>
+
+      <p>
+        To avoid problems with inaccurate or missing statistics, which can 
lead to inaccurate estimates of resource
+        consumption, Impala allows you to set default estimates for CPU and 
memory resource consumption.
+        When the query is complete, those resources are returned to YARN as
+        normal. To enable this feature, use the command-line option 
<codeph>-rm_always_use_defaults</codeph> when
+        starting <cmdname>impalad</cmdname>, and optionally
+        <codeph>-rm_default_memory=<varname>size</varname></codeph> and 
<codeph>-rm_default_cpu_cores</codeph>.
+        See <xref href="impala_resource_management.xml#rm_options"/> for 
details about each option.
+      </p>
+    </conbody>
+  </concept>
+
+  <concept id="rm_checking">
+
+    <title>Checking Resource Estimates and Actual Usage</title>
+
+    <conbody>
+
+      <p>
+        To make resource usage easier to verify, the output of the 
<codeph>EXPLAIN</codeph> SQL statement now
+        includes information about estimated memory usage, whether table and 
column statistics are available for
+        each table, and the number of virtual cores that a query will use. You 
can get this information through the
+        <codeph>EXPLAIN</codeph> statement without actually running the query. 
The extra information requires
+        setting the query option <codeph>EXPLAIN_LEVEL=verbose</codeph>; see
+        <xref href="impala_explain.xml#explain"/> for details. The same 
extended information is shown at the start
+        of the output from the <codeph>PROFILE</codeph> statement in 
<cmdname>impala-shell</cmdname>. The detailed
+        profile information is only available after running the query. You can 
take appropriate actions (gathering
+        statistics, adjusting query options) if you find that queries fail or 
run with suboptimal performance when
+        resource management is enabled.
+      </p>
+    </conbody>
+  </concept>
+-->
+
+  <concept id="rm_enforcement">
+
+    <title>How Resource Limits Are Enforced</title>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Concepts"/>
+    </metadata>
+  </prolog>
+
+    <conbody>
+
+      <ul>
+        <li>
+          If you use static partitioning in Cloudera Manager, it creates a cgroup
+          in which Impala runs. This cgroup limits CPU, network, and I/O according
+          to the static partitioning policy.
+        </li>
+
+        <li>
+          Limits on memory usage are enforced by Impala's process memory limit 
(the <codeph>MEM_LIMIT</codeph>
+          query option setting). The admission control feature checks this 
setting to decide how many queries
+          can be safely run at the same time. Then the Impala daemon enforces 
the limit by activating the
+          spill-to-disk mechanism when necessary, or cancelling a query 
altogether if the limit is exceeded at runtime.
+        </li>
+      </ul>
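The capacity arithmetic that admission control applies, as described in the second bullet, can be sketched roughly as follows. This is a simplified illustration of the idea (aggregate pool memory divided by the per-query memory limit), not Impala's actual admission-control implementation; the function name and the example sizes are hypothetical.

```python
# Simplified sketch of the admission-control arithmetic described above:
# given a pool-wide aggregate memory limit and a per-query memory limit
# (the MEM_LIMIT query option), how many queries can safely run at once?

def max_concurrent_queries(pool_mem_bytes: int, per_query_mem_limit: int) -> int:
    """Upper bound on concurrently admitted queries for one pool."""
    if per_query_mem_limit <= 0:
        raise ValueError("per-query memory limit must be positive")
    return pool_mem_bytes // per_query_mem_limit

GIB = 1024 ** 3
# A 128 GiB pool with a 4 GiB default query memory limit admits 32 queries.
print(max_concurrent_queries(128 * GIB, 4 * GIB))
```

Queries beyond that bound are queued or rejected rather than admitted, which is what prevents the spill-to-disk or cancellation paths from being triggered by overcommitment.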
+    </conbody>
+  </concept>
+
+<!--
+  <concept id="rm_enable">
+
+    <title>Enabling Resource Management for Impala</title>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Configuring"/>
+      <data name="Category" value="Starting and Stopping"/>
+    </metadata>
+  </prolog>
+
+    <conbody>
+
+      <p>
+        To enable resource management for Impala, first you <xref 
href="#rm_cdh_prereqs">set up the YARN
+        service for your CDH cluster</xref>. Then you <xref 
href="#rm_options">add startup options and customize
+        resource management settings</xref> for the Impala services.
+      </p>
+    </conbody>
+
+    <concept id="rm_cdh_prereqs">
+
+      <title>Required CDH Setup for Resource Management with Impala</title>
+
+      <conbody>
+
+        <p>
+          YARN is the general-purpose service that manages resources for many 
Hadoop components within a CDH
+          cluster.
+        </p>
+
+        <p>
+          For information about setting up the YARN service, see the 
instructions for
+          <xref href="http://www.cloudera.com/documentation/enterprise/latest/topics/cm_mc_yarn_service.html" scope="external" format="html">Cloudera
+          Manager</xref>.
+        </p>
+      </conbody>
+    </concept>
+
+    <concept id="rm_options">
+
+      <title>impalad Startup Options for Resource Management</title>
+
+      <conbody>
+
+        <p id="resource_management_impalad_options">
+          The following startup options for <cmdname>impalad</cmdname> enable 
resource management and customize its
+          parameters for your cluster configuration:
+          <ul>
+            <li>
+              <codeph>-enable_rm</codeph>: Whether to enable resource 
management or not, either
+              <codeph>true</codeph> or <codeph>false</codeph>. The default is 
<codeph>false</codeph>. None of the
+              other resource management options have any effect unless 
<codeph>-enable_rm</codeph> is turned on.
+            </li>
+
+            <li>
+              <codeph>-cgroup_hierarchy_path</codeph>: Path where YARN will 
create cgroups for granted
+              resources. Impala assumes that the cgroup for an allocated 
container is created in the path
+              '<varname>cgroup_hierarchy_path</varname> + 
<varname>container_id</varname>'.
+            </li>
+
+            <li rev="1.4.0">
+              <codeph>-rm_always_use_defaults</codeph>: If this Boolean option 
is enabled, Impala ignores computed
+              estimates and always obtains the default memory and CPU 
allocation settings at the start of the
+              query. These default estimates are approximately 2 CPUs and 4 GB 
of memory, possibly varying slightly
+              depending on cluster size, workload, and so on. <ph 
rev="upstream">Cloudera</ph> recommends enabling
+              <codeph>-rm_always_use_defaults</codeph> whenever resource 
management is used, and relying on these
+              default values (that is, leaving out the two following options).
+            </li>
+
+            <li rev="1.4.0">
+              <codeph>-rm_default_memory=<varname>size</varname></codeph>: 
Optionally sets the default estimate for
+              memory usage for each query. You can use suffixes such as M and 
G for megabytes and gigabytes, the
+              same as with the <xref 
href="impala_mem_limit.xml#mem_limit">MEM_LIMIT</xref> query option. Only has
+              an effect when <codeph>-rm_always_use_defaults</codeph> is also 
enabled.
+            </li>
+
+            <li rev="1.4.0">
+              <codeph>-rm_default_cpu_cores</codeph>: Optionally sets the 
default estimate for number of virtual
+              CPU cores for each query. Only has an effect when 
<codeph>-rm_always_use_defaults</codeph> is also
+              enabled.
+            </li>
+          </ul>
+        </p>
+
+      </conbody>
+    </concept>
+-->
+
+    <concept id="rm_query_options">
+
+      <title>impala-shell Query Options for Resource Management</title>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala Query Options"/>
+    </metadata>
+  </prolog>
+
+      <conbody>
+
+        <p>
+          Before issuing SQL statements through the 
<cmdname>impala-shell</cmdname> interpreter, you can use the
+          <codeph>SET</codeph> command to configure the following parameters 
related to resource management:
+        </p>
+
+        <ul id="ul_nzt_twf_jp">
+          <li>
+            <xref href="impala_explain_level.xml#explain_level"/>
+          </li>
+
+          <li>
+            <xref href="impala_mem_limit.xml#mem_limit"/>
+          </li>
+
+<!-- Not supported in CDH 5.5 / Impala 2.3 and up.
+          <li>
+            <xref 
href="impala_reservation_request_timeout.xml#reservation_request_timeout"/>
+          </li>
+
+          <li>
+            <xref href="impala_v_cpu_cores.xml#v_cpu_cores"/>
+          </li>
+-->
+        </ul>
+      </conbody>
+    </concept>
+
+<!-- Parent topic is going away, so former subtopic is hoisted up a level.
+  </concept>
+-->
+
+  <concept id="rm_limitations">
+
+    <title>Limitations of Resource Management for Impala</title>
+
+    <conbody>
+
+<!-- Conditionalizing some content here with audience="Cloudera" because there 
are already some XML comments
+     inside the list, so not practical to enclose the whole thing in XML 
comments. -->
+
+      <p audience="Cloudera">
+        Currently, Impala in CDH 5 has the following limitations for resource 
management of Impala queries:
+      </p>
+
+      <ul audience="Cloudera">
+        <li>
+          Table statistics are required, and column statistics are highly 
valuable, for Impala to produce accurate
+          estimates of how much memory to request from YARN. See
+          <xref href="impala_perf_stats.xml#perf_table_stats"/> and
+          <xref href="impala_perf_stats.xml#perf_column_stats"/> for 
instructions on gathering both kinds of
+          statistics, and <xref href="impala_explain.xml#explain"/> for the 
extended <codeph>EXPLAIN</codeph>
+          output where you can check that statistics are available for a 
specific table and set of columns.
+        </li>
+
+        <li>
+          If the Impala estimate of required memory is lower than is actually 
required for a query, Impala
+          dynamically expands the amount of requested memory.
+<!--          Impala will cancel the query when it exceeds the requested 
memory size. -->
+          Queries might still be cancelled if the reservation expansion fails:
+          for example, if there are insufficient remaining resources for that pool,
+          if the expansion request takes longer than the query timeout interval,
+          or because of YARN preemption.
+<!--          This could happen in some cases with complex queries, even when 
table and column statistics are available. -->
+          You can see the actual memory usage after a failed query by issuing 
a <codeph>PROFILE</codeph> command in
+          <cmdname>impala-shell</cmdname>. Specify a larger memory figure with
+          the <codeph>MEM_LIMIT</codeph> query option and retry the query.
+        </li>
+      </ul>
+
+      <p rev="2.0.0">
+        The <codeph>MEM_LIMIT</codeph> query option, and the other 
resource-related query options, are settable
+        through the ODBC or JDBC interfaces in Impala 2.0 and higher. This is 
a former limitation that is now
+        lifted.
+      </p>
+    </conbody>
+  </concept>
+</concept>

http://git-wip-us.apache.org/repos/asf/incubator-impala/blob/3be0f122/docs/topics/impala_revoke.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_revoke.xml b/docs/topics/impala_revoke.xml
new file mode 100644
index 0000000..9ab81fe
--- /dev/null
+++ b/docs/topics/impala_revoke.xml
@@ -0,0 +1,101 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
+<concept rev="2.0.0" id="revoke">
+
+  <title>REVOKE Statement (<keyword keyref="impala20"/> or higher only)</title>
+  <titlealts audience="PDF"><navtitle>REVOKE</navtitle></titlealts>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala"/>
+      <data name="Category" value="DDL"/>
+      <data name="Category" value="SQL"/>
+      <data name="Category" value="Security"/>
+      <data name="Category" value="Sentry"/>
+      <data name="Category" value="Roles"/>
+      <data name="Category" value="Administrators"/>
+      <data name="Category" value="Developers"/>
+      <data name="Category" value="Data Analysts"/>
+      <!-- Consider whether to go deeper into categories like Security for the 
Sentry-related statements. -->
+    </metadata>
+  </prolog>
+
+  <conbody>
+
+    <p rev="2.0.0">
+      <indexterm audience="Cloudera">REVOKE statement</indexterm>
+<!-- Copied from Sentry docs. Turn into conref. I did some rewording for 
clarity. -->
The <codeph>REVOKE</codeph> statement revokes roles from groups, or privileges
+      on a specified object from roles. Only Sentry administrative users can
+      revoke a role from a group. The revocation has a cascading effect. For
+      example, revoking the <codeph>ALL</codeph> privilege on a database also 
revokes the same privilege for all
+      the tables in that database.
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/syntax_blurb"/>
+
+<codeblock rev="2.3.0 collevelauth">REVOKE ROLE <varname>role_name</varname> 
FROM GROUP <varname>group_name</varname>
+
+REVOKE <varname>privilege</varname> ON <varname>object_type</varname> 
<varname>object_name</varname>
+  FROM [ROLE] <varname>role_name</varname>
+
+<ph rev="2.3.0">privilege ::= SELECT | SELECT(<varname>column_name</varname>) 
| INSERT | ALL</ph>
+object_type ::= TABLE | DATABASE | SERVER | URI
+</codeblock>
+
+    <p>
+      Typically, the object name is an identifier. For URIs, it is a string 
literal.
+    </p>
+
+    <p rev="2.3.0 collevelauth">
+      The ability to grant or revoke <codeph>SELECT</codeph> privilege on 
specific columns is available
+      in <keyword keyref="impala23_full"/> and higher. See
+      <xref audience="integrated" 
href="sg_hive_sql.xml#concept_c2q_4qx_p4/col_level_auth_sentry"/><xref 
audience="standalone" href="https://www.cloudera.com/documentation/enterprise/latest/topics/sg_hive_sql.html" format="html" scope="external"/>
+      for details.
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/privileges_blurb"/>
+
+    <p>
+      Only administrative users (those with <codeph>ALL</codeph> privileges on 
the server, defined in the Sentry
+      policy file) can use this statement.
+    </p>
+
+<!-- Turn compatibility info into a conref or series of conrefs. (In both 
GRANT and REVOKE.) -->
+
+    <p conref="../shared/impala_common.xml#common/compatibility_blurb"/>
+
+    <p>
+      <ul>
+        <li>
+          The Impala <codeph>GRANT</codeph> and <codeph>REVOKE</codeph> 
statements are available in CDH 5.2 and
+          higher.
+        </li>
+
+        <li>
+          In <ph rev="upstream">CDH 5.1</ph> and higher, Impala makes use of 
any roles and privileges specified by the
+          <codeph>GRANT</codeph> and <codeph>REVOKE</codeph> statements in 
Hive, when your system is configured to
+          use the Sentry service instead of the file-based policy mechanism.
+        </li>
+
+        <li>
+          The Impala <codeph>GRANT</codeph> and <codeph>REVOKE</codeph> 
statements do not require the
+          <codeph>ROLE</codeph> keyword to be repeated before each role name, 
unlike the equivalent Hive
+          statements.
+        </li>
+
+        <li conref="../shared/impala_common.xml#common/grant_revoke_single"/>
+      </ul>
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/cancel_blurb_no"/>
+
+    <p conref="../shared/impala_common.xml#common/permissions_blurb_no"/>
+
+    <p conref="../shared/impala_common.xml#common/related_info"/>
+
+    <p>
+      <xref href="impala_authorization.xml#authorization"/>, <xref 
href="impala_grant.xml#grant"/>,
+      <xref href="impala_create_role.xml#create_role"/>, <xref 
href="impala_drop_role.xml#drop_role"/>,
+      <xref href="impala_show.xml#show"/>
+    </p>
+  </conbody>
+</concept>

http://git-wip-us.apache.org/repos/asf/incubator-impala/blob/3be0f122/docs/topics/impala_rm_initial_mem.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_rm_initial_mem.xml 
b/docs/topics/impala_rm_initial_mem.xml
new file mode 100644
index 0000000..cfd77cf
--- /dev/null
+++ b/docs/topics/impala_rm_initial_mem.xml
@@ -0,0 +1,29 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
+<concept id="rm_initial_mem" rev="2.5.0">
+
+  <title>RM_INITIAL_MEM Query Option</title>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala"/>
+      <data name="Category" value="Impala Query Options"/>
+      <data name="Category" value="Developers"/>
+      <data name="Category" value="Data Analysts"/>
+    </metadata>
+  </prolog>
+
+  <conbody>
+
+    <p rev="2.5.0">
+      <indexterm audience="Cloudera">RM_INITIAL_MEM query option</indexterm>
+    </p>
+
+    <p>
+      <b>Type:</b>
+    </p>
+
+    <p>
+      <b>Default:</b>
+    </p>
+  </conbody>
+</concept>

http://git-wip-us.apache.org/repos/asf/incubator-impala/blob/3be0f122/docs/topics/impala_runtime_bloom_filter_size.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_runtime_bloom_filter_size.xml 
b/docs/topics/impala_runtime_bloom_filter_size.xml
new file mode 100644
index 0000000..8fdc0c3
--- /dev/null
+++ b/docs/topics/impala_runtime_bloom_filter_size.xml
@@ -0,0 +1,93 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
+<concept id="runtime_bloom_filter_size" rev="2.5.0">
+
+  <title>RUNTIME_BLOOM_FILTER_SIZE Query Option (<keyword keyref="impala25"/> 
or higher only)</title>
+  <titlealts 
audience="PDF"><navtitle>RUNTIME_BLOOM_FILTER_SIZE</navtitle></titlealts>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala"/>
+      <data name="Category" value="Impala Query Options"/>
+      <data name="Category" value="Performance"/>
+      <data name="Category" value="Developers"/>
+      <data name="Category" value="Data Analysts"/>
+    </metadata>
+  </prolog>
+
+  <conbody>
+
+    <p rev="2.5.0">
+      <indexterm audience="Cloudera">RUNTIME_BLOOM_FILTER_SIZE query 
option</indexterm>
+      Size (in bytes) of Bloom filter data structure used by the runtime 
filtering
+      feature.
+    </p>
+
+    <note type="important">
+      <p rev="2.6.0 CDH-41184 IMPALA-3007">
+        In <keyword keyref="impala26_full"/> and higher, this query option 
only applies as a fallback, when statistics
+        are not available. By default, Impala estimates the optimal size of 
the Bloom filter structure
+        regardless of the setting for this option. (This is a change from the 
original behavior in
+        <keyword keyref="impala25_full"/>.)
+      </p>
+      <p rev="2.6.0 CDH-41184 IMPALA-3007">
+        In <keyword keyref="impala26_full"/> and higher, when the value of 
this query option is used for query planning,
+        it is constrained by the minimum and maximum sizes specified by the
+        <codeph>RUNTIME_FILTER_MIN_SIZE</codeph> and 
<codeph>RUNTIME_FILTER_MAX_SIZE</codeph> query options.
+        The filter size is adjusted upward or downward if necessary to fit 
within the minimum/maximum range.
+      </p>
+    </note>
+
+    <p conref="../shared/impala_common.xml#common/type_integer"/>
+
+    <p>
+      <b>Default:</b> 1048576 (1 MB)
+    </p>
+
+    <p>
+      <b>Maximum:</b> 16 MB
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/added_in_250"/>
+
+    <p conref="../shared/impala_common.xml#common/usage_notes_blurb"/>
+
+    <p>
+      This setting affects optimizations for large and complex queries, such
+      as dynamic partition pruning for partitioned tables, and join 
optimization
+      for queries that join large tables.
+      Larger filters are more effective at handling
+      higher cardinality input sets, but consume more memory per filter.
+      <!--
+      See <xref href="impala_runtime_filtering.xml#runtime_filtering"/> for 
details about when to change
+      the setting for this query option.
+      -->
+    </p>
+
+    <p>
+      If your query filters on high-cardinality columns (for example, millions 
of different values)
+      and you do not get the expected speedup from the runtime filtering 
mechanism, consider
+      doing some benchmarks with a higher value for 
<codeph>RUNTIME_BLOOM_FILTER_SIZE</codeph>.
+      The extra memory devoted to the Bloom filter data structures can help 
make the filtering
+      more accurate.
+    </p>
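The size-versus-accuracy trade-off behind that advice can be illustrated with the standard Bloom filter false-positive formula, fpp = (1 - e^(-kn/m))^k. This is general Bloom filter math, not Impala's internal sizing logic, and the item count and hash count below are illustrative assumptions.

```python
import math

def bloom_false_positive_rate(m_bits: int, n_items: int, k_hashes: int) -> float:
    """Expected false-positive probability of a Bloom filter with
    m bits, n inserted items, and k hash functions."""
    return (1.0 - math.exp(-k_hashes * n_items / m_bits)) ** k_hashes

# Doubling the filter for a high-cardinality input (here, 1 million
# distinct values) noticeably lowers the false-positive rate, which is
# why a larger RUNTIME_BLOOM_FILTER_SIZE can make filtering more accurate.
one_mb_bits = 1 * 1024 * 1024 * 8
fp_small = bloom_false_positive_rate(one_mb_bits, 1_000_000, 4)
fp_large = bloom_false_positive_rate(2 * one_mb_bits, 1_000_000, 4)
print(fp_small, fp_large)
```

A false positive here means a row survives the filter only to be discarded later by the actual join, so lower rates translate into less wasted scanning and network transfer.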
+
+    <p 
conref="../shared/impala_common.xml#common/runtime_filtering_option_caveat"/>
+
+    <p>
+      Because the effectiveness of this setting depends so much on query 
characteristics and data distribution,
+      you typically only use it for specific queries that need some extra 
tuning, and the ideal value depends
+      on the query. Consider setting this query option immediately before the 
expensive query and
+      unsetting it immediately afterward.
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/related_info"/>
+    <p>
+      <xref href="impala_runtime_filtering.xml"/>,
+      <!-- , <xref href="impala_partitioning.xml#dynamic_partition_pruning"/> 
-->
+      <xref href="impala_runtime_filter_mode.xml#runtime_filter_mode"/>,
+      <xref 
href="impala_runtime_filter_min_size.xml#runtime_filter_min_size"/>,
+      <xref href="impala_runtime_filter_max_size.xml#runtime_filter_max_size"/>
+    </p>
+
+  </conbody>
+</concept>

http://git-wip-us.apache.org/repos/asf/incubator-impala/blob/3be0f122/docs/topics/impala_runtime_filter_max_size.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_runtime_filter_max_size.xml 
b/docs/topics/impala_runtime_filter_max_size.xml
new file mode 100644
index 0000000..1591d3f
--- /dev/null
+++ b/docs/topics/impala_runtime_filter_max_size.xml
@@ -0,0 +1,51 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
+<concept id="runtime_filter_max_size" rev="2.6.0 IMPALA-3480 CDH-41184">
+
+  <title>RUNTIME_FILTER_MAX_SIZE Query Option (<keyword keyref="impala26"/> or 
higher only)</title>
+  <titlealts 
audience="PDF"><navtitle>RUNTIME_FILTER_MAX_SIZE</navtitle></titlealts>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala"/>
+      <data name="Category" value="Impala Query Options"/>
+      <data name="Category" value="Performance"/>
+      <data name="Category" value="Developers"/>
+      <data name="Category" value="Data Analysts"/>
+    </metadata>
+  </prolog>
+
+  <conbody>
+
+    <p rev="2.6.0 IMPALA-3480">
+      <indexterm audience="Cloudera">RUNTIME_FILTER_MAX_SIZE query 
option</indexterm>
+      The <codeph>RUNTIME_FILTER_MAX_SIZE</codeph> query option
+      adjusts the settings for the runtime filtering feature.
+      This option defines the maximum size for a filter,
+      no matter what the estimates produced by the planner are.
+      This value also overrides any higher number specified for the
+      <codeph>RUNTIME_BLOOM_FILTER_SIZE</codeph> query option.
+      Filter sizes are rounded up to the nearest power of two.
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/type_integer"/>
+
+    <p>
+      <b>Default:</b> 0 (meaning use the value from the corresponding 
<cmdname>impalad</cmdname> startup option)
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/added_in_260"/>
+
+    <p conref="../shared/impala_common.xml#common/usage_notes_blurb"/>
+
+    <p 
conref="../shared/impala_common.xml#common/runtime_filtering_option_caveat"/>
+
+    <p conref="../shared/impala_common.xml#common/related_info"/>
+    <p>
+      <xref href="impala_runtime_filtering.xml"/>,
+      <xref href="impala_runtime_filter_mode.xml#runtime_filter_mode"/>,
+      <xref 
href="impala_runtime_filter_min_size.xml#runtime_filter_min_size"/>,
+      <xref 
href="impala_runtime_bloom_filter_size.xml#runtime_bloom_filter_size"/>
+    </p>
+
+  </conbody>
+</concept>

http://git-wip-us.apache.org/repos/asf/incubator-impala/blob/3be0f122/docs/topics/impala_runtime_filter_min_size.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_runtime_filter_min_size.xml 
b/docs/topics/impala_runtime_filter_min_size.xml
new file mode 100644
index 0000000..75944e3
--- /dev/null
+++ b/docs/topics/impala_runtime_filter_min_size.xml
@@ -0,0 +1,51 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
+<concept id="runtime_filter_min_size" rev="2.6.0 IMPALA-3480 CDH-41184">
+
+  <title>RUNTIME_FILTER_MIN_SIZE Query Option (<keyword keyref="impala26"/> or 
higher only)</title>
+  <titlealts 
audience="PDF"><navtitle>RUNTIME_FILTER_MIN_SIZE</navtitle></titlealts>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala"/>
+      <data name="Category" value="Impala Query Options"/>
+      <data name="Category" value="Performance"/>
+      <data name="Category" value="Developers"/>
+      <data name="Category" value="Data Analysts"/>
+    </metadata>
+  </prolog>
+
+  <conbody>
+
+    <p rev="2.6.0 IMPALA-3480">
+      <indexterm audience="Cloudera">RUNTIME_FILTER_MIN_SIZE query 
option</indexterm>
+      The <codeph>RUNTIME_FILTER_MIN_SIZE</codeph> query option
+      adjusts the settings for the runtime filtering feature.
+      This option defines the minimum size for a filter,
+      no matter what the estimates produced by the planner are.
+      This value also overrides any lower number specified for the
+      <codeph>RUNTIME_BLOOM_FILTER_SIZE</codeph> query option.
+      Filter sizes are rounded up to the nearest power of two.
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/type_integer"/>
+
+    <p>
+      <b>Default:</b> 0 (meaning use the value from the corresponding 
<cmdname>impalad</cmdname> startup option)
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/added_in_260"/>
+
+    <p conref="../shared/impala_common.xml#common/usage_notes_blurb"/>
+
+    <p 
conref="../shared/impala_common.xml#common/runtime_filtering_option_caveat"/>
+
+    <p conref="../shared/impala_common.xml#common/related_info"/>
+    <p>
+      <xref href="impala_runtime_filtering.xml"/>,
+      <xref href="impala_runtime_filter_mode.xml#runtime_filter_mode"/>,
+      <xref 
href="impala_runtime_filter_max_size.xml#runtime_filter_max_size"/>,
+      <xref 
href="impala_runtime_bloom_filter_size.xml#runtime_bloom_filter_size"/>
+    </p>
+
+  </conbody>
+</concept>

http://git-wip-us.apache.org/repos/asf/incubator-impala/blob/3be0f122/docs/topics/impala_runtime_filter_mode.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_runtime_filter_mode.xml 
b/docs/topics/impala_runtime_filter_mode.xml
new file mode 100644
index 0000000..fa75bdf
--- /dev/null
+++ b/docs/topics/impala_runtime_filter_mode.xml
@@ -0,0 +1,77 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
+<concept id="runtime_filter_mode" rev="2.5.0">
+
+  <title>RUNTIME_FILTER_MODE Query Option (<keyword keyref="impala25"/> or 
higher only)</title>
+  <titlealts 
audience="PDF"><navtitle>RUNTIME_FILTER_MODE</navtitle></titlealts>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala"/>
+      <data name="Category" value="Impala Query Options"/>
+      <data name="Category" value="Performance"/>
+      <data name="Category" value="Developers"/>
+      <data name="Category" value="Data Analysts"/>
+    </metadata>
+  </prolog>
+
+  <conbody>
+
+    <p rev="2.5.0">
+      <indexterm audience="Cloudera">RUNTIME_FILTER_MODE query 
option</indexterm>
+    </p>
+
+    <p>
+      The <codeph>RUNTIME_FILTER_MODE</codeph> query option
+      adjusts the settings for the runtime filtering feature.
+      It turns this feature on and off, and controls how
+      extensively the filters are transmitted between hosts.
+    </p>
+
+    <p>
+      <b>Type:</b> numeric (0, 1, 2)
+      or corresponding mnemonic strings (<codeph>OFF</codeph>, 
<codeph>LOCAL</codeph>, <codeph>GLOBAL</codeph>).
+    </p>
+
+    <p rev="2.6.0 CDH-41184">
+      <b>Default:</b> 2 (equivalent to <codeph>GLOBAL</codeph>); in <keyword keyref="impala25_full"/>, the default was 1 (equivalent to <codeph>LOCAL</codeph>)
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/added_in_250"/>
+
+    <p conref="../shared/impala_common.xml#common/usage_notes_blurb"/>
+
+    <p rev="2.6.0 CDH-41184">
+      In <keyword keyref="impala26_full"/> and higher, the default is 
<codeph>GLOBAL</codeph>.
+      This setting is recommended for a wide variety of workloads, to provide 
best
+      performance with <q>out of the box</q> settings.
+    </p>
+
+    <p rev="2.6.0 CDH-41184">
+      The lowest setting of <codeph>LOCAL</codeph> does a similar level of 
optimization
+      (such as partition pruning) as in earlier Impala releases.
+      This setting was the default in <keyword keyref="impala25_full"/>,
+      to allow for a period of post-upgrade testing for existing workloads.
+      This setting is suitable for workloads with non-performance-critical 
queries,
+      or if the coordinator node is under heavy CPU or memory pressure.
+    </p>
+
+    <p>
+      You might change the setting to <codeph>OFF</codeph> if your workload 
contains
+      many queries involving partitioned tables or joins that do not 
experience a performance
+      increase from the runtime filters feature. If the overhead of producing 
the runtime filters
+      outweighs the performance benefit for queries, you can turn the feature 
off entirely.
+    </p>
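As a sketch of how you might switch among these levels during performance testing (the session below is hypothetical; the values are the ones described above):

```sql
-- Fall back to the conservative pre-2.6 behavior for this session only.
SET RUNTIME_FILTER_MODE=LOCAL;

-- Turn the feature off entirely if filter overhead outweighs the benefit.
SET RUNTIME_FILTER_MODE=OFF;

-- Restore the most aggressive (default) level.
SET RUNTIME_FILTER_MODE=GLOBAL;
```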
+
+    <p conref="../shared/impala_common.xml#common/related_info"/>
+    <p>
+      <xref href="impala_partitioning.xml#partitioning"/> for details about 
runtime filtering.
+      <xref 
href="impala_disable_row_runtime_filtering.xml#disable_row_runtime_filtering"/>,
+      <xref 
href="impala_runtime_bloom_filter_size.xml#runtime_bloom_filter_size"/>,
+      <xref 
href="impala_runtime_filter_wait_time_ms.xml#runtime_filter_wait_time_ms"/>,
+      and
+      <xref href="impala_max_num_runtime_filters.xml#max_num_runtime_filters"/>
+      for tuning options for runtime filtering.
+    </p>
+
+  </conbody>
+</concept>

http://git-wip-us.apache.org/repos/asf/incubator-impala/blob/3be0f122/docs/topics/impala_runtime_filter_wait_time_ms.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_runtime_filter_wait_time_ms.xml 
b/docs/topics/impala_runtime_filter_wait_time_ms.xml
new file mode 100644
index 0000000..fb41885
--- /dev/null
+++ b/docs/topics/impala_runtime_filter_wait_time_ms.xml
@@ -0,0 +1,47 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
+<concept id="runtime_filter_wait_time_ms" rev="2.5.0">
+
+  <title>RUNTIME_FILTER_WAIT_TIME_MS Query Option (<keyword 
keyref="impala25"/> or higher only)</title>
+  <titlealts 
audience="PDF"><navtitle>RUNTIME_FILTER_WAIT_TIME_MS</navtitle></titlealts>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala"/>
+      <data name="Category" value="Impala Query Options"/>
+      <data name="Category" value="Performance"/>
+      <data name="Category" value="Developers"/>
+      <data name="Category" value="Data Analysts"/>
+    </metadata>
+  </prolog>
+
+  <conbody>
+
+    <p rev="2.5.0">
+      <indexterm audience="Cloudera">RUNTIME_FILTER_WAIT_TIME_MS query 
option</indexterm>
+      The <codeph>RUNTIME_FILTER_WAIT_TIME_MS</codeph> query option
+      adjusts the settings for the runtime filtering feature.
+      It specifies a time in milliseconds that each scan node waits for
+      runtime filters to be produced by other plan fragments.
+    </p>
+
+    <p conref="../shared/impala_common.xml#common/type_integer"/>
+
+    <p>
+      <b>Default:</b> 0 (meaning use the value from the corresponding 
<cmdname>impalad</cmdname> startup option)
+    </p>
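For example, the following hypothetical <cmdname>impala-shell</cmdname> session raises the wait interval for a session that runs long, complicated queries:

```sql
-- Let each scan node wait up to 10 seconds for filters before proceeding.
SET RUNTIME_FILTER_WAIT_TIME_MS=10000;
```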
+
+    <p conref="../shared/impala_common.xml#common/added_in_250"/>
+
+    <p conref="../shared/impala_common.xml#common/usage_notes_blurb"/>
+
+    <p 
conref="../shared/impala_common.xml#common/runtime_filtering_option_caveat"/>
+
+    <p conref="../shared/impala_common.xml#common/related_info"/>
+    <p>
+      <xref href="impala_runtime_filtering.xml"/>,
+      <xref href="impala_runtime_filter_mode.xml#runtime_filter_mode"/>
+      <!-- , <xref href="impala_partitioning.xml#dynamic_partition_pruning"/> 
-->
+    </p>
+
+  </conbody>
+</concept>

http://git-wip-us.apache.org/repos/asf/incubator-impala/blob/3be0f122/docs/topics/impala_runtime_filtering.xml
----------------------------------------------------------------------
diff --git a/docs/topics/impala_runtime_filtering.xml 
b/docs/topics/impala_runtime_filtering.xml
new file mode 100644
index 0000000..b5c1fcb
--- /dev/null
+++ b/docs/topics/impala_runtime_filtering.xml
@@ -0,0 +1,506 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<!DOCTYPE concept PUBLIC "-//OASIS//DTD DITA Concept//EN" "concept.dtd">
+<concept id="runtime_filtering" rev="2.5.0">
+
+  <title id="runtime_filters">Runtime Filtering for Impala Queries (<keyword 
keyref="impala25"/> or higher only)</title>
+  <titlealts audience="PDF"><navtitle>Runtime Filtering</navtitle></titlealts>
+  <prolog>
+    <metadata>
+      <data name="Category" value="Impala"/>
+      <data name="Category" value="SQL"/>
+      <data name="Category" value="Querying"/>
+      <data name="Category" value="Performance"/>
+      <data name="Category" value="Developers"/>
+      <data name="Category" value="Data Analysts"/>
+    </metadata>
+  </prolog>
+
+  <conbody>
+
+    <p rev="2.5.0">
+      <indexterm audience="Cloudera">runtime filtering</indexterm>
+      <term>Runtime filtering</term> is a wide-ranging optimization feature 
available in
+      <keyword keyref="impala25_full"/> and higher. When only a fraction of 
the data in a table is
+      needed for a query against a partitioned table or to evaluate a join 
condition,
+      Impala determines the appropriate conditions while the query is running, 
and
+      broadcasts that information to all the <cmdname>impalad</cmdname> nodes 
that are reading the table
+      so that they can avoid unnecessary I/O to read partition data, and avoid
+      unnecessary network transmission by sending only the subset of rows that 
match the join keys
+      across the network.
+    </p>
+
+    <p>
+      This feature is primarily used to optimize queries against large 
partitioned tables
+      (under the name <term>dynamic partition pruning</term>) and joins of 
large tables.
+      The information in this section includes concepts, internals, and 
troubleshooting
+      information for the entire runtime filtering feature.
+      For specific tuning steps for partitioned tables,
+      <!-- and join queries, -->
+      see
+      <xref href="impala_partitioning.xml#dynamic_partition_pruning"/>.
+      <!-- and <xref href="impala_joins.xml#joins"/>. -->
+    </p>
+
+    <note type="important" rev="2.6.0 CDH-41184">
+      <p rev="2.6.0 CDH-41184">
+        When this feature made its debut in CDH 5.7 / Impala 2.5,
+        the default setting was <codeph>RUNTIME_FILTER_MODE=LOCAL</codeph>.
+        Now the default is <codeph>RUNTIME_FILTER_MODE=GLOBAL</codeph> in 
<keyword keyref="impala26_full"/> and higher,
+        which enables more wide-ranging and ambitious query optimization 
without requiring you to
+        explicitly set any query options.
+      </p>
+    </note>
+
+    <p outputclass="toc inpage"/>
+
+  </conbody>
+
+  <concept id="runtime_filtering_concepts">
+    <title>Background Information for Runtime Filtering</title>
+    <conbody>
+      <p>
+        To understand how runtime filtering works at a detailed level, you must
+        be familiar with some terminology from the field of distributed 
database technology:
+      </p>
+      <ul>
+        <li>
+          <p>
+            What a <term>plan fragment</term> is.
+            Impala decomposes each query into smaller units of work that are 
distributed across the cluster.
+            Wherever possible, a data block is read, filtered, and aggregated 
by plan fragments executing
+            on the same host. For some operations, such as joins and combining 
intermediate results into
+            a final result set, data is transmitted across the network from 
one DataNode to another.
+          </p>
+        </li>
+        <li>
+          <p>
+            What <codeph>SCAN</codeph> and <codeph>HASH JOIN</codeph> plan 
nodes are, and their role in computing query results:
+          </p>
+          <p>
+            In the Impala query plan, a <term>scan node</term> performs the 
I/O to read from the underlying data files.
+            Although this is an expensive operation from the traditional 
database perspective, Hadoop clusters and Impala are
+            optimized to do this kind of I/O in a highly parallel fashion. The 
major potential cost savings come from using
+            the columnar Parquet format (where Impala can avoid reading data 
for unneeded columns) and partitioned tables
+            (where Impala can avoid reading data for unneeded partitions).
+          </p>
+          <p>
+            Most Impala joins use the
+            <xref href="https://en.wikipedia.org/wiki/Hash_join"; 
scope="external" format="html"><term>hash join</term></xref>
+            mechanism. (Impala began using the nested-loop join technique
+            only relatively recently, for certain kinds of non-equijoin queries.)
+            In a hash join, when evaluating join conditions from two tables, 
Impala constructs a hash table in memory with all
+            the different column values from the table on one side of the join.
+            Then, for each row from the table on the other side of the join, 
Impala tests whether the relevant column values
+            are in this hash table or not.
+          </p>
+          <p>
+            A <term>hash join node</term> constructs such an in-memory hash 
table, then performs the comparisons to
+            identify which rows match the relevant join conditions
+            and should be included in the result set (or at least sent on to 
the subsequent intermediate stage of
+            query processing). Because some of the input for a hash join might 
be transmitted across the network from another host,
+            it is especially important from a performance perspective to prune 
out ahead of time any data that is known to be
+            irrelevant.
+          </p>
+          <p>
+            The more distinct values there are in the join key columns, the
+            larger the in-memory hash table, and thus the more memory is
+            required to process the query.
+          </p>
+        </li>
+        <li>
+          <p>
+            The difference between a <term>broadcast join</term> and a 
<term>shuffle join</term>.
+            (The Hadoop notion of a shuffle join is sometimes referred to in 
Impala as a <term>partitioned join</term>.)
+            In a broadcast join, the table from one side of the join 
(typically the smaller table)
+            is sent in its entirety to all the hosts involved in the query. 
Then each host can compare its
+            portion of the data from the other (larger) table against the full 
set of possible join keys.
+            In a shuffle join, there is no obvious <q>smaller</q> table, and 
so the contents of both tables
+            are divided up, and corresponding portions of the data are 
transmitted to each host involved in the query.
+            See <xref href="impala_hints.xml#hints"/> for information about 
how these different kinds of
+            joins are processed.
+          </p>
+        </li>
+        <li>
+          <p>
+            The notion of the build phase and probe phase when Impala 
processes a join query.
+            The <term>build phase</term> is where the rows containing the join 
key columns, typically for the smaller table,
+            are transmitted across the network and built into an in-memory 
hash table data structure on one or
+            more destination nodes.
+            The <term>probe phase</term> is where data is read locally 
(typically from the larger table) and the join key columns
+            are compared to the values in the in-memory hash table.
+            The corresponding input sources (tables, subqueries, and so on) 
for these
+            phases are referred to as the <term>build side</term> and the 
<term>probe side</term>.
+          </p>
+        </li>
+        <li>
+          <p>
+            How to set Impala query options: interactively within an 
<cmdname>impala-shell</cmdname> session through
+            the <codeph>SET</codeph> command, for a JDBC or ODBC application 
through the <codeph>SET</codeph> statement, or
+            globally for all <cmdname>impalad</cmdname> daemons through the 
<codeph>default_query_options</codeph> configuration
+            setting.
+          </p>
+        </li>
+      </ul>
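As a brief sketch of those mechanisms (the option name here is only for illustration):

```sql
-- Interactively in impala-shell, or as a statement through JDBC or ODBC:
SET RUNTIME_FILTER_MODE=GLOBAL;
```

To apply a setting globally, you would instead pass it to each <cmdname>impalad</cmdname> daemon through the <codeph>default_query_options</codeph> startup setting mentioned above.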
+    </conbody>
+  </concept>
+
+  <concept id="runtime_filtering_internals">
+    <title>Runtime Filtering Internals</title>
+    <conbody>
+      <p>
+        The <term>filter</term> that is transmitted between plan fragments is 
essentially a list
+        of values for join key columns. When this list of values is
+        transmitted in time to a scan node,
+        Impala can filter out non-matching values immediately after reading 
them, rather than transmitting
+        the raw data to another host to compare against the in-memory hash 
table on that host.
+        This data structure is implemented as a <term>Bloom filter</term>, 
which uses a probability-based
+        algorithm to determine all possible matching values. (The
+        probability-based aspect means that the
+        filter might include some non-matching values, but if so, that does 
not cause any inaccuracy
+        in the final results.)
+      </p>
+      <p>
+        There are different kinds of filters to match the different kinds of 
joins (partitioned and broadcast).
+        A broadcast filter is a complete list of relevant values that can be 
immediately evaluated by a scan node.
+        A partitioned filter is a partial list of relevant values (based on 
the data processed by one host in the
+        cluster); all the partitioned filters must be combined into one (by 
the coordinator node) before the
+        scan nodes can use the results to accurately filter the data as it is 
read from storage.
+      </p>
+      <p>
+        Broadcast filters are also classified as local or global. With a local 
broadcast filter, the information
+        in the filter is used by a subsequent query fragment that is running 
on the same host that produced the filter.
+        A non-local broadcast filter must be transmitted across the network to 
a query fragment that is running on a
+        different host. Impala designates 3 hosts to each produce non-local 
broadcast filters, to guard against the
+        possibility of a single slow host taking too long. Depending on the 
setting of the <codeph>RUNTIME_FILTER_MODE</codeph> query option
+        (<codeph>LOCAL</codeph> or <codeph>GLOBAL</codeph>), Impala either 
uses a conservative optimization
+        strategy where filters are only consumed on the same host that 
produced them, or a more aggressive strategy
+        where filters are eligible to be transmitted across the network.
+      </p>
+
+      <note rev="2.6.0 CDH-41184 IMPALA-3333">
+        In <keyword keyref="impala26_full"/> and higher, the default for 
runtime filtering is the <codeph>GLOBAL</codeph> setting.
+      </note>
+
+    </conbody>
+  </concept>
+
+  <concept id="runtime_filtering_file_formats">
+    <title>File Format Considerations for Runtime Filtering</title>
+    <conbody>
+      <p>
+        Parquet tables get the most benefit from
+        the runtime filtering optimizations. Runtime filtering can speed up
+        join queries against partitioned or unpartitioned Parquet tables,
+        and single-table queries against partitioned Parquet tables.
+        See <xref href="impala_parquet.xml#parquet"/> for information about
+        using Parquet tables with Impala.
+      </p>
+      <p>
+        For other file formats (text, Avro, RCFile, and SequenceFile),
+        runtime filtering speeds up queries against partitioned tables only.
+        Because partitioned tables can use a mixture of formats, Impala 
produces
+        the filters in all cases, even if they are not ultimately used to
+        optimize the query.
+      </p>
+    </conbody>
+  </concept>
+
+  <concept id="runtime_filtering_timing">
+    <title>Wait Intervals for Runtime Filters</title>
+    <conbody>
+      <p>
+        Because it takes time to produce runtime filters, especially for
+        partitioned filters that must be combined by the coordinator node,
+        there is a time interval above which it is more efficient for
+        the scan nodes to go ahead and construct their intermediate result 
sets,
+        even if that intermediate data is larger than optimal. If it only takes
+        a few seconds to produce the filters, it is worth the extra time if 
pruning
+        the unnecessary data can save minutes in the overall query time.
+        You can specify the maximum wait time in milliseconds using the
+        <codeph>RUNTIME_FILTER_WAIT_TIME_MS</codeph> query option.
+      </p>
+      <p>
+        By default, each scan node waits for up to 1 second (1000 milliseconds)
+        for filters to arrive. If all filters have not arrived within the
+        specified interval, the scan node proceeds, using whatever filters
+        did arrive to help avoid reading unnecessary data. If a filter arrives
+        after the scan node begins reading data, the scan node applies that
+        filter to the data that is read after the filter arrives, but not to
+        the data that was already read.
+      </p>
+      <p>
+        If the cluster is relatively busy and your workload contains many
+        resource-intensive or long-running queries, consider increasing the 
wait time
+        so that complicated queries do not miss opportunities for optimization.
+        If the cluster is lightly loaded and your workload contains many small 
queries
+        taking only a few seconds, consider decreasing the wait time to avoid 
the
+        1 second delay for each query.
+      </p>
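For example (the values below are illustrative, not recommendations):

```sql
-- Busy cluster, long-running queries: wait longer so filters arrive in time.
SET RUNTIME_FILTER_WAIT_TIME_MS=5000;

-- Lightly loaded cluster, many small queries: give up on filters sooner.
SET RUNTIME_FILTER_WAIT_TIME_MS=100;
```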
+    </conbody>
+  </concept>
+
+
+  <concept id="runtime_filtering_query_options">
+    <title>Query Options for Runtime Filtering</title>
+    <conbody>
+      <p>
+        See the following sections for information about the query options 
that control runtime filtering:
+      </p>
+      <ul>
+        <li>
+          <p>
+            The first query option adjusts the <q>sensitivity</q> of this 
feature.
+            <ph rev="2.6.0 CDH-41184 IMPALA-3333">By default, it is set to the 
highest level (<codeph>GLOBAL</codeph>).
+            (This default applies to <keyword keyref="impala26_full"/> and 
higher.
+            In previous releases, the default was <codeph>LOCAL</codeph>.)</ph>
+          </p>
+          <ul>
+            <li>
+              <p>
+                <xref 
href="impala_runtime_filter_mode.xml#runtime_filter_mode"/>
+              </p>
+            </li>
+          </ul>
+        </li>
+        <li>
+          <p>
+            The other query options are tuning knobs that you typically only 
adjust after doing
+            performance testing, and that you might want to change only for 
the duration of a single
+            expensive query:
+          </p>
+          <ul>
+            <li>
+              <p>
+                <xref 
href="impala_max_num_runtime_filters.xml#max_num_runtime_filters"/>
+              </p>
+            </li>
+            <li>
+              <p>
+                <xref 
href="impala_disable_row_runtime_filtering.xml#disable_row_runtime_filtering"/>
+              </p>
+            </li>
+            <li>
+              <p rev="2.6.0 IMPALA-3480">
+                <xref 
href="impala_runtime_filter_max_size.xml#runtime_filter_max_size"/> 
+              </p>
+            </li>
+            <li>
+              <p rev="2.6.0 IMPALA-3480">
+                <xref 
href="impala_runtime_filter_min_size.xml#runtime_filter_min_size"/> 
+              </p>
+            </li>
+            <li>
+              <p rev="2.6.0 IMPALA-3007">
+                <xref 
href="impala_runtime_bloom_filter_size.xml#runtime_bloom_filter_size"/>;
+                in <keyword keyref="impala26_full"/> and higher, this setting 
acts as a fallback when
+                statistics are not available, rather than as a directive.
+              </p>
+            </li>
+          </ul>
+        </li>
+      </ul>
+    </conbody>
+  </concept>
+
+  <concept id="runtime_filtering_explain_plan">
+    <title>Runtime Filtering and Query Plans</title>
+    <conbody>
+      <p>
+        In the same way that the query plan displayed by the
+        <codeph>EXPLAIN</codeph> statement includes information
+        about predicates used by each plan fragment, it also
+        includes annotations showing whether a plan fragment
+        produces or consumes a runtime filter.
+        A plan fragment that produces a filter includes an
+        annotation such as
+        <codeph>runtime filters: <varname>filter_id</varname> &lt;- 
<varname>table</varname>.<varname>column</varname></codeph>,
+        while a plan fragment that consumes a filter includes an annotation 
such as
+        <codeph>runtime filters: <varname>filter_id</varname> -&gt; 
<varname>table</varname>.<varname>column</varname></codeph>.
+      </p>
+
+      <p>
+        The following example shows a query that uses a single runtime filter 
(labelled <codeph>RF00</codeph>)
+        to prune the partitions that are scanned in one stage of the query, 
based on evaluating the
+        result set of a subquery:
+      </p>
+
+<codeblock conref="../shared/impala_common.xml#common/simple_dpp_example"/>
+
+      <p>
+        The query profile (displayed by the <codeph>PROFILE</codeph> command 
in <cmdname>impala-shell</cmdname>)
+        contains both the <codeph>EXPLAIN</codeph> plan and more detailed 
information about the internal
+        workings of the query. The profile output includes a section labelled 
the <q>filter routing table</q>,
+        with information about each filter based on its ID.
+      </p>
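For example, using the hypothetical <codeph>HUGE_T1</codeph> and <codeph>TINY_T2</codeph> tables from elsewhere in this topic, you might verify the filters like this; the annotation format follows the description above:

```sql
-- Look for "runtime filters:" annotations in the plan output, such as
-- RF00 <- tiny_t2.id on the producing fragment and
-- RF00 -> huge_t1.id on the consuming scan node.
EXPLAIN SELECT c1 FROM huge_t1 JOIN tiny_t2 ON huge_t1.id = tiny_t2.id;

-- After running the query, look for the "filter routing table" section.
PROFILE;
```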
+    </conbody>
+  </concept>
+
+  <concept id="runtime_filtering_queries">
+    <title>Examples of Queries that Benefit from Runtime Filtering</title>
+    <conbody>
+
+      <p>
+        In this example, Impala would normally do extra work to interpret the 
columns
+        <codeph>C1</codeph>, <codeph>C2</codeph>, <codeph>C3</codeph>, and 
<codeph>ID</codeph>
+        for each row in <codeph>HUGE_T1</codeph>, before checking the 
<codeph>ID</codeph>
+        value against the in-memory hash table constructed from all the 
<codeph>TINY_T2.ID</codeph>
+        values. By producing a filter containing all the 
<codeph>TINY_T2.ID</codeph> values
+        even before the query starts scanning the <codeph>HUGE_T1</codeph> 
table, Impala
+        can skip the unnecessary work to parse the column info as soon as it 
determines
+        that an <codeph>ID</codeph> value does not match any of the values 
from the other table.
+      </p>
+
+      <p>
+        The example shows <codeph>COMPUTE STATS</codeph> statements for both 
the tables (even
+        though that is a one-time operation after loading data into those 
tables) because
+        Impala relies on up-to-date statistics to
+        determine which one has more distinct <codeph>ID</codeph> values than 
the other.
+        That information lets Impala make effective decisions about which 
table to use to
+        construct the in-memory hash table, and which table to read from disk 
and
+        compare against the entries in the hash table.
+      </p>
+
+<codeblock rev="2.6.0 CDH-41184">
+COMPUTE STATS huge_t1;
+COMPUTE STATS tiny_t2;
+SELECT c1, c2, c3 FROM huge_t1 JOIN tiny_t2 WHERE huge_t1.id = tiny_t2.id;
+</codeblock>
+
+<!-- The greater-than comparison prevents runtime filtering from applying. 
Comment out for now;
+     put back if the example can be reworked in a way that does produce some 
filters.
+      <p>
+        In this example, <codeph>T1</codeph> is a table partitioned by year. 
The subquery
+        on <codeph>T2</codeph> produces a single value with the 
<codeph>MIN(year)</codeph> result,
+        and transmits that value as a filter to the plan fragments that are 
reading from <codeph>T1</codeph>.
+        Any non-matching partitions in <codeph>T1</codeph> are skipped.
+      </p>
+
+<codeblock>
+select c1 from t1 where year > (select min(year) from t2);
+</codeblock>
+-->
+
+      <p>
+        In this example, <codeph>T1</codeph> is a table partitioned by year. 
The subquery
+        on <codeph>T2</codeph> produces multiple values, and transmits those 
values as a filter to the plan
+        fragments that are reading from <codeph>T1</codeph>. Any non-matching 
partitions in <codeph>T1</codeph>
+        are skipped.
+      </p>
+
+<codeblock rev="2.6.0 CDH-41184">
+select c1 from t1 where year in (select distinct year from t2);
+</codeblock>
+
+      <p>
+        Now the <codeph>WHERE</codeph> clause contains an additional test that 
does not apply to
+        the partition key column.
+        A filter on a column that is not a partition key is called a per-row 
filter.
+        Because per-row filters only apply for Parquet, <codeph>T1</codeph> 
must be a Parquet table.
+      </p>
+
+      <p>
+        The subqueries result in two filters being transmitted to
+        the scan nodes that read from <codeph>T1</codeph>. The filter on 
<codeph>YEAR</codeph> helps the query eliminate
+        entire partitions based on non-matching years. The filter on 
<codeph>C2</codeph> lets Impala discard
+        rows with non-matching <codeph>C2</codeph> values immediately after 
reading them. Without runtime filtering,
+        Impala would have to keep the non-matching values in memory, assemble 
<codeph>C1</codeph>, <codeph>C2</codeph>,
+        and <codeph>C3</codeph> into rows in the intermediate result set, and 
transmit all the intermediate rows
+        back to the coordinator node, where they would be eliminated only at 
the very end of the query.
+      </p>
+
+<codeblock rev="2.6.0 CDH-41184">
+select c1, c2, c3 from t1
+  where year in (select distinct year from t2)
+    and c2 in (select other_column from t3);
+</codeblock>
+
+      <p>
+        This example involves a broadcast join.
+        The fact that the <codeph>ON</codeph> clause would
+        return a small number of matching rows (because there
+        are not very many rows in <codeph>TINY_T2</codeph>)
+        means that the corresponding filter is very selective.
+        Therefore, runtime filtering will probably be effective
+        in optimizing this query.
+      </p>
+
+<codeblock rev="2.6.0 CDH-41184">
+select c1 from huge_t1 join [broadcast] tiny_t2
+  on huge_t1.id = tiny_t2.id
+  where huge_t1.year in (select distinct year from tiny_t2)
+    and c2 in (select other_column from t3);
+</codeblock>
+
+      <p>
+        This example involves a shuffle or partitioned join.
+        Assume that most rows in <codeph>HUGE_T1</codeph>
+        have a corresponding row in <codeph>HUGE_T2</codeph>.
+        The fact that the <codeph>ON</codeph> clause could
+        return a large number of matching rows means that
+        the corresponding filter would not be very selective.
+        Therefore, runtime filtering might be less effective
+        in optimizing this query.
+      </p>
+
+<codeblock rev="2.6.0 CDH-41184">
+select c1 from huge_t1 join [shuffle] huge_t2
+  on huge_t1.id = huge_t2.id
+  where huge_t1.year in (select distinct year from huge_t2)
+    and c2 in (select other_column from t3);
+</codeblock>
+
+    </conbody>
+  </concept>
+
+  <concept id="runtime_filtering_tuning">
+    <title>Tuning and Troubleshooting Queries that Use Runtime 
Filtering</title>
+    <conbody>
+      <p>
+        These tuning and troubleshooting procedures apply to queries that are
+        resource-intensive enough, long-running enough, and frequent enough
+        that you can devote special attention to optimizing them individually.
+      </p>
+
+      <p>
+        Use the <codeph>EXPLAIN</codeph> statement and examine the 
<codeph>runtime filters:</codeph>
+        lines to determine whether runtime filters are being applied to the 
<codeph>WHERE</codeph> predicates
+        and join clauses that you expect. For example, runtime filtering does 
not apply to queries that use
+        the nested loop join mechanism due to non-equijoin operators.
+      </p>
+
+      <p>
+        Make sure statistics are up-to-date for all tables involved in the 
queries.
+        Use the <codeph>COMPUTE STATS</codeph> statement after loading data 
into non-partitioned tables,
+        and <codeph>COMPUTE INCREMENTAL STATS</codeph> after adding new 
partitions to partitioned tables.
+      </p>
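For example (<codeph>PART_TBL</codeph> is a hypothetical partitioned table):

```sql
-- Non-partitioned tables: recompute statistics after loading data.
COMPUTE STATS huge_t1;

-- Partitioned tables: refresh statistics after adding new partitions.
COMPUTE INCREMENTAL STATS part_tbl;
```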
+
+      <p>
+        If join queries involving large tables use unique columns as the join 
keys,
+        for example joining a primary key column with a foreign key column, 
the overhead of
+        producing and transmitting the filter might outweigh the performance 
benefit because
+        not much data could be pruned during the early stages of the query.
+        For such queries, consider setting the query option 
<codeph>RUNTIME_FILTER_MODE=OFF</codeph>.
+      </p>
+
+    </conbody>
+  </concept>
+
+  <concept id="runtime_filtering_limits">
+    <title>Limitations and Restrictions for Runtime Filtering</title>
+    <conbody>
+      <p>
+        The runtime filtering feature is most effective for the Parquet file 
formats.
+        For other file formats, filtering only applies for partitioned tables.
+        See <xref 
href="impala_runtime_filtering.xml#runtime_filtering_file_formats"/>.
+      </p>
+
+      <!-- To do: check if this restriction is lifted in 5.8 / 2.6. -->
+      <p rev="IMPALA-3054">
+        When the spill-to-disk mechanism is activated on a particular host 
during a query,
+        that host does not produce any filters while processing that query.
+        This limitation does not affect the correctness of results; it only 
reduces the
+        amount of optimization that can be applied to the query.
+      </p>
+
+    </conbody>
+  </concept>
+
+
+</concept>
