JingsongLi commented on code in PR #207:
URL: https://github.com/apache/flink-table-store/pull/207#discussion_r917501482


##########
docs/content/docs/development/query-table.md:
##########
@@ -26,49 +26,67 @@ under the License.
 
 # Query Table
 
-The Table Store is streaming batch unified, you can read full
-and incremental data depending on the runtime execution mode:
+You can SELECT a table directly in the batch runtime mode of Flink SQL.
 
 ```sql
 -- Batch mode, read latest snapshot
 SET 'execution.runtime-mode' = 'batch';
 SELECT * FROM MyTable;
+```
 
--- Streaming mode, read incremental snapshot, read the snapshot first, then read the incremental
-SET 'execution.runtime-mode' = 'streaming';
-SELECT * FROM MyTable;
+## Query Engines
 
--- Streaming mode, read latest incremental
-SET 'execution.runtime-mode' = 'streaming';
-SELECT * FROM MyTable /*+ OPTIONS ('log.scan'='latest') */;
-```
+Table Store supports not only native Flink SQL queries but also
+queries from other popular engines. See [Engines]({{< ref "docs/engines/overview" >}}).
 
 ## Query Optimization
 
 It is highly recommended to specify partition and primary key filters
 along with the query, which will speed up the data skipping of the query.
-along with the query, which will speed up the data skipping of the query.
 
-Supported filter functions are:
+The filter functions that can accelerate data skipping are:
 - `=`
-- `<>`
 - `<`
 - `<=`
 - `>`
 - `>=`
-- `in`
-- starts with `like`
+- `IN (...)`
+- `LIKE 'abc%'`
+- `IS NULL`
+
+Table Store sorts the data by primary key, which speeds up point queries.

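The filter guidance in the diff above could be illustrated with a short sketch. This is a hedged example, not part of the PR: the table name `MyOrders`, the partition column `dt`, and the primary key column `order_id` are all hypothetical, chosen only to show a query that combines a partition filter with a primary key filter.

```sql
-- Hypothetical table partitioned by dt with primary key (dt, order_id).
-- The partition filter prunes entire partitions; the primary key filter
-- narrows the scan within the data files, which are sorted by primary key.
SELECT *
FROM MyOrders
WHERE dt = '2022-07-11'   -- partition filter: skips all other partitions
  AND order_id = 42;      -- primary key filter: point lookup in sorted data
```

Both predicates use operators from the supported list (`=`), so the query can benefit from data skipping rather than a full table scan.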
Review Comment:
   `speeds up the` looks good to me.
   But as for `the query filters should form`, I don't think it is a `should`; it is just an optimization.


