klion26 commented on a change in pull request #247:
URL: https://github.com/apache/flink-web/pull/247#discussion_r437135017



##########
File path: contributing/code-style-and-quality-components.zh.md
##########
@@ -48,96 +47,95 @@ How to name config keys:
   }
   ```
 
-* The resulting config keys should hence be:
+* 因此生成的配置键应该:
 
-  **NOT** `"taskmanager.detailed.network.metrics"`
+  **不是** `"taskmanager.detailed.network.metrics"`
 
-  **But rather** `"taskmanager.network.detailed-metrics"`
+  **而是** `"taskmanager.network.detailed-metrics"`
 
 
-### Connectors
+### 连接器
 
-Connectors are historically hard to implement and need to deal with many aspects of threading, concurrency, and checkpointing.
+连接器历来很难实现,需要处理多线程、并发和检查点等许多方面。
 
-As part of [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface) we are working on making this much simpler for sources. New sources should not have to deal with any aspect of concurrency/threading and checkpointing any more.
+作为 [FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface) 的一部分,我们正在努力让这些数据源(source)更加简单。新的数据源应该不必处理并发/线程和检查点的任何方面。

Review comment:
       Would it be better to change `我们正在努力让这些数据源(source)更加简单` to something like `实现数据源` or `数据源实现`? The preceding sentence says `连接器很难实现`, so that would keep the two in correspondence.

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
-A similar FLIP can be expected for sinks in the near future.
+预计在不久的将来,会有类似针对数据汇(sink)的 FLIP。
 
 
-### Examples
+### 示例
 
-Examples should be self-contained and not require systems other than Flink to run. Except for examples that show how to use specific connectors, like the Kafka connector. Sources/sinks that are ok to use are `StreamExecutionEnvironment.socketTextStream`, which should not be used in production but is quite handy for exploring how things work, and file-based sources/sinks. (For streaming, there is the continuous file source)
+示例应该是自包含的,不需要运行 Flink 以外的系统。除了显示如何使用具体的连接器的示例,比如 Kafka 连接器。数据源/数据汇可以使用 `StreamExecutionEnvironment.socketTextStream`,这个不应该在生产中使用,但对于研究示例如何运行是相当方便的,以及基于文件的数据源/数据源。(对于流,Flink 提供了连续的文件数据源读取数据)
+示例也不应该是纯粹的玩具示例,而是在现实世界的代码和纯粹的抽象示例之间取得平衡。WordCount 示例到现在已经很久了,但它是一个很好的功能突出并可以做有用事情的简单代码示例。
 
-Examples should also not be pure toy-examples but strike a balance between real-world code and purely abstract examples. The WordCount example is quite long in the tooth by now but it’s a good showcase of simple code that highlights functionality and can do useful things.
+示例中应该有不少的注释。他们可以在类级 Javadoc 中描述示例的总体思路,并且描述正在发生什么和整个代码里使用了什么功能。还应描述预期的输入数据和输出数据。
 
-Examples should also be heavy in comments. They should describe the general idea of the example in the class-level Javadoc and describe what is happening and what functionality is used throughout the code. The expected input data and output data should also be described.
+示例应该包括参数解析,以便你可以运行一个示例(使用 `bin/flink run path/to/myExample.jar --param1 … --param2` 运行程序)。
 
-Examples should include parameter parsing, so that you can run an example (from the Jar that is created for each example using `bin/flink run path/to/myExample.jar --param1 … --param2`.
 
+### 表和 SQL API
 
-### Table & SQL API
 
+#### 语义
 
-#### Semantics
+**SQL 标准应该是事实的主要来源。**
 
-**The SQL standard should be the main source of truth.**
+* 语法、语义和功能应该和 SQL 保持一致!
+* 我们不需要重造轮子。大部分问题都已经在业界广泛讨论过并写在 SQL 标准中了。
+* 我们依靠最新的标准(在写这篇文档时使用  SQL:2016 or ISO/IEC 9075:2016  [[下载]](https://standards.iso.org/ittf/PubliclyAvailableStandards/c065143_ISO_IEC_TR_19075-5_2016.zip))。并非每个部分都可在线获取,但快速网络搜索可能对此有所帮助。

Review comment:
       Could `但快速网络搜索可能对此有所帮助` be worded better? As written it reads a bit abruptly.
   I take the intended meaning to be: the standard may not be fully available online (not every part can be found), but a quick web search should be the first step for locating it?

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
+* 语法、语义和功能应该和 SQL 保持一致!
+* 我们不需要重造轮子。大部分问题都已经在业界广泛讨论过并写在 SQL 标准中了。
+* 我们依靠最新的标准(在写这篇文档时使用  SQL:2016 or ISO/IEC 9075:2016  [[下载]](https://standards.iso.org/ittf/PubliclyAvailableStandards/c065143_ISO_IEC_TR_19075-5_2016.zip))。并非每个部分都可在线获取,但快速网络搜索可能对此有所帮助。
 
-* Syntax, semantics, and features should be aligned with SQL!
-* We don’t need to reinvent the wheel. Most problems have already been discussed industry-wide and written down in the SQL standard.
-* We rely on the newest standard (SQL:2016 or ISO/IEC 9075:2016 when writing this document [[download]](https://standards.iso.org/ittf/PubliclyAvailableStandards/c065143_ISO_IEC_TR_19075-5_2016.zip) ). Not every part is available online but a quick web search might help here.
+讨论与标准或厂商特定解释的差异。
 
-Discuss divergence from the standard or vendor-specific interpretations.
+* 一旦定义了语法或行为就不能轻易撤销。
+* 需要扩展或解释标准的贡献需要与社区进行深入的讨论。
+* 请通过一些对 Postgres、Microsoft SQL Server、Oracle、Hive、Calcite、Beam 等其他厂商如何处理此类案例进行初步的探讨来帮助提交者。
 
-* Once a syntax or behavior is defined it cannot be undone easily.
-* Contributions that need to extent or interpret the standard need a thorough discussion with the community.
-* Please help committers by performing some initial research about how other vendors such as Postgres, Microsoft SQL Server, Oracle, Hive, Calcite, Beam are handling such cases.
 
+将 Table API 视为 SQL 和 Java/Scala 编程世界之间的桥梁。
 
-Consider the Table API as a bridge between the SQL and Java/Scala programming world.
+* Table API 是一种嵌入式域特定语言,用于遵循关系模型的分析程序。在语法和名称方面不需要严格遵循 SQL 标准,但如果这有助于使其感觉更直观,那么可以更接近编程语言的方式/命名函数和功能。
+* Table API 可能有一些非 SQL 功能(例如 map()、flatMap() 等),但还是应该“感觉像 SQL”。如果可能,函数和算子应该有相等的语义和命名。
 
-* The Table API is an Embedded Domain Specific Language for analytical programs following the relational model. It is not required to strictly follow the SQL standard in regards of syntax and names, but can be closer to the way a programming language would do/name functions and features, if that helps make it feel more intuitive.
-* The Table API might have some non-SQL features (e.g. map(), flatMap(), etc.) but should nevertheless “feel like SQL”. Functions and operations should have equal semantics and naming if possible.
 
+#### 常见错误
 
-#### Common mistakes
+* 添加功能时支持 SQL 的类型系统。
+    * SQL 函数、连接器或格式化从一开始就应该原生的支持大多数 SQL 类型。
+    * 不支持的类型会导致混淆,限制可用性,并通过多次接触相同代码路径产生开销。

Review comment:
       I’m not entirely sure about this sentence:
   does `并通过多次接触相同代码路径产生开销` mean that the same code paths have to be modified repeatedly, which adds overhead (counting each modification as one touch)?
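   For context, the deleted English text argues that a function contributed for only one type forces the same code path to be reopened later for each additional type. A minimal sketch in plain Java (illustrative only; `shiftLeft` is a hypothetical helper, not Flink's actual ScalarFunction machinery) of covering all SQL integer widths and SQL NULL semantics in one code path from the start:

```java
// Illustrative only: not Flink's function stack. Shows why a
// SHIFT_LEFT-style contribution should handle every SQL integer type
// (TINYINT/SMALLINT/INT/BIGINT) and NULL from the beginning, so the
// same code path never has to be reopened per type later.
public class ShiftLeftSketch {

    // One code path for all integer widths: widen to long, shift,
    // and let the caller narrow back to the declared result type.
    // A NULL input yields NULL, matching SQL semantics.
    static Long shiftLeft(Number value, Integer distance) {
        if (value == null || distance == null) {
            return null; // SQL NULL propagation
        }
        return value.longValue() << distance;
    }

    public static void main(String[] args) {
        System.out.println(shiftLeft((byte) 1, 3)); // TINYINT input -> 8
        System.out.println(shiftLeft(1, 3));        // INT input     -> 8
        System.out.println(shiftLeft(1L, 3));       // BIGINT input  -> 8
        System.out.println(shiftLeft(null, 3));     // NULL in -> null out
    }
}
```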

##########
File path: contributing/code-style-and-quality-components.zh.md
##########
+* 添加功能时支持 SQL 的类型系统。
+    * SQL 函数、连接器或格式化从一开始就应该原生的支持大多数 SQL 类型。
+    * 不支持的类型会导致混淆,限制可用性,并通过多次接触相同代码路径产生开销。
+    * 例如,当添加 `SHIFT_LEFT` 函数时,确保贡献足够通用,不仅适用于 `INT` 也适用于 `BIGINT` 或 `TINYINT`。
 
-* Support SQL’s type system when adding a feature.
-    * A SQL function, connector, or format should natively support most SQL types from the very beginning.
-    * Unsupported types lead to confusion, limit the usability, and create overhead by touching the same code paths multiple times.
-    * For example, when adding a `SHIFT_LEFT` function, make sure that the contribution is general enough not only for `INT` but also `BIGINT` or `TINYINT`.
 
+#### 测试
 
-#### Testing
+测试为空性
 
-Test for nullability.
+* 几乎每个操作,SQL 都原生支持 `NULL`,并具有 3 值布尔逻辑。
+* 确保测试每个功能的可空性。
 
-* SQL natively supports `NULL` for almost every operation and has a 3-valued boolean logic.
-* Make sure to test every feature for nullability as well.
 
+尽量避免集成测试
 
-Avoid full integration tests
+* 启动一个 Flink 集群并且对 SQL 查询生成的代码进行编译会很耗时。
+* 避免对 planner 测试或 API 调用的变更进行集成测试。
+* 相反,使用单元测试验证计划器的优化计划。或者直接测试算子的运行时行为。

Review comment:
       Could `planner` in this sentence also be left untranslated?
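   On the nullability-testing point in this hunk, a small plain-Java illustration (hypothetical helper, not Flink's planner test harness) of the 3-valued boolean logic that per-feature nullability tests need to exercise:

```java
// Illustrative only: SQL's three-valued logic for AND, where NULL
// means "unknown". This is the behavior the guide says every feature
// should be tested against; it is not Flink planner/test-harness code.
public class ThreeValuedLogicSketch {

    // SQL AND: FALSE dominates; otherwise NULL (unknown) dominates;
    // only TRUE AND TRUE yields TRUE.
    static Boolean sqlAnd(Boolean a, Boolean b) {
        if (Boolean.FALSE.equals(a) || Boolean.FALSE.equals(b)) {
            return false; // FALSE AND anything = FALSE, even with NULL
        }
        if (a == null || b == null) {
            return null;  // unknown unless forced FALSE above
        }
        return true;      // TRUE AND TRUE
    }

    public static void main(String[] args) {
        System.out.println(sqlAnd(true, null));  // null (unknown)
        System.out.println(sqlAnd(false, null)); // false
        System.out.println(sqlAnd(true, true));  // true
    }
}
```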




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

