yaooqinn opened a new pull request, #46133:
URL: https://github.com/apache/spark/pull/46133
### What changes were proposed in this pull request?
This PR introduces a universal `BinaryFormatter` to make binary output consistent across all clients, such as `beeline`, `spark-sql`, and `spark-shell`, for both primitive and nested binaries.

Because several output styles already exist, and compatibility with Apache Hive through beeline or the Hive JDBC driver must be preserved, the PR also adds a configuration to control the binary output format.
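For illustration only, here is a minimal sketch of what such a configurable formatter could look like. The object name, the `Style` trait, and the set of styles are assumptions made for this sketch, not necessarily what the PR implements:

```scala
// Hypothetical sketch of a configurable binary formatter; the actual
// names and supported styles in this PR may differ.
object BinaryFormatterSketch {
  sealed trait Style
  case object Utf8   extends Style // "Spark"
  case object Hex    extends Style // "537061726b"
  case object Base64 extends Style // "U3Bhcms="
  case object Basic  extends Style // "[83, 112, 97, 114, 107]"

  def format(bytes: Array[Byte], style: Style): String = style match {
    case Utf8   => new String(bytes, java.nio.charset.StandardCharsets.UTF_8)
    case Hex    => bytes.map("%02x".format(_)).mkString
    case Base64 => java.util.Base64.getEncoder.encodeToString(bytes)
    case Basic  => bytes.mkString("[", ", ", "]")
  }
}
```

A single formatter like this, selected once from a SQL configuration, would give `beeline`, `spark-sql`, and `spark-shell` identical output for the same binary value.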
#### Problem statement
Currently, the binary output format is inconsistent across clients (the snippet below reproduces each rendering). For example:
- Hive beeline (Spark Thrift Server)
  - For primitive binaries, we pass the binary directly to the clients, which then convert it based on the option `convertBinaryArrayToString`:
    - when `convertBinaryArrayToString` is true, the result is a UTF-8 encoded string: `[83, 112, 97, 114, 107] -> "Spark"`
    - when `convertBinaryArrayToString` is false, Hive 3 beeline prints comma-separated byte strings, `[83, 112, 97, 114, 107] -> [83, 112, 97, 114, 107]`, while Hive 4 beeline prints a base64-encoded string, `[83, 112, 97, 114, 107] -> U3Bhcms=`
  - For nested binaries, we pass the binary as UTF-8 encoded strings
- Spark SQL CLI
  - For both primitive and nested binaries, we print UTF-8 encoded strings
- Spark Shell
  - We do a special `cast` to convert the binary to a string in space-separated hexadecimal format: `[83, 112, 97, 114, 107] -> "[53 70 61 72 6b]"`

**Given that no two of these behaviors are consistent with each other, making sense of binary output can cost users significant time.**

Besides Apache Hive, other modern databases such as PostgreSQL and MySQL also support different binary output formats, and the hexadecimal format is the most commonly recommended one: `[83, 112, 97, 114, 107] -> "(0x)537061726b"`
### Why are the changes needed?
- A universal `BinaryFormatter` for consistency
- A configuration for the flexibility to align with Hive or other systems
### Does this PR introduce _any_ user-facing change?
Yes. Executing `spark.sql("select cast('Spark' as binary)").show` in the Spark shell displays `"Spark"` instead of `"[53 70 61 72 6b]"`.
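A hedged sketch of the shell interaction described above (the exact DataFrame column header and table borders may differ):

```scala
scala> spark.sql("select cast('Spark' as binary)").show()
// Before this change, the binary value rendered as [53 70 61 72 6b];
// after it, the same value renders as Spark.
```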
### How was this patch tested?
New tests were added.
### Was this patch authored or co-authored using generative AI tooling?
No.