[ https://issues.apache.org/jira/browse/FLINK-10134?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16651886#comment-16651886 ]

ASF GitHub Bot commented on FLINK-10134:
----------------------------------------

fhueske commented on a change in pull request #6823: [FLINK-10134] UTF-16 
support for TextInputFormat bug refixed
URL: https://github.com/apache/flink/pull/6823#discussion_r225577541
 
 

 ##########
 File path: 
flink-java/src/test/java/org/apache/flink/api/java/io/TextInputFormatTest.java
 ##########
 @@ -207,12 +207,212 @@ private void testRemovingTrailingCR(String lineBreaker, String delimiter) {
                                assertEquals(content, result);
                        }
 
+               } catch (Throwable t) {
+                       System.err.println("test failed with exception: " + 
t.getMessage());
+                       t.printStackTrace(System.err);
+                       fail("Test erroneous");
                }
-               catch (Throwable t) {
+       }
+
 +       /**
 +        * Test different file encodings, for example: UTF-8, UTF-8 with BOM, UTF-16LE, UTF-16BE, UTF-32LE, UTF-32BE.
 +        */
+       @Test
+       public void testFileCharset() {
+               String first = "First line";
+
 +               // Test strings in different languages
 +               for (final String data : new String[]{"Hello", "ハロー", "привет", "Bonjour", "Сайн байна уу", "안녕하세요."}) {
 +                       testAllFileCharsetNoDelimiter(data);
 +               }
+
 +               // Test special symbols as delimiters
 +               for (final String delimiterStr : new String[]{"\\", "^", "|", "[", ".", "*"}) {
 
 Review comment:
   Testing multiple similar delimiters does not add much value. Add a 
multi-character delimiter as well, like `"|<>|"`.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


> UTF-16 support for TextInputFormat
> ----------------------------------
>
>                 Key: FLINK-10134
>                 URL: https://issues.apache.org/jira/browse/FLINK-10134
>             Project: Flink
>          Issue Type: Bug
>          Components: Core
>    Affects Versions: 1.4.2
>            Reporter: David Dreyfus
>            Priority: Blocker
>              Labels: pull-request-available
>
> It does not appear that Flink supports a charset encoding of "UTF-16". In 
> particular, it doesn't appear that Flink consumes the Byte Order Mark (BOM) 
> to establish whether a UTF-16 file is UTF-16LE or UTF-16BE.
>  
> TextInputFormat.setCharset("UTF-16") calls DelimitedInputFormat.setCharset(), 
> which sets TextInputFormat.charsetName and then modifies the previously set 
> delimiterString to construct the proper byte string encoding of the 
> delimiter. This same charsetName is also used in TextInputFormat.readRecord() 
> to interpret the bytes read from the file.
>  
> There are two problems that this implementation would seem to have when using 
> UTF-16.
>  # delimiterString.getBytes(getCharset()) in DelimitedInputFormat.java will 
> return a Big Endian byte sequence including the Byte Order Mark (BOM). The 
> actual text file will not contain a BOM at each line ending, so the delimiter 
> will never be read. Moreover, if the actual byte encoding of the file is 
> Little Endian, the bytes will be interpreted incorrectly.
>  # TextInputFormat.readRecord() will not see a BOM each time it decodes a 
> byte sequence with the String(bytes, offset, numBytes, charset) call. 
> Therefore, it will assume Big Endian, which may not always be correct. [1] 
> [https://github.com/apache/flink/blob/master/flink-java/src/main/java/org/apache/flink/api/java/io/TextInputFormat.java#L95]
>  
> While there are likely many solutions, I would think that all of them would 
> have to start by reading the BOM from the file when a Split is opened and 
> then using that BOM to modify the specified encoding to a BOM specific one 
> when the caller doesn't specify one, and to overwrite the caller's 
> specification if the BOM is in conflict with the caller's specification. That 
> is, if the BOM indicates Little Endian and the caller indicates UTF-16BE, 
> Flink should rewrite the charsetName as UTF-16LE.
>  I hope this makes sense and that I haven't been testing incorrectly or 
> misreading the code.
>  
> I've verified the problem on version 1.4.2. I believe the problem exists on 
> all versions. 



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
