twalthr commented on a change in pull request #13373:
URL: https://github.com/apache/flink/pull/13373#discussion_r487703709



##########
File path: flink-formats/flink-avro/src/test/java/org/apache/flink/table/runtime/batch/AvroTypesITCase.java
##########
@@ -147,24 +150,26 @@ public AvroTypesITCase(
 
        @Test
        public void testAvroToRow() throws Exception {
-               ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+               StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
                env.getConfig().registerTypeWithKryoSerializer(LocalDate.class, AvroKryoSerializerUtils.JodaLocalDateSerializer.class);

Review comment:
       These should not be necessary in our tests, right?
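       For illustration, a minimal sketch of the setup without the two registrations (assuming the Joda types are not reached via Kryo in this test; all class names are taken from the diff above):

```java
// Sketch only: same setup as in the diff, with the registerTypeWithKryoSerializer(...) calls dropped.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(
        env, EnvironmentSettings.newInstance().useBlinkPlanner().build());

Table t = tEnv.fromDataStream(testData(env));
```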

##########
File path: flink-formats/flink-avro/src/test/java/org/apache/flink/table/runtime/batch/AvroTypesITCase.java
##########
@@ -147,24 +150,26 @@ public AvroTypesITCase(
 
        @Test
        public void testAvroToRow() throws Exception {
-               ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+               StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
                env.getConfig().registerTypeWithKryoSerializer(LocalDate.class, AvroKryoSerializerUtils.JodaLocalDateSerializer.class);
                env.getConfig().registerTypeWithKryoSerializer(LocalTime.class, AvroKryoSerializerUtils.JodaLocalTimeSerializer.class);
-               BatchTableEnvironment tEnv = BatchTableEnvironment.create(env, config());
+               StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, EnvironmentSettings.newInstance().useBlinkPlanner().build());

-               Table t = tEnv.fromDataSet(testData(env));
+               Table t = tEnv.fromDataStream(testData(env));
                Table result = t.select($("*"));

-               List<Row> results = tEnv.toDataSet(result, Row.class).collect();
+               Iterable<Row> users = () -> DataStreamUtils.collect(tEnv.toAppendStream(result, Row.class));
+               List<Row> results = StreamSupport
+                       .stream(users.spliterator(), false)
+                       .collect(Collectors.toList());
                String expected =
                        "black,null,Whatever,[true],[hello],true,java.nio.HeapByteBuffer[pos=0 lim=10 cap=10]," +
                        "2014-03-01,java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],[7, -48],0.0,GREEN," +
                        "[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],42,{},null,null,null,123456," +
                        "12:12:12.000,123456,2014-03-01T12:12:12.321Z,null\n" +
                        "blue,null,Charlie,[],[],false,java.nio.HeapByteBuffer[pos=0 lim=10 cap=10],2014-03-01," +
                        "java.nio.HeapByteBuffer[pos=0 lim=2 cap=2],[7, -48],1.337,RED,null,1337,{}," +
-                       "{\"num\": 42, \"street\": \"Bakerstreet\", \"city\": \"Berlin\", \"state\": " +
-                       "\"Berlin\", \"zip\": \"12049\"},null,null,123456,12:12:12.000,123456," +
+                       "Berlin,42,Berlin,Bakerstreet,12049,null,null,123456,12:12:12.000,123456," +

Review comment:
       Can we add a TODO here? The record should actually pass the Table API unmodified, but this will come with FLIP-136.
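       One possible wording for that TODO, placed right above the adjusted expectation (phrasing is only a suggestion, and I am assuming FLIP-136 is the intended reference):

```java
// TODO: once FLIP-136 is in place the Avro record should pass the Table API unmodified,
//  so this flattened expectation can be reverted to the original record representation.
```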

##########
File path: flink-formats/flink-avro/src/test/java/org/apache/flink/table/runtime/batch/AvroTypesITCase.java
##########
@@ -147,24 +150,26 @@ public AvroTypesITCase(
 
        @Test
        public void testAvroToRow() throws Exception {
-               ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+               StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
                env.getConfig().registerTypeWithKryoSerializer(LocalDate.class, AvroKryoSerializerUtils.JodaLocalDateSerializer.class);
                env.getConfig().registerTypeWithKryoSerializer(LocalTime.class, AvroKryoSerializerUtils.JodaLocalTimeSerializer.class);
-               BatchTableEnvironment tEnv = BatchTableEnvironment.create(env, config());
+               StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, EnvironmentSettings.newInstance().useBlinkPlanner().build());

-               Table t = tEnv.fromDataSet(testData(env));
+               Table t = tEnv.fromDataStream(testData(env));
                Table result = t.select($("*"));

-               List<Row> results = tEnv.toDataSet(result, Row.class).collect();
+               Iterable<Row> users = () -> DataStreamUtils.collect(tEnv.toAppendStream(result, Row.class));
+               List<Row> results = StreamSupport

Review comment:
       nit: We recently added `CollectionUtils.iterableToList`; maybe this is easier to read than going through the StreamSupport class.
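       Roughly what that could look like (sketch only; I am assuming the helper refers to Flink's `CollectionUtil.iterableToList(Iterable<E>)`, and that `tEnv` and `result` are the variables from the diff above):

```java
import org.apache.flink.streaming.api.datastream.DataStreamUtils;
import org.apache.flink.util.CollectionUtil;

// Collect the result rows through the iterable helper instead of StreamSupport:
Iterable<Row> users = () -> DataStreamUtils.collect(tEnv.toAppendStream(result, Row.class));
List<Row> results = CollectionUtil.iterableToList(users);
```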




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
[email protected]

