[ https://issues.apache.org/jira/browse/BEAM-6683?focusedWorklogId=255032&page=com.atlassian.jira.plugin.system.issuetabpanels:worklog-tabpanel#worklog-255032 ]

ASF GitHub Bot logged work on BEAM-6683:
----------------------------------------

                Author: ASF GitHub Bot
            Created on: 06/Jun/19 10:47
            Start Date: 06/Jun/19 10:47
    Worklog Time Spent: 10m 
      Work Description: mxm commented on issue #8174: [BEAM-6683] add 
createCrossLanguageValidatesRunner task
URL: https://github.com/apache/beam/pull/8174#issuecomment-499445538
 
 
   I'm just collecting all the issues here. If they do not appear on Jenkins, 
we can address them separately.
   
   After fixing the Docker executable path manually, I've got the tests 
running. The problematic test is `xlang_parquetio_test.py#test_write_and_read`. 
I wonder: isn't it problematic to write to and read from a Parquet file within 
the same job? Especially since Flink does not usually do staged execution but 
rather "pipelines" the data through the entire DAG. I think the change in #8693 
might have made that problem visible. Could we first execute the write pipeline 
and then run a separate read pipeline afterwards?
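   As a rough illustration of that suggestion (not code from the PR or the 
test suite), the test could run the write and the read as two separate 
pipelines, waiting for the first job to finish before submitting the second. 
The sketch below substitutes plain text IO and a placeholder output path for 
the actual cross-language Parquet transforms, purely to show the two-stage 
structure:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Placeholder options; in the real test these would point at the Flink
# portable runner (e.g. --runner=PortableRunner --job_endpoint=...).
options = PipelineOptions()

# Stage 1: the write pipeline. Exiting the `with` block runs the pipeline
# and waits for it to finish, so the output files exist before stage 2.
with beam.Pipeline(options=options) as write_pipeline:
    _ = (
        write_pipeline
        | 'Create' >> beam.Create(['a', 'b', 'c'])
        | 'Write' >> beam.io.WriteToText('/tmp/xlang_test/out'))

# Stage 2: a separate read pipeline, submitted only after the write job has
# completed, so no single job both writes and reads the same files.
with beam.Pipeline(options=options) as read_pipeline:
    _ = (
        read_pipeline
        | 'Read' >> beam.io.ReadFromText('/tmp/xlang_test/out*')
        | 'Check' >> beam.Map(print))
```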
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


Issue Time Tracking
-------------------

    Worklog Id:     (was: 255032)
    Time Spent: 14.5h  (was: 14h 20m)

> Add an integration test suite for cross-language transforms for Flink runner
> ----------------------------------------------------------------------------
>
>                 Key: BEAM-6683
>                 URL: https://issues.apache.org/jira/browse/BEAM-6683
>             Project: Beam
>          Issue Type: Test
>          Components: testing
>            Reporter: Chamikara Jayalath
>            Assignee: Heejong Lee
>            Priority: Major
>          Time Spent: 14.5h
>  Remaining Estimate: 0h
>
> We should add an integration test suite that covers the following:
> (1) Currently available Java IO connectors that do not use UDFs work with the 
> Python SDK on the Flink runner.
> (2) Currently available Python IO connectors that do not use UDFs work with the 
> Java SDK on the Flink runner.
> (3) Currently available Java/Python pipelines work in a scalable manner as 
> cross-language pipelines (for example, try 10GB and 100GB inputs for 
> textio/avroio in Java and Python). 
>  
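
As a hedged sketch of what one pipeline in such a suite might look like (none of 
this comes from the issue itself), the snippet below shows a Python pipeline that 
targets the Flink portable runner and applies a transform expanded by an external 
(Java) expansion service. The URN, the expansion-service address, and the job 
endpoint are placeholders, not values taken from the Beam test suite:

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.external import ExternalTransform

# Placeholder endpoints: a Flink job server on port 8099 and a Java
# expansion service on port 8097 are assumed to be running locally.
options = PipelineOptions([
    '--runner=PortableRunner',
    '--job_endpoint=localhost:8099',
    '--environment_type=DOCKER',
])

with beam.Pipeline(options=options) as p:
    _ = (
        p
        | 'Create' >> beam.Create(['a', 'b', 'a'])
        # 'my:hypothetical:urn:v1' stands in for a URN registered with the
        # expansion service; this sketch passes no configuration payload.
        | 'Xlang' >> ExternalTransform(
            'my:hypothetical:urn:v1', None, 'localhost:8097')
        | 'Print' >> beam.Map(print))
```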



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
