Done. Thank you for contributing.
On Wed, Apr 24, 2019 at 10:18 AM Yoshiki Obata
wrote:
> Hello everyone
>
> This is Yoshiki Obata.
> I created ticket BEAM-7137 and plan to fix it.
> I'd be glad if someone could add me as a contributor in Jira.
>
> my Jira username is yoshiki.obata
>
> Best regards,
>
I do not know the answer. I believe this will be similar to sharing the RC
artifacts for validation purposes and would not be a formal release by
itself. But I am not an expert and I hope others will share their opinions.
I quickly searched pypi for Apache projects and found at least airflow [1]
Hi Robert,
In addition to the questions described by Dian, I also want to know what
difficult problems Py4j's solution will encounter in adding UDF support,
which you mentioned as follows:
> Using something like Py4j is an easy way to get up and running, especially
> for a very faithful API, but the
Thanks everyone for the discussion here.
Regarding the Java/Scala UDFs and the built-in UDFs executing in the current
Flink way (directly in the JVM, not via RPC), I share the same thoughts as Max
and Robert and I think it will not be a big problem. From the design doc, I
guess the main
Pablo, Kenneth and I have a new blog post ready for publication which covers
how to create a "looping timer"; it allows default values to be created in a
window when no incoming elements exist. We just need to clear a few bits
before publication, but it would be great to have that also include a
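For readers who haven't seen the post yet, the looping-timer idea can be approximated outside Beam; the sketch below is illustrative only (the function name, window arithmetic, and "no-data" default are my own, not from the blog post):

```python
def fill_empty_windows(element_timestamps, start, end, window_size, default="no-data"):
    """For each fixed window in [start, end), pair the window's start time
    with a default value when no element fell into that window, or None
    when real elements were present."""
    occupied = {ts // window_size for ts in element_timestamps}
    out = []
    for w in range(start // window_size, end // window_size):
        out.append((w * window_size, None if w in occupied else default))
    return out

print(fill_empty_windows([5, 25], start=0, end=30, window_size=10))
# → [(0, None), (10, 'no-data'), (20, None)]
```

In Beam this is done with an event-time timer that keeps re-setting itself each window, but the net effect is the same: every window produces output even with no input.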
Hi,
wouldn't that be in conflict with the Apache release policy [1]?
[1] http://www.apache.org/legal/release-policy.html
On Thu, Apr 25, 2019 at 1:35 AM Alan Myrvold wrote:
> Great idea. I'd like the RC candidates to follow as much of the release
> artifact process as possible.
>
> On Wed, Apr 24,
Hi,
I assigned you BEAM-352 and restarted the failing tests on your PR. There
is also a reviewer assigned to your PR.
Ahmet
On Wed, Apr 24, 2019 at 11:26 AM Madhusudhan Reddy Vennapusa <
sudhan...@gmail.com> wrote:
> Hi Team,
>
> I worked on [BEAM-3344] and raised a pull request, though Java
Great idea. I'd like the RC candidates to follow as much of the release
artifact process as possible.
On Wed, Apr 24, 2019 at 3:27 PM Ahmet Altay wrote:
> To clarify my proposal, I am proposing publishing to the production pypi
> repository with an rc tag in the version, and in turn allowing users
To clarify my proposal, I am proposing publishing to the production pypi
repository with an rc tag in the version, and in turn allowing users to
depend on Beam's RC version plus all of their other regular dependencies,
directly from pypi.
Publishing to test pypi repo would also be helpful if
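For context on what an rc tag in the version buys us: PEP 440 orders a pre-release like 2.X.0rcY strictly before the final 2.X.0, so pinning an RC never shadows the eventual release. Here is a hand-rolled sketch of that ordering (illustrative only; real tooling should use the packaging library rather than this regex):

```python
import re

def sort_key(version):
    # Split "2.13.0rc1" into a sortable tuple; a final release gets a
    # marker that orders after any rc of the same release number.
    m = re.fullmatch(r"(\d+)\.(\d+)\.(\d+)(?:rc(\d+))?", version)
    major, minor, micro, rc = m.groups()
    pre = (0, int(rc)) if rc is not None else (1, 0)
    return (int(major), int(minor), int(micro)) + pre

versions = ["2.13.0", "2.13.0rc2", "2.12.0", "2.13.0rc1"]
print(sorted(versions, key=sort_key))
# → ['2.12.0', '2.13.0rc1', '2.13.0rc2', '2.13.0']
```

Because of this ordering, a plain `pip install apache-beam` would never pick up the RC; users would have to opt in with an explicit pin or pip's `--pre` flag.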
I think this is a great idea. One way of doing it for Python would be to use
the test repository for PyPI [1]; that way we would not have to do an
official PyPI release, but would still be able to install it with pip (by
passing an extra flag) and test.
In fact, there are some Beam artifacts
Hi all,
What do you think about the idea of publishing pre-release artifacts as
part of the RC emails?
For Python, this would translate into publishing the same artifacts from the
RC email with a version like "2.X.0rcY" to pypi. I do not know, but I am
guessing we can do a similar thing with Maven
Well, state is still not implemented for merging windows, even for Java
(though I believe the idea was to disallow ValueState there).
On Wed, Apr 24, 2019 at 1:11 PM Robert Bradshaw wrote:
> It was unclear what the semantics were for ValueState for merging
> windows. (It's also a bit weird, as
It was unclear what the semantics were for ValueState for merging
windows. (It's also a bit weird, as it's inherently a race condition
with respect to element ordering, unlike BagState and CombiningState, though
you can always implement it as a CombiningState that always returns the
latest value, which is a bit more
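To make the "CombiningState that returns the latest value" idea concrete, here is a tiny non-Beam sketch; the explicit sequence number is my own addition, included precisely because a bare "latest" is the ordering race mentioned above:

```python
def latest_wins(accumulator, value):
    # Combiner that keeps the (sequence, payload) pair with the highest
    # sequence number. Without the explicit sequence number, "latest"
    # would depend on arrival order -- exactly the race being discussed.
    if accumulator is None or value[0] >= accumulator[0]:
        return value
    return accumulator

state = None
for update in [(1, "a"), (3, "c"), (2, "b")]:  # out-of-order delivery
    state = latest_wins(state, update)
print(state)  # → (3, 'c'): the sequence-3 write wins regardless of order
```

The combiner is commutative and associative over sequence numbers, which is what makes the result deterministic even when the runner delivers elements in a different order.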
That's a great idea! I thought about this too after those posts came up on
the list recently. I started to look into it, but I noticed that there's
actually no implementation of ValueState in userstate. Is there a reason
for that? I started to work on a patch to add it but I was just curious if
Hi Team,
I worked on [BEAM-3344] and raised a pull request. Although the Java
pre-commit is failing (due to GCP tests), it's not related to my changes.
Could someone please review my changes, or let me know if the Java pre-commit
is actually failing because I missed something?
Also, I would like to start
Thanks for the useful pointers! We are looking forward to integrating both
Portable and Python-specific tests for Samza runner. A few questions:
- For portable runner tests: looking at the portableValidatesRunnerTask in
flink_job_server.gradle, it seems it's the same set of Java tests but
Hello everyone
This is Yoshiki Obata.
I created ticket BEAM-7137 and plan to fix it.
I'd be glad if someone could add me as a contributor in Jira.
my Jira username is yoshiki.obata
Best regards,
Yoshiki
--
Yoshiki Obata
mail: yoshiki.ob...@gmail.com
gh: https://github.com/lazylynx
It seems to me that we can assume that if Beam is running in a Java 11
runtime, any Java 11 features used in the body of a DoFn should just work.
The interesting part will be whether there is anything on the boundary that
changes (e.g. are there changes to type inference rules that make them
Thanks for the meeting summary, Stephan. Sounds like you covered a lot of
ground. Some more comments below, adding onto what Max has said.
On Wed, Apr 24, 2019 at 3:20 PM Maximilian Michels wrote:
>
> Hi Stephan,
>
> This is exciting! Thanks for sharing. The inter-process communication
> code
Fully agree that this is an effort that goes beyond changing a type
parameter but I think we have a chance here to cooperate between the two
projects. I would be happy to help out where I can.
I'm not sure at this point what exactly is feasible for reuse but I
would imagine the Runner-related
The Nexmark dataflow runs don't seem to be triggered by Run Java
PostCommit.
On Wed, Apr 24, 2019 at 1:58 AM Etienne Chauchot
wrote:
> Reuven,
>
> Nexmark tests are indeed run as PostCommits (each commit on master). I
> guess we have been flooded with jenkins notification emails.
>
> Etienne
>
Hi all,
FYI I just submitted a PR [1] to add the CVE audit plugin to the build
as an optional task: gradlew audit --info.
[1] https://github.com/apache/beam/pull/8388
Etienne
Le mardi 23 avril 2019 à 17:25 +0200, Etienne Chauchot a écrit :
> Hi, should I merge my branch
>
Hi Stephan,
This is exciting! Thanks for sharing. The inter-process communication
code looks like the most natural choice as common ground. To go
further, there are indeed some challenges to solve.
=> Biggest question is whether the language-independent DAG is expressive
enough to capture
On Wed, Apr 24, 2019 at 12:21 PM Maximilian Michels wrote:
>
> Good idea to let the client expose an artifact staging service that the
> ExpansionService could use to stage artifacts. This solves two problems:
>
> (1) The Expansion Service not being able to access the Job Server
> artifact
Hi all,
I'm currently working on enhancing a Beam test suite to check compatibility
with Java 11 UDFs. As JDK 11 introduces several useful features, I wanted to
turn to the dev list to gather your opinions on which features should be
included in the DoFn.
To give you an idea of how the test will
Good idea to let the client expose an artifact staging service that the
ExpansionService could use to stage artifacts. This solves two problems:
(1) The Expansion Service not being able to access the Job Server
artifact staging service
(2) The client not having access to the dependencies
Hi Kenn, I think you are right: the Python SDK harness can be shared with
Flink, and we also need to add some new primitive operations. Regarding the
runner side, I think most of the code in runners/java-fn-execution can be
shared (though it needs some improvement, such as FnDataService); some of
them
If you are interested in portable python pipeline validation, I think
fn_api_runner_test would also help.
Just to note, Ankur mentioned flinkCompatibilityMatrix, that one uses
fn_api_runner_test with some tooling on top to bring up the test cluster.
On 23.04.19 19:23, Boyuan Zhang wrote:
Hi Reuven,
Nexmark tests are indeed run as PostCommits (each commit on master). I guess
we have been flooded with Jenkins notification emails.
Etienne
Le mardi 23 avril 2019 à 15:24 -0700, Reuven Lax a écrit :
> I mistakenly thought that the Java PostCommit would run these tests, and I
> merged based
Hi, I agree that checking Nexmark should be a mandatory task of the release
process; I think it is already mentioned in the spreadsheet. Indeed, it
detects both functional and performance regressions, and across the whole
Beam model scope. The only things lacking in Nexmark are with 2 runners:
Hi Stephan,
Thanks for your summary. From my point of view, we are on the same
page about the conclusions of the discussion!
I completely agree that we can divide the support of the Python Table API
into short-term and long-term goals, and that the design of the short-term
goals should be smoothly