thanks, appreciated :)
On Thu, Mar 23, 2017 at 4:59 PM Ted Yu wrote:
> Looks like you forgot to include JIRA number: BEAM-1799
>
> Cheers
>
> On Thu, Mar 23, 2017 at 4:26 PM, Stephen Sisk
> wrote:
>
> > hi!
> >
> > I just opened a jira ticket that I wanted to make sure the mailing list
> > got
>
Looks like you forgot to include JIRA number: BEAM-1799
Cheers
On Thu, Mar 23, 2017 at 4:26 PM, Stephen Sisk
wrote:
> hi!
>
> I just opened a jira ticket that I wanted to make sure the mailing list got
> a chance to see.
>
> The problem is that the current design pattern for doing data loading
hi!
I just opened a jira ticket that I wanted to make sure the mailing list got
a chance to see.
The problem is that the current design pattern for doing data loading in IO
ITs (either writing a small program or using an external tool) is complex,
inefficient and requires extra steps like install
Hi Davor,
Thanks for your response. I am working with my team. We have some questions
where we need a little bit of help.
We are creating a pipeline where the source is HDFS, but when the pipeline
is run it cannot find the Hadoop host.
Do we need to configure anything before we run this pipeline? I could not
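(For anyone hitting the same thing: a minimal configuration sketch of how Beam can be pointed at an HDFS cluster, assuming the beam-sdks-java-io-hadoop-file-system module is on the classpath. The NameNode host/port and input path below are placeholders, not details from this thread.)

```java
// Configuration sketch: registering the HDFS filesystem with Beam so that
// hdfs:// paths resolve. "namenode-host:8020" is a placeholder for the real
// cluster address (usually the value of fs.defaultFS in core-site.xml).
import java.util.Collections;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.TextIO;
import org.apache.beam.sdk.io.hdfs.HadoopFileSystemOptions;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.hadoop.conf.Configuration;

public class HdfsReadSketch {
  public static void main(String[] args) {
    HadoopFileSystemOptions options =
        PipelineOptionsFactory.fromArgs(args).as(HadoopFileSystemOptions.class);

    // Tell Beam where the NameNode lives; without this, hdfs:// paths
    // cannot be resolved and the pipeline fails to find the Hadoop host.
    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://namenode-host:8020"); // placeholder
    options.setHdfsConfiguration(Collections.singletonList(conf));

    Pipeline p = Pipeline.create(options);
    p.apply(TextIO.read().from("hdfs://namenode-host:8020/path/to/input-*"));
    p.run().waitUntilFinish();
  }
}
```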
I like the idea of being able to use WindowMappingFns to access state
across windows in a manner similar to how side inputs are accessed.
On Wed, Mar 22, 2017 at 9:56 PM, Kenneth Knowles (JIRA)
wrote:
>
> [ https://issues.apache.org/jira/browse/BEAM-1261?page=
> com.atlassian.jira.plugin.sys
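(For context: WindowMappingFn is the existing hook side inputs use to pick which side-input window a given main-input window reads from. A minimal sketch of one; the class name ToGlobalWindow is made up for illustration, while WindowMappingFn and GlobalWindow are real Beam types.)

```java
// Sketch of a WindowMappingFn that maps every main-input window to the
// single global window. Under the idea discussed above, state accessed
// through such a mapping would be visible across all windows.
import org.apache.beam.sdk.transforms.windowing.BoundedWindow;
import org.apache.beam.sdk.transforms.windowing.GlobalWindow;
import org.apache.beam.sdk.transforms.windowing.WindowMappingFn;

public class ToGlobalWindow extends WindowMappingFn<GlobalWindow> {
  @Override
  public GlobalWindow getSideInputWindow(BoundedWindow mainWindow) {
    // Every main-input window resolves to the one global window.
    return GlobalWindow.INSTANCE;
  }
}
```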
So, if everything is in place in Spark 2.X and we use provided dependencies
for Spark in Beam, theoretically you can run the same code on 2.X without
any need for a branch?
2017-03-23 9:47 GMT+02:00 Amit Sela :
> If StreamingContext is valid and we don't have to use SparkSession, and
> Accumulat
If StreamingContext is valid and we don't have to use SparkSession, and
Accumulators are valid as well and we don't need AccumulatorV2, I don't
see a reason this shouldn't work (which means there are still tons of
reasons this could break, but I can't think of them off the top of my head
right now).
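(A quick sketch of that assumption, not from the thread: in Spark 2.X the 1.X-style entry points are still reachable from SparkSession, so DStream/RDD code can in principle run unchanged. Assumes spark-core, spark-sql, and spark-streaming 2.X dependencies on the classpath; the app name and master are placeholders.)

```java
// Sketch: deriving the 1.x-style contexts from a 2.x SparkSession.
// SparkSession wraps a SparkContext, and a JavaStreamingContext can be
// built from it, so existing DStream/RDD code has an entry point in 2.X.
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class Spark2Entry {
  // Builds the 2.x session and derives the 1.x-style contexts from it.
  static JavaStreamingContext buildStreamingContext() {
    SparkSession session = SparkSession.builder()
        .master("local[2]").appName("beam-spark2-sketch").getOrCreate();
    JavaSparkContext jsc = new JavaSparkContext(session.sparkContext());
    return new JavaStreamingContext(jsc, Durations.seconds(1));
  }

  public static void main(String[] args) {
    JavaStreamingContext jssc = buildStreamingContext();
    // DStream/RDD pipeline code would go here unchanged.
    jssc.stop(true); // also stops the underlying SparkContext
  }
}
```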
Hi Kobi,
It's part of the plan, yes. Let me push the branch to my GitHub and share it
with you (rebasing).
Regards
JB
On 03/23/2017 08:20 AM, Kobi Salant wrote:
Hi,
We use SparkContext & StreamingContext extensively in the Spark runner to
create the DStreams & RDDs so we will need to work on mi
Hi,
We use SparkContext & StreamingContext extensively in the Spark runner to
create the DStreams & RDDs, so we will need to work on migrating from the
1.X terms to the 2.X terms (we may find other incompatibilities during the
work).
Regards
Kobi
2017-03-23 6:55 GMT+02:00 Jean-Bapti