> Thanks for the report!
>
> Regards,
> Robert
>
>
> On Wed, Jun 1, 2016 at 6:42 PM, David Kim <david@braintreepayments.com
> > wrote:
>
Hello!
Using Flink 1.0.3.
This is cosmetic but will help clean up logging I think.
The Apache *KafkaConsumer* logs a warning [1] for any unused properties.
This is great in case the developer has a typo or should clean up any
unused keys.
Flink's Kafka consumer and producer have some custom
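The "warn on unused properties" behavior mentioned above can be sketched with the standard library alone: record which keys are actually read, then report the rest. This is an illustration of the pattern, not Kafka's or Flink's actual internals; all class and method names here are made up.

```java
import java.util.HashSet;
import java.util.Properties;
import java.util.Set;

// Sketch of the "warn on unused config" pattern the Kafka consumer uses.
// Names are illustrative only, not Kafka's real classes.
public class ConfigAudit {
    private final Properties props;
    private final Set<String> used = new HashSet<>();

    public ConfigAudit(Properties props) {
        this.props = props;
    }

    // Read a key and remember that it was consumed.
    public String get(String key) {
        used.add(key);
        return props.getProperty(key);
    }

    // Any key never read is reported, e.g. a typo'd key.
    public Set<String> unusedKeys() {
        Set<String> unused = new HashSet<>(props.stringPropertyNames());
        unused.removeAll(used);
        return unused;
    }

    public static void main(String[] args) {
        Properties p = new Properties();
        p.setProperty("bootstrap.servers", "localhost:9092");
        p.setProperty("boostrap.servrs", "oops"); // typo, never read
        ConfigAudit audit = new ConfigAudit(p);
        audit.get("bootstrap.servers");
        for (String k : audit.unusedKeys()) {
            System.out.println("WARN unused config key: " + k);
        }
    }
}
```

Surfacing the leftover keys like this is exactly what makes the typo case easy to spot in the logs.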
> Jira issue to track this: FLINK-3969
> <https://issues.apache.org/jira/browse/FLINK-3969>.
>
> Thanks for reporting!
>
> Aljoscha
>
> On Mon, 23 May 2016 at 22:08 David Kim <david@braintreepayments.com>
> wrote:
>
>> Hi Max!
>>
>> Unfortunate
1 PM Maximilian Michels <m...@apache.org> wrote:
> Hi David,
>
> I'm afraid Flink logs all exceptions. You'll find the exceptions in the
> /log directory.
>
> Cheers,
> Max
>
> On Mon, May 23, 2016 at 6:18 PM, David Kim <
> david@braintree
Hello!
Just wanted to check up on this. :)
I grepped around for `log.error` and it *seems* that currently the only
events for logging out exceptions are for non-application related errors.
Thanks!
David
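The behavior being requested above can be sketched with the standard library: log the exception at error level before rethrowing, so it reaches the log files as well as any UI. This is an illustration of the pattern, not Flink's actual code.

```java
import java.util.logging.Level;
import java.util.logging.Logger;

// Sketch: log an application exception at ERROR level before rethrowing,
// so it appears in the log files and not only in a dashboard.
public class LoggedRun {
    private static final Logger LOG = Logger.getLogger(LoggedRun.class.getName());

    static void runLogged(Runnable job) {
        try {
            job.run();
        } catch (RuntimeException e) {
            // Equivalent of the log.error call asked about in the thread.
            LOG.log(Level.SEVERE, "Job failed", e);
            throw e;
        }
    }

    public static void main(String[] args) {
        try {
            runLogged(() -> { throw new IllegalStateException("boom"); });
        } catch (IllegalStateException e) {
            System.out.println("rethrown: " + e.getMessage());
        }
    }
}
```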
On Fri, May 20, 2016 at 12:35 PM David Kim <david@braintreepayments.com>
Hello!
Using flink 1.0.2, I noticed that exceptions thrown during a flink program
would show up on the flink dashboard in the 'Exceptions' tab. That's great!
However, I don't think flink currently logs this same exception. I was
hoping there would be an equivalent `log.error` call so that third
Hello all,
I read the documentation at [1] on iterations and had a question on whether
an assumption is safe to make.
As partial solutions are continuously looping through the step function,
when new elements are added as iteration inputs will the insertion order of
all of the elements be
Hi Stephan!
Following up on this issue, it seems the issue doesn't show itself when
using version 1.0.1. I'm able to run our unit tests in IntelliJ now.
Thanks!
David
On Wed, Apr 13, 2016 at 1:59 PM Stephan Ewen wrote:
> Does this problem persist? (It might have been caused
Hi Robert!
Thank you! :)
David
On Tue, Mar 22, 2016 at 7:59 AM, Robert Metzger <rmetz...@apache.org> wrote:
> Hey David,
>
> FLINK-3602 has been merged to master.
>
> On Fri, Mar 11, 2016 at 5:11 PM, David Kim <
> david@braintreepayments.com> wrote:
>
>> recursive types.
>> Let's check this and come up with a fix...
>>
>> Greetings,
>> Stephan
>>
>>
>> On Thu, Mar 10, 2016 at 4:11 PM, David Kim <
>> david@braintreepayments.com> wrote:
>>
>>> Hello!
>>>
>>> Just wanted t
Hello all,
I'm running into a StackOverflowError using flink 1.0.0. I have an Avro
schema that has a self reference. For example:
item.avsc
{
  "namespace": "...",
  "type": "record",
  "name": "Item",
  "fields": [
    {
      "name": "parent",
      "type": ["null", "Item"]
    }
  ]
}
When
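The failure mode can be reproduced without Avro: the `Item` record's `parent` field points back at `Item`, so a type walker that recurses into field types without tracking what it has already visited never terminates. The sketch below is stdlib-only and illustrative; it does not mirror Flink's TypeExtractor.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

// Sketch of why a self-referencing schema overflows the stack.
public class RecursiveSchema {
    static class RecordType {
        final String name;
        final List<RecordType> fieldTypes = new ArrayList<>();
        RecordType(String name) { this.name = name; }
    }

    // Naive walk: recurses forever on a cyclic schema -> StackOverflowError.
    static int naiveCount(RecordType t) {
        int n = 1;
        for (RecordType f : t.fieldTypes) n += naiveCount(f);
        return n;
    }

    // Safe walk: skip types already seen.
    static int safeCount(RecordType t, Set<RecordType> seen) {
        if (!seen.add(t)) return 0;
        int n = 1;
        for (RecordType f : t.fieldTypes) n += safeCount(f, seen);
        return n;
    }

    public static void main(String[] args) {
        RecordType item = new RecordType("Item");
        item.fieldTypes.add(item); // parent: ["null", "Item"]
        try {
            naiveCount(item);
        } catch (StackOverflowError e) {
            System.out.println("naive walk: StackOverflowError");
        }
        Set<RecordType> seen =
            Collections.newSetFromMap(new IdentityHashMap<>());
        System.out.println("safe walk sees " + safeCount(item, seen) + " type(s)");
    }
}
```

A cycle check like the `seen` set is the usual fix for walkers over self-referential schemas.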
t downloaded the "flink-1.0-SNAPSHOT-bin-hadoop2_2.11.tgz" but
> there is no jar compiled with Scala 2.10. Could you check again?
> >
> > Regards,
> > Chiwan Park
> >
> >> On Feb 10, 2016, at 2:59 AM, David Kim <david@braintreepayments.com>
> wrote:
Hello again,
I saw the recent change to flink 1.0-SNAPSHOT on explicitly adding the
scala version to the suffix.
I have a sbt project that fails. I don't believe it's a misconfiguration
error on my end because I do see in the logs that it tries to resolve
everything with _2.11.
Could this
d!
>
> The dependencies that SBT marks as wrong
> (org.apache.flink:flink-shaded-hadoop2,
> org.apache.flink:flink-core, org.apache.flink:flink-annotations) are
> actually those that are Scala-independent, and have no suffix at all.
>
> It is possible your SBT file does not like mixing depen
" % "3.2" % "it,test",
"net.manub" %% "scalatest-embedded-kafka" % "0.4.1" % "it,test"
)
My project settings are in a file called MyBuild.scala
object MyBuild extends Build {
override lazy val settings = super.settings +
, you
> can set "flink.disable-metrics" to "true" in the properties. This way, you
> disable the metrics.
> I'll probably have to introduce something like a client id to
> differentiate between the producers.
>
> Robert
>
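Robert's suggestion amounts to one extra entry in the connector's `Properties`. The key below is quoted from the thread; everything else is ordinary `java.util.Properties` usage and just a sketch (the actual connector constructor call is commented out since it needs the Flink dependency).

```java
import java.util.Properties;

// Sketch: disabling the Kafka connector's metrics via a property,
// per the thread. Only the property key is taken from the discussion.
public class DisableMetrics {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("flink.disable-metrics", "true");
        // new FlinkKafkaConsumer08<>(topic, schema, props) would then
        // skip registering its metrics (per the thread).
        System.out.println("metrics disabled: "
            + Boolean.parseBoolean(props.getProperty("flink.disable-metrics")));
    }
}
```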
> On Thu, Jan 21, 2016 at 11:51 PM, Da
guess everything in your topology runs with a parallelism of 1? Running
> it with a parallelism higher than 1 will also work around the issue
> (because then the two Sinks are not executed in one Task).
>
> On Fri, Jan 22, 2016 at 4:56 PM, David Kim <
> david@braintreepayment
> Please let me know if the updated code has any issues. I'll fix the issues
> asap.
>
> On Wed, Jan 13, 2016 at 5:06 PM, David Kim <
> david@braintreepayments.com> wrote:
>
>> Thanks Robert! I'll be keeping tabs on the PR.
>>
>> Cheers,
>> Davi
> Robert
>
>
> On Fri, Jan 15, 2016 at 4:02 PM, David Kim <
> david@braintreepayments.com> wrote:
>
>> Thanks Till! I'll keep an eye out on the JIRA issue. Many thanks for the
>> prompt reply.
>>
>> Cheers,
>> David
>>
>> On Fri,
em out, then call
> tools/change-scala-version.sh
> 2.11 in the root directory and then mvn clean install -DskipTests
> -Dmaven.javadoc.skip=true. These binaries should depend on the right
> Scala version.
>
> Cheers,
> Till
>
>
> On Thu, Jan 14, 2016 at 11:25 PM, David K
Hi,
I have a scala project depending on flink scala_2.11 and am seeing a
compilation error when using sbt.
I'm using flink 1.0-SNAPSHOT and my build was working yesterday. I was
wondering if maybe a recent change to flink could be the cause?
Usually we see flink resolving the scala _2.11
t the fix.
> I hope that I can merge the connector to master this week, then, the fix
> will be available in 1.0-SNAPSHOT as well.
>
> Regards,
> Robert
>
>
>
> Sent from my iPhone
>
> On 11.01.2016, at 21:39, David Kim <david@braintreepayments.com>
> wrote:
Hello all,
I saw that DeserializationSchema has an API "isEndOfStream()".
https://github.com/apache/flink/blob/master/flink-streaming-java/src/main/java/org/apache/flink/streaming/util/serialization/DeserializationSchema.java
Can *isEndOfStream* be utilized to somehow terminate a streaming
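The contract being asked about can be sketched stdlib-only: the consumer deserializes each record and stops consuming once the schema reports end-of-stream for an element. The interface below only mimics the shape of `DeserializationSchema`; it is a stand-in, not Flink's API.

```java
import java.util.Arrays;
import java.util.Iterator;

// Sketch of the isEndOfStream() contract: consumption terminates when
// the schema flags an element as the end of the stream.
public class EndOfStreamDemo {
    interface Schema<T> {
        T deserialize(byte[] message);
        boolean isEndOfStream(T nextElement);
    }

    static <T> int consumeUntilEnd(Iterator<byte[]> source, Schema<T> schema) {
        int emitted = 0;
        while (source.hasNext()) {
            T element = schema.deserialize(source.next());
            if (schema.isEndOfStream(element)) break; // terminate the stream
            emitted++;
        }
        return emitted;
    }

    public static void main(String[] args) {
        Schema<String> schema = new Schema<String>() {
            public String deserialize(byte[] m) { return new String(m); }
            public boolean isEndOfStream(String s) { return s.equals("END"); }
        };
        Iterator<byte[]> source = Arrays.asList(
            "a".getBytes(), "b".getBytes(), "END".getBytes(), "c".getBytes()
        ).iterator();
        // "a" and "b" are emitted; "END" stops the loop before "c".
        System.out.println("emitted " + consumeUntilEnd(source, schema));
    }
}
```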