Github user asfgit closed the pull request at:
https://github.com/apache/incubator-metron/pull/351
Github user ottobackwards commented on the issue:
https://github.com/apache/incubator-metron/pull/350
changed based on review, thanks!
Github user cestella commented on the issue:
https://github.com/apache/incubator-metron/pull/359
@mmiklavc I completely agree that we should ack every tuple that comes
through in the `JoinBolt`. I think, however, that they should be acked only on
a successful join.
The
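To make the acking question concrete, here is a minimal sketch of a Storm bolt that acks only after a successful join. It is an illustration only, not Metron's actual `JoinBolt`: the class name, the `joinMessages` helper, and the field names are made up, and it assumes the `org.apache.storm` package layout.

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class AckOnJoinBolt extends BaseRichBolt {
  private OutputCollector collector;

  @Override
  public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
    this.collector = collector;
  }

  @Override
  public void execute(Tuple tuple) {
    try {
      // Hypothetical join step: combine this tuple with whatever sides have arrived.
      Object joined = joinMessages(tuple);
      if (joined != null) {
        // Emit anchored to the input so downstream failures still trigger replay.
        collector.emit(tuple, new Values(joined));
        // Ack only once the join has actually succeeded.
        collector.ack(tuple);
      } else {
        // Incomplete join: fail (or leave un-acked) so the spout eventually replays it.
        collector.fail(tuple);
      }
    } catch (Exception e) {
      collector.reportError(e);
      collector.fail(tuple);
    }
  }

  @Override
  public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declare(new Fields("message"));
  }

  // Placeholder for whatever join logic the topology actually uses.
  private Object joinMessages(Tuple tuple) {
    return tuple.getValue(0);
  }
}
```

The key point is that a tuple which never completes its join is failed (or times out) and is replayed from the spout, rather than being acked and silently dropped.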
Github user cestella commented on the issue:
https://github.com/apache/incubator-metron/pull/350
I dig it. +1 by inspection.
Github user cestella commented on the issue:
https://github.com/apache/incubator-metron/pull/354
Yeah, I dig this a lot, @nickwallen +1 by inspection.
Github user cestella commented on the issue:
https://github.com/apache/incubator-metron/pull/351
+1 by inspection
Github user cestella commented on the issue:
https://github.com/apache/incubator-metron/pull/352
I worked out an example in Vagrant. Verified that the statistical and
profiler functions can be called from all the relevant places:
* The REPL
* The parser topology
* The
Github user mmiklavc commented on the issue:
https://github.com/apache/incubator-metron/pull/359
Looking into this further, but I imagine we probably do want to ack
all tuples in the join bolt regardless of whether they are anchored. From what
you're describing it looks like
Github user DomenicPuzio commented on the issue:
https://github.com/apache/incubator-metron/pull/359
Here's what I was seeing:
We were running only one Enrichment. So our EnrichmentSplitter sends
_originalTuple_ to the Join and _originalTuple_ to the Enrichment. Enrichment
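As a rough sketch of the wiring being described (not Metron's actual `EnrichmentSplitterBolt`; the stream and field names here are invented), a splitter that sends the same message to both a join stream and an enrichment stream anchors both emits to the original tuple, so both branches belong to one tuple tree:

```java
import java.util.Map;
import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.topology.OutputFieldsDeclarer;
import org.apache.storm.topology.base.BaseRichBolt;
import org.apache.storm.tuple.Fields;
import org.apache.storm.tuple.Tuple;
import org.apache.storm.tuple.Values;

public class SplitterSketchBolt extends BaseRichBolt {
  private OutputCollector collector;

  @Override
  public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
    this.collector = collector;
  }

  @Override
  public void execute(Tuple original) {
    Object message = original.getValueByField("message");
    // Both emits are anchored to the same input tuple, so the join branch and the
    // enrichment branch are part of one tuple tree.
    collector.emit("join", original, new Values(message));
    collector.emit("enrichment", original, new Values(message));
    collector.ack(original);
  }

  @Override
  public void declareOutputFields(OutputFieldsDeclarer declarer) {
    declarer.declareStream("join", new Fields("message"));
    declarer.declareStream("enrichment", new Fields("message"));
  }
}
```

Because both branches are anchored, the tree only completes when every branch has been acked; if one branch is never acked, the spout replays the original message and anything already written downstream shows up twice.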
Github user cestella commented on the issue:
https://github.com/apache/incubator-metron/pull/359
I do want to point out that it's really appreciated that you submitted a
PR. I have seen potential bugs similar to this one that turned out to be
duplicate data being sent to Kafka. I
Github user dlyle65535 commented on the issue:
https://github.com/apache/incubator-metron/pull/359
@DomenicPuzio - any chance you can introduce a unit or integration test
that exhibits the behavior you're fixing here?
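One way such a test might look, written against the illustrative bolt sketched earlier rather than Metron's real `JoinBolt`, is to mock the collector with Mockito and assert that the tuple is acked exactly once when the join succeeds:

```java
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.never;
import static org.mockito.Mockito.verify;
import static org.mockito.Mockito.when;

import org.apache.storm.task.OutputCollector;
import org.apache.storm.task.TopologyContext;
import org.apache.storm.tuple.Tuple;
import org.junit.Test;

public class AckOnJoinBoltTest {

  @Test
  public void acksWhenTheJoinSucceeds() {
    OutputCollector collector = mock(OutputCollector.class);
    Tuple tuple = mock(Tuple.class);
    when(tuple.getValue(0)).thenReturn("joined-message");

    AckOnJoinBolt bolt = new AckOnJoinBolt();
    bolt.prepare(null, mock(TopologyContext.class), collector);
    bolt.execute(tuple);

    // The tuple should be acked exactly once, and never failed.
    verify(collector).ack(tuple);
    verify(collector, never()).fail(tuple);
  }
}
```

A companion test could feed a tuple that cannot complete the join and verify that `fail` is called instead.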
Github user cestella commented on the issue:
https://github.com/apache/incubator-metron/pull/359
How does this result in duplicate data? Either the message completes the
join and isn't failed, or it does not complete the join, does not get past
the join bolt, and is replayed from
GitHub user DomenicPuzio opened a pull request:
https://github.com/apache/incubator-metron/pull/359
Change acking to prevent duplicate tuples in enrichment topology
Adding this ack statement prevents duplicate messages from being sent
through the _enrichment_ topology. Previously,
The question is whether we actually need to back-port at all at this point. I
think the assertion here is that pretty much everyone using Metron right
now is currently getting patches, etc. by upgrading to the latest release.
If/when we find a need to fork release branches we can certainly do it and
Would the back-ports also have to go through a full ‘Apache release’ process
and be planned out as well?
I don’t think that should all be worked out as we go.
On November 15, 2016 at 12:13:55, Michael Miklavcic
(michael.miklav...@gmail.com) wrote:
I'm a +1 on David and Nick's suggestions. 1
I'm a +1 on David and Nick's suggestions. 1 and 2 now, and let 3 happen
organically when the community has a need.
On Tue, Nov 15, 2016 at 9:29 AM, David Lyle wrote:
> I think that's an excellent understanding and suggestion on #3.
>
> Fwiw, the norm I've seen is to allow
I think that's an excellent understanding and suggestion on #3.
Fwiw, the norm I've seen is to allow the requester and the dev to work that
out.
Thanks,
-D...
On Tue, Nov 15, 2016 at 11:22 AM, Nick Allen wrote:
> I broke down what I am understanding of your suggestion
I broke down what I am understanding of your suggestion into bullet
points. Please correct me if I am wrong.
(1) Bump the rev immediately following a release
(2) Update the current version in master to 0.4.0
(3) Maintain and back-port bug fixes to a 0.3.x branch
I would agree with you on items
So, the notion is: we're going to have a 0.4.0 release at some future
point. If, during that release cycle, we found critical bug fix type issues
that we wanted to release out of cycle, we could patch the 0.3.0 branch and
cut a release from there. You're correct that we'd have to commit them to
We'd cut a release from master - we'd initially increment to 0.4.0-SNAPSHOT
with the approach David is recommending. And any patches in 0.3.x would
need to also be applied to master.
On Tue, Nov 15, 2016 at 7:57 AM, Nick Allen wrote:
> And where would the next release get
And where would the next release get cut from; master or the 0.3.x branch?
Or is that something we decide when we cut a release based on what we want
to include?
On Tue, Nov 15, 2016 at 9:52 AM, Nick Allen wrote:
> What kind of PRs would qualify as 0.3.x fixes? How would
What kind of PRs would qualify as 0.3.x fixes? How would we decide that?
For those we would then have to commit them against both the 0.3.x branch
and master (0.4.0), right?
Off the top of your head, can you think of a few recent PRs that would
qualify as patches? I'd just like to get a feel
I noticed that, Nick. Thanks for doing that!
On Tue, Nov 15, 2016 at 9:44 AM, Nick Allen wrote:
> +1 binding; Checksums, licenses, integration tests, Quick Dev all
> validated.
>
> Yay!
>
> Side note: I updated the instructions in the Wiki as they have changed
> slightly.
I plan to do it this Thursday afternoon. Happy to work together if it
would help.
Jon
On Tue, Nov 15, 2016, 03:05 James Sirota wrote:
> Hi Guys,
>
> Just went through the upgrade. It took a while, but mostly because the
> Hadoop (Storm, HBase, YARN) components failed to
Hi Guys,
Just went through the upgrade. It took a while, but mostly because the Hadoop
(Storm, HBase, YARN) components failed to upgrade properly via the automated
Ambari upgrade from this link: