I have changed all the .yml files to have the exact slots you mentioned,
and still ended up with 4 slots.
Late last night I turned off the Snort and Bro parsers in Monit, restarted
my topology, and got a worker.
But…
Now I’m seeing data die in enrichment. I have exceptions on several of the
bolts that I’m going to send a separate mail about. I’m not sure whether I
need to stop more services besides the parsers, or whether I’m hitting a
memory limit. I really want to validate my end-to-end flow before
redeploying my cluster.

I have hand-applied the “start Ambari cluster” fix. The deployment still
dies at times while setting up the Kafka topics because there are 0
brokers, but restarting with the right vagrant provision call gets me to
completion (see METRON-472).

Can someone else try changing the slots? Is it just me?

On September 29, 2016 at 07:28:34, David Lyle ([email protected]) wrote:

IIRC, you're trying to persist a parser as a default in your single node
machine? If that's the case, changing line 75 of single_node_vm.yml to
supervisor.slots.ports: "[6700, 6701, 6702, 6703, 6704]" and running full
dev should do the trick. You'll need to apply a patch to get full dev to
build. [1]
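For reference, here is roughly what that line would look like after the
change. This is a sketch of the relevant vars entry only; the comment is
illustrative, not the actual file contents:

```yaml
# single_node_vm.yml (Ansible vars fed to Ambari at install time)
# Five supervisor slots instead of the default four:
supervisor.slots.ports: "[6700, 6701, 6702, 6703, 6704]"
```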

Can you post a link to the changes you've made so far?

Alternatively, if you're testing out your parser, you could, and I would
recommend you do, change the configuration of Quick Dev after launch.

Full dev is more for testing the HDP install itself. I don't recommend it
for normal Metron testing and development. Currently, it's a
bit...ah...challenging to get the cluster running using full dev; that's
what METRON-467 [2] was created to address.

To add slots, extend supervisor.slots.ports to 6700,6701,6702,6703,6704 in
the Storm config that Ambari exposes after launching Quick Dev, then
restart Storm. It may help to tail the Storm logs during the restart to see
whether there is a port conflict or something similarly odd.
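If the change sticks, the storm.yaml that Ambari renders should end up with
the extra ports, something like the following (a sketch; the path and exact
formatting of the generated file are the usual HDP defaults, not verified
here):

```yaml
# /etc/storm/conf/storm.yaml (as generated by Ambari)
supervisor.slots.ports:
  - 6700
  - 6701
  - 6702
  - 6703
  - 6704
```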

It looks like you're on the right track. If you've tried changing the
configuration in Ambari and restarting Storm, please post your storm.yaml.

Also, remember, each slot will require additional memory on an already
constrained machine. Rather than adding slots, I'd shut down an existing
sensor topology and sensor probe prior to adding my parser.
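As a sketch of that alternative, the commands would look something like
this. The service and topology names here (snort) are assumptions based on
the parsers mentioned earlier in the thread; check `monit summary` and
`storm list` for the actual names on your machine:

```shell
# Stop Monit from restarting the sensor probe (service name is an assumption)
monit stop snort

# Kill the running parser topology, allowing 10s for in-flight tuples
# (topology name is an assumption)
storm kill snort -w 10
```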

That said, if you do land on a configuration that runs reliably with
additional slots, please post it back here.

Thanks for sticking with this!

-D...

[1] https://github.com/apache/incubator-metron/pull/279
[2] https://issues.apache.org/jira/browse/METRON-467

On Thu, Sep 29, 2016 at 6:20 AM, Nick Allen <[email protected]> wrote:

> Are you trying this with Full-dev, not Quick-dev? Quick-dev uses a
> pre-built image that is downloaded from Atlas. This image already has HDP
> installed. Since you want to deploy with custom changes to HDP, you need
> to use Full-Dev for this.
>
> On Sep 29, 2016 12:49 AM, "Otto Fowler" <[email protected]> wrote:
>
> > I have changed these settings for all the configurations (after single
> > node did not do the trick) and it has not worked.
> > No matter how many slots.ports I have in the vars, I still get just 4
> > slots.
> > I don’t see anywhere else that it can be changed, or any other ‘slots’
> > or ‘workers’ in metron-development.
> >
> > Does it work for anyone else?
> >
> > --
> >
> > On September 28, 2016 at 17:26:20, Nick Allen ([email protected])
> wrote:
> >
> > To deploy it with more slots, look at
> > `metron-deployment/roles/ambari_config`. Under vars/ are the initial
> > settings that get thrown at Ambari.
> >
> > On Wed, Sep 28, 2016 at 4:56 PM, Otto Fowler <[email protected]>
> > wrote:
> >
> > > Where can I increase the number of workers for Storm in
> > > metron-deployment? Turns out if you add a new parser but don’t have a
> > > worker for it, it doesn’t… um, work.
> > >
> >
> >
> >
> > --
> > Nick Allen <[email protected]>
> >
> >
>
