If you would like, share a link to a GitHub repo that has your changes.  We
can take a closer look at what the problem might be.

On Thu, Sep 29, 2016 at 10:34 AM, Nick Allen <[email protected]> wrote:

> It is very difficult to do much with Metron on a single-node Vagrant box.  I
> think Dave is right in saying that adding slots to Storm is probably just
> going to make things worse.  You are probably running out of memory, which
> is causing various core services to either die or be killed by the OS. Many
> weird, annoying things will happen in this case.  You will lose sleep,
> probably lose hair, and may become extremely anti-social.  All have
> happened to me.
>
> I would highly advise working on a multi-node cluster, e.g. on AWS.  If that
> is not possible, I would do the following.
>
> 1. Edit `metron-deployment/inventory/full-dev-platform/group_vars/all`
> and remove everything from the list of `services_to_start` that you do not
> absolutely need.  After METRON-466, which was just committed to master this
> morning, full-dev and quick-dev will each have their own configuration file
> with this setting.  Choose the appropriate one.
> 2. Spin up a fresh Vagrant deployment.
> 3. Use `monit summary` and make sure things look as you would expect.  Use
> `top` just to make sure there is nothing else unexpected running (see the
> sketch right after this list).
> 4. Log in to Ambari and make sure the core services that you need are all
> running and happy.  While in there, stop anything you don't need.  For
> example, maybe you do not need MR running.
> 5. Then start up just the components that you need.
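>
> A minimal sketch of what I mean in step 3 (run on the Vagrant node once it
> comes up; `free -m` is just an extra memory check I like to add):
>
>     monit summary    # everything left in services_to_start should show "Running"
>     top              # confirm nothing unexpected is running
>     free -m          # a single-node VM has very little memory headroom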
>
>
> Personally, I start things up from the edge in (a rough sketch of what I
> check at each step follows this list).
>
>    - Start sensor.  Check that sensor data is hitting the topic.
>    - Start parser topology.  Check that sensor data is hitting the
>    'enrichments' topic.
>    - Start enrichment topology.  Check that data is hitting the 'indexing'
>    topic.
>    - etc
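>
> Concretely, "check the topic" just means consuming from it for a few
> seconds.  A rough sketch (it assumes the HDP Kafka client path, the default
> topic names, 'bro' as the example sensor, and 'node1' as the ZooKeeper
> host):
>
>     # raw sensor data should appear here once the sensor is up
>     /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
>         --zookeeper node1:2181 --topic bro
>     # after the parser topology starts, parsed messages land on 'enrichments'
>     /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
>         --zookeeper node1:2181 --topic enrichments
>     # after the enrichment topology starts, enriched messages land on 'indexing'
>     /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
>         --zookeeper node1:2181 --topic indexing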
>
>
>
> On Thu, Sep 29, 2016 at 10:04 AM, Otto Fowler <[email protected]>
> wrote:
>
>> I have changed all the .yml files to have the exact slots you mentioned,
>> and still ended up with 4.
>> Late last night I just turned off the snort and bro parsers in monit and
>> restarted my topology and got a worker.
>> But…
>> Now I’m seeing data die in enrichment.  I have exceptions on several of the
>> bolts that I’m going to send a mail about.  I’m not sure whether I need to
>> stop more services besides the parsers, or whether I’m hitting a memory
>> limit, or what.  I really want to validate my end-to-end flow before
>> redeploying my cluster.
>>
>> I have hand-applied the ‘start ambari cluster’ fix.  I still see the
>> deployment die at times when setting up the Kafka topics because there are
>> 0 brokers, but restarting with the right vagrant provision call gets me to
>> completion (see METRON-472).
>>
>> Can someone else try changing the slots - is it just me?
>>
>> On September 29, 2016 at 07:28:34, David Lyle ([email protected])
>> wrote:
>>
>> IIRC, you're trying to persist a parser as a default on your single-node
>> machine? If that's the case, changing line 75 of single_node_vm.yml to
>> supervisor.slots.ports: "[6700, 6701, 6702, 6703, 6704]" and running full
>> dev should do the trick. You'll need to apply a patch to get full dev to
>> build. [1]
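>>
>> Spelled out, a sketch (the vars path comes from Nick's pointer further down
>> the thread, and the vagrant directory name is my assumption about the
>> current repo layout):
>>
>>     # in metron-deployment/roles/ambari_config/vars/single_node_vm.yml, set:
>>     #   supervisor.slots.ports: "[6700, 6701, 6702, 6703, 6704]"
>>     # then rebuild full dev from scratch
>>     cd metron-deployment/vagrant/full-dev-platform
>>     vagrant destroy -f && vagrant up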
>>
>> Can you post a link to the changes you've made so far?
>>
>> Alternatively, if you're testing out your parser, you could (and I would
>> recommend you do) change the configuration of Quick Dev after launch.
>>
>> Full dev is more for testing the HDP install itself. I don't recommend it
>> for normal Metron testing and development. Currently, it's a
>> bit...ah...challenging to get the cluster running using full dev; that's
>> what METRON-467 [2] was created to address.
>>
>> To add slots, set supervisor.slots.ports to 6700,6701,6702,6703,6704 in
>> the Storm config that Ambari exposes after launching Quick Dev, and then
>> restart Storm. It may help to tail the Storm logs
>> during restart to see if there is a port conflict or something odd like
>> that.
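>>
>> For example (hedged; /var/log/storm is the usual HDP log location, but
>> check your install):
>>
>>     tail -f /var/log/storm/supervisor.log
>>     # a port conflict on one of the 670x slots, or a worker dying at
>>     # startup, will usually show up here during the restart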
>>
>> It looks like you're on the right track. If you've tried changing the
>> configuration in Ambari and restarting Storm, please post your storm.yaml.
>>
>> Also, remember, each slot will require additional memory on an already
>> constrained machine. Rather than adding slots, I'd shut down an existing
>> sensor topology and sensor probe prior to adding my parser.
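>>
>> Something along these lines (hedged; the monit service name and topology
>> name are just examples, use whatever `monit summary` and `storm list`
>> actually show):
>>
>>     monit stop snort     # stop the sensor probe so it stops producing
>>     storm kill snort     # kill its parser topology to free the slot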
>>
>> That said, if you do land on a configuration that runs reliably with
>> additional slots, please post it back here.
>>
>> Thanks for sticking with this!
>>
>> -D...
>>
>> [1] https://github.com/apache/incubator-metron/pull/279
>> [2] https://issues.apache.org/jira/browse/METRON-467
>>
>> On Thu, Sep 29, 2016 at 6:20 AM, Nick Allen <[email protected]> wrote:
>>
>> > Are you trying this with Full Dev, not Quick Dev? Quick Dev uses a
>> > pre-built image that is downloaded from Atlas. This image already has
>> > HDP installed. Since you want to deploy with custom changes to HDP, you
>> > need to use Full Dev for this.
>> >
>> > On Sep 29, 2016 12:49 AM, "Otto Fowler" <[email protected]> wrote:
>> >
>> > > I have changed these settings for all the configurations (after having
>> > > single-node not do the trick) and it has not worked.
>> > > No matter how many slots.ports I have in the vars, I still get just 4
>> > > slots.
>> > > I don’t see anywhere else that it can be changed, or any other ‘slots’
>> > > or ‘workers’ in metron-deployment.
>> > >
>> > > Does it work for anyone else?
>> > >
>> > > --
>> > >
>> > > On September 28, 2016 at 17:26:20, Nick Allen ([email protected])
>> > > wrote:
>> > >
>> > > To have it deployed with more slots, look at
>> > > `metron-deployment/roles/ambari_config`. Under vars/ are the initial
>> > > settings that get thrown at Ambari.
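>> > >
>> > > For example (a sketch; the file names are from the repo layout at the
>> > > time, so double-check them):
>> > >
>> > >     ls metron-deployment/roles/ambari_config/vars/
>> > >     # e.g. single_node_vm.yml, small_cluster.yml; each defines
>> > >     # supervisor.slots.ports, which controls how many slots (and thus
>> > >     # how many workers) each Storm supervisor offers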
>> > >
>> > > On Wed, Sep 28, 2016 at 4:56 PM, Otto Fowler <[email protected]>
>> > > wrote:
>> > >
>> > > > Where can I increase the number of workers for Storm in
>> > > > metron-deployment?  Turns out if you add a new parser but don’t have a
>> > > > worker for it, it doesn’t… um, work.
>> > > >
>> > >
>> > >
>> > >
>> > > --
>> > > Nick Allen <[email protected]>
>> > >
>> > >
>> >
>>
>
>
>
> --
> Nick Allen <[email protected]>
>



-- 
Nick Allen <[email protected]>
