Happy it's working for you. Most likely the issue you were seeing was a
race: occasionally the 1.x code can take a while (a few minutes) to start
up due to a race condition in the supervisor code. This is fixed in 2.x.

You can keep using LocalCluster for testing and experimentation, but keep
in mind that you should switch to running the real Storm daemons once you
want to run production code. There's a guide here
https://storm.apache.org/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html.
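
For reference, the switch mostly just touches the submission call in the
topology's main method: the LocalCluster branch stays for local testing, and
the production branch goes through StormSubmitter. A rough sketch of that
branching (the class name and spout wiring below are placeholders, not the
actual storm-starter code):

import org.apache.storm.Config;
import org.apache.storm.LocalCluster;
import org.apache.storm.StormSubmitter;
import org.apache.storm.testing.TestWordSpout;
import org.apache.storm.topology.TopologyBuilder;
import org.apache.storm.utils.Utils;

// Placeholder class name, just for illustration.
public class SubmitModeSketch {
  public static void main(String[] args) throws Exception {
    TopologyBuilder builder = new TopologyBuilder();
    // Placeholder wiring; ExclamationTopology sets up its own spout and bolts.
    builder.setSpout("word", new TestWordSpout(), 2);

    Config conf = new Config();
    conf.setDebug(true);

    if (args != null && args.length > 0) {
      // Production: ship the topology to the real daemons via Nimbus.
      conf.setNumWorkers(3);
      StormSubmitter.submitTopology(args[0], conf, builder.createTopology());
    } else {
      // Testing/experimentation: run everything inside this JVM.
      LocalCluster cluster = new LocalCluster();
      cluster.submitTopology("test", conf, builder.createTopology());
      Utils.sleep(10000);
      cluster.killTopology("test");
      cluster.shutdown();
    }
  }
}

With that in place, the same 'storm jar' command you're already running
takes the production path once you pass a topology name as an argument and
point the client at a real cluster.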


On Thu, May 2, 2019 at 3:11 AM David Hutira <[email protected]> wrote:

> Hi Stig,
>
> Thank you for your help. I've run things again, and do see the numerous
> 'Emitted' statements and associated values now. I went back to my previous
> runs, and it looks like I had been getting the following messages at that
> point instead:
>
> 59817 [SLOT_1027] INFO  o.a.s.d.s.Slot - STATE WAITING_FOR_WORKER_START
> msInState: 5 topo:test-1-1556727050
> worker:772729c0-7c69-4905-9fee-a4ad4d7d5e50 -> RUNNING msInState: 0
> topo:test-1-1556727050 worker:772729c0-7c69-4905-9fee-a4ad4d7d5e50
>
> 59817 [SLOT_1027] WARN  o.a.s.d.s.Slot - SLOT 1027: Assignment Changed
> from LocalAssignment(topology_id:test-1-1556727050,
> executors:[ExecutorInfo(task_start:8, task_end:8),
> ExecutorInfo(task_start:12, task_end:12), ExecutorInfo(task_start:2,
> task_end:2), ExecutorInfo(task_start:6, task_end:6),
> ExecutorInfo(task_start:16, task_end:16), ExecutorInfo(task_start:10,
> task_end:10), ExecutorInfo(task_start:14, task_end:14),
> ExecutorInfo(task_start:4, task_end:4), ExecutorInfo(task_start:7,
> task_end:7), ExecutorInfo(task_start:3, task_end:3),
> ExecutorInfo(task_start:1, task_end:1), ExecutorInfo(task_start:9,
> task_end:9), ExecutorInfo(task_start:11, task_end:11),
> ExecutorInfo(task_start:13, task_end:13), ExecutorInfo(task_start:5,
> task_end:5), ExecutorInfo(task_start:15, task_end:15)],
> resources:WorkerResources(mem_on_heap:0.0, mem_off_heap:0.0, cpu:0.0),
> owner:david) to null
>
> 59817 [SLOT_1027] INFO  o.a.s.ProcessSimulator - Begin killing process
> 772729c0-7c69-4905-9fee-a4ad4d7d5e50
>
> 59817 [SLOT_1027] INFO  o.a.s.d.worker - Shutting down worker
> test-1-1556727050 8487bae7-a5b0-43f0-96eb-f198947e95e7 1027
>
>
> I had been redirecting the output to a file with '> output.file' at the
> end of the command line; I assume this was causing some kind of problem
> with starting or managing the local tasks. I had done the first runs
> without the redirection, but apparently missed the output. I appreciate
> your help, and can now move on to some more involved usage. Thanks again...
>
>
> Dave H.
>
>
>
>
> On Wed, May 1, 2019 at 5:14 PM Stig Rohde Døssing <[email protected]>
> wrote:
>
>> Hi David,
>>
>> I wouldn't worry about that log. It's not an error as far as I can tell.
>> The log is at INFO level as well.
>>
>> Storm-starter works for me. I get the same logs as you, but after "4987
>> [main] INFO  o.a.s.d.nimbus - Activating test: test-1-1556744624", I get a
>> bunch more log from Storm running the topology. Could you try with the
>> Storm 2.0.0 RC (
>> https://dist.apache.org/repos/dist/dev/storm/apache-storm-2.0.0-rc7/)?
>> I'd be curious to know if this is also a problem there.
>>
>> Also just to be clear, are you getting nothing past the line " 48258
>> [main] INFO  o.a.s.d.nimbus - Activating test: test-1-1556727673"?
>>
>> On Wed, May 1, 2019 at 10:43 PM David Hutira <[email protected]> wrote:
>>
>>> Hello,
>>>
>>> I'm new to Apache Storm and have run into some trouble running the
>>> storm-starter ExclamationTopology example. I'm running on a MacBook Pro with
>>> Mojave; I've downloaded and installed the Storm binary tar and generated
>>> the storm-starter uber-jar with no (obvious) errors. When I run locally
>>> with the following command:
>>>
>>> storm jar storm-starter-1.2.2.jar org.apache.storm.starter.ExclamationTopology
>>>
>>>
>>> I get a lot of output; bracketing the LocalCluster processing in main...
>>>
>>>
>>>     else {
>>>       LocalCluster cluster = new LocalCluster();
>>>       System.out.println("Starting Topology......");
>>>       cluster.submitTopology("test", conf, builder.createTopology());
>>>       System.out.println("Ending Topology......");
>>>       Utils.sleep(10000);
>>>       cluster.killTopology("test");
>>>       cluster.shutdown();
>>>     }
>>>
>>>
>>> isolates the following info/error:
>>>
>>>
>>> Starting Topology......
>>>
>>> 48124 [main] WARN  o.a.s.u.Utils - STORM-VERSION new 1.2.2 old null
>>>
>>> 48129 [main] WARN  o.a.s.u.Utils - STORM-VERSION new 1.2.2 old 1.2.2
>>>
>>> 48166 [main] INFO  o.a.s.d.nimbus - Received topology submission for
>>> test (storm-1.2.2 JDK-1.8.0_211) with conf {"topology.max.task.parallelism"
>>> nil, "topology.submitter.principal" "", "topology.acker.executors" nil,
>>> "topology.eventlogger.executors" 0, "topology.debug" true,
>>> "storm.zookeeper.superACL" nil, "topology.users" (),
>>> "topology.submitter.user" "david", "topology.kryo.register" nil,
>>> "topology.kryo.decorators" (), "storm.id" "test-1-1556727673", "
>>> topology.name" "test"}
>>>
>>> 48172 [main] INFO  o.a.s.d.nimbus - uploadedJar
>>>
>>> 48183 [ProcessThread(sid:0 cport:-1):] INFO  
>>> o.a.s.s.o.a.z.s.PrepRequestProcessor
>>> - Got user-level KeeperException when processing
>>> sessionid:0x16a7432a8950000 type:create cxid:0xb zxid:0x25 txntype:-1
>>> reqpath:n/a Error Path:/storm/blobstoremaxkeysequencenumber
>>> Error:KeeperErrorCode = NoNode for /storm/blobstoremaxkeysequencenumber
>>>
>>> 48189 [main] INFO  o.a.s.cluster -
>>> setup-path/blobstore/test-1-1556727673-stormconf.ser/192.168.241.5:6627
>>> -1
>>>
>>> 48207 [main] INFO  o.a.s.cluster -
>>> setup-path/blobstore/test-1-1556727673-stormcode.ser/192.168.241.5:6627
>>> -1
>>>
>>> 48215 [main] INFO  o.a.s.d.nimbus - desired replication count 1
>>> achieved, current-replication-count for conf key = 1,
>>> current-replication-count for code key = 1, current-replication-count for
>>> jar key = 1
>>>
>>> 48258 [main] INFO  o.a.s.d.nimbus - Activating test: test-1-1556727673
>>>
>>> Ending Topology......
>>>
>>>
>>>
>>> I've searched, and it looks like information about the processing should
>>> appear here instead. I haven't found anything specific to Storm concerning
>>> this error. I have found references to it for Zookeeper, so I tried
>>> installing and running Zookeeper and Kafka, with no improvement; the Storm
>>> tutorial didn't state this was required, but I thought I'd try anyway. At
>>> this point, I've exhausted my guesses, and was hoping someone could help
>>> point me in the proper direction. I appreciate any suggestions...
>>>
>>>
>>> Dave H.
>>>
>>>
>>>
>>>
