To verify, I downloaded the broker tgz from
https://builds.apache.org/view/M-R/view/Qpid/job/Qpid-Java-Artefact-Release-0.32/
then expanded it into an empty directory, set QPID_WORK to a non-existent
location, and ran bin/qpid-server.

On startup the broker had a single (JSON) virtual host node, containing an
(active) Derby virtual host.

This is the behaviour I would expect...

-- Rob

On 5 March 2015 at 11:07, Rob Godfrey <[email protected]> wrote:

>
>
> On 5 March 2015 at 10:55, Gordon Sim <[email protected]> wrote:
>
>> On 03/04/2015 09:38 PM, Rob Godfrey wrote:
>>
>>> That's odd... The out-of-the-box config should have a virtualhost.  Were
>>> you running a completely fresh install, or were you running with an
>>> existing config/work directory?
>>>
>>
>> It was a completely fresh install and I set QPID_WORK to the new install
>> directory. Perhaps that wasn't correct anymore?
>>
>>
> Can you give the exact steps you used to install / start...  The broker
> starts up with an initial config which has a parameterized initial virtual
> host.  I admit I normally just start from within my IDE, so I may be doing
> something different somehow.  If you could also provide the config JSON
> file that the broker writes on first startup (indeed, a tar of the whole
> work directory) prior to you adding a virtual host, that would be very
> useful.
>
>
>> The management console showed no virtual hosts, though the 'default' for
>> the broker was still listed as 'default' in the broker section above this.
>> I created a virtual host named default and everything worked. (What's a
>> virtual host node, btw? I seem to need to create one of those alongside my
>> virtual host?)
>>
>>
> The VirtualHostNode was something we added to support HA replication...
> The VirtualHostNode represents the Broker's knowledge of the Virtual Host
> (i.e. enough configuration to go and find the virtual host and its
> configuration); for a non-HA virtual host that might be something like the
> type of configuration store (JSON, Derby, BDB, etc.) and a path.  For the
> BDB HA virtual host node it includes the connection information so the
> virtual host node can join the replication group.  Where a different broker
> is currently the master for a virtual host, the virtual host node will not
> have an active virtual host underneath it, but will show the state of the
> other nodes in the group (which are on other brokers) so you can see where
> the master currently resides.
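>
> To make that concrete, the relevant fragment of the broker's config.json
> looks roughly like the below (hand-written from memory of the current
> model, and the node/group names are made up, so treat this as a sketch
> rather than gospel):
>
>     "virtualhostnodes" : [ {
>       "name" : "default",
>       "type" : "JSON",
>       "defaultVirtualHostNode" : "true"
>     }, {
>       "name" : "node1",
>       "type" : "BDB_HA",
>       "groupName" : "group1",
>       "address" : "thishost:5000",
>       "helperAddress" : "firstnode:5000"
>     } ]
>
> i.e. the JSON node just knows where its virtual host's configuration
> lives, while the BDB_HA node carries enough to join (or seed) the
> replication group.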
>
>
>>>  In terms of authentication, by default PLAIN is no longer enabled over
>>> non-TLS connections.
>>>
>>
>> I can see the logic. The python 0-10 client by default (i.e. without the
>> Cyrus SASL integration installed) only has ANONYMOUS and PLAIN, so it was
>> unable to connect. Next time I'll try SSL.
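>>
>> For reference, this is roughly what I was attempting (a minimal sketch
>> with the python 0-10 client; the host/port and the use of ANONYMOUS here
>> are just assumptions about a local test setup):
>>
>>     from qpid.messaging import Connection
>>
>>     # PLAIN is refused over plain TCP now, so force ANONYMOUS (which
>>     # only works if the broker has an Anonymous provider configured).
>>     conn = Connection("localhost:5672", sasl_mechanisms="ANONYMOUS")
>>     conn.open()
>>     try:
>>         session = conn.session()
>>         sender = session.sender("amq.fanout")
>>         sender.send("test message")
>>     finally:
>>         conn.close()
>>
>> The alternative, as above, would be to keep PLAIN but connect over SSL,
>> e.g. Connection("localhost:5671", transport="ssl", username="guest",
>> password="guest").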
>>
>>
> Yes - my thinking is really that the default install of the broker should
> either add ANONYMOUS support or have some sort of initial configuration
> questionnaire to guide people through the first run of the broker.  Having
> an initial password file with "guest", "guest" in it is just a horrible
> thing to do.
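>
> (In the meantime, anyone who wants ANONYMOUS can add it by hand; the
> config.json fragment is roughly the below - attribute names from memory,
> so double-check against the docs:
>
>     "authenticationproviders" : [ {
>       "name" : "anonymous",
>       "type" : "Anonymous"
>     } ]
>
> and then point the relevant AMQP port's authenticationProvider attribute
> at "anonymous".)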
>
>
>>>>  There *seems* to be something odd happening when trying to consume
>>>> messages from a queue. Over 0-10 I only get the messages if I issue a
>>>> message-flush. Over 1.0 I sometimes can't get any messages out at all.
>>>> Not sure if this is only when the queue was created over 0-10. Even for
>>>> a queue created through management, I was not getting messages at times
>>>> after having sent them and had them accepted (switching back to 0-10 and
>>>> flushing, I could see them all as expected). If the receiver is active
>>>> before messages are sent (whether through exchanges or directly through
>>>> queues), receiving seems to work ok.
>>>>
>>>> Got a deadlock trying to stop the (Java) broker, see attached. On
>>>> restarting, my virtual host was there but somehow things were corrupted
>>>> and I could not connect again until I deleted and recreated it.
>>>>
>>>
>>>
>>> So, the deadlock actually happened before the close; it occurred when a
>>> flow arrived at the broker at the same time as a message was being sent
>>> from the broker to a consumer... I'll fix that now on trunk (I don't
>>> think that looks like a new issue, but we should fix it anyway).  The
>>> effect of the deadlock would be that that queue would no longer be
>>> distributing messages.
>>>
>>
>> I only noticed the deadlock when trying to stop the broker - it just
>> hangs. I can see that my current process has two deadlocks (before
>> shutting down). Probably the same ones, but just in case it's of use,
>> another stack dump is attached.
>>
>> This does indeed sound like at least part of the problem with the
>> receive. The queue wasn't completely jammed, however, as I could get
>> messages by rerunning the receiver over 0-10 without flushing.
>>
>>
> There'll be potential (depending on whether all the messages in the queue
> are acquired) for independent flows to get through... Basically, once it's
> deadlocked there are no guarantees as to how the thing will behave, so I'm
> not going to try to reason about the message sending stuff until/unless we
> can replicate it with the deadlock fixed.
>
> -- Rob
>
>
