Thanks! I was able to unbrick the broker via Management Mode. It works
really well.
Concerning the 417 error, a browser restart didn't fix it (and the issue was
only happening on 1 of the 3 nodes). I restarted the broker and it works
OK for now; I'll just monitor it a bit.

On Wed, Apr 10, 2019 at 4:48 PM Oleksandr Rudyy <[email protected]> wrote:

> Hi Igor,
>
> The original intention was that configuration errors would be fixed using
> "management mode".
> However, a broker restart is required to switch to management mode. In an
> HA setup this can be a bit tricky, as the newly elected master might not be
> the broker instance started in "management mode".
> ACL and authentication configuration is disregarded in a broker instance
> started in management mode.
> The management mode credentials are printed to the system output on broker
> start-up, something like this:
>
> [Broker] MNG-1004 : Web Management Ready
> [Broker] BRK-1012 : Management Mode : User Details : mm_admin / WjRNwsWLWP
> [Broker] BRK-1004 : Qpid Broker Ready
>
> To start the broker in management mode, the command line argument -mm
> (or --management-mode) needs to be specified.
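> Not something from the broker distribution, just a sketch: if you need to
> pick the one-off credentials out of the start-up output in a script, a few
> lines of Python (assuming the BRK-1012 log format shown above) will do:

```python
import re

def find_mm_credentials(log_text):
    """Return (user, password) from a BRK-1012 line, or None if absent."""
    # Format assumed from the sample start-up output quoted above.
    m = re.search(
        r"BRK-1012 : Management Mode : User Details : (\S+) / (\S+)",
        log_text,
    )
    return m.groups() if m else None

startup_output = """\
[Broker] MNG-1004 : Web Management Ready
[Broker] BRK-1012 : Management Mode : User Details : mm_admin / WjRNwsWLWP
[Broker] BRK-1004 : Qpid Broker Ready
"""
print(find_mm_credentials(startup_output))  # ('mm_admin', 'WjRNwsWLWP')
```

> As far as I can tell the password is generated afresh on each
> management-mode start, so it has to be re-read every time.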
>
> Usually, configuration changes should be tested in a test environment
> before being promoted to production. Thus, in theory, starting the broker
> in management mode should never be required in a production environment.
>
> HTTP status code 417 is usually returned on a timeout waiting for the SASL
> challenge response. Though, it could also be caused by a defect in the
> authentication functionality. Have you tried restarting the browser after
> running into a 417? If a browser restart fixes the issue, that might be an
> indication of a defect.
>
> Please note that some configuration changes require a broker restart. For
> example, enabling TLS transport on HTTP or AMQP ports requires a restart.
> Please verify that the broker instance is restarted after enabling TLS.
>
> Kind Regards,
> Alex
>
>
>
>
>
> On Wed, 10 Apr 2019 at 17:19, Igor Natanzon <[email protected]>
> wrote:
>
> > Broker J 7.1.2
> >
> > So I am experimenting with adding the vhost ACL into the virtual host
> > itself, so that the ACL is replicated across all nodes in the cluster.
> > Originally the ACL was added at the broker level, so I had to import it
> > to each of the 3 nodes.
> >
> > I created a new entry in Virtual Host Access Control providers, but by
> > mistake set the default action to DENY. The console didn't give me a
> > chance to actually upload the ACL, as the global DENY immediately took
> > effect.
> >
> > A few questions:
> > 1. Should activation of an ACL (at least with a DENY default) be delayed
> > at least until one ACL rule is added?
> > 2. Is there a way for me to un-brick the virtual host (as it's pretty
> > much read-only now)? I can recreate the nodes, but it would be good to
> > know how to handle this in a production setting, if it were to happen
> > accidentally.
> > 3. Is it even a good idea to have the ACL inside a replicated vhost? I
> > wonder what the best practices are.
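> > For illustration, the shape of ACL I'm talking about, in Broker-J's
> > AclFile format (the user names here are just placeholders), is something
> > like:

```text
# Rules are matched top-down: grant the intended access first...
ACL ALLOW-LOG admin ALL ALL
ACL ALLOW-LOG app_user ACCESS VIRTUALHOST
ACL ALLOW-LOG app_user PUBLISH EXCHANGE
ACL ALLOW-LOG app_user CONSUME QUEUE
# ...and only fall through to a default deny once the rules above exist.
ACL DENY-LOG ALL ALL
```

> > With default action DENY and no ALLOW rules uploaded yet, everything is
> > refused immediately, which is exactly the lock-out I hit.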
> >
> > Incidentally, for the first time ever I got "417 Unable to load
> > service/sasl status: 417" when accessing the HTTP web console on one of
> > the 3 nodes. The broker log doesn't show anything, and obviously I can
> > recycle it from the command line, but this hasn't happened in several
> > months of testing (starting with an older version of Broker-J), so I
> > wonder what condition could have caused this error? The HTTPS version of
> > the console is not responding either, just sits there spinning its
> > wheels. Is there anything I can look at before I recycle the broker?
> >
> > Thanks!
> >
>
