Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-21 Thread Joel Sherrill
On Tue, Feb 20, 2018 at 8:26 PM, Chris Johns  wrote:

> On 21/02/2018 01:23, Sebastian Huber wrote:
> > - Am 19. Feb 2018 um 12:41 schrieb Matthew J Fletcher
> > ami...@gmail.com:
> >
> >> Hi,
> >>
> >> I've seen this in our application in certain use cases. If my
> >> understanding is correct (and it might not be), it comes
> >> from RTEMS_SCORE_ROBUST_THREAD_DISPATCH,
> >> via CPU_ENABLE_ROBUST_THREAD_DISPATCH.
> >>
> >> It only seems to be TRUE for Arm, which seems a bit inconsistent.
> >> However I don't really understand what it's trying to achieve. I see
> >> https://devel.rtems.org/ticket/2954 but it does more than just
> >> optimise the context switch; it significantly alters how an application
> >> should be written: no more freeing memory with interrupts disabled, etc.
> >
> > Originally, the CPU_ENABLE_ROBUST_THREAD_DISPATCH was implemented for
> > the SMP support (inter-processor interrupt delivery MUST be possible during
> > operating system services). On ARMv7-M it catches an undefined behaviour
> > case. On the other ARM variants it simplified the context switch code. In
> > general, we could enable the CPU_ENABLE_ROBUST_THREAD_DISPATCH on all
> > architectures. This would probably break some broken applications.
>
> This is a bit messy because it is clearly wrong on SMP while the Classic
> API's `rtems_task_create` [1] lets a user set the mask level, which is
> almost saying "run a task at a specific mask level". I am not aware of an
> API call that lets a user change this level, and an interrupt disable and
> restore returns the task's initial level, so at some point being able to
> block with a specific mask level was a requirement. Also, on modern
> processors levels are handled in interrupt controllers and not in the CPU,
> so how much use is an initial mask level these days?
>

rtems_task_mode() lets you change the interrupt mode dynamically.

FWIW I don't know that this feature was ever a good idea, and I would be OK
with obsoleting the ability to alter the interrupt level via task mode. But
that doesn't prevent a user from disabling processor interrupts through other
mechanisms, including their own inline assembly.
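For reference, changing the level dynamically looks roughly like this (an
untested sketch against the Classic API; error checking omitted, and the
meaning of a non-zero level is port-specific):

```c
#include <rtems.h>

void masked_critical_section(void)
{
  rtems_mode previous_mode;

  /* Raise the calling task's interrupt level via its task mode. */
  rtems_task_mode(
    RTEMS_INTERRUPT_LEVEL(1),
    RTEMS_INTERRUPT_MASK,
    &previous_mode
  );

  /* ... critical work with interrupts masked ... */

  /* Restore the previous interrupt level. */
  rtems_task_mode(previous_mode, RTEMS_INTERRUPT_MASK, &previous_mode);
}
```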

Now get off my lawn and let me tell you how long this feature has been hokey.
Of the six earliest RTEMS ports (m68k, i386, i960, powerpc, hppa, and sparc),
the m68k, i960, and sparc could meaningfully map the interrupt level in the
mode onto something at the processor. The old PowerPCs only had external
exception and decrementer interrupts. The i386 only has on and off, but in
all real implementations (e.g. PCs) it has an external interrupt controller.
The hppa had on/off as I recall, but the port/BSP had a clever feature where
unused integer values mapped to configurable settings of the interrupt
controller.

This feature is inherently non-portable if you depend on a "level" other than
on and off.
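For anyone following along, the create-time variant being discussed is the
mode argument of rtems_task_create [1]; a hypothetical sketch (names and
values are illustrative only):

```c
#include <rtems.h>

rtems_id create_masked_task(void)
{
  rtems_id id = 0;

  /* Create a Classic API task whose initial mode masks interrupts at
   * level 3.  What "level 3" means is port-specific (non-portable). */
  (void) rtems_task_create(
    rtems_build_name('T', 'A', 'S', 'K'),
    10,                                          /* priority */
    RTEMS_MINIMUM_STACK_SIZE,
    RTEMS_DEFAULT_MODES | RTEMS_INTERRUPT_LEVEL(3),
    RTEMS_DEFAULT_ATTRIBUTES,
    &id
  );
  return id;
}
```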


>
> I am personally OK with a change to catch interrupts being masked in a
> context switch because it is more likely this happens in error than on
> purpose, and catching this type of error without an internal check like
> this is hard. The flow-on of this is making obsolete setting the level on
> a task create.
>

Agreed.

But calling this "robust" implies a regular context switch isn't robust,
which suggests we don't normally switch enough registers or something.
Perhaps a better word is needed for this feature.

>
> I suspect setting the interrupt level on a task on an ARMv7-M would result
> in the internal error if the user did not clear the interrupt mask before
> calling a blocking call. This complicates the documentation because of
> per-arch differences.
>

Is this a hardware feature? I wouldn't be unhappy with this being a generic
check.

I know it has been possible to do this for 30 years, but banning it wouldn't
be a bad thing.


>
> Also, the documentation should be updated noting the SMP exceptions. I have
> raised #3309 to track this.
>

We recently went through this discussion for just SMP. It would be simpler
to obsolete the feature entirely.

>
> > You should never call an operating system service with interrupts
> > disabled in thread context. This would destroy all the work done in the
> > RTEMS implementation to keep the interrupt latency small.
>
> I agree, and so I question if we should allow a task's initial mask level
> to be set this way?
>
> My rationale is making the API consistent on non-SMP and SMP targets and
> helping to isolate a difficult case to catch.
>

I agree with this rationale on this feature.

But we should all remember that no matter what we do the
user and system integrator can easily destroy the performance of a system in
a way that reflects poorly on RTEMS. As someone I work with loves to say,
"You can't prevent stupid." :)

--joel


>
> Chris
>
> [1]
> https://docs.rtems.org/branches/master/c-user/task_manager.html#task-create-create-a-task

Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-20 Thread Chris Johns
On 21/02/2018 01:23, Sebastian Huber wrote:
> - Am 19. Feb 2018 um 12:41 schrieb Matthew J Fletcher ami...@gmail.com:
> 
>> Hi,
>>
>> I've seen this in our application in certain use cases. If my understanding
>> is correct (and it might not be), it comes
>> from RTEMS_SCORE_ROBUST_THREAD_DISPATCH,
>> via CPU_ENABLE_ROBUST_THREAD_DISPATCH.
>>
>> It only seems to be TRUE for Arm, which seems a bit inconsistent. However I
>> don't really understand what it's trying to achieve. I see
>> https://devel.rtems.org/ticket/2954 but it does more than just optimise the
>> context switch; it significantly alters how an application should be
>> written: no more freeing memory with interrupts disabled, etc.
> 
> Originally, the CPU_ENABLE_ROBUST_THREAD_DISPATCH was implemented for the SMP 
> support (inter-processor interrupt delivery MUST be possible during operating 
> system services). On ARMv7-M it catches an undefined behaviour case. On the 
> other ARM variants it simplified the context switch code. In general, we 
> could enable the CPU_ENABLE_ROBUST_THREAD_DISPATCH on all architectures. This 
> would probably break some broken applications. 

This is a bit messy because it is clearly wrong on SMP while the Classic API's
`rtems_task_create` [1] lets a user set the mask level, which is almost saying
"run a task at a specific mask level". I am not aware of an API call that lets
a user change this level, and an interrupt disable and restore returns the
task's initial level, so at some point being able to block with a specific
mask level was a requirement. Also, on modern processors levels are handled in
interrupt controllers and not in the CPU, so how much use is an initial mask
level these days?

I am personally OK with a change to catch interrupts being masked in a context
switch because it is more likely this happens in error than on purpose, and
catching this type of error without an internal check like this is hard. The
flow-on of this is making obsolete setting the level on a task create.

I suspect setting the interrupt level on a task on an ARMv7-M would result in
the internal error if the user did not clear the interrupt mask before calling
a blocking call. This complicates the documentation because of per-arch
differences.

Also, the documentation should be updated noting the SMP exceptions. I have
raised #3309 to track this.

> You should never call an operating system service with interrupts disabled in 
> thread context. This would destroy all the work done in the RTEMS 
> implementation to keep the interrupt latency small.

I agree, and so I question if we should allow a task's initial mask level to
be set this way?

My rationale is making the API consistent on non-SMP and SMP targets and
helping to isolate a difficult case to catch.

Chris

[1]
https://docs.rtems.org/branches/master/c-user/task_manager.html#task-create-create-a-task
___
users mailing list
users@rtems.org
http://lists.rtems.org/mailman/listinfo/users


Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-20 Thread Sebastian Huber


- Am 19. Feb 2018 um 12:41 schrieb Matthew J Fletcher ami...@gmail.com:

> Hi,
> 
> I've seen this in our application in certain use cases. If my understanding
> is correct (and it might not be), it comes
> from RTEMS_SCORE_ROBUST_THREAD_DISPATCH,
> via CPU_ENABLE_ROBUST_THREAD_DISPATCH.
>
> It only seems to be TRUE for Arm, which seems a bit inconsistent. However I
> don't really understand what it's trying to achieve. I see
> https://devel.rtems.org/ticket/2954 but it does more than just optimise the
> context switch; it significantly alters how an application should be
> written: no more freeing memory with interrupts disabled, etc.

Originally, the CPU_ENABLE_ROBUST_THREAD_DISPATCH was implemented for the SMP 
support (inter-processor interrupt delivery MUST be possible during operating 
system services). On ARMv7-M it catches an undefined behaviour case. On the 
other ARM variants it simplified the context switch code. In general, we could 
enable the CPU_ENABLE_ROBUST_THREAD_DISPATCH on all architectures. This would 
probably break some broken applications. You should never call an operating 
system service with interrupts disabled in thread context. This would destroy 
all the work done in the RTEMS implementation to keep the interrupt latency 
small.

You can free memory via free() in interrupt context since it has special case 
code for this. You cannot use rtems_region_return_segment().
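A rough illustration of the distinction (my sketch, untested; the comments
are my reading of the behaviour described above):

```c
#include <stdlib.h>
#include <rtems.h>

void cleanup_from_isr(void *heap_ptr, rtems_id region, void *segment)
{
  /* OK even in interrupt context: free() has special case code that
   * defers the actual release when the allocator cannot be locked. */
  free(heap_ptr);

  /* NOT OK here: rtems_region_return_segment() locks the allocator
   * mutex and may dispatch, so it must only be called from a thread
   * with interrupts enabled.  Shown commented out on purpose. */
  /* rtems_region_return_segment(region, segment); */
  (void) region;
  (void) segment;
}
```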

> 
> Do most people just edit cpu.h to make it false?

Users should never edit cpu.h.


Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-20 Thread Sebastian Huber


- Am 19. Feb 2018 um 16:33 schrieb Matthew J Fletcher ami...@gmail.com:

> For those interested the callstack looks like this;
> 
> bsp_fatal_extension() at bspclean.c:32 0x700d42ba
> _User_extensions_Iterate() at userextiterate.c:175 0x700ef42a
> _User_extensions_Fatal() at userextimpl.h:307 0x700eb9b2
> _Terminate() at interr.c:35 0x700eb9b2
> _Internal_error() at interr.c:52 0x700eb9e2
> _Thread_Do_dispatch() at threaddispatch.c:186 0x700edd0a
> _Thread_Dispatch_enable() at threaddispatch.h:227 0x700ef0f8
> _Thread_Change_life() at threadrestart.c:684 0x700ef0f8
> _Thread_Set_life_protection() at threadrestart.c:691 0x700ef10c
> _API_Mutex_Lock() at apimutexlock.c:29 0x700eae86
> _RTEMS_Lock_allocator() at allocatormutex.c:26 0x700eae6a
> _Region_Get_and_lock() at regionimpl.h:76 0x700e9cdc
> rtems_region_get_segment() at regiongetsegment.c:68 0x700e9cdc
> ...
> ... some of my code ...
> ...
> _Thread_Handler() at threadhandler.c:134 0x700edde2
> _Thread_Get() at threadget.c:38 0x700edda8
> 
> It's not clear to me what the issue might be.

You call rtems_region_get_segment() in thread context with interrupts
disabled. This is potentially very bad for interrupt latency in general and
undefined behaviour on ARMv7-M (thus the fatal error, to avoid tedious
debugging).


Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-19 Thread Joel Sherrill
On Feb 19, 2018 6:55 PM, "Chris Johns"  wrote:

On 20/02/2018 00:13, Matthew J Fletcher wrote:
> All,
>
> Replying to my own post, with CPU_ENABLE_ROBUST_THREAD_DISPATCH set FALSE
> I get a fatal exception, this on a Cortex-M7, RTEMS 5.0.0 from git.
>
> I think I will have to work around the new behavior somehow.
>

I would view this as implementing the correct behaviour. :)

The performance change does generate this runtime error and, like you, I have
tripped over it. What I learnt is this error exposes incorrect behaviour that
used to run before now; however, this has changed with SMP. In all cases what
I was doing was wrong and needed to be fixed.

Masking interrupts or using interrupt masking as a cheap lock on an SMP target
provides you with no protection. All it does is stop the masked core from
receiving interrupts; any code without lock protection can execute
concurrently via another core.

You could argue that you are not using an SMP target so why should it matter,
however there are ways to handle these cases that are better and end up with
more robust applications.

I hope this helps explain why you are seeing this error.


Thanks Chris. I should have remembered that as we added SMP, we tried to
encourage applications to "do the right thing" to be SMP correct.


Chris

Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-19 Thread Chris Johns
On 20/02/2018 00:13, Matthew J Fletcher wrote:
> All,
> 
> Replying to my own post, with CPU_ENABLE_ROBUST_THREAD_DISPATCH set FALSE
> I get a fatal exception, this on a Cortex-M7, RTEMS 5.0.0 from git.
>
> I think I will have to work around the new behavior somehow.
>
> 

I would view this as implementing the correct behaviour. :)

The performance change does generate this runtime error and, like you, I have
tripped over it. What I learnt is this error exposes incorrect behaviour that
used to run before now; however, this has changed with SMP. In all cases what
I was doing was wrong and needed to be fixed.

Masking interrupts or using interrupt masking as a cheap lock on an SMP target
provides you with no protection. All it does is stop the masked core from
receiving interrupts; any code without lock protection can execute
concurrently via another core.

You could argue that you are not using an SMP target so why should it matter,
however there are ways to handle these cases that are better and end up with
more robust applications.
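One such way, sketched here under the assumption of the RTEMS 5 interrupt
lock API, pairs the interrupt mask with a spinlock so that other cores are
excluded too (on uniprocessor it degrades to a plain interrupt disable):

```c
#include <rtems.h>

/* Define an interrupt lock guarding data shared with an ISR. */
RTEMS_INTERRUPT_LOCK_DEFINE( static, shared_lock, "shared state" )

void update_shared_state(void)
{
  rtems_interrupt_lock_context lock_context;

  /* SMP-safe critical section: masks interrupts on this core and
   * spins out any other core contending for the same lock. */
  rtems_interrupt_lock_acquire(&shared_lock, &lock_context);
  /* ... touch the data shared with the ISR ... */
  rtems_interrupt_lock_release(&shared_lock, &lock_context);
}
```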

I hope this helps explain why you are seeing this error.

Chris

Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-19 Thread Joel Sherrill
On Mon, Feb 19, 2018 at 10:20 AM, Matthew J Fletcher 
wrote:

> Hi Joel,
>
> It's possible interrupts are disabled. I am using the
> termios RTEMS_IO_RCVWAKEUP callback to get chars from console input. I
> accumulate them for a timer tick or two, then in a timer callback
> do rtems_message_queue_send().
>
> The callstack above this is from the rtems_message_queue_receive() in
> another task. The disable/enable interrupts wraps the
> rtems_message_queue_send() and buffer management to protect against the
> timer being invoked again during that operation.
>
>

Wrapping the call to MQ in ISR disable is the culprit.



>
> Should i be using a semaphore instead ?
>

That should work as long as you use it with RTEMS_NO_WAIT in the ISR. You
don't want to block in an ISR; that will trigger another fault. :)

Also watch out for the type of semaphore. If it has priority inheritance or
a ceiling, it always has to be accessed from a thread.
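A sketch of what that could look like (hypothetical names; a simple binary
semaphore with no priority protocol, so both the timer routine and the task
may touch it, and the interrupt-context side only ever polls):

```c
#include <rtems.h>

/* Created elsewhere with RTEMS_SIMPLE_BINARY_SEMAPHORE and no
 * priority inheritance/ceiling (hypothetical id). */
rtems_id buffer_sem;
rtems_id char_queue;

void timer_routine(rtems_id timer, void *arg)
{
  (void) timer; (void) arg;
  /* Interrupt context: never block, so poll with RTEMS_NO_WAIT. */
  if (rtems_semaphore_obtain(buffer_sem, RTEMS_NO_WAIT, 0)
        == RTEMS_SUCCESSFUL) {
    /* ... move accumulated chars, rtems_message_queue_send() ... */
    rtems_semaphore_release(buffer_sem);
  }
}

void consumer_work(void)
{
  /* Thread context: blocking is fine here. */
  rtems_semaphore_obtain(buffer_sem, RTEMS_WAIT, RTEMS_NO_TIMEOUT);
  /* ... buffer management ... */
  rtems_semaphore_release(buffer_sem);
}
```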

--joel



>
>
>
>
>
> On 19 February 2018 at 15:38, Joel Sherrill  wrote:
>
>>
>> Based on the code, it looks like you have interrupts disabled
>> when you are making the call to  rtems_region_get_segment().
>>
>> For sure, you shouldn't free memory from an ISR though.
>>
>> On Mon, Feb 19, 2018 at 9:33 AM, Matthew J Fletcher 
>> wrote:
>>
>>> For those interested the callstack looks like this;
>>>
>>> bsp_fatal_extension() at bspclean.c:32 0x700d42ba
>>> _User_extensions_Iterate() at userextiterate.c:175 0x700ef42a
>>> _User_extensions_Fatal() at userextimpl.h:307 0x700eb9b2
>>> _Terminate() at interr.c:35 0x700eb9b2
>>> _Internal_error() at interr.c:52 0x700eb9e2
>>> _Thread_Do_dispatch() at threaddispatch.c:186 0x700edd0a
>>> _Thread_Dispatch_enable() at threaddispatch.h:227 0x700ef0f8
>>> _Thread_Change_life() at threadrestart.c:684 0x700ef0f8
>>> _Thread_Set_life_protection() at threadrestart.c:691 0x700ef10c
>>> _API_Mutex_Lock() at apimutexlock.c:29 0x700eae86
>>> _RTEMS_Lock_allocator() at allocatormutex.c:26 0x700eae6a
>>> _Region_Get_and_lock() at regionimpl.h:76 0x700e9cdc
>>> rtems_region_get_segment() at regiongetsegment.c:68 0x700e9cdc
>>> ...
>>> ... some of my code ...
>>> ...
>>> _Thread_Handler() at threadhandler.c:134 0x700edde2
>>> _Thread_Get() at threadget.c:38 0x700edda8
>>>
>>> It's not clear to me what the issue might be.
>>>
>>>
>>> On 19 February 2018 at 13:13, Matthew J Fletcher 
>>> wrote:
>>>
 All,

 Replying to my own post, with CPU_ENABLE_ROBUST_THREAD_DISPATCH set
 FALSE I get a fatal exception, this on a Cortex-M7, RTEMS 5.0.0 from git.

 I think I will have to work around the new behavior somehow.

 --

 regards
 ---
 Matthew J Fletcher


>>>
>>>
>>> --
>>>
>>> regards
>>> ---
>>> Matthew J Fletcher
>>>
>>>
>>>
>>
>>
>
>
> --
>
> regards
> ---
> Matthew J Fletcher
>
>
>

Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-19 Thread Matthew J Fletcher
Hi Joel,

It's possible interrupts are disabled. I am using the
termios RTEMS_IO_RCVWAKEUP callback to get chars from console input. I
accumulate them for a timer tick or two, then in a timer callback
do rtems_message_queue_send().

The callstack above this is from the rtems_message_queue_receive() in another
task. The disable/enable interrupts wraps the rtems_message_queue_send()
and buffer management to protect against the timer being invoked again
during that operation.

Should I be using a semaphore instead?





On 19 February 2018 at 15:38, Joel Sherrill  wrote:

>
> Based on the code, it looks like you have interrupts disabled
> when you are making the call to rtems_region_get_segment().
>
> For sure, you shouldn't free memory from an ISR though.
>
> On Mon, Feb 19, 2018 at 9:33 AM, Matthew J Fletcher 
> wrote:
>
>> For those interested the callstack looks like this;
>>
>> bsp_fatal_extension() at bspclean.c:32 0x700d42ba
>> _User_extensions_Iterate() at userextiterate.c:175 0x700ef42a
>> _User_extensions_Fatal() at userextimpl.h:307 0x700eb9b2
>> _Terminate() at interr.c:35 0x700eb9b2
>> _Internal_error() at interr.c:52 0x700eb9e2
>> _Thread_Do_dispatch() at threaddispatch.c:186 0x700edd0a
>> _Thread_Dispatch_enable() at threaddispatch.h:227 0x700ef0f8
>> _Thread_Change_life() at threadrestart.c:684 0x700ef0f8
>> _Thread_Set_life_protection() at threadrestart.c:691 0x700ef10c
>> _API_Mutex_Lock() at apimutexlock.c:29 0x700eae86
>> _RTEMS_Lock_allocator() at allocatormutex.c:26 0x700eae6a
>> _Region_Get_and_lock() at regionimpl.h:76 0x700e9cdc
>> rtems_region_get_segment() at regiongetsegment.c:68 0x700e9cdc
>> ...
>> ... some of my code ...
>> ...
>> _Thread_Handler() at threadhandler.c:134 0x700edde2
>> _Thread_Get() at threadget.c:38 0x700edda8
>>
>> It's not clear to me what the issue might be.
>>
>>
>> On 19 February 2018 at 13:13, Matthew J Fletcher 
>> wrote:
>>
>>> All,
>>>
>>> Replying to my own post, with CPU_ENABLE_ROBUST_THREAD_DISPATCH set
>>> FALSE I get a fatal exception, this on a Cortex-M7, RTEMS 5.0.0 from git.
>>>
>>> I think I will have to work around the new behavior somehow.
>>>
>>> --
>>>
>>> regards
>>> ---
>>> Matthew J Fletcher
>>>
>>>
>>
>>
>> --
>>
>> regards
>> ---
>> Matthew J Fletcher
>>
>>
>>
>
>


-- 

regards
---
Matthew J Fletcher

Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-19 Thread Joel Sherrill
Based on the code, it looks like you have interrupts disabled
when you are making the call to rtems_region_get_segment().

For sure, you shouldn't free memory from an ISR though.

On Mon, Feb 19, 2018 at 9:33 AM, Matthew J Fletcher 
wrote:

> For those interested the callstack looks like this;
>
> bsp_fatal_extension() at bspclean.c:32 0x700d42ba
> _User_extensions_Iterate() at userextiterate.c:175 0x700ef42a
> _User_extensions_Fatal() at userextimpl.h:307 0x700eb9b2
> _Terminate() at interr.c:35 0x700eb9b2
> _Internal_error() at interr.c:52 0x700eb9e2
> _Thread_Do_dispatch() at threaddispatch.c:186 0x700edd0a
> _Thread_Dispatch_enable() at threaddispatch.h:227 0x700ef0f8
> _Thread_Change_life() at threadrestart.c:684 0x700ef0f8
> _Thread_Set_life_protection() at threadrestart.c:691 0x700ef10c
> _API_Mutex_Lock() at apimutexlock.c:29 0x700eae86
> _RTEMS_Lock_allocator() at allocatormutex.c:26 0x700eae6a
> _Region_Get_and_lock() at regionimpl.h:76 0x700e9cdc
> rtems_region_get_segment() at regiongetsegment.c:68 0x700e9cdc
> ...
> ... some of my code ...
> ...
> _Thread_Handler() at threadhandler.c:134 0x700edde2
> _Thread_Get() at threadget.c:38 0x700edda8
>
> It's not clear to me what the issue might be.
>
>
> On 19 February 2018 at 13:13, Matthew J Fletcher  wrote:
>
>> All,
>>
>> Replying to my own post, with CPU_ENABLE_ROBUST_THREAD_DISPATCH set
>> FALSE I get a fatal exception, this on a Cortex-M7, RTEMS 5.0.0 from git.
>>
>> I think I will have to work around the new behavior somehow.
>>
>> --
>>
>> regards
>> ---
>> Matthew J Fletcher
>>
>>
>
>
> --
>
> regards
> ---
> Matthew J Fletcher
>
>
>

Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-19 Thread Matthew J Fletcher
For those interested the callstack looks like this;

bsp_fatal_extension() at bspclean.c:32 0x700d42ba
_User_extensions_Iterate() at userextiterate.c:175 0x700ef42a
_User_extensions_Fatal() at userextimpl.h:307 0x700eb9b2
_Terminate() at interr.c:35 0x700eb9b2
_Internal_error() at interr.c:52 0x700eb9e2
_Thread_Do_dispatch() at threaddispatch.c:186 0x700edd0a
_Thread_Dispatch_enable() at threaddispatch.h:227 0x700ef0f8
_Thread_Change_life() at threadrestart.c:684 0x700ef0f8
_Thread_Set_life_protection() at threadrestart.c:691 0x700ef10c
_API_Mutex_Lock() at apimutexlock.c:29 0x700eae86
_RTEMS_Lock_allocator() at allocatormutex.c:26 0x700eae6a
_Region_Get_and_lock() at regionimpl.h:76 0x700e9cdc
rtems_region_get_segment() at regiongetsegment.c:68 0x700e9cdc
...
... some of my code ...
...
_Thread_Handler() at threadhandler.c:134 0x700edde2
_Thread_Get() at threadget.c:38 0x700edda8

It's not clear to me what the issue might be.


On 19 February 2018 at 13:13, Matthew J Fletcher  wrote:

> All,
>
> Replying to my own post, with CPU_ENABLE_ROBUST_THREAD_DISPATCH set FALSE
> I get a fatal exception, this on a Cortex-M7, RTEMS 5.0.0 from git.
>
> I think I will have to work around the new behavior somehow.
>
> --
>
> regards
> ---
> Matthew J Fletcher
>
>


-- 

regards
---
Matthew J Fletcher

Re: INTERNAL_ERROR_BAD_THREAD_DISPATCH_ENVIRONMENT

2018-02-19 Thread Matthew J Fletcher
All,

Replying to my own post, with CPU_ENABLE_ROBUST_THREAD_DISPATCH set FALSE I
get a fatal exception, this on a Cortex-M7, RTEMS 5.0.0 from git.

I think I will have to work around the new behavior somehow.

-- 

regards
---
Matthew J Fletcher