Re: WIP: sensor API

2016-12-12 Thread Kevin Townsend
There are probably instances where 1kHz isn't fast enough for certain 
sensor types:


https://github.com/apache/incubator-mynewt-core/compare/develop...sensors_branch#diff-ec052d973c26072d9ac2e198f16e764aR226

/**
 * Poll rate in MS for this sensor.
 */
uint32_t s_poll_rate;

Having said that ... if there are callbacks providing data events, we 
may not want those to fire more often than once per millisecond either, 
so there is a tradeoff. At that point, perhaps collecting a number of 
data samples inside the driver and then returning that collection of 
samples in the event callback may be valid, although this goes against 
the design decision not to buffer data internally?
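
Purely to illustrate that tradeoff, here is a minimal sketch of what 
batching inside the driver could look like; every name and type below 
is my own and not from the branch:

/*
 * Sketch only -- the driver accumulates raw samples at the fast
 * hardware rate and fires the listener callback once per batch, so the
 * callback itself never needs to run more often than ~1 ms.
 */
#include <stddef.h>
#include <stdint.h>

#define ACCEL_BATCH_LEN 8

struct accel_sample {
    int16_t x, y, z;
    uint32_t timestamp_us;
};

typedef void (*accel_batch_cb)(const struct accel_sample *samples,
                               size_t count, void *arg);

struct accel_batcher {
    struct accel_sample buf[ACCEL_BATCH_LEN];
    size_t count;
    accel_batch_cb cb;
    void *cb_arg;
};

/* Called from the driver's fast sampling path (timer or ISR context). */
static void
accel_batcher_push(struct accel_batcher *b, const struct accel_sample *s)
{
    b->buf[b->count++] = *s;
    if (b->count == ACCEL_BATCH_LEN) {
        /* One event callback per batch instead of one per sample. */
        b->cb(b->buf, b->count, b->cb_arg);
        b->count = 0;
    }
}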


Would a microsecond poll rate be too much to deal with on most systems, 
in your opinion(s)? That still gives you a valid polling range of 
1us..4294s in a 32-bit value. The top end is only slightly more than an 
hour, though, which is a shame since I can imagine situations where you 
might want a reading every 6 hours as well. There may also be timer 
issues going below 1ms resolution.
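
For reference, a quick back-of-the-envelope check of those ranges 
(nothing Mynewt-specific, just the arithmetic):

/* Quick sanity check of the 32-bit poll rate ranges discussed above. */
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
    /* 32-bit poll rate in microseconds: 1 us .. ~4295 s (~71.6 min). */
    printf("uint32_t us: max %.1f s (%.1f min)\n",
           UINT32_MAX / 1e6, UINT32_MAX / 1e6 / 60);

    /* 32-bit poll rate in milliseconds: 1 ms .. ~4294967 s (~49.7 days). */
    printf("uint32_t ms: max %.1f s (%.1f days)\n",
           UINT32_MAX / 1e3, UINT32_MAX / 1e3 / 86400);

    return 0;
}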


Re: WIP: sensor API

2016-12-12 Thread Kevin Townsend

Hi Sterling,

One other important question is what happens when the requested read 
interval between data samples isn't respected?


In the accel sim (really nice to see sim examples, BTW ... they're 
extremely useful for testing sensor algorithms), you just check how 
many samples were missed since the last interval and generate that many 
new samples to fill the gaps: 
https://github.com/apache/incubator-mynewt-core/compare/develop...sensors_branch#diff-1c17a623363565318dfefdb8891c0376R148
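
Roughly paraphrasing that behaviour (my own sketch, not the actual sim 
code):

/*
 * Backfill sketch: work out how many poll intervals have elapsed since
 * the last read and synthesise one sample per missed interval.
 * Assumes poll_rate_ms > 0.
 */
#include <stdint.h>

static void
sim_backfill(uint32_t now_ms, uint32_t *last_read_ms,
             uint32_t poll_rate_ms, void (*gen_sample)(uint32_t ts_ms))
{
    uint32_t missed;
    uint32_t i;

    missed = (now_ms - *last_read_ms) / poll_rate_ms;
    for (i = 0; i < missed; i++) {
        /* Stamp the synthetic sample at the time it should have been
         * read, so consumers still see an evenly spaced series. */
        gen_sample(*last_read_ms + (i + 1) * poll_rate_ms);
    }
    *last_read_ms += missed * poll_rate_ms;
}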


A lot of DSP algorithms are timing sensitive, though, such as sensor 
fusion with accel/mag/gyro inputs, where you want samples as close as 
possible to a fixed rate like 50Hz or 100Hz, and the three components 
read as close together as possible (though the timestamps at least let 
you know the offset between components).


Raising some sort of alert when a sample is missed might be a better 
choice, or there could be an option to indicate which of the two 
behaviours the timer should use when a sample interval is missed (rough 
sketch after the list):


- 1. Read multiple values (default behaviour?)
- 2. Raise a missing sample alert
- 3. ???
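
A rough sketch of how that choice could be exposed per sensor; the 
names are mine, not from the branch:

/* Per-sensor policy for what to do when a sample interval is missed. */
enum sensor_missed_sample_policy {
    /* Backfill: generate one sample per missed interval
     * (the current sim behaviour, probably the default). */
    SENSOR_MISSED_BACKFILL = 0,
    /* Alert: deliver a single "samples were dropped" notification
     * instead of synthesising data. */
    SENSOR_MISSED_ALERT    = 1,
};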

Kevin


WIP: sensor API

2016-12-12 Thread Sterling Hughes

Hi,

I’ve added initial support for a sensors API to mynewt in a branch off 
develop called “sensors_branch.”  You can find a full diff here, or 
pull the source code directly:


https://github.com/apache/incubator-mynewt-core/compare/develop...sensors_branch

I’ll caveat that this API needs work, and further implementation with 
real sensors; however, I thought I’d send it to the list for:


- early comments

- if other people are playing with sensors and want to join the cause on 
the branch, I think now is a decent time.


I have a couple of Adafruit sensors locally, so I plan on adding 
support for the tsl2561 and lis3dh next.  Although, if someone wants to 
beat me to it - by all means.


The interface is defined in this file:

https://github.com/apache/incubator-mynewt-core/compare/develop...sensors_branch#diff-ec052d973c26072d9ac2e198f16e764a

An example implementation of a sensor driver can be found here:

https://github.com/apache/incubator-mynewt-core/compare/develop...sensors_branch#diff-1c17a623363565318dfefdb8891c0376

And an example of calling the API for basic shell functionality (usage) 
can be found here:


https://github.com/apache/incubator-mynewt-core/compare/develop...sensors_branch#diff-d90d3b58f5055894b546a7eff86ba20c

Again, I’ll caveat that this is a WIP, so feedback is welcome (don’t 
be shy!), and patches / co-development are even more so!  This is a 
holiday project for me, so I plan on having something pull-worthy in 
early January, although no promises.


Cheers,

Sterling




Re: System init and OS eventq ensure

2016-12-12 Thread marko kiiskila

> On Dec 11, 2016, at 11:21 AM, Sterling Hughes  wrote:
> 
> Hi,
> 
>> 
>>> On Dec 11, 2016, at 10:55 AM, Christopher Collins  
>>> wrote:
>>> 
>>> On Sun, Dec 11, 2016 at 10:11:44AM -0800, will sanfilippo wrote:
>>>> Personally, I keep wanting to try and have the OS start up right away.
>>> 
>>> I wonder if this could solve the problem that Sterling raised (no
>>> default event queue during sysinit).  The control flow in main() might
>>> look like this:
>>> 
>>> 1. Start OS
>>> 2. Create and designate default event queue.
>>> 3. sysinit()
>>> 
>>> I think it would be nice if we could avoid adding another initialization
>>> stage.
>>> 
> 
> +1

I agree. If there are too many options here, it becomes even harder to
understand. Preferably we should keep this stuff simple, keeping the barrier to
entry for development low.

> 
>>>> There are definitely “issues” with this:
>>>> a) We do not want to waste idle task stack.
>>>> b) When tasks are started they would start running right away. This
>>>> might cause issues where a task does something to a piece of memory
>>>> that another task initializes, but since that other task has not
>>>> initialized it yet…
>>>> 
>>>> b) can be avoided by locking the scheduler until initializations are
>>>> finished.
>>>> 
>>>> a) is problematic :-) I think someone brought this up before, but I
>>>> wonder if it is worth the effort to do something “a bit crazy” like
>>>> the following: the idle task uses “the heap” during initialization.
>>>> Once initializations are over (or at some point that we determine),
>>>> the idle task stack is made smaller and the “top” of the heap is set
>>>> to the end of the idle task stack. For example, idle task stack is at
>>>> 0x20008000 and is of size 1K bytes; the bottom of the heap is at
>>>> 0x20007000; the top of the heap is at 0x20007C00 (in my nomenclature,
>>>> heap allocations start from the bottom). At some point, the top of the
>>>> heap is moved to 0x20007F80.
>>>> 
>>>> Yeah, maybe a bit crazy… :-)
>>> 
>>> I don't think that's too crazy.  It would be great if we could just
>>> malloc() a temporary stack, and then free it when initialization
>>> completes.  I guess the worry is that this will cause heap
>>> fragmentation?
>>> 
> 
> I’m not crazy about malloc()’ing this space.  Especially since system init 
> (where we’d use this memory) is where people malloc() their memory pools, and 
> so you have 1K of space that could potentially affect memory exhaustion.  
> Maybe it’s an awful idea… but why not let people specify the startup task 
> stack, and we can guarantee that this task gets deleted before the rest of 
> the tasks/system runs.  That way, you can just choose one of your task’s 
> stacks that is sufficiently large, and use that for startup stack.
> 

Most of the malloc()s for packages happen when they’re initialized,
and having a malloc()’d init stack present during this step will affect that.
Freeing the stack right after will also automatically lead to heap
fragmentation.
And we’d need some new, mandatory, architecture-specific code which
switches stacks for a task while the task is running. While not complex,
it is yet another thing to write/debug when adding a new architecture.

However, I like the idea of the app assigning a startup task, and then executing
sysinit() in that task’s context.
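
A rough sketch of what that could look like; the exact calls
(particularly how the default event queue gets designated) are
assumptions on my part and may not match what ends up in the tree:

/* Sketch: app-provided startup task runs sysinit() after the OS is up. */
#include "os/os.h"
#include "sysinit/sysinit.h"

#define STARTUP_TASK_PRIO   10
#define STARTUP_STACK_SIZE  OS_STACK_ALIGN(512)   /* app-chosen size */

static struct os_task startup_task;
static os_stack_t startup_stack[STARTUP_STACK_SIZE];
static struct os_eventq dflt_evq;

static void
startup_task_func(void *arg)
{
    /* 2. Create and designate the default event queue. */
    os_eventq_init(&dflt_evq);
    os_eventq_dflt_set(&dflt_evq);  /* assumption: however this is named */

    /* 3. Package initializers now run with the OS and eventq in place. */
    sysinit();

    /* Hand over: either delete this task or keep it as the event loop. */
    while (1) {
        os_eventq_run(&dflt_evq);
    }
}

int
main(void)
{
    /* 1. Start the OS; only the startup task exists when the scheduler
     * takes over. */
    os_init();
    os_task_init(&startup_task, "startup", startup_task_func, NULL,
                 STARTUP_TASK_PRIO, OS_WAIT_FOREVER, startup_stack,
                 STARTUP_STACK_SIZE);
    os_start();

    return 0;
}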



Re: final release step (was: [RESULT][VOTE] Release Apache Mynewt ...)

2016-12-12 Thread Christopher Collins
Hi Greg,

On Sun, Dec 11, 2016 at 08:45:17PM -0600, Greg Stein wrote:
> Hi all,
> 
> I was looking at prior releases, and noticed that the final step to publish
> the release appears to have been done wrong. No harm done, but suboptimal.
> 
> It appears the release artifacts were *added* into
> release/incubator/mynewt/ ... but what *should* be done is to move the
> artifacts that were voted upon into the release area.
> 
> For example:
> 
> $ svn mv -m "Publish mynewt-0.8.0-b1-incubating"
> https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-0.8.0-b1-incubating/rc2/
> https://dist.apache.org/repos/dist/release/incubator/mynewt/apache-mynewt-0.8.0-b1-incubating/
> 
> That way, you won't duplicate the content within svn, and you are
> publishing *exactly* what was voted. (sure, previous stuff is
> byte-comparable, but the svn log is also a Good Thing)

Thanks, that makes sense.  I'll update our release process document with
this information.  Now for this release, let's see if my svn skills are
any better than my git skills :).

> Also please "svn rm" all the old stuff from dev/incubator/mynewt/ ... no
> reason for that to sit around. (clearly, it will remain in version control,
> but just not in your face)

Sounds good.

Thanks,
Chris


Re: GDB + SIM broken with macOS sierra

2016-12-12 Thread marko kiiskila
I also tried to use lldb for a few days, and decided that gdb setup is still 
worth the
trouble.

> On Dec 10, 2016, at 10:12 PM, Sterling Hughes  wrote:
> 
> Ugh.
> 
> I’ve been using lldb for most of the day, and it’s… fine.  Certainly it has 
> improved over the years.  We might consider switching to it on mac os x for 
> the simulated environment.  I hate having to make users learn yet another 
> debugger, but people are moving away from GCC / GDB and to LLVM / LLDB, and I 
> don’t know how well GDB is going to work on new versions of Mac.
> 
> Sterling
> 
> On 10 Dec 2016, at 12:11, marko kiiskila wrote:
> 
>> I have this running. But it’s not great.
>> 
>> You need gdb 7.12.1 (you can get that with brew).
>> Codesign your gdb; https://sourceware.org/gdb/wiki/BuildingOnDarwin 
>> 
>> And then I also had to make gdb owned by root with the SUID bit set (Peter didn’t
>> need to, so YMMV).
>> Add the following to your .gdbinit:
>>  set startup-with-shell off
>> 
>> And that should do it. It is not without its woes. Every process I run
>> under gdb ends up being a zombie. It’s like the walking dead season 1
>> on my laptop. However, I can do ‘newt run’ with my targets.
>> 
>> Hope this helps,
>> M
>> 
>>> On Dec 10, 2016, at 10:59 AM, Sterling Hughes  wrote:
>>> 
>>> I’m wondering if anyone else has seen this / worked around it?
>>> 
>>> I can run SIM directly from the command line, or under LLDB, but GDB seems to 
>>> be broken?
>>> 
>>> Sterling