Re: How to upgrade to 1.8.0 for both newt and newtmgr?

2020-05-16 Thread Christopher Collins
I believe I have fixed the Catalina hang.  You can try out the fix at
https://github.com/apache/mynewt-newtmgr/pull/164, or you can wait for
it to be merged.

On Sat, May 16, 2020 at 12:03:41PM -0500, Mo Chen wrote:
> Great.
> 
> Two more questions:
> 
> Version 1.8.0 has already been released, so why, when we try to install the
> latest version, does it still install 1.7.0? Can we fix that?

It looks like we forgot to upload deb packages for 1.8.0.  I will try to
get to this soon.

> On Ubuntu Linux, under version 1.7.0, with Nordic-pca10040, the OTA works
> fine. However, with the adafruit feather nrf52, the OTA disconnects at
> 99.92% and shows error: disconnected. Any clue?

I recall seeing issues like this in the past.  The problem occurred when
the device performed a slow flash operation (usually an erase).  Since
the code is executing from flash, flash operations cause the MCU to
momentarily stall.  If the stall takes too long or occurs at the wrong
time, the BLE controller misses too many consecutive transmissions and
the connection terminates.

I would expect this to happen on the first upload request though, not
the last one.  The device erases the image slot when it receives the
first request and this erase can be quite slow.  I don't see anything
special in the code that happens while processing the final upload
request, so I'm afraid I'm at a loss here.

> I was trying to get rid of it by upgrading to a newer version. But on
> Linux, there doesn't seem to be any instruction on how to install 1.9.0
> dev. Would you please guide me on this?

I don't think upgrading will solve this problem (of course I could be
wrong).  Do you want to upgrade the firmware (Mynewt itself) or the
tools (newt and newtmgr)?  If you want to upgrade Mynewt, running `newt
upgrade` from your project directory should do that.

> Many thanks!


Re: How to upgrade to 1.8.0 for both newt and newtmgr?

2020-05-16 Thread Christopher Collins
Thanks Mo.  Someone else reported the same issue with Catalina.  Their
`-ldebug` log looked identical to yours.  I'll upgrade to Catalina and
let you know what I find out.

Chris

On Sat, May 16, 2020 at 10:10:34AM -0500, Mo Chen wrote:
> Hi Chris,
> 
> thanks for the clarification.
> 
> I tried with the following debug info returned:
> 
> newtmgr image upload -c mybleprph -ldebug
> /Users/NNL/dev/myproj/bin/targets/myperiph/app/@apache-mynewt-nimble/apps/bleprph/bleprph.img
> 
> DEBU[2020-05-16 10:03:28.615] Using connection profile: name=mybleprph
> type=ble connstring=peer_name=nimble-bleprph
> DEBU[2020-05-16 10:03:28.621] CentralManagerDidUpdateState: cmgr=0x5804b50
> 
> No further responses shown. No signs of connection. In this state, I
> checked the BLE advertising using my phone; the device is still advertising,
> meaning the connection was not established.
> 
> Would you please help interpret what the message code means?
> 
> other details:
> 
> newt version: 1.9.0
> newtmgr version: 1.9.0
> Device: nordic pca10040
> 
> The same method works under version 1.7.0 on Ubuntu Linux. So we can
> confirm the device works.
> 
> Thanks,
> 
> 
> On Fri, May 15, 2020 at 9:15 PM Christopher Collins 
> wrote:
> 
> > On Fri, May 15, 2020 at 07:47:22PM -0500, Mo Chen wrote:
> > > Hi Chris,
> > >
> > > Thank you for your timely response.
> > >
> > > Under v1.7.0, the error message is:
> > > *Unhandled event: xpc.Dict{"kCBMsgId":4,
> > > "kCBMsgArgs":xpc.Dict{"kCBMsgArgState":5}}*
> > >
> > > Under v1.9.0 dev, without '--ldebug':
> > > Nothing happens. No message, no response.
> > >
> > > Under v1.9.0 with '--ldebug':
> > > I guess I did not get what you meant by "with the '--ldebug' switch". I
> > > tried the following but with an error telling me unknown flag.
> > >
> > > newtmgr image upload -c mybleprph '--ldebug'
> > >
> > /Users/NNL/dev/myproj/bin/targets/myperiph/app/@apache-mynewt-nimble/apps/bleprph/bleprph.img
> > > Error: unknown flag: --ldebug
> > >
> > > Would you please guide me on how to implement the --ldebug switch?
> >
> > Ah, sorry,  I meant `-ldebug` (one dash only)!
> >
> > Thanks,
> > Chris
> >
> 
> 
> -- 
> Mo Chen


Re: How to upgrade to 1.8.0 for both newt and newtmgr?

2020-05-15 Thread Christopher Collins
On Fri, May 15, 2020 at 07:47:22PM -0500, Mo Chen wrote:
> Hi Chris,
> 
> Thank you for your timely response.
> 
> Under v1.7.0, the error message is:
> *Unhandled event: xpc.Dict{"kCBMsgId":4,
> "kCBMsgArgs":xpc.Dict{"kCBMsgArgState":5}}*
> 
> Under v1.9.0 dev, without '--ldebug':
> Nothing happens. No message, no response.
> 
> Under v1.9.0 with '--ldebug':
> I guess I did not get what you meant by "with the '--ldebug' switch". I
> tried the following but with an error telling me unknown flag.
> 
> newtmgr image upload -c mybleprph '--ldebug'
> /Users/NNL/dev/myproj/bin/targets/myperiph/app/@apache-mynewt-nimble/apps/bleprph/bleprph.img
> Error: unknown flag: --ldebug
> 
> Would you please guide me on how to implement the --ldebug switch?

Ah, sorry,  I meant `-ldebug` (one dash only)!

Thanks,
Chris


Re: How to upgrade to 1.8.0 for both newt and newtmgr?

2020-05-15 Thread Christopher Collins
Hi Mo,

On Fri, May 15, 2020 at 05:11:36PM -0500, Mo Chen wrote:
> I am currently using 1.7.0 on MacOS Catalina.
> 
> There is a BLE connection issue. I cannot upload img via OTA.
> 
> I searched online, it seems in version 1.8.0, this problem has been solved?
> 
> I tried to install mynewt-newtmgr --HEAD. The 1.9.0 dev version did not
> solve the BLE connection issue.
> 
> Any clue?

The 1.9.0 dev version of newtmgr should work with Catalina.  Can you
please try the upload again with the `--ldebug` switch?  Then please
paste the output in a response.

Chris


Re: windows shell scripts

2020-02-20 Thread Christopher Collins
Thanks ipan, those are all very good points.  Please feel free to submit
one or more PRs.  Otherwise, I will take a look at implementing some of
these if you don't mind.

Chris

On Wed, Feb 19, 2020 at 10:26:53PM +0100, J. Ipanienko wrote:
> Welcome everyone
> 
> 1. mynewt has over 100 identical cmd files with this content: @bash "%~dp0%~n0.sh"
> 
> 2. pre_build_cmds pre_link_cmds post_link_cmds probably can't be used with 
> .WINDOWS
> 
> 3. go exec.Command and os.StartProcess will not run a file with the .sh 
> extension on Windows
> 
> 4. debug and load for all bsp need bash
> 
> 5. /bin/sh is hardcoded for linux and darwin
> 
> So that you can have a common script for post_build, I suggest that newt 
> check the extension of the command file and, if it is .sh, prepend bash.exe 
> (or, for example, the value of the NEWT_SH variable)
> 
> then you could also get rid of those 100 cmd files
> 
> This is a change that executes the same script on Windows (when NEWT_SH is 
> set to the path to bash.exe) and on Linux; it also allows getting rid of the 
> .cmd files in each bsp
> 
> diff --git a/util/util.go b/util/util.go
> index ac92c28..c6a5638 100644
> --- a/util/util.go
> +++ b/util/util.go
> @@ -369,8 +369,14 @@ func ShellCommandLimitDbgOutput(
>         name = "/bin/sh"
>         args = []string{"-c", strings.Replace(cmd, "\"", "\\\"", -1)}
>     } else {
> -       name = cmdStrs[0]
> -       args = cmdStrs[1:]
> +       var newt_sh = os.Getenv("NEWT_SH")
> +       if newt_sh != "" && strings.HasSuffix(cmdStrs[0], ".sh") {
> +           name = newt_sh
> +           args = cmdStrs
> +       } else {
> +           name = cmdStrs[0]
> +           args = cmdStrs[1:]
> +       }
>     }
>     cmd := exec.Command(name, args...)
> 
> @@ -428,6 +434,10 @@ func ShellInteractiveCommand(cmdStr []string, env 
> map[string]string,
>     // Escape special characters for Windows.
>     fixupCmdArgs(cmdStr)
> 
> +   var newt_sh = os.Getenv("NEWT_SH")
> +   if newt_sh != "" && strings.HasSuffix(cmdStr[0], ".sh") {
> +       cmdStr = append([]string{newt_sh}, cmdStr...)
> +   }
>     log.Print("[VERBOSE] " + cmdStr[0])
> 
> c := make(chan os.Signal, 1)
> 
> 
> good luck
> ipan
> 


Simplifying persisted config

2020-02-07 Thread Christopher Collins
Hello all,

The `sys/config` package lets an app persist and restore data in
permanent storage.  It is documented here:
.

This package does its job well and its design is sound.  It has one
minor problem though (in my opinion, in case that needs saying) - it is
nearly impossible to use!  Here is an example:
.
This file implements a single 8-bit setting called `split/status`.  This
is not an extreme example; it is how all `sys/config` client code looks.
Speaking for myself, it is a major implementation effort whenever I need
to add a new setting, or worse, a new setting group.  As basic and
fundamental as data persistence is to an embedded application, this
should be an easier task.

The problem is obvious: the config package is very powerful, and with
that power comes a very complex API.

I think most application code would be fine with a less powerful
library.  Such a library would allow all settings to be loaded,
individual settings to be saved on demand, and very little else.  It
would not be possible to commit several settings at once or to execute
custom code when a setting is saved or loaded.  With this simplified set
of requirements in mind, I came up with a wish list for an API:

1. It should be easy to create a new setting group.  Ideally you could
just copy and paste from existing code.

2. Creating a new setting should be as simple as appending an element to
an array.

3. Persisting a setting should be possible with a single function call.

So I took that and implemented... something.  The library is called
"scfg" (short for "simple config").  I'll go into more details below,
but first, here is how that `split/status` configuration would be done
using the new library: 
[1].  I think this is a marked improvement, but please judge for
yourselves.

### Implementation

I had the idea that we could just build a simple library on top of the
existing `sys/config` library.  Unfortunately, `sys/config` doesn't lend
itself well to this kind of a wrapper library.  The problem is that the
`conf_handler` callbacks don't accept a `void *arg` parameter, meaning
every config group must define a dedicated set of callback functions.
This is an issue because scfg needs to define a generic set of callbacks
for all config groups.  So I had to modify `sys/config` so that the
callbacks accepted an extra generic argument, and this had to be done in
a backwards compatible way.  It's not pretty, but this is what I came up
with: .

The scfg library required a second change to `sys/config`: Support for
unsigned integer types.  Only signed integer types are currently
supported.  Before my change, when a config setting needed to use an
unsigned type, the handler would work around this limitation by
specifying a signed integer type that is slightly wider than the actual
data.  Then the conf handler callbacks would manually convert between
the unsigned and signed types.  Since scfg does not allow for arbitrary
code to run during save and restore operations, the library needs to
handle unsigned types natively.  I made this change here:
.

Finally, here is the PR for the scfg library itself:
.

### Questions

1. Do we want to enable extended callbacks in `sys/config` by default,
or should this require a syscfg setting to be enabled?  This change adds
eight bytes to every config group, which is a bummer.  I prefer that we
enable this change unconditionally.  I think a simplified config
interface is valuable enough that developers should be able to assume it
is available.

2. Does scfg need some sort of "on-loaded" callback for each setting?
My feeling is no, we don't need that.  If applications need to know when
settings are restored, then we can modify `sys/config` to allow
callbacks to be registered.  These callbacks would be executed after
`conf_load()` completes.

3. Do we need support for binary data?  `sys/config` only allows text
values for settings.  Binary blobs must be converted to text using
something like base64 before they are saved.  This is a nuisance for
large settings (e.g., nimble host security data).  I would love it if we
could support raw binary values, but my impression is that this would
require massive changes to the sys/config library.  I think this change
would be quite valuable, but it is a feature in itself and it should be
considered separately.

All comments are welcome.

[1] It seems the split image feature is broken again.  In light of that,
this probably wasn't the best group to use as an example!

Thanks for reading,
Chris



Re: Simplifying Mynewt repo dependencies

2020-01-24 Thread Christopher Collins
Hi Vipul,

On Thu, Jan 23, 2020 at 03:48:51PM -0800, Vipul Rahane wrote:
> Hi,
> 
> Seems fine to me to remove the range feature. I am a little bit concerned
> about version.yml, especially for backwards compatibility, but if that's not
> an issue it's all good.

Repos should keep their `version.yml` files around for a while to
maintain compatibility with older versions of newt.  Alternatively,
they could update their `repo.newt_compatibility` matrix to prevent
older versions from being used.  I think that mitigates most backwards
compatibility issues.

Chris


Re: Simplifying Mynewt repo dependencies

2020-01-22 Thread Christopher Collins
Hi Szymon,

On Wed, Jan 22, 2020 at 12:33:21PM +0100, Szymon Janc wrote:
> Hi Chris,
> 
> Yes! Let's simplify this as much as possible.
> Do I get this correctly that change we could (if want to) get rid of
> release branches and just freeze master during release?

Thanks for reading.  Yes, that is correct.  Branching is permitted but
not required.

Chris


Simplifying Mynewt repo dependencies

2020-01-21 Thread Christopher Collins
Hello all,

This is a follow-up to an email I sent last week (subject: "Proposal:
Remove version.yml files").  In that email my thoughts were somewhat...
illegible.  I'm going to try again, hopefully with a bit more clarity.

This email concerns Mynewt's repo dependency system.  That is to say,
what happens when a user runs `newt upgrade`.  This functionality is
burdened by two unfortunate features:

1. The ability to depend on a *range* of repo versions, and
2. Mandatory `version.yml` files.

I propose that we greatly simplify repo management by removing both of
these features.  I have submitted a PR that implements this proposal
here: https://github.com/apache/mynewt-newt/pull/365.

Below I will describe each of these features and why I think they should
be removed.

 BAD FEATURE 1: RANGED DEPENDENCIES

I'll start by giving some examples [1].

# Example 1
repository.apache-mynewt-core:
type: git
vers: '>=1.6.0'
url: git@github.com:apache/mynewt-core.git

# Example 2 [2]
repository.apache-mynewt-nimble:
type: git
vers: '>=1.1.0 <=1.2.0'
url: git@github.com:apache/mynewt-nimble.git

Each dependency uses comparison operators to express a range of
acceptable versions.

Now let me explain why I think ranged dependencies is a bad feature.
To begin with, a ranged dependency like `>=1.0.0 <2.0.0` only works if
the repo follows the SemVer model (or something like it).  SemVer
requires that the major version number be bumped whenever a
compatibility-breaking change is made to the interface.  If a repo
doesn't faithfully follow SemVer, then there's no guarantee that 1.0.1
has the same interface as 1.0.0 (for example).  Ranged dependencies
clearly aren't appropriate for such a repo.  My feeling is that most
library maintainers don't follow SemVer even if they intend to.  Part
of the problem is that people don't always want to bump the major
version when they ought to because to do so has certain non-technical
implications (e.g., "2.0" of something is often accompanied by a press
release or other fanfare).

But more importantly, Mynewt is used for embedded development, and I
think most embedded developers want to know exactly which libraries
they're putting in their products.  In this sense, allowing a range of
repo versions does more harm than good because it takes control away
from the developer.  Ranged dependencies allow repos to unexpectedly
change versions when you run `newt upgrade`.

Finally, use of version ranges makes the job of maintaining a Mynewt
repo more difficult.  A new repo version must be tested against a range
of dependency versions rather than just one.

So the first part of my proposal is to remove support for version
ranges in repo dependencies.  A repo dependency must specify a single
version using one of three forms:

* Fixed version (1.0.0)
* Floating version (1.6-latest)
* Commit (7088a7c029b5cc48efa079f5b6bb25e4a5414d24-commit)
         (tagname-commit)
         (branchname-commit)
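Under this proposal, the earlier dependency examples would be rewritten to pin a single version.  A sketch (same repos as the examples above; exact quoting style may differ):

```yaml
# Example 1, pinned to a fixed version:
repository.apache-mynewt-core:
    type: git
    vers: '1.6.0'
    url: 'git@github.com:apache/mynewt-core.git'

# Example 2, pinned to a commit (branch form):
repository.apache-mynewt-nimble:
    type: git
    vers: 'master-commit'
    url: 'git@github.com:apache/mynewt-nimble.git'
```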

 BAD FEATURE 2: `VERSION.YML` FILES

To determine the version of a particular commit of a repo, newt checks
out that commit and reads the contents of a file called `version.yml`.
This file contains a single fixed version string.  When a maintainer
releases a new version of a repo, they have to update `version.yml`
before creating the tag.
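For reference, a `version.yml` file is tiny; at a 1.7.0 release tag it would contain something like the following (the key name is recalled from memory, so treat it as an assumption):

```yaml
repo.version: "1.7.0"
```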

The entire reason for this `version.yml` requirement is to support
`*-commit` dependencies.  I'll give an example to illustrate what I
mean:

* mynewt-core 1.6.0 DEPENDS ON mynewt-nimble 1.1.0
* mynewt-core 1.7.0 DEPENDS ON mynewt-nimble 1.2.0

Now say a user doesn't want to use a versioned release of mynewt-core.
Instead, he wants his project to use a commit of mynewt-core that is
somewhere between 1.6.0 and 1.7.0 in the git history.  What version of
mynewt-nimble should this particular commit depend on?  What if the
user depends on a mynewt-core commit from a completely different
branch?  Newt answers these questions by reading mynewt-core's
`version.yml` file from the requested commit.  If the file contains
"1.6.0", then core depends on nimble 1.1.0.  If it says "1.7.0", core
depends on nimble 1.2.0, and so on.

So that's the rationale and the history of the `version.yml` file.  In
retrospect, I think this `version.yml` idea was a mistake.  Arbitrary
commits are *not* versioned releases and they should not be treated
like them.  When the user specifies a `*-commit` dependency he is
putting newt into a sort of "manual override" mode.  Newt should not
try to figure out inter-repo dependencies in this mode.

Another reason why `version.yml` files are bad is that they make repo
maintenance more difficult.  To keep a `version.yml` file in a sane
state, a repo maintainer has to use a particular branching strategy
when preparing a new release (step 0 and step 5b here:
https://cwiki.apache.org/confluence/display/MYNEWT/Release+Process).
It's easy to get this wrong, and it is just not a very user-friendly
process.

Proposal: Remove version.yml files

2020-01-15 Thread Christopher Collins
Hello all,

### TLDR

Repo version tracking is somewhat broken due to a single feature
(commit versions).  To solve this, I propose that:

1. We remove the concept of version.yml files
2. Commit versions now conflict with ALL other versions.

I'll try to distill my thoughts into something readable below.

### DEFINITIONS

REPO DEPENDENCY (dep): A dependency on a specific version of a Mynewt
   repo.

FIXED VERSION: An exact repo version number.
Examples:
1.0.1
2.0.0

FLOATING VERSION: A partial repo version number with a "stability
  specifier".
Examples:
1-latest  # (1.x.x)
0-dev # (0.x.x)
1.1-latest# (1.1.x)

COMMIT VERSION: Points to a git object (branch/tag/hash) of a repo.
Uses the "commit" stability specifier.
Examples:
7088a7c029b5cc48efa079f5b6bb25e4a5414d24-commit
mynewt_1_7_0_tag-commit
master-commit

ROOT DEPENDENCY (ROOTDEP): A repo dependency expressed in `project.yml`

INTER DEPENDENCY (INTERDEP): A dependency of one repo on another.
 These are expressed in the depending
 repo's `repository.yml` file.

### CURRENT IMPLEMENTATION

When the user runs `newt upgrade`, the newt tool calculates which git
hash to check out for each repo.  The process goes like this:

1. Start with an empty working set of deps.
2. Add all rootdeps to the working set.
3. For each unvisited dep `d` in the working set:
a. Add all of d's interdeps to the working set.
4. Repeat step 3 until all deps have been visited.

I think that procedure is pretty simple and straightforward (if a
little tedious to read).  Things get weird in the process that follows:
conflict detection.

A "conflict" is when the working set contains more than one version of
the same repo.  In other words, someone wants version X while someone
else wants version Y.  Newt can only check out a single version of a
given repo, so it reports a conflict and aborts the upgrade.  So far,
this probably sounds reasonable.  Unfortunately, the process gets
really convoluted due to commit versions.

Commit versions are problematic because they aren't linked to version
numbers.  For example, repo R has a tag defining version 1.2.0.  What
version number should be assigned to the parent commit of this tag?
What about commits on their own branches?

The way newt solves this is by requiring a `version.yml` file in each
repo.  This file contains a single fixed version.  To determine the
version of a particular commit, newt checks out that commit and reads
the contents of `version.yml`.  When a maintainer releases a new
version of a repo, they have to update `version.yml` before creating
the tag.

I think this version.yml idea was a mistake.  The entire reason this
idea was introduced was to make it possible to gauge compatibility when
commit versions are used.  But that motivation is itself flawed; the
notion of "compatibility" is not valid when dealing with commit
versions because arbitrary commits don't come with any compatibility
guarantees.

A secondary problem with the `version.yml` idea is that it is
error-prone and requires extra work for a Mynewt repo maintainer.

### PROPOSAL

1. No more version.yml files.
2. Commit versions conflict with ALL other versions.

In other words, newt doesn't try to figure out what a commit version's
"real version number" is.  One way to think of it is: each commit
version has its own unique version number.  As a consequence, different
commit versions always conflict with each other.

I should also mention that newt allows the `project.yml` file to
override any interdep with a commit version rootdep.  Newt accepts the
rootdep and does not report a conflict.

All comments welcome.

Thanks,
Chris


Re: mynewt create-image no work at windows

2019-11-18 Thread Christopher Collins
On Mon, Nov 18, 2019 at 11:07:17AM +0100, Szymon Janc wrote:
> Hi,
> 
> So I was able to bisect this to
> 
> commit e3fdcd68c9ed6fa58a6b68bbf3c45a780c57ab40 (refs/bisect/bad)
> Author: Christopher Collins 
> Date:   Mon Aug 5 11:15:33 2019 -0700
> 
> create-image: Generate a .hex file
> 
> Create a `.hex` equivalent of the `.img` file.  This feature used to be
> present but was inadvertently removed at some point.

Thanks for chasing this down, Szymon.  I will take a look.

Chris


Re: Thoughts on deprecating LOG_VERSION 2 and making the default LOG_VERSION 3

2019-11-07 Thread Christopher Collins
On Thu, Nov 07, 2019 at 11:56:36AM -0800, Vipul Rahane wrote:
> Hello,
> 
> LOG_VERSION 2 has been around for quite some time but is mostly just a
> string based log. LOG_VERSION 3 supports string based logs as well as
> others. LOG_VERSION 2 does not play well with the MCUmgr mobile library as
> well.
> 
> I would like to suggest getting rid of LOG_VERSION 2 which would allow us
> to make the code a bit simpler and not worrying about backwards
> compatibility.
> 
> This discussion was mainly triggered by a PR
> https://github.com/apache/mynewt-core/pull/2087 by Jerzy for adding back
> string based reboot log just to make it work with LOG_VERSION 2.
> 
> LOG_VERSION 2 also doesn't play well with the MCUmgr mobile library.
> 
> I suggest we deprecate LOG_VERSION 2 and make the default 3 going forward.

I agree.  Version 3 has been around for quite a while (two years?) and
it is superior to version 2.  I am fine with dropping support for
version 2.

Chris


Re: newt upgrade (install, sync) fails on mcuboot repo

2019-10-18 Thread Christopher Collins
On Thu, Oct 17, 2019 at 04:13:35PM -0700, Christopher Collins wrote:
> On Thu, Oct 17, 2019 at 10:16:22AM -0700, Christopher Collins wrote:
> > I couldn't think of a good general solution to this problem (if anyone
> > else can, please share!).  Unless I'm missing something, I think I would
> > call this a git bug, so I am going to report it to the git maintainers
> > if it hasn't already been reported. 
> 
> I reported the bug here:
> https://public-inbox.org/git/20191017230751.gc4...@pseudoephedrine.nightmare-heaven.no-ip.biz/T/#u

Just one more follow up- someone on the git mailing list replied to my
bug report.  This is already a known issue.  It was first reported in
2017[1], so it probably won't get fixed any time soon.  I will try to
think of a more general workaround for newt.

Chris

[1]: 
https://public-inbox.org/git/788230417.115707.1507584541...@ox.hosteurope.de/


Re: newt upgrade (install, sync) fails on mcuboot repo

2019-10-17 Thread Christopher Collins
On Thu, Oct 17, 2019 at 10:16:22AM -0700, Christopher Collins wrote:
> I couldn't think of a good general solution to this problem (if anyone
> else can, please share!).  Unless I'm missing something, I think I would
> call this a git bug, so I am going to report it to the git maintainers
> if it hasn't already been reported. 

I reported the bug here:
https://public-inbox.org/git/20191017230751.gc4...@pseudoephedrine.nightmare-heaven.no-ip.biz/T/#u

Chris


Re: newt upgrade (install, sync) fails on mcuboot repo

2019-10-17 Thread Christopher Collins
On Thu, Oct 17, 2019 at 07:51:07AM +0200, Szymon Janc wrote:
> Hello,
> 
> Due to recent changes in mcuboot git (use of submodules) you may hit issues 
> with newt upgrade|install|sync commands. There is a short term hotfix merged 
> into newt master to workaround this.
> 
> Since this is also affecting new users we may need to release newt
> 1.7.1 (only newt tool) in next few days to address this issue.

Thanks Szymon.  This is indeed a bummer.

To add a few details, mcuboot replaced a regular directory with a
submodule (ext/mbedtls).  This was done with two separate commits:
1. Rename (regular directory) `ext/mbedtls` --> `ext/mbedtls-asn1`
2. Add submodule `ext/mbedtls`

Git reports an error when you use `git checkout` to "jump over" these
two commits.  If you try going from post- to pre-, git reports the
following error and aborts the operation:

The following untracked working tree files would be overwritten by
checkout:
ext/mbedtls/include/mbedtls/asn1.h
ext/mbedtls/include/mbedtls/bignum.h
ext/mbedtls/include/mbedtls/check_config.h
ext/mbedtls/include/mbedtls/config.h
ext/mbedtls/include/mbedtls/ecdsa.h
ext/mbedtls/include/mbedtls/ecp.h
ext/mbedtls/include/mbedtls/md.h
ext/mbedtls/include/mbedtls/oid.h
ext/mbedtls/include/mbedtls/pk.h
ext/mbedtls/include/mbedtls/platform.h
ext/mbedtls/include/mbedtls/platform_util.h
ext/mbedtls/include/mbedtls/threading.h
Please move or remove them before you switch branches.
Aborting

If you attempt the reverse (pre- to post-), the operation succeeds, but
git reports a warning and leaves an orphaned directory behind (making
the repo state dirty).

These issues cause `newt upgrade` to fail in most cases.

This problem was addressed in the newt tool with an mcuboot-specific
hack: https://github.com/apache/mynewt-newt/pull/343

I couldn't think of a good general solution to this problem (if anyone
else can, please share!).  Unless I'm missing something, I think I would
call this a git bug, so I am going to report it to the git maintainers
if it hasn't already been reported. 

Chris


Re: Unifying Newtmgr (NMP) protocol and MCUmgr (SMP) into the MCUmgr repo

2019-10-09 Thread Christopher Collins
Hi Vipul,

On Wed, Oct 09, 2019 at 12:42:51PM -0700, Vipul Rahane wrote:
> Hello,
> 
> While making the changes for MCUmgr, we came across nmgr_uart which is a
> predecessor of nmgr_shell. So far, from what I gather, the functionality is
> the same except for the fact that shell can bring in other code which can
> increase the code size a bit.
> 
> As a solution I was suggesting removing nmgr_uart and transitioning to
> smp_shell as part of the MCUmgr changes.
> 
> What does the community think about it, and does anybody have issues with it?
> 
> This question was raised as the CI did not catch errors with smp_uart and
> nothing really uses it in the mynewt ecosystem. I am looking for .a quick
> turn around on this question, so, any input would be fine. Thanks.

If it isn't being used anymore, then I see no problem with removing it.

I think the old serial boot loader used to use this package.  Now that
we have switched to mcuboot, there is nothing that depends on it.

It looks like mcuboot's serial boot loader just implements its own
minimal newtmgr server and UART transport.

So I say go ahead and remove it :).

Chris


Re: Newt feature: run custom commands at build time

2019-10-02 Thread Christopher Collins
> I do not have a strong opinion on this, we can keep it as is, however... I
> expected that these paths are relative to package root but seems like they
> are relative to project root. Is this intended behavior? I did not find any
> way to address script in a package other that using full path, i.e.
> 'repos/.../.../script.sh' which is counter-intuitive tbh. Perhaps I
> misunderstood description, but my understanding was that cwd is set to
> project root, but newt will still look for a script in package root - this
> would make more sense I think.

Ah, I see what you mean.  You're right - the script path should be
relative to the package directory, not to the project.  I will change
this.

The user may also want to run scripts relative to the project root.
There is an environment variable containing the project root
(`MYNEWT_PROJECT_ROOT`).  This variable is not set in the newt process
itself, but I think it would be useful.  I will change newt so that it
defines this setting in its own process.

Thanks,
Chris


Re: Question about mynewt Single mode

2019-10-01 Thread Christopher Collins
Hi Nicola,

On Fri, Sep 27, 2019 at 04:48:18PM +0200, Nicola Bizzotto wrote:
> Hi,
> I'm currently using an Espruino Puck.js device. I'm programming it with a
> Segger J-Link programmer. I have already experience with mynewt OS (I used
> it with a Bluefruit nRF52 Feather) and now I want to use it also with the
> Puck.js. I also would like to use 'Single' image setup, which is described
> here: https://mynewt.apache.org/latest/os/modules/split/split.html.
> By now I've always used 'Split' mode.
> I searched on your site but failed to find some information about the setup
> and use of 'Single' mode. Can you give me some help, or point me to some
> useful resources?

Mynewt doesn't offer any "direct" support for the single image setup,
but I believe it is still possible to use (it definitely used to work).

I don't believe there is any documentation for the single-image setup,
so here are some (untested) thoughts on how you should proceed:

1. (bsp.yml): In your BSP's flash map, shrink the
`FLASH_AREA_BOOTLOADER`, `FLASH_AREA_IMAGE_1`, and
`FLASH_AREA_IMAGE_SCRATCH` flash areas down to one sector each.  You
might be able to remove the "image 1" and "image scratch" areas entirely
(I'm not sure), but you definitely need to keep the bootloader area.

2. (bsp.yml): Set the `FLASH_AREA_IMAGE_0` area's offset to 0 and increase
its size as needed.  Your single image will reside in this area.

3. (bsp.yml): Set the `FLASH_AREA_BOOTLOADER` area's offset to something
other than 0.  Make sure it doesn't overlap any other areas.

4. Fix up the BSP linker script.  Copy the normal linker script and
modify its `FLASH` memory region so that it matches the
`FLASH_AREA_IMAGE_0` flash area (set `ORIGIN` to 0; set `LENGTH` to the
size of the flash area).
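Put together, the resulting flash map might look something like the sketch below.  The offsets and sizes are invented for a hypothetical 512 kB part with 4 kB sectors; adjust them to your device and keep the area names from your BSP's existing bsp.yml:

```yaml
bsp.flash_map:
    areas:
        # Step 2: the single image occupies almost the whole flash,
        # starting at offset 0.
        FLASH_AREA_IMAGE_0:
            device: 0
            offset: 0x00000000
            size: 500kB

        # Steps 1 and 3: bootloader moved off offset 0 and shrunk to
        # one sector, along with image 1 and scratch.
        FLASH_AREA_BOOTLOADER:
            device: 0
            offset: 0x0007d000
            size: 4kB
        FLASH_AREA_IMAGE_1:
            device: 0
            offset: 0x0007e000
            size: 4kB
        FLASH_AREA_IMAGE_SCRATCH:
            device: 0
            offset: 0x0007f000
            size: 4kB
```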

That should be it.  You might run into issues if some of your target's
dependencies expect all these flash areas to exist.  For example, you
will have to ensure you aren't pulling in `mgmt/imgmgr`.

Chris


Re: Newt feature: run custom commands at build time

2019-09-30 Thread Christopher Collins
Hi Andrzej,

On Thu, Sep 26, 2019 at 07:24:54PM +0200, Andrzej Kaczmarek wrote:
> This looks very good! I was thinking if it would be possible to reference
> other targets (i.e. artifacts) from scripts but with the latest addition of
> shared folder this does not seem to be a problem since it can be also
> shared with another newt build invoked from script and we can copy/write
> data there. I did not yet check how this work in practice but will give it
> a try and perhaps then I'll have some extra ideas.

Thanks for taking a look!

> > post_cmds (run after the build).
> >
> > ### EXAMPLE
> >
> > Example (apps/blinky/pkg.yml):
> >
> > pkg.pre_cmds:
> > scripts/pre_build1.sh: 100
> > scripts/pre_build2.sh: 200
> >
> > pkg.post_cmds:
> > scripts/post_build.sh: 100
> >
> 
> I assume these are relative to package root so perhaps we could assume
> there is scripts/ subdir and execute from there by default? Just the same
> as we have src/ and include/.

I'm reluctant to use an implicit path here.  I think it is good to be
explicit so that there is no confusion about where a script is located.

We use an implicit "targets" path, but I feel like that is easier to
justify because it saves the user from constantly typing the same thing.
I don't think this custom command feature will be used very often at
all, so I am not sure an implicit path would add much in the way of
convenience.

It's easy to add an implicit path later, but impossible to remove it.
Unless you have a strong opinion on this, I suggest we give the feature
some time without the implicit path and make the decision later.

Thanks,
Chris


Re: Newt feature: run custom commands at build time

2019-09-25 Thread Christopher Collins
On Wed, Sep 25, 2019 at 06:29:19PM +0300, marko kiiskila wrote:
> Sounds good. If we need to add more, we can do that later.

Thanks.  I have updated the PR with the discussed changes:


Chris


Re: Newt feature: run custom commands at build time

2019-09-24 Thread Christopher Collins
Hi Marko,

On Tue, Sep 24, 2019 at 03:19:24PM +0300, marko kiiskila wrote:
> Thanks, this is a very useful feature.
> 
> > On 24 Sep 2019, at 3.50, Christopher Collins  wrote:
> > 
> ...
> > 
> > A package specifies custom commands in its `pkg.yml` file.  There are
> > two types of commands: 1) pre_cmds (run before the build), and 2)
> > post_cmds (run after the build). 
> > 
> > ### EXAMPLE
> > 
> > Example (apps/blinky/pkg.yml):
> > 
> >pkg.pre_cmds:
> >scripts/pre_build1.sh: 100
> >scripts/pre_build2.sh: 200
> > 
> >pkg.post_cmds:
> >scripts/post_build.sh: 100
> > 
> > For each command, the string on the left specifies the command to run.
> > The number on the right indicates the command's relative ordering.
> 
> I wasn’t sure about need for numbering, but I can see a use if execution
> of custom commands depends on a syscfg variable being set.

I also wasn't sure about using numbered stages.  It would be simpler if
each package specified a sequence of commands rather than a mapping.

I was envisioning a case where package A generates some intermediate
file, then package B reads this file and produces the final output.  In
this scenario, B's scripts must run after A's scripts.  Assigning stages
to each command allows the user to enforce this ordering.  Yes you are
right - packages would need to use syscfg settings to define the stages
for this to be useful.

Another abstract use case: several scripts emit strings to some temp
files, then a final script would gather these files and generate a .c /
.h pair.  Admittedly, I can't think of a concrete example where this
would be needed.  

My feeling is we should continue using numeric stages here because a)
they could conceivably be useful, and b) it's already implemented this
way :).  I'm really uncertain about this though, so if you (or anyone
else) has a strong opinion about this, please just let me know and I'll
go with it.

> I think one useful addition would be to have another class of commands
> for linking phase. Often there’s a need to convert the binary to some
> other format, or slap in another header (like with DA1469x bootloader).
> Maybe have such an option for BSP packages? pkg.post_cmds would
> get executed after generating object files, and maybe pkg.post_link_cmds
> would get executed after linking?

That is a good idea.  How about this naming scheme:

* pre_build_cmds
* pre_link_cmds
* post_build_cmds

?

I also think we would need some additional environment variables to
make this useful:

* MYNEWT_PKG_BIN_DIR    (dir where .o / .a files get written)
* MYNEWT_PKG_BIN_FILE   (full path of .a file)

While we are at it, might as well add one more:

* MYNEWT_USER_WORK_DIR  (temp dir shared by all scripts)

It might be useful for scripts that feed input to other scripts.

Chris


Newt feature: run custom commands at build time

2019-09-23 Thread Christopher Collins
Hello all,

I have implemented a feature in newt: the ability to run custom shell
commands at build time.



Is there any extra functionality that you'd like to see here?  All
comments are appreciated.  I am duplicating the PR text here, but the
formatting is a little nicer in the PR.

Thanks,
Chris

---

A package specifies custom commands in its `pkg.yml` file.  There are
two types of commands: 1) pre_cmds (run before the build), and 2)
post_cmds (run after the build). 

### EXAMPLE

Example (apps/blinky/pkg.yml):

pkg.pre_cmds:
    scripts/pre_build1.sh: 100
    scripts/pre_build2.sh: 200

pkg.post_cmds:
    scripts/post_build.sh: 100

For each command, the string on the left specifies the command to run.
The number on the right indicates the command's relative ordering.

When newt builds this example, it performs the following sequence:

    scripts/pre_build1.sh
    scripts/pre_build2.sh
    [compile]
    [link]
    scripts/post_build.sh

If other packages specify custom commands, those commands would also be
executed during the above sequence.  For example, if another package
specifies a pre command with an ordering of 150, that command would run
immediately after `pre_build1.sh`.  In the case of a tie, the commands
are run in lexicographic order.

All commands are run from the project's base directory.  In the above
example, the `scripts` directory would be a sibling of `targets`.

### CUSTOM BUILD INPUTS

A custom pre-build command can produce files that get fed into the
current build.  A command can generate any of the following:

1) .c files for newt to compile.
2) .a files for newt to link.
3) .h files that any package can include.

.c and .a files should be written to "$MYNEWT_USER_SRC_DIR" (or any
subdirectory within). 

.h files should be written to "$MYNEWT_USER_INCLUDE_DIR".  The
directory structure used here is directly reflected by the includer.
E.g., if a script writes to:

$MYNEWT_USER_INCLUDE_DIR/foo/bar.h

then a source file can include this header with:

#include "foo/bar.h"

### DETAILS

1. Environment variables

In addition to the usual environment variables defined for debug and
download scripts, newt defines the following env vars for custom
commands:

* MYNEWT_USER_SRC_DIR: Path where build inputs get written.
* MYNEWT_USER_INCLUDE_DIR: Path where globally-accessible headers get
   written.
* MYNEWT_PKG_NAME: The full name of the package that specifies
   the command being executed.
* MYNEWT_APP_BIN_DIR:  The path where the current target's binary
   gets written.

These environment variables are defined for each process that a
custom command runs in.  They are *not* defined in the newt process
itself.  So, the following snippet will not produce the expected
output:

BAD Example (apps/blinky/pkg.yml):

pkg.pre_cmds:
    'echo $MYNEWT_USER_SRC_DIR': 100

You can execute `sh` here instead if you need access to the environment
variables, but it is probably saner to just use a script.
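
If you do need the inline form, one (untested) workaround is to spawn a
shell explicitly so the expansion happens in the child process.  The
quoting below is illustrative, and this assumes newt hands the command
string to the OS with normal word splitting:

    # apps/blinky/pkg.yml -- hypothetical workaround
    pkg.pre_cmds:
        'sh -c "echo $MYNEWT_USER_SRC_DIR"': 100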

2. Detect changes in custom build inputs

To avoid unnecessary rebuilds, newt detects if custom build inputs have
changed since the previous build.  If none of the inputs have changed,
then they do not get rebuilt.  If any of them have changed,
they all get rebuilt.

The $MYNEWT_USER_[...] base is actually a temp directory.  After the
pre-build commands have run, newt compares the contents of the temp
directory with those of the actual user directory.  If any differences
are detected, newt replaces the user directory with the temp directory,
triggering a rebuild of its contents.

3. Paths

Custom build inputs get written to the following directories:

* bin/targets/<target-name>/user/src
* bin/targets/<target-name>/user/include

Custom commands should not write to these directories.  They should use
the $MYNEWT_USER_[...] environment variables instead.


Re: cmake support or link with a static library

2019-09-18 Thread Christopher Collins
Hi Ondrej,

On Wed, Sep 18, 2019 at 09:07:57PM +0200, Ondrej Pilat wrote:
> Dear mynewt developers,
> 
> I tried to find information how to use mynewt with cmake or link with a
> static library. Unfortunately I failed. Where can I find more
> information or is it possible to integrate mynewt with a cmake project?
> 
> Our cmake project creates static library and we would like to link a
> mynewt project with it.

You can put your `.a` files in a Mynewt package's `src` directory.  When
newt links the final binary, it includes these `.a` files. You can put
these files in an existing package, or you can create a new one just for
this purpose.

For example:

apps/blinky/src/mylib.a

> p.s. Link https://mynewt.apache.org/faq/answers doesn't work.

Thanks for the heads up!  The FAQ can be found here:
https://mynewt.apache.org/latest/mynewt_faq/index.html

Do you remember where you found that bad link?

Chris


Re: newest BSP on OSX

2019-08-06 Thread Christopher Collins
On Mon, Aug 05, 2019 at 02:13:54PM -0700, Christopher Collins wrote:
> I am not sure how the brew distribution gets updated, but it looks
> like we have fallen behind here.  I will look into getting this
> updated today.

FYI- I have updated brew with newt and newtmgr 1.7.

Chris


Re: newest BSP on OSX

2019-08-05 Thread Christopher Collins
Hi Juergen,

Run the following:

newt upgrade && newt sync

(instead of `newt install`).  This will download updates for those
Mynewt repos.

You might also need to upgrade your version of the newt tool.  I am not
sure how the brew distribution gets updated, but it looks like we have
fallen behind here.  I will look into getting this updated today.  In
the meantime, you can download the 1.7.0 macOS binary here:
https://www.apache.org/dyn/closer.lua/mynewt/apache-mynewt-1.7.0/

(click the top link, then `apache-mynewt-newt-bin-osx-1.7.0.tgz`).

Chris

On Mon, Aug 05, 2019 at 12:53:08PM -0700, Dr. Juergen Kienhoefer wrote:
> Hi, I was wondering how I can get the latest BSP on a MacOS installation.
> Installing newt with brew gets version 1.5.0. The bsp is in 1.7.0
> 
> newt install
> 
> Skipping "apache-mynewt-core": already installed (1.5.0)
> 
> Skipping "mcuboot": already installed (0.0.0)
> 
> Skipping "apache-mynewt-nimble": already installed (1.0.0)
> 
> Skipping "mynewt_arduino_zero": already installed (0.0.0)
> 
> 
> then
> 
> 
> newt target set mytarget
> bsp=@apache-mynewt-core/hw/bsp/dialog_da1469x-dk-pro
> 
> newt build mytarget
> 
> 
> gets me that error:
> 
> Error: Could not resolve BSP package:
> @apache-mynewt-core/hw/bsp/dialog_da1469x-dk-pro
> 
> 
> my project file:
> 
> repository.apache-mynewt-core:
> 
> type: github
> 
> vers: 1-latest
> 
> user: apache
> 
> repo: mynewt-core
> 
> 
> Still does not give me that BSP.
> 
> 
> Any suggestions would be appreciated.


Deprecate "install" and "sync" commands?

2019-08-05 Thread Christopher Collins
Hello all,

The newt tool supports three "project commands":

* install
* upgrade
* sync

I always have a hard time remembering the particulars of commands like
these.  For example, when other package management systems support both
"update" and "upgrade", I inevitably mix them up.  I propose we remove
"install" and "sync", reducing the list to just one command.

First, here is a refresher for each of these three commands:

INSTALL: Downloads repos that aren't installed yet.  The downloaded
version matches what `project.yml` specifies.

UPGRADE: Performs an INSTALL, and then ensures the installed version of
each repo matches what `project.yml` specifies.  This is similar to
INSTALL, but it also operates on already-installed repos.

SYNC: Fetches and pulls the latest for each repo, but does not change
the branch (version).  Only necessary for Mynewt repo versions that
point to a git branch (e.g., 0.0.0 usually points to "master").

The distinction between these commands is somewhat subtle, so please
ask if it isn't clear.

I would like to remove INSTALL and SYNC.  I propose we do this as
follows:

A. Remove SYNC:
A recent newt change allows us to deprecate SYNC:
https://github.com/apache/mynewt-newt/pull/312.  With this change,
`newt upgrade` always grabs the latest commit for repos with
branch-versions.  So UPGRADE now subsumes SYNC.

B. Remove INSTALL:
I have never found INSTALL to be very useful.  I find it easier to
simply use UPGRADE instead.  It's just my experience, but there is never
a time when I want to install new repos without updating existing ones.
If you don't want to upgrade existing repos, then you can just not
change their versions in `project.yml`.

Alternatively, we don't have to go so far as to deprecate and remove
these commands.  We could keep them, but just make them synonyms for
`upgrade`.  This is a less disruptive option.

So I see three options:

1. Change nothing.
2. Make "install" and "sync" synonyms for "upgrade".
3. Deprecate "install" and "sync".

My vote is for 3, but I would be happy with 2.

Thoughts?

Chris


Changes in log configuration

2019-08-01 Thread Christopher Collins
Hello all.  I wanted to report on some logging changes that have gone
into master of mynewt-core.  All questions and comments are welcome.

Thanks,
Chris

### SUMMARY

There have been some changes to how Mynewt logs get configured.  Now,
log modules are defined in `syscfg.yml` files rather than in C code.
The old method of defining logs still works, but the recommended way
(and the way mynewt-core and mynewt-nimble do it) has changed.

### IMPORTANT

Setting LOG_LEVEL=0 (debug) no longer enables full verbosity for all
logs.  If you want full verbosity, you need to set LOG_LEVEL=0 *and*
all module level settings to 0.  Use the following command to get a
list of log level settings that your target uses:

newt target logcfg show <target-name>

To *decrease* system-wide log verbosity, you can simply override
`LOG_LEVEL` with the desired value (as always).

### SUGGESTED USE

* Set LOG_LEVEL to 0 (DEBUG).
* Tune individual module levels as desired.

### RATIONALE

This change was made for the following reasons:

1. Ability to set log level per module at build time. Prior to the
change, setting a global log level of DEBUG (LOG_LEVEL=0) might produce
an image that is too big, and would produce lots of uninteresting log
messages.  Now, we can enable debug logging only for the modules we are
interested in.

2. Easier to visualize system log configuration (see below).

3. Log modules can be remapped at build time in case of conflicts.

DETAILS:

*** Terminology:

* LOG: A medium where log entries get written.  Examples are: 1) the
  console, 2) a flash circular buffer (FCB), 3) a RAM circular buffer
  (cbmem).

* ENTRY: A single unit of log data.  Every entry consists of a header
  (sequence number, timestamp, etc.) and a body.

* MODULE: Modules are mapped to one or more logs at runtime.  When
  application code writes a log entry, it specifies the numeric module
  ID to write to.  The `sys/modlog` package then routes the entry to
  the logs that the module is mapped to.  By default, all modules are
  mapped to the console.

*** Defining a log module in a package's `syscfg.yml` file:

1. Define two new settings:
* module ID
* module level

2. Define the module itself:
* syscfg.logs

Example (from `sys/log/common/syscfg.yml`):

syscfg.defs:
    DFLT_LOG_MOD:
        description: 'Numeric module ID to use for default log messages.'
        value: 0
    DFLT_LOG_LVL:
        description: 'Minimum level for the default log.'
        value: 1

syscfg.logs:
    DFLT_LOG:
        module: MYNEWT_VAL(DFLT_LOG_MOD)
        level: MYNEWT_VAL(DFLT_LOG_LVL)

*** Visualize a target's log configuration:

There are two newt commands for this:

newt target logcfg brief <target-name>
newt target logcfg show <target-name>

Examples:

# (brief)

newt target logcfg brief slinky-nordic_pca10056

Brief log config for targets/slinky-nordic_pca10056:
       LOG | MODULE | LEVEL
-----------+--------+-----------
  DFLT_LOG | 0      | 1 (INFO)
IMGMGR_LOG | 176    | 0 (DEBUG)

# (show)

newt target logcfg show slinky-nordic_pca10056

Log config for targets/slinky-nordic_pca10056:
DFLT_LOG:
    Package: @apache-mynewt-core/sys/log/common
    Module:  0          [DFLT_LOG_MOD]
    Level:   1 (INFO)   [DFLT_LOG_LVL]

IMGMGR_LOG:
    Package: @apache-mynewt-core/mgmt/imgmgr
    Module:  176        [IMGMGR_LOG_MOD]
    Level:   0 (DEBUG)  [IMGMGR_LOG_LVL]

*** Macros:

At build time, newt generates the following macros for each defined
log:

    <LOG_NAME>_DEBUG(...)
    <LOG_NAME>_INFO(...)
    <LOG_NAME>_WARN(...)
    <LOG_NAME>_ERROR(...)
    <LOG_NAME>_CRITICAL(...)

These macros take a printf-style format string and an optional list of
arguments.

A macro writes a log entry if both of the following are true:
* Module level is <= the macro level.
* LOG_LEVEL is <= the macro level.

Otherwise, the macro evaluates to a no-op [*].

So, using the DFLT_LOG above as an example, if

DFLT_LOG_LVL=3  (ERROR)
LOG_LEVEL=0 (DEBUG)

then `DFLT_LOG_ERROR()` and `DFLT_LOG_CRITICAL()` write a log entry,
whereas the `DEBUG`, `INFO`, and `WARN` macros are no-ops.

Another example: if

DFLT_LOG_LVL=3  (ERROR)
LOG_LEVEL=4 (CRITICAL)

then only `DFLT_LOG_CRITICAL()` has any effect.  The other macros
evaluate to no-ops.

[*] These aren't strictly no-ops.  The printf-style argument list gets
evaluated and the results are discarded.  This is done to prevent "set
but unused" warnings.

*** Recap:

* Log modules are defined in `syscfg.yml` files.
* To change a module's ID or level, override the appropriate syscfg
  setting.
* LOG_LEVEL acts as a minimum log level for all modules.
* To view a target's log configuration, use the following newt commands:
    * newt target logcfg brief <target-name>
    * newt target logcfg show <target-name>
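
As a concrete (hypothetical) example of the suggested use above, a
target could set the global floor to DEBUG while quieting the default
module:

    # targets/my-target/syscfg.yml (hypothetical target)
    syscfg.vals:
        LOG_LEVEL: 0       # global minimum: DEBUG
        DFLT_LOG_LVL: 3    # default module only logs ERROR and above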

### OPEN QUESTIONS

1. Do we need a way to set all of a target's modules to a particular
level?  This didn't seem very useful to me.  

Re: Mynewt Installation Query

2019-05-31 Thread Christopher Collins
Hi Asmita,

Could you please send the contents of your `project.yml` file?

Also, please include the output of the following commands:

newt version
newt info

Thanks,
Chris

On Fri, May 31, 2019 at 01:17:54PM +0530, Asmita Jha wrote:
> Hello,
> 
> When I am trying to change vers to 0-dev
> 
>  
> 
> I am getting error as
> 
>  
> 
> Error: cannot normalize version "0-dev" for repo "apache-mynewt-core"; no
> mapping to numeric version
> 
> Thanks & regards,
> 
> Asmita Jha
> 
> Firmware Developer
> 
>   jha.asm...@zerowav.com | +91 9960845898
> 
>   www.zerowav.com
> 
>  
> 
> From: Asmita Jha [mailto:jha.asm...@zerowav.com] 
> Sent: Friday, May 31, 2019 12:47 PM
> To: 'dev@mynewt.apache.org' 
> Subject: Mynewt Installation Query
> 
>  
> 
> Hello,
> 
>  
> 
> I recently started with Mynewt OS.
> 
> I went through the step by documentation for its installation.
> 
> I have successfully installed newt on my Windows10 PC.
> 
> I am following the steps as given on your website to create my first
> project.
> 
> Unfortunately, I am getting following erroe o
> 
> :
> 
>  
> 
>  
> 
> Admin@ZWPL-COM2 MINGW64 ~/dev/myproj
> 
> $ newt install
> 
> Error: project.yml file specifies nonexistent repo versions:
> 
> apache-mynewt-core,==1.6.0
> 
>  
> 
>  
> 
> I tried to change vers in project.yml file but still getting the same error.
> 
> Kindly help me out to resolve this issue. Hope to hear from you soon.
> 
>  
> 
>  
> 
>  
> 
> Thanks & regards,
> 
> Asmita Jha
> 
> Firmware Developer
> 
> jha.asm...@zerowav.com   | +91 9960845898
> 
> www.zerowav.com  
> 
>  
> 


Re: Regarding split images

2019-05-03 Thread Christopher Collins
On Thu, May 02, 2019 at 11:05:11AM +0200, joseph reveane wrote:
> Hi Chris,
> 
> I've just tried your latest fix and I still get the same error:
> 
> *Error: Two app packages in build: apps/splitty, apps/bleprph*

Huh... I am really not sure.  I believe I am using the same target as
you.  Using the two newt fixes I mentioned earlier (since merged to
master), I am able to build this target.  Here is my output:

[ccollins@ccollins:~/proj/myproj]$ newt info
Repository info:
* apache-mynewt-core: 77f75d4bd389d38c3022b815e8684ff49a35ff2f, 0.0.0
* apache-mynewt-nimble: 976bb8f84bf9547efdea444ae79a62d5a355a613, 0.0.0

[ccollins@ccollins:~/proj/myproj]$ newt target show splitty-nrf52dk
targets/splitty-nrf52dk
app=@apache-mynewt-core/apps/splitty
bsp=@apache-mynewt-core/hw/bsp/nordic_pca20020
build_profile=optimized
loader=@apache-mynewt-nimble/apps/bleprph
syscfg=BLE_LL_CFG_FEAT_LE_ENCRYPTION=0:BLE_SM_LEGACY=0

[ccollins@ccollins:~/proj/myproj]$ newt build splitty-nrf52dk
Building target targets/splitty-nrf52dk
[...]
Target successfully built: targets/splitty-nrf52dk

Could you please execute the following two commands and send the output?

newt info
newt target show <target-name>

Thanks,
Chris

> 
> Regards.
> 
> /joseph
> 
> On Wed, 1 May 2019 at 20:35, Christopher Collins wrote:
> 
> > Hi Joseph,
> >
> > On Mon, Apr 29, 2019 at 04:39:50PM +0200, joseph reveane wrote:
> > > Hi Chris,
> > >
> > > I've modified my app setup to point to the  nimble repo:
> > >
> > > *$ newt target show split-apptargets/split-app
> > > app=@apache-mynewt-core/apps/splitty
> > > bsp=@apache-mynewt-core/hw/bsp/nordic_pca20020
> > > build_profile=optimizedloader=@apache-mynewt-nimble/apps/bleprph
> > > syscfg=BLE_LL_CFG_FEAT_LE_ENCRYPTION=0:BLE_SM_LEGACY=0*
> > >
> > > Then I've treid to build it:
> >
> > [...]
> >
> > It seems split image support is indeed broken.  I still cannot get the
> > same error message as you, but I encountered a different error when I
> > tried using your target.
> >
> > I believe I have fixed this second issue.  The PR is here:
> > https://github.com/apache/mynewt-newt/pull/295.  Could you please give
> > it a shot and let me know if it works?
> >
> > Thanks,
> > Chris
> >


Re: Regarding split images

2019-05-01 Thread Christopher Collins
Hi Joseph,

On Mon, Apr 29, 2019 at 04:39:50PM +0200, joseph reveane wrote:
> Hi Chris,
> 
> I've modified my app setup to point to the  nimble repo:
> 
> *$ newt target show split-apptargets/split-app
> app=@apache-mynewt-core/apps/splitty
> bsp=@apache-mynewt-core/hw/bsp/nordic_pca20020
> build_profile=optimizedloader=@apache-mynewt-nimble/apps/bleprph
> syscfg=BLE_LL_CFG_FEAT_LE_ENCRYPTION=0:BLE_SM_LEGACY=0*
> 
> Then I've treid to build it:

[...]

It seems split image support is indeed broken.  I still cannot get the
same error message as you, but I encountered a different error when I
tried using your target.

I believe I have fixed this second issue.  The PR is here:
https://github.com/apache/mynewt-newt/pull/295.  Could you please give
it a shot and let me know if it works?

Thanks,
Chris


Re: Regarding split images

2019-04-27 Thread Christopher Collins
Hi Joseph,

On Sat, Apr 27, 2019 at 04:01:17PM +0200, joseph reveane wrote:
> Hi Chris,
> 
> I've fetched your fixes and tried to build a split image with the same
> parameters as the
> ones I used to open this issue:
> 
> 1) loader app:
> 
> 
> 
> 
> 
> 
> *newt target show thingy-loadertargets/thingy-loader
> app=@apache-mynewt-core/apps/bleprph
> bsp=@apache-mynewt-core/hw/bsp/nordic_pca20020
> build_profile=optimized
> syscfg=BLE_LL_CFG_FEAT_LE_ENCRYPTION=0:BLE_SM_LEGACY=0*
> 
> 2) user's application:
> 
> 
> 
> 
> 
> 
> 
> *newt target show split-apptargets/split-app
> app=@apache-mynewt-core/apps/splitty
> bsp=@apache-mynewt-core/hw/bsp/nordic_pca20020
> build_profile=optimizedloader=@apache-mynewt-core/apps/bleprph
> syscfg=BLE_LL_CFG_FEAT_LE_ENCRYPTION=0:BLE_SM_LEGACY=0*
> 
> 3) Build output:
> 
> 
> 
> *newt build -v split-appBuilding target targets/split-app2019/04/27
> 15:58:05.226 [WARNING] Transient package @apache-mynewt-core/apps/bleprph
> used, update configuration to use linked package instead
> (@apache-mynewt-nimble/apps/bleprph)Error: Two app packages in build:
> apps/bleprph, apps/splitty*
> 
> So, this must be an other issue than the one you've fixed then...

This looks like a bug involving transient packages.  The package
`@apache-mynewt-core/apps/bleprph` is transient; it is just a link to
the real bleprph (`@apache-mynewt-nimble/apps/bleprph`).

Can you please try changing your target so that its `loader` setting
points to the "real" version of this app (in the nimble repo)?

Thanks,
Chris


Re: Regarding split images

2019-04-26 Thread Christopher Collins
Hi Joseph,

On Sat, Apr 20, 2019 at 09:55:27AM +0200, joseph reveane wrote:
> Hi,
> 
> I'm looking for examples on how to build split and single images for NRF52
> based devices.
> The goal is obviously to get more FLASH room for the application.
> I didn't find any example to build a single image in the Git source tree and
> building a split image as described in the documentation failed for me.
> 
> Thanks in advance for your help.
> Regards.

It seems newt's split image support was broken.  I submitted a fix here:
https://github.com/apache/mynewt-newt/pull/293.  It has not been merged
yet, but you can try it out to see if it solves your issue.

Chris


Re: newt test suite for a bsp port

2019-04-12 Thread Christopher Collins
Hi Inderpal,

On Fri, Apr 12, 2019 at 02:38:27PM +0530, inderpal singh wrote:
> Hi There,
> 
> How can I run existing test suite for particular BSP instead of native.
> 
> #newt test all
> instead of native how can I run this on my target.

There are two kinds of Mynewt unit tests:

1. Self tests.
2. Hosted tests.

Self tests run in the simulator via "newt test".

Hosted tests run on actual hardware.  These need to be built into a
Mynewt app which gets loaded onto a device.

Hosted tests are more challenging to use than self tests because:

1. The system does not reset itself between each unit test.  Your unit
   tests need to ensure a clean state when they are set up.
2. You need some extra tooling to retrieve test results from the device.
3. The tests must run in a constrained embedded environment.

`apps/testbench` is an example of an app that runs hosted tests.  You
use the `newtmgr` tool
(http://mynewt.apache.org/latest/newtmgr/index.html) to run tests and
collect test results.

To run all hosted tests:

newtmgr run test all

To read results:

newtmgr log show testlog

A few notes:

* You will need to specify a connection profile or connection string
  on the command line; see the newtmgr documentation.

* The `log show` command needs to be executed many times to collect
  all the test results.

I recommend starting with this testbench application.  Once you get it
running, you can add additional tests.

Chris


Re: [VOTE] Release Apache Mynewt 1.6.0-rc2 and Apache NimBLE 1.1.0-rc2

2019-04-04 Thread Christopher Collins
On Tue, Apr 02, 2019 at 11:28:27PM +0200, Szymon Janc wrote:
> [X] +1 Release this package
> [ ]  0 I don't feel strongly about it, but don't object
> [ ] -1 Do not release this package because...

+1 (binding)

Chris


Re: [VOTE] Release Apache Mynewt 1.6.0-rc1 and Apache NimBLE 1.1.0-rc1

2019-03-30 Thread Christopher Collins
On Fri, Mar 29, 2019 at 09:22:27PM +0100, Szymon Janc wrote:
> [X] +1 Release this package
> [ ]  0 I don't feel strongly about it, but don't object
> [ ] -1 Do not release this package because...

+1 (binding)

Chris


Testutil again

2019-03-04 Thread Christopher Collins
Hello all,

More unit testing excitement!  I think there are some valuable changes
we could make to the `test/testutil` API, so I wanted to share some
ideas with the community.  I introduced many of the issues that I want
to focus on, so I don't really feel obligated to be polite here :).  I
just wanted to give that disclaimer in case it feels like I'm being
overly harsh.

I already touched on one of testutil's problems in an earlier email: a
lack of support for standalone self tests.  Recall that a self test is
something that runs in the simulator when you type `newt test
<pkg-name>`.  In my earlier email, I expressed that these tests should
automatically execute `sysinit()` before running.

Now I want to discuss a second problem that I see with this library: an
abundance of configurable callbacks that makes the API too complicated.
I think we can remove about half of the callbacks without losing any
functionality.  The execution flow of a test suite is described in a
comment in `testutil.h`:

   TEST_SUITE
tu_suite_init -> ts_suite_init_cb
tu_suite_pre_test -> ts_case_pre_test_cb
tu_case_init -> tc_case_init_cb
tu_case_pre_test -> tc_case_pre_test_cb
TEST_CASE
tu_case_post_test -> tc_case_post_test_cb
tu_case_pass/tu_case_fail -> ts_case_{pass,fail}_cb
tu_case_complete
tu_suite_post_test -> ts_case_post_test_cb
tu_suite_complete -> ts_suite_complete_cb

That is a lot of callbacks for a suite with one test case!  Let's go
through them one by one.

tu_suite_init:
Description: Gets called once per suite, before the suite runs.
Proposal: Remove.
Rationale: If a suite needs to do any setup, it can do so directly
   at the top of its block.

tu_suite_pre_test:
Description: Gets called at the start of each case in the suite.
Proposal: Keep.
Rationale: This callback is the very reason why we group test cases
   into suites.  Similar test cases need to start from the
   same state.

tu_case_init:
Description: Same as tu_suite_pre_test.
Proposal: Remove.
Rationale: Redundant.

tu_case_pre_test:
Description: Gets called once at the start of the next test case.
Proposal: Remove.
Rationale: A test case is just a block of C code.  If it needs to
   perform some setup, it can do it directly at the start of
   the test.

tu_case_post_test:
Description: Gets called at the end of the current / next test case.
Proposal: Keep.
Rationale: A test case may terminate prematurely due to a failed
   `TEST_ASSERT_FATAL()`.  We need a callback to perform
   clean up because the test case may not reach the end of
   its block.

tu_case_pass / tu_case_fail:
Description: One of these gets called whenever a test case
 completes.
Proposal: Keep.
Rationale: We need some way to report results.

tu_suite_post_test:
Description: Gets called at the end of each case in the suite.
Proposal: Remove.
Rationale: Test cases can already clean up individually using
   `tu_case_post_test`.  I can see how common clean up code
   might be useful, but I don't think this is needed often
   enough to justify the added API complexity.  For
   reference, this function is only used in one place
   (NFFS), and it is used unnecessarily - the calls can be
   removed with no effect on the tests.

tu_suite_complete:
Description: Gets called at the end of each test suite.
Proposal: Remove.
Rationale: If a test suite needs to perform any cleanup (unlikely),
   it can do so directly at the end of its code block.
   Unlike a test case, a test suite never terminates early,
   so the code at the end of a suite always gets executed.

If we make the proposed removals, the execution flow looks like this:

   TEST_SUITE
   TEST_CASE
   tu_suite_pre_test_cb
    <test case body>
   tu_case_pass/tu_case_fail
   tu_case_post_test_cb

I think that looks better!

I also wanted to talk about how I think these callbacks should get
configured:

tu_case_pass / tu_case_fail:
Configured at the start of the application.

tu_suite_pre_test_cb:
Always configured at the top of a test suite body.

tu_case_post_test_cb:
Always configured at the top of a test case body.

Finally, I've added an example test suite at the bottom of this
email.  This is a totally ridiculous set of tests that verify the
machine's (or compiler's) ability to do arithmetic.  Ridiculous, but it
is just intended to demonstrate the proposed API changes.

I plan to submit a PR with the proposed changes soon.  So feel free to
comment here, or wait for the PR.  In any case, all comments are
welcome.

Thanks,
Chris


--





Re: Mynewt test facilities

2019-03-04 Thread Christopher Collins
Thanks, Will.

Responses to inline comments inline :).

On Sun, Mar 03, 2019 at 06:18:46PM -0800, will sanfilippo wrote:
> > ### PROPOSALS
> > 
> > 1. (testutil): Modify `TEST_CASE()` to call
> > `sysinit()` if `MYNEWT_VAL(SELFTEST)` is enabled.
> Would we want all TEST_CASE() to call sysinit if a single syscfg is
> defined? Maybe for some tests you want it and some you don't?

Before I address this, I want to clarify my proposal a bit.  I did a
poor job of setting up the context, so I'll try again here.  Also, you
would get tired of seeing "IMHO" at the end of every sentence, so I
won't write it.  Please just understand that all the below is just my
opinion :).

Test cases should be as self-contained as possible.  We don't want any
restrictions on how test cases are ordered.  For example, something like
this is bad (hypothetical):

/* Test basic FCB functionality. */
test_case_fcb_basic();

/* The above test initialized and populated the FCB.  Make sure we
 * can delete an entry from it.
 */
test_case_fcb_delete();

The ordering of these test cases matters.  The second case depends on
side effects from the first case.  These kinds of tests are difficult to
maintain, and often it is difficult to tell what is even being tested.

Every test case should start from a clean state.  If
`test_case_fcb_delete()` needs a pre-populated FCB, it should start by
initializing and populating an FCB with the exact contents it needs.

This is why it is good to execute `sysinit()` at the start of each test
case - it lets us clean up state left over from prior test cases.  This
eliminates a lot of manual clean up code, and it helps ensure
correctness of old tests as new ones are added.

Ideally we could just call `sysinit()` at the start of each test case,
whether the test is simulated or performed on real hardware.
Unfortunately, we can't call `sysinit()` in hw tests.  The problem is
that hw tests specifically need to maintain some state between test
cases.  Calling `sysinit()` in a hw test case would reset the runtest
package, reinitialize the test result logs, and otherwise make the
system unusable as a test server.

So I think we should call `sysinit()` at the start of a test case
whenever possible.  In other words, call `sysinit()` for every self
test.  I don't think we should make this behavior conditional on a
syscfg setting, but as I said, I am totally open to opposing viewpoints.

I do think you raise a good point, though.  The semantics of
`TEST_CASE()` should not differ between self tests and hw tests.
Instead, we should create a new macro which creates a test case and
calls `sysinit()` at the top.

That is:

`TEST_CASE()` - creates a test case that does NOT call `sysinit()`.
`TEST_CASE_SELF()` - creates a test case that DOES call `sysinit()`.

(macro name off the top of my head.)

Self tests would use `TEST_CASE_SELF()`; hw tests would use
`TEST_CASE()`.

At first, I thought it would be confusing to have two macros.  Now I
think it is even more confusing to have one macro with varying
semantics.
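A rough sketch of how the two macros could differ - `sysinit()` is stubbed out here, and the macro bodies are invented purely for illustration of the proposal:

```c
#include <assert.h>

/* Stub standing in for Mynewt's real initialization function. */
static int sysinit_calls;
static void sysinit(void) { sysinit_calls++; }

/* Hypothetical: a test case that does NOT reset system state (hw tests). */
#define TEST_CASE(name)                  \
    static void name ## _body(void);     \
    static void name(void)               \
    {                                    \
        name ## _body();                 \
    }                                    \
    static void name ## _body(void)

/* Hypothetical: a test case that starts from a clean state (self tests). */
#define TEST_CASE_SELF(name)             \
    static void name ## _body(void);     \
    static void name(void)               \
    {                                    \
        sysinit();                       \
        name ## _body();                 \
    }                                    \
    static void name ## _body(void)

TEST_CASE(hw_case) { /* runs against whatever state already exists */ }
TEST_CASE_SELF(self_case) { /* runs against freshly initialized state */ }
```

With this split, the semantics are visible at the definition site: `self_case` always begins with a `sysinit()` call, `hw_case` never does.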

I have some more changes I would like to make to the testutil API, but I
will save that for a future email (or PR).

> > This example illustrates a few requirements of taskpool:
> > 
> >1) A taskpool task (tp-task) is allowed to "run off" the end of its
> >   handler function.  Normally, it is a fatal error if a Mynewt
> >   task handler returns.
> > 
> Not sure why I do not like this but is there some need to have a task
> basically return?

It is not strictly needed, but it is a convenient way for a task to
signal that it has completed.  Otherwise, each of these temporary tasks
would need to set some condition variable and keep looping.  That isn't
terribly complicated, but I do think it is more complicated than this
needs to be.  These tasks are not meant to loop endlessly like most, so
why require them to be written that way?

Just to clarify - I am not proposing a change to the kernel scheduler.
The handlers for these tp-tasks would just be wrapped with something
that provides this special behavior.
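A self-contained sketch of that wrapper idea, using a POSIX thread in place of a Mynewt task (all names here are hypothetical, not the proposed taskpool API):

```c
#include <assert.h>
#include <pthread.h>

/* Hypothetical tp-task: the pool wraps the user handler so the handler
 * can simply run off its end instead of looping forever. */
struct tp_task {
    void (*handler)(void *arg);
    void *arg;
    pthread_t thread;
    int done;                   /* set by the wrapper when handler returns */
};

static void *
tp_task_wrapper(void *varg)
{
    struct tp_task *t = varg;

    t->handler(t->arg);         /* user handler may just return */
    t->done = 1;                /* completion signaled automatically */
    return NULL;
}

static int
tp_task_start(struct tp_task *t, void (*handler)(void *), void *arg)
{
    t->handler = handler;
    t->arg = arg;
    t->done = 0;
    return pthread_create(&t->thread, NULL, tp_task_wrapper, t);
}

static int work_done;

static void
worker(void *arg)
{
    (void)arg;
    work_done = 1;              /* do the test work, then just return */
}
```

The test case never sees the wrapper; it just starts the task and waits for completion, with no condition-variable boilerplate in the handler itself.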

> Seems fairly reasonable. Is taskpool_create() designed to save the test
> developer some time creating tasks or does it have some other
> functionality?

Its main purpose is to allow tasks to be reused among a set of test
cases.  This isn't so important for self tests, but in hw tests, it is
wasteful for each test case to allocate its own pool of tasks and
stacks.  Since only one test runs at a time, these test cases can share
a pool of tasks.

Thanks,
Chris


Mynewt test facilities

2019-02-27 Thread Christopher Collins
Hello all,

In this email, I would like to discuss Mynewt's test facilities.

### INTRO (AKA TL;DR)

Unit tests come in two flavors: 1) simulated, and 2) real hardware.
Writing a test capable of running in both environments is wasted
effort.  Let's revisit Mynewt's test facilities with a clear
understanding that they must solve two different use cases.

In this email, I want to:
1. Clarify the differences between the two test environments.
2. Propose some high-level changes to Mynewt's test facilities.

### TEST ENVIRONMENTS

The two test environments are summarized below.

# SIMULATED

Simulated tests ("self tests"):
* Are executed with `newt test`.
* Automatically execute a set of tests.  No user intervention
  required.
* Report results via stdout/stderr and exit status.
* Terminate when the tests are complete.

A typical self test is a very basic application.  Usually, a self test
contains the package-under-test, the testutil package, and not much
else.  Most self tests don't even include the OS scheduler; they are
simply a single-threaded sequence of test cases.  For these simple
tests, `sysinit()` can be executed between each test case, obviating
the need for "setup" and "teardown" code.  Self tests are allowed to
crash; such crashes get reported as test failures.  Finally, self tests
have effectively unlimited memory at their disposal.

# REAL HARDWARE

Test apps that run on real hardware ("hw tests"):
* Run a test server.  The user executes tests on demand with the
  newtmgr `run` command.
* Typically contain test suites from several packages.
* Record results to a Mynewt log.
* Run indefinitely.  The device does not reset itself between
  tests.

Hw tests are more complex than self tests.  These test apps need to
include the scheduler, the newtmgr server, a transport (e.g., BLE), and
a BSP and all its drivers.  Test code cannot assume a clean state at
the start of each test.  Instead, the code must perform manual clean up
after each test case.  Hw test cases must be idempotent and they must
not crash.  Finally, memory is constrained in this environment.

# PREFER SELF

Clearly, self tests are easier to write than hw tests.  Equally
important, self tests are easier to run - they can run in automated CI
systems without the need for a complicated test setup.

So self tests should always be preferred over hw tests.  Hw tests should
only be used when necessary, i.e., when testing drivers or the system
as a whole.  I won't go so far as to say there is never a reason to run
the same test in both environments, but it is so rarely needed that it
is OK if it requires some extra effort from the developer.

### TEST FACILITIES

Mynewt has two unit test libraries: `testutil` and `runtest`.

# TESTUTIL

Testutil is a primitive unit test framework.  There really isn't
anything novel about this library - suites, cases, pass, fail, you get
the idea :).  This library is used in both test environments.

I have one concern about this library:

In self tests, does `sysinit()` get called automatically at the
start of each test case?

Currently, the answer is no.

# RUNTEST

Runtest is a grab bag of "other" test functionality:

1. Command handler for the `run` newtmgr command.
2. API for logging test results.
3. Generic task creation facility for multithreaded tests.

Features 1 and 2 are only needed by hw tests.  Feature 3 is needed by
both self tests and hw tests.

### PROPOSALS

1. (testutil): Modify `TEST_CASE()` to call
`sysinit()` if `MYNEWT_VAL(SELFTEST)` is enabled.

2. (runtest): Remove the task-creation functionality from `runtest`.
This functionality can go in a new library (call it "taskpool" for the
sake of discussion).

3. (taskpool) Further, I think the taskpool functionality could be
tailored specifically towards unit tests.  The remainder of this email
deals with this proposal.

I will use an example as motivation for this library.  Here is a simple
test which uses our to-be-defined taskpool library.  This example tests
that a `cbmem` can handle multiple tasks writing to it at the same
time.  Note: this is not meant to be good test code :).

static struct cbmem cbm;

/**
 * Low-priority task handler.  Writes 20 entries to the cbmem.
 * Uses a rate of 1 OS tick per write.
 */
static void
task_lo(void *arg)
{
    int i;

    for (i = 0; i < 20; i++) {
        cbmem_append(&cbm, "hello from task_lo", 18);
        os_time_delay(1);
    }
}

/**
 * High-priority task handler.  Writes 10 entries to the cbmem.
 * Uses a rate of 2 OS ticks per write.
 */
static void
task_hi(void *arg)
{
    int i;

    for (i = 0; i < 10; i++) {
        cbmem_append(&cbm, "hello from task_hi", 18);
        os_time_delay(2);
    }
}

TEST_CASE(cbmem_thread_safety) {
    /* Initialize cbmem. */
    uint8_t buf[1000];

Re: Storing Bonding information in NVS memory

2019-02-13 Thread Christopher Collins
Hi Prasad,

On Wed, Feb 13, 2019 at 03:13:26PM +0530, prasad wrote:
> Hi all,
> 
> As it happens, I fixed the bug in my code. It now correctly retrieves 
> LTKs and bond is maintained even after reboot.
> 
> Apart from this I just wanted to understand reason behind including 
> 'idx' in structure 'ble_store_key_sec'. Could you please help me 
> understand use-case behind including 'idx'?
> 
> /** Number of results to skip; 0 means retrieve the first match. */
>          uint8_t idx;

The `idx` field is useful when your search criteria matches several
entries and you want to process them one by one.  For example, the
`ble_store_iterate()` function constructs a `ble_store_key_sec` object
with the following values:

{
    /**
     * Key by peer identity address;
     * peer_addr=BLE_ADDR_NONE means don't key off peer.
     */
    peer_addr = *BLE_ADDR_ANY,

    /** Key by ediv; ediv_rand_present=0 means don't key off ediv. */
    ediv = 0,

    /** Key by rand_num; ediv_rand_present=0 means don't key off rand_num. */
    rand_num = 0,

    ediv_rand_present = 0,

    /** Number of results to skip; 0 means retrieve the first match. */
    idx = 0,
}

Then it repeatedly calls `ble_store_read()`, incrementing the `idx`
member each time.  This allows all stored bonds to be processed.
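The same pattern can be modeled in a few lines of plain C.  The structures below are deliberately trimmed stand-ins for the real `ble_store` types, just to show the skip-count loop:

```c
#include <assert.h>

/* Simplified model of the idx-based lookup described above; the real
 * ble_store key and value types carry much more information. */
struct key_sec { int idx; };             /* search criteria: skip count only */
struct value_sec { const char *peer; };

static const struct value_sec bonds[] = {
    { "peer-a" }, { "peer-b" }, { "peer-c" },
};
static const int num_bonds = sizeof bonds / sizeof bonds[0];

/* Return 0 and fill *out with the (key->idx)'th match; nonzero if none. */
static int
store_read(const struct key_sec *key, struct value_sec *out)
{
    if (key->idx >= num_bonds) {
        return -1;                       /* stands in for BLE_HS_ENOENT */
    }
    *out = bonds[key->idx];
    return 0;
}

/* Visit every match by incrementing idx, as ble_store_iterate() does. */
static int
count_bonds(void)
{
    struct key_sec key = { .idx = 0 };
    struct value_sec val;
    int count = 0;

    while (store_read(&key, &val) == 0) {
        count++;
        key.idx++;
    }
    return count;
}
```

Each call with a larger `idx` skips one more match, so the loop terminates exactly when the store runs out of entries matching the criteria.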

Chris


Re: Disconnect reason message Nimble / BTShell

2019-02-05 Thread Christopher Collins
Hi Fred,

On Tue, Feb 05, 2019 at 02:25:43PM -0500, Copper Dr wrote:
> I'm trying to figure out how to decode these disconnections.
> 
> Reason 688 (0x02B0) and 573 (0x023D)
> 
> I checked
> http://mynewt.apache.org/latest/network/docs/ble_hs/ble_hs_return_codes.html
> and the disconnect code does not make any sense.
> 
> The 573 is the one I'm really interested in it happened just before 65,535
> commands were sent and is repeatable.

I agree - 688 (0x2b0) does not make sense.  This is not a valid error
code, so there must be some memory corruption or some other bug at play
here.

As you noticed, 573 (0x23d) is a legitimate error code:
BLE_ERR_CONN_TERM_MIC.  I don't know if there is a relation to the
number 65535, but that would be an interesting bug!  I will let one of
the controller experts chime in.
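For reference, the host reports controller-originated disconnect reasons as a base offset plus the raw HCI error code (0x200 in current NimBLE - check `BLE_HS_ERR_HCI_BASE` in `ble_hs.h` for your tree).  A hedged sketch of the decoding:

```c
#include <assert.h>

/* Assumption: NimBLE host encodes controller errors as 0x200 + raw
 * HCI error code (BLE_HS_ERR_HCI_BASE in ble_hs.h). */
#define HS_HCI_BASE 0x200

static int
reason_to_hci(int reason)
{
    if (reason < HS_HCI_BASE || reason >= HS_HCI_BASE + 0x100) {
        return -1;                  /* not an HCI-based reason code */
    }
    return reason - HS_HCI_BASE;    /* raw HCI error code */
}
```

Here `reason_to_hci(573)` yields 0x3d, which the Bluetooth spec defines as "Connection Terminated due to MIC Failure", while 688 maps to 0xb0, which the spec does not define - consistent with the corruption theory above.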

Chris

> 
> 
> disconnect; reason=688 handle=1 our_ota_addr_type=0
> our_ota_addr=01:02:03:04:05:06 our_id_addr_type=0
> our_id_addr=01:02:03:04:05:06 peer_ota_addr_type=1
> peer_ota_addr=cb:a9:46:52:45:44 peer_id_addr_type=1
> peer_id_addr=cb:a9:46:52:45:44 conn_itvl=40 conn_latency=0
> supervision_timeout=256 encrypted=1 authenticated=1 bonded=1
> 
> disconnect; reason=573 handle=2 our_ota_addr_type=0
> our_ota_addr=01:02:03:04:05:06 our_id_addr_type=0
> our_id_addr=01:02:03:04:05:06 peer_ota_addr_type=1
> peer_ota_addr=cb:a9:42:31:30:30 peer_id_addr_type=1
> peer_id_addr=cb:a9:42:31:30:30 conn_itvl=40 conn_latency=0
> supervision_timeout=256 encrypted=1 authenticated=1 bonded=1
> 
> 
> Thanks,
> 
> Fred Angstadt


Re: issue with missing log api

2019-02-05 Thread Christopher Collins
Hi Markus,

On Mon, Feb 04, 2019 at 08:34:50PM -0800, markus wrote:
> I updated to the latest master from github (4fedf428) and now my
> projects break with the error message:
> 
> Building target targets/s
> Error: Unsatisfied APIs detected:
> * log, required by: sys/log/modlog

To solve this issue without enabling logging, you can add
`@apache-mynewt-core/sys/log/stub` to your app's list of dependencies
(`pkg.deps`).  This satisfies the `log` API requirement without pulling
in any actual logging functionality.
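Concretely, the workaround is a one-line addition to the app's `pkg.yml` (sketch; assumes a standard apache-mynewt-core checkout):

```yaml
# Satisfy the `log` API without pulling in real logging functionality.
pkg.deps:
    - "@apache-mynewt-core/sys/log/stub"
```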

This is not a great fix--it does the job, but it is a hassle for the
developer.  A solution that does not require you to modify a `pkg.yml`
file would be much better.  I have a few ideas, but haven't implemented
anything yet.  We should definitely do something about this problem
before the next release so others don't struggle with this one.  Thanks
for raising this issue.

You are seeing this issue because logging has been integrated
into several more core packages.  I think most projects already use a
log package and are unaffected, but that is just an assumption.

By the way, the `newt target revdep <target>` command is a useful
tool for debugging issues like this.  In this case, it will tell you
which package(s) depends on modlog.

> Since I neither need nor want any logging in my app I figured I'll
> turn it off by setting
> 
> NEWT_FEATURE_LOGCFG: 0
> 
> in syscfg.yml - unfortunately this results in a segfault in the newt
> tool. Updating that to the latest master (bc272f6e) has the same
> result.

The `NEWT_FEATURE_[...]` settings should not be overridden.  I
probably would have tried the same thing :).

Chris


Re: MYNEWT_VAL(...) invalid syntax ?

2019-01-08 Thread Christopher Collins
The next version of the newt tool will be backwards compatible with
older Mynewt repos.  The latest versions of the Mynewt repos are taking
advantage of a new newt feature (syscfg values in `pkg.yml`), so they do
not work with older versions of newt.  If a repo does not use this new
feature, then it will be compatible with both old and new versions of
the newt tool.

Backwards compatibility is important, and it is always bad when an
upgrade breaks something.  This particular feature seemed important
enough to introduce such breakage because it will actually help to
maintain backwards compatibility in the future.  The feature allows
Mynewt YAML files to switch their configuration based on the version of
newt being used.  For example:

pkg.item.NEWT_FEATURE_FOO:
    - ...

That item only gets processed if the newt tool injects the
`NEWT_FEATURE_FOO` setting.  The item does not get processed with older
versions of newt which do not support the `FOO` feature.  Older versions
of newt could not process conditional items in most of the YAML files,
so would always process the above item.
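A hypothetical fragment showing the mechanism (`NEWT_FEATURE_FOO` is the placeholder from above; the dependency shown is invented):

```yaml
# Processed only when newt injects NEWT_FEATURE_FOO; silently skipped
# by older newt versions that do not define the setting.
pkg.deps.NEWT_FEATURE_FOO:
    - some/package/that/needs/foo
```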

Chris

On Tue, Jan 08, 2019 at 11:23:58AM -0800, markus wrote:
> Hey Chris,
> 
> that is understood and fine, the question and to some degree concern is
> the other way around. Will the next version of the newt tool be
> incompatible with all existing repositories?
> 
> I understand that apache-mynewt-core and all other mynewt managed
> repositories don't have a problem with such a change. But what about
> all the other repositories?
> 
> Thanks,
> Markus
> 
> 
> 
> On Tue, 8 Jan 2019 10:29:24 -0800
> Christopher Collins  wrote:
> 
> > Hi Markus,
> > 
> > On Tue, Jan 08, 2019 at 10:15:22AM -0800, markus wrote:
> > > Hi Lukasz,
> > > 
> > > got it, I guess I have to start building newt.
> > > 
> > > Follow up question: Does this mean the next release will break all
> > > repositories out there or is backwards compatibility still on the
> > > roadmap for this release?  
> > 
> > The next mynewt release will only be compatible with the version of
> > newt that is released at the same time.  An attempt to use an older
> > newt with the newt repos will fail with instructions to upgrade newt.
> > 
> > Chris
> 


Re: MYNEWT_VAL(...) invalid syntax ?

2019-01-08 Thread Christopher Collins
Hi Markus,

On Tue, Jan 08, 2019 at 10:15:22AM -0800, markus wrote:
> Hi Lukasz,
> 
> got it, I guess I have to start building newt.
> 
> Follow up question: Does this mean the next release will break all
> repositories out there or is backwards compatibility still on the
> roadmap for this release?

The next mynewt release will only be compatible with the version of newt
that is released at the same time.  An attempt to use an older newt with
the newt repos will fail with instructions to upgrade newt.

Chris


Re: Setting and displaying time

2018-12-27 Thread Christopher Collins
Hi Rohit,

On Wed, Dec 26, 2018 at 11:55:56PM +0530, Rohit Gujarathi wrote:
> Hi Everyone,
> 
> I wanted to make a desktop clock using mynewt and am stuck at setting the
> time. I read the os_timeval part but How do i set time in mynewt and
> display it human form? I am using the nrf52840 which has a RTC, how can i
> use that? has anyone used the __DATE__ and __TIME__ macro?

The __DATE__ and __TIME__ macros are useful when you need to know when a
program was built.  Since you are interested in the actual real time
(independent of the build time), these macros won't help you.

I would look at the following two functions:

os_gettimeofday()

(http://mynewt.apache.org/latest/os/core_os/time/os_time.html#c.os_gettimeofday)

os_settimeofday()

(http://mynewt.apache.org/latest/os/core_os/time/os_time.html#c.os_settimeofday)

When the device boots up, set its time using `os_settimeofday()`.  To
display the current time, call `os_gettimeofday()` and convert the
result to a human readable format.

These functions deal in UNIX time, i.e., seconds since 1970/01/01.
I'm afraid converting this number to a human readable format is not
trivial due to pesky human factors such as time zones, leap years, etc.
Luckily, these functions use the POSIX time data structures, so there
should be a lot of free code available online that does this conversion.
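As a starting point, here is a self-contained sketch of the conversion step using only the standard C library.  It assumes the seconds value comes from the `tv_sec` field filled in by `os_gettimeofday()`; time zones are deliberately ignored (the output is UTC):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>
#include <time.h>

/* Format a UNIX timestamp (seconds since 1970-01-01, e.g. the tv_sec
 * value from os_gettimeofday()) as a human readable UTC string. */
static void
format_unix_time(long long secs, char *buf, size_t buflen)
{
    time_t t = (time_t)secs;
    struct tm *tm;

    tm = gmtime(&t);            /* UTC; note: gmtime is not reentrant */
    strftime(buf, buflen, "%Y-%m-%d %H:%M:%S", tm);
}
```

For local time you would apply a time zone offset (or use `localtime` on platforms that have zone data), which is where the "pesky human factors" come in.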

I am not very familiar with the nRF52 RTC.  Maybe others who are more
knowledgeable can chime in.

Chris


Re: NimBLE function naming question

2018-12-14 Thread Christopher Collins
Hi Christoph,

On Fri, Dec 14, 2018 at 04:34:05PM +0100, Christoph Jabs wrote:
> Hello mynewt devs,
> 
> I have a question regarding the mynewt-NimBLE stack.
> 
> Whilst working my way through the code of the stack, trying to understand 
> the structure of it and how to use it, I came across the functions defined 
> in nimble/host/include/host/ble_gatt.h for initiating GATT procedures. I 
> realized that the functions are all named ble_gatts or ble_gattc (and are 
> divided into two corresponding source files ble_gattc.c and ble_gatts.c) 
> and I assume this divides the functions into server- or client-side 
> functionality.

All correct.

> However, four functions seem to break that system. These are the 
> functions for sending notifications or indications to connections. My 
> understanding is that the BLE peripheral (the GATT server) sends 
> notifications to the BLE central (the GATT client). This seems to match the 
> fact that the functions for notifying and indicating are being called in 
> ble_gatts_tx_notifications. Why then are the notification and indication 
> functions labeled gattc?
> 
> Feel free to correct me if I'm missing something or if I completely 
> misunderstand the things these functions do. From my perspective this seems 
> just a bit confusing.

I agree that it is confusing.  Notifications and indications are
peripheral-side (server) procedures, so it is reasonable to look in
`ble_gatts.c` for their corresponding functions.

However, ignoring the Bluetooth spec for a bit, these procedures are
logically client-side operations.  They are unsolicited, and in the case
of indications, the sender needs to listen for an acknowledgement.
These operations have a lot in common with central-initiated procedures,
and very little in common with the others.  As such, the code
implementing these procedures makes use of functions and data structures
that are private (`static`) to the `ble_gattc.c` file, and as a
consequence, these public functions must also go in this file.  And
since the functions are in this file, well... they are named with this
file's prefix :).

I'm not sure if this was the best approach.  It is too late to rename
the functions now (though I suppose we could add some `ble_gatts_...`
aliases).  At the time, I convinced myself that the naming was OK,
because "client" and "server" are general descriptions of the
functionality, and not directly derived from the spec.

Chris


Re: Question about unit test packages

2018-12-12 Thread Christopher Collins
Hi Mikhail,

On Wed, Dec 12, 2018 at 02:22:28PM +, Mikhail Efroimson wrote:
> I am writing my first MyNewt unit test package and I was hoping that
> someone could help me resolve an issue that I'm having. It seems that
> I have to include the project BSP as a dependency in my unit test
> package but when I do that and try to run it using newt test, the
> compiler complains that "Settings defined by multiple packages:
> MCU_FLASH_MIN_WRITE_SIZE" which is defined for under syscfg.defs of
> different MCU families in apache-mynewt-core/hw/mcu. How can I avoid
> this multiple definitions issue which only occurs when I try to run
> the unit tests?

The `newt test` command runs unit tests in the simulator.  To ensure
the simulator environment is available, newt automatically pulls in the
`hw/bsp/native` package and all its dependencies.

Only one BSP package is allowed per Mynewt build.  So, when building
something for the simulated environment, you must not include any
"real" BSP or MCU packages.  In other words, it is not possible to test
BSP or other "hardware" code in the simulator.

To test code meant for real hardware, you need to forego the simulator.
Instead, you create an app which runs the unit tests, and create a
target which ties that app to your BSP.  Unfortunately, it seems we
don't have any documentation on running unit tests on native hardware,
but a good way to get started is to take a look at the `apps/testbench`
app.  Specifically, you'll want to:

1. Create a new target for your unit test build (`newt target create`)
2. Set the target's app to `@apache-mynewt-core/apps/testbench` and
   the BSP to your BSP package.
3. Modify the testbench app to run your desired tests:
a. Add dependencies to `pkg.yml` for your unit test packages.
b. Modify `main.c`: Add an invocation of `TEST_SUITE_REGISTER()`
   for each unit test the app needs to run.

When the build is running on hardware, you can invoke tests using the
newtmgr tool, specifically the `run` command
(http://mynewt.apache.org/latest/newtmgr/command_list/newtmgr_run.html).
The test results get recorded to "testlog" log, also accessible via
newtmgr.

Chris


Re: Manufacturing Image Proposal

2018-12-11 Thread Christopher Collins
Hi Will,

On Tue, Dec 11, 2018 at 04:25:41PM -0800, will sanfilippo wrote:
> I read this over myself and it looks good to me. What I am not sure I 
> understood, and still not sure I do, is the sectors where these MMR will go. 
> Are these going to go into some write protected location? Or will they just 
> go somewhere that does not get erased when we do image upgrades? Not sure I 
> need to understand either :-) 

That is a good point.  The firmware probably ought to write protect
the extra MMR areas at startup.  Ideally, the `sys/mfg` package which
reads the MMRs would do this automatically.  This package can do this
using the `hal_flash_write_protect()` function, but that is just
software protection.  I don't believe we have support for generic
hardware-based protection.

Chris


Re: Manufacturing Image Proposal

2018-12-11 Thread Christopher Collins


On Tue, Dec 11, 2018 at 12:43:02PM +0100, Łukasz Rymanowski wrote:
> Hi Chris,
> 
> I read it all and indeed it was thrilling :)

Thanks for reading!

> I think this is a good idea and this is a way to go. I have just feeling
> that internal mfgimage should be able to verify external one somehow, to
> make sure second factory did a good job
> But maybe this is not needed as  bootloader will do signature validation of
> the images inside the external mfgimage (if I recall correctly). Anyway,
> just a thought to consider.

I agree that it would be good if the boot mfgimage could verify the
others.  I think there is a problem here, though.  Mfgimages are weird
things in that their contents don't remain intact on a device.  An
mfgimage might contain a Mynewt image and a pre-filled sys/config FCB,
for example.  When the device starts up in the field, it will append new
data to the FCB.  A back end management service may upload a new Mynewt
image to the device, overwriting the one that came from the mfgimage.
So, the mfgimage hashes on a device become inaccurate very quickly.
Their purpose is not to validate what is on the device now, but to
identify what was put on the device originally.

So, I don't think we can use the mfgimage hash to verify anything.
Maybe there is another approach that would work?

Chris


Re: Custom boot loader

2018-11-20 Thread Christopher Collins
Hi Jeff,

On Tue, Nov 20, 2018 at 04:12:54PM +, Jeff Belz wrote:
> All:
> 
> I have to use a custom boot loader for my application.  Has anyone
> done this before and if so, what are the tricks to doing this?
> 
> I'm running a stm32f412.  I've got the application running at
> 0x8020000 and the mynewt loader is at 0x8000000.  Looking to bypass the
> mynewt loader, and use mine instead. My bootloader works for my
> other non-mynewt application, so I'm looking for possible reasons why
> it's not working now.  I'm thinking it's the naming of the entry
> points, but not sure if the newt OS needs something special to look
> for in order to run the application.

If I understand, you want to replace the Mynewt boot loader with a
custom one (as opposed to using two boot loaders).  Is that right?

That should be fairly straightforward.  The interface between boot
loader and application is very simple.  In particular, both components
just have to agree on the application starting address.  When the boot
loader is done doing whatever it is meant to do, it simply jumps to the
application starting address.  Entry point names don't matter; these are
discarded during the build process.

The specific address to jump to is:
slot-0-offset + image-header-size + start-word

Using the default stm32f412 BSP definition, this would be:
0x8020000 + 32 + 4 = 0x8020024

(It probably doesn't make sense to use the 32-byte Mynewt image header
at all with your custom boot loader, but that is a separate issue.)

Chris


Re: Persist stats across resets?

2018-11-08 Thread Christopher Collins
Hi Kevin,

Bit of a delayed response!

On Sun, Sep 09, 2018 at 07:18:44PM +0200, Kevin Townsend wrote:
> Is there currently an obvious mechanism to persist 'stats' across resets (
> https://mynewt.apache.org/master/os/modules/stats/stats.html), repopulating
> them with appropriate values coming out of reset?
> 
> It seems like a potentially common requirement, so I wanted to ping the dev
> list before diving into an implementation myself. Has anyone made a viable
> attempt at this already, that might be able to share any blocking issues
> they ran into?
> 
> There will always be a gray area between the moment you persist the data
> (IF you persisted it as well) and the moment you try to restore it, plus
> wear on the flash memory since the nRF52 is limited here compared to some
> other MCUs), but those are just things you'll have accept and try to
> minimise.

I agree this would be a very useful feature.  Did you end up
investigating this?  I have written some of my thoughts on the subject
below.  All comments welcome.

I. What gets persisted - individual stats, or entire stat modules?
1. Individual stats: we waste flash by including the module name in
   every record.

2. Entire stat modules - two options:
a. Waste flash persisting unchanged stats, or

b. Need to remember which stats have changed since the last
   time the module was persisted.  This allows us to only
   persist stats whose values have changed.

I think option 2b is the best.  There is some implementation effort, but
the savings in flash usage is worth it, in my opinion.

II. When do we persist?

There are a few options.  I think the most flexible option is: give each
stat module a timer and a configurable "delay".  When a stat changes,
schedule the module's timer to expire after its configured delay (only
if the timer isn't already scheduled).  When the timer expires, the
module gets persisted.

If the user wants stats to be persisted immediately each time they
change, the user can configure their stats modules with a delay of 0.
For less important stats, a much larger delay can be used.

III. Implementation

1. Add a timer to each stats module.

An instance of `struct os_callout` is 32 bytes, which I think is a bit
too big to burden every stats module with.  Instead of adding the
callout to `struct stats_hdr`, I propose we "subclass" stats_hdr as
follows:

struct stats_hdr {
char *s_name;
uint8_t s_size;
uint8_t s_cnt;
uint16_t s_flags; /* <-- This used to be padding */
#if MYNEWT_VAL(STATS_NAMES)
const struct stats_name_map *s_map;
int s_map_cnt;
#endif
STAILQ_ENTRY(stats_hdr) s_next;
};

struct stats_persisted_hdr {
struct stats_hdr sp_hdr;
struct os_callout sp_callout;
os_time_t sp_delay;
};

A bit from the new `s_flags` field could indicate whether the stats
module is "classic" or "persisted".

2. Remember which stats have changed since being persisted.

Each stat needs a "dirty bit."  There are two ways to do this:
a. Hijack the most-significant-bit of each stat.
b. Maintain dirty state in a separate bitmap in the stats module.

Option a has some issues:
* We lose half the range of each stat.
* Looking at these stats in gdb will be quite confusing.

Unfortunately, I think we are stuck with option a for now.  Option b
requires each module to contain a block of memory sized according to the
number of stats in the module.  The way stats are declared
(`STATS_SECT_START` and `STATS_SECT_END`) just doesn't allow for us to
do this.  This is the same reason we need to define stats names
separately with the `STATS_NAME_START` / `STATS_NAME_END` macros.
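A sketch of option a for a single 32-bit stat (the helper names are hypothetical, not part of the current stats API):

```c
#include <assert.h>
#include <stdint.h>

/* Option a: hijack the most significant bit of each stat as a dirty
 * flag.  This halves the usable range of the stat, as noted above. */
#define STAT_DIRTY 0x80000000u

static inline void
stat_inc(uint32_t *stat)
{
    uint32_t val = (*stat & ~STAT_DIRTY) + 1;

    *stat = val | STAT_DIRTY;       /* mark changed since last persist */
}

static inline uint32_t
stat_value(uint32_t stat)
{
    return stat & ~STAT_DIRTY;      /* strip the flag for reporting */
}

static inline int
stat_is_dirty(uint32_t stat)
{
    return (stat & STAT_DIRTY) != 0;
}

static inline void
stat_mark_persisted(uint32_t *stat)
{
    *stat &= ~STAT_DIRTY;           /* clear after writing to flash */
}
```

The gdb confusion mentioned above is visible here: a stat with value 1 is stored as 0x80000001 until it is persisted.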

Ultimately, I think we may want to add stats support to the newt tool.
Newt could generate the stats definitions C code.  This would allow the
dirty state to be stored in a bitmap in the module, and it would
eliminate the annoyance of defining each stats module twice just to get
names.  However, I think we should get a working implementation of stat
persistence before considering this path.

Thanks,
Chris


Upcoming compatibility-breaking change

2018-11-01 Thread Christopher Collins
Hello all,

I wanted to give a heads up: a compatibility-breaking change will likely
be made to the `mynewt-core` and `mynewt-nimble` repos soon.  These
repos will no longer be compatible with older versions of newt.  If you
use development versions of these repos, I suggest you upgrade to the
latest development version of newt now.

You can read the details of this change in the following PRs:

Newt: https://github.com/apache/mynewt-newt/pull/230 (merged)
Core: https://github.com/apache/mynewt-core/pull/1486
Nimble: https://github.com/apache/mynewt-nimble/pull/232

I will not merge these PRs until 2018-11-05 (Monday) at the earliest.
If you have any objections or other feedback, please don't hesitate to
share them here or as PR comments.

A summary of the change is reproduced below:

This change adds four newt commands:

newt target sysinit show 
newt target sysinit brief 
newt target sysdown show 
newt target sysdown brief 

Both sysinit commands produce a report of all the sysinit entries
configured for the target. The sysdown commands are similar; they
show sysdown entries rather than sysinit entries.

This change attempts to address two problems:

1. It is difficult to know the exact order of package initialization
and shutdown. The brief commands allow this information to be
visualized in a condensed table.

2. The sysinit or sysdown stages chosen by third-party packages
may not be suitable for a particular target. This change allows
syscfg settings to be used as stage numbers. The expectation is that
all packages will define syscfg settings for their sysinit and
sysdown stages. This allows the target to reorder the stages by
overriding the corresponding syscfg settings.

I wanted to make this change backwards compatible via an injected newt
syscfg setting, e.g.,

pkg.init.NEWT_FEATURE_SYSCFG_STAGES:
    log_init: 'MYNEWT_VAL(LOG_SYSINIT_STAGE)'

pkg.init.!NEWT_FEATURE_SYSCFG_STAGES:
    log_init: 100

Unfortunately, this is not possible, as the old newt does not allow
conditionals to be applied to pkg.init.  This is fixed by the merged
newt PR, but that doesn't help us in the short term.

I think it is still worth it to get this change in.  When 1.6.0 is
released, we can make sure to add the appropriate newt version
restriction.

Thanks,
Chris


Re: [VOTE] Release Apache Mynewt 1.5.0-rc1

2018-10-25 Thread Christopher Collins
On Tue, Oct 23, 2018 at 04:20:40PM +0200, Szymon Janc wrote:
> Hello all,
> 
> I am pleased to be calling this vote for the source release of
> Apache Mynewt 1.5.0.

> [X] +1 Release this package
> [ ]  0 I don't feel strongly about it, but don't object
> [ ] -1 Do not release this package because...

+1 (binding).

Thanks,
Chris


Re: Unlikely recoverable failures

2018-10-19 Thread Christopher Collins
FYI- I have submitted a PR that implements this new macro:
https://github.com/apache/mynewt-core/pull/1471

I went with `DEBUG_PANIC()` for the name.

Chris

On Thu, Oct 18, 2018 at 11:13:27AM -0700, Christopher Collins wrote:
> Hello all,
> 
> I think Mynewt lacks a mechanism for dealing with a certain class of
> errors: unlikely recoverable failures.  For the purposes of this email,
> I divide failures into two groups:
> 
> 1. Failures we cannot recover from, or that we don't want to recover
>    from.
> 
> This group includes: immediate bugs in the code, memory corruption,
> software watchdog expiry, etc.
> 
> The best way to handle these failures is to crash immediately, causing
> gdb to hit a breakpoint and / or creating a core dump.  We already have
> a tool for this: assert().
> 
> 2. Failures we want to recover from in production, but not always
>    during development.
> 
> This group includes: failure of nonessential hardware, dynamic memory
> allocation failures, file system write failures due to insufficient
> space, etc.
> 
> For this second group, I think we need a "crash()" macro that can be
> enabled or disabled at compile time[1].  During development and debugging,
> the macro would crash the system.  In production builds, the macro would
> expand to nothing, allowing the recovery code to execute.
> 
> I would like to implement such a macro in mynewt-core.  Here are some of
> my thoughts on the subject.  As always, please don't hesitate to express
> any criticism.
> 
> A. No conditional.
> 
> Unlike `assert()`, this new macro should not evaluate an expression to
> determine success or failure.  Instead, the calling code should detect
> failure with an `if` statement or similar, and only invoke the macro in
> the failure case.  Since these failures are expected, I think it is
> likely that the application will always want to execute some code in the
> failure case, regardless of whether the macro is configured to trigger a
> crash.  For example: logging a message to the console when the failure
> is detected.  If the macro itself detects the failure, the application
> doesn't have the opportunity to execute any code before the crash.
> 
> B. No severity.
> 
> Initially, I was thinking we could have a severity associated with each
> failure.  The new macro would only trigger a crash if invoked with a
> severity less than or equal to some `PANIC_LEVEL` setting.  However, I
> decided that this just complicates the feature without adding any real
> value.  I can't think of a good way to assign a severity to each
> failure.  If we ever need this, we can add it on top of the
> single-severity implementation.
> 
> I do think the ability to enable this feature per-package would be
> useful, so that is something to consider for the future.
> 
> C. Name?
> 
> This is what has been causing me the most agony: what to name the macro.
> The names I hate the least are:
> 
> * DBG_PANIC() // "Debug panic"
> * DEV_PANIC() // "Development panic"
> 
> But I am not crazy about these.  Any other suggestions?  Even though
> all-caps is ugly, I do think it should be used here to make it obvious
> that this construct does very macro-like things (inspects the file and
> line number, possibly expands to nothing, etc.).
> 
> Thanks,
> Chris
> 
> [1] `assert()` can be enabled and disabled via the `NDEBUG` macro.
> However, I am of the opinion that asserts are too useful to ever
> disable, so I would like an additional level of configurability here.


Unlikely recoverable failures

2018-10-18 Thread Christopher Collins
Hello all,

I think Mynewt lacks a mechanism for dealing with a certain class of
errors: unlikely recoverable failures.  For the purposes of this email,
I divide failures into two groups:

1. Failures we cannot recover from, or that we don't want to recover
   from.

This group includes: immediate bugs in the code, memory corruption,
software watchdog expiry, etc.

The best way to handle these failures is to crash immediately, causing
gdb to hit a breakpoint and / or creating a core dump.  We already have
a tool for this: assert().

2. Failures we want to recover from in production, but not always
   during development.

This group includes: failure of nonessential hardware, dynamic memory
allocation failures, file system write failures due to insufficient
space, etc.

For this second group, I think we need a "crash()" macro that can be
enabled or disabled at compile time[1].  During development and debugging,
the macro would crash the system.  In production builds, the macro would
expand to nothing, allowing the recovery code to execute.
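
A minimal sketch of such a macro, gated here on a hypothetical
compile-time flag (in Mynewt this would presumably be a syscfg
setting instead):

```c
#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical build flag; a debug build would pass
 * -DDEBUG_PANIC_ENABLED=1. */
#ifndef DEBUG_PANIC_ENABLED
#define DEBUG_PANIC_ENABLED 0
#endif

#if DEBUG_PANIC_ENABLED
#define DEBUG_PANIC() do {                                          \
    fprintf(stderr, "panic at %s:%d\n", __FILE__, __LINE__);        \
    abort();                                                        \
} while (0)
#else
/* Production build: expands to nothing; recovery code runs. */
#define DEBUG_PANIC() do {} while (0)
#endif

/* Example: the caller detects the failure, logs, then invokes the
 * macro; the recovery path executes only when the macro is compiled
 * out. */
int
buf_alloc(size_t size, void **out)
{
    void *buf = malloc(size);
    if (buf == NULL) {
        fprintf(stderr, "buf_alloc: out of memory\n");
        DEBUG_PANIC();
        return -1;
    }
    *out = buf;
    return 0;
}
```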

I would like to implement such a macro in mynewt-core.  Here are some of
my thoughts on the subject.  As always, please don't hesitate to express
any criticism.

A. No conditional.

Unlike `assert()`, this new macro should not evaluate an expression to
determine success or failure.  Instead, the calling code should detect
failure with an `if` statement or similar, and only invoke the macro in
the failure case.  Since these failures are expected, I think it is
likely that the application will always want to execute some code in the
failure case, regardless of whether the macro is configured to trigger a
crash.  For example: logging a message to the console when the failure
is detected.  If the macro itself detects the failure, the application
doesn't have the opportunity to execute any code before the crash.

B. No severity.

Initially, I was thinking we could have a severity associated with each
failure.  The new macro would only trigger a crash if invoked with a
severity less than or equal to some `PANIC_LEVEL` setting.  However, I
decided that this just complicates the feature without adding any real
value.  I can't think of a good way to assign a severity to each
failure.  If we ever need this, we can add it on top of the
single-severity implementation.

I do think the ability to enable this feature per-package would be
useful, so that is something to consider for the future.

C. Name?

This is what has been causing me the most agony: what to name the macro.
The names I hate the least are:

* DBG_PANIC() // "Debug panic"
* DEV_PANIC() // "Development panic"

But I am not crazy about these.  Any other suggestions?  Even though
all-caps is ugly, I do think it should be used here to make it obvious
that this construct does very macro-like things (inspects the file and
line number, possibly expands to nothing, etc.).

Thanks,
Chris

[1] `assert()` can be enabled and disabled via the `NDEBUG` macro.
However, I am of the opinion that asserts are too useful to ever
disable, so I would like an additional level of configurability here.


Re: Controlled shutdown

2018-10-09 Thread Christopher Collins
Hi Vipul,

On Tue, Oct 09, 2018 at 12:01:46PM -0700, Vipul Rahane wrote:
> Sorry for the late reply. 

No problem!

> I really like the idea. Thank you for doing this Chris. A much needed
> feature. A possible use case just came to my mind.
> 
> One module might have to be shutdown before shutting down others for
> example: Sensors using I2C/SPI would have to be shut down before
> shutting down the underlying interfaces.
> 
> This is kind of similar to pkg.init levels. I wanted to understand if
> you had any kind of priority in mind.

That is exactly what I had in mind.  I have submitted the relevant PRs
for this feature here:

Newt: https://github.com/apache/mynewt-newt/pull/218
Core: https://github.com/apache/mynewt-core/pull/1447
Nimble: https://github.com/apache/mynewt-nimble/pull/216

The newt PR describes the syntax for configuring a package's sysdown
functions:

pkg.down:
: 
e.g.,

pkg.down:
    ble_hs_shutdown: 200

As with sysinit, sysdown functions are executed in ascending order of
stage number.  When there are two or more identical stage numbers, the
functions are executed in lexicographic order according to their C
function name.
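
The ordering rule can be sketched as a comparator (the struct and
names are hypothetical; newt applies this ordering internally when it
generates sysdown()):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Ascending stage number; ties broken lexicographically by C
 * function name. */
struct down_fn {
    const char *name;
    int stage;
};

static int
down_fn_cmp(const void *a, const void *b)
{
    const struct down_fn *fa = a;
    const struct down_fn *fb = b;

    if (fa->stage != fb->stage) {
        return fa->stage - fb->stage;
    }
    return strcmp(fa->name, fb->name);
}
```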

Thanks,
Chris


Re: Controlled shutdown

2018-10-01 Thread Christopher Collins
Hi Martin,

On Fri, Sep 28, 2018 at 04:30:17PM -0700, Martin Turon wrote:
> +1 for graceful shutdown.
> 
> It could also be useful for coarse, multi-protocol use cases such as using
> BLE for commissioning, shutting down that stack, and then starting a 15.4
> stack such as Thread.
> 
> In general, it would be great if nimble could support high level stack
> start / stop commands such as the following shell API imply:
> 
> ble start# start nimble stack
> ble scan start   # start BLE scan ...
> ble scan stop# stop BLE scan
> *ble stop *# NEW: gracefully shutdown nimble
> 
> 
> All of the above can be supported right now, except "ble stop", so your
> proposal is welcome.
> 
> I would also suggest keeping the concepts of graceful shutdown and reset
> separable.

I agree with all of the above.  Unfortunately, the ability to stop the
stack absent a reset, which implies the ability to reenable it, adds a
bit of complexity to this feature.  I think this functionality is
important enough to justify the added complexity.

Chris


Re: Controlled shutdown

2018-09-28 Thread Christopher Collins
On Fri, Sep 28, 2018 at 04:18:08PM -0700, will sanfilippo wrote:
> Some comments:
> 
> 1) Are there any other cases you can see for a controlled shutdown? I get 
> the reset command. I am trying to think of others.

I think the newtmgr reset command is the main use case (as well as the
shell "reset" command).  It seems plausible that a device would want to
reset itself for other reasons, but I can't think of an actual use case!

> 2) I am curious: how do you know how many of these functions, or which ones, 
> return in progress? Curious to know how you were going to implement that 
> specifically.

I was thinking the sysdown module would maintain a counter representing
the number of in-progress shutdown subprocedures.  When a subprocedure
completes, it calls `sysdown_complete()`, which decrements the counter.
When the counter reaches 0, the system restarts.

There is something else I should mention.  I now think it is a bad idea
to use return codes to indicate whether the subprocedure is complete.
Each subprocedure is called by generated code, and branching on return
codes makes the generated code too complicated in my opinion.  I think
it is better to add a new function, `sysdown_in_progress()`.  If a
subprocedure needs to continue beyond the initial callback, it calls
this new function before returning.
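
A sketch of the counter scheme described above (simplified; the real
sysdown implementation may differ):

```c
#include <assert.h>

/* Illustrative state, not the actual sysdown module. */
static int sysdown_pending;
static int sysdown_restarted;

static void
sysdown_restart(void)
{
    /* Stand-in for hal_system_reset(). */
    sysdown_restarted = 1;
}

/* Called by a shutdown callback that must continue asynchronously
 * beyond the initial invocation. */
void
sysdown_in_progress(void)
{
    sysdown_pending++;
}

/* Called when an asynchronous shutdown subprocedure completes.  The
 * system restarts once every in-progress subprocedure has finished. */
void
sysdown_complete(void)
{
    assert(sysdown_pending > 0);
    if (--sysdown_pending == 0) {
        sysdown_restart();
    }
}
```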

Chris

> Otherwise, sounds good and seems like a good addition to mynewt!
> 
> 
> > On Sep 28, 2018, at 1:08 PM, Christopher Collins  wrote:
> > 
> > Hello all,
> > 
> > I have been looking into implementing a graceful shutdown for Mynewt.
> > The system may want to perform a cleanup procedure immediately before it
> > resets, and I wanted to allow this procedure to be configured.  I am
> > calling this shutdown facility "sysdown", as a counterpart to "sysinit".
> > 
> > ### BASIC IDEA:
> > 
> > My idea is to allow each Mynewt package to specify a sequence of
> > shutdown function calls, similar to a package's `pkg.init` function call
> > list.  The newt tool would generate a C shutdown function called
> > `sysdown()`.  This function would consist of calls to each package's
> > shutdown functions.  When a controlled shutdown is initiated,
> > `sysdown()` would be called prior to the final call to
> > `hal_system_reset()`.
> > 
> > To clarify, this procedure would only be performed for a controlled
> > shutdown.  It would be executed when the system processes a newtmgr
> > "reset" command, for example.  It would not be executed when the system
> > crashes, browns out, or restarts due to the hardware watchdog.
> > 
> > I think this scheme is pretty straightforward and I see no issues so far
> > (but please pipe up if this doesn't seem right!).
> > 
> > ### PROBLEM:
> > 
> > Then I tried applying this to an actual use case, and of course I
> > immediately encountered some problems :).
> > 
> > My actual use case is this: when I reset the Mynewt device, I would like
> > the nimble stack to gracefully terminate all open Bluetooth connections.
> > This isn't strictly necessary; the connected peer will eventually
> > realize that the connection has dropped some time after the reset.  The
> > problem is that Android centrals take a really long time to realize that
> > the connection has dropped, as described here:
> > https://blog.classycode.com/a-short-story-about-android-ble-connection-timeouts-and-gatt-internal-errors-fa89e3f6a456.
> > So, I wanted to explicitly terminate the connections to speed up the
> > process.
> > 
> > Ideally, I could configure the nimble host package with a shutdown
> > callback that just performs a blocking terminate of each open
> > connection in sequence.  Unfortunately, the nimble host is likely
> > running in the same task as the one that initiated the shutdown, so it
> > is not possible to perform a blocking operation.  Instead, the running
> > task needs to terminate each connection asynchronously: enqueue a GAP
> > terminate procedure, then return so that the task can process its event
> > queue.  Eventually, the BLE terminate procedure will complete, and the
> > result of the procedure will be indicated via an event on this event
> > queue.  The sysdown mechanism I described earlier in this email doesn't
> > allow for asynchronous procedures.  It reboots the system immediately
> > after executing all the shutdown callbacks.
> > 
> > I think this will be a common issue for other packages, so I am
> > trying to come up with a general solution.
> > 
> > ### SOLUTION:
> > 
> > Each shutdown callback returns one of the following codes:
> >

Re: Unit Tests with newt pkg new

2018-09-10 Thread Christopher Collins
On Sun, Sep 09, 2018 at 01:04:35AM +0200, Kevin Townsend wrote:
[...]
> My +1 would be to have /test as a standard feature of any package, and you
> can always delete it, but other people might find this delete burden
> inappropriate?

+1.  I think that is a great idea.

Chris


NimBLE host GAP event listeners

2018-09-07 Thread Christopher Collins
Hello all,

TL;DR: Proposal for registration of GAP event listeners in the NimBLE
host.  Currently, GAP event callbacks are specified by the code which
creates a connection.  This proposal allows code to listen for GAP
events without creating a connection.

The NimBLE host allows the application to associate callbacks with GAP
events.  Callbacks are configured *per connection*.  When the
application initiates a connection, it specifies a single callback to
be executed for all GAP events involving the connection.  Similarly,
when the application starts advertising, it specifies the callback to
use with the connection that may result.

This design provides some degree of modularity.  If a package
independently creates a connection, it can handle all the relevant GAP
events without involving other packages in the system.  However, there
is a type of modularity that this does not provide: *per event*
callbacks.  If a package doesn't want to initiate any GAP events, but
it wants to be notified every time a connection is established (for
example), there is no way to achieve this.  The problem here is that
there is only a single callback associated with each connection.  What
we really need is a list of callbacks that all get called whenever a
GAP event occurs.

My proposal is to add the following to the NimBLE host API:

struct ble_gap_listener {
/*** Public. */
ble_gap_event_fn *fn;
void *arg;

/*** Internal. */
STAILQ_ENTRY(ble_gap_listener) next;
};

/**
 * @brief Registers a BLE GAP listener.
 *
 * When a GAP event occurs, callbacks are executed in the following
 * order:
 * 1. Connection-specific GAP event callback.
 * 2. Each registered listener, in the order they were registered.
 *
 * @param listener The listener to register.
 */
void ble_gap_register(struct ble_gap_listener *listener);

/**
 * @brief Unregisters a BLE GAP listener.
 *
 * @param listener The listener to unregister.
 *
 * @return  0 on success;
 *  BLE_HS_ENOENT if the listener is
 *  not registered.
 */
int ble_gap_unregister(struct ble_gap_listener *listener);
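
One plausible implementation of the listener list, sketched with the
same STAILQ machinery the struct above already uses.  The details
(including the simplified event callback signature) are illustrative,
not actual host code:

```c
#include <assert.h>
#include <stddef.h>
#include <sys/queue.h>

/* Simplified callback type; the real host passes a ble_gap_event
 * struct. */
typedef int ble_gap_event_fn(int event, void *arg);

struct ble_gap_listener {
    ble_gap_event_fn *fn;
    void *arg;
    STAILQ_ENTRY(ble_gap_listener) next;
};

static STAILQ_HEAD(, ble_gap_listener) ble_gap_listeners =
    STAILQ_HEAD_INITIALIZER(ble_gap_listeners);

void
ble_gap_register(struct ble_gap_listener *listener)
{
    STAILQ_INSERT_TAIL(&ble_gap_listeners, listener, next);
}

int
ble_gap_unregister(struct ble_gap_listener *listener)
{
    struct ble_gap_listener *cur;

    STAILQ_FOREACH(cur, &ble_gap_listeners, next) {
        if (cur == listener) {
            STAILQ_REMOVE(&ble_gap_listeners, listener,
                          ble_gap_listener, next);
            return 0;
        }
    }
    return -1; /* stand-in for BLE_HS_ENOENT */
}

/* Executed on each GAP event, after the connection-specific
 * callback. */
void
ble_gap_notify_listeners(int event)
{
    struct ble_gap_listener *cur;

    STAILQ_FOREACH(cur, &ble_gap_listeners, next) {
        cur->fn(event, cur->arg);
    }
}

/* Example listener callback that counts events via its arg. */
static int
count_cb(int event, void *arg)
{
    (void)event;
    (*(int *)arg)++;
    return 0;
}
```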

Initially, I thought this functionality could be achieved with a new
package implemented on top of the host (call it `blemux`).
Unfortunately, I think there are some issues that make such an approach
unwieldy for the user.  I imagined such a package would be used as
follows:

1. The `blemux` package exposes a GAP callback (call it
   `blemux_gap_event`).
2. Elsewhere in the system, for each GAP function call, the caller
   specifies `blemux_gap_event` as the callback.  
3. Each package that is interested in GAP events registers one or
   more listeners via `blemux_register()`.
4. When a GAP event occurs, `blemux_gap_event` is executed.  This
   callback executes every registered listener.

The problem is that packages have no guarantee that the app is using
blemux.  A package can depend on `blemux` and can register listeners,
but it really only works if every other BLE-aware package in the system
also uses `blemux`.  If any package doesn't use `blemux`, then the GAP
procedures that it initiates won't be reported to the `blemux`
listeners.  A secondary problem is that this just feels like a clumsy
API.  It is confusing and error prone to need to specify
`blemux_gap_event` as the GAP event callback.

So, I think this functionality should be implemented directly in the
host.

Thoughts?

Thanks,
Chris


Re: how to print floating points in mynewt

2018-09-05 Thread Christopher Collins
Hi Rohit,

baselibc does support floating point formatting, but it is not enabled
by default.  To enable it, set the following syscfg setting to 1 in your
target:

FLOAT_USER

baselibc's float printf support is a bit limited.  In particular, it
ignores precision specifiers and always prints three digits after the
decimal point.

Chris

On Wed, Sep 05, 2018 at 11:00:32PM +0530, Rohit Gujarathi wrote:
> Hi everyone,
> 
> I am trying to print floating point in mynewt using
> *console_printf("%f",floating_var)* but i am unable to do so. I read that
> the baselibc does not support floating point, so I tried removing the
> libc/baselibc from pkg.deps: in pkg.yml so that the compiler uses newlib
> but then i hit the following error:
> Linking /project/bin/targets/kwp1/app/apps/bme280/bme280.elf
> Error:/project/bin/targets/kwp1/app/hw/bsp/nrf52dk/hw_bsp_nrf52dk.a(gcc_startup_nrf52.o):
> In function `.bss_zero_loop':
> /project/repos/apache-mynewt-core/hw/bsp/nrf52dk/src/arch/cortex_m4/gcc_startup_nrf52.s:182:
> undefined reference to `_start'
> collect2: error: ld returned 1 exit status
> 
> how can i resolve this issue?
> 
> thank you


Re: I2C retries

2018-08-30 Thread Christopher Collins
On Thu, Aug 30, 2018 at 11:35:44AM -0700, Vipul Rahane wrote:
> Hey,
> 
> I think that’s a really great idea. One thing I would add is that we
> definitely should honor timeouts in any of the retries. Also, a way to
> disable the retries should be a good idea. Probably making it a syscfg
> which when set to 0 does not do any retries.

Yes, the `timo` parameter should apply to each retry.  That said, I
think a timeout should only occur if there is something wrong with the
I2C bus (e.g., the problem we've seen with the nRF52:
https://github.com/apache/mynewt-core/blob/1702cdeed8d8f718ed75f40961b9b2f37bae2ff3/hw/mcu/nordic/nrf52xxx/src/hal_i2c.c#L314-L325).

> I am assuming the errors codes will be HAL_I2C_ERR_ error codes,
> so all the mcus would have to return the same error codes on errors.

Yes, sounds good.

> Personally, I like calling them retries than tries since if they are
> set to 0 the operation still happens once, but that’s just me.

`retries` it is, then :).

Chris


Re: I2C retries

2018-08-30 Thread Christopher Collins
On Thu, Aug 30, 2018 at 09:48:32AM -0700, will sanfilippo wrote:
> I think my only comment is tries vs retries. You always want to make
> at least one try so you would have to set that to 1 every time right?
> I like retries personally but that is just me. Mainly because you
> would never allow zero for tries.

That's a good point.  I don't feel too strongly about this.  I think my
mind just prefers to see a `3` in the code when something is attempted
up to three times.

If any other HAL APIs specify tries / retries, we should stay consistent
with them.  Otherwise, I am fine with either.

Chris


Re: I2C retries

2018-08-30 Thread Christopher Collins
On Thu, Aug 30, 2018 at 07:47:34PM +0300, marko kiiskila wrote:
> > On Aug 30, 2018, at 7:21 PM, Christopher Collins  wrote:
> > [1] We should do the same for the other HAL APIs (i.e., non-I2C), but
> > that can come later.
> 
> Not sure this makes sense for other ones, as i2c is the only one with
> ACK.

Perhaps we don't need dedicated status code for the other APIs, but I
think we want to make sure the implementations don't return MCU-specific
errors.

Chris


I2C retries

2018-08-29 Thread Christopher Collins
Hello all,

I noticed the HAL master I2C API does not include any retry logic.  As
you probably know, in I2C, the default state of the SDA line is NACK.
This means an unresponsive peripheral is indistinguishable from one that
is actively nacking the master's writes.  If the master's writes are
being NACKed because the slave is simply not ready to receive data, then
retrying seems like the appropriate solution.

I can think of two ways to add I2C retries.  Each of them requires
changes:

(1) Delegate the retry logic to the code calling into the HAL (e.g.,
drivers).
(2) Implement retry logic directly in the HAL.

The problem with (1) is that HAL implementations currently return
MCU-specific error codes.  A generic driver cannot determine if a retry
is in order just from inspecting the error code.  If there were a common
set of error codes that all HAL I2C implementations returned, drivers
would have the information they need.  Actually, even ignoring the
subject of retries, I think common error codes here makes sense.
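
A sketch of option (1), assuming a hypothetical common error-code set
shared by all HAL I2C implementations (the names below are
illustrative, not existing HAL constants):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical common error codes. */
#define HAL_I2C_ERR_ANACK    1  /* address byte NACKed */
#define HAL_I2C_ERR_DNACK    2  /* data byte NACKed */
#define HAL_I2C_ERR_TIMEOUT  3

typedef int i2c_tx_fn(void *data, uint32_t timo);

/* Driver-level retry wrapper: only a NACKed address suggests a busy
 * peripheral, so only that error is retried.  The timeout applies to
 * each individual attempt. */
int
i2c_tx_retry(i2c_tx_fn *tx, void *data, uint32_t timo, int retries)
{
    int rc;

    do {
        rc = tx(data, timo);
    } while (rc == HAL_I2C_ERR_ANACK && retries-- > 0);

    return rc;
}

/* Example transfer that NACKs twice before succeeding, standing in
 * for a real HAL write call. */
static int fake_tx_calls;
static int
fake_tx(void *data, uint32_t timo)
{
    (void)data;
    (void)timo;
    return ++fake_tx_calls < 3 ? HAL_I2C_ERR_ANACK : 0;
}
```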

Re: (2) - I was thinking this could be implemented by adding two new
members to the `hal_i2c_master_data` struct that gets passed to every
master I2C function:

/**
 * Total number of times to attempt the I2C operation.  Certain
 * failures get retried:
 * o NACK received after sending address.
 * o (if configured) NACK received after sending data byte.
 */
uint8_t tries;

/** Whether to retry when a data byte gets NACKed. */
uint8_t retry_on_dnack:1;

(I hate to complicate things with the `retry_on_dnack` member, but some
peripherals seem to become unresponsive in the middle of a transaction.)

Since these additions are members of a struct, rather than new function
parameters, this change would be mostly backwards-compatible.  There is
still a problem here, though: code that doesn't zero-out the
`hal_i2c_master_data` struct will likely get retries when it isn't
expecting them, which could conceivably be a problem.

I am leaning towards (1).  Thoughts?

Thanks,
Chris


Re: debounce/debounce.h

2018-08-28 Thread Christopher Collins
I added the second one (util/debounce) without checking if another one
already existed.  I'll come up with a new name for util/debounce today
and submit a PR.

Chris

On Tue, Aug 28, 2018 at 05:33:22PM +0300, marko kiiskila wrote:
> mynewt-core has 2 debounce packages. Trying to include both packages
> within a project is giving me trouble.
> 
> I think one should be renamed. Which one?
> 
> [marko@IsMyLaptop:~/src2/incubator-mynewt-blinky/repos/apache-mynewt-core]$ 
> find . -name debounce.h
> ./hw/drivers/debounce/include/debounce/debounce.h
> ./util/debounce/include/debounce/debounce.h
> 


Re: Reducing GATT write attribute's timeout and read attribute's BLE_HS_ENOMEM

2018-08-06 Thread Christopher Collins
On Mon, Aug 06, 2018 at 02:03:22AM +0100, Lukasz Wolnik wrote:
> Hi Chris,
> 
> I have resolved the issue. It wasn't my mbuf structure but MSYS_1's pool
> memory leak (caused by my app).

[...]

> 
> I finally have a stable Mynewt app <-> Android repeated communication even
> on MSYS_1_BLOCK_COUNT = 8! So happy with it. Thanks again for the ride this
> bug turned out to be.

Great to hear.  Good job chasing the problem down!

Chris


Re: Common CONFIG_* flag missing?

2018-07-07 Thread Christopher Collins
On Sat, Jul 07, 2018 at 06:13:22PM +0200, Kevin Townsend wrote:
> In the sys/config package, there are two main flags to test against in 
> code, depending on your implementation:
> 
>   * CONFIG_FCB (storage in FCB)
>   * CONFIG_NFFS (storage in NFFS)
> 
> https://github.com/apache/mynewt-core/blob/master/sys/config/syscfg.yml
> 
> I'd like to add config key-pairs as an option to a sensor driver (the 
> TSL2591 I submitted a PR for), to reload the integration time and other 
> settings on startup, and persist changes when made, but am I correct 
> that there isn't a single higher level 'CONFIG' flag and I'll need to 
> test against both of the values above to determine if the config system 
> is present or not? It seems not, but maybe I'm missing something.
> 
> Would anyone else consider it 'cleaner' to add a single flag that is 
> defined in either case (more if another persistance layer is added in 
> the future), since the API itself makes FCB or NFFS (or ???) mostly 
> transparent?

I think "package-present" settings are often not as useful as they might
seem.  Most of the time, code doesn't care whether a package is present
in the build; it only cares whether it has access to the package.  That
is, what matters is whether it has a dependency on the package, directly
or indirectly.

For example, the `sys/reboot` package depends on `sys/config`.  Since
pretty much every build includes the `sys/reboot` package, `sys/config`
will be present, and all its settings will be defined and available to
the rest of the system.  However, a driver package won't have access to
the `sys/config` header files unless it also depends on the package.

I think cases like the one you described are usually solved by adding a
new setting to the driver package (e.g., `TSL2591_CONFIG`).  If that
setting is enabled, the driver package depends on `sys/config`, e.g.,

pkg.deps.TSL2591_CONFIG:
    - '@apache-mynewt-core/sys/config'

> Kevin
> 
> PS: As a side note, persisting the NFFS partition to an external file 
> under ARCH_sim would make certain test cases easier, storing values 
> across resets and builds. If this seems useful to anyone, I can raise it 
> as a Github issue and look at putting a PR together?

I don't know if this helps in your particular case, but you can specify
"-f " to use a local file for internal flash (assuming the app
calls `mcu_sim_parse_args()` in main).  If you specify the name of an
existing file, the simulator will reuse its contents.

Chris


Re: newtmgr fs command fails in sim

2018-07-06 Thread Christopher Collins
Hi Kevin,

On Fri, Jul 06, 2018 at 02:41:17PM +0200, Kevin Townsend wrote:
> I'm doing some initial development using only the simulator (for 
> convenience sake), and was testing out 'newtmgr fs' support to quickly 
> get data to and from the simulator.
> 
> My sim target is setup to use NFFS using mostly default values:
> 
>      # NFFS filesystem
>      FS_CLI: 1            # Include shell FS commands
>      CONFIG_NFFS: 1       # Initialize and configure NFFS into the system
>      NFFS_DETECT_FAIL: 2  # Format a new NFFS file system on failure to detect
> 
> Everything works fine from shell when I start the .elf file and create 
> an NFFS file in code, and I can see the file and contents via cat:
> 
> ls /cfg
> 604693   5 /cfg/id.txt
> 604693 1 files
> 604900 compat> cat /cfg/id.txt
> 5678
> 
> But when I try to do anything with 'newtmgr fs' (up or down) I always 
> get *error 5*, which I assume corresponds to 
> https://github.com/apache/mynewt-core/blob/master/fs/fs/include/fs/fs.h#L72: 

The error codes that come back in newtmgr responses are always (or at
least should be) MGMT_ERR codes:
https://github.com/apache/mynewt-core/blob/42bb5acc2f049d346c81f25e8c354bc3c6afefd4/mgmt/mgmt/include/mgmt/mgmt.h#L65

`MGMT_ERR_ENOENT` is indicated when the newtmgr command isn't recognized
(it is also used for other conditions; seems like we should have a
dedicated error code for "command not supported).

Did you enable the `FS_NMGR` syscfg setting?

Chris


Logging (again!)

2018-07-05 Thread Christopher Collins
Hello all,

My logging obsession continues.  I have submitted a giant PR
(https://github.com/apache/mynewt-core/pull/1249) which changes all
existing packages to use the modlog facility rather than directly call
into the `sys/log` API.  I think this is the right direction, but it is
a somewhat major change, and I am certainly open to feedback.

I see two benefits of this change:

* For simple logging to the console, apps don't need to define and
  register a log.  Just use `MODLOG_DFLT` to log a message.

* Most packages don't need to define and expose a log object, and the
  application doesn't need to register it.  The package just writes log
  messages to its reserved log module ID.  This eliminates the need for
  every app to register the `ble_hs_log` and `oc_log` log objects, for
  example.

Here is what I personally would like to see in Mynewt, going forward:

1. Every package uses modlog; nothing uses the log API directly
(except for modlog itself :) ).

2. Applications that just want to log to the console just use the
`MODLOG_DFLT` macro.  This macro writes a printf-style string to the
"default" module, which routes to the console unless remapped.

3. If a library needs to log messages, it defines a log module syscfg
setting for each module it uses.  It is important that syscfg is used
here so that the user can override them in case two packages choose the
same module ID(s).

4. Newt adds two new syscfg setting types:
* log_module_owner
* log_module

`log_module_owner` is used to stake out a unique module ID.  If two
`log_module_owner` settings specify the same number, that indicates a
conflict in the module number space, and newt can abort the build with a
meaningful error.

`log_module` is used when a package wants to log to "someone else's"
module.  Newt doesn't complain if one or more `log_module` settings have
the same value as a `log_module_owner` setting.

A new command would be added to newt to display all assigned log module
IDs in a target.

All comments welcome.

Thanks,
Chris


Re: bleprph using HCI 4 wire

2018-06-27 Thread Christopher Collins
Hi Jeff,

The `ble_hci_ram_rx_cmd_ll_cb` callback should be getting configured by
the controller at startup (assuming you are running the
combined-host-controller).

I have a few questions:
1. What version of Mynewt are you using?
2. What changes have you made to the apps that are failing?

Also, could you please post your bleprph target?  You can display it
with:

newt target show <target-name>

Thanks,
Chris

On Wed, Jun 27, 2018 at 08:03:06PM +, Jeff Belz wrote:
> Nimble error?
> I have tried the bleprph and bleuart  and both fail at this point in the 
> ble_hci_ram.c
> 
> assert(ble_hci_ram_rx_cmd_ll_cb != NULL);   
> 
> Either I'm missing a setting or maybe an error in Nimble
> 
> /This is the whole trace
> #12 0x08020b30 in os_eventq_run (evq=) at 
> repos/apache-mynewt-core/kernel/os/src/os_eventq.c:162
> 162 ev->ev_cb(ev);
> (gdb)
> #11 0x08025c12 in ble_hs_event_start (ev=) at 
> repos/apache-mynewt-nimble/nimble/host/src/ble_hs.c:498
> 498 rc = ble_hs_start();
> (gdb)
> #10 0x08025bf6 in ble_hs_start () at 
> repos/apache-mynewt-nimble/nimble/host/src/ble_hs.c:563
> 563 ble_hs_sync();
> (gdb)
> #9  0x08025966 in ble_hs_sync () at 
> repos/apache-mynewt-nimble/nimble/host/src/ble_hs.c:325
> 325 rc = ble_hs_startup_go();
> (gdb)
> #8  0x0802800e in ble_hs_startup_go () at 
> repos/apache-mynewt-nimble/nimble/host/src/ble_hs_startup.c:341
> 341 rc = ble_hs_startup_reset_tx();
> (gdb)
> #7  0x08027d52 in ble_hs_startup_reset_tx () at 
> repos/apache-mynewt-nimble/nimble/host/src/ble_hs_startup.c:326
> 326 rc = 
> ble_hs_hci_cmd_tx_empty_ack(BLE_HCI_OP(BLE_HCI_OGF_CTLR_BASEBAND,
> (gdb)
> #6  0x0802696a in ble_hs_hci_cmd_tx_empty_ack (opcode=opcode@entry=3075, 
> cmd=cmd@entry=0x0,
> cmd_len=cmd_len@entry=0 '\000') at 
> repos/apache-mynewt-nimble/nimble/host/src/ble_hs_hci.c:332
> 332 rc = ble_hs_hci_cmd_tx(opcode, cmd, cmd_len, NULL, 0, NULL);
> (gdb)
> #5  0x08026906 in ble_hs_hci_cmd_tx (opcode=, 
> cmd=cmd@entry=0x0, cmd_len=,
> evt_buf=evt_buf@entry=0x0, evt_buf_len=evt_buf_len@entry=0 '\000', 
> out_evt_buf_len=out_evt_buf_len@entry=0x0)
> at repos/apache-mynewt-nimble/nimble/host/src/ble_hs_hci.c:294
> 294 rc = ble_hs_hci_cmd_send_buf(opcode, cmd, cmd_len);
> (gdb)
> #4  0x08026efc in ble_hs_hci_cmd_send_buf (opcode=opcode@entry=3075, 
> buf=buf@entry=0x0,
> buf_len=buf_len@entry=0 '\000') at 
> repos/apache-mynewt-nimble/nimble/host/src/ble_hs_hci_cmd.c:122
> 122 return ble_hs_hci_cmd_send(opcode, buf_len, buf);
> (gdb)
> #3  0x08026d0e in ble_hs_hci_cmd_send (opcode=opcode@entry=3075, 
> len=len@entry=0 '\000', cmddata=cmddata@entry=0x0)
> at repos/apache-mynewt-nimble/nimble/host/src/ble_hs_hci_cmd.c:90
> 90  rc = ble_hs_hci_cmd_transport(buf);
> (gdb)
> #2  0x08026c9e in ble_hs_hci_cmd_transport (cmdbuf=cmdbuf@entry=0x0)
> at repos/apache-mynewt-nimble/nimble/host/src/ble_hs_hci_cmd.c:42
> 42  rc = ble_hci_trans_hs_cmd_tx(cmdbuf);
> (gdb)
> #1  0x0802e834 in ble_hci_trans_hs_cmd_tx (cmd=cmd@entry=0x0)
> at repos/apache-mynewt-nimble/nimble/transport/ram/src/ble_hci_ram.c:89
> 89  assert(ble_hci_ram_rx_cmd_ll_cb != NULL);
> (gdb)
> #0  __assert_func (file=file@entry=0x0, line=line@entry=0, 
> func=func@entry=0x0, e=e@entry=0x0)
> at repos/apache-mynewt-core/kernel/os/src/arch/cortex_m4/os_fault.c:137
> 137asm("bkpt");
> 
> -Original Message-
> From: Christopher Collins  
> Sent: Monday, June 25, 2018 11:25 PM
> To: dev@mynewt.apache.org
> Subject: Re: bleprph using HCI 4 wire
> 
> Hi Jeff,
> 
> My responses are inline.
> 
> On Tue, Jun 26, 2018 at 02:11:46AM +, Jeff Belz wrote:
> > All:
> > 
> > 
> > I'm using a BroadCom(Cypress)43438 Bluetooth chip that receives a 4 
> > wire HCI.  I got one response that said I just have to change the  
> > syscfg setting in my target to
> > 
> > 
> > 
> > BLE_HCI_TRANSPORT_NIMBLE_BUILTIN: 0
> > BLE_HCI_TRANSPORT_UART: 1
> > 
> >   1.  I can't find any documentation to what these lines do?
> 
> The documentation for syscfg settings is in the packages themselves.
> Both of the above settings are defined by the 
> @apache-mynewt-nimble/nimble/transport package.  You can see the full list of 
> settings in a project, along with their descriptions, with this
> command:
> 
> newt target config show <target-name>
> 
> However, I don't think you will see either of these settings if you execute 
> this command.  From the dependency list you quoted, it looks like you are 
> using an older version 

Re: [VOTE] Release Apache Mynewt 1.4.1-rc1

2018-06-26 Thread Christopher Collins
On Fri, Jun 22, 2018 at 04:14:37PM +0200, Szymon Janc wrote:
> Hello all,
> 
> I am pleased to be calling this vote for the source release of
> Apache Mynewt 1.4.1.

[...]

> The vote is open for at least 72 hours and passes if a majority of at
> least three +1 PMC votes are cast.
> 
> [x] +1 Release this package
> [ ]  0 I don't feel strongly about it, but don't object
> [ ] -1 Do not release this package because...

+1 (binding)

Chris


Re: bleprph using HCI 4 wire

2018-06-25 Thread Christopher Collins
Hi Jeff,

My responses are inline.

On Tue, Jun 26, 2018 at 02:11:46AM +, Jeff Belz wrote:
> All:
> 
> 
> I'm using a BroadCom(Cypress)43438 Bluetooth chip that receives a 4 wire HCI. 
>  I got one response that said I just have to change the  syscfg setting in my 
> target to
> 
> 
> 
> BLE_HCI_TRANSPORT_NIMBLE_BUILTIN: 0
> BLE_HCI_TRANSPORT_UART: 1
> 
>   1.  I can't find any documentation to what these lines do?

The documentation for syscfg settings is in the packages themselves.
Both of the above settings are defined by the
@apache-mynewt-nimble/nimble/transport package.  You can see the full
list of settings in a project, along with their descriptions, with this
command:

newt target config show <target-name>

However, I don't think you will see either of these settings if you
execute this command.  From the dependency list you quoted, it looks
like you are using an older version of Mynewt which does not support
these two settings.  I believe you are using Mynewt 1.3.0; you will want
to upgrade to 1.4.0, released about one week ago.  There was a long
delay between the releases of 1.3.0 and 1.4.0, and I mistakenly forgot
that 1.4.0 was not yet released when I wrote my original email.

The latest version introduces some fairly major changes, so I suggest
you upgrade as follows:

1. Download Newt 1.4.0 as described here:
http://mynewt.apache.org/develop/get_started/native_install/index.html

2. Upgrade the Mynewt repos to 1.4.0 by running:

newt upgrade

inside your project directory.

>   2.  How can I make sure it's configuring the right UART?

There is a syscfg setting defined by
@apache-mynewt-nimble/nimble/transport/uart called `BLE_HCI_UART_PORT`.
By default, this is defined to be 0.  You can change its value if you
need to use a different UART.
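Putting the pieces together, a target override for the UART transport might look like the following; this is an illustrative fragment (your target name, and whether you need a non-default UART, will differ):

```yaml
# targets/<your-target>/syscfg.yml -- illustrative only
syscfg.vals:
    BLE_HCI_TRANSPORT_NIMBLE_BUILTIN: 0
    BLE_HCI_TRANSPORT_UART: 1
    BLE_HCI_UART_PORT: 1
```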

>   3.  Do I change the target syscfg or the one in the app folder?

I recommend changing the target's syscfg.  The target configuration
overrides the app configuration, and it is best not to change a foreign
repo except when necessary.  The syscfg system is described in more
detail here:
http://mynewt.apache.org/develop/os/modules/sysinitconfig/sysinitconfig.html

>   4.  Do I really need the bootloader? If so, is there documentation to why, 
> I will eventually need to modify this.

The boot loader is not strictly required, but much of the Mynewt
infrastructure assumes it is present.  When you are getting Mynewt up
and running for the first time, I recommend you use the boot loader so
that you can follow the documentation more closely.

Chris


Logging changes, part 2 - Module-mapped logging

2018-06-19 Thread Christopher Collins
Hello all,

In my previous email, I mentioned some proposed logging changes that I
was less sure of.  In particular, there are these two PRs that I
submitted:

(1) log/modlog - Module-mapped logging
(https://github.com/apache/mynewt-core/pull/1174)

(2) sys/log/full: Allow min-level per module
(https://github.com/apache/mynewt-core/pull/1179)

I think the changes are sufficiently described in the PRs themselves, so
I won't repeat that here.

My concern with (1) is: the logging API is getting pretty crowded.  If
we add this feature, there really won't be room for much else in the
future, so I just want to be sure this is the logging API we want.

Re: (2) - I wanted a user-friendly way to configure individual log
levels on startup, and to turn up or down log levels while debugging.
My concern is that this would be the *fourth* place where logging is
suppressed based on level.  The other three are:

1. The compile time `LOG_LEVEL` setting.
2. Each log object has a minimum log level (`l_level`).
3. (1) above adds a minimum log level per module-mapping.

My feeling is that this won't be an issue in practice: `LOG_LEVEL` feels
different from the others, and I don't see people ever changing
`l_level` or module-mapping levels.

Please feel free to share any thoughts, suggestions, criticisms, etc.

Thanks,
Chris


Logging changes, part 1 - Separate header and body

2018-06-19 Thread Christopher Collins
Hello all,

I have submitted a few logging related PRs to the core repos.  I think I
like the changes, but I have a nagging feeling they miss the point
somehow.  I wanted to get the community's opinion on these changes as
well as start a discussion on minimizing backwards compatibility
breakage.  I plan to split this into two emails.

This email concerns this PR:
https://github.com/apache/mynewt-core/pull/1196

This email is a bit easier than the next one, because I am mostly
convinced that this change is good.  The PR addresses some awkwardness
in the logging API.  Specifically, the old API requires the user to know
that each log entry starts with a header, and to know the offset of the
message body within a log entry.  This burden is imposed on the user as
follows:

1. log_append() - User supplied buffer must contain sufficient padding
to accommodate a log entry header at the start (described in more detail
here: https://github.com/apache/mynewt-core/issues/1173).

2. log_read() - The offset argument is relative to the start of the
log entry, not to the start of the message body.  Reading from offset 0
retrieves the header; reading from offset (0+hdr_size) retrieves the
body.

3. log_walk() - Length passed to walk callback indicates the size of the
entire entry, not just the size of the body.  The callback has to adjust
the length and offset to read the message body.

I think these are all deficiencies in the logging API.  The user should
not need to know how log entries are structured and where the message
body begins.  However, I think the most important issue is the
padding requirement of `log_append()`.

The PR referenced above addresses these issues by introducing some new
functions:

* log_append_body
* log_append_mbuf_body
* log_read_hdr
* log_read_body
* log_read_mbuf_body
* log_walk_body

The PR description goes into more detail about these functions.  I think
these functions are an improvement over the existing ones, because they
are less error-prone without sacrificing any power (but please voice any
disagreements).  That said, I don't think it is necessary to break
backwards compatibility.  While the new function names are not quite
ideal, it doesn't seem like too much of a burden for users to include
the "_body" suffix when accessing the log API.

All comments welcome.

Thanks,
Chris


Re: ADC device not allowed multiple opens

2018-06-06 Thread Christopher Collins
On Wed, Jun 06, 2018 at 11:57:34AM -0700, will sanfilippo wrote:
> Chris:
> 
> I might be missing something here, but given that os_dev already has a 
> reference count and that handles multiple folks opening/closing the device, 
> does the underlying adc driver need a reference count itself? If it just 
> returned no error if opened again this would be fine.
> 
> I do note that os_dev_open() and os_dev_close() always call the open/close 
> handlers regardless of reference count. I wonder if that should be changed 
> (meaning only call open/close once)?

No, you aren't missing anything; I just misunderstood the os_dev
reference counting.  Thanks for setting me straight :).

Another option: the ADC open function checks its os_dev's reference
count.  If the value is greater than zero, then return without doing
anything.

Chris

> 
> 
> > On Jun 6, 2018, at 10:13 AM, Christopher Collins  wrote:
> > 
> > On Wed, Jun 06, 2018 at 08:50:34AM -0700, will sanfilippo wrote:
> >> Hello:
> >> 
> >> I am not the most familiar with the ADC device so it is possible that it 
> >> was being used incorrectly but in any event I ran across something I 
> >> wanted to discuss. The call to os_dev_open() allows a device to be opened 
> >> multiple times (there is a reference count there). However, the call to 
> >> nrf52_adc_open() returns an error (OS_EBUSY) if the device has already 
> >> been opened.
> >> 
> >> This presented a problem in the following case: consider two independent 
> >> packages both of which want to use ADC_0. Each package is going to attempt 
> >> to open the ADC device (since it has no idea if it was already opened) but 
> >> the second attempt to open the device will result in an error code 
> >> returned. Depending on how the code is written in the package, this could 
> be a problem. Given that an ADC is almost always a multi-channel 
> peripheral (one adc device has multiple channels) I would suspect the above 
> >> case to be common: multiple packages wanting an ADC channel from a single 
> >> device. 
> >> 
> >> I am not sure if anything needs to be done here; just wanted to see if 
> >> folks thought there should different behavior with regards to the function 
> >> returning an error if the device was already opened. If not, folks are 
> >> going to have to be careful when they write code using the adc device. 
> >> Seems to me if nothing is going to change we have two options:
> >> 
> >> 1) The device gets created and opened in some place and handed to the 
> >> packages that need it.
> >> 2) The device gets created (say by the bsp) and each package can attempt 
> >> to open the device. If os_dev_lookup() returns !NULL but os_dev_open() 
> >> returns NULL it means that the device has already been opened.
> >> 
> >> Something about #2 just sort of bothers me. I do not like ambiguous stuff 
> >> like that; how do you know if there was an error for another reason?
> > 
> > Why not:
> > 
> > 3) Make the ADC driver consistent with other drivers by adding a
> > reference count.
> > 
> > ?
> > 
> > I know something less than nothing about the ADC code, so I could
> > certainly be missing something.
> > 
> > Chris
> 


Re: ADC device not allowed multiple opens

2018-06-06 Thread Christopher Collins
On Wed, Jun 06, 2018 at 08:50:34AM -0700, will sanfilippo wrote:
> Hello:
> 
> I am not the most familiar with the ADC device so it is possible that it was 
> being used incorrectly but in any event I ran across something I wanted to 
> discuss. The call to os_dev_open() allows a device to be opened multiple 
> times (there is a reference count there). However, the call to 
> nrf52_adc_open() returns an error (OS_EBUSY) if the device has already been 
> opened.
> 
> This presented a problem in the following case: consider two independent 
> packages both of which want to use ADC_0. Each package is going to attempt to 
> open the ADC device (since it has no idea if it was already opened) but the 
> second attempt to open the device will result in an error code returned. 
> Depending on how the code is written in the package, this could be a problem. 
> Given that an ADC is almost always a multi-channel peripheral (one adc device 
> has multiple channels) I would suspect the above case to be common: multiple 
> packages wanting an ADC channel from a single device. 
> 
> I am not sure if anything needs to be done here; just wanted to see if folks 
> thought there should different behavior with regards to the function 
> returning an error if the device was already opened. If not, folks are going 
> to have to be careful when they write code using the adc device. Seems to me 
> if nothing is going to change we have two options:
> 
> 1) The device gets created and opened in some place and handed to the 
> packages that need it.
> 2) The device gets created (say by the bsp) and each package can attempt to 
> open the device. If os_dev_lookup() returns !NULL but os_dev_open() returns 
> NULL it means that the device has already been opened.
> 
> Something about #2 just sort of bothers me. I do not like ambiguous stuff 
> like that; how do you know if there was an error for another reason?

Why not:

3) Make the ADC driver consistent with other drivers by adding a
reference count.

?

I know something less than nothing about the ADC code, so I could
certainly be missing something.

Chris


No longer need to call `conf_load()`

2018-05-16 Thread Christopher Collins
Hello all,

I am about to merge a PR
(https://github.com/apache/mynewt-core/pull/1075) which eliminates the
need to explicitly call `conf_load()`.  Nothing bad will happen if your
app continues to call `conf_load()`, but the call can be safely removed
after the PR is merged.

Background-
The `sys/config` package is used for persisting data across reboots.  To
restore all the persisted data from flash, an app had to manually call
`conf_load()`, typically from `main()`.  It would be preferable for the
data to get loaded automatically during system initialization, but
this is not possible; the underlying storage containing the config data
is not ready for use until the application explicitly initializes it in
`main()`.  Typically, config data is stored in a flash circular buffer
(fcb), something which sysinit and syscfg know nothing about, and which
must be manually initialized by the app.  Thus, `conf_load()` during
sysinit would fail.  

The PR referenced above causes `conf_load()` to be called automatically,
but it delays the call until the default event queue begins being
processed.  Typically, the default event queue is processed in an
infinite loop at the end of `main()`, so delaying the call to
`conf_load()` to this point gives `main()` the opportunity to initialize
FCBs and other storage first.

Issue-
Packages may attempt to access persisted data before `conf_load()` has
been called.  This issue has always existed, but the referenced PR
exacerbates it.  This issue can arise in a number of ways:

(1) Package tries to use persisted data during sysinit.
(2) High priority task tries to use persisted data as soon as it starts
running.
(3) Package that runs in the default task tries to use persisted data
immediately after sysinit completes.

(1) and (2) were always an issue.  (3) is a new issue that arises if an
app takes advantage of the automatic call to `conf_load()`.  If your app
uses a package which tries to use persisted data immediately on startup,
then you may want to continue calling `conf_load()` from `main()`.

Thanks,
Chris


Re: managing repository mirrors with the newt tool

2018-04-09 Thread Christopher Collins
Hi David,

On Mon, Apr 09, 2018 at 09:29:34PM -0500, david zuhn wrote:
> I'm trying to understand how the newt tool manages repository versions.
> 
> Here's my situation -- I don't want to depend on github/master, yet I don't
> want to introduce gratuitous incompatibility.
> 
> So I would like to redefine the apache-mynewt-core repository to be my own
> repository (either my own fork on github, or perhaps an internal github
> server). That's an easy change to make in project.yml.  No problem yet.
> 
> On my own fork, I like to keep master to be exactly the same as the
> upstream master, although it may lag behind as I don't need to run a pull
> constantly.   So I'm keeping my own changes in a development branch.
> 
> I'd like to specify this in project.yml, but the 'vers' field has some
> strict guidelines about what I can use, and it seems that I can only use
> master branch, so "0-dev" is "0.0.0" is "master" according to
> repository.yml.
> 
> I want to have everything I'm building in source control, for my Continuous
> Integration engine if nothing else.   At the moment, I have a patch to the
> BAS service in mynewt-nimble that I wish to commit.  I believe that it will
> be in the official repository "soon" (for some definition of soon), but
> that doesn't help the fact that I would like to check in (to my local
> repositories my application that depends on this fix, which then entails
> that I want to check in the fix too).
> 
> From what I'm reading so far, it would be nice if the newt tool were a
> little less picky about the format of the 'vers' tag, or had an alternate
> tag that would let me bypass 'vers' altogether and let be specify a branch
> name directly (which would also alleviate the need to special case "0.0.0"
> to being "master").
> 
> Or is there a capability to do what I'm looking for that I just can't find
> (I don't speak fluent go, so perusing the source hasn't shown me anything
> yet).

I agree - more flexibility here would be an improvement.  I think you
have a good understanding of the system.  Here is my summary of how it
currently works:

* The `project.yml` file consists of "repo specifiers."
* A repo specifier contains a single "version string."
* A version string has one of the following forms:
* Normalized:  "#.#.#"
* Floating:    "#[.#]-<suffix>"

  where <suffix> is "dev", "latest", etc.

Each Mynewt repo contains a `repository.yml` file in its master
branch.  The `repository.yml` file:
* Maps floating version strings to normalized version strings.
* Finally, maps normalized version strings to git commits
  (branch, tag, or hash).
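As a concrete illustration of that two-step mapping, a `repository.yml` fragment might look like the following (the tag and branch names are hypothetical):

```yaml
# repository.yml fragment -- version-to-commit mapping (values hypothetical)
repo.versions:
    "0-dev": "0.0.0"            # floating -> normalized
    "0.0.0": "master"           # normalized -> git commit (branch)
    "1.4.0": "mynewt_1_4_0_tag" # normalized -> git commit (tag)
```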

I think this system works well for users who want to use official Mynewt
releases.  However, it may not be the best for more adventurous users.

To solve your problem, I think you'll need to modify your repo's
`repository.yml` file such that "0.0.0" points to the specific commit
hash that you want to pin yourself to.  For example:

repo.versions:
"0.0.0": "815254f5166ef3954b214efdd37549814521c9d6"

For the future, I suggest we make the following changes to newt:

1. Allow a repo specifier (in `project.yml`) to directly specify a git
commit (branch, tag, or hash).

2. Allow a `repository.yml` file to map a floating version number
directly to a git commit, rather than requiring an intermediate
normalized version.

Chris


Re: MBUF behaviour

2018-04-08 Thread Christopher Collins
On Sat, Apr 07, 2018 at 08:15:55PM +0530, Aditya Xavier wrote:
> That explains everything. However, one question. 
> 
> When we do a os_mbuf_free_chain, shouldn’t om_data also provide “”, instead 
> of the previous value ?

The contents of an mbuf's data buffer beyond `om_len` are always
indeterminate.  The issue here is that the app prints the mbuf data
despite `om_len` being equal to 0.

What is actually happening is `os_mbuf_get()` is returning the same mbuf
that was just freed.  This is a consequence of Mynewt's mempool
implementation; elements are allocated in the reverse order they were
freed.  Since the app only allocates one mbuf at a time, the mempool
always returns a pointer to the same mbuf.

The mbufpool implementation could zero out an mbuf when it gets
allocated.  This would prevent the app from printing stale contents.
However, this behavior would be a waste of time; setting `om_len` to 0
is sufficient as long as the app respects the mbuf API.

Chris

> 
> I understand its not wise to delete the data from the memory for the sake of 
> efficiency, however was wondering if thats the expected result.
> 
> Thanks for the explanation though. Would change the code accordingly.
> 
> Thanks,
> Aditya Xavier.
> 
> 
> > On 07-Apr-2018, at 8:55 AM, Christopher Collins <ch...@runtime.io> wrote:
> > 
> > Hi Aditya,
> > 
> > On Sat, Apr 07, 2018 at 07:59:44AM +0530, Aditya Xavier wrote:
> >> Hi Christopher,
> >> 
> >> That is the expected behaviour, however if you try running the sample app 
> >> I gave you would notice the following 
> >> After step 11, I.e initialise the os_mbuf by doing a os_mbuf_get from a 
> >> mempool, the new value is overwritten on the previous value.
> >> 
> >> I.e
> >> 1. Accessing the mbuf after doing a free chain, om_data still holds the 
> >> value.
> >> 2. Initialising it again by doing a os_mbuf_get is not getting me a clean 
> >> mbuf, rather it holds the previous value and om_mbuf_copyinto merely 
> >> overwrites it. So in case the new string length is smaller, om_data would 
> >> return wrong value.
> >> 
> >> Am sorry if am not able to explain it properly however I would appreciate 
> >> it if you can test the app once.
> > 
> > I ran your app, and I see nothing unusual in the output.  Here is what I
> > get:
> > 
> >(gdb) r
> >Starting program:
> >
> > /mnt/data2/work/micosa/repos/mynewt-core/bin/targets/blinky-sim/app/apps/blinky/blinky.elf
> >uart0 at /dev/pts/16
> >UART MBUF Created 1 to 1
> >Received Value :- abc
> >Received Length :- 3
> >Value after Reinit :- abc
> >Length after Reinit :- 0
> >Received Value :- hello
> >Received Length :- 5
> >Value after Reinit :- hello
> >Length after Reinit :- 0
> >Received Value :- gagao
> >Received Length :- 4
> >Value after Reinit :- gagao
> >Length after Reinit :- 0
> > 
> > To get this output, I typed the following strings into the console:
> > 
> >abc
> >hello
> >gaga
> > 
> > If I understand correctly, your concern is the following part of the
> > output:
> > 
> >Received Value :- gagao
> >Received Length :- 4
> >Value after Reinit :- gagao
> >Length after Reinit :- 0
> > 
> > Specifically, you are unsure why:
> > 
> >* The first line contains "gagao" rather than "gaga".
> >* The third line contains "gagao" rather than "".
> > 
> > Your program uses the `%s` format specifier to print the contents of an
> > mbuf.  This is probably not what you want, for a number of reasons:
> > 
> >* Mbuf contents are not typically null-terminated.
> >* Mbuf contents are not guaranteed to be contiguous (i.e., multiple
> >  buffers may be chained).
> > 
> > Here is a reliable, if inefficient, way to print an mbuf as a string:
> > 
> >static void
> >print_mbuf(const struct os_mbuf *om)
> >{
> >int i;
> > 
> >for (; om != NULL; om = SLIST_NEXT(om, om_next)) {
> >for (i = 0; i < om->om_len; i++) {
> >putchar(om->om_data[i]);
> >}
> >}
> >}
> > 
> > If you are sure the mbuf is not chained, then you don't need the outer
> > loop.
> > 
> > Chris
> 


Re: MBUF behaviour

2018-04-06 Thread Christopher Collins
Hi Aditya,

On Sat, Apr 07, 2018 at 07:59:44AM +0530, Aditya Xavier wrote:
> Hi Christopher,
> 
> That is the expected behaviour, however if you try running the sample app I 
> gave you would notice the following 
> After step 11, I.e initialise the os_mbuf by doing a os_mbuf_get from a 
> mempool, the new value is overwritten on the previous value.
> 
> I.e
> 1. Accessing the mbuf after doing a free chain, om_data still holds the value.
> 2. Initialising it again by doing a os_mbuf_get is not getting me a clean 
> mbuf, rather it holds the previous value and om_mbuf_copyinto merely 
> overwrites it. So in case the new string length is smaller, om_data would 
> return wrong value.
> 
> Am sorry if am not able to explain it properly however I would appreciate it 
> if you can test the app once.

I ran your app, and I see nothing unusual in the output.  Here is what I
get:

(gdb) r
Starting program:

/mnt/data2/work/micosa/repos/mynewt-core/bin/targets/blinky-sim/app/apps/blinky/blinky.elf
uart0 at /dev/pts/16
UART MBUF Created 1 to 1
Received Value :- abc
Received Length :- 3
Value after Reinit :- abc
Length after Reinit :- 0
Received Value :- hello
Received Length :- 5
Value after Reinit :- hello
Length after Reinit :- 0
Received Value :- gagao
Received Length :- 4
Value after Reinit :- gagao
Length after Reinit :- 0

To get this output, I typed the following strings into the console:

abc
hello
gaga

If I understand correctly, your concern is the following part of the
output:

Received Value :- gagao
Received Length :- 4
Value after Reinit :- gagao
Length after Reinit :- 0

Specifically, you are unsure why:

* The first line contains "gagao" rather than "gaga".
* The third line contains "gagao" rather than "".

Your program uses the `%s` format specifier to print the contents of an
mbuf.  This is probably not what you want, for a number of reasons:

* Mbuf contents are not typically null-terminated.
* Mbuf contents are not guaranteed to be contiguous (i.e., multiple
  buffers may be chained).

Here is a reliable, if inefficient, way to print an mbuf as a string:

static void
print_mbuf(const struct os_mbuf *om)
{
int i;

for (; om != NULL; om = SLIST_NEXT(om, om_next)) {
for (i = 0; i < om->om_len; i++) {
putchar(om->om_data[i]);
}
}
}

If you are sure the mbuf is not chained, then you don't need the outer
loop.

Chris


Re: MBUF behaviour

2018-04-06 Thread Christopher Collins
Hi Aditya,

On Fri, Apr 06, 2018 at 07:36:41PM +0530, Aditya Xavier wrote:
> Hi Mynewt Team,
> 
> Please help me understand the behavior of MBUF.
> 
> PFB the steps I did :-
> 
> 1.os_mempool_init
> 2.os_mbuf_pool_init
> 3.Initialized a os_mbuf by os_mbuf_get
> 4.Triggered Console Reader.
> 5.os_mbuf_copyinto the console_buf into the os_mbuf
> 6.Did a eventq_put to get into a event callback.
> 7.Read os_mbuf value & os_mbuf len
> 8.os_mbuf_free_chain
> 9.Read os_mbuf value & os_mbuf len
> 10.   Initialized a os_mbuf by os_mbuf_get
> 11.   Repeat step 4 onwards.
> 
> Problem :-
> In step 7, I read the previous string, however the length is correct.
> In step 9, I read the previous string, however the length is 0.

`os_mbuf_free_chain` frees the mbuf chain back to its source pool.  From
that point, accessing the mbuf via this pointer is an error.

This is analogous to the following example:

int *x = malloc(sizeof(*x));
*x = 99;
free(x);
printf("*x = %d\n", *x);

In other words, don't access something after you free it! :)  You'll
need to allocate a new mbuf if you need one after freeing the first.

Chris


Re: os data.core memory section

2018-04-02 Thread Christopher Collins
Hi Markus,

On Sat, Mar 31, 2018 at 04:02:05PM -0700, markus wrote:
> I looked into moving the stack into the CCM memory of the stm32
> mcu's - and although almost every linker script defines ".data.core"
> sections and there are some defines in bsp.h's for section
> attributes they don't seem to be used.
> 
> Is there some hidden magic going on or is the CCM reserved for
> application code?

No hidden magic; CCM is mostly unused and is up for grabs.  I think
there was some attempt to use this memory intelligently a while back,
but as the number of supported BSPs increased, it became impractical.

When you say "the stack", do you mean the interrupt handler stack?  That
sounds like a reasonable use of CCM (though you and others probably have
a better sense of this than I do).  If this is something that users will
want to do, it might be good to create a syscfg setting to control
whether the stack gets put in CCM or normal RAM.

Chris


Re: CBOR encoding problems.

2018-03-29 Thread Christopher Collins
On Thu, Mar 29, 2018 at 10:27:36PM +0530, Aditya Xavier wrote:
> Thanks for the tip. Would try that too..
> 
> However, I have tried few variations to check the BLE side ( whether it was 
> responsible for truncating )
> 
> 1. I am able to send the same message using encoding only.. i.e. if I trigger 
> the same using a button to send over ble.
>   i.e. BLE Connection, Button -> Encoding -> BLE Output, works.
> 
> 2. The method which receives the char * data, itself receives a truncated 
> value.
>   BLE Connection, BLE Incoming -> Decoding -> Encoding -> BLE Outgoing, 
> does not work. Truncation.
> 
> 3. I am able to send the same message using encoding only.. i.e if I trigger 
> only the encoding method after receiving a message over BLE.
>   i.e. BLE Connection, BLE Incoming -> Encoding -> BLE Outgoing, works.
> 
> This kinda makes me feel that it should be a memory issue when its BLE + 
> Decoding + Encoding. With or without using mbuf / mbuf_pool
> 
> Let me know if you require to see the code, I can write another small test 
> app for you.

I think I understand the sequences you described.  I'm afraid I don't
have a good answer.

Are you checking the return code of the encoding function?  If the
system is running out of mbufs during encoding, the function should
return `CborErrorOutOfMemory`, and the mbuf chain will be partially
filled.

If you are running in gdb, another way to check for mbuf exhaustion is:
```
p os_msys_init_1_mempool
```

and look at the `mp_min_free` value.  If it is 0, that means the pool
has been exhausted at some point, and it is likely that some allocations
have failed.

You can also just try increasing the number of mbufs in the system:
```
newt target amend <target> syscfg=MSYS_1_BLOCK_COUNT=16
```

(assuming you are currently using the default value of 12; adjust
accordingly if not)

Chris


Re: CBOR encoding problems.

2018-03-29 Thread Christopher Collins
Hi Aditya,

On Thu, Mar 29, 2018 at 08:52:08PM +0530, Aditya Xavier wrote:
[...]
> And it doesn’t work when am trying to trigger it from BLE. 
> 
> Assuming it was a memory issue, I used a mem_pool, mbuf, allocating
> and reserving space. However, problem remains.
> 
> It usually encodes and sends the following structure:
> 
> {“field1”:1, “field2”:2, “field3”:[{“field4”:4,
> 
> Which naturally fails because of incomplete cbor structure.

My guess is that the Bluetooth ATT MTU is too low to accommodate the
full packet.  The ATT layer is specified with the somewhat surprising
behavior of silently truncating overly-long attribute values [1].

How are you sending the CBOR data?  If you are using a simple "write
characteristic" procedure (`ble_gattc_write()`), you can use the "write
long characteristics" procedure instead (`ble_gattc_write_long()`),
which will fragment the long attribute value for you.  If you are
sending the CBOR data in a notification, on the other hand, then I'm
afraid the BLE stack doesn't offer any means of fragmentation; you'll
need to implement an application-layer fragmentation scheme.

Chris

[1] Perhaps we should add a syscfg setting which causes an over-long
write to indicate an error to the application instead of silently
truncating the data.  The stack wouldn't be standards-conforming in this
mode, but it seems more useful.


Re: Convenience header: mynewt.h

2018-03-27 Thread Christopher Collins
FYI- I have filed a PR implementing this change:
https://github.com/apache/mynewt-core/pull/969

Chris

On Wed, Mar 21, 2018 at 10:47:14AM -0700, Christopher Collins wrote:
> Hello all,
> 
> I was thinking about adding a new header to the core repo that just
> includes the (more or less) mandatory headers:
> 
> * syscfg/syscfg.h
> * sysinit/sysinit.h
> * os/os.h
> * defs/error.h
> 
> The rule of thumb would be: just include "mynewt.h" in every file.  I
> think this would make an application developer's job easier, and
> simplify the introduction to Mynewt.
> 
> Thoughts?
> 
> Thanks,
> Chris


Convenience header: mynewt.h

2018-03-21 Thread Christopher Collins
Hello all,

I was thinking about adding a new header to the core repo that just
includes the (more or less) mandatory headers:

* syscfg/syscfg.h
* sysinit/sysinit.h
* os/os.h
* defs/error.h

The rule of thumb would be: just include "mynewt.h" in every file.  I
think this would make an application developer's job easier, and
simplify the introduction to Mynewt.
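
A sketch of what such a header might contain, inferred directly from the list above (the include-guard name is illustrative):

```
/* mynewt.h - convenience header (sketch; actual contents may differ). */
#ifndef H_MYNEWT_
#define H_MYNEWT_

#include "syscfg/syscfg.h"
#include "sysinit/sysinit.h"
#include "os/os.h"
#include "defs/error.h"

#endif
```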

Thoughts?

Thanks,
Chris


Re: Error "The filename or extension is too long." while building a split image app

2018-03-18 Thread Christopher Collins
On Sun, Mar 18, 2018 at 09:26:19PM -0400, Abderrezak Mekkaoui wrote:
> Hi Chris.
> The result of the build with -ldebug can be found here:
> 
> https://www.dropbox.com/s/o749a6x6xjvva6t/clvrt_ess_split.log?dl=0

Darn... it seems Windows 7+ limits the command line length to 32767
characters [1].  The objcopy command that elicits the error is 80819
characters long, well in excess of the maximum.

Normally, the objcopy command isn't so massive.  However, as this
particular target is a split image, newt needs to specify which symbols
go in which image (loader or app).  Newt does this by explicitly
specifying each symbol to keep using the `-K <symbol>` syntax.

This is a tricky problem.  I'm afraid I don't know of a workaround,
other than building in a non-Windows environment.

I have filed this issue in github:
https://github.com/apache/mynewt-newt/issues/149.  Thanks for the
report.

Chris

[1] https://blogs.msdn.microsoft.com/oldnewthing/20031210-00/?p=41553


Re: Error "The filename or extension is too long." while building a split image app

2018-03-18 Thread Christopher Collins
Hi Abderrezak,

Could you try building again with the `-ldebug` switch?  Please include
the output in a follow up.

Thanks,
Chris

On Sun, Mar 18, 2018 at 05:05:40PM -0400, Abderrezak Mekkaoui wrote:
> Hi All,
> 
> I was following the tutorial to build a split image application. The 
> process got stalled with the following error:
> 
> 
> 
> ...
> Generating ROM elf
> Error: fork/exec C:\Program Files (x86)\GNU Tools ARM Embedded\7.0 
> 2017q4\bin\arm-none-eabi-objcopy.exe: The filename or extension is too long.
> 
> 
> 
> I am using mingw64 on a Windows 10 machine.
> 
> Any help is appreciated.
> 
> Best regards
> Abderrezak
> 


Re: Nimble Questions

2018-03-15 Thread Christopher Collins
Hi Ram,

On Thu, Mar 15, 2018 at 12:07:55PM +0530, Sriram V wrote:
> Hi,
> 
> I have the following doubts on NimBle:
> 
> The document says 32+ concurrent connections, multiple connections in
> simultaneous central and peripheral roles. Does that mean the "device
> running Nimble" can connect to 32 different other devices like phones?

Yes, with one caveat: each of the 32 centrals needs to be sufficiently
cooperative in choosing connection parameters (
http://www.novelbits.io/ble-connection-intervals/).  My knowledge of the
BLE link layer is pretty limited, so someone more informed may need to
chime in.  Your app might need to request different connection
parameters from the centrals (using
http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_update_params/).

Of course, you will also need to build the Mynewt image device with a
configuration suitable for 32 concurrent connections (e.g.,
NIMBLE_MAX_CONNECTIONS=32, etc.).

> Also, I wanted to check if the stack provides firmware upgrade
> capability and if so, can you provide an example on how it is being
> done.

The newtmgr tool (http://mynewt.apache.org/latest/newtmgr/overview/) is
used to upgrade Mynewt devices.  Newtmgr is a command line tool, but I
believe there are other client libraries available (node.js, android?).
I don't see any examples of image upgrades, but there is some
information here, under the "Image Upgrade" heading:
http://mynewt.apache.org/latest/os/modules/split/split/

> Can we upgrade multiple devices (having Nimble) at the same time using
> a single device/phone supporting BT?

It's probably not the most helpful answer, but: yes, as long as the phone
can handle it.  You can certainly perform multiple simultaneous upgrades
from a computer using the newtmgr tool.  That said, I have a feeling you
will get better overall throughput if you limit yourself to one upgrade
at a time.

Chris


Re: Question regarding exchanging long characteristic values over BLE

2018-03-09 Thread Christopher Collins
Hi Simon,

On Thu, Mar 08, 2018 at 12:18:41AM -0800, Simon Ratner wrote:
> Old thread, but I just bumped into this myself so want to resurrect it.
> 
> Current api makes it very difficult to implement a long characteristic that
> changes frequently (e.g. time- or sensor-based reading, or including a
> random component). In the case where mtu exchange fails or does not
> complete in time, the client may receive a mashup of two or more different
> values, if access_cb returns a current value each time. For example, a
> 32-byte value might end up as 22 bytes from sample 0 plus 10 bytes from
> sample 1 -- a combination that does not decode to a valid reading. One way
> around this is to lock in a stable sample for each connection, but that
> becomes harder to keep track of with many concurrent connections.

If you expect all clients to support a reasonable MTU, you might just
punt on the problem: if you can't fit the characteristic value in a
single response, send some sort of "characteristic not ready" response
instead.  You can determine the connection's MTU by calling
`ble_att_mtu()`.

This doesn't solve the general problem, of course.

> I don't really have a solution yet, just complaining ;) Perhaps nimble
> holding on to the value for subsequent offset reads makes sense after all.
> I guess the difficulty there is knowing when to free it?

That seems like a good solution.  Regarding how long to hold onto the
cached value, well, that's what syscfg is for :).

I thought of an alternative that won't actually work, but I'll share it
anyway: allow the application to associate a minimum MTU with each
characteristic.  If a peer attempts to read the characteristic over a
connection with an MTU that is too low, the stack initiates an MTU
exchange procedure, and only responds to the read request after the MTU
has increased.  Unfortunately, this doesn't work because changes in the
MTU don't apply to transactions that are already in progress.

Chris

> 
> Cheers,
> simon
> 
> 
> On Tue, Jul 25, 2017 at 12:19 PM, Andrzej Kaczmarek <
> andrzej.kaczma...@codecoup.pl> wrote:
> 
> > Hi,
> >
> >
> > On Tue, Jul 25, 2017 at 8:14 PM, Christopher Collins <ch...@runtime.io>
> > wrote:
> >
> > > On Tue, Jul 25, 2017 at 10:46:32AM -0700, Pritish Gandhi wrote:
> > > [...]
> > >
> > [...]
> >
> >
> > > > Is this true for notifications too? If I need to send notifications for
> > > > that long characteristic value, will my access callback be called
> > several
> > > > times based on the MTU of the connection?
> > >
> > > Unfortunately, Bluetooth restricts notifications to a single packet.  If
> > > the characteristic value is longer than the MTU allows, the notification
> > > gets truncated.  To get around this, the application needs to chunk the
> > > value into several notifications, and possibly use some protocol which
> > > indicates the total length, parts remaining, etc.
> > >
> >
> > Also, the client can just do a long read to retrieve the remaining
> > portion of the characteristic value - this is what the ATT spec
> > suggests.  It depends on the actual use case, but this way the client
> > can decide whether to read the remaining portion of the value or skip
> > it; e.g., some flags can be placed at the beginning of the
> > characteristic value, and they will always be sent in the notification.
> >
> > Best regards,
> > Andrzej
> >


Re: [DISCUSSION] Moving NimBLE to separate project

2018-03-05 Thread Christopher Collins
On Tue, Feb 27, 2018 at 08:39:04AM -0800, Christopher Collins wrote:
> I agree with others that the best option is 2a
> (@apache-mynewt-core/net/nimble* become empty packages that pull in the
> external nimble packages).  However, newt doesn't currently support repo
> dependencies; if a repo is not specified in the user's `project.yml`
> file, then newt won't download it.
[...]

I have submitted a PR which fixes repo dependencies:
https://github.com/apache/mynewt-newt/pull/140.  

This PR changes the `repository.deps` format to the one I used in my
previous email.  I thought this was a good idea because it eliminates
the need to repeat a depended-on repo's git information for each
version.

> 
> I've been looking at repo dependencies in newt, and it appears to be a
> pretty complex feature.  I don't think we can expect to have this
> working for 1.4.0, but maybe there are some compromises we can make to
> implement a simplified model that solves the nimble case.  I'll explain
> why I think this is complicated to solve below.  If you aren't
> interested in newt particulars, feel free to stop reading now :).
> 
> Here is how apache-mynewt-core's `repository.yml` file might look after
> the nimble and nffs dependencies are added (I've made some syntax
> changes for readability):
> 
> repository.deps:
>     mynewt-nimble:
>         type: git
>         url: 'g...@github.com:apache/mynewt-nimble.git'
>         vers:
>             master: 0-dev
>             mynewt_1_4_0_tag: >=1.4.0
> 
>     mynewt-nffs:
>         type: git
>         url: 'g...@github.com:apache/mynewt-nffs.git'
>         vers:
>             master: 0-dev
>             mynewt_1_4_0_tag: 1.4.0
> 
> What this says is:
> * The master branch of apache-mynewt-core depends on:
>     o nimble 0-dev
>     o nffs 0-dev
> * The mynewt_1_4_0_tag tag of apache-mynewt-core depends on:
>     o nimble 1.4.0 or later
>     o nffs 1.4.0 only
> 
> The key points are: 
> * A repo's dependency tree may differ with version (branch).  
> * A dependency can specify a range of acceptable versions of a depended-on repo (e.g., `>=`).
> 
> I think this is a good design, and I am not suggesting changing it.
> Some aspects of this are just quite complicated.  In particular, if the
> user wants to upgrade some repos, calculating the versions to upgrade to
> is a bit tricky.  Newt solves the similar problem of determining which
> packages to include during a build operation, but there is a fundamental
> difference: for packages, there is only a single dependency graph.  For
> repos, on the other hand, there are potentially many separate graphs,
> each of which needs to be evaluated to detect conflicts.  Furthermore,
> each graph may differ quite substantially, as some secondary dependency
> may add a tertiary dependency for certain versions (and so on and so
> on).  The number of potential graphs is quite large.  Consider a project
> with 10 repos, each of which specifies 10 versions in its
> `repository.yml` file.  Ignoring some details, there are potentially
> 10^10 (10 billion) graphs that would need to be evaluated to find the
> one combination that doesn't have any conflicts.  Furthermore, producing
> a helpful error message when there is no suitable combination of repo
> versions is quite a challenge in itself.
> 
> Linux distributions that use a rolling release model solve this problem,
> so it certainly is possible to implement.  However, it is going to
> require a lot of tricky code.
> 
> Chris
> 
> On Tue, Feb 27, 2018 at 09:44:37AM +0100, Szymon Janc wrote:
> > Hi,
> > 
> > 1. I'm fine with doing it either before or after 1.4. We just need to
> > make sure update works correctly for 1.4 if we do it before.
> > 2. I'd go with 2a as Will,  no need to keep duplicated code around.
> > 3. Agree with Will,  lets start with 1.0 NimBLE release. Only the
> > first release needs to be coordinated with -core anyway.
> > 
> > 
> > On 24 February 2018 at 01:07, will sanfilippo <wi...@runtime.io> wrote:
> > > My thoughts:
> > >
> > > 1. Since the 1.4 release is coming up very quickly I would do this for 
> > > 1.5 (personally).
> > > 2. I would choose 2a.
> > > 3. Seems a bit confusing to me to label nimBLE releases with same number 
> > > as Mynewt releases. Why not just make the first stable release of nimBLE 
> > > 1.0? Not a big deal either way but since they are going to diverge 
> > > eventually.
> > >
> > > Will
> > >> On Feb 22, 2018, at 1:01 AM, Andrzej Kaczmarek 
> > >> <

Re: Device numbering clarification

2018-02-28 Thread Christopher Collins
Hi Markus,

On Mon, Feb 26, 2018 at 11:36:41AM -0800, markus wrote:
> Is there some documentation about the numbers, meanings and relationship (if 
> any) of the 3 different device numbers?
> 
>  -) syscfg, eg UART_0
>  -) device name, eg "uart0"
>  -) mcu HW device, eg "USART2"

The short answer is: unfortunately no, there isn't any documentation
about how devices should be numbered.  I think the device numbering
scheme is something that evolved without any concrete plan, so a
conversation might be a good idea.

For the first two (syscfg and os_dev name): these should always
correspond to the same device.  I.e., `UART_0` syscfg settings should
configure the device called "uart0".

The third item (name that the MCU datasheet assigned to a peripheral) is
not so straightforward.  I think there is some tension due to two
conflicting goals:
1. Make setting names consistent with names in MCU documentation.
2. Make setting names consistent among all BSPs.

The second goal seems to have won out over the first.  The reason the
second goal is important is that it allows for BSP-agnostic libraries
and apps.  For example, a library that requires a UART can be configured
to use a UART number without worrying about whether the MCU
documentation calls the device "UART 0" or "USART 0".

If you have any thoughts on how this could be improved, please don't
hesitate to share them.  The same goes for other readers!

Chris


Re: JSON Encoding and Decoding

2018-02-28 Thread Christopher Collins
Hi Aditya,

On Wed, Feb 28, 2018 at 06:35:04PM +0530, Aditya Xavier wrote:
> Thanks got Encoding working for the required JSON.
> 
> Any pointers on how to get the decoding working ?
> 
> From the example 
> https://github.com/apache/mynewt-core/blob/master/encoding/json/test/src/testcases/json_simple_decode.c
>  
> 
> 
> Am able to decode up to name2 only.
> 
> {“name1": 1,”name2": “value2”, “name3”: [{“name4”: 2, “name5”: 3}]};
> 
> Please let me know the Structure for Array of objects.

Example 2 in the microjson document
(http://www.catb.org/~esr/microjson/microjson.html) does something
similar, so I would take a look at that.  This example is under the
"Compound Value Types" heading.

Chris


Re: [DISCUSSION] Moving NimBLE to separate project

2018-02-27 Thread Christopher Collins
I agree with others that the best option is 2a
(@apache-mynewt-core/net/nimble* become empty packages that pull in the
external nimble packages).  However, newt doesn't currently support repo
dependencies; if a repo is not specified in the user's `project.yml`
file, then newt won't download it.

I've been looking at repo dependencies in newt, and it appears to be a
pretty complex feature.  I don't think we can expect to have this
working for 1.4.0, but maybe there are some compromises we can make to
implement a simplified model that solves the nimble case.  I'll explain
why I think this is complicated to solve below.  If you aren't
interested in newt particulars, feel free to stop reading now :).

Here is how apache-mynewt-core's `repository.yml` file might look after
the nimble and nffs dependencies are added (I've made some syntax changes
for readability):

repository.deps:
    mynewt-nimble:
        type: git
        url: 'g...@github.com:apache/mynewt-nimble.git'
        vers:
            master: 0-dev
            mynewt_1_4_0_tag: >=1.4.0

    mynewt-nffs:
        type: git
        url: 'g...@github.com:apache/mynewt-nffs.git'
        vers:
            master: 0-dev
            mynewt_1_4_0_tag: 1.4.0

What this says is:
* The master branch of apache-mynewt-core depends on:
    o nimble 0-dev
    o nffs 0-dev
* The mynewt_1_4_0_tag tag of apache-mynewt-core depends on:
    o nimble 1.4.0 or later
    o nffs 1.4.0 only

The key points are: 
* A repo's dependency tree may differ with version (branch).  
* A dependency can specify a range of acceptable versions of a depended-on repo (e.g., `>=`).

I think this is a good design, and I am not suggesting changing it.
Some aspects of this are just quite complicated.  In particular, if the
user wants to upgrade some repos, calculating the versions to upgrade to
is a bit tricky.  Newt solves the similar problem of determining which
packages to include during a build operation, but there is a fundamental
difference: for packages, there is only a single dependency graph.  For
repos, on the other hand, there are potentially many separate graphs,
each of which needs to be evaluated to detect conflicts.  Furthermore,
each graph may differ quite substantially, as some secondary dependency
may add a tertiary dependency for certain versions (and so on and so
on).  The number of potential graphs is quite large.  Consider a project
with 10 repos, each of which specifies 10 versions in its
`repository.yml` file.  Ignoring some details, there are potentially
10^10 (10 billion) graphs that would need to be evaluated to find the
one combination that doesn't have any conflicts.  Furthermore, producing
a helpful error message when there is no suitable combination of repo
versions is quite a challenge in itself.

Linux distributions that use a rolling release model solve this problem,
so it certainly is possible to implement.  However, it is going to
require a lot of tricky code.

Chris

On Tue, Feb 27, 2018 at 09:44:37AM +0100, Szymon Janc wrote:
> Hi,
> 
> 1. I'm fine with doing it either before or after 1.4. We just need to
> make sure update works correctly for 1.4 if we do it before.
> 2. I'd go with 2a as Will,  no need to keep duplicated code around.
> 3. Agree with Will,  lets start with 1.0 NimBLE release. Only the
> first release needs to be coordinated with -core anyway.
> 
> 
> On 24 February 2018 at 01:07, will sanfilippo  wrote:
> > My thoughts:
> >
> > 1. Since the 1.4 release is coming up very quickly I would do this for 1.5 
> > (personally).
> > 2. I would choose 2a.
> > 3. Seems a bit confusing to me to label nimBLE releases with same number as 
> > Mynewt releases. Why not just make the first stable release of nimBLE 1.0? 
> > Not a big deal either way but since they are going to diverge eventually.
> >
> > Will
> >> On Feb 22, 2018, at 1:01 AM, Andrzej Kaczmarek 
> >>  wrote:
> >>
> >> Hi all,
> >>
> >> As some of you may already noticed, there is apache/mynewt-nimble
> >> repository created where NimBLE code was pushed along with some extra
> >> changes, most notably initial attempt to create port of NimBLE for
> >> FreeRTOS, but other platforms will be supported as well (native Linux port
> >> is also prepared).
> >>
> >> The problem is that this repo is now not synced with apache/mynewt-core and
> >> having two repositories with the same code is troublesome so we'd like to
> >> end development of NimBLE code in core repository and move it entirely to
> >> nimble repository. There are three open points on how this should be done:
> >> 1. When to do this switch? Before 1.4 release or after it?
> >> 2. How to deal with NimBLE in core repository?
> >> 3. How to manage NimBLE releases?
> >>
> >> My proposals are as follows:
> >>
> >> 2a. Remove NimBLE code from mynewt-core repository leaving only packages
> >> with 

Re: Define __MYNEWT__ symbol for Mynewt build

2018-02-09 Thread Christopher Collins
Hi Andrzej,

The newt tool already adds the `-DMYNEWT=1` compiler flag during builds.
Mynewt and newt are not part of the C implementation, so in my opinion
they should not introduce identifiers in the reserved namespace (leading
underscore).  I know opinions differ on this point, and I could probably
be convinced otherwise :).

Chris

On Fri, Feb 09, 2018 at 04:03:24PM +0100, Andrzej Kaczmarek wrote:
> Hi all,
> 
> I created a PR which modifies all compiler packages by adding
> -D__MYNEWT__=1 to compiler.flags.base, see here:
> https://github.com/apache/mynewt-core/pull/803
> 
> The reason for this is to have some common symbol which can be used to
> distinguish between building code for Mynewt or for some other platform -
> similar symbols are defined for other platforms. This will be especially
> useful for components which have separate repository, e.g. mynewt-nffs or
> upcoming mynewt-nimble which can be also ported to other platforms so may
> use different set of headers or some porting layer.
> 
> Best regards,
> Andrzej


Re: problem running the bare ble example on nrf52480

2018-01-04 Thread Christopher Collins
On Thu, Jan 04, 2018 at 08:02:20PM -0500, Abderrezak Mekkaoui wrote:
> Hi Chris
> 
> The result is as follows:
> 
> Program received signal SIGTRAP, Trace/breakpoint trap.
> __assert_func (file=file@entry=0x0, line=line@entry=0, 
> func=func@entry=0x0, e=e@entry=0x0) at 
> repos/apache-mynewt-core/kernel/os/src/arch/cortex_m4/os_fault.c:137
> 137    asm("bkpt");
> (gdb) bt
> #0  __assert_func (file=file@entry=0x0, line=line@entry=0, 
> func=func@entry=0x0, e=e@entry=0x0) at 
> repos/apache-mynewt-core/kernel/os/src/arch/cortex_m4/os_fault.c:137
> #1  0x00016ae0 in ble_hs_event_start (ev=) at 
> repos/apache-mynewt-core/net/nimble/host/src/ble_hs.c:474
> #2  0x00016b0c in ble_hs_sync () at 
> repos/apache-mynewt-core/net/nimble/host/src/ble_hs.c:316
> #3  0x00016cfc in ble_hs_start () at 
> repos/apache-mynewt-core/net/nimble/host/src/ble_hs.c:560
> #4  0x00016d16 in ble_hs_event_start (ev=) at 
> repos/apache-mynewt-core/net/nimble/host/src/ble_hs.c:473
> #5  0xc322 in main (argc=, argv=) at 
> apps/ble_app/src/main.c:36

Darn... I see what the problem is.  This bug was fixed after the
1.3 release: https://github.com/apache/mynewt-core/pull/704.  I didn't
realize the BLE tutorial was broken, though!

The easiest fix is probably to add a store package to the app.  You can
do this by adding the following dependency to your app's pkg.yml file:

- "@apache-mynewt-core/net/nimble/host/store/config"

Thanks for the heads up!  I will submit a fix for the BLE tutorial to
include this change.

Chris


Re: newt load hang

2017-12-06 Thread Christopher Collins
Hi Mostafa,

On Wed, Dec 06, 2017 at 10:45:16AM -0500, Mostafa Abdulla Uddin wrote:
> Hello,
> 
> I have installed mynewt 1.2.0 on Mac OS 10.11.6.
> I am using Nordic Board
> nRF52 Development Kit (for nRF52832)
> 
> I am trying to load the blinky project with the shell enabled.
> Unfortunately, my load command paused.  Note that I have erased the
> previous image using Jlinkexe.  I see it paused at the following point

Could you please run the `newt load` command again, but also specify the
`-v` and `-ldebug` command line switches?  Then include the output in a
follow up email.

Thanks,
Chris


Re: [VOTE] Release Apache Mynewt 1.3.0-rc1

2017-12-01 Thread Christopher Collins
On Thu, Nov 30, 2017 at 10:10:00PM -0200, Fabio Utzig wrote:
[...]
> [X] +1 Release this package
> [ ]  0 I don't feel strongly about it, but don't object
> [ ] -1 Do not release this package because...

+1 (binding)

Chris


Re: Cannot build lora_app_shell

2017-11-29 Thread Christopher Collins
On Wed, Nov 29, 2017 at 07:43:20AM -0800, will sanfilippo wrote:
> I doubt it was ever tested with no sx1276 actually connected. Where is it 
> crashing? What function is at 0x81bc?

> 
> > On Nov 29, 2017, at 6:20 AM, K Dmitry  wrote:
> > 
> > Thanks! That helped. I had to define few more pins and was able to build 
> > app. Now I'm trying to test it without SX1276 actually connected, but looks 
> > like app crashes:
> > 
> > 00 ICSR:0x00421002
> > 00 Assert @ 0xfb63
> > 00 Unhandled interrupt (2), exception sp 0x200013c0
> > 00  r0:0x  r1:0x00016289  r2:0x8000  r3:0xe000ed00
> > 00  r4:0xfb63  r5:0x0008  r6:0x  r7:0x2000268c
> > 00  r8:0x  r9:0x r10:0x r11:0x
> > 00 r12:0x  lr:0x8dcf  pc:0x81bc psr:0x81000200
> > 
> > Is it expected behavior when no SX1276 is present?

There are a few settings you can enable to help narrow down the cause of
the crash:

BASELIBC_ASSERT_FILE_LINE
SYSINIT_PANIC_FILE_LINE
SYSINIT_PANIC_MESSAGE

With these enabled, more information will get printed to the console at
the time of the crash, including the filename and line number where the
crash occurred.  They are disabled by default to reduce code size.

You can enable these settings as follows:

newt target amend <target> syscfg='BASELIBC_ASSERT_FILE_LINE=1:SYSINIT_PANIC_FILE_LINE=1:SYSINIT_PANIC_MESSAGE=1'

I would enable these settings, and then reproduce the crash.

To later disable these settings, you can use the following command:

newt target amend -d <target> syscfg='BASELIBC_ASSERT_FILE_LINE:SYSINIT_PANIC_FILE_LINE:SYSINIT_PANIC_MESSAGE'

or just manually remove them from your
`targets/<target>/syscfg.yml` file.

Chris


Re: Inconsistent HAL GPIO semantics

2017-11-13 Thread Christopher Collins
On Mon, Nov 13, 2017 at 04:32:58PM -0800, will sanfilippo wrote:
> Chris:
> 
> Personally, I think there should be separate API as it is more flexible and 
> the API names more accurately describe what the API is doing.
> 
> I do realize that this is more work and given that there currently is no API 
> to clear a pending interrupt, I suspect that everyone who used the enable API 
> expected the interrupt to be cleared prior to enabling.
> 
> Another possible solution (and yes, I suspect folks might think this crazy):
> 
> * Rename the API to something like: hal_gpio_irq_clear_and_enable()
> * Make all implementations consistent and use this API.
> 
> This way we could add the separate API over time and the code will work as 
> expected. Yeah, I know, crazy thought :-)

I don't think that is crazy, but I think it might be a bit disruptive
for some users.  Any code that calls `hal_gpio_irq_enable()` will fail
to build after the rename.  I assume that is the point: make sure things
break loudly if they are going to break.  At the risk of sounding lazy,
that seems like it could be a lot of work for everyone :).  Especially
if we plan on ultimately deprecating `hal_gpio_irq_clear_and_enable()`.

Here is an alternative plan for introducing the API change:

v1.3:
- Don't change any implementations of `hal_gpio_irq_enable()`
- Add the `hal_gpio_set/clear` to the API
- Notify users that the behavior of `hal_gpio_irq_enable()` will
  be changing in the future (for some MCUs).
- Modify code in apache-mynewt-core such that it doesn't assume
  that `hal_gpio_irq_enable()` clears the pending interrupt.
  That is, explicitly call the new `clear` function prior to
  enabling the interrupt.

v1.x [not sure if this is v1.4 or a later release]:
- Modify implementations of `hal_gpio_irq_enable()` such that
  they only enable the interrupt (don't clear it).

Chris

