Re: What are the best config settings to reduce ram usage, so that an app runs in 16 kb with least impact on functionality?

2017-06-18 Thread will sanfilippo
Yes, the code chains mbufs together so that if you have data length extension 
enabled you will be able to send larger PDUs (and receive them as well).

Note: there is a limit on how small the mbufs can be with nimble, as the 
controller code assumes contiguous space for certain types of packets 
(advertisements, LL control packets, etc.). There is also overhead for the mbuf 
and packet headers, so I am not 100% sure what the smallest mbuf size currently is. 
100 bytes is a decent number to use. You might be able to go a bit smaller, but 
you would not gain all that much.

Thinking about this, it might be a good idea for the controller code to check at 
build time that the mbufs are large enough and use #error so that the build 
fails if the mbufs are too small.
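
A minimal sketch of such a guard, assuming a hypothetical 80-byte floor (the
real minimum would depend on the mbuf and packet header overhead of the running
configuration):

    #include "syscfg/syscfg.h"

    /* Illustrative only: fail the build if msys blocks are smaller than
     * the controller's assumed minimum contiguous packet size. */
    #if MYNEWT_VAL(MSYS_1_BLOCK_SIZE) < 80
    #error "MSYS_1_BLOCK_SIZE too small for the nimble controller"
    #endif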


> On Jun 18, 2017, at 2:50 AM, Khaled Elsayed <kelsa...@gmail.com> wrote:
> 
> Hi
> 
> If Nimble with  LE Data Packet Length Extension feature enabled, would mbuf
> block size of 100 still work? Could a PDU occupy more than one mbuf (e.g.
> two consecutive blocks)?
> 
> Best,
> 
> Khaled
> 
> On Fri, Jun 16, 2017 at 9:08 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Alfred:
>> 
>> Another thing with the nimble stack. The mbuf block size of 292, which
>> Chris shows reduced to 200, is much larger than it needs to be. I am not
>> sure what the smallest size actually is, but you can set it to 100 bytes
>> and it should work. This can get you more buffers and/or reduce your memory
>> usage quite a bit. The number of buffers you actually need is quite
>> application dependent and generally requires some experimentation,
>> unfortunately.
>> 
>> And thanks for the kind words regarding the stats, logging and image
>> update functionality. We appreciate it!
>> 
>>> On Jun 16, 2017, at 10:39 AM, Christopher Collins <ch...@runtime.io>
>> wrote:
>>> 
>>> Hi Alfred,
>>> 
>>> On Fri, Jun 16, 2017 at 07:02:47PM +0200, Alfred Schilken wrote:
>>>> Hello all, I’ve been experimenting with mynewt for several weeks now and I’m
>>>> especially impressed by the statistics, logging and image update
>>>> functionality.
>>>> I’m having problems with getting all the statistics using newtmgr via
>>>> BLE.
>>>> 
>>>> I took bleprph as a base, added ADC and enabled several stats, logs and
>>>> so on.
>>>> To reduce flash size I had to switch off debug and reduce the logging.
>>>> 
>>>> When I started the program I got an Assert-reboot-loop.
>>>> The target board is a bbc microbit.
>>>> 
>>>> This seems to be cause of the error:
>>>> ble_att_svr_entry_mem = malloc(
>>>>     OS_MEMPOOL_BYTES(ble_hs_max_attrs, sizeof (struct ble_att_svr_entry)));
>>>> if (ble_att_svr_entry_mem == NULL) {
>>>>     rc = BLE_HS_ENOMEM;
>>>>     goto err;
>>>> }
>>> 
>>> Fitting BLE and newtmgr into 16kB requires some creativity :).  There
>>> are certainly some memory and mbuf optimizations that could be made, we
>>> just need to go through the exercise of scouring the code.
>>> 
>>>> I tweaked these four config values to fix the crash-loop.
>>>> BLE_ACL_BUF_COUNT: 4   # was 4
>>>> BLE_ACL_BUF_SIZE: 128  # was 255
>>>> MSYS_1_BLOCK_COUNT: 4  # was 12
>>>> MSYS_1_BLOCK_SIZE: 292 # was 292
>>>> 
>>>> The program boots now, I can see and connect to the BLE services and
>>>> all this is fine.
>>>> 
>>>> If I use newtmgr via ble transport to read the statistics, some of them
>>>> are responding, others return just Error: 2.
>>>> mpstat and taskstat also don’t work.
>>>> But „image list“ is working, I could even upload a new image.
>>>> 
>>>> 
>>>> My question is:
>>>> What are the best config settings to reduce ram usage, so that an app
>>>> runs in 16 kb with least impact on functionality?
>>> 
>>> I was able to get mpstats working on a 16kB device with the following
>>> mbuf settings:
>>> 
>>>   MSYS_1_BLOCK_COUNT: 11
>>>   MSYS_1_BLOCK_SIZE: 200
>>> 
>>>> Some hints to reduce flash size would also be appreciated :-)
>>> 
>>> You might want to look into the split image functionality:
>>> https://mynewt.apache.org/latest/os/modules/split/split.  This allows
>>> you to dedicate more flash space to your application code.  I don't think
>>> this will help with your immediate issues, however, since BLE and
>>> newtmgr would both need to go into the loader image.
>>> 
>>> Chris
>> 
>> 



Re: What are the best config settings to reduce ram usage, so that an app runs in 16 kb with least impact on functionality?

2017-06-16 Thread will sanfilippo
Alfred:

Another thing with the nimble stack. The mbuf block size of 292, which Chris 
shows reduced to 200, is much larger than it needs to be. I am not sure what 
the smallest size actually is, but you can set it to 100 bytes and it should 
work. This can get you more buffers and/or reduce your memory usage quite a 
bit. The number of buffers you actually need is quite application dependent and 
generally requires some experimentation, unfortunately.
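
For example, in an app or target syscfg.yml (illustrative numbers; the right
block count is application dependent):

    syscfg.vals:
        MSYS_1_BLOCK_COUNT: 12
        MSYS_1_BLOCK_SIZE: 100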

And thanks for the kind words regarding the stats, logging and image update 
functionality. We appreciate it!

> On Jun 16, 2017, at 10:39 AM, Christopher Collins  wrote:
> 
> Hi Alfred,
> 
> On Fri, Jun 16, 2017 at 07:02:47PM +0200, Alfred Schilken wrote:
>> Hello all, I’ve been experimenting with mynewt for several weeks now and I’m 
>> especially impressed by the statistics, logging and image update 
>> functionality.
>> I’m having problems with getting all the statistics using newtmgr via BLE.
>> 
>> I took bleprph as a base, added ADC and enabled several stats, logs and so on.
>> To reduce flash size I had to switch off debug and reduce the logging.
>> 
>> When I started the program I got an Assert-reboot-loop.
>> The target board is a bbc microbit.
>> 
>> This seems to be cause of the error:
>> ble_att_svr_entry_mem = malloc(
>>     OS_MEMPOOL_BYTES(ble_hs_max_attrs, sizeof (struct ble_att_svr_entry)));
>> if (ble_att_svr_entry_mem == NULL) {
>>     rc = BLE_HS_ENOMEM;
>>     goto err;
>> }
> 
> Fitting BLE and newtmgr into 16kB requires some creativity :).  There
> are certainly some memory and mbuf optimizations that could be made, we
> just need to go through the exercise of scouring the code.
> 
>> I tweaked these four config values to fix the crash-loop.
>> BLE_ACL_BUF_COUNT: 4   # was 4
>> BLE_ACL_BUF_SIZE: 128  # was 255
>> MSYS_1_BLOCK_COUNT: 4  # was 12
>> MSYS_1_BLOCK_SIZE: 292 # was 292
>> 
>> The program boots now, I can see and connect to the BLE services and all 
>> this is fine.
>> 
>> If I use newtmgr via ble transport to read the statistics, some of them are 
>> responding, others return just Error: 2.
>> mpstat and taskstat also don’t work.
>> But „image list“ is working, I could even upload a new image.
>> 
>> 
>> My question is:
>> What are the best config settings to reduce ram usage, so that an app runs 
>> in 16 kb with least impact on functionality? 
> 
> I was able to get mpstats working on a 16kB device with the following
> mbuf settings:
> 
>MSYS_1_BLOCK_COUNT: 11
>MSYS_1_BLOCK_SIZE: 200
> 
>> Some hints to reduce flash size would also be appreciated :-) 
> 
> You might want to look into the split image functionality:
> https://mynewt.apache.org/latest/os/modules/split/split.  This allows
> you to dedicate more flash space to your application code.  I don't think
> this will help with your immediate issues, however, since BLE and
> newtmgr would both need to go into the loader image.
> 
> Chris



Re: Conditional compilation based on BSP name

2017-06-14 Thread will sanfilippo
Ugo:

Just as a preface, BSPs are intended to be examples of how one could create a 
BSP for their own system. It was assumed/expected that folks would take these 
BSPs and modify them for their own use. Well, at least I thought so anyway. 
Another decision we made in the BSPs, one that we have changed over time and 
that is still not terribly consistent, is how GPIOs are defined. We moved 
away from syscfg variables, so you cannot easily override them in your 
target; you need to modify the code in the BSP itself. That may not have been 
the wisest choice but "it is what it is" as we say here in the states. Of 
course, you can modify the code if you desire. Now, on to your issues:

Issue number 1:

Funny you mention this. I always thought that the way we did this is a bit odd 
regarding the 0/1 for turning the LED on/off. Depending on how the LED is 
wired, setting the GPIO to 0 or 1 will do the opposite thing. It would probably 
be better to have done something like this:

LED_BLINK_OFF   (0)
LED_BLINK_ON    (1)

These states would then be defined for each BSP and it would do the right thing 
based on the particular setup that you have.
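
A minimal sketch of that idea, assuming a board whose LED happens to be wired
active-low (the polarity here is illustrative, not taken from any real BSP):

    /* In the BSP header; the values depend on how the LED is wired. */
    #define LED_BLINK_OFF   (1)     /* active-low LED: 1 turns it off */
    #define LED_BLINK_ON    (0)

    /* Application code then works unchanged across BSPs. */
    hal_gpio_init_out(LED_BLINK_PIN, LED_BLINK_OFF);
    hal_gpio_write(LED_BLINK_PIN, LED_BLINK_ON);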

Issue number 2:
There are different ways to do this but you could simply modify your BSPs and 
set up the pins you want in the BSP. I do understand that you would then have a 
different BSP than the one in the repo. You could also do something with a 
conditional directive, or you could create syscfg variables, or define some of 
this in the target. I realize that I am not being very explicit; I just do not 
know the best way to accomplish what you want.

Hope this helps and I am not confusing things further :-)


> On Jun 14, 2017, at 11:02 AM, Ugo Mastracchio  wrote:
> 
> Thank you so much Chris and Will for your notes.
> 
> Let me specify my case.
> There are actually 2 different scenarios that I need to address.
> 
> BACKGROUND: I have developed (or should I say Will has and I have
> subsequently hacked it) an app that runs on 2 NRF52-based BSPs, namely
> NRF52DK and RBN2 and that exhibits same behaviour on  both.
> 
> ISSUE  ǸUMBER 1 
> 
> Both boards have one LED on the board and both BSPs define LED_BLINK_PIN.
> 
> But the issue is that the following code would turn the LED on on the NRF52DK
> and turn it off on the RBN2!
> 
> g_led_pin = LED_BLINK_PIN;
> hal_gpio_init_out(g_led_pin, 0);
> 
> 
> ISSUE NUMBER 2 
> 
> For practical wiring reasons I want to enable different GPIO input pins
> on the 2 BSPs.
> 
> Looking forward to your comments.
> 
> Ciao
> Ugo
> 
> 
> On Wed, Jun 14, 2017, at 07:12 PM, Christopher Collins wrote:
>> On Wed, Jun 14, 2017 at 04:22:52PM +0200, Ugo Mastracchio wrote:
>>> Hello everyone, may I throw an absolute beginner's question ? 
>>> 
>>> How do I conditionally compile based on the BSP the target is
>>> associated with? I want to use different GPIO pins depending on the
>>> board
>>> 
>>> Is there a system configuration setting valued with the BSP name ?
>> 
>> After writing my previous response, I am thinking I may have
>> misunderstood the question.  Generally, the PIN mappings are defined in
>> the BSP package itself, so there should be no need to remap pins based
>> on the BSP being used.  Are you perhaps trying to use the same BSP
>> package for two slightly different boards?
>> 
>> If this is what you want to do, you may want to take a look at how the
>> arduino_zero BSP handles this.  The 1) arduino zero and 2) arduino zero
>> pro hardware is very similar.  I believe the only differences are a few
>> GPIO mappings.  Rather than create a separate BSP for each board, the
>> arduino BSP package code uses conditional compilation.
>> 
>> Within the arduino repo
>> (https://github.com/runtimeco/mynewt_arduino_zero), the arduino_zero BSP
>> defines these settings:
>> 
>>BSP_ARDUINO_ZERO:
>>description: 'TBD'
>>value: 0
>>restrictions:
>>- "!BSP_ARDUINO_ZERO_PRO"
>> 
>>BSP_ARDUINO_ZERO_PRO:
>>description: 'TBD'
>>value: 0
>>restrictions:
>>- "!BSP_ARDUINO_ZERO"
>> 
>> Then, in hw/bsp/arduino_zero/include/bsp/bsp.h, pins are mapped as
>> follows:
>> 
>>#if MYNEWT_VAL(BSP_ARDUINO_ZERO_PRO)
>> ARDUINO_ZERO_D2 = (8),
>> ARDUINO_ZERO_D4 = (14),
>>#endif
>> 
>>#if MYNEWT_VAL(BSP_ARDUINO_ZERO)
>> ARDUINO_ZERO_D2 = (14),
>> ARDUINO_ZERO_D4 = (8),
>>#endif
>> 
>> It is up to the target package to define one (and only one) of
>> BSP_ARDUINO_ZERO_PRO or BSP_ARDUINO_ZERO.
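>> 
>> For example, a target's syscfg.yml might then set (illustrative):
>> 
>>    syscfg.vals:
>>        BSP_ARDUINO_ZERO_PRO: 1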
>> 
>> This approach is nice because it eliminates the need for a lot of
>> duplicate code in a second BSP package.  One hassle involved is the
>> necessity to define the appropriate syscfg setting in the target
>> package.
>> 
>> Chris
> 
> 
> -- 
> Ugo Mastracchio, 
> mastr...@fastmail.co.uk 
> Telefono: +39
> 

Re: Conditional compilation based on BSP name

2017-06-14 Thread will sanfilippo
Ugo:

I believe that there is a -DBSP_NAME define passed to everything. Here is an 
excerpt of debug output from a build. You can see BSP_NAME=nrf52dk:

2017/06/14 09:07:57.635 [DEBUG] arm-none-eabi-gcc -DADC_ENABLED=0 
-DAPP_NAME=bletest -DAPP_bletest -DARCH_NAME=cortex_m4 -DARCH_cortex_m4 
-DBLETEST -DBLETEST_CONCURRENT_CONN_TEST=1 -DBSP_NAME=nrf52dk -DBSP_nrf52dk 
-DCLOCK_ENABLED=1 -DCOMP_ENABLED=1 -DEGU_ENABLED=0 -DGPIOTE_ENABLED=1 
-DI2S_ENABLED=1 -DLPCOMP_ENABLED=1 -DNRF52 -DPDM_ENABLED=0 
-DPERIPHERAL_RESOURCE_SHARING_ENABLED=1 -DPWM0_ENABLED=1 -DPWM1_ENABLED=0 
-DPWM2_ENABLED=0 -DQDEC_ENABLED=1 -DRNG_ENABLED=1 -DRTC0_ENABLED=0 
-DRTC1_ENABLED=0 -DRTC2_ENABLED=0 -DSAADC_ENABLED=1 -DSPI0_CONFIG_MISO_PIN=25 
-DSPI0_CONFIG_MOSI_PIN=24 -DSPI0_CONFIG_SCK_PIN=23 -DSPI0_ENABLED=1 
-DSPI0_USE_EASY_DMA=1 -DSPI1_ENABLED=0 -DSPI2_ENABLED=0 
-DSPIS0_CONFIG_MISO_PIN=25 -DSPIS0_CONFIG_MOSI_PIN=24

That should do the trick.
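
A minimal sketch of using those defines, assuming the build shown above (the
pin numbers are made up for illustration):

    /* BSP_nrf52dk comes from the build (-DBSP_nrf52dk). */
    #ifdef BSP_nrf52dk
    #define MY_INPUT_PIN    (11)
    #else
    #define MY_INPUT_PIN    (3)
    #endif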

> On Jun 14, 2017, at 7:22 AM, Ugo Mastracchio  wrote:
> 
> Hello everyone, may I throw an absolute beginner's question ? 
> 
> How do I conditionally compile based on the BSP the target is
> associated with? I want to use different GPIO pins depending on the board
> 
> Is there a system configuration setting valued with the BSP name ?
> 
> Regards
> Ugo
> 
> Ugo Mastracchio,
> mastr...@fastmail.co.uk
> 
> 



Re: [RFC] experimental features and APIs

2017-06-13 Thread will sanfilippo
Not sure that I sent a reply about this but I meant to :-)

Seems fine to me.

> On Jun 12, 2017, at 4:58 AM, Szymon Janc  wrote:
> 
> Hi,
> 
> I was wondering how we could add 'experimental' features or APIs to Mynewt.
> My rationale is that sometimes we may want to include a feature in the
> master branch for broader exposure but leave room for API tweaks.
> 
> My rough proposal would be to add an "Experimental" flag that
> experimental features could depend on. This would warn the user that the API
> used may change in an incompatible way before it is stabilized. This flag
> would have to be explicitly set by the user in syscfg.yml.
> 
> Thoughts?
> 
> -- 
> pozdrawiam
> Szymon K. Janc



Re: MYNEWT-490

2017-06-12 Thread will sanfilippo
Seems fine to me.

> On Jun 12, 2017, at 4:01 PM, marko kiiskila  wrote:
> 
> Hi,
> 
> part of the ticket includes renaming os_tick_idle() and os_tick_init() to
> hal_os_tick_idle() and hal_os_tick_init(), respectively.
> 
> This is a name change in the API, but this contract is between kernel/os and
> the respective hw/mcu/* implementations. Therefore, I was going to make this
> change without maintaining the old names.
> 
> I assume everyone is ok with this.



Re: BLE connection timeout

2017-06-09 Thread will sanfilippo
Jan:

I am not 100% sure but we have seen issues with certain phones due to how the 
nimble stack controller starts some LL control procedures. This has been 
addressed in a development branch, but I do not think it has been merged into 
the master branch yet (it will be in the upcoming release).

I could go into more details if you like but all you probably want is a fix. 
There are two things you could try:

1) This might be a temporary work-around: disable data length extension. You 
can do this by setting the following syscfg variable to 0: 
BLE_LL_CFG_FEAT_DATA_LEN_EXT. A simple way to do this is to go in and hack the 
syscfg.yml in net/nimble/controller/ and set the value to 0. If that works you 
can set that in your target.
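
For example, in the target's syscfg.yml (illustrative placement):

    syscfg.vals:
        BLE_LL_CFG_FEAT_DATA_LEN_EXT: 0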

- or -

2) Get the bluetooth5 branch. I think this has the fixes in it.

Let us know how it goes.

> On Jun 9, 2017, at 12:03 AM, Jan Becker  wrote:
> 
> Hey,
> 
> I've just been getting started with mynewt and built a small program
> that is very similar to the bleprph example provided by you.
> The board is a RedBear Blend V2 (based on the nRF52832 chipset) and I
> have the latest development of mynewt-core installed.
> However, it seems like the connection to my xperia z5 running Android
> 7.0 always times out after 40 seconds with reason 34
> (BLE_ERR_LMP_LL_RSP_TMO).
> Here is the log:
> 
> ...
> 011403 [ts=89085932ssb, mod=4 level=0] Disconnection Complete: status=0
> handle=1 reason=34
> 011405 [ts=89101556ssb, mod=64 level=1] connection updated; status=546
> handle=1 our_ota_addr_type=0 our_ota_addr=c0:fa:ac:cf:fa:0a
> our_id_addr_type=0 our_id_addr=c0:fa:ac:cf:fa:0a peer_ota_addr_type=1
> peer_ota_addr=72:0f:3c:e1:f5:75 peer_id_addr_type=1
> peer_id_addr=72:0f:3c:e1:f5:75 conn_itvl=39 conn_latency=0
> supervision_timeout=2000 encrypted=0 authenticated=0 bonded=0
> 011414 [ts=89171864ssb, mod=64 level=1]
> 011415 [ts=89179676ssb, mod=64 level=1] subscribe event; conn_handle=1
> attr_handle=14 reason=2 prevn=0 curn=0 previ=1 curi=0
> 011418 [ts=89203112ssb, mod=64 level=1] subscribe event; conn_handle=1
> attr_handle=18 reason=2 prevn=1 curn=0 previ=0 curi=0
> 011422 [ts=89234360ssb, mod=64 level=1] disconnect; reason=546 handle=1
> our_ota_addr_type=0 our_ota_addr=c0:fa:ac:cf:fa:0a our_id_addr_type=0
> our_id_addr=c0:fa:ac:cf:fa:0a peer_ota_addr_type=1
> peer_ota_addr=72:0f:3c:e1:f5:75 peer_id_addr_type=1
> peer_id_addr=72:0f:3c:e1:f5:75 conn_itvl=39 conn_latency=0
> supervision_timeout=2000 encrypted=0 authenticated=0 bonded=0
> 011431 [ts=89304668ssb, mod=64 level=1]
> 011432 [ts=89312480ssb, mod=4 level=0] ble_hs_hci_cmd_send: ogf=0x08
> ocf=0x0007 len=0
> 011434 [ts=89328104ssb, mod=4 level=0] 0x07 0x20 0x00
> 011435 [ts=89335916ssb, mod=4 level=0] Command complete: cmd_pkts=1
> ogf=0x8 ocf=0x7 status=0
> 011438 [ts=89359352ssb, mod=4 level=0] ble_hs_hci_cmd_send: ogf=0x08
> ocf=0x0008 len=32
> 011440 [ts=89374976ssb, mod=4 level=0] 0x08 0x20 0x20 0x12 0x02 0x01
> 0x06 0x0b 0x09 0x53 0x70 0x65 0x65 0x64 0x54 0x72 0x61 0x63 0x6b 0x02
> 0x0a 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
> 011445 [ts=89414036ssb, mod=4 level=0] Command complete: cmd_pkts=1
> ogf=0x8 ocf=0x8 status=0
> 011448 [ts=89437472ssb, mod=4 level=0] ble_hs_hci_cmd_send: ogf=0x08
> ocf=0x0009 len=32
> 011450 [ts=89453096ssb, mod=4 level=0] 0x09 0x20 0x20 0x12 0x11 0x07
> 0x23 0x7e 0xe9 0xa8 0xf5 0xcd 0xe0 0xfc 0x8a 0x84 0x4d 0xdf 0xf7 0x3c
> 0x65 0xb3 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00 0x00
> ...
> 
> Then I tried the same with my other test device, a wileyfox swift 2, and
> everything was fine.
> At first it seems like there is something wrong with my phone, but I
> never had such issues when running Nordic's Softdevice on the board.
> So, what does the Softdevice do that nimble does not (or does wrong)?
> Unfortunately, I don't have the resources or knowledge to further
> investigate this.
> 
> What do you think might be the issue here?
> 
> Thanks,
> Jan



Re: FCC Pre-scan

2017-06-08 Thread will sanfilippo
Jitesh:

Sorry, I probably did not explain myself all that well.

1) The ble_ll_reset() call was mentioned in case you wanted to do something 
after the FCC test. This way you could be sure that the radio registers are all 
initialized properly after your special FCC test code messes with them (if 
indeed it does).

2) You should never have to issue ble_phy_disable() yourself. I am not sure how 
you are going to execute your FCC test code but certainly you could call 
ble_ll_reset() and once that happened (and you did not tell the controller to 
do anything else) you would have control of the radio. If your app has stopped 
advertising, scanning, initiating and has no connections, the radio will be 
free and you can mess with it to your hearts content. You would not even have 
to issue ble_ll_reset() in that case. Like I said, there are a number of 
different ways to do this… it all depends on the rest of your code and where 
you intend to call the function that will perform your FCC test code.
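
A rough sketch of that sequence (fcc_tx_test() is a hypothetical placeholder
for the vendor-specific test code; ble_ll_reset() is the controller call
mentioned above):

    /* With advertising, scanning, initiating and connections stopped,
     * the radio is free for the test code. */
    fcc_tx_test();      /* e.g. blast packets on a single channel */

    /* Re-initialize the link layer and radio registers before handing
     * the radio back to the nimble stack. */
    ble_ll_reset();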

> On Jun 8, 2017, at 5:27 PM, Jitesh Shah <jit...@liveathos.com> wrote:
> 
> Great!
> So then ble_ll_reset() followed by ble_phy_disable() should take care of
> most cases, right?
> 
> As far as giving back control to the nimBLE stack is concerned - that
> probably won't be necessary. FCC is a pretty controlled environment, so you
> could get away with manually resetting your device after every test.
> 
> If all goes well, would you guys be interested in patches/mechanisms that
> help people with the FCC test?
> 
> Jitesh
> 
> On Thu, Jun 8, 2017 at 4:40 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Question 2: Unless it got added recently and I did not catch it, currently
>> the nimble stack does not provide support for the FCC test.
>> 
>> Question 1: I guess it depends what you mean by take over control. The
>> call to ble_phy_disable() will certainly halt anything going on in the
>> radio. However, that is a pretty low-layer call and if the link layer is
>> doing something it could grab control again. As long as you are sure the
>> device is not advertising, scanning, or in a connection, the radio should
>> not be used and the link layer should not try to start using it.
>> 
>> There are a number of ways you could “hack” this into the code and I could
>> help with more information if you need it. And btw: if you do take control
>> of the radio and then want to return it to the nimble stack you should
>> perform a link-layer reset of the controller. The reason is that
>> there are some radio registers that are only written once, and if you change
>> them the device may no longer operate correctly.
>> 
>> Awesome that you are doing this! Let us know how it goes.
>> 
>> Will
>> 
>>> On Jun 8, 2017, at 3:23 PM, Jitesh Shah <jit...@liveathos.com> wrote:
>>> 
>>> Hey all,
>>> We'll be going through the FCC pre-scan soon with 1.0 version of the
>> nimBLE
>>> stack. yay!
>>> 
>>> Of all the things needed for pre-scan the two which are the most
>> important
>>> are:
>>> 1) Ability to blast packets on only one of the 40 channels (and the
>> ability
>>> to choose which channel)
>>> 2) Ability to shut off the radio
>>> (Others like transmit power, connection interval are trivial to achieve)
>>> 
>>> *Question 1: *When we were using the softdevice, we disabled the
>> softdevice
>>> and took over the radio. With nimBLE, is ble_phy_disable() enough to take
>>> over the radio control from nimBLE?
>>> 
>>> *Question 2: *Optionally, does nimBLE provide any support for the FCC
>> test?
>>> I scoured the source, but didn't find any. Have any of you guys been
>>> through the FCC pre-scan using nimBLE?
>>> 
>>> Thanks,
>>> Jitesh
>>> 
>>> --
>>> This email including attachments contains Mad Apparel, Inc. DBA Athos
>>> privileged, confidential, and proprietary information solely for the use
>>> for the addressed recipients. If you are not the intended recipient,
>> please
>>> be aware that any review, disclosure, copying, distribution, or use of
>> the
>>> contents of this message is strictly prohibited. If you have received
>> this
>>> in error, please delete it immediately and notify the sender. All rights
>>> reserved by Mad Apparel, Inc. 2012. The information contained herein is
>> the
>>> exclusive property of Mad Apparel, Inc. and should not be used,
>>> distributed, reproduced, or disclosed in whole or in part without prior
>>> written permission of M

Re: FCC Pre-scan

2017-06-08 Thread will sanfilippo
Question 2: Unless it got added recently and I did not catch it, currently the 
nimble stack does not provide support for the FCC test.

Question 1: I guess it depends what you mean by take over control. The call to 
ble_phy_disable() will certainly halt anything going on in the radio. However, 
that is a pretty low-layer call and if the link layer is doing something it 
could grab control again. As long as you are sure the device is not 
advertising, scanning, or in a connection, the radio should not be used and the 
link layer should not try to start using it.

There are a number of ways you could “hack” this into the code and I could help 
with more information if you need it. And btw: if you do take control of the 
radio and then want to return it to the nimble stack you should perform a 
link-layer reset of the controller. The reason is that there are some 
radio registers that are only written once, and if you change them the device 
may no longer operate correctly.

Awesome that you are doing this! Let us know how it goes.

Will

> On Jun 8, 2017, at 3:23 PM, Jitesh Shah  wrote:
> 
> Hey all,
> We'll be going through the FCC pre-scan soon with 1.0 version of the nimBLE
> stack. yay!
> 
> Of all the things needed for pre-scan the two which are the most important
> are:
> 1) Ability to blast packets on only one of the 40 channels (and the ability
> to choose which channel)
> 2) Ability to shut off the radio
> (Others like transmit power, connection interval are trivial to achieve)
> 
> *Question 1: *When we were using the softdevice, we disabled the softdevice
> and took over the radio. With nimBLE, is ble_phy_disable() enough to take
> over the radio control from nimBLE?
> 
> *Question 2: *Optionally, does nimBLE provide any support for the FCC test?
> I scoured the source, but didn't find any. Have any of you guys been
> through the FCC pre-scan using nimBLE?
> 
> Thanks,
> Jitesh
> 
> -- 
> This email including attachments contains Mad Apparel, Inc. DBA Athos 
> privileged, confidential, and proprietary information solely for the use 
> for the addressed recipients. If you are not the intended recipient, please 
> be aware that any review, disclosure, copying, distribution, or use of the 
> contents of this message is strictly prohibited. If you have received this 
> in error, please delete it immediately and notify the sender. All rights 
> reserved by Mad Apparel, Inc. 2012. The information contained herein is the 
> exclusive property of Mad Apparel, Inc. and should not be used, 
> distributed, reproduced, or disclosed in whole or in part without prior 
> written permission of Mad Apparel, Inc.



Re: #if directives and the MYNEWT_VAL(..) macro

2017-06-06 Thread will sanfilippo
This is not a bug, although some folks might want these things turned on by 
default. It might also be a bit confusing and hard to find in the code or 
documentation, but a lot of these peripheral config variables default to 0. 
You need to turn them on if you want to use a certain peripheral. There are 
two basic reasons for this: in some cases there are not enough pins to go 
around, so enabling everything simply won't work; the other reason is to 
create a smaller image by default.

I am not sure I understand the #if vs #ifdef thing. #if MYNEWT_VAL(X) should 
work just fine if you ask me.

Will 
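
As a minimal sketch of the pattern (the setting would be enabled in the app or
target syscfg.yml, e.g. I2C_0: 1):

    #include "syscfg/syscfg.h"   /* generated by newt */

    #if MYNEWT_VAL(I2C_0)
    /* hal_i2c code for instance 0 is compiled in only when enabled. */
    #endif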

> On Jun 6, 2017, at 4:59 AM, Fabio Utzig  wrote:
> 
> On Tue, Jun 6, 2017, at 08:01 AM, Sigve Sebastian Farstad wrote:
>> Hi all,
>> 
>> *Short version:* I2C didn't work, and I think I've narrowed it down to
>> a bug with #if directives and the MYNEWT_VAL(..) macro in the core
>> that potentially affects many boards.
>> 
>> *Long version:*
>> I started looking at mynewt for work, and I'm trying to use I2C with
>> an nRF52 (both nRF52dk and telee02 boards/bsps). It doesn't work
>> out-of-the-box as far as I can tell, so I did some snooping around in
>> the mynewt code base. Here's why it didn't work, and some suggestions
>> for how it could be fixed. Of course, I'm not very familiar with
>> mynewt, so I might be completely off-target.
>> 
>> First, here is the way it is set up today: In syscfg.yml, I2C_0 is
>> declared so that it can be used by the bsp (e.g. in hal_i2c.c):
>> [inline image 1]
>> (incubator-mynewt-core/hw/bsp/nrf52dk/syscfg.yml)
>> 
>> These declarations are magically converted into header defines by the
>> newt tool, which will end up looking something like this:
>> [inline image 2]
>> (bin/targets/TARGET_NAME/generated/include/syscfg/syscfg.h in a newt
>> project)
>> 
>> There is a convenient macro that is used to access these defines:
>> [inline image 1]
>> (bin/targets/TARGET_NAME/generated/include/syscfg/syscfg.h in a newt
>> project)
>> 
>> This macro is used in the bsp to include I2C if it is declared in
>> syscfg.yml:
>> [inline image 2]
>> (incubator-mynewt-core/hw/mcu/nordic/nrf52xxx/src/hal_i2c.c)
>> 
>> Herein lies the problem. MYNEWT_VAL_I2C_0 is defined to 0. The #if
>> directive evaluates truthiness, and the truthiness of the value 0 is
>> false. Hence, I2C code is not included even though it should be.
>> 
>> So, there are perhaps two bugs in here: 1) why is MYNEWT_VAL_I2C_0 0
>> and not '0', as it is defined in syscfg.yml? And 2) I2C code is not
>> included properly.
>> 
>> For 1), CfgEntry structs in the newt tool's source only support string
>> entries, so I imagine something gets lost when going from yml entries
>> to C literals.
>> 
>> One way of solving 2) is to use #ifdef instead of #if when checking if
>> a value is defined. However, this doesn't work with the MYNEWT_VAL(..)
>> macro, as far as I can tell, so one would need to use MYNEWT_VAL_I2C_0
>> directly instead.
>> 
>> Currently, I've worked around the issue by setting the I2C_0 value in
>> syscfg.yml to something truthy (e.g. true), but it seems to me that
>> the intention is for it to be e.g. 0 for I2C_0 and 1 for I2C_1, since
>> the code in hal_i2c.c seems to want to use these numbers as array
>> indexes as well.
>> 
>> Lastly, this probably affects more than just I2C, as the #if
>> MYNEWT_VAL(..) pattern is used many places in the core.
>> 
>> Thoughts?
> 
> I think the "value" in the syscfg.yml files usually means a flag to
> enable/disable something. I2C_0 with value 0 means it's disabled, 1
> means it is enabled. Also looking at
> "hw/mcu/nordic/nrf52xxx/src/hal_i2c.c", it seems to only use the value
> as an enable/disable flag, not as an indexing value.
> 
> Cheers,
> Fabio Utzig



Re: nrf52 uicr

2017-05-30 Thread will sanfilippo
Pierre:

Accessing the UICR is pretty simple. There is a structure defined in nrf52.h 
that can be used to read the customer variables. You just do this: 
NRF_UICR->CUSTOMER[0]

Note that Aditi has pointed out that we decided to use some of the customer 
registers by default. You can change this if you want, but that might be a bit 
of a pain when you upgrade. Currently, the first two locations in the customer 
space are used.

Note that there is no “generic” API to read them in ble_hw.c. You would just 
add your own code to the nrf-specific code to read what you want from them.
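
A minimal sketch, assuming the caveat above (CUSTOMER[0] and CUSTOMER[1] are
used for the public address by default, so index 2 is picked here purely for
illustration):

    #include "nrf52.h"

    /* Read a one-time setting from the UICR customer area. */
    uint32_t dev_id = NRF_UICR->CUSTOMER[2];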


> On May 30, 2017, at 4:08 AM, aditi hilbert  wrote:
> 
> Hi Pierre,
> 
> Yes, there is a “ble_hw_get_public_addr" function that does the following:
> 
> * If the user has overridden the default public address (the syscfg variable) 
> with a non-zero public address, that address will be returned by this 
> function.
> * If the default public address in the syscfg is all zero, the code will read 
> FICR and check if the device address type in the FICR is public. If so, it 
> means the nordic chip was factory programmed with a public address and this 
> will be used.
> * If both of the above checks fail, the code will read UICR[0] and UICR[1] to 
> see if a public address has been programmed into the UICR. We are doing this 
> to make it easy for folks to program their development kits with public 
> addresses so they do not have to hardcode them. UICR[0] will contain the 
> least significant 4 bytes of the device address. UICR[1] will contain the 
> most significant two bytes. The upper 16 bits of this word should be set to 
> 0. The API will presume that this is a valid public device address as long as 
> the upper 16-bits of this 32-bit word are all zero. We will also check to see 
> if this is a valid public address (see below). If both UICR[0] and UICR[1] 
> are zero, this will not be considered a valid public address.
> 
> thanks,
> aditi
> 
> 
>> On May 30, 2017, at 4:41 PM, Pierre Kircher  wrote:
>> 
>> the nrf52 has a user config memory, the so-called UICR, which starts at
>> 0x10001080 and stores 32 bytes.
>> 
>> I'd like to use those for internal one-time settings like a device id,
>> or offsets for sensors.
>> 
>> Writing isn't the issue; I just need to be able to access them.
>> 
>> In the softdevice they are usually applied like:
>> 
>> uint32_t UICR_ADDR_0x80 __attribute__((at(0x10001080)))
>> __attribute__((used)) = 0x12345678;
>> 
>> Is there a current way to read them from mynewt?
>> 
>> Thanks a ton, and sorry if that's a noob question.
>> 
>> pierre
>> 
>> http://infocenter.nordicsemi.com/index.jsp?topic=%2Fcom.nordic.infocenter.nrf52832.ps.v1.1%2Fuicr.html&cp=2_2_0_13_0_61



Re: Tx Power Adjustment on the nRF52dk

2017-05-17 Thread will sanfilippo
Well, I think you need to set the advertising power in the state machine 
regardless of multi-adv support. Otherwise the default (syscfg) power level 
will be used for the advertisements.

> On May 16, 2017, at 9:42 AM, Gurpreet Singh <gurpr...@mistsys.com> wrote:
> 
> Yup.. this worked just like you described.
> My method should work when I don't enable the multi-advertisement flag,
> however, right? Much of the code you described is protected by that flag.
> 
> -Gurpreet
> 
> On Mon, May 15, 2017 at 4:58 PM, Gurpreet Singh <gurpr...@mistsys.com>
> wrote:
> 
>> Ah... interesting. Thanks for the tip... I see the code you're talking
>> about. I'll try this out.
>> 
>> Thanks!
>> 
>> On Mon, May 15, 2017 at 4:47 PM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>>> This is definitely a bit confusing in the current code base. To set
>>> advertising power for a particular advertising state machine you need to
>>> use the multi-advertising code. If you see the code in this function:
>>> ble_ll_adv_set_adv_params() you will see that the advsm->adv_txpwr element
>>> gets set to either a syscfg value, or in the case of multi-advertising
>>> supported, the value specified in the command.
>>> 
>>> The code here:  ble_ll_adv_tx_start_cb() calls the ble phy power set
>>> command and thus effectively trumps whatever you set it to if you used
>>> ble_phy_txpwr_set() elsewhere in the code. It sets the transmit power to
>>> advsm->adv_txpwr.
>>> 
>>> 
>>>> On May 15, 2017, at 4:05 PM, Gurpreet Singh <gurpr...@mistsys.com>
>>> wrote:
>>>> 
>>>> Hi,
>>>> 
>>>> I used the example snippet from this mail thread
>>>> <http://www.mail-archive.com/dev@mynewt.incubator.apache.org/msg01236.html> [1]
>>>> to adjust the Tx Power Level of an eddystone URL beacon from the nRF52dk
>>>> 
>>>> Unfortunately, I dont see the rssi change as I change the values. I
>>> checked
>>>> and tried values in the range 4 and -40, but lightblue and eBeacon on my
>>>> iPhone dont show a change in the rssi values.
>>>> 
>>>> I do see the call to ble_phy_txpwr_get returns the newly applied value,
>>>> however.
>>>> 
>>>> Am I missing some configuration somewhere, because in the thread I
>>>> referenced Simon did mention that he saw the power change?
>>>> 
>>>> Right now I'm changing this at compile time, but at some point I'd like to
>>>> be able to do this as a configuration switch at run time, if that's
>>>> possible?
>>>> 
>>>> Thanks in advance,
>>>> Gurpreet
>>>> 
>>>> [1] http://www.mail-archive.com/dev@mynewt.incubator.apache.org/msg01236.html
>>> 
>>> 
>> 



Re: Tx Power Adjustment on the nRF52dk

2017-05-15 Thread will sanfilippo
This is definitely a bit confusing in the current code base. To set advertising 
power for a particular advertising state machine you need to use the 
multi-advertising code. If you see the code in this function: 
ble_ll_adv_set_adv_params() you will see that the advsm->adv_txpwr element gets 
set to either a syscfg value, or in the case of multi-advertising supported, 
the value specified in the command.

The code here:  ble_ll_adv_tx_start_cb() calls the ble phy power set command 
and thus effectively trumps whatever you set it to if you used 
ble_phy_txpwr_set() elsewhere in the code. It sets the transmit power to 
advsm->adv_txpwr.


> On May 15, 2017, at 4:05 PM, Gurpreet Singh  wrote:
> 
> Hi,
> 
> I used the example snippet from this mail thread [1]
> to adjust the Tx Power Level of an eddystone URL beacon from the nRF52dk
> 
> Unfortunately, I don't see the rssi change as I change the values. I checked
> and tried values between 4 and -40, but lightblue and eBeacon on my
> iPhone don't show a change in the rssi values.
> 
> I do see the call to ble_phy_txpwr_get returns the newly applied value,
> however.
> 
> Am I missing some configuration somewhere, because in the thread I
> referenced Simon did mention that he saw the power change?
> 
> Right now I'm changing this at compile time, but at some point I'd like to
> be able to do this as a configuration switch at run time, if that's
> possible?
> 
> Thanks in advance,
> Gurpreet
> 
> [1]
> http://www.mail-archive.com/dev@mynewt.incubator.apache.org/msg01236.html



Re: Writing PWM api, and nrf51 implementation

2017-05-14 Thread will sanfilippo
Not quite sure myself off the top of my head. I am not the best newt tool 
person. I can try to take a look at it later today if you can't figure it out 
before then.


> On May 14, 2017, at 10:00 AM, Jacob Rosenthal <jakerosent...@gmail.com> wrote:
> 
> Good point. I've obviously seen pwm implementations on nrf51 many times. I'm
> guessing they're using the "low power pwm" driver I stumbled across, which
> appears to be using the lfclk:
> https://infocenter.nordicsemi.com/index.jsp?topic=%2Fcom.nordic.infocenter.sdk51.v10.0.0%2Flow_power_pwm_example.html
> 
> Same-ish question: I tried to port that example and hit the same problem; I'm
> having trouble figuring out how to get newt to see the SDK files.
> 
> I tried adding low_power_pwm to ign_files and src_dirs:
> 
> pkg.ign_files.BSP_NRF51:
>- "nrf_saadc.c"
>- "nrf_drv_saadc.c"
>- "nrf_drv_comp.c"
>- "nrf_drv_i2s.c"
>- "nrf_drv_pdm.c"
>- "nrf_drv_pwm.c"
>- "nrf_drv_spis.c"
>- "nrf_drv_twis.c"
>- "spi_5W_master.c"
>- "pstorage*"
>- "sdk_mapped_flags.c"
>- "low_power_pwm.c"
> 
> pkg.ign_dirs:
>- "deprecated"
> 
> pkg.src_dirs:
>- "src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/"
>- "src/ext/nRF5_SDK_11.0.0_89a8197/components/libraries/fifo/"
>- "src/ext/nRF5_SDK_11.0.0_89a8197/components/libraries/util/"
>- "src/ext/nRF5_SDK_11.0.0_89a8197/components/libraries/low_power_pwm/"
> 
> Error:
> /Users/jacobrosenthal/Downloads/chippd3/bin/targets/split-central-misfit-flash/app/hw/drivers/lf_pwm/hw_drivers_lf_pwm.a(lf_pwm.o):
> In function `low_power_init':
> /Users/jacobrosenthal/Downloads/chippd3/repos/mynewt-nordic/hw/drivers/lf_pwm/src/lf_pwm.c:127:
> undefined reference to `low_power_pwm_init'
> /Users/jacobrosenthal/Downloads/chippd3/repos/mynewt-nordic/hw/drivers/lf_pwm/src/lf_pwm.c:129:
> undefined reference to `low_power_pwm_duty_set'
> /Users/jacobrosenthal/Downloads/chippd3/repos/mynewt-nordic/hw/drivers/lf_pwm/src/lf_pwm.c:154:
> undefined reference to `low_power_pwm_start'
> /Users/jacobrosenthal/Downloads/chippd3/bin/targets/split-central-misfit-flash/app/hw/drivers/lf_pwm/hw_drivers_lf_pwm.a(lf_pwm.o):
> In function `lf_pwm_init':
> /Users/jacobrosenthal/Downloads/chippd3/repos/mynewt-nordic/hw/drivers/lf_pwm/src/lf_pwm.c:182:
> undefined reference to `app_timer_init'
> collect2: error: ld returned 1 exit status
> 
> 
> On Sun, May 14, 2017 at 9:00 AM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Jacob:
>> 
>> Does the nrf51 have a PWM peripheral? I do not think it does. I looked at
>> the chip spec and nrf51.h, and neither shows a PWM peripheral like the
>> nrf52's.
>> 
>> 
>>> On May 13, 2017, at 7:46 PM, Jacob Rosenthal <jakerosent...@gmail.com>
>> wrote:
>>> 
>>> I'm attempting to write a pwm api and nrf51 driver. I'm mostly just
>>> stubbing right now based on the adc driver, and have the mynewt pwm driver
>>> stubbed, and am now stubbing mynewt_nordic/hw/drivers/pwm_nrf51, which
>>> includes nrf_drv_pwm.h and nrf_pwm.h from the sdk, but that's giving tons
>>> of errors.
>>> 
>>> Maybe something needs to be added to the mcu/nordic_sdk?
>>> 
>>> 
>>> 
>>> 
>>> 
>>> 
>>> Error: In file included from
>>> repos/apache-mynewt-core/libc/baselibc/include/inttypes.h:10:0,
>>> 
>>>from
>>> repos/apache-mynewt-core/hw/hal/include/hal/hal_bsp.h:27,
>>> 
>>>from
>>> repos/mynewt-nordic/hw/drivers/pwm/pwm_nrf51/src/pwm_nrf51.c:19:
>>> 
>>> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_
>> 11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:56:39:
>>> error: unknown type name 'NRF_PWM_Type'
>>> 
>>>NRF_PWM_TASK_STOP  = offsetof(NRF_PWM_Type, TASKS_STOP),
>>> ///< Stops PWM pulse generation on all channels at the end of the current
>>> PWM period, and stops the sequence playback.
>>> 
>>>  ^
>>> 
>>> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_
>> 11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:57:39:
>>> error: unknown type name 'NRF_PWM_Type'
>>> 
>>>NRF_PWM_TASK_SEQSTART0 = offsetof(NRF_PWM_Type, TASKS_SEQSTART[0]),
>>> ///< Starts playback of sequence 0.
>>> 
>>>  ^
>>> 
>>

Re: Writing PWM api, and nrf51 implementation

2017-05-14 Thread will sanfilippo
Jacob:

Does the nrf51 have a PWM peripheral? I do not think it does. I looked at the 
chip spec and nrf51.h, and neither shows a PWM peripheral like the nrf52's.


> On May 13, 2017, at 7:46 PM, Jacob Rosenthal  wrote:
> 
> I'm attempting to write a pwm api and nrf51 driver. I'm mostly just stubbing
> right now based on the adc driver, and have the mynewt pwm driver stubbed, and
> am now stubbing mynewt_nordic/hw/drivers/pwm_nrf51, which includes
> nrf_drv_pwm.h and nrf_pwm.h from the sdk, but that's giving tons of errors.
> 
> Maybe something needs to be added to the mcu/nordic_sdk?
> 
> 
> 
> 
> 
> 
> Error: In file included from
> repos/apache-mynewt-core/libc/baselibc/include/inttypes.h:10:0,
> 
> from
> repos/apache-mynewt-core/hw/hal/include/hal/hal_bsp.h:27,
> 
> from
> repos/mynewt-nordic/hw/drivers/pwm/pwm_nrf51/src/pwm_nrf51.c:19:
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:56:39:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_TASK_STOP  = offsetof(NRF_PWM_Type, TASKS_STOP),
> ///< Stops PWM pulse generation on all channels at the end of the current
> PWM period, and stops the sequence playback.
> 
>   ^
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:57:39:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_TASK_SEQSTART0 = offsetof(NRF_PWM_Type, TASKS_SEQSTART[0]),
> ///< Starts playback of sequence 0.
> 
>   ^
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:58:39:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_TASK_SEQSTART1 = offsetof(NRF_PWM_Type, TASKS_SEQSTART[1]),
> ///< Starts playback of sequence 1.
> 
>   ^
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:59:39:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_TASK_NEXTSTEP  = offsetof(NRF_PWM_Type, TASKS_NEXTSTEP)
> ///< Steps by one value in the current sequence if the decoder is set to
> @ref NRF_PWM_STEP_TRIGGERED mode.
> 
>   ^
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:69:43:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_EVENT_STOPPED  = offsetof(NRF_PWM_Type, EVENTS_STOPPED),
>///< Response to STOP task, emitted when PWM pulses are no longer
> generated.
> 
>   ^
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:70:43:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_EVENT_SEQSTARTED0  = offsetof(NRF_PWM_Type,
> EVENTS_SEQSTARTED[0]), ///< First PWM period started on sequence 0.
> 
>   ^
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:71:43:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_EVENT_SEQSTARTED1  = offsetof(NRF_PWM_Type,
> EVENTS_SEQSTARTED[1]), ///< First PWM period started on sequence 1.
> 
>   ^
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:72:43:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_EVENT_SEQEND0  = offsetof(NRF_PWM_Type, EVENTS_SEQEND[0]),
>///< Emitted at the end of every sequence 0 when its last value has
> been read from RAM.
> 
>   ^
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:73:43:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_EVENT_SEQEND1  = offsetof(NRF_PWM_Type, EVENTS_SEQEND[1]),
>///< Emitted at the end of every sequence 1 when its last value has
> been read from RAM.
> 
>   ^
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:74:43:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_EVENT_PWMPERIODEND = offsetof(NRF_PWM_Type,
> EVENTS_PWMPERIODEND),  ///< Emitted at the end of each PWM period.
> 
>   ^
> 
> repos/mynewt-nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/hal/nrf_pwm.h:75:43:
> error: unknown type name 'NRF_PWM_Type'
> 
> NRF_PWM_EVENT_LOOPSDONE= offsetof(NRF_PWM_Type, EVENTS_LOOPSDONE)
>///< Concatenated sequences have been played the requested number of
> times.
> 
>   ^
> 
> In file included from
> 

Re: OS starting before main

2017-05-12 Thread will sanfilippo
Julian:

OK, I see the issue now; thanks for explaining it. And to state the obvious, 
the order of initialization is indeed important given the task design in this 
example.
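
A rough sketch of the critical-section option Julian mentions below, reusing
the names from his example (os_sr_t and the OS_ENTER_CRITICAL()/
OS_EXIT_CRITICAL() macros are the kernel's interrupt-lock primitives):

    os_sr_t sr;

    /* No tick can schedule a switch between the two task inits. */
    OS_ENTER_CRITICAL(sr);
    os_task_init(&proc_task, "proc", proc_task_handler, NULL, 1,
                 OS_WAIT_FOREVER, proc_stack, PROC_STACK_SIZE);
    os_task_init(&idle_task, "idle", idle_task_handler, NULL, 2,
                 OS_WAIT_FOREVER, idle_stack, IDLE_STACK_SIZE);
    OS_EXIT_CRITICAL(sr);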


> On May 12, 2017, at 2:47 AM, Julian Ingram <julian.ing...@imgtec.com> wrote:
> 
> Hi Will,
> 
> Thanks for your response, and sorry, the priorities were incorrect... "idle" 
> should be a lower priority "proc" but higher than "main". I will also attempt 
> a better explanation: In the example, the code in proc_task should only be 
> executed after `some_event()` returns truthy. If there is a scheduling point 
> after the perhaps poorly named "idle" task is initialised and before the 
> "proc" task is added to the task list, the execution switches to the "idle" 
> task and then even when `some_event()` returns truthy and releases the 
> semaphore, nothing is waiting on it so the task spins forever.
> 
> Corrected priorities:
> 
>> os_task_init(&idle_task, "idle", idle_task_handler, NULL, 2,
>>              OS_WAIT_FOREVER, idle_stack, IDLE_STACK_SIZE);
>> os_task_init(&proc_task, "proc", proc_task_handler, NULL, 1,
>>              OS_WAIT_FOREVER, proc_stack, PROC_STACK_SIZE); }
> 
> Single threaded equivalent (desired) functionality:
> 
>> while (1) {
>>     if (some_event()) {
>>         // ...
>>     }
>> }
> 
> It is likely that being careful about the order of initialised tasks is the 
> answer and I just preferred it when I had control over when the OS started!
> 
> Also thanks for the assert advice.
> 
> Julian
> 
> -Original Message-
> From: will sanfilippo [mailto:wi...@runtime.io] 
> Sent: 11 May 2017 18:45
> To: dev@mynewt.incubator.apache.org
> Subject: Re: OS starting before main
> 
> Julian:
> 
> Given your example I am a bit confused about some things. I think part of my 
> confusion is that you intialized the idle task to priority 1 and the other 
> task to priority 2. Priority 1 is higher priority than 2. I guess idle task 
> to me is something special but I realize this could just be an example name. 
> So I will go with the presumption that the priorities you have specified are 
> correct: idle_task_handler is the highest priority task and proc_task_handler 
> is a task lower in priority.
> 
> The other part of this example that confuses me is what happens in 
> some_event(). I presume that this function is waiting for some event to occur 
> and it is yielding if no event is available. If it is not, you will never 
> yield and you will be stuck in that task forever (and that does not have 
> anything to do with the order of initialization as far as I can see). If that 
> call does yield, I do not see the problem.
> 
> Anyway, and I probably should have said this first, yes, you have to be 
> careful about the order in which things are initialized. This does indeed get 
> tricky at times but should be possible given that sysinit has stages of 
> initialization and can be used to make sure that data structures are 
> initialized prior to them being called.
> 
> Just an FYI. I realize that you are just showing example code and probably 
> typed this up quickly, but just in case… I would not do the following: 
> assert( func_call() == OK ). This is because assert may get defined out and 
> then you are not making that function call. Better to do this:
> 
> rc = func_call();
> assert(rc == OK);
> 
> I have been burnt by this in the past so I wanted to point it out.
> 
>> On May 11, 2017, at 2:41 AM, Julian Ingram <julian.ing...@imgtec.com> wrote:
>> 
>> Hi all,
>> 
>> Having moved the PIC32 port to the newer start-up method where os_start is 
>> called before main, there have been problems with a tick potentially 
>> occurring between task initialisations.
>> 
>> Am I missing something here? If not, what is the standard fix, just be 
>> careful about the order of task initialisation, initialise them a critical 
>> section or?
>> 
>> For example:
>> 
>> void
>> idle_task_handler(void *arg)
>> {
>>   while (1) {
>>   if (some_event()) {
>>   os_sem_release();
>>   }
>>}
>> }
>> 
>> void
>> proc_task_handler(void *arg)
>> {
>>   while (1) {
>> int err = os_sem_pend(, OS_TIMEOUT_NEVER);
>>   // ...
>> }
>> }
>> 
>> int
>

Re: Question Regarding Multiple Advertising Instances (BLE_MULTI_ADV_SUPPORT)

2017-05-10 Thread will sanfilippo
I would say the bluetooth5 branch is fairly stable, but I am not in the best 
position to comment on it. Furthermore, I am not sure if the multi-advertising 
support has been fully added to that branch. If it has been added, I would go 
ahead and use it, as it should be fairly stable.

If you wanted to expand support for multiple advertisers in 4.2 it sounds like 
you are going about it the correct way: use the code from the bletest app to 
construct your HCI commands to send to the controller. Presumably you are not 
doing any connectable advertising, so all you need to do is write wrappers for 
the bletest code and you should be fine. I realize that I did not give you detailed 
information on exactly what to do but it sounds like you have a decent handle 
on it.

If you have questions about the bletest code let us know and we can answer them.

> On May 10, 2017, at 4:15 PM, Gurpreet Singh <gurpr...@mistsys.com> wrote:
> 
> Hi Will,
> 
> Thanks for the quick response. The controller only support on 4.2 makes
> sense based on what I found in the source code. (I found some test code
> that exercises this, and was trying to build on top of that.)
> 
> How stable would you say the bluetooth5 branch is? If I wanted to try this
> feature out as I build out a quick prototype while exploring the use of
> Nimble/MyNewt, do you think I'd be hampered too much? Alternatively, some
> quick pointers on how to expand this support in the 4.2 codebase to the
> host code will be welcomed as well.
> 
> -Gurpreet
> 
> 
> 
> On Wed, May 10, 2017 at 4:05 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Hello Gurpreet:
>> 
>> So a few things here:
>> 
>> The multi-advertising support that was added was “vendor specific”,
>> meaning that it is (or shall I say was) not in the BLE specification at the
>> time the code was written. Bluetooth 5 adds support for multi-advertising
>> and that standard support will be added to Mynewt. Actually, it might
>> already be there; I know it is in process on the bluetooth5 branch but it
>> may not be fully complete.
>> 
>> The Mynewt code that supports 4.2 only supports multiple advertisers in
>> the controller; it does not support them through the host API. If you are
>> using the combined host-controller you will have to write some of your own
>> code to deal with advertising instances other than the default. This is not
>> terribly difficult to do and we could provide some help/pointers if needed.
>> 
>>> On May 10, 2017, at 3:56 PM, Gurpreet Singh <gurpr...@mistsys.com>
>> wrote:
>>> 
>>> Hi,
>>> 
>>> I've been playing around with MyNewt and Nimble on the nRF52dk board, and
>>> noticed the recently added support for multiple advertising instances
>>> (MYNEWT-508). (BLE_MULTI_ADV_SUPPORT)
>>> 
>>> I want to use this to set up interleaved Eddystone and iBeacon
>>> advertisements, but it doesnt look like those APIs support this quite
>> yet?
>>> Or perhaps, I'm missing something?
>>> 
>>> I'd appreciate any insight, or thoughts.
>>> 
>>> Thanks in advance
>>> GPS
>> 
>> 



Re: Question Regarding Multiple Advertising Instances (BLE_MULTI_ADV_SUPPORT)

2017-05-10 Thread will sanfilippo
Hello Gurpreet:

So a few things here:

The multi-advertising support that was added was “vendor specific”, meaning 
that it is (or shall I say was) not in the BLE specification at the time the 
code was written. Bluetooth 5 adds support for multi-advertising and that 
standard support will be added to Mynewt. Actually, it might already be there; 
I know it is in process on the bluetooth5 branch but it may not be fully 
complete.

The Mynewt code that supports 4.2 only supports multiple advertisers in the 
controller; it does not support them through the host API. If you are using the 
combined host-controller you will have to write some of your own code to deal 
with advertising instances other than the default. This is not terribly 
difficult to do and we could provide some help/pointers if needed.

> On May 10, 2017, at 3:56 PM, Gurpreet Singh  wrote:
> 
> Hi,
> 
> I've been playing around with MyNewt and Nimble on the nRF52dk board, and
> noticed the recently added support for multiple advertising instances
> (MYNEWT-508). (BLE_MULTI_ADV_SUPPORT)
> 
> I want to use this to set up interleaved Eddystone and iBeacon
> advertisements, but it doesnt look like those APIs support this quite yet?
> Or perhaps, I'm missing something?
> 
> I'd appreciate any insight, or thoughts.
> 
> Thanks in advance
> GPS



Re: Problems with erasing flash on nordic devices

2017-05-05 Thread will sanfilippo
Jacob:

I think we might be talking past each other here. I do understand that the code 
is auto-erasing on upload. Let’s face it: if you want to upload a new image you 
are gonna have to erase the one that is there. Sure, there may be a case where 
it is not there or is already the one you want, but that is not going to be the 
norm.

So to be a bit more concise, here is what I think would have to be done:

1) Add a command to erase the unused image (the image that is not running).
2) Modify the code such that it only erases the flash if it is not already 
erased.

The application would then have to deal with reconnecting after any disconnects.
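
A rough sketch of check (2), assuming erased flash reads back as 0xff (the
helper name here is made up; the real change would live in the image upload
path):

    #include "hal/hal_flash.h"

    /* Return 1 if [addr, addr + len) is already erased (all 0xff). */
    static int
    range_is_erased(uint8_t flash_id, uint32_t addr, uint32_t len)
    {
        uint8_t buf[32];
        uint32_t off;
        uint32_t chunk;
        uint32_t i;

        for (off = 0; off < len; off += chunk) {
            chunk = len - off;
            if (chunk > sizeof(buf)) {
                chunk = sizeof(buf);
            }
            if (hal_flash_read(flash_id, addr + off, buf, chunk) != 0) {
                return 0;
            }
            for (i = 0; i < chunk; i++) {
                if (buf[i] != 0xff) {
                    return 0;
                }
            }
        }
        return 1;
    }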


> On May 5, 2017, at 5:34 PM, Jacob Rosenthal <jakerosent...@gmail.com> wrote:
> 
> I'm not erasing. It is auto-erasing on upload. So I can't upload.
> 
> On Fri, May 5, 2017 at 5:32 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> BTW: there is an image list command that will tell you if there is an
>> image that you need to erase.
>> 



Re: Problems with erasing flash on nordic devices

2017-05-05 Thread will sanfilippo
So I left something out of my proposed work-around. The code would have to be 
modified such that if the page/sector was already erased the image upload would 
not blindly erase it.

BTW: there is an image list command that will tell you if there is an image 
that you need to erase.


> On May 5, 2017, at 5:15 PM, Jacob Rosenthal <jakerosent...@gmail.com> wrote:
> 
> You've hit on my easy solution exactly. If we only erase when there's an
> image present to be erased, then I can just reconnect after disconnect. It's
> not fixed, but it'll do.
> 
> *Currently it is not possible to update an nrf51 device over ble.* When I
> reconnect, it erases again, and again drops the connection.
> 
> So perhaps there's a good way to determine if an image is present and if an
> erase is necessary?
> 
> 
> 
> On Fri, May 5, 2017 at 5:02 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Anyway, there have been thoughts on this but no solution has been decided
>> upon. For now your best bet is to simply reconnect if the connection drops
>> (imo).
>> 



Re: newtmgr image upload nrf51dk disconnects with reason=8

2017-04-20 Thread will sanfilippo
Not sure I am answering the right question here, as I am sort of jumping into 
the middle, but with the NRF51, when you erase the flash the CPU is halted. At 
least, that is what the documentation states. The joy :-)

> On Apr 20, 2017, at 4:16 PM, marko kiiskila <ma...@runtime.io> wrote:
> 
> Does flash erase disable interrupts?
> I.e. would running newtmgr on a separate thread help?
> 
>> On Apr 20, 2017, at 4:14 PM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> Hello:
>> 
>> They are both related (in a way). When you increase the slave latency the 
>> supervision timeout sort of increases along with it (in a manner of 
>> speaking). The minimum supervision timeout is this: (1 + connSlaveLatency) * 
>> connInterval * 2.
>> 
>> So if you increase the slave latency you need to increase the supervision 
>> timeout.
>> 
>> I have not completely read through all the emails in this thread but you can 
>> do either to address this issue. However, there are pros and cons to 
>> changing the slave latency. The pro is that you save battery power on the 
>> peripheral; the con is that it will take longer to transfer data if data is 
>> not queued up at the master.
>> 
>> If all you want to do is to avoid a period of time where one side basically 
>> goes away and does not come back for a while, I think I would change the 
>> supervision timeout. If you are connected but generally do not send lots of 
>> information, and do not care about latency and care about battery power, I 
>> would change the slave latency.
>> 
>> I do think increasing the supervision timeout will help this particular 
>> issue. If you are erasing the flash and it takes, say, 100 msecs, having a 
>> long supervision timeout (greater than 100 msecs plus a number of connection 
>> intervals) should do the trick. I would give three or four connection 
>> intervals plus the time you think you might be away for the supervision 
>> timeout.
>> 
>> 
>> 
>>> On Apr 20, 2017, at 3:42 PM, Simon Ratner <si...@proxy.co> wrote:
>>> 
>>> I believe the setting to tweak is the slave latency, and have some
>>> empirical evidence to back that up. Increasing supervision timeout doesn't
>>> help if the timeout is at the link manager level (because of a blocking
>>> call, say), while increasing slave latency allows the peripheral to take
>>> more than one connection itvl to ack the frames.
>>> 
>>> On Thu, Apr 20, 2017 at 2:55 PM, Christopher Collins <ch...@runtime.io>
>>> wrote:
>>> 
>>>> Hi Jacob,
>>>> 
>>>> On Thu, Apr 20, 2017 at 02:21:01PM -0700, Jacob Rosenthal wrote:
>>>>> I think the default intervals are here, which should be sufficiently over
>>>>> 20ms
>>>>> /** 30 ms. */
>>>>> #define BLE_GAP_ADV_FAST_INTERVAL1_MIN  (30 * 1000 /
>>>> BLE_HCI_ADV_ITVL)
>>>>> //48
>>>>> 
>>>>> /** 60 ms. */
>>>>> #define BLE_GAP_ADV_FAST_INTERVAL1_MAX  (60 * 1000 /
>>>> BLE_HCI_ADV_ITVL).
>>>>> //96
>>>>> 
>>>>> /** 100 ms. */
>>>>> #define BLE_GAP_ADV_FAST_INTERVAL2_MIN  (100 * 1000 /
>>>> BLE_HCI_ADV_ITVL)
>>>>> //160
>>>>> 
>>>>> /** 150 ms. */
>>>>> #define BLE_GAP_ADV_FAST_INTERVAL2_MAX  (150 * 1000 /
>>>> BLE_HCI_ADV_ITVL)
>>>>> //240
>>>>> 
>>>>> or I can even force to the higher ones in bleprph:
>>>>> 
>>>>>  adv_params.itvl_min = BLE_GAP_ADV_FAST_INTERVAL2_MIN;
>>>>>  adv_params.itvl_max = BLE_GAP_ADV_FAST_INTERVAL2_MAX;
>>>> [...]
>>>>>> <https://devzone.nordicsemi.com/users/580/olha/>
>>>>>> In logs, interval advertises as adv_itvl_min=0 adv_itvl_max=0, on
>>>>>> connection logs I've seen both itvl=9 and itvl=12, not sure what that
>>>> maps
>>>>>> to yet though..
>>>> 
>>>> These are advertising intervals, i.e., how frequently the device sends
>>>> out an advertisement, and not relevant here.  BLE has a lot of
>>>> "intervals," so I don't fault you for getting confused!
>>>> 
>>>> The interval of importance here is the connection interval, i.e., how
>>>> frequently the two connected peers attempt communication.
>>>> 
>>>> I think adjusting the connection interval is not the best solution for 
>>>> the supervision timeout problem.

Re: newtmgr image upload nrf51dk disconnects with reason=8

2017-04-20 Thread will sanfilippo
Hello:

They are both related (in a way). When you increase the slave latency the 
supervision timeout sort of increases along with it (in a manner of speaking). 
The minimum supervision timeout is this: (1 + connSlaveLatency) * connInterval 
* 2.

So if you increase the slave latency you need to increase the supervision 
timeout.
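
For example, with a 30 ms connection interval and a slave latency of 4, that 
minimum works out to (1 + 4) * 30 ms * 2 = 300 ms, so the supervision timeout 
you request should be at least that.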

I have not completely read through all the emails in this thread but you can do 
either to address this issue. However, there are pros and cons to changing the 
slave latency. The pro is that you save battery power on the peripheral; the 
con is that it will take longer to transfer data if data is not queued up at 
the master.

If all you want to do is to avoid a period of time where one side basically 
goes away and does not come back for a while, I think I would change the 
supervision timeout. If you are connected but generally do not send lots of 
information, and do not care about latency and care about battery power, I 
would change the slave latency.

I do think increasing the supervision timeout will help this particular issue. 
If you are erasing the flash and it takes, say, 100 msecs, having a long 
supervision timeout (greater than 100 msecs plus a number of connection 
intervals) should do the trick. I would give three or four connection intervals 
plus the time you think you might be away for the supervision timeout.
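
For reference, requesting such parameters from the peripheral side looks 
roughly like this with NimBLE; a sketch only, with illustrative values 
(interval units are 1.25 ms, timeout units are 10 ms):

    #include "host/ble_gap.h"

    /* Sketch: ask the central for a 7.5 ms to 60 ms interval, no slave
     * latency and a 5 second supervision timeout. The central is free to
     * reject or modify this request. */
    static int
    request_long_supervision_timeout(uint16_t conn_handle)
    {
        struct ble_gap_upd_params params = {
            .itvl_min = 6,                  /* 6 * 1.25 ms = 7.5 ms */
            .itvl_max = 48,                 /* 48 * 1.25 ms = 60 ms */
            .latency = 0,                   /* in connection events */
            .supervision_timeout = 500,     /* 500 * 10 ms = 5 s */
            .min_ce_len = 0,
            .max_ce_len = 0,
        };

        return ble_gap_update_params(conn_handle, &params);
    }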



> On Apr 20, 2017, at 3:42 PM, Simon Ratner  wrote:
> 
> I believe the setting to tweak is the slave latency, and have some
> empirical evidence to back that up. Increasing supervision timeout doesn't
> help if the timeout is at the link manager level (because of a blocking
> call, say), while increasing slave latency allows the peripheral to take
> more than one connection itvl to ack the frames.
> 
> On Thu, Apr 20, 2017 at 2:55 PM, Christopher Collins 
> wrote:
> 
>> Hi Jacob,
>> 
>> On Thu, Apr 20, 2017 at 02:21:01PM -0700, Jacob Rosenthal wrote:
>>> I think the default intervals are here, which should be sufficiently over
>>> 20ms
>>> /** 30 ms. */
>>> #define BLE_GAP_ADV_FAST_INTERVAL1_MIN  (30 * 1000 /
>> BLE_HCI_ADV_ITVL)
>>>  //48
>>> 
>>> /** 60 ms. */
>>> #define BLE_GAP_ADV_FAST_INTERVAL1_MAX  (60 * 1000 /
>> BLE_HCI_ADV_ITVL).
>>>  //96
>>> 
>>> /** 100 ms. */
>>> #define BLE_GAP_ADV_FAST_INTERVAL2_MIN  (100 * 1000 /
>> BLE_HCI_ADV_ITVL)
>>> //160
>>> 
>>> /** 150 ms. */
>>> #define BLE_GAP_ADV_FAST_INTERVAL2_MAX  (150 * 1000 /
>> BLE_HCI_ADV_ITVL)
>>> //240
>>> 
>>> or I can even force to the higher ones in bleprph:
>>> 
>>>adv_params.itvl_min = BLE_GAP_ADV_FAST_INTERVAL2_MIN;
>>>adv_params.itvl_max = BLE_GAP_ADV_FAST_INTERVAL2_MAX;
>> [...]
 
>>>> In logs, interval advertises as adv_itvl_min=0 adv_itvl_max=0, on
>>>> connection logs I've seen both itvl=9 and itvl=12, not sure what that
>>>> maps
>>>> to yet though..
>> 
>> These are advertising intervals, i.e., how frequently the device sends
>> out an advertisement, and not relevant here.  BLE has a lot of
>> "intervals," so I don't fault you for getting confused!
>> 
>> The interval of importance here is the connection interval, i.e., how
>> frequently the two connected peers attempt communication.
>> 
>> I think adjusting the connection interval is not the best solution for
>> the supervision timeout problem.  If you increase the interval, this
>> might help maintain the connection, but it will also impair response
>> time between the two devices.  The correct setting to increase, in my
>> opinion, is the supervision timeout.  By increasing this, you allow the
>> peripheral device to stay quiet for a longer period of time without the
>> connection getting dropped.
>> 
>> Regardless of which parameter you change, there are two ways to do it:
>>1. Configure the central device such that it initiates connections
>>   with the preferred settings (this is what Vipul suggested in his
>>   newtmgr diff).
>> 
>>2. Have the peripheral device request a connection parameter update
>>   after the connection is established.  With NimBLE, you would use
>>   the ble_gap_update_params() function for this
>>   (https://mynewt.apache.org/latest/network/ble/ble_hs/ble_
>> gap/functions/ble_gap_update_params/).
>> 
>> Option 1 is the reliable way to do this.  With option 2, the central may
>> choose to disregard your requested parameters, so you're really at the
>> central's mercy.
>> 
>> Chris
>> 



Re: newtmgr image upload nrf51dk disconnects with reason=8

2017-04-17 Thread will sanfilippo
I wonder if this issue is related to other controllers that we have had issues 
with. Try disabling Data Length Extension. Set BLE_LL_CFG_FEAT_DATA_LEN_EXT in 
net/nimble/controller/syscfg.yml to 0
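
For example, in the target or app syscfg.yml:

    syscfg.vals:
        BLE_LL_CFG_FEAT_DATA_LEN_EXT: 0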

Let us know if that changes anything.
> On Apr 16, 2017, at 6:13 PM, Jacob Rosenthal  wrote:
> 
> On Sun, Apr 16, 2017 at 5:46 PM, Christopher Collins 
> wrote:
> 
>> Welcome to return code hell :).  The status=8 is actually a Bluetooth
>> error code, not a NimBLE host error code.  In this case, the controller
>> belonging to the destination device has indicated a disconnect reason of
>> 8, which translates to "supervision timeout."  In other words, your PC's
>> controller unexpectedly went silent, so the NimBLE device was forced to
>> drop the connection.
>> 
> Oh interesting.
> 
>> 
>> The GATT library (what newtmgr uses for BLE) is a bit sketchy, so it's
>> hard to tell who was at fault.  What controller are you using to send the
>> newtmgr command (e.g., built-in Bluetooth radio)?
>> 
> This is the built-in radio on my A1466 2012 MacBook Air in Fedora.
> Interestingly enough this happens on my node port on osx as well, which is
> why I went back to trying to get newtmgr under linux working first.
> 
> 
>> Do you have any luck with a smaller newtmgr command (e.g., echo)?  The
>> image upload command is especially troublesome because it causes the
>> destination device to perform a flash erase operation.  This causes the
>> MCU to stall, which can lead to supervision timeouts.  I believe the
>> newtmgr tool uses fairly lenient connection settings for this reason,
>> but this could still be the problem, especially on the nRF51.
>> 
> All other commands have worked great so far.
> 
> 
>> You could try connecting to the NimBLE device with a known good setup
>> such as the Lightblue app on OS X or an iphone.  Then, subscribe to the
>> newtmgr characteristic and try writing a value to the same
>> characteristic.  If the connection stays up and you get a notification
>> from the NimBLE device (newtmgr response), then I would suspect
>> something on the PC side.
>> 
> I'm just writing 0x01 and the connection stays up, but I don't seem to get
> any response.



Re: MyNewt and Ignite demo at ApacheCon

2017-04-14 Thread will sanfilippo
Hello Denis:

I have not tried this myself but a colleague pointed me at this: 
https://www.allaboutcircuits.com/technical-articles/getting-started-with-openocd-using-ft2232h-adapter-for-swd-debugging/

So you could use mynewt and openocd to download something to the adafruit 
feather board without needing J-Link.

Will
> On Apr 14, 2017, at 2:22 PM, Denis Magda  wrote:
> 
> Hi Folks,
> 
> Pretty soon I will be presenting a talk [1] about ASF projects that can be 
> used as a foundation for a combined IoT + Fast Data solution. MyNewt was 
> chosen to be a platform for the IoT side and now the goal is to prepare a 
> demo for the conference with a real hardware harnessed by MyNewt and a 
> multi-node Apache Ignite [2] cluster running on a cloud.
> 
> For MyNewt part I want to pick up a BLE empowered board such that the data 
> can be transferred over BLE to my Mac OS laptop that will send it to the 
> cloud, in particular, this is the one I’m thinking of [3] (Adafruit Feather 
> nRF52 Bluefruit LE - nRF52832).
> 
> However, I’m not sure if the Mac OS laptop can discover that particular board 
> [3] and load a built target there. Has anyone tried this before? Are 
> appliances like J-Link a must for my dev environment, or optional?
> 
> [1] https://apachecon2017.sched.com/event/9zot 
> 
> [2] https://ignite.apache.org
> [3] 
> https://www.amazon.com/Adafruit-Feather-nRF52-Bluefruit-nRF52832/dp/B06XXSVYLC
>  
> 
> 
> —
> Denis



Re: [DISCUSS] Release policy update for handling feature branches

2017-04-14 Thread will sanfilippo
I think you are correct about this. Someone needs to determine which pull 
requests against master need to get merged into the various branches.

> On Apr 14, 2017, at 11:54 AM, aditi hilbert  wrote:
> 
> Hi all,
> 
> It’s good to see our Release policy going into effect post 1.0 release. The 
> develop branch is gone and feature branches have emerged. Smaller changes are 
> being merged into the master branch which is now relatively up to date. Most 
> of the feature branches are going to be long-lived, so wanted to discuss how 
> to manage them efficiently. 
> 
> Typically, there is an “owner” for each feature branch who oversees the 
> merges into that branch and periodically updates it with changes from master 
> to keep it aligned with master and leverage recent changes. Currently we have 
> “bluetooth5” - so we would need an owner for that. I can see other 
> connectivity stacks (e.g. LoRa, sub-GHz) being built on separate feature 
> branches. 
> 
> Please share your thoughts, concerns, suggestions. And do we have a volunteer 
> for owning the “bluetooth5” feature branch? :)
> 
> thanks,
> aditi



Re: Bluetooth 5 support - configurability

2017-04-10 Thread will sanfilippo
+1 on opt-in for BT5 although I do think there are quite a few configuration 
variables for features that are on by default. Not sure there is a rhyme or 
reason, other than possibly the thought that “most people would be enabling 
this, so let’s have it on by default”.


> On Apr 10, 2017, at 11:50 AM, Łukasz Rymanowski 
> <lukasz.rymanow...@codecoup.pl> wrote:
> 
> Hi,
> 
> On 10 April 2017 at 18:15, will sanfilippo <wi...@runtime.io> wrote:
> 
>> I think #3 is fine as well. If, for some reason, folks do not want to
>> claim 5.0 support they can always use release 1.0.0 of Mynewt.
>> 
>> 
>> 
>> On Apr 10, 2017, at 6:16 AM, Szymon Janc <szymon.j...@codecoup.pl> wrote:
>>> 
>>> Hello Community,
>>> 
>>> We are currently upstreaming Bluetooth 5 functionality into Apache Mynewt.
>>> Since all of the new features are optional to support (excluding internal
>>> dependencies) we could make Mynewt code configurable per feature. It
>>> shouldn't be too much hassle to support this via syscfg.yml with
>>> MYNEWT_VALs.
>>> 
>>> There are a few possible paths and I'd like to gather some feedback.
>>> 
>>> 1. Always claim 5.0 (LL version) support and leave all features
>>> configurable.
>>> 2. Same as 1. but also allow to configure 4.2 vs 5.0 support.
>>> 3. Same as 1. but always enable trivial features (Privacy Errata, High Duty
>>> Un-Directed Advertising) and leave other features configurable.
>>> 4. Always enable everything.
>>> 
>>> Personally I'd opt for 3, mostly due to the fact that it doesn't increase
>>> code size compared to 4.2 and reduces the number of configuration
>>> variables. So it feels like a good compromise between configurability and
>>> complexity.
>>> 
>> 
> 
> That looks good to me as well.
> 
>>> There is also the open point of opt-in vs opt-out configuration. I think
>>> we should go with opt-in, i.e. an optional feature needs to be explicitly
>>> enabled in syscfg.
>>> 
>> 
> 
> I think other features  (e.g. LE CoC) are opt-in so we should follow that.
> 
> 
>>> Comments?
>>> 
>>> --
>>> pozdrawiam
>>> Szymon Janc
>> 
>> 
> Best
> Łukasz



Re: DC/DC regulator enable for nrf52. Where should it go?

2017-04-10 Thread will sanfilippo
There is a syscfg value called BOOT_LOADER that is defined by the bootloader 
app. I think this is sufficient. Agreed?
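
That is, shared MCU/BSP code could branch on it at compile time; a sketch:

    #include <syscfg/syscfg.h>

    #if MYNEWT_VAL(BOOT_LOADER)
        /* Compiled into the bootloader; skip work the app will redo. */
    #else
        /* Compiled into the application image. */
    #endif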

Will

> On Apr 10, 2017, at 9:55 AM, Sterling Hughes 
> <sterling.hughes.pub...@gmail.com> wrote:
> 
> Agree- but there should be some way of easily knowing whether called by boot 
> loader or actual system.
> 
> Sterling
> 
>> On Apr 10, 2017, at 6:53 PM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> Thinking about this more… (which is almost never a good thing with me):
>> 
>> Cortex-M MCU manufacturers produce a file called system_xxx.c (where xxx is 
>> the name of the MCU). There is a function in that file called SystemInit. 
>> Generally, this is something that should be called as early as possible and 
>> is called in the startup code. What I am now considering doing is adding our 
>> own version of “SystemInit” as I do not think it proper to modify the 
>> system_xxx.c module unless absolutely necessary.
>> 
>> To be more concise:
>> 
>> * SystemInit() will still be called and will remain exactly the way it is 
>> now.
>> * Add a function in hal_system.c (which is in hw/mcu) called 
>> hal_system_init().
>> * This function will be called by the startup code after SystemInit is 
>> called.
>> * The configuration variable for DCDCEN will live in the MCU (default to 0) 
>> and will be overridden by any BSP that can support it.
>> 
>> NOTE: I will comment on this in the hal_system_init() function, but this 
>> function will be called by both the bootloader and the app. If there is a 
>> case where doing something twice is undesirable that will have to be dealt 
>> with by anyone adding code to hal_system_init().
>> 
>> Comments?
>> 
>>> On Apr 7, 2017, at 4:50 PM, Sterling Hughes 
>>> <sterling.hughes.pub...@gmail.com> wrote:
>>> 
>>> Hi,
>>> 
>>> Couple of thoughts:
>>> 
>>> - I think this function/syscfg belongs in the MCU definition, as a 
>>> configuration item that can be controlled by the BSP.
>>> 
>>> - I think it should be called as early as possible, so probably 
>>> hal_bsp_init().
>>> 
>>> - It’s a bit odd that hal_bsp_init() is the same for bootloader and running 
>>> image, we should probably have an option to make this different.  I’d lean 
>>> to having more functionality in the image, because it’s upgradable.
>>> 
>>> Sterling
>>> 
>>>> On 7 Apr 2017, at 16:32, will sanfilippo wrote:
>>>> 
>>>> Hello:
>>>> 
>>>> I want to add some code that enables the DC/DC regulator for the nordic 
>>>> chips. Enabling this regulator reduces power consumption (considerably). 
>>>> For example, using the LDO when running from flash (cache enabled) is 
>>>> typically 7.4mA; using the DC/DC regulator it goes to 3.7 mA.
>>>> 
>>>> It would be best to turn this on as soon as possible but it should only be 
>>>> enabled if there is some external circuitry attached to some of the pins 
>>>> (see the product specifications for more details). For all the BSP’s 
>>>> currently in the repo, the DC/DC regulator can (and should) be enabled. 
>>>> Given that there is external circuitry involved I was going to create a 
>>>> syscfg variable that would either exist in the BSP or be overridden by the 
>>>> BSP. What I am having a bit of trouble figuring out is where should the 
>>>> code to enable the DC/DC regulator go?
>>>> 
>>>> We have a choice of putting it in an existing place or doing something 
>>>> new. It seems to me that if we choose an existing place it would go in 
>>>> either hal_bsp_init() or hal_system_start().
>>>> 
>>>> Some comments about the existing functions:
>>>> 
>>>> hal_system_start():
>>>> * Code would only need to be modified in one place (in hw/mcu).
>>>> * This function is called after the bootloader does some work, so more 
>>>> power savings could be realized earlier on.
>>>> * If you build an image with no bootloader I do not think this is called.
>>>> * It might be a bit of an odd place to put this code (enabling the DC/DC 
>>>> regulator).
>>>> 
>>>> hal_bsp_init():
>>>> * This is called early on by the bootloader. It is also called by the 
>>>> application which is a bit confusing to me. I am not super familiar with 
>>>> the bootloader but

Re: DC/DC regulator enable for nrf52. Where should it go?

2017-04-10 Thread will sanfilippo
Thinking about this more… (which is almost never a good thing with me):

Cortex-M MCU manufacturers produce a file called system_xxx.c (where xxx is 
the name of the MCU). There is a function in that file called SystemInit. 
Generally, this is something that should be called as early as possible and is 
called in the startup code. What I am now considering doing is adding our own 
version of “SystemInit” as I do not think it proper to modify the system_xxx.c 
module unless absolutely necessary.

To be more concise:

* SystemInit() will still be called and will remain exactly the way it is now.
* Add a function in hal_system.c (which is in hw/mcu) called hal_system_init().
* This function will be called by the startup code after SystemInit is called.
* The configuration variable for DCDCEN will live in the MCU (default to 0) and 
will be overridden by any BSP that can support it.

NOTE: I will comment on this in the hal_system_init() function, but this 
function will be called by both the bootloader and the app. If there is a case 
where doing something twice is undesirable that will have to be dealt with by 
anyone adding code to hal_system_init().
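
Concretely, the MCU-level function could look something like this; a sketch 
only, with MCU_DCDC_ENABLE as a placeholder syscfg name and the register write 
per the Nordic MDK definitions:

    #include <syscfg/syscfg.h>
    #include "nrf.h"    /* Nordic MDK register definitions */

    /* Called from the startup code right after SystemInit(). Sketch only. */
    void
    hal_system_init(void)
    {
    #if MYNEWT_VAL(MCU_DCDC_ENABLE)
        NRF_POWER->DCDCEN = 1;      /* switch from the LDO to the DC/DC */
    #endif
    }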

Comments?

> On Apr 7, 2017, at 4:50 PM, Sterling Hughes 
> <sterling.hughes.pub...@gmail.com> wrote:
> 
> Hi,
> 
> Couple of thoughts:
> 
> - I think this function/syscfg belongs in the MCU definition, as a 
> configuration item that can be controlled by the BSP.
> 
> - I think it should be called as early as possible, so probably 
> hal_bsp_init().
> 
> - It’s a bit odd that hal_bsp_init() is the same for bootloader and running 
> image, we should probably have an option to make this different.  I’d lean to 
> having more functionality in the image, because it’s upgradable.
> 
> Sterling
> 
> On 7 Apr 2017, at 16:32, will sanfilippo wrote:
> 
>> Hello:
>> 
>> I want to add some code that enables the DC/DC regulator for the nordic 
>> chips. Enabling this regulator reduces power consumption (considerably). For 
>> example, using the LDO when running from flash (cache enabled) is typically 
>> 7.4mA; using the DC/DC regulator it goes to 3.7 mA.
>> 
>> It would be best to turn this on as soon as possible but it should only be 
>> enabled if there is some external circuitry attached to some of the pins 
>> (see the product specifications for more details). For all the BSP’s 
>> currently in the repo, the DC/DC regulator can (and should) be enabled. 
>> Given that there is external circuitry involved I was going to create a 
>> syscfg variable that would either exist in the BSP or be overridden by the 
>> BSP. What I am having a bit of trouble figuring out is where should the code 
>> to enable the DC/DC regulator go?
>> 
>> We have a choice of putting it in an existing place or doing something new. 
>> It seems to me that if we choose an existing place it would go in either 
>> hal_bsp_init() or hal_system_start().
>> 
>> Some comments about the existing functions:
>> 
>> hal_system_start():
>> * Code would only need to be modified in one place (in hw/mcu).
>> * This function is called after the bootloader does some work, so more power 
>> savings could be realized earlier on.
>> * If you build an image with no bootloader I do not think this is called.
>> * It might be a bit of an odd place to put this code (enabling the DC/DC 
>> regulator).
>> 
>> hal_bsp_init():
>> * This is called early on by the bootloader. It is also called by the 
>> application which is a bit confusing to me. I am not super familiar with the 
>> bootloader but unless this function exists in some place I do not see or we 
>> override bsp syscfg variables in the bootloader app, hal_bsp_init() is going 
>> to do things twice. Is this true or am I missing something?
>> * We would have to modify hal_bsp_init() in all the bsps.
>> 
>> Honestly, I am not too concerned about having to modify all the bsps that 
>> use the nordic chip if the community thinks this code belongs in 
>> hal_bsp_init().
>> 
>> Any comments/suggestions?
>> 
>> Thanks!
>> 
>> PS If we decide that this code should exist in hw/mcu what I would do is to 
>> create a syscfg variable in hw/mcu/nordic (or hw/mcu/nordic/nrf52xxx and 
>> hw/mcu/nordic/nrf51xxx) called MCU_DCDC_ENABLE (or some such). By default, 
>> this will be 0. All the bsp syscfg.yml files will override this setting.



Re: Bluetooth 5 support - configurability

2017-04-10 Thread will sanfilippo
I think #3 is fine as well. If, for some reason, folks do not want to claim 5.0 
support they can always use release 1.0.0 of Mynewt.


> On Apr 10, 2017, at 6:16 AM, Szymon Janc  wrote:
> 
> Hello Community,
> 
> We are currently upstreaming Bluetooth 5 functionality into Apache Mynewt. 
> Since all of the new features are optional to support (excluding internal 
> dependencies) we could make Mynewt code configurable per feature. It 
> shouldn't be too much hassle to support this via syscfg.yml with MYNEWT_VALs.
> 
> There are a few possible paths and I'd like to gather some feedback.
> 
> 1. Always claim 5.0 (LL version) support and leave all features configurable.
> 2. Same as 1. but also allow to configure 4.2 vs 5.0 support.
> 3. Same as 1. but always enable trivial features (Privacy Errata, High Duty 
> Un-Directed Advertising) and leave other features configurable.
> 4. Always enable everything.
> 
> Personally I'd opt for 3, mostly due to the fact that it doesn't increase 
> code size compared to 4.2 and reduces the number of configuration variables. 
> So it feels like a good compromise between configurability and complexity.
> 
> There is also the open point of opt-in vs opt-out configuration. I think we 
> should go with opt-in, i.e. an optional feature needs to be explicitly 
> enabled in syscfg.
> 
> Comments?
> 
> -- 
> pozdrawiam
> Szymon Janc



Re: Use of os_error_t and OS_OK

2017-04-10 Thread will sanfilippo
Well, I replied quite differently, but I did not realize that os_error_t was a 
relic. If that is the case, I agree with what you have here :-)

+1
> On Apr 10, 2017, at 7:42 AM, Sterling Hughes 
> <sterling.hughes.pub...@gmail.com> wrote:
> 
> I don’t think we ever came to agreement, and things are a bit of a mishmash.  
> Ben brings up a good point.
> 
> Mynewt wide, in my view:
> 
> * os_error is a relic, and sys/defs codes should be used.
> 
> * All functions should return “int” and not “os_error_t” or a specific error 
> type.
> 
> * 0 and -1 are SYS_EOK and SYS_EUNKNOWN (new value) respectively, and can be 
> safely returned as numbers not defines.
> 
> * For other errors, the SYS_* equivalents should be returned.
> 
> Thoughts?
> 
> Sterling
> 
> On 10 Apr 2017, at 16:38, will sanfilippo wrote:
> 
>> Not sure if anyone answered this. This is just my opinion of course:
>> 
>> * The OS functions should use return type os_error_t.
>> * Those functions should return OS_OK or some other OS error.
>> * Checks against functions with type os_error_t should be against OS_OK and 
>> not 0.
>> 
>> The bubbling up of errors, well, not sure. If some function not in the OS 
>> calls an os function which returns os_error_t does not need to use a return 
>> type of os_error_t; that can be int.
>> 
>> 
>>> On Apr 9, 2017, at 7:55 PM, Ben Harper <btharper1...@gmail.com> wrote:
>>> 
>>> While mucking about in the source I found a few places where the use of
>>> OS_OK was either returned and checked against a hardcoded zero, or the
>>> other way around, and some function signatures that give os_error_t or int
>>> and return the other. The documentation has similar disconnects in portions
>>> as to what the return type is, and some functions seem to bubble up the
>>> response code from underlying system calls and the type changes as it is
>>> returned.  I'd like to work through fixing these, but I'm not able to find
>>> a single source of truth as to which they should be. Is there currently any
>>> set guidance on this? Or would it be fine if I just made my best guesses
>>> and brought it together as a PR against the github repository?
>>> 
>>> Thanks for any help you can give on the matter.
>>> - Ben Harper



Re: I2C not working on nrf52xxx's rb-nano2

2017-04-10 Thread will sanfilippo
Just an FYI: I was playing around with some code for something I was working on 
and I was able to disable UART0 although I did not try to re-purpose the pins. 
The UART was certainly not enabled and did not attempt to grab the GPIO. This was 
using the nrf52dk bsp though.

Anyway, I think it worth a bit of effort to try and figure out exactly what 
happened in your case Lukasz.

> On Apr 10, 2017, at 1:04 AM, Szymon Janc  wrote:
> 
> Hi,
> 
> On Sunday, 9 April 2017 21:47:26 CEST Łukasz Wolnik wrote:
> 
> 
> 
>> Later I found out that my pull-up resistors were actually 100 Ohms instead
>> of 10k. So actually above change to the drive wasn't necessary (it should
>> stay as GPIO_PIN_CNF_DRIVE_S0D1) but I have to say that fixing hardware
>> issues using software feels good. Besides RedBear Nano2 has its own
>> pull-ups so no resistors are required in the first place.
>> 
>> And just for completeness, I have tried different setups to share the lines
>> with UART_0 (pins 2, 28, 29 and 30) but none of them worked. So my previous
>> try with the Bluetooth serial logging must have been sabotaged by the 100
>> Ohm pull-up resistors. I have a feeling that it's not so easy to disable
>> UART_0 completely so your advice to change SPI0_CONFIG_SCK_PIN pin might
>> have been necessary as well. Luckily there's no need to check it.
> 
> Just a heads up, but Michał was working on adding RTT console support for 
> Mynewt and with those patches it should be possible to disable UART_0 ie. we 
> are able to use console over RTT while using UART for HCI monitoring (other 
> feature we are working on). Currently this is pending review and PR should be 
> updated in upcoming days.
> 
> Code is available at
> https://github.com/michal-narajowski/incubator-mynewt-core/tree/bletiny2
> 
> -- 
> pozdrawiam
> Szymon Janc



Re: Use of os_error_t and OS_OK

2017-04-10 Thread will sanfilippo
Not sure if anyone answered this. This is just my opinion of course:

* The OS functions should use return type os_error_t.
* Those functions should return OS_OK or some other OS error.
* Checks against functions with type os_error_t should be against OS_OK and not 
0.

The bubbling up of errors, well, not sure. If some function not in the OS calls 
an os function which returns os_error_t does not need to use a return type of 
os_error_t; that can be int.
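
To make that concrete, a sketch (names illustrative; assume my_mutex was 
initialized with os_mutex_init() at startup):

    #include "os/os.h"

    static struct os_mutex my_mutex;

    /* Sketch of the convention: a non-OS function may return plain int,
     * but checks the OS call against OS_OK rather than a bare 0. */
    static int
    do_work(void)
    {
        os_error_t err;

        err = os_mutex_pend(&my_mutex, OS_TICKS_PER_SEC);
        if (err != OS_OK) {
            return -1;
        }
        /* ... critical section ... */
        os_mutex_release(&my_mutex);
        return 0;
    }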


> On Apr 9, 2017, at 7:55 PM, Ben Harper  wrote:
> 
> While mucking about in the source I found a few places where the use of
> OS_OK was either returned and checked against a hardcoded zero, or the
> other way around, and some function signatures that give os_error_t or int
> and return the other. The documentation has similar disconnects in portions
> as to what the return type is, and some functions seem to bubble up the
> response code from underlying system calls and the type changes as it is
> returned.  I'd like to work through fixing these, but I'm not able to find
> a single source of truth as to which they should be. Is there currently any
> set guidance on this? Or would it be fine if I just made my best guesses
> and brought it together as a PR against the github repository?
> 
> Thanks for any help you can give on the matter.
> - Ben Harper



Re: I2C not working on nrf52xxx's rb-nano2

2017-04-09 Thread will sanfilippo
I was going to mention drive strength but I had never had to do that in the 
past. Glad it worked! But with I2C it might help in some cases.

It is also a bit of a bummer that disabling UART_0 was not that easy or did not 
do the trick; we will look into this a bit more to see what the issue is.

Will
> On Apr 9, 2017, at 12:47 PM, Łukasz Wolnik <lukasz.wol...@gmail.com> wrote:
> 
> Will, thank you very much for a handful of tips. Thanks to you I made it
> work!
> 
> It turned out that all I had to do was to change I2C's SDA/SCL pins to
> unallocated ones.
> 
> 
> But at first changing the pins didn't work. I had found a similar issue [1]
> on GitHub that resolved it by driving a higher current to SDA/SCL lines:
> 
> https://github.com/limal/incubator-mynewt-core/commit/3510504c5cccde7de54086672cde15f945b79a3e
> 
> And changing the settings to GPIO_PIN_CNF_DRIVE_H0D1 did the work [2]. My
> output was now the correct 0x33 value identifier of the accelerometer.
> 
> 2:[ts=15624ssb, mod=64 level=1] hal_i2c_master_write rc: 0
> 3:[ts=23436ssb, mod=64 level=1] readCheck: 51 rc: 0
> 
> 
> 
> Later I found out that my pull-up resistors were actually 100 Ohms instead
> of 10k. So actually above change to the drive wasn't necessary (it should
> stay as GPIO_PIN_CNF_DRIVE_S0D1) but I have to say that fixing hardware
> issues using software feels good. Besides RedBear Nano2 has its own
> pull-ups so no resistors are required in the first place.
> 
> And just for completeness, I have tried different setups to share the lines
> with UART_0 (pins 2, 28, 29 and 30) but none of them worked. So my previous
> try with the Bluetooth serial logging must have been sabotaged by the 100
> Ohm pull-up resistors. I have a feeling that it's not so easy to disable
> UART_0 completely so your advice to change SPI0_CONFIG_SCK_PIN pin might
> have been necessary as well. Luckily there's no need to check it.
> 
> Thanks again,
> Lukasz
> 
> [1]
> https://github.com/RedBearLab/nRF51822-Arduino/issues/38#issuecomment-186752735
> [2]
> https://github.com/limal/incubator-mynewt-core/commit/3510504c5cccde7de54086672cde15f945b79a3e
> 
> On Sun, Apr 9, 2017 at 5:30 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Just an FYI:
>> 
>> The timeout value is in units of “os ticks”. If you want a timeout of 1
>> second you would use OS_TICKS_PER_SEC. A timeout of 5 seconds would be 5 *
>> OS_TICKS_PER_SEC and a timeout of 100 msecs would be OS_TICKS_PER_SEC / 10.
>> 
>> For example: hal_i2c_master_write(0, &pdata, OS_TICKS_PER_SEC, 1);
>> 
>> I am not trying to say that is why it is failing btw; just wanted to point
>> it out as the documentation may not be completely clear.
>> 
>> I know you disabled the UART, but something interesting to try might be to
>> use a completely different set of GPIO just in case. It also might be
>> interesting to change the pkg.yml file in the nano2 directory such that the
>> SPI0_CONFIG_SCK_PIN pin does not use pin 2. I doubt that is it but worth a
>> try as well.
>> 
>> Unfortunately I do not have a device handy to take a look at this but we
>> should be able to figure this out pretty quickly.
>> 
>> Looks like what you did in the code to enable things should work. I
>> presume that you stepped through the code and that indeed hal_i2c_init()
>> was being called and that it was returning 0.
>> 
>> 
>>> On Apr 9, 2017, at 7:02 AM, Łukasz Wolnik <lukasz.wol...@gmail.com>
>> wrote:
>>> 
>>> P.S. The code that invokes i2c read/write functions in my project is
>> below:
>>> 
>>> uint8_t check = 0x0F;
>>> struct hal_i2c_master_data pwrite = {
>>>   .address = 0x19,
>>>   .len = 1,
>>>   .buffer = &check
>>> };
>>> 
>>> rc = hal_i2c_master_write(0, &pwrite, 500, 1); // always returns -1
>>> 
>>> uint8_t readCheck = 0;
>>> struct hal_i2c_master_data pdata = {
>>>   .address = 0x19,
>>>   .len = 1,
>>>   .buffer = &readCheck
>>> };
>>> 
>>> rc = hal_i2c_master_read(0, &pdata, 500, 1); // always returns -1
>>> 
>>> 
>>> And the timeouts are coming from /hw/mcu/nordic/nrf52xxx/src/hal_i2c.c
>>> 
>>>   while (!regs->EVENTS_TXDSENT && !regs->EVENTS_ERROR) {
>>>   if (os_time_get() - start > timo) {
>>>   regs->TASKS_STOP = 1;
>>>   goto err;
>>>   }
>>>   }
>>> 
>>> and
>>> 
>>>   while (!regs->EVENTS_RXDREADY && !regs->EVENTS_ERROR) {
>>

DC/DC regulator enable for nrf52. Where should it go?

2017-04-07 Thread will sanfilippo
Hello:

I want to add some code that enables the DC/DC regulator for the nordic chips. 
Enabling this regulator reduces power consumption (considerably). For example, 
using the LDO when running from flash (cache enabled) is typically 7.4mA; using 
the DC/DC regulator it goes to 3.7 mA.

It would be best to turn this on as soon as possible but it should only be 
enabled if there is some external circuitry attached to some of the pins (see 
the product specifications for more details). For all the BSP’s currently in 
the repo, the DC/DC regulator can (and should) be enabled. Given that there is 
external circuitry involved I was going to create a syscfg variable that would 
either exist in the BSP or be overridden by the BSP. What I am having a bit of 
trouble figuring out is where should the code to enable the DC/DC regulator go?

We have a choice of putting it in an existing place or doing something new. It 
seems to me that if we choose an existing place it would go in either 
hal_bsp_init() or hal_system_start().

Some comments about the existing functions:

hal_system_start():
* Code would only need to be modified in one place (in hw/mcu).
* This function is called after the bootloader does some work, so more power 
savings could be realized earlier on.
* If you build an image with no bootloader I do not think this is called.
* It might be a bit of an odd place to put this code (enabling the DC/DC 
regulator).

hal_bsp_init():
* This is called early on by the bootloader. It is also called by the 
application which is a bit confusing to me. I am not super familiar with the 
bootloader but unless this function exists in some place I do not see or we 
override bsp syscfg variables in the bootloader app, hal_bsp_init() is going to 
do things twice. Is this true or am I missing something?
* We would have to modify hal_bsp_init() in all the bsps.

Honestly, I am not too concerned about having to modify all the bsps that use 
the nordic chip if the community thinks this code belongs in hal_bsp_init().

Any comments/suggestions?

Thanks!

PS If we decide that this code should exist in hw/mcu what I would do is to 
create a syscfg variable in hw/mcu/nordic (or hw/mcu/nordic/nrf52xxx and 
hw/mcu/nordic/nrf51xxx) called MCU_DCDC_ENABLE (or some such). By default, this 
will be 0. All the bsp syscfg.yml files will override this setting.
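
Sketched out, the definition in the MCU package's syscfg.yml might look like 
this (names per the placeholder above):

    syscfg.defs:
        MCU_DCDC_ENABLE:
            description: 'Enable the DC/DC regulator. Requires external circuitry.'
            value: 0

with each nordic BSP overriding it:

    syscfg.vals:
        MCU_DCDC_ENABLE: 1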



Re: Problem loading images to nRF52DK board

2017-04-04 Thread will sanfilippo
BTW, I am curious: what modifications are you going to make to the controller? 
If you do not want or cannot say, no problem. Just interested to hear different 
use cases/modifications that folks want to do.

Oh, and my post was just to get you going quickly; I am sure we can debug your 
issue.

> On Apr 4, 2017, at 11:48 AM, Greg Foringer  wrote:
> 
> Hello,
> 
> I've spent about 16 hours trying to get the nrf52_boot and blehci image
> flashed to my board. I'm hoping someone has some insight about my problem.
> 
> I have installed all the SEGGER JLink tools and drivers (v6.14c) on both my
> OSX host and my Ubuntu 16.04 guest. When using JLinkExe, it can connect to
> the board over USB. I'm using the latest 1.0.0 version of newt. I've
> followed the instructions to build the nrf52_boot bootloader for my nRF52DK
> board (PCA10040 v1.1.0) as well as the blehci app.
> 
> 
> The targets build just fine and I get a boot.elf.bin and a blehci.img.
> 
> 
> newt load -v nrf52_boot fails whether I do it on Linux or OSX with the
> following message (I've added newlines to make it more readable):
> 
> 
> $ newt load -v nrf52_boot
> 
> Loading bootloader
> 
> Load command:
> /workspace/repos/apache-mynewt-core/hw/bsp/nrf52dk/nrf52dk_download.sh
> /workspace/repos/apache-mynewt-core/hw/bsp/nrf52dk
> /workspace/bin/targets/nrf52_boot/app/apps/boot/boot
> 
> 
> load - Error: Downloading
> /workspace/bin/targets/nrf52_boot/app/apps/boot/boot.elf.bin to 0x0
> 
> 
> GNU gdb (7.8-0ubuntu1+6) 7.8 Copyright (C) 2014 Free Software Foundation,
> Inc. License GPLv3+: GNU GPL version 3 or later <
> http://gnu.org/licenses/gpl.html> This is free software: you are free to
> change and redistribute it. There is NO WARRANTY, to the extent permitted
> by law. Type "show copying" and "show warranty" for details.
> 
> 
> This GDB was configured as "--host=x86_64-linux-gnu --target=arm-none-eabi".
> 
> 
> Type "show configuration" for configuration details.
> 
> For bug reporting instructions, please see: <
> http://www.gnu.org/software/gdb/bugs/>.
> 
> Find the GDB manual and other documentation resources online at: <
> http://www.gnu.org/software/gdb/documentation/>.
> 
> For help, type "help". Type "apropos word" to search for commands related
> to "word".
> 
> 
> SEGGER J-Link GDB Server V5.12c Command Line Version JLinkARM.dll V5.12c
> (DLL compiled Apr 21 2016 16:22:40)
> 
> -GDB Server start settings-
> 
> GDBInit file: none
> 
> GDB Server Listening port: 
> 
> SWO raw output listening port: 2332
> 
> Terminal I/O port: 2333
> 
> Accept remote connection: yes
> 
> Generate logfile: off
> 
> Verify download: off
> 
> Init regs on
> 
> start: off
> 
> Silent mode: off
> 
> Single run mode: on
> 
> 
> Target connection timeout: 0 ms
> 
> 
> --J-Link related settings--
> 
> J-Link Host interface: USB
> 
> J-Link script: none
> 
> J-Link settings file: none
> 
> 
> --Target related settings--
> 
> Target device: nRF52
> 
> Target interface: SWD
> 
> Target interface speed: 4000kHz
> 
> Target endian: little
> 
> 
> Connecting to J-Link...
> 
> Connecting to J-Link failed.
> 
> Connected correctly?
> 
> GDBServer will be closed... Shutting down...
> 
> Could not connect to J-Link.
> 
> Please check power, connection and settings..
> 
> gdb_cmds:2: Error in sourced command file: localhost:: Connection timed
> out.
> 
> (gdb) quit
> 
> 
> I have tried erasing the device memory using JLinkExe and trying again,
> I've tried different operating systems (except Windows because I only have
> a Macbook with a Windows Guest VM and there doesn't appear to be a native
> Windows newt tool). I've tried loading the images myself using the SEGGER
> JLink tools but they all expect .hex files and can't read the build output
> from the newt tool. I'd be happy with any solution that can simply get
> these images onto my device, because I'd really like to customize the
> nimBLE controller firmware for a project I'm working on this week.



Re: Problem loading images to nRF52DK board

2017-04-04 Thread will sanfilippo
Hello Greg:

Not sure why you are having problems using the newt tool to download, but 
JLinkExe can load binary files. There is a command called “loadbin” that will 
allow you to load a binary file at a given location. Load the bootloader at 0 
and the img file that you created at address 0x8000.

The loadbin syntax is: loadbin <filename>, <addr>

Just to be sure, first erase it again (using the erase command).
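
A session along these lines should work; the file paths are illustrative:

    $ JLinkExe -device nRF52 -if SWD -speed 4000
    J-Link> erase
    J-Link> loadbin bin/targets/nrf52_boot/app/apps/boot/boot.elf.bin, 0x0
    J-Link> loadbin blehci.img, 0x8000
    J-Link> r
    J-Link> g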


> On Apr 4, 2017, at 11:48 AM, Greg Foringer  wrote:
> 
> Hello,
> 
> I've spent about 16 hours trying to get the nrf52_boot and blehci image
> flashed to my board. I'm hoping someone has some insight about my problem.
> 
> I have installed all the SEGGER JLink tools and drivers (v6.14c) on both my
> OSX host and my Ubuntu 16.04 guest. When using JLinkExe, it can connect to
> the board over USB. I'm using the latest 1.0.0 version of newt. I've
> followed the instructions to build the nrf52_boot bootloader for my nRF52DK
> board (PCA10040 v1.1.0) as well as the blehci app.
> 
> 
> The targets build just fine and I get a boot.elf.bin and a blehci.img.
> 
> 
> newt load -v nrf52_boot fails whether I do it on Linux or OSX with the
> following message (I've added newlines to make it more readable):
> 
> 
> $ newt load -v nrf52_boot
> 
> Loading bootloader
> 
> Load command:
> /workspace/repos/apache-mynewt-core/hw/bsp/nrf52dk/nrf52dk_download.sh
> /workspace/repos/apache-mynewt-core/hw/bsp/nrf52dk
> /workspace/bin/targets/nrf52_boot/app/apps/boot/boot
> 
> 
> load - Error: Downloading
> /workspace/bin/targets/nrf52_boot/app/apps/boot/boot.elf.bin to 0x0
> 
> 
> GNU gdb (7.8-0ubuntu1+6) 7.8 Copyright (C) 2014 Free Software Foundation,
> Inc. License GPLv3+: GNU GPL version 3 or later <
> http://gnu.org/licenses/gpl.html> This is free software: you are free to
> change and redistribute it. There is NO WARRANTY, to the extent permitted
> by law. Type "show copying" and "show warranty" for details.
> 
> 
> This GDB was configured as "--host=x86_64-linux-gnu --target=arm-none-eabi".
> 
> 
> Type "show configuration" for configuration details.
> 
> For bug reporting instructions, please see: <
> http://www.gnu.org/software/gdb/bugs/>.
> 
> Find the GDB manual and other documentation resources online at: <
> http://www.gnu.org/software/gdb/documentation/>.
> 
> For help, type "help". Type "apropos word" to search for commands related
> to "word".
> 
> 
> SEGGER J-Link GDB Server V5.12c Command Line Version JLinkARM.dll V5.12c
> (DLL compiled Apr 21 2016 16:22:40)
> 
> -GDB Server start settings-
> 
> GDBInit file: none
> 
> GDB Server Listening port: 
> 
> SWO raw output listening port: 2332
> 
> Terminal I/O port: 2333
> 
> Accept remote connection: yes
> 
> Generate logfile: off
> 
> Verify download: off
> 
> Init regs on
> 
> start: off
> 
> Silent mode: off
> 
> Single run mode: on
> 
> 
> Target connection timeout: 0 ms
> 
> 
> --J-Link related settings--
> 
> J-Link Host interface: USB
> 
> J-Link script: none
> 
> J-Link settings file: none
> 
> 
> --Target related settings--
> 
> Target device: nRF52
> 
> Target interface: SWD
> 
> Target interface speed: 4000kHz
> 
> Target endian: little
> 
> 
> Connecting to J-Link...
> 
> Connecting to J-Link failed.
> 
> Connected correctly?
> 
> GDBServer will be closed... Shutting down...
> 
> Could not connect to J-Link.
> 
> Please check power, connection and settings..
> 
> gdb_cmds:2: Error in sourced command file: localhost:: Connection timed
> out.
> 
> (gdb) quit
> 
> 
> I have tried erasing the device memory using JLinkExe and trying again,
> I've tried different operating systems (except Windows because I only have
> a Macbook with a Windows Guest VM and there doesn't appear to be a native
> Windows newt tool). I've tried loading the images myself using the SEGGER
> JLink tools but they all expect .hex files and can't read the build output
> from the newt tool. I'd be happy with any solution that can simply get
> these images onto my device, because I'd really like to customize the
> nimBLE controller firmware for a project I'm working on this week.



Re: Adding platform specific API to get public and/or random static address

2017-04-04 Thread will sanfilippo
Marcel:

Thanks for the clarification on the public address and that for BLE the two 
LSbits of the MSbyte do not apply. I do understand the trickiness of changing 
the public address but it is certainly helpful for debugging/testing.


> On Apr 3, 2017, at 9:38 AM, Marcel Holtmann  wrote:
> 
> Hi Will,
> 
>>>> There has been some discussion of this already on the list but nothing has 
>>>> been done yet so I wanted to resurrect the conversation with some 
>>>> proposals.
>>>> 
>>>> What we are trying to do here is the following:
>>>> 1) Have the controller get a public device address without it being 
>>>> hardcoded.
>>>> 2) Have the ability to read a chip-specific random static address if the 
>>>> chip has one programmed.
>>>> 
>>>> The proposal is the following:
>>>> 
>>>> 1) Add two new API. These will be platform specific and will be placed in 
>>>> the ble_hw.c file:
>>>> 
>>>> /* These API will return -1 if no address available. If available, will 
>>>> return 0 and will place the address in *addr */
>>>> int ble_hw_get_public_addr(ble_addr_t *addr)
>>>> int ble_hw_get_static_addr(ble_addr_t *addr)
>>>> 
>>>> 2) Add a syscfg variable to the controller which will allow the developer 
>>>> to set a public address of their choosing. By default this will be all 0 
>>>> (no public address). More on this below.
>>>> 
>>>> 3) The ble_hw_get_public_addr function will do the following:
>>>> * If the user has overridden the default public address (the syscfg 
>>>> variable) with a non-zero public address, that address will be returned by 
>>>> this function.
>>>> * If the default public address in the syscfg is all zero, the code will 
>>>> read FICR and check if the device address type in the FICR is public. If 
>>>> so, it means the nordic chip was factory programmed with a public address 
>>>> and this will be used.
>>>> * If both of the above checks fail, the code will read UICR[0] and UICR[1] 
>>>> to see if a public address has been programmed into the UICR. We are doing 
>>>> this to make it easy for folks to program their development kits with 
>>>> public addresses so they do not have to hardcode them. UICR[0] will 
>>>> contain the least significant 4 bytes of the device address. UICR[1] will 
>>>> contain the most significant two bytes. The upper 16 bits of this word 
>>>> should be set to 0. The API will presume that this is a valid public 
>>>> device address as long as the upper 16-bits of this 32-bit word are all 
>>>> zero. We will also check to see if this is a valid public address (see 
>>>> below). If both UICR[0] and UICR[1] are zero, this will not be considered 
>>>> a valid public address.
>>>> 
>>>> A note on valid public addresses. Not sure if I got this right, but I 
>>>> think the two least significant bits of the most significant byte of the 
>>>> public address should be zero. I think I will check this to make sure it 
>>>> is valid.
>>> 
>>> you got that wrong. The public address is a BD_ADDR (6 octets) and the 
>>> random address is that (also 6 octets). If you just get 6 octets, you can 
>>> not differentiate if it is public or random. That is why I keep saying that 
>>> LE addresses are actually 49 bits instead. There is “out-of-band” information 
>>> telling if it's public or random.
>>> 
>> The above comment is not based on the BLE specification, it is based on the 
>> IEEE standard which says that the two LSbits of the MSbyte are the 
>> universally/locally administered address bit and the individual/group address 
>> bit. I was presuming that both of these bits need to be zero but was not 
>> sure. I was only referring to public addresses here.
> 
> the BD_ADDR usage and its relation to IEEE is defined in the standard. The 
> bits and its assignment are irrelevant since it is treated as 6 octets 
> (defined as 3 parts). But that is for BR/EDR only. For LE it is just a 6 
> octet value marked as public address.
> 
>>> As far as I know the FICR is just a 6 octet random value. It is neither a 
>>> public address nor a static random address (you are after the static after 
>>> all since NRPAs and RPAs are different as well). So you even need to mask 
>>> the upper 2 bits correctly to make FICR a static address.
>> 
>> Marcel: you are incorrect, I believe. I should have been more specific. 
>> There is a DEVICEADDRTYPE register in the FICR which says whether the 
>> address is public or random. The code was going to read that register to 
>> determine if the address in the FICR was public or random. I do not expect 
>> that register being set to public but if it is, the address in the next two 
>> FICR registers should be the public address.
> 
> Good to know. If that is set to public address, then HCI_Read_BD_ADDR should 
> return that value.
> 
>>> 
>>>> 4) The ble_hw_get_static_addr() will do the following:
>>>> * Read the FICR to see if there is a random address in the FICR. This is 
>>>> the default programming of the nrf51dk and nrf52dk. Unless you have them 
>>>> program a 

Re: Adding platform specific API to get public and/or random static address

2017-04-03 Thread will sanfilippo
Setting the public address from the host came from two things: bletiny and also 
my recollection of one of the vendor specific HCI commands that was discussed 
at Linaro Connect in Budapest this past March. I am trying to find the document 
we discussed in Budapest to confirm that this was one of the commands.

Of course I realize the possible issues with setting a public device address 
“on the fly”. However, it should be quite possible to do this and not all that 
tricky, assuming the controller is not doing anything at the time.

If no one thinks that having this capability is useful I am fine not including 
it.

> On Apr 1, 2017, at 8:03 AM, Christopher Collins  wrote:
> 
> On Sat, Apr 01, 2017 at 09:53:03AM +0200, Marcel Holtmann wrote:
>>> Some things about writing apps and the BLE spec:
>>> 1) I realize that it is the host that tells the controller the
>>> random address to use. The controller will NOT automatically use the
>>> random address from ble_hw_get_static_addr(). That API will be added
>>> as a convenience so that the app developer does not have to generate
>>> their own. If the app wants to use this random address it needs to
>>> tell the controller to use it using LE_Set_Random_Addr.
>>> 
>>> 2) Regarding the public device address. We have an app called
>>> bletiny that can set the public device address I think. If the above
>>> gets approved we are going to remove g_dev_addr from the code; it
>>> will be kept in the controller and not available globally. The
>>> Zephyr project is considering adding vendor specific HCI commands,
>>> one of which is “set public device address”. I think if we go with
>>> the above approach we should add this vendor specific command and
>>> that should be the way an app can set the public device address if
>>> it so chooses.
>> 
>> The public BD_ADDR needs to be inside the controller before the call
>> of HCI_Reset. Otherwise all sort of assumptions on HCI break. Until
>> then the HCI_Read_BD_ADDR has to return 00:00:00:00:00:00 to indicate
>> the controller has no public address. Switching the BD_ADDR mid-flight
>> with a backdoor is going to fail in the most funny ways left and
>> right.
> 
> The bletiny app is a sandbox test tool, and it does some things that a
> more robust application shouldn't do.  One such underhanded thing it
> does is change its own public address whenever the user requests it.  It
> does this by simply overwriting the global public address byte array and
> hoping for the best.  In practice, I've never seen anything funny
> happen, either to the left or the right :), but this is certainly not
> guaranteed to work.  Also, this won't work at all unless bletiny is
> running on a combined host-controller, since that is the only occasion
> in which the host has access to the public address global variable.
> 
> I don't want to speak for Will, but my guess is he just looked at what
> existing code accesses the public address global to determine the scope
> of the proposed API.  My understanding from reading his email is that
> bletiny is doing something sketchy, and that the proposed API won't
> support this particular use case.
> 
> Chris



Re: Adding platform specific API to get public and/or random static address

2017-04-01 Thread will sanfilippo
Comments inline:

> On Apr 1, 2017, at 12:53 AM, Marcel Holtmann  wrote:
> 
> Hi Will,
> 
>> There has been some discussion of this already on the list but nothing has 
>> been done yet so I wanted to resurrect the conversation with some proposals.
>> 
>> What we are trying to do here is the following:
>> 1) Have the controller get a public device address without it being 
>> hardcoded.
>> 2) Have the ability to read a chip-specific random static address if the 
>> chip has one programmed.
>> 
>> The proposal is the following:
>> 
>> 1) Add two new API. These will be platform specific and will be placed in 
>> the ble_hw.c file:
>> 
>> /* These API will return -1 if no address available. If available, will 
>> return 0 and will place the address in *addr */
>> int ble_hw_get_public_addr(ble_addr_t *addr)
>> int ble_hw_get_static_addr(ble_addr_t *addr)
>> 
>> 2) Add a syscfg variable to the controller which will allow the developer to 
>> set a public address of their choosing. By default this will be all 0 (no 
>> public address). More on this below.
>> 
>> 3) The ble_hw_get_public_addr function will do the following:
>> * If the user has overridden the default public address (the syscfg 
>> variable) with a non-zero public address, that address will be returned by 
>> this function.
>> * If the default public address in the syscfg is all zero, the code will 
>> read FICR and check if the device address type in the FICR is public. If so, 
>> it means the nordic chip was factory programmed with a public address and 
>> this will be used.
>> * If both of the above checks fail, the code will read UICR[0] and UICR[1] 
>> to see if a public address has been programmed into the UICR. We are doing 
>> this to make it easy for folks to program their development kits with public 
>> addresses so they do not have to hardcode them. UICR[0] will contain the 
>> least significant 4 bytes of the device address. UICR[1] will contain the 
>> most significant two bytes. The upper 16 bits of this word should be set to 
>> 0. The API will presume that this is a valid public device address as long 
>> as the upper 16-bits of this 32-bit word are all zero. We will also check to 
>> see if this is a valid public address (see below). If both UICR[0] and 
>> UICR[1] are zero, this will not be considered a valid public address.
>> 
>> A note on valid public addresses. Not sure if I got this right, but I think 
>> the two least significant bits of the most significant byte of the public 
>> address should be zero. I think I will check this to make sure it is valid.
> 
> you got that wrong. The public address is a BD_ADDR (6 octets) and the random 
> address is that (also 6 octets). If you just get 6 octets, you can not 
> differentiate if it is public or random. That is why I keep that LE addresses 
> are actually 49 bits instead. There is "out-of-band” telling if its public or 
> random.
> 
The above comment is not based on the BLE specification, it is based on the 
IEEE standard which says that the two LSbit’s of the MSbyte are the 
universally/locally administered addres bit and the indiviual/group address 
bit. I was presuming that both of these bits need to be zero but was not sure. 
I was only referring to public addresses here.

> As far I know the FICR is just a 6 octet random value. It is neither a public 
> address or a static random address (you are after the static after all since 
> NRPAs and RPAs are different as well). So you even need to mask the upper 2 
> bits correctly to make FICR a static address.

Marcel: you are incorrect I Ibelieve. I should have been more specific. There 
is a DEVICEADDRTYPE register in the FICR which says whether the address is 
public or random. The code was going to read that register to determine if the 
address in the FICR was public or random. I do not expect that register being 
set to public but if it is, the adress in the next two FICR registers should be 
the public address.
> 
>> 4) The ble_hw_get_static_addr() will do the following:
>> * Read the FICR to see if there is a random address in the FICR. This is the 
>> default programming of the nrf51dk and nrf52dk. Unless you have them program 
>> a public device address in the FICR, it will have a random address.
>> * If the chip does not have a random address the API returns -1.
> 
> See my comment above, the FICR is just 6 octets random data. It is surely not 
> a public address. It can never be since that requires to follow IEEE 
> assignment rules. And it is no static address either. It needs to be masked 
> correctly first. It is just a persistence 6 octets of randomness from 
> manufacturing.
I know it is not a public address! Well, it is not a public address if the 
DEVICEADDRTYPE says it is not. I was merely trying to point out the possibility 
that there is not a random address here. And yes, I have read the nordic 
devzone and I know I need to set the upper two bits accordingly.  I skipped 
that 

Re: Adding platform specific API to get public and/or random static address

2017-03-31 Thread will sanfilippo
Yep, except for one typo: ble_hw_get_public_addr() instead of 
ble_hs_get_public_addr().

I should have mentioned that, assuming we agree to this, the controller code 
will call that API and the host should not call it. I mentioned this API in 
case someone wants to modify how it works for them.

Thanks for clarifying that! (and reading that long email)

Will

> On Mar 31, 2017, at 4:28 PM, Christopher Collins <ch...@runtime.io> wrote:
> 
> On Fri, Mar 31, 2017 at 03:49:05PM -0700, will sanfilippo wrote:
>> Hello:
>> 
>> There has been some discussion of this already on the list but nothing has 
>> been done yet so I wanted to resurrect the conversation with some proposals.
>> 
>> What we are trying to do here is the following:
>> 1) Have the controller get a public device address without it being 
>> hardcoded.
>> 2) Have the ability to read a chip-specific random static address if the 
>> chip has one programmed.
>> 
>> The proposal is the following:
> 
>> 1) Add two new API. These will be platform specific and will be placed
>> in the ble_hw.c file:
> 
>> /* These API will return -1 if no address available. If available, will
>> return 0
>> and will place the address in *addr */
>> int ble_hw_get_public_addr(ble_addr_t *addr)
>> int ble_hw_get_static_addr(ble_addr_t *addr)
> 
> [...]
> 
> That sounds good to me.  This covers all the use cases I can think of.
> 
> As you mentioned, Bluetooth is somewhat asymmetric regarding public and
> random addresses.  The controller is in charge of the public address
> while the host is in charge of the static random address.
> 
> With the API you proposed, I think the workflow would look something
> like this:
> 
> 1. At init time, controller calls ble_hs_get_public_addr().  If this
> call yields an address, the controller configures itself to use it.
> 
> 2. If app wants a static random address, it calls
> ble_hw_get_static_addr().  If a random address is available, the
> application configures the host to use it with a call to
> ble_hs_id_set_rnd().
> 
> Does that sound about right?
> 
> In thinking about this, I realized the host interface is missing
> something.  There is currently no way for an application to ask the host
> for its public address.  An application may want to know this to
> determine if it should configure a random address (or just for
> reporting purposes).  The host does know its own public address--it gets
> it from the controller at startup--it just doesn't expose it to the
> application.
> 
> Chris



Adding platform specific API to get public and/or random static address

2017-03-31 Thread will sanfilippo
Hello:

There has been some discussion of this already on the list but nothing has been 
done yet so I wanted to resurrect the conversation with some proposals.

What we are trying to do here is the following:
1) Have the controller get a public device address without it being hardcoded.
2) Have the ability to read a chip-specific random static address if the chip 
has one programmed.

The proposal is the following:

1) Add two new API. These will be platform specific and will be placed in the 
ble_hw.c file:

/* These API will return -1 if no address available. If available, will return 
0 and will place the address in *addr */
int ble_hw_get_public_addr(ble_addr_t *addr)
int ble_hw_get_static_addr(ble_addr_t *addr)

2) Add a syscfg variable to the controller which will allow the developer to 
set a public address of their choosing. By default this will be all 0 (no 
public address). More on this below.

3) The ble_hw_get_public_addr function will do the following:
* If the user has overridden the default public address (the syscfg variable) 
with a non-zero public address, that address will be returned by this function.
* If the default public address in the syscfg is all zero, the code will read 
FICR and check if the device address type in the FICR is public. If so, it 
means the nordic chip was factory programmed with a public address and this 
will be used.
* If both of the above checks fail, the code will read UICR[0] and UICR[1] to 
see if a public address has been programmed into the UICR. We are doing this to 
make it easy for folks to program their development kits with public addresses 
so they do not have to hardcode them. UICR[0] will contain the least 
significant 4 bytes of the device address. UICR[1] will contain the most 
significant two bytes. The upper 16 bits of this word should be set to 0. The 
API will presume that this is a valid public device address as long as the 
upper 16-bits of this 32-bit word are all zero. We will also check to see if 
this is a valid public address (see below). If both UICR[0] and UICR[1] are 
zero, this will not be considered a valid public address.

A note on valid public addresses. Not sure if I got this right, but I think the 
two least significant bits of the most significant byte of the public address 
should be zero. I think I will check this to make sure it is valid.

4) The ble_hw_get_static_addr() will do the following:
* Read the FICR to see if there is a random address in the FICR. This is the 
default programming of the nrf51dk and nrf52dk. Unless you have them program a 
public device address in the FICR, it will have a random address.
* If the chip does not have a random address the API returns -1.

Some things about writing apps and the BLE spec:
1) I realize that it is the host that tells the controller the random address 
to use. The controller will NOT automatically use the random address from 
ble_hw_get_static_addr(). That API will be added as a convenience so that the 
app developer does not have to generate their own. If the app wants to use this 
random address it needs to tell the controller to use it using 
LE_Set_Random_Addr.

2) Regarding the public device address. We have an app called bletiny that can 
set the public device address I think. If the above gets approved we are going 
to remove g_dev_addr from the code; it will be kept in the controller and not 
available globally. The Zephyr project is considering adding vendor specific 
HCI commands, one of which is “set public device address”. I think if we go 
with the above approach we should add this vendor specific command and that 
should be the way an app can set the public device address if it so chooses.

Comments/suggestions?

Commit of low power timer support (for nordic platforms)

2017-03-30 Thread will sanfilippo
Hello:

Low power timer support for the nordic platforms (both nrf51 and nrf52) was 
just committed to develop. Here is a basic explanation of the changes and how 
to turn it on/off. Note that while a reasonable amount of testing was done on 
this code a bit more needs to be done to verify that the timing is correct. 
Next step is to do some current consumption profiling for 
advertising/scanning/connections so that we can fine tune the code.

The basics:

The nordic chips support two basic timers: RTC timers and non-RTC timers (which 
I will simply to refer to as timers hereafter). The RTC timers are lower power 
than timers so it desirable to use them instead of the other timers in the 
chip. The initial controller version used a timer set to count at 1MHz. This 
was done because it made the code quite simple to convert bluetooth timing to 
timer ticks (microseconds to ticks and ticks to microseconds was a no-op). The 
RTC timers only count at 32.768 kHz (well, I guess you could put a different 
crystal but typically 32.768 kHz crystals are used). Another change associated 
with the low power timer code is to turn on the HFXO (high-frequency crystal 
oscillator) only when needed. Actually, this gives you the largest current 
consumption gain as the difference between a RTC timer and timer is about 10uA 
and the HFXO is about 250 uA (if I am recalling the chip specification 
correctly).

A note about the usecs to ticks conversion routine:

The routine I decided to use is one where there is no need to do a divide. This 
is not perfectly accurate however and can be off by 1 full tick for certain 
values as the code does not add in the residual to correct for this. The 
routine takes about 10usecs on the nrf51 and if you add the residual it takes 
16 usecs. This routine calculates a “floor” value, so if you care about the 
remainder you can either modify the routine to get that remainder or you can 
convert back to microseconds and subtract it from the microsecond value you 
converted into ticks. The controller does this for connection event timing as 
it needs to be more accurate than one 32.768 tick. After adding this code I 
have been debating having two different routines (one that is faster and one 
that is more exact) but for now there is only the one routine. And btw, the 
routine to do it exactly takes a long time on the nrf51 and based on how the 
controller was written there was not enough time to use the more exact routine 
(the controller needs to do things within the 150 usec IFS time).

How to enable/disable:

We decided to add the code in such a manner that the old way of doing things is 
still in the code. This allows the least amount of disruption while also 
allowing folks to give this code a spin. To enable the code you will need to do 
set the following syscfg variables. Note that you can do this in your target or 
BSP:

1) Set the crystal frequency to 32768.
OS_CPUTIME_FREQ: 32768

2) Set the crystal setting time. This is going to be crystal/board dependent 
and is something you will have to characterize for your product. I need to do a 
bit more research to get the correct number to use for the nordic development 
kits. I chose 1500 usecs (1.5 msecs) as the default. This number cannot be zero!
BLE_XTAL_SETTLE_TIME: 1500

3) Change the timer used for CPUTIME, disable the old timer and enable the RTC 
timer

NRF52:
OS_CPUTIME_TIMER_NUM: 5
TIMER_0: 0
TIMER_5: 1

NRF51:
OS_CPUTIME_TIMER_NUM: 3
TIMER_0: 0
TIMER_3: 1

Here are some sample target excerpts:

Sample nrf51 target:
syscfg.vals:
BLE_XTAL_SETTLE_TIME: 1500
OS_CPUTIME_FREQ: 32768
OS_CPUTIME_TIMER_NUM: 3
TIMER_0: 0
TIMER_3: 1

Sample nrf52 target:
 syscfg.vals:
   BLE_XTAL_SETTLE_TIME: 1500
   OS_CPUTIME_FREQ: 32768
   OS_CPUTIME_TIMER_NUM: 5
   TIMER_0: 0
   TIMER_5: 1

Further improvements:
1) The nordic chips allow for a faster RXEN/TXEN time. This was not added to 
this version of code but could be added in the future to improve power 
consumption.
2) The old code which uses 1 usec timing could also include the code to turn 
on/off the HFXO. This was not added as we wanted to keep the old code 
unchanged. The basic idea here is this though: if you want more accurate timing 
and you have a device that is not battery operated, you are better off using 
the 1MHz timer. I think this makes for a smaller code footprint as well 
although I have not characterized this.
3) Determine the actual settling time on the development boards.

Comments/suggestions are always welcome.




Re: Query on application: Bare metal or using Mynewt RTOS

2017-03-30 Thread will sanfilippo
Amit:

I cannot be sure but I think I understand the problem you are having. The 
nordic delay routines do not use the OS and thus you are sitting in that loop 
constantly. The watchdog is enabled and you are probably watch dogging since 
the task that is supposed to tickle the watchdog is not able to run. A quick 
way to verify this would be to use the blinky code that uses os_time_delay().

I think others have commented on using the nordic SDK to do things. We did not 
provide a HAL for the PPI as this is a nordic specific thing (well, sort of) so 
you will have to do that yourself or use the SDK code.


> On Mar 30, 2017, at 3:17 PM, amit mehta  wrote:
> 
> On Thu, Mar 30, 2017 at 6:08 PM, marko kiiskila  wrote:
>> Hi Amit,
>> 
>> if you want to build without OS, take a look at 
>> apps/boot/syscfg.yml:syscfg.vals
> 
> Thank you Marko for the pointer.
> 
> On a related note, I noticed today that my simple blinky example
> that uses Nordic's peripheral library, along with the mynewt RTOS
> seem to be crashing. Logs below:
> 
> [amit@discworld mynewt-nrf52-prph]$ newt debug my_blinky
> [/home/amit/Documents/devel/ble/distrib/mynewt-nrf52-prph/repos/apache-mynewt-core/hw/bsp/nrf52dk/nrf52dk_debug.sh
> /home/amit/Documents/devel/ble/distrib/mynewt-nrf52-prph/repos/apache-mynewt-core/hw/bsp/nrf52dk
> /home/amit/Documents/devel/ble/distrib/mynewt-nrf52-prph/bin/targets/my_blinky/app/apps/blinky/blinky]
> Debugging 
> /home/amit/Documents/devel/ble/distrib/mynewt-nrf52-prph/bin/targets/my_blinky/app/apps/blinky/blinky.elf
> GNU gdb (GNU Tools for ARM Embedded Processors) 7.8.0.20150604-cvs
> ...
> ...
> Reading symbols from
> /home/amit/Documents/devel/ble/distrib/mynewt-nrf52-prph/bin/targets/my_blinky/app/apps/blinky/blinky.elf...done.
> 0x856a in nrf_delay_us (number_of_us=999) at
> repos/mynewt_nordic/hw/mcu/nordic_sdk/src/ext/nRF5_SDK_11.0.0_89a8197/components/drivers_nrf/delay/nrf_delay.h:170
> 170__ASM volatile (
> (gdb) b apps/blinky/src/main.c:57
> Breakpoint 1 at 0x8436: file apps/blinky/src/main.c, line 57.
> (gdb) c
> Continuing.
> 
> Breakpoint 1, main (argc=, argv=) at
> apps/blinky/src/main.c:59 <-- system reset probably
> 59++g_task1_loops;
> (gdb) list
> 54
> 55g_led_pin = LED_BLINK_PIN;
> 56hal_gpio_init_out(g_led_pin, 1);
> 57
> 58while (1) {
> 59++g_task1_loops;
> 60
> 61/* Wait one second.
> 62 *
> 63 * XXX: Invoke Nordic's defined method instead.
> (gdb) c
> Continuing.
> 
> Breakpoint 1, main (argc=, argv=) at
> apps/blinky/src/main.c:59 <-- once again a reset seem to have occurred
> 59++g_task1_loops;
> 
> 
> With this simple blinky app, I was trying to see If I can use the
> Nordic's peripheral library along with Mynewt core/Mynewt RTOS.
> I mentioned [1], this somedays back in the dev-mailing list.
> 
> Questions:
> 1: How can I debug this further ? I've used gdb, but not much in past.
> 2: Is such a approach (using Nordic's peripheral library) with mynewt
> core is indeed okay ? For example, I was interested in trying out
> Nordic's PPI examples, but couldn't find PPI related APIs under,
> Mynewt's core repo and hence, thought of using this approach.
> 
> Lastly, If someone wants to try this setup, then please follow this [2]
> readme; the code is hosted on github.
> 
> [1] 
> http://mail-archives.apache.org/mod_mbox/incubator-mynewt-dev/201703.mbox/raw/%3cCAOUxTKOmit07kodpSt-=iozu5xm_h44xyvwsmakrtnazz2p...@mail.gmail.com%3e/
> [2] https://github.com/bartledan/mynewt-nrf52-prph/blob/master/README.md
> 
> Thanks,
> Amit
> 
> -- 
> Sent from Bahamas, while drinking chi-chi and piña colada.



Re: Initial commit of the "bsn" branch (body sensor network) branch

2017-03-21 Thread will sanfilippo
I think the basic application requirement this was designed for was an 
application requiring that the data from N sensors was relatively in sync with 
each other. One application could be tracking movement where you want to see 
the relative movement of the sensors in as close to real time as possible. I am 
sure there are numerous applications that would benefit from this form of 
scheduling; this is just one example. Another requirement would be to discard 
old/stale data: if data cannot be delivered within a certain time that data 
should be dropped in favor of “new” data.

A note: this implementation does not add any form of “QoS”; it is meant as a 
proof of concept to show that you can connect N peripherals to a central and 
guarantee them fixed time slotting using BLE. QoS will be added so that data 
can be discarded if it gets too stale; this is coming in a future revision to 
this code. As was mentioned, it is very preliminary.

> On Mar 21, 2017, at 4:50 PM, Sterling Hughes 
> <sterling.hughes.pub...@gmail.com> wrote:
> 
> Hey Will -
> 
> This sounds pretty cool.  I’m interested: what type of sensor data did you 
> need to have this hard scheduling in Bluetooth for/what were the application 
> requirements you were engineering for?
> 
> Sterling
> 
> On 21 Mar 2017, at 16:21, will sanfilippo wrote:
> 
>> Hello:
>> 
>> Disclaimers:
>> 1) Long email follows.
>> 2) This is a preliminary version of this code. There are hard-coded things 
>> and things I had to hack, mainly because of my ignorance of some areas in 
>> the code. I pushed it so some folks (if they want) can take a look and mess 
>> around before things get cleaned up.
>> 
>> For those interested, a branch was committed today named the “bsnbranch”. 
>> For lack of a better term, I called this the “body sensor network” branch. 
>> This could be quite the misnomer as there is no actual sensor code with this 
>> commit, but I had to come up with a name :-)
>> 
>> The basic idea behind this branch is the following:
>> 
>> * A central wants to connect to N known peripherals.
>> * Each peripheral wants to connect to a known central.
>> * Peripherals generate “application data” at some fixed size and rate (for 
>> the most part). This rate is expected to be pretty fast.
>> * Peripherals and centrals should do their best to maintain these 
>> connections and if a connection is dropped, to re-connect.
>> * The central should allocate fixed time slots to the peripherals and 
>> guarantee those fixed time slots are available.
>> 
>> As with some of the apps in the repo, the initial commit is fairly 
>> hard-coded in some ways. If you look at the source code in main.c in these 
>> apps there are arrays which currently hold some hard-coded addresses: the 
>> public address of the peripheral, the public address of the central, and the 
>> addresses that the central wants to connect to. The application example 
>> shows a central that wants to connect to 5 peripherals. If you want to use 
>> the app without mods in the repo, you need to change BLE_MAX_CONNECTIONS to 
>> 5 when you build your central (in net/nimble/syscfg.yml).
>> 
>> The central application adds the devices in the bsncent_peer_addrs array to 
>> the whitelist and constantly intiates if it is not connected to all of these 
>> devices. The peripheral application does high-duty cycle directed 
>> advertising (constantly!) until it connects to the central. If a connection 
>> is dropped the central and/or peripheral start initiating/advertising until 
>> the connection is re-established. NOTE: there is no delay between the 
>> high-duty cycle advertising attempts currently so beware of that if you are 
>> running your peripheral on a battery!
>> 
>> The central currently uses a hard-coded connection interval of 13 (16.25 
>> msecs). More on this later. The peripheral attempts to send approx an 
>> 80-byte packet at a rate close to this connection interval. That timing is 
>> based on os ticks so it is not perfect, so if folks want more accurate 
>> timing something else would need to be done.
>> 
>> The central also display some basic performance numbers on the console at a 
>> 10-second interval: # of connections, total packets received, total bytes 
>> received, and the pkts/sec and bytes/sec over the last 10 second interval.
>> 
>> While I was testing this setup (5 peripherals, one central) I ran into some 
>> resource issues. I cannot claim to know the host code all that well, but 
>> here are the items that I modified to get this to work. Some of these may 
>> not be necessary since I did not test 

Initial commit of the "bsn" branch (body sensor network) branch

2017-03-21 Thread will sanfilippo
Hello:

Disclaimers:
1) Long email follows.
2) This is a preliminary version of this code. There are hard-coded things and 
things I had to hack, mainly because of my ignorance of some areas in the code. 
I pushed it so some folks (if they want) can take a look and mess around before 
things get cleaned up.

For those interested, a branch was committed today named the “bsnbranch”. For 
lack of a better term, I called this the “body sensor network” branch. This 
could be quite the misnomer as there is no actual sensor code with this commit, 
but I had to come up with a name :-)

The basic idea behind this branch is the following:

* A central wants to connect to N known peripherals.
* Each peripheral wants to connect to a known central.
* Peripherals generate “application data” at some fixed size and rate (for the 
most part). This rate is expected to be pretty fast.
* Peripherals and centrals should do their best to maintain these connections 
and if a connection is dropped, to re-connect.
* The central should allocate fixed time slots to the peripherals and guarantee 
those fixed time slots are available.

As with some of the apps in the repo, the initial commit is fairly hard-coded 
in some ways. If you look at the source code in main.c in these apps there are 
arrays which currently hold some hard-coded addresses: the public address of 
the peripheral, the public address of the central, and the addresses that the 
central wants to connect to. The application example shows a central that wants 
to connect to 5 peripherals. If you want to use the app without mods in the 
repo, you need to change BLE_MAX_CONNECTIONS to 5 when you build your central 
(in net/nimble/syscfg.yml).

The central application adds the devices in the bsncent_peer_addrs array to the 
whitelist and constantly intiates if it is not connected to all of these 
devices. The peripheral application does high-duty cycle directed advertising 
(constantly!) until it connects to the central. If a connection is dropped the 
central and/or peripheral start initiating/advertising until the connection is 
re-established. NOTE: there is no delay between the high-duty cycle advertising 
attempts currently so beware of that if you are running your peripheral on a 
battery!

The central currently uses a hard-coded connection interval of 13 (16.25 
msecs). More on this later. The peripheral attempts to send approx an 80-byte 
packet at a rate close to this connection interval. That timing is based on os 
ticks so it is not perfect, so if folks want more accurate timing something 
else would need to be done.

The central also display some basic performance numbers on the console at a 
10-second interval: # of connections, total packets received, total bytes 
received, and the pkts/sec and bytes/sec over the last 10 second interval.

While I was testing this setup (5 peripherals, one central) I ran into some 
resource issues. I cannot claim to know the host code all that well, but here 
are the items that I modified to get this to work. Some of these may not be 
necessary since I did not test them in all their various combinations and some 
may have no impact at all.

NOTE: these changes are not in the branch btw. They need to be modified by 
either changing a syscfg value or hacking the code. I realize hacking the code 
is quite undesirable but it was not obvious how to do this with syscfg and my 
lack of understanding of the code prevented me from doing something more 
elegant. The items in CAPS are syscfg variables. Changing them in your target 
is a good way to change there.

1) Mbufss at the central. I modified the number of mbufs and their size. I used 
24 mbufs with a size of 128. Not sure how many you actually need, but did not 
run out of mbufs with this setting.
MSYS_1_BLOCK_COUNT: 24
MSYS_1_BLOCK_SIZE: 128
2) BLE_GATT_MAX_PROCS: I increased this to 8 for the central.
3) BLE_MAX_CONNECTIONS: I made this 5 for the central. NOTE: 32 is the maximum 
# of connections supported here. If you use more, the code does not complain 
and the behavior will be unpredicatable.
4) I hacked the code to add more the ble_att_svr_entry_pool. I multiplied the 
number by 2 (ble_hs_max_attrs * 2).
5) I believe I added 12 to the ble_gatts_clt_cfg_pool but not sure this is 
needed.
6) Enabled data length extension by setting BLE_LL_CONN_INIT_MAX_TX_BYTES to 
251. This number could be made less but for now I made it the full size. This 
is for both central and peripheral.
 
SCHEDULER CHANGES:
A large part of the changes to the controller involve how connections get 
scheduled. There are three configuration items that can be modified, and need 
to be modified, for this code to work. I realize I committed this with some 
default numbers that probably should be turned off when we merge this into 
develop, but for now realize these numbers are based on the connection interval 
that the central uses (16.25 msecs) and 5 connections

BLE_LL_STRICT_CONN_SCHEDULING: 

Re: Debugging blecent application on nrf52dk

2017-03-17 Thread will sanfilippo
Yes, that is for 4.2. But earlier versions of the specification do not even 
mention Data Length Update procedure so this would not apply to a 4.1 
controller, for instance.

There needs to be a generic place in the spec that says “if you receive a LL 
control opcode that you do not understand, you reply with LL_UNKNOWN_RSP”.

It migth be in there but when I did a quick search this morning I did not 
explicitly find it.


> On Mar 17, 2017, at 11:34 AM, Pritish Gandhi <prit...@aylanetworks.com> wrote:
> 
> Hi Will,
> Thanks for the explanation. I found it in the BluetoothCore v4.2 spec in
> section 5.1.9 - Data Length Update Procedure.
> I agree with your conclusion that it sounds like the controller must accept
> LL_UNKNOWN_RSP but must not require it.
> Thanks again this was really very helpful.
> -Pritish
> 
> On Fri, Mar 17, 2017 at 11:05 AM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> I can elaborate. It has been discussed at fair length on the dev list
>> before but I will summarize.
>> 
>> The issue is that our controller automatically attempts to do data length
>> extension with a peer. The default configuration sets a larger receive size
>> than the minimum size and this causes the controller to initiate that
>> control procedure automatically (without any host intervention).
>> 
>> We have seen that other controllers do not like it when they get a
>> LL_LENGTH_REQ_PDU when they do not support the data length update
>> procedure. They are supposed to send back an UNKNOWN_RSP but some do not.
>> Actually, we have seen different controllers with different behavior.
>> 
>> And re-reading the specification, it could be that I simply
>> mis-interpreted it. It was my understanding that if a controller receives a
>> control PDU that it does not understand it should reply with
>> LL_UNKNOWN_RSP. The spec sort of hints at that in the Feature Exchange
>> Procedure section, but maybe it is not required. Technically I am probably
>> wrong as it says that an implementation must accept LL_UNKNOWN_RSP but does
>> not say that you have to send it. I thought I had read it somewhere but
>> maybe not.
>> 
>> Anyway, we will address this issue soon enough so for now just disable
>> data length extension and you should be fine.
>> 
>>> On Mar 17, 2017, at 10:47 AM, Pritish Gandhi <prit...@aylanetworks.com>
>> wrote:
>>> 
>>> Hey Will,
>>> That worked!! The blecent is staying connected to the bleprph almost
>>> indefinitely now.
>>> 
>>> Can you please elaborate on the reason: "And sure enough, I think this is
>>> the same issue we have seen before. The data length update procedure is
>> not
>>> completing and the connection is timing out."
>>> 
>>> Do you mean that the blecent device is trying to enable data length
>>> extension with the bleprph but that is failing?
>>> 
>>> Thanks,
>>> Pritish
>>> 
>>> On Fri, Mar 17, 2017 at 10:11 AM, will sanfilippo <wi...@runtime.io>
>> wrote:
>>> 
>>>> BTW: Lukasz did have a really good suggestion for future debugging (why
>>>> didnt I think of that?) :-)
>>>> 
>>>> A sniffer with wireshark is a really handy tool. I mentioned the
>> debugger
>>>> as it was something we could look at really quickly to see if it is an
>>>> issue that I already know about with some controllers.
>>>> 
>>>> And sure enough, I think this is the same issue we have seen before. The
>>>> data length update procedure is not completing and the connection is
>> timing
>>>> out.
>>>> 
>>>> A quick fix for now would be to disable the data length extension
>> feature
>>>> to see if this addresses the issue.
>>>> 
>>>> In the syscfg.yml file in net/nimble/controller you should set
>>>> BLE_LL_CFG_FEAT_DATA_LEN_EXT to 0. This should disable the controller
>>>> automatically sending the data length update procedure. Hopefully this
>>>> solves the problem.
>>>> 
>>>> Will
>>>> 
>>>>> On Mar 17, 2017, at 9:43 AM, Pritish Gandhi <prit...@aylanetworks.com>
>>>> wrote:
>>>>> 
>>>>> Hi Will,
>>>>> Sorry I was mistaken, I jumbled up the issues in my own head. You are
>>>>> correct in that the blecent is running on the nrf52dk and the logs I
>>>>> provided are from the blecent application.
>>>>> 
>>>>> As you requested 

Re: Debugging blecent application on nrf52dk

2017-03-17 Thread will sanfilippo
I can elaborate. It has been discussed at fair length on the dev list before 
but I will summarize.

The issue is that our controller automatically attempts to do data length 
extension with a peer. The default configuration sets a larger receive size 
than the minimum size and this causes the controller to initiate that control 
procedure automatically (without any host intervention).

We have seen that other controllers do not like it when they get a 
LL_LENGTH_REQ_PDU when they do not support the data length update procedure. 
They are supposed to send back an UNKNOWN_RSP but some do not. Actually, we 
have seen different controllers with different behavior.

And re-reading the specification, it could be that I simply mis-interpreted it. 
It was my understanding that if a controller receives a control PDU that it 
does not understand it should reply with LL_UNKNOWN_RSP. The spec sort of hints 
at that in the Feature Exchange Procedure section, but maybe it is not 
required. Technically I am probably wrong as it says that an implementation 
must accept LL_UNKNOWN_RSP but does not say that you have to send it. I thought 
I had read it somewhere but maybe not.

Anyway, we will address this issue soon enough so for now just disable data 
length extension and you should be fine.

> On Mar 17, 2017, at 10:47 AM, Pritish Gandhi <prit...@aylanetworks.com> wrote:
> 
> Hey Will,
> That worked!! The blecent is staying connected to the bleprph almost
> indefinitely now.
> 
> Can you please elaborate on the reason: "And sure enough, I think this is
> the same issue we have seen before. The data length update procedure is not
> completing and the connection is timing out."
> 
> Do you mean that the blecent device is trying to enable data length
> extension with the bleprph but that is failing?
> 
> Thanks,
> Pritish
> 
> On Fri, Mar 17, 2017 at 10:11 AM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> BTW: Lukasz did have a really good suggestion for future debugging (why
>> didnt I think of that?) :-)
>> 
>> A sniffer with wireshark is a really handy tool. I mentioned the debugger
>> as it was something we could look at really quickly to see if it is an
>> issue that I already know about with some controllers.
>> 
>> And sure enough, I think this is the same issue we have seen before. The
>> data length update procedure is not completing and the connection is timing
>> out.
>> 
>> A quick fix for now would be to disable the data length extension feature
>> to see if this addresses the issue.
>> 
>> In the syscfg.yml file in net/nimble/controller you should set
>> BLE_LL_CFG_FEAT_DATA_LEN_EXT to 0. This should disable the controller
>> automatically sending the data length update procedure. Hopefully this
>> solves the problem.
>> 
>> Will
>> 
>>> On Mar 17, 2017, at 9:43 AM, Pritish Gandhi <prit...@aylanetworks.com>
>> wrote:
>>> 
>>> Hi Will,
>>> Sorry I was mistaken, I jumbled up the issues in my own head. You are
>>> correct in that the blecent is running on the nrf52dk and the logs I
>>> provided are from the blecent application.
>>> 
>>> As you requested I put the breakpoint at ble_ll_ctrl_proc_rsp_timer_cb()
>> please
>>> see the dump below:
>>> 
>>> *console logs:*
>>> 
>>> 92626:[ts=723640584ssb, mod=64 level=1] Connection established
>>> 
>>> 92627:[ts=723648396ssb, mod=4 level=1] GATT procedure initiated: discover
>>> all services
>>> 
>>> 92703:[ts=724242172ssb, mod=4 level=1] GATT procedure initiated: discover
>>> all characteristics; start_handle=1 end_handle=11
>>> 
>>> 92748:[ts=724593712ssb, mod=4 level=1] GATT procedure initiated: discover
>>> all characteristics; start_handle=12 end_handle=15
>>> 
>>> 92786:[ts=724890568ssb, mod=4 level=1] GATT procedure initiated: discover
>>> all characteristics; start_handle=16 end_handle=19
>>> 
>>> 92818:[ts=725140616ssb, mod=4 level=1] GATT procedure initiated: discover
>>> all characteristics; start_handle=20 end_handle=32
>>> 
>>> 92863:[ts=725492156ssb, mod=4 level=1] GATT procedure initiated: discover
>>> all characteristics; start_handle=33 end_handle=65535
>>> 
>>> 92927:[ts=725992124ssb, mod=4 level=1] GATT procedure initiated: discover
>>> all descriptors; chr_val_handle=14 end_handle=15
>>> 
>>> 92940:[ts=726093744ssb, mod=4 level=1] GATT procedure initiated: discover
>>> all descriptors; chr_val_handle=18 end_handle=19
>>> 
>>> 92953:[ts=726195300ssb, mod=4 level=1] GATT procedu

Re: Debugging blecent application on nrf52dk

2017-03-17 Thread will sanfilippo
_time = 0x848,
> 
>  rem_max_tx_time = 0x148,
> 
>  rem_max_rx_time = 0x148,
> 
>  eff_max_tx_time = 0x148,
> 
>  eff_max_rx_time = 0x148,
> 
>  chanmap = {0xff, 0xff, 0xff, 0xff, 0x1f},
> 
>  req_chanmap = {0x0, 0x0, 0x0, 0x0, 0x0},
> 
>  chanmap_instant = 0x0,
> 
>  hop_inc = 0xb,
> 
>  data_chan_index = 0x5,
> 
>  unmapped_chan = 0x5,
> 
>  last_unmapped_chan = 0x1f,
> 
>  num_used_chans = 0x25,
> 
>  conn_rssi = 0xd0,
> 
>  tx_seqnum = 0x1,
> 
>  next_exp_seqnum = 0x1,
> 
>  cons_rxd_bad_crc = 0x0,
> 
>  last_rxd_sn = 0x0,
> 
>  last_rxd_hdr_byte = 0x5,
> 
>  rpa_index = 0xff,
> 
>  reject_reason = 0x0,
> 
>  host_reply_opcode = 0x0,
> 
>  master_sca = 0x4,
> 
>  tx_win_size = 0x1,
> 
>  cur_ctrl_proc = 0x8,
> 
>  disconnect_reason = 0x0,
> 
>  rxd_disconnect_reason = 0x0,
> 
>  common_features = 0x0,
> 
>  vers_nr = 0x7,
> 
>  pending_ctrl_procs = 0x100,
> 
>  event_cntr = 0x320,
> 
>  completed_pkts = 0x0,
> 
>  comp_id = 0xf,
> 
>  sub_vers_nr = 0x2209,
> 
>  auth_pyld_tmo = 0xbb8,
> 
>  access_addr = 0x14258862,
> 
>  crcinit = 0x742bf8,
> 
>  ce_end_time = 0x2d8b779d,
> 
>  terminate_timeout = 0x0,
> 
>  last_scheduled = 0x2d86,
> 
>  conn_itvl_min = 0x18,
> 
>  conn_itvl_max = 0x28,
> 
>  conn_itvl = 0x28,
> 
>  slave_latency = 0x0,
> 
>  supervision_tmo = 0x100,
> 
>  min_ce_len = 0x10,
> 
>  max_ce_len = 0x50,
> 
>  tx_win_off = 0x0,
> 
>  anchor_point = 0x2d8b6e97,
> 
>  last_anchor_point = 0x2b290d01,
> 
>  slave_cur_tx_win_usecs = 0x0,
> 
>  slave_cur_window_widening = 0x0,
> 
>  own_addr_type = 0x0,
> 
>  peer_addr_type = 0x0,
> 
>  peer_addr = {0xaa, 0xaa, 0xaa, 0xaa, 0xaa, 0xaa},
> 
>  conn_spvn_timer = {
> 
>bsp_timer = 0x20002ee4,
> 
>cb_func = 0x19865,
> 
>cb_arg = 0x20003298,
> 
>expiry = 0x2db1bca8,
> 
>link = {
> 
>  tqe_next = 0x0,
> 
>  tqe_prev = 0x200036a4
> 
>}
> 
>  },
> 
>  conn_spvn_ev = {
> 
>ev_queued = 0x0,
> 
>ev_cb = 0x1ac05,
> 
>ev_arg = 0x20003298,
> 
>ev_next = {
> 
>  stqe_next = 0x0
> 
>}
> 
>  },
> 
>  conn_ev_end = {
> 
>ev_queued = 0x0,
> 
>ev_cb = 0x1a6a5,
> 
>ev_arg = 0x20003298,
> 
>ev_next = {
> 
>  stqe_next = 0x0
> 
>}
> 
>  },
> 
>  cur_tx_pdu = 0x0,
> 
>  conn_txq = {
> 
>stqh_first = 0x0,
> 
>stqh_last = 0x20003358
> 
>  },
> 
>  {
> 
>act_sle = {
> 
>  sle_next = 0x0
> 
>},
> 
>free_stqe = {
> 
>  stqe_next = 0x0
> 
>}
> 
>  },
> 
>  ctrl_proc_rsp_timer = {
> 
>c_ev = {
> 
>  ev_queued = 0x0,
> 
>  ev_cb = 0x1c1ed,
> 
>  ev_arg = 0x20003298,
> 
>  ev_next = {
> 
>stqe_next = 0x0
> 
>  }
> 
>},
> 
>c_evq = 0x2000313c,
> 
>c_ticks = 0x17dd2,
> 
>c_next = {
> 
>  tqe_next = 0x200018e4,
> 
>  tqe_prev = 0x0
> 
>}
> 
>  },
> 
>  conn_sch = {
> 
>sched_type = 0x3,
> 
>enqueued = 0x1,
> 
>start_time = 0x2d8b6dd9,
> 
>end_time = 0x2d8b779d,
> 
>cb_arg = 0x20003298,
> 
>sched_cb = 0x19d35,
> 
>link = {
> 
>  tqe_next = 0x0,
> 
>  tqe_prev = 0x200036ac
> 
>}
> 
>  },
> 
>  auth_pyld_timer = {
> 
>c_ev = {
> 
>  ev_queued = 0x0,
> 
>  ev_cb = 0x1a2d1,
> 
>  ev_arg = 0x20003298,
> 
>  ev_next = {
> 
>stqe_next = 0x0
> 
>  }
> 
>},
> 
>c_evq = 0x2000313c,
> 
>c_ticks = 0x0,
> 
>c_next = {
> 
>  tqe_next = 0x0,
> 
>  tqe_prev = 0x0
> 
>}
> 
>  },
> 
>  enc_data = {
> 
>enc_state = 0x1,
> 
>tx_encrypted = 0x0,
> 
>enc_div = 0x0,
> 
>tx_pkt_cntr = 0x0,
> 
>rx_pkt_cntr = 0x0,
> 
>host_rand_num = 0x0,
> 
>iv = {0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0},
> 
>enc_block = {
> 
>  key = {0x0 },
> 
>  plain_text = {0x0 },
> 
>  cipher_text = {0x0 }
> 
>}
> 
>  },
> 
>  conn_param_req = {
> 
>handle = 0x0,
> 
>conn_itvl_min = 0x0,
> 
>conn_itvl_max = 0x0,
> 
>conn_latency = 0x0,
> 
>supervision_timeout = 0x0,
> 
>min_ce_len = 0x0,
> 
>max_ce_len = 0x0
> 
>  },
> 
>  conn_update_req = {
&

Re: Debugging blecent application on nrf52dk

2017-03-16 Thread will sanfilippo
I do not think there is a simple way to debug this. As Chris points out, the 
first problem is a LL control procedure timeout. I think I can help figure some 
things out there. There is a function called  ble_ll_ctrl_proc_rsp_timer_cb. If 
you set a breakpoint at this function in the debugger when you get the first 
error you can examine the connection state machine. The parameter passed in to 
that function is an event and ev->ev_arg is a pointer to the connection state 
machine. In the debugger, just dump ev_arg after typecasting it to a connection 
state machine: p/x (struct ble_ll_conn_sm *)ev->ev_arg

I presume you are OK with using gdb? I would ‘set print pretty on’ before 
dumping the connection state machine. If you send me the output of that I might 
be able to help.

Thanks

> On Mar 16, 2017, at 2:30 PM, Christopher Collins  wrote:
> 
> Hi Pritish,
> 
> On Thu, Mar 16, 2017 at 01:50:12PM -0700, Pritish Gandhi wrote:
>> Hi All,
>> I'm trying to run blecent on an nrf52dk and am running the bleprph
>> application on another BLE module (stm32f4discovery talking to a broadcom
>> BLE core). Anyways, when try to run blecent it seems like I successfully
>> connect to the peripheral and are able to discover it, however after that
>> the connection seems to be timing out and then am never able to discover
>> the peripheral again.
> 
> [...]
> 
> Hmm, that is odd, indeed.  The disconnect reason codes you are seeing
> are mapped as follows:
> 
>546 - LMP RESPONSE TIMEOUT / LL RESPONSE TIMEOUT
>520 - CONNECTION TIMEOUT
> 
> I'm afraid I don't have any ideas at the moment.  Could you please
> clarify the setup you are using?  Here is my understanding:
> 
> Device A: blecent on nRF52dk (combined host-controller)
> Device B:
>* bleprph on stm32f4discovery (host-only)
>* broadcom controller
> 
> Is that correct?  If so, I assume the host and controller on device B
> communicate via UART?
> 
> Thanks,
> Chris
> 
>> 
>> 1) Connected and Discovered the bleprph:
>> 
>> 37493:[ts=292914004ssb, mod=4 level=1] GAP procedure initiated: discovery;
>> own_addr_type=0 filter_policy=0 passive=1 limited=0 filter_duplicates=1
>> duration=forever
>> 
>> 37503:[ts=292992124ssb, mod=4 level=1] GAP procedure initiated: connect;
>> peer_addr_type=0 peer_addr=aa:aa:aa:aa:aa:aa scan_itvl=16 scan_window=16
>> itvl_min=24 itvl_max=40 latency=0 supervision_timeout=256 min_ce_len=16
>> max_ce_len=768 own_addr_ty
>> 
>> 37517:[ts=293101556ssb, mod=64 level=1] Connection established
>> 
>> 37519:[ts=293117180ssb, mod=4 level=1] GATT procedure initiated: discover
>> all services
>> 
>> 37588:[ts=293656208ssb, mod=4 level=1] GATT procedure initiated: discover
>> all characteristics; start_handle=1 end_handle=11
>> 
>> 37627:[ts=293960876ssb, mod=4 level=1] GATT procedure initiated: discover
>> all characteristics; start_handle=12 end_handle=15
>> 
>> 37658:[ts=294203112ssb, mod=4 level=1] GATT procedure initiated: discover
>> all characteristics; start_handle=16 end_handle=19
>> 
>> 37684:[ts=294406224ssb, mod=4 level=1] GATT procedure initiated: discover
>> all characteristics; start_handle=20 end_handle=32
>> 
>> 37722:[ts=294703080ssb, mod=4 level=1] GATT procedure initiated: discover
>> all characteristics; start_handle=33 end_handle=65535
>> 
>> 37761:[ts=295007812ssb, mod=4 level=1] GATT procedure initiated: discover
>> all descriptors; chr_val_handle=14 end_handle=15
>> 
>> 37774:[ts=295109368ssb, mod=4 level=1] GATT procedure initiated: discover
>> all descriptors; chr_val_handle=18 end_handle=19
>> 
>> 37786:[ts=295203112ssb, mod=4 level=1] GATT procedure initiated: discover
>> all descriptors; chr_val_handle=24 end_handle=25
>> 
>> 37799:[ts=295304668ssb, mod=4 level=1] GATT procedure initiated: discover
>> all descriptors; chr_val_handle=29 end_handle=30
>> 
>> 37812:[ts=295406224ssb, mod=4 level=1] GATT procedure initiated: discover
>> all descriptors; chr_val_handle=37 end_handle=65535
>> 
>> 37825:[ts=295507780ssb, mod=64 level=3] Service discovery complete;
>> status=0 conn_handle=1
>> 
>> 2) Read/Write/Subscribe for notifications. Finally fails with reason=546
>> 
>> 37827:[ts=295523404ssb, mod=4 level=1] GATT procedure initiated: read;
>> att_handle=22
>> 
>> 37829:[ts=295539028ssb, mod=4 level=1] GATT procedure initiated: write;
>> att_handle=32 len=2
>> 
>> 37832:[ts=295562464ssb, mod=4 level=1] GATT procedure initiated: write;
>> att_handle=30 len=2
>> 
>> 37851:[ts=295710892ssb, mod=64 level=1] Read complete; status=0
>> conn_handle=1 attr_handle=22 value=
>> 
>> 37857:[ts=295757764ssb, mod=64 level=1] Write complete; status=0
>> conn_handle=1 attr_handle=32
>> 
>> 37863:[ts=295804636ssb, mod=64 level=1] Subscribe complete; status=0
>> conn_handle=1 attr_handle=30
>> 
>> 42637:[ts=333101556ssb, mod=64 level=1] disconnect; reason=546
>> 
>> 
>> 3) Once it disconnects, blecent gets stuck in this loop of trying to
>> discover, but the discovery always fails:
>> 
>> 42638:[ts=333109368ssb, 

Re: MyNewt: NimBLE: Does BD_ADDR have to be unique?

2017-03-13 Thread will sanfilippo
All:

You should not use DEVICEID for the address. While there is an extremely high 
probability you will be fine, there is a chance you will not. And yes, I 
realize the chance is really, really, really small! :-) The address cannot be 
all 1’s or all 0’s. I asked nordic this question and they say their DEVICEADDR 
does abide by the rules in the spec so that is the one I would use. Here is the 
link to the post:

https://devzone.nordicsemi.com/question/101162/deviceaddr-and-resolvable-private-addresses/
 

I have to say I was not terribly happy with the answer nordic provided since 
they say "the address is, as you know, a random static address...”. Well, it 
might not be. You have to set the upper two bits to make it so (as Szymon 
mentioned). You could also use DEVICEADDR as a non-resolvable private address; 
you just set the two upper bits to 0.


> On Mar 13, 2017, at 3:36 PM, Pritish Gandhi <prit...@aylanetworks.com> wrote:
> 
> Hi All,
> Sorry I should've confirmed earlier but I used the hal_bsp_hw_id() for this
> and was able to get what I wanted (i.e a unique hw id which would persist
> over flash writes/re-writes and would stay constant for this device).
> 
> I would suggest that the nimble stack set the bd_addr using this by default
> if the application does not set the g_dev_addr. This would eliminate the
> need for the application developer to obtain (or generate and persist) a
> unique bd_addr for their device.
> 
> Thanks,
> Pritish
> 
> On Mon, Mar 13, 2017 at 3:07 PM, amit mehta <gmate.a...@gmail.com> wrote:
> 
>> On Mon, Mar 13, 2017 at 10:24 PM, will sanfilippo <wi...@runtime.io>
>> wrote:
>>> amit:
>>> 
>>> A couple of things:
>>> 
>>> 1) For the bsp hw id call for the nrf52, there are actually 64-bits (8
>> bytes) worth of unique date. So you dont need to specify BLE_DEV_ADDR_LEN
>> for that. Not sure what you want to do the device ID btw. What are you
>> going to use it for?
>> 
>> Will, yes, It is 8 bytes, I was suggesting to use the 48 of those 64 bits
>> in sample BLE peripheral application (bleprph and/or similiar), so
>> that flashing different target boards, do not end up advertising
>> the same current default address, i.e. 0x0a,0x0a,0x0a,0x0a,0x0a,0x0a
>> 
>> I think the originator of this thread, Pritish had this issue, which
>> I thought could be solved by using the unique device id value provided
>> by nrf5xxx devices.
>> 
>> That was the whole rationale behind this.
>> 
>> Thanks,
>> Amit
>> 
>> --
>> Sent from Bahamas, while drinking chi-chi and piña colada.
>> 



Re: MyNewt: NimBLE: Does BD_ADDR have to be unique?

2017-03-13 Thread will sanfilippo
Amit:

I do not have a really strong opinion here, so would be interested to hear what 
others think. I am referring to the API. Certainly, the “get_devaddr” API is 
appropriate for the nrf chips but not sure if we would want to have multiple 
API in case we want to port this to other chips and also to support the fact 
that a device can have both a random address and public address.


> On Mar 13, 2017, at 2:16 PM, amit mehta  wrote:
> 
>> if above is agreed, then nrf5xxx_get_devaddr_type() and
>> nrf5xxx_get_devid() might be okay.
> 
> Small typo; please read nrf5xxx_get_devid() as nrf5xxx_get_devaddr()
> 
> Thanks,
> Amit
> 
> -- 
> Sent from Bahamas, while drinking chi-chi and piña colada.



Re: MyNewt: NimBLE: Does BD_ADDR have to be unique?

2017-03-13 Thread will sanfilippo
amit:

A couple of things:

1) For the bsp hw id call for the nrf52, there are actually 64-bits (8 bytes) 
worth of unique date. So you dont need to specify BLE_DEV_ADDR_LEN for that. 
Not sure what you want to do the device ID btw. What are you going to use it 
for?

2) If you want to put the API in ble_hw.c in hw/drivers/nimble/src/ I would not 
name them nrf5xxx. I would use the current naming convention of ble_hw_. I 
would have this return an int and pass in a pointer to the device address 
structure (ble_addr_t). This way if the HW does not support having random 
static addresses or other addresses they can just return -1.

Also, be careful about “endianness" and how you write the code to access the 
FICR and “copy it” to the device address pointer. There is a structure called 
“ble_addr_t” and that is what should be passed. I do not think I would have two 
different API for address type and such. I think I would do something like 
Szymon mentioned:

int ble_hw_read_random_static_addr(ble_addr_t *addr);   /* Remember what Szymon 
pointed out. You have to set the upper two bits appropriately here. The chip 
does not do this */
int ble_hw_read_public_addr(ble_addr_t *addr);  /* Nordic does 
not come programmed with one so not sure what I would do here. Either not 
implement it or have it return -1 */

Note on “endianness” and device address. If you look at ble_addr_t and the 
macros defined to access it, you will see that the least significant byte is in 
location 0 (val[0]) and the most significat is in val[5]. I realize it is a bit 
confusing to talk about endianness with all this but just wanted to point it 
out.

I do have to say, I am still not sure if these API belong in hw/bsp as opposed 
to hw/drivers/nimble/nrfxxx/src.

What do others think?

> On Mar 13, 2017, at 1:32 PM, amit mehta <gmate.a...@gmail.com> wrote:
> 
> On Mon, Mar 13, 2017 at 6:06 PM, will sanfilippo <wi...@runtime.io> wrote:
>> amit:
>> 
>> There is already a function to return the device id. It is named 
>> hal_bsp_hw_id() and exists in hw/mcu/nordic/nrf52xxx/src/nrf52_hw_id. Many 
>> chips come with some form of unique id which was why this was placed in 
>> hw/mcu.
> 
> Yes, thank you. I can reuse this API (for demo purposes), something like:
> 
> rc = hal_bsp_hw_id(g_dev_addr, BLE_DEV_ADDR_LEN);
> assert(rc == BLE_DEV_ADDR_LEN);
> 
>> I do not think I would put the API for ble device address in the same place. 
>> Why? Because it is BLE specific. This device id is not. I guess folks can 
>> make an argument that a public device address is just a 6-byte MAC address 
>> and thus could be used for protocols other than BLE. The random static 
>> address is BLE specific though.
>> 
>> 
>> My vote would be to place this API in one of the following directories:
>> hw/drivers/nimble/nrf5xxx/src/  /* there is a file called 
>> ble_hw.c in this dir that might be a good fit */
>> hw/mcu/nordic/nrf5xxx/src/  /* depending on whether we 
>> think this should be a hal */
> 
> Agree with you and I'm also more inclined towards:
> hw/drivers/nimble/nrf5xxx/src
> 
>> Finally, as for the naming convention, if this thing is going to be called 
>> generically by code outside a nordic specific directory I would not name it 
>> nrf_xxx. Once we have consensus where to place these API I think the name 
>> will follow...
> 
> if above is agreed, then nrf5xxx_get_devaddr_type() and
> nrf5xxx_get_devid() might be okay.
> 
> Thanks,
> Amit
> -- 
> Sent from Bahamas, while drinking chi-chi and piña colada.



Re: MyNewt: NimBLE: Does BD_ADDR have to be unique?

2017-03-13 Thread will sanfilippo
amit:

There is already a function to return the device id. It is named 
hal_bsp_hw_id() and exists in hw/mcu/nordic/nrf52xxx/src/nrf52_hw_id. Many 
chips come with some form of unique id which was why this was placed in hw/mcu.

I do not think I would put the API for ble device address in the same place. 
Why? Because it is BLE specific. This device id is not. I guess folks can make 
an argument that a public device address is just a 6-byte MAC address and thus 
could be used for protocols other than BLE. The random static address is BLE 
specific though.


My vote would be to place this API in one of the following directories:
hw/drivers/nimble/nrf5xxx/src/  /* there is a file called 
ble_hw.c in this dir that might be a good fit */
hw/mcu/nordic/nrf5xxx/src/  /* depending on whether we 
think this should be a hal */

Finally, as for the naming convention, if this thing is going to be called 
generically by code outside a nordic specific directory I would not name it 
nrf_xxx. Once we have consensus where to place these API I think the name will 
follow...
> On Mar 11, 2017, at 8:51 AM, amit mehta  wrote:
> 
> On Thu, Mar 9, 2017 at 8:56 AM, Szymon Janc  wrote:
>> Hi,
>> 
>> On 9 March 2017 at 02:52, Christopher Collins  wrote:
>>> Hi Pritish,
>>> 
>>> On Wed, Mar 08, 2017 at 02:47:01PM -0800, Pritish Gandhi wrote:
 So it seems like the nrf52dk should have a RANDOM STATIC address which
 should be programmed once in the hardware. However I'm not able to read
 that address from the host.
 Would appreciate any help.
>>> 
>>> You can configure the device with a random static address using this
>>> function:
>>> 
>>>int ble_hs_id_set_rnd(const uint8_t *rnd_addr)
>>> 
>>> (http://mynewt.apache.org/latest/network/ble/ble_hs/ble_hs_id/functions/ble_hs_id_set_rnd/)
>>> 
>>> The argument that you pass to this function would be the address
>>> that was preprogrammed into the board.  I'm afraid I am not sure how to
>>> read this preprogrammed address out of the nRF hardware.
>> 
>> There is FICR register in nRF5X that can be used as a permanent source
>> for static random
>> address (it is basically a random number so we would still need to
>> mask bits for proper static
>> random address). The thing is that there is no standard HCI command
>> for reading static
>> random address from controller. So we would need to have vendor
>> command for this or
>> use global for time being.
> 
> I think, there is a use case to utilize the Nordic's random static
> (FICR::DEVICEADDRTYPE = 1) device address to use in advertisement
> packet. I was wondering, where should I add (if agreed) these APIs
> (say, nrfxx_get_devaddr_type(), nrfxx_get_devaddr(), nrfxx_get_devid() ) ?
> 
> Comments please.
> 
> Thanks,
> Amit



Re: How to verify porting MyNewt to another board was success?

2017-03-02 Thread will sanfilippo
Well, there are a number of ways to verify that the port is successful.

If you get blinky up and running you can be pretty assured that the gpio and 
core os are working on your board. There are other apps that test other 
functionality so depending on what you want to verify, you would choose the 
appropriate app.

Regarding peripherals, here are some of the other apps that can test 
peripherals:
1) spitest can be used to test spi master and spi slave
2) timtest will test basic hal_timer port
3) slinky can be used for basic serial port testing

There are other drivers to test other peripherals as well; others can chime in 
on those.

Have fun!

> On Mar 2, 2017, at 4:49 AM, Louie Lu  wrote:
> 
> Hi everyone,
> 
> I'm now trying to port MyNewt to STM32F429discovery, and have some
> nice shot at blinky now.
> 
> My question is, how could I verify that my porting is successful (to
> porting peripheral correctly, and the core system function)?
> 
> Is that doing in runtime, or just run the unittest to verify?
> 
> 
> Thanks,
> Louie.



Re: Hackillinois this weekend in Urbana IL

2017-02-22 Thread will sanfilippo
Do not know how helpful this will be and it is just my own two cents so take it 
for what it is worth :-)

First, this is more of a favor/ask: if you have folks going through the 
installation process and the documentation, any feedback you can provide on 
what was easy/good/hard/bad/confusing would be great to know.

As far as contributions go I just have a couple of thoughts. Not sure if these 
will be a tall order or not. The first is adding a BLE profile. There are a 
number of defined profiles and some might be implementable in a short time. 
Another idea could be to add a driver for a sensor (or sensors).

Let us know how it goes!

> On Feb 22, 2017, at 1:34 PM, Jacob Rosenthal  wrote:
> 
> Hey newt folks,
> 
> Im mentoring at https://hackillinois.org/ this weekend on bluetooth and
> embedded in general
> 
> ~1000 Students will create and contribute to open source projects all
> weekend starting friday. Im not sure what skill levels and languages Ill
> have available to me, but if anyone has ideas for mynewt contribs Im
> definitely going to tell them about mynewt and bring some targets for them
> to play with.
> 
> --Jacob



Re: Resources Reserved for Mynewt

2017-02-20 Thread will sanfilippo
I dont think a document exists which details all of the used resources. 
Obviously, it is based on the packages that are used in your application. Some 
general information:

OS uses TIMER1 or RTC1 (for os time)
Nimble stack uses TIMER0 for high resolution timer.
Nimble stack uses a number of the pre-programmed radio PPI.
Nimble stack uses the radio peripheral.

Other packages may use other interrupts (uart interrupts, spi, i2c, etc). Not 
sure what other PPI may be used.

Let us know if you have further questions. Should be fairly easy to determine 
which resources are used by which packages by searching the codebase.
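
For example, something along these lines (the paths reflect the current
source tree layout and may move between releases):

$ grep -rn NRF_TIMER net/nimble/drivers hw/mcu/nordic
$ grep -rn NRF_PPI net/nimble/drivers
$ grep -rn NRF_RTC hw/mcu/nordic hw/bsp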


> On Feb 20, 2017, at 3:37 AM, Lm Chew  wrote:
> 
> 
> Hi,
> 
> Is there a document that lists the resources reserved for Mynewt, and 
> which resources are free/safe for us to use on the nRF52?
> 
> e.g.:
> Which PPI channels are utilized by Mynewt?
> Which timers are used by Mynewt?
> Which software interrupts are used by Mynewt?
> 
> Best Regards,
> Chew
> 



Re: Issues with bleprph and blecent on nRF51822xxaa

2017-02-16 Thread will sanfilippo
Hello there Marcos:

Indeed, some of the sample apps probably wont run in 16KB RAM. If a malloc 
fails it should be pretty easy to debug as I would suspect most mallocs in the 
code assert() if they cant get the memory.

Is there a specific app you want to run?


> On Feb 16, 2017, at 8:19 PM, Marcos Scheeren  wrote:
> 
> Hi, Marko.
> 
> On Tue, Feb 14, 2017 at 2:33 PM, marko kiiskila  wrote:
>> Hi,
>> 
>> 
>> A quick peek at the gdb sources tells me that the memory region is
>> marked as flash, and a comment says to only allow writes during the
>> ‘load’ phase (which, technically, I guess is correct). Check the
>> output of ‘info mem’ and see if you can change its properties.
>> 
> 
> (gdb) info mem
> Using memory regions provided by the target.
> Num Enb Low Addr   High Addr  Attrs
> 0   y   0x 0x0004 flash blocksize 0x400 nocache
> 1   y   0x10001000 0x10001100 flash blocksize 0x100 nocache
> 2   y   0x2000 0x20004000 rw nocache
> 
> 
>> Alternative would be to convert the binary blob into a ihex or srecord 
>> format.
>> gdb can load these the same way as it can load elf. You can use objcopy
>> to do that. Note that elf has location data, as do ihex and srecord.
>> 
> 
> I tried "$ arm-none-eabi-objcopy bletest.elf.bin -O srec bletest.elf.bin.srec"
> but it yields: arm-none-eabi-objcopy:bletest.elf.bin: File format not 
> recognized
> 
> When inputting the .elf file, it converts ok to both srec and ihex and GDB
> accepts both just fine.
> 
> 
>> 
>> My guess the system is out of heap. Check while in gdb:
>> p/d sbrkBase-brk
>> 
>> Hopefully there are things you can prune out.
>> 
> 
> The output of p/d sbrkBase-brk in gdb:
> blehci: -5392
> bletest: -1120
> bleprph: -192
> bleprph (BLE_LL_CFG_FEAT_LE_ENCRYPTION: 0 // BLE_SM_LEGACY: 0):  -1072
> blecent: -1200
> 
>> 
>> Highly unlikely that the linker scripts would cause this.
>> I suspect it’s the RAM usage.
> 
> Could it be that for some examples/apps 16KB MCUs aren't just enough?
> 
>> 
>> Let me know how it goes,
>> M
>> 
>> 
> 
> Thank you.
> Marcos.
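
For reference on the objcopy failure above: a raw .bin has no header for
objcopy to recognize, so the input format has to be named explicitly, and
since a binary blob carries no location data the load address must be
supplied by hand (the 0x8000 offset below is only an example; use your
image slot's actual base address):

$ arm-none-eabi-objcopy -I binary -O srec --change-addresses 0x8000 bletest.elf.bin bletest.srec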



Re: BLE HCI support on NRF52DK

2017-02-10 Thread will sanfilippo
Hello Alan:

I may be reading this incorrectly or mistaken, but the host does not need to 
see the NOOP from the controller. The controller needs to be ready to receive 
the HCI Reset command from the host. At least, that is my understanding after 
the email exchange with Andrzej. I would have thought there would be a retry 
mechanism as well but that is not the case. So all you need to ensure is that 
the controller is up and running before the host sends the HCI Reset.

Am I making sense? :-)
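
On the host side the usual defensive pattern is simply to retry. A rough
sketch, assuming a hypothetical blocking helper hci_send_reset() that
returns 0 once the Command Complete for HCI Reset arrives (neither the
helper nor the 100 ms backoff is part of the NimBLE API):

int rc;

do {
    rc = hci_send_reset();                    /* hypothetical helper */
    if (rc != 0) {
        os_time_delay(OS_TICKS_PER_SEC / 10); /* back off ~100 ms */
    }
} while (rc != 0);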

> On Feb 10, 2017, at 12:39 PM, Alan Graves <agra...@deltacontrols.com> wrote:
> 
> Hi Guys,
> 
> The BLE hardware I have to work with does not provide hardware flow control 
> with RTS/CTS. The CTS line is grounded and the RTS is left not connected. In 
> any case the BLE module is on its own board that is internally connected to 
> the Linux host processor. It is probably safe to assume that in this 
> situation the Nordic chip will be powered up and expecting the Host to be 
> ready to receive any messages sent via the BLE HCI before the Linux BlueZ 
> stack is initialized. Obviously I could arbitrarily delay the NOOP message 
> timing so that the two ends can be in sync, but to not have a timeout 
> mechanism on the HCI  protocol would seem to me to be a guarantee that a 
> deadlock condition would occur. Another possibility is that perhaps I can 
> find a way to keep the BLE hardware in a reset state until the Host is 
> initialized by driving the RESET signal with a GPIO line.
> 
> ALan
> 
> -Original Message-
> From: will sanfilippo [mailto:wi...@runtime.io] 
> Sent: Monday, February 06, 2017 5:55 PM
> To: dev@mynewt.incubator.apache.org
> Subject: Re: BLE HCI support on NRF52DK
> 
> Ah ok; that is quite interesting. I did not realize that was the case and I 
> was thinking of an external board that was powered off (and not quite 
> trusting the state of the flow control lines).
> 
> Then really the only thing we need to make sure on our end is that when UART 
> is brought up and the flow control line is properly de-asserted the nimble 
> stack sees any commands that were sent by the host (in the case where the 
> UART comes up first, then the rest of the nimble stack).
> 
> Will
> 
>> On Feb 6, 2017, at 10:27 AM, Andrzej Kaczmarek 
>> <andrzej.kaczma...@codecoup.pl> wrote:
>> 
>> Hi Will,
>> 
>> I could not find any timeout defined for HCI commands so the problem 
>> here would be when host should timeout and resend HCI Reset. I think 
>> we should just assume that hw is designed properly and flow control 
>> lines are either pulled or driven externally all the time so this is not 
>> overly complicated.
>> Actually, if you check Vol 4 Part A Section 1, it says that the objective 
>> of the UART TL is to have both ends on the same PCB and communication that 
>> is free from errors, so there is no case where we suddenly have the 
>> controller disconnected - I'd say the above assumption is reasonable :-)
>> 
>> BR,
>> Andrzej
>> 
>> 
>> 
>> On Sat, Feb 4, 2017 at 12:25 AM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>>> Hi Andrzej
>>> 
>>> Thanks for pointing me to Vol 2 Part E, Section 4.4. I was recalling 
>>> a section of the spec that talked about this but could not find it 
>>> when I sent this email. Thus, I completely agree that the controller 
>>> sending a NOOP does not in any way indicate that it reset. It is fine 
>>> if the controller does send a NOOP, but the host cannot use that as 
>>> an indication that the controller reset. That does make things a bit 
>>> tricky though as you mention, but hopefully if something is really 
>>> badly out of sync the host will figure it out and reset the controller.
>>> 
>>> I was also thinking of the following scenario which I should have 
>>> explained a bit better. If the controller is powered off, it is not 
>>> driving the flow control line so I am not sure what would happen HW 
>>> wise in this case. It could be that the flow control line is 
>>> floating, and therefore the host could see it in various states. 
>>> Therefore, I would suspect that when a host issues a HCI Reset and 
>>> does not get a response for some amount of time, it just keeps issuing the 
>>> HCI Reset until it gets a response.
>>> 
>>> Given that a controller can send a NOOP on power up, I cant see how 
>>> we can guarantee that the following will NOT happen:
>>> 
>>> * Host sends HCI Reset
>>> * Controller sends NOOP
>>> * Controller sends Command Complete w/Reset opcode
>>> 
>>> I can also 

[RESULT][VOTE] Release Apache Mynewt 1.0.0-b2-incubating-rc1

2017-02-09 Thread will sanfilippo
Hello all,

Voting for Apache Mynewt 1.0.0-b2-incubating-rc1 is now closed.  The release 
has passed this step of the process.  The vote breakdown is as follows:

+1 Christopher Collins (binding)
+1 Sterling Hughes (binding)
+1 Jim Jagielski (binding)
+1 Szymon Janc
+1 Marko Kiiskila (binding)
+1 Padmasheela Kiiskila
+1 Vipul Rahane (binding)
+1 Will San Filippo (binding)
+1 David Simmons

Total: +6 binding, +3 non-binding

We can now call a vote on the general@incubator list.

Thank you to all who voted.
Will San Filippo

Re: sysint() fails

2017-02-08 Thread will sanfilippo
David:

It seems like, from this email, that things are now working for you. Are you 
still going to vote -1 or are you going to change your vote?


> On Feb 8, 2017, at 5:33 AM, David G. Simmons  wrote:
> 
> 
>> On Feb 7, 2017, at 2:38 PM, marko kiiskila  wrote:
>> 
>> can you get a backtrace of that crash?
> 
> Sorry, I was not able to get a backtrace ... my shell history didn't go back 
> far enough and I've been playing around with stuff for hours. 
> 
>> 
>> Develop branch and the 1.0.0 beta2 release branches have diverged a bit, so 
>> we
>> should see what this assert() is about.
> 
> I did get the 1.0.0B2 branch installed, and things seem to be better ... at 
> least with the bundled apps. I *did* finally have to completely erase the 
> chip and start over before it all went away.
> 
>> One issue I ran across a month back with nrf52 and sys/reboot package. The 
>> flash area
>> containing FCB was holding some other data. This was causing fcb_init() on 
>> that region to
>> return non-zero. Thereby causing sys/reboot package init to assert() during 
>> sysinit().
>> I think I had been playing around with boot loader with was bigger in size, 
>> and
>> had trailing part of my big bootloader in that area.
>> 
>> The way I sorted that out was by erasing the flash, and then reinstalled 
>> bootloader
>> and my app again.
> 
> I will try this as I'm seeing the ADC malfunctioning and getting the same 
> error
> __assert_func (file=file@entry=0x0, line=line@entry=0, func=func@entry=0x0, 
> e=e@entry=0x0) at 
> repos/apache-mynewt-core/kernel/os/src/arch/cortex_m4/os_fault.c:125
> 125  asm("bkpt");
> from an assert() 
> 
> ...
> 
> Forgot to hit send yesterday ... And I found the culprit here as well. 
> 
> 
> 
> --
> David G. Simmons
> (919) 534-5099



Re: [VOTE] Release Apache Mynewt 1.0.0-b2-incubating-rc1

2017-02-07 Thread will sanfilippo
> [X ] +1 Release this package
> [ ]  0 I don't feel strongly about it, but don't object
> [ ] -1 Do not release this package because…
> 
 +1 (binding)

> Hello all,
> I am pleased to be calling this vote for the source release of Apache
> Mynewt 1.0.0, beta 2.
> 
> Apache Mynewt is a community-driven, permissively licensed open source
> initiative for constrained, embedded applications. Mynewt provides a
> real-time operating system, flash file system, network stacks, and
> support utilities for real-world embedded systems.
> 
> For full release notes, please visit the Apache Mynewt Wiki:
> https://cwiki.apache.org/confluence/display/MYNEWT/Release+Notes
> 
> This release candidate was tested as follows:
>   1. Manual execution of the Mynewt test plan:
>  
> https://cwiki.apache.org/confluence/display/MYNEWT/Apache+Mynewt+Test+Plan
>  The test results can be found at:
>  https://cwiki.apache.org/confluence/display/MYNEWT/1.0.0-b2+Test+Results
>   2. The full unit test suite for this release was executed via "newt
>  test all" with no failures.  This testing was performed on the
>  following platforms:
>* OS X 10.10.5
>* Linux 4.4.6 (Gentoo)
> 
> The release candidate to be voted on is available at:
> https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/
> 
> The commits under consideration are as follows:
> blinky:
>   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-blinky
>   commit a69b409197a845bc75748af564cb08c4ec7701d4
> core:
>   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-core
>   commit de35d2337189a69d97aa3fdccc4f7bfaeb31efc9
> newt:
>   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-newt
>   commit fdac74ff83f21a11c7fbaa2e1adc2d50cbf1e612
> 
> In addition, the following newt convenience binaries are available:
>   linux: 
> https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/apache-mynewt-newt-bin-linux-1.0.0-b2-incubating.tgz
>   osx: 
> https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/apache-mynewt-newt-bin-osx-1.0.0-b2-incubating.tgz
> 
> The release candidate is signed with a GPG key available at:
> https://dist.apache.org/repos/dist/dev/incubator/mynewt/KEYS
> 
> The vote is open for at least 72 hours and passes if a majority of at
> least three +1 PPMC votes are cast.
> [ ] +1 Release this package
> [ ]  0 I don't feel strongly about it, but don't object
> [ ] -1 Do not release this package because…
> 
> Anyone can participate in testing and voting, not just committers,
> please feel free to try out the release candidate and provide your
> votes.
> 
> A separate [DISCUSS] thread will be opened to talk about this release
> candidate.
> 
> Thanks,
> Will



Re: [DISCUSS] Release Apache Mynewt 1.0.0-b2-incubating-rc1

2017-02-07 Thread will sanfilippo
The newt binary that was committed was built with Go version 1.6. There are 
known issues running Go binaries built with 1.6 on newer versions of macOS, so 
I suspect that is why it is crashing. You need Go 1.7 if you are running macOS 
10.12 Sierra.

> On Feb 7, 2017, at 10:01 AM, marko kiiskila <ma...@runtime.io> wrote:
> 
> Hi,
> 
> should the NOTICE files be updated with 2017?
> Looks like blinky and newt still have copyright from 2015-2016.
> Core has it from 2015-2017.
> 
> Verified signatures. Those check out.
> 
> Checked the binaries for OSX and Linux, these seem to be mostly ok.
> newt binary for OSX is giving me occasional crash; never in a repeatable
> spot though. binary for Linux is working just fine, and newt on OSX works
> without issues when I build it from source.
> Version for newt is ok.
> 
> I can build and run blinky on both Linux and Mac.
> 
>> On Feb 6, 2017, at 5:35 PM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> Hi all,
>> 
>> This thread is for any and all discussion regarding the release of
>> Apache Mynewt 1.0.0-b2-incubating-rc1.  All feedback is welcome.
>> 
>> Thanks,
>> Will
> 



Re: sysint() fails

2017-02-07 Thread will sanfilippo
Hello David:

I did not attempt to re-test all the apps you mentioned below, but bletiny on 
the nrf52dk is working just fine.

Another note: the release is on branch 1_0_0_b2_dev. That is the branch I would 
use, or check out the tag (mynewt_1_0_0_b2_rc1_tag).

Thanks

> On Feb 7, 2017, at 8:07 AM, Christopher Collins  wrote:
> 
> Hi David,
> 
> Could your version of the newt tool be out of date?  Some backwards
> compatibility breaking changes were made about two weeks ago.  If that
> isn't the problem, could you grab a backtrace in gdb at the point of the
> crash ("bt" or "where" in gdb)?
> 
> Thanks,
> Chris
> 
> 
> On Tue, Feb 07, 2017 at 09:43:19AM -0500, David G. Simmons wrote:
>> Having some trouble this morning with the nrf52dk board.
>> 
>> 389  sysinit();
>> (gdb) n
>> 
>> Program received signal SIGTRAP, Trace/breakpoint trap.
>> __assert_func (file=file@entry=0x0, line=line@entry=0, func=func@entry=0x0, 
>> e=e@entry=0x0) at 
>> repos/apache-mynewt-core/kernel/os/src/arch/cortex_m4/os_fault.c:125
>> 125 asm("bkpt");
>> 
>> I've updated both mynewt_nordic and apache-mynewt-core to the latest develop 
>> branches, and
>> 
>> int
>> main(int argc, char **argv)
>> {
>>int rc;
>> 
>>/* Initialize OS */
>>sysinit();
>> 
>> ...
>> 
>> Fails at sysinit()
>> 
>> I've built a new bootloader (just in case). I thought maybe it was something 
>> I was doing in my app, so I built and loaded core/apps/bleprph and
>> 
>> 259  sysinit();
>> (gdb) n
>> 
>> Program received signal SIGTRAP, Trace/breakpoint trap.
>> __assert_func (file=file@entry=0x0, line=line@entry=0, func=func@entry=0x0, 
>> e=e@entry=0x0) at 
>> repos/apache-mynewt-core/kernel/os/src/arch/cortex_m4/os_fault.c:125
>> 125 asm("bkpt");
>> 
>> So it appears that something is broken for at least the nrf52dk dev board ...
>> 
>> cd repos/apache-mynewt-core/
>> DSimmons-Pro:apache-mynewt-core dsimmons$ git status -v
>> On branch develop
>> Your branch is up-to-date with 'origin/develop'.
>> cd ../mynewt_nordic/
>> DSimmons-Pro:mynewt_nordic dsimmons$ git status -v
>> On branch develop
>> Your branch is up-to-date with 'origin/develop'.
>> nothing to commit, working tree clean
>> 
>> dg
>> --
>> David G. Simmons
>> (919) 534-5099
>> 
>> 
> 
> 



[DISCUSS] Release Apache Mynewt 1.0.0-b2-incubating-rc1

2017-02-06 Thread will sanfilippo
Hi all,

This thread is for any and all discussion regarding the release of
Apache Mynewt 1.0.0-b2-incubating-rc1.  All feedback is welcome.

Thanks,
Will


[VOTE] Release Apache Mynewt 1.0.0-b2-incubating-rc1

2017-02-06 Thread will sanfilippo
Hello all,
I am pleased to be calling this vote for the source release of Apache
Mynewt 1.0.0, beta 2.

Apache Mynewt is a community-driven, permissively licensed open source
initiative for constrained, embedded applications. Mynewt provides a
real-time operating system, flash file system, network stacks, and
support utilities for real-world embedded systems.

For full release notes, please visit the Apache Mynewt Wiki:
https://cwiki.apache.org/confluence/display/MYNEWT/Release+Notes

This release candidate was tested as follows:
   1. Manual execution of the Mynewt test plan:
  https://cwiki.apache.org/confluence/display/MYNEWT/Apache+Mynewt+Test+Plan
  The test results can be found at:
  https://cwiki.apache.org/confluence/display/MYNEWT/1.0.0-b2+Test+Results
   2. The full unit test suite for this release was executed via "newt
  test all" with no failures.  This testing was performed on the
  following platforms:
* OS X 10.10.5
* Linux 4.4.6 (Gentoo)

The release candidate to be voted on is available at:
https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/

The commits under consideration are as follows:
blinky:
   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-blinky
   commit a69b409197a845bc75748af564cb08c4ec7701d4
core:
   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-core
   commit de35d2337189a69d97aa3fdccc4f7bfaeb31efc9
newt:
   repos: https://git-wip-us.apache.org/repos/asf/incubator-mynewt-newt
   commit fdac74ff83f21a11c7fbaa2e1adc2d50cbf1e612

In addition, the following newt convenience binaries are available:
   linux: 
https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/apache-mynewt-newt-bin-linux-1.0.0-b2-incubating.tgz
   osx: 
https://dist.apache.org/repos/dist/dev/incubator/mynewt/apache-mynewt-1.0.0-b2-incubating/rc1/apache-mynewt-newt-bin-osx-1.0.0-b2-incubating.tgz

The release candidate is signed with a GPG key available at:
https://dist.apache.org/repos/dist/dev/incubator/mynewt/KEYS

The vote is open for at least 72 hours and passes if a majority of at
least three +1 PPMC votes are cast.
[ ] +1 Release this package
[ ]  0 I don't feel strongly about it, but don't object
[ ] -1 Do not release this package because…

Anyone can participate in testing and voting, not just committers,
please feel free to try out the release candidate and provide your
votes.

A separate [DISCUSS] thread will be opened to talk about this release
candidate.

Thanks,
Will

Re: BLE HCI support on NRF52DK

2017-02-03 Thread will sanfilippo
Hi Andrzej

Thanks for pointing me to Vol 2 Part E, Section 4.4. I was recalling a section 
of the spec that talked about this but could not find it when I sent this 
email. Thus, I completely agree that the controller sending a NOOP does not in 
any way indicate that it reset. It is fine if the controller does send a NOOP, 
but the host cannot use that as an indication that the controller reset. That 
does make things a bit tricky though as you mention, but hopefully if something 
is really badly out of sync the host will figure it out and reset the 
controller.

I was also thinking of the following scenario which I should have explained a 
bit better. If the controller is powered off, it is not driving the flow 
control line so I am not sure what would happen HW wise in this case. It could 
be that the flow control line is floating, and therefore the host could see it 
in various states. Therefore, I would suspect that when a host issues a HCI 
Reset and does not get a response for some amount of time, it just keeps 
issuing the HCI Reset until it gets a response.

Given that a controller can send a NOOP on power up, I cant see how we can 
guarantee that the following will NOT happen:

* Host sends HCI Reset
* Controller sends NOOP
* Controller sends Command Complete w/Reset opcode

I can also see this happening:

* Host sends HCI Reset
* Controller sends NOOP
* Nothing else happens

I certainly agree that once the controller actively takes control of the flow 
control line it should honor the HCI Reset although I still see the possibility 
of the two scenarios described above happening.

Regarding HW Error: that is something we can do in the controller as we can 
look at the reason why the device reset and send a HW error event.


> On Feb 3, 2017, at 12:12 PM, Andrzej Kaczmarek 
> <andrzej.kaczma...@codecoup.pl> wrote:
> 
> Hi Will,
> 
> On Fri, Feb 3, 2017 at 7:08 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> I might be getting a bit confused here so hopefully I am making some
>> sense. I seem to recall some discussion around this in the past but I cant
>> recall :-) Anyway...
>> 
>> It is my understanding that the first thing a controller should do when it
>> powers up is send a NOOP. Looking at the Core V4.2 Spec, Vol 6 Part D
>> Section 2 you can see a message sequence chart that shows this. It sounds
>> like folks think MyNewt is different than other controllers in this
>> respect. If so, we can change that behavior, but it makes sense to me to do
>> this, as it will inform the host that the controller has powered up.
>> 
> 
> The section you quote is only informative (see section 1.1 of the same
> part) and the diagram is only one of possibilities. The actual requirement
> is in Vol 2, Part E, Section 4.4 which states that after power up host is
> allowed to send up to 1 outstanding command so 1 credit is assumed here.
> Also controller does not need to send noop, but it is also not an error to
> do so.
> 
> Of course, there is a chicken and egg problem here. If the controller is
>> not powered up and the host sends a HCI Reset, the host is obviously not
>> going to get a response. I am also not sure one can trust the flow control
>> lines if the board is not powered up but one would hope that RTS/CTS are
>> pulled the proper way if the controller is not powered.
>> 
> 
> I guess host can assume that CTS/RTS lines work properly, otherwise there
> is no way to detect when controller is ready to receive something (i.e. is
> attached).
> 
> 
>> Certainly, an interesting issue with the MyNewt HCI firmware would be the
>> order in which the UART gets initialized and when the LL is initialized. In
>> the end, I dont think it should really matter, as the host should have to
>> deal with the controller not being ready to receive the HCI Reset.
>> 
> 
> My understanding of spec section I mentioned is that controller should be
> always ready to receive HCI Reset after power up. If it is not, then flow
> control on transport layer should not be enabled.
> 
> 
>> Here are the basic scenarios and what I would expect:
>> 
>> 1. Controller powers up first and host is not powered or not ready
>> * Controller issues NOOP but host does not see it.
>> * Host wakes up and sends HCI Reset.
>> * Host gets Command Complete (with proper opcode) and all is well
>> 
> 
> Agree.
> 
> 2. Host powers up first and controller powers up some time later
>> * Host sends HCI Reset but gets no response.
>> * Host sits in a loop, sending HCI Resets periodically.
>> * If Host gets a NOOP, it knows that the controller has powered up. In
>> this case, the host should issue HCI Reset and should get a Command
>> Complete.

Creating branch for 1.0.0 beta2 release

2017-02-01 Thread will sanfilippo
Hello:

Just a heads up. I am going to create the 1.0.0 beta 2 release branch.


Re: interrupt latency in mynewt

2017-01-28 Thread will sanfilippo
Jiacheng:

How are you measuring the latency? I presume you have a scope on a GPIO input 
and maybe set a GPIO high when you are inside the ISR and measure the time 
between them? Or are you measuring the timing using a task? There is certainly 
some hard limitation on interrupt response time but I am not sure what that is 
for the nrf52 specifically. If you tell me exactly how you are measuring the 
timing, what tasks you have running and their respective priorities, I might be 
able to hazard a guess as to why there are differences. I would also like to 
know what interrupts are enabled and their priorities.
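
For the scope method, a rough sketch of what I mean using the hal_gpio API
(pin numbers are placeholders; drive TRIG_PIN from your signal source and
put the scope on both pins):

#include "hal/hal_gpio.h"

#define TRIG_PIN   11   /* externally driven trigger; placeholder */
#define PROBE_PIN  12   /* toggled from the ISR; placeholder */

static void
gpio_latency_isr(void *arg)
{
    /* First thing in the ISR: raise the probe pin. The scope measures
     * the delay between the trigger edge and this edge. */
    hal_gpio_write(PROBE_PIN, 1);
    /* ... normal interrupt work ... */
    hal_gpio_write(PROBE_PIN, 0);
}

static void
latency_probe_init(void)
{
    hal_gpio_init_out(PROBE_PIN, 0);
    hal_gpio_irq_init(TRIG_PIN, gpio_latency_isr, NULL,
                      HAL_GPIO_TRIG_RISING, HAL_GPIO_PULL_NONE);
    hal_gpio_irq_enable(TRIG_PIN);
}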


> On Jan 27, 2017, at 6:38 PM, WangJiacheng  wrote:
> 
> Hi,
> 
> I have an interrupt triggered by a GPIO input, and I have observed different 
> interrupt latency depending on the CPU state. If all the tasks are asleep, the 
> interrupt latency is about 20us-30us; if the CPU is in idle mode from a simple 
> call to “__WFI()”, the latency is about 10us-15us; and if the CPU is running, 
> the latency can be within 8us.
> 
> I do the test as following, create a low priority task with 3 case:
> 
> 1), the task loop is like
> while (1){
>   /* keep the task in sleep mode, the interrupt will be 20us-30us */
>os_time_delay(OS_TICKS_PER_SEC);
> }
> 
> 2). the task loop is like
> while (1){
>   /* put the CPU in idle mode by simply calling WFI; the latency will be 
> 10us-15us */
>    __WFI();
> }
> 
> 3). the task loop is like
> while (1){
>   /* keep the CPU always running, the interrupt will be within 8us */
>   os_cputime_delay_usecs(100);
> }
> 
> Any idea how to reduce the interrupt latency when all tasks are in sleep mode? 
> Or is there a hard limit on the interrupt response time?
> 
> Thanks,
> 
> Jiacheng



Re: os_time_delay in milliseconds / microseconds

2017-01-26 Thread will sanfilippo
os_cputime_delay_ticks does not put the task to sleep; it was meant for short 
blocking delays. The nrf_delay_ms() function doesnt put the task to sleep 
either so I am not sure why you are seeing a difference between the two. 
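
To make the distinction concrete, a small sketch (the 5 ms and 50 us values
are arbitrary examples):

/* Sleep the calling task for ~5 ms, rounding up to whole ticks. The
 * achievable resolution is bounded by OS_TICKS_PER_SEC. */
os_time_delay((5 * OS_TICKS_PER_SEC + 999) / 1000);

/* Busy-wait ~50 us. The CPU spins and the task does not yield, so use
 * this only for short delays. */
os_cputime_delay_usecs(50);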

> On Jan 26, 2017, at 6:03 AM, then yon  wrote:
> 
> Dear Jiacheng,
> 
> Thanks for your reply.
> 
> When I used the os_cputime_delay_ticks() function it caused my app to hang, 
> and it never goes into the idle state.
> 
> I found the solution by using the nrf_delay_ms from nordic sdk.
> 
> Thank you.
> 
> Regards,
> 
> Then Yoong Ze
> 
> 
> On 26/1/2017 7:41 PM, WangJiacheng wrote:
>> Hi Then,
>> 
>> The OS time tick resolution is defined by OS_TICKS_PER_SEC.
>> 
>> If you want higher time resolution, use CPU time. The default CPU time tick 
>> is 1 microsecond; the function os_cputime_delay_ticks() should be used.
>> 
>> Moreover, you can change the CPU time frequency by changing CLOCK_FREQ and 
>> OS_CPUTIME_FREQ in syscfg.yml.
>> 
>> Jiacheng
>> 
>> 
>> 
>>> 在 2017年1月26日,16:00,then yon  写道:
>>> 
>>> Dear Support,
>>> 
>>> I'm working on a timing-critical app, but os_time_delay didn't give me 
>>> precise timing.
>>> 
>>> Currently the min delay i can get is more than 2ms with os_time_delay 
>>> function.
>>> 
>>> I notice that the clock time has up to microsecond precision; but 
>>> how do I make a delay with that?
>>> 
>>> Thank you.
>>> 
>>> Regards,
>>> 
>>> Then Yoong Ze
>> .
>> 
> 



Re: Scheduling time of Nimble stack

2017-01-25 Thread will sanfilippo
Well, things still might work even at 10-20msecs. All depends on the timing of 
the connection event in relation to the interrupts. You have to miss a number 
of connection events for a connection to drop. Will be interesting to see how 
it performs in those circumstances.

> On Jan 24, 2017, at 11:35 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
> 
> Thanks, Will,
> 
> Yes, my semaphore code has a problem when running. I have removed the code 
> that releases the semaphore, and use a "goto" to check the free time again 
> after my task wakes up.
> 
> The interrupt frequency depends on the phone's status. For a standby phone, 
> there will be an interrupt every 30s; this is not a big issue since 30s is 
> quite a long time. However, for an active phone (such as when making a call), 
> there will be several interrupts separated by only 10ms-20ms, and this will 
> cause BLE connections to fail. I will continue to work on this issue.
> 
> Best Regards,
> 
> Jiacheng
> 
> 
> 
>> 在 2017年1月25日,14:36,will sanfilippo <wi...@runtime.io> 写道:
>> 
>> Jiacheng
>> 
>> 1) Sorry about not converting msecs to os time ticks. Good catch!
>> 2) I understand using a semaphore to wake up a task but looking at the exact 
>> code you have shown, I dont understand why the task would release the 
>> semaphore in this case. Doesnt the interrupt release the semaphore?
>> 3) Blocking interrupts. If you block for 600-700 usecs you will cause 
>> failures in the underlying BLE stack. These wont be “catastrophic” (at 
>> least, I dont think so) but it can cause you to miss things like connection 
>> events, scan requests/responses, advertising events, etc. If your high 
>> priority interrupt fires off frequently you could possibly cause connections 
>> to fail. If you do it occasionally you should be ok.
>> 
>>> On Jan 24, 2017, at 5:08 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
>>> 
>>> Thanks, Will, you help me  a lot.
>>> 
>>> Since my task is triggered by a semaphore, and the semaphore is released by 
>>> another interrupt routine,  so if my task have no enough time to running 
>>> and go to sleep, after wake up, it will release the semaphore again. 
>>> Another minor change is time unit conversion (ms -> OS tick) by function 
>>> os_time_ms_to_ticks(). 
>>> 
>>> The main body of my task will like
>>> //
>>> while (1) 
>>> {
>>>  t = os_sched_get_current_task();
>>>  assert(t->t_func == phone_command_read_handler);
>>> 
>>>  /* Wait for semaphore from ISR */
>>>  err = os_sem_pend(&g_phone_command_read_sem, OS_TIMEOUT_NEVER);
>>>  assert(err == OS_OK);
>>> 
>>>> time_till_next = ll_eventq_free_time_from_now();
>>>> if (time_till_next > X) {
>>>>/* Take control of transceiver and do what you want */
>>>> } else {
>>>>/* Delay task until LL services event. This assumes time_till_next is 
>>>> not negative. */
>>>>os_delay = os_cputime_ticks_to_usecs(time_till_next);
>>>>os_time_delay(os_time_ms_to_ticks((os_delay + 999) / 1000));
>>>  
>>>  /* Release the semaphore after wake up  */
>>> err = os_sem_release(&g_phone_command_read_sem);
>>> assert(err == OS_OK);
>>> 
>>>> }
>>> }
>>> //
>>> 
>>> I will test if this can work. BTW, current test results show there will be 
>>> an event collision between 2 stacks about 3~4 hours running.
>>> 
>>> I have a question about using interrupt disable, How long can the LL task 
>>> be blocked by interrupt disable? The high priority interrupt of Nordic’s 
>>> SoftDevice can be blocked only within 10us. I have an interrupt with most 
>>> high priority, it will take 600us~700us, is it safe to block LL task and 
>>> other interrupt such as Nimble Radio and OS time tick during this time?
>>> 
>>> Best Regards,
>>> 
>>> Jiacheng 
>>> 
>>> 
>>> 
>>>> 在 2017年1月25日,00:37,will sanfilippo <wi...@runtime.io> 写道:
>>>> 
>>>> Jiacheng:
>>>> 
>>>> Given that your task is lower in priority than the LL task, you are going 
>>>> to run into issues if you dont either disable interrupts or prevent the LL 
>>

Re: Scheduling time of Nimble stack

2017-01-24 Thread will sanfilippo
Jiacheng

1) Sorry about not converting msecs to os time ticks. Good catch!
2) I understand using a semaphore to wake up a task but looking at the exact 
code you have shown, I dont understand why the task would release the semaphore 
in this case. Doesnt the interrupt release the semaphore?
3) Blocking interrupts. If you block for 600-700 usecs you will cause failures 
in the underlying BLE stack. These wont be “catastrophic” (at least, I dont 
think so) but it can cause you to miss things like connection events, scan 
requests/responses, advertising events, etc. If your high priority interrupt 
fires off frequently you could possibly cause connections to fail. If you do it 
occasionally you should be ok.

> On Jan 24, 2017, at 5:08 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
> 
> Thanks, Will, you help me  a lot.
> 
> Since my task is triggered by a semaphore, and the semaphore is released by 
> another interrupt routine,  so if my task have no enough time to running and 
> go to sleep, after wake up, it will release the semaphore again. Another 
> minor change is time unit conversion (ms -> OS tick) by function 
> os_time_ms_to_ticks(). 
> 
> The main body of my task will like
> //
> while (1) 
> {
>t = os_sched_get_current_task();
>assert(t->t_func == phone_command_read_handler);
>   
>/* Wait for semaphore from ISR */
>    err = os_sem_pend(&g_phone_command_read_sem, OS_TIMEOUT_NEVER);
>assert(err == OS_OK);
> 
>> time_till_next = ll_eventq_free_time_from_now();
>> if (time_till_next > X) {
>>  /* Take control of transceiver and do what you want */
>> } else {
>>  /* Delay task until LL services event. This assumes time_till_next is 
>> not negative. */
>>  os_delay = os_cputime_ticks_to_usecs(time_till_next);
>>  os_time_delay(os_time_ms_to_ticks((os_delay + 999) / 1000));
>
>/* Release the semaphore after wake up  */
>   err = os_sem_release(&g_phone_command_read_sem);
>   assert(err == OS_OK);
> 
>> }
> }
> //
> 
> I will test if this can work. BTW, current test results show there will be an 
> event collision between 2 stacks about 3~4 hours running.
> 
> I have a question about using interrupt disable, How long can the LL task be 
> blocked by interrupt disable? The high priority interrupt of Nordic’s 
> SoftDevice can be blocked only within 10us. I have an interrupt with most 
> high priority, it will take 600us~700us, is it safe to block LL task and 
> other interrupt such as Nimble Radio and OS time tick during this time?
> 
> Best Regards,
> 
> Jiacheng 
>   
> 
> 
>> 在 2017年1月25日,00:37,will sanfilippo <wi...@runtime.io> 写道:
>> 
>> Jiacheng:
>> 
>> Given that your task is lower in priority than the LL task, you are going to 
>> run into issues if you dont either disable interrupts or prevent the LL task 
>> from running. Using interrupt disable as an example (since this is easy), 
>> you would do this. The code below is a function that returns the time till 
>> the next event.:
>> 
>> os_sr_t sr;
>> uint32_t time_now;
>> int32_t time_free;
>> 
>> time_free = 1;
>> OS_ENTER_CRITICAL(sr);
>> time_now = os_cputime_get32();
>> sch = TAILQ_FIRST(&g_ble_ll_sched_q);
>> if (sch) {
>>   time_free = (int32_t)(sch->start_time - time_now);
>> }
>> OS_EXIT_CRITICAL(sr);
>> 
>> /* 
>> * NOTE: if time_free < 0 it means that you have to wait since the LL task
>> * should be waking up and servicing that event soon.
>> */
>> return time_free;
>> 
>> Given that you are in control of what the LL is doing with your app, I guess 
>> you could do something like this in your task;
>> 
>> time_till_next = ll_eventq_free_time_from_now();
>> if (time_till_next > X) {
>>  /* Take control of transceiver and do what you want */
>> } else {
>>  /* Delay task until LL services event. This assumes time_till_next is 
>> not negative. */
>>  os_delay = os_cputime_ticks_to_usecs(time_till_next);
>>  os_time_delay((os_delay + 999) / 1000);
>> }
>> 
>> So the problem with the above code, and also with the code you have below is 
>> something I mentioned previously. If you check the sched queue and there is 
>> nothing on it, you might think you have time, but in reality you dont 
>> because the LL has pulled the item off the schedu

Re: NimBLE host advertising data API

2017-01-24 Thread will sanfilippo
I am not sure I have any intelligent comments on this, but that has never 
stopped me from commenting in the past, so…

I think a byte buffer interface is fine as long as you have helper functions to 
create that buffer. Having folks have to figure out how to create an 
advertisement without any helper functions would be a bad idea (imho).

I am not sure I completely understand your example re:Tx Power Level. Would 
this field still get added by the host or would there be a helper function that 
a developer could call to add the Tx Power Level field to the advertisement?
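
For context, the fields-based API being discussed looks roughly like this in
application code today (the device name and flags are arbitrary examples):

struct ble_hs_adv_fields fields;
int rc;

memset(&fields, 0, sizeof fields);
fields.flags = BLE_HS_ADV_F_DISC_GEN | BLE_HS_ADV_F_BREDR_UNSUP;
fields.name = (uint8_t *)"nimble-demo";
fields.name_len = 11;
fields.name_is_complete = 1;

rc = ble_gap_adv_set_fields(&fields);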


> On Jan 24, 2017, at 11:45 AM, Christopher Collins  wrote:
> 
> Hello all,
> 
> I've mentioned this before - I really don't care for the advertising
> data API that we ended up with:
> http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_adv_set_fields/
> 
> I think we should change this API now before the 1.0 release.  I was
> wondering what others think.
> 
> The current API is high-level and is relatively easy to use, but
> requires a lot of code space and RAM.  I think a function which just
> takes a raw byte buffer (or mbuf) would be much better.  Then, there
> could be a helper function which converts an instance of `struct
> ble_hs_adv_fields` to a raw byte buffer.
> 
> A simple peripheral that always advertises the same data shouldn't be
> burdened with the ble_hs_adv_fields API.
> 
> There is actually a rationale as to why the API is the way it is today.
> There are a few fields in the advertisement data that the host can be
> configured to fill in automatically:
>* Flags (contains advertising type).
>* TX Power Level
> 
> I figured it would be safer if these values got calculated when
> advertising is initiated.  This is impractical if the host were handed a
> byte buffer rather than a series of fields.
> 
> Under the new proposal, the application would need to specify the
> correct advertising type when building the byte buffer, and the tx power
> level would be queried before the advertising procedure is actually
> started.  I don't think this will be a problem in practice, and I think
> the benefits in code size and RAM usage outweigh the API burden.
> 
> All thoughts welcome.
> 
> Thanks,
> Chris



Re: [ATTENTION] incubator-mynewt-core git commit: os; spin up OS before calling. main() gets called in context of main task.

2017-01-24 Thread will sanfilippo
So you are saying that there will still be well-defined places where things get 
initialized and that there will be defined ranges for these stages? For example:

0 - 99 Before os_init() is called.
100-199 in os_init() after os_init() code executes
200-299: in os_start() somewhere

Realize that the above are just examples and not meant to be the actual ranges 
or actual places where we initialize.


> On Jan 23, 2017, at 9:03 PM, Sterling Hughes 
> <sterling.hughes.pub...@gmail.com> wrote:
> 
> Also, one other thing to look at with the new sysinit changes.  I think we 
> probably need to revise the ordering on device initialization.
> 
> Right now device init has the following:
> 
> /*
> * Initialization order, defines when a device should be initialized
> * by the Mynewt kernel.
> *
> */
> #define OS_DEV_INIT_PRIMARY   (1)
> #define OS_DEV_INIT_SECONDARY (2)
> #define OS_DEV_INIT_KERNEL(3)
> 
> #define OS_DEV_INIT_F_CRITICAL (1 << 0)
> 
> 
> #define OS_DEV_INIT_PRIO_DEFAULT (0xff)
> 
> And these stages are called:
> 
> In os_init():  PRIMARY, SECONDARY
> In os_start(): KERNEL
> 
> I think it makes sense to more clearly map these stages to the new sparsely 
> designed sysinit stages, and add device init hooks throughout the system 
> startup.
> 
> Given the new sparse IDs, I’m thinking that we could do it per-ID range, i.e. 
> os_dev_initializeall(100), os_dev_initializeall(200), etc.  Within that 
> range, devices could be initialized by priority.
> 
> Thoughts?
> 
> Sterling
> 
> On 23 Jan 2017, at 19:12, Jacob Rosenthal wrote:
> 
>> Looks like this breaks splitty as app, bleprph as loader
>> Error: Syscfg ambiguities detected:
>>Setting: OS_MAIN_TASK_PRIO, Packages: [apps/bleprph, apps/splitty]
>> Setting history (newest -> oldest):
>>OS_MAIN_TASK_PRIO: [apps/splitty:10, apps/bleprph:1, kernel/os:0xfe]
>> 
>> Setting OS_MAIN_TASK_PRIO in splitty to 1 made this go away..but Dont know
>> if theres other complications related to that though.Then it gets stuck
>> after confirming image and resetting while entering the app image at
>> gcc_startup_nrf51.s Default_Handler
>> 
>> On Mon, Jan 23, 2017 at 4:48 PM, marko kiiskila <ma...@runtime.io> wrote:
>> 
>>> I pushed this change to develop.
>>> 
>>> You’ll need to update the newt tool as part of this change; as sysinit
>>> calls should not include call to os_init() anymore.
>>> 
>>> After this change you can specify multiple calls to be made to your package
>>> from sysinit().
>>> Tell newt to do this by having this kind of block in your pkg.yml.
>>> 
>>> pkg.init:
>>>ble_hs_init: 200
>>>ble_hs_init2: 500
>>> 
>>> I.e. in pkg.init block specify function name followed by call order.
>>> 
>>> And app main() should minimally look like:
>>> 
>>> int
>>> main(int argc, char **argv)
>>> {
>>> #ifdef ARCH_sim
>>>mcu_sim_parse_args(argc, argv);
>>> #endif
>>> 
>>>sysinit();
>>> 
>>>while (1) {
>>>os_eventq_run(os_eventq_dflt_get());
>>>}
>>>assert(0);
>>> 
>>>    return 0;
>>> }
>>> 
>>> So there’s a call to mcu_sim_parse_args() (in case app can execute in
>>> simulator),
>>> call to sysinit(), which calls all the package init routines, followed by
>>> this main task
>>> calling os_eventq_run() for default task.
>>> 
>>> I might also want to lock the scheduler for the duration of call to
>>> sysinit();
>>> but we don’t have that facility yet. This might be a good time to add it?
>>> 
>>>> On Jan 21, 2017, at 9:00 AM, will sanfilippo <wi...@runtime.io> wrote:
>>>> 
>>>> +1 sounds good to me. I dont think the amount of changes to the app are
>>> all that many and folks should be able to deal with them pretty easily.
>>>> 
>>>> 
>>>>> On Jan 20, 2017, at 1:35 PM, Sterling Hughes <
>>> sterling.hughes.pub...@gmail.com> wrote:
>>>>> 
>>>>> Hey,
>>>>> 
>>>>> Changed the subject to call this out to more people.  :-)
>>>>> 
>>>>> Response above, because I generally think this is on the right track.
>>> In my view, we should bite the bullet prior to 1.0, and move to this
>>> approach.  I think it greatly simplifies startup, and the concept of the
>>> default event queue now ties into their being a defaul

Re: Scheduling time of Nimble stack

2017-01-24 Thread will sanfilippo
> while (ll_eventq_free_time_from_now( ) < 1)
> {
>  /* just loop to wait the free time slot > 1 CPU time ticks */
>  /* Nimble events have higher task priority, will keep on running */
>  if (time_out)
>   {
> return(1);
>   }
> }
> 
> /** my event require 1 CPU time ticks run here **/
> 
> //
> 
> Does this make sense?
> 
> Thanks,
> 
> Jiacheng
> 
> 
> 
> 
>> 在 2017年1月24日,14:25,WangJiacheng <jiacheng.w...@icloud.com> 写道:
>> 
>> Thanks, Will,
>> 
>> It seems I can not get the things  work by the simple way. I just want to 
>> find out a free time slot at high level to access PHY resource such as CPU 
>> and radio RF exclusively. With your explain, I should interleave my events 
>> into BLE events at low level in the same schedule queue.
>> 
>> Best Regards,
>> 
>> Jiacheng
>> 
>> 
>>> 在 2017年1月24日,13:48,will sanfilippo <wi...@runtime.io> 写道:
>>> 
>>> Jiacheng:
>>> 
>>> First thing with the code excerpt below: TAILQ_FIRST always gives you the 
>>> head of the queue. To iterate through all the queue elements you would use 
>>> TAILQ_FOREACH() or you would modify the code to get the next element using 
>>> TAILQ_NEXT. I would just use TAILQ_FOREACH. There is an example of this in 
>>> ble_ll_sched.c.
>>> 
>>> Some other things to note about scheduler queue:
>>> 1) It is possible for items to be on the queue that have already expired. 
>>> That means that the current cputime might have passed sch->start_time. 
>>> Depending on how you want to deal with things, you are might be  better off 
>>> doing a signed 32-bit subtract when calculating time_tmp.
>>> 2) You are not taking into account the end time of the scheduled event. The 
>>> event starts at sch->start_time and ends at sch->end_time. Well, if all you 
>>> care about is the time till the next event you wont have to worry about the 
>>> end time of the event, but if you want to iterate through the schedule, the 
>>> time between events is the start time of event N minus the end time of 
>>> event N - 1.
>>> 3) When an event is executed it is removed from the scheduler queue. Thus, 
>>> if you asynchronously look at the first item in the scheduler queue and 
>>> compare it to the time now you have to be aware that an event might be 
>>> running and that the nimble stack is using the PHY. This could also cause 
>>> you to think that nothing is going to be done in the future, but when the 
>>> scheduled event is over that item gets rescheduled and might get put back 
>>> in the scheduler queue (see #4, below).
>>> 4) Events in the scheduler queue appear only once. This is not an issue if 
>>> you are only looking at the first item on the queue, but if you iterate 
>>> through the queue this could affect you. For example, say there are two 
>>> items on the queue (item 1 is at head, item 2 is next and is last). You see 
>>> that the gap between the two events is 400 milliseconds (I just made that 
>>> number up). When item 1 is executed and done, that event will get 
>>> rescheduled. So lets say item 1 is a periodic event that occurs every 100 
>>> msecs. Item 1 will get rescheduled causing you to really only have 100 
>>> msecs between events.
>>> 5) The “end_time” of the scheduled item may not be the true end time of the 
>>> underlying event. When scheduling connections we schedule them for some 
>>> fixed amount of time. This is done to guarantee that all connections get a 
>>> place in the scheduler queue. When the schedule item executes at 
>>> “start_time” and the item is a connection event, the connection code will 
>>> keep the current connection going past the “end_time” of the scheduled 
>>> event if there is more data to be sent and the next scheduled item wont be 
>>> missed. So you may think you have a gap between scheduled events when in 
>>> reality the underlying code is still running.
>>> 6) For better or worse, scanning events are not on the scheduler queue; 
>>> they are dealt with in an entirely different manner. This means that the 
>>> underlying PHY could be used when there is nothing on the schedule queue.
>>> 
>>> I have an idea of what you are trying to do and it might end up being a bit 
>>> tricky given the current code implementation. You may be better served 
>>> adding an item to the schedu

Re: Scheduling time of Nimble stack

2017-01-23 Thread will sanfilippo
Jiacheng:

First thing with the code excerpt below: TAILQ_FIRST always gives you the head 
of the queue. To iterate through all the queue elements you would use 
TAILQ_FOREACH() or you would modify the code to get the next element using 
TAILQ_NEXT. I would just use TAILQ_FOREACH. There is an example of this in 
ble_ll_sched.c.

Some other things to note about scheduler queue:
1) It is possible for items to be on the queue that have already expired. That 
means that the current cputime might have passed sch->start_time. Depending on 
how you want to deal with things, you might be better off doing a signed 
32-bit subtract when calculating time_tmp.
2) You are not taking into account the end time of the scheduled event. The 
event starts at sch->start_time and ends at sch->end_time. Well, if all you 
care about is the time till the next event you wont have to worry about the end 
time of the event, but if you want to iterate through the schedule, the time 
between events is the start time of event N minus the end time of event N - 1.
3) When an event is executed it is removed from the scheduler queue. Thus, if 
you asynchronously look at the first item in the scheduler queue and compare it 
to the time now you have to be aware that an event might be running and that 
the nimble stack is using the PHY. This could also cause you to think that 
nothing is going to be done in the future, but when the scheduled event is over 
that item gets rescheduled and might get put back in the scheduler queue (see 
#4, below).
4) Events in the scheduler queue appear only once. This is not an issue if you 
are only looking at the first item on the queue, but if you iterate through the 
queue this could affect you. For example, say there are two items on the queue 
(item 1 is at head, item 2 is next and is last). You see that the gap between 
the two events is 400 milliseconds (I just made that number up). When item 1 is 
executed and done, that event will get rescheduled. So lets say item 1 is a 
periodic event that occurs every 100 msecs. Item 1 will get rescheduled causing 
you to really only have 100 msecs between events.
5) The “end_time” of the scheduled item may not be the true end time of the 
underlying event. When scheduling connections we schedule them for some fixed 
amount of time. This is done to guarantee that all connections get a place in 
the scheduler queue. When the schedule item executes at “start_time” and the 
item is a connection event, the connection code will keep the current 
connection going past the “end_time” of the scheduled event if there is more 
data to be sent and the next scheduled item wont be missed. So you may think 
you have a gap between scheduled events when in reality the underlying code is 
still running.
6) For better or worse, scanning events are not on the scheduler queue; they 
are dealt with in an entirely different manner. This means that the underlying 
PHY could be used when there is nothing on the schedule queue.

I have an idea of what you are trying to do and it might end up being a bit 
tricky given the current code implementation. You may be better served adding 
an item to the schedule queue but it all depends on how you want to prioritize 
BLE activity with what you want to do.

Will
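
A sketch of the iteration from points 1 and 2, assuming the queue entry
field of struct ble_ll_sched_item is named "link" as in ble_ll_sched.h
(double-check your tree), and using a signed subtract so already-expired
items show up as negative gaps:

#include <stdint.h>

struct ble_ll_sched_item *sch;
os_sr_t sr;
uint32_t now;
int32_t gap;
int32_t min_gap = INT32_MAX;

OS_ENTER_CRITICAL(sr);
now = os_cputime_get32();
TAILQ_FOREACH(sch, &g_ble_ll_sched_q, link) {
    gap = (int32_t)(sch->start_time - now);
    if (gap < min_gap) {
        min_gap = gap;
    }
}
OS_EXIT_CRITICAL(sr);
/* min_gap < 0 means an item is already due (or running); treat as busy. */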

> On Jan 23, 2017, at 8:56 PM, WangJiacheng  wrote:
> 
> Hi, 
> 
> I’m trying to find out a free time slot between Nimble scheduled events.
> 
> I try to go through  all items on the schedule queue  global variable 
> “g_ble_ll_sched_q” to find out all the scheduled LL events near future, 
> function as
> //
> uint32_t ll_eventq_free_time_from_now(void)
> {
>  struct ble_ll_sched_item *sch;
>  uint32_t cpu_time_now;
>  uint32_t time_free;
>  uint32_t time_tmp;
>   
>  time_free = 10;
>  cpu_time_now = os_cputime_get32();
> 
>  /* Look through schedule queue */
>  while ((sch = TAILQ_FIRST(&g_ble_ll_sched_q)) != NULL)
>  {
>time_tmp = sch->start_time - cpu_time_now;
>if  (time_tmp < time_free)
>{
>   time_free = time_tmp;
>}
>  }
>   
>  return (time_free);
> }
> //
> 
> Does above function make sense to find out the free time at any given time 
> point? or any suggestion to find out the free time slot between LL events?
> 
> 
> Thanks,
> 
> Jiacheng
> 



Re: [RFC] endianness API cleanup

2017-01-23 Thread will sanfilippo
Szymon:

Indeed, those endianness macros were put in ble.h because they were 
non-standard and acted on a buffer as opposed to just swapping bytes. 
Internally (quite some time ago) we debated using packed structures for PDU 
protocol elements and we just never ended up deciding on what to do throughout 
the code. We did figure if we went the packed structure route the macros used 
(htole16) would get replaced with ones that just byte swap (if needed).

I looked over the changes and they look good to me. With these changes we 
should also go through the code and use packed structures elsewhere. This will 
definitely save a bunch of code as there will be no swapping since the protocol 
and host are little endian.

I think there are also macros in the host for endianness-related functions. Not 
sure if they have been renamed/replaced.
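
For reference, the proposed raw-buffer helpers are tiny; a minimal sketch of
the le16 pair (the actual signatures live in the branch linked below):

#include <stdint.h>

/* Write/read a 16-bit value to/from a little-endian byte buffer,
 * independent of host endianness and alignment. */
static inline void
put_le16(void *buf, uint16_t x)
{
    uint8_t *p = buf;

    p[0] = (uint8_t)x;
    p[1] = (uint8_t)(x >> 8);
}

static inline uint16_t
get_le16(const void *buf)
{
    const uint8_t *p = buf;

    return (uint16_t)(p[0] | ((uint16_t)p[1] << 8));
}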


> On Jan 23, 2017, at 8:34 AM, Szymon Janc  wrote:
> 
> Hi,
> 
> While lurking in code I noticed that endianness APIs in Mynewt
> are bit strange and scattered around:
> - htole16, htobe16 etc are defined in "nimble/ble.h"
> - above mentioned functions have signatures different than same named
>  functions normally defined in endian.h
> 
> So to clean those up I propose following:
> - rename functions existing in ble.h to put_le16, get_le16 etc which are
>   intended for use on raw byte buffer
> - move those to endian.h
> - add standard htole16 etc definitions in endian.h
> 
> Some open points:
> 1) there are two functions in ble.h
> void swap_in_place(void *buf, int len);
> void swap_buf(uint8_t *dst, const uint8_t *src, int len);
>   that I also moved to endian.h for time being but I think that eventually
>   we should have "os/misc.h" (or utils.h) for such helpers
> 
> 2) I had to wrap macros in endian.h into #ifndef-endif since tests seem
>   to be including both os/ and system includes resulting in macro redefined
>   error
> 
> Code implementing above is available at [1].
> 
> Comments are welcome.
> 
> 
> [1] https://github.com/sjanc/incubator-mynewt-core/commits/endianness
> 
> -- 
> pozdrawiam
> Szymon K. Janc



Re: [RFC] Reducing size of BLE Security Manager

2017-01-20 Thread will sanfilippo
I have mixed feelings about packed structures. For processors that cannot 
handle unaligned accesses I have always found that they increased code size. 
Every access of an element in that structure needs code to determine the 
alignment of that element. Sure, they save RAM, so if that is what you want 
then fine, but code size? When you did this code size comparison did you do it 
on a processor that handles unaligned access? This can also impact the speed at 
which the code runs although that is rarely an issue.

About reducing copies. I am sure you know this, but folks should be careful 
doing something like mystruct = (struct mystruct *)om->om_data. You are not 
guaranteed that the data is contiguous so you better m_pullup first.
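
Concretely, the safe pattern is to pull the bytes you are about to cast into
the first buffer with os_mbuf_pullup(); a sketch (the struct name here is
illustrative, and note that os_mbuf_pullup() frees the chain on failure):

struct ble_sm_pair_cmd *cmd;    /* illustrative PDU struct */

om = os_mbuf_pullup(om, sizeof *cmd);
if (om == NULL) {
    return BLE_HS_ENOMEM;       /* chain too short or out of mbufs */
}
cmd = (struct ble_sm_pair_cmd *)om->om_data;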

The controller does byte-by-byte copies and does not use packed structs. If we 
find that they generally save code space we can modify that code as well.

> On Jan 20, 2017, at 8:21 AM, Christopher Collins  wrote:
> 
> Hi Szymon,
> 
> On Fri, Jan 20, 2017 at 10:21:16AM +0100, Szymon Janc wrote:
>> Hi,
>> 
>> I was recently looking on how we could reduce size of SM code.
>> So my proposal is to change the way PDUs are parsed and constructed.
>> 
>> Instead of having ble_sm_foo_parse(), ble_sm_foo_write() and ble_sm_foo_tx()
>> for parsing and constructing PDU byte by byte we could use packed structures
>> for describing PDU and let compiler figure out details related to
>> unaligned access.
> [...]
> 
> I think that's a great idea.  The ATT code does something similar,
> though there is probably more work to be done there.  In my opinion,
> using packed structs for parsing and encoding doesn't just reduce code
> size, it also simplifies the code.
> 
> Chris



Re: MBUF sizing for the bluetooth stack

2017-01-20 Thread will sanfilippo
Simon:

I think you are pretty much correct; generally you are better off with smaller 
size mbufs. However, there are cases where larger mbufs are better (for 
example, when a very large portion of your data packets is large).

> On Jan 19, 2017, at 11:57 PM, Simon Ratner  wrote:
> 
> Thanks Chris,
> 
> It appears to me that there is questionable benefit to having mbufs sized
> larger than the largest L2CAP fragment size (plus overhead), i.e. the 80
> bytes that Will mentioned. Is that a reasonable statement, or am I missing
> something?
> 
> For incoming data, you always waste memory with larger mbufs, and for
> outgoing data the host will take longer to free the memory (since you can't
> free the payload mbuf until the last fragment, as opposed to freeing
> smaller mbufs as you go), and you don't save on the number of copies in the
> host. You will save something on mbuf allocations and mbuf header overhead
> in the app as you are generating the payload, though.
> 
> When allocating mbufs for the payload, is there something I should do to
> reserve enough leading space for the ACL header to make sure host doesn't
> need to re-allocate it?
> 
> Also, at least in theory, it sounds like you could size mbufs to match the
> fragment exactly -- or pre-fragment the mbuf chain as you are generating
> the payload -- and have zero copies in the host. Could be useful in a
> low-memory situation, if the host was smart enough to take advantage of
> that?
> 
> 
> 
> 
> On Thu, Jan 19, 2017 at 11:13 AM, Christopher Collins 
> wrote:
> 
>> On Thu, Jan 19, 2017 at 10:57:58AM -0800, Christopher Collins wrote:
>>> On Thu, Jan 19, 2017 at 03:46:49AM -0800, Simon Ratner wrote:
 A related question: how does this map to large ATT_MTU and fragmented
 packets at the L2CAP level (assuming no data length extension)? Does
>> each
 fragment get its own mbuf, which are then chained together, or does the
 entire packet get reassembled into a single mbuf if there is room?
>>> 
>>> If the host needs to send a large packet, it packs the payload into an
>>> mbuf chain.  By "packs," I mean each buffer holds as much data as
>>> possible with no regard to the maximum L2CAP fragment size.
>>> 
>>> When the host sends an L2CAP fragment, it splits the fragment payload
>>> off from the front of the mbuf chain, constructs an ACL data packet, and
>>> sends it to the controller.  If the buffer at the front of the chain can be
>>> freed now that data has been removed, the host frees it.
>>> 
>>> If you are interested, the function which handles fragmentation and
>>> freeing is mem_split_frag() (util/mem/src/mem.c).
>> 
>> I rushed this response a bit, and there are some important details I
>> neglected.
>> 
>> * For the final L2CAP fragment in a packet, the host doesn't
>> do any allocating or copying.  Instead, it just prepends an ACL data
>> header to the mbuf chain and sends it to the controller.
>> 
>> * For all L2CAP fragments *other than the last*, the host allocates an
>> additional mbuf chain to hold the ACL data packet.  The host then copies
>> the fragment data into this new chain, sends it, and frees buffers from
>> the front of the original chain if possible.  The number of buffers that
>> get allocated for the fragment depends on how the maximum L2CAP fragment
>> size compares to the msys mbuf size.  If an msys mbuf buffer has
>> sufficient capacity for a maximum size L2CAP fragment, then only one
>> buffer will get allocated.  If the mbuf capacity is less, the chain that
>> gets allocated will consist of multiple buffers.
>> 
>> * An L2CAP fragment mbuf chain contains the following:
>>* mbuf pkthdr   (8 bytes)
>>* HCI ACL data header   (4 bytes)
>>* Basic L2CAP header(4 bytes)
>>* Payload   (varies)
>> 
>> * For incoming data, the host does not do any packing.  Each L2CAP
>> fragment is simply chained together.
>> 



Re: MBUF sizing for the bluetooth stack

2017-01-19 Thread will sanfilippo
That is a good question. I should let Chris answer this one as he knows for 
sure. I suspect you will have a chain of mbufs but I would have to look over 
the code to be sure.


> On Jan 19, 2017, at 3:46 AM, Simon Ratner <si...@proxy.co> wrote:
> 
> Hi Will,
> 
> A related question: how does this map to large ATT_MTU and fragmented
> packets at the L2CAP level (assuming no data length extension)? Does each
> fragment get its own mbuf, which are then chained together, or does the
> entire packet get reassembled into a single mbuf if there is room?
> 
> 
> 
> On Wed, Jan 11, 2017 at 4:57 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Yes; 76 or 80. Note that I have not actually tested with 80 byte mbuf
>> blocks. That is the theory though :-)
>> 
>>> On Jan 11, 2017, at 4:31 PM, Simon Ratner <si...@proxy.co> wrote:
>>> 
>>> Got it; by minimum size you mean the 76/80 bytes?
>>> 
>>> On Wed, Jan 11, 2017 at 4:17 PM, will sanfilippo <wi...@runtime.io>
>> wrote:
>>> 
>>>> Well, yes, there are “definitions” for these things. They are in various
>>>> places but they are there. Using them might get a bit tricky as you have
>>>> mentioned; not sure. You would have to make sure the right header files
>> get
>>>> included in the proper places...
>>>> 
>>>> Anyway, here are the definitions:
>>>> os mbuf header: sizeof(struct os_mbuf). Size = 16
>>>> os mbuf packet header: sizeof(struct os_mbuf_pkthdr) Size = 8
>>>> user header: sizeof(struct ble_mbuf_hdr) Size = 8 or 12
>>>> The HCI ACL data header: BLE_HCI_DATA_HDR_SZ. 4 bytes
>>>> The LL PDU header: BLE_LL_PDU_HDR_LEN. 2 bytes
>>>> 
>>>> I would always make the size a multiple of 4 but the code should do that
>>>> for you; I just like to do it so the size you see in the syscfg
>> variable is
>>>> the actual memory block size.
>>>> 
>>>> Another thing I should mention: you should never add a buffer pool to
>> msys
>>>> smaller than the minimum size I mentioned if you are using the
>> controller.
>>>> This is something we will address in the future but for now it would be
>>>> bad. :-)
>>>> 
>>>> 
>>>> 
>>>>> On Jan 11, 2017, at 3:49 PM, Simon Ratner <si...@proxy.co> wrote:
>>>>> 
>>>>> Thanks for the detailed write-up, Will - very useful.
>>>>> 
>>>>> Are there defines for these things?
>>>>> Ideally, if I want a payload size of N, I'd like to specify in
>>>> syscfg.yml:
>>>>> 
>>>>>  MSYS_1_BLOCK_SIZE: '(N + MBUF_HEADER + PKT_HEADER + LL_OVERHEAD +
>>>> ...)'
>>>>> 
>>>>> And magically have optimally-sized buffers.
>>>>> 
>>>>> 
>>>>> On Wed, Jan 11, 2017 at 11:00 AM, will sanfilippo <wi...@runtime.io>
>>>> wrote:
>>>>> 
>>>>>> Hello:
>>>>>> 
>>>>>> Since this has come up on a number of different occasions I wanted to
>>>> send
>>>>>> out an email which discusses how the nimble stack uses mbufs. This
>> will
>>>> be
>>>>>> a controller-centric discussion but the concepts apply to the host as
>>>> well.
>>>>>> 
>>>>>> A quick refresher on mbufs: Mynewt, and the nimble stack, use mbufs
>> for
>>>>>> networking stack packet data. A “packet” is simply a chain of mbufs
>> with
>>>>>> the first mbuf in the chain being a packet header mbuf and all others
>>>> being
>>>>>> “normal” mbufs. A packet header mbuf contains a mbuf header, a packet
>>>>>> header and an optional user-defined header.
>>>>>> 
>>>>>> The length of the packet (i.e. all the data contained in all the mbuf
>>>>>> chains) is stored in the packet header. Each individual mbuf in the
>>>> chain
>>>>>> also contains a length which is the length of the data in that mbuf.
>> The
>>>>>> sum of all the mbuf data lengths = length of packet.
>>>>>> 
>>>>>> The amount of overhead in an mbuf and its size determine the amount of
>>>>>> data that can be carried in a mbuf. All mbufs have a 16-byte mbuf
>>>> header.
>>>>>> Packet header mbufs have an additional 8 bytes for the packet header
>>>>>> structure and an optional user-data header.

Re: sys/stats and sys/log

2017-01-17 Thread will sanfilippo
I think the stub approach is fine as well.

> On Jan 17, 2017, at 1:43 PM, Kevin Townsend  wrote:
> 
> I don't have any issues with the stub approach myself, and it's easy to 
> switch back and forth (no more work than changing syscfg.yml)
> 
> 
> On 17/01/17 22:07, marko kiiskila wrote:
>> Hi,
>> 
>> at the moment it is not very easy to get rid of all code
>> related to logging and/or statistics.
>> I ran across this when trying to see how small I can
>> make an image while keeping BLE and OIC.
>> 
>> Therefore, I was going to create stub packages for
>> sys/stats and sys/log.
>> 
>> Then, within the app you can choose between a stub or
>> an actual implementation. We have this same model for
>> picking up implementation of console.
>> 
>> Alternative would be to make syscfg knobs for these.
>> However, I think I prefer the stub packages, I believe
>> that will make the code easier to read (less #ifdef's).
>> 
>> What do you guys think?
> 
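
For illustration, selecting between the two would look the same as picking a
console implementation does today, from the app's pkg.yml (the package paths
here are a sketch of the proposal, not necessarily the final names):

    pkg.deps:
        - "@apache-mynewt-core/sys/log/full"      # or: sys/log/stub
        - "@apache-mynewt-core/sys/stats/full"    # or: sys/stats/stub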



Re: Bluetooth specification question after seeing Android 7.1.1 disconnect

2017-01-17 Thread will sanfilippo
I am not sure which phone I was using; I think it was a Nexus 6P. And yeah, I 
shouldn't have said “Android” when I was mentioning the bug. I have used other 
Android phones and they don't have this issue. Well, I have used one other 
Android phone (I think it was a Nexus 5x) and there was no issue.

Regarding the proposed fix. I agree that the spec does not mention what to do 
when a LL_REJECT_IND (or REJECT_IND_EXT) is received (outside of the control 
procedures where use of REJECT_IND is expected). The spec is quite clear in 
other areas though; for example, a Data Length Update procedure ends only when 
a LL_LENGTH_RSP is received or LL_UNKNOWN_RSP is received.

This might just be me, but I really dislike adding work-arounds for what are 
pretty clearly bugs when the work-around itself violates the spec in other 
areas. I also "worry" that there might be other unintended consequences of 
doing this. For example, say the nimble controller issues a connection update 
and the peer responds with LL_REJECT_IND. We cancel the procedure, but the peer 
accepts the connection update anyway (which would cause a supervision timeout).

I wonder if there is a work-around that would fix this particular issue with 
this controller without violating the spec in other areas? Don't get me wrong; 
I think your idea is very reasonable and makes sense, especially if you have 
encountered this with other devices.


> On Jan 17, 2017, at 2:12 AM, Andrzej Kaczmarek 
> <andrzej.kaczma...@codecoup.pl> wrote:
> 
> Hi Will,
> 
> On Tue, Jan 17, 2017 at 5:48 AM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Hello:
>> 
>> Was wondering if there were any folks out there that could comment on
>> something regarding a disconnect issue with an Android Phone running 7.1.1
>> and our bluetooth stack (the controller).
>> 
> 
> Which phone do you use? Android has only the host stack (Bluedroid), so this is
> likely specific to the controller used in the particular phone - I've seen similar
> problems when testing another controller and some "generic" Chinese phones.
> 
> 
>> 
>> What appears to be happening is this:
>> 
>> * Nimble wants to do Data Length Extension and enqueues a LL_LENGTH_REQ
>> when a connection gets created. Nimble is a peripheral btw.
>> * The Android controller wants to do a feature exchange so it enqueues a
>> LL_FEATURE_REQ.
>> * Android controller sends the LL_FEATURE_REQ.
>> * Nimble controller sends a LL_LENGTH_REQ.
>> * Once the nimble controller succeeds in sending the LL_LENGTH_REQ, it
>> sends the LL_FEATURE_RSP.
>> * Android responds with a LL_REJECT_IND with error code 0x24 LMP PDU not
>> allowed.
>> 
> 
> IIRC this is the same as I've seen (even the error code is the same) -
> don't have logs now though...
> 
> 
>> * Android resends the LL_FEATURE_REQ.
>> * Nimble responds with LL_FEATURE_RSP.
>> * Android sends LL_LENGTH_REQ
>> * Nimble controller sends LL_LENGTH_RSP.
>> * All goes fine until nimble controller times out due to a failed LL
>> control procedure: the nimble stack never received a LL_LENGTH_RSP.
>> 
>> NOTE: from the above it is hard to say why the Android controller sent the
>> LL_REJECT_IND. Basically, it appears that the LL_LENGTH_REQ messed up the
>> Android controller as the Android controller was expecting a LL_FEATURE_RSP.
>> 
>> My questions are the following:
>> * I think this is a bug on the part of the Android controller. The
>> specification allows for non-real time response to control PDU’s and it is
>> quite possible that a controller starts a procedure “at the same time” that
>> the remote controller starts a procedure. What I would have expected is
>> that the Android controller should have responded to the LL_LENGTH_REQ with
>> a LL_LENGTH_RSP. Eventually, the Android controller gets the LL_FEATURE_RSP
>> and all should have been fine. Do folks agree with this?
>> * A controller should not use a LL_REJECT_IND as a generic response when a
>> controller sends something unexpected. The LL_REJECT_IND is only used
>> during encryption procedures, connection parameter request update
>> procedures and in a couple of cases where there are Control Procedure
>> collisions. Note that the scenario described above is NOT one of the
>> Control Procedure collisions mentioned in the specification.
>> 
> 
> I agree, this is clearly an issue on the peer side - there is no procedure
> collision here since both the length update and the feature request can be
> handled at the same time. However, I think what Nimble should do here is
> remove the transaction once LL_REJECT_IND is received.
> 
> I know specification does use LL_REJECT_IND explicitly only in case o

Bluetooth specification question after seeing Android 7.1.1 disconnect

2017-01-16 Thread will sanfilippo
Hello:

Was wondering if there were any folks out there that could comment on something 
regarding a disconnect issue with an Android Phone running 7.1.1 and our 
bluetooth stack (the controller).

What appears to be happening is this: 

* Nimble wants to do Data Length Extension and enqueues a LL_LENGTH_REQ when a 
connection gets created. Nimble is a peripheral btw.
* The Android controller wants to do a feature exchange so it enqueues a 
LL_FEATURE_REQ.
* Android controller sends the LL_FEATURE_REQ.
* Nimble controller sends a LL_LENGTH_REQ.
* Once the nimble controller succeeds in sending the LL_LENGTH_REQ, it sends 
the LL_FEATURE_RSP.
* Android responds with a LL_REJECT_IND with error code 0x24 LMP PDU not 
allowed.
* Android resends the LL_FEATURE_REQ.
* Nimble responds with LL_FEATURE_RSP.
* Android sends LL_LENGTH_REQ
* Nimble controller sends LL_LENGTH_RSP.
* All goes fine until nimble controller times out due to a failed LL control 
procedure: the nimble stack never received a LL_LENGTH_RSP.

NOTE: from the above it is hard to say why the Android controller sent the 
LL_REJECT_IND. Basically, it appears that the LL_LENGTH_REQ messed up the 
Android controller as the Android controller was expecting a LL_FEATURE_RSP.

My questions are the following:
* I think this is a bug on the part of the Android controller. The 
specification allows for non-real time response to control PDU’s and it is 
quite possible that a controller starts a procedure “at the same time” that the 
remote controller starts a procedure. What I would have expected is that the 
Android controller should have responded to the LL_LENGTH_REQ with a 
LL_LENGTH_RSP. Eventually, the Android controller gets the LL_FEATURE_RSP and 
all should have been fine. Do folks agree with this?
* A controller should not use a LL_REJECT_IND as a generic response when a 
controller sends something unexpected. The LL_REJECT_IND is only used during 
encryption procedures, connection parameter request update procedures and in a 
couple of cases where there are Control Procedure collisions. Note that the 
scenario described above is NOT one of the Control Procedure collisions 
mentioned in the specification.

Thanks!




Re: stopping scan & adv in bleprph example

2017-01-16 Thread will sanfilippo
Yes, Mynewt works the same way as FreeRTOS in this respect. Well, at least in 
the way you are describing FreeRTOS. We have a tickless OS and when we decide 
to go to sleep we are waiting for an interrupt to wake us up.

Regarding the radio: there are some registers that are only programmed once, so 
if you switch to your own custom RF stack and you want to switch back to 
bluetooth, you would either have to write some custom code or reset the link 
layer. There is an API to do this but I am not sure if it is accessible to the 
application developer.


> On Jan 16, 2017, at 5:08 PM, Lm Chew <lm.c...@free2move.se> wrote:
> 
> Hi Chris,
> 
> Thanks for the reply.
> 
> So calling ble_gap_adv_stop and ble_gap_disc_cancel will stop all radio 
> activity, is that correct?
> 
> Is it safe to modify the radio settings (on the physical layer, just like in 
> ble_phy) after just calling these functions?
> 
> Hi Will,
> 
> It's not exactly "system off" that I am looking for.
> Previously I was using FreeRTOS tickless mode, where the MCU will remain in 
> sleep mode most of the time unless there is a task to perform.
> 
> I am asking this because in the bleprph example I don't see any function 
> being called to put the MCU to sleep.
> 
> Does mynewt OS work the same way as FreeRTOS?
> 
> Best Regards,
> Chew
> 
> 
> 
> 
> 
> On Tue, Jan 17, 2017 at 1:57am, will sanfilippo 
> <wi...@runtime.io<mailto:wi...@runtime.io>> wrote:
> 
> If by deep sleep you mean "system off" mode requiring some form of wakeup, it 
> is currently not implemented. You would have to hook that in yourself.
> 
>> On Jan 16, 2017, at 9:22 AM, Christopher Collins <ccoll...@apache.org> wrote:
>> 
>> Hi Chew,
>> 
>> On Mon, Jan 16, 2017 at 11:33:23AM +, Lm Chew wrote:
>>> Hi,
>>> 
>>> How do I stop the scan &  adv in the bleprph example.
>>> 
>>> I tried calling the ble_ll_scan_sm_stop(1) and  ble_ll_adv_stop in my app, 
>>> but I am still able to see the device on my phone when I perform a scan.
>> 
>> To stop advertising, call: ble_gap_adv_stop()
>> (http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_adv_stop/)
>> 
>> For BLE operations, an application should only use the host interface.
>> Functions with the "ble_ll" prefix are defined by the controller, not
>> the host, so your application should not call them.
>> 
>> Regarding scanning- the bleprph app doesn't perform any scanning, so
>> there is no need to stop scanning.  This application only implements the
>> peripheral role, so operations like scanning and initiating a connection
>> are not compiled in.  However, if you have a different app which does
>> support scanning, you would stop the scan procedure by calling
>> ble_gap_disc_cancel()
>> (http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_disc_cancel/)
>> 
>>> I am trying to switch between my custom RF stack and the nimble BT stack. So I 
>>> need to disable nimble operation before running my custom RF stack.
>>> And once I am done with what I need using the custom RF stack, I will switch 
>>> back to nimble.
>>> 
>>> Another question: how do you put the MCU into deep sleep while using the 
>>> nimble stack? In the example the MCU does not go into deep sleep.
>> 
>> Sorry, I am not sure about this one.  I am not sure this is actually
>> supported yet, but I'll let someone more knowledgable chime in.
>> 
>> Chris
> 
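
In code form, the host-level shutdown Chris describes amounts to (a sketch;
both calls are part of the public GAP API and return 0 on success):

    /* Quiesce NimBLE radio activity via the host API before handing
     * the radio to the custom stack. */
    rc = ble_gap_adv_stop();        /* stop advertising */
    rc = ble_gap_disc_cancel();     /* cancel an in-progress scan */

If the corresponding operation is not actually in progress, each call should
fail harmlessly (BLE_HS_EALREADY), so it is safe to issue both. Will's caveat
still applies: this stops host-initiated activity, but returning from a custom
RF stack to BLE may additionally require resetting the link layer.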



Re: stopping scan & adv in bleprph example

2017-01-16 Thread will sanfilippo
If by deep sleep you mean “system off” mode requiring some form of wakeup, it 
is currently not implemented. You would have to hook that in yourself.

> On Jan 16, 2017, at 9:22 AM, Christopher Collins  wrote:
> 
> Hi Chew,
> 
> On Mon, Jan 16, 2017 at 11:33:23AM +, Lm Chew wrote:
>> Hi,
>> 
>> How do I stop the scan &  adv in the bleprph example.
>> 
>> I tried calling the ble_ll_scan_sm_stop(1) and  ble_ll_adv_stop in my app, 
>> but I am still able to see the device on my phone when I perform a scan.
> 
> To stop advertising, call: ble_gap_adv_stop()
> (http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_adv_stop/)
> 
> For BLE operations, an application should only use the host interface.
> Functions with the "ble_ll" prefix are defined by the controller, not
> the host, so your application should not call them.
> 
> Regarding scanning- the bleprph app doesn't perform any scanning, so
> there is no need to stop scanning.  This application only implements the
> peripheral role, so operations like scanning and initiating a connection
> are not compiled in.  However, if you have a different app which does
> support scanning, you would stop the scan procedure by calling
> ble_gap_disc_cancel()
> (http://mynewt.apache.org/latest/network/ble/ble_hs/ble_gap/functions/ble_gap_disc_cancel/)
> 
>> I am trying to switch between my custom RF stack and the nimble BT stack. So I 
>> need to disable nimble operation before running my custom RF stack.
>> And once I am done with what I need using the custom RF stack, I will switch 
>> back to nimble.
>> 
>> Another question: how do you put the MCU into deep sleep while using the nimble 
>> stack? In the example the MCU does not go into deep sleep.
> 
> Sorry, I am not sure about this one.  I am not sure this is actually
> supported yet, but I'll let someone more knowledgable chime in.
> 
> Chris



Re: MBUF sizing for the bluetooth stack

2017-01-11 Thread will sanfilippo
Yes; 76 or 80. Note that I have not actually tested with 80 byte mbuf blocks. 
That is the theory though :-)

> On Jan 11, 2017, at 4:31 PM, Simon Ratner <si...@proxy.co> wrote:
> 
> Got it; by minimum size you mean the 76/80 bytes?
> 
> On Wed, Jan 11, 2017 at 4:17 PM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Well, yes, there are “definitions” for these things. They are in various
>> places but they are there. Using them might get a bit tricky as you have
>> mentioned; not sure. You would have to make sure the right header files get
>> included in the proper places...
>> 
>> Anyway, here are the definitions:
>> os mbuf header: sizeof(struct os_mbuf). Size = 16
>> os mbuf packet header: sizeof(struct os_mbuf_pkthdr) Size = 8
>> user header: sizeof(struct ble_mbuf_hdr) Size = 8 or 12
>> The HCI ACL data header: BLE_HCI_DATA_HDR_SZ. 4 bytes
>> The LL PDU header: BLE_LL_PDU_HDR_LEN. 2 bytes
>> 
>> I would always make the size a multiple of 4 but the code should do that
>> for you; I just like to do it so the size you see in the syscfg variable is
>> the actual memory block size.
>> 
>> Another thing I should mention: you should never add a buffer pool to msys
>> smaller than the minimum size I mentioned if you are using the controller.
>> This is something we will address in the future but for now it would be
>> bad. :-)
>> 
>> 
>> 
>>> On Jan 11, 2017, at 3:49 PM, Simon Ratner <si...@proxy.co> wrote:
>>> 
>>> Thanks for the detailed write-up, Will - very useful.
>>> 
>>> Are there defines for these things?
>>> Ideally, if I want a payload size of N, I'd like to specify in
>> syscfg.yml:
>>> 
>>>   MSYS_1_BLOCK_SIZE: '(N + MBUF_HEADER + PKT_HEADER + LL_OVERHEAD +
>> ...)'
>>> 
>>> And magically have optimally-sized buffers.
>>> 
>>> 
>>> On Wed, Jan 11, 2017 at 11:00 AM, will sanfilippo <wi...@runtime.io>
>> wrote:
>>> 
>>>> Hello:
>>>> 
>>>> Since this has come up on a number of different occasions I wanted to
>> send
>>>> out an email which discusses how the nimble stack uses mbufs. This will
>> be
>>>> a controller-centric discussion but the concepts apply to the host as
>> well.
>>>> 
>>>> A quick refresher on mbufs: Mynewt, and the nimble stack, use mbufs for
>>>> networking stack packet data. A “packet” is simply a chain of mbufs with
>>>> the first mbuf in the chain being a packet header mbuf and all others
>> being
>>>> “normal” mbufs. A packet header mbuf contains a mbuf header, a packet
>>>> header and an optional user-defined header.
>>>> 
>>>> The length of the packet (i.e. all the data contained in all the mbuf
>>>> chains) is stored in the packet header. Each individual mbuf in the
>> chain
>>>> also contains a length which is the length of the data in that mbuf. The
>>>> sum of all the mbuf data lengths = length of packet.
>>>> 
>>>> The amount of overhead in an mbuf and its size determine the amount of
>>>> data that can be carried in a mbuf. All mbufs have a 16-byte mbuf
>> header.
>>>> Packet header mbufs have an additional 8 bytes for the packet header
>>>> structure and an optional user-data header. The nimble stack uses
>> either an
>>>> 8-byte or 12-byte user data header. If you turn on multi-advertising
>>>> support, the user header is 12 bytes; otherwise it is 8 bytes. This
>> means
>>>> the total packet header mbuf overhead is 32 or 36 bytes.
>>>> 
>>>> The total mbuf size is defined by the various MSYS_X_BLOCK_SIZE syscfg
>>>> variables. Currently, there is one mbuf pool added to msys (MSYS_1)
>> with a
>>>> block size of 292 bytes.
>>>> 
>>>> Controller constraints:
>>>> The controller assumes that a certain minimum data size is available in
>> a
>>>> packet header mbuf. This size is equal to the largest advertising PDU,
>> or
>>>> 37 bytes, and must also contain the 2-byte LL PDU header (for a total
>> of 39
>>>> bytes). Additionally, the controller requires an additional 4 bytes at
>> the
>>>> start of the packet header mbuf to prepend the HCI ACL data packet
>> header.
>>>> This means that the minimum mbuf size that can be allocated in any msys
>>>> mbuf pool is: packet header overhead + 4 + 39 = 75 (79 for multi-adv).
>>>>

Re: MBUF sizing for the bluetooth stack

2017-01-11 Thread will sanfilippo
Well, yes, there are “definitions” for these things. They are in various places 
but they are there. Using them might get a bit tricky as you have mentioned; 
not sure. You would have to make sure the right header files get included in 
the proper places...

Anyway, here are the definitions:
os mbuf header: sizeof(struct os_mbuf). Size = 16
os mbuf packet header: sizeof(struct os_mbuf_pkthdr) Size = 8
user header: sizeof(struct ble_mbuf_hdr) Size = 8 or 12
The HCI ACL data header: BLE_HCI_DATA_HDR_SZ. 4 bytes
The LL PDU header: BLE_LL_PDU_HDR_LEN. 2 bytes

I would always make the size a multiple of 4 but the code should do that for 
you; I just like to do it so the size you see in the syscfg variable is the 
actual memory block size.
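
Putting the listed sizes together (the two macro names for the totals below
are made up for illustration; the sizeofs and header defines are the ones
above):

    /* Packet header mbuf overhead: 16 + 8 + (8 or 12) = 32 or 36 bytes. */
    #define PKTHDR_OVERHEAD      (sizeof(struct os_mbuf) +         \
                                  sizeof(struct os_mbuf_pkthdr) +  \
                                  sizeof(struct ble_mbuf_hdr))

    /* Room for the ACL header, the LL PDU header and the largest
     * advertising PDU (37 bytes): 32/36 + 4 + 2 + 37 = 75 or 79;
     * rounded up to a 4-byte boundary that is 76 or 80. */
    #define MIN_MSYS_BLOCK_SIZE  (PKTHDR_OVERHEAD +                \
                                  BLE_HCI_DATA_HDR_SZ +            \
                                  BLE_LL_PDU_HDR_LEN + 37)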

Another thing I should mention: you should never add a buffer pool to msys 
smaller than the minimum size I mentioned if you are using the controller. This 
is something we will address in the future but for now it would be bad. :-)



> On Jan 11, 2017, at 3:49 PM, Simon Ratner <si...@proxy.co> wrote:
> 
> Thanks for the detailed write-up, Will - very useful.
> 
> Are there defines for these things?
> Ideally, if I want a payload size of N, I'd like to specify in syscfg.yml:
> 
>MSYS_1_BLOCK_SIZE: '(N + MBUF_HEADER + PKT_HEADER + LL_OVERHEAD + ...)'
> 
> And magically have optimally-sized buffers.
> 
> 
> On Wed, Jan 11, 2017 at 11:00 AM, will sanfilippo <wi...@runtime.io> wrote:
> 
>> Hello:
>> 
>> Since this has come up on a number of different occasions I wanted to send
>> out an email which discusses how the nimble stack uses mbufs. This will be
>> a controller-centric discussion but the concepts apply to the host as well.
>> 
>> A quick refresher on mbufs: Mynewt, and the nimble stack, use mbufs for
>> networking stack packet data. A “packet” is simply a chain of mbufs with
>> the first mbuf in the chain being a packet header mbuf and all others being
>> “normal” mbufs. A packet header mbuf contains a mbuf header, a packet
>> header and an optional user-defined header.
>> 
>> The length of the packet (i.e. all the data contained in all the mbuf
>> chains) is stored in the packet header. Each individual mbuf in the chain
>> also contains a length which is the length of the data in that mbuf. The
>> sum of all the mbuf data lengths = length of packet.
>> 
>> The amount of overhead in an mbuf and its size determine the amount of
>> data that can be carried in a mbuf. All mbufs have a 16-byte mbuf header.
>> Packet header mbufs have an additional 8 bytes for the packet header
>> structure and an optional user-data header. The nimble stack uses either an
>> 8-byte or 12-byte user data header. If you turn on multi-advertising
>> support, the user header is 12 bytes; otherwise it is 8 bytes. This means
>> the total packet header mbuf overhead is 32 or 36 bytes.
>> 
>> The total mbuf size is defined by the various MSYS_X_BLOCK_SIZE syscfg
>> variables. Currently, there is one mbuf pool added to msys (MSYS_1) with a
>> block size of 292 bytes.
>> 
>> Controller constraints:
>> The controller assumes that a certain minimum data size is available in a
>> packet header mbuf. This size is equal to the largest advertising PDU, or
>> 37 bytes, and must also contain the 2-byte LL PDU header (for a total of 39
>> bytes). Additionally, the controller requires an additional 4 bytes at the
>> start of the packet header mbuf to prepend the HCI ACL data packet header.
>> This means that the minimum mbuf size that can be allocated in any msys
>> mbuf pool is: packet header overhead + 4 + 39 = 75 (79 for multi-adv).
>> Since memory pools are always rounded up to the nearest 4 byte boundary,
>> this means that the minimum size should be 76 (or 80) bytes.
>> 
>> For most applications that don't use large packets, setting the mbuf size
>> to 80 should be fine, as this will accommodate the typical BLE PDU and also
>> meet the minimum requirement. If your application generally uses larger
>> packets it might be beneficial to allocate large mbufs, as you don't lose
>> the 16-byte overhead per mbuf as often.
>> 
>> Finally, here is an example of how many mbufs will be used by the
>> controller for received packets. This assumes multi-advertising enabled (36
>> byte packet header overhead).
>> 
>> Example 1: PDU length = 251, msys_1_block_size = 80
>> 
>> Controller needs to store 251 + 2 = 253 total bytes.
>> 
>> Packet header mbuf can hold 80 - 36 - 4 bytes, or 40 bytes.
>> Each additional mbuf can hold 80 - 16 bytes, or 64 bytes.
>> Total mbufs = 5. First mbuf holds 40 bytes, the next three hold 64 bytes
>> while the final mbuf holds 21 bytes (40 + 64*3 + 21 = 253).
>> 
>> Example 2: PDU length = 251, msys_1_block_size = 112
>> Total mbufs: 3 (72 + 96 + 85)
>> 
>> Hope this helps.
>> 
>> 
>> 
>> 
>> 
>> 



Data Length Extension in Nimble

2017-01-11 Thread will sanfilippo
Hello:

In order to take full advantage of Data Length Extension the user should set 
the following syscfg values to 251.

BLE_LL_MAX_PKT_SIZE
BLE_LL_CONN_INIT_MAX_TX_BYTES

Both of these values should be set to 251 in order for both transmitted and 
received PDUs to possibly be as large as 251 bytes. If only BLE_LL_MAX_PKT_SIZE 
is 251, received PDUs can be as large as 251 but transmitted PDUs will only be 
as large as BLE_LL_CONN_INIT_MAX_TX_BYTES.

Of course, the host can always change things by using the LE_Set_Data_Length 
command.
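
In a target's syscfg.yml that would be (a sketch):

    syscfg.vals:
        BLE_LL_MAX_PKT_SIZE: 251
        BLE_LL_CONN_INIT_MAX_TX_BYTES: 251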

Enjoy




Re: How to change the CPU time frequency of mynewt and nimble stack

2017-01-10 Thread will sanfilippo
Jiacheng:

OK, there were some more issues with setting a non-1MHz clock. I tested this 
with 1, 2 and 4 MHz using LightBlue (iOS). I pushed the changes to develop so 
it should work now (one hopes).


> On Jan 10, 2017, at 6:47 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
> 
> Thanks, Chris,
> 
> It’s working now.
> 
> More information about the nimble stack with a 2 MHz CPU frequency: nimble-bleprph 
> can be connected by LightBlue, but after several seconds it is disconnected 
> with the message “Disconnected Alert: The peripheral has disconnected.”  With a 4 
> MHz CPU frequency, nimble-bleprph cannot be scanned by LightBlue.
> 
> I’m trying to get higher timing resolution, to make my ISO/IEC 7816 stack 
> (UICC-terminal interface) co-exist with nimble stack. 
> 
> Best Regards,
> 
> Jiacheng
> 
> 
>> On Jan 11, 2017, at 09:26, Christopher Collins <ccoll...@apache.org> wrote:
>> 
>> Hi Jiacheng,
>> 
>> I think your version of newt is still slightly out of date.  You can
>> install the latest as follows:
>> 
>>   cd $GOPATH/src/mynewt.apache.org/newt/newt &&
>>       git checkout develop &&
>>       git pull origin develop &&
>>       go install ;
>>   cd -
>> 
>> 
>> Thanks,
>> Chris
>> 
>> On Wed, Jan 11, 2017 at 09:04:05AM +0800, WangJiacheng wrote:
>>> Sterling,
>>> 
>>> Thanks.
>>> 
>>> Yes, newt is already updated. “newt version” returns "Apache Newt 
>>> (incubating) version: 1.0.0-dev”.
>>> 
>>> Best Regards,
>>> 
>>> Jiacheng
>>> 
>>>> On Jan 11, 2017, at 08:58, Sterling Hughes <sterl...@apache.org> wrote:
>>>> 
>>>> Hi Jiacheng,
>>>> 
>>>> You need to update your newt tool along with the new develop.
>>>> 
>>>> Best,
>>>> 
>>>> Sterling
>>>> 
>>>> On 10 Jan 2017, at 16:46, WangJiacheng wrote:
>>>> 
>>>>> Hi, Will,
>>>>> 
>>>>> I need more help, I have an error message when compile the target.
>>>>> 
>>>>> I’m currently working on the release branch, so I upgraded to the dev branch by:
>>>>>  1. changing project.yml from "vers: 0-latest” to "vers: 0-dev”
>>>>>  2. upgrading to the dev branch with “newt upgrade”
>>>>> 
>>>>> Then compile the target by “newt build nrf52_boot”, an error message as:
>>>>> 
>>>>> Building target targets/nrf52_boot
>>>>> Compiling boot.c
>>>>> Archiving boot.a
>>>>> Compiling bootutil_misc.c
>>>>> Compiling image_ec.c
>>>>> Compiling image_ec256.c
>>>>> Compiling image_rsa.c
>>>>> Compiling image_validate.c
>>>>> Compiling loader.c
>>>>> Archiving bootutil.a
>>>>> Error: In file included from aes.c:29:0:
>>>>> /Users/jiachengwang/dev/myproj/repos/apache-mynewt-core/crypto/mbedtls/include/mbedtls/config.h:2522:10:
>>>>>  error: #include expects "FILENAME" or <FILENAME>
>>>>> #include MBEDTLS_USER_CONFIG_FILE
>>>>>          ^
>>>>> 
>>>>> it seems the config file "mbedtls/config_mynewt.h” defined in 
>>>>> “crypto/mbedtls/pkg.yml” is missing.
>>>>> 
>>>>> Thanks,
>>>>> 
>>>>> Jiacheng
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>> 
>>>>>> On Jan 10, 2017, at 11:06, WangJiacheng <jiacheng.w...@icloud.com> wrote:
>>>>>> 
>>>>>> Thanks, Will.
>>>>>> 
>>>>>> There is an Internet connection issue to GitHub.com currently, I’ll 
>>>>>> update the code later.
>>>>>> 
>>>>>> Best Regards,
>>>>>> 
>>>>>> Jiacheng
>>>>>> 
>>>>>> 
>>>>>>> On Jan 10, 2017, at 10:10, will sanfilippo <wi...@runtime.io> wrote:
>>>>>>> 
>>>>>>> Hello:
>>>>>>> 
>>>>>>> This issue should now be fixed in the latest development branch. Note 
>>>>>>> that this is not working on the nrf51 platforms but since you were using
>>>>>>> nrf52 it should work.

Re: How to change the CPU time frequency of mynewt and nimble stack

2017-01-09 Thread will sanfilippo
Hello:

This issue should now be fixed in the latest development branch. Note that this 
is not working on the nrf51 platforms but since you were using nrf52 it should 
work.

Let me know if you see any issues with it.


> On Jan 8, 2017, at 6:20 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
> 
> Hi, Will,
> 
> Thanks a lot for your reply.
> 
> Yes, the hardware processor clock frequency of the nRF52 (Cortex M4F) is 64 MHz 
> and cannot be changed.
> 
> The reason for changing CLOCK_FREQ is that I want to re-use the internal timing 
> mynewt already provides, with more accurate timing, by calling the function 
> "os_cputime_get32()”.  I’m trying to implement a (soft) IC card reader on the 
> nRF52 with the mynewt OS and nimble stack running.
> 
> I am also considering using an independent timer (NRF_TIMER3 or NRF_TIMER4) 
> at the cost of about 0.1 mA of current. I already use NRF_TIMER2 to provide a 4 
> MHz clock signal output from a GPIO of the nRF52. From reading the source code of 
> apache-mynewt-core, my understanding is that NRF_TIMER0 and NRF_TIMER1 are 
> already used by the mynewt OS and nimble stack; is my understanding correct?
> 
> Thanks,
> 
> Jiacheng
> 
>> On Jan 9, 2017, at 01:10, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> Those should be the only two parameters you need to configure. Must be a bug 
>> in the controller :-)
>> 
>> I think it is worthwhile to point out that CLOCK_FREQ only changes the units 
>> of os cputime; it does not affect the speed at which the processor runs. At 
>> least, I could not see any other uses of CLOCK_FREQ. So, these settings only 
>> affect the nimble stack and the controller specifically (internal controller 
>> timing).
>> 
>> I am curious why you wanted to change this variable; what were you trying to 
>> achieve?
>> 
>> Thanks for pointing this out; I will take a look to see why it is not 
>> working.
>> 
>>> On Jan 7, 2017, at 10:48 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
>>> 
>>> Hi, 
>>> 
>>> The default CPU time frequency of the Mynewt OS and Nimble stack is 1 MHz. I 
>>> tried to change the CPU time frequency to 2 MHz, so I modified the related 2 
>>> config files:
>>> configure file “hw/bsp/nrf52dk/syscfg.yml” as
>>>  CLOCK_FREQ:
>>>  description: 'TBD'
>>>  value: 2000000
>>> configure file “kernel/os/syscfg.yml” as
>>>  OS_CPUTIME_FREQ:
>>>  description: 'Frequency of os cputime'
>>>  value: 2000000
>>> 
>>> The app “bleprph" is running and the CPU time frequency is 2 MHz; the BLE 
>>> “nimble-bleprph” peripheral can also be scanned by the LightBlue iOS app, 
>>> which shows 1 service. However, when I try to connect to it, an error 
>>> message “Connection Alert: Timeout interrogating the peripheral” appears.
>>> 
>>> When I change the above 2 syscfg parameters back to 1000000, it can be 
>>> connected.
>>> 
>>> And the app “bletiny” behaves the same.
>>> 
>>> Is there any config setting missing in my test? How can I change the CPU 
>>> time frequency to 2 MHz so that the Nimble device can be connected?
>>> 
>>> Thanks,
>>> 
>>> Jiacheng 
>>> 
>>> 
>> 
> 



Re: How to change the CPU time frequency of mynewt and nimble stack

2017-01-09 Thread will sanfilippo
You should be able to do exactly what you tried to do; that was one of the 
intents with os_cputime. Hopefully I will have an answer soon regarding why 
this does not work.

Regarding timers:

The nimble stack on the nrf52 currently uses Timer 0 for cputime (and thus for 
the controller BLE timing). It either uses RTC1 or TIMER1 for the os tick, 
depending on the syscfg variable XTAL_32768 in your bsp syscfg.yml file.
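
Once the bug is fixed, the kind of measurement Jiacheng describes should reduce
to something like this (a sketch using the existing os cputime API):

    /* With CLOCK_FREQ/OS_CPUTIME_FREQ set to 2000000, one cputime
     * tick is 0.5 us. */
    uint32_t start = os_cputime_get32();
    /* ... ISO/IEC 7816 timing-sensitive work ... */
    uint32_t elapsed_us =
        os_cputime_ticks_to_usecs(os_cputime_get32() - start);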


> On Jan 8, 2017, at 6:20 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
> 
> Hi, Will,
> 
> Thanks a lot for your reply.
> 
> Yes, the hardware processor clock frequency of the nRF52 (Cortex M4F) is 64 MHz 
> and cannot be changed.
> 
> The reason for changing CLOCK_FREQ is that I want to re-use the internal timing 
> mynewt already provides, with more accurate timing, by calling the function 
> "os_cputime_get32()”.  I’m trying to implement a (soft) IC card reader on the 
> nRF52 with the mynewt OS and nimble stack running.
> 
> I am also considering using an independent timer (NRF_TIMER3 or NRF_TIMER4) 
> at the cost of about 0.1 mA of current. I already use NRF_TIMER2 to provide a 4 
> MHz clock signal output from a GPIO of the nRF52. From reading the source code of 
> apache-mynewt-core, my understanding is that NRF_TIMER0 and NRF_TIMER1 are 
> already used by the mynewt OS and nimble stack; is my understanding correct?
> 
> Thanks,
> 
> Jiacheng
> 
>> On Jan 9, 2017, at 01:10, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> Those should be the only two parameters you need to configure. Must be a bug 
>> in the controller :-)
>> 
>> I think it is worthwhile to point out that CLOCK_FREQ only changes the units 
>> of os cputime; it does not affect the speed at which the processor runs. At 
>> least, I could not see any other uses of CLOCK_FREQ. So, these settings only 
>> affect the nimble stack and the controller specifically (internal controller 
>> timing).
>> 
>> I am curious why you wanted to change this variable; what were you trying to 
>> achieve?
>> 
>> Thanks for pointing this out; I will take a look to see why it is not 
>> working.
>> 
>>> On Jan 7, 2017, at 10:48 PM, WangJiacheng <jiacheng.w...@icloud.com> wrote:
>>> 
>>> Hi, 
>>> 
>>> The default CPU time frequency of the Mynewt OS and Nimble stack is 1 MHz. I 
>>> tried to change the CPU time frequency to 2 MHz, so I modified the related 2 
>>> config files:
>>> configure file “hw/bsp/nrf52dk/syscfg.yml” as
>>>  CLOCK_FREQ:
>>>  description: 'TBD'
>>>  value: 2000000
>>> configure file “kernel/os/syscfg.yml” as
>>>  OS_CPUTIME_FREQ:
>>>  description: 'Frequency of os cputime'
>>>  value: 2000000
>>> 
>>> The app “bleprph" is running and the CPU time frequency is 2 MHz; the BLE 
>>> “nimble-bleprph” peripheral can also be scanned by the LightBlue iOS app, 
>>> which shows 1 service. However, when I try to connect to it, an error 
>>> message “Connection Alert: Timeout interrogating the peripheral” appears.
>>> 
>>> When I change the above 2 syscfg parameters back to 1000000, it can be 
>>> connected.
>>> 
>>> And the app “bletiny” behaves the same.
>>> 
>>> Is there any config setting missing in my test? How can I change the CPU 
>>> time frequency to 2 MHz so that the Nimble device can be connected?
>>> 
>>> Thanks,
>>> 
>>> Jiacheng 
>>> 
>>> 
>> 
> 



Re: How to change the CPU time frequency of mynewt and nimble stack

2017-01-08 Thread will sanfilippo
Those should be the only two parameters you need to configure. Must be a bug in 
the controller :-)

I think it is worthwhile to point out that CLOCK_FREQ only changes the units of 
os cputime; it does not affect the speed at which the processor runs. At least, 
I could not see any other uses of CLOCK_FREQ. So, these settings only affect 
the nimble stack and the controller specifically (internal controller timing).

I am curious why you wanted to change this variable; what were you trying to 
achieve?

Thanks for pointing this out; I will take a look to see why it is not working.

> On Jan 7, 2017, at 10:48 PM, WangJiacheng  wrote:
> 
> Hi, 
> 
> The default CPU time frequency of the Mynewt OS and Nimble stack is 1 MHz. I 
> tried to change the CPU time frequency to 2 MHz, so I modified the related 2 
> config files:
> configure file “hw/bsp/nrf52dk/syscfg.yml” as
>    CLOCK_FREQ:
>    description: 'TBD'
>    value: 2000000
> configure file “kernel/os/syscfg.yml” as
>    OS_CPUTIME_FREQ:
>    description: 'Frequency of os cputime'
>    value: 2000000
> 
> The app “bleprph" is running and the CPU time frequency is 2 MHz; the BLE 
> “nimble-bleprph” peripheral can also be scanned by the LightBlue iOS app, 
> which shows 1 service. However, when I try to connect to it, an error message 
> “Connection Alert: Timeout interrogating the peripheral” appears.
> 
> When I change the above 2 syscfg parameters back to 1000000, it can be connected.
> 
> And the app “bletiny” behaves the same.
> 
> Is there any config setting missing in my test? How can I change the CPU time 
> frequency to 2 MHz so that the Nimble device can be connected?
> 
> Thanks,
> 
> Jiacheng 
> 
> 



Re: [nRF52 HAL I2C] Possible 'probe' issue?

2016-12-26 Thread will sanfilippo
What I am going to say is obvious, but has anyone hooked up an I2C analyzer to 
see what is actually going on in the two cases? I don't see why the peripheral 
would act differently based on the address itself; something must be happening 
on the bus causing the peripheral to act differently.

When I was messing around with the nrf i2c peripheral I used the probe API but 
only for the address I was expecting.

The way we are programming the nrf peripheral might indeed be an issue here, but 
without knowing more about how devices act and more about the i2c protocol 
itself I don't think I can offer anything intelligent here :-)

I don't have an analyzer with me, but if no one can get hold of one and test it 
out, I will try to get something set up later in the week.


> On Dec 26, 2016, at 3:34 PM, Kevin Townsend  wrote:
> 
> And just to highlight the point, here is the results
> using 0x03 at the starting point on an identical setup:
> 
>   i2cscan
>   9116:Scanning I2C bus 0
> 0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f
>   00:  -- -- -- -- -- -- -- -- -- -- -- -- --
>   10: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
>   20: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
>   30: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
>   40: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
>   50: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
>   60: -- -- -- -- -- -- -- -- -- -- -- -- -- -- -- --
>   70: -- -- -- -- -- -- -- --
>   Found 0 devices on I2C bus 0
> 
> The peripheral shouldn't die because of a faulty addr,
> though, and it isn't a behaviour I've seen before. :)
> 
> That sounds suspiciously like a silicon level problem with
> the I2C peripheral itself, though it would need further
> testing to confirm.
> 



Re: Schedule task with strict fixed timing and variable workload

2016-12-26 Thread will sanfilippo
I think there was some discussion re: HAL PWM but I cannot quite recall the end 
result. Maybe that this would be a driver instead of a HAL? I agree; PWM is 
very commonly used so having PWM support (in either driver or HAL form) should 
be added.


> On Dec 26, 2016, at 10:26 AM, Kevin Townsend  wrote:
> 
> Hi Will,
> 
> Thanks for the feedback.
> 
>> 1) Unless you are the highest priority task, other tasks can run which could 
>> cause delays, and thus you are not waking up at the desired time.
> Yeah, everything is based on the assumption that priority is resolved by 
> design, and that the scheduling constraints are realistic and well understood 
> in the system.
>> 2) Using os_time_delay() and os_callout_reset() gives you a 1 os time tick 
>> resolution, so if your ticks are 10 msecs, you will be off by up to 9 msecs. 
>> A 1 msec ticker gets you off by up to 1 msec.
>> 
>> Using a task to post another task is a bit heavyweight; I would use a timer 
>> or a callout to do this. A timer would solve your “tick resolution” issue 
>> but would not solve the problem of a task wake up getting delayed.
> A timer would be a valid solution as well, yes. I haven't looked seriously at 
> the timer HAL yet, but I'll have a look right now and give it a try just to 
> familiarize myself with it.
> 
> I saw earlier today that there wasn't PWM support and wanted to see if that 
> was something that could be easily added to the timer HAL since it's a common 
> requirement that should probably be included (motor control, dimming control, 
> etc.). Different issue though!
>> It would not be hard to add an API to os_callout.c that could be used for 
>> this. Something like “os_callout_reset_at” or “os_callout_reset_tick” which 
>> you could pass in a specified os tick. Not sure what the API should do if 
>> you missed the os tick (return -1 and not post the task)? This would be 
>> simple code to use; just add the sample rate to the last time and call the 
>> new os_calllout API.
> Personally, I think this would make a lot of sense and solve what is bound to 
> be a common problem, and perhaps you can have a flag to define the behaviour 
> when you miss the delay. Either you return an error code, OR you fire the 
> task as soon as you can (though ideally still with a flag of some sort to 
> know you're late), but having the option between the two should solve some 
> problems.
> 
> K.



Re: Schedule task with strict fixed timing and variable workload

2016-12-26 Thread will sanfilippo
There is nothing in the OS to delay until a specific OS time or to cause the 
scheduler to periodically wake up at a certain rate. There are different ways 
to go about doing this and it really depends on what you want. Using a task to 
guarantee timing can be tricky. Some things to be aware of (which I am sure you 
all are):

1) Unless you are the highest priority task, other tasks can run which could 
cause delays, and thus you are not waking up at the desired time.
2) Using os_time_delay() and os_callout_reset() gives you a 1 os time tick 
resolution, so if your ticks are 10 msecs, you will be off by up to 9 msecs. A 
1 msec ticker gets you off by up to 1 msec.

Using a task to post another task is a bit heavyweight; I would use a timer or 
a callout to do this. A timer would solve your “tick resolution” issue but 
would not solve the problem of a task wake up getting delayed.

It would not be hard to add an API to os_callout.c that could be used for this. 
Something like “os_callout_reset_at” or “os_callout_reset_tick”, to which you 
could pass a specified os tick. Not sure what the API should do if you missed 
the os tick (return -1 and not post the task?). This would be simple code to 
use; just add the sample rate to the last time and call the new os_callout API.
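
A sketch of what such an API could look like, built on the existing
os_time_get() and os_callout_reset() (the name and the missed-tick behavior are
exactly the open questions above):

    /* Arm a callout to fire at an absolute os time tick; return -1
     * without posting anything if that tick has already passed. */
    static int
    os_callout_reset_at(struct os_callout *c, os_time_t when)
    {
        os_time_t now = os_time_get();

        if ((int32_t)(when - now) <= 0) {
            return -1;      /* missed the tick */
        }
        return os_callout_reset(c, when - now);
    }

A periodic caller would then keep an absolute next-sample tick and advance it
by the sample period on every iteration, which is essentially what FreeRTOS's
vTaskDelayUntil() does.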

Will

> On Dec 26, 2016, at 6:30 AM, Kevin Townsend  wrote:
> 
> Hi Fabio,
> 
> Thanks for the feedback and suggestion.
> 
> I didn't think to have separate sync and read tasks. It does add a decent 
> amount of code when this could be solved with one or two lines and a built in 
> helper class, but it's a workable and reliable solution today.
> 
> I also tested the code below but there are several potential problems with it:
> 
>    static void
>    blinky_task_handler(void *arg)
>    {
>        static os_time_t start;
>        static os_time_t stop;
>        hal_gpio_init_out(LED_BLINK_PIN, 1);
> 
>        while (1) {
>            /* Measure the starting tick */
>            start = os_time_get();
> 
>            /* Toggle the LED */
>            hal_gpio_toggle(LED_BLINK_PIN);
> 
>            /* Measure the ending tick */
>            stop = os_time_get();
> 
>            /* Check for overflow in tick counter */
>            if (stop >= start) {
>                os_time_delay(OS_TICKS_PER_SEC/BLINKY_RATE_HZ - (stop - start));
>            } else {
>                /* Overflow occurred */
>                os_time_delay(OS_TICKS_PER_SEC/BLINKY_RATE_HZ - stop +
>                              (UINT_MAX - start));
>            }
>        }
>    }
> 
> This doesn't account for situations where the read event might run /over/ the 
> desired rate either, which will require a check before the overflow test if 
> 'stop-start > OS_TICKS_PER_SECOND/BLINKY_RATE_HZ', at which point you 
> probably want to flag the timing overrun and maybe get a new sample right 
> away. There are likely other issues I'm not seeing as well, such as making 
> sure we don't exceed 1/2 the OS time epoch in our delay.
> 
> Having a single helper function would let you encapsulate all these edge 
> cases in one call might be useful, but perhaps someone has a suggestion about 
> the naming or some other helpers that are worth considering ... or disagrees 
> that this should be included in the core at all!?
> 
> FreeRTOS, for example, has vTaskDelayUntil(): 
> http://www.freertos.org/vtaskdelayuntil.html
> 
> K.
> 



Re: System init and OS eventq ensure

2016-12-11 Thread will sanfilippo
I guess, for no really great reason, I thought it would be weird to malloc, 
say, 1024 bytes, then free, say, 960 bytes. No weirder than what I was 
suggesting. :-) I guess there are a number of things we could do here: malloc a 
temporary stack, free that whole thing, and either do another malloc or 
change the task stack information to a bss-defined idle task stack.

I did leave out the fact that we would need to modify the stack information in 
the task structure. I cannot recall exactly the information we keep in the task 
structure for the stack but this would not be hard to do; we would just need to 
do it.

Will

> On Dec 11, 2016, at 10:55 AM, Christopher Collins <ccoll...@apache.org> wrote:
> 
> On Sun, Dec 11, 2016 at 10:11:44AM -0800, will sanfilippo wrote:
>> Personally, I keep wanting to try and have the OS start up right away.
> 
> I wonder if this could solve the problem that Sterling raised (no
> default event queue during sysinit).  The control flow in main() might
> look like this:
> 
> 1. Start OS
> 2. Create and designate default event queue.
> 3. sysinit()
> 
> I think it would be nice if we could avoid adding another initialization
> stage.
> 
>> There are definitely “issues” with this:
>> a) We do not want to waste idle task stack.
>> b) When tasks are started they would start running right away. This
>> might cause issues where a task does something to a piece of memory
>> that another task initializes, but since that other task has not
>> initialized it yet…
>> 
>> b) can be avoided by locking the scheduler until initializations are 
>> finished.
>> 
>> a) is problematic :-) I think someone brought this up before, but I
>> wonder if it is worth the effort to do something “a bit crazy” like
>> the following: the idle task uses “the heap” during intialization.
>> Once initializations are over (or at some point that we determine),
>> the idle task stack is made smaller and the “top” of the heap is set
>> to the end of the idle task stack. For example, idle task stack is at
>> 0x20008000 and is of size 1K bytes; the bottom of the heap is at
>> 0x20007000; the top of the heap is at 0x20007C00 (in my nomenclature,
>> heap allocations start from the bottom). At some point, the top of the
>> heap is moved to 0x20007F80.
>> 
>> Yeah, maybe a bit crazy… :-)
> 
> I don't think that's too crazy.  It would be great if we could just
> malloc() a temporary stack, and then free it when initialization
> completes.  I guess the worry is that this will cause heap
> fragmentation?
> 
> Chris



Re: System init and OS eventq ensure

2016-12-11 Thread will sanfilippo
Personally, I keep wanting to try and have the OS start up right away. There 
are definitely “issues” with this:
a) We do not want to waste idle task stack.
b) When tasks are started they would start running right away. This might cause 
issues where a task does something to a piece of memory that another task 
initializes, but since that other task has not initialized it yet…

b) can be avoided by locking the scheduler until initializations are finished.

a) is problematic :-) I think someone brought this up before, but I wonder if 
it is worth the effort to do something “a bit crazy” like the following: the 
idle task uses “the heap” during initialization. Once initializations are over 
(or at some point that we determine), the idle task stack is made smaller and 
the “top” of the heap is set to the end of the idle task stack. For example, 
idle task stack is at 0x20008000 and is of size 1K bytes; the bottom of the 
heap is at 0x20007000; the top of the heap is at 0x20007C00 (in my 
nomenclature, heap allocations start from the bottom). At some point, the top 
of the heap is moved to 0x20007F80.

Yeah, maybe a bit crazy… :-)


> On Dec 10, 2016, at 4:04 PM, Sterling Hughes  wrote:
> 
> Hi Chris,
> 
> On 10 Dec 2016, at 13:37, Christopher Collins wrote:
> 
>> Darn, you're right. I'm writing these emails from my phone, and I didn't
>> look at the code closely enough.  For other packages, the start event
>> only gets executed the first time the event queue gets used (as you
>> said).  I guess it has worked out in practice because the application
>> uses the package shortly after the OS starts.
>> 
> 
> Yeah, that’s what I noticed too. :-)
> 
> For now, it’s OK, I can just call my init from main() after the default event 
> queue is set.
> 
>> That's not so great.  Second stage initialization sounds good to me.
>> Alternatively, the system could keep track of packages that need an
>> event queue, and enqueue their start event when a default event queue is
>> set.  Earlier, we discussed using linker sections to accomplish this
>> without requiring any RAM. I looked into this, but concluded it wasn't
>> possible without modifying the linker scripts.
>> 
> 
> I think we should probably use this opportunity to (again) review system 
> initialization, which is fast becoming our most circularly discussed topic 
> :-) I’d throw in there whether or not to initialize components in the idle 
> task (using up stack), and making sure that we map sysinit stages to the 
> driver initialization stages as well.
> 
> As far as a proposal, what do you think of having 2 initialization stages:
> 
> - Before OS
> - After OS
> 
> And we can break each stage into primary, secondary and tertiary order, so, 
> in terms of number space we have:
> 
> - 0: first init order, before OS
> - 1: second init order, before OS
> - 2: third init order, before OS
> - 3: first init order, after OS
> - 4: second init order, after OS
> - 5: third init order, after OS
> 
> I think we probably need to modify the package system configuration to 
> specify both stage & order, e.g.
> 
> pkg.init_func_startup.name: XX
> pkg.init_func_startup.order: 0
> pkg.init_func_kernel.name: YY
> pkg.init_func_kernel.order: 1
> 
> This should allow us to hook in at either of these stages.
> 
> I also think we probably need to give meaning to at least the primary and 
> secondary init orders here, e.g. designate which services are available after 
> each of these functions and come up with some documented nomenclature for it.
> 
> Sterling
> 



Re: 255-byte MTU with iPhones

2016-12-06 Thread will sanfilippo
I have not tried it myself; I cannot recall if Chris tried it. He is currently 
on a plane but will hopefully chime in on this soon.

We do not have iphone throughput numbers but we have throughput numbers that we 
achieved using two nimble devices: we achieved single connection LL throughput 
approaching 800 kbps.

Will

> On Dec 6, 2016, at 1:00 PM, Jitesh Shah  wrote:
> 
> Hey guys,
> I was wondering whether any of y'all had success connecting nimBLE stack to
> an iPhone with 255-byte MTU size as per Bluetooth 4.2 spec.
> 
> There's prolly no point in trying anything before iPhone 7 because they
> might not have a 4.2 chip. Anyone get a chance to give that a spin with an
> iPhone 7? Have any throughput numbers you can share?
> 
> Jitesh
> 



Re: I2C Pin Setup (nRF52)

2016-12-06 Thread will sanfilippo
I think the issue here arises when folks use the Nordic SDK alongside our HAL. 
The idea was that pin configurations would live in hal_bsp.c and nowhere else. 
When we moved the Nordic SDK out, pin definitions from nrf_drv_config.h were 
moved into pkg.yml, which created some conflicts.

I have an idea, though I am not sure it is a good one. I will use the TWI 
interface as an example; I think the rest of the Nordic SDK is similar. Sorry 
to be “pedantic” here, but you know me :-)

Users can set up their own config structures and pass that config into the 
Nordic SDK initialization. For example:

Here is the initialization function for TWI. Note the *p_config parameter:

ret_code_t nrf_drv_twi_init(nrf_drv_twi_t const *p_instance,
                            nrf_drv_twi_config_t const *p_config,
                            nrf_drv_twi_evt_handler_t event_handler,
                            void *p_context);


Here is the structure:
/**
 * @brief Structure for the TWI master driver instance configuration.
 */
typedef struct
{
    uint32_t            scl;                ///< SCL pin number.
    uint32_t            sda;                ///< SDA pin number.
    nrf_twi_frequency_t frequency;          ///< TWI frequency.
    uint8_t             interrupt_priority; ///< Interrupt priority.
} nrf_drv_twi_config_t;

In hal_bsp.c, someone would create this structure with pin definitions just like 
we do for our HAL, and thus all pin definitions would live in hal_bsp.c. They 
would then pass this structure in the init call. No more pin definitions in 
pkg.yml.
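
As a sketch only (the pin numbers are examples and bsp_twi0_init() is a 
hypothetical helper; the nrf_drv_twi calls are the nRF5 SDK ones quoted above), 
it could look something like this:

/* In hal_bsp.c: keep the Nordic SDK pin config in the BSP.
 * Pin numbers are examples; bsp_twi0_init() is a hypothetical helper. */
#include <assert.h>
#include "nrf_drv_twi.h"

static const nrf_drv_twi_t bsp_twi0 = NRF_DRV_TWI_INSTANCE(0);

static const nrf_drv_twi_config_t bsp_twi0_cfg = {
    .scl                = 27,   /* example SCL pin */
    .sda                = 26,   /* example SDA pin */
    .frequency          = NRF_TWI_FREQ_100K,
    .interrupt_priority = 6,
};

void
bsp_twi0_init(void)
{
    /* NULL handler/context selects blocking (non-event) mode. */
    ret_code_t rc = nrf_drv_twi_init(&bsp_twi0, &bsp_twi0_cfg, NULL, NULL);
    assert(rc == NRF_SUCCESS);
    nrf_drv_twi_enable(&bsp_twi0);
}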

Another option, which others have mentioned, is to have a header file in the bsp 
that contains all the pin definitions. These definitions can be the same as the 
ones in nrf_drv_config.h or they can be user defined. My preference is to have 
them user-defined.
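
For example, a hypothetical bsp_pins.h (all names illustrative) might be:

/* bsp_pins.h -- hypothetical BSP-owned pin definitions */
#ifndef H_BSP_PINS_
#define H_BSP_PINS_

#define BSP_PIN_TWI0_SCL    (27)
#define BSP_PIN_TWI0_SDA    (26)

#endif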

Hopefully this makes sense...

Will


> On Dec 6, 2016, at 12:33 PM, Kevin Townsend  wrote:
> 
> Hi,
> 
> I was trying to test out an I2C chip with a previously working driver to 
> verify the HW, but it isn't clear anymore where to set pins (post Nordic SDK 
> removal). I couldn't get I2C to work after tweaking the pkg.yml file which 
> has a lot of pin defines, for example:
> 
>- '-DTWI0_CONFIG_SCL=26'
>- '-DTWI0_CONFIG_SDA=25'
> 
> After triple checking, plus resolving some conflicts where master and slave of 
> the same bus type were both set (in pkg.yml), etc., I looked at the hal_bsp.c 
> file and see they are also hard coded there:
> 
> #if MYNEWT_VAL(I2C_0)
> static const struct nrf52_hal_i2c_cfg hal_i2c_cfg = {
>     .scl_pin = 27,
>     .sda_pin = 26,
>     .i2c_frequency = 100  /* 100 kHz */
> };
> #endif
> 
> It might be easier to make hal_bsp.c use the values from pkg.yml, or really 
> any solution that avoids having the defines in two places?
> 
> * 
> https://github.com/apache/incubator-mynewt-core/blob/master/hw/bsp/nrf52dk/src/hal_bsp.c#L79
> * 
> https://github.com/apache/incubator-mynewt-core/blob/master/hw/bsp/nrf52dk/pkg.yml#L72
> 
> There are some conflicts as well; I'm guessing these two aren't possible 
> together?
> 
> * 
> https://github.com/apache/incubator-mynewt-core/blob/master/hw/bsp/nrf52dk/pkg.yml#L74
> * 
> https://github.com/apache/incubator-mynewt-core/blob/master/hw/bsp/nrf52dk/pkg.yml#L76
> 
> Not trying to be a PITA ... but it was a bit confusing changing pins but not 
> seeing the results. :)
> 
> I can send in a pull request as well if that's easier, but maybe other people 
> here have a better idea of where you want all the pins defines to be between 
> the .c file, syscfg.yml and pkg.yml?
> 
> I just couldn't get I2C to work, for example with this scanner I put together 
> which worked fine previously:
> 
> static int
> shell_i2cscan_cmd(int argc, char **argv)
> {
>     uint8_t addr;
>     int32_t timeout = OS_TICKS_PER_SEC / 10;
>     uint8_t dev_count = 0;
> 
>     console_printf("Scanning I2C bus 0\n"
>                    "     0  1  2  3  4  5  6  7  8  9  a  b  c  d  e  f\n"
>                    "00:          ");
> 
>     /* Scan all valid I2C addresses (0x03..0x77) */
>     for (addr = 0x03; addr < 0x78; addr++) {
>         int rc = hal_i2c_master_probe(0, addr, timeout);
>         /* Print an address header at the start of each row of 16 */
>         if (!(addr % 16)) {
>             console_printf("\n%02x: ", addr);
>         }
>         /* Display the address if a response was received */
>         if (!rc) {
>             console_printf("%02x ", addr);
>             dev_count++;
>         } else {
>             console_printf("-- ");
>         }
>     }
>     console_printf("\nFound %u devices on I2C bus 0\n", dev_count);
> 
>     return 0;
> }
> 
> Kevin



The nimble scheduler and multiple advertising instances

2016-12-06 Thread will sanfilippo
Hello:

Just wanted to see if folks have any comments about the following topic. This 
has arisen due to our upcoming addition of allowing for multiple advertising 
instances. It relates to the “priority” of events in the scheduler (connections 
and advertising events specifically).

To make a long story short, it is possible that when attempting to schedule an 
advertising event there is no room in the scheduler for it. For example: assume 
your advertising interval is 20 msecs. If you have a lot of current connections, 
it is possible that all of those 20 msecs are already scheduled for connection 
events.

So, something has to give. Given that we only had one advertising instance in 
the past, I made the decision that advertising is the highest priority; a 
connection event that overlapped an advertising event would get pushed off. The 
chance that this would cause a connection to fail is minuscule given that 
advertising events are randomized within a 10 msec window.

With many advertisers this decision may be a bad one; connections could fail. 
Furthermore, if you have a lot of advertisers and short intervals, a large 
percentage of the time would be spent advertising. I do realize that 
non-connectable advertising events must use an interval of at least 100 msecs. 
Still, with 10 instances you can use more than 50% of the time if the 
advertisements are large and scannable: if each advertising event (three 
channels plus scan requests/responses) takes on the order of 5 msecs, 10 
instances at 100 msec intervals consume roughly half the schedule.

So much for a long story being short :-) There are a number of ways to address 
this issue. Here are some choices to consider:

1) Connection events never get displaced by advertising events; you simply try 
to find the next possible time in the scheduler for advertising events and if 
they get pushed off indefinitely, oh well.
2) Advertising events always supplant connection events. If you cannot find a 
place in the scheduler for the advertising event, you push off the connection 
event to the next interval.
3) We modify the vendor specific HCI command so that the host can specify the 
behavior: the advertising event is more important than a connection event, or 
vice versa.
4) We come up with some “least recently used” policy. If we just serviced the 
connection event but skipped the advertising event, the next time we schedule 
things the advertising event would get precedence. Thus, for any scheduled 
event, you choose the one that was serviced furthest in the past (a rough 
sketch follows this list).
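
For option 4, a minimal sketch of the tie-break (all names hypothetical; this 
is not the actual nimble scheduler code):

/* Hypothetical LRU tie-break; none of these names come from the
 * actual nimble scheduler. */
#include <stdint.h>

struct sched_item {
    uint32_t last_serviced;     /* OS time at which this item last ran */
    /* ... start time, end time, connection/advertising type, etc ... */
};

/* Of two items contending for the same slot, pick the one serviced
 * furthest in the past. The signed subtraction keeps the comparison
 * correct across timer wrap-around. */
static struct sched_item *
sched_lru_pick(struct sched_item *a, struct sched_item *b)
{
    if ((int32_t)(a->last_serviced - b->last_serviced) <= 0) {
        return a;       /* a ran longer ago (or at the same time) */
    }
    return b;
}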

Thoughts? Any other choices that folks feel should be considered?

Thanks,
Will

PS Note that in our scheduler, scanning always has the lowest precedence. If 
the scheduler is completely full, scanning does not occur. This might be a bad 
idea as well but the chance that there is 0 time for scanning is pretty small.

PPS I am attaching a document that shows the additional, vendor specific HCI 
commands. This is an Android addition...

Re: NRF52dk ADC

2016-12-02 Thread will sanfilippo
There is a branch called “sterly_refactor” that we used when we were 
refactoring the BSP and other code. There is ADC code in there. If you check out 
that branch you can go to the apps directory and look for the sblinky app. Its 
main.c (which is where I said this stuff shouldn't be, but that was all just 
test code) should show how the ADC was used. You can't just bring all that code 
in, I suspect, as it is way out of date, but I think there is enough in there to 
give you an idea of how things are expected to work.

Be warned though: that code was all hacked in and I am not sure what state it 
was left in.


> On Dec 2, 2016, at 9:11 AM, David G. Simmons <santa...@mac.com> wrote:
> 
> Do you have some functioning code you could share? I'm really just using this 
> sensor (https://www.adafruit.com/products/1786) and reading an analog voltage 
> from it anyway. 
> 
> dg
> 
>> On Dec 2, 2016, at 12:06 PM, will sanfilippo <wi...@runtime.io> wrote:
>> 
>> I have not implemented an ADC sensor but I have used the ADC to simply get 
>> readings from an analog voltage so I at least know that the underlying ADC 
>> code works (or used to work) :-)
> 
> --
> David G. Simmons
> (919) 534-5099
> Web <https://davidgs.com/> • Blog <https://davidgs.com/davidgs_blog> • 
> Linkedin <http://linkedin.com/in/davidgsimmons> • Twitter 
> <http://twitter.com/TechEvangelist1> • GitHub <http://github.com/davidgs>


