That is how it would work if we removed setting the data rate from the port: the MAC/MLME would determine the data rate using ADR. If ADR is not enabled, the data rate comes from a default configuration and can also be modified by setting the MIB.
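For concreteness, here is roughly what that MIB path looks like. This is a minimal sketch against the Semtech LoRaMAC-node reference API (MibRequestConfirm_t, MIB_ADR, MIB_CHANNELS_DATARATE, LoRaMacMibSetRequestConfirm); the wrapper function name is made up for illustration:

#include <stdbool.h>
#include <stdint.h>
#include "LoRaMac.h"

/* Sketch: enable/disable ADR and, when ADR is off, pin the channel data rate
 * through the MIB.  With ADR on, the MAC/MLME adjusts the data rate itself
 * and this value is only a starting point. */
static void
lora_set_datarate_via_mib(bool adr_enabled, int8_t datarate)
{
    MibRequestConfirm_t mibReq;

    mibReq.Type = MIB_ADR;
    mibReq.Param.AdrEnable = adr_enabled;
    LoRaMacMibSetRequestConfirm(&mibReq);

    if (!adr_enabled) {
        mibReq.Type = MIB_CHANNELS_DATARATE;
        mibReq.Param.ChannelsDatarate = datarate;   /* e.g. DR_3 */
        LoRaMacMibSetRequestConfirm(&mibReq);
    }
}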
> On Oct 24, 2017, at 7:01 PM, marko kiiskila <[email protected]> wrote:
>
> My preference would be for this to be a system specific API, and not
> tied to application port.
> I feel this is a setting which should be owned by the entity who is
> doing other lora network management (joins/rejoins/link monitoring).
>
>> On Oct 24, 2017, at 5:08 PM, will sanfilippo <[email protected]> wrote:
>>
>> I understand your point and it is a valid concern meaning that I can see why
>> it might be nice to be able to configure different ports as you suggest.
>> However, given the current underlying lora stack, the data rate would not
>> remain constant and the application developer would still have to handle a
>> message being sent back if it was too big once the data rate was lowered out
>> from under them.
>>
>> Note that adding setting the data rate on a port basis is really an
>> implementation detail of our lora stack. The reference design allows the
>> application developer to set the port and data rate in the McpsRequest,
>> meaning on a per message basis. This got morphed into doing this on a per
>> port basis as access to the reference design's McpsRequest is “hidden” by
>> the current lora api. However, even in the reference design case, the data
>> rate used for that McpsRequest could change.
>>
>> I was considering modifying the api such that the data rate was specified in
>> lora_app_port_send() when the application wants to send a frame. However,
>> the data rate could still get changed and to me this makes it not as useful.
>> Furthermore, I still think it will be rare for the application developer to
>> dictate the data rate (but I could be wrong here). Honestly, in most cases I
>> think this adds some extra burden on the app developer to specify a data
>> rate.
>>
>> A possible consideration would be to modify the code so that the application
>> developer can specify the data rate on either a per port or per message
>> basis if they desire, and that the message would always be sent at that data
>> rate.
>>
>>> On Oct 24, 2017, at 3:56 PM, Christopher Collins <[email protected]> wrote:
>>>
>>> On Tue, Oct 24, 2017 at 01:56:22PM -0700, will sanfilippo wrote:
>>>> Hello:
>>>>
>>>> I would like to propose some changes to the lora api and I want to see if
>>>> folks had any comments or issues with the following proposal.
>>>
>>> [...]
>>>
>>> It sounds reasonable to me. There is just one thing that stuck out for
>>> me:
>>>
>>>> 4) Setting the data rate on a per-port basis is just overkill. I cannot
>>>> imagine the application wanting to change the data rate on a per port
>>>> basis.
>>>
>>> The application likely contains several packages not written by the
>>> application developer. If any of those packages use LoRa, and they have
>>> specific data rate requirements, per-port configuration might actually
>>> make sense. If a package needs to send packets of a particular size,
>>> for example, then it might have minimum data rate restrictions. I'm not
>>> very familiar with LoRa, so I don't know if this is a valid concern or
>>> not.
>>>
>>> Chris
>>
>
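For reference, the per-message knob being discussed above is the Datarate field in the reference design's MCPS request. A sketch, assuming the Semtech LoRaMAC-node structure layout (the helper name here is made up); note the MAC can still lower the data rate out from under the caller, which is the concern raised in the thread:

#include <stdint.h>
#include "LoRaMac.h"

/* Sketch: the reference design takes the port and data rate per frame in the
 * MCPS request.  The MAC may still override the requested data rate (ADR,
 * regional limits), so the value is not guaranteed. */
static LoRaMacStatus_t
send_unconfirmed(uint8_t port, int8_t datarate, void *buf, uint16_t len)
{
    McpsReq_t mcpsReq;

    mcpsReq.Type = MCPS_UNCONFIRMED;
    mcpsReq.Req.Unconfirmed.fPort = port;
    mcpsReq.Req.Unconfirmed.fBuffer = buf;
    mcpsReq.Req.Unconfirmed.fBufferSize = len;
    mcpsReq.Req.Unconfirmed.Datarate = datarate;

    return LoRaMacMcpsRequest(&mcpsReq);
}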
