Re: [PATCH v6 1/8] interconnect: Add generic on-chip interconnect API

2018-07-20 Thread Georgi Djakov
Hi Alexandre,

On 07/11/2018 07:21 PM, Alexandre Bailon wrote:
> On 07/09/2018 05:50 PM, Georgi Djakov wrote:
>> This patch introduces a new API to get requirements and configure the
>> interconnect buses across the entire chipset to fit with the current
>> demand.
>>
>> The API is using a consumer/provider-based model, where the providers are
>> the interconnect buses and the consumers could be various drivers.
>> The consumers request interconnect resources (path) between endpoints and
>> set the desired constraints on this data flow path. The providers receive
>> requests from consumers and aggregate these requests for all master-slave
>> pairs on that path. Then the providers configure each participating in the
>> topology node according to the requested data flow path, physical links and
>> constraints. The topology could be complicated and multi-tiered and is SoC
>> specific.
>>
>> Signed-off-by: Georgi Djakov 
>> ---
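
For context, a minimal sketch of how a consumer driver might use the API described above. It assumes the icc_get()/icc_set()/icc_put() entry points declared in include/linux/interconnect.h by this series, an icc_get(dev, src_id, dst_id) style constructor and hypothetical endpoint IDs; the exact prototypes are whatever the patch itself defines.

#include <linux/device.h>
#include <linux/err.h>
#include <linux/interconnect.h>

/* hypothetical endpoint IDs provided by the platform */
#define MASTER_EXAMPLE_ID       1
#define SLAVE_EXAMPLE_ID        2

static struct icc_path *example_path;

static int example_probe(struct device *dev)
{
        int ret;

        /* request a path between the two endpoints */
        example_path = icc_get(dev, MASTER_EXAMPLE_ID, SLAVE_EXAMPLE_ID);
        if (IS_ERR(example_path))
                return PTR_ERR(example_path);

        /* ask for 1000000 kbps average and 2000000 kbps peak on that path */
        ret = icc_set(example_path, 1000000, 2000000);
        if (ret)
                icc_put(example_path);

        return ret;
}

static void example_remove(struct device *dev)
{
        /* drop the bandwidth request and release the path */
        icc_set(example_path, 0, 0);
        icc_put(example_path);
}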

[..]

>> +static int apply_constraints(struct icc_path *path)
>> +{
>> +        struct icc_node *next, *prev = NULL;
>> +        int ret;
>> +        int i;
>> +
>> +        for (i = 0; i < path->num_nodes; i++, prev = next) {
>> +                struct icc_provider *p;
>> +
>> +                next = path->reqs[i].node;
>> +                /*
>> +                 * Both endpoints should be valid master-slave pairs of the
>> +                 * same interconnect provider that will be configured.
>> +                 */
>> +                if (!prev || next->provider != prev->provider)
>> +                        continue;
>> +
>> +                p = next->provider;
>> +
>> +                aggregate_provider(p);
>> +
>> +                /* set the constraints */
>> +                ret = p->set(prev, next, p->avg_bw, p->peak_bw);
> I'm confused here.
> In path_init(), the first element of reqs takes the dst node.
> But here, that same element ends up as prev, which is used as src by
> set(). To me it looks like prev and next have been inverted.

Ok, right. Will change the order of reqs to go from the source to the
destination.

Thanks,
Georgi

>> +                if (ret)
>> +                        goto out;
>> +        }
>> +out:
>> +        return ret;
>> +}
>> +
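
To illustrate the agreed fix: with reqs[] ordered from source to destination, each same-provider pair that the loop above visits becomes a proper (master, slave) argument pair for set(). Below is a small standalone model of that pairing logic with hypothetical node names; it is an illustration only, not the framework code itself.

#include <stdio.h>

/* standalone model of the pairing loop above, for illustration only */
struct node {
        const char *name;
        int provider;           /* which interconnect provider owns this port */
};

int main(void)
{
        /* path ordered from source to destination, as per the agreed change */
        struct node path[] = {
                { "cpu_master",  0 },
                { "snoc_slave",  0 },   /* same provider as cpu_master */
                { "snoc_master", 1 },
                { "ddr_slave",   1 },   /* same provider as snoc_master */
        };
        int num_nodes = sizeof(path) / sizeof(path[0]);
        struct node *prev = NULL, *next;
        int i;

        for (i = 0; i < num_nodes; i++, prev = next) {
                next = &path[i];
                /* only links that stay within one provider get configured */
                if (!prev || next->provider != prev->provider)
                        continue;
                /* prev is the master (src) and next the slave (dst) */
                printf("provider %d: set(src=%s, dst=%s)\n",
                       next->provider, prev->name, next->name);
        }
        return 0;
}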


Re: [PATCH v6 1/8] interconnect: Add generic on-chip interconnect API

2018-07-20 Thread Georgi Djakov
Hi Evan,

Thanks for helping to improve this!

On 07/11/2018 01:34 AM, Evan Green wrote:
> Ahoy Georgi!
> On Mon, Jul 9, 2018 at 8:51 AM Georgi Djakov  wrote:
>>
>> This patch introduces a new API to get requirements and configure the
>> interconnect buses across the entire chipset to fit with the current
>> demand.
>>
>> The API is using a consumer/provider-based model, where the providers are
>> the interconnect buses and the consumers could be various drivers.
>> The consumers request interconnect resources (path) between endpoints and
>> set the desired constraints on this data flow path. The providers receive
>> requests from consumers and aggregate these requests for all master-slave
>> pairs on that path. Then the providers configure each participating in the
>> topology node according to the requested data flow path, physical links and
>> constraints. The topology could be complicated and multi-tiered and is SoC
>> specific.
>>
>> Signed-off-by: Georgi Djakov 
>> ---

[..]

>> +Interconnect node is the software definition of the interconnect hardware
>> +port. Each interconnect provider consists of multiple interconnect nodes,
>> +which are connected to other SoC components including other interconnect
>> +providers. The point on the diagram where the CPUs connects to the memory is
> 
> CPUs connect

Ok.

[..]

>> +
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
>> +#include 
> 
> I needed to add #include <linux/overflow.h> to get struct_size() (used
> in path_init) in order to get this to compile, but maybe my kernel is
> missing some upstream picks.

Yes, should be included.

>> +#include 
>> +
>> +static DEFINE_IDR(icc_idr);
>> +static LIST_HEAD(icc_provider_list);
>> +static DEFINE_MUTEX(icc_lock);
>> +
>> +/**
>> + * struct icc_req - constraints that are attached to each node
>> + *
>> + * @req_node: entry in list of requests for the particular @node
>> + * @node: the interconnect node to which this constraint applies
>> + * @dev: reference to the device that sets the constraints
>> + * @avg_bw: an integer describing the average bandwidth in kbps
>> + * @peak_bw: an integer describing the peak bandwidth in kbps
>> + */
>> +struct icc_req {
>> +   struct hlist_node req_node;
>> +   struct icc_node *node;
>> +   struct device *dev;
>> +   u32 avg_bw;
>> +   u32 peak_bw;
>> +};
>> +
>> +/**
>> + * struct icc_path - interconnect path structure
>> + * @num_nodes: number of hops (nodes)
>> + * @reqs: array of the requests applicable to this path of nodes
>> + */
>> +struct icc_path {
>> +   size_t num_nodes;
>> +   struct icc_req reqs[];
>> +};
>> +
>> +static struct icc_node *node_find(const int id)
>> +{
>> +   return idr_find(&icc_idr, id);
> 
> Wasn't there going to be a warning if the mutex is not held?

I think that would be really useful if these functions get exported,
but for now let's skip it.

>> +}
>> +
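
As an aside, the warning discussed above could be expressed with lockdep once these helpers grow external callers. A sketch of what that might look like, for illustration only and not part of the patch:

/* lockdep_assert_held() comes from <linux/lockdep.h> */
static struct icc_node *node_find(const int id)
{
        /* with lockdep enabled, complain if the caller forgot to take icc_lock */
        lockdep_assert_held(&icc_lock);

        return idr_find(&icc_idr, id);
}
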
>> +static struct icc_path *path_init(struct device *dev, struct icc_node *dst,
>> + ssize_t num_nodes)
>> +{
>> +   struct icc_node *node = dst;
>> +   struct icc_path *path;
>> +   size_t i;
>> +
>> +   path = kzalloc(struct_size(path, reqs, num_nodes), GFP_KERNEL);
>> +   if (!path)
>> +   return ERR_PTR(-ENOMEM);
>> +
>> +   path->num_nodes = num_nodes;
>> +
> 
> There should probably also be a warning here about holding the lock,
> since you're modifying node->req_list.

This is called only by path_find() with the lock held.

>> +   for (i = 0; i < num_nodes; i++) {
>> +   hlist_add_head(&path->reqs[i].req_node, &node->req_list);
>> +
>> +   path->reqs[i].node = node;
>> +   path->reqs[i].dev = dev;
>> +   /* reference to previous node was saved during path traversal */
>> +   node = node->reverse;
>> +   }
>> +
>> +   return path;
>> +}
>> +
>> +static struct icc_path *path_find(struct device *dev, struct icc_node *src,
>> + struct icc_node *dst)
>> +{
>> +   struct icc_node *n, *node = NULL;
>> +   struct icc_provider *provider;
>> +   struct list_head traverse_list;
>> +   struct list_head edge_list;
>> +   struct list_head visited_list;
>> +   size_t i, depth = 1;
>> +   bool found = false;
>> +   int ret = -EPROBE_DEFER;
>> +
>> +   INIT_LIST_HEAD(&traverse_list);
>> +   INIT_LIST_HEAD(&edge_list);
>> +   INIT_LIST_HEAD(&visited_list);
>> +
> 
> A warning here too about holding the lock would also be good, since
> multiple people in here at once would be bad.

This is only called by icc_get() with locked mutex.

>> +   list_add_tail(&src->search_list, &traverse_list);
>> +   src->reverse = NULL;
>> +
>> +   do {
>> +   list_for_each_entry_safe(node, n, &traverse_list, search_list) {
>> +   if (node == dst) {
>> +   found = true;
>> +   

Re: [PATCH v6 1/8] interconnect: Add generic on-chip interconnect API

2018-07-11 Thread Alexandre Bailon
On 07/09/2018 05:50 PM, Georgi Djakov wrote:
> This patch introduces a new API to get requirements and configure the
> interconnect buses across the entire chipset to fit with the current
> demand.
> 
> The API is using a consumer/provider-based model, where the providers are
> the interconnect buses and the consumers could be various drivers.
> The consumers request interconnect resources (path) between endpoints and
> set the desired constraints on this data flow path. The providers receive
> requests from consumers and aggregate these requests for all master-slave
> pairs on that path. Then the providers configure each participating in the
> topology node according to the requested data flow path, physical links and
> constraints. The topology could be complicated and multi-tiered and is SoC
> specific.
> 
> Signed-off-by: Georgi Djakov 
> ---
>  Documentation/interconnect/interconnect.rst |  96 
>  drivers/Kconfig |   2 +
>  drivers/Makefile|   1 +
>  drivers/interconnect/Kconfig|  10 +
>  drivers/interconnect/Makefile   |   2 +
>  drivers/interconnect/core.c | 597 
>  include/linux/interconnect-provider.h   | 130 +
>  include/linux/interconnect.h|  42 ++
>  8 files changed, 880 insertions(+)
>  create mode 100644 Documentation/interconnect/interconnect.rst
>  create mode 100644 drivers/interconnect/Kconfig
>  create mode 100644 drivers/interconnect/Makefile
>  create mode 100644 drivers/interconnect/core.c
>  create mode 100644 include/linux/interconnect-provider.h
>  create mode 100644 include/linux/interconnect.h
> 
> diff --git a/Documentation/interconnect/interconnect.rst 
> b/Documentation/interconnect/interconnect.rst
> new file mode 100644
> index ..a1ebd83ad0a1
> --- /dev/null
> +++ b/Documentation/interconnect/interconnect.rst
> @@ -0,0 +1,96 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=====================================
> +GENERIC SYSTEM INTERCONNECT SUBSYSTEM
> +=====================================
> +
> +Introduction
> +------------
> +
> +This framework is designed to provide a standard kernel interface to control
> +the settings of the interconnects on a SoC. These settings can be throughput,
> +latency and priority between multiple interconnected devices or functional
> +blocks. This can be controlled dynamically in order to save power or provide
> +maximum performance.
> +
> +The interconnect bus is a hardware with configurable parameters, which can be
> +set on a data path according to the requests received from various drivers.
> +An example of interconnect buses are the interconnects between various
> +components or functional blocks in chipsets. There can be multiple 
> interconnects
> +on a SoC that can be multi-tiered.
> +
> +Below is a simplified diagram of a real-world SoC interconnect bus topology.
> +
> +::
> +
> + +----------------+    +----------------+
> + | HW Accelerator |--->|      M NoC     |<---------------+
> + +----------------+    +----------------+                |
> +                         |      |                    +------------+
> +  +-----+  +-------------+      V       +------+     |            |
> +  | DDR |  |                +--------+  | PCIe |     |            |
> +  +-----+  |                | Slaves |  +------+     |            |
> +    ^ ^    |                +--------+               |   C NoC    |
> +    | |    V                                         |            |
> + +------------------+   +------------------------+   |            |   +-----+
> + |                  |-->|                        |-->|            |-->| CPU |
> + |                  |-->|                        |<--|            |   +-----+
> + |     Mem NoC      |   |         S NoC          |   +------------+
> + |                  |<--|                        |---------+    |
> + |                  |<--|                        |<------+ |    |   +--------+
> + +------------------+   +------------------------+       | |    +-->| Slaves |
> +   ^  ^    ^    ^          ^                              | |        +--------+
> +   |  |    |    |          |                              | V
> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
> + | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
> +           |
> +       +-------+
> +       | Modem |
> +       +-------+
> +
> +Terminology
> +-----------
> +
> +Interconnect provider is the software definition of the interconnect 
> hardware.
> +The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P 
> NoC
> +and Mem NoC.
> +
> +Interconnect node is the software definition of the interconnect hardware
> +port. Each interconnect provider consists of multiple interconnect nodes,
> +which are connected to 
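
To make the provider and node terms above concrete, here is a hedged sketch of a provider driver registering two linked nodes. It assumes the icc_provider_add()/icc_node_create()/icc_node_add()/icc_link_create() helpers from include/linux/interconnect-provider.h in this series; the IDs, names and the my_set() callback are hypothetical, the exact signatures are assumptions, and error handling is trimmed for brevity.

#include <linux/interconnect-provider.h>

/* hypothetical port IDs for this example provider */
#define EXAMPLE_MASTER_ID       1
#define EXAMPLE_SLAVE_ID        2

static int my_set(struct icc_node *src, struct icc_node *dst,
                  u32 avg_bw, u32 peak_bw)
{
        /* program the hardware for this master/slave pair here */
        return 0;
}

static struct icc_provider my_provider = {
        .set = my_set,
};

static int my_noc_register(void)
{
        struct icc_node *master, *slave;
        int ret;

        ret = icc_provider_add(&my_provider);
        if (ret)
                return ret;

        /* create the two ports of this NoC and describe the link between them */
        master = icc_node_create(EXAMPLE_MASTER_ID);
        slave = icc_node_create(EXAMPLE_SLAVE_ID);
        icc_node_add(master, &my_provider);
        icc_node_add(slave, &my_provider);

        return icc_link_create(master, EXAMPLE_SLAVE_ID);
}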

Re: [PATCH v6 1/8] interconnect: Add generic on-chip interconnect API

2018-07-11 Thread Alexandre Bailon
Hi Georgi,

On 07/09/2018 05:50 PM, Georgi Djakov wrote:
> This patch introduces a new API to get requirements and configure the
> interconnect buses across the entire chipset to fit with the current
> demand.
> 
> The API is using a consumer/provider-based model, where the providers are
> the interconnect buses and the consumers could be various drivers.
> The consumers request interconnect resources (path) between endpoints and
> set the desired constraints on this data flow path. The providers receive
> requests from consumers and aggregate these requests for all master-slave
> pairs on that path. Then the providers configure each participating in the
> topology node according to the requested data flow path, physical links and
> constraints. The topology could be complicated and multi-tiered and is SoC
> specific.
> 
> Signed-off-by: Georgi Djakov 
> ---
>  Documentation/interconnect/interconnect.rst |  96 
>  drivers/Kconfig |   2 +
>  drivers/Makefile|   1 +
>  drivers/interconnect/Kconfig|  10 +
>  drivers/interconnect/Makefile   |   2 +
>  drivers/interconnect/core.c | 597 
>  include/linux/interconnect-provider.h   | 130 +
>  include/linux/interconnect.h|  42 ++
>  8 files changed, 880 insertions(+)
>  create mode 100644 Documentation/interconnect/interconnect.rst
>  create mode 100644 drivers/interconnect/Kconfig
>  create mode 100644 drivers/interconnect/Makefile
>  create mode 100644 drivers/interconnect/core.c
>  create mode 100644 include/linux/interconnect-provider.h
>  create mode 100644 include/linux/interconnect.h
> 
> diff --git a/Documentation/interconnect/interconnect.rst 
> b/Documentation/interconnect/interconnect.rst
> new file mode 100644
> index ..a1ebd83ad0a1
> --- /dev/null
> +++ b/Documentation/interconnect/interconnect.rst
> @@ -0,0 +1,96 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=====================================
> +GENERIC SYSTEM INTERCONNECT SUBSYSTEM
> +=====================================
> +
> +Introduction
> +------------
> +
> +This framework is designed to provide a standard kernel interface to control
> +the settings of the interconnects on a SoC. These settings can be throughput,
> +latency and priority between multiple interconnected devices or functional
> +blocks. This can be controlled dynamically in order to save power or provide
> +maximum performance.
> +
> +The interconnect bus is a hardware with configurable parameters, which can be
> +set on a data path according to the requests received from various drivers.
> +An example of interconnect buses are the interconnects between various
> +components or functional blocks in chipsets. There can be multiple 
> interconnects
> +on a SoC that can be multi-tiered.
> +
> +Below is a simplified diagram of a real-world SoC interconnect bus topology.
> +
> +::
> +
> + +----------------+    +----------------+
> + | HW Accelerator |--->|      M NoC     |<---------------+
> + +----------------+    +----------------+                |
> +                         |      |                    +------------+
> +  +-----+  +-------------+      V       +------+     |            |
> +  | DDR |  |                +--------+  | PCIe |     |            |
> +  +-----+  |                | Slaves |  +------+     |            |
> +    ^ ^    |                +--------+               |   C NoC    |
> +    | |    V                                         |            |
> + +------------------+   +------------------------+   |            |   +-----+
> + |                  |-->|                        |-->|            |-->| CPU |
> + |                  |-->|                        |<--|            |   +-----+
> + |     Mem NoC      |   |         S NoC          |   +------------+
> + |                  |<--|                        |---------+    |
> + |                  |<--|                        |<------+ |    |   +--------+
> + +------------------+   +------------------------+       | |    +-->| Slaves |
> +   ^  ^    ^    ^          ^                              | |        +--------+
> +   |  |    |    |          |                              | V
> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
> + | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
> +           |
> +       +-------+
> +       | Modem |
> +       +-------+
> +
> +Terminology
> +-----------
> +
> +Interconnect provider is the software definition of the interconnect 
> hardware.
> +The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P 
> NoC
> +and Mem NoC.
> +
> +Interconnect node is the software definition of the interconnect hardware
> +port. Each interconnect provider consists of multiple interconnect nodes,
> +which are 

Re: [PATCH v6 1/8] interconnect: Add generic on-chip interconnect API

2018-07-10 Thread Evan Green
Ahoy Georgi!
On Mon, Jul 9, 2018 at 8:51 AM Georgi Djakov  wrote:
>
> This patch introduces a new API to get requirements and configure the
> interconnect buses across the entire chipset to fit with the current
> demand.
>
> The API is using a consumer/provider-based model, where the providers are
> the interconnect buses and the consumers could be various drivers.
> The consumers request interconnect resources (path) between endpoints and
> set the desired constraints on this data flow path. The providers receive
> requests from consumers and aggregate these requests for all master-slave
> pairs on that path. Then the providers configure each participating in the
> topology node according to the requested data flow path, physical links and
> constraints. The topology could be complicated and multi-tiered and is SoC
> specific.
>
> Signed-off-by: Georgi Djakov 
> ---
>  Documentation/interconnect/interconnect.rst |  96 
>  drivers/Kconfig |   2 +
>  drivers/Makefile|   1 +
>  drivers/interconnect/Kconfig|  10 +
>  drivers/interconnect/Makefile   |   2 +
>  drivers/interconnect/core.c | 597 
>  include/linux/interconnect-provider.h   | 130 +
>  include/linux/interconnect.h|  42 ++
>  8 files changed, 880 insertions(+)
>  create mode 100644 Documentation/interconnect/interconnect.rst
>  create mode 100644 drivers/interconnect/Kconfig
>  create mode 100644 drivers/interconnect/Makefile
>  create mode 100644 drivers/interconnect/core.c
>  create mode 100644 include/linux/interconnect-provider.h
>  create mode 100644 include/linux/interconnect.h
>
> diff --git a/Documentation/interconnect/interconnect.rst 
> b/Documentation/interconnect/interconnect.rst
> new file mode 100644
> index ..a1ebd83ad0a1
> --- /dev/null
> +++ b/Documentation/interconnect/interconnect.rst
> @@ -0,0 +1,96 @@
> +.. SPDX-License-Identifier: GPL-2.0
> +
> +=====================================
> +GENERIC SYSTEM INTERCONNECT SUBSYSTEM
> +=====================================
> +
> +Introduction
> +------------
> +
> +This framework is designed to provide a standard kernel interface to control
> +the settings of the interconnects on a SoC. These settings can be throughput,
> +latency and priority between multiple interconnected devices or functional
> +blocks. This can be controlled dynamically in order to save power or provide
> +maximum performance.
> +
> +The interconnect bus is a hardware with configurable parameters, which can be
> +set on a data path according to the requests received from various drivers.
> +An example of interconnect buses are the interconnects between various
> +components or functional blocks in chipsets. There can be multiple 
> interconnects
> +on a SoC that can be multi-tiered.
> +
> +Below is a simplified diagram of a real-world SoC interconnect bus topology.
> +
> +::
> +
> + +----------------+    +----------------+
> + | HW Accelerator |--->|      M NoC     |<---------------+
> + +----------------+    +----------------+                |
> +                         |      |                    +------------+
> +  +-----+  +-------------+      V       +------+     |            |
> +  | DDR |  |                +--------+  | PCIe |     |            |
> +  +-----+  |                | Slaves |  +------+     |            |
> +    ^ ^    |                +--------+               |   C NoC    |
> +    | |    V                                         |            |
> + +------------------+   +------------------------+   |            |   +-----+
> + |                  |-->|                        |-->|            |-->| CPU |
> + |                  |-->|                        |<--|            |   +-----+
> + |     Mem NoC      |   |         S NoC          |   +------------+
> + |                  |<--|                        |---------+    |
> + |                  |<--|                        |<------+ |    |   +--------+
> + +------------------+   +------------------------+       | |    +-->| Slaves |
> +   ^  ^    ^    ^          ^                              | |        +--------+
> +   |  |    |    |          |                              | V
> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
> + | CPUs |  |  | GPU |   | DSP |  | Masters |-->|       P NoC    |-->| Slaves |
> + +------+  |  +-----+   +-----+  +---------+   +----------------+   +--------+
> +           |
> +       +-------+
> +       | Modem |
> +       +-------+
> +
> +Terminology
> +-----------
> +
> +Interconnect provider is the software definition of the interconnect 
> hardware.
> +The interconnect providers on the above diagram are M NoC, S NoC, C NoC, P 
> NoC
> +and Mem NoC.
> +
> +Interconnect node is the software definition of the interconnect hardware
> +port. Each interconnect provider consists of multiple interconnect nodes,
> +which 
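
The changelog above says that providers "aggregate these requests for all master-slave pairs on that path". Below is a standalone sketch of one plausible aggregation policy, summing the average bandwidth of all requests on a node and keeping the maximum peak; it mirrors the intent described here but is an assumption for illustration, not the framework code.

#include <stdio.h>

/* per-consumer constraint on one node, mirroring struct icc_req in the patch */
struct req {
        unsigned int avg_bw;    /* kbps */
        unsigned int peak_bw;   /* kbps */
};

int main(void)
{
        /* two consumers sharing the same interconnect node */
        struct req reqs[] = {
                { .avg_bw = 1000, .peak_bw = 2000 },
                { .avg_bw =  500, .peak_bw = 3000 },
        };
        unsigned int agg_avg = 0, agg_peak = 0;
        unsigned int i;

        for (i = 0; i < sizeof(reqs) / sizeof(reqs[0]); i++) {
                agg_avg += reqs[i].avg_bw;              /* average bandwidths add up */
                if (reqs[i].peak_bw > agg_peak)         /* peaks do not */
                        agg_peak = reqs[i].peak_bw;
        }

        printf("node aggregate: avg=%u kbps, peak=%u kbps\n", agg_avg, agg_peak);
        return 0;
}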
