Ondrej,

Any fundamental disagreements you may think we have are probably due to me 
explaining my opinion poorly. I think our largest disagreement is over what type 
of user/customer we should INITIALLY design for.


-Yes, we are definitely both concerned about the inactivity of the cloud 
subproject. In fact, I have been working on implementing a bare-bones 
OCF cloud interface on mainflux, but I'd prefer to devote my time to the 
"official" implementation.

-I think we agree on this more than you realize. I agree that there should be a 
fully production-ready, turnkey IoTivity cloud ready to go at the drop of a hat 
(or the execution of a Helm chart :)), but I also feel that we should first focus 
on getting a rock-solid, backend-agnostic cloud interface implementation before 
focusing on a turnkey solution. I believe this for the following reasons:

--Forces good architectural design: the team that originally developed Vitess 
(the CNCF-hosted, linearly scalable MySQL clustering system) was all from 
YouTube, but they made sure to place an emphasis on implementing features in a 
FOSS-friendly way before integrating it into Google's proprietary 
infrastructure. They knew that if Vitess were tightly coupled to Google 
infrastructure, it wouldn't gain the same adoption. By the same logic, I'd 
assert that we need to ensure that the IoTivity cloud implementation isn't 
tightly coupled to Mongo/Kafka/ZooKeeper.
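To make "not tightly coupled" concrete, here's one way the decoupling could look: the cloud interface codes against a narrow storage contract, and the Mongo (or Dynamo, or HBase) implementation is picked at deployment time. This is a minimal Go sketch with entirely hypothetical names, not the actual IoTivity Cloud API (which is Java); treat it purely as an illustration of the pattern:

```go
package main

import "fmt"

// PublishedResource is a hypothetical record for a resource a device
// has published to the cloud's resource directory.
type PublishedResource struct {
	DeviceID string
	Href     string
}

// ResourceStore is the only contract the cloud interface would depend on.
// A MongoDB, DynamoDB, or HBase backend would each implement it.
type ResourceStore interface {
	Publish(r PublishedResource) error
	Lookup(deviceID string) ([]PublishedResource, error)
}

// memStore is an in-memory implementation, useful for tests and as a
// template for real backends.
type memStore struct {
	byDevice map[string][]PublishedResource
}

func newMemStore() *memStore {
	return &memStore{byDevice: make(map[string][]PublishedResource)}
}

func (s *memStore) Publish(r PublishedResource) error {
	s.byDevice[r.DeviceID] = append(s.byDevice[r.DeviceID], r)
	return nil
}

func (s *memStore) Lookup(deviceID string) ([]PublishedResource, error) {
	return s.byDevice[deviceID], nil
}

func main() {
	// Swap newMemStore() for a Mongo- or Dynamo-backed implementation
	// without touching any interface-layer code.
	var store ResourceStore = newMemStore()
	store.Publish(PublishedResource{DeviceID: "dev-1", Href: "/oic/light"})
	rs, _ := store.Lookup("dev-1")
	fmt.Println(len(rs), rs[0].Href)
}
```

The point is just that the device<=>cloud logic never imports a database driver directly, so the Vitess lesson carries over.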

--Eases adoption by large customers (e.g. PaaS providers, consumer electronics 
factories, analytics, blockchain): We should aim for large corporate adoption 
first because those companies can most easily make the business case for 
investing developer hours while the ecosystem is still a tad immature. There'd 
also be a "trickle-down" effect: those large corporate investments make it 
easier for smaller companies to join, through improvements to the 
documentation/codebase (which is kind of happening already) and the creation of 
managed services akin to AWS IoT. Also, different users may have use cases that 
require fundamentally different backends. Some examples:

---If AWS wanted to offer an IoTivity cloud, they'd want to bill the components 
separately and probably integrate it with their other backends-as-a-service, 
like DynamoDB.

---Many hardware companies (especially all of the no-name OEMs out east that 
we need on board if we want good OCF adoption) don't have a strong competency 
in cloud development, so they'd want to use a managed service like AWS, which 
raises the same issues I mentioned in the previous bullet point.

---A company collecting data from IoT devices for analytics is going to want a 
database that is easy to use with the Hadoop ecosystem, like HBase.

---A blockchain use case is the most unusual, but I'm imagining a situation 
where the device chooses one node out of many in the network to publish its 
resource to, based on the "sel" field, and that data is fed into a smart 
contract or something similar. In all fairness, blockchain probably isn't the 
biggest use case right now, but I have a vision for how IoT and blockchain 
should work together that seems compatible with the current OCF cloud 
architecture, and I'd hate to see that use case sidelined while blockchain is 
still near the peak of the Gartner hype cycle.

--Faster time to completion: successfully writing a loosely coupled cloud 
interface doesn't have to be the end of the project; I'm just proposing that it 
be the first stage, so that a diverse set of use cases can be enabled as soon 
as possible, with the assumption that there'd eventually be a production-ready, 
turnkey reference implementation. I'm also going to shamelessly plug mainflux 
again, because these guys have literally written a book on scalable IoT cloud 
architectures 
(http://www.oreilly.com/programming/free/scalable-architecture-for-the-internet-of-things.csp).
 If we integrated an IoTivity cloud interface into their codebase, we'd get a 
lot of that scaling and production readiness "for free," which would let the 
IoTivity community focus on what it's good at: writing code for IoTivity. It's 
probably also worth mentioning that mainflux uses the same open source license 
as IoTivity (ASL 2.0).

-I think we're largely in agreement on your third paragraph, and anything we 
may disagree on has been addressed earlier in this message.


Regards,
Scott
From: Ondrej Tomcik [mailto:ondrej.tom...@kistler.com]
Sent: Monday, April 30, 2018 6:20 PM
To: Scott King <scott.k...@fkabrands.com>; Max Kholmyansky <max...@gmail.com>
Cc: iotivity-dev@lists.iotivity.org
Subject: RE: [dev] Activity of cloud maintainers

Hello Scott,

Ondrej,

I also share your concerns about the current state of the IoTivity cloud, but I 
have a slightly different opinion on how those concerns should be addressed, 
and I'm interested in hearing your opinion on it.

My concerns are mainly about the inactivity of the IoTivity Cloud subproject, 
and sometimes about the "in/activity" of the whole IoTivity project as well - 
but that's a different topic :). So I would say that if you share my concerns 
about the current inactivity, but would revive the project in a slightly 
different way, that's great; let's discuss it.

I'd argue that the only "official" part of the OCF cloud should be the part 
that implements the actual device<=>cloud interface, since APIs age better than 
infrastructure. Technical decisions like which messaging systems and databases 
to use shouldn't be hard-coded, although it's probably a good idea to have an 
example where everything is already set up, to serve as an introduction and 
reference implementation.

I assume that here you are writing only about the OCF specification for the 
Cloud part. Technical decisions have nothing to do with the specification. The 
architecture and design of the OCF Cloud should be agnostic to databases, 
messaging systems, etc., as you wrote.

If you meant implementation details of the IoTivity Cloud, I don't agree. The 
IoTivity Cloud should use a specific messaging system and database. It should 
be programmed in a way that allows them to be exchanged, but that's more an 
implementation detail than a design feature of the system. If you implemented 
the IoTivity Cloud in a way that users can't use out of the box - where they 
first have to integrate it with their own messaging system and DB by touching 
the internals of the IoTivity Cloud - it would have the same number of users as 
it has now. (I would say: 1.) A reference implementation != a production-ready 
system.

If the cloud interface were written such that any interactions with the 
messaging/DB/auth were performed by end-user-written handlers or some other 
clean abstraction (like gRPC?), it'd be easier for vendors to integrate OCF 
cloud messaging into their existing infrastructure. I guess that would function 
as an alternative to the HTTP proxy that you're proposing. It'd also largely 
move the scaling/reliability/updatability/availability concerns out of scope, 
which I'd argue is preferable since those concerns are more within the 
bailiwick of the CNCF than the OCF. Speaking of the CNCF: if you want to see a 
project that uses a fair number of CNCF projects, would be able to use my 
proposed cloud interface, and has been placing a large emphasis on 
scaling/perf/availability, check out https://github.com/mainflux/mainflux
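For what it's worth, the handler idea could look something like the following Go sketch. All names here are made up for illustration (the real IoTivity Cloud is Java, and a gRPC service definition is omitted for brevity): the cloud interface fans device events out to vendor-registered handlers instead of writing to a hard-coded broker.

```go
package main

import "fmt"

// DeviceEvent is a hypothetical event emitted by the cloud interface
// when a connected device changes state.
type DeviceEvent struct {
	DeviceID string
	Online   bool
}

// EventHandler is the abstraction point: a vendor plugs in a function
// that forwards events to Kafka, NATS, a gRPC stream, or anything else.
type EventHandler func(DeviceEvent)

// CloudInterface holds registered handlers; it never talks to a
// concrete messaging system itself.
type CloudInterface struct {
	handlers []EventHandler
}

// OnDeviceEvent registers a vendor-supplied handler.
func (c *CloudInterface) OnDeviceEvent(h EventHandler) {
	c.handlers = append(c.handlers, h)
}

// deviceConnected simulates the interface reacting to a device sign-in
// by notifying every registered handler.
func (c *CloudInterface) deviceConnected(id string) {
	for _, h := range c.handlers {
		h(DeviceEvent{DeviceID: id, Online: true})
	}
}

func main() {
	ci := &CloudInterface{}
	// In production this handler might publish to the vendor's own broker.
	ci.OnDeviceEvent(func(e DeviceEvent) {
		fmt.Printf("device %s online=%v\n", e.DeviceID, e.Online)
	})
	ci.deviceConnected("dev-1")
}
```

The vendor's infrastructure choices stay entirely on their side of the handler boundary, which is the whole point.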

If I understand you correctly, I would say that users of the IoTivity Cloud 
won't be interested in writing handlers for communication between IoTivity 
Cloud components. In my opinion, they need a system which works out of the box, 
can scale, is secure and reliable, provides async state events for the Cloud 
system and connected devices, and lets end-user components issue commands in a 
standardised way - HTTP, not the native Java IoTivity SDK.

The HTTP proxy is already in the IoTivity Cloud, but it is not finished. The 
goal is to use it only for issuing commands towards the IoTivity Cloud 
Interface. State events and async operations should come back in the form of 
messages.
And as you wrote, scaling/reliability/etc. are within the bailiwick of the 
CNCF. They have nothing to do with the OCF. BUT they do have to do with 
IoTivity.

Somehow, I am not completely sure whether we have the same opinion or not :)


Regards,
Scott

BR
Ondrej

From: iotivity-dev-boun...@lists.iotivity.org 
[mailto:iotivity-dev-boun...@lists.iotivity.org] On Behalf Of Ondrej Tomcik
Sent: Sunday, April 29, 2018 3:18 AM
To: Max Kholmyansky <max...@gmail.com>
Cc: iotivity-dev@lists.iotivity.org
Subject: Re: [dev] Activity of cloud maintainers

Hello Max,

my list is already ordered by priority. (Documentation is not the lowest 
priority, but I wanted to mention it explicitly.)

So scalability and reliability are number one for us, together with the HTTP 
proxy, which must be there so that the IoTivity Cloud component can be used 
from other backend components in a convenient way.

BR
Ondrej

On 29 Apr 2018, at 08:31, Max Kholmyansky <max...@gmail.com> wrote:
Hi Ondrej,

This is a very good suggestion.

One thing I want to stress: from the open source community perspective, there 
is no value in OCF compliance if the software cannot be used in production.
Server-side software must be scalable and reliable, and must support modern 
deployment and load-balancing technologies. So IMHO we must put scalability 
and reliability before compliance.

An example of a scalability challenge: right now, there is an assumption that 
the communicating entities ("client" and "server") establish a TCP connection 
to the same instance of the Interface service. This is OK for a 
proof of concept, but hardly acceptable in a production deployment.
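One common way around that same-instance assumption (purely illustrative; nothing like this exists in the current codebase) is a shared session registry: each Interface instance records which devices it holds connections for, and any instance can look up where to forward a request. A Go sketch with hypothetical names, where the registry would in practice be backed by something like Redis rather than a map:

```go
package main

import "fmt"

// SessionRegistry is a hypothetical shared lookup (backed by Redis or a
// database in production) recording which Interface instance holds the
// TCP connection for a given device.
type SessionRegistry interface {
	Register(deviceID, instanceID string)
	InstanceFor(deviceID string) (string, bool)
}

// memRegistry is an in-memory stand-in for the shared store.
type memRegistry struct{ m map[string]string }

func newMemRegistry() *memRegistry { return &memRegistry{m: make(map[string]string)} }

func (r *memRegistry) Register(deviceID, instanceID string) { r.m[deviceID] = instanceID }

func (r *memRegistry) InstanceFor(deviceID string) (string, bool) {
	id, ok := r.m[deviceID]
	return id, ok
}

// route decides whether a request for a device can be served locally or
// must be forwarded to the instance holding the device's connection.
func route(reg SessionRegistry, localInstance, deviceID string) string {
	target, ok := reg.InstanceFor(deviceID)
	if !ok {
		return "error: device not connected"
	}
	if target == localInstance {
		return "serve locally"
	}
	return "forward to " + target
}

func main() {
	reg := newMemRegistry()
	// dev-1's TCP session happens to live on Interface instance 2.
	reg.Register("dev-1", "interface-2")
	fmt.Println(route(reg, "interface-1", "dev-1"))
}
```

With something along these lines, clients and servers no longer need to land on the same instance behind the load balancer.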


Regards,
Max



Max Kholmyansky
Software Architect - SURE Universal Ltd.
http://www.sureuniversal.com

On Sun, Apr 29, 2018 at 1:03 AM, Ondrej Tomcik <ondrej.tom...@kistler.com> wrote:
I am glad that you replied Thiago.

Just to give you an overview of the roadmap, and how we at Kistler see further 
IoTivity Cloud development:
 - HTTP proxy
      @ communication between the IoTivity Cloud and other product components 
through HTTP, fully compliant with the OCF specification

 - Cloud enabled
      @ the current Cloud system is neither scalable nor reliable. Currently 
you cannot restart a component (e.g. a rolling update or just a node restart 
would drop your DB). The goal is to redesign the communication between the 
Cloud components, and the way they manipulate data, to support availability, 
reliability, and mainly scalability.

 - Resource shadow
      @ device shadow is a well-known term in the IoT world. We already have a 
PoC of a resource shadow implemented, so we would start with an OCF 
specification proposal.

 - Documentation and maintenance

If the community agrees, let's start.

BR
Ondrej Tomcik

On 27 Apr 2018, at 21:58, Thiago Macieira <thiago.macie...@intel.com> wrote:
On Friday, 27 April 2018 00:37:44 PDT Ondrej Tomcik wrote:
I would propose changing the current list of maintainers. They are inactive, 
and that is not acceptable in an open source project.

If there are no objections, we'll do that.

For anyone willing to object: your objections must come with a solution to
Ondrej's problems.

--
Thiago Macieira - thiago.macieira (AT) intel.com
 Software Architect - Intel Open Source Technology Center



_______________________________________________
iotivity-dev mailing list
iotivity-dev@lists.iotivity.org
https://lists.iotivity.org/mailman/listinfo/iotivity-dev

