Moving to 100% GitHub

2017-05-18 Thread Tal Liron
Hi everyone,

The ASF now has an option of hosting the repository 100% on GitHub, so that
GitHub would no longer be just a mirror of the ASF repo. I think this could
increase our visibility and make the project seem more welcoming to outside
contributions. It would also make management a bit easier for us.

What do you think? Pros and cons?




Re: GIT/JIRA Integration

2017-05-18 Thread Tal Liron
> So it looks like its there.  INFRA is currently enabling fisheye for ASF,
> since Atlassian is shutting down its public offering.  Assuming no qualms
> with enabling fisheye for ARIAs repos, this will happen automatically.  Any
> concerns with that?  Once its up, we can turn off the other format.
>
> John
>
> On Wed, Jan 4, 2017 at 7:05 AM Ran Ziv <r...@gigaspaces.com> wrote:
>
> > Yes, that's what I was referring to.
> > I indeed don't know about version compatibility, I hope it applies for
> > the Apache JIRA version as well.
> >
> > Ran
> >
> > On Wed, Jan 4, 2017 at 2:02 PM, John D. Ament <johndam...@apache.org> wrote:
> >
> > > Ran,
> > >
> > > Maybe I have to be logged in to see it.  I don't have an account on that
> > > JIRA.  Are you talking about the "Development" section on the far right?
> > >
> > > That's a feature in newer JIRA versions.  We're on 6.3.  I'd have to see
> > > if that's an option first.
> > >
> > > John
> > >
> > > On Tue, Jan 3, 2017 at 8:31 AM Ran Ziv <r...@gigaspaces.com> wrote:
> > >
> > > > Thanks John,
> > > > I see that basically all git traffic now appears on JIRA as comments -
> > > > that indeed seems too noisy IMO.
> > > > Is it possible to configure it so that the git-jira integration will
> > > > simply present commits/branches/PRs related to a specific JIRA on a
> > > > separate section rather than in comments (like it's done here
> > > > <https://cloudifysource.atlassian.net/browse/CFY-3000> for example)?
> > > >
> > > > thanks
> > > >
> > > > On Fri, Dec 30, 2016 at 4:52 PM, John D. Ament <johndam...@apache.org>
> > > > wrote:
> > > >
> > > > > Just as an FYI, I've enabled GIT/JIRA integration for AriaTosca.  If
> > > > > it gets too noisy we can switch to worklog entries.  LMK.





[UPDATE] Work on ARIA-149: Support intrinsic functions in "dependencies" operation configuration

2017-05-19 Thread Tal Liron
Phew! Everything seems to work now: functions are parsed and evaluated
properly. But there are some changes that require explaining.

The "configuration" field in OperationTemplate and Operation models is now
a many-to-many with Parameter. This allows the values to fully support
intrinsic functions and types.
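
To illustrate what "many-to-many with Parameter" means in practice, here is a
minimal sketch in plain SQLAlchemy (made-up table and column names; this is
not ARIA's actual model code, just the shape of the idea):

from sqlalchemy import Column, ForeignKey, Integer, String, Table, Text
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

# Association table linking operations to their "configuration" parameters.
operation_configuration = Table(
    'operation_configuration', Base.metadata,
    Column('operation_id', ForeignKey('operation.id'), primary_key=True),
    Column('parameter_id', ForeignKey('parameter.id'), primary_key=True))

class Parameter(Base):
    __tablename__ = 'parameter'
    id = Column(Integer, primary_key=True)
    name = Column(String)
    type_name = Column(String)  # values are typed...
    value = Column(Text)        # ...and can hold unevaluated intrinsic functions

class Operation(Base):
    __tablename__ = 'operation'
    id = Column(Integer, primary_key=True)
    implementation = Column(String)  # preserved exactly as parsed
    # "configuration" as a many-to-many relationship with Parameter:
    configuration = relationship(Parameter, secondary=operation_configuration)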

And I made another change in modeling: Operation now also has "function"
and "arguments" fields, which are what the Task API actually uses instead
of "implementation" and "inputs". This means that the "implementation" and
"inputs" values are preserved as is. During the configuration phase, the
"arguments" are created according to the usual logic: for the execution
plugin, the "configuration" is massaged into a few new "arguments"; in all
cases, the "inputs" are also merged into "arguments".

Why make this change? Because I really think we should not mangle the
"inputs": they are logically different from the arguments that get sent to
the @operation function, even if in many cases we might treat them
identically. Also, we cannot lose the "implementation" string: plugins will
need to know what users put in there. The change is not big in terms of
extra code, but I think it helps make the code easier to understand: you
can see exactly what is being merged into the final "arguments" for the
task. The configure phase is now constructive rather than destructive:
existing fields are not changed.

Before moving forward with a PR I want to see what other committers think
about also renaming "implementation" and "inputs" in the Task API to
"function" and "arguments". For now I'm leaving them as is, but I think it
would be even cleaner if they followed the new naming convention in
modeling. I continue being unhappy about how the Task API treats both
arguments and inputs as the same thing and squashing them all together...
:( However, for now I would be happy if we just called them "arguments",
because that's what they really are for us.



Re: Renaming Old ARIA Repos

2017-05-21 Thread Tal Liron
Can't we just make them private repos?

On Sun, May 21, 2017 at 10:40 AM, Suneel Marthi  wrote:

> +1 Arthur
>
> On Sun, May 21, 2017 at 11:36 AM, Arthur Berezin 
> wrote:
>
> > I think it would be nice to redirect the traffic to the right place,
> > instead of creating broken links, and also to keep the old content for
> > reference.
> >
> >
> > On Sun, May 21, 2017 at 6:14 PM John D. Ament 
> > wrote:
> >
> > > Is there a real reason for the old repos to continue to exist? What
> would
> > > be the problem if they were just deleted?
> > >
> > > John
> > >
> > > On Sun, May 21, 2017 at 9:28 AM Arthur Berezin 
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > >
> > > > It seems that the old ARIA organization on GitHub from before we moved
> > > > to ASF (https://github.com/aria-tosca-old) is still creating confusion.
> > > > I added "-old" to the repo names and changed the descriptions of all the
> > > > repos to point to the new ASF repo.
> > > >
> > > > Do you think we should also move all the content to separate branches
> > and
> > > > keep only the readme.md in master that would point to the new repo?
> > > >
> > > > I prefer not to delete this org altogether, to keep it for reference.
> > > >
> > > > Arthur
> > > >
> > >
> >
>





Re: Moving to 100% GitHub

2017-05-21 Thread Tal Liron
Ran, I only heard about this in person from John at ApacheCon -- I have no
links for info.

For me, when I see a mirrored project on GitHub I am wary of contributing.
My first instinct is that it's actually managed elsewhere, which would
require registration and some other kind of red tape. Indeed, for us it
means that a contributor would have to get an Apache ID and become a
committer. I think that if the project were managed 100% on GitHub, this
extra step would not be there, which could encourage people to submit
simple bug-fix PRs.

This is not a big deal at all, but if it's possible, I'm thinking: why not?


On Sun, May 21, 2017 at 6:51 AM, Ran Ziv  wrote:

> I don't think moving the project to Github and the issue of lack of
> discussions on the mailing list are related - we need to solve the latter
> regardless of the former.
>
> Tal, I'm not sure how exactly it would increase visibility - The mirror is
> as visible as the new repository would be..?
> It could however make things somewhat more convenient to us, although that
> too depends on who's managing the repo - I'm not familiar with this
> 100%-hosted-on-github concept yet.
>
> Tal, care to share some more links / info about this?
>
>
>
>
> On Thu, May 18, 2017 at 8:56 PM, John D. Ament 
> wrote:
>
> > One con I see is that the project already struggles with on list
> > discussions.  We want to open up github usage, but want to make sure that
> > projects are communicating well, so that the source code being maintained
> > on github is more of a convenience.  Until I see more action from
> AriaTosca
> > on this list I would be hesitant to say yes for the project.
> >
> > John
> >
> > On Thu, May 18, 2017 at 1:51 PM Tal Liron  wrote:
> >
> > > Hi everyone,
> > >
> > > ASF now has an option of moving the repository to be 100% hosted on
> > GitHub,
> > > so it won't be a mirror of the ASF repo. I think this could increase
> our
> > > visibility and make the project seem more welcoming to outside
> > > contributions. Also would make management a bit easier for us.
> > >
> > > What do you think? Pros and cons?
> > >
> > >
> > >
> >
>





Re: Moving to 100% GitHub

2017-05-21 Thread Tal Liron
We have both a website and a contribution guide. How should we improve them?

http://ariatosca.org/
https://cwiki.apache.org/confluence/display/ARIATOSCA/Contributing+to+ARIA

On Sun, May 21, 2017 at 2:38 PM, John D. Ament 
wrote:

> On Sun, May 21, 2017 at 7:51 AM Ran Ziv  wrote:
>
> > I don't think moving the project to Github and the issue of lack of
> > discussions on the mailing list are related - we need to solve the latter
> > regardless of the former.
> >
> >
> I disagree Ran.  Aria Tosca has no website, has no documented model for how
> to collaborate.  Yet for some reason developers raise pull requests.  How
> did that come to be the case?
>
>
> > Tal, I'm not sure how exactly it would increase visibility - The mirror
> is
> > as visible as the new repository would be..?
> > It could however make things somewhat more convenient to us, although
> that
> > too depends on who's managing the repo - I'm not familiar with this
> > 100%-hosted-on-github concept yet.
> >
> > Tal, care to share some more links / info about this?
> >
> >
> >
> >
> > On Thu, May 18, 2017 at 8:56 PM, John D. Ament 
> > wrote:
> >
> > > One con I see is that the project already struggles with on list
> > > discussions.  We want to open up github usage, but want to make sure
> that
> > > projects are communicating well, so that the source code being
> maintained
> > > on github is more of a convenience.  Until I see more action from
> > AriaTosca
> > > on this list I would be hesitant to say yes for the project.
> > >
> > > John
> > >
> > > On Thu, May 18, 2017 at 1:51 PM Tal Liron  wrote:
> > >
> > > > Hi everyone,
> > > >
> > > > ASF now has an option of moving the repository to be 100% hosted on
> > > GitHub,
> > > > so it won't be a mirror of the ASF repo. I think this could increase
> > our
> > > > visibility and make the project seem more welcoming to outside
> > > > contributions. Also would make management a bit easier for us.
> > > >
> > > > What do you think? Pros and cons?
> > > >
> > > >
> > > >
> > >
> >
>





Re: Moving to 100% GitHub

2017-05-21 Thread Tal Liron
I'm confused by the conversation. Are you saying that moving ariatosca.org
to ariatosca.incubator.apache.org would increase visibility and
contributions?

On Sun, May 21, 2017 at 4:04 PM, Suneel Marthi  wrote:

> I think what John means is that  - migrate http://ariatosca.org to
> http://ariatosca.incubator.apache.org
>
>
>
> On Sun, May 21, 2017 at 4:15 PM, Tal Liron  wrote:
>
> > We have both a website and a contribution guide. How should we improve
> > them?
> >
> > http://ariatosca.org/
> > https://cwiki.apache.org/confluence/display/ARIATOSCA/
> Contributing+to+ARIA
> >
> > On Sun, May 21, 2017 at 2:38 PM, John D. Ament 
> > wrote:
> >
> > > On Sun, May 21, 2017 at 7:51 AM Ran Ziv  wrote:
> > >
> > > > I don't think moving the project to Github and the issue of lack of
> > > > discussions on the mailing list are related - we need to solve the
> > latter
> > > > regardless of the former.
> > > >
> > > >
> > > I disagree Ran.  Aria Tosca has no website, has no documented model for
> > how
> > > to collaborate.  Yet for some reason developers raise pull requests.
> How
> > > did that come to be the case?
> > >
> > >
> > > > Tal, I'm not sure how exactly it would increase visibility - The
> mirror
> > > is
> > > > as visible as the new repository would be..?
> > > > It could however make things somewhat more convenient to us, although
> > > that
> > > > too depends on who's managing the repo - I'm not familiar with this
> > > > 100%-hosted-on-github concept yet.
> > > >
> > > > Tal, care to share some more links / info about this?
> > > >
> > > >
> > > >
> > > >
> > > > On Thu, May 18, 2017 at 8:56 PM, John D. Ament <
> johndam...@apache.org>
> > > > wrote:
> > > >
> > > > > One con I see is that the project already struggles with on list
> > > > > discussions.  We want to open up github usage, but want to make
> sure
> > > that
> > > > > projects are communicating well, so that the source code being
> > > maintained
> > > > > on github is more of a convenience.  Until I see more action from
> > > > AriaTosca
> > > > > on this list I would be hesitant to say yes for the project.
> > > > >
> > > > > John
> > > > >
> > > > > On Thu, May 18, 2017 at 1:51 PM Tal Liron 
> > wrote:
> > > > >
> > > > > > Hi everyone,
> > > > > >
> > > > > > ASF now has an option of moving the repository to be 100% hosted
> on
> > > > > GitHub,
> > > > > > so it won't be a mirror of the ASF repo. I think this could
> > increase
> > > > our
> > > > > > visibility and make the project seem more welcoming to outside
> > > > > > contributions. Also would make management a bit easier for us.
> > > > > >
> > > > > > What do you think? Pros and cons?
> > > > > >
> > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
> >
> >
> >
>





Re: [UPDATE] Work on ARIA-149: Support intrinsic functions in "dependencies" operation configuration

2017-05-22 Thread Tal Liron
I see Avia has merged ARIA-180, so I will definitely convert to one-to-many
now.

The reason the original values are preserved is the same reason all our
original values are preserved: we don't want to make service instance models
dependent on service template models. They are instantiated from them, but
these foreign-key fields are all nullable. The point is to allow freedom for
orchestration policies that make on-the-fly changes to the topology. For
example, a special kind of scaling policy might involve adding an Nginx load
balancer node for HTTP connections. But what if the user did not have a node
template for that kind of node? If we required the creation of an
OperationTemplate for every Operation, we would make this awkward, because
it would change the original template. So, anyway, this has been our
principle until now and I suggest we continue it for operations, too.

Let's walk through a simple example.

   1. A service template with a simple compute node, with a single "dazzle"
   operation on a custom interface. This "dazzle" operation has a single
   implementation string, "dazzle_node.sh", and a single string input:
   "duration" = "forever". To keep things simple, no special configuration is
   added via dependencies.
   2. The parser will create an OperationTemplate with these values:
      1. implementation = "dazzle_node.sh"
      2. inputs = {duration = "forever"}
      3. plugin_specification = None
      4. configuration = {}
   3. During the initial instantiation phase, the above values will be copied
   to the Operation model as is. At this early stage, "function" and
   "arguments" are still both empty.
   4. Now, during the instantiation *configuration* phase, we see that this
   operation uses the default execution plugin (plugin_specification = None).
   Because this node is a host, it will be configured to be a *remote*
   operation with very specific arguments. Also, we decided that for all
   plugins, inputs get sent as *extra* arguments. So, here are the final
   values for Operation:
      1. implementation = "dazzle_node.sh"
      2. inputs = {duration = "forever"}
      3. configuration = {}
      4. plugin = None
      5. function =
      "aria.orchestrator.execution_plugin.operations.run_script_with_ssh"
      6. arguments = {
         process = {},
         script_path = "dazzle_node.sh",
         use_sudo = False,
         hide_output = [],
         fabric_env = {user: DEFAULT_USER, password: DEFAULT_PASSWORD,
         key: None, key_filename: None},
         duration = "forever"}

So, you can see here that if we change "implementation" to be
"aria.orchestrator.execution_plugin.operations.run_script_with_ssh" (like
we did before), the implementation is hidden in one of the arguments
(script_path), which would be very tricky for users to discover. That's an
implementation detail of the execution plugin, not something you want to
rely on. Also, if we merge everything into inputs, the original distinction
between what is an input and what is an implementation detail is lost.

The fields in this suggested scheme have distinct and clear meanings:

   1. "implementation" and "inputs" are directly from the TOSCA spec. They
   are preserved and never change.
   2. "configuration" is an extra optional ARIA extension dict allowing
   users to configure how the plugin will execute the operation.
   3. "function" and "arguments" are internal implementation details of the
   ARIA orchestrator. In a nicely straightforward sense they become a Python
   function with kwargs.

Hope this clears things up!

On Sun, May 21, 2017 at 7:18 AM, Ran Ziv  wrote:

> +1 for examples, or perhaps a link to specific code sections.
>
> From what I did understand though, before we talk about additional changes,
> I'm not at all sure I'm in favor of the current ones.
>
> First, the many-to-many relationship sounds in contrast to what Avia is
> currently working on with (changing Parameters relationship type)
>
> Second, so what you're saying is that now Operation models will have both
> "implementation" and "function", and both "inputs" and "arguments"? This
> seems very confusing.
> I'm both not convinced the original values are needed at that stage (can we
> have an actual use case?), but even if they are, do we really need to keep
> all four on the same model?
> How about having the original values on the OperationTemplate, and the
> "coerced values" (for lack of a better term..) on the Operation model?
>
>
>
> On Sun, May 21, 2017 at 11:45 AM, Arthur Berezin 
> wrote:
>
> > On Sat, May 20, 2017 at 3:47 AM Tal Liron  wr

Re: [UPDATE] Work on ARIA-149: Support intrinsic functions in "dependencies" operation configuration

2017-05-23 Thread Tal Liron
Why do you think that on-the-fly updates won't require implementation and
inputs? It's true that the execution plugin squashes inputs into arguments
and keeps the implementation in arguments.process.script_path, but that's
an implementation detail. Some plugins may care more about what is an input
and what isn't, or do other kinds of mangling to the implementation. In any
case, the user should not have to know how to massage these values: they can
set up inputs and possibly configuration (same way as in TOSCA) and then
call Operation.configure to do the right thing.

The implementation and function are not the same, as you can see even in my
trivial example. The implementation is the path to a script. The function
is a path to a Python function. "function" is absolutely not an evaluation
of "implementation": for the execution plugin the function is either
run_script_local or run_script_with_ssh, which is entirely determined by
the topology (relationship to a host node) and has nothing to do with the
implementation. Who knows, some plugins might even throw away the
implementation string entirely if it becomes irrelevant in some
configurations.

I just don't see why we need to introduce restrictions only to avoid saving
a single text column here. As for separating inputs -- seems very obvious
to me.

We will follow up the discussion in person and post our final decision here.

On Tue, May 23, 2017 at 8:57 AM, Ran Ziv  wrote:

> Actually I didn't mean ARIA-180 has already been merged, just that it might
> conflict with it and I think you two should sync about this.
>
> Regarding the new fields - I completely agree with your first paragraph -
> the original values should indeed be kept, and generally speaking the
> service instance models should be decoupled from the template models in
> order to allow on-the-fly updates and so on - but does this really apply
> here? What is the value of storing the original user values on the service
> instance models in this case? On-the-fly updates won't require the original
> value, and it does make sense to me that the only place the original values
> would be stored is on the template. At the end of the day the
> `implementation` and the `function` are one and the same - its simply that
> one of them is already evaluated and the other isn't. Also, Is it not the
> same case as properties that use the `get_input` intrinsic function?
>
> I guess the upside just isn't that clear to me here, and there is the
> downside of these being pretty confusing. The way I see it it's as if
> "implementation" is called "user_implementation" (vs "actual
> implementation") , or rather "template_implementation" - which again hints
> it should sit over at the template models..?
>
>
>
>
>
> On Mon, May 22, 2017 at 7:07 PM, Tal Liron  wrote:
>
> > I see Avia merged ARIA-180, so definitely will convert to one-to-many
> now.
> >
> > The reason the original values are preserved are the same reason all our
> > original values are preserved. The idea is that we don't want to make
> > service instance models dependent on service template models. They are
> > instantiated from them, but these foreign-key fields are all nullable.
> The
> > reason is to allow freedom for orchestration policies with on-the-fly
> > changes the topology. For example, a special kind of scaling policy might
> > involve adding an Nginx load balancer node for HTTP connections. However,
> > what if the user did not have a node template for that kind of the node?
> If
> > we require the creation of an OperationTemplate for every Operation, we
> > make this awkward, because it changes the original template. So, anyway,
> > this has been our principle until now and I suggest we continue this for
> > operations, too.
> >
> > Let's walk through a simple example.
> >
> >1. A service template with a simple compute node, with a single
> "dazzle"
> >operation on a custom interface, and this "dazzle" has a single
> >implementation string, "dazzle_node.sh", with a single string input:
> >"duration"="forever". To keep things simple, no special configuration
> is
> >added via dependencies.
> >2. The parser will create an OperationTemplate with these values:
> >   1. implementation = "dazzle_node.sh"
> >   2. inputs = {
> >  1. duration = "forever"}
> >   3. plugin_specification = None
> >   4. configuration = {}
> >   3. During initial instantiation phase, the above values will be
> >copied to Operation mo

Re: [UPDATE] Work on ARIA-149: Support intrinsic functions in "dependencies" operation configuration

2017-05-24 Thread Tal Liron
We had a long face-to-face today and reached an alignment. ARIA-149 is
being updated to reflect it.

So, we are going all the way with this:

The "implementation" and "inputs" fields in the models refer *only* to what
comes from the TOSCA parser. They are passed from template to instance as
is. This means that "implementation" should always be considered an
artifact name. (Indeed, when we implement artifacts in the near future, this
field may be refactored in some way, possibly to become a foreign key to an
artifact-related model.)

The OperationTemplate and Operation models both also have a "configuration"
field, a dict of special parameters used as hints for orchestration. The
parser can generate them using our magical "dependencies" grammar
extensions. They are normal parameters (typed, and can contain intrinsic
functions).

And, we are introducing two *new* fields to Operation and Task models (and
the related Task API): "function" and "arguments". "function" is the full
path to a Python function, and "arguments" is a dict of parameters that are
sent as arguments to the function. These are ARIA's implementation details
for orchestration, and are *not* directly related to TOSCA's
"implementation" and "inputs" fields, and indeed the Task model does not
have nor need the "implementation" and "inputs" (to put it another way:
we've renamed these two fields in Task, but they have the exact same usage
as before).

The logic for tasks has not changed, but let's go over it again for
clarification with the new fields in mind:

   1. The parser generates an OperationTemplate model, and sets the
   "implementation" and "inputs" fields. Two ARIA grammar extensions
   optionally apply:
      1. If the ">" is used in "implementation", then the parser actually
      leaves "implementation" empty (there is no artifact) and instead sets
      "function".
      2. If the ">" is used in "dependencies", they are converted to
      "configuration".
   2. During instantiation, all these fields are copied as is from
   OperationTemplate to Operation.
   3. During the configuration phase of instantiation, there are two
   possible routes:
      1. For the default execution plugin, it generates "function" and
      "arguments" according to whether the operation is local or remote,
      using params from "configuration" if they are available (and valid).
      Also, any extra "configuration" params set by the user, which are not
      recognized by the plugin, just get appended to "arguments".
      2. For other plugins, we already have "function" set for us by the
      parser (according to what the user put in "implementation" after the
      ">"). So, we just append all "configuration" params, if they exist, to
      "arguments".
   4. For all plugins, "inputs" are also appended to "arguments".
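
To make the two routes in step 3 concrete, here is a rough, self-contained
Python sketch of this configuration phase (my own simplification: only the
field names and the run_script_* function paths come from this thread; the
helper and its exact behavior are illustrative, not the real code):

from types import SimpleNamespace

RUN_LOCAL = 'aria.orchestrator.execution_plugin.operations.run_script_local'
RUN_SSH = 'aria.orchestrator.execution_plugin.operations.run_script_with_ssh'

def configure_operation(operation, is_remote):
    """Fill in "function" and "arguments"; "implementation" and "inputs" are
    left exactly as the parser stored them."""
    arguments = {}
    if operation.plugin is None:
        # Route 1: default execution plugin. The function is chosen from the
        # topology (local vs. remote host), and "configuration" params feed
        # the arguments; unrecognized ones are appended as is.
        operation.function = RUN_SSH if is_remote else RUN_LOCAL
        arguments['script_path'] = operation.implementation
        arguments.update(operation.configuration)
    else:
        # Route 2: other plugins. "function" was already set by the parser
        # (from the text after ">"), so just append the "configuration" params.
        arguments.update(operation.configuration)
    # Step 4: for all plugins, "inputs" are appended to "arguments" as well.
    arguments.update(operation.inputs)
    operation.arguments = arguments

op = SimpleNamespace(implementation='dazzle_node.sh', inputs={'duration': 'forever'},
                     configuration={}, plugin=None, function=None, arguments=None)
configure_operation(op, is_remote=True)
print(op.function)   # ...run_script_with_ssh
print(op.arguments)  # {'script_path': 'dazzle_node.sh', 'duration': 'forever'}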

Some implications of this logic worth considering:

   1. "implementation" and "inputs" are never mangled and always adhere to
   the strict TOSCA typed interface contract.
   2. It's possible to use our special "dependencies" grammar to send *extra
   arbitrary* arguments to the Python operation function, *beyond* what is
   allowed by the strict TOSCA contract.
   3. Python operation functions do not have a way to know which arguments
   came from where: some arguments might be inputs and some might be extra
   configuration. We have currently decided that distinguishing the two is
   cumbersome and not useful. If it does become important, we will need to
   revisit this logic.


On Tue, May 23, 2017 at 11:09 AM, Tal Liron  wrote:

> Why do you think that on-the-fly updates won't require implementation and
> inputs? It's true that the execution plugin squashes inputs into arguments
> and keeps the implementation in arguments.process.script_path, but that's
> an implementation detail. Some plugins may care more about what is an input
> and what isn't, or do other kinds of mangling to the implementation. In any
> case, the user should have to know how to massage these values: they can
> set up inputs and possibly configuration (same way as in TOSCA) and then
> call Operation.configure to do the right thing.
>
> The implementation and function are not the same, as you can see even in
> my trivial example. The implementation is the path to a script. The
> function is a path to a Python function. "function" is absolutely not an
> evaluation of "implementation": for the execution plugin the function is
> either run_script_local or run_script_with_ssh, which is en

Re: [UPDATE] Work on ARIA-149: Support intrinsic functions in "dependencies" operation configuration

2017-05-25 Thread Tal Liron
I don't think the implementation field is an input at all. It is instead
the thing that receives the inputs. :)

In TOSCA, the implementation is actually one primary artifact plus optional
dependencies. It should receive the interface and operation inputs, but
nothing else. TOSCA has nothing to say about "tasks", and indeed that's an
implementation (ha!) detail of ARIA.

In any case, all of this is indeed gone when we move to ARIA tasks: tasks
only have "function" and "arguments", none of the other fields.

When we have an artifact management system, that
"implementation"+"dependencies" field will become much more important: we
will validate that the artifact exists in the CSAR, see about copying it
remotely, support the get_artifact intrinsic function, etc...

On Thu, May 25, 2017 at 4:12 AM, Maxim Orlov  wrote:

> Not entirely sure why the "implementation" gets its own field. It's basically
> another input to the operation. Though it's a special input (since it
> enables the plugin to decide which "function" to run), it's still an
> input. I feel we already have a lot of different possible inputs to an
> operation: (1) interface inputs, (2) operation inputs, (3) task inputs. I
> feel that the "implementation" should have a special place under the
> operation inputs, but not a field of its own...
>
> On Thu, May 25, 2017 at 1:22 AM, Tal Liron  wrote:
>
> > We had a long face-to-face today and reached an alignment. ARIA-149 is
> > being updated to reflect it.
> >
> > So, we are going all the way with this:
> >
> > The "implementation" and "inputs" fields in the models refer *only* to
> what
> > comes from the TOSCA parser. They are passed from template to instance as
> > is. This means that "implementation" should always be considered an
> > artifact name. (Indeed, when we implement artifact in the near future,
> this
> > field may be refactored in some way, possibly to become a foreign key to
> an
> > artifact-related model.)
> >
> > The OperationTemplate and Operation models both also have a
> "configuration"
> > field, a dict of special parameters used as hints for orchestration. The
> > parser can generate them using our magical "dependencies" grammar
> > extensions. They are normal parameters (typed, and can contain intrinsic
> > functions).
> >
> > And, we are introducing two *new* fields to Operation and Task models
> (and
> > the related Task API): "function" and "arguments". "function" is the full
> > path to a Python function, and "arguments" is a dict of parameters that
> are
> > sent as arguments to the function. These are ARIA's implementation
> details
> > for orchestration, and are *not* directly related to TOSCA's
> > "implementation" and "inputs" fields, and indeed the Task model does not
> > have nor need the "implementation" and "inputs" (to put it another way:
> > we've renamed these two fields in Task, but they have the exact same
> usage
> > as before).
> >
> > The logic for tasks has not changed, but let's go over it again for
> > clarification with the new fields in mind:
> >
> >1. The parser generates an OperationTemplate model, and sets the
> >"implementation" and "inputs" fields. Two ARIA grammar extensions
> >optionally apply:
> >1. If the ">" is used in the "implementation", then the parser
> actually
> >   leaves "implementation" empty (there is no artifact) and instead
> sets
> >   "function".
> >   2. If the ">" is used in "dependencies", they are converted to
> >   "configuration".
> >   2. During instantiation, all these fields are copied as is from
> >OperationTemplate to Operation.
> >3. During the configuration phase of instantiation, there are two
> >possible routes:
> >   1. For the default execution plugin, it generates "function" and
> >   "arguments" according to whether the operation is local or remote,
> > using
> >   params from "configuration" if they are available (and valid).
> Also,
> > any
> >   extra "configuration" params set by the user, which are not
> > recognized by
> >   the plugin, just get appended to "arguments".
> >   2. For other plugins, we already have "function" set for us by the
> >   parser

Re: Query related to substitution mapping

2017-05-25 Thread Tal Liron
Hi DJ,

I'm not sure what you mean by "the substituting template". Actually, ARIA
does almost nothing with substitution templates right now, just parses,
validates, and stores the info. Indeed, if you refer to a node type there,
it should be expected that the current template would need to know of that
type, possibly by importing.

Could you provide a short example to clarify?

On Thu, May 25, 2017 at 5:32 AM, D Jayachandran  wrote:

> Hi,
>
> The substitution mapping works in the latest APACHE ARIA code only if the
> substituting template is imported in the top-level template.
>
> This seems to contradict the TOSCA specification, where the substitution
> is expected to happen without the import (though this is not explicitly
> stated).
>
> We are looking at the possible ways to identify the appropriate node
> template without importing the substituting template. ( Possibly by going
> through already available service models for a substitutable entity )
>
> Do you have any plans to have substitution mapping work without having the
> template imported every time ?
> Do you have any feedback on this and if our understanding is correct ?
>
>
> Regards,
> DJ
>
>
>
>




Re: Query on operation inputs

2017-05-25 Thread Tal Liron
> > Required inputs [u'labels'] have not been specified - expected inputs:
> > [u'isService', u'name', u'exposed_port', u'image', u'labels',
> > u'target_port', u'target_host']
> >
> > Node-type
> >
> > node_types:
> >   test.nodes.Container.Application:
> >     derived_from: tosca.nodes.Root
> >     properties:
> >       name:
> >         type: string
> >         required: true
> >       image:
> >         type: string
> >         required: true
> >       port:
> >         type: integer
> >         required: false
> >     interfaces:
> >       Standard:
> >         type: tosca.interfaces.node.lifecycle.Standard
> >         create:
> >           inputs:
> >             name:
> >               type: string
> >               required: true
> >             image:
> >               type: string
> >               required: true
> >             exposed_port:
> >               type: integer
> >               required: false
> >             target_port:
> >               type: integer
> >               required: false
> >             target_host:
> >               type: integer
> >               required: false
> >             labels:
> >               type: string
> >               required: false
> >             isService:
> >               type: boolean
> >               required: false
> >           implementation:
> >             primary: sample > sample.samplemethod
> >
> > Node template:
> >
> > web_app:
> >   type: test.nodes.Container.Application
> >   properties:
> >     name: { get_input: web_app_name }
> >     image: { get_input: web_app_image }
> >     port: { get_input: web_app_port }
> >   requirements:
> >     - dependency:
> >         node: database
> >         relationship:
> >           type: tosca.relationships.DependsOn
> >   interfaces:
> >     Standard:
> >       create:
> >         inputs:
> >           name: { get_input: web_app_name }
> >           image: { get_property: [ web_app, image ] }
> >           exposed_port: { get_property: [ web_app, port ] }
> >           target_host: { get_property: [ database, name ] }
> >           target_port: { get_property: [ database, port ] }
> >           isService: true
> >
> > Regards,
> > DJ
> >
> > -Original Message-
> > From: Ran Ziv [mailto:r...@gigaspaces.com]
> > Sent: Thursday, May 25, 2017 4:07 PM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Query on operation inputs
> >
> > Hi,
> >
> > Weird, I remember responding to this mail before, but it doesn't seem
> > like I have.
> > In any case, it is indeed our intention that no inputs may be passed
> > into operations unless they have been clearly declared in the
> service-template.
> > ARIA opts to be a strict implementation of TOSCA wherever possible.
> >
> > Ran
> >
> > On Thu, May 25, 2017 at 1:22 PM, D Jayachandran <
> > d.jayachand...@ericsson.com
> > > wrote:
> >
> > > Hi,
> > >
> > > The latest Apache-aria is throwing a validation error during the
> > > execution of a service.
> > > It demands all the operation inputs defined in a node type be
> > > declared in the service template though they are optional inputs.
> > > Could you please let us know if this change was intentional ?
> > >
> > >
> > > Regards,
> > > DJ(D Jayachandran)
> > >
> >
>





Re: Query related to substitution mapping

2017-05-26 Thread Tal Liron
> Please find the attachment for an example of substitution mapping.
> The db.yaml is the substituting template which is imported to web.yaml.
> The web.yaml has an abstract node template "database" which would be
> substituted with db.yaml.
>
> We want to understand is it mandatory to have templates imported for
> substitution to work ? Does TOSCA spec says this ?
> The use-case for us would be to have substitution mapping without
> importing the template but to find the template from the already available
> service-templates in the Database.
>
> You could refer Section 2.10  in TOSCA simple yaml 1.0 for more
> information on substitution mapping.
>
> Regards,
> DJ
>
> -Original Message-
> From: Tal Liron [mailto:t...@gigaspaces.com]
> Sent: Thursday, May 25, 2017 10:32 PM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Query related to substitution mapping
>
> Hi DJ,
>
> I'm not sure what you mean by "the substituting template". Actually, ARIA
> does almost nothing with substitution templates right now, just parses,
> validates, and stores the info. Indeed, if you refer to a node type there,
> it should be expected that the current template would need to know of that
> type, possibly by importing.
>
> Could you provide a short example to clarify?
>
> On Thu, May 25, 2017 at 5:32 AM, D Jayachandran <
> d.jayachand...@ericsson.com
> > wrote:
>
> > Hi,
> >
> > The substitution mapping works in the latest APACHE ARIA code only if
> > the substituting template is imported in the top-level template.
> >
> > This seems to be contradicting with the TOSCA specification where the
> > substitution is expected to happen without the import (though not
> > explicitly mentioned).
> >
> > We are looking at the possible ways to identify the appropriate node
> > template without importing the substituting template. ( Possibly by
> > going through already available service models for a substitutable
> > entity )
> >
> > Do you have any plans to have substitution mapping work without having
> > the template imported every time ?
> > Do you have any feedback on this and if our understanding is correct ?
> >
> >
> > Regards,
> > DJ
> >
> >
> >
> >
>
>
>





Re: Query on operation inputs

2017-05-26 Thread Tal Liron
OK, I see now -- the error you are getting is about the operation inputs,
not the topology inputs which are different.

You may have discovered a bug here. It seems like you're doing the right
thing and giving values to all these inputs, so it should not be
complaining.

I am actually working on a PR right now that makes some significant changes
to this mechanism, but it's not merged yet. I don't mean to waste your
time, but I would appreciate if you could test it out for me in your
environment. Here is the branch to use:

https://github.com/apache/incubator-ariatosca/tree/ARIA-149-functions-in-operation-configuration


On Fri, May 26, 2017 at 4:53 AM, D Jayachandran  wrote:

> Hi Tal,
>
> Thanks for your email.
>
> With the same example you took with my inputs "isService" & "image". ARIA
> has a problem when I don’t specify "isService" which is defined as
> required: false.
>
> Please find just the different inputs used in my example ( topology, node
> type  and node template)
>
> TOPOLOGY INPUTS
>
> inputs:
>   web_app_name:
>     type: string
>     value: tosca-webapp
>
>   web_app_image:
>     type: string
>     value: kuber-master:5000/webwithdbinput
>
>   web_app_port:
>     type: integer
>     value: 80
>
>   db_name:
>     type: string
>     value: tosca-database
>
>   db_image:
>     type: string
>     value: kuber-master:5000/dbforweb
>
>   db_port:
>     type: integer
>     value: 3306
>
>
> NODE-TYPE INPUTS
>
> create:
>   inputs:
>     name:
>       type: string
>       required: true
>     image:
>       type: string
>       required: true
>     exposed_port:
>       type: integer
>       required: false
>     target_port:
>       type: integer
>       required: false
>       default: 8080
>     target_host:
>       type: string
>       required: false
>       default: test
>     labels:
>       type: string
>       required: false
>     isService:
>       type: boolean
>       required: false
>
> NODE-TEMPLATE INPUTS
>
> interfaces:
>   Standard:
>     create:
>       inputs:
>         name: { get_input: web_app_name }
>         image: { get_property: [ web_app, image ] }
>         exposed_port: { get_property: [ web_app, port ] }
>         target_host: { get_property: [ database, name ] }
>         target_port: { get_property: [ database, port ] }
>         isService: true
>
> All my TOPOLOGY templates have a value, so it's not an issue in my case.
> Only "name" and "image" from my NODE-TYPE have the required definition as
> "true". So I Must mandatory have these input specified in my NODE-TEMPLATE
> which I have specified.
> Remaining NODE-TYPE inputs "exposed_port", "target_port", "target_host", "
> labels"  and "isService" have the required definition as "false".  Hence I
> may or may not specify them in my NODE-TEMPLATE input section.
> Except "labels" I have metioned all my optional outputs. I expect my
> service to be started without any issue but it fails with the error "label"
> is not specified. This is why I find ARIA is having a problem.
>
>
> # python /root/incubator-ariatosca/aria/cli/main.py executions start -s
> demo-sr-1 install
> Required inputs [u'labels'] have not been specified - expected inputs:
> [u'isService', u'name', u'exposed_port', u'image', u'labels',
> u'target_port', u'target_host']
>
>
> Regards,
> DJ
> -Original Message-
> From: Tal Liron [mailto:t...@gigaspaces.com]
> Sent: Thursday, May 25, 2017 11:19 PM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Q

Re: Support for TOSCA Simple Profile NFV 1.0

2017-06-01 Thread Tal Liron
Thanks DJ, I opened a new JIRA issue for this if you want to track it:

https://issues.apache.org/jira/browse/ARIA-275

It shouldn't be too hard to do, just some busy work in YAML. Anyone on the
mailing list want to tackle this?

On Thu, Jun 1, 2017 at 4:53 AM, D Jayachandran 
wrote:

> Hi,
>
> I hope ARIA currently supports TOSCA Simple Profile NFV 1.0 draft 03.
> The Latest available TOSCA NFV profile is Simple profile NFV 1.0 draft 04,
> released on 11 May 2017.
>
> Could you kindly confirm the current level of support from ARIA for NFV
> profiles and do you have any timelines to support draft 04 ?
>
>
> Regards,
> DJ
>





Re: Query related to substitution mapping

2017-06-01 Thread Tal Liron
Your expectations are reasonable: that ARIA would look at all of its
current service instances and try to match reqs-and-caps with substitutions.

However, we are a bit far from implementing this. Currently, ARIA only
knows how to match reqs-and-caps within the service.

Also, this feature has to be planned rather carefully: in some cases the
user will not want such automatic matching to happen with services that
just happen to exist in ARIA's db. I think this is a great place to introduce
a new Policy that would allow the user to configure exactly how matching
would happen: should the matching prefer external substitutions over
internal nodes? Are there limits to how many could be matched (like the
"occurrences" definition in Capability)? Should matching only happen with
services of a certain CSAR/template? Etc.

We are planning some work ahead to refactor the way we instantiate
services, and I think at least some parts of this feature should be
included in that.


Re: Query on operation inputs

2017-06-01 Thread Tal Liron
> > >
> > >
> > > On Tue, May 30, 2017 at 8:48 AM, D Jayachandran <
> > > d.jayachand...@ericsson.com
> > > > wrote:
> > >
> > > > Hi Ran,
> > > >
> > > > I think Tal as updated, it might be possibly a bug here. May be we
> > > > all should come to common understanding.
> > > >
> > > > As I updated earlier, since the inputs validation are completing
> > > > during parsing stage, I don’t feel why the validation is required
> > > > again during orchestration time ?
> > > > Does the TOSCA spec actually refers the 2nd points of yours ? (The
> > > > operation inputs must either have a default value in the type
> > > > definition or be supplied with a value in the actual operation
> > > > definition)
> > > >
> > > >
> > > > Regards,
> > > > DJ
> > > >
> > > > -Original Message-
> > > > From: Ran Ziv [mailto:r...@gigaspaces.com]
> > > > Sent: Sunday, May 28, 2017 6:14 PM
> > > > To: dev@ariatosca.incubator.apache.org
> > > > Subject: Re: Query on operation inputs
> > > >
> > > > I've reviewed your example, and I think either I'm missing something
> > > > or my original explanation still applies:
> > > >
> > > >   1. The validation at orchestration time for whether required
> > > > inputs have been specified does not deal with the "required" flag at
> > > > all (actually, the flag never makes it past the parsing stage and
> > > > into the
> > > storage models).
> > > >
> > > >   2. For operation inputs to validate successfully, each input must
> > > > either have a default value in the type definition or be supplied
> > > > with a value in the actual operation definition. In your case, both
> > > > "labels" and "isService" for example didn't have a default value set
> > > > in the type definition (as opposed to "target_host" for example) -
> > > However, "isService"
> > > > was set to "true" in the actual operation definition, while "labels"
> > > > wasn't assigned with any such value - Which is why you received the
> > > > validation error for a missing required input over the "labels"
> > > > operation input.
> > > >
> > > >
> > > > Does this make sense?
> > > >
> > > >
> > > > On Fri, May 26, 2017 at 7:26 PM, Tal Liron 
> wrote:
> > > >
> > > > > OK, I see now -- the error you are getting is about the operation
> > > > > inputs, not the topology inputs which are different.
> > > > >
> > > > > You may have discovered a bug here. It seems like you're doing the
> > > > > right thing and giving values to all these inputs, so it should
> > > > > not be complaining.
> > > > >
> > > > > I am actually working on a PR right now that makes some
> > > > > significant changes to this mechanism, but it's not merged yet. I
> > > > > don't mean to waste your time, but I would appreciate if you could
> > > > > test it out for me in your environment. Here is the branch to use:
> > > > >
> > > > > https://github.com/apache/incubator-ariatosca/tree/ARIA-
> > > > > 149-functions-in-operation-configuration
> > > > >
> > > > >
> > > > > On Fri, May 26, 2017 at 4:53 AM, D Jayachandran <
> > > > > d.jayachand...@ericsson.com
> > > > > > wrote:
> > > > >
> > > > > > Hi Tal,
> > > > > >
> > > > > > Thanks for your email.
> > > > > >
> > > > > > With the same example you took with my inputs "isService" &
> > "image".
> > > > > > ARIA has a problem when I don’t specify "isService" which is
> > > > > > defined as
> > > > > > required: false.
> > > > > >
> > > > > > Please find just the different inputs used in my example (
> > > > > > topology, node type  and node template)
> > > > > >
> > > > > > TOPOLOGY INPUTS
> > > > > >
> > > > > > inputs:
> > > > > 

Let's talk about scaling (ARIA-254)

2017-06-01 Thread Tal Liron
The TOSCA spec says nothing about how multiple nodes per template would be
created, leaving it to the orchestrator to decide. The provided scaling policy
in the Simple Profile is empty, just providing a base type for users to
extend.

What I'm proposing is to add basic policy-based scaling to ARIA. I have
a pull request that works and that you can try out:

https://github.com/apache/incubator-ariatosca/pull/143

The way it works is that you can define a default_instances number for any
node (or for the nodes in a group) by applying the scaling policy to
it. min_instances and max_instances are also there, though ARIA doesn't
know what to do with them at the moment. They would be available for
plugins to implement their own scaling system, to make sure their workflows
scale within the user-defined bounds.

In terms of code, it meant removing the default_instances, min_instances,
and max_instances fields from the NodeTemplate model. Instead, there is a
default_instance property that will grab the value from applicable policies.

Works great! But I would be happy to hear feedback about this bold
approach. I think one negative is that if you want to give default scaling
parameters for different nodes you would have to define separate policies.
But I don't think this is so bad, and it is actually the correct usage of
policies (and groups) in TOSCA.
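
To make the resolution concrete, here is a minimal sketch in plain Python of
how default_instances could be grabbed from applicable policies. This is for
illustration only -- the policy type name and the data shapes here are
assumptions, not ARIA's actual models:

# Illustration only -- the policy type name and data shapes are assumed.

def resolve_default_instances(node_template_name, policies, groups):
    """Resolve default_instances from applicable scaling policies."""
    group_members = {group['name']: set(group['members']) for group in groups}
    for policy in policies:
        if policy['type'] != 'aria.Scaling':
            continue
        applies = node_template_name in policy['targets'] or any(
            node_template_name in group_members.get(target, ())
            for target in policy['targets'])
        if applies:
            return policy['properties'].get('default_instances', 1)
    return 1  # no applicable scaling policy: a single instance

# A policy targeting a group that contains the "web_server" template:
policies = [{'type': 'aria.Scaling', 'targets': ['web_group'],
             'properties': {'default_instances': 3, 'max_instances': 5}}]
groups = [{'name': 'web_group', 'members': ['web_server']}]
assert resolve_default_instances('web_server', policies, groups) == 3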


Re: Let's talk about scaling (ARIA-254)

2017-06-05 Thread Tal Liron
>
>
> The scaling policies are definitely needed e.g. for group, but it'd have
> been nice to have some simplified manner for defining multiple instances of
> a given node without having to go through actual scaling policies
> definitions (e.g. a special property on the node to define this etc).
>

I simply cannot think of a way to do this in TOSCA. For our operation
configuration we made use of dependencies in a creative way. But for nodes
there just isn't much there to use. I considered using artifact definitions
... but this is really going too far in my view in terms of weirdness. And
policies are the right way to do it in TOSCA.


> 2) I'm not sure about removing the relevant fields from NodeTemplate. The
> scaling policy is after all a special one, and having it fill these fields'
> values seems somewhat logical IMO.
>

Actually, the scaling policy is not especially special :) -- it's parsed
and stored like any other policy (unlike custom workflow policies, which
become OperationTemplate models). And there can be many other TOSCA
policies: placement, allotment, allocations, etc., that may or may not be
supported by ARIA specifically, and may or may not be used by other TOSCA
tools up or down the line.

For example, someone might create an ARIA extension to support resource
allocation policies with its own list of special properties. Why does
scaling get a privileged column in NodeTemplate but not them? I say we
treat all policies fairly and equally and not sully the NodeTemplate model
with policy specifics. The policies are exactly designed to be in a
separate section in the TOSCA template so you can look at them and see all
that apply to your nodes.

There's actually something else interesting we can do here: create a
special "policies" property for NodeTemplate that combines those directly
associated with it with those that are associated indirectly via
GroupTemplate (this extra combination was the only reason I didn't use SQL
relationships directly to find them). This could be implemented via a SQL
query. What do you think?
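
Conceptually it would look something like this (plain Python just to show the
semantics -- in ARIA it would presumably be a SQLAlchemy property or query,
and the attribute names here are assumptions):

# Conceptual sketch only -- attribute names are assumed, not ARIA's actual models.

class GroupTemplate(object):
    def __init__(self, name, policies):
        self.name = name
        self.policies = policies          # policies applied to this group

class NodeTemplate(object):
    def __init__(self, name, policies, groups):
        self.name = name
        self._direct_policies = policies  # policies applied to this template directly
        self._groups = groups             # groups this template is a member of

    @property
    def policies(self):
        """All applicable policies: direct ones plus those inherited via groups."""
        combined = list(self._direct_policies)
        for group in self._groups:
            for policy in group.policies:
                if policy not in combined:
                    combined.append(policy)
        return combined

scaling = {'type': 'aria.Scaling'}
web = NodeTemplate('web_server', [], [GroupTemplate('web_group', [scaling])])
assert web.policies == [scaling]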

3) You've mentioned once before that TOSCA also defines an implicit
> mechanism for creating multiple instances via the requirements&capabilities
> mechanism; What are your ideas regarding that, and how do they fit in
> together with this?
>

So, here's how I understand it: in capabilities definition (section
3.6.1.1) you have an "occurrences" field, which by default is unbounded but
can definitely be set to an explicit max. For example, a router might
define an "upstream" capability limited to 5. During the reqs-and-caps
matching process, let's say that 5 relationships have been forged with the
node, and now you have an extra node that also requires "upstream" from us.
To me, this seems to imply that an extra router node should be created.

But ... I might very well be wrong here. This kind of auto-scaling might
have unforeseen consequences if the template author was not aware that this
could happen.

I think it's best that we not apply any magical autoscaling at the
design phase. HOWEVER, we can take the policy into consideration if it was set.
If default_instances is 1 and max_instances is 5, then it means the
template author has considered multiple router nodes.

I hope this makes sense; it's a somewhat shadowy corner of the TOSCA
spec... Note that I would consider implementing this as a separate JIRA if we
agree that this is correct.


Re: Let's talk about scaling (ARIA-254)

2017-06-07 Thread Tal Liron
> About storing in policy information in nodes/templates that the policy is
> associated with in general - sounds interesting, I'm not sure I'd implement
> this right away, I think it'd be better to hold off and see what sort of
> policies might exist in the future and what data might be useful to keep on
> the node/template first.
>

Oh sorry, I was trying to make the opposite point. :) I don't think there
is a good reason to store special policy information in nodes, when it can
easily be looked up. And so in this case I am considering the aria.Scaling
Policy to be just another policy, which is why I am saying we shouldn't
store "default_instances" in the node...


Re: Missing support for type qualified name

2017-06-08 Thread Tal Liron
Thanks, DJ. This was on the list of things to do but we indeed forgot to
create a JIRA for it...

On Thu, Jun 8, 2017 at 8:24 AM, Ran Ziv  wrote:

> Hi DJ,
>
> Sounds good. Feel free to create a new JIRA yourself!  And thanks for
> posting on the dev list before creating this issue.
> One small note, I'd personally think of this as a "story" rather than a
> "bug" - We don't yet claim to be 100% TOSCA complaint, and we're familiar
> with several other missing spec sections implementations at the moment.
>
> Let me know if you need any help.
> Tal might have more to add on this (Type qualified name) as well.
>
> Thank you
> Ran
>
>
> On Wed, Jun 7, 2017 at 3:35 PM, D Jayachandran <
> d.jayachand...@ericsson.com>
> wrote:
>
> > Hi,
> >
> > TOSCA Simple Yaml 1.0 profile specification supports usage of  the
> > following Namespace Alias
> >
> >   1.  Shorthand Name
> >   2.  Type Qualified Name
> >   3.  Type URI
> >
> > ARIA currently supports only "Shorthand Name" and "Type URI". The support
> > for "Type Qualified Name" is missing which is required to adhere with the
> > TOSCA Simple yaml 1.0 specifications. Could this be considered as bug
> and a
> > JIRA issue opened for this ?
> > We would like to start our contribution with this.
> >
> >
> > Regards,
> > DJ
> >
> >
>





Re: Let's talk about scaling (ARIA-254)

2017-06-12 Thread Tal Liron
Let's talk about scaling some more. :)

Earlier, I completely missed the definition of tosca.capabilities.Scalable:

http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/cos01/TOSCA-Simple-Profile-YAML-v1.0-cos01.html#DEFN_TYPE_CAPABILITIES_SCALABLE

The reason I missed it is that it's actually *not* declared in
tosca.nodes.Root, but specifically declared only for tosca.nodes.Compute
and tosca.nodes.Container.Runtime. This seems a bit odd to me. From our
experience, we know that it's not only VM nodes that need to be scaled, but
in fact *any* node template in the topology can have more than
one instance (or possibly even zero in some situations). I'm honestly
confused as to why TOSCA did it this way.

So, I currently propose this: support *both* the capability and a policy.
The mechanism works by first looking through the node template's
capabilities to find whether it has a "scalable"-role capability. If it
doesn't, it will see if there is a "scalable"-role policy that applies to
it.

I'll note that the "scalable"-role policy is part of the ARIA Profile, but
even without that profile we will still support the more basic scalability
defined in the Simple Profile. Supporting both is actually very easy in
terms of the code, just a few lines for each once the mechanism is in
place: the property names and usages are the same in each.

One quirk is how tosca.capabilities.Scalable defines its properties. We're
used to thinking that max_instances defaults to infinity, and
default_instances defaults to 1. The way the Simple Profile defines it,
max_instances defaults to 1, and default_instances is an optional field.
Meaning that in a sense it is up to the orchestrator to define the default
number of instances for a node template. I recommend we switch to the way
the Simple Profile works, for our policy as well.

What this means is that if, for example, you want a node to have 5
instances, then you need to set default_instances to 5, but *also* set
max_instances to >=5, otherwise you will get a validation error. I
personally think this is annoying, and prefer our way of thinking, but I
think we should adhere to TOSCA here.
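
To illustrate one possible reading of the proposed lookup order and the
Simple Profile defaults (a sketch only -- the real implementation may differ):

# Rough sketch of the proposed resolution -- illustration only, not ARIA's code.

def resolve_instance_count(scalable_capability, scalable_policy):
    """Use the "scalable"-role capability if present, otherwise the
    "scalable"-role policy, then apply the Simple Profile defaults
    (max_instances defaults to 1, default_instances is optional and
    falls back to min_instances)."""
    properties = scalable_capability or scalable_policy or {}
    min_instances = properties.get('min_instances', 1)
    max_instances = properties.get('max_instances', 1)
    default_instances = properties.get('default_instances', min_instances)
    if not (min_instances <= default_instances <= max_instances):
        raise ValueError(
            'default_instances must be between min_instances and max_instances')
    return default_instances

# Asking for 5 instances means also raising max_instances, otherwise validation fails:
capability = {'default_instances': 5, 'max_instances': 5}
assert resolve_instance_count(capability, None) == 5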


Re: Let's talk about scaling (ARIA-254)

2017-06-13 Thread Tal Liron
Right, TOSCA also defines tosca.policies.Scaling as a base type for
potential scaling policies, though it leaves it entirely empty. So I think we
are doing it the right "TOSCA way" by supporting both.

About which should override which -- I'm really not entirely sure. The way
the code works right now: it tries to apply capability values, and then
policy values. So policy values would override those.

Another option would be to emit a validation error if there is
conflict/ambiguity between the values. What do you think?

On Tue, Jun 13, 2017 at 4:25 AM, Ran Ziv  wrote:

> I think that's a very good solution. The semantics of capability being
> first in precedence makes sense - basically if you'd like to scale a
> compute node, do it the "TOSCA way" - Otherwise, use the policy.
>
> I also agree that despite the quirkiness, the policy should have identical
> properties/defaults as the capability does (despite the extra step required
> in order to set >1 instances).
> This should also mean that a user could also scale the number of instances
> by setting "min_instances=X" and "max_instances>=X", yet without setting
> "default_instances". The number of instances in this scenario should be X.
>
>
> Have you seen the 5.4.10.3 note by the way? It talks about the number of
> instances possibly being governed by a separate policy - Even though this
> is still not properly defined in TOSCA, perhaps this means that the
> policy's values should override the capability's if both exist? I'd rather
> it didn't, as it could make things confusing, and because it's still an
> ARIA-specific definition. Just thought I'd bring it up though.
>
>
>
>
> On Tue, Jun 13, 2017 at 1:54 AM, Tal Liron  wrote:
>
> > Let's talk about scaling some more. :)
> >
> > Earlier, I complete missed the definition of tosca.capabilities.Scalable:
> >
> > http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-
> > YAML/v1.0/cos01/TOSCA-Simple-Profile-YAML-v1.0-cos01.html#
> > DEFN_TYPE_CAPABILITIES_SCALABLE
> >
> > The reason I missed it is that it's actually *not* declared in
> > tosca.nodes.Root, but specifically declared only for tosca.nodes.Compute
> > and tosca.nodes.Container.Runtime. This seems a bit odd to me. From our
> > experience, we know that it's not only VM nodes that need to be scaled,
> but
> > in fact *any* node in the topology is a template that can have more than
> > one instances (or possibly even zero in some situations). I'm honestly
> > confused as to why TOSCA did it this way.
> >
> > So, I currently propose this: support *both* the capability and a policy.
> > The mechanism works by first looking through the node templates
> > capabilities to find whether it has a "scalable"-role capability. If it
> > doesn't, it will see if there is a "scalable"-role policy that applies to
> > it.
> >
> > I'll note that the "scalable"-role policy is part of the ARIA Profile,
> but
> > even without that profile we will still support the more basic
> scalability
> > defined in the Simple Profile. Supporting both is actually very easy in
> > terms of the code, just a few lines for each once the mechanism is in
> > place: the properties names and usages are the same in each.
> >
> > One quirk is how tosca.capabilities.Scalable defines its properties.
> We're
> > used to thinking that max_instances defaults to infinity, and
> > default_instances defaults to 1. The way the Simple Profile defines it,
> > max_instances defaults to 1, and default_instances is an optional field.
> > Meaning that in a sense it is up to the orchestrator to define the
> default
> > number of instances for a node template. I recommend we switch to the way
> > the Simple Profile works, for our policy as well.
> >
> > What this means is that if, for example, you want a node to have 5
> > instances, then you need to set default_instances to 5, but *also* set
> > max_instances to >=5, otherwise you will get a validation error. I
> > personally think this is annoying, and prefer our way of thinking, but I
> > think we should adhere to TOSCA here.
> >
>





Re: aria install from source error

2017-06-17 Thread Tal Liron
DeWayne, can you detail all the steps you needed from scratch and on which
OS?

On Fri, Jun 16, 2017 at 7:21 PM, DeWayne Filppi 
wrote:

> Ran,
>
> FYI, that worked, in addition to a ton of other modules I had to add.  The
> other modules were easy to identity, unlike setuptools.  Thanks.
>
> DeWayne
>
> On Fri, Jun 16, 2017 at 2:34 PM, Ran Ziv  wrote:
>
> > Hi DeWayne,
> >
> > The readme doesn't specify anything about a centos installation by the
> way
> > - there should definitely be some system dependencies that might be
> > required, so if you run into those please let us know so we'll update the
> > readme accordingly.
> >
> > Your specific error, however, is caused by an old version of setuptools.
> > please run this:
> >
> > pip install -U setuptools
> >
> > (possibly upgrading pip itself beforehand)
> > and then try again.
> >
> >
> > Re attempting to install aria from pypi, it's not on there yet, so that
> > wouldn't work right now. The readme is ahead of its time in that regard.
> >
> >
> > Ran
> >
> >
> > On Fri, Jun 16, 2017 at 11:08 PM, DeWayne Filppi  >
> > wrote:
> >
> > > When installing Aria per instructions on centos 7 from source (e.g. pip
> > > install .), I get :
> > >
> > > Unpacking /home/vagrant/incubator-ariatosca
> > >   Running setup.py egg_info for package from file:///home/vagrant/
> > > incubator-ariatosca
> > > Traceback (most recent call last):
> > >   File "", line 16, in 
> > >   File "/tmp/pip-bnCofl-build/setup.py", line 134, in 
> > > packages=find_packages(include=['aria*']) +
> > > TypeError: find_packages() got an unexpected keyword argument
> > 'include'
> > > Complete output from command python setup.py egg_info:
> > > Traceback (most recent call last):
> > >
> > >   File "", line 16, in 
> > >
> > >   File "/tmp/pip-bnCofl-build/setup.py", line 134, in 
> > >
> > > packages=find_packages(include=['aria*']) +
> > >
> > > TypeError: find_packages() got an unexpected keyword argument 'include'
> > >
> > > Is this familiar?
> > >
> > > Also note that I installed pip via bootstrap.pypa and running "pip
> > install
> > > aria" yields a "not found" error.
> > >
> > > DeWayne
> > >
> >
>
>
>
> --
> DeWayne Filppi, Director, Solutions Architect <http://cloudify.co>
> --
> M: +17145121706 http://cloudify.co @dfilppi
> <https://twitter.com/CloudifySource>
> <https://www.linkedin.com/company-beta/17918192/>
> <https://github.com/cloudify-cosmo>
> <https://www.youtube.com/cloudifysource>
>





Re: [VOTE] publish ariatosca 0.1.0

2017-06-29 Thread Tal Liron
This is confusing to me. Python is an interpreted language, so there is no
real difference between source and binary.

In the Python world, you would probably differentiate between "dev"
(everything in the git repo, including tests and development tools) and
"release" (just what's needed to run). I think we treated this as a
"release".

Is there a way to map Python's "dev" and "release" onto ASF's "source" and
"binary"? What do other Python ASF projects do?

On Thu, Jun 29, 2017 at 11:00 AM, John D. Ament 
wrote:

> On Thu, Jun 29, 2017 at 11:53 AM Ran Ziv  wrote:
>
> > Suneel, re mentioning 72 hours - note that I simply used the recommended
> > template for these messages from here:
> >
> > http://mail-archives.apache.org/mod_mbox/incubator-stdcxx-
> dev/200601.mbox/%3c43c1c0a0.7040...@roguewave.com%3E
> >
> >
> I'll note this is an email from 10 years ago, and things have been refined
> since then.  I plan to rewrite that guide to give better examples.  Here's
> a more up to date example
>
> https://lists.apache.org/thread.html/9fd77b14753bbde462bea06fc2e1c0
> 3d5cf5a89cea2fabd6751d805a@%3Cdev.ponymail.apache.org%3E
>
>
>
> >
> >
> > John:
> > Are you reading this off the README? If so, you'll notice that the
> > installation section mentions that when installing ARIA from source, the
> > command that should be executed is actually "pip install ." when you're
> > inside the extracted dir.
> >
> > Regarding your other comments:
> >  - DISCLAIMER file - apparently it was dropped from the manifest file
> > somehow, i'll add it back.
> >  - Is RAT to be used for Python projects as well? I thought it was
> > Java-specific and I'm not familiar with similar tools for Python. We've
> > done what we can to verify every code file has the license header.
> >
>
> RAT is a tool written in java that checks headers in all languages.  We
> should have instructions on how to run it here.
>
>
> >  - This is indeed the source release - There are indeed deltas between
> this
> > and the repo files but that's because some files are unnecessary for
> users
> > (e.g. docs generating files) while some aren't needed in the repo (e.g.
> > docs generated files).
> >
> >
> The source release is what's in your repo.  Source releases are for
> everyone to consume.
>
>
> >
> >
> > Ran
> >
> >
> > On Thu, Jun 29, 2017 at 6:43 PM, John D. Ament 
> > wrote:
> >
> > > -1.  Found the following issues:
> > >
> > > - BUILD instructions are INSTALL instructions, and the installation
> > doesn't
> > > work
> > >
> > > pip install apache-ariatosca
> > >
> > >
> > > Collecting apache-ariatosca
> > >   Could not find a version that satisfies the requirement
> > apache-ariatosca
> > > (from versions: )
> > > No matching distribution found for apache-ariatosca
> > >
> > > - There is no DISCLAIMER file
> > > - No instructions on how to run RAT
> > > - I'm not sure this is a source release, many files don't match whats
> in
> > > the repo (files added/missing?)
> > >
> > > Other things look fine:
> > > - contains incubating
> > > - files contain headers
> > >
> > > On Thu, Jun 29, 2017 at 11:26 AM Ran Ziv  wrote:
> > >
> > > > I created a tarball candidate for the 0.1.0 release and placed it in
> > > ARIA's
> > > > /dist/dev folder:
> > > > https://dist.apache.org/repos/dist/dev/incubator/ariatosca/
> > > > The file is signed (.asc) and its MD5 / SHA512 checksums may be found
> > in
> > > > that folder as well.
> > > >
> > > >
> > > > The list of issues Resolved for this release are simply all the
> issues
> > > that
> > > > have been resolved thus far, seeing as this would be the first
> release
> > :)
> > > > Those can be found here:
> > > > https://issues.apache.org/jira/browse/ARIA-295?filter=-
> > > > 1&jql=project%3Dariatosca%20and%20status%20in%20(
> resolved%2C%20closed)
> > > >
> > > >
> > > > Instructions for installation etc. may be found in the README file
> > inside
> > > > the tarball.
> > > >
> > > >
> > > > Please vote to publish this tarball on ARIA's /dist/release folder.
> > > >
> > > >
> > > > Ran
> > > >
> > >
> >
>





Re: inputs

2017-06-30 Thread Tal Liron
They are typed parameters -- so they can be any type, including custom data
types.

By the way, for inputs ARIA can validate only simple types right now, not
custom data types. We have that on our TODO list.
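
As a rough illustration of what that means in practice (a simplified sketch,
not the parser's actual code):

# Simplified sketch -- not ARIA's actual validation code.
SIMPLE_TYPES = {'string': str, 'integer': int, 'float': float}

def validate_input(name, declared_type, value):
    """Coerce and validate a simple-typed input; custom data types are passed
    through unvalidated for now (the TODO mentioned above)."""
    python_type = SIMPLE_TYPES.get(declared_type)
    if python_type is None:
        return value  # custom data type: not validated yet
    try:
        return python_type(value)
    except (TypeError, ValueError):
        raise ValueError('input "{0}" is not a valid {1}: {2!r}'.format(
            name, declared_type, value))

assert validate_input('port', 'integer', '8080') == 8080
validate_input('endpoint', 'my.datatypes.Endpoint', {'host': 'example.org'})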

On Fri, Jun 30, 2017 at 12:34 PM, DeWayne Filppi 
wrote:

> Are template inputs always scalar?  The spec seems vague, but all examples
> I see are scalar.
>
> --DeWayne
>





Re: Query on operation inputs

2017-07-03 Thread Tal Liron
Oops, it seems your email from before was somehow tagged as read by
mistake, so I missed it. I will get to it after the US holiday!

On Mon, Jul 3, 2017 at 4:21 AM, D Jayachandran 
wrote:

> Hi Tal,
>
> Have you got a chance to look into this below issue ?
>
> Regards,
> DJ
> -Original Message-
> From: D Jayachandran [mailto:d.jayachand...@ericsson.com]
> Sent: Monday, June 05, 2017 3:44 PM
> To: dev@ariatosca.incubator.apache.org
> Subject: RE: Query on operation inputs
>
> Hi Tal,
>
> Please find below the git repo of my example.
>
> https://github.com/djay8887/Aria-operationInputs
>
> regards,
> DJ
>
> -Original Message-
> From: Tal Liron [mailto:t...@gigaspaces.com]
> Sent: Thursday, June 01, 2017 9:59 PM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Query on operation inputs
>
> I'm still a bit confused by all this. DJ, could you possibly create a
> quick git repo with your complete example to make sure we're all on the
> same page here?
>
> On Thu, Jun 1, 2017 at 7:10 AM, Ran Ziv  wrote:
>
> > Right, it makes more sense now :) But now I simply have to say again
> > that as far as I can tell this should in fact be the intended behavior.
> >
> > What would you rather happen? the "labels" parameter be assigned with
> > "None" instead?
> > We considered this but part of the problem here is that the
> > information about whether an input is required or not is no longer
> > available at this stage so it's impossible to know whether to use "None"
> or raise an error.
> > Tal and I have talked about it in the past, and from what I remember,
> > Tal said the "required" field information in fact should not be
> > stored, and is only relevant for parsing phase. It is possible I'm
> > getting this wrong though :)
> >
> > I'm open for changes here as it is a somewhat confusing behavior -
> > although I think it does make sense after all.
> >
> >
> >
> > On Thu, Jun 1, 2017 at 3:04 PM, D Jayachandran <
> > d.jayachand...@ericsson.com>
> > wrote:
> >
> > > Hi Ran/Tal,
> > >
> > > I was wrong, Tal's branch still throws the validation error (I was
> > loading
> > > a different service template) :). So the issue which I told still
> > > exists
> > >
> > > [root@DJ-DEV tal-test]# python /root/tal-test/incubator-
> > ariatosca/aria/cli/main.py
> > > executions start install -s s2
> > > Declared parameters "labels" have not been provided values
> > >
> > > Regards,
> > > DJ
> > >
> > > -Original Message-
> > > From: Ran Ziv [mailto:r...@gigaspaces.com]
> > > Sent: Thursday, June 01, 2017 5:24 PM
> > > To: dev@ariatosca.incubator.apache.org
> > > Subject: Re: Query on operation inputs
> > >
> > > Again, there's a difference between the "required" validation and
> > > the actual runtime validation. the runtime one cannot be done during
> > > instantiation phase, which is why there are two separate validations.
> > >
> > > I do not know how come Tal's branch (which by now has been merged to
> > > master) helped fixing your issue, so I might have misunderstood
> > > something about your problem :)
> > >
> > > Ran
> > >
> > > On Thu, Jun 1, 2017 at 2:11 PM, D Jayachandran <
> > > d.jayachand...@ericsson.com>
> > > wrote:
> > >
> > > > Hi Tal,
> > > >
> > > > I did test your branch  https://github.com/apache/
> > > > incubator-ariatosca/tree/ARIA-149-functions-in-operation-configura
> > > > tion and it seems to have the fix for operation/interface inputs.
> > > >
> > > > Regards,
> > > > DJ
> > > > -Original Message-
> > > > From: D Jayachandran
> > > > Sent: Thursday, June 01, 2017 4:40 PM
> > > > To: dev@ariatosca.incubator.apache.org
> > > > Subject: RE: Query on operation inputs
> > > >
> > > > Hi Ran,
> > > >
> > > > The validation of operation inputs is also done during instantiation.
> > > > Please find below.
> > > >
> > > > [root@DJ-DEV tal-test]# python
> > > > /root/tal-test/incubator-ariatosca/aria/cli/main.py
> > > > service-templates store /root/tosca_simple_yaml_
> > > > plugin/kubernetes-deployment.yaml st-3 Storing service template
> > st-3...
>

Re: imperative workflows (1.1)

2017-07-03 Thread Tal Liron
Not exactly. We do not support TOSCA 1.1 imperative workflows and it's not
on the roadmap. Our current plan is to provide a state-of-the-art TOSCA 1.0
implementation, and we're not at 100% completion yet.

That said, we support something we call "custom workflows". They are not
defined directly in TOSCA, but rather in a Python function indicated by a
TOSCA policy (of type aria.Workflow).

To be honest, you can do a lot more with Python than with TOSCA, but of
course we understand that Python is not an option for many users. And of
course this is an optional feature.

Bottom line: we do not support TOSCA 1.1 right now, but have a very
powerful workaround.

On Mon, Jul 3, 2017 at 3:01 PM, DeWayne Filppi 
wrote:

> Is there any current support for imperative workflows (ala 1.1)?  If not,
> is it a priority roadmap item?
>
> --DeWayne
>





Re: imperative workflows (1.1)

2017-07-04 Thread Tal Liron
Hi Steve,

Let's just call it ARIA (the "ng" next-gen suffix was a temporary
development code, no longer necessary as everything is merged into the
Apache repository).

The Python workflow function doesn't actually execute the operations. Its
job is to create the workflow graph based on the topology. The graph is a
series of interdependent tasks that usually just execute a TOSCA operation
on an interface of a node, a relationship, or a group. For example, the
normative TOSCA "install" workflow has a series of interdependent
operations to install, configure, and start nodes, based on
the TOSCA normative lifecycle interfaces. (See the TOSCA spec for more
details.) Your own custom workflows will likely use your own custom
interfaces and operations with their own defined dependencies. That's where
you need Python to build that graph. Actually, most workflow functions are
quite short: there's not a lot of Python to write here.

Once the workflow Python function creates the graph, an "executor" actually
handles the execution of the tasks. ARIA's built-in executor can run the
tasks either locally (on the machine running ARIA) or remotely via SSH, and
expects these "implementations" to be simple executable artifacts, such
as shell scripts, that can be included in the CSAR. That's all standard
TOSCA behavior. The executor API, though, is extensible and we expect many
useful executors in the future. We have some support for Celery, and we also
expect to support Puppet, Chef, and Juju. So a task, for example, could execute a
Juju charm rather than just run a shell script. All of these products have
their own rich debugging tools.

That said, we've also put a lot of effort into the logging mechanism: logs
are stored in the database (SQL), and are marked per workflow execution and
per service, so even if you are running a large number of workflows you can
still pinpoint the error without rummaging through mountains of text logs.
The CLI lets you access these logs, though you can read the database via
your own monitoring and analytics tools. So, if an individual task fails,
the error message will be logged at this high level.

It's also worth pointing out that even the task API is extensible. Though
usually you want to execute TOSCA operations as part of your workflow, you
can include your own custom tasks as part of the workflow, for
domain-specific operations that are just not relevant to the TOSCA
topology. (For example, a task to discover a controller, register a
transaction, etc.)

The CLI is quite rich in other aspects, including tools to display the
topology in depth and understand what might have gone wrong.

Obviously, this is an ongoing effort and we would very much appreciate
feedback on how to improve debugging and workflow construction. Of course
we look forward to TOSCA 1.1 finalizing and allowing a way to define these
workflow graphs in YAML, although from experience we expect that YAML might
not be good enough for some especially complex workflows. But, we'll wait
and see, TOSCA 1.1 is also an ongoing effort.

We're also missing a guide and examples for how to do all this. For now, we
are here to help on the mailing list!
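
In the meantime, here is a purely conceptual sketch in plain Python of what a
workflow function does. It deliberately avoids ARIA's actual decorators and
task API and only illustrates the idea: the function builds a graph of
interdependent operation tasks, which an executor then runs:

# Conceptual sketch only -- plain Python, NOT ARIA's actual workflow or task API.

class Task(object):
    def __init__(self, node, operation):
        self.node = node
        self.operation = operation
        self.depends_on = []

def install_workflow(nodes):
    """Build an install-like graph: create -> configure -> start for each node,
    and don't begin a node before its host node has fully started."""
    graph = []
    last_task_of = {}
    for node in nodes:  # assume nodes are already ordered host-first
        previous = None
        for operation in ('create', 'configure', 'start'):
            task = Task(node['name'], operation)
            if previous is not None:
                task.depends_on.append(previous)
            elif node.get('host') in last_task_of:
                task.depends_on.append(last_task_of[node['host']])
            graph.append(task)
            previous = task
        last_task_of[node['name']] = previous
    return graph

for task in install_workflow([{'name': 'vm'}, {'name': 'app', 'host': 'vm'}]):
    print('{0}.{1} depends on {2}'.format(
        task.node, task.operation,
        ['{0}.{1}'.format(d.node, d.operation) for d in task.depends_on]))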

On Tue, Jul 4, 2017 at 5:54 PM, Steve Baillargeon <
steve.baillarg...@ericsson.com> wrote:

> Hi
> Good information.
> How does the user troubleshoot a Python script that is not properly
> executed by ARIA NG?
> Any support for CLI or API to inform client and provide cause/reason for
> error?
>
> -Steve B
>
> -Original Message-
> From: Ran Ziv [mailto:r...@gigaspaces.com]
> Sent: Tuesday, July 04, 2017 4:23 AM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: imperative workflows (1.1)
>
> Yup. We have given some thought as to how to implement 1.1 workflows
> including conditional tasks ("on_success"/"on_failure") and it maps well
> into the workflow engine, but generally speaking supporting TOSCA 1.1 is
> not currently a top item.
>
> On Tue, Jul 4, 2017 at 8:12 AM, Tal Liron  wrote:
>
> > Not exactly. We do not support TOSCA 1.1 imperative workflows and it's
> > not on the roadmap. Our current plan is to provide a state-of-the-art
> > TOSCA 1.0 implementation, and we're not at 100% completion yet.
> >
> > That said, we support something we call "custom workflows". They are
> > not defined directly in TOSCA, but rather in a Python function
> > indicated by a TOSCA policy (of type aria.Workflow).
> >
> > To be honest, you can do a lot more with Python than with TOSCA, but
> > of course we understand that Python is not an option for many users.
> > And of course this is an optional feature.
> >
> > Bottom line: we do not support TOSCA 1.1 right now, but have a very
> > powerful workar

Re: Query on operation inputs

2017-07-05 Thread Tal Liron
DJ, I'm sorry, I'm still not sure what to look at here. The example seems
to run fine.

By the way, you can import ARIA's types (such as aria.Plugin) like so:

imports:
  - aria-1.0

On Mon, Jul 3, 2017 at 2:17 PM, Tal Liron  wrote:

> Oops, it seems your email from before was somehow tagged as read by
> mistake, so I missed it. I will get to it after the US holiday!
>
> On Mon, Jul 3, 2017 at 4:21 AM, D Jayachandran <
> d.jayachand...@ericsson.com> wrote:
>
>> Hi Tal,
>>
>> Have you got a chance to look into this below issue ?
>>
>> Regards,
>> DJ
>> -Original Message-
>> From: D Jayachandran [mailto:d.jayachand...@ericsson.com]
>> Sent: Monday, June 05, 2017 3:44 PM
>> To: dev@ariatosca.incubator.apache.org
>> Subject: RE: Query on operation inputs
>>
>> Hi Tal,
>>
>> Please find below the git repo of my example.
>>
>> https://github.com/djay8887/Aria-operationInputs
>>
>> regards,
>> DJ
>>
>> -Original Message-
>> From: Tal Liron [mailto:t...@gigaspaces.com]
>> Sent: Thursday, June 01, 2017 9:59 PM
>> To: dev@ariatosca.incubator.apache.org
>> Subject: Re: Query on operation inputs
>>
>> I'm still a bit confused by all this. DJ, could you possibly create a
>> quick git repo with your complete example to make sure we're all on the
>> same page here?
>>
>> On Thu, Jun 1, 2017 at 7:10 AM, Ran Ziv  wrote:
>>
>> > Right, it makes more sense now :) But now I simply have to say again
>> > that as far as I can tell this should in fact be the intended behavior.
>> >
>> > What would you rather happen? the "labels" parameter be assigned with
>> > "None" instead?
>> > We considered this but part of the problem here is that the
>> > information about whether an input is required or not is no longer
>> > available at this stage so it's impossible to know whether to use
>> "None" or raise an error.
>> > Tal and I have talked about it in the past, and from what I remember,
>> > Tal said the "required" field information in fact should not be
>> > stored, and is only relevant for parsing phase. It is possible I'm
>> > getting this wrong though :)
>> >
>> > I'm open for changes here as it is a somewhat confusing behavior -
>> > although I think it does make sense after all.
>> >
>> >
>> >
>> > On Thu, Jun 1, 2017 at 3:04 PM, D Jayachandran <
>> > d.jayachand...@ericsson.com>
>> > wrote:
>> >
>> > > Hi Ran/Tal,
>> > >
>> > > I was wrong, Tal's branch still throws the validation error (I was
>> > loading
>> > > a different service template) :). So the issue which I told still
>> > > exists
>> > >
>> > > [root@DJ-DEV tal-test]# python /root/tal-test/incubator-
>> > ariatosca/aria/cli/main.py
>> > > executions start install -s s2
>> > > Declared parameters "labels" have not been provided values
>> > >
>> > > Regards,
>> > > DJ
>> > >
>> > > -Original Message-
>> > > From: Ran Ziv [mailto:r...@gigaspaces.com]
>> > > Sent: Thursday, June 01, 2017 5:24 PM
>> > > To: dev@ariatosca.incubator.apache.org
>> > > Subject: Re: Query on operation inputs
>> > >
>> > > Again, there's a difference between the "required" validation and
>> > > the actual runtime validation. the runtime one cannot be done during
>> > > instantiation phase, which is why there are two separate validations.
>> > >
>> > > I do not know how come Tal's branch (which by now has been merged to
>> > > master) helped fixing your issue, so I might have misunderstood
>> > > something about your problem :)
>> > >
>> > > Ran
>> > >
>> > > On Thu, Jun 1, 2017 at 2:11 PM, D Jayachandran <
>> > > d.jayachand...@ericsson.com>
>> > > wrote:
>> > >
>> > > > Hi Tal,
>> > > >
>> > > > I did test your branch  https://github.com/apache/
>> > > > incubator-ariatosca/tree/ARIA-149-functions-in-operation-configura
>> > > > tion and it seems to have the fix for operation/interface inputs.
>> > > >
>> > > > Regards,
>> > > > DJ
>> > > > -Original Message-
>> > >

Re: Plugin validation

2017-07-11 Thread Tal Liron
That's a good question. :)

Actually, in the service template you do not pin down the plugin, but
rather specify the plugin and a minimum version that you need. During
instantiation there might be a higher version of the plugin that would be
matched. Once instantiated the plugin version is "locked" for that service
instance. If you then install a newer version of the plugin, and create a
new service instance, the second service instance would use the higher
version. All this is done to make sure that the service remains stable.
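
As a sketch of that matching logic (illustration only -- not the actual code,
and the data shapes here are assumptions):

# Illustration only -- not ARIA's actual plugin-matching code.

def match_plugin(installed_plugins, name, minimum_version):
    """Pick the highest installed version of the named plugin that satisfies
    the minimum version requested by the service template."""
    def as_tuple(version):
        return tuple(int(part) for part in version.split('.'))

    candidates = [plugin for plugin in installed_plugins
                  if plugin['name'] == name
                  and as_tuple(plugin['version']) >= as_tuple(minimum_version)]
    if not candidates:
        raise LookupError('no installed plugin satisfies {0} >= {1}'.format(
            name, minimum_version))
    # The matched version is then "locked" for the created service instance.
    return max(candidates, key=lambda plugin: as_tuple(plugin['version']))

installed = [{'name': 'openstack', 'version': '2.0.1'},
             {'name': 'openstack', 'version': '2.2.0'}]
assert match_plugin(installed, 'openstack', '2.0.0')['version'] == '2.2.0'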

It would of course be possible to verify this earlier, but it seemed
unnecessary. What advantage do you see in verifying the existence of the
plugin during the service template parsing stage?


On Tue, Jul 11, 2017 at 12:04 PM, D Jayachandran <
d.jayachand...@ericsson.com> wrote:

> Hi,
>
> With current implementation, the plugin validation in a service template
> happens during the service creation (instantiation of service model).
> Will it be appropriate if we have this plugin validation happening during
> the service-template creation itself ?
>
>
>
> Regards,
> DJ
>


Re: workflow list API

2017-07-11 Thread Tal Liron
Do you mean the CLI command?

We actually have talked about this in the past, and the question was just
how much the "built-in" (normative lifecycle) workflows should be
considered as equivalent to any arbitrary workflow. Can you think of
arguments for or against this thinking?

On Wed, Jul 12, 2017 at 12:52 AM, DeWayne Filppi 
wrote:

> The workflow list API call doesn't return "normative" workflows (like
> "install").  Intentional?
>
> -- DeWayne
>


Re: workflow runner error

2017-07-13 Thread Tal Liron
I wonder if we should provide a better upgrade path here. Perhaps we need
to add a file with some meta information about the version of the database.
We don't necessarily have to provide an automatic upgrade to a new database
format, but at least we can tell the user that the database is out of date and
needs resetting. What do you think?
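
For example, something along these lines (just a sketch -- the file name,
location, and version scheme are assumptions, not a decided design):

# Sketch only -- file name, location and version scheme are assumptions.
import json
import os

SCHEMA_VERSION = 2  # bump whenever the storage models change incompatibly

def check_workdir(workdir):
    """Refuse to use a working directory with an incompatible storage schema."""
    meta_path = os.path.join(workdir, '.meta.json')
    if not os.path.exists(meta_path):
        with open(meta_path, 'w') as f:
            json.dump({'schema_version': SCHEMA_VERSION}, f)
        return
    with open(meta_path) as f:
        found = json.load(f).get('schema_version')
    if found != SCHEMA_VERSION:
        raise RuntimeError(
            'This ARIA working directory was created with an incompatible '
            'storage schema ({0}, expected {1}); please run "aria reset -f"'
            .format(found, SCHEMA_VERSION))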

On Thu, Jul 13, 2017 at 3:40 AM, DeWayne Filppi 
wrote:

> Yes, that absolutely could be the case.  I'll try a reset.
>
> On Wed, Jul 12, 2017 at 3:40 PM, Ran Ziv  wrote:
>
> > Could it be that you first used an older, non-release version of ARIA,
> and
> > then used 0.1.0 without first resetting your ARIA working directory (e.g.
> > "aria reset -f")?
> >
> > The task table should indeed have the said column, as can be seen here:
> > https://github.com/apache/incubator-ariatosca/blob/0.1.
> > 0/aria/modeling/orchestration.py#L410
> >
> >
> > If this is not the cause for your problem, perhaps attach the workflow in
> > question as well, since the problem occurs during the workflow
> compilation
> > phase.
> >
> >
> >
> > On Wed, Jul 12, 2017 at 7:41 PM, DeWayne Filppi 
> > wrote:
> >
> > > When executing this:
> > >
> > >   runner = WorkflowRunner(model_storage, resource_storage,
> > plugin_manager,
> > >   service_id = service_id,
> > >   workflow_name = workflow_name,
> > >   inputs = inputs,
> > >   executor = executor,
> > >   task_max_attempts =  task_max_attempts,
> > >   task_retry_interval = task_retry_interval)
> > >
> > > I get this:
> > >
> > >   File
> > > "/home/vagrant/src/incubator-ariatosca/aria/orchestrator/
> > > workflow_runner.py",
> > > line 102, in __init__
> > > compiler.compile(self._tasks_graph)
> > >   File
> > > "/home/vagrant/src/incubator-ariatosca/aria/orchestrator/
> > > workflows/core/graph_compiler.py",
> > > line 44, in compile
> > > start_stub_type, depends_on, self._start_graph_suffix(task_
> graph.id
> > ),
> > > task_graph.name,
> > >   File
> > > "/home/vagrant/src/incubator-ariatosca/aria/orchestrator/
> > > workflows/core/graph_compiler.py",
> > > line 80, in _create_stub_task
> > > self._ctx.model.task.put(model_task)
> > >   File "/home/vagrant/src/incubator-ariatosca/aria/storage/sql_
> mapi.py",
> > > line 124, in put
> > > self._safe_commit()
> > >   File "/home/vagrant/src/incubator-ariatosca/aria/storage/sql_
> mapi.py",
> > > line 187, in _safe_commit
> > > raise exceptions.StorageError('SQL Storage error:
> > {0}'.format(str(e)))
> > > StorageError: SQL Storage error: (sqlite3.OperationalError) *table task
> > has
> > > no column named interface_name* [SQL: u'INSERT INTO task (name, status,
> > > due_at, started_at, ended_at, attempts_count, function, max_attempts,
> > > retry_interval, ignore_failure, interface_name, operation_name,
> _api_id,
> > > _executor, _context_cls, _stub_type, plugin_fk, execution_fk,
> > > relationship_fk, node_fk) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?,
> ?,
> > ?,
> > > ?, ?, ?, ?, ?, ?)'] [parameters:
> > > ('install.fdf601e4-4d14-4ddf-9e57-f2c09b5293f7', 'pending',
> '2017-07-10
> > > 04:14:20.892673', None, None, 1, None, 1, 0.0, 0, None, None, None,
> > > , None,
> > > 'start_workflow', None, 1, None, None)]
> > >
> > > I'm running 0.1.0.
> > >
> > > -- DeWayne
> > >
> > > --
> > > DeWayne Filppi, Director, Solutions Architect 
> > > --
> > > M: +17145121706 http://cloudify.co @dfilppi
> > > 
> > > 
> > > 
> > > 
> > >
> >
>
>
>
> --
> DeWayne Filppi, Director, Solutions Architect 
> --
> M: +17145121706 http://cloudify.co @dfilppi
> 
> 
> 
> 
>


Re: Contribution for https://issues.apache.org/jira/browse/ARIA-118

2017-07-20 Thread Tal Liron
It's unassigned, so I don't see why not!

On Thu, Jul 20, 2017 at 7:41 AM, D Jayachandran  wrote:

> Hi,
>
> Do you have any plans on working on this JIRA issue ?
> https://issues.apache.org/jira/browse/ARIA-118
> Can we contribute on this ?
>
>
> Regards,
> DJ
>


Re: Openstack plugin

2017-07-21 Thread Tal Liron
ARIA has an adapter that can use Cloudify plugins, and it has been tested
successfully with both OpenStack and AWS so far.

Unfortunately there are no instructions on how to use it. I know just the
right person to write it and will ask him to do so. :)

On Fri, Jul 21, 2017 at 3:29 AM, D Jayachandran  wrote:

> Hi,
>
> Will openstack plugin be available as part of any ARIA release ?
> Is this already been looked upon or in the backlog ?
>
>
> Regards,
> DJ
>


Re: Inputs and Node object context for python and shell scripts

2017-07-24 Thread Tal Liron
I'm pretty sure you can access the inputs via the ctx call. Can anyone
confirm how to do this?

We really need to document ctx usage...

On Mon, Jul 24, 2017 at 5:57 AM, D Jayachandran  wrote:

> Hi,
>
> With current ARIA implementation, the python and shell scripts are being
> executed by the "execution plugin".
>
> The context object and inputs are not passed to python scripts.
> We would like this to be passed to the python scripts.
> For shell scripts at least the inputs need to be passed. The context
> object can be accessed via client.py with the SOCKET URL.
> Kindly let us know if this can be added as a JIRA issue ?
>
>
> Regards,
> DJ
>
>
>
>


Re: TOSCA spec compliance on finding target node

2017-07-25 Thread Tal Liron
It indeed should *not* be required. I just verified it, and you are
correct: a match is not made if only the capability is specified
without a node type/template.

This is a regression, because it used to work correctly.

There is currently work in progress to refactor that mechanism, so I will
add a test case to make sure the regression is fixed.

See my test case and follow progress here:
https://issues.apache.org/jira/browse/ARIA-174
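
For reference, the intended matching is simple enough to sketch (illustration
only, not the parser's actual algorithm): when a requirement names only a
capability type, any node template exposing a capability of that type, or of
a type derived from it, should be a candidate target.

# Illustration only -- not ARIA's actual requirements-and-capabilities matching.

def candidates_by_capability(capability_type, node_templates, parent_of):
    """Node templates exposing a capability of the given type or a derived type."""
    def is_a(type_name):
        while type_name is not None:
            if type_name == capability_type:
                return True
            type_name = parent_of.get(type_name)  # walk up the type hierarchy
        return False

    return [template for template in node_templates
            if any(is_a(c) for c in template.get('capability_types', []))]

parent_of = {'my.capabilities.Upstream': 'tosca.capabilities.Node'}
templates = [{'name': 'router', 'capability_types': ['my.capabilities.Upstream']},
             {'name': 'disk', 'capability_types': ['tosca.capabilities.Attachment']}]
matched = candidates_by_capability('my.capabilities.Upstream', templates, parent_of)
assert [template['name'] for template in matched] == ['router']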

On Tue, Jul 25, 2017 at 3:28 AM, Vaishnavi K.R 
wrote:

> Hi ARIA folks,
>
>
> I had a look at the source code of ARIA on how the target node is
> identified based on the requirement and capability information furnished in
> the node template and its corresponding node type. But I find that only few
> of the combinations are supported i.e., as per the TOSCA spec, in the
> requirement section of a node template, the 'node' option is not mandatory,
> but ARIA expects that to be present.
>
>
> In my use-case, my node template has a requirement on a node which has a
> particular capability. So I just specify the capability type in my node
> template under the requirement section. As ARIA expects the 'node' option
> to be present, this use-case fails.
>
>
> So I wish to get clarified is there any specific reason for mandating the
> 'node' option or if TOSCA spec compliance on this target identification
> based on the capability name or type will be supported in the future
> versions?
>
>
> Thanks,
>
> /Vaish
>


Re: Unique identification of an instance element across services

2017-07-25 Thread Tal Liron
We used to use UUIDs but at some point this was refactored. I tend to agree
with you.

Actually, I would prefer it to be configurable. We have code in place for
ID generation of various types: UUIDs, short UUIDs, and sequentials. All of
them would seem useful to me for various scenarios.

On Tue, Jul 25, 2017 at 3:42 AM, Vaishnavi K.R 
wrote:

> Hi,
>
>
> With my understanding in current ARIA, the node instances are made unique
> by prefixing the node name with the 'id of the service' (i.e. the primary
> key of the service table) as the instances are specific to the service.
>
>
> What will be the name of the node instances if the default instances for
> the node template is '3' and how this will hold good during scale in and
> out?
>
>
> Could UUID be of great help in handling such cases by including that as a
> column in the database tables of the service and the node?
>
> This will wipe out the naming confusions and querying can be made easy
> with the UUIDs.
>
>
> Looking forward to your suggestion.
>
>
> Thanks,
>
> /Vaish
>


Re: Inputs and Node object context for python and shell scripts

2017-07-25 Thread Tal Liron
It should be impossible in TOSCA to create an attribute that was not
declared in the type. Are we allowing users to create any ad hoc attribute?

On Tue, Jul 25, 2017 at 7:33 AM, Maxim Orlov  wrote:

> Indeed runtime_properties became attributes in ARIA . As for the behavior,
> attributes behave just as a dict would (behind the scenes attributes
> translate to a proper Attribute TOSCA model).
> No need to define the attributes on the node-type level, if an attribute
> with that name exists in on the model, the value of that attribute would be
> overridden, if you are creating a whole new attribute, a proper Attribute
> model would be created for you.
>
> as for:
>
> ctx.node.attributes['map']['key'] = 'value'
>
> “map” is a name of an attribute which holds a dict, “key” is a key in that
> dict.
> ​
>
> On Tue, Jul 25, 2017 at 3:07 PM, D Jayachandran <
> d.jayachand...@ericsson.com
> > wrote:
>
> > Hi Max,
> >
> > I see the runtime_properties have been replaced with "attributes" and
> > there has been multiple changes with respect to attribute handling.
> >
> > What do you refer by "map" in your below example, Is that a keyword ?
> > "ctx.node.attributes['map']['key'] = value"
> >
> > Also with runtime_properties plugins were able to update the database
> with
> > new key=value. Can we achieve the same with attributes ?
> > Do we need to define the attributes in the node-types to be able to
> update
> > them by the plugins ?
> >
> > Regards,
> > DJ
> >
> > -Original Message-
> > From: D Jayachandran [mailto:d.jayachand...@ericsson.com]
> > Sent: Tuesday, July 25, 2017 11:23 AM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: RE: Inputs and Node object context for python and shell scripts
> >
> > Hi Max,
> >
> > Yes I can access the context ctx with a python plugin and shell script as
> > you have mentioned.
> > But with python script .py files under implementation, am not sure if the
> > ctx and inputs are passed as "globals". I will re-confirm this.
> > The inputs which I was referring here were the lifecycle operation
> inputs.
> >
> >
> > Regards,
> > DJ
> >
> > -Original Message-
> > From: Maxim Orlov [mailto:ma...@gigaspaces.com]
> > Sent: Tuesday, July 25, 2017 12:14 AM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Inputs and Node object context for python and shell scripts
> >
> > I'm not entirely sure to which inputs you are referring to, but any ctx
> > attribute or method accessible from a python script is accessible form
> any
> > shell script. For example:
> >
> >- "ctx.node.attributes['map']['key']" (in python) is "ctx node
> >attributes map.key" (under bash)
> >- "ctx.node.attributes['map']['key'] = value" (in python) is "ctx
> node
> >attributes map.key value" (under bash)
> >- "ctx.logger.info('some message')" (in python) is "ctx logger info
> >'some message'" (under bash)
> >
> >
> > On Mon, Jul 24, 2017 at 8:47 PM, Tal Liron  wrote:
> >
> > > I'm pretty sure you can access the inputs via the ctx call. Can anyone
> > > confirm how to do this?
> > >
> > > We really need to document ctx usage...
> > >
> > > On Mon, Jul 24, 2017 at 5:57 AM, D Jayachandran <
> > > d.jayachand...@ericsson.com
> > > > wrote:
> > >
> > > > Hi,
> > > >
> > > > With current ARIA implementation, the python and shell scripts are
> > > > being executed by the "execution plugin".
> > > >
> > > > The context object and inputs are not passed to passed to python
> > scripts.
> > > > We would like this to be passed to the python scripts.
> > > > For shell scripts atleast the inputs needs to be passed. The context
> > > > object can be accessed via client.py with the SOCKET URL.
> > > > Kindly let us know if this can be added as a JIRA issue ?
> > > >
> > > >
> > > > Regards,
> > > > DJ
> > > >
> > > >
> > > >
> > > >
> > >
> >
>


Re: Unique identification of an instance element across services

2017-07-25 Thread Tal Liron
Our code for determining the next index is not concurrently safe (no atomic
transaction) so I can see it breaking in concurrent use cases (running two
ARIA commands at the same time).

What is there to gain here in terms of human readability? In my opinion it adds
confusion because it gives a false sense of predictability.

In my opinion the best compromise is to use base57-encoded UUIDs. These are
true UUIDs, but use a mix of upper and lowercase alphanumerics ensuring no
visually ambiguous characters. We have the code for this in utils/uuid.py.

See also: https://github.com/wyattisimo/base57-ruby
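
For illustration, generating such an ID takes only a few lines (a sketch, not
the actual utils/uuid.py -- the exact alphabet there may differ):

# Sketch only -- not the actual aria/utils/uuid.py; the alphabet there may differ.
import uuid

# 57 characters: alphanumerics minus the visually ambiguous 0, O, I, l and 1.
ALPHABET = '23456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz'

def short_uuid():
    """A base57-encoded UUID4: globally unique, shorter, and unambiguous to read."""
    number = uuid.uuid4().int
    encoded = ''
    while number:
        number, digit = divmod(number, len(ALPHABET))
        encoded = ALPHABET[digit] + encoded
    return encoded

print(short_uuid())  # e.g. a ~22-character identifier such as 'UJ6CVLYiRkzfXDGoNmQp2w'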

On Tue, Jul 25, 2017 at 1:28 PM, Maxim Orlov  wrote:

> Actually the refactoring was made so the id would be more user readable.
> The index is determined according to the used indices (it's not just a
> running number). If indeed this poses an issue (or if indeed a uuid is
> easier to recognize, or even use in a query), let's discuss it further...
>
> On Tue, Jul 25, 2017 at 7:35 PM, Tal Liron  wrote:
>
> > We used to use UUIDs but at some point this was refactored. I tend to
> agree
> > with you.
> >
> > Actually, I would prefer it to be configurable. We have code in place for
> > ID generation of various types: UUIDs, short UUIDs, and sequentials. All
> of
> > them would seem useful to me for various scenarios.
> >
> > On Tue, Jul 25, 2017 at 3:42 AM, Vaishnavi K.R <
> vaishnavi@ericsson.com
> > >
> > wrote:
> >
> > > Hi,
> > >
> > >
> > > With my understanding in current ARIA, the node instances are made
> unique
> > > by prefixing the node name with the 'id of the service' (i.e. the
> primary
> > > key of the service table) as the instances are specific to the service.
> > >
> > >
> > > What will be the name of the node instances if the default instances
> for
> > > the node template is '3' and how this will hold good during scale in
> and
> > > out?
> > >
> > >
> > > Could UUID be of great help in handling such cases by including that
> as a
> > > column in the database tables of the service and the node?
> > >
> > > This will wipe out the naming confusions and querying can be made easy
> > > with the UUIDs.
> > >
> > >
> > > Looking forward to your suggestion.
> > >
> > >
> > > Thanks,
> > >
> > > /Vaish
> > >
> >
>


Re: Unique identification of an instance element across services

2017-07-25 Thread Tal Liron
It's not an issue of thread safety -- it could be entirely different
processes, on different machines, accessing the same db. It can be solved
via a SQL transaction, but I feel the whole issue can be avoided by using
UUIDs.

Using the CLI to access specific nodes is not something I see happening a
lot outside of debugging. And when you do debug, you'll probably be copying
and pasting a node ID from the logs, so shorter names do not add much ease
of use.

Again, I would be personally happiest if this was configurable (and
personally think UUIDs should be the reasonable default).

On Tue, Jul 25, 2017 at 2:01 PM, Maxim Orlov  wrote:

> Technically we have no issue with implementing this via uuid or a
> threadsafe solution for the current index implementation.
>
> Getting node data via the cli feels more intuitive using the index based
> ID, rather than the uuid based ID in my opionion.
>
> On Jul 25, 2017 9:49 PM, "Tal Liron"  wrote:
>
> Our code for determining the next index is not concurrently safe (no atomic
> transaction) so I can see it breaking in concurrent use cases (running two
> ARIA commands at the same time).
>
> What is to gain here in terms of human readability? In my opinion it adds
> confusion because it gives a false sense of predictability.
>
> In my opinion the best compromise is to use base57-encoded UUIDs. These are
> true UUIDs, but use a mix of upper and lowercase alphanumerics ensuring no
> visually ambiguous characters. We have the code for this in utils/uuid.py.
>
> See also: https://github.com/wyattisimo/base57-ruby
>
> On Tue, Jul 25, 2017 at 1:28 PM, Maxim Orlov  wrote:
>
> > Actually the refactoring was made so the id would be more user readable.
> > The index is determined according to the used indices (it's not just a
> > running number). If indeed this poses an issue (or if indeed a uuid is
> > easier to recognize, or even use in a query), let's discuss it further...
> >
> > On Tue, Jul 25, 2017 at 7:35 PM, Tal Liron  wrote:
> >
> > > We used to use UUIDs but at some point this was refactored. I tend to
> > agree
> > > with you.
> > >
> > > Actually, I would prefer it to be configurable. We have code in place
> for
> > > ID generation of various types: UUIDs, short UUIDs, and sequentials.
> All
> > of
> > > them would seem useful to me for various scenarios.
> > >
> > > On Tue, Jul 25, 2017 at 3:42 AM, Vaishnavi K.R <
> > vaishnavi@ericsson.com
> > > >
> > > wrote:
> > >
> > > > Hi,
> > > >
> > > >
> > > > With my understanding in current ARIA, the node instances are made
> > unique
> > > > by prefixing the node name with the 'id of the service' (i.e. the
> > primary
> > > > key of the service table) as the instances are specific to the
> service.
> > > >
> > > >
> > > > What will be the name of the node instances if the default instances
> > for
> > > > the node template is '3' and how this will hold good during scale in
> > and
> > > > out?
> > > >
> > > >
> > > > Could UUID be of great help in handling such cases by including that
> > as a
> > > > column in the database tables of the service and the node?
> > > >
> > > > This will wipe out the naming confusions and querying can be made
> easy
> > > > with the UUIDs.
> > > >
> > > >
> > > > Looking forward to your suggestion.
> > > >
> > > >
> > > > Thanks,
> > > >
> > > > /Vaish
> > > >
> > >
> >
>


Re: Unique identification of an instance element across services

2017-07-25 Thread Tal Liron
Let's consider a mass deployment: thousands of service instances of the
same service template, created by many different users with their own ARIA
installations (and databases). In that case, assuming we use sequential
IDs, you would have the same node ID appear many times. You would have to
identify it via the particular user and service instance. If you're
centralizing logs, this can quickly be cumbersome. A UUID will identify it
globally and avoid any confusion.

I think the default should be something that avoids such problems. For
users who insist on shorter IDs, we can allow them to configure it.

On Tue, Jul 25, 2017 at 2:42 PM, DeWayne Filppi  wrote:

> True uuids are seductive, because of their simplicity.  But they are huge,
> overkill, and meaningless.  Imho a structured id is superior if it can be
> made to work without a global locking scheme.
>
> - DeWayne
>
> On Jul 25, 2017 12:11 PM, "Tal Liron"  wrote:
>
> > It's not an issue of thread safety -- it could be entirely different
> > processes, on different machines, accessing the same db. It can be solved
> > via a SQL transaction, but I feel the whole issue can be avoided by using
> > UUIDs.
> >
> > Using the CLI to access specific nodes is not something I see happening a
> > lot outside of debugging. And when you do debug, you'll probably be
> copying
> > and pasting a node ID from the logs, so shorter names do not add much
> ease
> > of use.
> >
> > Again, I would be personally happiest if this was configurable (and
> > personally think UUIDs should be the reasonable default).
> >
> > On Tue, Jul 25, 2017 at 2:01 PM, Maxim Orlov  wrote:
> >
> > > Technically we have no issue with implementing this via uuid or a
> > > threadsafe solution for the current index implementation.
> > >
> > > Getting node data via the cli feels more intuitive using the index
> based
> > > ID, rather than the uuid based ID in my opionion.
> > >
> > > On Jul 25, 2017 9:49 PM, "Tal Liron"  wrote:
> > >
> > > Our code for determining the next index is not concurrently safe (no
> > atomic
> > > transaction) so I can see it breaking in concurrent use cases (running
> > two
> > > ARIA commands at the same time).
> > >
> > > What is to gain here in terms of human readability? In my opinion it
> adds
> > > confusion because it gives a false sense of predictability.
> > >
> > > In my opinion the best compromise is to use base57-encoded UUIDs. These
> > are
> > > true UUIDs, but use a mix of upper and lowercase alphanumerics ensuring
> > no
> > > visually ambiguous characters. We have the code for this in
> > utils/uuid.py.
> > >
> > > See also: https://github.com/wyattisimo/base57-ruby
> > >
> > > On Tue, Jul 25, 2017 at 1:28 PM, Maxim Orlov 
> wrote:
> > >
> > > > Actually the refactoring was made so the id would be more user
> > readable.
> > > > The index is determined according to the used indices (it's not just
> a
> > > > running number). If indeed this poses an issue (or if indeed a uuid
> is
> > > > easier to recognize, or even use in a query), let's discuss it
> > further...
> > > >
> > > > On Tue, Jul 25, 2017 at 7:35 PM, Tal Liron  wrote:
> > > >
> > > > > We used to use UUIDs but at some point this was refactored. I tend
> to
> > > > agree
> > > > > with you.
> > > > >
> > > > > Actually, I would prefer it to be configurable. We have code in
> place
> > > for
> > > > > ID generation of various types: UUIDs, short UUIDs, and
> sequentials.
> > > All
> > > > of
> > > > > them would seem useful to me for various scenarios.
> > > > >
> > > > > On Tue, Jul 25, 2017 at 3:42 AM, Vaishnavi K.R <
> > > > vaishnavi@ericsson.com
> > > > > >
> > > > > wrote:
> > > > >
> > > > > > Hi,
> > > > > >
> > > > > >
> > > > > > With my understanding in current ARIA, the node instances are
> made
> > > > unique
> > > > > > by prefixing the node name with the 'id of the service' (i.e. the
> > > > primary
> > > > > > key of the service table) as the instances are specific to the
> > > service.
> > > > > >
> > > > > >
> > > > > > What will be the name of the node instances if the default
> > instances
> > > > for
> > > > > > the node template is '3' and how this will hold good during scale
> > in
> > > > and
> > > > > > out?
> > > > > >
> > > > > >
> > > > > > Could UUID be of great help in handling such cases by including
> > that
> > > > as a
> > > > > > column in the database tables of the service and the node?
> > > > > >
> > > > > > This will wipe out the naming confusions and querying can be made
> > > easy
> > > > > > with the UUIDs.
> > > > > >
> > > > > >
> > > > > > Looking forward to your suggestion.
> > > > > >
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > > /Vaish
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Unique identification of an instance element across services

2017-07-26 Thread Tal Liron
I just don't see users having to deal much with node IDs outside of simple
hello-world style tutorials, and I'd hate for the first impression that
users get of ARIA to be that it's just a playground for TOSCA. It should
be ready out-of-the-box for the real world.

On Wed, Jul 26, 2017 at 9:13 AM, DeWayne Filppi  wrote:

> Such is their strength.  I'm just advocating using them as a last resort
> because they are user unfriendly and gigantic.
>
> On Tue, Jul 25, 2017 at 12:55 PM, Tal Liron  wrote:
>
> > Let's consider a mass deployment: thousands of service instances of the
> > same service template, created by many different users with their own
> ARIA
> > installations (and databases). In that case, assuming we use sequential
> > IDs, you would have the same node ID appear many times. You would have to
> > identify it via the particular user and service instance. If you're
> > centralizing logs, this can quickly be cumbersome. A UUID will identify
> it
> > globally and avoid any confusion.
> >
> > I think the default should be something that avoids such problems. For
> > users who insist on shorter IDs, we can allow them to configure it.
> >
> > On Tue, Jul 25, 2017 at 2:42 PM, DeWayne Filppi 
> > wrote:
> >
> > > True uuids are seductive, because of their simplicity.  But they are
> > huge,
> > > overkill, and meaningless.  Imho a structured id is superior if it can
> be
> > > made to work without a global locking scheme.
> > >
> > > - DeWayne
> > >
> > > On Jul 25, 2017 12:11 PM, "Tal Liron"  wrote:
> > >
> > > > It's not an issue of thread safety -- it could be entirely different
> > > > processes, on different machines, accessing the same db. It can be
> > solved
> > > > via a SQL transaction, but I feel the whole issue can be avoided by
> > using
> > > > UUIDs.
> > > >
> > > > Using the CLI to access specific nodes is not something I see
> > happening a
> > > > lot outside of debugging. And when you do debug, you'll probably be
> > > copying
> > > > and pasting a node ID from the logs, so shorter names do not add much
> > > ease
> > > > of use.
> > > >
> > > > Again, I would be personally happiest if this was configurable (and
> > > > personally think UUIDs should be the reasonable default).
> > > >
> > > > On Tue, Jul 25, 2017 at 2:01 PM, Maxim Orlov 
> > wrote:
> > > >
> > > > > Technically we have no issue with implementing this via uuid or a
> > > > > threadsafe solution for the current index implementation.
> > > > >
> > > > > Getting node data via the cli feels more intuitive using the index
> > > based
> > > > > ID, rather than the uuid based ID in my opionion.
> > > > >
> > > > > On Jul 25, 2017 9:49 PM, "Tal Liron"  wrote:
> > > > >
> > > > > Our code for determining the next index is not concurrently safe
> (no
> > > > atomic
> > > > > transaction) so I can see it breaking in concurrent use cases
> > (running
> > > > two
> > > > > ARIA commands at the same time).
> > > > >
> > > > > What is to gain here in terms of human readability? In my opinion
> it
> > > adds
> > > > > confusion because it gives a false sense of predictability.
> > > > >
> > > > > In my opinion the best compromise is to use base57-encoded UUIDs.
> > These
> > > > are
> > > > > true UUIDs, but use a mix of upper and lowercase alphanumerics
> > ensuring
> > > > no
> > > > > visually ambiguous characters. We have the code for this in
> > > > utils/uuid.py.
> > > > >
> > > > > See also: https://github.com/wyattisimo/base57-ruby
> > > > >
> > > > > On Tue, Jul 25, 2017 at 1:28 PM, Maxim Orlov 
> > > wrote:
> > > > >
> > > > > > Actually the refactoring was made so the id would be more user
> > > > readable.
> > > > > > The index is determined according to the used indices (it's not
> > just
> > > a
> > > > > > running number). If indeed this poses an issue (or if indeed a
> uuid
> > > is
> > > > > > easier to recognize, or even use in a query), let's discuss it
> > > > further...
>

Re: Service Composition / Substitution Mapping

2017-08-01 Thread Tal Liron
Thanks for the kudos. :)

This topic was discussed on this list a while ago. It's indeed tricky to
get right, because TOSCA leaves a lot of room for the orchestrator to
implement.

I'm thinking of it working something like this:

1. The reqs-and-caps engine by default will always look for satisfiable
capabilities within the currently instantiated service. HOWEVER, if such a
capability is not present, the option is there to look for another
instantiated service that exposes the capabilities in substitution mappings.

2. If we DON'T have another instantiated service, but DO have a service
template that could fit the bill, perhaps we need to instantiate that other
service first. One obvious option is to do this automatically. But I feel
like this can create unforeseen consequences -- for example, some dummy
test template that someone happened to have in the database might get
instantiated by mistake. Also, it might need to trigger multiple install
workflows at once... a big mess. So I suggest that instead we provide a
very detailed validation error here saying that the requirement cannot be
satisfied, HOWEVER there exist service templates A, B, and C that can
substitute for us, so maybe the nice user would like to instantiate them
first? This seems very reasonable to me.

3. If indeed another service satisfies this, a special node is added to the
current service (with the correct type -- but without a service template
foreign key), which serves as a proxy of the other service template. I'm
not sure how we would mark this exactly. We can't use the service_fk field,
because it's still in our current service. So perhaps there's need of a new
fk field, maybe substituted_service_fk?

The above might be "sensible defaults," but it seems to me that users
really need control over this. So I propose to add a new aria.Composition
policy that would let you provide hints for this mechanism. For example,
you might want to "filter" the target service by service template name and
even by metadata in the service template. For example, you might want to
require version 1.2.2 of a specific service, no less.

Those are some quick thoughts. Exactly how such a policy would look will
require more thought...


On Tue, Aug 1, 2017 at 2:20 PM, Avia Efrat  wrote:

> Hello all,
>
> I'm starting to work on a full implementation of substitution_mapping,
> which will lead to the ability of service composition.
>
> For those unacquainted with substitution mapping, here are some quick
> resources:
> *From the spec
> <http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/TOSCA-Simple-Profile-YAML-v1.0.html>,
> sections:*
> 2.10
> <http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/os/TOSCA-Simple-Profile-YAML-v1.0-os.html#_Toc471725208>,
> 2.11
> <http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/os/TOSCA-Simple-Profile-YAML-v1.0-os.html#_Toc471725209>
> (theory and examples)
> 3.8.1, 3.8.2 (grammar)
> *From Tal's amazing lecture on TOSCA
> <https://www.youtube.com/watch?v=6xGmpi--7-A>:*
> 00:00 until 12:30.
>
> If anyone wishes to:
> * ask questions regarding this feature
> * suggest real-life use cases
> * offer their insight about vague parts of the spec
> * anything else about substitution mapping and service composition
> Then please, feel encouraged to leave your feedback!
>


Re: Service Composition / Substitution Mapping

2017-08-01 Thread Tal Liron
I agree! Luckily metadata exists in the 1.0 spec. :)

http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/cos01/TOSCA-Simple-Profile-YAML-v1.0-cos01.html#_Toc379455044

On Tue, Aug 1, 2017 at 7:16 PM, DeWayne Filppi  wrote:

> It occurs that it might be useful to be able to tag service templates with
> arbitrary meta-data.  Perhaps at one level carried forward from a CSAR
> manifest, but also user definable.  This would allow inter-service
> references to be definitive, if desired.  This could be implicitly defined
> as a capability by the orchestrator, but some kind of special requirement
> type(s) would be needed to utilize it.  This way, external repos could be
> used safely and directly without the separate load step.
>
> On Tue, Aug 1, 2017 at 12:43 PM, Tal Liron  wrote:
>
> > Thanks for the kudos. :)
> >
> > This topic was discussed on this list a while ago. It's indeed tricky to
> > get right, because TOSCA leaves a lot of room for the orchestrator to
> > implement.
> >
> > I'm thinking of it working something like this:
> >
> > 1. The reqs-and-caps engine by default will always look for satisfiable
> > capabilities within the currently instantiated service. HOWEVER, if such
> a
> > capability is not present, the option is there to look for another
> > instantiated service that exposes the capabilities in substitution
> > mappings.
> >
> > 2. If we DON'T have another instantiated service, but DO have a service
> > template that could fit the bill, perhaps we need to instantiate that
> other
> > service first. One obvious option is to do this automatically. But I feel
> > like this can create unforeseen consequences -- for example, some dummy
> > test template that someone happened to have in the database might get
> > instantiated by mistake. Also, it might need to trigger multiple install
> > workflows at once... a big mess. So I suggest that instead we provide a
> > very detailed validation error here saying that the requirement cannot be
> > satisfied, HOWEVER there exist service templates A, B, and C that can
> > substitute for us, so maybe the nice user would like to instantiate them
> > first? This seems very reasonable to me.
> >
> > 3. If indeed another service satisfies this, a special node is added to
> the
> > current service (with the correct type -- but without a service template
> > foreign key), which serves as a proxy of the other service template. I'm
> > not sure how we would mark this exactly. We can't use the service_fk
> field,
> > because it's still in our current service. So perhaps there's need of a
> new
> > fk field, maybe substituted_service_fk?
> >
> > The above might be "sensible defaults," but it seems to me that users
> > really need control over this. So I propose to add a new aria.Composition
> > policy that would let you provide hints for this mechanism. For example,
> > you might want to "filter" the target service by service template name
> and
> > even by metadata in the service template. For example, you might want to
> > require version 1.2.2 of a specific service, no less.
> >
> > Those are some quick thoughts. Exactly how such a policy would look with
> > require more thought...
> >
> >
> > On Tue, Aug 1, 2017 at 2:20 PM, Avia Efrat  wrote:
> >
> > > Hello all,
> > >
> > > I'm starting to work on a full implementation of substitution_mapping,
> > > which will lead to the ability of service composition.
> > >
> > > For those unacquainted with substitution mapping, here are some quick
> > > resources:
> > > *From the spec
> > > <http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-
> > > YAML/v1.0/TOSCA-Simple-Profile-YAML-v1.0.html>,
> > > sections:*
> > > 2.10
> > > <http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-
> > > YAML/v1.0/os/TOSCA-Simple-Profile-YAML-v1.0-os.html#_Toc471725208>,
> > > 2.11
> > > <http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-
> > > YAML/v1.0/os/TOSCA-Simple-Profile-YAML-v1.0-os.html#_Toc471725209>
> > > (theory and examples)
> > > 3.8.1, 3.8.2 (grammar)
> > > *From Tal's amazing lecture on TOSCA
> > > <https://www.youtube.com/watch?v=6xGmpi--7-A>:*
> > > 00:00 until 12:30.
> > >
> > > If anyone wishes to:
> > > * ask questions regarding this feature
> > > * suggest real-life use cases
> > > * offer their insight about vague parts of the spec
> > > * anything else about substitution mapping and service composition
> > > Then please, feel encouraged to leave your feedback!
> > >
> >
>


Re: Service Composition / Substitution Mapping

2017-08-02 Thread Tal Liron
Our goal with adding new "conventions" to ARIA, such as policies, is to
always make them optional. The idea is that a plain-vanilla TOSCA template
would "just work" in ARIA via sensible defaults. The extra stuff is there
if you know you are using ARIA and you want to make use of its features.
(The opposite is true, too: we make sure that any additions are still pure
TOSCA and would be parsed validly by other TOSCA parsers.)

On Wed, Aug 2, 2017 at 9:08 AM, DeWayne Filppi  wrote:

> Cool.  Missed that.  That leaves things almost completely wide open from
> the orchestrator side, IOW few predefined keys.  Too few IMHO, but if
> everyone uses ARIA conventions it could work.
>
> On Tue, Aug 1, 2017 at 11:49 PM, Tal Liron  wrote:
>
> > I agree! Luckily metadata exists in the 1.0 spec. :)
> >
> > http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-
> > YAML/v1.0/cos01/TOSCA-Simple-Profile-YAML-v1.0-cos01.html#_Toc379455044
> >
> > On Tue, Aug 1, 2017 at 7:16 PM, DeWayne Filppi 
> > wrote:
> >
> > > It occurs that it might be useful to be able to tag service templates
> > with
> > > arbitrary meta-data.  Perhaps at one level carried forward from a CSAR
> > > manifest, but also user definable.  This would allow inter-service
> > > references to be definitive, if desired.  This could be implicitly
> > defined
> > > as a capability by the orchestrator, but some kind of special
> requirement
> > > type(s) would be needed to utilize it.  This way, external repos could
> be
> > > used safely and directly without the separate load step.
> > >
> > > On Tue, Aug 1, 2017 at 12:43 PM, Tal Liron  wrote:
> > >
> > > > Thanks for the kudos. :)
> > > >
> > > > This topic was discussed on this list a while ago. It's indeed tricky
> > to
> > > > get right, because TOSCA leaves a lot of room for the orchestrator to
> > > > implement.
> > > >
> > > > I'm thinking of it working something like this:
> > > >
> > > > 1. The reqs-and-caps engine by default will always look for
> satisfiable
> > > > capabilities within the currently instantiated service. HOWEVER, if
> > such
> > > a
> > > > capability is not present, the option is there to look for another
> > > > instantiated service that exposes the capabilities in substitution
> > > > mappings.
> > > >
> > > > 2. If we DON'T have another instantiated service, but DO have a
> service
> > > > template that could fit the bill, perhaps we need to instantiate that
> > > other
> > > > service first. One obvious option is to do this automatically. But I
> > feel
> > > > like this can create unforeseen consequences -- for example, some
> dummy
> > > > test template that someone happened to have in the database might get
> > > > instantiated by mistake. Also, it might need to trigger multiple
> > install
> > > > workflows at once... a big mess. So I suggest that instead we
> provide a
> > > > very detailed validation error here saying that the requirement
> cannot
> > be
> > > > satisfied, HOWEVER there exist service templates A, B, and C that can
> > > > substitute for us, so maybe the nice user would like to instantiate
> > them
> > > > first? This seems very reasonable to me.
> > > >
> > > > 3. If indeed another service satisfies this, a special node is added
> to
> > > the
> > > > current service (with the correct type -- but without a service
> > template
> > > > foreign key), which serves as a proxy of the other service template.
> > I'm
> > > > not sure how we would mark this exactly. We can't use the service_fk
> > > field,
> > > > because it's still in our current service. So perhaps there's need
> of a
> > > new
> > > > fk field, maybe substituted_service_fk?
> > > >
> > > > The above might be "sensible defaults," but it seems to me that users
> > > > really need control over this. So I propose to add a new
> > aria.Composition
> > > > policy that would let you provide hints for this mechanism. For
> > example,
> > > > you might want to "filter" the target service by service template
> name
> > > and
> > > > even by metadata in the service template. For example, you might want
> > to
> > > > require version 1.2.2 of a specific servi

Re: Workflow graph, Juju charm and node states

2017-08-02 Thread Tal Liron
>
> 1) Is it possible to send me an example of a custom workflow graph?
>

There is a rather simple one here:

https://github.com/apache/incubator-ariatosca/tree/master/tests/resources/service-templates/tosca-simple-1.0/node-cellar

On the YAML side, you'll see how we use the aria.Workflow policy type to
link the workflow function. In this example, we're deriving the type in
order to add an extra property, but you can also use the aria.Workflow type
directly if you don't need to extend it.

Then, in workflows.py you will see the actual Python function that builds
the task graph. It's a very trivial one in this case: we simply go through
all nodes and try to create a task to execute the operation. Nodes that
don't have the interface/operation will raise an exception, so we just
ignore those and move on.

The task graph API is much richer and allows complex interdependencies. At
this point I don't think we have a good example for more complex workflows.
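
To give a feel for it, a workflow function of that kind looks roughly like the
sketch below. This is written from memory rather than copied from node-cellar,
so the import paths, the OperationTask signature and the exact exception type
may not match the repository -- treat it only as an illustration of the task
graph API:

from aria import workflow
from aria.orchestrator.workflows.api import task

@workflow
def maintenance(ctx, graph, enabled):
    # Go over all nodes and add an operation task for each node that
    # implements the custom interface; nodes that don't will raise when the
    # task is created, and we simply skip them.
    for node in ctx.model.node.iter():
        try:
            graph.add_tasks(task.OperationTask(
                node,
                interface_name='Maintenance',
                operation_name='enable' if enabled else 'disable'))
        except Exception:  # the real code catches a specific ARIA exception here
            pass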

2) When executing an operation associated with a "script", what are the
> main implications or differences between executing a Juju charm vs
> executing a shell script.
>
> Is it something like this?
>
> - a shell script is an artifact that is included in the CSAR and is likely
> executed by the local TOSCA ARIA orchestrator
>
> - a juju charm is not an artifact (therefore not included in the CSAR) and
> is likely executed by a remote Juju service orchestrator
>

It's not just a shell script -- it would be any OS executable. It could be
a shell script, or a Python script, or an .exe, etc.

You are right about Juju not being an artifact. This is why I am arguing
against the current idea on the table for TOSCA 1.2 to support more complex
operations using artifacts: a charm is just not an artifact. It should be,
in my opinion, solved by new operation types.

Until then, ARIA "solves" the situation by allowing for plugins that would
provide their own execution of the operation. We don't have a Juju plugin
right now, but I think it would not be hard at all to create one. It's
something you can contribute!


> 3) Is it possible to create custom node states using TOSCA/ARIA?
>

ARIA does not currently allow for this, because TOSCA doesn't. So we indeed
validate that the state you set is one of the currently supported states.
(Actually there is one extra state that ARIA adds internally, but in any
case a user can't simply add one.)

A workaround is to use a custom node attribute instead. You can then set
this "state" attribute to anything you wish. Operations can set this
attribute using ctx. So in a bash script it would look like this:

ctx node attribute state = my-custom-state

More about using ctx:
https://cwiki.apache.org/confluence/display/ARIATOSCA/Execution+Context

The downside is that you would have to declare this attribute at the node
type level, so it won't work with built-in nodes unless you derive from
them.
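
For completeness, the same workaround from a Python-based operation (in the
style of the plugin examples discussed on this list) could look roughly like
this -- "state" and "in-maintenance" are just illustrative names, assuming the
attribute was declared on the node type as described above:

def enter_maintenance(ctx, **inputs):
    # A plain custom attribute declared on the node type -- not the built-in
    # node state that ARIA manages internally.
    ctx.node.attributes['state'] = 'in-maintenance'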

4) Also related to node states. It looks like the YAML specs has a couple
> of discrepancies for the normative uninstall workflow (section 5.7.4.4.2 in
> YAML 1.0).
>
> IMO the available state is not defined and the diagram at the top of the
> workflow should use the started state instead.
>
> IMO the configured state at the bottom of the workflow diagram should be
> replaced with the initial state instead.
>
> Do you agree?
>

I agree. :) The spec actually only really discusses the install workflow,
not the others, and even with install it has two separate diagrams that
contradict each other.

Changing the node state is currently handled in ARIA via event listeners:
certain kinds of tasks trigger events that cause the state to change. The
mechanism is currently a bit opaque and not very easy for users to extend.
Perhaps other ARIA committers can comment more about it?


Re: Workflow graph, Juju charm and node states

2017-08-02 Thread Tal Liron
I forgot to mention the Custom Workflows wiki:

https://cwiki.apache.org/confluence/display/ARIATOSCA/Custom+Workflows

On Wed, Aug 2, 2017 at 2:11 PM, Tal Liron  wrote:

> 1) Is it possible to send me an example of a custom workflow graph?
>>
>
> There is a rather simple one here:
>
> https://github.com/apache/incubator-ariatosca/tree/master/tests/resources/
> service-templates/tosca-simple-1.0/node-cellar
>
> On the YAML side, you'll see how we use the aria.Workflow policy type to
> link the workflow function. In this example, we're deriving the type in
> order to add an extra property, but you can also use the aria.Workflow type
> directly if you don't need to extend it.
>
> Then, in workflows.py you will see the actual Python function that builds
> the task graph. It's a very trivial one in this case: we simply go through
> all nodes and try to create a task to execute the operation. Nodes that
> don't have the interface/operation will raise an exception, so we just
> ignore those and move on.
>
> The task graph API is much richer and allows complex interdependencies. At
> this point I don't think we have a good example for more complex workflows.
>
> 2) When executing an operation associated with a "script", what are the
>> main implications or differences between executing a Juju charm vs
>> executing a shell script.
>>
>> Is it something like this?
>>
>> - a shell script is an artifact that is included in the CSAR and is
>> likely executed by the local TOSCA ARIA orchestrator
>>
>> - a juju charm is not an artifact (therefore not included in the CSAR)
>> and is likely executed by a remote Juju service orchestrator
>>
>
> It's not just a shell script -- it would be any OS executable. It could be
> a shell script, or a Python script, or an .exe, etc.
>
> You are right about Juju not being an artifact. This is why I am arguing
> against the current idea on the table for TOSCA 1.2 to support more complex
> operations using artifacts: a charm is just not an artifact. It should be,
> in my opinion, solved by new operation types.
>
> Until then, ARIA "solves" the situation by allowing for plugins that would
> provide their own execution of the operation. We don't have a Juju plugin
> right now, but I think it would not be hard at all to create one. It's
> something you can contribute!
>
>
>> 3) Is it possible to create custom node states using TOSCA/ARIA?
>>
>
> ARIA does not currently allow for this, because TOSCA doesn't. So we
> indeed validate that the state you set is one of the currently supported
> states. (Actually there is one extra state that ARIA adds internally, but
> in any case a user can't simply add one.)
>
> A workaround is to use a custom node attribute instead. You can then set
> this "state" attribute to anything you wish. Operations can set this
> attribute using ctx. So in a bash script it would look like this:
>
> ctx node attribute state = my-custom-state
>
> More about using ctx: https://cwiki.apache.org/
> confluence/display/ARIATOSCA/Execution+Context
>
> The downside is that you would have to declare this attribute at the node
> type level, so it won't work with built-in nodes unless you derive from
> them.
>
> 4) Also related to node states. It looks like the YAML specs has a couple
>> of discrepancies for the normative uninstall workflow (section 5.7.4.4.2 in
>> YAML 1.0).
>>
>> IMO the available state is not defined and the diagram at the top of the
>> workflow should use the started state instead.
>>
>> IMO the configured state at the bottom of the workflow diagram should be
>> replaced with the initial state instead.
>>
>> Do you agree?
>>
>
> I agree. :) The spec actually only really discusses the install workflow,
> not the others, and even with install it has two separate diagrams that
> contradict each other.
>
> Changing the node state is currently handled in ARIA via event listeners:
> certain kinds of tasks trigger events that cause the state to change. The
> mechanism is currently a bit opaque and not very easy for users to extend.
> Perhaps other ARIA committers can comment more about it?
>


Re: Inputs and Node object context for python and shell scripts

2017-08-03 Thread Tal Liron
 image:
> > >>   type: string
> > >>   required: true
> > >> port:
> > >>   type: integer
> > >>   required: false
> > >> attributes:
> > >> test:
> > >>   type: string
> > >> interfaces:
> > >> Standard:
> > >> type: tosca.interfaces.node.lifecycle.Standard
> > >> inputs:
> > >> name:
> > >> type: string
> > >> required: true
> > >> image:
> > >> type: string
> > >> required: true
> > >> exposed_port:
> > >> type: integer
> > >> required: false
> > >> target_port:
> > >> type: integer
> > >> required: false
> > >> default: 8080
> > >> target_host:
> > >> type: string
> > >> required: false
> > >> default: None
> > >> labels:
> > >> type: string
> > >> required: false
> > >> default: test
> > >> isService:
> > >> type: boolean
> > >> required: false
> > >> create:
> > >> implementation:
> > >> primary: testplugin > sample.samplemethod
> > >>
> > >>
> > >> PLUGIN
> > >>
> > >> def main():
> > >> """Entry point for the application script"""
> > >> print("Call your main application code here")
> > >>
> > >> def samplemethod(ctx=None, **inputs):
> > >> print "ctx -->",ctx
> > >> print "inputs -->",inputs
> > >> ctx.node.attributes['test'] = "abc"
> > >>
> > >>
> > >>
> > >> Regards,
> > >> DJ
> > >>
> > >>
> > >>
> > >> -Original Message-
> > >> From: Maxim Orlov [mailto:ma...@cloudify.co]
> > >> Sent: Monday, July 31, 2017 10:22 PM
> > >> To: dev@ariatosca.incubator.apache.org
> > >> Subject: Re: Inputs and Node object context for python and shell
> > >> scripts
> > >>
> > >> Interesting, can you describe exactly the scenario? including the
> > >> service template and the operation you are trying to run
> > >>
> > >> On Mon, Jul 31, 2017 at 6:12 PM, D Jayachandran <
> > >> d.jayachand...@ericsson.com
> > >> > wrote:
> > >>
> > >> > Hi,
> > >> >
> > >> > I got the below error when I tried assigning values as like a dict.
> > >> > It seems to fail when it tries to remove the existing value and
> > >> > triggering a change event.
> > >> >
> > >> > ObjectDereferencedError: Can't emit change event for attribute
> > >> > 'Node.attributes' - parent object of type  has been garbage
> > >> > collected
> > >> >
> > >> >
> > >> > Regards,
> > >> > DJ
> > >> >
> > >> > -Original Message-
> > >> > From: Maxim Orlov [mailto:ma...@cloudify.co]
> > >> > Sent: Monday, July 31, 2017 6:08 PM
> > >> > To: dev@ariatosca.incubator.apache.org
> > >> > Subject: Re: Inputs and Node object context for python and shell
> > >> > scripts
> > >> >
> > >> > From within any operation or workflow you don't need to use the
> > ".value"
> > >> > notation. In order to access the attribute use
> > >> > ctx.node.attributes['test'], and in order to assign the attribute
> > >> > just use ctx.node.attributes['test'] = "abc". Using this
> > >> > (hopefully
> > >>

Re: Node types

2017-08-03 Thread Tal Liron
> Q1
> I see normative node types in the YAML spec. Here it makes sense to call
> them normative node types :)
> I see non-normative node types in the YAML spec. Here it makes sense to
> call them non-normative node types :)
> I also know the template author can create its own node types. Should I
> call them custom node types (which are also classified as non-normative)?
>

My understanding is that "normative" means anything that is part of the
Simple Profile. The spec has some example service templates with special
types, but they are not normative in this sense.

I haven't seen the term "custom types" used very much, perhaps it would
make more sense in context. For example, you might be importing various 3rd
party types into your service template that are not part of the Simple
Profile, but they're still not "custom" types exactly because you consider
them as a standard (just not part of the Simple Profile standard).


> Q2
> Can you confirm ARIA will ensure all node types including non-normative
> and custom will support the Standard lifecycle interface since I see the
> Standard interface defined as part of tosca.nodes.Root?
>

Anything inheriting from tosca.nodes.Root gets the standard lifecycle
interface, but even if you have a node type that doesn't inherit from Root
you can still explicitly give it the standard lifecycle interface. ARIA
just checks if it's there.


> Q3
> Do you have any guidelines about when it is best to create a new custom
> node type that is derived from root vs derived from an existing normative
> node type?
> Does ARIA care?
>

ARIA does not care, but it's important to think of future compatibility
with other TOSCA systems. For example, you might feel that
tosca.nodes.Compute isn't designed properly and would rather create your
own mynodes.Compute that you think is much better. ARIA would be fine with
that. However, if you want your service template to work with other
systems, they might not know what to do with that type. The Simple Profile
is intended as a standard baseline. Whether it's a good enough baseline or
not ... well, that will continue to evolve. Currently in the NFV world
there are arguments exactly about this topic.


> Q4
> I see the normative (generic) SoftwareComponent node type that must be
> hosted on a Compute node.
> If  a "specific SW component" must be hosted on a VDU.Compute node
> instead, then I assume it is not possible to derive the specific SW
> component node type from the normative SoftwareComponent node type (?)
>

You're right on the mark here. This is exactly why this argument is so
crucial: the Simple Profile is not just a collection of arbitrary node
types, but is a system of node types, capability types, and data types that
were designed to work together. It's not easy at all to replace one without
replacing the whole thing.

My hope is that a future Simple Profile (perhaps version 2.0?) would
completely overhaul the existing node types and possibly even remove them.
It's far more important in my view to have a collection of standard
capability types: these could then be attached to any node type and
represent an exposed "socket" for a requirement "plug". It shouldn't really
matter what node type you inherit from.

The NFV use case is definitely exposing the limitations of Simple Profile
1.0's strict object-oriented architecture.


Re: Version support for different TOSCA types

2017-08-04 Thread Tal Liron
I think you are referring to TOSCA 1.1, which is on the roadmap but not
supported yet.

You can of course create your own "version" property or attribute for node
types in TOSCA 1.0.

On Fri, Aug 4, 2017 at 7:05 AM, D Jayachandran 
wrote:

> Hi,
>
> The TOSCA spec mentions about the version as a keyname for different type
> definitions(Node, Group, Interface, Artifacts, Data .. .)
> As mentioned in spec this is for the re-use of different types . Does ARIA
> support the version at this stage ? What is the scope of orchestrator when
> it comes to the version support ?
>
>
>
> Regards,
> DJ
>


Re: TOSCA spec compliance on finding target node

2017-08-07 Thread Tal Liron
I think you are talking about requirements? Some of the combinations you
mention are for requirement declarations (at the node type) and some for
requirement assignments (at the node template).

Generally speaking, ARIA intends to support 100% of the TOSCA spec, so feel
free to contribute. If a combination does not work, it is a bug.

There is a known bug about requiring a capability without a template that
is being worked on.

On Mon, Aug 7, 2017 at 12:01 AM, Vaishnavi K.R 
wrote:

> Hi,
>
>
> I tried the following combinations in my service template,
>
>   1.  Type definition with capability type alone but node template having
> any of the following,
>  *   capability type alone
>  *   capability name alone
>  *   node type alone
>  *   node name alone
>  *   capability name and node name
>  *   capability name and node type
>  *   capability type and node type
>  *   capability type and node type
>   2.  Type definition with capability type and node type
>  *   capability type alone
>  *   capability name alone
>  *   node type alone
>  *   node name alone
>  *   capability name and node name
>  *   capability name and node type
>  *   capability type and node type
>  *   capability type and node type
>
> As per the TOSCA specification, the above are valid combinations.
>
> Will ARIA support all the above ?? If so, we wish to contribute.
>
> Looking forward to your comment.
>
>
>
> Thanks,
>
> /Vaish
>
> 
> From: Tal Liron 
> Sent: Tuesday, July 25, 2017 10:03:18 PM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: TOSCA spec compliance on finding target node
>
> It indeed should *not* be required. I just verified that it you are
> correct, and a match is not made if only the capability is specified
> without a node type/template.
>
> This is a regression, because it used to work correctly.
>
> There is currently work in progress to refactor that mechanism, so I will
> add a test case to make sure the regression is fixed.
>
> See my test case and follow progress here:
> https://issues.apache.org/jira/browse/ARIA-174
>
> On Tue, Jul 25, 2017 at 3:28 AM, Vaishnavi K.R  >
> wrote:
>
> > Hi ARIA folks,
> >
> >
> > I had a look at the source code of ARIA on how the target node is
> > identified based on the requirement and capability information furnished
> in
> > the node template and its corresponding node type. But I find that only
> few
> > of the combinations are supported i.e., as per the TOSCA spec, in the
> > requirement section of a node template, the 'node' option is not
> mandatory,
> > but ARIA expects that to be present.
> >
> >
> > In my use-case, my node template has a requirement on a node which has a
> > particular capability. So I just specify the capability type in my node
> > template under the requirement section. As ARIA expects the 'node' option
> > to be present, this use-case fails.
> >
> >
> > So I wish to get clarified is there any specific reason for mandating the
> > 'node' option or if TOSCA spec compliance on this target identification
> > based on the capability name or type will be supported in the future
> > versions?
> >
> >
> > Thanks,
> >
> > /Vaish
> >
>


Re: Version support for different TOSCA types

2017-08-07 Thread Tal Liron
OK, you are referring to the "version" data type, and it is fully supported
in ARIA, which includes:

1. Strict adherence to the (rather odd) specification and its regex
2. Proper support for TOSCA comparative constraints for versions
(greater_than, less_than, etc.)
3. Comparisons also work properly in Python when comparing version instances
(see the sketch below)
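
Purely to illustrate point 3 -- this is not ARIA's implementation, just the
general idea -- comparing two version strings comes down to comparing their
numeric components in order rather than comparing them as text:

def version_key(version_string):
    # Simplified: only handles the numeric <major>.<minor>[.<fix>] part and
    # ignores the optional qualifier/build components of the TOSCA format.
    return tuple(int(part) for part in version_string.split('.')[:3])

assert version_key('1.10.0') > version_key('1.2.2')  # numeric, not lexicographic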

On Mon, Aug 7, 2017 at 12:22 AM, D Jayachandran  wrote:

> Hi Tal,
>
> I was referring to the section 3.2.2  in TOSCA 1.0. It seems the version
> is part of both TOSCA 1.0 and TOSCA 1.1
>
> http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-
> YAML/v1.0/os/TOSCA-Simple-Profile-YAML-v1.0-os.pdf
>
> Regards,
> DJ
> -Original Message-
> From: Tal Liron [mailto:t...@cloudify.co]
> Sent: Friday, August 04, 2017 9:39 PM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Version support for different TOSCA types
>
> I think you are referring to TOSCA 1.1, which is on the roadmap but not
> supported yet.
>
> You can of course create your own "version" property or attribute for node
> types in TOSCA 1.0.
>
> On Fri, Aug 4, 2017 at 7:05 AM, D Jayachandran <
> d.jayachand...@ericsson.com>
> wrote:
>
> > Hi,
> >
> > The TOSCA spec mentions about the version as a keyname for different
> > type definitions(Node, Group, Interface, Artifacts, Data .. .) As
> > mentioned in spec this is for the re-use of different types . Does
> > ARIA support the version at this stage ? What is the scope of
> > orchestrator when it comes to the version support ?
> >
> >
> >
> > Regards,
> > DJ
> >
>


Re: Service Composition / Substitution Mapping

2017-08-07 Thread Tal Liron
Well, this is exactly what policies are for. :)

Again, I think the rule of thumb should be that users put policies in place
*only* if the defaults do not suffice.

On Mon, Aug 7, 2017 at 6:42 PM, Ran Ziv  wrote:

> The sensible defaults Tal's mentioned sound indeed sensible to me.
> I'd also like users to have control over this, though I'm a bit worried
> about us getting too carried away with how arbitrarily we use policies for
> configuring, well, pretty much anything. It might not be a problem right
> now but I'm not certain that will remain the case in the future when the
> number of them grows..
>
>
> On Wed, Aug 2, 2017 at 7:14 PM, Tal Liron  wrote:
>
> > Our goal with adding new "conventions" to ARIA, such as policies, is to
> > always make them optional. The idea is that a plain-vanilla TOSCA
> template
> > would "just work" in ARIA via sensible defaults. The extra stuff is there
> > if you know you are using ARIA and you want to make use of its features.
> > (The opposite is true, too: we make sure that any additions are still
> pure
> > TOSCA and would be parsed validly by other TOSCA parsers.)
> >
> > On Wed, Aug 2, 2017 at 9:08 AM, DeWayne Filppi 
> > wrote:
> >
> > > Cool.  Missed that.  That leaves things almost completely wide open
> from
> > > the orchestrator side, IOW few predefined keys.  Too few IMHO, but if
> > > everyone uses ARIA conventions it could work.
> > >
> > > On Tue, Aug 1, 2017 at 11:49 PM, Tal Liron  wrote:
> > >
> > > > I agree! Luckily metadata exists in the 1.0 spec. :)
> > > >
> > > > http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-
> > > > YAML/v1.0/cos01/TOSCA-Simple-Profile-YAML-v1.0-cos01.html#_
> > Toc379455044
> > > >
> > > > On Tue, Aug 1, 2017 at 7:16 PM, DeWayne Filppi 
> > > > wrote:
> > > >
> > > > > It occurs that it might be useful to be able to tag service
> templates
> > > > with
> > > > > arbitrary meta-data.  Perhaps at one level carried forward from a
> > CSAR
> > > > > manifest, but also user definable.  This would allow inter-service
> > > > > references to be definitive, if desired.  This could be implicitly
> > > > defined
> > > > > as a capability by the orchestrator, but some kind of special
> > > requirement
> > > > > type(s) would be needed to utilize it.  This way, external repos
> > could
> > > be
> > > > > used safely and directly without the separate load step.
> > > > >
> > > > > On Tue, Aug 1, 2017 at 12:43 PM, Tal Liron 
> wrote:
> > > > >
> > > > > > Thanks for the kudos. :)
> > > > > >
> > > > > > This topic was discussed on this list a while ago. It's indeed
> > tricky
> > > > to
> > > > > > get right, because TOSCA leaves a lot of room for the
> orchestrator
> > to
> > > > > > implement.
> > > > > >
> > > > > > I'm thinking of it working something like this:
> > > > > >
> > > > > > 1. The reqs-and-caps engine by default will always look for
> > > satisfiable
> > > > > > capabilities within the currently instantiated service. HOWEVER,
> if
> > > > such
> > > > > a
> > > > > > capability is not present, the option is there to look for
> another
> > > > > > instantiated service that exposes the capabilities in
> substitution
> > > > > > mappings.
> > > > > >
> > > > > > 2. If we DON'T have another instantiated service, but DO have a
> > > service
> > > > > > template that could fit the bill, perhaps we need to instantiate
> > that
> > > > > other
> > > > > > service first. One obvious option is to do this automatically.
> But
> > I
> > > > feel
> > > > > > like this can create unforeseen consequences -- for example, some
> > > dummy
> > > > > > test template that someone happened to have in the database might
> > get
> > > > > > instantiated by mistake. Also, it might need to trigger multiple
> > > > install
> > > > > > workflows at once... a big mess. So I suggest that instead we
> > > provide a
> > > > > > very detailed validation error here saying that the requirement

Re: Version support for different TOSCA types

2017-08-07 Thread Tal Liron
There is no special use of versions in TOSCA 1.0: it is up to you to define
properties or attributes or inputs of the "version" data type and do with
those as you please in your operation implementations. TOSCA 1.1 takes it a
step further and provides standardized metadata to nodes.

It seems that you have a particular use case in mind. Can you elaborate on it
for us? Perhaps we can brainstorm a solution together.

On Tue, Aug 8, 2017 at 1:05 AM, D Jayachandran 
wrote:

> Hi Tal,
>
> I agree version is now looked upon as a "data type" now. But does the
> orchestrator has any scope when it comes to comparing node types or
> templates depending on the version specified ?
> Am more interested in this statement where the version is looked upon as a
> parameter when defining different types "TOSCA supports the concept of
> “reuse” of type definitions, as well as template definitions which could be
> version and change over time. "
>
>
> Regards,
> DJ
> -Original Message-
> From: Tal Liron [mailto:t...@cloudify.co]
> Sent: Monday, August 07, 2017 9:04 PM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Version support for different TOSCA types
>
> OK, you are referring to the "version" data type, and it is fully
> supported in ARIA, which includes:
>
> 1. Strict adherence to the (rather odd) specification and its regex 2.
> Proper support for TOSCA comparative constraints for versions
> (greater_than, lesser_than, etc.) 3. Comparisons also work properly in
> Python when comparing version instances
>
> On Mon, Aug 7, 2017 at 12:22 AM, D Jayachandran <
> d.jayachand...@ericsson.com
> > wrote:
>
> > Hi Tal,
> >
> > I was referring to the section 3.2.2  in TOSCA 1.0. It seems the
> > version is part of both TOSCA 1.0 and TOSCA 1.1
> >
> > http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-
> > YAML/v1.0/os/TOSCA-Simple-Profile-YAML-v1.0-os.pdf
> >
> > Regards,
> > DJ
> > -Original Message-
> > From: Tal Liron [mailto:t...@cloudify.co]
> > Sent: Friday, August 04, 2017 9:39 PM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Version support for different TOSCA types
> >
> > I think you are referring to TOSCA 1.1, which is on the roadmap but
> > not supported yet.
> >
> > You can of course create your own "version" property or attribute for
> > node types in TOSCA 1.0.
> >
> > On Fri, Aug 4, 2017 at 7:05 AM, D Jayachandran <
> > d.jayachand...@ericsson.com>
> > wrote:
> >
> > > Hi,
> > >
> > > The TOSCA spec mentions about the version as a keyname for different
> > > type definitions(Node, Group, Interface, Artifacts, Data .. .) As
> > > mentioned in spec this is for the re-use of different types . Does
> > > ARIA support the version at this stage ? What is the scope of
> > > orchestrator when it comes to the version support ?
> > >
> > >
> > >
> > > Regards,
> > > DJ
> > >
> >
>


Re: Version support for different TOSCA types

2017-08-08 Thread Tal Liron
My understanding has been that this is simply internal metadata, like the
"description" field. There also does not seem any way to access the
version, e.g. by an intrinsic function.

ARIA identifies a type by its name only, not by its version, so for the
same parsing session you cannot have two types of the same name even if
their version is different. If your understanding of TOSCA 1.0 is
different, could you please show me an example of a different use?

On Tue, Aug 8, 2017 at 2:28 AM, D Jayachandran 
wrote:

> Ok Tal, I agree with having a property datatype as version and using them
> in my implementations. But to re-iterate I see the support for version
> metadata for different types ( node, artifact, attribute, capability,
> requirements ) in TOSCA 1.0 profile too. You can check the section starting
> from "3.6.3 Artifact Type".
>
> Example from SPEC:
>
> 3.6.8.2 Grammar
>
> Node Types have the following grammar:
>
> <node_type_name>:
>   derived_from: <parent_node_type_name>
>   version: <version_number>
>   description: <node_type_description>
>   properties:
>     <property_definitions>
>   attributes:
>     <attribute_definitions>
>   requirements:
>     - <requirement_definitions>
>   capabilities:
>     <capability_definitions>
>   interfaces:
>     <interface_definitions>
>   artifacts:
>     <artifact_definitions>
>
> Even am trying to understand the use-case which can be mapped to version
> support with each of types.
>
> Is it like we can have same custom node types with different
> version in my service template ?
> In that case how can the node template choose a particular version
> of the custom node type ?
> Or Is the version only for the template author to track changes
> about custom types over time ?
>
>
> Regards,
> DJ
>
> -Original Message-
> From: Tal Liron [mailto:t...@cloudify.co]
> Sent: Tuesday, August 08, 2017 12:11 PM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Version support for different TOSCA types
>
> There is no special use of versions in TOSCA 1.0: it is up to you to
> define properties or attributes or inputs of the "version" data type and do
> with those as you please in your operation implementations. TOSCA 1.1 takes
> it a step further and provides standardized metadata to nodes.
>
> It seems that you have a particular use case in mind. Can you elaborate it
> to us? Perhaps we can together brainstorm a solution,
>
> On Tue, Aug 8, 2017 at 1:05 AM, D Jayachandran <
> d.jayachand...@ericsson.com>
> wrote:
>
> > Hi Tal,
> >
> > I agree version is now looked upon as a "data type" now. But does the
> > orchestrator has any scope when it comes to comparing node types or
> > templates depending on the version specified ?
> > Am more interested in this statement where the version is looked upon
> > as a parameter when defining different types "TOSCA supports the
> > concept of “reuse” of type definitions, as well as template
> > definitions which could be version and change over time. "
> >
> >
> > Regards,
> > DJ
> > -Original Message-
> > From: Tal Liron [mailto:t...@cloudify.co]
> > Sent: Monday, August 07, 2017 9:04 PM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Version support for different TOSCA types
> >
> > OK, you are referring to the "version" data type, and it is fully
> > supported in ARIA, which includes:
> >
> > 1. Strict adherence to the (rather odd) specification and its regex 2.
> > Proper support for TOSCA comparative constraints for versions
> > (greater_than, lesser_than, etc.) 3. Comparisons also work properly in
> > Python when comparing version instances
> >
> > On Mon, Aug 7, 2017 at 12:22 AM, D Jayachandran <
> > d.jayachand...@ericsson.com
> > > wrote:
> >
> > > Hi Tal,
> > >
> > > I was referring to the section 3.2.2  in TOSCA 1.0. It seems the
> > > version is part of both TOSCA 1.0 and TOSCA 1.1
> > >
> > > http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-
> > > YAML/v1.0/os/TOSCA-Simple-Profile-YAML-v1.0-os.pdf
> > >
> > > Regards,
> > > DJ
> > > -Original Message-
> > > From: Tal Liron [mailto:t...@cloudify.co]
> > > Sent: Friday, August 04, 2017 9:39 PM
> > > To: dev@ariatosca.incubator.apache.org
> > > Subject: Re: Version support for different TOSCA ty

Re: Version support for different TOSCA types

2017-08-08 Thread Tal Liron
The process is simply to open a JIRA ticket. However, it may be better to
first discuss it on this mailing list.

For this enhancement, I think it's important to spec out how such a tool
would look. Let's say the inputs are two templates. What would be the
output?

At least for TOSCA 1.0, I think this type version feature might be of more
use for "meta" tools that might be built on top of ARIA and allow some kind
of search or analysis. For example, a tool to rummage through a huge
directory of templates (or CSAR files) and do a search for node templates
or types that inherit from a specific base type and have a version of X or
similar. I don't know if such tools should be included in ARIA
proper, but they can definitely make use of ARIA as an SDK.
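
To make that concrete, a minimal sketch of such a tool -- deliberately not
using ARIA's APIs, just PyYAML, with invented function and argument names --
could look something like this:

import os
import yaml

def find_derived_types(root_dir, base_type):
    # Walk a directory of service templates and yield (file, type name, version)
    # for every node type that derives from the given base type.
    for dirpath, _, filenames in os.walk(root_dir):
        for filename in filenames:
            if not filename.endswith(('.yaml', '.yml')):
                continue
            with open(os.path.join(dirpath, filename)) as f:
                try:
                    template = yaml.safe_load(f)
                except yaml.YAMLError:
                    continue
            if not isinstance(template, dict):
                continue
            for type_name, definition in (template.get('node_types') or {}).items():
                if isinstance(definition, dict) and definition.get('derived_from') == base_type:
                    yield filename, type_name, definition.get('version')

A real tool would of course use ARIA's parser to resolve imports and type
inheritance properly; this only scans flat files.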

On Tue, Aug 8, 2017 at 11:02 AM, Steve Baillargeon <
steve.baillarg...@ericsson.com> wrote:

> Hi Tal
>
> I see that TOSCA version is optional for all types.
> Clearly it is not used for "identification purpose".
>
> However the definition of the TOSCA version seems to be for a different
> purpose.
> Here is a snippet:
> It is important to provide a reliable, normative means to represent a
> version string which enables the comparison and management of types and
> templates over time.
> Therefore, the TOSCA TC intends to provide a normative version type
> (string) for this purpose in future Working Drafts of this specification.
>
> I also see text about TOSCA version comparison.
>
> It  seems like TOSCA version can be used by the parser to show differences
> between 2 templates or to highlight types with different versions in the
> same template.
> I think this is a useful feature to add to ARIA. What do you think?
>
> In general, what is the process to request an enhancement to ARIA?
>
>
> Cheers
> Steve B
>
>
>
>
> -Original Message-
> From: Tal Liron [mailto:t...@cloudify.co]
> Sent: Tuesday, August 08, 2017 9:07 AM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Version support for different TOSCA types
>
> My understanding has been that this is simply internal metadata, like the
> "description" field. There also does not seem any way to access the
> version, e.g. by an intrinsic function.
>
> ARIA identifies a type by its name only, not by its version, so for the
> same parsing session you cannot have two types of the same name even if
> their version is different. If your understanding for TOSCA 1.0 is
> different, and you please show me an example of different use?
>
> On Tue, Aug 8, 2017 at 2:28 AM, D Jayachandran <
> d.jayachand...@ericsson.com>
> wrote:
>
> > Ok Tal, I agree with having a property datatype as version and using
> > them in my implementations. But to re-iterate I see the support for
> > version metadata for different types ( node, artifact, attribute,
> > capability, requirements ) in TOSCA 1.0 profile too. You can check the
> > section starting from "3.6.3 Artifact Type".
> >
> > Example from SPEC:
> >
> > 3.6.8.2 Grammar
> >
> > Node Types have the following grammar:
> >
> > <node_type_name>:
> >   derived_from: <parent_node_type_name>
> >   version: <version_number>
> >   description: <node_type_description>
> >   properties:
> >     <property_definitions>
> >   attributes:
> >     <attribute_definitions>
> >   requirements:
> >     - <requirement_definitions>
> >   capabilities:
> >     <capability_definitions>
> >   interfaces:
> >     <interface_definitions>
> >   artifacts:
> >     <artifact_definitions>
> >
> > Even am trying to understand the use-case which can be mapped to
> > version support with each of types.
> >
> > Is it like we can have same custom node types with different
> > version in my service template ?
> > In that case how can the node template choose a particular
> > version of the custom node type ?
> > Or Is the version only for the template author to track
> > changes about custom types over time ?
> >
> >
> > Regards,
> > DJ
> >
> > -Original Message-
> > From: Tal Liron [mailto:t...@cloudify.co]
> > Sent: Tuesday, August 08, 2017 12:11 PM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Version support for different TOSCA types
> >
> > There is no special use of versions in TOSCA 1.0

Re: Version support for different TOSCA types

2017-08-09 Thread Tal Liron
Why just to add functionality? (You should use inheritance in that case.) A
new version of a type might just as well remove functionality, or really
change everything, as you said.

In any case, TOSCA 1.0 doesn't tell us what to do with versions, so at this
point we should indeed do nothing in ARIA core.

On Wed, Aug 9, 2017 at 7:19 AM, Avia Efrat  wrote:

> Actually, I can see the version field used as a backwards-compatibility
> mechanism that will enable to keep the same node type, while adding
> functionality. (maybe even modifying functionality, but that is more
> complex).
>
> In general, I agree that the 1.0 spec is not clear about using this version
> field, but the fact that version is consistently mentioned as a field in
> all the types, allows imo for a possible use of it within ARIA. There is
> not such a usage example of the version field in the 1.0 spec, I agree.
>
> On Tue, Aug 8, 2017 at 8:40 PM, Tal Liron  wrote:
>
> > The process is simply to open a JIRA ticket. However, it may be better to
> > first discuss it on this mailing list.
> >
> > For this enhancement, I think it's important to spec out how such a tool
> > would look. Let's say the inputs are two templates. What would be the
> > output?
> >
> > At least for TOSCA 1.0, I think this type version feature might be of
> more
> > use for "meta" tools that might be built on top of ARIA and allow some
> kind
> > of search or analysis. For example, a tool to rummage through a huge
> > directory of templates (or CSAR files) and do a search for node templates
> > or types that inherit from a specific base type and have a version of X
> or
> > X or similar. I don't know if such tools should be included in
> ARIA
> > proper, but they can definitely make use of ARIA as an SDK.
> >
> > On Tue, Aug 8, 2017 at 11:02 AM, Steve Baillargeon <
> > steve.baillarg...@ericsson.com> wrote:
> >
> > > Hi Tal
> > >
> > > I see that TOSCA version is optional for all types.
> > > Clearly it is not used for "identification purpose".
> > >
> > > However the definition of the TOSCA version seems to be for a different
> > > purpose.
> > > Here is a snippet:
> > > It is important to provide a reliable, normative means to represent a
> > > version string which enables the comparison and management of types and
> > > templates over time.
> > > Therefore, the TOSCA TC intends to provide a normative version type
> > > (string) for this purpose in future Working Drafts of this
> specification.
> > >
> > > I also see text about TOSCA version comparison.
> > >
> > > It  seems like TOSCA version can be used by the parser to show
> > differences
> > > between 2 templates or to highlight types with different versions in
> the
> > > same template.
> > > I think this is a useful feature to add to ARIA. What do you think?
> > >
> > > In general, what is the process to request an enhancement to ARIA?
> > >
> > >
> > > Cheers
> > > Steve B
> > >
> > >
> > >
> > >
> > > -Original Message-
> > > From: Tal Liron [mailto:t...@cloudify.co]
> > > Sent: Tuesday, August 08, 2017 9:07 AM
> > > To: dev@ariatosca.incubator.apache.org
> > > Subject: Re: Version support for different TOSCA types
> > >
> > > My understanding has been that this is simply internal metadata, like
> > > the "description" field. There also does not seem to be any way to access
> > > the version, e.g. by an intrinsic function.
> > >
> > > ARIA identifies a type by its name only, not by its version, so for the
> > > same parsing session you cannot have two types of the same name even if
> > > their version is different. If your understanding for TOSCA 1.0 is
> > > different, and you please show me an example of different use?
> > >
> > > On Tue, Aug 8, 2017 at 2:28 AM, D Jayachandran <
> > > d.jayachand...@ericsson.com>
> > > wrote:
> > >
> > > > OK Tal, I agree with having a property datatype as version and using
> > > > it in my implementations. But to reiterate, I see the support for
> > > > version metadata for different types (node, artifact, attribute,
> > > > capability, requirements) in the TOSCA 1.0 profile too. You can check
> > > > the
> > > > section starting from "3.6.3 Artifact Type".
> > > >
> > > > Example from SPEC:
> > > >

Re: Version support for different TOSCA types

2017-08-09 Thread Tal Liron
Yeah, that could be one use case. In the end it's just an extra metadata
field that users can do with as they please. It's hard to think of one
mechanism that would work for all intents.
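
To make it concrete, here is a rough sketch of the kind of thing Avia
describes -- the same type name kept across successive revisions of a types
file, with only the version metadata bumped (the type and property names
below are made up):

node_types:
  cp.nodes.complexdb:                # same type name as in the earlier revision
    derived_from: tosca.nodes.Database
    version: 2.0                     # was 1.0 before the provider added the feature
    properties:
      extra_feature_enabled:         # the new configurable knob
        type: boolean
        required: false
        default: false

ARIA would still see a single type here; the version is just something a
tool (or a human) can read.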

On Wed, Aug 9, 2017 at 8:35 AM, Avia Efrat  wrote:

> First, I agree with both of you about not adding this currently to the ARIA
> core, as it is not certain that it is intended as a part of 1.0.
>
> As for what I meant by backwards-compatibility, I'll try to give an example:
> Suppose there is a cloud-provider database node type: cp.nodes.complexdb,
> which corresponds with the cloud-provider's implementation of this
> database. You interact with this database via API of version 1.
>
> After a new version (version 2) of the cloud provider is released, a new
> configurable feature is added to that database. It still represents the same
> type of database (the cloud provider treats it as the same), just with an
> extra feature.
>
> In such a case, I don't see an immediate need for a new node type, as the
> new database and old database represent the same logical entity. We could
> just create a new node type (like cr.nodes.complexdb.v2), but I think using
> a different version field better describes the differences between the
> version 1 database and version 2 database.
>
>
>
> On Wed, Aug 9, 2017 at 4:21 PM, Thomas Nadeau 
> wrote:
>
> >
> > > On Aug 9, 2017, at 9:12 AM, Tal Liron  wrote:
> > >
> > > Why just to add functionality? (You should use inheritance in that
> > case.) A
> > > new version of a type might as well also remove functionality, or
> really
> > > change everything as you said,
> > >
> > > In any case, TOSCA 1.0 doesn't tell us what to do with versions, so at
> > this
> > > point we should indeed do nothing in ARIA core.
> >
> > I'd agree with that. It's tempting to do an ARIA-specifically supported
> > extension to support this and then offer that as an update to TOSCA, but
> > we should think through that carefully if that is indeed what we want to
> > do.
> >
> > —Tom
> >
> >
> > >
> > > On Wed, Aug 9, 2017 at 7:19 AM, Avia Efrat  wrote:
> > >
> > >> Actually, I can see the version field used as a
> backwards-compatibility
> > >> mechanism that will enable to keep the same node type, while adding
> > >> functionality. (maybe even modifying functionality, but that is more
> > >> complex).
> > >>
> > >> In general, I agree that the 1.0 spec is not clear about using this
> > version
> > >> field, but the fact that version is consistently mentioned as a field
> in
> > >> all the types, allows imo for a possible use of it within ARIA. There
> is
> > >> not such a usage example of the version field in the 1.0 spec, I
> agree.
> > >>
> > >> On Tue, Aug 8, 2017 at 8:40 PM, Tal Liron  wrote:
> > >>
> > >>> The process is simply to open a JIRA ticket. However, it may be
> better
> > to
> > >>> first discuss it on this mailing list.
> > >>>
> > >>> For this enhancement, I think it's important to spec out how such a
> > tool
> > >>> would look. Let's say the inputs are two templates. What would be the
> > >>> output?
> > >>>
> > >>> At least for TOSCA 1.0, I think this type version feature might be of
> > >> more
> > >>> use for "meta" tools that might be built on top of ARIA and allow
> some
> > >> kind
> > >>> of search or analysis. For example, a tool to rummage through a huge
> > >>> directory of templates (or CSAR files) and do a search for node
> > templates
> > >>> or types that inherit from a specific base type and have a version
> of X
> > >> or
> > >>> X or similar. I don't know if such tools should be included in
> > >> ARIA
> > >>> proper, but they can definitely make use of ARIA as an SDK.
> > >>>
> > >>> On Tue, Aug 8, 2017 at 11:02 AM, Steve Baillargeon <
> > >>> steve.baillarg...@ericsson.com> wrote:
> > >>>
> > >>>> Hi Tal
> > >>>>
> > >>>> I see that TOSCA version is optional for all types.
> > >>>> Clearly it is not used for "identification purpose".
> > >>>>
> > >>>> However the definition of the TOSCA ver

Re: Is it possible to inspect a model?

2017-08-10 Thread Tal Liron
The "-f" flag ("--full") for "aria service-template show" and "aria service
show" will do just that. You can also output the same in --json and --yaml.

Currently the "-f" flag doesn't work on individual nodes, but there is an
open ticket to add that.

On Thu, Aug 10, 2017 at 3:38 AM, chbndrhnns  wrote:

> Hey,
>
> Is it possible to use the ariatosca parser, validator and model builder
> only so that I can take a look at the model that is generated from a TOSCA
> file?
>
> Bye,
> Jo
>


Re: Service Composition / Substitution Mapping

2017-08-10 Thread Tal Liron
Thanks for the feedback, DJ. What I wrote was just ideas for now; we're
still in the investigation phase and haven't implemented anything yet.

1. The reqs-and-caps engine by default will always look for satisfiable
> capabilities within the currently instantiated service. HOWEVER, if such a
> capability is not present, the option is there to look for another
> instantiated service that exposes the capabilities in substitution mappings.
> [DJ] - When you say the option is there to look for another
> instantiated service, is this an available option in current ARIA?
>  - When you say instantiated service, do you mean the service model or
> the real-world service?
>  - I think your 3rd point is related to this
> service-level mapping. When you say a special node would be added to the
> current service, will that node be unique across service A and service
> B (the instantiated service)? Will a life-cycle operation be called for
> that node which is added to service A as part of the workflow execution?
>

I don't think it's reasonable for ARIA to work with a "real world service"
if it hasn't been modeled yet in some way. I do have a dream of someday
having such a tool: take an existing cloud service and produce a basic
TOSCA service *and* service template for it. But for now I think it's
reasonable to expect the user to at least model the whole "real world"
service as some kind of logical node.

The questions you ask about lifecycle operations are the right ones. In my
opinion, the new "composed service" should be a service instance in every
respect, so workflows would indeed happen on all nodes, including the
sub-services that were added. Otherwise, why do composition at all? The
whole point is to combine everything together.

An interesting question is what happens to nodes after they get "composed"
into another service. From what I say above, it means their service IDs
should change, and in fact the original service would disappear. (By the
way, things like this are another reason I prefer UUIDs for nodes rather
than have them be sequential within the service: nodes should be allowed to
move around between services.)

The only way you would know that these added nodes came from another
service template is by following their node_template_fk to their
service_template_fk. Otherwise they are all in the same service.


> 2. If we DON'T have another instantiated service, but DO have a service
> template that could fit the bill, perhaps we need to instantiate that other
> service first. One obvious option is to do this automatically. But I feel
> like this can create unforeseen consequences -- for example, some dummy
> test template that someone happened to have in the database might get
> instantiated by mistake. Also, it might need to trigger multiple install
> workflows at once... a big mess. So I suggest that instead we provide a
> very detailed validation error here saying that the requirement cannot be
> satisfied, HOWEVER there exist service templates A, B, and C that can
> substitute for us, so maybe the nice user would like to instantiate them
> first? This seems very reasonable to me.
> [DJ] - Just to understand this better, let us assume we have
> service-template A and service-template B. I am trying to create a service A
> from service-template A. One of the nodes is abstract, and its capability is
> provided by a node from service-template B.
> - Now I assume service A will have the node contributed by
> service-template B as well as its own nodes. With this approach I don't see a
> need for multiple workflows.
> - Or is it that service B would also be created
> automatically? In that case, how would the workflow be called for service B?
> - As you stated, we have the challenge, with multiple
> service-templates providing the same capabilities, of which one to use.
> - Finally, I am not getting the exact meaning of your last
> statement: "HOWEVER there exist service templates A, B, and C that
> can substitute for us, so maybe the nice user would like to instantiate
> them first? This seems very reasonable to me". I assume you are talking about
> having a provision where the user can specify the service-template to be
> used.
>

I was thinking out loud there, comparing the two options. Thinking more
about it now, I think it's fine to run a normal "install" workflow on the
big composed service (as I said, it should be treated as a regular service
instance), so I see no problem with instantiating one big service
automatically for the user. No need for the user to instantiate the
sub-service first.

HOWEVER, during instantiation the reqs-and-caps engine should be careful to
consider each sub-service a separate "zone" for satisfying capabilities.
You don't want a requirement in one service template grabbing a capability
of a node in the other. The service is composed, but the service templates
are still separate.

Podcast about the state of NFV in TOSCA

2017-08-10 Thread Tal Liron
With Michael Brenner, Luc Boutier, and yours truly.

I specifically delve into some of the challenges we face in ARIA.

The podcast can be found here:

http://cloudify.co/2017/08/03/open-source-podcast-episode-3-an-honest-discussion-on-tosca.html
https://www.youtube.com/watch?v=gj6CUt2IF-I
https://soundcloud.com/theopensourcepodcast/the-open-source-podcast-episode-3-an-honest-discussion-on-tosca


Re: Import definition for top-level service template

2017-08-10 Thread Tal Liron
I do not think you need to import the substituting template: the whole
point is that it should be left to the orchestrator to "stitch" things
together.

However, I do think you at least need the substituted node type to be
known, so that the parser can validate the mappings. So it might make sense
to have a "types.yaml" or similar that you import both at the server and
client templates.

ARIA's validation errors in this case would be 1) if it doesn't recognize
the node type name, 2) if the mappings don't refer to declared
reqs-and-caps, and 3) if the substituted reqs-and-caps don't match in
types. (Note that you are allowed to substitute a child type, because the
polymorphic contract is still in place.)
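
To illustrate (a rough sketch only, with made-up names): a shared types.yaml
declares the abstract type, and both the substituting ("server") template and
the top-level ("client") template import it:

# types.yaml -- imported by both templates
node_types:
  my.nodes.Database:
    derived_from: tosca.nodes.Root
    capabilities:
      database_endpoint: tosca.capabilities.Endpoint.Database

# substituting ("server") template
imports:
  - types.yaml
topology_template:
  substitution_mappings:
    node_type: my.nodes.Database
    capabilities:
      database_endpoint: [ db, database_endpoint ]
  node_templates:
    db:
      type: tosca.nodes.Database

# top-level ("client") template
imports:
  - types.yaml
topology_template:
  node_templates:
    my_db:
      type: my.nodes.Database  # abstract here; the orchestrator substitutes it

The validations I listed above would then be checked against my.nodes.Database
as declared in types.yaml.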

Maybe point us exactly to the section in the spec that is confusing here
and we can try to understand it together.

On Thu, Aug 10, 2017 at 1:35 PM, Steve Baillargeon <
steve.baillarg...@ericsson.com> wrote:

> Quick questions about YAML 1.0 and substitution mappings
> In section 2.10.2, where it shows the definition of the top-level service
> template, the example does not show the import definitions.
> Can you confirm the top-level service template must import the
> substituting service template(s)?
> Will ARIA currently generate a validation error if the import definition
> is missing in such case?
> Will ARIA currently generate a validation error if a valid import
> definition is provided but does not contain the definitions for the
> abstract node?
> The text refers to a "substitutable directive" in the top-level service
> template. Is Datapoint_enpoint=db the "substitutable directive"?
> Regards
> Steve B
>


Re: Service Composition / Substitution Mapping

2017-08-11 Thread Tal Liron
Well, DJ, it's all just opinion at this stage and feedback is welcome!

Option 1:
> To look at satisfying nodes present in a substituting service,
> have these nodes become part of the newly created service and remove the
> substituting service (the nodes get different IDs; also, we are very much in
> favor of UUIDs).
> With this approach I guess the substituting service should not
> have any associated workflows running. If a workflow execution is
> already triggered, I hope this service will not be considered for
> substitution.
> I hope this is the correct approach when we are looking at a
> service for the substitution.
>

Yes, this is a good idea. It would be easy to discover this according to
the stored node state -- just make sure that all nodes are in a stable
state before composition.

This leads to a general issue: the substituting services must be validated
in some way before composition begins.

Also, let's all start using TOSCA terminology here: the containing service
is called the "top-level service," and the contained services are called
"substituting services."

Also, we keep using trivial examples, but it's definitely possible for a
top-level service to require several substituting services at the same
time. I can definitely see such things happening in NFV with even simple
service chains. Basically every VNF could be a substituting service.

So, actually one of the validations would be to make sure you do not create
circular composition: if the top-level also has substitution mappings, you
need to make sure that one of the substituting ones doesn't require it. :)
Not a very likely situation, but it would lead to failure.



> Option 2:
> While creating a service, look at the reqs-and-caps at the
> service-template level and create a service including the nodes provided by
> the substituting service-template. With this approach there would not be
> any service created from the service-template which is providing the
> substitution functionality. The service-template would remain the same, but
> the service would simply gain the extra nodes.
>
> Are you considering both options 1 & 2 for the implementation? If not,
> which one do you feel should take priority? I see option 2 at this stage
> as the best possible approach.
> Also, could you please let me know a tentative time for this feature to be
> available?
>

I think both options 1 and 2 make sense and are useful, and actually one is
a subset of the other.

With option #2 (substituting a service template), it means new nodes are
instantiated and the composed service would include all nodes. So, an
"install" workflow would install everything at once. In this case we do
need to fix the lifecycle workflows to be "boundary aware," so that
workflows of substituting service nodes are part of their own task graph. I
think that possibly using a logical proxy node in between might solve this
situation automatically.

With option #1 (substituting a service) the substituting nodes might
already be installed. Or not (they might have been instantiated but still
not installed). So, the lifecycle workflows should only work on the nodes
that have not yet been installed.

The point is that we just need to beef up the lifecycle workflows to
properly work with boundaries and with a mix of nodes in different states.
If they can do that, then they can handle any kind of service composition,
whether it's option #1 or option #2.

I don't think we can provide a timeline yet, but I will say that we are in
the research phase and may have a POC soon. Avia is in charge of this JIRA,
so I will let him chime in with the current state of things.


Re: Service Composition / Substitution Mapping

2017-08-11 Thread Tal Liron
OK, let me try to organize this differently. Three potential conceptions
here:

1) A "fancy include," as you say. All that would happen here is that the
TOSCA parser would automatically find the service template to include from
some kind of repository of recognized service templates, and just include
that. The language in the TOSCA spec suggests that this is not the
intention: it is something that happens at the "orchestrator," not the
"parser."

2) Static service composition. This happens not at the parsing stage, but
rather the instantiation stage, where reqs-and-caps happens. I think this
is what is intended: substitution mapping is specifically about mapping
reqs-and-caps. And this is also where things like scalability and placement
happen: think of a requirement matching a capability, but that capability
not having enough capacity. So, the node might be able to scale out: in the
case of substitution, this would mean duplicating an entire service
instance. My understanding is that this is what is intended by TOSCA, and
it's entirely within the scope of what we can do. We've recently just
refactored the instantiation phase into a new "topology" module where
exactly all this logic is concentrated. So think of it this way -- ARIA has
three parts: "parser," "topology manager," and "workflow engine"
("orchestrator").

(I think I might have confused something here a bit when mentioning a
"logical proxy node." I do not mean an actual piece of proxy software
running on a machine somewhere. I mean just a data point in the
ARIA-generated topology that can be used as a barrier of sorts when
constructing the task graph -- because the task graph follows
relationships, the edges of the topology. It could be that we discover in
our POC that this is not needed, because actually the substitution node is
already part of our topology data model and we might be able to easily take
that into account when generating the task graph.)

3) Live service composition. I think this is a fantasy of some people: that
ARIA would be able to take existing, "running" service chains and run
workflows with them. I do think this is a solvable problem, but not via
substitution mapping per se. A solution could involve deployment of a proxy
service (which could actually be encapsulated in a substitution mapping,
but doesn't have to be), or configuring specialized virtual ports via an
SDN controller, or via logical proxy nodes created via an inspection tool,
etc. I cannot believe that there is a one-size-fits-all solution to this
problem. The dream of NFV might be to have everything connect to each other
like LEGO blocks, but there are far too many protocols, configuration, and
security standards. I think point #2, what I called "static service
composition," is a realistic compromise and within the scope of what TOSCA
promises.


On Fri, Aug 11, 2017 at 12:08 PM, DeWayne Filppi 
wrote:

> To my eyes, the spec doesn't speak of runtime service substitution, but
> parsetime template composition.  IOW, substitution mapping is a fancy
> "include", or the equivalent of an interface definition.  Is it understood
> by the ARIA team that this includes proxying of running services?  IOW, if
> my template requires a database service that my template does *not* want to
> control the lifecycle of, I can "substitution map" an instance of a
> template (i.e. a running service)?  This would be a lovely feature, but
> it's not really a "substitution map", rather more of a "service proxy" (as
> implemented as a plugin in Cloudify).   Just trying to clarify.  Maybe the
> community thinks that "substitution map" as something that occurs beyond
> parsetime, or should.
>
> On Fri, Aug 11, 2017 at 9:52 AM, Tal Liron  wrote:
>
> > Well, DJ, it's all just opinion at this stage and feedback is welcome!
> >
> > Option 1:
> > > To look at satisfying nodes present in a substituting service,
> > > Have these nodes part of the newly created service and remove the
> > > substituting service(nodes with different ID's. Also we are very much
> in
> > > favor of  UUID )
> > > With this approach I guess the substituting service should not
> > > have any associated workflows running. If at all an workflow execution
> is
> > > already triggered I hope this service will not be considered for
> > > substitution.
> > > I hope this is the correct approach when we are looking at a
> > > service for the substitution
> > >
> >
> > Yes, this is a good idea. It would be easy to discover this according to
> > the stored node state -- just make sure that all nodes are in a stable
> > 

Re: Service Composition / Substitution Mapping

2017-08-11 Thread Tal Liron
In my opinion, the new composite service should indeed be a single service,
so the ARIA CLI will show it as one (if a substituting service already
existed, it would be "dissolved" into the new top-level service). The
composition will show its traces if you look more deeply, because you'll
see that some node templates come from a different service template.
Perhaps our CLI can detect this and tell the user which service templates
were used to create the nodes in current service.

The rest of what you described I think is not related directly to service
composition, but resource realization. The way we currently handle this is
by being very explicit: you have to use the derived Compute types included
in our AWS or Openstack plugins, for example.

I think we should still keep this, because sometimes you want to be very
explicit (and use specific features of the platform), but I actually have
some ideas for multi-VIM support that would work differently: the idea is
to use the basic Compute type, with detailed properties in its Container
and OperatingSystem capabilities. There would then be a policy, perhaps
aria.Realization, that would hint at what kind of Compute nodes you are
looking for (you can apply the policy to specific nodes or groups, or
generally). It might be possible to still have strong platform-specific
feature support here, perhaps by implementing a general string key-value
dict that could include hints for the specific plugin to use.
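
As a very rough sketch of that idea (aria.Realization does not exist today,
and the property names below are made up):

node_templates:
  app_host:
    type: tosca.nodes.Compute          # the plain, platform-neutral type
    capabilities:
      host:
        properties:
          num_cpus: 2
          mem_size: 4 GB
      os:
        properties:
          type: linux
          distribution: ubuntu

policies:
  - realization:
      type: aria.Realization           # hypothetical policy type
      targets: [ app_host ]
      properties:
        platform: openstack            # which plugin should realize the node
        hints:                         # free-form key-value hints for that plugin
          availability_zone: nova

Plugins would be free to interpret the hints, and a template without such a
policy would still work with whatever sensible defaults the plugin has.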

Anyway, just ideas at this point...


On Fri, Aug 11, 2017 at 2:25 PM, DeWayne Filppi  wrote:

> Good stuff.  Obviously a "fancy include" was the wrong metaphor.  The
> lifecycles are linked though.  When I create the referencing service (aria
> service create), ARIA will match the other service template, and run create
> service on it, and then continue on.  Uninstall of the referencer will
> destroy the referenced service.  After such an instantiation, would listing
> all services produce two (perhaps with some parent indicator) or one?  From
> what I've seen, reqs and caps don't have the equivalent of a lifecycle
> interface associated.  Maybe I missed it.  It is implicit in the
> orchestrator to match stuff up at service creation time, I'm assuming, and
> using plugins somehow?  The sort of flagship scenario is matching cloud
> compute resources without specifying an image.  Since discovering whether a
> particular cloud has a given capability, I guessing a plugin for that cloud
> has to search cloud inventory and perhaps quota to provide info for the
> match.  Since plugins (at least the Cloudify variety) tend to be triggered
> due to operations that they connect to lifecycle methods, and there is no
> "materialize_capabilities" interface that I noticed, some other kind of
> magic must be performed.  Maybe this is an orchestrator-specific detail
> outside of TOSCA.
>
> DeWayne
>
> On Fri, Aug 11, 2017 at 10:57 AM, Tal Liron  wrote:
>
> > OK, let me try to organize this differently. Three potential conceptions
> > here:
> >
> > 1) A "fancy include," as you say. All that would happen here is that the
> > TOSCA parser would automatically find the service template to include
> from
> > some kind of repository of recognized service templates, and just include
> > that. The language in the TOSCA spec suggests that this is not the
> > intention: it is something that happens at the "orchestrator," not the
> > "parser."
> >
> > 2) Static service composition. This happens not at the parsing stage, but
> > rather the instantiation stage, where reqs-and-caps happens. I think this
> > is what is intended: substitution mapping is specifically about mapping
> > reqs-and-caps. And this is also where things like scalability and
> placement
> > happen: think of a requirement matching a capability, but that capability
> > not having enough capacity. So, the node might be able to scale out: in
> the
> > case of substitution, this would mean duplicating an entire service
> > instance. My understanding is that this is what is intended by TOSCA, and
> > it's entirely within the scope of what we can do. We've recently just
> > refactored the instantiation phase into a new "topology" module where
> > exactly all this logic is concentrated. So think of it this way -- ARIA
> has
> > three parts: "parser," "topology manager," and "workflow engine"
> > ("orchestrator").
> >
> > (I think I might have confused some thing here a bit when mentioning a
> > "logical proxy node." I do not mean an actual piece of proxy software
> > running on a machine somewhere. I mean just a data point in the
> > AR

Re: Service Composition / Substitution Mapping

2017-08-11 Thread Tal Liron
OK, that's a whole different can of worms. :)

TOSCA's Compute capabilities (Container and OperatingSystem) are explicit.
You specify which OS you want, how much RAM you want, how many CPUs, etc.
ARIA's explicit node types (for example, the AWS Compute node) are likewise
explicit. So there is not querying here: the plugin will attempt to spin up
exactly the virtual machine you asked for. If it fails, the workflow will
fail.

This is not good enough, I think, for real world scenarios. There are two
possible solutions:

1) Support ranges or fallbacks. So instead of saying "I need 4 GB of RAM"
you can say "I would like 4 GB of RAM, but 2 GB would also be OK." There's
no easy way to do this now without totally changing how the capability
types are designed. But, it may be possible to override this via policies.
So, the capabilities would perhaps specify the minimal requirements, while
policies specify preferences. Some aspects of this were discussed in the
OPEN-O project. DeWayne, has any of this survived in ONAP, or have we not
reached that point in the discussion yet?

2) Incorporate this into the bigger topic of resource orchestration. This a
huge challenge for the industry. The problem field contains not just "I
need X amount of RAM" but also "I want all my virtual machines and
containers running on the same high-performance network backend and have
these two nodes on the same bare-metal machine or at least in the same data
center rack with a NUMA interconnect, and I also don't want all this to
cost more than $100 per hour." That's not crazy: these are real world
requirements for high-performance VNFs and service chaining. Resource
orchestration requires a full map of what is available in the data centers,
a negotiation-based algorithm for properly allocating and placing
resources, connection to billing services, etc. Of course resource
orchestration is not within the scope of ARIA, but it would be great for
ARIA to have plugins for them (and TOSCA be able to model resource
requirement policies) when these become available.



On Fri, Aug 11, 2017 at 3:02 PM, DeWayne Filppi  wrote:

> For the "resource realization" part, I was not even considering
> multicloud/vim.   I was considering single cloud even outside of
> composition.  Just reqs and caps.  If my node "requires" a compute node
> with Suse Linux version X with a minimum of 4GB RAM, how does the
> orchestrator match that without querying the cloud via the plugin?   If it
> does query it, which it seems it must in addition to the implicit quota
> query, how is this done?  TOSCA seems to not really care, which is fine and
> perhaps a good idea.  But ARIA has to care.
>
> DeWayne
>
> On Fri, Aug 11, 2017 at 12:34 PM, Tal Liron  wrote:
>
> > In my opinion, the new composite service should indeed be a single
> service,
> > so the ARIA CLI will show it as one (if a substituting service already
> > existed, it would be "dissolved" into the new top-level service). The
> > composition will show its traces if you look more deeply, because you'll
> > see that some node templates come from a different service template.
> > Perhaps our CLI can detect this and tell the user which service templates
> > were used to create the nodes in current service.
> >
> > The rest of what you described I think is not related directly to service
> > composition, but resource realization. The way we currently handle this
> is
> > by being very explicit: you have to use the derived Compute types
> included
> > in our AWS or Openstack plugins, for example.
> >
> > I think we should still keep this, because sometimes you want to be very
> > explicit (and use specific features of the platform), but I actually have
> > some ideas for multi-VIM support that would work differently: the idea is
> > to use the basic Compute type, with detailed properties in its Container
> > and OperatingSystem capabilities. There would then be a policy, perhaps
> > aria.Realization, that would hint at what kind of Compute nodes you are
> > looking for (you can apply the policy to specific nodes or groups, or
> > generally). It might be possible to still have strong platform-specific
> > feature support here, perhaps by implementing a general string key-value
> > dict that could include hints for the specific plugin to use.
> >
> > Anyway, just ideas at this point...
> >
> >
> > On Fri, Aug 11, 2017 at 2:25 PM, DeWayne Filppi 
> > wrote:
> >
> > > Good stuff.  Obviously a "fancy include" was the wrong metaphor.  The
> > > lifecycles are linked though.  When a create the referencing service
> > (aria
> > > service create), ARIA will m

Re: Service Composition / Substitution Mapping

2017-08-11 Thread Tal Liron
You are correct -- to participate in this "multi-VIM" scenario, the
Openstack plugin would have to know how to translate the TOSCA properties
to a flavor ID. This could all be done in 100% TOSCA via policies (say, an
aria.Openstack).

Doing this automatically might not be a good idea, or even necessary. Worst
case is you get a validation error if the ARIA plugin can't find a flavor
in the table to match your requirements, in which case you can go and
manually find the right ID and add it to the table.
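
Just to sketch what such a table could look like in pure TOSCA (aria.Openstack
and the property names here are hypothetical):

policies:
  - openstack:
      type: aria.Openstack              # hypothetical policy type
      properties:
        flavor_table:                   # maintained by the user; the plugin would consult it
          - { flavor_id: "2", num_cpus: 1, mem_size: 2 GB }
          - { flavor_id: "3", num_cpus: 2, mem_size: 4 GB }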

And I agree about being fine with imperfection: the rule of thumb for our
work is always to allow for sensible defaults even if no explicit policy is
given.

Anyway, we've gone way off the topic of this thread. We can talk about it
more once it comes closer to an implementation.

On Fri, Aug 11, 2017 at 3:52 PM, DeWayne Filppi  wrote:

> Interesting.  Take Openstack (please).  If you model a compute
> OS as explicitly as you like, there still has to be a "match" to an
> Openstack image ID.  Or are you saying you must supply the image ID for
> OSs?   Likewise, you can't supply RAM and CPUs without a flavor ID.
> Openstack does allow for custom flavors, but I doubt the plugin is doing
> that.  Much better to have a "caps-init" interface in some low down base
> type that can be triggered at service creation to support reqs/caps
> (IMHO).  Then the plugin can verify whether the service can be constructed
> solution" is a nightmare of complexity, but having a plugin scan the
> available flavors to make sure a requirement can be met doesn't seem that
> tough.  If TOSCA provided a formal lifecycle interface for it, then
> orchestrators or just plugins could determine how tricky they wanted to
> be.  IOW, let not the perfect be the enemy of the good.
>
> DeWayne
>
>
> On Fri, Aug 11, 2017 at 1:26 PM, Tal Liron  wrote:
>
> > OK, that's a whole different can of worms. :)
> >
> > TOSCA's Compute capabilities (Container and OperatingSystem) are
> explicit.
> > You specify which OS you want, how much RAM you want, how many CPUs, etc.
> > ARIA's explicit node types (for example, the AWS Compute node) are
> likewise
> > explicit. So there is not querying here: the plugin will attempt to spin
> up
> > exactly the virtual machine you asked for. If it fails, the workflow will
> > fail.
> >
> > This is not good enough, I think, for real world scenarios. There are two
> > possible solutions:
> >
> > 1) Support ranges or fallbacks. So instead of saying "I need 4 GB of RAM"
> > you can say "I would like 4 GB of RAM, but 2 GB would also be OK."
> There's
> > no easy way to do this now without totally changing how the capability
> > types are designed. But, it may be possible to override this via
> policies.
> > So, the capabilities would perhaps specify the minimal requirements,
> while
> > policies specify preferences. Some aspects of this were discussed in the
> > OPEN-O project. DeWayne, has any of this survived in ONAP, or have we not
> > reached that point in the discussion yet?
> >
> > 2) Incorporate this into the bigger topic of resource orchestration.
> This a
> > huge challenge for the industry. The problem field contains not just "I
> > need X amount of RAM" but also "I want all my virtual machines and
> > containers running on the same high-performance network backend and have
> > these two nodes on the same bare-metal machine or at least in the same
> data
> > center rack with a NUMA interconnect, and I also don't want all this to
> > cost more that $100 per hour." That's not crazy: these are real world
> > requirements for high-performance VNFs and service chaining. Resource
> > orchestration requires a full map of what is available in the data
> centers,
> > a negotiation-based algorithm for properly allocating and placing
> > resources, connection to billing services, etc. Of course resource
> > orchestration is not within the scope of ARIA, but it would be great for
> > ARIA to have plugins for them (and TOSCA be able to model resource
> > requirement policies) when these become available.
> >
> >
> >
> > On Fri, Aug 11, 2017 at 3:02 PM, DeWayne Filppi 
> > wrote:
> >
> > > For the "resource realization" part, I was not even considering
> > > multicloud/vim.   I was considering single cloud even outside of
> > > composition.  Just reqs and caps.  If my node "requires" a compute node
> > > with Suse Linux version X with a minimum of 4

Re: TOSCA simple YAML 1.1 support

2017-08-15 Thread Tal Liron
We're still in the investigation phase. Is there a specific 1.1 feature you
would want to prioritize?

On Tue, Aug 15, 2017 at 3:03 AM, David Haja  wrote:

> Hi,
>
>
> Do you have any timelines to support TOSCA simple YAML 1.1?
>
>
> Regards,
>
> David
>


Re: cloudify plugin relationship mapping

2017-08-18 Thread Tal Liron
Yes. The HostedOn relationship is used in requirements for a Container
capability, e.g.:

node_types:
  tosca.nodes.SoftwareComponent:
    ...
    requirements:
      - host:
          capability: tosca.capabilities.Container
          node: tosca.nodes.Compute
          relationship: tosca.relationships.HostedOn
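
And at the node template level this typically ends up looking like this
(a minimal sketch with made-up template names):

node_templates:
  my_vm:
    type: tosca.nodes.Compute
  my_component:
    type: tosca.nodes.SoftwareComponent
    requirements:
      - host: my_vm    # satisfied by my_vm's Container capability, HostedOn relationship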

On Fri, Aug 18, 2017 at 4:47 PM, DeWayne Filppi  wrote:

> Is "HostedOn" in TOSCA equivalent to "contained_in" in Cloudify DSL?
>


Re: cloudify openstack plugin example

2017-08-23 Thread Tal Liron
Input files indeed look like that (as long as they have a .yaml suffix).

If you do "aria services show -f" you can get a complete dump of the entire
model. Can you check that everything is correct there before we move on to
debugging the execution?

On Wed, Aug 23, 2017 at 12:54 PM, DeWayne Filppi 
wrote:

> Having trouble with inputs when trying to run the openstack helloworld.  I
> provide inputs that look like this:
>
> ssh_username: ubuntu
> external_network_name: public_net
> webserver_port: 8080
> private_key_path: ~/dfilppi-rs.pen
> image: some image id
> flavor: "2"
> openstack_config:
>   username: dewayne
>   password: xxx
>   tenant_name: dewayne-tenant
>   auth_url: https://rackspace-api.gigaspaces.com:5000/v3
>
> Openstack config map entry values all become empty strings in the
> execution.  Am I specifying it wrong?  There is no example inputs file to
> compare with, alas.
>


Re: Seeing error "Validation issues: unknown parent type "tosca:Root" in WebServer"

2017-08-24 Thread Tal Liron
You need not just the helloworld YAML file, but also all the scripts it
references. Try copying the whole examples directory to make sure.

We are planning to eventually display a validation error if the YAML file
references artifacts that don't exist, so you wouldn't have to wait until
execution to see the error. This would be part of our general work on
improving artifact support.

On Thu, Aug 24, 2017 at 12:26 PM, Vishwanath Jayaraman <
vishwana...@hotmail.com> wrote:

> I copied the helloworld.yaml to '~/examples'  (i.e home directory) on my
> box.
>
>
> From the instructions at http://ariatosca.incubator.
> apache.org/getting-started/
>
> I executed the
>
> aria service-templates store examples/helloworld.yaml my-service-template
>
> command from the home directory.
>
>
>
> Additional Info:
>
> - I am running this on a fresh Ubuntu 16.04 system
>
>
> ARIA Installation sequence followed is below
>
> - sudo apt-get update
>
> - sudo apt-install python-pip
>
> - pip install --upgrade pip setuptools
>
> - sudo pip install apache-ariatosca
>
> - sudo apt-get install -y python-dev gcc libffi-dev libssl-dev
>
> - sudo pip install apache-ariatosca[ssh]
>
>
> After I execute
>
> 'aria service-templates store examples/helloworld.yaml my-service-template
> -vvv'
>
> I do not see the 'images', 'index.html' and 'scripts' in the
>
> '.aria/resources/service_template/1/'.
>
> I am guessing this is generated by the code.
>
>
> Vish
>
>
> 
> From: Ran Ziv 
> Sent: Thursday, August 24, 2017 4:22 AM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Seeing error "Validation issues: unknown parent type
> "tosca:Root" in WebServer"
>
> No, it doesn't :)
> This is what mine looks like:
>
> $ ls ~/.aria/resources/service_template/2/
> hello-world.yaml  images  index.html  scripts
>
> How did you store the service-template? What was the directory you were in
> when running the store command? (not that it should matter, but I don't
> have any better clue at the moment)
>
>
>
> On Thu, Aug 24, 2017 at 10:51 AM, Vishwanath Jayaraman <
> vishwana...@hotmail.com> wrote:
>
> > Find below the console output , does it look right?
> >
> >
> > ubuntu@intellij:~$ ls -l .aria/
> > total 6
> > -rw-rw-r-- 1 ubuntu ubuntu 14897 Aug 23 20:53 cli.log
> > -rw-rw-r-- 1 ubuntu ubuntu  1246 Aug 23 20:48 config.yaml
> > drwxrwxr-x 2 ubuntu ubuntu 3 Aug 23 20:53 models
> > drwxrwxr-x 2 ubuntu ubuntu 2 Aug 23 20:48 plugins
> > drwxrwxr-x 5 ubuntu ubuntu 5 Aug 23 20:48 resources
> >
> >
> > ubuntu@intellij:~$ ls -l .aria/models/
> > total 29
> > -rw-r--r-- 1 ubuntu ubuntu 188416 Aug 23 20:53 db.sqlite
> >
> >
> > ubuntu@intellij:~$ ls -l .aria/resources/service_template/1/
> > total 1
> > -rw-rw-r-- 1 ubuntu ubuntu 720 Aug 23 20:48 helloworld.yaml
> >
> >
> >
> > Vish
> >
> >
> > 
> > From: Ran Ziv 
> > Sent: Thursday, August 24, 2017 2:27 AM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Seeing error "Validation issues: unknown parent type
> > "tosca:Root" in WebServer"
> >
> > Interesting. The script resource should have been placed in that
> directory
> > when you stored the service-template.
> > Try looking inside ~/.aria and see what you can find under the model
> > storage directory - the "service_template/1/.." path mentioned above
> should
> > be relative to there.
> >
> >
> > On Wed, Aug 23, 2017 at 11:52 PM, Vishwanath Jayaraman <
> > vishwana...@hotmail.com> wrote:
> >
> > > Thanks for making me aware of the -vvv option
> > >
> > >
> > > Below is what I see now
> > >
> > > Console output START
> > >
> > > ubuntu@intellij:~$ aria service-templates store /tmp/helloworld.yaml
> > > my-service-template
> > > Storing service template my-service-template...
> > > Service template my-service-template stored
> > > ubuntu@intellij:~$ aria services create my-service -t
> > my-service-template
> > > -vvv
> > > Creating new service from service template my-service-template...
> > > Service created. The service's name is my-service
> > > ubuntu@intellij:~$ aria executions start install -s my-service -vvv
> > > Starting execution. Press Ctrl+C cancel
> > > 20:49:28 | I | install | {} | Starting 'install' workflow execution
> > > 20:49:30 | D | None | {} | web_server_1 Standard.create has no
> > > implementation
> > > 20:49:30 | D | None | {} | web_server_1 Standard.configure has no
> > > implementation
> > > 20:49:31 | D | None | {} | web_server_1 Standard.start has no
> > > implementation
> > > 20:49:33 | D | None | {} | web_app_1 Standard.create has no
> > implementation
> > > 20:49:34 | D | None | {} | web_app_1->web_server_1
> > > Configure.pre_configure_source has no implementation
> > > 20:49:34 | D | None | {} | web_app_1->web_server_1
> > > Configure.pre_configure_target has no implementation
> > > 20:49:41 | I | aria.orchestrator.execution_
> plugin.operations.run_script_
> > locally
> > > | {u'process': {}, u'scrip

Re: subnet connected to router

2017-08-24 Thread Tal Liron
A few syntax problems:

1. It looks like you don't have any operation implementation, which is a
required field. (What do you expect the inputs to be sent to?)
2. Also, you are not naming the input. It should be "inputs: {
my_input_name: { get_input: openstack_config } }"
3. But #2 won't work because you can't just add inputs in this case,
because they are not declared at the interface type.

Assuming you do have an implementation, could you try passing it using
execution configuration:

https://cwiki.apache.org/confluence/display/ARIATOSCA/Execution+Configuration

Try something like this:

interfaces:
  Configure:
    add_target:
      primary: my_script.sh
      dependencies:
        - openstack_config: { get_input: openstack_config }

On Thu, Aug 24, 2017 at 5:49 PM, DeWayne Filppi  wrote:

> In the ARIA usage of the plugin, I need to pass openstack_config explicitly
> to every operation.  Since the relationships are implicit, how do I
> accomplish this?  Currently I get errors when trying to connect a subnet to
> a router.   I've tried overriding the relationship like so:
>
> subnet:
>   type: aria.openstack.nodes.Subnet
>   properties:
>     resource_id: aria_helloworld_subnet
>     create_if_missing: true
>   interfaces:
>     Standard:
>       create:
>         inputs:
>           openstack_config: { get_input: openstack_config }
>   requirements:
>     - router:
>         node: router
>         relationship:
>           type: aria.openstack.subnet_connected_to_router
>           interfaces:
>             Configure:
>               add_target:
>                 inputs: { get_input: openstack_config }
>     - network: network
>
> Note the router requirement.  Does this syntax look correct?  Spoiler:
> openstack_config never makes it to the plugin.
>
> DeWayne
>


Re: subnet connected to router

2017-08-24 Thread Tal Liron
Oops, sorry, this is the syntax:

interfaces:
  Configure:
    add_target:
      primary: my_script.sh
      dependencies:
        - "openstack_config > { get_input: openstack_config }"

On Thu, Aug 24, 2017 at 6:00 PM, Tal Liron  wrote:

> A few syntax problems:
>
> 1. It looks like you don't have any operation implementation, which is a
> required field. (What do you expect the inputs to be sent to?)
> 2. Also, you are not naming the input. It should be "inputs: {
> my_input_name: { get_input: openstack_config } }"
> 3. But #2 won't work because you can't just add inputs in this case,
> because they are not declared at the interface type.
>
> Assuming you do have an implementation, you could you try passing it using
> execution configuration:
>
> https://cwiki.apache.org/confluence/display/ARIATOSCA/
> Execution+Configuration
>
> Try something like this:
>
> interfaces:
>   Configure:
> add_target:
>   primary: my_script.sh
>   dependencies:
> - openstack_config: { get_input: openstack_config }
>
> On Thu, Aug 24, 2017 at 5:49 PM, DeWayne Filppi 
> wrote:
>
>> In the ARIA usage of the plugin, I need to pass openstack_config
>> explicitly
>> to every operation.  Since the relationships are implicit, how do I
>> accomplish this?  Currently I get errors when trying to connect a subnet
>> to
>> a router.   I've tried overriding the relationship like so:
>>
>> subnet:
>>   type: aria.openstack.nodes.Subnet
>>   properties:
>> resource_id: aria_helloworld_subnet
>> create_if_missing: true
>>   interfaces:
>> Standard:
>>   create:
>> inputs:
>>   openstack_config: { get_input: openstack_config }
>>   requirements:
>> - router:
>> node: router
>> relationship:
>>   type: aria.openstack.subnet_connected_to_router
>>   interfaces:
>> Configure:
>>   add_target:
>> inputs: { get_input: openstack_config }
>> - network: network
>>
>> Note the router requirement.  Does this syntax look correct?  Spoiler:
>> openstack_config never makes it to the plugin.
>>
>> DeWayne
>>
>
>


Re: subnet connected to router

2017-08-24 Thread Tal Liron
I'm not sure what you mean.

Please read the wiki page, it explains the issue in depth.

On Thu, Aug 24, 2017 at 6:17 PM, DeWayne Filppi  wrote:

> So no "inputs"?  Seems weird.
>
> On Thu, Aug 24, 2017 at 4:08 PM, Tal Liron  wrote:
>
> > Oops, sorry, this is the syntax:
> >
> > interfaces:
> >   Configure:
> > add_target:
> >   primary: my_script.sh
> >   dependencies:
> > - "openstack_config > { get_input: openstack_config }"
> >
> > On Thu, Aug 24, 2017 at 6:00 PM, Tal Liron  wrote:
> >
> > > A few syntax problems:
> > >
> > > 1. It looks like you don't have any operation implementation, which is
> a
> > > required field. (What do you expect the inputs to be sent to?)
> > > 2. Also, you are not naming the input. It should be "inputs: {
> > > my_input_name: { get_input: openstack_config } }"
> > > 3. But #2 won't work because you can't just add inputs in this case,
> > > because they are not declared at the interface type.
> > >
> > > Assuming you do have an implementation, you could you try passing it
> > using
> > > execution configuration:
> > >
> > > https://cwiki.apache.org/confluence/display/ARIATOSCA/
> > > Execution+Configuration
> > >
> > > Try something like this:
> > >
> > > interfaces:
> > >   Configure:
> > > add_target:
> > >   primary: my_script.sh
> > >   dependencies:
> > > - openstack_config: { get_input: openstack_config }
> > >
> > > On Thu, Aug 24, 2017 at 5:49 PM, DeWayne Filppi 
> > > wrote:
> > >
> > >> In the ARIA usage of the plugin, I need to pass openstack_config
> > >> explicitly
> > >> to every operation.  Since the relationships are implicit, how do I
> > >> accomplish this?  Currently I get errors when trying to connect a
> subnet
> > >> to
> > >> a router.   I've tried overriding the relationship like so:
> > >>
> > >> subnet:
> > >>   type: aria.openstack.nodes.Subnet
> > >>   properties:
> > >> resource_id: aria_helloworld_subnet
> > >> create_if_missing: true
> > >>   interfaces:
> > >> Standard:
> > >>   create:
> > >> inputs:
> > >>   openstack_config: { get_input: openstack_config }
> > >>   requirements:
> > >> - router:
> > >> node: router
> > >> relationship:
> > >>   type: aria.openstack.subnet_connected_to_router
> > >>   interfaces:
> > >> Configure:
> > >>   add_target:
> > >> inputs: { get_input: openstack_config }
> > >> - network: network
> > >>
> > >> Note the router requirement.  Does this syntax look correct?  Spoiler:
> > >> openstack_config never makes it to the plugin.
> > >>
> > >> DeWayne
> > >>
> > >
> > >
> >
>


Re: subnet connected to router

2017-08-24 Thread Tal Liron
What is the error?

On Thu, Aug 24, 2017 at 6:22 PM, DeWayne Filppi  wrote:

> actually "dependencies" fails validation.
>
> On Thu, Aug 24, 2017 at 4:08 PM, Tal Liron  wrote:
>
> > Oops, sorry, this is the syntax:
> >
> > interfaces:
> >   Configure:
> > add_target:
> >   primary: my_script.sh
> >   dependencies:
> > - "openstack_config > { get_input: openstack_config }"
> >
> > On Thu, Aug 24, 2017 at 6:00 PM, Tal Liron  wrote:
> >
> > > A few syntax problems:
> > >
> > > 1. It looks like you don't have any operation implementation, which is
> a
> > > required field. (What do you expect the inputs to be sent to?)
> > > 2. Also, you are not naming the input. It should be "inputs: {
> > > my_input_name: { get_input: openstack_config } }"
> > > 3. But #2 won't work because you can't just add inputs in this case,
> > > because they are not declared at the interface type.
> > >
> > > Assuming you do have an implementation, you could you try passing it
> > using
> > > execution configuration:
> > >
> > > https://cwiki.apache.org/confluence/display/ARIATOSCA/
> > > Execution+Configuration
> > >
> > > Try something like this:
> > >
> > > interfaces:
> > >   Configure:
> > > add_target:
> > >   primary: my_script.sh
> > >   dependencies:
> > > - openstack_config: { get_input: openstack_config }
> > >
> > > On Thu, Aug 24, 2017 at 5:49 PM, DeWayne Filppi 
> > > wrote:
> > >
> > >> In the ARIA usage of the plugin, I need to pass openstack_config
> > >> explicitly
> > >> to every operation.  Since the relationships are implicit, how do I
> > >> accomplish this?  Currently I get errors when trying to connect a
> subnet
> > >> to
> > >> a router.   I've tried overriding the relationship like so:
> > >>
> > >> subnet:
> > >>   type: aria.openstack.nodes.Subnet
> > >>   properties:
> > >> resource_id: aria_helloworld_subnet
> > >> create_if_missing: true
> > >>   interfaces:
> > >> Standard:
> > >>   create:
> > >> inputs:
> > >>   openstack_config: { get_input: openstack_config }
> > >>   requirements:
> > >> - router:
> > >> node: router
> > >> relationship:
> > >>   type: aria.openstack.subnet_connected_to_router
> > >>   interfaces:
> > >> Configure:
> > >>   add_target:
> > >> inputs: { get_input: openstack_config }
> > >> - network: network
> > >>
> > >> Note the router requirement.  Does this syntax look correct?  Spoiler:
> > >> openstack_config never makes it to the plugin.
> > >>
> > >> DeWayne
> > >>
> > >
> > >
> >
>


Re: subnet connected to router

2017-08-24 Thread Tal Liron
Did you read the wiki? ARIA will send those specially formatted
dependencies as arguments to the @operation function.

It would help to see your complete example, as I don't know what you're
doing and not doing anymore. Could you throw it into a GitHub repo perhaps?

On Thu, Aug 24, 2017 at 6:53 PM, DeWayne Filppi  wrote:

> 'dependencies' is a child of implementation in the spec.   I don't think
> it's going to do anything for me anyway.  I just want to pass
> openstack_config to the add_target operation as inputs.
>
> On Thu, Aug 24, 2017 at 4:28 PM, Tal Liron  wrote:
>
> > What is the error?
> >
> > On Thu, Aug 24, 2017 at 6:22 PM, DeWayne Filppi 
> > wrote:
> >
> > > actually "dependencies" fails validation.
> > >
> > > On Thu, Aug 24, 2017 at 4:08 PM, Tal Liron  wrote:
> > >
> > > > Oops, sorry, this is the syntax:
> > > >
> > > > interfaces:
> > > >   Configure:
> > > > add_target:
> > > >   primary: my_script.sh
> > > >   dependencies:
> > > > - "openstack_config > { get_input: openstack_config }"
> > > >
> > > > On Thu, Aug 24, 2017 at 6:00 PM, Tal Liron  wrote:
> > > >
> > > > > A few syntax problems:
> > > > >
> > > > > 1. It looks like you don't have any operation implementation, which
> > is
> > > a
> > > > > required field. (What do you expect the inputs to be sent to?)
> > > > > 2. Also, you are not naming the input. It should be "inputs: {
> > > > > my_input_name: { get_input: openstack_config } }"
> > > > > 3. But #2 won't work because you can't just add inputs in this
> case,
> > > > > because they are not declared at the interface type.
> > > > >
> > > > > Assuming you do have an implementation, you could you try passing
> it
> > > > using
> > > > > execution configuration:
> > > > >
> > > > > https://cwiki.apache.org/confluence/display/ARIATOSCA/
> > > > > Execution+Configuration
> > > > >
> > > > > Try something like this:
> > > > >
> > > > > interfaces:
> > > > >   Configure:
> > > > > add_target:
> > > > >   primary: my_script.sh
> > > > >   dependencies:
> > > > > - openstack_config: { get_input: openstack_config }
> > > > >
> > > > > On Thu, Aug 24, 2017 at 5:49 PM, DeWayne Filppi <
> dewa...@cloudify.co
> > >
> > > > > wrote:
> > > > >
> > > > >> In the ARIA usage of the plugin, I need to pass openstack_config
> > > > >> explicitly
> > > > >> to every operation.  Since the relationships are implicit, how do
> I
> > > > >> accomplish this?  Currently I get errors when trying to connect a
> > > subnet
> > > > >> to
> > > > >> a router.   I've tried overriding the relationship like so:
> > > > >>
> > > > >> subnet:
> > > > >>   type: aria.openstack.nodes.Subnet
> > > > >>   properties:
> > > > >> resource_id: aria_helloworld_subnet
> > > > >> create_if_missing: true
> > > > >>   interfaces:
> > > > >> Standard:
> > > > >>   create:
> > > > >> inputs:
> > > > >>   openstack_config: { get_input: openstack_config }
> > > > >>   requirements:
> > > > >> - router:
> > > > >> node: router
> > > > >> relationship:
> > > > >>   type: aria.openstack.subnet_connected_to_router
> > > > >>   interfaces:
> > > > >> Configure:
> > > > >>   add_target:
> > > > >> inputs: { get_input: openstack_config }
> > > > >> - network: network
> > > > >>
> > > > >> Note the router requirement.  Does this syntax look correct?
> > Spoiler:
> > > > >> openstack_config never makes it to the plugin.
> > > > >>
> > > > >> DeWayne
> > > > >>
> > > > >
> > > > >
> > > >
> > >
> >
>


Re: subnet connected to router

2017-08-25 Thread Tal Liron
"implementation" is a required field in the TOSCA spec, so you must specify
it even if it is the same.
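
So when overriding an operation at the template level, repeat the
implementation next to whatever you are overriding -- a rough sketch (the
"..." is a placeholder for the exact value declared on the type):

interfaces:
  Configure:
    add_target:
      implementation: ...    # repeat the implementation exactly as the type declares it
      inputs:
        openstack_config: { get_input: openstack_config }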

On Fri, Aug 25, 2017 at 12:47 PM, DeWayne Filppi 
wrote:

> It appears that this issue *was* fixed by repeating the implementation key
> in the add_target block.  Intuitively, I would expect that fields I didn't
> override would be untouched, but apparently not.
>
> On Thu, Aug 24, 2017 at 4:59 PM, Tal Liron  wrote:
>
> > Did you read the wiki? ARIA will send those specially formatted
> > dependencies as arguments to the @operation function.
> >
> > It would help to see your complete example, as I don't know what you're
> > doing and not doing anymore. Could you throw it into a GitHub repo
> perhaps?
> >
> > On Thu, Aug 24, 2017 at 6:53 PM, DeWayne Filppi 
> > wrote:
> >
> > > 'dependencies' is a child of implementation in the spec.   I don't
> think
> > > it's going to do anything for me anyway.  I just want to pass
> > > openstack_config to the add_target operation as inputs.
> > >
> > > On Thu, Aug 24, 2017 at 4:28 PM, Tal Liron  wrote:
> > >
> > > > What is the error?
> > > >
> > > > On Thu, Aug 24, 2017 at 6:22 PM, DeWayne Filppi  >
> > > > wrote:
> > > >
> > > > > actually "dependencies" fails validation.
> > > > >
> > > > > On Thu, Aug 24, 2017 at 4:08 PM, Tal Liron 
> wrote:
> > > > >
> > > > > > Oops, sorry, this is the syntax:
> > > > > >
> > > > > > interfaces:
> > > > > >   Configure:
> > > > > > add_target:
> > > > > >   primary: my_script.sh
> > > > > >   dependencies:
> > > > > > - "openstack_config > { get_input: openstack_config }"
> > > > > >
> > > > > > On Thu, Aug 24, 2017 at 6:00 PM, Tal Liron 
> > wrote:
> > > > > >
> > > > > > > A few syntax problems:
> > > > > > >
> > > > > > > 1. It looks like you don't have any operation implementation,
> > which
> > > > is
> > > > > a
> > > > > > > required field. (What do you expect the inputs to be sent to?)
> > > > > > > 2. Also, you are not naming the input. It should be "inputs: {
> > > > > > > my_input_name: { get_input: openstack_config } }"
> > > > > > > 3. But #2 won't work because you can't just add inputs in this
> > > case,
> > > > > > > because they are not declared at the interface type.
> > > > > > >
> > > > > > > Assuming you do have an implementation, could you try
> passing
> > > it
> > > > > > using
> > > > > > > execution configuration:
> > > > > > >
> > > > > > > https://cwiki.apache.org/confluence/display/ARIATOSCA/
> > > > > > > Execution+Configuration
> > > > > > >
> > > > > > > Try something like this:
> > > > > > >
> > > > > > > interfaces:
> > > > > > >   Configure:
> > > > > > > add_target:
> > > > > > >   primary: my_script.sh
> > > > > > >   dependencies:
> > > > > > > - openstack_config: { get_input: openstack_config }
> > > > > > >
> > > > > > > On Thu, Aug 24, 2017 at 5:49 PM, DeWayne Filppi <
> > > dewa...@cloudify.co
> > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > >> In the ARIA usage of the plugin, I need to pass
> openstack_config
> > > > > > >> explicitly
> > > > > > >> to every operation.  Since the relationships are implicit, how
> > do
> > > I
> > > > > > >> accomplish this?  Currently I get errors when trying to
> connect
> > a
> > > > > subnet
> > > > > > >> to
> > > > > > >> a router.   I've tried overriding the relationship like so:
> > > > > > >>
> > > > > > >> subnet:
> > > > > > >>   type: aria.openstack.nodes.Subnet
> > > > > > >>   properties:
> > > > > > >> resource_id: aria_helloworld_subnet
> > > > > > >> create_if_missing: true
> > > > > > >>   interfaces:
> > > > > > >> Standard:
> > > > > > >>   create:
> > > > > > >> inputs:
> > > > > > >>   openstack_config: { get_input: openstack_config
> }
> > > > > > >>   requirements:
> > > > > > >> - router:
> > > > > > >> node: router
> > > > > > >> relationship:
> > > > > > >>   type: aria.openstack.subnet_connected_to_router
> > > > > > >>   interfaces:
> > > > > > >> Configure:
> > > > > > >>   add_target:
> > > > > > >> inputs: { get_input: openstack_config }
> > > > > > >> - network: network
> > > > > > >>
> > > > > > >> Note the router requirement.  Does this syntax look correct?
> > > > Spoiler:
> > > > > > >> openstack_config never makes it to the plugin.
> > > > > > >>
> > > > > > >> DeWayne
> > > > > > >>
> > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: subnet connected to router

2017-08-25 Thread Tal Liron
Generally the TOSCA spec is very weak in clearly defining what happens
during inheritance, especially concerning polymorphism. What fields get
inherited? What limitations exist for overriding types and values?

I am hoping to have time in the future to sit down and write a whole
chapter called "Inheritance and Polymorphism" as a contribution to the
spec. In ARIA, at least, we try to work within the required syntax while
still validating the deeper grammar. I'm working right now on a test
suite for these things.
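
As a small illustration of the kind of question that needs answering, consider
this sketch (the type names and script path are made up for the example):

relationship_types:

  Base:
    derived_from: tosca.relationships.ConnectsTo
    interfaces:
      Configure:
        add_target:
          implementation: scripts/connect.sh
          inputs:
            config:
              type: map
              default: {}

  Derived:
    derived_from: Base
    interfaces:
      Configure:
        add_target:
          inputs:
            flag:
              type: boolean
              default: false

Does Derived inherit Base's implementation and its "config" input when it
refines add_target, or does the refinement replace them? The spec doesn't say
clearly, and the same question comes up when a template refines an operation,
which is exactly what bit us here.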

On Fri, Aug 25, 2017 at 1:03 PM, DeWayne Filppi  wrote:

> Pity.  I guess the stock answer is "if you want to hide such details,
> create your own type".
>
> On Fri, Aug 25, 2017 at 10:57 AM, Tal Liron  wrote:
>
> > "implementation" is a required field in the TOSCA spec, so you must
> specify
> > it even if it is the same.
> >
> > On Fri, Aug 25, 2017 at 12:47 PM, DeWayne Filppi 
> > wrote:
> >
> > > It appears that this issue *was* fixed by repeating the implementation
> > key
> > > in the add_target block.  Intuitively, I would expect that fields I
> > didn't
> > > override would be untouched, but apparently not.
> > >
> > > On Thu, Aug 24, 2017 at 4:59 PM, Tal Liron  wrote:
> > >
> > > > Did you read the wiki? ARIA will send those specially formatted
> > > > dependencies as arguments to the @operation function.
> > > >
> > > > It would help to see your complete example, as I don't know what
> you're
> > > > doing and not doing anymore. Could you throw it into a GitHub repo
> > > perhaps?
> > > >
> > > > On Thu, Aug 24, 2017 at 6:53 PM, DeWayne Filppi  >
> > > > wrote:
> > > >
> > > > > 'dependencies' is a child of implementation in the spec.   I don't
> > > think
> > > > > it's going to do anything for me anyway.  I just want to pass
> > > > > openstack_config to the add_target operation as inputs.
> > > > >
> > > > > On Thu, Aug 24, 2017 at 4:28 PM, Tal Liron 
> wrote:
> > > > >
> > > > > > What is the error?
> > > > > >
> > > > > > On Thu, Aug 24, 2017 at 6:22 PM, DeWayne Filppi <
> > dewa...@cloudify.co
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > actually "dependencies" fails validation.
> > > > > > >
> > > > > > > On Thu, Aug 24, 2017 at 4:08 PM, Tal Liron 
> > > wrote:
> > > > > > >
> > > > > > > > Oops, sorry, this is the syntax:
> > > > > > > >
> > > > > > > > interfaces:
> > > > > > > >   Configure:
> > > > > > > > add_target:
> > > > > > > >   primary: my_script.sh
> > > > > > > >   dependencies:
> > > > > > > > - "openstack_config > { get_input: openstack_config
> }"
> > > > > > > >
> > > > > > > > On Thu, Aug 24, 2017 at 6:00 PM, Tal Liron 
> > > > wrote:
> > > > > > > >
> > > > > > > > > A few syntax problems:
> > > > > > > > >
> > > > > > > > > 1. It looks like you don't have any operation
> implementation,
> > > > which
> > > > > > is
> > > > > > > a
> > > > > > > > > required field. (What do you expect the inputs to be sent
> > to?)
> > > > > > > > > 2. Also, you are not naming the input. It should be
> "inputs:
> > {
> > > > > > > > > my_input_name: { get_input: openstack_config } }"
> > > > > > > > > 3. But #2 won't work because you can't just add inputs in
> > this
> > > > > case,
> > > > > > > > > because they are not declared at the interface type.
> > > > > > > > >
> > > > > > > > > Assuming you do have an implementation, could you try
> > > passing
> > > > > it
> > > > > > > > using
> > > > > > > > > execution configuration:
> > > > > > > > >
> > > > > > > > > https://cwiki.apache.org/conflu

Re: subnet connected to router

2017-08-25 Thread Tal Liron
Could you create a minimal YAML file that demonstrates this problem so we
can reproduce it? It could be a bug.

On Fri, Aug 25, 2017 at 2:48 PM, DeWayne Filppi  wrote:

> I don't know if this is a clue or not, but I modified the
> relationship:  aria.openstack.subnet_connected_to_router, in the aria
> openstack plugin.yaml to require the 'openstack_config' input for the
> Configure.add_target operation (and got rid of the default).  Afterwards,
> any attempt to validate this:
>
> subnet:
>   type: aria.openstack.nodes.Subnet
>   properties:
> resource_id: aria_helloworld_subnet
> create_if_missing: true
>   interfaces:
> Standard:
>   create:
> inputs:
>   openstack_config: { get_input: openstack_config }
>   requirements:
> - router:
> node: router
> relationship:
>   type: aria.openstack.subnet_connected_to_router
>   interfaces:
> Configure:
>   add_target:
> inputs:
>   openstack_config: { get_input: openstack_config }
>
> Fails with the error: Validation issues:
>   4: interface definition "Configure" does not assign a value to a required
> operation input "add_target.openstack_config" in "relationship"
>
> Which is further confirmation that the input isn't seen, and normally the
> default gets used (empty strings).  I don't see examples anywhere that show
> how to properly override the interface inside a relationship inside a
> requirement.
>
>
> On Fri, Aug 25, 2017 at 10:57 AM, Tal Liron  wrote:
>
> > "implementation" is a required field in the TOSCA spec, so you must
> specify
> > it even if it is the same.
> >
> > On Fri, Aug 25, 2017 at 12:47 PM, DeWayne Filppi 
> > wrote:
> >
> > > It appears that this issue *was* fixed by repeating the implementation
> > key
> > > in the add_target block.  Intuitively, I would expect that fields I
> > didn't
> > > override would be untouched, but apparently not.
> > >
> > > On Thu, Aug 24, 2017 at 4:59 PM, Tal Liron  wrote:
> > >
> > > > Did you read the wiki? ARIA will send those specially formatted
> > > > dependencies as arguments to the @operation function.
> > > >
> > > > It would help to see your complete example, as I don't know what
> you're
> > > > doing and not doing anymore. Could you throw it into a GitHub repo
> > > perhaps?
> > > >
> > > > On Thu, Aug 24, 2017 at 6:53 PM, DeWayne Filppi  >
> > > > wrote:
> > > >
> > > > > 'dependencies' is a child of implementation in the spec.   I don't
> > > think
> > > > > it's going to do anything for me anyway.  I just want to pass
> > > > > openstack_config to the add_target operation as inputs.
> > > > >
> > > > > On Thu, Aug 24, 2017 at 4:28 PM, Tal Liron 
> wrote:
> > > > >
> > > > > > What is the error?
> > > > > >
> > > > > > On Thu, Aug 24, 2017 at 6:22 PM, DeWayne Filppi <
> > dewa...@cloudify.co
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > actually "dependencies" fails validation.
> > > > > > >
> > > > > > > On Thu, Aug 24, 2017 at 4:08 PM, Tal Liron 
> > > wrote:
> > > > > > >
> > > > > > > > Oops, sorry, this is the syntax:
> > > > > > > >
> > > > > > > > interfaces:
> > > > > > > >   Configure:
> > > > > > > > add_target:
> > > > > > > >   primary: my_script.sh
> > > > > > > >   dependencies:
> > > > > > > > - "openstack_config > { get_input: openstack_config
> }"
> > > > > > > >
> > > > > > > > On Thu, Aug 24, 2017 at 6:00 PM, Tal Liron 
> > > > wrote:
> > > > > > > >
> > > > > > > > > A few syntax problems:
> > > > > > > > >
> > > > > > > > > 1. It looks like you don't have any operation
> implementation,
> > > > which
> > > > > > is
> > > > > > > a
> > > > > &g

Re: subnet connected to router

2017-08-25 Thread Tal Liron
DeWayne, this is still not very minimal, and I don't understand what "dies"
means. Could you please provide the error?

Here's my stab at a minimal example, please let me know if it's what you
got:

tosca_definitions_version: tosca_simple_yaml_1_0

relationship_types:

  MyRelationship:
interfaces:
  Configure:
add_target:
  inputs:
my_input:
  type: string

node_types:

  MyNode:
requirements:
  - my_requirement:
  capability: tosca.capabilities.Container
  relationship: MyRelationship

topology_template:

  node_templates:
my_node:
  type: MyNode
  requirements:
- my_requirement:
relationship:
  interfaces:
Configure:
  add_target:
inputs:
  my_input: test

The above gave me this exception:

AttributeError: 'NoneType' object has no attribute '_name'
  File "/home/emblemparade/ariatosca/aria/parser/consumption/consumer.py",
line 73, in consume
consumer.consume()
  File "/home/emblemparade/ariatosca/aria/parser/consumption/modeling.py",
line 36, in consume
self.context.presentation.presenter._get_model(self.context)
  File "/home/emblemparade/ariatosca/aria/utils/caching.py", line 84, in
__call__
return_value = self.func(*args, **kwargs)
  File
"/home/emblemparade/ariatosca/extensions/aria_extension_tosca/simple_v1_0/presenter.py",
line 82, in _get_model
return create_service_template_model(context)
  File
"/home/emblemparade/ariatosca/extensions/aria_extension_tosca/simple_v1_0/modeling/__init__.py",
line 123, in create_service_template_model
fix_node_template_model(context, model, node_template)
  File
"/home/emblemparade/ariatosca/extensions/aria_extension_tosca/simple_v1_0/modeling/__init__.py",
line 209, in fix_node_template_model
requirement))
  File
"/home/emblemparade/ariatosca/extensions/aria_extension_tosca/simple_v1_0/modeling/__init__.py",
line 293, in create_requirement_template_model
create_relationship_template_model(context, service_template,
relationship)
  File
"/home/emblemparade/ariatosca/extensions/aria_extension_tosca/simple_v1_0/modeling/__init__.py",
line 318, in create_relationship_template_model
relationship.interfaces)
  File
"/home/emblemparade/ariatosca/extensions/aria_extension_tosca/simple_v1_0/modeling/__init__.py",
line 621, in create_interface_template_models
interface = create_interface_template_model(context, service_template,
interface)
  File
"/home/emblemparade/ariatosca/extensions/aria_extension_tosca/simple_v1_0/modeling/__init__.py",
line 354, in create_interface_template_model
interface_type =
service_template.interface_types.get_descendant(interface_type._name)
Validation issues:
  0: 'NoneType' object has no attribute '_name'
 AttributeError: 'NoneType' object has no attribute '_name'


On Fri, Aug 25, 2017 at 3:28 PM, DeWayne Filppi  wrote:

> tosca_definitions_version: tosca_simple_yaml_1_0
>
> imports:
>   -
> https://raw.githubusercontent.com/cloudify-cosmo/aria-
> extension-cloudify/master/plugins/openstack/plugin.yaml
>   - aria-1.0
>
> dsl_definitions:
>   openstack_config: &openstack_config
> username:
> password:
> tenant_name:
> auth_url:
>
> topology_template:
>
>   node_templates:
>
> router:
>   type: aria.openstack.nodes.Router
>   properties:
> external_network: gateway_net
> create_if_missing: true
> resource_id: aria_helloworld_rtr
>   interfaces:
> Standard:
>   create:
> inputs:
>   openstack_config: *openstack_config
>
> network:
>   type: aria.openstack.nodes.Network
>   properties:
> resource_id: aria_helloworld_network
> create_if_missing: true
>   interfaces:
> Standard:
>   create:
> inputs:
>   openstack_config: *openstack_config
>
> subnet:
>   type: aria.openstack.nodes.Subnet
>   properties:
> resource_id: aria_helloworld_subnet
> create_if_missing: true
>   interfaces:
> Standard:
>   create:
> inputs:
>   openstack_config: *openstack_config
>   requirements:
> - router:
> node: router
> relationship:
>   type: aria.openstack.subnet_connected_to_router
>   interfaces:
> Configure:
>   add_target:
>         inputs:
>   openstack_config: *openstack_config
> - network: ne

Re: subnet connected to router

2017-08-25 Thread Tal Liron
Is my attempt not what you meant? Was your error different?

On Fri, Aug 25, 2017 at 4:01 PM, DeWayne Filppi  wrote:

> OK.  You want something not tied to Openstack, probably with just two
> nodes.  Will do.
>
> On Fri, Aug 25, 2017 at 1:55 PM, Tal Liron  wrote:
>
> > DeWayne, this is still not very minimal, and I don't understand what
> "dies"
> > means. Could you please provide the error?
> >
> > Here's my stab at a minimal example, please let me know if it's what you
> > got:
> >
> > tosca_definitions_version: tosca_simple_yaml_1_0
> >
> > relationship_types:
> >
> >   MyRelationship:
> > interfaces:
> >   Configure:
> > add_target:
> >   inputs:
> > my_input:
> >   type: string
> >
> > node_types:
> >
> >   MyNode:
> > requirements:
> >   - my_requirement:
> >   capability: tosca.capabilities.Container
> >   relationship: MyRelationship
> >
> > topology_template:
> >
> >   node_templates:
> > my_node:
> >   type: MyNode
> >   requirements:
> > - my_requirement:
> > relationship:
> >   interfaces:
> > Configure:
> >   add_target:
> > inputs:
> >   my_input: test
> >
> > The above gave me this exception:
> >
> > AttributeError: 'NoneType' object has no attribute '_name'
> >   File "/home/emblemparade/ariatosca/aria/parser/consumption/
> consumer.py",
> > line 73, in consume
> > consumer.consume()
> >   File "/home/emblemparade/ariatosca/aria/parser/consumption/
> modeling.py",
> > line 36, in consume
> > self.context.presentation.presenter._get_model(self.context)
> >   File "/home/emblemparade/ariatosca/aria/utils/caching.py", line 84, in
> > __call__
> > return_value = self.func(*args, **kwargs)
> >   File
> > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > tosca/simple_v1_0/presenter.py",
> > line 82, in _get_model
> > return create_service_template_model(context)
> >   File
> > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > tosca/simple_v1_0/modeling/__init__.py",
> > line 123, in create_service_template_model
> > fix_node_template_model(context, model, node_template)
> >   File
> > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > tosca/simple_v1_0/modeling/__init__.py",
> > line 209, in fix_node_template_model
> > requirement))
> >   File
> > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > tosca/simple_v1_0/modeling/__init__.py",
> > line 293, in create_requirement_template_model
> > create_relationship_template_model(context, service_template,
> > relationship)
> >   File
> > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > tosca/simple_v1_0/modeling/__init__.py",
> > line 318, in create_relationship_template_model
> > relationship.interfaces)
> >   File
> > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > tosca/simple_v1_0/modeling/__init__.py",
> > line 621, in create_interface_template_models
> > interface = create_interface_template_model(context,
> service_template,
> > interface)
> >   File
> > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > tosca/simple_v1_0/modeling/__init__.py",
> > line 354, in create_interface_template_model
> > interface_type =
> > service_template.interface_types.get_descendant(interface_type._name)
> > Validation issues:
> >   0: 'NoneType' object has no attribute '_name'
> >  AttributeError: 'NoneType' object has no attribute '_name'
> >
> >
> > On Fri, Aug 25, 2017 at 3:28 PM, DeWayne Filppi 
> > wrote:
> >
> > > tosca_definitions_version: tosca_simple_yaml_1_0
> > >
> > > imports:
> > >   -
> > > https://raw.githubusercontent.com/cloudify-cosmo/aria-
> > > extension-cloudify/master/plugins/openstack/plugin.yaml
> > >   - aria-1.0
> > >
> > > dsl_definitions:
> > >   openstack_config: &openstack_config
> > > username:
> > > password:
> > > tenant_name:
> > > auth_url:
> > >
> > > topology_template:
&g

Re: subnet connected to router

2017-08-25 Thread Tal Liron
That you supply where? Override where? What is the error? Sorry DeWayne,
very hard for me to follow your explanation. We really need a complete
simple example here and the exact errors that you get.

On Fri, Aug 25, 2017 at 4:06 PM, DeWayne Filppi  wrote:

> Yes, in my case the relationship between the subnet and router (add_target)
> is executed properly.   The problem is that the "openstack_config" input
> that I supply is not passed as an input.  When I run 'aria service show
> -f', it is clear that my override is ignored and the default (all empty strings)
> is used.
>
> On Fri, Aug 25, 2017 at 2:02 PM, Tal Liron  wrote:
>
> > Is my attempt not what you meant? Was your error different?
> >
> > On Fri, Aug 25, 2017 at 4:01 PM, DeWayne Filppi 
> > wrote:
> >
> > > OK.  You want something not tied to Openstack, probably with just two
> > > nodes.  Will do.
> > >
> > > On Fri, Aug 25, 2017 at 1:55 PM, Tal Liron  wrote:
> > >
> > > > DeWayne, this is still not very minimal, and I don't understand what
> > > "dies"
> > > > means. Could you please provide the error?
> > > >
> > > > Here's my stab at a minimal example, please let me know if it's what
> > you
> > > > got:
> > > >
> > > > tosca_definitions_version: tosca_simple_yaml_1_0
> > > >
> > > > relationship_types:
> > > >
> > > >   MyRelationship:
> > > > interfaces:
> > > >   Configure:
> > > > add_target:
> > > >   inputs:
> > > > my_input:
> > > >   type: string
> > > >
> > > > node_types:
> > > >
> > > >   MyNode:
> > > > requirements:
> > > >   - my_requirement:
> > > >   capability: tosca.capabilities.Container
> > > >   relationship: MyRelationship
> > > >
> > > > topology_template:
> > > >
> > > >   node_templates:
> > > > my_node:
> > > >   type: MyNode
> > > >   requirements:
> > > > - my_requirement:
> > > > relationship:
> > > >   interfaces:
> > > > Configure:
> > > >   add_target:
> > > > inputs:
> > > >   my_input: test
> > > >
> > > > The above gave me this exception:
> > > >
> > > > AttributeError: 'NoneType' object has no attribute '_name'
> > > >   File "/home/emblemparade/ariatosca/aria/parser/consumption/
> > > consumer.py",
> > > > line 73, in consume
> > > > consumer.consume()
> > > >   File "/home/emblemparade/ariatosca/aria/parser/consumption/
> > > modeling.py",
> > > > line 36, in consume
> > > > self.context.presentation.presenter._get_model(self.context)
> > > >   File "/home/emblemparade/ariatosca/aria/utils/caching.py", line
> 84,
> > in
> > > > __call__
> > > > return_value = self.func(*args, **kwargs)
> > > >   File
> > > > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > > > tosca/simple_v1_0/presenter.py",
> > > > line 82, in _get_model
> > > > return create_service_template_model(context)
> > > >   File
> > > > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > > > tosca/simple_v1_0/modeling/__init__.py",
> > > > line 123, in create_service_template_model
> > > > fix_node_template_model(context, model, node_template)
> > > >   File
> > > > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > > > tosca/simple_v1_0/modeling/__init__.py",
> > > > line 209, in fix_node_template_model
> > > > requirement))
> > > >   File
> > > > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > > > tosca/simple_v1_0/modeling/__init__.py",
> > > > line 293, in create_requirement_template_model
> > > > create_relationship_template_model(context, service_template,
> > > > relationship)
> > > >   File
> > > > "/home/emblemparade/ariatosca/extensions/aria_extension_
> > > > tosca/simple_v1_0/modeling/__ini

Re: subnet connected to router

2017-08-25 Thread Tal Liron
If you couldn't do it until now, then I'm confused as to where you see a
failure. You say you already see a problem in "aria service show -f", so
shouldn't that be enough to see the bug?

To be honest, I'm also unsure how to access the operation inputs cleanly
with ctx. The problem is that we merge the inputs into the operation
arguments, and there's no way to know which is which. I've argued against
this in the past. Perhaps this use case could help convince the rest of the
team that it's a good idea to separate them.

For now, you can access the task arguments: those include the inputs as
well as other implementation-specific arguments. To print them out in JSON
from an implementation script:

ctx --json-output task arguments

There is also a security hole here: the task arguments include things
like the SSH password, which I don't think should be exposed via ctx ever.
Yet another argument for separation.


Re: subnet connected to router

2017-08-25 Thread Tal Liron
And you're saying that also in "aria services show -f" you see that it's
NOT SET?

On Fri, Aug 25, 2017 at 5:29 PM, DeWayne Filppi  wrote:

> Never mind, figured it out from the code.  Here's the simplified template:
>
> tosca_definitions_version: tosca_simple_yaml_1_0
>
>
> imports:
>   - aria-1.0
>
> dsl_definitions:
>   openstack_config: &openstack_config
> username: dewayne
>
> data_types:
>   config:
> properties:
>   username:
> type: string
> default: 'NOT SET'
>
> relationship_types:
>   subnet_connected_to_router:
> derived_from: ConnectsTo
> interfaces:
>   Configure:
> add_target:
>   implementation: connect.sh
>   inputs:
>   openstack_config:
> type: config
> required: true
> default: {}
>
> node_types:
>   router:
> derived_from: tosca.nodes.Root
>
>   subnet:
> derived_from: tosca.nodes.Root
> requirements:
>   - router:
>   capability: tosca.capabilities.Node
>   node: router
>   relationship: subnet_connected_to_router
>
> topology_template:
>
>   node_templates:
>
> router:
>   type: router
>
> subnet:
>   type: subnet
>   requirements:
> - router:
> node: router
> relationship:
>   type: subnet_connected_to_router
>   interfaces:
> Configure:
>   add_target:
> inputs:
>   openstack_config: *openstack_config
>
>
> There is a script in the same directory referred to "connect.sh":
>
> #!/bin/sh
>
> ctx logger info "HERE $openstack_config"
>
>
> When "install" is run, the output of the log statement is "NOT SET" (the
> default).  Even though I have overridden it (should be "dewayne").
>
>
>
> On Fri, Aug 25, 2017 at 3:06 PM, DeWayne Filppi 
> wrote:
>
> > For the simplified example I need to provide a shell script or python
> > script that dumps the inputs passed to the operation impl.  Not sure how
> > that's done in ARIA.  IOW, I don't know how to refer to inputs, and don't
> > see any example.
> >
> > On Fri, Aug 25, 2017 at 2:17 PM, Tal Liron  wrote:
> >
> >> That you supply where? Override where? What is the error? Sorry DeWayne,
> >> very hard for me to follow your explanation. We really need a complete
> >> simple example here and the exact errors that you get.
> >>
> >> On Fri, Aug 25, 2017 at 4:06 PM, DeWayne Filppi 
> >> wrote:
> >>
> >> > Yes, in my case the relationship between the subnet and router
> >> (add_target)
> >> > is executed properly.   The problem is that the "openstack_config"
> input
> >> > that I supply is not passed as an input.  When I run 'aria service
> show
> >> > -f', it is clear that my override is ignored and the default (all empty
> >> strings)
> >> > is used.
> >> >
> >> > On Fri, Aug 25, 2017 at 2:02 PM, Tal Liron  wrote:
> >> >
> >> > > Is my attempt not what you meant? Was your error different?
> >> > >
> >> > > On Fri, Aug 25, 2017 at 4:01 PM, DeWayne Filppi <
> dewa...@cloudify.co>
> >> > > wrote:
> >> > >
> >> > > > OK.  You want something not tied to Openstack, probably with just
> >> two
> >> > > > nodes.  Will do.
> >> > > >
> >> > > > On Fri, Aug 25, 2017 at 1:55 PM, Tal Liron 
> wrote:
> >> > > >
> >> > > > > DeWayne, this is still not very minimal, and I don't understand
> >> what
> >> > > > "dies"
> >> > > > > means. Could you please provide the error?
> >> > > > >
> >> > > > > Here's my stab at a minimal example, please let me know if it's
> >> what
> >> > > you
> >> > > > > got:
> >> > > > >
> >> > > > > tosca_definitions_version: tosca_simple_yaml_1_0
> >> > > > >
> >> > > > > relationship_types:
> >> > > > >
> >> > > > >   MyRelationship:
> >> > > > > interfaces:
> >> > > > >   Configure:
> >> > &g

Re: subnet connected to router

2017-08-25 Thread Tal Liron
OK. Now we're getting somewhere. I tried to create a more minimal example
to reproduce this, but without success. My example correctly assigns the
value when I run "aria services show -f". I wonder if it's a bug that was
fixed somewhere or if there's something else going on in your more complex
example.

Could you try with the attached yaml?

On Fri, Aug 25, 2017 at 5:33 PM, DeWayne Filppi  wrote:

> Yeah:
>
>   Arguments:
> process: {} (map)
> Sub-process configuration.
> script_path: 'connect.sh' (string)
> Relative path to the executable file.
> openstack_config: {'username': 'NOT SET'} (map)
>
>
>
> On Fri, Aug 25, 2017 at 3:31 PM, Tal Liron  wrote:
>
> > And you're saying that also in "aria services show -f" you see that it's
> > NOT SET?
> >
> > On Fri, Aug 25, 2017 at 5:29 PM, DeWayne Filppi 
> > wrote:
> >
> > > Never mind, figured it out from the code.  Here's the simplified
> > template:
> > >
> > > tosca_definitions_version: tosca_simple_yaml_1_0
> > >
> > >
> > > imports:
> > >   - aria-1.0
> > >
> > > dsl_definitions:
> > >   openstack_config: &openstack_config
> > > username: dewayne
> > >
> > > data_types:
> > >   config:
> > > properties:
> > >   username:
> > > type: string
> > > default: 'NOT SET'
> > >
> > > relationship_types:
> > >   subnet_connected_to_router:
> > > derived_from: ConnectsTo
> > > interfaces:
> > >   Configure:
> > > add_target:
> > >   implementation: connect.sh
> > >   inputs:
> > >   openstack_config:
> > > type: config
> > > required: true
> > > default: {}
> > >
> > > node_types:
> > >   router:
> > > derived_from: tosca.nodes.Root
> > >
> > >   subnet:
> > > derived_from: tosca.nodes.Root
> > > requirements:
> > >   - router:
> > >   capability: tosca.capabilities.Node
> > >   node: router
> > >   relationship: subnet_connected_to_router
> > >
> > > topology_template:
> > >
> > >   node_templates:
> > >
> > > router:
> > >   type: router
> > >
> > > subnet:
> > >   type: subnet
> > >   requirements:
> > > - router:
> > > node: router
> > > relationship:
> > >   type: subnet_connected_to_router
> > >   interfaces:
> > > Configure:
> > >   add_target:
> > > inputs:
> > >   openstack_config: *openstack_config
> > >
> > >
> > > There is a script in the same directory referred to "connect.sh":
> > >
> > > #!/bin/sh
> > >
> > > ctx logger info "HERE $openstack_config"
> > >
> > >
> > > When "install" is run, the output of the log statement is "NOT SET"
> (the
> > > default).  Even though I have overridden it (should be "dewayne").
> > >
> > >
> > >
> > > On Fri, Aug 25, 2017 at 3:06 PM, DeWayne Filppi 
> > > wrote:
> > >
> > > > For the simplified example I need to provide a shell script or python
> > > > script that dumps the inputs passed to the operation impl.  Not sure
> > how
> > > > that's done in ARIA.  IOW, I don't know how to refer to inputs, and
> > don't
> > > > see any example.
> > > >
> > > > On Fri, Aug 25, 2017 at 2:17 PM, Tal Liron  wrote:
> > > >
> > > >> That you supply where? Override where? What is the error? Sorry
> > DeWayne,
> > > >> very hard for me to follow your explanation. We really need a
> complete
> > > >> simple example here and the exact errors that you get.
> > > >>
> > > >> On Fri, Aug 25, 2017 at 4:06 PM, DeWayne Filppi <
> dewa...@cloudify.co>
> > > >> wrote:
> > > >>
> > > >> > Yes, in my case the relationship between 

Re: subnet connected to router

2017-08-28 Thread Tal Liron
Hm, are you using a git snapshot or a release? This issue was fixed on git
but not released yet.

On Fri, Aug 25, 2017 at 7:20 PM, DeWayne Filppi  wrote:

> I got this:
>
> Validation issues:
>   5: requirement "my_requirement" of node "my_node2_1" has no target node
> template
>
>
> On Fri, Aug 25, 2017 at 3:42 PM, Tal Liron  wrote:
>
> > OK. Now we're getting somewhere. I tried to create a more minimal example
> > to reproduce this, but without success. My example correctly assigns the
> > value when I run "aria services show -f". I wonder if it's a bug that was
> > fixed somewhere or if there's something else going on in your more
> complex
> > example.
> >
> > Could you try with the attached yaml?
> >
> > On Fri, Aug 25, 2017 at 5:33 PM, DeWayne Filppi 
> > wrote:
> >
> >> Yeah:
> >>
> >>   Arguments:
> >> process: {} (map)
> >> Sub-process configuration.
> >> script_path: 'connect.sh' (string)
> >> Relative path to the executable file.
> >> openstack_config: {'username': 'NOT SET'} (map)
> >>
> >>
> >>
> >> On Fri, Aug 25, 2017 at 3:31 PM, Tal Liron  wrote:
> >>
> >> > And you're saying that also in "aria services show -f" you see that
> it's
> >> > NOT SET?
> >> >
> >> > On Fri, Aug 25, 2017 at 5:29 PM, DeWayne Filppi 
> >> > wrote:
> >> >
> >> > > Never mind, figured it out from the code.  Here's the simplified
> >> > template:
> >> > >
> >> > > tosca_definitions_version: tosca_simple_yaml_1_0
> >> > >
> >> > >
> >> > > imports:
> >> > >   - aria-1.0
> >> > >
> >> > > dsl_definitions:
> >> > >   openstack_config: &openstack_config
> >> > > username: dewayne
> >> > >
> >> > > data_types:
> >> > >   config:
> >> > > properties:
> >> > >   username:
> >> > > type: string
> >> > > default: 'NOT SET'
> >> > >
> >> > > relationship_types:
> >> > >   subnet_connected_to_router:
> >> > > derived_from: ConnectsTo
> >> > > interfaces:
> >> > >   Configure:
> >> > > add_target:
> >> > >   implementation: connect.sh
> >> > >   inputs:
> >> > >   openstack_config:
> >> > > type: config
> >> > > required: true
> >> > > default: {}
> >> > >
> >> > > node_types:
> >> > >   router:
> >> > > derived_from: tosca.nodes.Root
> >> > >
> >> > >   subnet:
> >> > > derived_from: tosca.nodes.Root
> >> > > requirements:
> >> > >   - router:
> >> > >   capability: tosca.capabilities.Node
> >> > >   node: router
> >> > >   relationship: subnet_connected_to_router
> >> > >
> >> > > topology_template:
> >> > >
> >> > >   node_templates:
> >> > >
> >> > > router:
> >> > >   type: router
> >> > >
> >> > > subnet:
> >> > >   type: subnet
> >> > >   requirements:
> >> > > - router:
> >> > > node: router
> >> > > relationship:
> >> > >   type: subnet_connected_to_router
> >> > >   interfaces:
> >> > > Configure:
> >> > >   add_target:
> >> > > inputs:
> >> > >   openstack_config: *openstack_config
> >> > >
> >> > >
> >> > > There is a script in the same directory referred to "connect.sh":
> >> > >
> >> > > #!/bin/sh
> >> > >
> >> > > ctx logger info "HERE $openstack_config"
> >> > >
> >> > >
> >> > > When "install" is

Re: Seeing error "Validation issues: unknown parent type "tosca:Root" in WebServer"

2017-08-28 Thread Tal Liron
That is definitely a bug. Could you please try installing apache-ariatosca
in a virtualenv and see if you get the ctx link there? We mostly test in
virtualenvs.

On Fri, Aug 25, 2017 at 9:27 PM, Vishwanath Jayaraman <
vishwana...@hotmail.com> wrote:

> After the installation of apache-ariatosca, I executed the "pip show
> apache-ariatosca" command and below is the console output. Which of the
> below is related to "ctx"? Also, the /usr/local/bin/ is missing the 'ctx'
> binary on the 16.04 ubuntu, however, I do see the 'aria' binary in that
> location.
>
>
> ==Begin Console Output===
>
> Name: apache-ariatosca
>
> Version: 0.1.1
>
> Summary: ARIA
>
> Home-page: http://ariatosca.incubator.apache.org/
>
> Author: ARIA
>
> Author-email: dev@ariatosca.incubator.apache.org
>
> License: Apache License 2.0
>
> Location: /usr/local/lib/python2.7/dist-packages
>
> Requires: psutil, ruamel.yaml, SQLAlchemy, logutils, requests,
> PrettyTable, jsonpickle, click-didyoumean, blinker,
> backports.shutil-get-terminal-size, clint, colorama, wagon, CacheControl,
> retrying, bottle, click, setuptools, networkx, shortuuid, Jinja2
>
> ==End Console output=
>
>
> Vish
>
>
> 
> From: Vishwanath Jayaraman 
> Sent: Friday, August 25, 2017 11:20 AM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Seeing error "Validation issues: unknown parent type
> "tosca:Root" in WebServer"
>
>
> Ran,
>
> On a fresh Ubuntu 16.04.3 LTS install, I followed the below steps
>
> 1  sudo apt-get update -y
> 2  sudo apt install -y python-pip git
> 3  sudo pip install --upgrade pip setuptools
> 4  sudo apt-get install -y python-dev gcc libffi-dev libssl-dev
> 5  sudo pip install apache-ariatosca
> 6  sudo pip install apache-ariatosca[ssh]
> 7 ctx (console output message is "ctx: command not found")
>
> From the above steps, does it look like I could be missing something that
> is not installing the 'ctx' binary.
>
>
> Thanks
>
> Vish
>
>
> 
> From: Ran Ziv 
> Sent: Friday, August 25, 2017 4:43 AM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Seeing error "Validation issues: unknown parent type
> "tosca:Root" in WebServer"
>
> Hi Vishwanath,
>
> Thanks for helping in updating the hello-world example readme. Sorry about
> the lack of clarity there regarding the need to copy the template's
> resources as well.
>
> Regarding the ctx error, the ctx is a binary that should get installed in
> your environment when you install ARIA. It should not be installed
> separately.
> try reinstalling ARIA and running "ctx" from the shell - that should give
> you an error, but one from the "ctx" program, not one that such a program
> was not found.
>
>
> On Fri, Aug 25, 2017 at 4:40 AM, Vishwanath Jayaraman <
> vishwana...@hotmail.com> wrote:
>
> > For a first-time user, it's not apparent in the instructions at
> >
> > http://ariatosca.incubator.apache.org/getting-started/ that they may
> need
> > to copy the entire examples directory or clone the github repo, so I will go
> > ahead and open a JIRA to update the README.rst with those additional
> > details.
> >
> > The only issue pending at this time is "ctx: command not found" error. Once
> I
> > get a response on how that gets installed, I will include that in the
> > dependencies section.
> >
> >
> > Vish
> >
> >
> > 
> > From: DeWayne Filppi 
> > Sent: Thursday, August 24, 2017 1:28 PM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Seeing error "Validation issues: unknown parent type
> > "tosca:Root" in WebServer"
> >
> > I'm running in a cloned github repo, so all is there.  On another front,
> > the Aria port type for Openstack requires that every port have a
> > public/floating ip.
> >
> > On Thu, Aug 24, 2017 at 11:11 AM, Tal Liron  wrote:
> >
> > > You need not just the helloworld YAML file, but also all the scripts it
> > > references. Try copying the whole examples directory to make sure.
> > >
> > > We are planning to eventually display a validation error if the YAML
> file
> > > references artifacts that don't exist, so you wouldn't have to wait
> until
> > > execution to see the error. This would be part of our general work on
> > > 

Re: subnet connected to router

2017-08-28 Thread Tal Liron
OK, so unfortunately you still have the bug. To work around it, you have to
specify the "node" field explicitly for all requirements. Or you can use
git master for now.
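
Concretely, for the minimal example from before, that means giving the
requirement an explicit target in the topology, something like this (a sketch;
my_target and the Compute type are just stand-ins for any node that offers the
tosca.capabilities.Container capability):

topology_template:

  node_templates:

    my_target:
      type: tosca.nodes.Compute

    my_node:
      type: MyNode
      requirements:
        - my_requirement:
            node: my_target    # explicit target, works around the 0.1.1 resolution bug
            relationship:
              interfaces:
                Configure:
                  add_target:
                    inputs:
                      my_input: test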

On Mon, Aug 28, 2017 at 11:58 AM, DeWayne Filppi 
wrote:

> I'm on 0.1.1
>
> On Mon, Aug 28, 2017 at 9:48 AM, Tal Liron  wrote:
>
> > Hm, are you using a git snapshot or a release? This issue was fixed on
> git
> > but not released yet.
> >
> > On Fri, Aug 25, 2017 at 7:20 PM, DeWayne Filppi 
> > wrote:
> >
> > > I got this:
> > >
> > > Validation issues:
> > >   5: requirement "my_requirement" of node "my_node2_1" has no target
> node
> > > template
> > >
> > >
> > > On Fri, Aug 25, 2017 at 3:42 PM, Tal Liron  wrote:
> > >
> > > > OK. Now we're getting somewhere. I tried to create a more minimal
> > example
> > > > to reproduce this, but without success. My example correctly assigns
> > the
> > > > value when I run "aria services show -f". I wonder if it's a bug that
> > was
> > > > fixed somewhere or if there's something else going on in your more
> > > complex
> > > > example.
> > > >
> > > > Could you try with the attached yaml?
> > > >
> > > > On Fri, Aug 25, 2017 at 5:33 PM, DeWayne Filppi  >
> > > > wrote:
> > > >
> > > >> Yeah:
> > > >>
> > > >>   Arguments:
> > > >> process: {} (map)
> > > >> Sub-process configuration.
> > > >> script_path: 'connect.sh' (string)
> > > >> Relative path to the executable file.
> > > >> openstack_config: {'username': 'NOT SET'} (map)
> > > >>
> > > >>
> > > >>
> > > >> On Fri, Aug 25, 2017 at 3:31 PM, Tal Liron  wrote:
> > > >>
> > > >> > And you're saying that also in "aria services show -f" you see
> that
> > > it's
> > > >> > NOT SET?
> > > >> >
> > > >> > On Fri, Aug 25, 2017 at 5:29 PM, DeWayne Filppi <
> > dewa...@cloudify.co>
> > > >> > wrote:
> > > >> >
> > > >> > > Never mind, figured it out from the code.  Here's the simplified
> > > >> > template:
> > > >> > >
> > > >> > > tosca_definitions_version: tosca_simple_yaml_1_0
> > > >> > >
> > > >> > >
> > > >> > > imports:
> > > >> > >   - aria-1.0
> > > >> > >
> > > >> > > dsl_definitions:
> > > >> > >   openstack_config: &openstack_config
> > > >> > > username: dewayne
> > > >> > >
> > > >> > > data_types:
> > > >> > >   config:
> > > >> > > properties:
> > > >> > >   username:
> > > >> > > type: string
> > > >> > > default: 'NOT SET'
> > > >> > >
> > > >> > > relationship_types:
> > > >> > >   subnet_connected_to_router:
> > > >> > > derived_from: ConnectsTo
> > > >> > > interfaces:
> > > >> > >   Configure:
> > > >> > > add_target:
> > > >> > >   implementation: connect.sh
> > > >> > >   inputs:
> > > >> > >   openstack_config:
> > > >> > > type: config
> > > >> > > required: true
> > > >> > > default: {}
> > > >> > >
> > > >> > > node_types:
> > > >> > >   router:
> > > >> > > derived_from: tosca.nodes.Root
> > > >> > >
> > > >> > >   subnet:
> > > >> > > derived_from: tosca.nodes.Root
> > > >> > > requirements:
> > > >> > >   - router:
> > > >> > >   capability: tosca.capabilities.Node
> > > >> > >   node: router
> > > >> > >   relationship: su

Re: Seeing error "Validation issues: unknown parent type "tosca:Root" in WebServer"

2017-08-28 Thread Tal Liron
Name: apache-ariatosca
Version: 0.2.0
Summary: ARIA
Home-page: http://ariatosca.incubator.apache.org/
Author: ARIA
Author-email: dev@ariatosca.incubator.apache.org
License: Apache License 2.0
Location: /home/user/ariatosca
Requires: requests, networkx, retrying, blinker, jsonpickle, ruamel.yaml,
Jinja2, shortuuid, CacheControl, SQLAlchemy, wagon, bottle, setuptools,
click, colorama, PrettyTable, click-didyoumean,
backports.shutil-get-terminal-size, logutils, psutil


On Mon, Aug 28, 2017 at 12:48 PM, Vishwanath Jayaraman <
vishwana...@hotmail.com> wrote:

> Tal,
>
> Do you mind sharing the output of command "pip show apache-ariatosca" from
> your development environment?
>
> Thanks
>
>
> Vish
>
>
> 
> From: Tal Liron 
> Sent: Monday, August 28, 2017 11:49 AM
> To: dev@ariatosca.incubator.apache.org
> Subject: Re: Seeing error "Validation issues: unknown parent type
> "tosca:Root" in WebServer"
>
> That is definitely a bug. Could you please try installing apache-ariatosca
> in a virtualenv and see if you get the ctx link there? We mostly test in
> virtualenvs.
>
> On Fri, Aug 25, 2017 at 9:27 PM, Vishwanath Jayaraman <
> vishwana...@hotmail.com> wrote:
>
> > After the installation of apache-ariatosca, I executed the "pip show
> > apache-ariatosca" command and below is the console output. Which of the
> > below is related to "ctx"? Also, the /usr/local/bin/ is missing the 'ctx'
> > binary on the 16.04 ubuntu, however, I do see the 'aria' binary in that
> > location.
> >
> >
> > ==Begin Console Output===
> >
> > Name: apache-ariatosca
> >
> > Version: 0.1.1
> >
> > Summary: ARIA
> >
> > Home-page: http://ariatosca.incubator.apache.org/
>
>
>
> >
> > Author: ARIA
> >
> > Author-email: dev@ariatosca.incubator.apache.org
> >
> > License: Apache License 2.0
> >
> > Location: /usr/local/lib/python2.7/dist-packages
> >
> > Requires: psutil, ruamel.yaml, SQLAlchemy, logutils, requests,
> > PrettyTable, jsonpickle, click-didyoumean, blinker,
> > backports.shutil-get-terminal-size, clint, colorama, wagon,
> CacheControl,
> > retrying, bottle, click, setuptools, networkx, shortuuid, Jinja2
> >
> > ==End Console output=
> >
> >
> > Vish
> >
> >
> > 
> > From: Vishwanath Jayaraman 
> > Sent: Friday, August 25, 2017 11:20 AM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Seeing error "Validation issues: unknown parent type
> > "tosca:Root" in WebServer"
> >
> >
> > Ran,
> >
> > On a fresh Ubuntu 16.04.3 LTS install, I followed the below steps
> >
> > 1  sudo apt-get update -y
> > 2  sudo apt install -y python-pip git
> > 3  sudo pip install --upgrade pip setuptools
> > 4  sudo apt-get install -y python-dev gcc libffi-dev libssl-dev
> > 5  sudo pip install apache-ariatosca
> > 6  sudo pip install apache-ariatosca[ssh]
> > 7 ctx (console output message is "ctx: command not found")
> >
> > From the above steps, does it look like I could be missing something that
> > is not installing the 'ctx' binary.
> >
> >
> > Thanks
> >
> > Vish
> >
> >
> > 
> > From: Ran Ziv 
> > Sent: Friday, August 25, 2017 4:43 AM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Seeing error "Validation issues: unknown parent type
> > "tosca:Root" in WebServer"
> >
> > Hi Vishwanath,
> >
> > Thanks for helping in updating the hello-world example readme. Sorry
> about
> > the lack of clarity there regarding the need to copy the template's
> > resources as well.
> >
> > Regarding the ctx error, the ctx is a binary that should get installed in
> > your environment when you install ARIA. It should not be installed
> > separately.
> > try reinstalling ARIA and running "ctx" from the shell - that should give
> > you an error, but one from the "ctx" program, not one that such a program
> > was not found

Re: subnet connected to router

2017-08-28 Thread Tal Liron
I'm again confused, DeWayne. Is the error with the example I provided here?
Please let's start with this minimal example to make sure we're on the same
page. If the example validates for you, we can try adding features to try
to see what replicates the bug.

On Mon, Aug 28, 2017 at 1:37 PM, DeWayne Filppi  wrote:

> Yeah, except my original example I sent *did* specify the node.
>
> On Mon, Aug 28, 2017 at 10:06 AM, Tal Liron  wrote:
>
> > OK, so unfortunately you still have the bug. To work around it, you have to
> > specify the "node" field explicitly for all requirements. Or you can use
> > git master for now.
> >
> > On Mon, Aug 28, 2017 at 11:58 AM, DeWayne Filppi 
> > wrote:
> >
> > > I'm on 0.1.1
> > >
> > > On Mon, Aug 28, 2017 at 9:48 AM, Tal Liron  wrote:
> > >
> > > > Hm, are you using a git snapshot or a release? This issue was fixed
> on
> > > git
> > > > but not released yet.
> > > >
> > > > On Fri, Aug 25, 2017 at 7:20 PM, DeWayne Filppi  >
> > > > wrote:
> > > >
> > > > > I got this:
> > > > >
> > > > > Validation issues:
> > > > >   5: requirement "my_requirement" of node "my_node2_1" has no
> target
> > > node
> > > > > template
> > > > >
> > > > >
> > > > > On Fri, Aug 25, 2017 at 3:42 PM, Tal Liron 
> wrote:
> > > > >
> > > > > > OK. Now we're getting somewhere. I tried to create a more minimal
> > > > example
> > > > > > to reproduce this, but without success. My example correctly
> > assigns
> > > > the
> > > > > > value when I run "aria services show -f". I wonder if it's a bug
> > that
> > > > was
> > > > > > fixed somewhere or if there's something else going on in your
> more
> > > > > complex
> > > > > > example.
> > > > > >
> > > > > > Could you try with the attached yaml?
> > > > > >
> > > > > > On Fri, Aug 25, 2017 at 5:33 PM, DeWayne Filppi <
> > dewa...@cloudify.co
> > > >
> > > > > > wrote:
> > > > > >
> > > > > >> Yeah:
> > > > > >>
> > > > > >>   Arguments:
> > > > > >> process: {} (map)
> > > > > >> Sub-process configuration.
> > > > > >> script_path: 'connect.sh' (string)
> > > > > >> Relative path to the executable file.
> > > > > >> openstack_config: {'username': 'NOT SET'} (map)
> > > > > >>
> > > > > >>
> > > > > >>
> > > > > >> On Fri, Aug 25, 2017 at 3:31 PM, Tal Liron 
> > wrote:
> > > > > >>
> > > > > >> > And you're saying that also in "aria services show -f" you see
> > > that
> > > > > it's
> > > > > >> > NOT SET?
> > > > > >> >
> > > > > >> > On Fri, Aug 25, 2017 at 5:29 PM, DeWayne Filppi <
> > > > dewa...@cloudify.co>
> > > > > >> > wrote:
> > > > > >> >
> > > > > >> > > Never mind, figured it out from the code.  Here's the
> > simplified
> > > > > >> > template:
> > > > > >> > >
> > > > > >> > > tosca_definitions_version: tosca_simple_yaml_1_0
> > > > > >> > >
> > > > > >> > >
> > > > > >> > > imports:
> > > > > >> > >   - aria-1.0
> > > > > >> > >
> > > > > >> > > dsl_definitions:
> > > > > >> > >   openstack_config: &openstack_config
> > > > > >> > > username: dewayne
> > > > > >> > >
> > > > > >> > > data_types:
> > > > > >> > >   config:
> > > > > >> > > properties:
> > > > > >> > >   username:
> > > > > >> > > type: string

Re: Seeing error "Validation issues: unknown parent type "tosca:Root" in WebServer"

2017-08-28 Thread Tal Liron
Vish, I would very much appreciate if you could verify if ctx is available
in these cases:

1) 0.1.1: system install
2) 0.1.1: virtualenv
3) git master: system install
4) git master: virtualenv

Also, what OS are you using, and is there anything special about your
install?


On Mon, Aug 28, 2017 at 1:42 PM, DeWayne Filppi  wrote:

> I have 0.1.1 and ctx is there.
>
> On Mon, Aug 28, 2017 at 11:34 AM, Vishwanath Jayaraman <
> vishwana...@hotmail.com> wrote:
>
> > Tal,
> >
> >  Appreciate the prompt response.
> >
> > Looks like the apache-ariatosca version that gets installed when
> following
> > instructions at http://ariatosca.incubator.apache.org/getting-started/
> > is 0.1.1 (in my case) and in your output, the version is 0.2.0.
> >
> > So, like you mentioned in your earlier email, there could be a bug in
> > 0.1.1.
> >
> > Also, I do not see in the "Requires:" section of the output anything
> > related to "ctx", I am guessing ctx is installed when apache-ariatosca is
> > installed.
> >
> >
> > If one other person can confirm that "ctx" is not installed when
> > apache-ariatosca version 0.1.1 is installed, do you think we should open
> a
> > bug?
> >
> > Thoughts, suggestions?
> >
> > Thanks
> >
> > Vish
> >
> >
> > 
> > From: Tal Liron 
> > Sent: Monday, August 28, 2017 1:11 PM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Seeing error "Validation issues: unknown parent type
> > "tosca:Root" in WebServer"
> >
> > Name: apache-ariatosca
> > Version: 0.2.0
> > Summary: ARIA
> > Home-page: http://ariatosca.incubator.apache.org/
> >
> >
> >
> > Author: ARIA
> > Author-email: dev@ariatosca.incubator.apache.org
> > License: Apache License 2.0
> > Location: /home/user/ariatosca
> > Requires: requests, networkx, retrying, blinker, jsonpickle, ruamel.yaml,
> > Jinja2, shortuuid, CacheControl, SQLAlchemy, wagon, bottle, setuptools,
> > click, colorama, PrettyTable, click-didyoumean,
> > backports.shutil-get-terminal-size, logutils, psutil
> >
> >
> > On Mon, Aug 28, 2017 at 12:48 PM, Vishwanath Jayaraman <
> > vishwana...@hotmail.com> wrote:
> >
> > > Tal,
> > >
> > > Do you mind sharing the output of command "pip show apache-ariatosca"
> > from
> > > your development environment?
> > >
> > > Thanks
> > >
> > >
> > > Vish
> > >
> > >
> > > 
> > > From: Tal Liron 
> > > Sent: Monday, August 28, 2017 11:49 AM
> > > To: dev@ariatosca.incubator.apache.org
> > > Subject: Re: Seeing error "Validation issues: unknown parent type
> > > "tosca:Root" in WebServer"
> > >
> > > That is definitely a bug. Could you please try installing
> > apache-ariatosca
> > > in a virtualenv and see if you get the ctx link there? We mostly test
> in
> > > virtualenvs.
> > >
> > > On Fri, Aug 25, 2017 at 9:27 PM, Vishwanath Jayaraman <
> > > vishwana...@hotmail.com> wrote:
> > >
> > > > After the installation of apache-ariatosca, I executed the "pip show
> > > > apache-ariatosca" command and below is the console output. Which of
> the
> > > > below is related to "ctx"? Also, the /usr/local/bin/ is missing the
> > 'ctx'
> > > > binary on the 16.04 ubuntu, however, I do see the 'aria' binary in
> that
> > > > location.
> > > >
> > > >
> > > > ==Begin Console Output===
> > > >
> > > > Name: apache-ariatosca
> > > >
> > > > Version: 0.1.1
> > > >
> > > > Summary: ARIA
> > > >
> > > > Home-page: http://ariatosca.incubator.apache.org/
> >

Re: subnet connected to router

2017-08-28 Thread Tal Liron
DeWayne, could you please use the example I provided? I prefer to start with
something without Openstack or any other dependencies so we can isolate the
bug precisely.

On Mon, Aug 28, 2017 at 3:53 PM, DeWayne Filppi  wrote:

> OK.  Here's the example with no dependencies.  Two nodes.
>
> tosca_definitions_version: tosca_simple_yaml_1_0
>
>
> imports:
>   -
> https://raw.githubusercontent.com/cloudify-cosmo/aria-
> extension-cloudify/master/plugins/openstack/plugin.yaml
>   - aria-1.0
>
> dsl_definitions:
>   openstack_config: &openstack_config
> username: dewayne
>
> data_types:
>   config:
> properties:
>   username:
> type: string
> default: 'NOT SET'
>
> relationship_types:
>   subnet_connected_to_router:
> derived_from: ConnectsTo
> interfaces:
>   Configure:
> add_target:
>   implementation: connect.sh
>   inputs:
>   openstack_config:
> type: config
> required: true
> default: {}
>
> node_types:
>   router:
> derived_from: tosca.nodes.Root
>
>   subnet:
> derived_from: tosca.nodes.Root
> requirements:
>   - router:
>   capability: tosca.capabilities.Node
>   relationship: subnet_connected_to_router
>
> topology_template:
>
>   node_templates:
>
> router:
>   type: router
>
> subnet:
>   type: subnet
>   requirements:
> - router:
> node: router
> relationship:
>   type: subnet_connected_to_router
>   interfaces:
> Configure:
>   add_target:
> inputs:
>   openstack_config: *openstack_config
>
>
> On Mon, Aug 28, 2017 at 1:14 PM, Tal Liron  wrote:
>
> > I'm again confused, DeWayne. Is the error with the example I provided
> here?
> > Please let's start with this minimal example to make sure we're on the
> same
> > page. If the example validates for you, we can try adding features to try
> > to see what replicates the bug.
> >
> > On Mon, Aug 28, 2017 at 1:37 PM, DeWayne Filppi 
> > wrote:
> >
> > > Yeah, except my original example I sent *did* specify the node.
> > >
> > > On Mon, Aug 28, 2017 at 10:06 AM, Tal Liron  wrote:
> > >
> > > > OK, so unfortunately you still have the bug. To work around it, you have
> to
> > > > specify the "node" field explicitly for all requirements. Or you can
> > use
> > > > git master for now.
> > > >
> > > > On Mon, Aug 28, 2017 at 11:58 AM, DeWayne Filppi <
> dewa...@cloudify.co>
> > > > wrote:
> > > >
> > > > > I'm on 0.1.1
> > > > >
> > > > > On Mon, Aug 28, 2017 at 9:48 AM, Tal Liron 
> wrote:
> > > > >
> > > > > > Hm, are you using a git snapshot or a release? This issue was
> fixed
> > > on
> > > > > git
> > > > > > but not released yet.
> > > > > >
> > > > > > On Fri, Aug 25, 2017 at 7:20 PM, DeWayne Filppi <
> > dewa...@cloudify.co
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > I got this:
> > > > > > >
> > > > > > > Validation issues:
> > > > > > >   5: requirement "my_requirement" of node "my_node2_1" has no
> > > target
> > > > > node
> > > > > > > template
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Aug 25, 2017 at 3:42 PM, Tal Liron 
> > > wrote:
> > > > > > >
> > > > > > > > OK. Now we're getting somewhere. I tried to create a more
> > minimal
> > > > > > example
> > > > > > > > to reproduce this, but without success. My example correctly
> > > > assigns
> > > > > > the
> > > > > > > > value when I run "aria services show -f". I wonder if it's a
> > bug
> > > > that
> > > > > > was
> > > > > > > > fixed somewhere or if there's something else going on in your
> > > more
> > > > > > > complex
> > > > > > > > example.
> > > > > > > >
> > > 

Re: subnet connected to router

2017-08-28 Thread Tal Liron
DeWayne, please slow down. We need to be on the same page here. At the very
least we need to use the same versions of ARIA.

Why couldn't you use my example?

On Mon, Aug 28, 2017 at 5:38 PM, DeWayne Filppi  wrote:

> Note that creating a subtype of Subnet had no effect.   Tried to force the
> settings into a common subtype as a workaround, but had no luck.
>
> On Mon, Aug 28, 2017 at 2:31 PM, DeWayne Filppi 
> wrote:
>
> > I couldn't use your example.  The one I just sent illustrates the problem
> > and has no openstack dependency.  I just forgot to delete the import
> > statement.
> >
> > On Mon, Aug 28, 2017 at 2:24 PM, Tal Liron  wrote:
> >
> >> DeWayne, could you please use the example I provided? I prefer to start with
> >> something without Openstack or any other dependencies so we can isolate
> >> the
> >> bug precisely.
> >>
> >> On Mon, Aug 28, 2017 at 3:53 PM, DeWayne Filppi 
> >> wrote:
> >>
> >> > OK.  Here's the example with no dependencies.  Two nodes.
> >> >
> >> > tosca_definitions_version: tosca_simple_yaml_1_0
> >> >
> >> >
> >> > imports:
> >> >   -
> >> > https://raw.githubusercontent.com/cloudify-cosmo/aria-
> >> > extension-cloudify/master/plugins/openstack/plugin.yaml
> >> >   - aria-1.0
> >> >
> >> > dsl_definitions:
> >> >   openstack_config: &openstack_config
> >> > username: dewayne
> >> >
> >> > data_types:
> >> >   config:
> >> > properties:
> >> >   username:
> >> > type: string
> >> > default: 'NOT SET'
> >> >
> >> > relationship_types:
> >> >   subnet_connected_to_router:
> >> > derived_from: ConnectsTo
> >> > interfaces:
> >> >   Configure:
> >> > add_target:
> >> >   implementation: connect.sh
> >> >   inputs:
> >> >   openstack_config:
> >> > type: config
> >> > required: true
> >> > default: {}
> >> >
> >> > node_types:
> >> >   router:
> >> > derived_from: tosca.nodes.Root
> >> >
> >> >   subnet:
> >> > derived_from: tosca.nodes.Root
> >> > requirements:
> >> >   - router:
> >> >   capability: tosca.capabilities.Node
> >> >   relationship: subnet_connected_to_router
> >> >
> >> > topology_template:
> >> >
> >> >   node_templates:
> >> >
> >> > router:
> >> >   type: router
> >> >
> >> > subnet:
> >> >   type: subnet
> >> >   requirements:
> >> > - router:
> >> > node: router
> >> > relationship:
> >> >   type: subnet_connected_to_router
> >> >   interfaces:
> >> > Configure:
> >> >   add_target:
> >> > inputs:
> >> >   openstack_config: *openstack_config
> >> >
> >> >
> >> > On Mon, Aug 28, 2017 at 1:14 PM, Tal Liron  wrote:
> >> >
> >> > > I'm again confused, DeWayne. Is the error with the example I
> provided
> >> > here?
> >> > > Please let's start with this minimal example to make sure we're on
> the
> >> > same
> >> > > page. If the example validates for you, we can try adding features
> to
> >> try
> >> > > to see what replicates the bug.
> >> > >
> >> > > On Mon, Aug 28, 2017 at 1:37 PM, DeWayne Filppi <
> dewa...@cloudify.co>
> >> > > wrote:
> >> > >
> >> > > > Yeah, except my original example I sent *did* specify the node.
> >> > > >
> >> > > > On Mon, Aug 28, 2017 at 10:06 AM, Tal Liron 
> >> wrote:
> >> > > >
> >> > > > > OK, so unfortunately you still have the bug. To workaround, you
> >> have
> >> > to
> >> > > > > specify the "node" field explicitly for all requirements. Or you
> >> can
> >> > > use

Re: subnet connected to router

2017-08-28 Thread Tal Liron
As I said, if you're not using git master, you need to add a "node" field
to the requirement to solve this error.
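
For example, a rough sketch of that workaround, using the node and requirement names from the validation error quoted below (the target "my_node1_1" and the node type are placeholders, point them at whatever your template actually uses):

  my_node2_1:
    type: my_node_type            # placeholder for whatever type declares "my_requirement"
    requirements:
      - my_requirement:
          node: my_node1_1        # explicit target node template, this is the workaround

On current git master the explicit "node" shouldn't be necessary, since the matching issue is already fixed there.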

If you must use the Openstack plugin, I can't help you much because I'm not
very familiar with it. Perhaps someone else on the list could assist?

But I have a feeling this is a higher-level issue, we just need to be
patient and try to isolate it.

On Mon, Aug 28, 2017 at 5:49 PM, DeWayne Filppi  wrote:

> Sent this before:
>
> I got this:
>
> Validation issues:
>   5: requirement "my_requirement" of node "my_node2_1" has no target node
> template
>
>
> I'm rushing because I need a workaround at least very soon.
>
> On Mon, Aug 28, 2017 at 3:46 PM, Tal Liron  wrote:
>
> > DeWayne, please slow down. We need to be on the same page here. At the
> very
> > least we need to use the same versions of ARIA.
> >
> > Why couldn't you use my example?
> >
> > On Mon, Aug 28, 2017 at 5:38 PM, DeWayne Filppi 
> > wrote:
> >
> > > Note that creating a subtype of Subnet had no effect.   Tried to force
> > the
> > > settings into a common subtype as a workaround, but had no luck.
> > >
> > > On Mon, Aug 28, 2017 at 2:31 PM, DeWayne Filppi 
> > > wrote:
> > >
> > > > I couldn't use your example.  The one I just sent illustrates the
> > problem
> > > > and has no openstack dependency.  I just forgot to delete the import
> > > > statement.
> > > >
> > > > On Mon, Aug 28, 2017 at 2:24 PM, Tal Liron  wrote:
> > > >
> > > >> DeWayne, could please use the example I provided? I prefer to start
> > with
> > > >> something without Openstack or any other dependencies so we can
> > isolate
> > > >> the
> > > >> bug precisely.
> > > >>
> > > >> On Mon, Aug 28, 2017 at 3:53 PM, DeWayne Filppi <
> dewa...@cloudify.co>
> > > >> wrote:
> > > >>
> > > >> > OK.  Here's the example with no dependencies.  Two nodes.
> > > >> >
> > > >> > tosca_definitions_version: tosca_simple_yaml_1_0
> > > >> >
> > > >> >
> > > >> > imports:
> > > >> >   -
> > > >> > https://raw.githubusercontent.com/cloudify-cosmo/aria-
> > > >> > extension-cloudify/master/plugins/openstack/plugin.yaml
> > > >> >   - aria-1.0
> > > >> >
> > > >> > dsl_definitions:
> > > >> >   openstack_config: &openstack_config
> > > >> > username: dewayne
> > > >> >
> > > >> > data_types:
> > > >> >   config:
> > > >> > properties:
> > > >> >   username:
> > > >> > type: string
> > > >> > default: 'NOT SET'
> > > >> >
> > > >> > relationship_types:
> > > >> >   subnet_connected_to_router:
> > > >> > derived_from: ConnectsTo
> > > >> > interfaces:
> > > >> >   Configure:
> > > >> > add_target:
> > > >> >   implementation: connect.sh
> > > >> >   inputs:
> > > >> >   openstack_config:
> > > >> > type: config
> > > >> > required: true
> > > >> >     default: {}
> > > >> >
> > > >> > node_types:
> > > >> >   router:
> > > >> > derived_from: tosca.nodes.Root
> > > >> >
> > > >> >   subnet:
> > > >> > derived_from: tosca.nodes.Root
> > > >> > requirements:
> > > >> >   - router:
> > > >> >   capability: tosca.capabilities.Node
> > > >> >   relationship: subnet_connected_to_router
> > > >> >
> > > >> > topology_template:
> > > >> >
> > > >> >   node_templates:
> > > >> >
> > > >> > router:
> > > >> >   type: router
> > > >> >
> > > >> > subnet:
> > > >> >   type: subnet
> > > >> >   requirements:
> > > >> > - router:
> > > >> >  

Re: subnet connected to router

2017-08-28 Thread Tal Liron
OK, let's please go back to my simpler example on master. If that works for
you, we're on the same page and can try to reproduce the bug there.

On Mon, Aug 28, 2017 at 7:23 PM, DeWayne Filppi  wrote:

> Hopefully.  Has nothing to do with the Openstack plugin.  My example had no
> reference to it.  Just switched to master branch, and it had no effect.
> The subnet node has a direct "node:" reference to the router node.
>
> On Mon, Aug 28, 2017 at 3:52 PM, Tal Liron  wrote:
>
> > As I said, if you're not using git master, you need to add a "node" field
> > to the requirement to solve this error.
> >
> > If you must use the Openstack plugin, I can't help you much because I'm
> not
> > very familiar with it. Perhaps someone else on the list could assist?
> >
> > But I have a feeling this is a higher-level issue, we just need to be
> > patient and try to isolate it.
> >
> > On Mon, Aug 28, 2017 at 5:49 PM, DeWayne Filppi 
> > wrote:
> >
> > > Sent this before:
> > >
> > > I got this:
> > >
> > > Validation issues:
> > >   5: requirement "my_requirement" of node "my_node2_1" has no target
> node
> > > template
> > >
> > >
> > > I'm rushing because I need a workaround at least very soon.
> > >
> > > On Mon, Aug 28, 2017 at 3:46 PM, Tal Liron  wrote:
> > >
> > > > DeWayne, please slow down. We need to be on the same page here. At
> the
> > > very
> > > > least we need to use the same versions of ARIA.
> > > >
> > > > Why couldn't you use my example?
> > > >
> > > > On Mon, Aug 28, 2017 at 5:38 PM, DeWayne Filppi  >
> > > > wrote:
> > > >
> > > > > Note that creating a subtype of Subnet had no effect.   Tried to
> > force
> > > > the
> > > > > settings into a common subtype as a workaround, but had no luck.
> > > > >
> > > > > On Mon, Aug 28, 2017 at 2:31 PM, DeWayne Filppi <
> dewa...@cloudify.co
> > >
> > > > > wrote:
> > > > >
> > > > > > I couldn't use your example.  The one I just sent illustrates the
> > > > problem
> > > > > > and has no openstack dependency.  I just forgot to delete the
> > import
> > > > > > statement.
> > > > > >
> > > > > > On Mon, Aug 28, 2017 at 2:24 PM, Tal Liron 
> > wrote:
> > > > > >
> > > > > >> DeWayne, could please use the example I provided? I prefer to
> > start
> > > > with
> > > > > >> something without Openstack or any other dependencies so we can
> > > > isolate
> > > > > >> the
> > > > > >> bug precisely.
> > > > > >>
> > > > > >> On Mon, Aug 28, 2017 at 3:53 PM, DeWayne Filppi <
> > > dewa...@cloudify.co>
> > > > > >> wrote:
> > > > > >>
> > > > > >> > OK.  Here's the example with no dependencies.  Two nodes.
> > > > > >> >
> > > > > >> > tosca_definitions_version: tosca_simple_yaml_1_0
> > > > > >> >
> > > > > >> >
> > > > > >> > imports:
> > > > > >> >   -
> > > > > >> > https://raw.githubusercontent.com/cloudify-cosmo/aria-
> > > > > >> > extension-cloudify/master/plugins/openstack/plugin.yaml
> > > > > >> >   - aria-1.0
> > > > > >> >
> > > > > >> > dsl_definitions:
> > > > > >> >   openstack_config: &openstack_config
> > > > > >> > username: dewayne
> > > > > >> >
> > > > > >> > data_types:
> > > > > >> >   config:
> > > > > >> > properties:
> > > > > >> >   username:
> > > > > >> > type: string
> > > > > >> > default: 'NOT SET'
> > > > > >> >
> > > > > >> > relationship_types:
> > > > > >> >   subnet_connected_to_router:
> > > > > >> > derived_from: ConnectsTo
> > > > > >> > interfaces:
> > > > > >> >   Configure:

Re: subnet connected to router

2017-08-28 Thread Tal Liron
Great! So now let's think about what makes your full Openstack example
different... I'm off for the day and will continue looking tomorrow.

On Mon, Aug 28, 2017 at 7:43 PM, DeWayne Filppi  wrote:

> It validated and seems correct via show -f.
>
>
> On Mon, Aug 28, 2017 at 5:31 PM, Tal Liron  wrote:
>
> > OK, let's please go back to my simpler example on master. If that works
> for
> > you, we're on the same page and can try to reproduce the bug there.
> >
> > On Mon, Aug 28, 2017 at 7:23 PM, DeWayne Filppi 
> > wrote:
> >
> > > Hopefully.  Has nothing to do with the Openstack plugin.  My example
> had
> > no
> > > reference to it.  Just switched to master branch, and it had no effect.
> > > The subnet node has a direct "node:" reference to the router node.
> > >
> > > On Mon, Aug 28, 2017 at 3:52 PM, Tal Liron  wrote:
> > >
> > > > As I said, if you're not using git master, you need to add a "node"
> > field
> > > > to the requirement to solve this error.
> > > >
> > > > If you must use the Openstack plugin, I can't help you much because
> I'm
> > > not
> > > > very familiar with it. Perhaps someone else on the list could assist?
> > > >
> > > > But I have a feeling this is a higher-level issue, we just need to be
> > > > patient and try to isolate it.
> > > >
> > > > On Mon, Aug 28, 2017 at 5:49 PM, DeWayne Filppi  >
> > > > wrote:
> > > >
> > > > > Sent this before:
> > > > >
> > > > > I got this:
> > > > >
> > > > > Validation issues:
> > > > >   5: requirement "my_requirement" of node "my_node2_1" has no
> target
> > > node
> > > > > template
> > > > >
> > > > >
> > > > > I'm rushing because I need a workaround at least very soon.
> > > > >
> > > > > On Mon, Aug 28, 2017 at 3:46 PM, Tal Liron 
> wrote:
> > > > >
> > > > > > DeWayne, please slow down. We need to be on the same page here.
> At
> > > the
> > > > > very
> > > > > > least we need to use the same versions of ARIA.
> > > > > >
> > > > > > Why couldn't you use my example?
> > > > > >
> > > > > > On Mon, Aug 28, 2017 at 5:38 PM, DeWayne Filppi <
> > dewa...@cloudify.co
> > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Note that creating a subtype of Subnet had no effect.   Tried
> to
> > > > force
> > > > > > the
> > > > > > > settings into a common subtype as a workaround, but had no
> luck.
> > > > > > >
> > > > > > > On Mon, Aug 28, 2017 at 2:31 PM, DeWayne Filppi <
> > > dewa...@cloudify.co
> > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > I couldn't use your example.  The one I just sent illustrates
> > the
> > > > > > problem
> > > > > > > > and has no openstack dependency.  I just forgot to delete the
> > > > import
> > > > > > > > statement.
> > > > > > > >
> > > > > > > > On Mon, Aug 28, 2017 at 2:24 PM, Tal Liron 
> > > > wrote:
> > > > > > > >
> > > > > > > >> DeWayne, could please use the example I provided? I prefer
> to
> > > > start
> > > > > > with
> > > > > > > >> something without Openstack or any other dependencies so we
> > can
> > > > > > isolate
> > > > > > > >> the
> > > > > > > >> bug precisely.
> > > > > > > >>
> > > > > > > >> On Mon, Aug 28, 2017 at 3:53 PM, DeWayne Filppi <
> > > > > dewa...@cloudify.co>
> > > > > > > >> wrote:
> > > > > > > >>
> > > > > > > >> > OK.  Here's the example with no dependencies.  Two nodes.
> > > > > > > >> >
> > > > > > > >> > tosca_definitions_version: tosca_simple_yaml_1_0
> > > > > > > >> >

Re: subnet connected to router

2017-08-29 Thread Tal Liron
OK! DeWayne provided me with the complete service template offline and I
managed to reproduce the problem and also create a more minimal way to
reproduce it.

To follow it, see this JIRA:

https://issues.apache.org/jira/browse/ARIA-356
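
For anyone landing on this thread later: the pattern under discussion boils down to assigning an interface input on a relationship inside a node template requirement, roughly like this trimmed fragment of DeWayne's template from earlier in the thread (the exact minimal reproduction is in the JIRA above):

  subnet:
    type: subnet
    requirements:
      - router:
          node: router
          relationship:
            type: subnet_connected_to_router
            interfaces:
              Configure:
                add_target:
                  inputs:
                    openstack_config: *openstack_config   # anchor defined in dsl_definitions

The symptom, as far as the thread shows, was that the input value did not end up assigned as expected in the full Openstack-based template, even though the minimal two-node example behaved correctly.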


Re: Service Composition / Substitution Mapping

2017-08-31 Thread Tal Liron
> > From: Ran Ziv [mailto:r...@cloudify.co]
> > Sent: Wednesday, August 16, 2017 6:19 PM
> > To: dev@ariatosca.incubator.apache.org
> > Subject: Re: Service Composition / Substitution Mapping
> >
> > I agree, especially when the benefit of being able to use an existing
> > service - yet only one which hasn't been deployed via a workflow -
> doesn't
> > seem all that interesting IMO.
> >
> > Another concern I could add to the ones you've mentioned is the service's
> > inputs - the substituting template's inputs should be received via the
> > properties of the abstract node in the top level service template. If the
> > service already exists, these inputs would not be passed as expected.
> >
> > Ran
> >
> > On Wed, Aug 16, 2017 at 3:25 PM, D Jayachandran <
> > d.jayachand...@ericsson.com
> > > wrote:
> >
> > > Hi Ran,
> > >
> > > When Tal mentioned "substituting service", I thought it was about
> > > services which didn't have any associated executions/workflows triggered.
> > > I'm also in favor of "substituting service templates" rather than
> > > "substituting service".
> > > With the "substituting service" approach (when the service is not
> > > instantiated), I see some open points:
> > > - In a multi-user scenario, what will happen when a service is
> > > composed using the substituting service and at the same time a
> > > workflow is triggered for the substituting service?
> > > - Is it okay to delete (dissolve) the substituting service
> > > after it is used to create the composed service?
> > >
> > > To start with, it might be a good idea to only have the "substituting
> > > service templates" approach.
> > >
> > > Regards,
> > > DJ
> > > -Original Message-
> > > From: Ran Ziv [mailto:r...@cloudify.co]
> > > Sent: Wednesday, August 16, 2017 4:29 PM
> > > To: dev@ariatosca.incubator.apache.org
> > > Subject: Re: Service Composition / Substitution Mapping
> > >
> > > I'd say right now we're looking at "static service composition" which
> > > is only about "substituting service templates", not "substituting
> > > service". If a service is already running, it will not be used.
> > >
> > > I think what Tal meant was that each service template - whether the
> > > top level one or one of the substituting templates - needs to resolve
> > > its inner reqs&caps internally first, and then resolve substitution
> > > reqs&caps across service templates.
> > >
> > >
> > > On Wed, Aug 16, 2017 at 12:00 PM, D Jayachandran <
> > > d.jayachand...@ericsson.com> wrote:
> > >
> > > > Hi Tal,
> > > >
> > > > Thanks for organizing the points.
> > > > So if I understand correctly we are looking only at "Static service
> > > > composition" which includes "substituting service template" and
> > > > "substituting service".
> > > >
> > > > As you said, with the "substituting service template" approach, we will
> > > > have all the nodes aggregated from other service templates, and a
> > > > single workflow would be triggered to perform life-cycle operations
> > > > on all the nodes.
> > > > I'm not sure why the workflows need to be "boundary aware" for nodes
> > > > being substituted. I see the nodes are already part of the composed
> > > > service. Could you please help me understand this?
> > > >
> > > >
> > > > Regards,
> > > > DJ
> > > > -Original Message-
> > > > From: Tal Liron [mailto:t...@cloudify.co]
> > > > Sent: Saturday, August 12, 2017 4:52 AM
> > > > To: dev@ariatosca.incubator.apache.org
> > > > Subject: Re: Service Composition / Substitution Mapping
> > > >
> > > > You are correct -- to participate in this "multi-VIM" scenario, the
> > > > Openstack plugin would have to know how to translate the TOSCA
> > > > properties to a flavor ID. This could all be done in 100% TOSCA via
> > > > policies (say, an aria.Openstack).
> > > >
> > > > Doing this automatically might not be a good idea, or even necessary.
> > > > Worst case is you get a validation error 
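
For readers following the substitution-mapping discussion quoted above, here is a minimal sketch of how a substituting service template exposes its boundary in TOSCA Simple Profile 1.0. All type and node names are illustrative only (they do not come from this thread), and how the abstract node's properties feed the substituting template's inputs is exactly the open question raised above:

  # Top-level service template: "db" is an abstract node to be substituted
  topology_template:
    node_templates:
      app:
        type: my.nodes.App               # illustrative type
        requirements:
          - database: db
      db:
        type: my.nodes.Database          # abstract, no implementation here
        properties:
          port: 3306

  # Substituting service template: declares what it can stand in for
  topology_template:
    inputs:
      port:
        type: integer                    # expected to come from the abstract node's "port" property
    node_templates:
      mariadb:
        type: my.nodes.MariaDB           # illustrative concrete type
        properties:
          port: { get_input: port }
    substitution_mappings:
      node_type: my.nodes.Database
      capabilities:
        database_endpoint: [ mariadb, database_endpoint ]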
