[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-27 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153344804
  
--- Diff: aria/parser/loading/uri.py ---
@@ -44,6 +45,7 @@ def __init__(self, context, location, origin_location=None):
 self.location = location
 self._prefixes = StrictList(value_class=basestring)
 self._loader = None
+self._canonical_location = None
--- End diff --

I added documentation in `loader.py` (the base class).


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-27 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153343795
  
--- Diff: aria/parser/reading/reader.py ---
@@ -28,16 +28,9 @@ def __init__(self, context, location, loader):
 
 def load(self):
 with OpenClose(self.loader) as loader:
--- End diff --

Because `__enter__` and `__exit__` are magic functions, it seems very awkward to me to have users call them directly -- they are not supposed to be called directly. However, in this case, I do think open and close should be part of the class's protocol.

As for using a mixin ... it seems far more awkward to me than using a helper class. I still suggest leaving this as is.

There's no new code here, either -- it's what we've had for a long time.
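
For reference, a minimal sketch of the kind of `OpenClose` helper being discussed (assuming the wrapped object exposes `open()` and `close()`; not necessarily the exact class in the codebase):

```python
class OpenClose(object):
    """Wraps an object so its open/close lifecycle can be driven by a ``with`` statement."""

    def __init__(self, wrapped):
        self.wrapped = wrapped

    def __enter__(self):
        self.wrapped.open()
        return self.wrapped

    def __exit__(self, the_type, value, traceback):
        self.wrapped.close()
        return False  # do not suppress exceptions


# usage, as in the quoted diff:
#   with OpenClose(self.loader) as loader:
#       ...
```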


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-27 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153343290
  
--- Diff: aria/utils/threading.py ---
@@ -161,11 +242,7 @@ def close(self):
 self._workers = None
 
 def drain(self):
-"""
-Blocks until all current tasks finish execution, but leaves the 
worker threads alive.
-"""
-
-self._tasks.join()  # oddly, the API does not support a timeout 
parameter
+self._tasks.join() # oddly, the API does not support a timeout 
parameter
--- End diff --

Sure, but for better or for worse it's what we have right now. I did enough sweeping fixes in this PR to get people mad at me; I suggest postponing this one to a different PR.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153016106
  
--- Diff: aria/modeling/service_common.py ---
@@ -587,12 +588,11 @@ class MetadataBase(TemplateModelMixin):
 :ivar name: name
 :vartype name: basestring
 :ivar value: value
-:vartype value: basestring
 """
 
 __tablename__ = 'metadata'
 
-value = Column(Text)
+value = Column(PickleType)
--- End diff --

Why not? :)


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014666
  
--- Diff: aria/utils/threading.py ---
@@ -161,11 +242,7 @@ def close(self):
 self._workers = None
 
 def drain(self):
-"""
-Blocks until all current tasks finish execution, but leaves the 
worker threads alive.
-"""
-
-self._tasks.join()  # oddly, the API does not support a timeout 
parameter
+self._tasks.join() # oddly, the API does not support a timeout 
parameter
--- End diff --

Hm, I always hated this, and a lot of projects ignore it. Right now we have a mix of styles.

Let's open a separate JIRA for this and maybe change all our comments at once?


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014611
  
--- Diff: aria/parser/reading/yaml.py ---
@@ -82,7 +84,11 @@ def read(self):
# see issue here:
# https://bitbucket.org/ruamel/yaml/issues/61/roundtriploader-causes-exceptions-with
#yaml_loader = yaml.RoundTripLoader(data)
-yaml_loader = yaml.SafeLoader(data)
+try:
+# Faster C-based loader, might not be available on all platforms
+yaml_loader = yaml.CSafeLoader(data)
+except BaseException:
--- End diff --

I think I'm sure ... I want the failover to always work. Even if 
CSafeLoader does exist but fails somehow internally, we should still make an 
effort to run.
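
For clarity, the fallback pattern under discussion is roughly this (a sketch; the `except` branch falling back to the pure-Python `SafeLoader` is what the surrounding code suggests):

```python
try:
    # Faster C-based loader; might not be available on all platforms,
    # and could conceivably fail at construction time
    yaml_loader = yaml.CSafeLoader(data)
except BaseException:
    # Fall back to the pure-Python loader so parsing can still proceed
    yaml_loader = yaml.SafeLoader(data)
```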


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014418
  
--- Diff: aria/parser/reading/reader.py ---
@@ -28,16 +28,9 @@ def __init__(self, context, location, loader):
 
 def load(self):
 with OpenClose(self.loader) as loader:
--- End diff --

If you can refactor this to make it use a context manager and still be clear, please show me.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153014247
  
--- Diff: aria/parser/presentation/presentation.py ---
@@ -199,6 +199,9 @@ class Presentation(PresentationBase):
 """
 
 def _validate(self, context):
+if (not context.presentation.configuration.get('validate_normative', True)) \
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013980
  
--- Diff: aria/parser/loading/uri.py ---
@@ -44,6 +45,7 @@ def __init__(self, context, location, origin_location=None):
 self.location = location
 self._prefixes = StrictList(value_class=basestring)
 self._loader = None
+self._canonical_location = None
--- End diff --

I think "canonical" is a well-known adjective for file systems and URIs and 
might not need explanation. "Canonical" means globally absolute. If you think 
it should be documented, then where? It is used a lot.
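
To illustrate what "canonical" means here, a hedged sketch for a file-based loader (a hypothetical helper for illustration, not the actual ARIA code):

```python
import os

def get_canonical_location(location, origin_canonical_location=None):
    # Resolve a possibly relative path against its origin and make it globally
    # absolute, so the same file always maps to the same canonical location
    if origin_canonical_location is not None:
        base_dir = os.path.dirname(origin_canonical_location)
        location = os.path.join(base_dir, location)
    return os.path.abspath(os.path.normpath(location))

# e.g. get_canonical_location('types.yaml', '/blueprints/app/main.yaml')
#   -> '/blueprints/app/types.yaml'
```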


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013913
  
--- Diff: aria/parser/loading/loader.py ---
@@ -32,3 +32,6 @@ def close(self):
 
 def load(self):
 raise NotImplementedError
+
+def get_canonical_location(self):  # pylint: disable=no-self-use
--- End diff --

What do you propose?


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013856
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
+cache_key = (origin_canonical_location, location)
+try:
+canonical_location = CANONICAL_LOCATION_CACHE[cache_key]
+return loader, canonical_location
+except KeyError:
+pass
+else:
+cache_key = None
+
+canonical_location = loader.get_canonical_location()
+
+# Because retrieving the canonical location can be costly, we will 
try to cache it
+if cache_key is not None:
+CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
+
+return loader, canonical_location
+
+def _create_presentation(self, canonical_location, loader, 
default_presenter_class

[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013725
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -31,47 +32,33 @@ class Read(Consumer):
 instances.
 
 It supports agnostic raw data composition for presenters that have
-``_get_import_locations`` and ``_merge_import``.
+``_get_import_locations``, ``_validate_import``, and ``_merge_import``.
 
 To improve performance, loaders are called asynchronously on separate 
threads.
 
 Note that parsing may internally trigger more than one 
loading/reading/presentation
 cycle, for example if the agnostic raw data has dependencies that must 
also be parsed.
 """
 
-def consume(self):
-if self.context.presentation.location is None:
-self.context.validation.report('Presentation consumer: missing 
location')
-return
-
-presenter = None
-imported_presentations = None
+def __init__(self, context):
+super(Read, self).__init__(context)
+self._cache = {}
 
-executor = 
FixedThreadPoolExecutor(size=self.context.presentation.threads,
-   
timeout=self.context.presentation.timeout)
-executor.print_exceptions = 
self.context.presentation.print_exceptions
-try:
-presenter = self._present(self.context.presentation.location, 
None, None, executor)
-executor.drain()
-
-# Handle exceptions
-for e in executor.exceptions:
-self._handle_exception(e)
+def consume(self):
+# Present the main location and all imports recursively
+main, results = self._present_all()
 
-imported_presentations = executor.returns
-finally:
-executor.close()
+# Merge presentations
+main.merge(results, self.context)
 
-# Merge imports
-if (imported_presentations is not None) and hasattr(presenter, 
'_merge_import'):
-for imported_presentation in imported_presentations:
-okay = True
-if hasattr(presenter, '_validate_import'):
-okay = presenter._validate_import(self.context, 
imported_presentation)
-if okay:
-presenter._merge_import(imported_presentation)
+# Cache merged presentations
+if self.context.presentation.cache:
+for result in results:
+result.cache()
 
-self.context.presentation.presenter = presenter
+self.context.presentation.presenter = main.presentation
+if main.canonical_location is not None:
--- End diff --

The loader might, for some reason, fail to turn the location into a canonical location.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013051
  
--- Diff: tests/mechanisms/utils.py ---
@@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import itertools
+
+
+def matrix(*iterables, **kwargs):
+"""
+Generates a matrix of parameters for ``@pytest.mark.parametrize``.
--- End diff --

Because `*iterables` can be any length... if you use both `*` and `**`, how else can you specify a named parameter?


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153013058
  
--- Diff: tests/mechanisms/utils.py ---
@@ -0,0 +1,71 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import itertools
+
+
+def matrix(*iterables, **kwargs):
+"""
+Generates a matrix of parameters for ``@pytest.mark.parametrize``.
+
+The matrix is essentially the Cartesian product of the arguments (which should be iterables),
+with the added ability to "flatten" each value by breaking up tuples and recombining them into a
+final flat value.
+
+To do such recombination, use the ``counts`` argument (tuple) to specify the number of elements
+per value in order. Any count greater than 1 (the default) enables recombination of that value.
+
+Example::
+
+  x = ('hello', 'goodbye')
+  y = ('Linus', 'Richard')
+  matrix(x, y) ->
+('hello', 'Linus'),
+('hello', 'Richard'),
+('goodbye', 'Linus'),
+('goodbye', 'Richard')
+
+  y = (('Linus', 'Torvalds'), ('Richard', 'Stallman'))
+  matrix(x, y) ->
+('hello', ('Linus', 'Torvalds')),
+('hello', ('Richard', 'Stallman')),
+('goodbye', ('Linus', 'Torvalds')),
+('goodbye', ('Richard', 'Stallman'))
+
+  matrix(x, y, counts=(1, 2)) ->
+('hello', 'Linus', 'Torvalds'),
+('hello', 'Richard', 'Stallman'),
+('goodbye', 'Linus', 'Torvalds'),
+('goodbye', 'Richard', 'Stallman')
+"""
--- End diff --

I will add more documentation.
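
In the meantime, a minimal sketch of an implementation consistent with the documented behavior (assuming ``counts`` is the only keyword argument; the real helper may differ in details):

```python
import itertools


def matrix(*iterables, **kwargs):
    counts = kwargs.get('counts')
    results = []
    for combination in itertools.product(*iterables):
        if counts is None:
            results.append(combination)
            continue
        # Splice tuple values whose count is greater than 1 into a flat value
        flat = []
        for value, count in zip(combination, counts):
            if count == 1:
                flat.append(value)
            else:
                flat.extend(value)
        results.append(tuple(flat))
    return tuple(results)
```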


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153012904
  
--- Diff: tests/mechanisms/parsing/__init__.py ---
@@ -0,0 +1,75 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+import jinja2
+
+
+LINE_BREAK = '\n' + '-' * 60
+
+
+class Parsed(object):
+def __init__(self):
+self.issues = []
+self.text = ''
+self.verbose = False
+
+def assert_success(self):
+__tracebackhide__ = True # pylint: disable=unused-variable
--- End diff --

A PyTest feature ... I will link to the documentation.
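
Specifically, setting ``__tracebackhide__ = True`` inside an assertion helper tells pytest to omit that frame from failure tracebacks, so reports point at the calling test. A tiny illustration (hypothetical helper, for demonstration only):

```python
import pytest


def assert_all_positive(values):
    __tracebackhide__ = True  # hide this helper frame from pytest tracebacks
    for value in values:
        if value <= 0:
            pytest.fail(u'not positive: {0}'.format(value))
```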


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153012524
  
--- Diff: tests/extensions/aria_extension_tosca/simple_v1_0/functions/test_function_concat.py ---
@@ -0,0 +1,102 @@
+# -*- coding: utf-8 -*-
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+def test_functions_concat_syntax_empty(parser):
+parser.parse_literal("""
+tosca_definitions_version: tosca_simple_yaml_1_0
+node_types:
+  MyType:
+properties:
+  my_parameter:
+type: string
+topology_template:
+  node_templates:
+my_node:
+  type: MyType
+  properties:
+my_parameter: { concat: [] }
+""").assert_success()
+
+
+def test_functions_concat_strings(parser):
+parser.parse_literal("""
+tosca_definitions_version: tosca_simple_yaml_1_0
+node_types:
+  MyType:
+properties:
+  my_parameter:
+type: string
+topology_template:
+  node_templates:
+my_node:
+  type: MyType
+  properties:
+my_parameter: { concat: [ a, b, c ] }
+""").assert_success()
+
+
+def test_functions_concat_mixed(parser):
+parser.parse_literal("""
+tosca_definitions_version: tosca_simple_yaml_1_0
+node_types:
+  MyType:
+properties:
+  my_parameter:
+type: string
+topology_template:
+  node_templates:
+my_node:
+  type: MyType
+  properties:
+my_parameter: { concat: [ a, 1, 1.1, null, [], {} ] }
+""").assert_success()
+
+
+def test_functions_concat_nested(parser):
+parser.parse_literal("""
+tosca_definitions_version: tosca_simple_yaml_1_0
+node_types:
+  MyType:
+properties:
+  my_parameter:
+type: string
+topology_template:
+  node_templates:
+my_node:
+  type: MyType
+  properties:
+my_parameter: { concat: [ a, { concat: [ b, c ] } ] }
+""").assert_success()
+
+
+# Unicode
+
+def test_functions_concat_unicode(parser):
+parser.parse_literal("""
+tosca_definitions_version: tosca_simple_yaml_1_0
+node_types:
+  類型:
+properties:
+  參數:
+type: string
+topology_template:
+  node_templates:
+模板:
+  type: 類型
+  properties:
+參數: { concat: [ 一, 二, 三 ] }
+""").assert_success()
--- End diff --

It would be impossible to create such a test that would work with non-TOSCA parsers.

In the future we will boost our topology engine tests to test actual function evaluation. (And of course things get much more complicated there, because we deal with HOST, TARGET, and other topological aspects, as well as runtime values in `get_attribute`.)

For this suite, we are just testing parsing of intrinsic functions, not their evaluation.


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153011903
  
--- Diff: tests/extensions/aria_extension_tosca/conftest.py ---
@@ -0,0 +1,45 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+PyTest configuration module.
+
+Add support for a "--tosca-parser" CLI option.
+"""
+
+import pytest
+
+from ...mechanisms.parsing.aria import AriaParser
+
+
+def pytest_addoption(parser):
--- End diff --

These are PyTest-defined hooks. I will add a link to the PyTest 
documentation.
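
For context, the hook shape pytest expects is roughly this (a sketch; only the "--tosca-parser" option name comes from the quoted conftest.py, the default value and fixture name are assumptions):

```python
import pytest


def pytest_addoption(parser):
    # pytest calls this hook once at startup; it registers the CLI option
    parser.addoption('--tosca-parser', action='store', default='aria',
                     help='select the parser implementation under test')


@pytest.fixture
def tosca_parser_name(request):
    # fixtures (and tests) can read the option back from the pytest config
    return request.config.getoption('--tosca-parser')
```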


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153011505
  
--- Diff: extensions/aria_extension_tosca/simple_v1_0/modeling/data_types.py ---
@@ -318,15 +318,15 @@ def report(message, constraint):
 #
 
 def get_data_type_value(context, presentation, field_name, type_name):
-the_type = get_type_by_name(context, type_name, 'data_types')
-if the_type is not None:
-value = getattr(presentation, field_name)
-if value is not None:
+value = getattr(presentation, field_name)
+if value is not None:
+the_type = get_type_by_name(context, type_name, 'data_types')
--- End diff --

+1 to `data_type`


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153010864
  
--- Diff: aria/utils/collections.py ---
@@ -220,27 +225,30 @@ def __setitem__(self, key, value, **_):
 return super(StrictDict, self).__setitem__(key, value)
 
 
-def merge(dict_a, dict_b, path=None, strict=False):
+def merge(dict_a, dict_b, copy=True, strict=False, path=None):
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153009548
  
--- Diff: aria/parser/reading/yaml.py ---
@@ -16,18 +16,30 @@
 from .reader import Reader
 from .locator import Locator
 from .exceptions import ReaderSyntaxError
-from .locator import LocatableString, LocatableInt, LocatableFloat
+from .locator import (LocatableString, LocatableInt, LocatableFloat)
 
-# Add our types to ruamel.yaml
+
+MERGE_TAG = u'tag:yaml.org,2002:merge'
+MAP_TAG = u'tag:yaml.org,2002:map'
+
+
+# Add our types to RoundTripRepresenter
 yaml.representer.RoundTripRepresenter.add_representer(
LocatableString, yaml.representer.RoundTripRepresenter.represent_unicode)
 yaml.representer.RoundTripRepresenter.add_representer(
 LocatableInt, yaml.representer.RoundTripRepresenter.represent_int)
 yaml.representer.RoundTripRepresenter.add_representer(
 LocatableFloat, yaml.representer.RoundTripRepresenter.represent_float)
 
-MERGE_TAG = u'tag:yaml.org,2002:merge'
-MAP_TAG = u'tag:yaml.org,2002:map'
+
+def construct_yaml_map(self, node):
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153009232
  
--- Diff: aria/parser/presentation/field_validators.py ---
@@ -14,12 +14,29 @@
 # limitations under the License.
 
 
+from ...utils.formatting import safe_repr
 from ..validation import Issue
 from .utils import (parse_types_dict_names, report_issue_for_unknown_type,
 report_issue_for_parent_is_self, 
report_issue_for_unknown_parent_type,
 report_issue_for_circular_type_hierarchy)
 
 
+def not_negative_validator(field, presentation, context):
+"""
+Makes sure that the field is not negative.
+
+Can be used with the :func:`field_validator` decorator.
+"""
+
+field.default_validate(presentation, context)
+value = getattr(presentation, field.name)
+if (value is not None) and (value < 0):
--- End diff --

The field also has a type, which is enforced as a separate validation. I wanted this particular validator to be general purpose, so it would work with any kind of object that supports comparison (`__gt__` and other magic methods).


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153009028
  
--- Diff: aria/parser/presentation/context.py ---
@@ -63,3 +67,12 @@ def get_from_dict(self, *names):
 """
 
return self.presenter._get_from_dict(*names) if self.presenter is not None else None
+
+def create_executor(self):
+if self.threads == 1:
--- End diff --

What do you mean by "initiator"? You can configure the thread count in the parser context, just like everything else, whenever you start the parsing process. Normally you don't need to do this -- the defaults should be fine. Only for running tests in tox (which is multi-process) does it make sense to override the default and enforce single-threading.
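
In other words, the selection happens in `create_executor` based on the configured thread count -- roughly like this (a sketch using the executors mentioned in this PR; the `timeout` and `print_exceptions` attribute names are assumptions based on the quoted code):

```python
def create_executor(self):
    if self.threads == 1:
        # Much faster for single-threaded use (e.g. under tox), since it
        # avoids thread and queue overhead entirely
        executor = BlockingExecutor()
    else:
        executor = FixedThreadPoolExecutor(size=self.threads, timeout=self.timeout)
    executor.print_exceptions = self.print_exceptions
    return executor
```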


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008880
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
+cache_key = (origin_canonical_location, location)
+try:
+canonical_location = CANONICAL_LOCATION_CACHE[cache_key]
+return loader, canonical_location
+except KeyError:
+pass
+else:
+cache_key = None
+
+canonical_location = loader.get_canonical_location()
+
+# Because retrieving the canonical location can be costly, we will 
try to cache it
+if cache_key is not None:
+CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
+
+return loader, canonical_location
+
+def _create_presentation(self, canonical_location, loader, 
default_presenter_class

[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008578
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
+cache_key = (origin_canonical_location, location)
+try:
+canonical_location = CANONICAL_LOCATION_CACHE[cache_key]
+return loader, canonical_location
+except KeyError:
+pass
+else:
+cache_key = None
+
+canonical_location = loader.get_canonical_location()
+
+# Because retrieving the canonical location can be costly, we will 
try to cache it
+if cache_key is not None:
+CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
+
+return loader, canonical_location
+
+def _create_presentation(self, canonical_location, loader, 
default_presenter_class

[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008582
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
+cache_key = (origin_canonical_location, location)
+try:
+canonical_location = CANONICAL_LOCATION_CACHE[cache_key]
+return loader, canonical_location
+except KeyError:
+pass
+else:
+cache_key = None
+
+canonical_location = loader.get_canonical_location()
+
+# Because retrieving the canonical location can be costly, we will 
try to cache it
+if cache_key is not None:
+CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
+
+return loader, canonical_location
+
+def _create_presentation(self, canonical_location, loader, 
default_presenter_class

[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008552
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
+cache_key = (origin_canonical_location, location)
+try:
+canonical_location = CANONICAL_LOCATION_CACHE[cache_key]
+return loader, canonical_location
+except KeyError:
+pass
+else:
+cache_key = None
+
+canonical_location = loader.get_canonical_location()
+
+# Because retrieving the canonical location can be costly, we will 
try to cache it
+if cache_key is not None:
+CANONICAL_LOCATION_CACHE[cache_key] = canonical_location
+
+return loader, canonical_location
+
+def _create_presentation(self, canonical_location, loader, 
default_presenter_class

[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153008314
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+try:
+# Is the presentation in the local cache?
+presentation = self._cache[canonical_location]
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+except KeyError:
+pass
+
+# Create and cache new presentation
+presentation = self._create_presentation(canonical_location, 
loader, origin_presenter_class)
+self._cache[canonical_location] = presentation
 
+# Submit imports to executor
+if hasattr(presentation, '_get_import_locations'):
+import_locations = 
presentation._get_import_locations(self.context)
+if import_locations:
+for import_location in import_locations:
+import_location = UriLocation(import_location)
+executor.submit(self._present, import_location, 
canonical_location,
+presentation.__class__, executor)
+
+return _Result(presentation, canonical_location, 
origin_canonical_location)
+
+def _create_loader(self, location, origin_canonical_location):
+loader = 
self.context.loading.loader_source.get_loader(self.context.loading, location,
+   
origin_canonical_location)
+
+canonical_location = None
+
+if origin_canonical_location is not None:
--- End diff --

Perhaps the goal wasn't clear here: "location" is relative, while 
"canonical_location" is globally absolute. So the cache key has to be a 
combination of both, hence a tuple. I will add documentation to clarify this.
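
Concretely (a toy illustration with made-up paths, not the actual code):

```python
# The same relative location imported from two different origins must not
# collide in the cache, so the key pairs the absolute origin with the
# relative location as written
key_a = ('/blueprints/app/main.yaml', 'types.yaml')  # -> '/blueprints/app/types.yaml'
key_b = ('/blueprints/db/main.yaml', 'types.yaml')   # -> '/blueprints/db/types.yaml'
assert key_a != key_b
```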


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153007941
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -86,52 +73,193 @@ def dump(self):
 self.context.presentation.presenter._dump(self.context)
 
 def _handle_exception(self, e):
-if isinstance(e, AlreadyReadException):
+if isinstance(e, _Skip):
 return
 super(Read, self)._handle_exception(e)
 
-def _present(self, location, origin_location, presenter_class, 
executor):
+def _present_all(self):
+location = self.context.presentation.location
+
+if location is None:
+self.context.validation.report('Read consumer: missing 
location')
+return
+
+executor = self.context.presentation.create_executor()
+try:
+# This call may recursively submit tasks to the executor if 
there are imports
+main = self._present(location, None, None, executor)
+
+# Wait for all tasks to complete
+executor.drain()
+
+# Handle exceptions
+for e in executor.exceptions:
+self._handle_exception(e)
+
+results = executor.returns or []
+finally:
+executor.close()
+
+results.insert(0, main)
+
+return main, results
+
+def _present(self, location, origin_canonical_location, 
origin_presenter_class, executor):
 # Link the context to this thread
 self.context.set_thread_local()
 
-raw = self._read(location, origin_location)
+# Canonicalize the location
+if self.context.reading.reader is None:
+loader, canonical_location = self._create_loader(location, 
origin_canonical_location)
+else:
+# If a reader is specified in the context then we skip loading
+loader = None
+canonical_location = location
+
+# Skip self imports
+if canonical_location == origin_canonical_location:
+raise _Skip()
+
+if self.context.presentation.cache:
+# Is the presentation in the global cache?
+try:
+presentation = PRESENTATION_CACHE[canonical_location]
--- End diff --

Yes there is. An "if" statement plus a retrieval statement are non-atomic, and since we are in a concurrent situation it's possible that, between the "if" and the retrieval, the data will be removed from the cache, causing the retrieval to fail with an exception even though the "if" succeeded. A single retrieval is atomic.

(This is generally the idiomatic "Python way" to do this -- always choose the atomic operation!)
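
To spell out the difference (a generic illustration using the names from the quoted diff):

```python
# Non-atomic: another thread may evict the key between the check and the read
if canonical_location in PRESENTATION_CACHE:
    presentation = PRESENTATION_CACHE[canonical_location]  # can still raise KeyError

# Atomic: a single dict lookup either succeeds or raises, with no window in between
try:
    presentation = PRESENTATION_CACHE[canonical_location]
except KeyError:
    pass
```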


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-24 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r153006807
  
--- Diff: aria/parser/consumption/presentation.py ---
@@ -31,47 +32,33 @@ class Read(Consumer):
 instances.
 
 It supports agnostic raw data composition for presenters that have
-``_get_import_locations`` and ``_merge_import``.
+``_get_import_locations``, ``_validate_import``, and ``_merge_import``.
 
 To improve performance, loaders are called asynchronously on separate 
threads.
 
 Note that parsing may internally trigger more than one 
loading/reading/presentation
 cycle, for example if the agnostic raw data has dependencies that must 
also be parsed.
 """
 
-def consume(self):
-if self.context.presentation.location is None:
-self.context.validation.report('Presentation consumer: missing 
location')
-return
-
-presenter = None
-imported_presentations = None
+def __init__(self, context):
+super(Read, self).__init__(context)
+self._cache = {}
 
-executor = 
FixedThreadPoolExecutor(size=self.context.presentation.threads,
-   
timeout=self.context.presentation.timeout)
-executor.print_exceptions = 
self.context.presentation.print_exceptions
-try:
-presenter = self._present(self.context.presentation.location, 
None, None, executor)
-executor.drain()
-
-# Handle exceptions
-for e in executor.exceptions:
-self._handle_exception(e)
+def consume(self):
+# Present the main location and all imports recursively
+main, results = self._present_all()
--- End diff --

+1 -- renamed to `main_result` and `all_results`


---


[GitHub] incubator-ariatosca issue #199: ARIA-392 Failing to load ruamel.yaml

2017-11-09 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/199
  
OK. So I guess you don't want to change the imports in `/tests/`?

On Wed, Nov 8, 2017 at 4:32 PM, Maxim Orlov <notificati...@github.com>
wrote:

> so, i think we're ready
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> 
<https://github.com/apache/incubator-ariatosca/pull/199#issuecomment-342983490>,
> or mute the thread
> 
<https://github.com/notifications/unsubscribe-auth/ABF_4Z5-j-tU5KsVd0FKsJcbugbiCbdXks5s0iwOgaJpZM4QC6qG>
> .
>



---


[GitHub] incubator-ariatosca issue #199: ARIA-392 Failing to load ruamel.yaml

2017-11-08 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/199
  
Still, I don't see why it should be in `aria/__init__.py` and always be activated. I think it should be activated only for a module that specifically uses ruamel. It would still only happen once.

On Wed, Nov 8, 2017 at 2:31 PM, Maxim Orlov <notificati...@github.com>
wrote:

> This code only runs once, it doesn't provide with a patched up ruamel, it
> patched the loading mechanism...so loading should work everywhere, as long
> as the code was run...
>
> —
> You are receiving this because you were mentioned.
> Reply to this email directly, view it on GitHub
> 
<https://github.com/apache/incubator-ariatosca/pull/199#issuecomment-342949895>,
> or mute the thread
> 
<https://github.com/notifications/unsubscribe-auth/ABF_4dyDXJXUA4TT4Mc2ofbzAiholfI7ks5s0g-7gaJpZM4QC6qG>
> .
>



---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-11-08 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/207#discussion_r149718206
  
--- Diff: tests/extensions/aria_extension_tosca/simple_nfv_v1_0/test_profile.py ---
@@ -0,0 +1,20 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
--- End diff --

You are looking here specifically at the test to make sure the NFV profile 
is valid. This is not where the unit tests for the parser lie.


---


[GitHub] incubator-ariatosca issue #199: ARIA-392 Failing to load ruamel.yaml

2017-11-08 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/199
  
@mxmrlv You didn't address these:

1. Let's move this to `from aria.utils.yaml import yaml`. I don't think the 
aria `__init__.py` is the right place for this.
2. Let's go over *all* places in the codebase where we import ruamel to use 
the above.


---


[GitHub] incubator-ariatosca issue #200: ARIA-393 Enable configuration of extension l...

2017-11-06 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/200
  
Could you also add this to the doc?

`:type strict: bool`

After that it's fine by me to squash and merge!
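
For concreteness, a sketch of where that line would sit in the Sphinx docstring (the function name and description are placeholders):

```python
def init_extensions(strict=True):
    """
    Placeholder description of the function.

    :param strict: placeholder description of the flag
    :type strict: bool
    """
```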


---


[GitHub] incubator-ariatosca pull request #199: ARIA-392 Failing to load ruamel.yaml

2017-11-06 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/199#discussion_r149136650
  
--- Diff: aria/__init__.py ---
@@ -17,14 +17,45 @@
 The ARIA root package provides entry points for extension and storage 
initialization.
 """
 
+import os
 import sys
+import types
 
 from pkgutil import iter_modules
 import pkg_resources
-
 aria_package_name = 'apache-ariatosca'
 __version__ = pkg_resources.get_distribution(aria_package_name).version
 
+
+
+
+try:
+import ruamel   # noqa: F401
+except ImportError:
--- End diff --

I just think `from aria.utils.yaml import yaml` will give us a clean 
separation in case we have a better fix in the future, so I do prefer that.

Also, for this PR to merge we need to change *all* our uses of ruamel in 
the codebase to that import.


---


[GitHub] incubator-ariatosca pull request #199: ARIA-392 Failing to load ruamel.yaml

2017-11-02 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/199#discussion_r148628206
  
--- Diff: aria/__init__.py ---
@@ -17,14 +17,45 @@
 The ARIA root package provides entry points for extension and storage 
initialization.
 """
 
+import os
 import sys
+import types
 
 from pkgutil import iter_modules
 import pkg_resources
-
 aria_package_name = 'apache-ariatosca'
 __version__ = pkg_resources.get_distribution(aria_package_name).version
 
+
+
+
+try:
+import ruamel   # noqa: F401
+except ImportError:
--- End diff --

Another follow-up: if this is still an issue with newer versions of ruamel, I strongly suggest we open a bug at that project. I've done it in the past and have gotten things fixed. If you do that, make sure to link that bug report in a code comment here -- in case it's solved in the future, we should remove our workaround.


---


[GitHub] incubator-ariatosca pull request #199: ARIA-392 Failing to load ruamel.yaml

2017-11-02 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/199#discussion_r148621970
  
--- Diff: aria/__init__.py ---
@@ -17,14 +17,45 @@
 The ARIA root package provides entry points for extension and storage 
initialization.
 """
 
+import os
 import sys
+import types
 
 from pkgutil import iter_modules
 import pkg_resources
-
 aria_package_name = 'apache-ariatosca'
 __version__ = pkg_resources.get_distribution(aria_package_name).version
 
+
+
+
+try:
+import ruamel   # noqa: F401
+except ImportError:
--- End diff --

A recent commit removed Py2.6 support and allowed us to upgrade ruamel to its latest version. So, I suggest rebasing on master and testing again: perhaps this hack isn't needed anymore.

Assuming this hack is still needed, we need to find a way to generalize it, since we import ruamel in many, many places in the code. Perhaps we should have an `aria.utils.yaml` package where we do all this work once. Everywhere else we use yaml would then have to be changed to `from aria.utils.yaml import yaml`. A nice side benefit would be that it would be easy to replace ruamel with a different yaml parser if that is ever necessary.
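
A minimal sketch of what such an `aria/utils/yaml.py` could look like (the actual ARIA-392 workaround logic is elided; this only shows the import-indirection idea):

```python
# aria/utils/yaml.py -- the single place that knows how to obtain ruamel's yaml module.
# The ARIA-392 loading workaround (elided here) would be applied once, in this module only.
from ruamel import yaml  # noqa: F401

# everywhere else in the codebase:
#   from aria.utils.yaml import yaml
```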


---


[GitHub] incubator-ariatosca pull request #199: ARIA-392 Failing to load ruamel.yaml

2017-11-02 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/199#discussion_r148620801
  
--- Diff: aria/__init__.py ---
@@ -17,14 +17,45 @@
 The ARIA root package provides entry points for extension and storage 
initialization.
 """
 
+import os
 import sys
+import types
 
 from pkgutil import iter_modules
 import pkg_resources
-
 aria_package_name = 'apache-ariatosca'
 __version__ = pkg_resources.get_distribution(aria_package_name).version
 
+
+
+
+try:
+import ruamel   # noqa: F401
--- End diff --

I think we've decided to remove IDE-specific comments.


---


[GitHub] incubator-ariatosca issue #200: ARIA-393 Enable configuration of extension l...

2017-11-02 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/200
  
Could you just fix the Sphinx docstring to include the argument?


---


[GitHub] incubator-ariatosca pull request #207: ARIA-1 Parser test suite

2017-10-31 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/207

ARIA-1 Parser test suite

This commit additionally fixes many parser bugs revealed by the test
suite, which includes adding validations that were missing.

A new "extensions" tox suite is introduced.

The /tests/parser cases were refactored into /tests/topology and
/tests/extensions.

The Hello World example was fixed and refactored, as it in fact had
invalid TOSCA (it previously passed due to a missing validation).

Parser performance was greatly improved by:

1. Switching to the YAML C library
2. Aggressive caching of parsed presentations
3. The ability to skip importing the TOSCA profile
4. A new deepcopy_fast util
5. A new BlockingExecutor that is much faster for single-threaded use

Unicode is now fully supported for all validation and log messages. This
requires the use of unicode (u'' notation) for all .format specs.

Additionally, PyLint comment directives have been standardized by
pushing them to column 100.
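
(As a hedged illustration of item 5 above -- not the actual ARIA class, and the
method names here are assumptions -- a blocking executor can be as simple as
running each submitted task inline on the calling thread:)

    # Sketch of a synchronous executor: no worker threads, no queue, no locking,
    # which is where the single-threaded speedup mentioned above comes from.
    class BlockingExecutor(object):
        def submit(self, func, *args, **kwargs):
            # Run immediately and return the task's result to the caller.
            return func(*args, **kwargs)

        def close(self):
            pass  # nothing to shut down; there are no worker threads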

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-1-parser-test-suite

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/207.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #207


commit 22cee6d90c5873230eabe7449b5bbccf33222bad
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-08-17T22:50:27Z

ARIA-1 Parser test suite

This commit additionally fixes many parser bugs revealed by the test
suite, which includes adding validations that were missing.

A new "extensions" tox suite is introduced.

The /tests/parser cases were refactored into /tests/topology and
/tests/extensions.

The Hello World example was fixed and refactored, as it in fact had
invalid TOSCA (it previously passed due to a missing validation).

Parser performance was greatly improved by:

1. Switching to the YAML C library
2. Aggressive caching of parsed presentations
3. The ability to skip importing the TOSCA profile
4. A new deepcopy_fast util
5. A new BlockingExecutor that is much faster for single-threaded use

Unicode is now fully supported for all validation and log messages. This
requires the use of unicode (u'' notation) for all .format specs.

Additionally, PyLint comment directives have been standardized by
pushing them to column 100.




---


[GitHub] incubator-ariatosca pull request #206: ARIA-405 Remove support for Python 2....

2017-10-30 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/206

ARIA-405 Remove support for Python 2.6

* setup.py now requires Python 2.7
* Upgrade all 3rd party libraries to recent versions
* API changes to networkx
* Stricter yaml.load call for ruamel.yaml
* Remove iter_modules implementation for Python 2.6
* Remove NullHandler implementation for Python 2.6
* Remove "py26" tox test environments

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-405-remove-py26

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/206.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #206


commit b1f03efcedd814f5fe38f051bfa0106cc715586f
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-10-30T21:56:57Z

ARIA-405 Remove support for Python 2.6

* setup.py now requires Python 2.7
* Upgrade all 3rd party libraries to recent versions
* API changes to networkx
* Stricter yaml.load call for ruamel.yaml
* Remove iter_modules implementation for Python 2.6
* Remove NullHandler implementation for Python 2.6
* Remove "py26" tox test environments




---


[GitHub] incubator-ariatosca pull request #201: ARIA-395 Fix AppVeyor failures due to...

2017-10-30 Thread tliron
Github user tliron closed the pull request at:

https://github.com/apache/incubator-ariatosca/pull/201


---


[GitHub] incubator-ariatosca pull request #201: ARIA-395 Fix AppVeyor failures due to...

2017-10-23 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/201

ARIA-395 Fix AppVeyor failures due to use of SSL



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-395-appveyor-failures

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/201.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #201


commit a7c381fbace2ffe0435e9d59cd7a8aedd1f18061
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-10-23T17:08:59Z

ARIA-395 Fix AppVeyor failures due to use of SSL




---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-11 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r138103044
  
--- Diff: .travis.yml ---
@@ -10,36 +10,55 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-sudo: false
+# We need to set "sudo: true" in order to use a virtual machine instead of 
a container, because
+# SSH tests fail in the container. See:
+# 
https://docs.travis-ci.com/user/reference/overview/#Virtualization-environments
+
+dist: trusty
+sudo: true
 
 language: python
 
-dist: precise
+addons:
+  apt:
+sources:
+  - sourceline: 'ppa:fkrull/deadsnakes'
+packages:
+  # Ubuntu 14.04 (trusty) does not come with Python 2.6, so we will 
install it from Felix
+  # Krull's PPA
+  - python2.6
+  - python2.6-dev
+
 python:
+  # We handle Python 2.6 testing from within tox (see tox.ini); note that 
this means that we run
+  # tox itself always from Python 2.7
   - '2.7'
 
 env:
-  - TOX_ENV=pylint_code
-  - TOX_ENV=pylint_tests
-  - TOX_ENV=py27
-  - TOX_ENV=py26
-  - TOX_ENV=py27e2e
-  - TOX_ENV=py26e2e
-  - TOX_ENV=py27ssh
-  - TOX_ENV=py26ssh
-  - TOX_ENV=docs
-
-install:
+  # The PYTEST_PROCESSES environment var is used in tox.ini to override 
the --numprocesses argument
+  # for PyTest's xdist plugin. The reason this is necessary is that 
conventional Travis environments
+  # may report a large amount of available CPUs, but they they are greatly 
restricted. Through trial
+  # and error we found that more than 1 process may result in failures.
+  - PYTEST_PROCESSES=1 TOX_ENV=pylint_code
+  - PYTEST_PROCESSES=1 TOX_ENV=pylint_tests
+  - PYTEST_PROCESSES=1 TOX_ENV=py27
+  - PYTEST_PROCESSES=1 TOX_ENV=py26
+  - PYTEST_PROCESSES=1 TOX_ENV=py27e2e
+  - PYTEST_PROCESSES=1 TOX_ENV=py26e2e
+  - PYTEST_PROCESSES=1 TOX_ENV=py27ssh
+  - PYTEST_PROCESSES=1 TOX_ENV=py26ssh
+  - PYTEST_PROCESSES=1 TOX_ENV=docs
+
+before_install:
+  # Create SSH keys for SSH tests
+  - ssh-keygen -f $HOME/.ssh/id_rsa -t rsa -N ''
+  - cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
+
+  # Python dependencies
   - pip install --upgrade pip
   - pip install --upgrade setuptools
   - pip install tox
-
-script:
-  - pip --version
   - tox --version
-  - PYTEST_PROCESSES=1 tox -e $TOX_ENV
--- End diff --

While working on this file it seemed odd to me that this one env variable 
is not in the .travis.yml "env" section. It makes sense to group it here, too, 
because in the future we might find that some tox envs run fine with 
multiprocess. (The Travis VMs/containers all have 2 cores, actually. I still 
don't know why our tests fail when we switch to 2 processes.)


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-11 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r138102421
  
--- Diff: .travis.yml ---
@@ -10,36 +10,55 @@
 # See the License for the specific language governing permissions and
 # limitations under the License.
 
-sudo: false
+# We need to set "sudo: true" in order to use a virtual machine instead of 
a container, because
+# SSH tests fail in the container. See:
+# 
https://docs.travis-ci.com/user/reference/overview/#Virtualization-environments
+
+dist: trusty
+sudo: true
 
 language: python
 
-dist: precise
+addons:
+  apt:
+sources:
+  - sourceline: 'ppa:fkrull/deadsnakes'
+packages:
+  # Ubuntu 14.04 (trusty) does not come with Python 2.6, so we will 
install it from Felix
+  # Krull's PPA
+  - python2.6
+  - python2.6-dev
+
 python:
+  # We handle Python 2.6 testing from within tox (see tox.ini); note that 
this means that we run
+  # tox itself always from Python 2.7
   - '2.7'
 
 env:
-  - TOX_ENV=pylint_code
-  - TOX_ENV=pylint_tests
-  - TOX_ENV=py27
-  - TOX_ENV=py26
-  - TOX_ENV=py27e2e
-  - TOX_ENV=py26e2e
-  - TOX_ENV=py27ssh
-  - TOX_ENV=py26ssh
-  - TOX_ENV=docs
-
-install:
+  # The PYTEST_PROCESSES environment var is used in tox.ini to override 
the --numprocesses argument
+  # for PyTest's xdist plugin. The reason this is necessary is that 
conventional Travis environments
+  # may report a large amount of available CPUs, but they they are greatly 
restricted. Through trial
+  # and error we found that more than 1 process may result in failures.
+  - PYTEST_PROCESSES=1 TOX_ENV=pylint_code
+  - PYTEST_PROCESSES=1 TOX_ENV=pylint_tests
+  - PYTEST_PROCESSES=1 TOX_ENV=py27
+  - PYTEST_PROCESSES=1 TOX_ENV=py26
+  - PYTEST_PROCESSES=1 TOX_ENV=py27e2e
+  - PYTEST_PROCESSES=1 TOX_ENV=py26e2e
+  - PYTEST_PROCESSES=1 TOX_ENV=py27ssh
+  - PYTEST_PROCESSES=1 TOX_ENV=py26ssh
+  - PYTEST_PROCESSES=1 TOX_ENV=docs
+
+before_install:
+  # Create SSH keys for SSH tests
+  - ssh-keygen -f $HOME/.ssh/id_rsa -t rsa -N ''
+  - cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
+
+  # Python dependencies
   - pip install --upgrade pip
   - pip install --upgrade setuptools
   - pip install tox
-
-script:
-  - pip --version
--- End diff --

While working on fixing the tests, I actually added a whole bunch of stuff 
here to assist debugging. And then it occurred to me, why test pip version 
specifically? There is so much other stuff here that might be useful. (Also, we 
explicitly upgrade pip to the latest version when the test runs.) I think it's 
more confusing to have this here. If someone needs to debug something specific 
with the test mechanism on Travis, they can add whatever is relevant here (like 
I did).


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-06 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137270329
  
--- Diff: examples/clearwater/scripts/bono/delete.sh ---
@@ -0,0 +1,15 @@
+#!/bin/bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
--- End diff --

+1 added TODO


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137070505
  
--- Diff: aria/modeling/service_instance.py ---
@@ -228,6 +228,80 @@ def service_template_fk(cls):
 :type: :class:`~datetime.datetime`
 """)
 
+def get_node_by_type(self, type_name):
+"""
+Finds the first node of a type (or descendent type).
+"""
+service_template = self.service_template
+
+if service_template is not None:
+node_types = service_template.node_types
+if node_types is not None:
+for node in self.nodes.itervalues():
+if node_types.is_descendant(type_name, node.type.name):
+return node
+
+return None
+
+def get_policy_by_type(self, type_name):
+"""
+Finds the first policy of a type (or descendent type).
+"""
+service_template = self.service_template
+
+if service_template is not None:
+policy_types = service_template.policy_types
+if policy_types is not None:
+for policy in self.policies.itervalues():
+if policy_types.is_descendant(type_name, 
policy.type.name):
+return policy
+
+return None
+
+def satisfy_requirements(self):
+satisfied = True
+for node in self.nodes.itervalues():
+if not node.satisfy_requirements():
+satisfied = False
+return satisfied
+
+def validate_capabilities(self):
+satisfied = True
+for node in self.nodes.itervalues():
+if not node.validate_capabilities():
+satisfied = False
+return satisfied
+
+def find_hosts(self):
+for node in self.nodes.itervalues():
+node.find_host()
+
+def configure_operations(self):
+for node in self.nodes.itervalues():
+node.configure_operations()
+for group in self.groups.itervalues():
+group.configure_operations()
+for operation in self.workflows.itervalues():
+operation.configure()
+
+def is_node_a_target(self, target_node):
+for node in self.nodes.itervalues():
+if self._is_node_a_target(node, target_node):
+return True
+return False
+
+def _is_node_a_target(self, source_node, target_node):
--- End diff --

This is a leftover utility function from a previous experiment of mine (for 
solving an SMTP config challenge). It's not needed anymore, so I will just 
remove it.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137052935
  
--- Diff: 
extensions/aria_extension_tosca/simple_v1_0/modeling/data_types.py ---
@@ -159,10 +159,6 @@ def get_data_type(context, presentation, field_name, 
allow_none=False):
 else:
 return str
 
-# Make sure not derived from self
-if type_name == presentation._name:
--- End diff --

The previous code was broken and threw an exception when reaching those 
lines. I found the bug by accident while working on this so just decided to do 
a spot fix.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137049267
  
--- Diff: examples/clearwater/scripts/ralf/create.sh ---
@@ -0,0 +1,15 @@
+#!/bin/bash
--- End diff --

In terms of software installation, installing Dime will install 
Ralf+Homestead. However, those components run entirely differently in terms of 
configuration, logging, networking, etc. I imagine that in the multi-node 
Clearwater there will be things in this script to configure networking (open 
ports) for Ralf specifically. So, again, this is left here as a TODO.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137048198
  
--- Diff: examples/clearwater/scripts/bono/delete.sh ---
@@ -0,0 +1,15 @@
+#!/bin/bash
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
--- End diff --

I'm leaving those as TODOs -- we do need a way to uninstall, too, no?


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137048035
  
--- Diff: examples/clearwater/clearwater-live-test-existing.yaml ---
@@ -0,0 +1,54 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >-
+  Project Clearwater is an open-source IMS core, developed by Metaswitch 
Networks and released under
+  the GNU GPLv3.
+
+metadata:
+  template_name: clearwater-live-test-existing
+  template_author: ARIA
+  template_version: '1.0'
+  aria_version: '0.1.2'
+
+imports:
+  - types/clearwater.yaml
+  - aria-1.0
+
+topology_template:
+
+  inputs:
+hosts.ssh.user:
+  type: string
+hosts.ssh.password:
--- End diff --

I was thinking ahead here to the future template that will have multiple 
hosts. For consistency, I thought to use the "hosts." prefix so it would apply 
to all hosts. This is a simpler example with only one host.


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137048061
  
--- Diff: examples/clearwater/clearwater-single-existing.yaml ---
@@ -0,0 +1,147 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >-
+  Project Clearwater is an open-source IMS core, developed by Metaswitch 
Networks and released under
+  the GNU GPLv3.
+
+metadata:
+  template_name: clearwater-single-existing
+  template_author: ARIA
+  template_version: '1.0'
+  aria_version: '0.1.2'
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-09-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/191#discussion_r137047729
  
--- Diff: examples/clearwater/clearwater-live-test-existing.yaml ---
@@ -0,0 +1,54 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+description: >-
+  Project Clearwater is an open-source IMS core, developed by Metaswitch 
Networks and released under
+  the GNU GPLv3.
+
+metadata:
+  template_name: clearwater-live-test-existing
+  template_author: ARIA
+  template_version: '1.0'
+  aria_version: '0.1.2'
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #191: ARIA-321 Provide Clearwater IMS examp...

2017-08-15 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/191

ARIA-321 Provide Clearwater IMS example

Related fixes included in this commit:

* Allows capabilities, interfaces, and properties to override parent
definition types only if the new type is a descendant of the overridden
type
* Fix bugs in model instrumentation (to allow ctx access to capability
properties and complex values)
* Fix to get_property intrinsic function
* Fix the "required" field for parameters (it wasn't working)
* Don't let scalar values be negative
* Doc fixes related to ARIA-277

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca ARIA-321-clearwater

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/191.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #191


commit 8bb52180a34ca5617cc89c1ac39b152e0bda7d25
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-07-27T22:58:17Z

ARIA-321 Provide Clearwater IMS example

Related fixes included in this commit:

* Allows capabilities, interfaces, and properties to override parent
definition types only if the new type is a descendant of the overridden
type
* Fix bugs in model instrumentation (to allow ctx access to capability
properties and complex values)
* Fix to get_property intrinsic function
* Fix the "required" field for parameters (it wasn't working)
* Don't let scalar values be negative
* Doc fixes related to ARIA-277




---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131703163
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -170,101 +178,59 @@ def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-man
 modifying_key = None
 modifying_value = None
 
-num_args = len(args)
-
-while index < num_args:
-arg = args[index]
-
-# Object attribute
-attr = _desugar_attr(current, arg)
-if attr is not None:
-current = getattr(current, attr)
-
-# List entry
-elif isinstance(current, list) and _is_int(arg):
-current = current[int(arg)]
-
-# Dict (or dict-like object) value
-elif hasattr(current, '__getitem__'):
-if modifying and (not arg in current):
-current[arg] = {}
-current = current[arg]
-
-# Call
-elif callable(current):
-if isinstance(current, functools.partial):
-argspec = inspect.getargspec(current.func)
-# Don't count initial args fulfilled by the partial
-spec_args = argspec.args[len(current.args):]
-# Don't count keyword args fulfilled by the partial
-spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
-else:
-argspec = inspect.getargspec(current)
-spec_args = argspec.args
-
-callable_kwargs = {}
-if isinstance(arg, dict):
-# If the first arg is a dict, treat it as our kwargs
-# TODO: what if the first argument really needs to be a 
dict?
-callable_kwargs = arg
-index += 1
-
-if argspec.varargs is not None:
-# Gobble the rest of the args
-callable_args = args[index:]
-else:
-# Take only what we need
-spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
-spec_args_count = len(spec_args)
-if inspect.ismethod(current):
-# Don't count "self" argument
-spec_args_count -= 1
-callable_args = args[index:index + spec_args_count]
-# Note: we might actually have fewer args than the 
args_count, but that could be OK
-# if the user expects subsequent args to have default 
values
-
-args_count = len(callable_args)
-if args_count > 1:
-index += args_count - 1
-
-current = current(*callable_args, **callable_kwargs)
-
-else:
-raise RuntimeError('`{0}` cannot be processed in 
{1}'.format(arg, args))
-
-index += 1
-
-if callable(current):
-current = current()
+# Parse all arguments
+while len(args) > 0:
+obj, args = _parse_argument(obj, args, modifying)
 
 if modifying:
-if hasattr(current, '__setitem__'):
-# Modify dict value
-current[modifying_key] = modifying_value
-else:
+if hasattr(obj, '__setitem__'):
+# Modify item value (dict, list, and similar)
+if isinstance(obj, (list, tuple)):
+modifying_key = int(modifying_key)
+obj[modifying_key] = modifying_value
+elif hasattr(obj, modifying_key):
 # Modify object attribute
-setattr(current, modifying_key, modifying_value)
-
-return current
+setattr(obj, modifying_key, modifying_value)
+else:
+raise ProcessingError('Cannot modify `{0}` of `{1!r}`'
+  .format(modifying_key, obj))
 
+return obj
 
-def _desugar_attr(obj, attr):
-if not isinstance(attr, basestring):
-return None
-if hasattr(obj, attr):
-return attr
-attr = attr.replace('-', '_')
-if hasattr(obj, attr):
-return attr
-return None
 
+def _parse_argument(obj, args, modifying):
+args = list(args)
+arg = args.pop(0)
 
-def _is_int(arg):
-try:
-int(arg)
-except ValueError:
-return False
-return True
+# Call?
+if arg == '[':
+# TODO: should there be a way to escape "[" and "]" in case they 
are needed as real
+# arguments?
+try:

[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131702731
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -170,101 +178,59 @@ def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-man
 modifying_key = None
 modifying_value = None
 
-num_args = len(args)
-
-while index < num_args:
-arg = args[index]
-
-# Object attribute
-attr = _desugar_attr(current, arg)
-if attr is not None:
-current = getattr(current, attr)
-
-# List entry
-elif isinstance(current, list) and _is_int(arg):
-current = current[int(arg)]
-
-# Dict (or dict-like object) value
-elif hasattr(current, '__getitem__'):
-if modifying and (not arg in current):
-current[arg] = {}
-current = current[arg]
-
-# Call
-elif callable(current):
-if isinstance(current, functools.partial):
-argspec = inspect.getargspec(current.func)
-# Don't count initial args fulfilled by the partial
-spec_args = argspec.args[len(current.args):]
-# Don't count keyword args fulfilled by the partial
-spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
-else:
-argspec = inspect.getargspec(current)
-spec_args = argspec.args
-
-callable_kwargs = {}
-if isinstance(arg, dict):
-# If the first arg is a dict, treat it as our kwargs
-# TODO: what if the first argument really needs to be a 
dict?
-callable_kwargs = arg
-index += 1
-
-if argspec.varargs is not None:
-# Gobble the rest of the args
-callable_args = args[index:]
-else:
-# Take only what we need
-spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
-spec_args_count = len(spec_args)
-if inspect.ismethod(current):
-# Don't count "self" argument
-spec_args_count -= 1
-callable_args = args[index:index + spec_args_count]
-# Note: we might actually have fewer args than the 
args_count, but that could be OK
-# if the user expects subsequent args to have default 
values
-
-args_count = len(callable_args)
-if args_count > 1:
-index += args_count - 1
-
-current = current(*callable_args, **callable_kwargs)
-
-else:
-raise RuntimeError('`{0}` cannot be processed in 
{1}'.format(arg, args))
-
-index += 1
-
-if callable(current):
-current = current()
+# Parse all arguments
+while len(args) > 0:
+obj, args = _parse_argument(obj, args, modifying)
 
 if modifying:
-if hasattr(current, '__setitem__'):
-# Modify dict value
-current[modifying_key] = modifying_value
-else:
+if hasattr(obj, '__setitem__'):
+# Modify item value (dict, list, and similar)
+if isinstance(obj, (list, tuple)):
+modifying_key = int(modifying_key)
+obj[modifying_key] = modifying_value
+elif hasattr(obj, modifying_key):
 # Modify object attribute
-setattr(current, modifying_key, modifying_value)
-
-return current
+setattr(obj, modifying_key, modifying_value)
+else:
+raise ProcessingError('Cannot modify `{0}` of `{1!r}`'
+  .format(modifying_key, obj))
 
+return obj
 
-def _desugar_attr(obj, attr):
-if not isinstance(attr, basestring):
-return None
-if hasattr(obj, attr):
-return attr
-attr = attr.replace('-', '_')
-if hasattr(obj, attr):
-return attr
-return None
 
+def _parse_argument(obj, args, modifying):
--- End diff --

But you wouldn't know in advance which function to call until you start 
parsing. I renamed it and did some little cleanups to add clarity.


---

[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131700894
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,17 +146,29 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
-current = ctx
-index = 0
+class ProcessingError(RuntimeError):
+pass
+
 
+class ParsingError(ProcessingError):
+pass
+
+
+def _parse_request(ctx, request):
+request = json.loads(request)
+args = request['args']
+return _parse_arguments(ctx, args)
+
+
+def _parse_arguments(obj, args):
+# Modifying?
 try:
 # TODO: should there be a way to escape "=" in case it is needed 
as real argument?
-equals_index = args.index('=')
+equals_index = args.index('=') # raises ValueError if not found
--- End diff --

It looks more awkward, but +1.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131700830
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -170,101 +178,59 @@ def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-man
 modifying_key = None
 modifying_value = None
 
-num_args = len(args)
-
-while index < num_args:
-arg = args[index]
-
-# Object attribute
-attr = _desugar_attr(current, arg)
-if attr is not None:
-current = getattr(current, attr)
-
-# List entry
-elif isinstance(current, list) and _is_int(arg):
-current = current[int(arg)]
-
-# Dict (or dict-like object) value
-elif hasattr(current, '__getitem__'):
-if modifying and (not arg in current):
-current[arg] = {}
-current = current[arg]
-
-# Call
-elif callable(current):
-if isinstance(current, functools.partial):
-argspec = inspect.getargspec(current.func)
-# Don't count initial args fulfilled by the partial
-spec_args = argspec.args[len(current.args):]
-# Don't count keyword args fulfilled by the partial
-spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
-else:
-argspec = inspect.getargspec(current)
-spec_args = argspec.args
-
-callable_kwargs = {}
-if isinstance(arg, dict):
-# If the first arg is a dict, treat it as our kwargs
-# TODO: what if the first argument really needs to be a 
dict?
-callable_kwargs = arg
-index += 1
-
-if argspec.varargs is not None:
-# Gobble the rest of the args
-callable_args = args[index:]
-else:
-# Take only what we need
-spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
-spec_args_count = len(spec_args)
-if inspect.ismethod(current):
-# Don't count "self" argument
-spec_args_count -= 1
-callable_args = args[index:index + spec_args_count]
-# Note: we might actually have fewer args than the 
args_count, but that could be OK
-# if the user expects subsequent args to have default 
values
-
-args_count = len(callable_args)
-if args_count > 1:
-index += args_count - 1
-
-current = current(*callable_args, **callable_kwargs)
-
-else:
-raise RuntimeError('`{0}` cannot be processed in 
{1}'.format(arg, args))
-
-index += 1
-
-if callable(current):
-current = current()
+# Parse all arguments
+while len(args) > 0:
+obj, args = _parse_argument(obj, args, modifying)
 
 if modifying:
-if hasattr(current, '__setitem__'):
-# Modify dict value
-current[modifying_key] = modifying_value
-else:
+if hasattr(obj, '__setitem__'):
--- End diff --

Cannot, because it wouldn't support dict-like objects like instrumented 
wrapper classes. This is generally better because it supports anything 
that implements __getitem__ and __setitem__, which allows for any kind of 
class.
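
(For illustration, a tiny sketch of the difference; the wrapper class below is a
hypothetical stand-in for an instrumented collection, not actual ARIA code:)

    class InstrumentedDict(object):
        """Dict-like wrapper: no dict inheritance, but item access works."""
        def __init__(self):
            self._data = {}
        def __getitem__(self, key):
            return self._data[key]
        def __setitem__(self, key, value):
            self._data[key] = value

    obj = InstrumentedDict()
    assert not isinstance(obj, dict)   # a strict isinstance check would reject it
    if hasattr(obj, '__setitem__'):    # the duck-typed check accepts it
        obj['modifying_key'] = 'modifying_value'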


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131693627
  
--- Diff: tests/orchestrator/execution_plugin/test_ctx_proxy_server.py ---
@@ -55,61 +55,32 @@ def test_dict_prop_access_set(self, server, ctx):
 assert ctx.node.properties['prop3'][2]['value'] == 'new_value_2'
 assert ctx.node.properties['prop4']['some']['new']['path'] == 
'some_new_value'
 
+def test_dict_prop_access_set_with_list_index(self, server, ctx):
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-08-07 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r131693585
  
--- Diff: examples/hello-world/scripts/stop.sh ---
@@ -16,14 +16,13 @@
 
 set -e
 
-TEMP_DIR="/tmp"
-PYTHON_FILE_SERVER_ROOT=${TEMP_DIR}/python-simple-http-webserver
-PID_FILE="server.pid"
+TEMP_DIR=/tmp
+PYTHON_FILE_SERVER_ROOT="${TEMP_DIR}/python-simple-http-webserver"
+PID_FILE=server.pid
+PID=$(cat "$PYTHON_FILE_SERVER_ROOT/$PID_FILE")
 
-PID=`cat ${PYTHON_FILE_SERVER_ROOT}/${PID_FILE}`
+ctx logger info [ "Shutting down web server, pid = ${PID}." ]
+kill -9 "$PID" || exit $?
 
-ctx logger info "Shutting down web server, pid = ${PID}."
-kill -9 ${PID} || exit $?
-
-ctx logger info "Removing web server root folder: 
${PYTHON_FILE_SERVER_ROOT}."
-rm -rf ${PYTHON_FILE_SERVER_ROOT}
+ctx logger info [ "Removing web server root folder: 
$PYTHON_FILE_SERVER_ROOT." ]
--- End diff --

Heh, not 100% sure I agree with that, but it does look clear. :)


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-31 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130399573
  
--- Diff: aria/storage/collection_instrumentation.py ---
@@ -213,32 +235,32 @@ def __init__(self, mapi, *args, **kwargs):
 """
 The original model.
 
-:param wrapped: model to be instrumented
 :param mapi: MAPI for the wrapped model
+:param wrapped: model to be instrumented
+:param instrumentation: instrumentation dict
+:param instrumentation_kwargs: arguments for instrumentation class
 """
-super(_InstrumentedModel, self).__init__(*args, **kwargs)
+super(_InstrumentedModel, 
self).__init__(instrumentation_kwargs=dict(mapi=mapi),
+ *args, **kwargs)
 self._mapi = mapi
 self._apply_instrumentation()
 
-def __getattr__(self, item):
-return_value = getattr(self._wrapped, item)
-if isinstance(return_value, self._wrapped.__class__):
-return _create_instrumented_model(return_value, self._mapi, 
self._instrumentation)
-if isinstance(return_value, (list, dict)):
-return _create_wrapped_model(return_value, self._mapi, 
self._instrumentation)
-return return_value
-
 def _apply_instrumentation(self):
 for field in self._instrumentation:
+if field.parent.class_ != type(self._wrapped):
--- End diff --

To fix: check by issubclass instead of equality


---


[GitHub] incubator-ariatosca issue #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on the issue:

https://github.com/apache/incubator-ariatosca/pull/188
  
Hey guys, I suggest going over the complex use cases in the [Clearwater 
example](https://github.com/apache/incubator-ariatosca/blob/ARIA-321-clearwater/examples/clearwater/scripts/host/configure.sh).

Basically all the changes in the PR were inspired by those real-world needs 
of `ctx`.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247609
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
+if equals_index == 0:
+raise RuntimeError('The "=" argument cannot be first')
+if equals_index != len(args) - 2:
+raise RuntimeError('The "=" argument must be penultimate')
+modifying = True
+modifying_key = args[-3]
+modifying_value = args[-1]
+args = args[:-3]
+except ValueError:
+modifying = False
+modifying_key = None
+modifying_value = None
+
+num_args = len(args)
+
 while index < num_args:
 arg = args[index]
+
+# Object attribute
 attr = _desugar_attr(current, arg)
-if attr:
+if attr is not None:
 current = getattr(current, attr)
-elif isinstance(current, collections.MutableMapping):
-key = arg
-path_dict = _PathDictAccess(current)
-if index + 1 == num_args:
-# read dict prop by path
-value = path_dict.get(key)
-current = value
-elif index + 2 == num_args:
-# set dict prop by path
-value = args[index + 1]
-path_dict.set(key, value)
-current = None
-else:
-raise RuntimeError('Illegal argument while accessing dict')
-break
+
+# List entry
+elif isinstance(current, list) and _is_int(arg):
+current = current[int(arg)]
+
+# Dict (or dict-like object) value
+elif hasattr(current, '__getitem__'):
+if modifying and (not arg in current):
+current[arg] = {}
+current = current[arg]
+
+# Call
 elif callable(current):
-kwargs = {}
-remaining_args = args[index:]
-if isinstance(remaining_args[-1], collections.MutableMapping):
-kwargs = remaining_args[-1]
-remaining_args = remaining_args[:-1]
-current = current(*remaining_args, **kwargs)
-break
+if isinstance(current, functools.partial):
+argspec = inspect.getargspec(current.func)
+# Don't count initial args fulfilled by the partial
+spec_args = argspec.args[len(current.args):]
+# Don't count keyword args fulfilled by the partial
+spec_args = tuple(a for a in spec_args if a not in 
current.keywords)
+else:
+argspec = inspect.getargspec(current)
+spec_args = argspec.args
+
+callable_kwargs = {}
+if isinstance(arg, dict):
+# If the first arg is a dict, treat it as our kwargs
+# TODO: what if the first argument really needs to be a 
dict?
+callable_kwargs = arg
+index += 1
+
+if argspec.varargs is not None:
+# Gobble the rest of the args
+callable_args = args[index:]
+else:
+# Take only what we need
+spec_args = tuple(a for a in spec_args if a not in 
callable_kwargs)
+spec_args_count = len(spec_args)
+if inspect.ismethod(current):
+# Don't count "self" argument
+spec_args_count -= 1
+callable_args = args[index:index + spec_args_count]
--- End diff --

See comment above: we can't just grab all remaining args, because we might 
need these args for further delving into the returned value of the callable.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247616
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247602
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
+if equals_index == 0:
+raise RuntimeError('The "=" argument cannot be first')
+if equals_index != len(args) - 2:
+raise RuntimeError('The "=" argument must be penultimate')
+modifying = True
+modifying_key = args[-3]
+modifying_value = args[-1]
+args = args[:-3]
+except ValueError:
+modifying = False
+modifying_key = None
+modifying_value = None
+
+num_args = len(args)
+
 while index < num_args:
 arg = args[index]
+
+# Object attribute
 attr = _desugar_attr(current, arg)
-if attr:
+if attr is not None:
 current = getattr(current, attr)
-elif isinstance(current, collections.MutableMapping):
-key = arg
-path_dict = _PathDictAccess(current)
-if index + 1 == num_args:
-# read dict prop by path
-value = path_dict.get(key)
-current = value
-elif index + 2 == num_args:
-# set dict prop by path
-value = args[index + 1]
-path_dict.set(key, value)
-current = None
-else:
-raise RuntimeError('Illegal argument while accessing dict')
-break
+
+# List entry
+elif isinstance(current, list) and _is_int(arg):
+current = current[int(arg)]
+
+# Dict (or dict-like object) value
+elif hasattr(current, '__getitem__'):
+if modifying and (not arg in current):
+current[arg] = {}
+current = current[arg]
+
+# Call
 elif callable(current):
-kwargs = {}
-remaining_args = args[index:]
-if isinstance(remaining_args[-1], collections.MutableMapping):
-kwargs = remaining_args[-1]
-remaining_args = remaining_args[:-1]
-current = current(*remaining_args, **kwargs)
-break
+if isinstance(current, functools.partial):
--- End diff --

+1

The main reason for all of this is that we sometimes need to continue 
accessing properties on the values returned by callables. The previous implementation 
simply "gobbled" all remaining args, but we might need those args. Example 
from Clearwater:

`ctx service get_policy_by_type clearwater.Configuration properties zone`

But generally I suggest you take a look at the various uses of `ctx` in the 
[Clearwater 
example](https://github.com/apache/incubator-ariatosca/blob/ARIA-321-clearwater/examples/clearwater/scripts/host/configure.sh).
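
(To illustrate the mechanism -- the callable and values below are made up and
this is not the actual proxy code -- the idea is to consume only as many args as
the callable declares and keep the rest for walking into its return value:)

    import inspect

    def get_policy_by_type(type_name):  # hypothetical callable reached on the ctx path
        return {'properties': {'zone': 'example.com'}}

    args = ['clearwater.Configuration', 'properties', 'zone']

    spec_args = inspect.getargspec(get_policy_by_type).args
    needed = len(spec_args)              # 1: consume only 'clearwater.Configuration'
    result = get_policy_by_type(*args[:needed])

    for key in args[needed:]:            # 'properties' and 'zone' are still available
        result = result[key]             # keep delving into the returned value
    # result == 'example.com'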






---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247513
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
+try:
+# TODO: should there be a way to escape "=" in case it is needed 
as real argument?
+equals_index = args.index('=')
+if equals_index == 0:
+raise RuntimeError('The "=" argument cannot be first')
+if equals_index != len(args) - 2:
+raise RuntimeError('The "=" argument must be penultimate')
+modifying = True
+modifying_key = args[-3]
+modifying_value = args[-1]
+args = args[:-3]
+except ValueError:
+modifying = False
+modifying_key = None
+modifying_value = None
+
+num_args = len(args)
+
 while index < num_args:
 arg = args[index]
+
+# Object attribute
 attr = _desugar_attr(current, arg)
-if attr:
+if attr is not None:
 current = getattr(current, attr)
-elif isinstance(current, collections.MutableMapping):
-key = arg
-path_dict = _PathDictAccess(current)
-if index + 1 == num_args:
-# read dict prop by path
-value = path_dict.get(key)
-current = value
-elif index + 2 == num_args:
-# set dict prop by path
-value = args[index + 1]
-path_dict.set(key, value)
-current = None
-else:
-raise RuntimeError('Illegal argument while accessing dict')
-break
+
+# List entry
+elif isinstance(current, list) and _is_int(arg):
+current = current[int(arg)]
+
+# Dict (or dict-like object) value
+elif hasattr(current, '__getitem__'):
+if modifying and (not arg in current):
+current[arg] = {}
+current = current[arg]
+
+# Call
 elif callable(current):
-kwargs = {}
-remaining_args = args[index:]
-if isinstance(remaining_args[-1], collections.MutableMapping):
-kwargs = remaining_args[-1]
-remaining_args = remaining_args[:-1]
-current = current(*remaining_args, **kwargs)
-break
+if isinstance(current, functools.partial):
--- End diff --

This happens a lot, actually. I forget which of our tests would showcase 
this exactly, but I think even `ctx logger` is based on partials.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247470
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
 current = ctx
-num_args = len(args)
 index = 0
+
--- End diff --

This was the whole point of this PR.

The previous implementation assumed either one arg or two: one arg meant 
"read" and two args meant "write". But this required us to support dot notation 
so that the one arg could refer to a deep nested attribute. Removing dot 
notation requires a different to support "write".

This PR also supports working on the return value of function calls.
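
(A rough illustration of the new convention -- the values here are invented and
this is not the server implementation itself:)

    # The "=" must be the penultimate argument; everything before the key is the
    # path to walk, one argument at a time, with no dot notation involved.
    args = ['node', 'attributes', 'host_ip', '=', '10.0.0.1']

    if '=' in args:
        if args.index('=') != len(args) - 2:
            raise RuntimeError('The "=" argument must be penultimate')
        path, key, value = args[:-3], args[-3], args[-1]
    else:
        path, key, value = args, None, None

    # path  == ['node', 'attributes']   -- walked to reach the container
    # key   == 'host_ip'                -- the key to set on it
    # value == '10.0.0.1'               -- the new value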


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247255
  
--- Diff: aria/orchestrator/execution_plugin/ctx_proxy/server.py ---
@@ -150,43 +150,101 @@ def __exit__(self, *args, **kwargs):
 self.close()
 
 
-def _process_ctx_request(ctx, args):
+def _process_ctx_request(ctx, args): # pylint: 
disable=too-many-branches,too-many-statements
--- End diff --

Of course we can, but it doesn't seem like a very long function at all. This is 
one of those cases where PyLint is not being helpful.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247244
  
--- Diff: aria/storage/collection_instrumentation.py ---
@@ -213,32 +235,32 @@ def __init__(self, mapi, *args, **kwargs):
 """
 The original model.
 
-:param wrapped: model to be instrumented
 :param mapi: MAPI for the wrapped model
+:param wrapped: model to be instrumented
+:param instrumentation: instrumentation dict
+:param instrumentation_kwargs: arguments for instrumentation class
 """
-super(_InstrumentedModel, self).__init__(*args, **kwargs)
+super(_InstrumentedModel, 
self).__init__(instrumentation_kwargs=dict(mapi=mapi),
+ *args, **kwargs)
 self._mapi = mapi
 self._apply_instrumentation()
 
-def __getattr__(self, item):
-return_value = getattr(self._wrapped, item)
-if isinstance(return_value, self._wrapped.__class__):
-return _create_instrumented_model(return_value, self._mapi, 
self._instrumentation)
-if isinstance(return_value, (list, dict)):
-return _create_wrapped_model(return_value, self._mapi, 
self._instrumentation)
-return return_value
-
 def _apply_instrumentation(self):
 for field in self._instrumentation:
+if field.parent.class_ != type(self._wrapped):
--- End diff --

It doesn't seem to break this in tests.

Without this check, the only check is by field name, which caused a lot of 
breakage. You would get node properties being instrumented for capability 
properties. Again, consider the use case of `ctx node capabilities host 
properties username`.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247176
  
--- Diff: aria/orchestrator/workflows/api/task.py ---
@@ -140,13 +140,18 @@ def __init__(self,
 self.arguments = modeling_utils.merge_parameter_values(arguments,

operation.arguments,

model_cls=models.Argument)
-if getattr(self.actor, 'outbound_relationships', None) is not None:
+
+actor = self.actor
+if hasattr(actor, '_wrapped'):
--- End diff --

Because you often get instrumented/wrapped models here, and they would fail 
the `isinstance` check. (With a duck typing check they succeed, but I thought 
duck typing was bad.)


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247158
  
--- Diff: aria/orchestrator/workflows/api/task.py ---
@@ -140,13 +140,18 @@ def __init__(self,
 self.arguments = modeling_utils.merge_parameter_values(arguments,

operation.arguments,

model_cls=models.Argument)
-if getattr(self.actor, 'outbound_relationships', None) is not None:
--- End diff --

Why not import models? This should have been explained in comments. Duck 
typing seems very inaccurate and prone to errors.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx proxy access

2017-07-30 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/188#discussion_r130247107
  
--- Diff: aria/orchestrator/context/common.py ---
@@ -38,10 +38,27 @@ class BaseContext(object):
 """
 
 INSTRUMENTATION_FIELDS = (
+modeling.models.Service.inputs,
--- End diff --

Because you should be able to do things like: `ctx node capabilities host 
properties username`

The ctx proxy should let you access all models.


---


[GitHub] incubator-ariatosca pull request #188: ARIA-324 Refactor ctx access

2017-07-27 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/188

ARIA-324 Refactor ctx access

Our previous use of "." to delimit nested dict keys was wrong (keys
could have a ".") and inflexible. The new implementation uses subsequent
args to move into the dict.

The same format can now be used to access object attributes.

This commit also changes how values are set: we must now use "=" as the
penultimate argument, with the new value following.

Also fixed: callables will now "grab" the number of args they need
instead of all remaining args, making it possible to do further
inspection on the returned value from the callable. To allow for this,
kwargs are now expected as the first arg rather than the last.

Furthermore, this commit fixes a bad null check in the ctx client, and
also allows it to retrieve Unicode data.
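
For illustration, a rough sketch of the argument-walking idea -- not the
actual ctx proxy code, and `walk_args` is a made-up helper name -- showing how
successive args descend into dicts/attributes and how a penultimate "=" sets
a value:

```python
def walk_args(root, args):
    """Walk successive args into nested dicts or object attributes.

    If the penultimate arg is '=', the last arg is assigned instead of read.
    Illustrative only; the real proxy also handles callables and kwargs.
    """
    obj = root
    args = list(args)
    while args:
        if len(args) == 3 and args[1] == '=':
            # setter form: <key> = <value>
            key, _, value = args
            if isinstance(obj, dict):
                obj[key] = value
            else:
                setattr(obj, key, value)
            return None
        key = args.pop(0)
        obj = obj[key] if isinstance(obj, dict) else getattr(obj, key)
    return obj


node = {'capabilities': {'host': {'properties': {'username': 'admin'}}}}
assert walk_args(node, ['capabilities', 'host', 'properties', 'username']) == 'admin'

walk_args(node, ['capabilities', 'host', 'properties', 'username', '=', 'root'])
assert node['capabilities']['host']['properties']['username'] == 'root'
```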

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-324-refactor-ctx-access

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/188.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #188


commit 956e98b20874b99e80b12849b3c10dc869a4b4f6
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-07-27T22:58:17Z

ARIA-324 Refactor ctx proxy access

Our previous use of "." to delimit nested dict keys was wrong (keys
could have a ".") and inflexible. The new implementation uses subsequent
args to move into the dict. The same format can now be used to access
object attributes.

This commit also changes how values are set: we must now use "=" as the
penultimate argument, with the new value following.

Also fixed: callables will now "grab" the number of args they need
instead of all remaining args, making it possible to do further
inspection on the returned value from the callable. To allow for this,
kwargs are now expected as the first arg rather than the last.

Relatedly, this commit instruments all parameter fields from all models
and fixes related bugs in the instrumentation implementation.

Furthermore, this commit fixes a bad null check in the ctx client, and
also allows it to retrieve Unicode data.




---


[GitHub] incubator-ariatosca pull request #183: ARIA-76 Parallelize PyTest

2017-07-12 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/183#discussion_r126884450
  
--- Diff: .travis.yml ---
@@ -23,11 +23,12 @@ env:
 - TOX_ENV=py26e2e
 - TOX_ENV=py27ssh
 - TOX_ENV=py26ssh
+- TOX_ENV=docs
 install:
   - pip install --upgrade pip
   - pip install --upgrade setuptools
   - pip install tox
 script:
   - pip --version
   - tox --version
-  - tox -e $TOX_ENV
+  - PYTEST_PROCESSES=1 tox -e $TOX_ENV
--- End diff --

Yes AppVeyor and yes concurrency. :) AppVeyor works fine with the `-n auto` 
defaults, likely because it's running in a restricted VM. The problem is only 
with Travis, because the OS reports a very large number of CPUs (12 in our 
case) but the container is extremely restricted, and thus we get failures.
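
For illustration only: `-n auto` in pytest-xdist sizes its worker pool from
the CPU count the OS reports, so on Travis it sees the host's CPUs rather than
the container's actual share. How `PYTEST_PROCESSES` gets consumed is an
assumption here, but the idea is to cap the pool via the environment:

```python
import multiprocessing
import os

# What '-n auto' roughly amounts to: one worker per reported CPU. On Travis
# containers this reports the host's CPU count (e.g. 12), far more than the
# container can actually run.
reported_cpus = multiprocessing.cpu_count()

# Capping via an environment variable (as in the .travis.yml change above)
# keeps the worker count sane on constrained CI containers.
workers = int(os.environ.get('PYTEST_PROCESSES', reported_cpus))
print('xdist workers: {0} (reported CPUs: {1})'.format(workers, reported_cpus))
```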


---


[GitHub] incubator-ariatosca pull request #183: ARIA-76 Parallelize PyTest

2017-07-12 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/183#discussion_r126884207
  
--- Diff: .travis.yml ---
@@ -23,11 +23,12 @@ env:
 - TOX_ENV=py26e2e
 - TOX_ENV=py27ssh
 - TOX_ENV=py26ssh
+- TOX_ENV=docs
 install:
   - pip install --upgrade pip
   - pip install --upgrade setuptools
   - pip install tox
 script:
   - pip --version
   - tox --version
-  - tox -e $TOX_ENV
+  - PYTEST_PROCESSES=1 tox -e $TOX_ENV
--- End diff --

+1


---


[GitHub] incubator-ariatosca pull request #183: ARIA-76 Parallelize PyTest

2017-07-11 Thread tliron
Github user tliron closed the pull request at:

https://github.com/apache/incubator-ariatosca/pull/183


---


[GitHub] incubator-ariatosca pull request #183: ARIA-76 Parallelize PyTest

2017-07-11 Thread tliron
GitHub user tliron reopened a pull request:

https://github.com/apache/incubator-ariatosca/pull/183

ARIA-76 Parallelize PyTest

Use the PyTest xdist plugin to parallelize tests in boxed subprocesses.
Through benchmarking we discovered that parallelizing on the number of
CPU cores ("-n auto") provides the best times.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-76-parallel-pytest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/183.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #183


commit b2310536e00894ec781cd12ee20a620257cbb437
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-07-10T15:06:05Z

ARIA-76 Parallelize PyTest

Use the PyTest xdist plugin to parallelize tests in boxed subprocesses.
Through benchmarking we discovered that parallelizing on the number of
CPU cores ("-n auto") provides the best times.




---


[GitHub] incubator-ariatosca pull request #184: ARIA-307 Automate release process

2017-07-11 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/184#discussion_r126629909
  
--- Diff: release.sh ---
@@ -0,0 +1,235 @@
+#!/bin/bash
--- End diff --

Missing Apache license header


---


[GitHub] incubator-ariatosca pull request #184: ARIA-307 Automate release process

2017-07-11 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/184#discussion_r126630635
  
--- Diff: release.sh ---
@@ -0,0 +1,235 @@
+#!/bin/bash
--- End diff --

I think it would help readers to have a short comment at the beginning of 
this file explaining its purpose. Since there are a lot of Apache-specific 
things here, it might also be nice to link to the relevant Apache regulations 
and documentation. This will help others maintain the script and fix issues 
that might arise.


---


[GitHub] incubator-ariatosca pull request #183: ARIA-76 Parallelize PyTest

2017-07-10 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/183

ARIA-76 Parallelize PyTest

Use the PyTest xdist plugin to parallelize tests in boxed subprocesses.
Through benchmarking we discovered that parallelizing on the number of
CPU cores ("-n auto") provides the best times.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-76-parallel-pytest

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/183.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #183


commit b2310536e00894ec781cd12ee20a620257cbb437
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-07-10T15:06:05Z

ARIA-76 Parallelize PyTest

Use the PyTest xdist plugin to parallelize tests in boxed subprocesses.
Through benchmarking we discovered that parallelizing on the number of
CPU cores ("-n auto") provides the best times.




---


[GitHub] incubator-ariatosca pull request #181: ARIA-103 Remove dependency on Clint

2017-07-10 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/181

ARIA-103 Remove dependency on Clint

We no longer require this third-party library, instead the utils/console
module uses the existing cli/color module.

This commit also fixes the cli/color module to properly support Unicode,
and also properly deinitialize Colorama in Windows.

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-103-remove-clint-dependency

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/181.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #181


commit d81d734da451d8ffa82c61830947589e378b76e5
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-07-10T09:28:23Z

ARIA-103 Remove dependency on Clint

We no longer require this third-party library, instead the utils/console
module uses the existing cli/color module.

This commit also fixes the cli/color module to properly support Unicode,
and also properly deinitialize Colorama in Windows.




---


[GitHub] incubator-ariatosca pull request #143: ARIA-254 Create multiple nodes per te...

2017-07-10 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/143#discussion_r126353998
  
--- Diff: 
extensions/aria_extension_tosca/simple_v1_0/modeling/capabilities.py ---
@@ -162,6 +164,30 @@ def 
convert_capability_from_definition_to_assignment(context, presentation, cont
 return CapabilityAssignment(name=presentation._name, raw=raw, 
container=container)
 
 
+def merge_capability_definition(context, presentation, 
capability_definition,
+from_capability_definition):
--- End diff --

Because I prefer it to fit the style of the rest of the code right now, to 
keep it consistent. I would like to change it, but all at once, and after 
giving it some more thought.


---


[GitHub] incubator-ariatosca pull request #143: ARIA-254 Create multiple nodes per te...

2017-07-10 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/143#discussion_r126353859
  
--- Diff: aria/modeling/service_template.py ---
@@ -690,19 +666,104 @@ def dump(self):
 console.puts(context.style.meta(self.description))
 with context.style.indent:
 console.puts('Type: 
{0}'.format(context.style.type(self.type.name)))
-console.puts('Instances: {0:d} ({1:d}{2})'.format(
-self.default_instances,
-self.min_instances,
-' to {0:d}'.format(self.max_instances)
-if self.max_instances is not None
-else ' or more'))
 utils.dump_dict_values(self.properties, 'Properties')
 utils.dump_dict_values(self.attributes, 'Attributes')
 utils.dump_interfaces(self.interface_templates)
 utils.dump_dict_values(self.artifact_templates, 'Artifact 
templates')
 utils.dump_dict_values(self.capability_templates, 'Capability 
templates')
 utils.dump_list_values(self.requirement_templates, 
'Requirement templates')
 
+@property
+def next_index(self):
+"""
+Next available node index.
+
+:returns: node index
+:rtype: int
+"""
+
+max_index = 0
+if self.nodes:
+max_index = max(int(n.name.rsplit('_', 1)[-1]) for n in 
self.nodes)
+return max_index + 1
+
+@property
+def next_name(self):
+"""
+Next available node name.
+
+:returns: node name
+:rtype: basestring
+"""
+
+return '{name}_{index}'.format(name=self.name, 
index=self.next_index)
+
+@property
+def scaling(self):
+scaling = {}
+
+def extract_property(properties, name):
+if name in scaling:
+return
+prop = properties.get(name)
+if (prop is not None) and (prop.type_name == 'integer') and 
(prop.value is not None):
+scaling[name] = prop.value
+
+def extract_properties(properties):
+extract_property(properties, 'min_instances')
+extract_property(properties, 'max_instances')
+extract_property(properties, 'default_instances')
+
+def default_property(name, value):
+if name not in scaling:
+scaling[name] = value
+
+# From our scaling capabilities
+for capability_template in self.capability_templates.itervalues():
+if capability_template.type.role == 'scaling':
+extract_properties(capability_template.properties)
+
+# From service scaling policies
+for policy_template in 
self.service_template.policy_templates.itervalues():
+if policy_template.type.role == 'scaling':
+if policy_template.is_for_node_template(self.name):
+extract_properties(policy_template.properties)
+
+# Defaults
+default_property('min_instances', 0)
+default_property('max_instances', 1)
+default_property('default_instances', 1)
+
+# Validate
+# pylint: disable=too-many-boolean-expressions
+if (scaling['min_instances'] < 0) or \
--- End diff --

I see no difference in readability between this and using `or`; it seems to 
me to be just bowing to PyLint's weirdness... but I don't mind switching.


---


[GitHub] incubator-ariatosca pull request #143: ARIA-254 Create multiple nodes per te...

2017-07-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/143#discussion_r125756533
  
--- Diff: aria/modeling/service_template.py ---
@@ -690,19 +666,104 @@ def dump(self):
 console.puts(context.style.meta(self.description))
 with context.style.indent:
 console.puts('Type: 
{0}'.format(context.style.type(self.type.name)))
-console.puts('Instances: {0:d} ({1:d}{2})'.format(
-self.default_instances,
-self.min_instances,
-' to {0:d}'.format(self.max_instances)
-if self.max_instances is not None
-else ' or more'))
 utils.dump_dict_values(self.properties, 'Properties')
 utils.dump_dict_values(self.attributes, 'Attributes')
 utils.dump_interfaces(self.interface_templates)
 utils.dump_dict_values(self.artifact_templates, 'Artifact 
templates')
 utils.dump_dict_values(self.capability_templates, 'Capability 
templates')
 utils.dump_list_values(self.requirement_templates, 
'Requirement templates')
 
+@property
+def next_index(self):
+"""
+Next available node index.
+
+:returns: node index
+:rtype: int
+"""
+
+max_index = 0
+if self.nodes:
+max_index = max(int(n.name.rsplit('_', 1)[-1]) for n in 
self.nodes)
+return max_index + 1
+
+@property
+def next_name(self):
+"""
+Next available node name.
+
+:returns: node name
+:rtype: basestring
+"""
+
+return '{name}_{index}'.format(name=self.name, 
index=self.next_index)
+
+@property
+def scaling(self):
+scaling = {}
+
+def extract_property(properties, name):
+if name in scaling:
+return
+prop = properties.get(name)
+if (prop is not None) and (prop.type_name == 'integer') and 
(prop.value is not None):
+scaling[name] = prop.value
+
+def extract_properties(properties):
+extract_property(properties, 'min_instances')
+extract_property(properties, 'max_instances')
+extract_property(properties, 'default_instances')
+
+def default_property(name, value):
+if name not in scaling:
+scaling[name] = value
+
+# From our scaling capabilities
+for capability_template in self.capability_templates.itervalues():
+if capability_template.type.role == 'scaling':
+extract_properties(capability_template.properties)
+
+# From service scaling policies
+for policy_template in 
self.service_template.policy_templates.itervalues():
+if policy_template.type.role == 'scaling':
+if policy_template.is_for_node_template(self.name):
+extract_properties(policy_template.properties)
+
+# Defaults
+default_property('min_instances', 0)
+default_property('max_instances', 1)
+default_property('default_instances', 1)
+
+# Validate
+# pylint: disable=too-many-boolean-expressions
+if (scaling['min_instances'] < 0) or \
--- End diff --

I'm not sure how `any` would work here. How would you write this?
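
For what it's worth, one way it could be written with `any` -- the individual
conditions below are guessed from the property names, since the full check is
truncated in the diff, so treat this as a sketch rather than the actual code:

```python
def _is_invalid(scaling):
    # Hypothetical reconstruction of the validation as a single any() over a
    # tuple of boolean conditions; no backslash continuations needed.
    return any((
        scaling['min_instances'] < 0,
        scaling['max_instances'] < 0,
        scaling['default_instances'] < 0,
        scaling['max_instances'] < scaling['min_instances'],
        scaling['default_instances'] < scaling['min_instances'],
        scaling['default_instances'] > scaling['max_instances'],
    ))


assert not _is_invalid({'min_instances': 0, 'max_instances': 1, 'default_instances': 1})
assert _is_invalid({'min_instances': 2, 'max_instances': 1, 'default_instances': 1})
```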


---


[GitHub] incubator-ariatosca pull request #143: ARIA-254 Create multiple nodes per te...

2017-07-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/143#discussion_r125755887
  
--- Diff: 
extensions/aria_extension_tosca/simple_v1_0/modeling/capabilities.py ---
@@ -162,6 +164,30 @@ def 
convert_capability_from_definition_to_assignment(context, presentation, cont
 return CapabilityAssignment(name=presentation._name, raw=raw, 
container=container)
 
 
+def merge_capability_definition(context, presentation, 
capability_definition,
+from_capability_definition):
+raw_properties = OrderedDict()
+
+# Merge properties from type
+from_property_defintions = from_capability_definition.properties
+merge_raw_parameter_definitions(context, presentation, raw_properties, 
from_property_defintions,
+'properties')
+
+# Merge our properties
+merge_raw_parameter_definitions(context, presentation, raw_properties,
+capability_definition.properties, 
'properties')
--- End diff --

Not exactly. This is a utility method called by 
`get_inherited_capability_definitions`. Its goal is to merge the current node 
type's capability definitions over the inherited parent node type's capability 
definitions. "from" here is the parent's.
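
A tiny sketch of the merge order (simplified to plain dicts; the real code
merges raw presentation dicts via `merge_raw_parameter_definitions`):

```python
from collections import OrderedDict


def merge_properties(parent_properties, own_properties):
    # The parent's ("from") definitions go in first, then the current node
    # type's definitions are merged over them, so the child wins on conflict.
    merged = OrderedDict()
    merged.update(parent_properties)
    merged.update(own_properties)
    return merged


parent = OrderedDict([('min_instances', {'default': 0}),
                      ('max_instances', {'default': 1})])
child = OrderedDict([('max_instances', {'default': 10})])

merged = merge_properties(parent, child)
assert merged['min_instances'] == {'default': 0}   # inherited from the parent
assert merged['max_instances'] == {'default': 10}  # overridden by the child
```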


---


[GitHub] incubator-ariatosca pull request #143: ARIA-254 Create multiple nodes per te...

2017-07-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/143#discussion_r125755316
  
--- Diff: aria/modeling/service_template.py ---
@@ -690,19 +666,104 @@ def dump(self):
 console.puts(context.style.meta(self.description))
 with context.style.indent:
 console.puts('Type: 
{0}'.format(context.style.type(self.type.name)))
-console.puts('Instances: {0:d} ({1:d}{2})'.format(
-self.default_instances,
-self.min_instances,
-' to {0:d}'.format(self.max_instances)
-if self.max_instances is not None
-else ' or more'))
 utils.dump_dict_values(self.properties, 'Properties')
 utils.dump_dict_values(self.attributes, 'Attributes')
 utils.dump_interfaces(self.interface_templates)
 utils.dump_dict_values(self.artifact_templates, 'Artifact 
templates')
 utils.dump_dict_values(self.capability_templates, 'Capability 
templates')
 utils.dump_list_values(self.requirement_templates, 
'Requirement templates')
 
+@property
+def next_index(self):
--- End diff --

Actually, this was intentional -- I thought it could be useful for outside 
callers. But no problem, I changed this and also `_next_name` similarly.


---


[GitHub] incubator-ariatosca pull request #143: ARIA-254 Create multiple nodes per te...

2017-07-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/143#discussion_r125754664
  
--- Diff: 
extensions/aria_extension_tosca/simple_v1_0/modeling/capabilities.py ---
@@ -162,6 +164,30 @@ def 
convert_capability_from_definition_to_assignment(context, presentation, cont
 return CapabilityAssignment(name=presentation._name, raw=raw, 
container=container)
 
 
+def merge_capability_definition(context, presentation, 
capability_definition,
+from_capability_definition):
--- End diff --

+1, but I suggest refactoring some other time


---


[GitHub] incubator-ariatosca pull request #143: ARIA-254 Create multiple nodes per te...

2017-07-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/143#discussion_r125754130
  
--- Diff: 
extensions/aria_extension_tosca/simple_v1_0/modeling/capabilities.py ---
@@ -162,6 +164,30 @@ def 
convert_capability_from_definition_to_assignment(context, presentation, cont
 return CapabilityAssignment(name=presentation._name, raw=raw, 
container=container)
 
 
+def merge_capability_definition(context, presentation, 
capability_definition,
+from_capability_definition):
+raw_properties = OrderedDict()
+
+# Merge properties from type
+from_property_defintions = from_capability_definition.properties
+merge_raw_parameter_definitions(context, presentation, raw_properties, 
from_property_defintions,
+'properties')
+
+# Merge our properties
+merge_raw_parameter_definitions(context, presentation, raw_properties,
+capability_definition.properties, 
'properties')
+
+if raw_properties:
+capability_definition._raw['properties'] = raw_properties
+capability_definition._reset_method_cache()
+
+# Merge occurrences
+occurrences = from_capability_definition._raw.get('occurrences')
+if (occurrences is not None) and 
(capability_definition._raw.get('occurrences') is None):
--- End diff --

I do not consider them redundant at all. Nobody should have to remember 
operator precedence; you should be able to tell at a glance. I recommend 
everybody follow this practice in every language. :)


---


[GitHub] incubator-ariatosca pull request #143: ARIA-254 Create multiple nodes per te...

2017-07-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/143#discussion_r125753620
  
--- Diff: 
extensions/aria_extension_tosca/profiles/tosca-simple-1.0/capabilities.yaml ---
@@ -132,6 +132,7 @@ capability_types:
   specification: tosca-simple-1.0
   specification_section: 5.4.10
   specification_url: 
'http://docs.oasis-open.org/tosca/TOSCA-Simple-Profile-YAML/v1.0/cos01/TOSCA-Simple-Profile-YAML-v1.0-cos01.html#DEFN_TYPE_CAPABILITIES_SCALABLE'
+  role: scaling
--- End diff --

The `role` metadata is the way to specify special behavior for types in 
ARIA. This is much safer than checking if name == `aria.Scaling`, or even if it 
inherits from that. Roles are inherited with the type, so subtypes will have 
the same role (unless they override it). We use this in a few cases, for 
example there is a `host` role for Compute and for HostedOn relationships.


---


[GitHub] incubator-ariatosca pull request #145: ARIA-260 Send interface inputs as arg...

2017-07-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/145#discussion_r125753009
  
--- Diff: tests/instantiation/test_configuration.py ---
@@ -0,0 +1,175 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+
+from tests.parser.service_templates import consume_literal
+
+
+TEMPLATE = """
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+interface_types:
+  MyInterface:
+derived_from: tosca.interfaces.Root   
+inputs:
+  interface_string:
+type: string
+default: value1
+  interface_integer:
+type: integer
+default: 1
+operation:
+  implementation: operation.sh
+  inputs:
+operation_string:
+  type: string
+  default: value2
+operation_integer:
+  type: integer
+  default: 2
+interface_integer: # will override interface input
+  type: integer
+  default: 3
+
+node_types:
+  LocalNode:
+derived_from: tosca.nodes.Root
+interfaces:
+  MyInterface:
+type: MyInterface
+
+  RemoteNode:
+derived_from: tosca.nodes.Compute
+interfaces:
+  MyInterface:
+type: MyInterface
+
+topology_template:
+  node_templates:
+local_node:
+  type: LocalNode
+
+remote_node:
+  type: RemoteNode   
+"""
+
+
+BROKEN_TEMPLATE = """
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+interface_types:
+  MyInterface:
+derived_from: tosca.interfaces.Root   
+inputs:
+  ctx: # reserved name
+type: string
+default: value1
+  interface_integer:
+type: integer
+default: 1
+operation:
+  implementation: operation.sh
+  inputs:
+operation_string:
+  type: string
+  default: value2
+toolbelt: # reserved name
+  type: integer
+  default: 2
+
+node_types:
+  LocalNode:
+derived_from: tosca.nodes.Root
+interfaces:
+  MyInterface:
+type: MyInterface
+
+topology_template:
+  node_templates:
+local_node:
+  type: LocalNode
+"""
+
+
+@pytest.fixture
+def service():
+context, _ = consume_literal(TEMPLATE)
+yield context.modeling.instance
+
+
+@pytest.fixture
+def broken_service_issues():
+context, _ = consume_literal(BROKEN_TEMPLATE, no_issues=False)
+yield context.validation.issues
+
+
+def _values(the_dict):
--- End diff --

Done. `aria.modeling.utils.parameters_as_values`


---


[GitHub] incubator-ariatosca pull request #145: ARIA-260 Send interface inputs as arg...

2017-07-05 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/145#discussion_r125752870
  
--- Diff: tests/instantiation/test_configuration.py ---
@@ -0,0 +1,175 @@
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import pytest
+
+from tests.parser.service_templates import consume_literal
+
+
+TEMPLATE = """
+tosca_definitions_version: tosca_simple_yaml_1_0
+
+interface_types:
--- End diff --

On second look, I disagree with your suggested test cases. The 
`instantiation` testing is supposed to test instantiation, and what you're 
suggesting is parsing. Especially when we introduce a real instantiation module 
soon, I think such tests would look very strange here.

We obviously need to test the things you mention, but they are part of 
parser testing, a huge undertaking which we have yet to begin...


---


[GitHub] incubator-ariatosca pull request #168: ARIA-286 Sphinx documentation for cod...

2017-06-28 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/168

ARIA-286 Sphinx documentation for code and CLI



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-286-sphinx-documentation

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/168.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #168


commit 3f09ecdeb45966f2f9e671ebf23892a9f6bb5c4b
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-06-23T02:13:28Z

ARIA-286 Sphinx documentation for code and CLI




---


[GitHub] incubator-ariatosca pull request #146: ARIA-199 Add "services outputs" CLI c...

2017-06-06 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/146#discussion_r120391233
  
--- Diff: 
tests/resources/service-templates/tosca-simple-1.0/node-cellar/node-cellar.yaml 
---
@@ -302,6 +306,14 @@ topology_template:
 capabilities:
   app_endpoint: [ loadbalancer, client ]
 
+  outputs:
--- End diff --

Done.


---


[GitHub] incubator-ariatosca pull request #146: ARIA-199 Add "services outputs" CLI c...

2017-06-02 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/146

ARIA-199 Add "services outputs" CLI command



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-199-cli-service-output

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/146.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #146


commit 79b94e1f9aed3be3176b96870f7ea472146032c3
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-06-02T19:20:28Z

ARIA-199 Add "services outputs" CLI command




---


[GitHub] incubator-ariatosca pull request #145: ARIA-260 Send interface inputs as arg...

2017-06-02 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/145

ARIA-260 Send interface inputs as arguments



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-260-send-interface-inputs

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/145.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #145


commit a733be67b5c9770081d94c295c47803dbd7f3ef4
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-06-02T18:35:21Z

ARIA-260 Send interface inputs as arguments




---


[GitHub] incubator-ariatosca pull request #144: ARIA-64 Remove PyYAML dependency

2017-06-02 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/144

ARIA-64 Remove PyYAML dependency



You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-64-remove-pyyaml

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/144.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #144


commit 62d3b8d7c2b7bcacd6d456c971e6c97abd8378f1
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-06-02T17:39:00Z

ARIA-64 Remove PyYAML dependency




---


[GitHub] incubator-ariatosca pull request #143: ARIA-254 Create of multiple nodes per...

2017-06-01 Thread tliron
GitHub user tliron opened a pull request:

https://github.com/apache/incubator-ariatosca/pull/143

ARIA-254 Create of multiple nodes per template

* New aria.Scaling policy (and "scaling" role)
* NodeTemplate model no longer stores scaling values (default_instances,
etc.) but instead fetches them from applicable scaling policies
* Some code cleanup

You can merge this pull request into a Git repository by running:

$ git pull https://github.com/apache/incubator-ariatosca 
ARIA-254-multiple-nodes-per-template

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/incubator-ariatosca/pull/143.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #143


commit acfc7f461b7ee6f01b6095d8eb4e28554df8928c
Author: Tal Liron <tal.li...@gmail.com>
Date:   2017-06-01T19:17:17Z

ARIA-254 Create of multiple nodes per template

* New aria.Scaling policy (and "scaling" role)
* NodeTemplate model no longer stores scaling values (default_instances,
etc.) but instead fetches them from applicable scaling policies
* Some code cleanup




---


[GitHub] incubator-ariatosca pull request #138: ARIA-149 Enhance operation configurat...

2017-05-31 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/138#discussion_r119446847
  
--- Diff: tests/orchestrator/test_workflow_runner.py ---
@@ -259,8 +258,9 @@ def _setup_mock_workflow_in_service(request, 
inputs=None):
 workflow = models.Operation(
 name=mock_workflow_name,
 service=service,
-implementation='workflow.mock_workflow',
-inputs=inputs or {})
+function='workflow.mock_workflow',
+inputs=inputs or {},
+arguments=inputs or {})
--- End diff --

No, not in the way this particular test works, since it creates an 
operation model directly and doesn't call `configure()`.


---


[GitHub] incubator-ariatosca pull request #138: ARIA-149 Enhance operation configurat...

2017-05-31 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/138#discussion_r119446252
  
--- Diff: aria/modeling/exceptions.py ---
@@ -57,3 +57,9 @@ class UndeclaredParametersException(ParameterException):
 """
 ARIA modeling exception: Undeclared parameters have been provided.
 """
+
+
+class ForbiddenParameterNamesException(ParameterException):
--- End diff --

Changing to "Reserved".


---


[GitHub] incubator-ariatosca pull request #138: ARIA-149 Enhance operation configurat...

2017-05-31 Thread tliron
Github user tliron commented on a diff in the pull request:

https://github.com/apache/incubator-ariatosca/pull/138#discussion_r119446194
  
--- Diff: extensions/aria_extension_tosca/simple_v1_0/modeling/functions.py 
---
@@ -69,7 +69,7 @@ def __evaluate__(self, container_holder):
 e, final = evaluate(e, final, container_holder)
 if e is not None:
 value.write(unicode(e))
-value = value.getvalue()
+value = value.getvalue() or u''
--- End diff --

We default to Unicode everywhere possible in values -- this avoids thorny 
encoding problems with them further down the road.
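
A small Python 2 illustration of the case this guards against, assuming the
buffer here is a `StringIO` that was never written to:

```python
# -*- coding: utf-8 -*-
from StringIO import StringIO

value = StringIO()

# Nothing was written, so getvalue() returns a byte string '' rather than u''.
raw = value.getvalue()
assert isinstance(raw, str) and not isinstance(raw, unicode)

# Defaulting to u'' keeps the result consistently unicode, so later code that
# concatenates it with unicode data never hits implicit decoding surprises.
result = value.getvalue() or u''
assert isinstance(result, unicode)
```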


---

