Re: [Proposal] Extension of the Apex configuration to add dependent jar files in runtime.

2018-02-05 Thread Sergey Golovko
The Apex platform's system configuration is hardcoded in the Apex-core
Java code, so the platform has no flexible system configuration. Plugins
can only cover the run-time configuration of Apex applications, which means
they cannot be part of the system configuration of the platform.

Thanks,
Sergey


On Mon, Feb 5, 2018 at 9:39 AM, Vlad Rozov <vro...@apache.org> wrote:

> Apex platform dependencies are already covered with the compile time check
> in place. Platform extensions are covered by plugins.
>
> Thank you,
>
> Vlad
>
>
> On 2/4/18 08:56, Sergey Golovko wrote:
>
>> The usage of any Apex attributes is the generic configuration of Apex
>> applications on the end-user level. But the subject of the discussion is
>> to
>> provide the system level configuration of Apex applications. I guess the
>> having of the two different layers of the configuration (system and
>> end-user) is a generic approach for all good designed tools.
>>
>> Thanks,
>> Sergey
>>
>>
>> On Sat, Feb 3, 2018 at 10:02 AM, Pramod Immaneni <pra...@datatorrent.com>
>> wrote:
>>
>>> Yes generic in the Attribute class
>>>
>>>> On Feb 3, 2018, at 10:00 AM, Vlad Rozov <vro...@apache.org> wrote:
>>>>
>>>> +1 assuming that support for merge/override will be generic for all
>>>> attributes that support list/set of values and not limited to
>>>> LIBRARY_JARS attribute only.
>>>>
>>>> Thank you,
>>>>
>>>> Vlad
>>>>
>>>> On 2/3/18 09:13, Pramod Immaneni wrote:
>>>>
>>>>> I too agree that the discussion has veered off from the original topic.
>>>>> Why can't LIBRARY_JARS be used for this, albeit with a minor improvement?
>>>>> Currently, our attribute layering is an override, so if you have an
>>>>> attribute that is specified as apex.application.. attr. it overrides
>>>>> apex.attr. for that application. What if were to expand the attribute
>>>>> definition to allow for the specification of how the layering of
>>>>> attributes will be combined, override being one option, merge being
>>>>> another with these being implemented with a combiner interface? This way
>>>>> a set of common jars could be specified using dt.attr.LIBRARY_JARS and
>>>>> applications can still add extra jars on top.
>>>>>
>>>>> On Fri, Feb 2, 2018 at 6:32 PM, Vlad Rozov <vro...@apache.org> wrote:
>>>>>
>>>>>> IMO, support for Kubernetes, Docker images, Mesos and anything outside
>>>>>> of Yarn deployments is a topic by itself and design for such support
>>>>>> needs to be discussed. I do not want to propose any specific design,
>>>>>> but assume that logic to create proper execution environment would be
>>>>>> coded into Apex client. Whether it (hardcoded logic to create an
>>>>>> execution environment) can be expressed simply as a list of dependent
>>>>>> classes or jars is at minimum questionable. Until design is proposed
>>>>>> and agreed upon, I'd prefer to use plugins for the subject.
>>>>>>
>>>>>> Thank you,
>>>>>>
>>>>>> Vlad
>>>>>>
>>>>>>
>>>>>> On 2/2/18 13:17, Sanjay Pujare wrote:
>>>>>>
>>>>>>> In cases where we have an "über" docker image containing support for
>>>>>>> multiple execution environments it might be useful for the Apex core
>>>>>>> to infer what kind of execution environment to use for a particular
>>>>>>> invocation (say based on configuration values/environment variables)
>>>>>>> and in that case the core will load the corresponding libraries. And I

Re: [Proposal] Extension of the Apex configuration to add dependent jar files in runtime.

2018-02-04 Thread Sergey Golovko
Using Apex attributes is the generic way to configure Apex applications at
the end-user level. But the subject of this discussion is to provide
system-level configuration of Apex applications. I believe that having two
different configuration layers (system and end-user) is a common approach
in all well-designed tools.

Thanks,
Sergey


On Sat, Feb 3, 2018 at 10:02 AM, Pramod Immaneni <pra...@datatorrent.com>
wrote:

> Yes generic in the Attribute class
>
> > On Feb 3, 2018, at 10:00 AM, Vlad Rozov <vro...@apache.org> wrote:
> >
> > +1 assuming that support for merge/override will be generic for all
> > attributes that support list/set of values and not limited to
> > LIBRARY_JARS attribute only.
> >
> > Thank you,
> >
> > Vlad
> >
> > On 2/3/18 09:13, Pramod Immaneni wrote:
> >> I too agree that the discussion has veered off from the original topic.
> >> Why can't LIBRARY_JARS be used for this, albeit with a minor improvement?
> >> Currently, our attribute layering is an override, so if you have an
> >> attribute that is specified as apex.application.. attr. it overrides
> >> apex.attr. for that application. What if were to expand the attribute
> >> definition to allow for the specification of how the layering of
> >> attributes will be combined, override being one option, merge being
> >> another with these being implemented with a combiner interface? This way
> >> a set of common jars could be specified using dt.attr.LIBRARY_JARS and
> >> applications can still add extra jars on top.
> >>
> >> On Fri, Feb 2, 2018 at 6:32 PM, Vlad Rozov <vro...@apache.org> wrote:
> >>
> >>> IMO, support for Kubernetes, Docker images, Mesos and anything outside
> >>> of Yarn deployments is a topic by itself and design for such support
> >>> needs to be discussed. I do not want to propose any specific design,
> >>> but assume that logic to create proper execution environment would be
> >>> coded into Apex client. Whether it (hardcoded logic to create an
> >>> execution environment) can be expressed simply as a list of dependent
> >>> classes or jars is at minimum questionable. Until design is proposed
> >>> and agreed upon, I'd prefer to use plugins for the subject.
> >>>
> >>> Thank you,
> >>>
> >>> Vlad
> >>>
> >>>
> >>> On 2/2/18 13:17, Sanjay Pujare wrote:
> >>>
> >>>> In cases where we have an "über" docker image containing support for
> >>>> multiple execution environments it might be useful for the Apex core
> >>>> to infer what kind of execution environment to use for a particular
> >>>> invocation (say based on configuration values/environment variables)
> >>>> and in that case the core will load the corresponding libraries. And I
> >>>> think this kind of flexibility or support would be difficult through
> >>>> the plugins hence I think Sergey's proposal will be useful.
> >>>>
> >>>> Sanjay
> >>>>
> >>>>
> >>>> On Fri, Feb 2, 2018 at 11:18 AM, Sergey Golovko <ser...@datatorrent.com>
> >>>> wrote:
> >>>>
> >>>>> Unfortunately the moving of .apa file to a docker image cannot
> >>>>> resolve all problems with the dependencies. If we assume an Apex
> >>>>> application should be run in different execution environments, the
> >>>>> application docker image must contain all possible execution
> >>>>> environment dependencies.
> >>>>>
> >>>>> I think the better way is to assume that the original application
> >>>>> docker image like the current .apa file should contain the
> >>>>> application specific dependencies only. And some smart client tool
> >>>>> should create the executable application docker image form the
> >>>>> original one and include the execution specific environment
> >>>>> dependencies into the target application docker image. It means
> >>>>> anyway an smart client Apex tool should have an interface to define
> >>>>> different environment dependencies or combination of different
> >>>>> dimensions of the environment dependencies.
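Pramod's override-vs-merge idea in the thread above can be sketched as a
small combiner abstraction. This is a hypothetical illustration, not an
actual Apex API: the names `Combiner`, `override()` and `mergeLists()` are
invented here.

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;

// Sketch of the proposed "combiner": an attribute declares how a value from
// a more specific layer (application level) is combined with a value from a
// more general layer (global level).
public class AttributeCombiners {

  // Strategy for combining two layered values of the same attribute.
  public interface Combiner<T> {
    T combine(T general, T specific);
  }

  // Today's behavior: the application-level value simply wins.
  public static <T> Combiner<T> override() {
    return (general, specific) -> specific != null ? specific : general;
  }

  // Proposed alternative for list-valued attributes such as LIBRARY_JARS:
  // keep the common entries and append the application-specific ones,
  // dropping duplicates while preserving order.
  public static Combiner<List<String>> mergeLists() {
    return (general, specific) -> {
      LinkedHashSet<String> merged = new LinkedHashSet<>();
      if (general != null) merged.addAll(general);
      if (specific != null) merged.addAll(specific);
      return new ArrayList<>(merged);
    };
  }
}
```

With the merge combiner, common jars declared globally survive and
application-level entries are appended, instead of replacing the whole list
as the current override semantics do.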

Re: [Proposal] Extension of the Apex configuration to add dependent jar files in runtime.

2018-02-02 Thread Sergey Golovko
Unfortunately, moving the .apa file into a docker image cannot resolve all
of the dependency problems. If we assume an Apex application should run in
different execution environments, the application docker image must contain
the dependencies for every possible execution environment.

I think the better way is to assume that the original application docker
image, like the current .apa file, should contain only the
application-specific dependencies, and that some smart client tool should
create the executable application docker image from the original one,
adding the execution-environment-specific dependencies to the target image.
That means a smart Apex client tool still needs an interface for defining
different environment dependencies, or combinations of different dimensions
of those dependencies.

Thanks,
Sergey


On Fri, Feb 2, 2018 at 10:23 AM, Thomas Weise <t...@apache.org> wrote:

> The current dependencies are based on how Apex YARN client works. YARN
> depends on a DFS implementation for deployment (not necessarily HDFS).
>
> I think a better way to look at this is to consider that instead of an .apa
> file the application is a docker image, which would contain Apex and all
> dependencies that the "StramClient"  today adds for YARN.
>
> In that world there would be no Apex CLI or Apex specific client.
>
> Thomas
>
>
>
> On Thu, Feb 1, 2018 at 5:57 PM, Sergey Golovko <ser...@datatorrent.com>
> wrote:
>
> > I agree. It can be implemented with usage of plugins. But if I need to
> > enable and configurate the plugin I need to put this information into
> > dt-site.xml. It means The plugin and its parameter must be documented and
> > the list of the added specific jars will be visible and available for
> > updates to the end-user. The implementation via plugins is more dynamic
> > solution that is more convenient for the application developers. But I'm
> > talking about the static configuration of the Apex build or installation
> > that relates more to the platform development.
> >
> > The current Apex core implementation uses the static unchanged list of
> > jars for long time, because the Apex implementation still contains
> > several basic static assumptions (for instance, the usage of YARN, HDSF,
> > etc.). And the current Apex assumptions are hardcoded in the
> > implementation. But if we are going to improve Apex and use Java
> > interfaces in generic Apex implementation, the current static approach in
> > Apex code to hardcode a list of dependent jars will not work anymore. It
> > will require to include a new solution to add/change jars in specific
> > Apex builds/configurations. And I don't think the usage of the plugins
> > will be good for that.
> >
> > Thanks,
> > Sergey
> >
> >
> > On Thu, Feb 1, 2018 at 1:47 PM, Vlad Rozov <vro...@apache.org> wrote:
> >
> > > There is a way to get the same end result by using plugins. It will be
> > > good to understand why plugin can't be used and can they be extended to
> > > provide the required functionality.
> > >
> > > Thank you,
> > >
> > > Vlad
> > >
> > >
> > > On 1/29/18 15:14, Sergey Golovko wrote:
> > >
> > >> Hello All,
> > >>
> > >> In Apex there are two ways to deploy non-Hadoop jars to the deployed
> > >> cluster.
> > >>
> > >> The first approach is static (hardcoded) and it is used by Apex
> platform
> > >> developers only. There are several final static arrays of Java classes
> > >> in StramClient.java
> > >> that define which of the available jars should be included into
> > deployment
> > >> for every Apex application.
> > >>
> > >> The second approach is to add paths of all dependent jar-files to the
> > >> value
> > >> of the attribute LIB_JARS. The end-user can set/update the value of
> the
> > >> attribute LIB_JARS via dt-site.xml files, command line parameters,
> > >> application properties and plugins. The usage of the
> > >> attribute LIB_JARS is the official documented way for all Apex users
> to
> > >> manage by the deployment jars.
> > >>
> > >> But some of the dependent jars (not from the Apex core) can be common
> > for
> > >> all customer's applications for a specific installation and/or
> execution
> > >> environment. Unfortunately the Apex implementation does not contain
> the
> > >> middle solution that would allow the Apex developers and cu

Re: [Proposal] Extension of the Apex configuration to add dependent jar files in runtime.

2018-02-01 Thread Sergey Golovko
I agree that it can be implemented with plugins. But if I need to enable
and configure the plugin, I need to put this information into dt-site.xml.
That means the plugin and its parameters must be documented, and the list
of added jars will be visible to, and modifiable by, the end-user. The
plugin-based implementation is a more dynamic solution and is more
convenient for application developers, but I'm talking about the static
configuration of an Apex build or installation, which relates more to
platform development.

The current Apex core implementation has used a static, unchanging list of
jars for a long time, because the implementation still contains several
basic static assumptions (for instance, the usage of YARN, HDFS, etc.), and
those assumptions are hardcoded. But if we are going to improve Apex and
use Java interfaces in the generic Apex implementation, the current
approach of hardcoding a list of dependent jars will no longer work. It
will require a new way to add or change jars in specific Apex
builds/configurations, and I don't think plugins will be a good fit for
that.

Thanks,
Sergey


On Thu, Feb 1, 2018 at 1:47 PM, Vlad Rozov <vro...@apache.org> wrote:

> There is a way to get the same end result by using plugins. It will be
> good to understand why plugin can't be used and can they be extended to
> provide the required functionality.
>
> Thank you,
>
> Vlad
>
>
> On 1/29/18 15:14, Sergey Golovko wrote:
>
>> Hello All,
>>
>> In Apex there are two ways to deploy non-Hadoop jars to the deployed
>> cluster.
>>
>> The first approach is static (hardcoded) and it is used by Apex platform
>> developers only. There are several final static arrays of Java classes
>> in StramClient.java
>> that define which of the available jars should be included into deployment
>> for every Apex application.
>>
>> The second approach is to add paths of all dependent jar-files to the
>> value
>> of the attribute LIB_JARS. The end-user can set/update the value of the
>> attribute LIB_JARS via dt-site.xml files, command line parameters,
>> application properties and plugins. The usage of the
>> attribute LIB_JARS is the official documented way for all Apex users to
>> manage by the deployment jars.
>>
>> But some of the dependent jars (not from the Apex core) can be common for
>> all customer's applications for a specific installation and/or execution
>> environment. Unfortunately the Apex implementation does not contain the
>> middle solution that would allow the Apex developers and customer support
>> to
>> define and add new dependent jar-files (jars that should not be
>> configurable/managed by the end-user) without the updates/recompilation of
>> the Apex Java code during the Apex building process and/or
>> installation/configuration.
>>
>> Also the having of such kind of flexibility would allow the Apex core
>> developers to use Java interfaces during the development to define an
>> abstraction layer in Apex implementation and configurate Apex core to add
>> some specific jars to all Apex applications without recompilation of the
>> Apex source code.
>>
>> For instance, now the usage of HDFS is hardcoded in Apex platform code but
>> it can be replaced with any other distributed or cloud base file system.
>> The Apex core code can use an interface for all I/O operations but the
>> supporting of a real specific file system implementation can be added as
>> an
>> independent jar-file. Or if the implementation of some of Apex operators
>> depend on a specific service, and it is necessary to add some of the
>> service jars to every Apex application implicitly.
>>
>> The proposal:
>>
>> - add a predefined configuration text file (we can make any choice for the
>> file syntax: XML, JSON or Properties) to Apex engine resources with
>> predefined values of some of the Apex attributes (now we can include
>> LIB_JARS
>> attribute only);
>> - allow to have a configuration text file with the same functionality in
>> the Apex installation folder "conf";
>> - read the content of the predefined configuration text files by the stram
>> client in runtime and add the jars to the list of the dependent jars;
>> - allow to use paths to jars and Java classes to refer to the dependent
>> jars (the references can have the extensions: .class and .jar).
>>
>> Thanks,
>> Sergey
>>
>>
>


[Proposal] Extension of the Apex configuration to add dependent jar files in runtime.

2018-01-29 Thread Sergey Golovko
Hello All,

In Apex there are two ways to deploy non-Hadoop jars to the cluster.

The first approach is static (hardcoded) and is used by Apex platform
developers only. Several final static arrays of Java classes in
StramClient.java define which of the available jars should be included in
the deployment of every Apex application.

The second approach is to add the paths of all dependent jar files to the
value of the attribute LIB_JARS. The end-user can set/update the value of
the attribute LIB_JARS via dt-site.xml files, command-line parameters,
application properties, and plugins. Using the LIB_JARS attribute is the
officially documented way for all Apex users to manage the deployment jars.
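For reference, this end-user mechanism is an ordinary dt-site.xml property.
A minimal illustrative fragment might look like the one below; the jar
paths are made up, and the exact attribute key (LIB_JARS vs. LIBRARY_JARS)
follows the discussion in this thread rather than any authoritative
documentation:

```xml
<configuration>
  <property>
    <name>dt.attr.LIB_JARS</name>
    <value>/opt/libs/common-metrics.jar,/opt/libs/company-format.jar</value>
  </property>
</configuration>
```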

But some of the dependent jars (not from the Apex core) can be common to
all of a customer's applications for a specific installation and/or
execution environment. Unfortunately, the Apex implementation offers no
middle ground that would let Apex developers and customer support define
and add new dependent jar files (jars that should not be
configurable/managed by the end-user) without updating and recompiling the
Apex Java code during the Apex build and/or installation/configuration.

Having this kind of flexibility would also let the Apex core developers use
Java interfaces to define an abstraction layer in the Apex implementation,
and configure Apex core to add specific jars to all Apex applications
without recompiling the Apex source code.

For instance, the usage of HDFS is currently hardcoded in the Apex platform
code, but it could be replaced with any other distributed or cloud-based
file system: the Apex core code could use an interface for all I/O
operations, with support for a real, specific file system implementation
added as an independent jar file. Similarly, if the implementation of some
Apex operators depends on a specific service, it may be necessary to add
some of that service's jars to every Apex application implicitly.

The proposal:

- add a predefined configuration text file (the syntax could be XML, JSON,
or Properties) to the Apex engine resources, containing predefined values
for some Apex attributes (for now, only the LIB_JARS attribute);
- allow a configuration text file with the same functionality in the Apex
installation folder "conf";
- have the stram client read the content of these predefined configuration
files at runtime and add the jars to the list of dependent jars;
- allow both jar paths and Java class names as references to the dependent
jars (references may have the extension .class or .jar).
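A minimal sketch of how the first and third bullets could fit together,
assuming the Properties syntax is chosen. The resource name
"apex-system.properties" and the key "apex.system.lib.jars" are invented
for illustration; they are not existing Apex names.

```java
import java.io.IOException;
import java.io.InputStream;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Properties;

// Hypothetical sketch: the client reads a predefined properties resource
// (e.g. "apex-system.properties" on the classpath or under conf/) and
// appends the jars it declares to the user-supplied LIB_JARS list.
public class SystemLibJars {

  // Merges jars declared in the system-level properties file into the
  // user-level list, keeping order and dropping duplicates.
  static List<String> mergeSystemJars(InputStream systemConfig, List<String> userLibJars) {
    Properties props = new Properties();
    try {
      if (systemConfig != null) {
        props.load(systemConfig);
      }
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    LinkedHashSet<String> jars = new LinkedHashSet<>(userLibJars);
    for (String jar : props.getProperty("apex.system.lib.jars", "").split(",")) {
      if (!jar.trim().isEmpty()) {
        jars.add(jar.trim());
      }
    }
    return new ArrayList<>(jars);
  }
}
```

The point of the sketch is that the system-level jars are additive: the
end-user's LIB_JARS value is never replaced, only extended.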

Thanks,
Sergey


Re: [VOTE] Major version change for Apex Library (Malhar)

2017-08-23 Thread Sergey Golovko
-1 for option 2

I don't think it makes sense to rush the renaming of the packages. There
are Apache Java projects that kept their original package names after
migrating to the Apache Software Foundation, for instance:

Apache Felix (org.osgi)
Apache Groovy (groovy)

Personally, I don't like the idea of renaming packages for any existing
tools and applications. It would just be a big source of confusion for
users, without any real benefit.

-1 for option 1

I see only one valid reason to change the major version now: a full
refactoring of the code without preserving any backward compatibility. If
we are going to do the package refactoring, we need to change the major
version; if we are not going to do it now, it does not make sense to change
the major version. I don't think it makes sense to vote on the two options
separately.

Thanks,
Sergey


On Wed, Aug 23, 2017 at 6:39 AM, Thomas Weise  wrote:

> So far everyone else has voted +1 on option 1. Your -1 is not a veto
> (unlike your previous -1 on a pull request), but your response also states
> "I am for option 1" and that you want to have the branch release-3
> included. So why don't you include that into your vote for option 1 as a
> condition, since that's what is going to happen anyways.
>
> Thomas
>
>
> On Tue, Aug 22, 2017 at 6:17 PM, Amol Kekre  wrote:
>
> > On just voting part, I remain -1 on both options
> >
> > Thks
> > Amol
> >
> >
> >
> On Tue, Aug 22, 2017 at 10:03 AM, Amol Kekre  wrote:
>
> > I am -1 on option 2. There is no need to do so, as going back on versions
> > at this stage has consequences to Apex users.
> >
> > I am for option 1, but I want to propose explicit change to the text.
> > Based on verbatim text, I am voting -1 on option 1. I believe in the
> > original discussion thread there was talk about continuing release-3 that
> > should be explicit in the vote.
>


Questions about the method JarHelper.getJar()

2017-07-09 Thread Sergey Golovko
Hello All,

I have a couple of questions about the usage of the
method JarHelper.getJar().

1. The implementation of the method JarHelper.getJar() takes a Java class
as a parameter and tries to find the jar file that contains that class. If
the class does not belong to a jar file, the method implicitly creates a
jar file from the folder that contains the class.

I'm not sure the name of the method is correct, because the implementation
combines getting a jar file with creating one. That can be confusing for
developers who try to use the method or who read source code that calls it.

The implementation of JarHelper.getJar() also rules out using the method
when somebody needs to look up the jar file for a Java class without
implicitly creating a jar.

I'd suggest splitting the implementation into two independent methods,
getJar() and createJar().

2. The method StramClient.findJars() is the only method in the Apex code
(excluding JUnit tests) that calls JarHelper.getJar(), in order to include
the jar files that contain all of the DAG's Java classes. Are there ever
independent Java classes in a DAG (classes that don't belong to any jar
file)? Do we really need to create jar files at run-time?
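The suggested split might look like the sketch below. The lookup logic is a
simplified illustration of what a standalone getJar() could do — it is not
the actual JarHelper code — with createJar() left as the separate,
on-demand folder-packaging step.

```java
import java.io.File;
import java.net.URISyntaxException;

// Sketch: getJar() only resolves the jar a class was loaded from and
// returns null when the class comes from a directory classpath or the
// bootstrap class loader; a separate createJar() (not shown) would package
// a class directory into a jar on demand.
public class JarLookup {

  // Returns the jar file containing clazz, or null if the class was not
  // loaded from a jar (e.g. a build output directory or a JDK class).
  public static File getJar(Class<?> clazz) throws URISyntaxException {
    java.security.CodeSource source = clazz.getProtectionDomain().getCodeSource();
    if (source == null) {
      return null; // bootstrap/JDK classes have no code source
    }
    File location = new File(source.getLocation().toURI());
    return location.isFile() && location.getName().endsWith(".jar") ? location : null;
  }
}
```

Separating the two concerns makes the "do we really need to create jars at
run-time?" question answerable per call site: callers that only need the
lookup never pay for implicit jar creation.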

Thanks,
Sergey


[jira] [Commented] (APEXCORE-754) Add plugin dependency jar-files to application package

2017-07-06 Thread Sergey Golovko (JIRA)

[ 
https://issues.apache.org/jira/browse/APEXCORE-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077234#comment-16077234
 ] 

Sergey Golovko commented on APEXCORE-754:
-

Yes, I'm looking at plugins that are not part of an application package.
Such plugins can be attached dynamically to an application via dt-site.xml
or command-line properties and run in the application environment. For
instance, a plugin can collect statistics or watch for events.

To support this kind of plugin, we have to deploy the plugin's execution
code dynamically to the Apex master and worker containers. That is the idea
of the current improvement.


> Add plugin dependency jar-files to application package
> --
>
> Key: APEXCORE-754
> URL: https://issues.apache.org/jira/browse/APEXCORE-754
> Project: Apache Apex Core
>  Issue Type: Improvement
>    Reporter: Sergey Golovko
>        Assignee: Sergey Golovko
>
> If an apex plugin is enabled, all plugin jar-files should be included into 
> the application package and names of the plugin jar-files should be added to 
> the application classpath.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Commented] (APEXCORE-754) Add plugin dependency jar-files to application package

2017-07-06 Thread Sergey Golovko (JIRA)

[ 
https://issues.apache.org/jira/browse/APEXCORE-754?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16077092#comment-16077092
 ] 

Sergey Golovko commented on APEXCORE-754:
-

The improvement will allow users to simplify the definition of Apex runtime
plugins. The user will be able to add new Apex plugins in dt-site.xml files
and specify the names of the Java classes associated with the plugins.

Syntax:

<property>
  <name>apex.plugin.dag.setup</name>
  <value>{plugin-class-1}[, {plugin-class-2}[,...]]</value>
</property>

The implementation should be able to find the corresponding jar files from
the plugin Java classes and add those jars to the application package. It
should also find all extra dependent jar files and include them in the
application package. The extra dependent jar files can be declared via the
property "apex-dependencies" in the manifest of the top-level plugin jar
file (the jar file that contains the plugin Java class).

The syntax of the property in the manifest:

apex-dependencies: {jar-1}[,{jar-2}[,...]]
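Reading that manifest property could be sketched as follows. This is a
simplified illustration of the parsing only, using the "apex-dependencies"
attribute name proposed above; the helper class and method names are
invented.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.UncheckedIOException;
import java.util.ArrayList;
import java.util.List;
import java.util.jar.Manifest;

// Sketch: parse the proposed "apex-dependencies" attribute from a plugin
// jar's MANIFEST.MF main section into a list of jar names.
public class PluginManifestDeps {

  static List<String> extraDependencies(byte[] manifestBytes) {
    Manifest manifest;
    try {
      manifest = new Manifest(new ByteArrayInputStream(manifestBytes));
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
    // Manifest attribute lookup is case-insensitive.
    String value = manifest.getMainAttributes().getValue("apex-dependencies");
    List<String> jars = new ArrayList<>();
    if (value != null) {
      for (String jar : value.split(",")) {
        if (!jar.trim().isEmpty()) {
          jars.add(jar.trim());
        }
      }
    }
    return jars;
  }
}
```

In a real implementation the bytes would come from the top-level plugin
jar's META-INF/MANIFEST.MF entry rather than a byte array.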


> Add plugin dependency jar-files to application package
> --
>
> Key: APEXCORE-754
> URL: https://issues.apache.org/jira/browse/APEXCORE-754
> Project: Apache Apex Core
>  Issue Type: Improvement
>    Reporter: Sergey Golovko
>Assignee: Sergey Golovko
>
> If an apex plugin is enabled, all plugin jar-files should be included into 
> the application package and names of the plugin jar-files should be added to 
> the application classpath.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (APEXCORE-754) Add plugin dependency jar-files to application package

2017-07-06 Thread Sergey Golovko (JIRA)
Sergey Golovko created APEXCORE-754:
---

 Summary: Add plugin dependency jar-files to application package
 Key: APEXCORE-754
 URL: https://issues.apache.org/jira/browse/APEXCORE-754
 Project: Apache Apex Core
  Issue Type: Improvement
Reporter: Sergey Golovko
Assignee: Sergey Golovko


If an apex plugin is enabled, all plugin jar-files should be included into the 
application package and names of the plugin jar-files should be added to the 
application classpath.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Created] (APEXCORE-744) Add setting of predefined static logger appender properties

2017-06-07 Thread Sergey Golovko (JIRA)
Sergey Golovko created APEXCORE-744:
---

 Summary: Add setting of predefined static logger appender 
properties 
 Key: APEXCORE-744
 URL: https://issues.apache.org/jira/browse/APEXCORE-744
 Project: Apache Apex Core
  Issue Type: Improvement
Reporter: Sergey Golovko
Assignee: Sergey Golovko


Apex application has several static properties that can be useful to send with 
logger events (application name, container id, user name, service name, node 
name).



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (APEXCORE-723) Replace double quotes with a single quotes in command line arguments for passing of the logger appender properties

2017-05-17 Thread Sergey Golovko (JIRA)
Sergey Golovko created APEXCORE-723:
---

 Summary: Replace double quotes with a single quotes in command 
line arguments for passing of the logger appender properties
 Key: APEXCORE-723
 URL: https://issues.apache.org/jira/browse/APEXCORE-723
 Project: Apache Apex Core
  Issue Type: Bug
Reporter: Sergey Golovko
Assignee: Sergey Golovko






--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (APEXCORE-719) Pass an application name from stram client to application master and container via command line properties

2017-05-11 Thread Sergey Golovko (JIRA)
Sergey Golovko created APEXCORE-719:
---

 Summary: Pass an application name from stram client to application 
master and container via command line properties
 Key: APEXCORE-719
 URL: https://issues.apache.org/jira/browse/APEXCORE-719
 Project: Apache Apex Core
  Issue Type: Improvement
Reporter: Sergey Golovko
Assignee: Sergey Golovko
Priority: Trivial


The application name should be available in the Apex application master and 
containers environments. 



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Programmatic log4j appender in Apex

2017-04-15 Thread Sergey Golovko
Hi Thomas,

I intend the implementation to be as generic as possible. The signatures of
all methods in the implementation will not contain anything log4j-specific,
so if we later decide to add an abstraction layer over the logger calls in
Apex, or to use another logger implementation, it can easily be changed to
support any new appender interfaces.

Thanks,
Sergey


On Sat, Apr 15, 2017 at 4:00 PM, Thomas Weise <t...@apache.org> wrote:

> Hi Sergey,
>
> What I'm asking is that the feature is implemented in a way that will allow
> Apex to run with different logger backend. That means that log4j needs to
> be optional.
>
> Thanks,
> Thomas
>
>
> On Sat, Apr 15, 2017 at 2:01 PM, Sergey Golovko <ser...@datatorrent.com>
> wrote:
>
> > I agree it would be very nice to use only slf4j interfaces for the
> > implementation. But unfortunately the interface Appender belongs to
> > org.apache.log4j package.
> >
> > "SLF4J is only a facade, meaning that it does not provide a complete
> > logging solution. Operations such as configuring appenders or setting
> > logging levels cannot be performed with SLF4J. Thus, at some point in
> time,
> > any non-trivial application will need to directly invoke the underlying
> > logging system. In other words, complete independence from the API
> > underlying logging system is not possible for a stand-alone application.
> > Nevertheless, SLF4J reduces the impact of this dependence to
> near-painless
> > levels."
> >
> > https://www.slf4j.org/faq.html#when
> >
> > Thanks,
> > Sergey
> >
> >
> > > On Thu, Apr 13, 2017 at 7:56 AM, Thomas Weise <t...@apache.org> wrote:
> > >
> > > > +1
> > > >
> > > > Also the proposed feature would need to be implemented in a way that
> > > > avoids a hard dependency on log4j. The interface for logging is slf4j
> > > > and it should be possible to use other logger backends.
> > > >
> > > >
> > > > On Mon, Apr 10, 2017 at 9:21 PM, Sergey Golovko <ser...@datatorrent.com>
> > > > wrote:
> > > >
> > > > > I don't think an operator needs a specific appender. An appender can
> > > > > be dynamically assigned to an application designer, application
> > > > > master and container.
> > > > >
> > > > > Thanks,
> > > > > Sergey
> > > > >
> > > > >
> > > > > On Mon, Apr 10, 2017 at 6:26 PM, Munagala Ramanath
> > > > > <r...@datatorrent.com> wrote:
> > > > >
> > > > > > I don't have one, I thought that was what the intent of the
> > > > > > proposal was, but looks like I misunderstood. After re-reading
> > > > > > some of the earlier responses, I understand the proposal better.
> > > > > >
> > > > > > Ram
> > > > > >
> > > > > >
> > > > > >
> > > > > > On Mon, Apr 10, 2017 at 5:39 PM, Vlad Rozov
> > > > > > <v.ro...@datatorrent.com> wrote:
> > > > > >
> > > > > > > I don't see a use case where an individual operators need to
> > > > > > > define a specific appender, can you provide one?
> > > > > > >
> > > > > > > Thank you,
> > > > > > >
> > > > > > > Vlad
> > > > > > >
> > > > > > > On 4/10/17 16:53, Munagala Ramanath wrote:
> > > > > > >
> > > > > > > > Yes, totally agree, it would be helpful to have a detailed use
> > > > > > > > case and/or a detailed spec of the desired capabilities -- not
> > > > > > > > necessarily a complete spec but with enough detail to
> > > > > > > > understand why existing capabilities are inadequate.
> > > > > > > >
> > > > > > > > Ram
> > > > > > > >
> > > > > > > > On Mon, Apr 10, 2017 at 4:43 PM, Vlad Rozov
> > > > > > > > <v.ro...@datatorrent.com> wrote:
> > > > > > > >
> > > > > > > > > It will be good to understand a use case where an operator
> > > > > > > > > needs a specific appender.
> > > > > > > > >
> > > > > > > > > IMO, an operator designer defines *what* should be logged
> > > > > > > > > and dev-ops team

[jira] [Created] (APEXCORE-704) Add supporting of programmatic logger appender

2017-04-15 Thread Sergey Golovko (JIRA)
Sergey Golovko created APEXCORE-704:
---

 Summary: Add supporting of programmatic logger appender
 Key: APEXCORE-704
 URL: https://issues.apache.org/jira/browse/APEXCORE-704
 Project: Apache Apex Core
  Issue Type: Improvement
Reporter: Sergey Golovko
Assignee: Sergey Golovko


Add support for a programmatic logger appender that can be attached to the Apex 
Application Master and containers and configured programmatically.




--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Programmatic log4j appender in Apex

2017-04-15 Thread Sergey Golovko
I agree it would be very nice to use only slf4j interfaces for the
implementation. But unfortunately the Appender interface belongs to the
org.apache.log4j package.

"SLF4J is only a facade, meaning that it does not provide a complete
logging solution. Operations such as configuring appenders or setting
logging levels cannot be performed with SLF4J. Thus, at some point in time,
any non-trivial application will need to directly invoke the underlying
logging system. In other words, complete independence from the API
underlying logging system is not possible for a stand-alone application.
Nevertheless, SLF4J reduces the impact of this dependence to near-painless
levels."

https://www.slf4j.org/faq.html#when

Thanks,
Sergey


On Thu, Apr 13, 2017 at 7:56 AM, Thomas Weise <t...@apache.org> wrote:

> +1
>
> Also the proposed feature would need to be implemented in a way that avoids
> a hard dependency on log4j. The interface for logging is slf4j and it
> should be possible to use other logger backends.
>
>
> On Mon, Apr 10, 2017 at 9:21 PM, Sergey Golovko <ser...@datatorrent.com>
> wrote:
>
> > I don't think an operator needs a specific appender. An appender can be
> > dynamically assigned to an application designer, application master and
> > container.
> >
> > Thanks,
> > Sergey
> >
> >
> > On Mon, Apr 10, 2017 at 6:26 PM, Munagala Ramanath <r...@datatorrent.com>
> > wrote:
> >
> > > I don't have one, I thought that was what the intent of the proposal
> was,
> > > but looks like
> > > I misunderstood. After re-reading some of the earlier responses, I
> > > understand the
> > > proposal better.
> > >
> > > Ram
> > >
> > >
> > >
> > > On Mon, Apr 10, 2017 at 5:39 PM, Vlad Rozov <v.ro...@datatorrent.com>
> > > wrote:
> > >
> > > > I don't see a use case where an individual operators need to define a
> > > > specific appender, can you provide one?
> > > >
> > > > Thank you,
> > > >
> > > > Vlad
> > > >
> > > > On 4/10/17 16:53, Munagala Ramanath wrote:
> > > >
> > > >> Yes, totally agree, it would be helpful to have a detailed use case
> > > and/or
> > > >> a detailed spec
> > > >> of the desired capabilities -- not necessarily a complete spec but
> > with
> > > >> enough detail to
> > > >> understand why existing capabilities are inadequate.
> > > >>
> > > >> Ram
> > > >>
> > > >> On Mon, Apr 10, 2017 at 4:43 PM, Vlad Rozov <
> v.ro...@datatorrent.com>
> > > >> wrote:
> > > >>
> > > >> It will be good to understand a use case where an operator needs a
> > > >>> specific appender.
> > > >>>
> > > >>> IMO, an operator designer defines *what* should be logged and
> dev-ops
> > > >>> team
> > > >>> defines *where* to log.
> > > >>>
> > > >>> Thank you,
> > > >>>
> > > >>> Vlad
> > > >>> On 4/10/17 16:27, Munagala Ramanath wrote:
> > > >>>
> > > >>> Yes, I understand, I was just wondering if individual operators
> could
> > > >>>> define the appenders
> > > >>>> they potentially need at compile time and then the operator
> > callbacks
> > > >>>> could
> > > >>>> simply
> > > >>>> check the desired runtime condition and add the appropriate
> > appender.
> > > >>>>
> > > >>>> Or are we saying there are scenarios where we absolutely cannot
> > create
> > > >>>> the
> > > >>>> appender beforehand ?
> > > >>>>
> > > >>>> So broadly speaking, my question is whether the combination of
> > > providing
> > > >>>> predefined appenders
> > > >>>> and the PropertyConfigurator capabilities meets the need.
> > > >>>>
> > > >>>> Ram
> > > >>>>
> > > >>>> On Mon, Apr 10, 2017 at 2:18 PM, Sergey Golovko <
> > > ser...@datatorrent.com
> > > >>>> >
> > > >>>> wrote:
> > > >>>>
> > > >>>> Ram,
> > > >>>>
> > > >>>>> Really the new appender cl

Re: Programmatic log4j appender in Apex

2017-04-10 Thread Sergey Golovko
I don't think an operator needs a specific appender. An appender can be
dynamically assigned by an application designer to the application master and
containers.

Thanks,
Sergey


On Mon, Apr 10, 2017 at 6:26 PM, Munagala Ramanath <r...@datatorrent.com>
wrote:

> I don't have one, I thought that was what the intent of the proposal was,
> but looks like
> I misunderstood. After re-reading some of the earlier responses, I
> understand the
> proposal better.
>
> Ram
>
>
>
> On Mon, Apr 10, 2017 at 5:39 PM, Vlad Rozov <v.ro...@datatorrent.com>
> wrote:
>
> > I don't see a use case where an individual operators need to define a
> > specific appender, can you provide one?
> >
> > Thank you,
> >
> > Vlad
> >
> > On 4/10/17 16:53, Munagala Ramanath wrote:
> >
> >> Yes, totally agree, it would be helpful to have a detailed use case
> and/or
> >> a detailed spec
> >> of the desired capabilities -- not necessarily a complete spec but with
> >> enough detail to
> >> understand why existing capabilities are inadequate.
> >>
> >> Ram
> >>
> >> On Mon, Apr 10, 2017 at 4:43 PM, Vlad Rozov <v.ro...@datatorrent.com>
> >> wrote:
> >>
> >> It will be good to understand a use case where an operator needs a
> >>> specific appender.
> >>>
> >>> IMO, an operator designer defines *what* should be logged and dev-ops
> >>> team
> >>> defines *where* to log.
> >>>
> >>> Thank you,
> >>>
> >>> Vlad
> >>> On 4/10/17 16:27, Munagala Ramanath wrote:
> >>>
> >>> Yes, I understand, I was just wondering if individual operators could
> >>>> define the appenders
> >>>> they potentially need at compile time and then the operator callbacks
> >>>> could
> >>>> simply
> >>>> check the desired runtime condition and add the appropriate appender.
> >>>>
> >>>> Or are we saying there are scenarios where we absolutely cannot create
> >>>> the
> >>>> appender beforehand ?
> >>>>
> >>>> So broadly speaking, my question is whether the combination of
> providing
> >>>> predefined appenders
> >>>> and the PropertyConfigurator capabilities meets the need.
> >>>>
> >>>> Ram
> >>>>
> >>>> On Mon, Apr 10, 2017 at 2:18 PM, Sergey Golovko <
> ser...@datatorrent.com
> >>>> >
> >>>> wrote:
> >>>>
> >>>> Ram,
> >>>>
> >>>>> Really the new appender class must extend the abstract class
> >>>>> AppenderSkeleton. And in order to add a new appender programmatically
> >>>>> in
> >>>>> Java, some code in Apex should call the following log4j method:
> >>>>>
> >>>>> org.apache.log4j.Logger.getRootLogger().addAppender(Appender
> >>>>> newAppender)
> >>>>>
> >>>>> The general idea of my proposal is "*based on some runtime
> parameter(s)
> >>>>> to
> >>>>> provide ability to create an appender instance via reflection and add
> >>>>> it
> >>>>> to
> >>>>> the list of active log4j appenders*".
> >>>>>
> >>>>> Thanks,
> >>>>> Sergey
> >>>>>
> >>>>>
> >>>>> On Mon, Apr 10, 2017 at 2:04 PM, Vlad Rozov <v.ro...@datatorrent.com
> >
> >>>>> wrote:
> >>>>>
> >>>>> It will require application recompilation and repackaging. The
> proposed
> >>>>>
> >>>>>> functionality is for dev-ops to be able to route application logging
> >>>>>> to
> >>>>>> a
> >>>>>> preferred destination without recompiling applications. It is
> run-time
> >>>>>> configuration vs compile time hardcoded appender.
> >>>>>>
> >>>>>> Thank you,
> >>>>>>
> >>>>>> Vlad
> >>>>>>
> >>>>>> On 4/10/17 11:23, Munagala Ramanath wrote:
> >>>>>>
> >>>>>> You can do it in a trivial derived class without changing the base
> >>>>>> class.
> >>>>>> Ram
> >>>>>&g

Re: Programmatic log4j appender in Apex

2017-04-10 Thread Sergey Golovko
Ram,

Actually, the new appender class must extend the abstract class
AppenderSkeleton. And in order to add a new appender programmatically in
Java, some code in Apex should call the following log4j method:

org.apache.log4j.Logger.getRootLogger().addAppender(Appender newAppender)

The general idea of my proposal is "*based on some runtime parameter(s), to
provide the ability to create an appender instance via reflection and add it to
the list of active log4j appenders*".

Thanks,
Sergey
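As a rough sketch of that reflection step (self-contained on purpose: the
LineAppender interface below is a stand-in for org.apache.log4j.Appender, and
the property name "demo.appender.class" is made up — the real code would cast
to Appender and call Logger.getRootLogger().addAppender(instance)):

```java
import java.util.ArrayList;
import java.util.List;

public class Main {
    // Stand-in for the log4j Appender contract, so the sketch compiles
    // without the log4j jar.
    public interface LineAppender {
        void append(String line);
    }

    // A trivial appender implementation that collects lines in memory.
    public static class ListAppender implements LineAppender {
        static final List<String> LINES = new ArrayList<>();
        @Override public void append(String line) { LINES.add(line); }
    }

    // Read the appender class name from a system property (hypothetical
    // name), instantiate it via reflection, and hand it back for
    // registration with the logging system.
    public static LineAppender createFromProperty() throws Exception {
        String className = System.getProperty("demo.appender.class",
                                              ListAppender.class.getName());
        return (LineAppender) Class.forName(className)
            .getDeclaredConstructor().newInstance();
    }

    public static void main(String[] args) throws Exception {
        LineAppender appender = createFromProperty();
        appender.append("container started");
        System.out.println(ListAppender.LINES); // [container started]
    }
}
```

Launching with -Ddemo.appender.class=com.example.MyAppender would then swap in
a custom appender without recompiling the application.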


On Mon, Apr 10, 2017 at 2:04 PM, Vlad Rozov <v.ro...@datatorrent.com> wrote:

> It will require application recompilation and repackaging. The proposed
> functionality is for dev-ops to be able to route application logging to a
> preferred destination without recompiling applications. It is run-time
> configuration vs compile time hardcoded appender.
>
> Thank you,
>
> Vlad
>
> On 4/10/17 11:23, Munagala Ramanath wrote:
>
>> You can do it in a trivial derived class without changing the base class.
>>
>> Ram
>>
>> On Mon, Apr 10, 2017 at 11:19 AM, Vlad Rozov <v.ro...@datatorrent.com>
>> wrote:
>>
>> Does not the proposal to use Logger.addAppender() requires modifications
>>> to used operators code?
>>>
>>> Thank you,
>>>
>>> Vlad
>>>
>>> On 4/10/17 10:58, Munagala Ramanath wrote:
>>>
>>> People can currently do this by simply implementing the Appender
>>>> interface
>>>> and adding it
>>>> with Logger.addAppender() in the setup method. Why do we need something
>>>> more elaborate ?
>>>>
>>>> Ram
>>>>
>>>> On Mon, Apr 10, 2017 at 10:30 AM, Sergey Golovko <
>>>> ser...@datatorrent.com>
>>>> wrote:
>>>>
>>>> The configuration of a log4j appender via log4j configuration file is a
>>>>
>>>>> static configuration that cannot be disabled/enabled and managed
>>>>> dynamically by an application designer. The programmatic approach will
>>>>> allow  an application designer to specify which of the available log4j
>>>>> appenders should be used for the specific application.
>>>>>
>>>>> It is not necessary Apex should use the predefined log4j appenders
>>>>> only.
>>>>> The log4j events contain useful but the very limited number of
>>>>> properties
>>>>> which values can be printed into output log4j sources. But based on the
>>>>> knowledge of the software product workflow, the custom defined log4j
>>>>> appender can extend a list of predefined output log events properties
>>>>> and,
>>>>> for instance for Apex, return: node, user name, application name,
>>>>> application id, container id, operator name, etc.
>>>>>
>>>>> Also the output log events that are generated by a custom defined log4j
>>>>> appender can be stored and indexed by any type of a full text search
>>>>> database. It will allow the customers and developers to simplify
>>>>> collection
>>>>> of log events statistics and searching/filtering of specific events for
>>>>> debugging and investigation.
>>>>>
>>>>> Thanks,
>>>>> Sergey
>>>>>
>>>>>
>>>>> On Mon, Apr 10, 2017 at 6:34 AM, Vlad Rozov <v.ro...@datatorrent.com>
>>>>> wrote:
>>>>>
>>>>> +1 Apex engine does not own log4j config file - it is provided either
>>>>> by
>>>>>
>>>>>> Hadoop or an application. Hadoop log4j config does not necessarily
>>>>>> meet
>>>>>> application logging requirements, but if log4j is provided by an
>>>>>> application designer, who can only specify what to log, it may not
>>>>>> meet
>>>>>> operations requirements. Dev-ops should have an ability to specify
>>>>>> where
>>>>>>
>>>>>> to
>>>>>
>>>>> log depending on the available infrastructure at run-time.
>>>>>>
>>>>>> It will be good to have an ability not only specify extra log4j
>>>>>> appenders
>>>>>> at lunch time, but also at run-time, the same way how log4j logger
>>>>>> levels
>>>>>> may be changed.
>>>>>>
>>>>>> Thank you,
>>>>>>
>>>>>> Vla

Programmatic log4j appender in Apex

2017-04-07 Thread Sergey Golovko
Hi All,

I'd like to add support for a custom log4j appender that can be attached to
Apex Application Master and Containers and configured programmatically.

Sometimes it is not trivial to control the log4j configuration via log4j
properties, and I think the ability to add a log4j appender programmatically
will allow customers and developers to plug in their own custom log4j
appenders and be much more flexible in streaming and collecting Apex log
events.

I plan to provide a generic approach for defining the programmatic log4j
appender and to pass all of its configuration parameters, including the name
of the Java class that implements the appender, via system and/or command-line
properties.

Thanks,
Sergey


[jira] [Commented] (APEXCORE-644) get-app-package-operators with parent option does not work

2017-02-09 Thread Sergey Golovko (JIRA)

[ 
https://issues.apache.org/jira/browse/APEXCORE-644?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15860367#comment-15860367
 ] 

Sergey Golovko commented on APEXCORE-644:
-

There are two Apex command line commands that have the option "-parent" 
(get-app-package-operators and get-jar-operator-classes), and both of them 
require a mandatory argument for that option.

The implementation of the class GetOperatorClassesCommandLineOptions, which 
builds the command line parser options, created the descriptor of the option 
"-parent" without an argument. As a result, the argument of the option was 
ignored and was moved to the list of the regular command line parameters.

The bug fix is trivial: it defines the option "-parent" as an option with an 
argument.

Added the new unit test ApexCliTest.testGetOperatorClassesCommandLineInfo().
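The failure mode can be illustrated with a toy parser (this is not the
Apex/commons-cli code, just a self-contained sketch of flag-vs-argument
option handling; the command and file names are taken from the bug report):

```java
import java.util.*;

public class Main {
    // Toy parser: options listed in 'withArg' consume the next token as
    // their value; all other options are treated as bare flags.
    static Map<String, String> parse(String[] args, Set<String> withArg,
                                     List<String> positional) {
        Map<String, String> opts = new HashMap<>();
        for (int i = 0; i < args.length; i++) {
            String a = args[i];
            if (a.startsWith("-")) {
                String name = a.substring(1);
                opts.put(name, withArg.contains(name) ? args[++i] : "");
            } else {
                positional.add(a);
            }
        }
        return opts;
    }

    public static void main(String[] args) {
        String[] cmd = {"-parent", "com.datatorrent.demos.pi", "pi-demo.apa"};

        // Bug: "-parent" registered as a flag (no argument) — its value
        // falls through to the positional parameters, which is why ApexCli
        // later tried to expand it as a file name.
        List<String> pos1 = new ArrayList<>();
        parse(cmd, Collections.emptySet(), pos1);
        System.out.println(pos1); // [com.datatorrent.demos.pi, pi-demo.apa]

        // Fix: register "-parent" as an option that takes an argument.
        List<String> pos2 = new ArrayList<>();
        Map<String, String> opts =
            parse(cmd, Collections.singleton("parent"), pos2);
        System.out.println(opts.get("parent")); // com.datatorrent.demos.pi
        System.out.println(pos2);               // [pi-demo.apa]
    }
}
```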


> get-app-package-operators with parent option does not work
> --
>
> Key: APEXCORE-644
> URL: https://issues.apache.org/jira/browse/APEXCORE-644
> Project: Apache Apex Core
>  Issue Type: Bug
>    Reporter: Yatin Chaubal
>Assignee: Sergey Golovko
>Priority: Minor
>
> Issue: get-app-package-operators with -parent option doesnot work
>  
> Steps:
> 1) Start dtcli/apex
> 2) Run get-app-package-operators -parent com.datatorrent.demos.pi 
> /home/hduser/tf2jan/apa/pi-demo-3.4.0.apa
> Expected out output: valid JSON 
> Actual output: 
> {noformat}
> com.datatorrent.stram.cli.ApexCli$CliException: 
> /home/hduser/tf2jan/com.datatorrent.demos.pi does not match any file
> at com.datatorrent.stram.cli.ApexCli.expandFileName(ApexCli.java:918)
> at com.datatorrent.stram.cli.ApexCli.access$000(ApexCli.java:152)
> at 
> com.datatorrent.stram.cli.ApexCli$GetAppPackageOperatorsCommand.execute(ApexCli.java:3827)
> at com.datatorrent.stram.cli.ApexCli$3.run(ApexCli.java:1492)
> {noformat}
> Reference:
> Without -parent option this work fine
> apex> get-app-package-operators  /home/hduser/tf2jan/apa/pi-demo-3.4.0.apa
> {
>   "operatorClasses": [
> {
>   "name": "com.datatorrent.common.util.DefaultDelayOperator",
>   "properties": [],
>   "portTypeInfo": [
> {



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


Re: Schema Discovery Support in Apex Applications

2017-01-30 Thread Sergey Golovko
Sorry, I’m a new person on the Apex team, and I don't clearly understand who 
the consumers of the output port operator schema(s) are.

1. If the consumers are non-run-time callers like the application manager or UI 
designer, maybe it makes sense to use static Java method(s) to retrieve the 
output port operator schema(s). I guess the performance cost of a single call of a 
static method via reflection can be ignored.

2. If the consumer is the next downstream operator, maybe it makes sense to send an 
output port operator schema from the upstream operator to the next downstream 
operator via the stream. The corresponding methods that would send and receive the 
schema should be declared in the interface/abstract class of the upstream and 
downstream operators. The sending/receiving of an output schema should happen 
right before the first data record is sent via the stream.

A typical example of sending metadata along with a regular result set is JDBC, 
where the metadata is delivered as part of the JDBC result set. And I hope the 
output schema (metadata of the streamed data) in the implementation will contain 
not only a signature of the streamed objects (like field names and data types), 
but also any other properties of the data that can be useful to the schema 
receiver for processing the data (for instance, a delimiter for a CSV record 
stream).

Thanks,
Sergey
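The propagation idea discussed in this thread — the engine walks the DAG
upstream to downstream once the logical plan is built, handing each operator
its input schema and collecting its output schema — can be sketched as follows.
This is a self-contained toy: the Map<String, String> schema representation and
the three operators are illustrative assumptions, not Apex APIs.

```java
import java.util.*;

public class Main {
    // Hypothetical counterpart of the proposed SchemaAware interface: an
    // operator maps the schema at its input port to the schema it emits.
    interface SchemaAware {
        Map<String, String> registerSchema(Map<String, String> inputSchema);
    }

    public static void main(String[] args) {
        // A linear DAG: parser -> transform -> formatter.
        List<SchemaAware> dag = Arrays.asList(
            in -> Map.of("name", "String", "value", "Integer"), // parser emits a POJO schema
            in -> {                                             // transform adds a field
                Map<String, String> out = new HashMap<>(in);
                out.put("valueSquared", "Integer");
                return out;
            },
            in -> Map.of("line", "String")                      // formatter emits CSV lines
        );

        // Engine-side propagation: feed each operator the schema produced
        // by its upstream neighbor, so no TUPLE_CLASS needs to be set.
        Map<String, String> schema = Collections.emptyMap();
        for (SchemaAware op : dag) {
            schema = op.registerSchema(schema);
        }
        System.out.println(schema); // {line=String}
    }
}
```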

On 2017-01-25 01:47 (-0800), Chinmay Kolhatkar  wrote: 
> Thank you all for the feedback.
> 
> I've created a Jira for this: APEXCORE-623 and I'll attach the same
> document and link to this mailchain there.
> 
> As a first part of this Jira, there are 2 steps I would like to propose:
> 1. Add following interface at com.datatorrent.common.util.SchemaAware.
> 
> interface SchemaAware {
> 
> Map registerSchema(Map inputSchema);
> }
> 
> This interface can be implemented by Operators to communicate its output
> schema(s) to engine.
> Input to this schema will be schema at its input port.
> 
> 2. After LogicalPlan is created call SchemaAware method from upstream to
> downstream operator in the DAG to propagate the Schema.
> 
> Once this is done, changes can be done in Malhar for the operators in
> question.
> 
> Please share your opinion on this approach.
> 
> Thanks,
> Chinmay.
> 
> 
> 
> 
> On Wed, Jan 18, 2017 at 2:31 PM, Priyanka Gugale  wrote:
> 
> > +1 to have this feature.
> >
> > -Priyanka
> >
> > On Tue, Jan 17, 2017 at 9:18 PM, Pramod Immaneni 
> > wrote:
> >
> > > +1
> > >
> > > On Mon, Jan 16, 2017 at 1:23 AM, Chinmay Kolhatkar 
> > > wrote:
> > >
> > > > Hi All,
> > > >
> > > > Currently a DAG that is generated by user, if contains any POJOfied
> > > > operators, TUPLE_CLASS attribute needs to be set on each and every port
> > > > which receives or sends a POJO.
> > > >
> > > > For e.g., if a DAG is like File -> Parser -> Transform -> Dedup ->
> > > > Formatter -> Kafka, then TUPLE_CLASS attribute needs to be set by user
> > on
> > > > both input and output ports of transform, dedup operators and also on
> > > > parser output and formatter input.
> > > >
> > > > The proposal here is to reduce work that is required by user to
> > configure
> > > > the DAG. Technically speaking if an operators knows input schema and
> > > > processing properties, it can determine output schema and convey it to
> > > > downstream operators. This way the complete pipeline can be configured
> > > > without user setting TUPLE_CLASS or even creating POJOs and adding them
> > > to
> > > > classpath.
> > > >
> > > > On the same idea, I want to propose an approach where the pipeline can
> > be
> > > > configured without user setting TUPLE_CLASS or even creating POJOs and
> > > > adding them to classpath.
> > > > Here is the document which at a high level explains the idea and a high
> > > > level design:
> > > > https://docs.google.com/document/d/1ibLQ1KYCLTeufG7dLoHyN_
> > > > tRQXEM3LR-7o_S0z_porQ/edit?usp=sharing
> > > >
> > > > I would like to get opinion from community about feasibility and
> > > > applications of this proposal.
> > > > Once we get some consensus we can discuss the design in details.
> > > >
> > > > Thanks,
> > > > Chinmay.
> > > >
> > >
> >
> 


[jira] [Created] (APEXCORE-627) Unit test AtMostOnceTest intermittently fails

2017-01-27 Thread Sergey Golovko (JIRA)
Sergey Golovko created APEXCORE-627:
---

 Summary: Unit test AtMostOnceTest intermittently fails
 Key: APEXCORE-627
 URL: https://issues.apache.org/jira/browse/APEXCORE-627
 Project: Apache Apex Core
  Issue Type: Bug
 Environment: The test is reproducible on macOS Sierra, Processor 2.2 
GHz Intel Core i7, Memory 16GB 1600 MHz DDR3.

Reporter: Sergey Golovko
Assignee: Sergey Golovko
Priority: Minor


The test AtMostOnceTest is not able to reach the criteria to stop the test, and 
it continues to recover an input operator and rerun the test in a loop.




--
This message was sent by Atlassian JIRA
(v6.3.4#6332)