[jira] [Commented] (FLINK-7475) support update() in ListState

2017-11-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16249096#comment-16249096
 ] 

ASF GitHub Bot commented on FLINK-7475:
---

Github user bowenli86 commented on the issue:

https://github.com/apache/flink/pull/4963
  
@yunfan123 sorry I can't find where `PredefinedOptions` defines it. Isn't 
Flink using RocksDB's `StringAppendTESTOperator` in `RocksDBKeyedStateBackend`?

Hi @aljoscha, do you have any insights or suggestions?


> support update() in ListState
> -
>
> Key: FLINK-7475
> URL: https://issues.apache.org/jira/browse/FLINK-7475
> Project: Flink
>  Issue Type: Improvement
>  Components: Core, DataStream API, State Backends, Checkpointing
>Affects Versions: 1.4.0
>Reporter: yf
>Assignee: Bowen Li
> Fix For: 1.5.0
>
>
> If I want to update the list, 
> I currently have to do it in two steps: 
> listState.clear() 
> for (Element e : myList) { 
> listState.add(e); 
> } 
> Why can't I update the state with: 
> listState.update(myList) ?
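The proposal above can be sketched against a plain Java list standing in for the state backend. This is a toy illustration only: `ToyListState` and its `update()` are hypothetical and are not Flink's actual `ListState` implementation.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Toy stand-in for ListState<T>, contrasting the two-step workaround
// with the proposed single-call update(). Not Flink's real implementation.
public class ToyListState<T> {
    private final List<T> backing = new ArrayList<>();

    public void add(T e) { backing.add(e); }
    public void clear() { backing.clear(); }

    // The proposed convenience method: replace all contents in one call.
    public void update(List<T> newList) {
        backing.clear();
        backing.addAll(newList);
    }

    public List<T> get() { return backing; }

    public static void main(String[] args) {
        List<String> myList = Arrays.asList("a", "b", "c");

        // Today's two-step workaround:
        ToyListState<String> s1 = new ToyListState<>();
        s1.clear();
        for (String e : myList) {
            s1.add(e);
        }

        // The proposed one-step update:
        ToyListState<String> s2 = new ToyListState<>();
        s2.update(myList);

        // Both end up holding [a, b, c].
        System.out.println(s1.get().equals(s2.get()));
    }
}
```

Beyond convenience, a single `update()` call also gives the backend a chance to perform the replacement atomically instead of as a clear followed by N adds.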



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)



[jira] [Commented] (FLINK-7475) support update() in ListState

2017-11-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7475?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16249066#comment-16249066
 ] 

ASF GitHub Bot commented on FLINK-7475:
---

Github user yunfan123 commented on the issue:

https://github.com/apache/flink/pull/4963
  
Flink uses StringAppendOperator as the merge operator. It is configured in 
org.apache.flink.contrib.streaming.state.PredefinedOptions.
It uses a native method. 
Can we use that native method to merge our list?


> support update() in ListState
> -
>
> Key: FLINK-7475
> URL: https://issues.apache.org/jira/browse/FLINK-7475
> Project: Flink
>  Issue Type: Improvement
>  Components: Core, DataStream API, State Backends, Checkpointing
>Affects Versions: 1.4.0
>Reporter: yf
>Assignee: Bowen Li
> Fix For: 1.5.0
>
>
> If I want to update the list, 
> I currently have to do it in two steps: 
> listState.clear() 
> for (Element e : myList) { 
> listState.add(e); 
> } 
> Why can't I update the state with: 
> listState.update(myList) ?







[GitHub] flink pull request #5002: [hotfix][docs] Remove the caveat about Cassandra c...

2017-11-12 Thread mcfongtw
GitHub user mcfongtw opened a pull request:

https://github.com/apache/flink/pull/5002

[hotfix][docs] Remove the caveat about Cassandra connector.

## What is the purpose of the change

Remove a caveat in Cassandra connector docs.

## Brief change log

With FLINK-4500 committed, the Cassandra connector now flushes pending 
mutations properly when a checkpoint is triggered. Thus, the related 
caveat is removed from the documentation. 





You can merge this pull request into a Git repository by running:

$ git pull https://github.com/mcfongtw/flink hotfix-doc

Alternatively you can review and apply these changes as the patch at:

https://github.com/apache/flink/pull/5002.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

This closes #5002


commit e6d13baa2544c1ac0504199f685b12872451d0e1
Author: Michael Fong 
Date:   2017-11-13T02:17:02Z

[hotfix][docs] Remove the caveat about Cassandra connector.

With FLINK-4500 committed, the Cassandra connector now flushes pending 
mutations properly when a checkpoint is triggered.




---


[jira] [Commented] (FLINK-4500) Cassandra sink can lose messages

2017-11-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4500?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16249046#comment-16249046
 ] 

ASF GitHub Bot commented on FLINK-4500:
---

GitHub user mcfongtw opened a pull request:

https://github.com/apache/flink/pull/5002

[hotfix][docs] Remove the caveat about Cassandra connector.




> Cassandra sink can lose messages
> 
>
> Key: FLINK-4500
> URL: https://issues.apache.org/jira/browse/FLINK-4500
> Project: Flink
>  Issue Type: Bug
>  Components: Cassandra Connector
>Affects Versions: 1.1.0
>Reporter: Elias Levy
>Assignee: Michael Fong
> Fix For: 1.4.0
>
>
> The problem is the same as the one I pointed out for the Kafka producer sink 
> (FLINK-4027).  The CassandraTupleSink's send() and CassandraPojoSink's send() 
> both send data asynchronously to Cassandra and record whether an error occurs 
> via a future callback.  But CassandraSinkBase does not implement 
> Checkpointed, so it can't stop a checkpoint from happening even though there 
> are Cassandra queries in flight from before the checkpoint that may fail.  If 
> they do fail, they would subsequently not be replayed when the job recovers, 
> and would thus be lost.
> In addition, 
> CassandraSinkBase's close() should check whether there is a pending exception 
> and throw it, rather than close silently.  It should also wait for any 
> pending async queries to complete and check their status before closing.
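The flush-and-check pattern the report asks for can be sketched with plain `CompletableFuture`s. This is a hedged illustration, not Flink's actual `CassandraSinkBase`: the class and method names are invented for the example.

```java
import java.util.Set;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicReference;

// Sketch of the requested behavior: track in-flight async writes, record the
// first error from a callback, wait for pending writes in close(), and rethrow
// any recorded error instead of closing silently.
public class AsyncSinkSketch {
    private final Set<CompletableFuture<Void>> pending = ConcurrentHashMap.newKeySet();
    private final AtomicReference<Throwable> firstError = new AtomicReference<>();

    public void send(CompletableFuture<Void> write) {
        pending.add(write);
        write.whenComplete((v, err) -> {
            if (err != null) {
                firstError.compareAndSet(null, err); // remember, to rethrow on close
            }
            pending.remove(write);
        });
    }

    public void close() throws Exception {
        // Wait for every in-flight write instead of closing immediately.
        CompletableFuture.allOf(pending.toArray(new CompletableFuture[0]))
                .exceptionally(err -> null) // failures are surfaced via firstError below
                .join();
        Throwable t = firstError.get();
        if (t != null) {
            throw new Exception("pending async write failed", t);
        }
    }

    public static void main(String[] args) throws Exception {
        AsyncSinkSketch sink = new AsyncSinkSketch();
        sink.send(CompletableFuture.completedFuture(null)); // a write that succeeded
        sink.close();
        System.out.println("closed cleanly");
    }
}
```

A checkpoint barrier would use the same wait-then-check step before acknowledging, so no in-flight mutation can be silently lost.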





[jira] [Commented] (FLINK-8006) flink-daemon.sh: line 103: binary operator expected

2017-11-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8006?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248964#comment-16248964
 ] 

ASF GitHub Bot commented on FLINK-8006:
---

Github user elbaulp commented on the issue:

https://github.com/apache/flink/pull/4968
  
Hi, do I need to do anything more for this PR to be merged?


> flink-daemon.sh: line 103: binary operator expected
> ---
>
> Key: FLINK-8006
> URL: https://issues.apache.org/jira/browse/FLINK-8006
> Project: Flink
>  Issue Type: Bug
>  Components: Startup Shell Scripts
>Affects Versions: 1.3.2
> Environment: Linux 4.12.12-gentoo #2 SMP x86_64 Intel(R) Core(TM) 
> i3-3110M CPU @ 2.40GHz GenuineIntel GNU/Linux
>Reporter: Alejandro 
>  Labels: easyfix, newbie
>   Original Estimate: 1m
>  Remaining Estimate: 1m
>
> When executing `./bin/start-local.sh` I get
> flink-1.3.2/bin/flink-daemon.sh: line 79: $pid: ambiguous redirect
> flink-1.3.2/bin/flink-daemon.sh: line 103: [: /tmp/flink-Alejandro: binary 
> operator expected
> I solved the problem by replacing $pid with "$pid" on lines 79 and 103.
> Should I open a PR for this?







[jira] [Updated] (FLINK-7049) TestingApplicationMaster keeps running after integration tests finish

2017-11-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7049?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-7049:
--
Description: 
After integration tests finish, TestingApplicationMaster is still running.

Toward the end of 
flink-yarn-tests/target/flink-yarn-tests-ha/flink-yarn-tests-ha-logDir-nm-1_0/application_1498768839874_0001/container_1498768839874_0001_03_01/jobmanager.log
 :

{code}
2017-06-29 22:09:49,681 INFO  org.apache.zookeeper.ClientCnxn   
- Opening socket connection to server 127.0.0.1/127.0.0.1:46165
2017-06-29 22:09:49,681 ERROR 
org.apache.flink.shaded.org.apache.curator.ConnectionState- Authentication 
failed
2017-06-29 22:09:49,682 WARN  org.apache.zookeeper.ClientCnxn   
- Session 0x0 for server null, unexpected error, closing socket 
connection and attempting reconnect
java.net.ConnectException: Connection refused
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
  at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
2017-06-29 22:09:50,782 WARN  org.apache.zookeeper.ClientCnxn   
- SASL configuration failed: 
javax.security.auth.login.LoginException: No JAAS configuration section named 
'Client' was found in specified JAAS configuration file: 
'/tmp/jaas-3597644653611245612.conf'. Will continue connection to Zookeeper 
server without SASL authentication, if Zookeeper server allows it.
2017-06-29 22:09:50,782 INFO  org.apache.zookeeper.ClientCnxn   
- Opening socket connection to server 127.0.0.1/127.0.0.1:46165
2017-06-29 22:09:50,782 ERROR 
org.apache.flink.shaded.org.apache.curator.ConnectionState- Authentication 
failed
2017-06-29 22:09:50,783 WARN  org.apache.zookeeper.ClientCnxn   
- Session 0x0 for server null, unexpected error, closing socket 
connection and attempting reconnect
java.net.ConnectException: Connection refused
  at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
  at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
  at 
org.apache.zookeeper.ClientCnxnSocketNIO.doTransport(ClientCnxnSocketNIO.java:361)
  at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1081)
{code}



> TestingApplicationMaster keeps running after integration tests finish
> -
>
> 

[jira] [Updated] (FLINK-7775) Remove unreferenced method PermanentBlobCache#getNumberOfCachedJobs

2017-11-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7775?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-7775:
--
Description: 
{code}
  public int getNumberOfCachedJobs() {
return jobRefCounters.size();
  }
{code}

The method is not used.
We should remove it.



> Remove unreferenced method PermanentBlobCache#getNumberOfCachedJobs
> ---
>
> Key: FLINK-7775
> URL: https://issues.apache.org/jira/browse/FLINK-7775
> Project: Flink
>  Issue Type: Task
>  Components: Local Runtime
>Reporter: Ted Yu
>Priority: Minor
>
> {code}
>   public int getNumberOfCachedJobs() {
> return jobRefCounters.size();
>   }
> {code}
> The method is not used.
> We should remove it.





[jira] [Updated] (FLINK-7877) Fix compilation against the Hadoop 3 beta1 release

2017-11-12 Thread Ted Yu (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-7877?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ted Yu updated FLINK-7877:
--
Labels:   (was: build)

> Fix compilation against the Hadoop 3 beta1 release
> --
>
> Key: FLINK-7877
> URL: https://issues.apache.org/jira/browse/FLINK-7877
> Project: Flink
>  Issue Type: Bug
>  Components: Build System
>Reporter: Ted Yu
>
> When compiling against hadoop 3.0.0-beta1, I got:
> {code}
> [ERROR] 
> /mnt/disk2/a/flink/flink-yarn/src/test/java/org/apache/flink/yarn/UtilsTest.java:[224,16]
>  org.apache.flink.yarn.UtilsTest.TestingContainer is not abstract and does 
> not override abstract method 
> setExecutionType(org.apache.hadoop.yarn.api.records.ExecutionType) in 
> org.apache.hadoop.yarn.api.records.Container
> {code}
> There may be other Hadoop APIs that need adjustment.





[jira] [Commented] (FLINK-8046) ContinuousFileMonitoringFunction wrongly ignores files with exact same timestamp

2017-11-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248924#comment-16248924
 ] 

ASF GitHub Bot commented on FLINK-8046:
---

Github user juanmirocks commented on the issue:

https://github.com/apache/flink/pull/4997
  
Perhaps access time could be leveraged. However, as of Flink 1.3.2, 
`FileStatus#getAccessTime()` (at least for a local file system) always returns 
`0` ... 


> ContinuousFileMonitoringFunction wrongly ignores files with exact same 
> timestamp
> 
>
> Key: FLINK-8046
> URL: https://issues.apache.org/jira/browse/FLINK-8046
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.3.2
>Reporter: Juan Miguel Cejuela
>  Labels: stream
> Fix For: 1.5.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The current monitoring of files sets the internal variable 
> `globalModificationTime` to filter out files that are "older". However, the 
> current test (to check "older") does 
> `boolean shouldIgnore = modificationTime <= globalModificationTime;` (from 
> `shouldIgnore`).
> The comparison should strictly be SMALLER (NOT smaller or equal). The method 
> documentation also states "This happens if the modification time of the file 
> is _smaller_ than...".
> Accepting equality as "older" causes some files with the exact same 
> timestamp to be ignored. The behavior is also non-deterministic: the first 
> file to be accepted ("first" being pretty much random) causes the rest of the 
> files with the exact same timestamp to be ignored.
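The off-by-one described above can be reproduced in a few lines. This is a minimal sketch of the comparison only, not Flink's actual `ContinuousFileMonitoringFunction`; the timestamps are made up.

```java
// Two files written in the same millisecond: with <=, the second one is
// wrongly ignored once the first bumps the watermark; with <, both pass.
public class TimestampFilterSketch {
    public static void main(String[] args) {
        long[] files = {1000L, 1000L}; // identical modification timestamps

        int processedWithLeq = 0;
        long gmtLeq = 0L; // globalModificationTime, current behavior
        for (long t : files) {
            boolean shouldIgnore = t <= gmtLeq; // current check
            if (!shouldIgnore) { processedWithLeq++; gmtLeq = t; }
        }

        int processedWithLt = 0;
        long gmtLt = 0L; // globalModificationTime, proposed behavior
        for (long t : files) {
            boolean shouldIgnore = t < gmtLt;   // strictly-smaller check
            if (!shouldIgnore) { processedWithLt++; gmtLt = t; }
        }

        System.out.println(processedWithLeq + " " + processedWithLt);
    }
}
```

The current `<=` check processes only one of the two files, while the strict `<` check processes both (the trade-off of strict comparison is discussed in the follow-up comment on the PR).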







[jira] [Commented] (FLINK-8046) ContinuousFileMonitoringFunction wrongly ignores files with exact same timestamp

2017-11-12 Thread ASF GitHub Bot (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-8046?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248913#comment-16248913
 ] 

ASF GitHub Bot commented on FLINK-8046:
---

Github user juanmirocks commented on the issue:

https://github.com/apache/flink/pull/4997
  
No, I don't think this is a suitable solution: if = is 
allowed in the comparison, the very same file will be triggered multiple times.

Note that the older, deprecated `FileMonitoringFunction` solves this 
situation by keeping a map of filenames to modification times. That is more 
robust but also more expensive memory-wise. The map could be bounded by using a 
`LinkedHashMap` with `removeEldestEntry`.
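The bounded-map idea above can be sketched with plain JDK classes; the file names and the limit are illustrative only, not values from Flink.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of a bounded "seen files" map: a LinkedHashMap whose
// removeEldestEntry hook evicts the oldest insertion once a size
// limit is exceeded, capping memory use.
public class BoundedSeenFiles {
    public static void main(String[] args) {
        final int maxEntries = 2; // hypothetical limit
        Map<String, Long> seen = new LinkedHashMap<String, Long>(16, 0.75f, false) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<String, Long> eldest) {
                return size() > maxEntries;
            }
        };

        seen.put("a.txt", 1000L);
        seen.put("b.txt", 1000L);
        seen.put("c.txt", 2000L); // evicts a.txt, the eldest entry

        System.out.println(seen.keySet());
    }
}
```

The eviction is by insertion order here; passing `true` for the `accessOrder` constructor argument would evict by least-recent access instead.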


> ContinuousFileMonitoringFunction wrongly ignores files with exact same 
> timestamp
> 
>
> Key: FLINK-8046
> URL: https://issues.apache.org/jira/browse/FLINK-8046
> Project: Flink
>  Issue Type: Bug
>  Components: Streaming
>Affects Versions: 1.3.2
>Reporter: Juan Miguel Cejuela
>  Labels: stream
> Fix For: 1.5.0
>
>   Original Estimate: 24h
>  Remaining Estimate: 24h
>
> The current monitoring of files sets the internal variable 
> `globalModificationTime` to filter out files that are "older". However, the 
> current test (to check "older") does 
> `boolean shouldIgnore = modificationTime <= globalModificationTime;` (from 
> `shouldIgnore`).
> The comparison should strictly be SMALLER (NOT smaller or equal). The method 
> documentation also states "This happens if the modification time of the file 
> is _smaller_ than...".
> Accepting equality as "older" causes some files with the exact same 
> timestamp to be ignored. The behavior is also non-deterministic: the first 
> file to be accepted ("first" being pretty much random) causes the rest of the 
> files with the exact same timestamp to be ignored.







[jira] [Created] (FLINK-8049) RestClient#shutdown() ignores exceptions thrown when shutting down netty.

2017-11-12 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-8049:
-

 Summary: RestClient#shutdown() ignores exceptions thrown when 
shutting down netty.
 Key: FLINK-8049
 URL: https://issues.apache.org/jira/browse/FLINK-8049
 Project: Flink
  Issue Type: Bug
  Components: REST
Affects Versions: 1.4.0
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas
Priority: Critical








[jira] [Created] (FLINK-8050) RestServer#shutdown() ignores exceptions thrown when shutting down netty.

2017-11-12 Thread Kostas Kloudas (JIRA)
Kostas Kloudas created FLINK-8050:
-

 Summary: RestServer#shutdown() ignores exceptions thrown when 
shutting down netty.
 Key: FLINK-8050
 URL: https://issues.apache.org/jira/browse/FLINK-8050
 Project: Flink
  Issue Type: Bug
  Components: REST
Affects Versions: 1.4.0
Reporter: Kostas Kloudas
Assignee: Kostas Kloudas








[jira] [Commented] (FLINK-4575) DataSet aggregate methods should support POJOs

2017-11-12 Thread Gabor Gevay (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4575?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16248874#comment-16248874
 ] 

Gabor Gevay commented on FLINK-4575:


[~vcycyv], I'm not sure how {{getFlatFields}} would help here. (How would you 
convert back to the POJO at the end?)

But if you would like to work on this jira, the approach outlined in the 
jira description should work. I think it is the cleanest solution, since 
{{FieldAccessor}} is exactly for situations like this one, where we have to 
get and set a field based on a field expression. However, you would have to 
resolve https://issues.apache.org/jira/browse/FLINK-4578 first. I think that 
could be resolved by the solution I wrote in a comment there.
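What a field accessor does can be illustrated with plain reflection. This is a hedged, illustrative sketch only: Flink's real {{FieldAccessor}} also handles nested field expressions, Tuples, and serializer-aware copies, none of which appear here, and the `Pojo` class is invented for the example.

```java
import java.lang.reflect.Field;

// Illustration of the core FieldAccessor idea: given a field expression
// (here a one-level field name), read a POJO field, aggregate it, and
// write it back.
public class FieldAccessorSketch {
    public static class Pojo {
        public int count;
    }

    public static void main(String[] args) throws Exception {
        // Resolve the field expression "count" once, up front.
        Field f = Pojo.class.getField("count");

        Pojo p = new Pojo();
        f.set(p, 41);                  // set via the resolved accessor
        f.set(p, (int) f.get(p) + 1);  // read, aggregate, write back
        System.out.println(f.get(p));
    }
}
```

An aggregating UDF built on such accessors works identically for Tuples and POJOs, which is exactly why the jira description suggests swapping stored field positions for {{FieldAccessors}}.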

> DataSet aggregate methods should support POJOs
> --
>
> Key: FLINK-4575
> URL: https://issues.apache.org/jira/browse/FLINK-4575
> Project: Flink
>  Issue Type: Improvement
>  Components: DataSet API
>Reporter: Gabor Gevay
>Priority: Minor
>  Labels: starter
>
> The aggregate methods of DataSets (aggregate, sum, min, max) currently only 
> support Tuples, with the fields specified by indices. With 
> https://issues.apache.org/jira/browse/FLINK-3702 resolved, adding support for 
> POJOs and field expressions would be easy: {{AggregateOperator}} would create 
> {{FieldAccessors}} instead of just storing field positions, and 
> {{AggregateOperator.AggregatingUdf}} would use these {{FieldAccessors}} 
> instead of the Tuple field access methods.


