[jira] [Commented] (FLINK-3952) Bump Netty to 4.1

2018-05-14 Thread Alexey Diomin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16474137#comment-16474137
 ] 

Alexey Diomin commented on FLINK-3952:
--

[~pnowojski] The main problem with the migration was netty-router, which still 
uses an old version of Netty.

> Bump Netty to 4.1
> -
>
> Key: FLINK-3952
> URL: https://issues.apache.org/jira/browse/FLINK-3952
> Project: Flink
>  Issue Type: Improvement
>  Components: Core, Network
>Reporter: rektide de la fey
>Assignee: Piotr Nowojski
>Priority: Major
>  Labels: netty
>
> Netty 4.1 is about to release final. This release has [a number of 
> significant 
> enhancements|http://netty.io/wiki/new-and-noteworthy-in-4.1.html], and in 
> particular I find HTTP/2 codecs to be incredibly desirable to have. 
> Additionally, hopefully, the [Hadoop patches for Netty 
> 4.1|https://issues.apache.org/jira/browse/HADOOP-11716] get some tests and 
> get merged, & I believe if/when that happens it'll be important for Flink to 
> also be using the new Netty minor version.



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)


[jira] [Commented] (FLINK-7735) Improve date/time handling in publically-facing Expressions

2017-09-29 Thread Alexey Diomin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-7735?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16186007#comment-16186007
 ] 

Alexey Diomin commented on FLINK-7735:
--

Java 7 support will be dropped very soon (FLINK-7242).

Maybe it makes sense to discuss using the Java 8 time API?
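
For illustration, a minimal Scala sketch with plain java.time (not Flink or 
Table API code) of the ambiguity described below: the same epoch value reads 
differently depending on the zone used to interpret it, while java.time types 
make that zone explicit.

{code}
import java.time.{Instant, ZoneId, ZoneOffset}

// One zone-independent point in time...
val instant = Instant.ofEpochMilli(0L)

// ...rendered under two different zones gives two different wall-clock values.
println(instant.atZone(ZoneOffset.UTC))                   // 1970-01-01T00:00Z
println(instant.atZone(ZoneId.of("America/Los_Angeles"))) // 1969-12-31T16:00-08:00[America/Los_Angeles]
{code}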

> Improve date/time handling in publically-facing Expressions
> ---
>
> Key: FLINK-7735
> URL: https://issues.apache.org/jira/browse/FLINK-7735
> Project: Flink
>  Issue Type: Wish
>  Components: Table API & SQL
>Reporter: Kent Murra
>Priority: Minor
>
> I would like to discuss potential improvements to date/time/timestamp 
> handling in Expressions.  Since Flink now offers expression push down for 
> table sources, which includes time-related functions, timezone handling is 
> more visible to the end user.
> I think that the current usage of java.sql.Time, java.sql.Date, and 
> java.sql.Timestamp is fairly ambiguous.  We're taking a Date subclass in the 
> constructor of Literal, and assuming that the year, month, day, and hour 
> fields apply to UTC rather than the user's default timezone.   Per that 
> assumption, Flink is [adjusting the value of the epoch 
> timestamp|https://github.com/apache/flink/blob/master/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/expressions/literals.scala#L106]
>  silently when converting to the RexLiteral.  This provides correct behavior 
> if the user assumes that the year/month/day/hour fields in the Date object 
> are the same timezone that the SQL statement assumes (which is UTC).  
> However, if they work at all with the epoch timestamp (which is a public 
> field) this can lead to incorrect results.  Moreover, it's confusing if you're 
> considering the time zones your data is in, requiring some amount of research 
> to determine correct behavior.
> It would be ideal to:
> # Provide primitives that have time-zone information associated by default, 
> thus disambiguating the times. 
> # Properly document all TimeZone related assumptions in Expression literals.  
> # Document in the web documentation that the Calcite TIMESTAMP function 
> assumes the timestamp is in UTC.  
> # Add a timezone-based date parsing function to the SQL language.
> Regarding the primitives, since we have to support Java 7, we can't use Java 
> 8 time API.  I'm guessing it'd be a decision between using Joda Time or 
> making thin data structures that could easily be converted to various other 
> time primitives.



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)


[jira] [Comment Edited] (FLINK-3952) Bump Netty to 4.1

2017-05-20 Thread Alexey Diomin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018605#comment-16018605
 ] 

Alexey Diomin edited comment on FLINK-3952 at 5/20/17 8:53 PM:
---

No, you are mistaken.
Try
{code}
jar tvf flink-dist_2.10-1.2.1.jar | grep netty
{code}

You will find that only the old Netty 3.x is shaded; in addition, we 
distribute, without shading, both Netty 3.x and the currently used Netty 4.0.x:
{code}
org/apache/flink/hadoop/shaded/org/jboss/netty/
org/jboss/netty/
io/netty/
{code}

As a result, if you try to use Apache Beam with Flink, you get an error like 
https://gist.github.com/xhumanoid/291d7bfc50f830857971c15c34083351

The reason is the mix of Netty 4.1 from Beam and Netty 4.0 from Flink.
My current hotfix is to exclude Netty from the resulting jar of my Beam 
application, but it is a potential problem because Beam uses gRPC, which 
requires Netty 4.1.

However, because tv.cntt:netty-router:jar:1.10:compile works with Netty 4.0 
only, it isn't possible to update to Netty 4.1.

netty-router looks like it was dropped from any support =(
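
For illustration only, a minimal sbt sketch of the exclusion hotfix mentioned 
above (the actual build may well be Maven; this is an assumption, not what was 
used here):

{code}
// build.sbt -- hedged sketch: drop every transitive io.netty artifact from the
// application jar, so the only Netty on the classpath is the one shipped by
// flink-dist. As noted above, gRPC then runs against that Netty 4.0.
excludeDependencies += ExclusionRule(organization = "io.netty")
{code}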


was (Author: humanoid):
No, you are mistaken.
Try
{code}
jar tvf flink-dist_2.10-1.2.1.jar | grep netty
{code}

You will find that only the old Netty 3.x is shaded; in addition, we 
distribute, without shading, both Netty 3.x and the currently used Netty 4.0.x:
{code}
org/apache/flink/hadoop/shaded/org/jboss/netty/
org/jboss/netty/
io/netty/
{code}

As a result, if you try to use Apache Beam with Flink, you get an error like 
https://gist.github.com/xhumanoid/291d7bfc50f830857971c15c34083351

The reason is the mix of Netty 4.1 from Beam and Netty 4.0 from Flink.
My current hotfix is to exclude Netty from the resulting jar of my Beam 
application, but it is a potential problem because Beam uses gRPC, which 
requires Netty 4.1.

However, because tv.cntt:netty-router:jar:1.10:compile works with Netty 4.0 
only, it isn't possible to update to Netty 4.1.

netty-router looks like it was dropped from any support =(

> Bump Netty to 4.1
> -
>
> Key: FLINK-3952
> URL: https://issues.apache.org/jira/browse/FLINK-3952
> Project: Flink
>  Issue Type: Improvement
>  Components: Core, Network
>Reporter: rektide de la fey
>  Labels: netty
>
> Netty 4.1 is about to release final. This release has [a number of 
> significant 
> enhancements|http://netty.io/wiki/new-and-noteworthy-in-4.1.html], and in 
> particular I find HTTP/2 codecs to be incredibly desirable to have. 
> Additionally, hopefully, the [Hadoop patches for Netty 
> 4.1|https://issues.apache.org/jira/browse/HADOOP-11716] get some tests and 
> get merged, & I believe if/when that happens it'll be important for Flink to 
> also be using the new Netty minor version.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Comment Edited] (FLINK-3952) Bump Netty to 4.1

2017-05-20 Thread Alexey Diomin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018605#comment-16018605
 ] 

Alexey Diomin edited comment on FLINK-3952 at 5/20/17 8:52 PM:
---

No, you are mistaken.
Try
{code}
jar tvf flink-dist_2.10-1.2.1.jar | grep netty
{code}

You will find that only the old Netty 3.x is shaded; in addition, we 
distribute, without shading, both Netty 3.x and the currently used Netty 4.0.x:
{code}
org/apache/flink/hadoop/shaded/org/jboss/netty/
org/jboss/netty/
io/netty/
{code}

As a result, if you try to use Apache Beam with Flink, you get an error like 
https://gist.github.com/xhumanoid/291d7bfc50f830857971c15c34083351

The reason is the mix of Netty 4.1 from Beam and Netty 4.0 from Flink.
My current hotfix is to exclude Netty from the resulting jar of my Beam 
application, but it is a potential problem because Beam uses gRPC, which 
requires Netty 4.1.

However, because tv.cntt:netty-router:jar:1.10:compile works with Netty 4.0 
only, it isn't possible to update to Netty 4.1.

netty-router looks like it was dropped from any support =(


was (Author: humanoid):
No, you are mistaken.
Try
{code}
jar tvf flink-dist_2.10-1.2.1.jar | grep netty
{code}

You will find that only the old Netty 3.x is shaded; in addition, we 
distribute, without shading, both Netty 3.x and the currently used Netty 4.0.x:
{code}
org/apache/flink/hadoop/shaded/org/jboss/netty/
org/jboss/netty/
io/netty/
{code}

As a result, if you try to use Apache Beam with Flink, you get an error like 
https://gist.github.com/xhumanoid/291d7bfc50f830857971c15c34083351

The reason is the mix of Netty 4.1 from Beam and Netty 4.0 from Flink.
My current hotfix is to exclude Netty from the resulting jar of my Beam 
application, but it is a potential problem because Beam uses gRPC, which 
requires Netty 4.1.

> Bump Netty to 4.1
> -
>
> Key: FLINK-3952
> URL: https://issues.apache.org/jira/browse/FLINK-3952
> Project: Flink
>  Issue Type: Improvement
>  Components: Core, Network
>Reporter: rektide de la fey
>  Labels: netty
>
> Netty 4.1 is about to release final. This release has [a number of 
> significant 
> enhancements|http://netty.io/wiki/new-and-noteworthy-in-4.1.html], and in 
> particular I find HTTP/2 codecs to be incredibly desirable to have. 
> Additionally, hopefully, the [Hadoop patches for Netty 
> 4.1|https://issues.apache.org/jira/browse/HADOOP-11716] get some tests and 
> get merged, & I believe if/when that happens it'll be important for Flink to 
> also be using the new Netty minor version.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Commented] (FLINK-3952) Bump Netty to 4.1

2017-05-20 Thread Alexey Diomin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-3952?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16018605#comment-16018605
 ] 

Alexey Diomin commented on FLINK-3952:
--

No, you are mistaken.
Try
{code}
jar tvf flink-dist_2.10-1.2.1.jar | grep netty
{code}

You will find that only the old Netty 3.x is shaded; in addition, we 
distribute, without shading, both Netty 3.x and the currently used Netty 4.0.x:
{code}
org/apache/flink/hadoop/shaded/org/jboss/netty/
org/jboss/netty/
io/netty/
{code}

As a result, if you try to use Apache Beam with Flink, you get an error like 
https://gist.github.com/xhumanoid/291d7bfc50f830857971c15c34083351

The reason is the mix of Netty 4.1 from Beam and Netty 4.0 from Flink.
My current hotfix is to exclude Netty from the resulting jar of my Beam 
application, but it is a potential problem because Beam uses gRPC, which 
requires Netty 4.1.

> Bump Netty to 4.1
> -
>
> Key: FLINK-3952
> URL: https://issues.apache.org/jira/browse/FLINK-3952
> Project: Flink
>  Issue Type: Improvement
>  Components: Core, Network
>Reporter: rektide de la fey
>  Labels: netty
>
> Netty 4.1 is about to release final. This release has [a number of 
> significant 
> enhancements|http://netty.io/wiki/new-and-noteworthy-in-4.1.html], and in 
> particular I find HTTP/2 codecs to be incredibly desirable to have. 
> Additionally, hopefully, the [Hadoop patches for Netty 
> 4.1|https://issues.apache.org/jira/browse/HADOOP-11716] get some tests and 
> get merged, & I believe if/when that happens it'll be important for Flink to 
> also be using the new Netty minor version.



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)


[jira] [Created] (FLINK-5497) remove duplicated tests

2017-01-15 Thread Alexey Diomin (JIRA)
Alexey Diomin created FLINK-5497:


 Summary: remove duplicated tests
 Key: FLINK-5497
 URL: https://issues.apache.org/jira/browse/FLINK-5497
 Project: Flink
  Issue Type: Improvement
  Components: Tests
Reporter: Alexey Diomin
Priority: Minor


We currently have a test that runs the same code 4 times, and every run takes 
17+ seconds.

We need to do a small refactoring and remove the duplicated code.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-5431) time format for akka status

2017-01-10 Thread Alexey Diomin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-5431?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15817437#comment-15817437
 ] 

Alexey Diomin commented on FLINK-5431:
--

Comment from Till Rohrmann:

---
I agree with your proposal to use "yyyy-MM-dd HH:mm:ss" or even
"yyyy-MM-ddTHH:mm:ss" to follow the ISO standard per default but still give
the user the possibility to configure it.
---
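
For illustration, a minimal Scala sketch of such a configurable, ISO-style 
default (the property name is made up, not an actual Flink option):

{code}
import java.time.LocalDateTime
import java.time.format.DateTimeFormatter

// Hypothetical configuration key; falls back to the ISO-like default proposed above.
val pattern = sys.props.getOrElse("status.timestamp-format", "yyyy-MM-dd HH:mm:ss")
val formatter = DateTimeFormatter.ofPattern(pattern)

println(LocalDateTime.now().format(formatter)) // e.g. 2017-01-10 14:03:27
{code}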

> time format for akka status
> ---
>
> Key: FLINK-5431
> URL: https://issues.apache.org/jira/browse/FLINK-5431
> Project: Flink
>  Issue Type: Improvement
>Reporter: Alexey Diomin
>Assignee: Anton Solovev
>Priority: Minor
>
> In ExecutionGraphMessages we have code
> {code}
> private val DATE_FORMATTER: SimpleDateFormat = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss")
> {code}
> But sometimes it causes confusion when the main logger is configured with 
> "dd/MM/yyyy".
> We need to make this format configurable, or maybe keep only "HH:mm:ss", to 
> prevent misunderstanding of the output date-time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-5431) time format for akka status

2017-01-09 Thread Alexey Diomin (JIRA)
Alexey Diomin created FLINK-5431:


 Summary: time format for akka status
 Key: FLINK-5431
 URL: https://issues.apache.org/jira/browse/FLINK-5431
 Project: Flink
  Issue Type: Improvement
Reporter: Alexey Diomin
Priority: Minor


In ExecutionGraphMessages we have code
{code}
private val DATE_FORMATTER: SimpleDateFormat = new SimpleDateFormat("MM/dd/yyyy HH:mm:ss")
{code}

But sometimes it causes confusion when the main logger is configured with 
"dd/MM/yyyy".

We need to make this format configurable, or maybe keep only "HH:mm:ss", to 
prevent misunderstanding of the output date-time.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Commented] (FLINK-4148) incorrect calculation distance in QuadTree

2016-08-24 Thread Alexey Diomin (JIRA)

[ 
https://issues.apache.org/jira/browse/FLINK-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=15434345#comment-15434345
 ] 

Alexey Diomin commented on FLINK-4148:
--

Yes, I will make a PR.

> incorrect calculation distance in QuadTree
> --
>
> Key: FLINK-4148
> URL: https://issues.apache.org/jira/browse/FLINK-4148
> Project: Flink
>  Issue Type: Bug
>Reporter: Alexey Diomin
>Priority: Trivial
> Attachments: 
> 0001-FLINK-4148-incorrect-calculation-minDist-distance-in.patch
>
>
> https://github.com/apache/flink/blob/master/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/QuadTree.scala#L105
> Because EuclideanDistanceMetric extends SquaredEuclideanDistanceMetric, we 
> always take the first case and never reach the case with math.sqrt(minDist).
> The correct fix is to match EuclideanDistanceMetric first and 
> SquaredEuclideanDistanceMetric after it.
> P.S. Because EuclideanDistanceMetric is more expensive to compute and stays as 
> the default DistanceMetric, this can cause some performance degradation for 
> KNN with default parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4148) incorrect calculation distance in QuadTree

2016-07-04 Thread Alexey Diomin (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Diomin updated FLINK-4148:
-
Attachment: 0001-FLINK-4148-incorrect-calculation-minDist-distance-in.patch

small patch

> incorrect calculation distance in QuadTree
> --
>
> Key: FLINK-4148
> URL: https://issues.apache.org/jira/browse/FLINK-4148
> Project: Flink
>  Issue Type: Bug
>Reporter: Alexey Diomin
>Priority: Trivial
> Attachments: 
> 0001-FLINK-4148-incorrect-calculation-minDist-distance-in.patch
>
>
> https://github.com/apache/flink/blob/master/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/QuadTree.scala#L105
> Because EuclideanDistanceMetric extends SquaredEuclideanDistanceMetric, we 
> always take the first case and never reach the case with math.sqrt(minDist).
> The correct fix is to match EuclideanDistanceMetric first and 
> SquaredEuclideanDistanceMetric after it.
> P.S. Because EuclideanDistanceMetric is more expensive to compute and stays as 
> the default DistanceMetric, this can cause some performance degradation for 
> KNN with default parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Updated] (FLINK-4148) incorrect calculation distance in QuadTree

2016-07-04 Thread Alexey Diomin (JIRA)

 [ 
https://issues.apache.org/jira/browse/FLINK-4148?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Alexey Diomin updated FLINK-4148:
-
Description: 
https://github.com/apache/flink/blob/master/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/QuadTree.scala#L105

Because EuclideanDistanceMetric extends SquaredEuclideanDistanceMetric, we 
always take the first case and never reach the case with math.sqrt(minDist).

The correct fix is to match EuclideanDistanceMetric first and 
SquaredEuclideanDistanceMetric after it.

P.S. Because EuclideanDistanceMetric is more expensive to compute and stays as 
the default DistanceMetric, this can cause some performance degradation for KNN 
with default parameters.

  was:
https://github.com/apache/flink/blob/master/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/QuadTree.scala#L105

Because EuclideanDistanceMetric extends SquaredEuclideanDistanceMetric, we 
always take the first case and never reach the case with math.sqrt(minDist).

The correct fix is to match EuclideanDistanceMetric first and 
SquaredEuclideanDistanceMetric after it.


> incorrect calculation distance in QuadTree
> --
>
> Key: FLINK-4148
> URL: https://issues.apache.org/jira/browse/FLINK-4148
> Project: Flink
>  Issue Type: Bug
>Reporter: Alexey Diomin
>Priority: Trivial
>
> https://github.com/apache/flink/blob/master/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/QuadTree.scala#L105
> Because EuclideanDistanceMetric extends SquaredEuclideanDistanceMetric, we 
> always take the first case and never reach the case with math.sqrt(minDist).
> The correct fix is to match EuclideanDistanceMetric first and 
> SquaredEuclideanDistanceMetric after it.
> P.S. Because EuclideanDistanceMetric is more expensive to compute and stays as 
> the default DistanceMetric, this can cause some performance degradation for 
> KNN with default parameters.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)


[jira] [Created] (FLINK-4148) incorrect calculation distance in QuadTree

2016-07-04 Thread Alexey Diomin (JIRA)
Alexey Diomin created FLINK-4148:


 Summary: incorrect calculation distance in QuadTree
 Key: FLINK-4148
 URL: https://issues.apache.org/jira/browse/FLINK-4148
 Project: Flink
  Issue Type: Bug
Reporter: Alexey Diomin
Priority: Trivial


https://github.com/apache/flink/blob/master/flink-libraries/flink-ml/src/main/scala/org/apache/flink/ml/nn/QuadTree.scala#L105

Because EuclideanDistanceMetric extends SquaredEuclideanDistanceMetric, we 
always take the first case and never reach the case with math.sqrt(minDist).

The correct fix is to match EuclideanDistanceMetric first and 
SquaredEuclideanDistanceMetric after it.
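
For illustration, a minimal Scala sketch of the match-ordering problem with 
simplified stand-in classes (not the real flink-ml types): the subclass has to 
be matched before its parent, otherwise the sqrt branch is unreachable.

{code}
// Simplified stand-ins for SquaredEuclideanDistanceMetric / EuclideanDistanceMetric.
class SquaredEuclidean
class Euclidean extends SquaredEuclidean

def adjust(metric: SquaredEuclidean, squaredDist: Double): Double = metric match {
  case _: Euclidean        => math.sqrt(squaredDist) // subclass must be matched first
  case _: SquaredEuclidean => squaredDist
}

// With the two cases swapped, `case _: SquaredEuclidean` would also match an
// Euclidean instance, so math.sqrt would never be applied -- the bug described above.
{code}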



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)