Re: Flink SQL client always cost 2 minutes to submit job to a local cluster

2018-12-28 Thread Hequn Cheng
Hi, yinhua

Thanks for looking into the problem. I'm not familiar with this part of
the code. As a workaround, you can put your jars into the Flink lib
folder or add them to the classpath. Hope this helps.

Best, Hequn


On Fri, Dec 28, 2018 at 11:52 AM yinhua.dai  wrote:

> I am using Flink 1.6.1. I tried to use the Flink SQL client with some of
> my own jars passed via --jar and --library.
> Executing SQL queries works, but it always takes around 2 minutes to
> submit the job to the local cluster. When I copy my jar into the Flink lib
> folder and remove the --jar and --library parameters, it submits the job
> immediately.
>
> I debugged and found that the 2 minutes are spent by the RestClusterClient
> sending the request, with the jar as payload, to the Flink cluster.
>
> I don't know why it takes 2 minutes to upload the package. Is there a way
> to work around it?
> Thanks.
>
>
>
> --
> Sent from:
> http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
>
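A minimal sketch of the lib-folder workaround (all paths and the jar name are illustrative, and a temporary directory stands in for a real Flink installation):

```shell
# Jars placed in Flink's lib/ are on the cluster classpath at startup,
# so the SQL client no longer uploads them on every job submission.
FLINK_HOME="$(mktemp -d)"         # stand-in for the real install root
mkdir -p "$FLINK_HOME/lib"
touch my-udfs.jar                 # stand-in for a real user jar
cp my-udfs.jar "$FLINK_HOME/lib/"
ls "$FLINK_HOME/lib"              # prints: my-udfs.jar
# Then start the SQL client without --jar/--library, e.g.:
# "$FLINK_HOME/bin/sql-client.sh" embedded
```

Note that jars in lib/ are only picked up when the cluster (re)starts.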


Re: Timestamp conversion problem in Flink Table/SQL

2018-12-28 Thread Hequn Cheng
Hi Jiayichao,

The two methods do not have to appear in pairs, so you can't simply use
timestamp.getTime() instead.
Currently, Flink doesn't support time zone configuration. A timestamp (a
value of type Timestamp) always represents a time in UTC+0. So in the
test of your PR [1], the output timestamp means a time in UTC+0, not a
time in your local time zone. You can verify this by changing your SQL to:
String sqlQuery = "select proctime, LOCALTIMESTAMP from MyTable";

But you raised a good question, and it is true that it would be better to
support time zone configuration in Flink, for example via a global
time zone option. However, it is not a one- or two-line code change: we
need to take all operators into consideration, and it is better to solve
it once and for all.

Best, Hequn

[1] https://github.com/apache/flink/pull/7180


On Fri, Dec 28, 2018 at 3:15 PM jia yichao  wrote:

> Hi community,
>
>
> Recently I have encountered a problem with time conversion in Flink
> Table/SQL. When a processed field contains a timestamp type, the code
> generated by the Flink Table codegen first converts the timestamp to a
> long, and then converts the long back to a timestamp on output.
> In the generated code,
>  “public static long toLong (Timestamp v)” and
> “public static java.sql.Timestamp internalToTimestamp (long v)”
>  are used for the conversion.
> The internal implementation of these two methods adds or subtracts the
> time zone offset.
> In some cases the two methods do not appear in pairs, which makes the
> converted time incorrect: the watermark timestamp metric on the web UI
> equals the correct value plus the time zone offset, and the output of the
> processing-time field equals the correct value minus the time zone offset.
>
> Why do the time conversion methods in Calcite (SqlFunctions.java) add or
> subtract the time zone offset? And why does Flink Table/SQL use these
> conversion methods instead of timestamp.getTime()?
>
>
> == calcite SqlFunctions.java ==
> /** Converts the Java type used for UDF parameters of SQL TIMESTAMP type
>  * ({@link java.sql.Timestamp}) to internal representation (long).
>  *
>  * Converse of {@link #internalToTimestamp(long)}. */
> public static long toLong(Timestamp v) {
>   return toLong(v, LOCAL_TZ);
> }
>
> // mainly intended for java.sql.Timestamp but works for other dates also
> public static long toLong(java.util.Date v, TimeZone timeZone) {
>   final long time = v.getTime();
>   return time + timeZone.getOffset(time);
> }
>
> /** Converts the internal representation of a SQL TIMESTAMP (long) to the
> Java
>  * type used for UDF parameters ({@link java.sql.Timestamp}). */
> public static java.sql.Timestamp internalToTimestamp(long v) {
>   return new java.sql.Timestamp(v - LOCAL_TZ.getOffset(v));
> }
>
>
> Related discussion:
> http://mail-archives.apache.org/mod_mbox/flink-user/201711.mbox/%3c351fd9ab-7a28-4ce0-bd9c-c2a15e537...@163.com%3E
>
> Related issue:https://github.com/apache/flink/pull/7180
>
>
>
> thanks
> Jiayichao
>
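To make the asymmetry concrete, here is a self-contained sketch (the class name and sample value are illustrative) that mirrors the two Calcite methods quoted above and shows that they only cancel out when applied as a pair:

```java
import java.sql.Timestamp;
import java.util.TimeZone;

public class TzRoundTrip {
    static final TimeZone LOCAL_TZ = TimeZone.getDefault();

    // Like SqlFunctions.toLong(Timestamp): epoch millis shifted by the
    // local zone offset, i.e. wall-clock millis as if the zone were UTC.
    static long toLong(Timestamp v) {
        long time = v.getTime();
        return time + LOCAL_TZ.getOffset(time);
    }

    // Like SqlFunctions.internalToTimestamp(long): shifts the offset back.
    static Timestamp internalToTimestamp(long v) {
        return new Timestamp(v - LOCAL_TZ.getOffset(v));
    }

    public static void main(String[] args) {
        Timestamp ts = Timestamp.valueOf("2018-12-28 12:00:00");

        // Applied as a pair, the two shifts cancel out:
        System.out.println(internalToTimestamp(toLong(ts)).equals(ts)); // true

        // Applied alone, toLong() deviates from getTime() by exactly the
        // local zone offset -- the skew described in this thread:
        long skew = toLong(ts) - ts.getTime();
        System.out.println(skew == LOCAL_TZ.getOffset(ts.getTime())); // true
    }
}
```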


When could flink 1.7.1 be downloaded from maven repository

2018-12-28 Thread 徐涛
Hi Experts,
From GitHub I saw that Flink 1.7.1 was released about 7 days ago, but I
still cannot download it from the Maven repository.
May I know why there is some lag between them? When will the jars be
available in the Maven repository?

Best
Henry

How to shut down Flink Web Dashboard in detached Yarn session?

2018-12-28 Thread Sai Inampudi
Hi everyone,

I recently attempted to create a Flink cluster on YARN by executing the 
following:
~/flink-1.5.4/bin/yarn-session.sh -n 5 -tm 2048 -s 4 -d -nm flink_yarn

The command was not completely successful, but it did end up creating
an Apache Flink Dashboard with 1 Task Manager, 1 Task Slot, and 1 Job Manager.

When I look at my YARN Resource Manager, I don't see my application running.
CLI calls for the application id also return nothing.

I would like to kill the existing web dashboard as well as the other lingering 
task manager/job manager so that I can try recreating the yarn session 
successfully. 

Has anyone encountered this before and have any suggestions? I looked through
the documentation [1], which says that to stop a YARN session you should use
the YARN utilities (yarn application -kill <application_id>). However, the
application id in my logs is not found in the Resource Manager, so it seems
to have already been killed (perhaps because the original yarn-session
command did not execute properly?).




[1] 
https://ci.apache.org/projects/flink/flink-docs-release-1.5/ops/deployment/yarn_setup.html#detached-yarn-session


Re: Access Flink configuration in user functions

2018-12-28 Thread Chesnay Schepler

The configuration is not accessible to user functions or the main method.

You could either override ConfigurableStateBackend#configure, or
configure the state backend globally (see
https://ci.apache.org/projects/flink/flink-docs-release-1.7/ops/state/state_backends.html#setting-default-state-backend);
the backend factory does get access to it, I believe.


On 28.12.2018 05:56, Paul Lam wrote:

Hi to all,

I would like to use a custom RocksDBStateBackend which uses the
default checkpoint dir from the Flink configuration, but I failed to find a
way to access the Flink configuration in user code. So I wonder: is it
possible to retrieve Flink configuration values (not user-defined global
parameters) in the user main method, or is the configuration only used
internally? Thanks!


Best,
Paul Lam
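The global default mentioned in the reply is set in flink-conf.yaml; a minimal sketch (the checkpoint path is illustrative):

```yaml
# flink-conf.yaml -- cluster-wide default state backend; the state
# backend factory that creates it receives the full Configuration.
state.backend: rocksdb
state.checkpoints.dir: hdfs:///flink/checkpoints   # illustrative path
```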





Re: Can't list logs or stdout through web console on Flink 1.7 Kubernetes

2018-12-28 Thread Chesnay Schepler

@Steven: Do you happen to know whether a JIRA exists for this issue?

@Joshua: Does this also happen if you use log4j?

On 26.12.2018 11:33, Joshua Fan wrote:

I met a similar situation using Flink 1.7 on YARN.

There was no jobmanager.log on the node, only jobmanager.out and
jobmanager.error, and jobmanager.error contained the log messages. So
there was nothing in the web UI.

I do not know why this happened. By the way, I used Logback for logging.


On Thu, Dec 20, 2018 at 12:50 AM Steven Nelson
<snel...@sourceallies.com> wrote:


There is a known issue for this, I believe. The problem is that the
containerized versions of Flink write logs to STDOUT instead of
files inside the node. If you use docker logs on the
container you can see what you're looking for. I use the Kube
dashboard to view the logs centrally.

Sent from my iPhone

> On Dec 19, 2018, at 9:40 AM, William Saar <will...@saar.se> wrote:
>
>
> I'm running Flink 1.7 in ECS, is this a known issue or should I
create a jira?
>
> The web console doesn't show anything when trying to list logs
or stdout for task managers and the job manager log have stack
traces for the errors
> 2018-12-19 15:35:53,498 ERROR

org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerStdoutFileHandler
- Failed to transfer file from TaskExecutor
d7fd266047d5acfaddeb1156bdb23ff3.
> java.util.concurrent.CompletionException:
org.apache.flink.util.FlinkException: The file STDOUT is not
available on the TaskExecutor.
>
> 2018-12-19 15:36:02,538 ERROR
org.apache.flink.runtime.rest.handler.taskmanager.TaskManagerLogFileHandler
- Failed to transfer file from TaskExecutor
d7fd266047d5acfaddeb1156bdb23ff3.
> java.util.concurrent.CompletionException:
org.apache.flink.util.FlinkException: The file LOG is not
available on the TaskExecutor.
>
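For reference, the web UI's file handlers serve the file named by the log.file property, which the Flink startup scripts set; a sketch along the lines of Flink's shipped log4j.properties that writes to that file (a Logback setup that only logs to console or to .error leaves it empty, matching the symptom above):

```properties
# Route logs to the file the web UI expects; ${log.file} is provided
# by the Flink scripts at startup.
log4j.rootLogger=INFO, file
log4j.appender.file=org.apache.log4j.FileAppender
log4j.appender.file.file=${log.file}
log4j.appender.file.append=false
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss,SSS} %-5p %-60c %x - %m%n
```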





Re: [ANNOUNCE] Apache Flink 1.5.6 released

2018-12-28 Thread Till Rohrmann
Thanks a lot for being our release manager Thomas. Great work! Also thanks
to the community for making this release possible.

Cheers,
Till

On Thu, Dec 27, 2018 at 2:58 AM Jeff Zhang  wrote:

> Thanks Thomas. It's nice to have a more stable Flink 1.5.x.
>
> vino yang  wrote on Thu, Dec 27, 2018 at 9:43 AM:
>
>> Thomas, thanks for being the release manager,
>> and thanks to the whole community.
>> I think the release of Flink 1.5.6 makes sense for many users who are
>> currently unable to upgrade major versions.
>>
>> Best,
>> Vino
>>
>> jincheng sun  wrote on Thu, Dec 27, 2018 at 8:00 AM:
>>
>>> Thanks a lot for being our release manager, Thomas.
>>> And thanks to everyone who made this release possible!
>>>
>>> Cheers,
>>> Jincheng
>>>
>>> Thomas Weise  wrote on Thu, Dec 27, 2018 at 4:03 AM:
>>>
 The Apache Flink community is very happy to announce the release of
 Apache
 Flink 1.5.6, which is the final bugfix release for the Apache Flink 1.5
 series.

 Apache Flink® is an open-source stream processing framework for
 distributed, high-performing, always-available, and accurate data
 streaming
 applications.

 The release is available for download at:
 https://flink.apache.org/downloads.html

 Please check out the release blog post for an overview of the
 improvements
 for this bugfix release:
 https://flink.apache.org/news/2018/12/22/release-1.5.6.html

 The full release notes are available in Jira:

 https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344315

 We would like to thank all contributors of the Apache Flink community
 who
 made this release possible!

 Regards,
 Thomas

>>>
>
> --
> Best Regards
>
> Jeff Zhang
>