[jira] [Created] (FLINK-25440) Apache Pulsar Connector Document description error about 'Starting Position'.

2021-12-23 Thread xiechenling (Jira)
xiechenling created FLINK-25440:
---

 Summary: Apache Pulsar Connector Document description error about 
'Starting Position'.
 Key: FLINK-25440
 URL: https://issues.apache.org/jira/browse/FLINK-25440
 Project: Flink
  Issue Type: Bug
  Components: Documentation
Affects Versions: 1.14.2
Reporter: xiechenling


The 'Starting Position' description is wrong. The documentation of

StartCursor.fromMessageTime(long)

currently reads:

Start from the specified message time by Message.getEventTime().

It should be: 'Start from the specified message time by publishTime.'
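
A minimal usage sketch of the cursor in question (the service/admin URLs, topic, and 
subscription name below are placeholders):

{code:java}
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.connector.pulsar.source.PulsarSource;
import org.apache.flink.connector.pulsar.source.enumerator.cursor.StartCursor;
import org.apache.flink.connector.pulsar.source.reader.deserializer.PulsarDeserializationSchema;

PulsarSource<String> source =
        PulsarSource.builder()
                .setServiceUrl("pulsar://localhost:6650")
                .setAdminUrl("http://localhost:8080")
                .setTopics("my-topic")
                .setSubscriptionName("my-subscription")
                .setDeserializationSchema(
                        PulsarDeserializationSchema.flinkSchema(new SimpleStringSchema()))
                // seeks by the broker-assigned publish time of the messages,
                // not by Message.getEventTime()
                .setStartCursor(
                        StartCursor.fromMessageTime(System.currentTimeMillis() - 3_600_000L))
                .build();
{code}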



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25439) StreamExecCalc collect new StreamRecord to Downstream lost timestamp Attributes

2021-12-23 Thread HunterHunter (Jira)
HunterHunter created FLINK-25439:


 Summary: StreamExecCalc collect new StreamRecord to Downstream 
lost timestamp Attributes
 Key: FLINK-25439
 URL: https://issues.apache.org/jira/browse/FLINK-25439
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner
Reporter: HunterHunter
 Attachments: image-2021-12-24-10-57-01-479.png

When reading data from a Kafka source, we can see in SourceReaderBase.pollNext 
that the record carries a timestamp attribute, but after StreamExecCalc the 
downstream receives a StreamRecord whose timestamp attribute is missing.
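
A minimal reproduction sketch (a guess at the reported setup; the topic, connector 
options, and table name are placeholders):

{code:java}
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.ProcessFunction;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.types.Row;
import org.apache.flink.util.Collector;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = StreamTableEnvironment.create(env);

tEnv.executeSql(
        "CREATE TABLE src (msg STRING) WITH ("
                + " 'connector' = 'kafka',"
                + " 'topic' = 'input',"
                + " 'properties.bootstrap.servers' = 'localhost:9092',"
                + " 'properties.group.id' = 'repro',"
                + " 'scan.startup.mode' = 'latest-offset',"
                + " 'format' = 'raw')");

// this projection is planned as a StreamExecCalc node
Table projected = tEnv.sqlQuery("SELECT UPPER(msg) FROM src");

tEnv.toDataStream(projected)
        .process(
                new ProcessFunction<Row, String>() {
                    @Override
                    public void processElement(Row row, Context ctx, Collector<String> out) {
                        // expected: the timestamp assigned by the source;
                        // observed per this report: null after StreamExecCalc
                        out.collect("ts=" + ctx.timestamp());
                    }
                })
        .print();

env.execute();
{code}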

 



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25438) KafkaProducerExactlyOnceITCase.testMultipleSinkOperators failed due to topic 'exactlyTopicCustomOperator20' already exists

2021-12-23 Thread Yun Tang (Jira)
Yun Tang created FLINK-25438:


 Summary: KafkaProducerExactlyOnceITCase.testMultipleSinkOperators 
failed due to topic 'exactlyTopicCustomOperator20' already exists
 Key: FLINK-25438
 URL: https://issues.apache.org/jira/browse/FLINK-25438
 Project: Flink
  Issue Type: Bug
  Components: Connectors / Kafka, Tests
Reporter: Yun Tang


Dec 23 13:48:21 [ERROR] Failures: 
Dec 23 13:48:21 [ERROR]   
KafkaProducerExactlyOnceITCase.testMultipleSinkOperators:36->KafkaProducerTestBase.testExactlyOnce:236->KafkaTestBase.createTestTopic:216
 Create test topic : exactlyTopicCustomOperator20 failed, 
org.apache.kafka.common.errors.TopicExistsException: Topic 
'exactlyTopicCustomOperator20' already exists.

instance: 
https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28524=logs=c5f0071e-1851-543e-9a45-9ac140befc32=15a22db7-8faa-5b34-3920-d33c9f0ca23c



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25437) Build wheels failed

2021-12-23 Thread Huang Xingbo (Jira)
Huang Xingbo created FLINK-25437:


 Summary: Build wheels failed
 Key: FLINK-25437
 URL: https://issues.apache.org/jira/browse/FLINK-25437
 Project: Flink
  Issue Type: Bug
  Components: API / Python
Affects Versions: 1.12.8, 1.13.6, 1.14.3
Reporter: Huang Xingbo
Assignee: Huang Xingbo


https://dev.azure.com/apache-flink/apache-flink/_build/results?buildId=28552=logs=33dd8067-7758-552f-a1cf-a8b8ff0e44cd=bf344275-d244-5694-d05a-7ad127794669




--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25436) Allow BlobServer/BlobCache to clean up unused blobs after recovering from working directory

2021-12-23 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-25436:
-

 Summary: Allow BlobServer/BlobCache to clean up unused blobs after 
recovering from working directory
 Key: FLINK-25436
 URL: https://issues.apache.org/jira/browse/FLINK-25436
 Project: Flink
  Issue Type: Sub-task
  Components: Runtime / Coordination
Affects Versions: 1.15.0
Reporter: Till Rohrmann
Assignee: Till Rohrmann
 Fix For: 1.15.0


In order to let the {{BlobServer}} and the {{BlobCache}} properly clean up 
unused blobs that are recovered from the working directory, we have to register 
them for clean up and offer hooks to delete irrelevant job artifacts.

I propose to scan the blobStorage directory at startup and to register the 
expiry timeouts for transient blobs. Moreover, for the {{BlobServer}} we need 
to add a {{retainJobs}} method that deletes all jobs that are not in the given 
list of {{JobIDs}}. Last but not least, we also need to register the permanent 
blobs in the {{PermanentBlobCacheService}} so that they get timed out if they 
are no longer used.
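
A rough sketch of the proposed {{retainJobs}} (the "job_" directory prefix and the 
blobStorageDirectory field are assumptions for illustration; the actual blob storage 
layout may differ):

{code:java}
import java.io.File;
import java.io.IOException;
import java.util.Collection;

import org.apache.flink.api.common.JobID;
import org.apache.flink.util.FileUtils;

/** Deletes the storage directories of all jobs that are NOT in the given collection. */
public void retainJobs(Collection<JobID> jobsToRetain) throws IOException {
    final File[] jobDirs =
            blobStorageDirectory.listFiles(f -> f.getName().startsWith("job_"));
    if (jobDirs == null) {
        return;
    }
    for (File jobDir : jobDirs) {
        final JobID jobId =
                JobID.fromHexString(jobDir.getName().substring("job_".length()));
        if (!jobsToRetain.contains(jobId)) {
            FileUtils.deleteDirectory(jobDir);
        }
    }
}
{code}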



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25435) Can not read jobmanager/taskmanager.log in yarn-per-job mode.

2021-12-23 Thread Moses (Jira)
Moses created FLINK-25435:
-

 Summary: Can not read jobmanager/taskmanager.log in yarn-per-job 
mode.
 Key: FLINK-25435
 URL: https://issues.apache.org/jira/browse/FLINK-25435
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Client
Reporter: Moses


I'm using the SQL Client to submit a job, and using a `SET` statement to 
specify the deployment mode.
{code:sql}
SET execution.target=yarn-per-job;
...
{code}
But I could not find the log files on either the master or the taskmanagers.

I found that `GenericCLI` and  `FlinkYarnSessionCli` will set 
`$internal.deployment.config-dir={configurationDirectory}` in their execution 
configuration.

Should we set this configuration in `DefaultCLI` as well?
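
For illustration, the change might look roughly like this (a sketch only; it assumes 
`DefaultCLI` is handed the configuration directory, e.g. via its constructor, which 
it currently is not):

{code:java}
// inside DefaultCLI (sketch)
@Override
public Configuration toConfiguration(CommandLine commandLine) throws FlinkException {
    final Configuration configuration = super.toConfiguration(commandLine);
    // mirror what GenericCLI and FlinkYarnSessionCli already do, so that the
    // log-related settings from the config dir reach the JM/TM processes
    configuration.set(DeploymentOptionsInternal.CONF_DIR, configurationDirectory);
    return configuration;
}
{code}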



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25434) Throw an error when BigDecimal precision overflows.

2021-12-23 Thread Ada Wong (Jira)
Ada Wong created FLINK-25434:


 Summary: Throw an error when BigDecimal precision overflows.
 Key: FLINK-25434
 URL: https://issues.apache.org/jira/browse/FLINK-25434
 Project: Flink
  Issue Type: Bug
  Components: Table SQL / Planner, Table SQL / Runtime
Affects Versions: 1.14.2
Reporter: Ada Wong


 

A lot of data was lost but no error was thrown.

As the following comment states, if the precision overflows, null will be 
returned.
{code:java}
/** If the precision overflows, null will be returned. */
public static @Nullable DecimalData fromBigDecimal(
        BigDecimal bd, int precision, int scale) {
    bd = bd.setScale(scale, RoundingMode.HALF_UP);
    if (bd.precision() > precision) {
        return null;
    }

    long longVal = -1;
    if (precision <= MAX_COMPACT_PRECISION) {
        longVal = bd.movePointRight(scale).longValueExact();
    }
    return new DecimalData(precision, scale, longVal, bd);
}
{code}
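
A minimal demonstration of the silent loss (DECIMAL(5, 2) can hold at most 999.99, 
so the value below overflows):

{code:java}
import java.math.BigDecimal;

import org.apache.flink.table.data.DecimalData;

DecimalData result = DecimalData.fromBigDecimal(new BigDecimal("12345.68"), 5, 2);
System.out.println(result); // prints "null" -- the value is dropped without any error
{code}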



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25433) Integrate retry strategy for cleanup stage

2021-12-23 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-25433:
-

 Summary: Integrate retry strategy for cleanup stage
 Key: FLINK-25433
 URL: https://issues.apache.org/jira/browse/FLINK-25433
 Project: Flink
  Issue Type: Sub-task
Reporter: Matthias Pohl


The {{ResourceCleaner}} should not attempt the cleanup only once but retry it 
indefinitely.
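
A minimal plain-Java sketch of such an endless retry loop with a fixed delay (names 
are illustrative; Flink's own retry utilities may be a better fit):

{code:java}
import java.time.Duration;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.function.Supplier;

/** Keeps re-triggering the async cleanup until one attempt finally succeeds. */
static CompletableFuture<Void> cleanupWithRetries(
        Supplier<CompletableFuture<Void>> cleanup,
        Duration retryDelay,
        ScheduledExecutorService scheduler) {
    final CompletableFuture<Void> result = new CompletableFuture<>();
    cleanup.get()
            .whenComplete(
                    (ignored, failure) -> {
                        if (failure == null) {
                            result.complete(null);
                        } else {
                            // never give up: schedule the next attempt after the delay
                            scheduler.schedule(
                                    () ->
                                            cleanupWithRetries(cleanup, retryDelay, scheduler)
                                                    .thenRun(() -> result.complete(null)),
                                    retryDelay.toMillis(),
                                    TimeUnit.MILLISECONDS);
                        }
                    });
    return result;
}
{code}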



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25432) Implement cleanup strategy

2021-12-23 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-25432:
-

 Summary: Implement cleanup strategy
 Key: FLINK-25432
 URL: https://issues.apache.org/jira/browse/FLINK-25432
 Project: Flink
  Issue Type: Sub-task
Reporter: Matthias Pohl


We want to combine the job-specific cleanup of the different resources and 
provide a common {{ResourceCleaner}} taking care of the actual cleanup of all 
resources.

This needs to be integrated into the {{Dispatcher}}.
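
A rough sketch of the shape this could take (all names are illustrative, not the 
final API):

{code:java}
import java.util.List;
import java.util.concurrent.CompletableFuture;

import org.apache.flink.api.common.JobID;

/** One hook per resource type (HA data, blobs, job result entry, ...). */
interface CleanableResource {
    CompletableFuture<Void> cleanupAsync(JobID jobId);
}

/** Fans the job-specific cleanup out to all registered resources. */
final class ResourceCleaner {
    private final List<CleanableResource> resources;

    ResourceCleaner(List<CleanableResource> resources) {
        this.resources = resources;
    }

    CompletableFuture<Void> cleanupJob(JobID jobId) {
        return CompletableFuture.allOf(
                resources.stream()
                        .map(resource -> resource.cleanupAsync(jobId))
                        .toArray(CompletableFuture[]::new));
    }
}
{code}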



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25431) Implement file-based JobResultStore

2021-12-23 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-25431:
-

 Summary: Implement file-based JobResultStore
 Key: FLINK-25431
 URL: https://issues.apache.org/jira/browse/FLINK-25431
 Project: Flink
  Issue Type: Sub-task
Reporter: Matthias Pohl


The implementation should comply with what's described in 
[FLIP-194|https://cwiki.apache.org/confluence/display/FLINK/FLIP-194%3A+Introduce+the+JobResultStore]
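
One possible file layout following FLIP-194's dirty/clean life cycle (a sketch; the 
file naming and the serialized representation are assumptions, not the final design):

{code:java}
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

import org.apache.flink.api.common.JobID;

final class FileSystemJobResultStoreSketch {
    private final Path storeDirectory;

    FileSystemJobResultStoreSketch(Path storeDirectory) {
        this.storeDirectory = storeDirectory;
    }

    private Path dirtyFile(JobID jobId) {
        return storeDirectory.resolve(jobId + ".json.dirty");
    }

    private Path cleanFile(JobID jobId) {
        return storeDirectory.resolve(jobId + ".json");
    }

    /** Persist the result before any cleanup of the job starts. */
    void createDirtyResult(JobID jobId, byte[] serializedJobResult) throws IOException {
        Files.write(dirtyFile(jobId), serializedJobResult);
    }

    /** Rename the entry once all resources of the job have been cleaned up. */
    void markResultAsClean(JobID jobId) throws IOException {
        Files.move(dirtyFile(jobId), cleanFile(jobId));
    }

    boolean hasJobResultEntry(JobID jobId) {
        return Files.exists(dirtyFile(jobId)) || Files.exists(cleanFile(jobId));
    }
}
{code}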



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25430) Introduce JobResultStore

2021-12-23 Thread Matthias Pohl (Jira)
Matthias Pohl created FLINK-25430:
-

 Summary: Introduce JobResultStore
 Key: FLINK-25430
 URL: https://issues.apache.org/jira/browse/FLINK-25430
 Project: Flink
  Issue Type: Sub-task
Reporter: Matthias Pohl


This issue includes introducing the interface and coming up with an in-memory 
implementation of it that should be integrated into the {{Dispatcher}}.
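
Sketched shape of the interface and the in-memory variant (method names follow 
FLIP-194's dirty/clean terminology but are assumptions):

{code:java}
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

import org.apache.flink.api.common.JobID;
import org.apache.flink.runtime.jobmaster.JobResult;

interface JobResultStore {
    void createDirtyResult(JobResult jobResult);

    void markResultAsClean(JobID jobId);

    boolean hasJobResultEntry(JobID jobId);

    Set<JobResult> getDirtyResults();
}

final class InMemoryJobResultStore implements JobResultStore {
    private final Map<JobID, JobResult> dirty = new ConcurrentHashMap<>();
    private final Map<JobID, JobResult> clean = new ConcurrentHashMap<>();

    @Override
    public void createDirtyResult(JobResult jobResult) {
        dirty.put(jobResult.getJobId(), jobResult);
    }

    @Override
    public void markResultAsClean(JobID jobId) {
        final JobResult result = dirty.remove(jobId);
        if (result != null) {
            clean.put(jobId, result);
        }
    }

    @Override
    public boolean hasJobResultEntry(JobID jobId) {
        return dirty.containsKey(jobId) || clean.containsKey(jobId);
    }

    @Override
    public Set<JobResult> getDirtyResults() {
        return new HashSet<>(dirty.values());
    }
}
{code}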



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25429) Avoid to close output streams twice during uploading changelogs

2021-12-23 Thread Yun Tang (Jira)
Yun Tang created FLINK-25429:


 Summary: Avoid to close output streams twice during uploading 
changelogs
 Key: FLINK-25429
 URL: https://issues.apache.org/jira/browse/FLINK-25429
 Project: Flink
  Issue Type: Improvement
  Components: Runtime / State Backends
Reporter: Yun Tang
Assignee: Yun Tang


The current uploader implementation closes {{stream}} and {{fsStream}} one by 
one, which leads to {{fsStream}} being closed twice.

{code:java}
try (FSDataOutputStream fsStream = fileSystem.create(path, NO_OVERWRITE)) {
    fsStream.write(compression ? 1 : 0);
    try (OutputStreamWithPos stream = wrap(fsStream)) {
        final Map<UploadTask, Map<StateChangeSet, Long>> tasksOffsets = new HashMap<>();
        for (UploadTask task : tasks) {
            tasksOffsets.put(task, format.write(stream, task.changeSets));
        }
        FileStateHandle handle = new FileStateHandle(path, stream.getPos());
        // WARN: streams have to be closed before returning the results
        // otherwise JM may receive invalid handles
        return new LocalResult(tasksOffsets, handle);
    }
}
{code}

Not all file systems support closing the same stream twice.
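
A sketch of one possible fix (assuming {{wrap()}} forwards {{close()}} to the wrapped 
stream): only the outermost stream is managed by try-with-resources, so each stream 
is closed exactly once.

{code:java}
final FSDataOutputStream fsStream = fileSystem.create(path, NO_OVERWRITE);
try (OutputStreamWithPos stream = wrap(fsStream)) {
    fsStream.write(compression ? 1 : 0);
    final Map<UploadTask, Map<StateChangeSet, Long>> tasksOffsets = new HashMap<>();
    for (UploadTask task : tasks) {
        tasksOffsets.put(task, format.write(stream, task.changeSets));
    }
    final FileStateHandle handle = new FileStateHandle(path, stream.getPos());
    // streams are still closed before the results are returned,
    // so the JM never receives invalid handles
    return new LocalResult(tasksOffsets, handle);
}
{code}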



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


Re: [ANNOUNCE] Apache Flink Stateful Functions 3.1.1 released

2021-12-23 Thread Till Rohrmann
Thanks a lot for being our release manager and swiftly addressing the log4j
CVE Igal!

Cheers,
Till

On Wed, Dec 22, 2021 at 5:41 PM Igal Shilman  wrote:

> The Apache Flink community is very happy to announce the release of Apache
> Flink Stateful Functions (StateFun) 3.1.1.
>
> This is a bugfix release that addresses the recent log4j vulnerabilities;
> users are encouraged to upgrade.
>
> StateFun is a cross-platform stack for building Stateful Serverless
> applications, making it radically simpler to develop scalable, consistent,
> and elastic distributed applications.
>
> Please check out the release blog post for an overview of the release:
> https://flink.apache.org/news/2021/12/22/log4j-statefun-release.html
>
> The release is available for download at:
> https://flink.apache.org/downloads.html
>
> Maven artifacts for StateFun can be found at:
> https://search.maven.org/search?q=g:org.apache.flink%20statefun
>
> Python SDK for StateFun published to the PyPI index can be found at:
> https://pypi.org/project/apache-flink-statefun/
>
> Official Docker images for StateFun are published to Docker Hub:
> https://hub.docker.com/r/apache/flink-statefun
>
> The full release notes are available in Jira:
>
> https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12351096==12315522
>
> We would like to thank all contributors of the Apache Flink community who
> made this release possible!
>
> Thanks!
>


Re: [DISCUSS] Slimmed down docker images.

2021-12-23 Thread Till Rohrmann
Hi David,

Thanks for starting this discussion. I like the idea of providing smaller
images that can be used by more advanced users that don't need everything.
Having smaller image sizes can be really helpful when the image has to be
pulled (with your changes, this time should decrease roughly by a factor of
2) in case of failure recovery or rescaling events.

Concerning the added release complexity, can we automate the process of
generating the slim and full images? If this is the case, then I don't see
a big problem in having to release multiple images.

Cheers,
Till

On Wed, Dec 22, 2021 at 2:43 PM David Morávek  wrote:

> Hi,
>
> I did some quick prototyping on the slimmed down docker images, and I was
> able to cut the docker image size by ~40% with a minimum effort [1] (using
> a multi-stage build + trimming examples / opt + using slimmed down JRE
> image).
>
> I think this might be a low hanging fruit for reducing MTTR in some
> scenarios, but I'd like to hear other opinions on the topic.
>
> Pushing this forward would require us to release twice as many images as we
> do now (the current images + slimmed down versions).
>
> Using /opt dependencies + /examples would look something like this from a
> user perspective:
>
> FROM flink:1.14.0-scala_2.12-slim
> COPY --from=flink:1.14.0-scala_2.12
> /opt/flink/opt/flink-s3-fs-presto-1.15-SNAPSHOT.jar /opt/flink/plugins/s3
> COPY --from=flink:1.14.0-scala_2.12
> /opt/flink/examples/streaming/TopSpeedWindowing.jar /opt/flink/usrlib
>
> Size of the 1.15 images (java 11):
>
> ~/Workspace/apache/flink-docker> docker images | grep flink | grep 1.15
> flink   1.15        e96a3a3eaab2   15 minutes ago   702MB
> flink   1.15-slim   e417b7665522   17 minutes ago   427MB
>
>
> Do you see a benefit in this effort? Should image size / distribution size
> be a concern with modern deployments [2]?
>
> [1]
>
> https://github.com/dmvk/flink-docker/commit/f866b3e57eacd0e6534b80fd0a1618cb30bbb36a
> [2]
>
> https://cloud.google.com/blog/products/containers-kubernetes/kubernetes-best-practices-how-and-why-to-build-small-container-images
>
> Best,
> D.
>


Re: [DISCUSS] Changing the minimal supported version of Hadoop

2021-12-23 Thread Till Rohrmann
If there are no users strongly objecting to dropping Hadoop support for <
2.8, then I am +1 for this since otherwise we won't gain a lot as Xintong
said.

Cheers,
Till

On Wed, Dec 22, 2021 at 10:33 AM David Morávek  wrote:

> Agreed, if we drop the CI for lower versions, there is actually no point
> in having safeguards as we can't really test for them.
>
> Maybe one more thought (it's more of a feeling), I feel that users running
> really old Hadoop versions are usually slower to adopt (they most likely
> use what the current HDP / CDH version they use offers) and they are less
> likely to use Flink 1.15 any time soon, but I don't have any strong data to
> support this.
>
> D.
>


[jira] [Created] (FLINK-25427) SavepointITCase.testTriggerSavepointAndResumeWithNoClaim fails on AZP

2021-12-23 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-25427:
-

 Summary: SavepointITCase.testTriggerSavepointAndResumeWithNoClaim 
fails on AZP
 Key: FLINK-25427
 URL: https://issues.apache.org/jira/browse/FLINK-25427
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.15.0
Reporter: Till Rohrmann
 Fix For: 1.15.0


The test {{SavepointITCase.testTriggerSavepointAndResumeWithNoClaim}} fails on 
AZP with

{code}
2021-12-23T03:10:26.4240179Z Dec 23 03:10:26 [ERROR] 
org.apache.flink.test.checkpointing.SavepointITCase.testTriggerSavepointAndResumeWithNoClaim
  Time elapsed: 62.289 s  <<< ERROR!
2021-12-23T03:10:26.4240998Z Dec 23 03:10:26 
java.util.concurrent.TimeoutException: Condition was not met in given timeout.
2021-12-23T03:10:26.4241716Z Dec 23 03:10:26at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:166)
2021-12-23T03:10:26.4242643Z Dec 23 03:10:26at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:144)
2021-12-23T03:10:26.4243295Z Dec 23 03:10:26at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitUntilCondition(CommonTestUtils.java:136)
2021-12-23T03:10:26.4244433Z Dec 23 03:10:26at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning(CommonTestUtils.java:210)
2021-12-23T03:10:26.4245166Z Dec 23 03:10:26at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning(CommonTestUtils.java:184)
2021-12-23T03:10:26.4245830Z Dec 23 03:10:26at 
org.apache.flink.runtime.testutils.CommonTestUtils.waitForAllTaskRunning(CommonTestUtils.java:172)
2021-12-23T03:10:26.4246870Z Dec 23 03:10:26at 
org.apache.flink.test.checkpointing.SavepointITCase.testTriggerSavepointAndResumeWithNoClaim(SavepointITCase.java:446)
2021-12-23T03:10:26.4247813Z Dec 23 03:10:26at 
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
2021-12-23T03:10:26.4248808Z Dec 23 03:10:26at 
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
2021-12-23T03:10:26.4249426Z Dec 23 03:10:26at 
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-12-23T03:10:26.4250192Z Dec 23 03:10:26at 
java.lang.reflect.Method.invoke(Method.java:498)
2021-12-23T03:10:26.4251196Z Dec 23 03:10:26at 
org.junit.runners.model.FrameworkMethod$1.runReflectiveCall(FrameworkMethod.java:59)
2021-12-23T03:10:26.4252160Z Dec 23 03:10:26at 
org.junit.internal.runners.model.ReflectiveCallable.run(ReflectiveCallable.java:12)
2021-12-23T03:10:26.4252888Z Dec 23 03:10:26at 
org.junit.runners.model.FrameworkMethod.invokeExplosively(FrameworkMethod.java:56)
2021-12-23T03:10:26.4253547Z Dec 23 03:10:26at 
org.junit.internal.runners.statements.InvokeMethod.evaluate(InvokeMethod.java:17)
2021-12-23T03:10:26.4254142Z Dec 23 03:10:26at 
org.junit.internal.runners.statements.RunBefores.evaluate(RunBefores.java:26)
2021-12-23T03:10:26.4254932Z Dec 23 03:10:26at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
2021-12-23T03:10:26.4255513Z Dec 23 03:10:26at 
org.junit.rules.ExternalResource$1.evaluate(ExternalResource.java:54)
2021-12-23T03:10:26.4256091Z Dec 23 03:10:26at 
org.apache.flink.util.TestNameProvider$1.evaluate(TestNameProvider.java:45)
2021-12-23T03:10:26.4256636Z Dec 23 03:10:26at 
org.junit.rules.TestWatcher$1.evaluate(TestWatcher.java:61)
2021-12-23T03:10:26.4257165Z Dec 23 03:10:26at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2021-12-23T03:10:26.4257744Z Dec 23 03:10:26at 
org.junit.runners.BlockJUnit4ClassRunner$1.evaluate(BlockJUnit4ClassRunner.java:100)
2021-12-23T03:10:26.4258312Z Dec 23 03:10:26at 
org.junit.runners.ParentRunner.runLeaf(ParentRunner.java:366)
2021-12-23T03:10:26.4258884Z Dec 23 03:10:26at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:103)
2021-12-23T03:10:26.4259488Z Dec 23 03:10:26at 
org.junit.runners.BlockJUnit4ClassRunner.runChild(BlockJUnit4ClassRunner.java:63)
2021-12-23T03:10:26.4260049Z Dec 23 03:10:26at 
org.junit.runners.ParentRunner$4.run(ParentRunner.java:331)
2021-12-23T03:10:26.4260579Z Dec 23 03:10:26at 
org.junit.runners.ParentRunner$1.schedule(ParentRunner.java:79)
2021-12-23T03:10:26.4261108Z Dec 23 03:10:26at 
org.junit.runners.ParentRunner.runChildren(ParentRunner.java:329)
2021-12-23T03:10:26.4261648Z Dec 23 03:10:26at 
org.junit.runners.ParentRunner.access$100(ParentRunner.java:66)
2021-12-23T03:10:26.4262183Z Dec 23 03:10:26at 
org.junit.runners.ParentRunner$2.evaluate(ParentRunner.java:293)
2021-12-23T03:10:26.4262794Z Dec 23 03:10:26at 
org.junit.runners.ParentRunner$3.evaluate(ParentRunner.java:306)
2021-12-23T03:10:26.4263312Z Dec 23 03:10:26at 

[jira] [Created] (FLINK-25428) Expose complex types CAST to String

2021-12-23 Thread Francesco Guardiani (Jira)
Francesco Guardiani created FLINK-25428:
---

 Summary: Expose complex types CAST to String
 Key: FLINK-25428
 URL: https://issues.apache.org/jira/browse/FLINK-25428
 Project: Flink
  Issue Type: Sub-task
  Components: Table SQL / Planner
Reporter: Francesco Guardiani
 Attachments: cast_function_it_case.patch, logical_type_casts.patch

Right now we have casting rules from collection, structured and raw types to 
string; that is, we have logic to stringify the following types:

* ARRAY
* MAP
* MULTISET
* ROW
* STRUCTURED
* RAW

Unfortunately these don't work, for different reasons, notably:

* We need to support these combinations in {{LogicalTypeCasts}} (check the 
attached patch)
* For some of them Calcite applies its casting validation logic and marks them 
as invalid
* For MULTISET and STRUCTURED, there are issues specific to the Table API and 
its expression stack, which cannot correctly convert the values to literals

You can check all these errors by applying the attached patch to the CAST 
function IT cases.

We need to fix these issues, so users can use SQL and Table API to cast these 
values to string.
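
For illustration, these are the kinds of calls that should work once the issues are 
fixed (assuming a table {{t}} with an ARRAY column {{arr}} registered on {{tEnv}}; 
both currently fail per the above):

{code:java}
import static org.apache.flink.table.api.Expressions.$;

import org.apache.flink.table.api.DataTypes;
import org.apache.flink.table.api.Table;

// SQL
Table viaSql = tEnv.sqlQuery("SELECT CAST(arr AS STRING) FROM t");

// Table API
Table viaTableApi = tEnv.from("t").select($("arr").cast(DataTypes.STRING()));
{code}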



--
This message was sent by Atlassian Jira
(v8.20.1#820001)


[jira] [Created] (FLINK-25426) UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint fails on AZP because it cannot allocate enough network buffers

2021-12-23 Thread Till Rohrmann (Jira)
Till Rohrmann created FLINK-25426:
-

 Summary: 
UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint fails on AZP 
because it cannot allocate enough network buffers
 Key: FLINK-25426
 URL: https://issues.apache.org/jira/browse/FLINK-25426
 Project: Flink
  Issue Type: Bug
  Components: Runtime / Checkpointing
Affects Versions: 1.15.0
Reporter: Till Rohrmann
 Fix For: 1.15.0


The test {{UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint}} 
fails with

{code}
2021-12-23T02:54:46.2862342Z Dec 23 02:54:46 [ERROR] 
UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint  Time 
elapsed: 2.992 s  <<< ERROR!
2021-12-23T02:54:46.2865774Z Dec 23 02:54:46 java.lang.OutOfMemoryError: Could 
not allocate enough memory segments for NetworkBufferPool (required (Mb): 64, 
allocated (Mb): 14, missing (Mb): 50). Cause: Direct buffer memory. The direct 
out-of-memory error has occurred. This can mean two things: either job(s) 
require(s) a larger size of JVM direct memory or there is a direct memory leak. 
The direct memory can be allocated by user code or some of its dependencies. In 
this case 'taskmanager.memory.task.off-heap.size' configuration option should 
be increased. Flink framework and its dependencies also consume the direct 
memory, mostly for network communication. The most of network memory is managed 
by Flink and should not result in out-of-memory error. In certain special 
cases, in particular for jobs with high parallelism, the framework may require 
more direct memory which is not managed by Flink. In this case 
'taskmanager.memory.framework.off-heap.size' configuration option should be 
increased. If the error persists then there is probably a direct memory leak in 
user code or some of its dependencies which has to be investigated and fixed. 
The task executor has to be shutdown...
2021-12-23T02:54:46.2868239Z Dec 23 02:54:46at 
org.apache.flink.runtime.io.network.buffer.NetworkBufferPool.(NetworkBufferPool.java:138)
2021-12-23T02:54:46.2868975Z Dec 23 02:54:46at 
org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createNettyShuffleEnvironment(NettyShuffleServiceFactory.java:140)
2021-12-23T02:54:46.2869771Z Dec 23 02:54:46at 
org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createNettyShuffleEnvironment(NettyShuffleServiceFactory.java:94)
2021-12-23T02:54:46.2870550Z Dec 23 02:54:46at 
org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createShuffleEnvironment(NettyShuffleServiceFactory.java:79)
2021-12-23T02:54:46.2871312Z Dec 23 02:54:46at 
org.apache.flink.runtime.io.network.NettyShuffleServiceFactory.createShuffleEnvironment(NettyShuffleServiceFactory.java:58)
2021-12-23T02:54:46.2872062Z Dec 23 02:54:46at 
org.apache.flink.runtime.taskexecutor.TaskManagerServices.createShuffleEnvironment(TaskManagerServices.java:414)
2021-12-23T02:54:46.2872767Z Dec 23 02:54:46at 
org.apache.flink.runtime.taskexecutor.TaskManagerServices.fromConfiguration(TaskManagerServices.java:282)
2021-12-23T02:54:46.2873436Z Dec 23 02:54:46at 
org.apache.flink.runtime.taskexecutor.TaskManagerRunner.startTaskManager(TaskManagerRunner.java:523)
2021-12-23T02:54:46.2877615Z Dec 23 02:54:46at 
org.apache.flink.runtime.minicluster.MiniCluster.startTaskManager(MiniCluster.java:645)
2021-12-23T02:54:46.2878247Z Dec 23 02:54:46at 
org.apache.flink.runtime.minicluster.MiniCluster.startTaskManagers(MiniCluster.java:626)
2021-12-23T02:54:46.2878856Z Dec 23 02:54:46at 
org.apache.flink.runtime.minicluster.MiniCluster.start(MiniCluster.java:379)
2021-12-23T02:54:46.2879487Z Dec 23 02:54:46at 
org.apache.flink.runtime.testutils.MiniClusterResource.startMiniCluster(MiniClusterResource.java:209)
2021-12-23T02:54:46.2880152Z Dec 23 02:54:46at 
org.apache.flink.runtime.testutils.MiniClusterResource.before(MiniClusterResource.java:95)
2021-12-23T02:54:46.2880821Z Dec 23 02:54:46at 
org.apache.flink.test.util.MiniClusterWithClientResource.before(MiniClusterWithClientResource.java:64)
2021-12-23T02:54:46.2881519Z Dec 23 02:54:46at 
org.apache.flink.test.checkpointing.UnalignedCheckpointTestBase.execute(UnalignedCheckpointTestBase.java:151)
2021-12-23T02:54:46.2882310Z Dec 23 02:54:46at 
org.apache.flink.test.checkpointing.UnalignedCheckpointRescaleITCase.shouldRescaleUnalignedCheckpoint(UnalignedCheckpointRescaleITCase.java:534)
2021-12-23T02:54:46.2882978Z Dec 23 02:54:46at 
jdk.internal.reflect.GeneratedMethodAccessor123.invoke(Unknown Source)
2021-12-23T02:54:46.2883574Z Dec 23 02:54:46at 
java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
2021-12-23T02:54:46.2884171Z Dec 23 02:54:46at 
java.base/java.lang.reflect.Method.invoke(Method.java:566)
2021-12-23T02:54:46.2884732Z Dec 23 02:54:46at 

[jira] [Created] (FLINK-25425) flink sql cassandra connector

2021-12-23 Thread sky (Jira)
sky created FLINK-25425:
---

 Summary: flink sql  cassandra connector
 Key: FLINK-25425
 URL: https://issues.apache.org/jira/browse/FLINK-25425
 Project: Flink
  Issue Type: New Feature
  Components: Connectors / Cassandra
Affects Versions: 1.15.0
Reporter: sky


1. Cassandra is easier to operate and maintain than HBase.

2. Cassandra has higher performance than HBase.

3. Cassandra is cheaper to learn, because the Cassandra project has been 
rebooted.

So I hope the community can provide a SQL-based Cassandra connector.



--
This message was sent by Atlassian Jira
(v8.20.1#820001)