Hello All,
I have a job which fails, let's say, every 14 days with an IOException,
failed to fetch blob.
I submitted the job from the command line using java -jar. Below is the
exception I'm getting:
java.io.IOException: Failed to fetch BLOB
Below is one line of my source; the body contains the user logs:
{
  "body": [
    "user1,url1,2018-10-23 00:00:00;user2,url2,2018-10-23 00:01:00;user3,url3,2018-10-23 00:02:00"
  ]
}
I use LATERAL TABLE and a user-defined TableFunction to flat-map the source
into a new table, log, and I
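The flat-map such a TableFunction performs on that body string can be sketched in plain Java (a hypothetical SplitLogBody helper for illustration, not the actual Flink UDTF; a real TableFunction's eval method would collect one row per record the same way):

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the splitting a user-defined TableFunction would do:
// the body string holds records separated by ';', each record holding
// user, url and timestamp separated by ','.
public class SplitLogBody {
    public static List<String[]> split(String body) {
        List<String[]> rows = new ArrayList<>();
        for (String record : body.split(";")) {
            // "user1,url1,2018-10-23 00:00:00" -> ["user1", "url1", "2018-10-23 00:00:00"]
            rows.add(record.split(",", 3));
        }
        return rows;
    }
}
```

The limit of 3 on the inner split keeps any further commas inside the timestamp field from producing extra columns.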
Thanks Fabian for the clarification!
Best regards,
Chengzhi
On Mon, Oct 22, 2018 at 5:19 PM Fabian Hueske wrote:
> Hi Chengzhi,
>
> Broadcast State is checkpointed like any other state and will be restored
> in all failure cases (including the ones you mentioned).
> We added the warning to
A job has two SQL queries:
the source is Kafka
the sink is Redis or another sink
A-sql
B-sql
Now start only A-sql and stop B-sql; the sink has the key 656.19.173.34.
Then stop A-sql with a savepoint to HDFS, and delete the key 656.19.173.34 (if
the sink is Kafka, don't delete it).
Start B-sql from the savepoint.
You will find the sink still has the key.
I have, with no luck. I wonder though - do I need to load it only in the map
function? I tried to add it in the open method of the sink function and the
process function I have there too, since they still use the type... still
no good. Is there a way of knowing which
Hi shkob
> i tried to use getRuntimeContext().getUserCodeClassLoader() as the loader
to use for Byte Buddy - but it doesn't seem to be enough.
From the log, it seems that the user class cannot be found in the
classloader.
> Cannot load user class: commodel.MyGeneratedClass
Have you ever tried
Hi Julien,
First of all, if you only run streaming jobs you do not need to worry about
"managed" memory.
Regardless of the state backend that you use, you should remove state that
you don't need anymore. Otherwise, Flink will keep (and checkpoint) the
state forever.
There is no automatic garbage
Hey,
I'm trying to run a job which uses a dynamically generated class (through
Byte Buddy).
Think of me having a complex schema as YAML text and generating a class from
it. Throughout the job I am using an artificial superclass (MySuperClass)
of the generated class (as, for example, I need to
Hi Chengzhi,
Broadcast State is checkpointed like any other state and will be restored
in all failure cases (including the ones you mentioned).
We added the warning to inform users that Broadcast state will also be
stored in the JVM memory, even if the RocksDB StateBackend was configured
(which
I am using flink 1.5.3
From: Yan Zhou [FDS Science]
Sent: Monday, October 22, 2018 11:26
To: user@flink.apache.org
Subject: checkpoint/taskmanager is stuck, deadlock on LocalBufferPool
Hi,
My application suddenly got stuck and completely stopped moving forward after
Hi,
My application suddenly got stuck and completely stopped moving forward after
running for a few days. No exceptions are found. From the thread dump, I can
see that the operator threads and checkpoint threads deadlock on
LocalBufferPool. LocalBufferPool is not able to request memory and keep the
Have you tried "t2" instead of "test.t2"? There is a possibility that the
catalog name isn't part of the table name in the Table API.
Thanks,
Xuefu
--
Sender: Flavio Pompermaier
Sent at: 2018 Oct 22 (Mon) 23:06
Recipient: user
Hey folks,
We are trying to use the broadcast state as "Shared Rule" state to filter
test data in our stream pipeline, the broadcast will be connected with
other streams in the pipeline.
I noticed on the broadcast_state [1] important-considerations page it is
mentioned *No RocksDB state backend* and
Hi Olga,
There is an open PR [1] that has some in-progress work on a corresponding
AvroSerializationSchema; you can have a look at it. The bigger issue there
is that SerializationSchema does not have access to the event's key, so
using a topic pattern might be problematic.
Best,
Dawid
[1]
Hi to all,
I've tried to register an external catalog and use it with the Table API in
Flink 1.6.1.
The following (Java) test job cannot write to a sink using insertInto
because Flink cannot find the table by id (test.t2). Am I doing something
wrong or is this a bug?
This is my Java test class:
Hi Olga,
Sorry for the late reply.
I think that Gordon (cc’ed) could be able to answer your question.
Cheers,
Kostas
> On Oct 13, 2018, at 3:10 PM, Olga Luganska wrote:
>
> Any suggestions?
>
> Thank you
>
> Sent from my iPhone
>
> On Oct 9, 2018, at 9:28 PM, Olga Luganska
Hi Anand,
Did the suggestion solve your issue?
Essentially, when you cancel with a savepoint, as Congxian suggested, you
stop emitting checkpoints, but data keeps flowing from the source to the
sink. So if you do not set the producer to exactly-once, you will almost
certainly end up with
Hi Szymon,
Dominik is right. The “name” refers to the whole state described by the
specified descriptor.
Kostas
> On Oct 13, 2018, at 10:30 AM, Dominik Wosiński wrote:
>
> Hey,
> It's the name for the whole descriptor, not the keys. It means that no other
> descriptor should be created with
I think that's because you declared it as a transient field.
Move the declaration inside the "open" function to resolve that.
On Mon, Oct 22, 2018 at 3:48 PM Ahmad Hassan wrote:
> 2018-10-22 13:46:31,944 INFO org.apache.flink.runtime.taskmanager.Task
> -
2018-10-22 13:46:31,944 INFO org.apache.flink.runtime.taskmanager.Task
- Window(SlidingProcessingTimeWindows(18, 18),
TimeTrigger, MetricWindowFunction) -> Map -> Sink: Unnamed (1/1)
(5677190a0d292df3ad8f3521519cd980) switched from RUNNING to FAILED.
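The transient-field point above can be illustrated with plain Java serialization (a generic sketch, not Flink code): Flink ships function objects to the task managers via Java serialization, and a transient field is silently dropped in transit, so it comes back null; initializing it in open() is the usual fix.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Demonstrates why a transient field is null after deserialization:
// transient fields are not written out, and field initializers do not
// run again when a Serializable object is deserialized.
public class TransientDemo implements Serializable {
    transient StringBuilder helper = new StringBuilder("ready");

    public static TransientDemo roundTrip(TransientDemo in) throws Exception {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(in);
        oos.flush();
        ObjectInputStream ois =
            new ObjectInputStream(new ByteArrayInputStream(bos.toByteArray()));
        return (TransientDemo) ois.readObject();
    }
}
```

After the round trip the helper field is null, which is exactly what happens to a transient field of a function object once it reaches a task manager.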
I think it's related to this issue:
https://issues.apache.org/jira/browse/FLINK-10011
On Mon, Oct 22, 2018 at 1:52 PM Kumar Bolar, Harshith
wrote:
> Hi all,
>
>
>
> We run Flink on a five node cluster – three task managers, two job
> managers. One of the job manager running on flink2-0 node
Hi all,
We run Flink on a five node cluster – three task managers, two job managers.
One of the job managers, running on the flink2-0 node, is down and refuses to come
back up, so the cluster is currently running with a single job manager. When I
restart the service, I see this in the logs. Any idea
Hi Chris,
FileInputFormat automatically takes care of file decompression for the
files with gzip, xz, bz2 and deflate extensions.
--
Thanks,
Amit
Source:
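Under the hood that kind of decompression is ordinary stream wrapping, which the JDK already provides for gzip; a stdlib-only sketch of the gzip case (illustrative, not Flink's actual code path):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Round-trips a string through gzip, mirroring how a gzip-compressed
// input file is read by wrapping the raw stream in a GZIPInputStream.
public class GzipRoundTrip {
    public static byte[] gzip(String text) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8));
        }
        return bos.toByteArray();
    }

    public static String gunzip(byte[] compressed) throws IOException {
        try (GZIPInputStream gz =
                 new GZIPInputStream(new ByteArrayInputStream(compressed))) {
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return new String(out.toByteArray(), StandardCharsets.UTF_8);
        }
    }
}
```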
Hi Ning,
I don't think it is possible to pause a Kafka source upon taking a
savepoint without making any changes to the implementation.
I think your problem is that the Cassandra sink doesn't support exactly-once
guarantees when the Cassandra query isn't idempotent. If possible, the
cleanest
I'm able to read normal txt or csv files using Flink,
but what would I need to do in order to read them if they
are given to me in zip or gzip format? Assuming I do not want
to have to unzip them.
Thanks!
--
Sent from: http://apache-flink-user-mailing-list-archive.2336050.n4.nabble.com/
Hi:
According to the code comment, in the Scatter-Gather Iteration,
Gather.updateVertex is invoked once per vertex per superstep. But in my
program, I find GatherFunction.updateVertex was invoked more than once.
Is this a problem with the code comment, or a bug in Gelly? My Flink
version is 1.5.3.
Does
Hi Anil,
When we trigger a savepoint, the JobManager's CheckpointCoordinator will
send an Akka trigger message to all source tasks,
which will generate a barrier for the savepoint (checkpoint).
I don't know if this explanation is clear enough.
Thanks, vino.
Anil wrote on Mon, Oct 22, 2018 at 2:21 PM:
I think I didn't make myself clear. Sorry.
What I want to know is, when we trigger the savepoint, which checkpoint
barrier it will consider to trigger the savepoint.
Hi Anil,
When you manually trigger a savepoint, it is clear that it will wait for
the savepoint to complete.
Of course, the behavior of savepoints is consistent with checkpoints.
Thanks, vino.
Anil wrote on Mon, Oct 22, 2018 at 1:16 PM:
> A checkpoint is completed when the nth checkpoint barrier crosses