Delayed Window Trigger

2023-10-27 Thread Kenan Kılıçtepe
Is it possible to trigger a window without changing window-start and
window-end dates?

I have a lot of jobs running with tumbling windows (3 h), and when they are all
triggered at the same time it causes performance problems. If I could somehow
delay some of them by 10-15 minutes, without changing the original data in the
window or the window-start and window-end dates, it would be great.

Thanks
Kenan


Re: Which Flink engine versions do Connectors support?

2023-10-27 Thread Tzu-Li (Gordon) Tai
Hi Xianxun,

You can find the list of supported Flink versions for each connector here:
https://flink.apache.org/downloads/#apache-flink-connectors

Specifically for the Kafka connector, we're in the process of releasing a
new version for the connector that works with Flink 1.18.
The release candidate vote thread is here if you want to test that out:
https://lists.apache.org/thread/35gjflv4j2pp2h9oy5syj2vdfpotg486

Thanks,
Gordon


On Fri, Oct 27, 2023 at 12:57 PM Xianxun Ye  wrote:

> 
> Hello Team,
>
> After the release of Flink 1.18, I found that most connectors had been
> externalized, e.g. the Kafka, ES, HBase, JDBC, and Pulsar connectors. But I
> didn't find any manual or code indicating which versions of Flink these
> connectors work with.
>
>
> Best regards,
> Xianxun
>
>


Re: Enhancing File Processing and Kafka Integration with Flink Jobs

2023-10-27 Thread Alexander Fedulov
> I wonder if you could use this fact to query the committed checkpoints
> and move them away after the job is done.

This is not a robust solution; I would advise against it.

Best,
Alexander

On Fri, 27 Oct 2023 at 16:41, Andrew Otto  wrote:

> For moving the files:
> > It will keep the files as is and remember the name of the file read in
> > checkpointed state to ensure it doesn't read the same file twice.
>
> I wonder if you could use this fact to query the committed checkpoints and
> move them away after the job is done.  I think it should even be safe to do
> this outside of the Flink job periodically (cron, whatever), because on
> restart it won't reprocess the files that have been committed in the
> checkpoints.
>
>
> https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/libs/state_processor_api/#reading-state
>
>
>
>
> On Fri, Oct 27, 2023 at 1:13 AM arjun s  wrote:
>
>> Hi team, Thanks for your quick response.
>> I have an inquiry regarding file processing in the event of a job
>> restart. When the job is restarted, we encounter challenges in tracking
>> which files have been processed and which remain pending. Is there a method
>> to seamlessly resume processing files from where they were left off,
>> particularly in situations where we need to submit and restart the job
>> manually due to any server restart or application restart? This becomes an
>> issue when the job processes all the files in the directory from the
>> beginning after a restart, and I'm seeking a solution to address this.
>>
>> Thanks and regards,
>> Arjun
>>
>> On Fri, 27 Oct 2023 at 07:29, Chirag Dewan 
>> wrote:
>>
>>> Hi Arjun,
>>>
>>> Flink's FileSource doesn't move or delete the files as of now. It will
>>> keep the files as is and remember the names of the files read in checkpointed
>>> state to ensure it doesn't read the same file twice.
>>>
>>> Flink's source API works in such a way that a single Enumerator operates on
>>> the JobManager. The enumerator is responsible for listing the files and
>>> splitting them into smaller units. These units could be the complete file
>>> (in the case of row formats) or splits within a file (for bulk formats). The
>>> reading is done by SplitReaders in the TaskManagers. This way it ensures
>>> that only the reading is done concurrently, and it is able to track file
>>> completions.
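
For the files-to-Kafka use case in this thread, a minimal sketch of a
continuously monitoring FileSource wired to a KafkaSink might look as follows
(the directory, broker address, topic, scan interval, and the text-line/String
formats are placeholders for whatever the real job uses):

    import java.time.Duration;
    import org.apache.flink.api.common.eventtime.WatermarkStrategy;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.base.DeliveryGuarantee;
    import org.apache.flink.connector.file.src.FileSource;
    import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
    import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
    import org.apache.flink.connector.kafka.sink.KafkaSink;
    import org.apache.flink.core.fs.Path;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    // Checkpointing is what persists the set of already-read files across restarts.
    env.enableCheckpointing(60_000);

    FileSource<String> source =
            FileSource.forRecordStreamFormat(new TextLineInputFormat(), new Path("/data/in"))
                    // Keep scanning the directory for new files every 10 seconds.
                    .monitorContinuously(Duration.ofSeconds(10))
                    .build();

    KafkaSink<String> sink =
            KafkaSink.<String>builder()
                    .setBootstrapServers("broker:9092")
                    .setRecordSerializer(
                            KafkaRecordSerializationSchema.builder()
                                    .setTopic("file-contents")
                                    .setValueSerializationSchema(new SimpleStringSchema())
                                    .build())
                    .setDeliveryGuarantee(DeliveryGuarantee.AT_LEAST_ONCE)
                    .build();

    env.fromSource(source, WatermarkStrategy.noWatermarks(), "file-source").sinkTo(sink);
    env.execute("files-to-kafka");

Because the already-read files are tracked in checkpointed state, the job only
resumes from where it left off when it is restarted from a checkpoint or
savepoint; a plain resubmission without restoring state starts from scratch.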
>>>
>>> You can read more about Flink Sources and the FileSystem connector in the
>>> Flink documentation.
>>>
>>>
>>>
>>>
>>> On Thursday, 26 October, 2023 at 06:53:23 pm IST, arjun s <
>>> arjunjoice...@gmail.com> wrote:
>>>
>>>
>>> Hello team,
>>> I'm currently in the process of configuring a Flink job. This job
>>> entails reading files from a specified directory and then transmitting the
>>> data to a Kafka sink. I've already successfully designed a Flink job that
>>> reads the file contents in a streaming manner and effectively sends them to
>>> Kafka. However, my specific requirement is a bit more intricate. I need the
>>> job to not only read these files and push the data to Kafka but also
>>> relocate the processed file to a different directory once all of its
>>> contents have been processed. Following this, the job should seamlessly
>>> transition to processing the next file in the source directory.
>>> Additionally, I have some concerns regarding how the job will behave if it
>>> encounters a restart. Could you please advise if this is achievable, and if
>>> so, provide guidance or code to implement it?
>>>
>>> I'm also quite interested in how the job will handle situations where
>>> the source has a parallelism greater than 2 or 3, and how it can accurately
>>> monitor the completion of reading all contents in each file.
>>>
>>> Thanks and Regards,
>>> Arjun
>>>
>>


Re: Invalid Null Check in DefaultFileFilter

2023-10-27 Thread Alexander Fedulov
* With regards to the empty string: the null check is still a bit defensive,
and one could return false in test(), but it does not really matter, since
String.substring in getName() can never return null.

On Fri, 27 Oct 2023 at 16:32, Alexander Fedulov 
wrote:

> Actually, this is not even "defensive programming", but is the required
> logic for processing directories.
> See here:
>
> https://github.com/apache/flink/blob/release-1.18/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/src/enumerate/NonSplittingRecursiveEnumerator.java#L90
>
>
> https://github.com/apache/flink/blob/release-1.18/flink-core/src/main/java/org/apache/flink/core/fs/Path.java#L295
>
> Returning false would prevent addSplitsForPath from adding all nested
> files recursively.
>
> Best,
> Alexander
>
>
>
> On Fri, 27 Oct 2023 at 04:04, Chirag Dewan 
> wrote:
>
>> Yeah, agreed, not a problem in general. But it just seems odd. Returning
>> true if the fileName is null will blow up a lot more in the reader, as far
>> as my understanding goes.
>>
>> I just want to understand whether this is an erroneous condition or an
>> actual use case. Let's say: is it possible to get a null file name for some
>> sub-directories, and hence important to return true so that the File Source
>> can monitor inside those sub-directories?
>>
>> On Friday, 27 October, 2023 at 12:58:44 am IST, Alexander Fedulov <
>> alexander.fedu...@gmail.com> wrote:
>>
>>
>> Is there an actual issue behind this question?
>>
>> In general: this is a form of defensive programming for a public
>> interface and the decision here is to be more lenient when facing
>> potentially erroneous user input rather than blow up the whole application
>> with a NullPointerException.
>>
>> Best,
>> Alexander Fedulov
>>
>> On Thu, 26 Oct 2023 at 07:35, Chirag Dewan via user <
>> user@flink.apache.org> wrote:
>>
>> Hi,
>>
>> I was looking at this check in DefaultFileFilter:
>>
>> public boolean test(Path path) {
>>     final String fileName = path.getName();
>>     if (fileName == null || fileName.length() == 0) {
>>         return true;
>>     }
>>
>> I was wondering how a file name can be null?
>>
>> And if null, shouldn't this be *return false*?
>>
>> I created a JIRA for this - [FLINK-33367] Invalid Check in
>> DefaultFileFilter - ASF JIRA
>>
>> Any input is appreciated.
>>
>> Thanks
>>
>>
>>
>>


Re: Invalid Null Check in DefaultFileFilter

2023-10-27 Thread Alexander Fedulov
Actually, this is not even "defensive programming", but is the required
logic for processing directories.
See here:
https://github.com/apache/flink/blob/release-1.18/flink-connectors/flink-connector-files/src/main/java/org/apache/flink/connector/file/src/enumerate/NonSplittingRecursiveEnumerator.java#L90

https://github.com/apache/flink/blob/release-1.18/flink-core/src/main/java/org/apache/flink/core/fs/Path.java#L295

Returning false would prevent addSplitsForPath from adding all nested files
recursively.
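
To illustrate, a custom filter with the same contract can be plugged in
through the FileSource builder. A rough sketch, assuming the setFileEnumerator
hook and the Predicate-based NonSplittingRecursiveEnumerator constructor from
the 1.18 file connector, with a placeholder input path:

    import org.apache.flink.connector.file.src.FileSource;
    import org.apache.flink.connector.file.src.enumerate.NonSplittingRecursiveEnumerator;
    import org.apache.flink.connector.file.src.reader.TextLineInputFormat;
    import org.apache.flink.core.fs.Path;

    FileSource<String> source =
            FileSource.forRecordStreamFormat(new TextLineInputFormat(), new Path("/data/in"))
                    .setFileEnumerator(
                            () ->
                                    new NonSplittingRecursiveEnumerator(
                                            path -> {
                                                final String fileName = path.getName();
                                                // Must stay true for null/empty names, otherwise
                                                // directory roots get filtered out and
                                                // addSplitsForPath never recurses into them.
                                                if (fileName == null || fileName.length() == 0) {
                                                    return true;
                                                }
                                                // Same effect as DefaultFileFilter: skip hidden files.
                                                return fileName.charAt(0) != '.'
                                                        && fileName.charAt(0) != '_';
                                            }))
                    .build();
    // Use with env.fromSource(...) as usual.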

Best,
Alexander



On Fri, 27 Oct 2023 at 04:04, Chirag Dewan  wrote:

> Yeah, agreed, not a problem in general. But it just seems odd. Returning
> true if the fileName is null will blow up a lot more in the reader, as far
> as my understanding goes.
>
> I just want to understand whether this is an erroneous condition or an
> actual use case. Let's say: is it possible to get a null file name for some
> sub-directories, and hence important to return true so that the File Source
> can monitor inside those sub-directories?
>
> On Friday, 27 October, 2023 at 12:58:44 am IST, Alexander Fedulov <
> alexander.fedu...@gmail.com> wrote:
>
>
> Is there an actual issue behind this question?
>
> In general: this is a form of defensive programming for a public interface
> and the decision here is to be more lenient when facing potentially
> erroneous user input rather than blow up the whole application with a
> NullPointerException.
>
> Best,
> Alexander Fedulov
>
> On Thu, 26 Oct 2023 at 07:35, Chirag Dewan via user 
> wrote:
>
> Hi,
>
> I was looking at this check in DefaultFileFilter:
>
> public boolean test(Path path) {
>     final String fileName = path.getName();
>     if (fileName == null || fileName.length() == 0) {
>         return true;
>     }
>
> I was wondering how a file name can be null?
>
> And if null, shouldn't this be *return false*?
>
> I created a JIRA for this - [FLINK-33367] Invalid Check in
> DefaultFileFilter - ASF JIRA
>
> Any input is appreciated.
>
> Thanks
>
>
>
>


Which Flink engine versions do Connectors support?

2023-10-27 Thread Xianxun Ye

Hello Team, 

After the release of Flink 1.18, I found that most connectors had been
externalized, e.g. the Kafka, ES, HBase, JDBC, and Pulsar connectors. But I
didn't find any manual or code indicating which versions of Flink these
connectors work with.


Best regards,
Xianxun



Re: Updating existing state with state processor API

2023-10-27 Thread Alexis Sarda-Espinosa
Hi Matthias,

Thanks for the response. I guess the specific question would be, if I work
with an existing savepoint and pass an empty DataStream to
OperatorTransformation#bootstrapWith, will the new savepoint end up with an
empty state for the modified operator, or will it maintain the existing
state because nothing was changed?

Regards,
Alexis.

On Fri, 27 Oct 2023 at 08:40, Schwalbe Matthias <
matthias.schwa...@viseca.ch> wrote:

> Good morning Alexis,
>
>
>
> Something like this we do all the time.
>
> Read an existing savepoint, copy over the operator states (keyed/non-keyed)
> that are not to be changed, and process/patch the remaining ones by
> transforming and bootstrapping them into new state.
>
>
>
> I could share more details for more specific questions, if you like.
>
>
>
> Regards
>
>
>
> Thias
>
>
>
> PS: I’m currently working on this ticket in order to get some glitches
> removed: FLINK-26585
>
>
>
>
>
> *From:* Alexis Sarda-Espinosa 
> *Sent:* Thursday, October 26, 2023 4:01 PM
> *To:* user 
> *Subject:* Updating existing state with state processor API
>
>
>
> Hello,
>
>
>
> The documentation of the state processor API has some examples to modify
> an existing savepoint by defining a StateBootstrapTransformation. In all
> cases, the entrypoint is OperatorTransformation#bootstrapWith, which
> expects a DataStream. If I pass an empty DataStream to bootstrapWith and
> then apply the resulting transformation to an existing savepoint, will the
> transformation still receive data from the existing state?
>
>
>
> If the aforementioned is incorrect, I imagine I could instantiate
> a SavepointReader and create a DataStream of the existing state with it,
> which I could then pass to the bootstrapWith method directly or after
> "unioning" it with additional state. Would this work?
>
>
>
> Regards,
>
> Alexis.
>
>
>


Re: Enhancing File Processing and Kafka Integration with Flink Jobs

2023-10-27 Thread Andrew Otto
For moving the files:
> It will keep the files as is and remember the name of the file read in
> checkpointed state to ensure it doesn't read the same file twice.

I wonder if you could use this fact to query the committed checkpoints and
move them away after the job is done.  I think it should even be safe to do
this outside of the Flink job periodically (cron, whatever), because on
restart it won't reprocess the files that have been committed in the
checkpoints.

https://nightlies.apache.org/flink/flink-docs-release-1.18/docs/libs/state_processor_api/#reading-state




On Fri, Oct 27, 2023 at 1:13 AM arjun s  wrote:

> Hi team, Thanks for your quick response.
> I have an inquiry regarding file processing in the event of a job restart.
> When the job is restarted, we encounter challenges in tracking which files
> have been processed and which remain pending. Is there a method to
> seamlessly resume processing files from where they were left off,
> particularly in situations where we need to submit and restart the job
> manually due to any server restart or application restart? This becomes an
> issue when the job processes all the files in the directory from the
> beginning after a restart, and I'm seeking a solution to address this.
>
> Thanks and regards,
> Arjun
>
> On Fri, 27 Oct 2023 at 07:29, Chirag Dewan 
> wrote:
>
>> Hi Arjun,
>>
>> Flink's FileSource doesn't move or delete the files as of now. It will
>> keep the files as is and remember the names of the files read in checkpointed
>> state to ensure it doesn't read the same file twice.
>>
>> Flink's source API works in such a way that a single Enumerator operates on
>> the JobManager. The enumerator is responsible for listing the files and
>> splitting them into smaller units. These units could be the complete file
>> (in the case of row formats) or splits within a file (for bulk formats). The
>> reading is done by SplitReaders in the TaskManagers. This way it ensures
>> that only the reading is done concurrently, and it is able to track file
>> completions.
>>
>> You can read more about Flink Sources and the FileSystem connector in the
>> Flink documentation.
>>
>>
>>
>>
>> On Thursday, 26 October, 2023 at 06:53:23 pm IST, arjun s <
>> arjunjoice...@gmail.com> wrote:
>>
>>
>> Hello team,
>> I'm currently in the process of configuring a Flink job. This job entails
>> reading files from a specified directory and then transmitting the data to
>> a Kafka sink. I've already successfully designed a Flink job that reads the
>> file contents in a streaming manner and effectively sends them to Kafka.
>> However, my specific requirement is a bit more intricate. I need the job to
>> not only read these files and push the data to Kafka but also relocate the
>> processed file to a different directory once all of its contents have been
>> processed. Following this, the job should seamlessly transition to
>> processing the next file in the source directory. Additionally, I have some
>> concerns regarding how the job will behave if it encounters a restart.
>> Could you please advise if this is achievable, and if so, provide guidance
>> or code to implement it?
>>
>> I'm also quite interested in how the job will handle situations where the
>> source has a parallelism greater than 2 or 3, and how it can accurately
>> monitor the completion of reading all contents in each file.
>>
>> Thanks and Regards,
>> Arjun
>>
>


Re:GlobalWindowAggregate

2023-10-27 Thread Xuyang
Hi, this node exists when you are using a windowing TVF [1] with mini-batch
enabled; the planner then optimizes the plan tree with local-global
aggregation [2]. You can find the benefits of the local-global optimization in
the docs linked below.

If you don't need this optimization, set 'table.optimizer.agg-phase-strategy'
to ONE_PHASE to disable it.

BTW, could you use jstack to show what the thread is doing?

[1] https://nightlies.apache.org/flink/flink-docs-release-1.16/docs/dev/table/sql/queries/window-tvf/
[2] https://nightlies.apache.org/flink/flink-docs-release-1.14/docs/dev/table/tuning/#local-global-aggregation
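
For reference, a minimal sketch of disabling the two-phase aggregation from the
Table API; the table, column names, and the query below are hypothetical, only
the config key and the ONE_PHASE value come from the documentation above:

    import org.apache.flink.table.api.EnvironmentSettings;
    import org.apache.flink.table.api.TableEnvironment;

    TableEnvironment tEnv =
            TableEnvironment.create(EnvironmentSettings.newInstance().inStreamingMode().build());

    // Force single-phase aggregation; the planner will not split window
    // aggregations into a LocalWindowAggregate/GlobalWindowAggregate pair.
    tEnv.getConfig().set("table.optimizer.agg-phase-strategy", "ONE_PHASE");

    // Hypothetical 3-hour tumbling window TVF query, similar to the plan shown below.
    tEnv.executeSql(
            "SELECT deviceId, COUNT(DISTINCT sessionId) AS cnt, window_start, window_end "
                    + "FROM TABLE(TUMBLE(TABLE device_events, DESCRIPTOR(event_time), INTERVAL '3' HOUR)) "
                    + "GROUP BY deviceId, window_start, window_end");

Note that ONE_PHASE removes the local pre-aggregation step, so the load may
simply concentrate on a single aggregation operator rather than disappear.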

--

Best!
Xuyang




At 2023-10-27 16:36:15, "Kenan Kılıçtepe"  wrote:

Hi,

Can someone tell me what GlobalWindowAggregate is?
It is always 100% busy in my job graph.

GlobalWindowAggregate(groupBy=[deviceId, fwVersion, modelName, manufacturer,
phoneNumber], window=[TUMBLE(slice_end=[$slice_end], size=[3 h])],
select=[deviceId, fwVersion, modelName, manufacturer, phoneNumber,
COUNT(distinct$0 count$0) AS CNT, start('w$) AS window_start, end('w$) AS
window_end])

Thanks

GlobalWindowAggregate

2023-10-27 Thread Kenan Kılıçtepe
Hi,

Can someone tell me what GlobalWindowAggregate is?
It is always 100% busy in my job graph.


GlobalWindowAggregate(groupBy=[deviceId, fwVersion, modelName,
manufacturer, phoneNumber], window=[TUMBLE(slice_end=[$slice_end], size=[3
h])], select=[deviceId, fwVersion, modelName, manufacturer, phoneNumber,
COUNT(distinct$0 count$0) AS CNT, start('w$) AS window_start, end('w$) AS
window_end])


Thanks


FW: Unable to achieve Flink kafka connector exactly once delivery semantics.

2023-10-27 Thread Gopal Chennupati (gchennup)
Hi,
Can someone please help me resolve the below issue while running a Flink job,
or provide a doc/example which describes the exactly-once delivery
guarantee semantics?

Thanks,
Gopal.

From: Gopal Chennupati (gchennup) 
Date: Friday, 27 October 2023 at 11:00 AM
To: commun...@flink.apache.org , 
u...@flink.apache.org 
Subject: Unable to achieve Flink kafka connector exactly once delivery 
semantics.
Hi Team,


I am trying to configure my Kafka sink connector with the "exactly-once"
delivery guarantee; however, it fails when I run the Flink job with this
configuration. Here is the full exception stack trace from the job logs.


[Source: SG-SGT-TransformerJob -> Map -> Sink: Writer -> Sink: Committer 
(5/10)#12] WARN org.apache.kafka.common.utils.AppInfoParser - Error registering 
AppInfo mbean

javax.management.InstanceAlreadyExistsException: 
kafka.producer:type=app-info,id=producer-sgt-4-1

  at 
java.management/com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:436)

  at 
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1855)

  at 
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerDynamicMBean(DefaultMBeanServerInterceptor.java:955)

  at 
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerObject(DefaultMBeanServerInterceptor.java:890)

  at 
java.management/com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:320)

  at 
java.management/com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)

  at 
org.apache.kafka.common.utils.AppInfoParser.registerAppInfo(AppInfoParser.java:64)

  at 
org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:433)

  at 
org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:289)

  at 
org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:316)

  at 
org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:301)

  at 
org.apache.flink.connector.kafka.sink.FlinkKafkaInternalProducer.<init>(FlinkKafkaInternalProducer.java:55)

  at 
org.apache.flink.connector.kafka.sink.KafkaWriter.getOrCreateTransactionalProducer(KafkaWriter.java:332)

  at 
org.apache.flink.connector.kafka.sink.TransactionAborter.abortTransactionOfSubtask(TransactionAborter.java:104)

  at 
org.apache.flink.connector.kafka.sink.TransactionAborter.abortTransactionsWithPrefix(TransactionAborter.java:82)

  at 
org.apache.flink.connector.kafka.sink.TransactionAborter.abortLingeringTransactions(TransactionAborter.java:66)

  at 
org.apache.flink.connector.kafka.sink.KafkaWriter.abortLingeringTransactions(KafkaWriter.java:295)

  at 
org.apache.flink.connector.kafka.sink.KafkaWriter.<init>(KafkaWriter.java:176)

  at 
org.apache.flink.connector.kafka.sink.KafkaSink.createWriter(KafkaSink.java:111)

  at 
org.apache.flink.connector.kafka.sink.KafkaSink.createWriter(KafkaSink.java:57)

  at 
org.apache.flink.streaming.runtime.operators.sink.StatefulSinkWriterStateHandler.createWriter(StatefulSinkWriterStateHandler.java:117)

  at 
org.apache.flink.streaming.runtime.operators.sink.SinkWriterOperator.initializeState(SinkWriterOperator.java:146)

  at 
org.apache.flink.streaming.api.operators.StreamOperatorStateHandler.initializeOperatorState(StreamOperatorStateHandler.java:122)

  at 
org.apache.flink.streaming.api.operators.AbstractStreamOperator.initializeState(AbstractStreamOperator.java:274)

  at 
org.apache.flink.streaming.runtime.tasks.RegularOperatorChain.initializeStateAndOpenOperators(RegularOperatorChain.java:106)

  at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreGates(StreamTask.java:734)

  at 
org.apache.flink.streaming.runtime.tasks.StreamTaskActionExecutor$1.call(StreamTaskActionExecutor.java:55)

  at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restoreInternal(StreamTask.java:709)

  at 
org.apache.flink.streaming.runtime.tasks.StreamTask.restore(StreamTask.java:675)

  at 
org.apache.flink.runtime.taskmanager.Task.runWithSystemExitMonitoring(Task.java:952)

  at 
org.apache.flink.runtime.taskmanager.Task.restoreAndInvoke(Task.java:921)

  at org.apache.flink.runtime.taskmanager.Task.doRun(Task.java:745)

  at org.apache.flink.runtime.taskmanager.Task.run(Task.java:562)

  at java.base/java.lang.Thread.run(Thread.java:834)


And here is the producer configuration:

KafkaSink sink = KafkaSink
        .builder()
        .setBootstrapServers(producerConfig.getProperty("bootstrap.servers"))
        .setKafkaProducerConfig(producerConfig)
        .setRecordSerializer(new GenericMessageSerialization<>(
                generic_key.class,
                generic_value.class,
                producerConfig.getProperty("topic"),
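
For comparison, a minimal sketch of an exactly-once sink configuration; the
broker address, topic, transactional id prefix, and the String schema below are
placeholders standing in for the job's real settings and its
GenericMessageSerialization:

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.connector.base.DeliveryGuarantee;
    import org.apache.flink.connector.kafka.sink.KafkaRecordSerializationSchema;
    import org.apache.flink.connector.kafka.sink.KafkaSink;

    Properties producerConfig = new Properties();
    // Must not exceed the broker's transaction.max.timeout.ms (900000 ms by default).
    producerConfig.setProperty("transaction.timeout.ms", "900000");

    KafkaSink<String> sink =
            KafkaSink.<String>builder()
                    .setBootstrapServers("broker:9092")
                    .setKafkaProducerConfig(producerConfig)
                    .setRecordSerializer(
                            KafkaRecordSerializationSchema.builder()
                                    .setTopic("output-topic")
                                    .setValueSerializationSchema(new SimpleStringSchema())
                                    .build())
                    .setDeliveryGuarantee(DeliveryGuarantee.EXACTLY_ONCE)
                    // Must be unique per application writing to the same Kafka cluster.
                    .setTransactionalIdPrefix("my-app-tx-")
                    .build();

With EXACTLY_ONCE, Flink commits the Kafka transactions when checkpoints
complete, so checkpointing must be enabled on the environment, and downstream
consumers should read with isolation.level=read_committed.
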

RE: Updating existing state with state processor API

2023-10-27 Thread Schwalbe Matthias
Good morning Alexis,

Something like this we do all the time.
Read an existing savepoint, copy over the operator states (keyed/non-keyed)
that are not to be changed, and process/patch the remaining ones by
transforming and bootstrapping them into new state.
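
Roughly, that flow looks like the following sketch against the 1.18 state
processor API; the savepoint paths, the operator uid, the MyEntry state type,
and the reader/bootstrap functions are placeholders:

    import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
    import org.apache.flink.state.api.OperatorIdentifier;
    import org.apache.flink.state.api.OperatorTransformation;
    import org.apache.flink.state.api.SavepointReader;
    import org.apache.flink.state.api.SavepointWriter;
    import org.apache.flink.state.api.StateBootstrapTransformation;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // 1) Read the state of the operator to be patched from the existing savepoint.
    SavepointReader reader =
            SavepointReader.read(env, "s3://savepoints/sp-1", new HashMapStateBackend());
    DataStream<MyEntry> existing =
            reader.readKeyedState(
                    OperatorIdentifier.forUid("stateful-op-uid"), new MyEntryReaderFunction());

    // 2) Union the existing entries with patched/additional ones, then bootstrap
    //    the new state from that stream.
    DataStream<MyEntry> patched = existing.union(env.fromElements(new MyEntry("key-42", 0L)));
    StateBootstrapTransformation<MyEntry> transformation =
            OperatorTransformation.bootstrapWith(patched)
                    .keyBy(e -> e.key)
                    .transform(new MyEntryBootstrapFunction());

    // 3) Write a new savepoint: untouched operators are carried over as-is, the
    //    patched operator is removed and re-added with the bootstrapped state.
    SavepointWriter.fromExistingSavepoint(env, "s3://savepoints/sp-1", new HashMapStateBackend())
            .removeOperator(OperatorIdentifier.forUid("stateful-op-uid"))
            .withOperator(OperatorIdentifier.forUid("stateful-op-uid"), transformation)
            .write("s3://savepoints/sp-2");

    env.execute("patch-savepoint");

The union step mirrors the description above: state that should survive the
patch is read from the old savepoint and fed back into bootstrapWith together
with the new or corrected entries, while operators that are not touched are
simply carried over by SavepointWriter.fromExistingSavepoint.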

I could share more details for more specific questions, if you like.

Regards

Thias

PS: I’m currently working on this ticket in order to get some glitches removed: 
FLINK-26585


From: Alexis Sarda-Espinosa 
Sent: Thursday, October 26, 2023 4:01 PM
To: user 
Subject: Updating existing state with state processor API

Hello,

The documentation of the state processor API has some examples to modify an 
existing savepoint by defining a StateBootstrapTransformation. In all cases, 
the entrypoint is OperatorTransformation#bootstrapWith, which expects a 
DataStream. If I pass an empty DataStream to bootstrapWith and then apply the 
resulting transformation to an existing savepoint, will the transformation 
still receive data from the existing state?

If the aforementioned is incorrect, I imagine I could instantiate a 
SavepointReader and create a DataStream of the existing state with it, which I 
could then pass to the bootstrapWith method directly or after "unioning" it 
with additional state. Would this work?

Regards,
Alexis.
