This is an automated email from the ASF dual-hosted git repository.

acosentino pushed a commit to branch camel-master
in repository https://gitbox.apache.org/repos/asf/camel-kafka-connector.git


The following commit(s) were added to refs/heads/camel-master by this push:
     new 64f8562  [create-pull-request] automated change
64f8562 is described below

commit 64f8562999b1712f30e86e039fbc0075a15141a1
Author: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
AuthorDate: Tue Mar 23 03:53:11 2021 +0000

    [create-pull-request] automated change
---
 .../connectors/camel-google-functions-sink.json    | 87 ++++++++++++++++++++++
 .../resources/connectors/camel-kafka-source.json   | 32 ++++++++
 .../connectors/camel-scheduler-source.json         | 20 ++---
 .../connectors/camel-spring-rabbitmq-sink.json     |  7 ++
 .../generated/resources/camel-kafka-source.json    | 32 ++++++++
 .../docs/camel-kafka-kafka-source-connector.adoc   |  5 +-
 .../kafka/CamelKafkaSourceConnectorConfig.java     | 12 +++
 .../resources/camel-scheduler-source.json          | 20 ++---
 .../camel-scheduler-kafka-source-connector.adoc    |  4 +-
 .../CamelSchedulerSourceConnectorConfig.java       | 16 ++--
 .../resources/camel-spring-rabbitmq-sink.json      |  7 ++
 ...camel-spring-rabbitmq-kafka-sink-connector.adoc |  3 +-
 .../CamelSpringrabbitmqSinkConnectorConfig.java    |  4 +
 .../camel-kafka-kafka-source-connector.adoc        |  5 +-
 .../camel-scheduler-kafka-source-connector.adoc    |  4 +-
 ...camel-spring-rabbitmq-kafka-sink-connector.adoc |  3 +-
 16 files changed, 225 insertions(+), 36 deletions(-)

diff --git 
a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-google-functions-sink.json
 
b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-google-functions-sink.json
new file mode 100644
index 0000000..92708d7
--- /dev/null
+++ 
b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-google-functions-sink.json
@@ -0,0 +1,87 @@
+{
+       "connector": {
+               "class": 
"org.apache.camel.kafkaconnector.googlefunctions.CamelGooglefunctionsSinkConnector",
+               "artifactId": "camel-google-functions-kafka-connector",
+               "groupId": "org.apache.camel.kafkaconnector",
+               "id": "camel-google-functions-sink",
+               "type": "sink",
+               "version": "0.9.0-SNAPSHOT",
+               "description": "Manage and invoke Google Cloud Functions using the 
google-cloud-functions library."
+       },
+       "properties": {
+               "camel.sink.path.functionName": {
+                       "name": "camel.sink.path.functionName",
+                       "description": "The user-defined name of the function",
+                       "priority": "HIGH",
+                       "required": "true"
+               },
+               "camel.sink.endpoint.serviceAccountKey": {
+                       "name": "camel.sink.endpoint.serviceAccountKey",
+                       "description": "Service account key to authenticate an 
application as a service account",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
+               "camel.sink.endpoint.lazyStartProducer": {
+                       "name": "camel.sink.endpoint.lazyStartProducer",
+                       "description": "Whether the producer should be started 
lazy (on the first message). By starting lazy you can use this to allow 
CamelContext and routes to startup in situations where a producer may otherwise 
fail during starting and cause the route to fail being started. By deferring 
this startup to be lazy then the startup failure can be handled during routing 
messages via Camel's routing error handlers. Beware that when the first message 
is processed then creating and starting the pr [...]
+                       "defaultValue": "false",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
+               "camel.sink.endpoint.location": {
+                       "name": "camel.sink.endpoint.location",
+                       "description": "The Google Cloud Location (Region) 
where the Function is located",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
+               "camel.sink.endpoint.operation": {
+                       "name": "camel.sink.endpoint.operation",
+                       "description": "The operation to perform on the 
producer. One of: [listFunctions] [getFunction] [callFunction] 
[generateDownloadUrl] [generateUploadUrl] [createFunction] [updateFunction] 
[deleteFunction]",
+                       "priority": "MEDIUM",
+                       "required": "false",
+                       "enum": [
+                               "listFunctions",
+                               "getFunction",
+                               "callFunction",
+                               "generateDownloadUrl",
+                               "generateUploadUrl",
+                               "createFunction",
+                               "updateFunction",
+                               "deleteFunction"
+                       ]
+               },
+               "camel.sink.endpoint.pojoRequest": {
+                       "name": "camel.sink.endpoint.pojoRequest",
+                       "description": "Specifies if the request is a pojo 
request",
+                       "defaultValue": "false",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
+               "camel.sink.endpoint.project": {
+                       "name": "camel.sink.endpoint.project",
+                       "description": "The Google Cloud Project name where the 
Function is located",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
+               "camel.sink.endpoint.client": {
+                       "name": "camel.sink.endpoint.client",
+                       "description": "The client to use during service 
invocation.",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
+               "camel.component.google-functions.lazyStartProducer": {
+                       "name": 
"camel.component.google-functions.lazyStartProducer",
+                       "description": "Whether the producer should be started 
lazy (on the first message). By starting lazy you can use this to allow 
CamelContext and routes to startup in situations where a producer may otherwise 
fail during starting and cause the route to fail being started. By deferring 
this startup to be lazy then the startup failure can be handled during routing 
messages via Camel's routing error handlers. Beware that when the first message 
is processed then creating and starting the pr [...]
+                       "defaultValue": "false",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
+               "camel.component.google-functions.autowiredEnabled": {
+                       "name": 
"camel.component.google-functions.autowiredEnabled",
+                       "description": "Whether autowiring is enabled. This is 
used for automatic autowiring options (the option must be marked as autowired) 
by looking up in the registry to find if there is a single instance of matching 
type, which then gets configured on the component. This can be used for 
automatic configuring JDBC data sources, JMS connection factories, AWS Clients, 
etc.",
+                       "defaultValue": "true",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               }
+       }
+}
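
(For orientation only, not part of the commit: a minimal sink configuration assembled
from the options above might look like the following properties snippet. The topic,
project, region, function name and key path are placeholder values.)

    connector.class=org.apache.camel.kafkaconnector.googlefunctions.CamelGooglefunctionsSinkConnector
    topics=my-topic
    camel.sink.path.functionName=my-function
    camel.sink.endpoint.project=my-gcp-project
    camel.sink.endpoint.location=us-central1
    camel.sink.endpoint.operation=callFunction
    # path to a service account key file; the value format is illustrative
    camel.sink.endpoint.serviceAccountKey=/path/to/service-account.json
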
diff --git 
a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-source.json
 
b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-source.json
index 42ba6a0..04c880b 100644
--- 
a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-source.json
+++ 
b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-kafka-source.json
@@ -220,6 +220,19 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.source.endpoint.pollOnError": {
+                       "name": "camel.source.endpoint.pollOnError",
+                       "description": "What to do if kafka threw an exception 
while polling for new messages. Will by default use the value from the 
component configuration unless an explicit value has been configured on the 
endpoint level. DISCARD will discard the message and continue to poll next 
message. ERROR_HANDLER will use Camel's error handler to process the exception, 
and afterwards continue to poll next message. RECONNECT will re-connect the 
consumer and try poll the message again RETRY will let t [...]
+                       "priority": "MEDIUM",
+                       "required": "false",
+                       "enum": [
+                               "DISCARD",
+                               "ERROR_HANDLER",
+                               "RECONNECT",
+                               "RETRY",
+                               "STOP"
+                       ]
+               },
                "camel.source.endpoint.pollTimeoutMs": {
                        "name": "camel.source.endpoint.pollTimeoutMs",
                        "description": "The timeout used when polling the 
KafkaConsumer.",
@@ -638,6 +651,19 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.kafka.pollOnError": {
+                       "name": "camel.component.kafka.pollOnError",
+                       "description": "What to do if kafka threw an exception 
while polling for new messages. Will by default use the value from the 
component configuration unless an explicit value has been configured on the 
endpoint level. DISCARD will discard the message and continue to poll next 
message. ERROR_HANDLER will use Camel's error handler to process the exception, 
and afterwards continue to poll next message. RECONNECT will re-connect the 
consumer and try poll the message again RETRY will let t [...]
+                       "priority": "MEDIUM",
+                       "required": "false",
+                       "enum": [
+                               "DISCARD",
+                               "ERROR_HANDLER",
+                               "RECONNECT",
+                               "RETRY",
+                               "STOP"
+                       ]
+               },
                "camel.component.kafka.pollTimeoutMs": {
                        "name": "camel.component.kafka.pollTimeoutMs",
                        "description": "The timeout used when polling the 
KafkaConsumer.",
@@ -689,6 +715,12 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.kafka.pollExceptionStrategy": {
+                       "name": "camel.component.kafka.pollExceptionStrategy",
+                       "description": "To use a custom strategy with the 
consumer to control how to handle exceptions thrown from the Kafka broker while 
polling messages.",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.component.kafka.autowiredEnabled": {
                        "name": "camel.component.kafka.autowiredEnabled",
                        "description": "Whether autowiring is enabled. This is 
used for automatic autowiring options (the option must be marked as autowired) 
by looking up in the registry to find if there is a single instance of matching 
type, which then gets configured on the component. This can be used for 
automatic configuring JDBC data sources, JMS connection factories, AWS Clients, 
etc.",
diff --git 
a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-scheduler-source.json
 
b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-scheduler-source.json
index 4da4b8b..05a9757 100644
--- 
a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-scheduler-source.json
+++ 
b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-scheduler-source.json
@@ -77,13 +77,6 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
-               "camel.source.endpoint.concurrentTasks": {
-                       "name": "camel.source.endpoint.concurrentTasks",
-                       "description": "Number of threads used by the 
scheduling thread pool. Is by default using a single thread",
-                       "defaultValue": "1",
-                       "priority": "MEDIUM",
-                       "required": "false"
-               },
                "camel.source.endpoint.delay": {
                        "name": "camel.source.endpoint.delay",
                        "description": "Milliseconds before the next poll.",
@@ -105,6 +98,13 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.source.endpoint.poolSize": {
+                       "name": "camel.source.endpoint.poolSize",
+                       "description": "Number of core threads in the thread 
pool used by the scheduling thread pool. Is by default using a single thread",
+                       "defaultValue": "1",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.source.endpoint.repeatCount": {
                        "name": "camel.source.endpoint.repeatCount",
                        "description": "Specifies a maximum limit of number of 
fires. So if you set it to 1, the scheduler will only fire once. If you set it 
to 5, it will only fire five times. A value of zero or negative means fire 
forever.",
@@ -190,9 +190,9 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
-               "camel.component.scheduler.concurrentTasks": {
-                       "name": "camel.component.scheduler.concurrentTasks",
-                       "description": "Number of threads used by the 
scheduling thread pool. Is by default using a single thread",
+               "camel.component.scheduler.poolSize": {
+                       "name": "camel.component.scheduler.poolSize",
+                       "description": "Number of core threads in the thread 
pool used by the scheduling thread pool. Is by default using a single thread",
                        "defaultValue": "1",
                        "priority": "MEDIUM",
                        "required": "false"
diff --git 
a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-spring-rabbitmq-sink.json
 
b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-spring-rabbitmq-sink.json
index 1a6a93c..955b8fd 100644
--- 
a/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-spring-rabbitmq-sink.json
+++ 
b/camel-kafka-connector-catalog/src/generated/resources/connectors/camel-spring-rabbitmq-sink.json
@@ -106,6 +106,13 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.spring-rabbitmq.allowNullBody": {
+                       "name": "camel.component.spring-rabbitmq.allowNullBody",
+                       "description": "Whether to allow sending messages with 
no body. If this option is false and the message body is null, then a 
MessageConversionException is thrown.",
+                       "defaultValue": "false",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.component.spring-rabbitmq.lazyStartProducer": {
                        "name": 
"camel.component.spring-rabbitmq.lazyStartProducer",
                        "description": "Whether the producer should be started 
lazy (on the first message). By starting lazy you can use this to allow 
CamelContext and routes to startup in situations where a producer may otherwise 
fail during starting and cause the route to fail being started. By deferring 
this startup to be lazy then the startup failure can be handled during routing 
messages via Camel's routing error handlers. Beware that when the first message 
is processed then creating and starting the pr [...]
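
(Sketch only, not part of the commit: how the new allowNullBody flag might be used on a
spring-rabbitmq sink. The exchange name is a placeholder and the path option is assumed
from the component's usual layout.)

    connector.class=org.apache.camel.kafkaconnector.springrabbitmq.CamelSpringrabbitmqSinkConnector
    topics=my-topic
    camel.sink.path.exchangeName=my-exchange
    # allow records without a body instead of raising a MessageConversionException
    camel.component.spring-rabbitmq.allowNullBody=true
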
diff --git 
a/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-source.json
 
b/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-source.json
index 42ba6a0..04c880b 100644
--- 
a/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-source.json
+++ 
b/connectors/camel-kafka-kafka-connector/src/generated/resources/camel-kafka-source.json
@@ -220,6 +220,19 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.source.endpoint.pollOnError": {
+                       "name": "camel.source.endpoint.pollOnError",
+                       "description": "What to do if kafka threw an exception 
while polling for new messages. Will by default use the value from the 
component configuration unless an explicit value has been configured on the 
endpoint level. DISCARD will discard the message and continue to poll next 
message. ERROR_HANDLER will use Camel's error handler to process the exception, 
and afterwards continue to poll next message. RECONNECT will re-connect the 
consumer and try poll the message again RETRY will let t [...]
+                       "priority": "MEDIUM",
+                       "required": "false",
+                       "enum": [
+                               "DISCARD",
+                               "ERROR_HANDLER",
+                               "RECONNECT",
+                               "RETRY",
+                               "STOP"
+                       ]
+               },
                "camel.source.endpoint.pollTimeoutMs": {
                        "name": "camel.source.endpoint.pollTimeoutMs",
                        "description": "The timeout used when polling the 
KafkaConsumer.",
@@ -638,6 +651,19 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.kafka.pollOnError": {
+                       "name": "camel.component.kafka.pollOnError",
+                       "description": "What to do if kafka threw an exception 
while polling for new messages. Will by default use the value from the 
component configuration unless an explicit value has been configured on the 
endpoint level. DISCARD will discard the message and continue to poll next 
message. ERROR_HANDLER will use Camel's error handler to process the exception, 
and afterwards continue to poll next message. RECONNECT will re-connect the 
consumer and try poll the message again RETRY will let t [...]
+                       "priority": "MEDIUM",
+                       "required": "false",
+                       "enum": [
+                               "DISCARD",
+                               "ERROR_HANDLER",
+                               "RECONNECT",
+                               "RETRY",
+                               "STOP"
+                       ]
+               },
                "camel.component.kafka.pollTimeoutMs": {
                        "name": "camel.component.kafka.pollTimeoutMs",
                        "description": "The timeout used when polling the 
KafkaConsumer.",
@@ -689,6 +715,12 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.kafka.pollExceptionStrategy": {
+                       "name": "camel.component.kafka.pollExceptionStrategy",
+                       "description": "To use a custom strategy with the 
consumer to control how to handle exceptions thrown from the Kafka broker while 
polling messages.",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.component.kafka.autowiredEnabled": {
                        "name": "camel.component.kafka.autowiredEnabled",
                        "description": "Whether autowiring is enabled. This is 
used for automatic autowiring options (the option must be marked as autowired) 
by looking up in the registry to find if there is a single instance of matching 
type, which then gets configured on the component. This can be used for 
automatic configuring JDBC data sources, JMS connection factories, AWS Clients, 
etc.",
diff --git 
a/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-source-connector.adoc
 
b/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-source-connector.adoc
index 03f8c86..5686b4b 100644
--- 
a/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-source-connector.adoc
+++ 
b/connectors/camel-kafka-kafka-connector/src/main/docs/camel-kafka-kafka-source-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.kafka.CamelKafkaSourceConnector
 ----
 
 
-The camel-kafka source connector supports 122 options, which are listed below.
+The camel-kafka source connector supports 125 options, which are listed below.
 
 
 
@@ -61,6 +61,7 @@ The camel-kafka source connector supports 122 options, which 
are listed below.
 | *camel.source.endpoint.maxPollRecords* | The maximum number of records 
returned in a single call to poll() | "500" | false | MEDIUM
 | *camel.source.endpoint.offsetRepository* | The offset repository to use in 
order to locally store the offset of each partition of the topic. Defining one 
will disable the autocommit. | null | false | MEDIUM
 | *camel.source.endpoint.partitionAssignor* | The class name of the partition 
assignment strategy that the client will use to distribute partition ownership 
amongst consumer instances when group management is used | 
"org.apache.kafka.clients.consumer.RangeAssignor" | false | MEDIUM
+| *camel.source.endpoint.pollOnError* | What to do if kafka threw an exception 
while polling for new messages. Will by default use the value from the 
component configuration unless an explicit value has been configured on the 
endpoint level. DISCARD will discard the message and continue to poll next 
message. ERROR_HANDLER will use Camel's error handler to process the exception, 
and afterwards continue to poll next message. RECONNECT will re-connect the 
consumer and try poll the message a [...]
 | *camel.source.endpoint.pollTimeoutMs* | The timeout used when polling the 
KafkaConsumer. | "5000" | false | MEDIUM
 | *camel.source.endpoint.seekTo* | Set if KafkaConsumer will read from 
beginning or end on startup: beginning : read from beginning end : read from 
end This is replacing the earlier property seekToBeginning One of: [beginning] 
[end] | null | false | MEDIUM
 | *camel.source.endpoint.sessionTimeoutMs* | The timeout used to detect 
failures when using Kafka's group management facilities. | "10000" | false | 
MEDIUM
@@ -121,6 +122,7 @@ The camel-kafka source connector supports 122 options, 
which are listed below.
 | *camel.component.kafka.maxPollRecords* | The maximum number of records 
returned in a single call to poll() | "500" | false | MEDIUM
 | *camel.component.kafka.offsetRepository* | The offset repository to use in 
order to locally store the offset of each partition of the topic. Defining one 
will disable the autocommit. | null | false | MEDIUM
 | *camel.component.kafka.partitionAssignor* | The class name of the partition 
assignment strategy that the client will use to distribute partition ownership 
amongst consumer instances when group management is used | 
"org.apache.kafka.clients.consumer.RangeAssignor" | false | MEDIUM
+| *camel.component.kafka.pollOnError* | What to do if kafka threw an exception 
while polling for new messages. Will by default use the value from the 
component configuration unless an explicit value has been configured on the 
endpoint level. DISCARD will discard the message and continue to poll next 
message. ERROR_HANDLER will use Camel's error handler to process the exception, 
and afterwards continue to poll next message. RECONNECT will re-connect the 
consumer and try poll the message a [...]
 | *camel.component.kafka.pollTimeoutMs* | The timeout used when polling the 
KafkaConsumer. | "5000" | false | MEDIUM
 | *camel.component.kafka.seekTo* | Set if KafkaConsumer will read from 
beginning or end on startup: beginning : read from beginning end : read from 
end This is replacing the earlier property seekToBeginning One of: [beginning] 
[end] | null | false | MEDIUM
 | *camel.component.kafka.sessionTimeoutMs* | The timeout used to detect 
failures when using Kafka's group management facilities. | "10000" | false | 
MEDIUM
@@ -128,6 +130,7 @@ The camel-kafka source connector supports 122 options, 
which are listed below.
 | *camel.component.kafka.topicIsPattern* | Whether the topic is a pattern 
(regular expression). This can be used to subscribe to dynamic number of topics 
matching the pattern. | false | false | MEDIUM
 | *camel.component.kafka.valueDeserializer* | Deserializer class for value 
that implements the Deserializer interface. | 
"org.apache.kafka.common.serialization.StringDeserializer" | false | MEDIUM
 | *camel.component.kafka.kafkaManualCommitFactory* | Factory to use for 
creating KafkaManualCommit instances. This allows to plugin a custom factory to 
create custom KafkaManualCommit instances in case special logic is needed when 
doing manual commits that deviates from the default implementation that comes 
out of the box. | null | false | MEDIUM
+| *camel.component.kafka.pollExceptionStrategy* | To use a custom strategy 
with the consumer to control how to handle exceptions thrown from the Kafka 
broker while polling messages. | null | false | MEDIUM
 | *camel.component.kafka.autowiredEnabled* | Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as 
autowired) by looking up in the registry to find if there is a single instance 
of matching type, which then gets configured on the component. This can be used 
for automatic configuring JDBC data sources, JMS connection factories, AWS 
Clients, etc. | true | false | MEDIUM
 | *camel.component.kafka.kafkaClientFactory* | Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients. | null | false | MEDIUM
 | *camel.component.kafka.synchronous* | Sets whether synchronous processing 
should be strictly used | false | false | MEDIUM
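
(Illustration only, not part of the commit: the pollOnError description above says an
explicit endpoint value overrides the component configuration, so a single connector
could combine the two levels like this.)

    # component-wide default for every endpoint created by this connector
    camel.component.kafka.pollOnError=DISCARD
    # the endpoint-level value takes precedence over the component default
    camel.source.endpoint.pollOnError=RECONNECT
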
diff --git 
a/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSourceConnectorConfig.java
 
b/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSourceConnectorConfig.java
index b4af220..aa346f0 100644
--- 
a/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSourceConnectorConfig.java
+++ 
b/connectors/camel-kafka-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/kafka/CamelKafkaSourceConnectorConfig.java
@@ -116,6 +116,9 @@ public class CamelKafkaSourceConnectorConfig
     public static final String 
CAMEL_SOURCE_KAFKA_ENDPOINT_PARTITION_ASSIGNOR_CONF = 
"camel.source.endpoint.partitionAssignor";
     public static final String 
CAMEL_SOURCE_KAFKA_ENDPOINT_PARTITION_ASSIGNOR_DOC = "The class name of the 
partition assignment strategy that the client will use to distribute partition 
ownership amongst consumer instances when group management is used";
     public static final String 
CAMEL_SOURCE_KAFKA_ENDPOINT_PARTITION_ASSIGNOR_DEFAULT = 
"org.apache.kafka.clients.consumer.RangeAssignor";
+    public static final String CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_ON_ERROR_CONF 
= "camel.source.endpoint.pollOnError";
+    public static final String CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_ON_ERROR_DOC = 
"What to do if kafka threw an exception while polling for new messages. Will by 
default use the value from the component configuration unless an explicit value 
has been configured on the endpoint level. DISCARD will discard the message and 
continue to poll next message. ERROR_HANDLER will use Camel's error handler to 
process the exception, and afterwards continue to poll next message. RECONNECT 
will re-connect [...]
+    public static final String 
CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_ON_ERROR_DEFAULT = null;
     public static final String 
CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_TIMEOUT_MS_CONF = 
"camel.source.endpoint.pollTimeoutMs";
     public static final String CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_TIMEOUT_MS_DOC 
= "The timeout used when polling the KafkaConsumer.";
     public static final String 
CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_TIMEOUT_MS_DEFAULT = "5000";
@@ -296,6 +299,9 @@ public class CamelKafkaSourceConnectorConfig
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_PARTITION_ASSIGNOR_CONF = 
"camel.component.kafka.partitionAssignor";
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_PARTITION_ASSIGNOR_DOC = "The class name of the 
partition assignment strategy that the client will use to distribute partition 
ownership amongst consumer instances when group management is used";
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_PARTITION_ASSIGNOR_DEFAULT = 
"org.apache.kafka.clients.consumer.RangeAssignor";
+    public static final String CAMEL_SOURCE_KAFKA_COMPONENT_POLL_ON_ERROR_CONF 
= "camel.component.kafka.pollOnError";
+    public static final String CAMEL_SOURCE_KAFKA_COMPONENT_POLL_ON_ERROR_DOC 
= "What to do if kafka threw an exception while polling for new messages. Will 
by default use the value from the component configuration unless an explicit 
value has been configured on the endpoint level. DISCARD will discard the 
message and continue to poll next message. ERROR_HANDLER will use Camel's error 
handler to process the exception, and afterwards continue to poll next message. 
RECONNECT will re-connec [...]
+    public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_POLL_ON_ERROR_DEFAULT = null;
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_POLL_TIMEOUT_MS_CONF = 
"camel.component.kafka.pollTimeoutMs";
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_POLL_TIMEOUT_MS_DOC = "The timeout used when 
polling the KafkaConsumer.";
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_POLL_TIMEOUT_MS_DEFAULT = "5000";
@@ -317,6 +323,9 @@ public class CamelKafkaSourceConnectorConfig
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_MANUAL_COMMIT_FACTORY_CONF = 
"camel.component.kafka.kafkaManualCommitFactory";
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_MANUAL_COMMIT_FACTORY_DOC = "Factory to use 
for creating KafkaManualCommit instances. This allows to plugin a custom 
factory to create custom KafkaManualCommit instances in case special logic is 
needed when doing manual commits that deviates from the default implementation 
that comes out of the box.";
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_MANUAL_COMMIT_FACTORY_DEFAULT = null;
+    public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_POLL_EXCEPTION_STRATEGY_CONF = 
"camel.component.kafka.pollExceptionStrategy";
+    public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_POLL_EXCEPTION_STRATEGY_DOC = "To use a custom 
strategy with the consumer to control how to handle exceptions thrown from the 
Kafka broker while polling messages.";
+    public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_POLL_EXCEPTION_STRATEGY_DEFAULT = null;
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_AUTOWIRED_ENABLED_CONF = 
"camel.component.kafka.autowiredEnabled";
     public static final String 
CAMEL_SOURCE_KAFKA_COMPONENT_AUTOWIRED_ENABLED_DOC = "Whether autowiring is 
enabled. This is used for automatic autowiring options (the option must be 
marked as autowired) by looking up in the registry to find if there is a single 
instance of matching type, which then gets configured on the component. This 
can be used for automatic configuring JDBC data sources, JMS connection 
factories, AWS Clients, etc.";
     public static final Boolean 
CAMEL_SOURCE_KAFKA_COMPONENT_AUTOWIRED_ENABLED_DEFAULT = true;
@@ -435,6 +444,7 @@ public class CamelKafkaSourceConnectorConfig
         conf.define(CAMEL_SOURCE_KAFKA_ENDPOINT_MAX_POLL_RECORDS_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_ENDPOINT_MAX_POLL_RECORDS_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_ENDPOINT_MAX_POLL_RECORDS_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_ENDPOINT_OFFSET_REPOSITORY_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_ENDPOINT_OFFSET_REPOSITORY_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_ENDPOINT_OFFSET_REPOSITORY_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_ENDPOINT_PARTITION_ASSIGNOR_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_ENDPOINT_PARTITION_ASSIGNOR_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_KAFKA_ENDPOINT_PARTITION_ASSIGNOR_DOC);
+        conf.define(CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_ON_ERROR_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_ON_ERROR_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_ON_ERROR_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_TIMEOUT_MS_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_TIMEOUT_MS_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_ENDPOINT_POLL_TIMEOUT_MS_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_ENDPOINT_SEEK_TO_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_ENDPOINT_SEEK_TO_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_ENDPOINT_SEEK_TO_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_ENDPOINT_SESSION_TIMEOUT_MS_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_ENDPOINT_SESSION_TIMEOUT_MS_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_KAFKA_ENDPOINT_SESSION_TIMEOUT_MS_DOC);
@@ -495,6 +505,7 @@ public class CamelKafkaSourceConnectorConfig
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_MAX_POLL_RECORDS_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_COMPONENT_MAX_POLL_RECORDS_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_COMPONENT_MAX_POLL_RECORDS_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_OFFSET_REPOSITORY_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_COMPONENT_OFFSET_REPOSITORY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_KAFKA_COMPONENT_OFFSET_REPOSITORY_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_PARTITION_ASSIGNOR_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_COMPONENT_PARTITION_ASSIGNOR_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_KAFKA_COMPONENT_PARTITION_ASSIGNOR_DOC);
+        conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_POLL_ON_ERROR_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_COMPONENT_POLL_ON_ERROR_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_COMPONENT_POLL_ON_ERROR_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_POLL_TIMEOUT_MS_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_COMPONENT_POLL_TIMEOUT_MS_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_COMPONENT_POLL_TIMEOUT_MS_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_SEEK_TO_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_COMPONENT_SEEK_TO_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_COMPONENT_SEEK_TO_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_SESSION_TIMEOUT_MS_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_COMPONENT_SESSION_TIMEOUT_MS_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_KAFKA_COMPONENT_SESSION_TIMEOUT_MS_DOC);
@@ -502,6 +513,7 @@ public class CamelKafkaSourceConnectorConfig
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_TOPIC_IS_PATTERN_CONF, 
ConfigDef.Type.BOOLEAN, CAMEL_SOURCE_KAFKA_COMPONENT_TOPIC_IS_PATTERN_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_COMPONENT_TOPIC_IS_PATTERN_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_VALUE_DESERIALIZER_CONF, 
ConfigDef.Type.STRING, CAMEL_SOURCE_KAFKA_COMPONENT_VALUE_DESERIALIZER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_KAFKA_COMPONENT_VALUE_DESERIALIZER_DOC);
         
conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_MANUAL_COMMIT_FACTORY_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_MANUAL_COMMIT_FACTORY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_MANUAL_COMMIT_FACTORY_DOC);
+        conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_POLL_EXCEPTION_STRATEGY_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_KAFKA_COMPONENT_POLL_EXCEPTION_STRATEGY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_KAFKA_COMPONENT_POLL_EXCEPTION_STRATEGY_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_AUTOWIRED_ENABLED_CONF, 
ConfigDef.Type.BOOLEAN, CAMEL_SOURCE_KAFKA_COMPONENT_AUTOWIRED_ENABLED_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_KAFKA_COMPONENT_AUTOWIRED_ENABLED_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_KAFKA_COMPONENT_KAFKA_CLIENT_FACTORY_DOC);
         conf.define(CAMEL_SOURCE_KAFKA_COMPONENT_SYNCHRONOUS_CONF, 
ConfigDef.Type.BOOLEAN, CAMEL_SOURCE_KAFKA_COMPONENT_SYNCHRONOUS_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_KAFKA_COMPONENT_SYNCHRONOUS_DOC);
diff --git 
a/connectors/camel-scheduler-kafka-connector/src/generated/resources/camel-scheduler-source.json
 
b/connectors/camel-scheduler-kafka-connector/src/generated/resources/camel-scheduler-source.json
index 4da4b8b..05a9757 100644
--- 
a/connectors/camel-scheduler-kafka-connector/src/generated/resources/camel-scheduler-source.json
+++ 
b/connectors/camel-scheduler-kafka-connector/src/generated/resources/camel-scheduler-source.json
@@ -77,13 +77,6 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
-               "camel.source.endpoint.concurrentTasks": {
-                       "name": "camel.source.endpoint.concurrentTasks",
-                       "description": "Number of threads used by the 
scheduling thread pool. Is by default using a single thread",
-                       "defaultValue": "1",
-                       "priority": "MEDIUM",
-                       "required": "false"
-               },
                "camel.source.endpoint.delay": {
                        "name": "camel.source.endpoint.delay",
                        "description": "Milliseconds before the next poll.",
@@ -105,6 +98,13 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.source.endpoint.poolSize": {
+                       "name": "camel.source.endpoint.poolSize",
+                       "description": "Number of core threads in the thread 
pool used by the scheduling thread pool. Is by default using a single thread",
+                       "defaultValue": "1",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.source.endpoint.repeatCount": {
                        "name": "camel.source.endpoint.repeatCount",
                        "description": "Specifies a maximum limit of number of 
fires. So if you set it to 1, the scheduler will only fire once. If you set it 
to 5, it will only fire five times. A value of zero or negative means fire 
forever.",
@@ -190,9 +190,9 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
-               "camel.component.scheduler.concurrentTasks": {
-                       "name": "camel.component.scheduler.concurrentTasks",
-                       "description": "Number of threads used by the 
scheduling thread pool. Is by default using a single thread",
+               "camel.component.scheduler.poolSize": {
+                       "name": "camel.component.scheduler.poolSize",
+                       "description": "Number of core threads in the thread 
pool used by the scheduling thread pool. Is by default using a single thread",
                        "defaultValue": "1",
                        "priority": "MEDIUM",
                        "required": "false"
diff --git 
a/connectors/camel-scheduler-kafka-connector/src/main/docs/camel-scheduler-kafka-source-connector.adoc
 
b/connectors/camel-scheduler-kafka-connector/src/main/docs/camel-scheduler-kafka-source-connector.adoc
index 1e3ebc2..5dd79c6 100644
--- 
a/connectors/camel-scheduler-kafka-connector/src/main/docs/camel-scheduler-kafka-source-connector.adoc
+++ 
b/connectors/camel-scheduler-kafka-connector/src/main/docs/camel-scheduler-kafka-source-connector.adoc
@@ -41,10 +41,10 @@ The camel-scheduler source connector supports 25 options, 
which are listed below
 | *camel.source.endpoint.backoffErrorThreshold* | The number of subsequent 
error polls (failed due some error) that should happen before the 
backoffMultipler should kick-in. | null | false | MEDIUM
 | *camel.source.endpoint.backoffIdleThreshold* | The number of subsequent idle 
polls that should happen before the backoffMultipler should kick-in. | null | 
false | MEDIUM
 | *camel.source.endpoint.backoffMultiplier* | To let the scheduled polling 
consumer backoff if there has been a number of subsequent idles/errors in a 
row. The multiplier is then the number of polls that will be skipped before the 
next actual attempt is happening again. When this option is in use then 
backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | 
null | false | MEDIUM
-| *camel.source.endpoint.concurrentTasks* | Number of threads used by the 
scheduling thread pool. Is by default using a single thread | 1 | false | MEDIUM
 | *camel.source.endpoint.delay* | Milliseconds before the next poll. | 500L | 
false | MEDIUM
 | *camel.source.endpoint.greedy* | If greedy is enabled, then the 
ScheduledPollConsumer will run immediately again, if the previous run polled 1 
or more messages. | false | false | MEDIUM
 | *camel.source.endpoint.initialDelay* | Milliseconds before the first poll 
starts. | 1000L | false | MEDIUM
+| *camel.source.endpoint.poolSize* | Number of core threads in the thread pool 
used by the scheduling thread pool. Is by default using a single thread | 1 | 
false | MEDIUM
 | *camel.source.endpoint.repeatCount* | Specifies a maximum limit of number of 
fires. So if you set it to 1, the scheduler will only fire once. If you set it 
to 5, it will only fire five times. A value of zero or negative means fire 
forever. | 0L | false | MEDIUM
 | *camel.source.endpoint.runLoggingLevel* | The consumer logs a start/complete 
log line when it polls. This option allows you to configure the logging level 
for that. One of: [TRACE] [DEBUG] [INFO] [WARN] [ERROR] [OFF] | "TRACE" | false 
| MEDIUM
 | *camel.source.endpoint.scheduledExecutorService* | Allows for configuring a 
custom/shared thread pool to use for the consumer. By default each consumer has 
its own single threaded thread pool. | null | false | MEDIUM
@@ -55,7 +55,7 @@ The camel-scheduler source connector supports 25 options, 
which are listed below
 | *camel.source.endpoint.useFixedDelay* | Controls if fixed delay or fixed 
rate is used. See ScheduledExecutorService in JDK for details. | true | false | 
MEDIUM
 | *camel.component.scheduler.bridgeErrorHandler* | Allows for bridging the 
consumer to the Camel routing Error Handler, which mean any exceptions occurred 
while the consumer is trying to pickup incoming messages, or the likes, will 
now be processed as a message and handled by the routing Error Handler. By 
default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal 
with exceptions, that will be logged at WARN or ERROR level and ignored. | 
false | false | MEDIUM
 | *camel.component.scheduler.autowiredEnabled* | Whether autowiring is 
enabled. This is used for automatic autowiring options (the option must be 
marked as autowired) by looking up in the registry to find if there is a single 
instance of matching type, which then gets configured on the component. This 
can be used for automatic configuring JDBC data sources, JMS connection 
factories, AWS Clients, etc. | true | false | MEDIUM
-| *camel.component.scheduler.concurrentTasks* | Number of threads used by the 
scheduling thread pool. Is by default using a single thread | 1 | false | MEDIUM
+| *camel.component.scheduler.poolSize* | Number of core threads in the thread 
pool used by the scheduling thread pool. Is by default using a single thread | 
1 | false | MEDIUM
 |===
 
 
diff --git 
a/connectors/camel-scheduler-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/scheduler/CamelSchedulerSourceConnectorConfig.java
 
b/connectors/camel-scheduler-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/scheduler/CamelSchedulerSourceConnectorConfig.java
index ae529e8..f08d2e2 100644
--- 
a/connectors/camel-scheduler-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/scheduler/CamelSchedulerSourceConnectorConfig.java
+++ 
b/connectors/camel-scheduler-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/scheduler/CamelSchedulerSourceConnectorConfig.java
@@ -56,9 +56,6 @@ public class CamelSchedulerSourceConnectorConfig
     public static final String 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_MULTIPLIER_CONF = 
"camel.source.endpoint.backoffMultiplier";
     public static final String 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_MULTIPLIER_DOC = "To let the scheduled 
polling consumer backoff if there has been a number of subsequent idles/errors 
in a row. The multiplier is then the number of polls that will be skipped 
before the next actual attempt is happening again. When this option is in use 
then backoffIdleThreshold and/or backoffErrorThreshold must also be 
configured.";
     public static final Integer 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_MULTIPLIER_DEFAULT = null;
-    public static final String 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_CONCURRENT_TASKS_CONF = 
"camel.source.endpoint.concurrentTasks";
-    public static final String 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_CONCURRENT_TASKS_DOC = "Number of threads used 
by the scheduling thread pool. Is by default using a single thread";
-    public static final Integer 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_CONCURRENT_TASKS_DEFAULT = 1;
     public static final String CAMEL_SOURCE_SCHEDULER_ENDPOINT_DELAY_CONF = 
"camel.source.endpoint.delay";
     public static final String CAMEL_SOURCE_SCHEDULER_ENDPOINT_DELAY_DOC = 
"Milliseconds before the next poll.";
     public static final Long CAMEL_SOURCE_SCHEDULER_ENDPOINT_DELAY_DEFAULT = 
500L;
@@ -68,6 +65,9 @@ public class CamelSchedulerSourceConnectorConfig
     public static final String 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_INITIAL_DELAY_CONF = 
"camel.source.endpoint.initialDelay";
     public static final String 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_INITIAL_DELAY_DOC = "Milliseconds before the 
first poll starts.";
     public static final Long 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_INITIAL_DELAY_DEFAULT = 1000L;
+    public static final String CAMEL_SOURCE_SCHEDULER_ENDPOINT_POOL_SIZE_CONF 
= "camel.source.endpoint.poolSize";
+    public static final String CAMEL_SOURCE_SCHEDULER_ENDPOINT_POOL_SIZE_DOC = 
"Number of core threads in the thread pool used by the scheduling thread pool. 
Is by default using a single thread";
+    public static final Integer 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_POOL_SIZE_DEFAULT = 1;
     public static final String 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_REPEAT_COUNT_CONF = 
"camel.source.endpoint.repeatCount";
     public static final String 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_REPEAT_COUNT_DOC = "Specifies a maximum limit 
of number of fires. So if you set it to 1, the scheduler will only fire once. 
If you set it to 5, it will only fire five times. A value of zero or negative 
means fire forever.";
     public static final Long 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_REPEAT_COUNT_DEFAULT = 0L;
@@ -98,9 +98,9 @@ public class CamelSchedulerSourceConnectorConfig
     public static final String 
CAMEL_SOURCE_SCHEDULER_COMPONENT_AUTOWIRED_ENABLED_CONF = 
"camel.component.scheduler.autowiredEnabled";
     public static final String 
CAMEL_SOURCE_SCHEDULER_COMPONENT_AUTOWIRED_ENABLED_DOC = "Whether autowiring is 
enabled. This is used for automatic autowiring options (the option must be 
marked as autowired) by looking up in the registry to find if there is a single 
instance of matching type, which then gets configured on the component. This 
can be used for automatic configuring JDBC data sources, JMS connection 
factories, AWS Clients, etc.";
     public static final Boolean 
CAMEL_SOURCE_SCHEDULER_COMPONENT_AUTOWIRED_ENABLED_DEFAULT = true;
-    public static final String 
CAMEL_SOURCE_SCHEDULER_COMPONENT_CONCURRENT_TASKS_CONF = 
"camel.component.scheduler.concurrentTasks";
-    public static final String 
CAMEL_SOURCE_SCHEDULER_COMPONENT_CONCURRENT_TASKS_DOC = "Number of threads used 
by the scheduling thread pool. Is by default using a single thread";
-    public static final Integer 
CAMEL_SOURCE_SCHEDULER_COMPONENT_CONCURRENT_TASKS_DEFAULT = 1;
+    public static final String CAMEL_SOURCE_SCHEDULER_COMPONENT_POOL_SIZE_CONF 
= "camel.component.scheduler.poolSize";
+    public static final String CAMEL_SOURCE_SCHEDULER_COMPONENT_POOL_SIZE_DOC 
= "Number of core threads in the thread pool used by the scheduling thread 
pool. Is by default using a single thread";
+    public static final Integer 
CAMEL_SOURCE_SCHEDULER_COMPONENT_POOL_SIZE_DEFAULT = 1;
 
     public CamelSchedulerSourceConnectorConfig(
             ConfigDef config,
@@ -124,10 +124,10 @@ public class CamelSchedulerSourceConnectorConfig
         
conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_ERROR_THRESHOLD_CONF, 
ConfigDef.Type.INT, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_ERROR_THRESHOLD_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_ERROR_THRESHOLD_DOC);
         
conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_IDLE_THRESHOLD_CONF, 
ConfigDef.Type.INT, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_IDLE_THRESHOLD_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_IDLE_THRESHOLD_DOC);
         conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_MULTIPLIER_CONF, 
ConfigDef.Type.INT, CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_MULTIPLIER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_BACKOFF_MULTIPLIER_DOC);
-        conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_CONCURRENT_TASKS_CONF, 
ConfigDef.Type.INT, CAMEL_SOURCE_SCHEDULER_ENDPOINT_CONCURRENT_TASKS_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_CONCURRENT_TASKS_DOC);
         conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_DELAY_CONF, 
ConfigDef.Type.LONG, CAMEL_SOURCE_SCHEDULER_ENDPOINT_DELAY_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_SCHEDULER_ENDPOINT_DELAY_DOC);
         conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_GREEDY_CONF, 
ConfigDef.Type.BOOLEAN, CAMEL_SOURCE_SCHEDULER_ENDPOINT_GREEDY_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_SCHEDULER_ENDPOINT_GREEDY_DOC);
         conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_INITIAL_DELAY_CONF, 
ConfigDef.Type.LONG, CAMEL_SOURCE_SCHEDULER_ENDPOINT_INITIAL_DELAY_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_SCHEDULER_ENDPOINT_INITIAL_DELAY_DOC);
+        conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_POOL_SIZE_CONF, 
ConfigDef.Type.INT, CAMEL_SOURCE_SCHEDULER_ENDPOINT_POOL_SIZE_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_SCHEDULER_ENDPOINT_POOL_SIZE_DOC);
         conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_REPEAT_COUNT_CONF, 
ConfigDef.Type.LONG, CAMEL_SOURCE_SCHEDULER_ENDPOINT_REPEAT_COUNT_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_SCHEDULER_ENDPOINT_REPEAT_COUNT_DOC);
         conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_RUN_LOGGING_LEVEL_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_RUN_LOGGING_LEVEL_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_RUN_LOGGING_LEVEL_DOC);
         
conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_SCHEDULED_EXECUTOR_SERVICE_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_SCHEDULED_EXECUTOR_SERVICE_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_SCHEDULED_EXECUTOR_SERVICE_DOC);
@@ -138,7 +138,7 @@ public class CamelSchedulerSourceConnectorConfig
         conf.define(CAMEL_SOURCE_SCHEDULER_ENDPOINT_USE_FIXED_DELAY_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_USE_FIXED_DELAY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_SCHEDULER_ENDPOINT_USE_FIXED_DELAY_DOC);
         
conf.define(CAMEL_SOURCE_SCHEDULER_COMPONENT_BRIDGE_ERROR_HANDLER_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SOURCE_SCHEDULER_COMPONENT_BRIDGE_ERROR_HANDLER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_SCHEDULER_COMPONENT_BRIDGE_ERROR_HANDLER_DOC);
         conf.define(CAMEL_SOURCE_SCHEDULER_COMPONENT_AUTOWIRED_ENABLED_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SOURCE_SCHEDULER_COMPONENT_AUTOWIRED_ENABLED_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_SCHEDULER_COMPONENT_AUTOWIRED_ENABLED_DOC);
-        conf.define(CAMEL_SOURCE_SCHEDULER_COMPONENT_CONCURRENT_TASKS_CONF, 
ConfigDef.Type.INT, CAMEL_SOURCE_SCHEDULER_COMPONENT_CONCURRENT_TASKS_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SOURCE_SCHEDULER_COMPONENT_CONCURRENT_TASKS_DOC);
+        conf.define(CAMEL_SOURCE_SCHEDULER_COMPONENT_POOL_SIZE_CONF, 
ConfigDef.Type.INT, CAMEL_SOURCE_SCHEDULER_COMPONENT_POOL_SIZE_DEFAULT, 
ConfigDef.Importance.MEDIUM, CAMEL_SOURCE_SCHEDULER_COMPONENT_POOL_SIZE_DOC);
         return conf;
     }
 }
\ No newline at end of file
diff --git 
a/connectors/camel-spring-rabbitmq-kafka-connector/src/generated/resources/camel-spring-rabbitmq-sink.json
 
b/connectors/camel-spring-rabbitmq-kafka-connector/src/generated/resources/camel-spring-rabbitmq-sink.json
index 1a6a93c..955b8fd 100644
--- 
a/connectors/camel-spring-rabbitmq-kafka-connector/src/generated/resources/camel-spring-rabbitmq-sink.json
+++ 
b/connectors/camel-spring-rabbitmq-kafka-connector/src/generated/resources/camel-spring-rabbitmq-sink.json
@@ -106,6 +106,13 @@
                        "priority": "MEDIUM",
                        "required": "false"
                },
+               "camel.component.spring-rabbitmq.allowNullBody": {
+                       "name": "camel.component.spring-rabbitmq.allowNullBody",
+                       "description": "Whether to allow sending messages with 
no body. If this option is false and the message body is null, then a 
MessageConversionException is thrown.",
+                       "defaultValue": "false",
+                       "priority": "MEDIUM",
+                       "required": "false"
+               },
                "camel.component.spring-rabbitmq.lazyStartProducer": {
                        "name": 
"camel.component.spring-rabbitmq.lazyStartProducer",
                        "description": "Whether the producer should be started 
lazy (on the first message). By starting lazy you can use this to allow 
CamelContext and routes to startup in situations where a producer may otherwise 
fail during starting and cause the route to fail being started. By deferring 
this startup to be lazy then the startup failure can be handled during routing 
messages via Camel's routing error handlers. Beware that when the first message 
is processed then creating and starting the pr [...]
diff --git 
a/connectors/camel-spring-rabbitmq-kafka-connector/src/main/docs/camel-spring-rabbitmq-kafka-sink-connector.adoc
 
b/connectors/camel-spring-rabbitmq-kafka-connector/src/main/docs/camel-spring-rabbitmq-kafka-sink-connector.adoc
index 9880b29..ac714a2 100644
--- 
a/connectors/camel-spring-rabbitmq-kafka-connector/src/main/docs/camel-spring-rabbitmq-kafka-sink-connector.adoc
+++ 
b/connectors/camel-spring-rabbitmq-kafka-connector/src/main/docs/camel-spring-rabbitmq-kafka-sink-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.springrabbitmq.CamelSpringrabbit
 ----
 
 
-The camel-spring-rabbitmq sink connector supports 22 options, which are listed 
below.
+The camel-spring-rabbitmq sink connector supports 23 options, which are listed 
below.
 
 
 
@@ -46,6 +46,7 @@ The camel-spring-rabbitmq sink connector supports 22 options, 
which are listed b
 | *camel.component.spring-rabbitmq.amqpAdmin* | Optional AMQP Admin service to 
use for auto declaring elements (queues, exchanges, bindings) | null | false | 
MEDIUM
 | *camel.component.spring-rabbitmq.connectionFactory* | The connection factory 
to be used. A connection factory must be configured either on the component or 
endpoint. | null | false | MEDIUM
 | *camel.component.spring-rabbitmq.testConnectionOnStartup* | Specifies 
whether to test the connection on startup. This ensures that when Camel starts 
that all the JMS consumers have a valid connection to the JMS broker. If a 
connection cannot be granted then Camel throws an exception on startup. This 
ensures that Camel is not started with failed connections. The JMS producers are 
tested as well. | false | false | MEDIUM
+| *camel.component.spring-rabbitmq.allowNullBody* | Whether to allow sending 
messages with no body. If this option is false and the message body is null, 
then a MessageConversionException is thrown. | false | false | MEDIUM
 | *camel.component.spring-rabbitmq.lazyStartProducer* | Whether the producer 
should be started lazy (on the first message). By starting lazy you can use 
this to allow CamelContext and routes to startup in situations where a producer 
may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled 
during routing messages via Camel's routing error handlers. Beware that when 
the first message is proces [...]
 | *camel.component.spring-rabbitmq.replyTimeout* | Specify the timeout in 
milliseconds to be used when waiting for a reply message when doing 
request/reply messaging. The default value is 5 seconds. A negative value 
indicates an indefinite timeout. | 5000L | false | MEDIUM
 | *camel.component.spring-rabbitmq.autowiredEnabled* | Whether autowiring is 
enabled. This is used for automatic autowiring options (the option must be 
marked as autowired) by looking up in the registry to find if there is a single 
instance of matching type, which then gets configured on the component. This 
can be used for automatic configuring JDBC data sources, JMS connection 
factories, AWS Clients, etc. | true | false | MEDIUM
diff --git 
a/connectors/camel-spring-rabbitmq-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/springrabbitmq/CamelSpringrabbitmqSinkConnectorConfig.java
 
b/connectors/camel-spring-rabbitmq-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/springrabbitmq/CamelSpringrabbitmqSinkConnectorConfig.java
index 619bfee..e25f5be 100644
--- 
a/connectors/camel-spring-rabbitmq-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/springrabbitmq/CamelSpringrabbitmqSinkConnectorConfig.java
+++ 
b/connectors/camel-spring-rabbitmq-kafka-connector/src/main/java/org/apache/camel/kafkaconnector/springrabbitmq/CamelSpringrabbitmqSinkConnectorConfig.java
@@ -71,6 +71,9 @@ public class CamelSpringrabbitmqSinkConnectorConfig
     public static final String 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_TEST_CONNECTION_ON_STARTUP_CONF = 
"camel.component.spring-rabbitmq.testConnectionOnStartup";
     public static final String 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_TEST_CONNECTION_ON_STARTUP_DOC = "Specifies 
whether to test the connection on startup. This ensures that when Camel starts 
that all the JMS consumers have a valid connection to the JMS broker. If a 
connection cannot be granted then Camel throws an exception on startup. This 
ensures that Camel is not started with failed connections. The JMS producers are 
tested as well.";
     public static final Boolean 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_TEST_CONNECTION_ON_STARTUP_DEFAULT = false;
+    public static final String 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_ALLOW_NULL_BODY_CONF = 
"camel.component.spring-rabbitmq.allowNullBody";
+    public static final String 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_ALLOW_NULL_BODY_DOC = "Whether to allow 
sending messages with no body. If this option is false and the message body is 
null, then an MessageConversionException is thrown.";
+    public static final Boolean 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_ALLOW_NULL_BODY_DEFAULT = false;
     public static final String 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_LAZY_START_PRODUCER_CONF = 
"camel.component.spring-rabbitmq.lazyStartProducer";
     public static final String 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_LAZY_START_PRODUCER_DOC = "Whether the 
producer should be started lazy (on the first message). By starting lazy you 
can use this to allow CamelContext and routes to startup in situations where a 
producer may otherwise fail during starting and cause the route to fail being 
started. By deferring this startup to be lazy then the startup failure can be 
handled during routing messages via Camel's routing error handlers. Beware [...]
     public static final Boolean 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_LAZY_START_PRODUCER_DEFAULT = false;
@@ -121,6 +124,7 @@ public class CamelSpringrabbitmqSinkConnectorConfig
         conf.define(CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_AMQP_ADMIN_CONF, 
ConfigDef.Type.STRING, CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_AMQP_ADMIN_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_AMQP_ADMIN_DOC);
         
conf.define(CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_CONNECTION_FACTORY_CONF, 
ConfigDef.Type.STRING, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_CONNECTION_FACTORY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_CONNECTION_FACTORY_DOC);
         
conf.define(CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_TEST_CONNECTION_ON_STARTUP_CONF,
 ConfigDef.Type.BOOLEAN, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_TEST_CONNECTION_ON_STARTUP_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_TEST_CONNECTION_ON_STARTUP_DOC);
+        conf.define(CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_ALLOW_NULL_BODY_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_ALLOW_NULL_BODY_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_ALLOW_NULL_BODY_DOC);
         
conf.define(CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_LAZY_START_PRODUCER_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_LAZY_START_PRODUCER_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_LAZY_START_PRODUCER_DOC);
         conf.define(CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_REPLY_TIMEOUT_CONF, 
ConfigDef.Type.LONG, CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_REPLY_TIMEOUT_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_REPLY_TIMEOUT_DOC);
         
conf.define(CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_AUTOWIRED_ENABLED_CONF, 
ConfigDef.Type.BOOLEAN, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_AUTOWIRED_ENABLED_DEFAULT, 
ConfigDef.Importance.MEDIUM, 
CAMEL_SINK_SPRINGRABBITMQ_COMPONENT_AUTOWIRED_ENABLED_DOC);
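
As a side note, a ConfigDef built with the same define(...) pattern as above can also be used to validate user-supplied properties before deployment. A minimal sketch, assuming only the allowNullBody key added in this diff; the class and the sample value are illustrative.

----
import java.util.List;
import java.util.Map;

import org.apache.kafka.common.config.ConfigDef;
import org.apache.kafka.common.config.ConfigValue;

// Sketch: validate properties against a ConfigDef mirroring the generated definition.
public final class ConfigValidationSketch {

    public static void main(String[] args) {
        ConfigDef def = new ConfigDef().define(
                "camel.component.spring-rabbitmq.allowNullBody",
                ConfigDef.Type.BOOLEAN, false,
                ConfigDef.Importance.MEDIUM,
                "Whether to allow sending messages with no body.");

        // A non-boolean value is reported as an error message rather than thrown here.
        List<ConfigValue> results = def.validate(Map.of(
                "camel.component.spring-rabbitmq.allowNullBody", "not-a-boolean"));
        results.forEach(v -> System.out.println(v.name() + " -> " + v.errorMessages()));
    }
}
----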
diff --git 
a/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-source-connector.adoc 
b/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-source-connector.adoc
index 03f8c86..5686b4b 100644
--- a/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-source-connector.adoc
+++ b/docs/modules/ROOT/pages/connectors/camel-kafka-kafka-source-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.kafka.CamelKafkaSourceConnector
 ----
 
 
-The camel-kafka source connector supports 122 options, which are listed below.
+The camel-kafka source connector supports 125 options, which are listed below.
 
 
 
@@ -61,6 +61,7 @@ The camel-kafka source connector supports 122 options, which 
are listed below.
 | *camel.source.endpoint.maxPollRecords* | The maximum number of records 
returned in a single call to poll() | "500" | false | MEDIUM
 | *camel.source.endpoint.offsetRepository* | The offset repository to use in 
order to locally store the offset of each partition of the topic. Defining one 
will disable the autocommit. | null | false | MEDIUM
 | *camel.source.endpoint.partitionAssignor* | The class name of the partition 
assignment strategy that the client will use to distribute partition ownership 
amongst consumer instances when group management is used | 
"org.apache.kafka.clients.consumer.RangeAssignor" | false | MEDIUM
+| *camel.source.endpoint.pollOnError* | What to do if Kafka threw an exception 
while polling for new messages. Will by default use the value from the 
component configuration unless an explicit value has been configured on the 
endpoint level. DISCARD will discard the message and continue to poll next 
message. ERROR_HANDLER will use Camel's error handler to process the exception, 
and afterwards continue to poll next message. RECONNECT will re-connect the 
consumer and try poll the message a [...]
 | *camel.source.endpoint.pollTimeoutMs* | The timeout used when polling the 
KafkaConsumer. | "5000" | false | MEDIUM
 | *camel.source.endpoint.seekTo* | Set if KafkaConsumer will read from 
beginning or end on startup: beginning : read from beginning end : read from 
end This is replacing the earlier property seekToBeginning One of: [beginning] 
[end] | null | false | MEDIUM
 | *camel.source.endpoint.sessionTimeoutMs* | The timeout used to detect 
failures when using Kafka's group management facilities. | "10000" | false | 
MEDIUM
@@ -121,6 +122,7 @@ The camel-kafka source connector supports 122 options, 
which are listed below.
 | *camel.component.kafka.maxPollRecords* | The maximum number of records 
returned in a single call to poll() | "500" | false | MEDIUM
 | *camel.component.kafka.offsetRepository* | The offset repository to use in 
order to locally store the offset of each partition of the topic. Defining one 
will disable the autocommit. | null | false | MEDIUM
 | *camel.component.kafka.partitionAssignor* | The class name of the partition 
assignment strategy that the client will use to distribute partition ownership 
amongst consumer instances when group management is used | 
"org.apache.kafka.clients.consumer.RangeAssignor" | false | MEDIUM
+| *camel.component.kafka.pollOnError* | What to do if Kafka threw an exception 
while polling for new messages. Will by default use the value from the 
component configuration unless an explicit value has been configured on the 
endpoint level. DISCARD will discard the message and continue to poll next 
message. ERROR_HANDLER will use Camel's error handler to process the exception, 
and afterwards continue to poll next message. RECONNECT will re-connect the 
consumer and try poll the message a [...]
 | *camel.component.kafka.pollTimeoutMs* | The timeout used when polling the 
KafkaConsumer. | "5000" | false | MEDIUM
 | *camel.component.kafka.seekTo* | Set if KafkaConsumer will read from 
beginning or end on startup: beginning : read from beginning end : read from 
end This is replacing the earlier property seekToBeginning One of: [beginning] 
[end] | null | false | MEDIUM
 | *camel.component.kafka.sessionTimeoutMs* | The timeout used to detect 
failures when using Kafka's group management facilities. | "10000" | false | 
MEDIUM
@@ -128,6 +130,7 @@ The camel-kafka source connector supports 122 options, 
which are listed below.
 | *camel.component.kafka.topicIsPattern* | Whether the topic is a pattern 
(regular expression). This can be used to subscribe to dynamic number of topics 
matching the pattern. | false | false | MEDIUM
 | *camel.component.kafka.valueDeserializer* | Deserializer class for value 
that implements the Deserializer interface. | 
"org.apache.kafka.common.serialization.StringDeserializer" | false | MEDIUM
 | *camel.component.kafka.kafkaManualCommitFactory* | Factory to use for 
creating KafkaManualCommit instances. This allows to plugin a custom factory to 
create custom KafkaManualCommit instances in case special logic is needed when 
doing manual commits that deviates from the default implementation that comes 
out of the box. | null | false | MEDIUM
+| *camel.component.kafka.pollExceptionStrategy* | To use a custom strategy 
with the consumer to control how to handle exceptions thrown from the Kafka 
broker while polling messages. | null | false | MEDIUM
 | *camel.component.kafka.autowiredEnabled* | Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as 
autowired) by looking up in the registry to find if there is a single instance 
of matching type, which then gets configured on the component. This can be used 
for automatic configuring JDBC data sources, JMS connection factories, AWS 
Clients, etc. | true | false | MEDIUM
 | *camel.component.kafka.kafkaClientFactory* | Factory to use for creating 
org.apache.kafka.clients.consumer.KafkaConsumer and 
org.apache.kafka.clients.producer.KafkaProducer instances. This allows to 
configure a custom factory to create instances with logic that extends the 
vanilla Kafka clients. | null | false | MEDIUM
 | *camel.component.kafka.synchronous* | Sets whether synchronous processing 
should be strictly used | false | false | MEDIUM
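
For illustration only: the new pollOnError option can be set at component level and overridden per endpoint, as its description above states. A short sketch in Java, assuming the connector class shown in this adoc; the topic key and value are placeholders.

----
import java.util.HashMap;
import java.util.Map;

// Sketch of the new pollOnError knobs for the camel-kafka source connector;
// option keys come from the table above, topic value is a placeholder.
public final class KafkaSourcePollOnErrorSketch {

    public static Map<String, String> sourceProps() {
        Map<String, String> props = new HashMap<>();
        props.put("connector.class",
                "org.apache.camel.kafkaconnector.kafka.CamelKafkaSourceConnector");
        props.put("camel.source.path.topic", "inbound");           // assumed path key, placeholder topic
        // Component-level default for every endpoint created by this connector...
        props.put("camel.component.kafka.pollOnError", "ERROR_HANDLER");
        // ...which an explicit endpoint-level value overrides, per the option docs.
        props.put("camel.source.endpoint.pollOnError", "DISCARD");
        return props;
    }
}
----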
diff --git 
a/docs/modules/ROOT/pages/connectors/camel-scheduler-kafka-source-connector.adoc
 
b/docs/modules/ROOT/pages/connectors/camel-scheduler-kafka-source-connector.adoc
index 1e3ebc2..5dd79c6 100644
--- 
a/docs/modules/ROOT/pages/connectors/camel-scheduler-kafka-source-connector.adoc
+++ 
b/docs/modules/ROOT/pages/connectors/camel-scheduler-kafka-source-connector.adoc
@@ -41,10 +41,10 @@ The camel-scheduler source connector supports 25 options, 
which are listed below
 | *camel.source.endpoint.backoffErrorThreshold* | The number of subsequent 
error polls (failed due to some error) that should happen before the 
backoffMultiplier should kick in. | null | false | MEDIUM
 | *camel.source.endpoint.backoffIdleThreshold* | The number of subsequent idle 
polls that should happen before the backoffMultiplier should kick in. | null | 
false | MEDIUM
 | *camel.source.endpoint.backoffMultiplier* | To let the scheduled polling 
consumer backoff if there has been a number of subsequent idles/errors in a 
row. The multiplier is then the number of polls that will be skipped before the 
next actual attempt is happening again. When this option is in use then 
backoffIdleThreshold and/or backoffErrorThreshold must also be configured. | 
null | false | MEDIUM
-| *camel.source.endpoint.concurrentTasks* | Number of threads used by the 
scheduling thread pool. Is by default using a single thread | 1 | false | MEDIUM
 | *camel.source.endpoint.delay* | Milliseconds before the next poll. | 500L | 
false | MEDIUM
 | *camel.source.endpoint.greedy* | If greedy is enabled, then the 
ScheduledPollConsumer will run immediately again, if the previous run polled 1 
or more messages. | false | false | MEDIUM
 | *camel.source.endpoint.initialDelay* | Milliseconds before the first poll 
starts. | 1000L | false | MEDIUM
+| *camel.source.endpoint.poolSize* | Number of core threads in the scheduling 
thread pool. A single thread is used by default. | 1 | 
false | MEDIUM
 | *camel.source.endpoint.repeatCount* | Specifies a maximum limit of number of 
fires. So if you set it to 1, the scheduler will only fire once. If you set it 
to 5, it will only fire five times. A value of zero or negative means fire 
forever. | 0L | false | MEDIUM
 | *camel.source.endpoint.runLoggingLevel* | The consumer logs a start/complete 
log line when it polls. This option allows you to configure the logging level 
for that. One of: [TRACE] [DEBUG] [INFO] [WARN] [ERROR] [OFF] | "TRACE" | false 
| MEDIUM
 | *camel.source.endpoint.scheduledExecutorService* | Allows for configuring a 
custom/shared thread pool to use for the consumer. By default each consumer has 
its own single threaded thread pool. | null | false | MEDIUM
@@ -55,7 +55,7 @@ The camel-scheduler source connector supports 25 options, 
which are listed below
 | *camel.source.endpoint.useFixedDelay* | Controls if fixed delay or fixed 
rate is used. See ScheduledExecutorService in JDK for details. | true | false | 
MEDIUM
 | *camel.component.scheduler.bridgeErrorHandler* | Allows for bridging the 
consumer to the Camel routing Error Handler, which means any exceptions that occurred 
while the consumer is trying to pickup incoming messages, or the likes, will 
now be processed as a message and handled by the routing Error Handler. By 
default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal 
with exceptions, that will be logged at WARN or ERROR level and ignored. | 
false | false | MEDIUM
 | *camel.component.scheduler.autowiredEnabled* | Whether autowiring is 
enabled. This is used for automatic autowiring options (the option must be 
marked as autowired) by looking up in the registry to find if there is a single 
instance of matching type, which then gets configured on the component. This 
can be used for automatic configuring JDBC data sources, JMS connection 
factories, AWS Clients, etc. | true | false | MEDIUM
-| *camel.component.scheduler.concurrentTasks* | Number of threads used by the 
scheduling thread pool. Is by default using a single thread | 1 | false | MEDIUM
+| *camel.component.scheduler.poolSize* | Number of core threads in the scheduling 
thread pool. A single thread is used by default. | 
1 | false | MEDIUM
 |===
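
For illustration only: existing scheduler source configurations that set the old concurrentTasks key need to switch to poolSize after this change. A minimal sketch in Java; the connector class name follows the project's naming pattern and the topic, name and delay values are placeholders, not taken from the commit.

----
import java.util.HashMap;
import java.util.Map;

// Sketch of a scheduler source connector config after the rename;
// camel.*.scheduler.concurrentTasks is gone, poolSize replaces it.
public final class SchedulerSourcePropsSketch {

    public static Map<String, String> sourceProps() {
        Map<String, String> props = new HashMap<>();
        props.put("connector.class",
                "org.apache.camel.kafkaconnector.scheduler.CamelSchedulerSourceConnector"); // assumed name
        props.put("topics", "scheduler-ticks");                   // placeholder target topic
        props.put("camel.source.path.name", "tick");              // assumed scheduler name path option
        props.put("camel.source.endpoint.delay", "500");          // milliseconds between polls
        props.put("camel.source.endpoint.poolSize", "2");         // was concurrentTasks before this change
        return props;
    }
}
----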
 
 
diff --git 
a/docs/modules/ROOT/pages/connectors/camel-spring-rabbitmq-kafka-sink-connector.adoc
 
b/docs/modules/ROOT/pages/connectors/camel-spring-rabbitmq-kafka-sink-connector.adoc
index 9880b29..ac714a2 100644
--- 
a/docs/modules/ROOT/pages/connectors/camel-spring-rabbitmq-kafka-sink-connector.adoc
+++ 
b/docs/modules/ROOT/pages/connectors/camel-spring-rabbitmq-kafka-sink-connector.adoc
@@ -24,7 +24,7 @@ 
connector.class=org.apache.camel.kafkaconnector.springrabbitmq.CamelSpringrabbit
 ----
 
 
-The camel-spring-rabbitmq sink connector supports 22 options, which are listed 
below.
+The camel-spring-rabbitmq sink connector supports 23 options, which are listed 
below.
 
 
 
@@ -46,6 +46,7 @@ The camel-spring-rabbitmq sink connector supports 22 options, 
which are listed b
 | *camel.component.spring-rabbitmq.amqpAdmin* | Optional AMQP Admin service to 
use for auto declaring elements (queues, exchanges, bindings) | null | false | 
MEDIUM
 | *camel.component.spring-rabbitmq.connectionFactory* | The connection factory 
to be used. A connection factory must be configured either on the component or 
endpoint. | null | false | MEDIUM
 | *camel.component.spring-rabbitmq.testConnectionOnStartup* | Specifies 
whether to test the connection on startup. This ensures that when Camel starts 
that all the JMS consumers have a valid connection to the JMS broker. If a 
connection cannot be granted then Camel throws an exception on startup. This 
ensures that Camel is not started with failed connections. The JMS producers are 
tested as well. | false | false | MEDIUM
+| *camel.component.spring-rabbitmq.allowNullBody* | Whether to allow sending 
messages with no body. If this option is false and the message body is null, 
then a MessageConversionException is thrown. | false | false | MEDIUM
 | *camel.component.spring-rabbitmq.lazyStartProducer* | Whether the producer 
should be started lazy (on the first message). By starting lazy you can use 
this to allow CamelContext and routes to startup in situations where a producer 
may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled 
during routing messages via Camel's routing error handlers. Beware that when 
the first message is proces [...]
 | *camel.component.spring-rabbitmq.replyTimeout* | Specify the timeout in 
milliseconds to be used when waiting for a reply message when doing 
request/reply messaging. The default value is 5 seconds. A negative value 
indicates an indefinite timeout. | 5000L | false | MEDIUM
 | *camel.component.spring-rabbitmq.autowiredEnabled* | Whether autowiring is 
enabled. This is used for automatic autowiring options (the option must be 
marked as autowired) by looking up in the registry to find if there is a single 
instance of matching type, which then gets configured on the component. This 
can be used for automatic configuring JDBC data sources, JMS connection 
factories, AWS Clients, etc. | true | false | MEDIUM
