[ https://issues.apache.org/jira/browse/CAMEL-8211?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15184893#comment-15184893 ]
Luca Burgazzoli commented on CAMEL-8211:
----------------------------------------
Now with verbose mode:
{code}
HDFS :: For reading/writing from/to an HDFS filesystem using Hadoop 1.x.
------------------------------------------------------------------------
label: hadoop,file
maven: org.apache.camel/camel-hdfs/2.17-SNAPSHOT
componentProperties
Property           Description
--------           -----------
jAASConfiguration  To use the given configuration for security with JAAS.
properties
Property                  Group                Default Value  Description
--------                  -----                -------------  -----------
hostName                  common                              HDFS host to use
port                      common               8020           HDFS port to use
path                      common                              The directory path to use
blockSize                 common               67108864       The size of the HDFS blocks
bufferSize                common               4096           The buffer size used by HDFS
checkIdleInterval         common               500            How often (time in millis) in to run the idle checker background task. This option is only in use if the splitter strategy is IDLE.
chunkSize                 common               4096           When reading a normal file this is split into chunks producing a message per chunk.
compressionCodec          common               DEFAULT        The compression codec to use
compressionType           common               NONE           The compression type to use (is default not in use)
connectOnStartup          common               true           Whether to connect to the HDFS file system on starting the producer/consumer. If false then the connection is created on-demand. Notice that HDFS may take up till 15 minutes to establish a connection as it has hardcoded 45 x 20 sec redelivery. By setting this option to false allows your application to startup and not block for up till 15 minutes.
fileSystemType            common               HDFS           Set to LOCAL to not use HDFS but local java.io.File instead.
fileType                  common               NORMAL_FILE    The file type to use. For more details see Hadoop HDFS documentation about the various files types.
keyType                   common               NULL           The type for the key in case of sequence or map files.
openedSuffix              common               opened         When a file is opened for reading/writing the file is renamed with this suffix to avoid to read it during the writing phase.
owner                     common                              The file owner must match this owner for the consumer to pickup the file. Otherwise the file is skipped.
readSuffix                common               read           Once the file has been read is renamed with this suffix to avoid to read it again.
replication               common               3              The HDFS replication factor
splitStrategy             common                              In the current version of Hadoop opening a file in append mode is disabled since it's not very reliable. So for the moment it's only possible to create new files. The Camel HDFS endpoint tries to solve this problem in this way: If the split strategy option has been defined the hdfs path will be used as a directory and files will be created using the configured UuidGenerator. Every time a splitting condition is met a new file is created. The splitStrategy option is defined as a string with the following syntax: splitStrategy=ST:valueST:value... where ST can be: BYTES a new file is created and the old is closed when the number of written bytes is more than value MESSAGES a new file is created and the old is closed when the number of written messages is more than value IDLE a new file is created and the old is closed when no writing happened in the last value milliseconds
valueType                 common               BYTES          The type for the key in case of sequence or map files
bridgeErrorHandler        consumer             false          Allows for bridging the consumer to the Camel routing Error Handler which mean any exceptions occurred while the consumer is trying to pickup incoming messages or the likes will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN/ERROR level and ignored.
delay                     consumer             1000           The interval (milliseconds) between the directory scans.
initialDelay              consumer                            For the consumer how much to wait (milliseconds) before to start scanning the directory.
pattern                   consumer             *              The pattern used for scanning the directory
sendEmptyMessageWhenIdle  consumer             false          If the polling consumer did not poll any files you can enable this option to send an empty message (no body) instead.
exceptionHandler          consumer (advanced)                 To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions that will be logged at WARN/ERROR level and ignored.
pollStrategy              consumer (advanced)                 A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.
append                    producer             false          Append to existing file. Notice that not all HDFS file systems support the append option.
overwrite                 producer             true           Whether to overwrite existing files with the same name
exchangePattern           advanced             InOnly         Sets the default exchange pattern when creating an exchange
synchronous               advanced             false          Sets whether synchronous processing should be strictly used or Camel is allowed to use asynchronous processing (if supported).
backoffErrorThreshold     scheduler                           The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.
backoffIdleThreshold      scheduler                           The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.
backoffMultiplier         scheduler                           To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.
greedy                    scheduler            false          If greedy is enabled then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages.
runLoggingLevel           scheduler            TRACE          The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.
scheduledExecutorService  scheduler                           Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.
scheduler                 scheduler            none           To use a cron scheduler from either camel-spring or camel-quartz2 component
schedulerProperties       scheduler                           To configure additional properties when using a custom scheduler or any of the Quartz2 Spring based scheduler.
startScheduler            scheduler            true           Whether the scheduler should be auto started.
timeUnit                  scheduler            MILLISECONDS   Time unit for initialDelay and delay options.
useFixedDelay             scheduler            true           Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.
{code}
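For illustration, here is a minimal sketch (not part of the command output) of how a couple of the options above could be combined on an hdfs endpoint in a route. The namenode host, path and threshold values are placeholders, and the splitStrategy entries are written comma-separated (ST:value,ST:value) as described in the Camel HDFS documentation:
{code:java}
import org.apache.camel.builder.RouteBuilder;

// Hypothetical route: namenode host, path and threshold values are placeholders.
public class HdfsSplitRouteSketch extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        // Producer side: roll to a new file after ~128 MB written (BYTES)
        // or after 10 seconds without any write (IDLE).
        from("direct:toHdfs")
            .to("hdfs://namenode:8020/camel/output"
                + "?splitStrategy=BYTES:134217728,IDLE:10000"
                + "&connectOnStartup=false"); // don't block startup waiting for HDFS
    }
}
{code}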
With verbose mode and a filter for label=consumer:
{code}
HDFS :: For reading/writing from/to an HDFS filesystem using Hadoop 1.x.
------------------------------------------------------------------------
label: hadoop,file
maven: org.apache.camel/camel-hdfs/2.17-SNAPSHOT
properties
Property                  Group                Default Value  Description
--------                  -----                -------------  -----------
bridgeErrorHandler        consumer             false          Allows for bridging the consumer to the Camel routing Error Handler which mean any exceptions occurred while the consumer is trying to pickup incoming messages or the likes will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions that will be logged at WARN/ERROR level and ignored.
delay                     consumer             1000           The interval (milliseconds) between the directory scans.
initialDelay              consumer                            For the consumer how much to wait (milliseconds) before to start scanning the directory.
pattern                   consumer             *              The pattern used for scanning the directory
sendEmptyMessageWhenIdle  consumer             false          If the polling consumer did not poll any files you can enable this option to send an empty message (no body) instead.
exceptionHandler          consumer (advanced)                 To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this options is not in use. By default the consumer will deal with exceptions that will be logged at WARN/ERROR level and ignored.
pollStrategy              consumer (advanced)                 A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel.
backoffErrorThreshold     scheduler                           The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in.
backoffIdleThreshold      scheduler                           The number of subsequent idle polls that should happen before the backoffMultipler should kick-in.
backoffMultiplier         scheduler                           To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the next actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured.
greedy                    scheduler            false          If greedy is enabled then the ScheduledPollConsumer will run immediately again if the previous run polled 1 or more messages.
runLoggingLevel           scheduler            TRACE          The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that.
scheduledExecutorService  scheduler                           Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool.
scheduler                 scheduler            none           To use a cron scheduler from either camel-spring or camel-quartz2 component
schedulerProperties       scheduler                           To configure additional properties when using a custom scheduler or any of the Quartz2 Spring based scheduler.
startScheduler            scheduler            true           Whether the scheduler should be auto started.
timeUnit                  scheduler            MILLISECONDS   Time unit for initialDelay and delay options.
useFixedDelay             scheduler            true           Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details.
{code}
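The consumer and scheduler options in the filtered listing are the standard scheduled-poll options, so they combine directly on the endpoint URI. A rough sketch, again with placeholder host/path and made-up values:
{code:java}
import org.apache.camel.builder.RouteBuilder;

// Hypothetical consumer route; namenode host and path are placeholders.
public class HdfsConsumerRouteSketch extends RouteBuilder {
    @Override
    public void configure() throws Exception {
        from("hdfs://namenode:8020/camel/input"
                + "?delay=2000"                    // scan the directory every 2 seconds
                + "&sendEmptyMessageWhenIdle=true" // emit an empty exchange when nothing was polled
                + "&backoffIdleThreshold=5"        // after 5 idle polls in a row...
                + "&backoffMultiplier=4"           // ...only every 4th scheduled run actually polls
                + "&runLoggingLevel=INFO")         // log each poll at INFO instead of TRACE
            .to("log:hdfs-consumer");
    }
}
{code}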
> Camel commands - camel-component-info
> -------------------------------------
>
> Key: CAMEL-8211
> URL: https://issues.apache.org/jira/browse/CAMEL-8211
> Project: Camel
> Issue Type: New Feature
> Components: tooling
> Affects Versions: 2.15.0
> Reporter: Claus Ibsen
> Assignee: Luca Burgazzoli
> Priority: Minor
> Fix For: Future
>
>
> A new camel-catalog-component-info command to display detailed information
> about the component.
> We should show:
> - component description
> - label(s)
> - maven coordinate
> - list of all its options and description for those
> This allows users to use these commands in tooling to read the component
> documentation.
> In the future we may slurp in any readme.md files we have in the components
> so we can do all component documentation in the source code and not use the
> confluence wiki which gets out of sync etc.
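As a rough sketch of how such a command can pull this documentation programmatically, assuming the camel-catalog API (CamelCatalog / DefaultCamelCatalog and its componentJSonSchema method):
{code:java}
import org.apache.camel.catalog.CamelCatalog;
import org.apache.camel.catalog.DefaultCamelCatalog;

// Sketch only: prints the JSON schema holding the component description,
// labels, maven coordinates and all option metadata for camel-hdfs.
public class ComponentInfoSketch {
    public static void main(String[] args) {
        CamelCatalog catalog = new DefaultCamelCatalog();
        String json = catalog.componentJSonSchema("hdfs");
        System.out.println(json);
    }
}
{code}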
