Re: Unable to reach Nabble site

2021-08-31 Thread Andrea Cosentino
It's not working anymore. Do not consider it as a source.

On Tue, Aug 31, 2021, 17:09 Wilken Marci J wrote:

> Morning everyone,
> Is something up with the nabble site or has it been moved?
> camel.465427.n5.nabble.com gets me "This site can't be reached"
>
> Regards-
> Marci Wilken
>


Re: How to get camel-google-pubsub working on GKE, pubsub with workload identity (instead of service account keys)?

2021-08-25 Thread Andrea Cosentino
No, it will be reviewed on Github.

Thanks

On Wed, Aug 25, 2021 at 16:00 Tamás Utasi wrote:

> Hi,
>
> I raised a PR: https://github.com/apache/camel/pull/5987.
>
> What should I expect next? Should I write to the dev mailing list?
>
> Tamás
>
> On Tue, 24 Aug 2021 at 00:00, Andrea Cosentino
>  wrote:
>
> > Hello,
> > You're welcome to open a Jira and work on a PR. We need to review more
> > google cloud components for sure in relation to this.
> > Thanks for reaching out to the community
> >
> > Sent from Yahoo Mail on Android
> >
> >   On Mon, Aug 23, 2021 at 21:45, Tamás Utasi wrote:   I'm trying to get a simple piece of code working using:
> > - GKE (https://cloud.google.com/kubernetes-engine),
> > - google pubsub,
> > - workload identity (
> > https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity
> ),
> > - camel-google-pubsub and camel-google-pubsub-starter v 3.11.0
> >
> > My app comes up OK, but when it tries to connect to my subscription it
> > fails with: "io.grpc.StatusRuntimeException: PERMISSION_DENIED: The
> request
> > is missing a valid API key."
> >
> > This is reasonable as I'm not providing the "serviceAccountKey" query
> > parameter as I want to use workload identity (which I configured all the
> > way through) because, as of today, that is the recommended way to access
> > Google Cloud services from GKE instead of mounted service account keys.
> >
> > However, inspecting the code
> >
> >
> https://github.com/apache/camel/blob/camel-3.11.1/components/camel-google/camel-google-pubsub/src/main/java/org/apache/camel/component/google/pubsub/GooglePubsubComponent.java
> > tells me that this is impossible at the moment.
> >
> > I'm happy to create a JIRA and attempt to open a PR to add support for
> > workload identity if someone can confirm that this is desired.
> >
> > Best Regards,
> > Tamas
> >
> >
>
> --
> best regards,
> *Tamás Utasi*
>
> mail: tamas.ut...@gmail.com
>


Re: Error trying to get QUARKUS with Camel to read messages off of a AWS SQS queue

2021-08-24 Thread Andrea Cosentino
Does the secret key contain special characters? If so, you should use
RAW() around the parameter value.
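
A minimal sketch of the consumer endpoint with a wrapped secret (the queue ARN is the one from the report; the key values are placeholders):

from("aws2-sqs://arn:aws:sqs:us-east-2:550472003149:JSONTestQ.fifo"
        + "?accessKey=AKIAYAKVQFZGTCDOR3OP"
        // RAW() stops Camel from URI-decoding characters like + or / in the value
        + "&secretKey=RAW(my+secret/value)")
    .log("We have a message! ${body}");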

On Tue, Aug 24, 2021, 15:46 Matthee, Elmar [elm...@sun.ac.za] <
elm...@sun.ac.za> wrote:

> I am trying to write a small quarkus-camel-sql client to read messages off
> of an existing queue. I have "raw" java code that it works fine with, but
> when I try to plug it into a camel route I get the following error and I'm
> not sure how/where to tweak.
>
> WARN [org.apa.cam.com.aws.sqs.Sqs2Consumer] (Camel (camel-1) thread #0 -
> aws2-sqs://arn:aws:sqs:us-east-2:550472003149:JSONTestQ.fifo) Consumer
> SqsConsumer[aws2-sqs://arn:aws:sqs:us-east-2:550472003149:JSONTestQ.fifo?accessKey=AKIAYAKVQFZGTCDOR3OP&secretKey=xx]
> failed polling endpoint:
> aws2-sqs://arn:aws:sqs:us-east-2:550472003149:JSONTestQ.fifo?accessKey=AKIAYAKVQFZGTCDOR3OP&secretKey=xx.
> Will try again at next poll. Caused by:
> [software.amazon.awssdk.services.sqs.model.SqsException - The request
> signature we calculated does not match the signature you provided. Check
> your AWS Secret Access Key and signing method. Consult the service
> documentation for details.
>
>
>
>
>
> My camel route looks as follows:
>
>
> from("aws2-sqs://arn:aws:sqs:us-east-2:550472003149:JSONTestQ.fifo?accessKey=AKIAYAKVQFZGTCDOR3OP=secretkey")
> .log("We have a message! ${body}")
>
> .to("file://target/output?fileName=tester-message-${date:now:MMDDyy-HHmmss}.json");<
> https://github.com/localstack/localstack/issues/url>
>
> I'm VERY green with quarkus and camel (and SQS for that matter) and the
> documentation is sparse and not exactly illuminating.
>
> Any help would be greatly appreciated.
>
> Regards
>
> Elmar
>
> Elmar Matthee
> INFORMATION TECHNOLOGY (Institutional Software Solutions)
> University of Stellenbosch
> South Africa
> Tel: +27 21 808 3580 | map <
> http://www.google.com/maps/ms?ie=UTF8=en=0=103842393525197829607.01120a4a06b965308=-33.929545,18.865285=0.01613,0.029182=h=15=0
> >
> Cel: +27 82 829 8417
> Fax: +27 21 808 4102
> --
> Meddle not in the affairs of dragons puny mortal
> for thou art crunchy and taste good with ketchup.
>
>
>


R: How to get camel-google-pubsub working on GKE, pubsub with workload identity (instead of service account keys)?

2021-08-23 Thread Andrea Cosentino
Hello,
You're welcome to open a Jira and work on a PR. We need to review more google 
cloud components for sure in relation to this.
Thanks for reaching out to the community 

Sent from Yahoo Mail on Android

  On Mon, Aug 23, 2021 at 21:45, Tamás Utasi wrote:   I'm trying to get a simple piece of code working using:
- GKE (https://cloud.google.com/kubernetes-engine),
- google pubsub,
- workload identity (
https://cloud.google.com/kubernetes-engine/docs/how-to/workload-identity),
- camel-google-pubsub and camel-google-pubsub-starter v 3.11.0

My app comes up OK, but when it tries to connect to my subscription it
fails with: "io.grpc.StatusRuntimeException: PERMISSION_DENIED: The request
is missing a valid API key."

This is reasonable as I'm not providing the "serviceAccountKey" query
parameter as I want to use workload identity (which I configured all the
way through) because, as of today, that is the recommended way to access
Google Cloud services from GKE instead of mounted service account keys.
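
As a sanity check that workload identity really surfaces as Application Default
Credentials inside the pod, a minimal sketch using the google-auth-library-oauth2-http
API (not Camel code, class and method names as in that library):

import com.google.auth.oauth2.GoogleCredentials;

public class AdcCheck {
    public static void main(String[] args) throws Exception {
        // On GKE with workload identity this resolves the bound service account,
        // no mounted key file involved.
        GoogleCredentials credentials = GoogleCredentials.getApplicationDefault();
        credentials.refreshIfExpired();
        System.out.println("ADC resolved: " + credentials.getClass().getSimpleName());
    }
}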

However, inspecting the code
https://github.com/apache/camel/blob/camel-3.11.1/components/camel-google/camel-google-pubsub/src/main/java/org/apache/camel/component/google/pubsub/GooglePubsubComponent.java
tells me that this is impossible at the moment.

I'm happy to create a JIRA and attempt to open a PR to add support for
workload identity if someone can confirm that this is desired.

Best Regards,
Tamas
  


Re: xmljson deprecated in camel 3.x

2021-08-17 Thread Andrea Cosentino
In the end we were basing the component on a lib with no future and we
decided to remove this approach. You could have a look at camel-xj too, as
an alternative.
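
A minimal sketch of the camel-xj alternative, assuming the identity transform and
the JSON2XML direction (please check the XJ component docs for your version):

// JSON body in, XML body out, no json-lib involved
from("direct:json2xml")
    .to("xj:identity?transformDirection=JSON2XML")
    .log("${body}");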

On Tue, Aug 17, 2021, 20:26 Daniel Langevin wrote:

> Thanks
>
> -----Original Message-----
> From: Andrea Cosentino 
> Sent: 17 August 2021 14:23
> To: users@camel.apache.org
> Subject: Re: xmljson deprecated in camel 3.x
>
> It's always json-lib
>
On Tue, Aug 17, 2021, 20:12 Daniel Langevin wrote:

> > Hi Andrea,
> >
> > I thought the vulnerability had been fixed with version 2.24 and up
> >
> > Can I know the library, so as not to use it in future projects?
> >
> >
> > Thanks
> >
> >
> > Daniel
> >
> >
> >
> >
> > -----Original Message-----
> > From: Andrea Cosentino  Sent: 17 August 2021 14:00
> > To: users@camel.apache.org Subject: Re: xmljson deprecated in camel 3.x
> >
> > Hello,
> >
> > It has been removed due to a CVE related to a library used in the component.
> >
> > This is the reason.
> >
> >
> > On Tue, Aug 17, 2021, 19:58 Daniel Langevin
> >  wrote:
> >
> > > Hi,
> > >
> > > I'm trying to convert an application from Camel 2.17 to Camel 3.11,
> > > and I have a concern with xmljson.
> > >
> > > I haven't found any component or data format that directly converts from
> > > JSON to XML in Camel 3.11.
> > >
> > >
> > > The closest I get is when I take my input in JSON, convert it to a
> > > java.util.HashMap with Jackson and then convert it to XML with XStream,
> > > but the XML result is very complicated to work with.
> > >
> > > <unmarshal>
> > >   <json library="Jackson" unmarshalTypeName="java.util.HashMap"/>
> > > </unmarshal>
> > > ... body is a JSON string
> > > <marshal>
> > >   <xstream/>
> > > </marshal>
> > >
> > >
> > > Did I miss something?
> > >
> > >
> > > I'm using CAMEL-3.11 in Karaf-4.2.11 with blueprint.
> > >
> > > Regards!
> > >
> > >
> > > Daniel Langevin
> > > Direction de l'assistance et des technologie Direction des
> > > ressources informationnelles et matérielles
> > >
> > >
> >
>


Re: xmljson deprecated in camel 3.x

2021-08-17 Thread Andrea Cosentino
It's always json-lib

On Tue, Aug 17, 2021, 20:12 Daniel Langevin wrote:

> Hi Andrea,
>
> I thought the vulnerability had been fixed with version 2.24 and up
>
> Can I know the library, so as not to use it in future projects?
>
>
> Thanks
>
>
> Daniel
>
>
>
>
> -----Original Message-----
> From: Andrea Cosentino 
> Sent: 17 August 2021 14:00
> To: users@camel.apache.org
> Subject: Re: xmljson deprecated in camel 3.x
>
> Hello,
>
> It has been removed due to a CVE related to a library used in the component.
>
> This is the reason.
>
>
> On Tue, Aug 17, 2021, 19:58 Daniel Langevin 
> wrote:
>
> > Hi,
> >
> > I'm trying to convert an application from Camel 2.17 to Camel 3.11,
> > and I have a concern with xmljson.
> >
> > I haven't found any component or data format that directly converts from
> > JSON to XML in Camel 3.11.
> >
> >
> > The closest I get is when I take my input in JSON, convert it to a
> > java.util.HashMap with Jackson and then convert it to XML with XStream,
> > but the XML result is very complicated to work with.
> >
> > <unmarshal>
> >   <json library="Jackson" unmarshalTypeName="java.util.HashMap"/>
> > </unmarshal>
> > ... body is a JSON string
> > <marshal>
> >   <xstream/>
> > </marshal>
> >
> >
> > Did I miss something?
> >
> >
> > I'm using CAMEL-3.11 in Karaf-4.2.11 with blueprint.
> >
> > Regards!
> >
> >
> > Daniel Langevin
> > Direction de l'assistance et des technologie Direction des ressources
> > informationnelles et matérielles
> >
> >
>


Re: xmljson deprecated in camel 3.x

2021-08-17 Thread Andrea Cosentino
Hello,

It has been removed due to a CVE related to a library used in the component.

This is the reason.


On Tue, Aug 17, 2021, 19:58 Daniel Langevin wrote:

> Hi,
>
> I'm trying to convert an application from Camel 2.17 to Camel 3.11, and I
> have a concern with xmljson.
>
> I haven't found any component or data format that directly converts from JSON
> to XML in Camel 3.11.
>
>
> The closest I get is when I take my input in JSON, convert it to a
> java.util.HashMap with Jackson and then convert it to XML with XStream,
> but the XML result is very complicated to work with.
>
> <unmarshal>
>   <json library="Jackson" unmarshalTypeName="java.util.HashMap"/>
> </unmarshal>
> ... body is a JSON string
> <marshal>
>   <xstream/>
> </marshal>
>
>
> Did I miss something?
>
>
> I'm using CAMEL-3.11 in Karaf-4.2.11 with blueprint.
>
> Regards!
>
>
> Daniel Langevin
> Direction de l'assistance et des technologie
> Direction des ressources informationnelles et matérielles
>
>


Re: [discuss] find a better name for KameletBinding

2021-08-16 Thread Andrea Cosentino
Maybe KameletPipe?

On Mon, Aug 16, 2021, 12:29 Zoran Regvart wrote:

> Hi Luca, Cameleers,
> naming in IT... from the top of my head
>
> Pipe
> Processor
> Conduit
> Channel
> Funnel
> Queue
> Glue
> Caravan or Karavan I guess :)
>
> None of these are particularly excellent, though we should pick a name
> that describes it best but also doesn't increase the ambiguity or
> cause confusion for end users. E.g. Channel is a particularly bad name
> as it's being used in Knative...
>
> zoran
>
> On Mon, Aug 16, 2021 at 10:27 AM Luca Burgazzoli 
> wrote:
> >
> > Hello,
> >
> > When the KameletBinding concept was introduced in camel-k, it was meant
> to
> > bind two Kamelets and nothing more, but over time we have added more
> > capabilities, like:
> >
> > - support for processing steps to transform exchanges/messages
> > - support to address/source from different systems so the source/sink
> does
> > not need be a kamelet anymore
> >
> > So I think the term KameletBinding is no longer appropriate and, to reduce
> > confusion, we should try to find a better name.
> >
> > Off the top of my head, I'd see the following names as a possible replacement:
> > - Binding, leaving Kamelet out of the game, as Kamelets are one of the
> > options but not the exclusive one
> > - Connector, as in essence a KameletBinding describes how to connect A to
> B
> >
> > Any opinion ?
> >
> > ---
> > Luca Burgazzoli
>
>
>
> --
> Zoran Regvart
>


Re: Feature camel-atlasmap not found

2021-08-13 Thread Andrea Cosentino
Hello,

It's not supported in OSGi

On Fri, Aug 13, 2021, 22:06 Gerald Kallas wrote:

> Dear all.
>
> I did install a vanilla karaf 4.2.9 w/ Camel 3.11.1.
>
> The command
>
> feature:install camel-atlasmap
>
> responds with
>
> Error executing command: No matching features for camel-atlasmap/0
>
> Same with Camel 3.7.5. Did I miss something? The documentation says
> camel-atlasmap is available from Camel 3.7.
>
> Best
> Gerald


Re: OAI-PMH Component and Simple Language support

2021-08-04 Thread Andrea Cosentino
Can you please show the full route? To use the simple language you need to use 
toD and not to.
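
A minimal sketch of the idea (the repository URL and the processor are placeholders,
only the toD part is the point):

from("file:input")
    // assumption: this processor sets the exchange property "myId" from the file body
    .process(extractIdProcessor)
    // toD evaluates the simple expression at runtime; a plain to() sends it literally
    .toD("oaipmh://my.repository.example/oai?verb=GetRecord&identifier=${exchangeProperty.myId}");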

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd






On Wednesday, August 4, 2021, 01:22:12 PM GMT+2, Penagos Jaime 
 wrote: 





Hi everyone,

I am currently testing the OAI-PMH component with Camel 3.11.x, the component 
works as intended, but it doesnt let me use Simple Language in the Producer / 
Consumer.

I want to load an ID from the body of a file, and after some basic processing, 
use this information as the identifier, but when I start the routes, it just 
shows in the URL

?verb=GetRecord=exchangeProperty%7BmyId%7D

I've tried using ${exchangeProperty.myId} or the getProperty() builder, but 
nothing seems to work...

Is there a workaround you could recommend to me or something I am missing?

Thank you so much fort he insights!
Best regards

--
Jaime Penagos

Ludwig-Maximilians-Universität
Universitätsbibliothek
Abteilung Informationstechnologie

Geschwister-Scholl-Platz 1, 80539 München
Tel.: +49 89 2180 5747
E-Mail: 
jaime.pena...@ub.uni-muenchen.de<mailto:jaime.pena...@ub.uni-muenchen.de>



Re: AWS2-S3 component fails to send file into bucket

2021-07-28 Thread Andrea Cosentino
It could be something related to your particular S3 account/credentials.

Nobody ever reported this and the aws2-s3 component has been heavily used.

What is your camel version?

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd






On Wednesday, July 28, 2021, 10:14:54 AM GMT+2,  wrote: 






I've tested wrapping the secret with RAW as you mentioned.
It did not take effect.

But I read the S3 documentation again and found
that this exception relates to the access rights to perform list-bucket.
However, I have checked my access rights and I do have the right to perform
list-bucket.

On Wed, 2021-07-28 at 07:21 +, Andrea Cosentino wrote:
> If your accessKey or secretKey contains special characters like + or
> /, you need to prepend RAW
> 
> Like 
> 
> secretKey=RAW()
> 
> This seems to be a problem with your credentials at first sight.
> 
> --
> Andrea Cosentino 
> --
> Apache Camel PMC Chair
> Apache Karaf Committer
> Apache Servicemix PMC Member
> Email: ancosen1...@yahoo.com
> Twitter: @oscerd2
> Github: oscerd
> 
> 
> 
> 
> 
> 
> On Wednesday, July 28, 2021, 09:14:19 AM GMT+2,
> mail4...@gmail.com  wrote: 
> 
> 
> 
> 
> 
> Hello!
> 
> I'm trying to send file to AWS S3 bucket and getting the following
> exception.
> 
> Caused by: org.apache.camel.RuntimeCamelException:
> software.amazon.awssdk.services.s3.model.S3Exception: null (Service:
> S3, Status Code: 400, Request ID: null, Extended Request ID:
> na7tn3NqtzpxOnMvVw8wc4vEMSYn6ZvQvVZx709dq0q75++wwWxgBfSk4DFgtgYPV9hic
> In
> 8M98=)
> 
> What does it mean?
> 
> My code is similar to:
> 
> ---
> from("direct://send-file")
>   .process((exchange) ->
> )
>   .to("aws2-s3://BUCKET-NAME?accessKey=***=***=EU-
> NORTH-1")
> 
> // sender:
> 
> context.createProducerTemplate().sendBody("direct://send-file",
> "TEST");
> ---
> 
> Here is the stacktrace:
> 
> Caused by: org.apache.camel.FailedToStartRouteException: Failed to
> start route route1 because of null
>     at
> org.apache.camel.impl.engine.RouteService.warmUp(RouteService.java:12
> 3)
> 
>     at
> org.apache.camel.impl.engine.InternalRouteStartupManager.doWarmUpRout
> es
> 
> (InternalRouteStartupManager.java:306)
>     at
> org.apache.camel.impl.engine.InternalRouteStartupManager.safelyStartR
> ou
> 
> teServices(InternalRouteStartupManager.java:189)
>     at
> org.apache.camel.impl.engine.InternalRouteStartupManager.doStartOrRes
> um
> 
> eRoutes(InternalRouteStartupManager.java:147)
>     at
> org.apache.camel.impl.engine.AbstractCamelContext.doStartCamel(Abstra
> ct
> 
> CamelContext.java:3166)
>     at
> org.apache.camel.impl.engine.AbstractCamelContext.doStartContext(Abst
> ra
> 
> ctCamelContext.java:2846)
>     at
> org.apache.camel.impl.engine.AbstractCamelContext.doStart(AbstractCam
> el
> 
> Context.java:2797)
>     at
> org.apache.camel.spring.boot.SpringBootCamelContext.doStart(SpringBoo
> tC
> 
> amelContext.java:43)
>     at
> org.apache.camel.support.service.BaseService.start(BaseService.java:1
> 19
> 
> )
>     at
> org.apache.camel.impl.engine.AbstractCamelContext.start(AbstractCamel
> Co
> 
> ntext.java:2494)
>     at
> org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.j
> av
> 
> a:245)
>     at
> org.apache.camel.spring.SpringCamelContext.start(SpringCamelContext.j
> av
> 
> a:119)
>     at
> org.apache.camel.spring.SpringCamelContext.onApplicationEvent(SpringC
> am
> 
> elContext.java:151)
>     at
> org.springframework.context.event.SimpleApplicationEventMulticaster.d
> oI
> 
> nvokeListener(SimpleApplicationEventMulticaster.java:176)
>     at
> org.springframework.context.event.SimpleApplicationEventMulticaster.i
> nv
> 
> okeListener(SimpleApplicationEventMulticaster.java:169)
>     at
> org.springframework.context.event.SimpleApplicationEventMulticaster.m
> ul
> 
> ticastEvent(SimpleApplicationEventMulticaster.java:143)
>     at
> org.springframework.context.support.AbstractApplicationContext.publis
> hE
> 
> vent(AbstractApplicationContext.java:421)
>     at
> org.springframework.context.support.AbstractApplicationContext.publis
> hE
> 
> vent(AbstractApplicationContext.java:378)
>     at
> org.springframework.context.support.AbstractApplicationContext.finish
> Re
> 
> fresh(AbstractApplicationContext.java:938)
>    

Re: Link in Documentation is Broken

2021-07-28 Thread Andrea Cosentino
The JIRA tracker for issues is here

https://issues.apache.org/jira/browse/CAMEL

Thanks for reporting anyway.
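
Regarding the Java DSL example asked for below, a minimal sketch (the schema path is a placeholder):

from("direct:start")
    // validates the message body against the XSD found on the classpath
    .to("validator:org/example/schema.xsd")
    .to("mock:valid");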

On Wed, Jul 28, 2021 at 09:44 Weiß, Kajetan wrote:

> Dear fellow camel users,
>
> the link to the example in this document is broken:
>
> https://camel.apache.org/components/latest/validator-component.html
>
> To find the position of the link, search for "example".
>
> The link points to this site, which gives a 404 Not Found:
>
>
> https://github.com/apache/camel/blob/main/components/camel-spring/src/test/resources/org/apache/camel/component/validator/camelContext.xml
>
> I would have opened an issue if there was an issue tracker for
> users in GitHub. I hope this mail is appropriate otherwise.
>
> An example with a validator written in Java DSL would be much
> appreciated.
>
> Thank you and have a nice day
> Kajetan Weiß
>


Re: AWS2-S3 component fails to send file into bucket

2021-07-28 Thread Andrea Cosentino
If your accessKey or secretKey contains special characters like + or /, you 
need to prepend RAW

Like 

secretKey=RAW()

This seems to be a problem with your credentials at first sight.
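
A minimal sketch of the producer route from the mail below with a wrapped secret
(credentials are placeholders; the region option expects the lowercase region id):

from("direct://send-file")
    .process(exchange -> exchange.getIn().setHeader(AWS2S3Constants.KEY, "Test-Key"))
    .to("aws2-s3://BUCKET-NAME?accessKey=AKIDEXAMPLE"
        // RAW() keeps + and / in the secret from being URI-decoded
        + "&secretKey=RAW(abc+def/ghi)"
        + "&region=eu-north-1");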

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd






On Wednesday, July 28, 2021, 09:14:19 AM GMT+2, mail4...@gmail.com 
 wrote: 





Hello!

I'm trying to send file to AWS S3 bucket and getting the following
exception.

Caused by: org.apache.camel.RuntimeCamelException:
software.amazon.awssdk.services.s3.model.S3Exception: null (Service:
S3, Status Code: 400, Request ID: null, Extended Request ID:
na7tn3NqtzpxOnMvVw8wc4vEMSYn6ZvQvVZx709dq0q75++wwWxgBfSk4DFgtgYPV9hicIn
8M98=)

What does it mean?

My code is similar to:

---
from("direct://send-file")
  .process((exchange) ->
    exchange.getIn().setHeader(AWS2S3Constants.KEY, "Test-Key"))
  .to("aws2-s3://BUCKET-NAME?accessKey=***&secretKey=***&region=EU-
NORTH-1")

// sender:

context.createProducerTemplate().sendBody("direct://send-file",
"TEST");
---

Here is the stacktrace:

Caused by: org.apache.camel.FailedToStartRouteException: Failed to
start route route1 because of null
    at
org.apache.camel.impl.engine.RouteService.warmUp(RouteService.java:123)

    at
org.apache.camel.impl.engine.InternalRouteStartupManager.doWarmUpRoutes

(InternalRouteStartupManager.java:306)
    at
org.apache.camel.impl.engine.InternalRouteStartupManager.safelyStartRou

teServices(InternalRouteStartupManager.java:189)
    at
org.apache.camel.impl.engine.InternalRouteStartupManager.doStartOrResum

eRoutes(InternalRouteStartupManager.java:147)
    at
org.apache.camel.impl.engine.AbstractCamelContext.doStartCamel(Abstract

CamelContext.java:3166)
    at
org.apache.camel.impl.engine.AbstractCamelContext.doStartContext(Abstra

ctCamelContext.java:2846)
    at
org.apache.camel.impl.engine.AbstractCamelContext.doStart(AbstractCamel

Context.java:2797)
    at
org.apache.camel.spring.boot.SpringBootCamelContext.doStart(SpringBootC

amelContext.java:43)
    at
org.apache.camel.support.service.BaseService.start(BaseService.java:119

)
    at
org.apache.camel.impl.engine.AbstractCamelContext.start(AbstractCamelCo

ntext.java:2494)
    at
org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.jav

a:245)
    at
org.apache.camel.spring.SpringCamelContext.start(SpringCamelContext.jav

a:119)
    at
org.apache.camel.spring.SpringCamelContext.onApplicationEvent(SpringCam

elContext.java:151)
    at
org.springframework.context.event.SimpleApplicationEventMulticaster.doI

nvokeListener(SimpleApplicationEventMulticaster.java:176)
    at
org.springframework.context.event.SimpleApplicationEventMulticaster.inv

okeListener(SimpleApplicationEventMulticaster.java:169)
    at
org.springframework.context.event.SimpleApplicationEventMulticaster.mul

ticastEvent(SimpleApplicationEventMulticaster.java:143)
    at
org.springframework.context.support.AbstractApplicationContext.publishE

vent(AbstractApplicationContext.java:421)
    at
org.springframework.context.support.AbstractApplicationContext.publishE

vent(AbstractApplicationContext.java:378)
    at
org.springframework.context.support.AbstractApplicationContext.finishRe

fresh(AbstractApplicationContext.java:938)
    at
org.springframework.context.support.AbstractApplicationContext.refresh(

AbstractApplicationContext.java:586)
    at
org.springframework.boot.SpringApplication.refresh(SpringApplication.ja

va:782)
    at
org.springframework.boot.SpringApplication.refresh(SpringApplication.ja

va:774)
    at
org.springframework.boot.SpringApplication.refreshContext(SpringApplica

tion.java:439)
    at
org.springframework.boot.SpringApplication.run(SpringApplication.java:3

39)
    at
org.springframework.boot.test.context.SpringBootContextLoader.loadConte

xt(SpringBootContextLoader.java:123)
    at
org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDe

legate.loadContextInternal(DefaultCacheAwareContextLoaderDelegate.java:

99)
    at
org.springframework.test.context.cache.DefaultCacheAwareContextLoaderDe

legate.loadContext(DefaultCacheAwareContextLoaderDelegate.java:124)
    ... 92 more
Caused by: org.apache.camel.RuntimeCamelException:
software.amazon.awssdk.services.s3.model.S3Exception: null (Service:
S3, Status Code: 400, Request ID: null, Extended Request ID:
na7tn3NqtzpxOnMvVw8wc4vEMSYn6ZvQvVZx709dq0q75++wwWxgBfSk4DFgtgYPV9hicIn

8M98=)
    at
org.apache.camel.RuntimeCamelException.wrapRuntimeCamelException(Runtim

eCamelException.java:51)
    at
org.apache.camel.support.ChildServiceSupport.start(ChildServiceSupport.

java:67)
    at
org.apache.camel.support.service.ServiceHelper.startService(ServiceHelp

er.java:113)
    at
org.apache.camel.support.service.ServiceHelper.startService(ServiceHelp

er.java:130)
    at
org.apache.camel.impl.en

Re: Tell me about moving messages from one route to another and manual Kafka commit

2021-06-21 Thread Andrea Cosentino
Yes, with direct it should be fine.

On Mon, Jun 21, 2021 at 17:08 Vyacheslav Boyko <
mail4...@gmail.com> wrote:

> So... Am I able to use "direct" instead of "seda", right?
>
>
> On Mon, 21 Jun 2021 14:49:50 + (UTC)
> Andrea Cosentino  wrote:
>
> > Hello,
> >
> > The manual commit needs to be done in the same thread of the kafka
> > consumer, so you cannot do this in route 2 or route 3.
> >
> > --
> > Andrea Cosentino
> > --
> > Apache Camel PMC Chair
> > Apache Karaf Committer
> > Apache Servicemix PMC Member
> > Email: ancosen1...@yahoo.com
> > Twitter: @oscerd2
> > Github: oscerd
> >
> >
> >
> >
> >
> >
> > On Monday, June 21, 2021, 02:54:00 PM GMT+2, Vyacheslav Boyko
> >  wrote:
> >
> >
> >
> >
> >
> > Hi!
> >
> > It's the first time I use Kafka + Camel chain.
> > And... what if I have following routes:
> >
> > // 1 route
> > from("kafka://my-topic?allowManualCommit=true=false")
> > .process(someProcessor)
> > .choice()
> > .when(simple("${body} == 'MY-CONDITION'"))
> > .to("seda://yes-queue")
> > .otherwise()
> > .to("seda://no-queue")
> > .end();
> >
> > // 2 route
> > from("seda://yes-route")
> > .process(processorYes);
> >
> > // 3 route
> > from("seda://no-route")
> > .process(processorNo);
> >
> > Am I right saying that all incoming messages will pass through 1 route
> > then 2 or 3 route (conditionally) and will park into processorYes or
> > processorNo. Am I able to perform commit Kafka message not only in 1
> > route? I mean, am I right that Camel will pass exactly one instance of
> > Exchange from route 1 to route 2 or 3 and I will be able to perform
> > Kafka commit in route 2? Or do Camel do something with Kafka fetching
> > makes me being not able to commit Kafka message in next (after Kafka
> > consuming) route?
> >
> >
>
>


Re: Tell me about moving messages from one route to another and manual Kafka commit

2021-06-21 Thread Andrea Cosentino
Hello,

The manual commit needs to be done in the same thread of the kafka consumer, so 
you cannot do this in route 2 or route 3.
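
A minimal sketch of doing the commit at the end of route 1, on the consumer thread
(KafkaConstants and KafkaManualCommit are in org.apache.camel.component.kafka;
exact package/method names can differ slightly between 3.x releases):

from("kafka:my-topic?allowManualCommit=true&autoCommitEnable=false")
    .process(someProcessor)
    .process(exchange -> {
        // the component stores a KafkaManualCommit instance in this header
        KafkaManualCommit manual = exchange.getIn()
            .getHeader(KafkaConstants.MANUAL_COMMIT, KafkaManualCommit.class);
        if (manual != null) {
            manual.commitSync();
        }
    })
    .to("seda:yes-queue");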

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd






On Monday, June 21, 2021, 02:54:00 PM GMT+2, Vyacheslav Boyko 
 wrote: 





Hi!

It's the first time I use Kafka + Camel chain.
And... what if I have following routes:

// 1 route
from("kafka://my-topic?allowManualCommit=true=false")
    .process(someProcessor)
    .choice()
        .when(simple("${body} == "MY-CONDITION")
            .to("seda://yes-queue")
        .otherwise()
            .to("seda://no-queue")
    .end();

// 2 route
from("seda://yes-route")
    .process(processorYes);

// 3 route
from("seda://no-route")
    .process(processorNo);

Am I right saying that all incoming messages will pass through 1 route
then 2 or 3 route (conditionally) and will park into processorYes or
processorNo. Am I able to perform commit Kafka message not only in 1
route? I mean, am I right that Camel will pass exactly one instance of
Exchange from route 1 to route 2 or 3 and I will be able to perform
Kafka commit in route 2? Or do Camel do something with Kafka fetching
makes me being not able to commit Kafka message in next (after Kafka
consuming) route?


-- 
Vyacheslav Boyko
aka bvn13
mailto:mail4...@gmail.com


Re: Camel components overview

2021-06-17 Thread Andrea Cosentino
Hello.

1) The 3.0.x or 3.9.x versions don't have a space in the documentation
because they were development versions between the LTS releases, so what you see
listed there is:
 - the last development version release
 - The 3.7.x LTS documentation
 - The 3.4.x LTS documentation
 - The 2.x old documentation
Essentially these are the active release trains plus the last development
release

2) The support level of the component is listed in the components table in
the support level column and there is no way of sorting/grouping the
component by support level at website level
3) Not available at website level.

I think you can play with the camel-catalog modules to extract this
information, but on the website this is not planned or supported
at the moment.
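
A minimal sketch of pulling the support level out of the catalog metadata (the
"supportLevel" field name is an assumption based on recent 3.x catalog JSON):

import java.util.regex.Matcher;
import java.util.regex.Pattern;
import org.apache.camel.catalog.CamelCatalog;
import org.apache.camel.catalog.DefaultCamelCatalog;

public class ListSupportLevels {
    private static final Pattern LEVEL = Pattern.compile("\"supportLevel\"\\s*:\\s*\"([^\"]+)\"");

    public static void main(String[] args) {
        CamelCatalog catalog = new DefaultCamelCatalog();
        for (String name : catalog.findComponentNames()) {
            // per-component JSON metadata shipped with the catalog
            String json = catalog.componentJSonSchema(name);
            Matcher m = LEVEL.matcher(json);
            System.out.println(name + " -> " + (m.find() ? m.group(1) : "unknown"));
        }
    }
}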

Cheers

On Thu, Jun 17, 2021 at 08:15 ski n wrote:

> Hi all,
>
> Currently, there is a component overview for the latest version and LTS
> versions. For example:
>
> https://camel.apache.org/components/latest/
> https://camel.apache.org/components/3.10.x/
> https://camel.apache.org/components/3.7.x/
> https://camel.apache.org/components/3.4.x/
> https://camel.apache.org/components/2.x/
>
> What I was looking for:
>
> 1) Other versions, for example 3.0.x or 3.9.x
> 2) Overview of components by support status "stable", "deprecated",
> "removed" (sorted by version)
> 3) Overview of components by version with changes in support status (new
> components, deprecated, removed components).
>
> Mostly this information can be found in release notes/blogs, but can they
> found also on the Camel documentation website?
>
> Regards,
>
> Raymond
>


Re: idempotentConsumer Exception

2021-06-10 Thread Andrea Cosentino
Glad to help! Four eyes are always better than two :-)

On Thu, Jun 10, 2021, 22:46 Rafael Sainz wrote:

> Good lord! Thanks for answering me Andrea.
>
> From: Andrea Cosentino 
> Date: Thursday, June 10, 2021, 22:29
> To: users@camel.apache.org 
> Subject: Re: idempotentConsumer Exception
> It is Concatenada, not Concatenado
>
> On Thu, Jun 10, 2021, 22:13 Rafael Sainz wrote:
>
> > Hello Camel team,
> >
> >
> > I am struggling with idempotentConsumer
> >
> >
> >
> > .idempotentConsumer(header("Concatenado"), idempotentPosturasRepo)
> >
> >
> >
> > It returns
> >
> >
> > org.apache.camel.processor.idempotent.NoMessageIdException: No message ID
> > could be found using expression: ${headers.Concatenado} on message
> > exchange: Exchange[]
> >
> >
> > even though the header has been created
> >
> > {CamelHttpResponseCode=200, CamelHttpResponseText=OK, Codrueda=["MINO"],
> > Concatenada=41205Vencida, Content-Length=43380,
> > Content-Type=application/json, Date=Thu, 10 Jun 2021 19:43:15 GMT,
> > Server=waitress}
> >
> > Can you help me please?
> >
> > Thanks!
> >
> > Rafael
> >
>


Re: idempotentConsumer Exception

2021-06-10 Thread Andrea Cosentino
It is Concatenada, not Concatenado
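
So, a sketch of the corrected step (the endpoints are placeholders, the repository is the one from the original mail):

from("direct:posturas")
    // header name must match what is actually set on the message: "Concatenada"
    .idempotentConsumer(header("Concatenada"), idempotentPosturasRepo)
    .to("mock:result");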

On Thu, Jun 10, 2021, 22:13 Rafael Sainz wrote:

> Hello Camel team,
>
>
> I am struggling with idempotentConsumer
>
>
>
> .idempotentConsumer(header("Concatenado"), idempotentPosturasRepo)
>
>
>
> It returns
>
>
> org.apache.camel.processor.idempotent.NoMessageIdException: No message ID
> could be found using expression: ${headers.Concatenado} on message
> exchange: Exchange[]
>
>
> even though the header has been created
>
> {CamelHttpResponseCode=200, CamelHttpResponseText=OK, Codrueda=["MINO"],
> Concatenada=41205Vencida, Content-Length=43380,
> Content-Type=application/json, Date=Thu, 10 Jun 2021 19:43:15 GMT,
> Server=waitress}
>
> Can you help me please?
>
> Thanks!
>
> Rafael
>


Re: [ANNOUNCE] Apache Camel 3.10.0 Released

2021-05-20 Thread Andrea Cosentino
Hello Omar,

No, 3.11 will be the next LTS. This release introduces a lot of new stuff.
The next one will stabilize a bit and we'll release the LTS 3.11.0

Cheers


On Thu, May 20, 2021, 12:14 Omar Al-Safi wrote:

> Thanks Gregor!
>
> By the way, wasn't 3.10 supposed to be an LTS version?
>
> Regards,
> Omar
>
> On Thu, May 20, 2021 at 11:38 AM Gregor Zurowski  >
> wrote:
>
> > The Camel PMC is pleased to announce the release of Apache Camel 3.10.0.
> >
> > Apache Camel is an open source integration framework that empowers you
> > to quickly and easily integrate various systems consuming or producing
> > data.
> >
> > This release is a new minor release and contains 208 bug fixes and
> > improvements. The release is available for immediate download at:
> >
> > https://camel.apache.org/download/
> >
> > For more details please take a look at the release notes at:
> >
> > https://camel.apache.org/releases/release-3.10.0/
> >
>


Re: SMTPS Throws Error

2021-05-19 Thread Andrea Cosentino
Hello,

If you are using Fuse from Red Hat, you should ask to the Red Hat support.

Cheers

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd






On Thursday, May 20, 2021, 03:06:10 AM GMT+2, Mike Oliver 
 wrote: 





Claus,

Hmm ok, trying to use the google-mail component with the following context.

http://www.osgi.org/xmlns/blueprint/v1.0.0;
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
xsi:schemaLocation="http://www.osgi.org/xmlns/blueprint/v1.0.0
https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
        http://camel.apache.org/schema/blueprint
https://camel.apache.org/schema/blueprint/camel-blueprint.xsd;>
    http://camel.apache.org/schema/blueprint;>
        
            
            
                Error Message
            
            
                Helo World, Error Body!
            
            
            mikeolive...@open4businessonline.com
            
            
            

        
    


mvn clean install build successful.
osgi:install -s also successful.

Blueprint bundle DeadLetterChannel/1.0.0.SNAPSHOT is waiting for
dependencies
[(&(component=google-mail)(objectClass=org.apache.camel.spi.ComponentResolver))]

so feature:install camel-google-mail

fails wtih
org.apache.felix.resolver.reason.ReasonException : Unable to resolve root:
missing requirement [root] osgi.identity; osgi.identity=camel-quartz2;
type=karaf.feature; version="[2.23.2.fuse-790040,2.23.2.fuse-790040]";
filter:="(&(osgi.identity=camel-quartz2)(type=karaf.feature)(version>=2.23.2.fuse-790040)(version<=2.23.2.fuse-790040))"

I thought maybe I mistyped but camel-google-mail must have a dependency on
camel-quartz2

so tried feature:install camel-quartz2 and got...

org.apache.felix.resolver.reason.ReasonException : Unable to resolve root:
missing requirement [root] osgi.identity; osgi.identity=camel-quartz2;
type=karaf.feature; version="[2.23.2.fuse-790040,2.23.2.fuse-790040]";
filter:="(&(osgi.identity=camel-quartz2)(type=karaf.feature)(version>=2.23.2.fuse-790040)(version<=2.23.2.fuse-790040))"

Now my fuse-karaf is 780038 not 790040 if that matters.




*Mike Oliver** Founder**, Open 4 Business Online*
Tel: +1(951)260-0793 | Mobile:**NEW* 639479927462
US Toll free: 1-800-985-4766 **NEW*
http://www.o4bo.com
I'm better at understanding Tagalog than speaking it



On Wed, May 19, 2021 at 12:44 PM Claus Ibsen  wrote:

> Hi
>
> Sending emails to google mail, via smtp is likely harder than trying
> to use the camel-google-mail component.
> If you need a test email server, then you can find smtp mail servers
> you can run as docker containers to use for testing purposes.
>
> On Wed, May 19, 2021 at 2:40 AM Mike Oliver
>  wrote:
> >
> > Trying to use Camel to send an email but getting the following error...
> >
> > javax.mail.MessagingException: Could not connect to SMTP host:
> > smtp.gmail.com, port: 465;
> >  nested exception is:
> >        javax.net.ssl.SSLHandshakeException: PKIX path building failed:
> > sun.security.provider.certpath.SunCertPathBuilderException: unable to
> > find valid certification path to requested target
> >        at
> com.sun.mail.smtp.SMTPTransport.openServer(SMTPTransport.java:2211)
> >        at
> com.sun.mail.smtp.SMTPTransport.protocolConnect(SMTPTransport.java:740)
> >        at javax.mail.Service.connect(Service.java:366)
> >        at
> org.apache.camel.component.mail.DefaultJavaMailSender.send(DefaultJavaMailSender.java:113)
> >        at
> org.apache.camel.component.mail.MailProducer.process(MailProducer.java:63)
> >
> >
> > The blueprint I am using is...
> >
> > http://www.osgi.org/xmlns/blueprint/v1.0.0;
> >    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance;
> > xsi:schemaLocation="http://www.osgi.org/xmlns/blueprint/v1.0.0
> > https://www.osgi.org/xmlns/blueprint/v1.0.0/blueprint.xsd
> >              http://camel.apache.org/sche

Re: File filter bean not working in Camel 3

2021-05-19 Thread Andrea Cosentino
It's not possible to read the code.

I would migrate to an LTS 3.7.x and not to 3.6.0, this is the first
suggestion I have.

On Wed, May 19, 2021 at 15:13 Mikael Andersson Wigander wrote:

> Hi
> In Camel 2.x we have a file component with a filter bean; now, migrating
> to Camel 3, the bean is not found and the program crashes upon start.
> The migration only converted the route to use
> EndpointRouteBuilder; the config, bean and class are unchanged.
>
> Config
>
> @Bean
> public static HouseKeepingFileFilter<String> houseKeepingFileFilter() {
>     return new HouseKeepingFileFilter<>();
> }
>
> Impl
>
> @CommonsLog
> public class HouseKeepingFileFilter<String> implements GenericFileFilter<String> {
>
>     public boolean accept(GenericFile file) {
>         SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
>         final boolean isValid = file.isDirectory()
>                 ? true
>                 : file.getLastModified() < LocalDateTime.now()
>                         .minusMonths(1L)
>                         .atZone(ZoneId.systemDefault())
>                         .toInstant()
>                         .toEpochMilli();
>         log.trace(MessageFormat.format("File :{0}, with modification date: {1} is {2} for archive",
>                 file.getFileName(), sdf.format(file.getLastModified()),
>                 isValid ? "valid" : "NOT valid"));
>         return isValid;
>     }
> }
>
> Camel 3
>
> @Override
> public void configure() throws Exception {
>     from(file("mifir/")
>             .antInclude("{{housekeeping.files.include}}")
>             .antExclude("**/archive/**,**/data/**,**/LegalReportingModuleTransporterBridge/**,**/MAPPING/**,{{housekeeping.files.exclude}}")
>             .recursive(true)
>             .filter("#houseKeepingFileFilter")
>             .scheduler("quartz")
>             .schedulerProperties("cron", "{{housekeeping.cron}}")
>             .schedulerProperties("triggerId", "houseKeepingId")
>             .schedulerProperties("triggerGroup", "houseKeepingGroup"))
>         .description("HOUSEKEEPING-ROUTE", "Archive files older than a month", "en")
>         .process(s -> {
>             long houseKeepingSize = 0L;
>             final Message in = s.getIn();
>             try {
>                 houseKeepingSize = in.getHeader("HouseKeepingSize", Long.class);
>             } catch (Exception ignored) {
>             }
>             final File body = in.getBody(File.class);
>             houseKeepingSize += body.length();
>             in.setHeader("HouseKeepingSize", houseKeepingSize);
>         })
>         .aggregate(constant(true), new ZipAggregationStrategy(true))
>             .to(log("HouseKeeping").level("INFO").groupInterval(5_000L).groupActiveOnly(true))
>             .completionFromBatchConsumer()
>             .eagerCheckCompletion()
>         .to(file("mifir/archive"))
>         .log("Monthly housekeeping is done! Archived ${exchangeProperty.CamelAggregatedSize} files and saved ${header.HouseKeepingSize} bytes")
>         .end();
> }
>
> Camel 2
>
> @Override
> public void configure() throws Exception {
>     from("file:mifir/"
>             + "?antInclude={{housekeeping.files.include}}"
>             + "&antExclude=**/archive/**,**/data/**,**/LegalReportingModuleTransporterBridge/**,**/MAPPING/**,{{housekeeping.files.exclude}}"
>             + "&recursive=true"
>             + "&filter=#houseKeepingFileFilter"
>             + "&scheduler=quartz2"
>             + "&scheduler.cron={{housekeeping.cron}}"
>             + "&scheduler.triggerId=houseKeepingId"
>             + "&scheduler.triggerGroup=houseKeepingGroup")
>         .description("HOUSEKEEPING-ROUTE", "Archive files older than a month", "en")
>         .process(s -> {
>             long houseKeepingSize = 0L;
>             final Message in = s.getIn();
>             try {
>                 houseKeepingSize = in.getHeader("HouseKeepingSize", Long.class);
>             } catch (Exception ignored) {
>             }
>             final File body = in.getBody(File.class);
>             houseKeepingSize += body.length();
>             in.

Re: How to override CXF version for Camel-CXF

2021-05-18 Thread Andrea Cosentino
You can try to override the dependency with a dependency exclusion on
camel-cxf, but it's not a best practice.

We can eventually update it on the LTS branch and release it with 3.7.5

On Tue, May 18, 2021 at 11:34 Chio Chuan Ooi wrote:

> Hi All,
>
> I am currently using Camel 3.7.4 and noticed that camel-cxf is using CXF
> 3.4.2.
> For CXF 3.4.2, there is a known vulnerability that is already fixed
> in CXF 3.4.3.
>
> When checking on JIRA CAMEL-16434
> , it seems that the CXF
> 3.4.3 upgrade is only done for Camel 3.10,
> so if I want to update to use CXF 3.4.3, is that possible?
>
>
> Thanks and Regards,
> Chio Chuan
>


Re: Apache Karaf, Camel and Kafka component

2021-05-17 Thread Andrea Cosentino
Please open an issue about this.

On Tue, May 18, 2021, 01:08 Nicola Cisternino wrote:

> Other tests ...
> The problem is generated by "snappy" feature required by camel-kafka
> feature (see at:
>
> https://repo1.maven.org/maven2/org/apache/camel/karaf/apache-camel/3.9.0/apache-camel-3.9.0-features.xml
> )
> The sequence to reproduce the problem, starting from a fresh 4.3.1 Karaf
> installation, is:
>
>  __ __  
> / //_/ __ _/ __/
>/ ,<  / __ `/ ___/ __ `/ /_
>   / /| |/ /_/ / /  / /_/ / __/
>  /_/ |_|\__,_/_/   \__,_/_/
>
>Apache Karaf (4.3.1)
>
> Hit '' for a list of available commands
> and '[cmd] --help' for help on a specific command.
> Hit 'system:shutdown' to shutdown Karaf.
> Hit '' or type 'logout' to disconnect shell from current session.
>
> karaf@root()> feature:repo-add camel 3.9.0
> Adding feature url
> mvn:org.apache.camel.karaf/apache-camel/3.9.0/xml/features
> karaf@root()> feature:install http webconsole camel camel-netty
> karaf@root()> feature:install camel-stream
> karaf@root()> feature:install snappy
> karaf@root()> feature:install camel-sql
> Error executing command: Unable to resolve root: missing requirement
> [root] osgi.identity; osgi.identity=kar; type=karaf.feature;
> version="[4.3.1,4.3.1]";
> filter:="(&(osgi.identity=kar)(type=karaf.feature)(version>=4.3.1)(version<=4.3.1))"
>
> [caused by: Unable to resolve kar/4.3.1: missing requirement [kar/4.3.1]
> osgi.identity; osgi.identity=org.apache.karaf.kar.core;
> type=osgi.bundle; version="[4.3.1,4.3.1]"; resolution:=mandatory [caused
> by: Unable to resolve org.apache.karaf.kar.core/4.3.1: missing
> requirement [org.apache.karaf.kar.core/4.3.1] osgi.wiring.package;
>
> filter:="(&(osgi.wiring.package=org.osgi.framework)(version>=1.0.0)(!(version>=3.0.0)))"]]
> karaf@root()>
>
>
> On 5/17/21 10:21 PM, Andrea Cosentino wrote:
> > 3.9.0 has been tested only with 4.3.1
> >
> > On Mon, May 17, 2021, 22:13 Nicola Cisternino wrote:
> >
> >> Thank you JB
> >>
> >> I've tried to update all to last versions.
> >>
> >> So I've installed:
> >> - Apache Karaf *4.3.2*
> >> - Camel *3.9.0* features
> >>
> >> In features terms:
> >> *feature:install http webconsole**
> >> **feature:repo-add camel 3.9.0**
> >> **feature:install camel camel-netty camel-kafka**
> >> *
> >> ... and all works fine ;-)
> >> ... but ... trying to install some other camel feature (for example
> >> camel-sql or camel-stream) occurs the following error:
> >>
> >> karaf@root()> *feature:install camel-sql*
> >> Error executing command: Unable to resolve root: missing requirement
> >> [root] osgi.identity; osgi.identity=shell; type=karaf.feature;
> >> version="[4.3.2,4.3.2]";
> >>
> filter:="(&(osgi.identity=shell)(type=karaf.feature)(version>=4.3.2)(version<=4.3.2))"
> >>
> >> [caused by: Unable to resolve shell/4.3.2: missing requirement
> >> [shell/4.3.2] osgi.identity; osgi.identity=org.apache.karaf.shell.core;
> >> type=osgi.bundle; version="[4.3.2,4.3.2]"; resolution:=mandatory [caused
> >> by: Unable to resolve org.apache.karaf.shell.core/4.3.2: missing
> >> requirement [org.apache.karaf.shell.core/4.3.2] osgi.wiring.package;
> >>
> >>
> filter:="(&(osgi.wiring.package=org.osgi.framework)(version>=1.0.0)(!(version>=3.0.0)))"]]
> >> karaf@root()>
> >>
> >> In the log:
> >> 2021-05-17T20:09:52,508 | INFO  | pipe-feature:install camel-sql |
> >> FeaturesServiceImpl  | 18 - org.apache.karaf.features.core -
> >> 4.3.2 | Adding features: camel-sql/[3.9.0,3.9.0]
> >> 2021-05-17T20:09:52,601 | ERROR | Karaf ssh console user karaf |
> >> ShellUtil| 43 - org.apache.karaf.shell.core -
> >> 4.3.2 | Exception caught while executing command
> >> org.apache.felix.resolver.reason.ReasonException: Unable to resolve
> >> root: missing requirement [root] osgi.identity; osgi.identity=shell;
> >> type=karaf.feature; version="[4.3.2,4.3.2]";
> >>
> filter:="(&(osgi.identity=shell)(type=karaf.feature)(version>=4.3.2)(version<=4.3.2))"
> >>
> >> [caused by: Unable to resolve shell/4.3.2: missing requirement
> >> [shell/4.3.2] osgi.identity; osgi.identity=org.apache.karaf.sh

Re: Apache Karaf, Camel and Kafka component

2021-05-17 Thread Andrea Cosentino
3.9.0 has been tested only with 4.3.1

On Mon, May 17, 2021, 22:13 Nicola Cisternino wrote:

> Thank you JB
>
> I've tried to update all to last versions.
>
> So I've installed:
> - Apache Karaf *4.3.2*
> - Camel *3.9.0* features
>
> In features terms:
> *feature:install http webconsole**
> **feature:repo-add camel 3.9.0**
> **feature:install camel camel-netty camel-kafka**
> *
> ... and all works fine ;-)
> ... but ... trying to install some other camel feature (for example
> camel-sql or camel-stream) occurs the following error:
>
> karaf@root()> *feature:install camel-sql*
> Error executing command: Unable to resolve root: missing requirement
> [root] osgi.identity; osgi.identity=shell; type=karaf.feature;
> version="[4.3.2,4.3.2]";
> filter:="(&(osgi.identity=shell)(type=karaf.feature)(version>=4.3.2)(version<=4.3.2))"
>
> [caused by: Unable to resolve shell/4.3.2: missing requirement
> [shell/4.3.2] osgi.identity; osgi.identity=org.apache.karaf.shell.core;
> type=osgi.bundle; version="[4.3.2,4.3.2]"; resolution:=mandatory [caused
> by: Unable to resolve org.apache.karaf.shell.core/4.3.2: missing
> requirement [org.apache.karaf.shell.core/4.3.2] osgi.wiring.package;
>
> filter:="(&(osgi.wiring.package=org.osgi.framework)(version>=1.0.0)(!(version>=3.0.0)))"]]
> karaf@root()>
>
> In the log:
> 2021-05-17T20:09:52,508 | INFO  | pipe-feature:install camel-sql |
> FeaturesServiceImpl  | 18 - org.apache.karaf.features.core -
> 4.3.2 | Adding features: camel-sql/[3.9.0,3.9.0]
> 2021-05-17T20:09:52,601 | ERROR | Karaf ssh console user karaf |
> ShellUtil| 43 - org.apache.karaf.shell.core -
> 4.3.2 | Exception caught while executing command
> org.apache.felix.resolver.reason.ReasonException: Unable to resolve
> root: missing requirement [root] osgi.identity; osgi.identity=shell;
> type=karaf.feature; version="[4.3.2,4.3.2]";
> filter:="(&(osgi.identity=shell)(type=karaf.feature)(version>=4.3.2)(version<=4.3.2))"
>
> [caused by: Unable to resolve shell/4.3.2: missing requirement
> [shell/4.3.2] osgi.identity; osgi.identity=org.apache.karaf.shell.core;
> type=osgi.bundle; version="[4.3.2,4.3.2]"; resolution:=mandatory [caused
> by: Unable to resolve org.apache.karaf.shell.core/4.3.2: missing
> requirement [org.apache.karaf.shell.core/4.3.2] osgi.wiring.package;
>
> filter:="(&(osgi.wiring.package=org.osgi.framework)(version>=1.0.0)(!(version>=3.0.0)))"]]
>  at
> org.apache.felix.resolver.Candidates$MissingRequirementError.toException(Candidates.java:1341)
>
> ~[?:?]
>  at
> org.apache.felix.resolver.ResolverImpl.doResolve(ResolverImpl.java:433)
> ~[?:?]
>  at
> org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:420)
> ~[?:?]
>  at
> org.apache.felix.resolver.ResolverImpl.resolve(ResolverImpl.java:374)
> ~[?:?]
>  at
> org.apache.karaf.features.internal.region.SubsystemResolver.resolve(SubsystemResolver.java:257)
>
> ~[?:?]
>  at
> org.apache.karaf.features.internal.service.Deployer.deploy(Deployer.java:399)
>
> ~[?:?]
>  at
> org.apache.karaf.features.internal.service.FeaturesServiceImpl.doProvision(FeaturesServiceImpl.java:1069)
>
> ~[?:?]
>  at
> org.apache.karaf.features.internal.service.FeaturesServiceImpl.lambda$doProvisionInThread$13(FeaturesServiceImpl.java:1004)
>
> ~[?:?]
>  at java.util.concurrent.FutureTask.run(Unknown Source) ~[?:?]
>  at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown
> Source) ~[?:?]
>  at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown
> Source) ~[?:?]
>  at java.lang.Thread.run(Unknown Source) [?:?]
> Caused by: org.apache.felix.resolver.reason.ReasonException: Unable to
> resolve shell/4.3.2: missing requirement [shell/4.3.2] osgi.identity;
> osgi.identity=org.apache.karaf.shell.core; type=osgi.bundle;
> version="[4.3.2,4.3.2]"; resolution:=mandatory [caused by: Unable to
> resolve org.apache.karaf.shell.core/4.3.2: missing requirement
> [org.apache.karaf.shell.core/4.3.2] osgi.wiring.package;
>
> filter:="(&(osgi.wiring.package=org.osgi.framework)(version>=1.0.0)(!(version>=3.0.0)))"]
>  at
> org.apache.felix.resolver.Candidates$MissingRequirementError.toException(Candidates.java:1341)
>
> ~[?:?]
>  ... 12 more
> Caused by: org.apache.felix.resolver.reason.ReasonException: Unable to
> resolve org.apache.karaf.shell.core/4.3.2: missing requirement
> [org.apache.karaf.shell.core/4.3.2] osgi.wiring.package;
>
> filter:="(&(osgi.wiring.package=org.osgi.framework)(version>=1.0.0)(!(version>=3.0.0)))"
>  at
> org.apache.felix.resolver.Candidates$MissingRequirementError.toException(Candidates.java:1341)
>
> ~[?:?]
>  at
> org.apache.felix.resolver.Candidates$MissingRequirementError.toException(Candidates.java:1341)
>
> ~[?:?]
>  ... 12 more
>
> The strange thing is that org.apache.karaf.shell.core bundle is running !!
>
>
> On 5/17/21 7:02 PM, Jean-Baptiste Onofre wrote:
> > 

Re: Camel Kubernetes Deployment Component issue

2021-04-28 Thread Andrea Cosentino
Hello,

You need to specify the following header with the number of
replicas: CamelKubernetesDeploymentReplicas

Cheers.
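
A minimal sketch in Java DSL (header names as in the component documentation; the
endpoint URI, kubernetesClient bean and replica count are placeholders):

from("direct:scale")
    .setHeader("CamelKubernetesNamespaceName", constant("bikash-test"))
    .setHeader("CamelKubernetesDeploymentName", constant("frontend"))
    // without this header the producer fails with "Scale a specific deployment require specify a replicas number"
    .setHeader("CamelKubernetesDeploymentReplicas", constant(3))
    .to("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=scaleDeployment");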

On Wed, Apr 28, 2021 at 13:00 Bikash Kaushik <
kaushikbikas...@gmail.com> wrote:

> Hi Team,
>
> I'm facing an issue while setting a replica number using the camel
> Kubernetes deployment component.
>
> *Please help me in fixing this issue.*
>
> *Route :*
> 
> 
> 
> name="CamelKubernetesNamespaceName">bikash-test
>  name="CamelKubernetesDeploymentName">frontend
> 
> 
> 
>
> *Error :*
> java.lang.IllegalArgumentException: *Scale a specific deployment require
> specify a replicas number*
> at
>
> org.apache.camel.component.kubernetes.deployments.KubernetesDeploymentsProducer.doScaleDeployment(KubernetesDeploymentsProducer.java:186)
> at
>
> org.apache.camel.component.kubernetes.deployments.KubernetesDeploymentsProducer.process(KubernetesDeploymentsProducer.java:83)
> at
>
> org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:66)
> at
>
> org.apache.camel.processor.SendProcessor.lambda$process$2(SendProcessor.java:191)
> --
>
>
> *Regards,*
> *Bikash Kaushik*
>


Re: Scheduled polling with azure-storage-blob consumer?

2021-04-08 Thread Andrea Cosentino
Yeah, it looks wrong.

Please open an issue about this.

For the moment, you may think about using a pollEnrich:

from("timer:tick?period=1000")
  .pollEnrich("azure-storage-blob:/myContainer?regex=test/azure_test.*.csv")
  .to("")

On Thu, Apr 8, 2021 at 13:55  wrote:

> Hi,
>
>
>
> I want to use the azure-storage-blob consumer to download blobs from an
> azure storage.
>
> Very basic route for testing:
>
> from(“azure-storage-blob:/myContainer?regex=test/azure_test.*.csv”)
>
> .to("${body});
>
> This works just fine. However it I can’t figure out how I can set the
> parameters to control the polling
>
>
>
> The BlobConsumer class extends ScheduledBatchConsumer, so I figured I
> could use the normal “delay” parameter, but it doesn’t seem to recognize it.
>
> org.apache.camel.ResolveEndpointFailedException: Failed to resolve
> endpoint:
> azure-storage-blob:///myContainer?delay=30&regex=test%2Fazure_test.*.csv
> due to: There are 1 parameters that couldn't be set on the endpoint. Check
> the uri if the parameters are spelt correctly and that they are properties
> of the endpoint. Unknown parameters=[{delay=30}]
>
>
>
> Also the documentation at
> https://camel.apache.org/components/3.7.x/azure-storage-blob-component.html
> doesn’t mention any of the usual ScheduledConsumer parameters
>
>
>
> Is this by design? If yes, what is the correct way to set the polling
> delay?
>
>
>
> Camel version is 3.7.0
>
>
>
> regards,
>
> Lukas
> 
>
> Dipl. Ing. (FH) Lukas Angerer
> Senior Software Engineer
> Business Automation & Integration
> *Phone:* +43 662 4470 84937
> *Mobile:* +43 664 8159143
> *E-Mail:* lukas.ange...@spar-ics.com
>
> *SPAR Business Services GmbH *Information & Communication Services
> Europastrasse 3, 5015 Salzburg, Austria
>
>
>
>
>


Re: Kafka consumer wont recover after WakeupException

2021-04-01 Thread Andrea Cosentino
Please try with an LTS version 3.7.x. 3.2.0 was a development version.

On Thu, Apr 1, 2021 at 18:30 SRIKANT MVS wrote:

> HI Team,
>
> I am using camel-kafka (version: 3.2.0) for consuming messages.
> Below is the flow
>
>1. Kafka service consumes events from the topic
>2. Make a call to the Server
>3. When the server is not responding in 40ms, throw
>ServerUnavailableException
>4. Stopping Kafka consumer on the topic
>5. Unsubscribing from the topic testTopicV1
>6. *Error unsubscribing testTopicV1-Thread 0 from kafka topic
>testTopicV1. Caused by: [org.apache.kafka.common.errors.WakeupException
> -
>null]*
>7.
>
>
> *Also seen an InterruptedException with the message "Interrupted while
>waiting for consumer heartbeat thread to close" Once this happens, then
> the
>Kafka consumer never recovers and I have to start the consumer service
>manually. Any idea why this happens and how to mitigate from this issue
> ?
>2021-01-31 05:48:23.143+ WARN Camel (camel-1) thread #6 -
>KafkaConsumer[testTopicV1]
> [camel.component.kafka.KafkaConsumer(log:212)]
>Error unsubscribing testTopicV1-Thread 0 from kafka topic testTopicV1.
>Caused by: [org.apache.kafka.common.errors.WakeupException - null]
>2021-01-31 05:48:23.143+ INFO Camel (camel-1) thread #7 -
>KafkaConsumer[testTopicV1]
> [camel.component.kafka.KafkaConsumer(doRun:397)]
>Unsubscribing testTopicV1-Thread 1 from topic testTopicV1
>org.apache.kafka.common.errors.WakeupException: null at
>
>  
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.maybeTriggerWakeup(ConsumerNetworkClient.java:511)
>~[kafka-clients-2.4.0.jar!/:?] at
>
>  
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:275)
>~[kafka-clients-2.4.0.jar!/:?] at
>
>  
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:233)
>~[kafka-clients-2.4.0.jar!/:?] at
>
>  
> org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:212)
>~[kafka-clients-2.4.0.jar!/:?] at
>
>  
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.commitOffsetsSync(ConsumerCoordinator.java:937)
>~[kafka-clients-2.4.0.jar!/:?] at
>
>  
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1473)
>~[kafka-clients-2.4.0.jar!/:?] at
>
>  
> org.apache.kafka.clients.consumer.KafkaConsumer.commitSync(KafkaConsumer.java:1431)
>~[kafka-clients-2.4.0.jar!/:?] at
>
>  
> org.apache.camel.component.kafka.KafkaConsumer$KafkaFetchRecords.commitOffset(KafkaConsumer.java:432)
>~[camel-kafka-3.2.0.jar!/:3.2.0] at
>
>  
> org.apache.camel.component.kafka.KafkaConsumer$KafkaFetchRecords.onPartitionsRevoked(KafkaConsumer.java:455)
>~[camel-kafka-3.2.0.jar!/:3.2.0] at
>
>  
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.invokePartitionsRevoked(ConsumerCoordinator.java:291)
>~[kafka-clients-2.4.0.jar!/:?] at
>
>  
> org.apache.kafka.clients.consumer.internals.ConsumerCoordinator.onLeavePrepare(ConsumerCoordinator.java:707)
>~[kafka-clients-2.4.0.jar!/:?] at
>
>  
> org.apache.kafka.clients.consumer.KafkaConsumer.unsubscribe(KafkaConsumer.java:1073)
>~[kafka-clients-2.4.0.jar!/:?] at
>
>  
> org.apache.camel.component.kafka.KafkaConsumer$KafkaFetchRecords.doRun(KafkaConsumer.java:400)
>[camel-kafka-3.2.0.jar!/:3.2.0] at
>
>  
> org.apache.camel.component.kafka.KafkaConsumer$KafkaFetchRecords.run(KafkaConsumer.java:214)
>[camel-kafka-3.2.0.jar!/:3.2.0] at
>java.util.concurrent.Executors$RunnableAdapter.call(Unknown Source)
>[?:1.8.0_271] at java.util.concurrent.FutureTask.run(Unknown Source)
>[?:1.8.0_271] at
> java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown
>Source) [?:1.8.0_271] at
>java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
>[?:1.8.0_271] at java.lang.Thread.run(Unknown Source) [?:1.8.0_271]Full
>StackTrace 2021-01-31 05:48:13.138+ ERROR Camel (camel-1) thread #7
> -
>KafkaConsumer[testTopicV1]
>[camel.processor.errorhandler.DefaultErrorHandler(log:203)] Failed
> delivery
>for (MessageId: ID-TestId-1611647443253-0-25 on ExchangeId:
>ID-TestId-1611647443253-0-25). Exhausted after delivery attempt: 1
> caught:
>com.example.exception.ServerUnavailableException: RestClientException
>during rest call Message History (complete message history is disabled)
>
>  
> ---
>RouteId ProcessorId Processor Elapsed (ms) [route1 ] [route1 ]
>
>  [from[kafka://testTopicV1?allowManualCommit=True=False] [
>40027] ... [route1 ] [to1 ] [bean:kafkaProcessor ] [ 0] Stacktrace
>
>  
> 

Re: Error when setting up a new Camel S3 Source Connector

2021-03-07 Thread Andrea Cosentino
Those files in confluent platform are under the
$CONFLUENT_PLATFORM_HOME/etc/kafka

in particular:

- connect-standalone.properties
- connect-distributed.properties
- producer.properties
- consumer.properties
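
For example, in connect-standalone.properties or connect-distributed.properties
the worker's internal producer can be tuned roughly like this (the size below is
just an illustrative value):

# raise the max request size of the producer used by the Connect worker
producer.max.request.size=67108864

Depending on the message size, the broker and topic limits (message.max.bytes /
max.message.bytes) may need raising as well.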

Il giorno sab 6 mar 2021 alle ore 17:20 Andrea Cosentino 
ha scritto:

> I know what confluent platform is, it is just I'm not familiar with its
> configuration files location.
>
> Strimzi is a project to deploy and manage a kafka cluster on top of
> kubernetes/openshift
>
> Il sab 6 mar 2021, 17:10 Arundhati Bhende 
> ha scritto:
>
>> Thanks.  Yes, Confluent is some additional components on top of kafka
>> components, I believe schema registry is one of those additional components
>> that it provids , but it is still apache kafka.  And unfortunately, I don't
>> know what Strimzi is.   Maybe, a wrapper on Kafka, similar to Confluent.
>>
>> Does anyone else have idea about where do I need to make these changes
>> wrt Confluent?
>>
>> Regards
>>
>> On 3/6/21, 10:11 AM, "Andrea Cosentino"  wrote:
>>
>> I'm not expert of Confluent platform, I was talking about Apache
>> Kafka.
>>
>> Il giorno sab 6 mar 2021 alle ore 15:54 Arundhati Bhende <
>> arundhati.bhe...@prudential.com> ha scritto:
>>
>> > Thank you.
>> >
>> > It is a Confluent implementation ( Cluster ).
>> >
>> > So, do I need to make this change on all connect servers ? (  2 in
>> our
>> > case )
>> >
>> > And which files do I update?  Somewhere I was  reading we have to
>> update 3
>> > files, connect properties, producer and consumer configs, but when
>> I looked
>> > I did not find the connect properties file on those seervers.
>> >
>> > Am I looking in the wrong place?
>> >
>> > Regards
>> >
>> >
>> > On 3/6/21, 8:49 AM, "Andrea Cosentino"  wrote:
>> >
>> > Hello
>> >
>> > 1 yes, the max result poll number of files will be picked up
>> > immediately.
>> >
>> > 2. If you are using the kafka connect in standalone mode,
>> you'll need
>> > to
>> > modify the config/connect-standalone.properties and add the
>> following
>> > prop
>> > producer.max.request.size and set the value
>> >
>> > In Strimzi you have to specify this property for the kafka
>> connect
>> > cluster.
>> >
>> > If it's a docker container you'll need to modify the connect
>> conf as
>> > well
>> >
>> > 3. Yes, it happens because the bucket contains files bigger
>> than the
>> > default size.
>> >
>> >
>> >
>> > Il sab 6 mar 2021, 14:13 Arundhati Bhende <
>> > arundhati.bhe...@prudential.com>
>> > ha scritto:
>> >
>> > > Hi,  where  can I find thee below information?   Trying to
>> > understand how
>> > > the s3 source connector works and if I can make changes in the
>> > config if I
>> > > can't make changes on the server tp add max.request.sizee?
>> > >
>> > > Thanks
>> > >
>> > >
>> > > On 3/4/21, 5:12 PM, "Arundhati Bhende" <
>> > arundhati.bhe...@prudential.com>
>> > > wrote:
>> > >
>> > > I am trying to set up a Camel S3 Source connector.  The
>> S3 bucket
>> > > already exists.
>> > >
>> > > I have 2 questions
>> > >
>> > > 1.  If the bucket already exists and has files in the
>> > bucket, when
>> > > we define the new connector, are those files immediately read
>> into
>> > the
>> > > topic?
>> > >
>> > > 2.  The error I am getting is
>> > >
>> > >   The message is 57139013 bytes when serialized which
>> is
>> > larger
>> > > than 1048576, which is the value of the max.request.size
>> > configuration
>> > >
>> > >   Where should I change the max.request.size Value?
>> Do I
>> > add that
>> > > to the connector config?
>> > >
>> > > 3.  Why am I getting this error - right after I
>> create the
>> > > connector. Is it because 1 happens and that the bucket has
>> files
>> > larger
>> > > than the default size?
>> > >
>> > > Thank you
>> > > Aru
>> > >
>> > >
>> > >
>> > >
>> >
>> >
>>
>>


Re: Error when setting up a new Camel S3 Source Connector

2021-03-06 Thread Andrea Cosentino
I know what confluent platform is, it is just I'm not familiar with its
configuration files location.

Strimzi is a project to deploy and manage a kafka cluster on top of
kubernetes/openshift

Il sab 6 mar 2021, 17:10 Arundhati Bhende 
ha scritto:

> Thanks.  Yes, Confluent is some additional components on top of kafka
> components, I believe schema registry is one of those additional components
> that it provids , but it is still apache kafka.  And unfortunately, I don't
> know what Strimzi is.   Maybe, a wrapper on Kafka, similar to Confluent.
>
> Does anyone else have idea about where do I need to make these changes wrt
> Confluent?
>
> Regards
>
> On 3/6/21, 10:11 AM, "Andrea Cosentino"  wrote:
>
> I'm not expert of Confluent platform, I was talking about Apache Kafka.
>
> Il giorno sab 6 mar 2021 alle ore 15:54 Arundhati Bhende <
> arundhati.bhe...@prudential.com> ha scritto:
>
> > Thank you.
> >
> > It is a Confluent implementation ( Cluster ).
> >
> > So, do I need to make this change on all connect servers ? (  2 in
> our
> > case )
> >
> > And which files do I update?  Somewhere I was  reading we have to
> update 3
> > files, connect properties, producer and consumer configs, but when I
> looked
> > I did not find the connect properties file on those seervers.
> >
> > Am I looking in the wrong place?
> >
> > Regards
> >
> >
> > On 3/6/21, 8:49 AM, "Andrea Cosentino"  wrote:
> >
> > Hello
> >
> > 1 yes, the max result poll number of files will be picked up
> > immediately.
> >
> > 2. If you are using the kafka connect in standalone mode, you'll
> need
> > to
> > modify the config/connect-standalone.properties and add the
> following
> > prop
> > producer.max.request.size and set the value
> >
> > In Strimzi you have to specify this property for the kafka
> connect
> > cluster.
> >
> > If it's a docker container you'll need to modify the connect
> conf as
> > well
> >
> > 3. Yes, it happens because the bucket contains files bigger than
> the
> > default size.
> >
> >
> >
> > Il sab 6 mar 2021, 14:13 Arundhati Bhende <
> > arundhati.bhe...@prudential.com>
> > ha scritto:
> >
> > > Hi,  where  can I find thee below information?   Trying to
> > understand how
> > > the s3 source connector works and if I can make changes in the
> > config if I
> > > can't make changes on the server tp add max.request.sizee?
> > >
> > > Thanks
> > >
> > >
> > > On 3/4/21, 5:12 PM, "Arundhati Bhende" <
> > arundhati.bhe...@prudential.com>
> > > wrote:
> > >
> > > I am trying to set up a Camel S3 Source connector.  The S3
> bucket
> > > already exists.
> > >
> > > I have 2 questions
> > >
> > > 1.  If the bucket already exists and has files in the
> > bucket, when
> > > we define the new connector, are those files immediately read
> into
> > the
> > > topic?
> > >
> > > 2.  The error I am getting is
> > >
> > >   The message is 57139013 bytes when serialized which
> is
> > larger
> > > than 1048576, which is the value of the max.request.size
> > configuration
> > >
> > >   Where should I change the max.request.size Value?
> Do I
> > add that
> > > to the connector config?
> > >
> > > 3.  Why am I getting this error - right after I create
> the
> > > connector. Is it because 1 happens and that the bucket has
> files
> > larger
> > > than the default size?
> > >
> > > Thank you
> > > Aru
> > >
> > >
> > >
> > >
> >
> >
>
>


Re: Error when setting up a new Camel S3 Source Connector

2021-03-06 Thread Andrea Cosentino
I'm not an expert on the Confluent platform; I was talking about Apache Kafka.

Il giorno sab 6 mar 2021 alle ore 15:54 Arundhati Bhende <
arundhati.bhe...@prudential.com> ha scritto:

> Thank you.
>
> It is a Confluent implementation ( Cluster ).
>
> So, do I need to make this change on all connect servers ? (  2 in our
> case )
>
> And which files do I update?  Somewhere I was  reading we have to update 3
> files, connect properties, producer and consumer configs, but when I looked
> I did not find the connect properties file on those seervers.
>
> Am I looking in the wrong place?
>
> Regards
>
>
> On 3/6/21, 8:49 AM, "Andrea Cosentino"  wrote:
>
> Hello
>
> 1 yes, the max result poll number of files will be picked up
> immediately.
>
> 2. If you are using the kafka connect in standalone mode, you'll need
> to
> modify the config/connect-standalone.properties and add the following
> prop
> producer.max.request.size and set the value
>
> In Strimzi you have to specify this property for the kafka connect
> cluster.
>
> If it's a docker container you'll need to modify the connect conf as
> well
>
> 3. Yes, it happens because the bucket contains files bigger than the
> default size.
>
>
>
> Il sab 6 mar 2021, 14:13 Arundhati Bhende <
> arundhati.bhe...@prudential.com>
> ha scritto:
>
> > Hi,  where  can I find thee below information?   Trying to
> understand how
> > the s3 source connector works and if I can make changes in the
> config if I
> > can't make changes on the server tp add max.request.sizee?
> >
> > Thanks
> >
> >
> > On 3/4/21, 5:12 PM, "Arundhati Bhende" <
> arundhati.bhe...@prudential.com>
> > wrote:
> >
> > I am trying to set up a Camel S3 Source connector.  The S3 bucket
> > already exists.
> >
> > I have 2 questions
> >
> > 1.  If the bucket already exists and has files in the
> bucket, when
> > we define the new connector, are those files immediately read into
> the
> > topic?
> >
> > 2.  The error I am getting is
> >
> >   The message is 57139013 bytes when serialized which is
> larger
> > than 1048576, which is the value of the max.request.size
> configuration
> >
> >   Where should I change the max.request.size Value?  Do I
> add that
> > to the connector config?
> >
> > 3.  Why am I getting this error - right after I create the
> > connector. Is it because 1 happens and that the bucket has files
> larger
> > than the default size?
> >
> > Thank you
> > Aru
> >
> >
> >
> >
>
>


Re: Error when setting up a new Camel S3 Source Connector

2021-03-06 Thread Andrea Cosentino
Hello

1. Yes, up to the configured maximum number of files per poll will be picked up
immediately.

2. If you are using Kafka Connect in standalone mode, you'll need to modify
config/connect-standalone.properties and add the property
producer.max.request.size with the value you need.

In Strimzi you have to specify this property in the Kafka Connect cluster
configuration (see the sketch after this answer).

If it's a docker container you'll need to modify the Connect configuration as
well.

3. Yes, it happens because the bucket contains files bigger than the default
size.
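
For the Strimzi case, a minimal sketch of what that could look like in the
KafkaConnect custom resource; the apiVersion, names and size value are
illustrative and depend on your Strimzi version:

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect-cluster
spec:
  bootstrapServers: my-cluster-kafka-bootstrap:9092
  config:
    # raises the max request size of the Connect worker's internal producer
    producer.max.request.size: 67108864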



Il sab 6 mar 2021, 14:13 Arundhati Bhende 
ha scritto:

> Hi,  where  can I find thee below information?   Trying to understand how
> the s3 source connector works and if I can make changes in the config if I
> can't make changes on the server tp add max.request.sizee?
>
> Thanks
>
>
> On 3/4/21, 5:12 PM, "Arundhati Bhende" 
> wrote:
>
> I am trying to set up a Camel S3 Source connector.  The S3 bucket
> already exists.
>
> I have 2 questions
>
> 1.  If the bucket already exists and has files in the bucket, when
> we define the new connector, are those files immediately read into the
> topic?
>
> 2.  The error I am getting is
>
>   The message is 57139013 bytes when serialized which is larger
> than 1048576, which is the value of the max.request.size configuration
>
>   Where should I change the max.request.size Value?  Do I add that
> to the connector config?
>
> 3.  Why am I getting this error - right after I create the
> connector. Is it because 1 happens and that the bucket has files larger
> than the default size?
>
> Thank you
> Aru
>
>
>
>


Re: Camel AWS S3 Source Connector - what is the key?

2021-02-11 Thread Andrea Cosentino
Yes, that's the way to set it.

Il gio 11 feb 2021, 11:33 Arundhati Bhende 
ha scritto:

> Thanks Andrea.
>
> How would ii use it inn my configuration?  I  presume, it would be like
> below in  the connector configuration?
>
> camel.source.camelMessageHeaderKey= CamelAwsS3Key
>
>
> -AB
>
> On 2/11/21, 1:47 AM, "Andrea Cosentino"  wrote:
>
> Hi,
>
> You need to set the following option in your
> configuration: camel.source.camelMessageHeaderKey
>
> The description "The name of a camel message header containing an
> unique
> key that can be used as a Kafka message key. If this is not specified,
> then
> the Kafka message will not have a key."
>
> For example you can use the file name, which it is stored in
> CamelAwsS3Key
> Camel header.
>
> Il giorno gio 11 feb 2021 alle ore 04:41 Arundhati Bhende <
> arundhati.bhe...@prudential.com> ha scritto:
>
> > Hello,
> >
> > When setting up the AWS S3 source connector, when we add a file to
> S3,
> > the entire file gets added to the topic as a message ( value ).
> What is
> > the key for this file, where is it defined or is there a way we can
> define
> > the key for such a connector?
> >
> > Thank you
> > Arundhati
> >
> >
>
>


Re: Replacement for camel-xmljson for schema-less xml -> json

2021-02-11 Thread Andrea Cosentino
Hello,

The available alternative is using camel-jaxb in combination
with camel-jackson or any of the JSON components we provide.

There is no direct solution out of the box.
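
A minimal sketch of that combination; it assumes you have (or generate)
JAXB-annotated classes for the XML, here a hypothetical com.example.model package:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.model.dataformat.JsonLibrary;

public class XmlToJsonRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:xmlToJson")
            .unmarshal().jaxb("com.example.model")   // XML -> annotated POJOs
            .marshal().json(JsonLibrary.Jackson)     // POJOs -> JSON
            .to("mock:result");
    }
}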

Il giorno gio 11 feb 2021 alle ore 11:17 Keith Herbert 
ha scritto:

> Hi all,
>
> camel-xmljson is deprecated and even it's documentation has recently been
> removed from the Camel website.
>
> Is there another DataFormatter that can perform xml->json without an xsd
> file or annotated pojos?
>
> The issue https://issues.apache.org/jira/browse/CAMEL-12995 says that it
> can replaced by "combinations of two components", but which two components?
>
> I could do this manually in Jackson writing a bean method to transform the
> xml to an ObjectNode and then back to json, but is there a more idiomatic
> way to do this?
>
> Thank you all,
> Keith
>


Re: Query on Camel JDBC Adapter use

2021-02-10 Thread Andrea Cosentino
Hello,

Please don't re-post the same question multiple times.

Here is an example of jdbc component usage:

https://github.com/apache/camel-examples/tree/master/examples/camel-example-jdbc
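
For reference, the core pattern in that example boils down to something like the
sketch below; the datasource bean name "myDataSource" and the query are placeholders:

from("timer:query?period=60000")
    // the message body carries the SQL; "myDataSource" must be a
    // javax.sql.DataSource bean registered in the Camel registry
    .setBody(constant("select * from orders"))
    .to("jdbc:myDataSource")
    .log("${body}");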

Il giorno gio 11 feb 2021 alle ore 08:30 Sandeep3 Singh
 ha scritto:

> Hi Camel Team,
>
> I am planning to use Camel JDBC adapter to connect SAP-HANA with
> Snowflake database. SAP vendor has recommended to use this adapter and
> hence we want to check if it works. I have the below queries about this
> adapter
>
> What all Apache softwares are required to make Camel JDBC adapter work ?
> Do I need only the adapter or I would also need to install the Apache
> Camel Framework ? Or more softwares would be required ? Please provide the
> list.
> I would highly appreciate if you can share the links for all the Apache
> software downloads to make the Camel JDBC adapter work.
> Any other instructions/recommendations that I need to consider.
>
>   An early response would be appreciated as this is critical for our
> project.
>
> Regards
> Sandeep Singh
> =-=-=
> Notice: The information contained in this e-mail
> message and/or attachments to it may contain
> confidential or privileged information. If you are
> not the intended recipient, any dissemination, use,
> review, distribution, printing or copying of the
> information contained in this e-mail message
> and/or attachments to it are strictly prohibited. If
> you have received this communication in error,
> please notify us by reply e-mail or telephone and
> immediately and permanently delete the message
> and any attachments. Thank you
>
>
>


Re: Camel AWS S3 Source Connector - what is the key?

2021-02-10 Thread Andrea Cosentino
Hi,

You need to set the following option in your
configuration: camel.source.camelMessageHeaderKey

The description "The name of a camel message header containing an unique
key that can be used as a Kafka message key. If this is not specified, then
the Kafka message will not have a key."

For example you can use the file name, which is stored in the CamelAwsS3Key
Camel header.
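
As a sketch, the option just sits alongside the rest of the source connector
configuration (all values below are placeholders):

{
    "connector.class": "org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.kafka.connect.storage.StringConverter",
    "topics": "MY-S3-SOURCE-TOPIC",
    "camel.source.path.bucketNameOrArn": "my-bucket",
    "camel.component.aws-s3.region": "US_EAST_1",
    "camel.source.camelMessageHeaderKey": "CamelAwsS3Key"
}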

Il giorno gio 11 feb 2021 alle ore 04:41 Arundhati Bhende <
arundhati.bhe...@prudential.com> ha scritto:

> Hello,
>
> When setting up the AWS S3 source connector, when we add a file to S3,
> the entire file gets added to the topic as a message ( value ).  What is
> the key for this file, where is it defined or is there a way we can define
> the key for such a connector?
>
> Thank you
> Arundhati
>
>


Re: Component "camel-ssh" broken? - NoSuchMethodError: ConnectFuture.getSession()

2021-01-27 Thread Andrea Cosentino
Yes, but the jetty alias was missing, so we need 4.3.1 release.

Il giorno mer 27 gen 2021 alle ore 09:13 Jean-Baptiste Onofre <
j...@nanthrax.net> ha scritto:

> It’s not tested, but most of the features work fine with Karaf 4.3.0.
>
> However, I agree: I will upgrade to Karaf 4.3.0 for Camel Karaf 3.8.0
> (it’s already done ;) ).
>
> Regards
> JB
>
> > Le 27 janv. 2021 à 06:30, Andrea Cosentino  a écrit :
> >
> > Camel 3.7.x doesn't support Karaf 4.3.0, the last version we tested is
> 4.2.9
> >
> > Il giorno mar 26 gen 2021 alle ore 22:00 Schulze, Jan <
> > jan.schu...@uni-tuebingen.de> ha scritto:
> >
> >> Hi Claus,
> >>
> >>
> >> thanks for your reply. I forgot to mention, that I am using Camel in
> Karaf.
> >>
> >> Along with Camel I also upgraded Karaf (4.2.9 => 4.3.0).
> >>
> >>
> >> When using Camel 2.7.1 with Karaf 4.2.9, the camel-ssh component is
> >> working without problems.
> >>
> >>
> >> Also, the camel features URLs for installing Camel into Karaf both
> specify
> >> mvn:org.apache.sshd/sshd-core/2.0.0 as dependency for camel-ssh
> >>
> >>
> >>
> https://repo1.maven.org/maven2/org/apache/camel/karaf/apache-camel/3.4.0/apache-camel-3.4.0-features.xml
> >>
> >>
> >>
> https://repo1.maven.org/maven2/org/apache/camel/karaf/apache-camel/3.7.1/apache-camel-3.7.1-features.xml
> >>
> >>
> >> So it seems to be a Karaf-related issue.
> >>
> >>
> >> Using "feature:install camel-ssh" results in the following in a clean
> >> Karaf 4.2.9:
> >>
> >>
> >> karaf@root()> bundle:list -t 0 | grep -i ssh
> >> 35 x Active   x  30 x 4.2.9x Apache Karaf :: Shell :: SSH
> >> 38 x Active   x  30 x 1.7.0x Apache Mina SSHD :: Core
> >> 79 x Active   x  50 x 3.7.1x camel-ssh
> >> 101 x Active   x  50 x 2.0.0x Apache Mina SSHD :: Core
> >>
> >>
> >> And in a clean Karaf 4.3.0 it results in:
> >>
> >> karaf@root()> bundle:list -t 0 | grep -i ssh
> >> 44 x Active   x  30 x 4.3.0  x Apache Karaf :: Shell :: SSH
> >> 46 x Active   x  30 x 2.5.1  x Apache Mina SSHD :: OSGi
> >> 47 x Active   x  30 x 2.5.1  x Apache Mina SSHD :: SCP
> >> 48 x Active   x  30 x 2.5.1  x Apache Mina SSHD :: SFTP
> >> 121 x Active   x  50 x 3.7.1  x camel-ssh
> >>
> >>
> >> I'm too tired right now to persue this any further. Maybe I can resolve
> it
> >> tomorrow.
> >>
> >>
> >>
> >> Regards
> >> --
> >> Jan Schulze
> >> Eberhard Karls Universität Tübingen
> >> 
> >> Von: Claus Ibsen 
> >> Gesendet: Dienstag, 26. Januar 2021 13:26:16
> >> An: users@camel.apache.org
> >> Betreff: Re: Component "camel-ssh" broken? - NoSuchMethodError:
> >> ConnectFuture.getSession()
> >>
> >> Hi
> >>
> >> Thanks for reporting. Can you create a JIRA ticket.
> >> And would you be able to try test with switching to use mina 2.0 JAR
> >> on the classpath but keep using the 3.7.1 camel version.
> >>
> >> On Tue, Jan 26, 2021 at 1:20 PM Schulze, Jan
> >>  wrote:
> >>>
> >>> Hi,
> >>>
> >>> "camel-ssh" component is throwing a CamelExecutionException when it is
> >> used to write a file via SSH.
> >>> It used to be working in Camel 3.4.0, but after upgrading to 3.7.1, I
> >> observe the following stack trace:
> >>>
> >>> org.apache.camel.CamelExecutionException: Exception occurred during
> >> execution on the exchange: Exchange[F2205BFA0B31B1C-]
> >>>at
> >>
> org.apache.camel.CamelExecutionException.wrapCamelExecutionException(CamelExecutionException.java:45)
> >> ~[!/:3.7.1]
> >>>at
> >>
> org.apache.camel.support.DefaultExchange.setException(DefaultExchange.java:425)
> >> ~[!/:3.7.1]
> >>>at
> >>
> org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:69)
> >> ~[!/:3.7.1]
> >>>at
> >>
> org.apache.camel.processor.SendProcessor.lambda$process$2(SendProcessor.java:188)
> >> ~[!/:3.7.1]
> >>>at
> >>
> org.apache.cam

Re: Component "camel-ssh" broken? - NoSuchMethodError: ConnectFuture.getSession()

2021-01-26 Thread Andrea Cosentino
Camel 3.7.x doesn't support Karaf 4.3.0, the last version we tested is 4.2.9

Il giorno mar 26 gen 2021 alle ore 22:00 Schulze, Jan <
jan.schu...@uni-tuebingen.de> ha scritto:

> Hi Claus,
>
>
> thanks for your reply. I forgot to mention, that I am using Camel in Karaf.
>
> Along with Camel I also upgraded Karaf (4.2.9 => 4.3.0).
>
>
> When using Camel 2.7.1 with Karaf 4.2.9, the camel-ssh component is
> working without problems.
>
>
> Also, the camel features URLs for installing Camel into Karaf both specify
> mvn:org.apache.sshd/sshd-core/2.0.0 as dependency for camel-ssh
>
>
> https://repo1.maven.org/maven2/org/apache/camel/karaf/apache-camel/3.4.0/apache-camel-3.4.0-features.xml
>
>
> https://repo1.maven.org/maven2/org/apache/camel/karaf/apache-camel/3.7.1/apache-camel-3.7.1-features.xml
>
>
> So it seems to be a Karaf-related issue.
>
>
> Using "feature:install camel-ssh" results in the following in a clean
> Karaf 4.2.9:
>
>
> karaf@root()> bundle:list -t 0 | grep -i ssh
>  35 x Active   x  30 x 4.2.9x Apache Karaf :: Shell :: SSH
>  38 x Active   x  30 x 1.7.0x Apache Mina SSHD :: Core
>  79 x Active   x  50 x 3.7.1x camel-ssh
> 101 x Active   x  50 x 2.0.0x Apache Mina SSHD :: Core
>
>
> And in a clean Karaf 4.3.0 it results in:
>
> karaf@root()> bundle:list -t 0 | grep -i ssh
>  44 x Active   x  30 x 4.3.0  x Apache Karaf :: Shell :: SSH
>  46 x Active   x  30 x 2.5.1  x Apache Mina SSHD :: OSGi
>  47 x Active   x  30 x 2.5.1  x Apache Mina SSHD :: SCP
>  48 x Active   x  30 x 2.5.1  x Apache Mina SSHD :: SFTP
> 121 x Active   x  50 x 3.7.1  x camel-ssh
>
>
> I'm too tired right now to persue this any further. Maybe I can resolve it
> tomorrow.
>
>
>
> Regards
> --
> Jan Schulze
> Eberhard Karls Universität Tübingen
> 
> Von: Claus Ibsen 
> Gesendet: Dienstag, 26. Januar 2021 13:26:16
> An: users@camel.apache.org
> Betreff: Re: Component "camel-ssh" broken? - NoSuchMethodError:
> ConnectFuture.getSession()
>
> Hi
>
> Thanks for reporting. Can you create a JIRA ticket.
> And would you be able to try test with switching to use mina 2.0 JAR
> on the classpath but keep using the 3.7.1 camel version.
>
> On Tue, Jan 26, 2021 at 1:20 PM Schulze, Jan
>  wrote:
> >
> > Hi,
> >
> > "camel-ssh" component is throwing a CamelExecutionException when it is
> used to write a file via SSH.
> > It used to be working in Camel 3.4.0, but after upgrading to 3.7.1, I
> observe the following stack trace:
> >
> > org.apache.camel.CamelExecutionException: Exception occurred during
> execution on the exchange: Exchange[F2205BFA0B31B1C-]
> > at
> org.apache.camel.CamelExecutionException.wrapCamelExecutionException(CamelExecutionException.java:45)
> ~[!/:3.7.1]
> > at
> org.apache.camel.support.DefaultExchange.setException(DefaultExchange.java:425)
> ~[!/:3.7.1]
> > at
> org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:69)
> ~[!/:3.7.1]
> > at
> org.apache.camel.processor.SendProcessor.lambda$process$2(SendProcessor.java:188)
> ~[!/:3.7.1]
> > at
> org.apache.camel.support.cache.DefaultProducerCache.doInAsyncProducer(DefaultProducerCache.java:317)
> ~[!/:3.7.1]
> > at
> org.apache.camel.processor.SendProcessor.process(SendProcessor.java:187)
> ~[!/:3.7.1]
> > at
> org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$RedeliveryTask.doRun(RedeliveryErrorHandler.java:714)
> [!/:3.7.1]
> > at
> org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$RedeliveryTask.run(RedeliveryErrorHandler.java:623)
> [!/:3.7.1]
> > at
> org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:148)
> [!/:3.7.1]
> > at
> org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:60)
> [!/:3.7.1]
> > at
> org.apache.camel.processor.Pipeline.process(Pipeline.java:147) [!/:3.7.1]
> > at
> org.apache.camel.impl.engine.CamelInternalProcessor.process(CamelInternalProcessor.java:312)
> [!/:3.7.1]
> > at
> org.apache.camel.component.timer.TimerConsumer.sendTimerExchange(TimerConsumer.java:207)
> [!/:3.7.1]
> > at
> org.apache.camel.component.timer.TimerConsumer$1.run(TimerConsumer.java:76)
> [!/:3.7.1]
> > at java.util.TimerThread.mainLoop(Timer.java:556) [?:?]
> > at java.util.TimerThread.run(Timer.java:506) [?:?]
> > Caused by: java.lang.NoSuchMethodError:
> 'org.apache.sshd.client.session.ClientSession
> org.apache.sshd.client.future.ConnectFuture.getSession()'
> > at
> org.apache.camel.component.ssh.SshHelper.sendExecCommand(SshHelper.java:84)
> ~[?:?]
> > at
> org.apache.camel.component.ssh.SshProducer.process(SshProducer.java:74)
> ~[?:?]
> > at
> 

Re: Camel-K / Kafka / Avro: something broke overnight

2021-01-14 Thread Andrea Cosentino
I don't think it is feasible from our side to get to 3.6.0, it's up to you.

Can you try to set the additionalProperties at component level like this
example?

https://github.com/apache/camel/blob/master/components/camel-kafka/src/test/java/org/apache/camel/component/kafka/KafkaComponentTest.java#L63
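
Roughly, the component-level equivalent is something like the sketch below
(written as it would sit inside a RouteBuilder's configure(); the Apicurio
property key is taken from your snippet, the rest is illustrative and untested):

import java.util.HashMap;
import java.util.Map;
import org.apache.camel.component.kafka.KafkaComponent;

// configure the shared "kafka" component once, before any route uses it
KafkaComponent kafka = getContext().getComponent("kafka", KafkaComponent.class);
Map<String, Object> additional = new HashMap<>();
additional.put("apicurio.registry.url", schemaRegistry); // forwarded to the Kafka client config
kafka.getConfiguration().setAdditionalProperties(additional);

The same thing should also be expressible directly in the endpoint URI with the
additionalProperties. prefix, e.g. "&additionalProperties.apicurio.registry.url="
+ schemaRegistry, if that prefix is supported in your Camel version.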

Il giorno gio 14 gen 2021 alle ore 17:28 mark  ha
scritto:

> Hello,
> Until yesterday we were using Camel-K to send Avro-encoded messages to
> Kafka using code of the form,
>
> .to("kafka:{{topic}}?brokers={{kafka-bootstrap}}"
>   +
>
> "=org.apache.kafka.common.serialization.StringSerializer"
>   +
> "=io.apicurio.registry.utils.serde.AvroKafkaSerializer"
>   + "=" +
> schemaRegistry
>
> Running this today, our Integration had picked up camel-kafka 3.7.0 and
> issue https://issues.apache.org/jira/browse/CAMEL-15770 which changed the
> first two properties to keySerializer and valueSerializer. However
> adjusting the code to,
>
> .to("kafka:{{topic}}?brokers={{kafka-bootstrap}}"
> +
> "=org.apache.kafka.common.serialization.StringSerializer"
> +
> "=io.apicurio.registry.utils.serde.AvroKafkaSerializer"
> + "=" + schemaRegistry
>
> Leaves us no longer able to set additionalProperties. Attempts such as
> adding
>
> -d mvn:org.apache.camel/camel-kafka:[3.6.0]
>
> have no effect: we always get camel-kafka 3.7 and cannot pin the
> Integration to 3.6 or below. The relevant stack trace is below - please can
> anyone help us either adjust the apicurio.registry.url for the new library,
> or pin things back to 3.6 please?
>
> [1] Caused by: org.apache.kafka.common.KafkaException: Failed to construct
> kafka producer
> [1] at
>
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:434)
> [1] at
>
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:298)
> [1] at
>
> org.apache.camel.component.kafka.KafkaProducer.doStart(KafkaProducer.java:115)
> [1] at
> org.apache.camel.support.service.BaseService.start(BaseService.java:115)
> [1] at
>
> org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:84)
> [1] at
>
> org.apache.camel.impl.engine.AbstractCamelContext.internalAddService(AbstractCamelContext.java:1425)
> [1] at
>
> org.apache.camel.impl.engine.AbstractCamelContext.addService(AbstractCamelContext.java:1343)
> [1] at
> org.apache.camel.processor.SendProcessor.doStart(SendProcessor.java:236)
> [1] at
> org.apache.camel.support.service.BaseService.start(BaseService.java:115)
> [1] at
>
> org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:84)
> [1] at
>
> org.apache.camel.support.service.ServiceHelper.startService(ServiceHelper.java:101)
> [1] at
>
> org.apache.camel.processor.errorhandler.RedeliveryErrorHandler.doStart(RedeliveryErrorHandler.java:1487)
> [1] at
>
> org.apache.camel.support.ChildServiceSupport.start(ChildServiceSupport.java:60)
> [1] ... 37 more
> [1] Caused by: java.lang.IllegalArgumentException: Missing registry base
> url, set apicurio.registry.url
> [1] at
>
> io.apicurio.registry.utils.serde.AbstractKafkaSerDe.configure(AbstractKafkaSerDe.java:120)
> [1] at
>
> io.apicurio.registry.utils.serde.AbstractKafkaStrategyAwareSerDe.configure(AbstractKafkaStrategyAwareSerDe.java:75)
> [1] at
>
> io.apicurio.registry.utils.serde.AvroKafkaSerializer.configure(AvroKafkaSerializer.java:78)
> [1] at
>
> org.apache.kafka.clients.producer.KafkaProducer.(KafkaProducer.java:369)
> [1] ... 49 more
>
> My dependencies are currently,
>
> // camel-k:
> dependency=mvn:org.apache.camel.quarkus/camel-quarkus-kafka:1.5.0
> // camel-k: dependency=mvn:org.apache.avro/avro:1.10.1
> // camel-k: dependency=mvn:org.glassfish.jersey.core/jersey-common:2.22.2
> // camel-k:
> dependency=mvn:io.apicurio/apicurio-registry-utils-serde:1.3.2.Final
>
> Many thanks in advance,
> Regards,
>
> Mark
>


Re: Are you using binary distribution?

2021-01-14 Thread Andrea Cosentino
No objection, but this was related to a general discussion about binary
distribution of all the Camel subprojects.

Il giorno gio 14 gen 2021 alle ore 14:16 Jean-Baptiste Onofre <
j...@nanthrax.net> ha scritto:

> Hi,
>
> What about a Karaf-camel (I proposed and started to work on karamel ;) ) ?
>
> I have a branch where I have karamel distribution (including resources for
> Kubernetes).
>
> If there’s no objection, I will create the camel-karat PR.
>
> Thoughts ?
>
> Regards
> JB
>
> > Le 14 déc. 2020 à 14:37, Zoran Regvart  a écrit :
> >
> > Hi Cameleers,
> > we're discussing binary distribution on two issues[1][2]. The binary
> > distribution is the tar.gz/ZIP file linked from the Camel website. By
> > ASF policy we only ship source code, and the binary distribution is
> > optional.
> >
> > Back in the dark days, before using build tools that knew about
> > dependency management (Maven, Gradle...) folk used to use the binary
> > distribution.
> >
> > I've found some statistics on downloads/per day, for us[3]/eu[4] and
> > created these charts:
> >
> > https://s.apache.org/camel-dl-us
> > https://s.apache.org/camel-dl-eu
> >
> > The data is over 2 and 1/4 years, we've had 19.7+-8.8 via US, and
> > 20.24+-8.43 in via EU per day. So not that much IMHO.
> >
> > I'm wondering if anyone is still relying on these, and if so what
> > would a binary distribution look like for sub projects? Should we do
> > the same as we do for the Camel core?
> >
> > Please reply on this thread or chime in on those issues for
> > sub-project specific concerns.
> >
> > Thanks :)
> >
> > zoran
> >
> > [1] https://github.com/apache/camel-quarkus/issues/2045
> > [2] https://github.com/apache/camel-kafka-connector/issues/754
> > [3] https://www-us.apache.org/dyn/stats/camel.log
> > [4] https://www-eu.apache.org/dyn/stats/camel.log
> > --
> > Zoran Regvart
>
>


Re: Camel-Netty Security Vulnerability (CWE-295/BDSA-2018-4022) - Hostname verification

2021-01-11 Thread Andrea Cosentino
Please report the Camel version you're using.

I think this shouldn't be discussed on the users mailing list; you should
contact the ASF security mail address instead.



Il lun 11 gen 2021, 14:35 Ravi Sunchu  ha
scritto:

> Hi All:
>
> In a project where we are using camel-netty component, our Blackduck scans
> reported a medium (4.7) security vulnerability against netty-4.1.53.Final
> version. The essence of the vulnerability seems to be that Netty client
> does not verify the hostname of the server against the certificate. This is
> documented in the following issues under the Netty project.
>
> https://github.com/netty/netty/issues/9930
> https://github.com/netty/netty/issues/8537
>
> Apparently Netty devs are trying to enable hostname verification by
> default in Netty 5, but while using Netty 4 this has to be enabled manually
> by setting
>
> SSLParameters.setEndpointIdentificationAlgorithm("HTTPS") and by providing
> hostname and port while creating the SSLEngine.
>
> I looked around the Camel JSSE Util page (
> https://camel.apache.org/manual/latest/camel-configuration-utilities.html)
> and the source for SSLContextParameters and related classes. I could not
> find any mechanism to set the endpoint identification algorithm in
> SSLContextParameters in Camel so that it gets passed to the underlying
> Netty library as expected. Search through Camel mailing list also did not
> return any hits on this topic.
>
> Is there a way to enable hostname verification for Netty component in
> Camel? Or is this a vulnerability in camel-netty component that still needs
> to be addressed in this component?
>
> Attached is the Blackduck report regarding this vulnerability.
>
> Thanks for the help.
>
> Regards
> Ravi Sunchu
>


Re: Camel S3SourceConnector and Effect of Topic cleanup.policy "compact" vs. "delete"

2021-01-05 Thread Andrea Cosentino
The options I reported are part of the Kafka Connect worker configuration, so 
they should be set at the worker level and not in the connector config.

By the way, without more information on your configuration and why you need 
compaction, there is not much we can do.

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd






On Tuesday, January 5, 2021, 04:59:56 PM GMT+1, Arundhati Bhende 
 wrote: 





Thanks.  I tried with those options with many combinations,  but kept getting 
same error.  Asking this to get better understanding.

So, I used the same connector configuration as below.    I created the topic 
with cleanup.policy=compact and kept getting the error below, so I changed 
"only" the cleanup policy to "delete" and it worked.  Other configuration 
parameters for the topic were kept exactly the same.    So, trying to 
understand the reason for why the topic must be cleanup.policy = delete. 

DATA=$( cat << EOF
{
        "connector.class": 
"org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector",
        "key.converter": "org.apache.kafka.connect.storage.StringConverter",
        "value.converter": "org.apache.kafka.connect.storage.StringConverter",
        "camel.source.maxPollDuration": "1",
        "topics": "TEST-S3-SOURCE-DZONE-POC",
        "camel.source.path.bucketNameOrArn": " push-json-poc",
        "camel.component.aws-s3.region": "US_EAST_1",
        "tasks.max": "1",
        "camel.source.endpoint.useIAMCredentials": "true",
        "camel.source.endpoint.autocloseBody": "true"
}
EOF
)

Thanks



On 1/5/21, 3:08 AM, "Andrea Cosentino"  wrote:

    This seems related more on kafka connect configuration than the connector 
itself. I guess you'll need to tune the options related to this like:

    offset.flush.timeout.ms
    offset.flush.interval.ms

    --
    Andrea Cosentino 
    --
    Apache Camel PMC Chair
    Apache Karaf Committer
    Apache Servicemix PMC Member
    Email: ancosen1...@yahoo.com
    Twitter: @oscerd2
    Github: oscerd






    On Tuesday, January 5, 2021, 12:17:44 AM GMT+1, Arundhati Bhende 
 wrote: 





    aws-s3 connector  -  not aws2-s3.  

    On 1/4/21, 5:19 PM, "Andrea Cosentino"  wrote:

        Is this with aws2-s3 connector or aws2-s3?

        Il lun 4 gen 2021, 23:05 Arundhati Bhende 

        ha scritto:

        > Hi, I am testing the connector with different cleanup policies for the
        > Topic.
        >
        > If the topic cleanup.policy is set to "delete",  the connector works
        > correctly and I am able to access the message in the topic
        >
        > If the topic cleanup.policy is set to "compact", the connect Task 
fails
        > with the below error.
        >
        > I am trying to find out why this happens.  Can someone please explain?
        >
        >      trace: org.apache.kafka.connect.errors.ConnectException:
        > OffsetStorageWriter is already flushing
        >          at
        > 
org.apache.kafka.connect.storage.OffsetStorageWriter.beginFlush(OffsetStorageWriter.java:111)
        >          at
        > 
org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:438)
        >          at
        > 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:257)
        >          at
        > org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
        >          at
        > org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
        >          at
        > 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        >          at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        >          at
        > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        >          at
        > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        >          at java.lang.Thread.run(Thread.java:748)
        >
        > Thank you
        >
        >
        >




Re: Camel S3SourceConnector and Effect of Topic cleanup.policy "compact" vs. "delete"

2021-01-05 Thread Andrea Cosentino
This seems related more to the Kafka Connect worker configuration than to the 
connector itself. I guess you'll need to tune the options related to this, like:

offset.flush.timeout.ms
offset.flush.interval.ms
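
A sketch of what tuning them in the Connect worker properties could look like
(the values are illustrative, not recommendations):

# how often the worker tries to commit source connector offsets
offset.flush.interval.ms=60000
# how long a flush may take before it is cancelled and retried later
offset.flush.timeout.ms=30000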

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd






On Tuesday, January 5, 2021, 12:17:44 AM GMT+1, Arundhati Bhende 
 wrote: 





aws-s3 connector  -  not aws2-s3.  

On 1/4/21, 5:19 PM, "Andrea Cosentino"  wrote:

    Is this with aws2-s3 connector or aws2-s3?

    Il lun 4 gen 2021, 23:05 Arundhati Bhende 
    ha scritto:

    > Hi, I am testing the connector with different cleanup policies for the
    > Topic.
    >
    > If the topic cleanup.policy is set to "delete",  the connector works
    > correctly and I am able to access the message in the topic
    >
    > If the topic cleanup.policy is set to "compact", the connect Task fails
    > with the below error.
    >
    > I am trying to find out why this happens.  Can someone please explain?
    >
    >      trace: org.apache.kafka.connect.errors.ConnectException:
    > OffsetStorageWriter is already flushing
    >          at
    > 
org.apache.kafka.connect.storage.OffsetStorageWriter.beginFlush(OffsetStorageWriter.java:111)
    >          at
    > 
org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:438)
    >          at
    > 
org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:257)
    >          at
    > org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
    >          at
    > org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
    >          at
    > java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    >          at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    >          at
    > 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    >          at
    > 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    >          at java.lang.Thread.run(Thread.java:748)
    >
    > Thank you
    >
    >
    >



Re: Camel S3SourceConnector and Effect of Topic cleanup.policy "compact" vs. "delete"

2021-01-04 Thread Andrea Cosentino
Is this with the aws-s3 connector or aws2-s3?

Il lun 4 gen 2021, 23:05 Arundhati Bhende 
ha scritto:

> Hi, I am testing the connector with different cleanup policies for the
> Topic.
>
> If the topic cleanup.policy is set to "delete",  the connector works
> correctly and I am able to access the message in the topic
>
> If the topic cleanup.policy is set to "compact", the connect Task fails
> with the below error.
>
> I am trying to find out why this happens.  Can someone please explain?
>
>   trace: org.apache.kafka.connect.errors.ConnectException:
> OffsetStorageWriter is already flushing
>   at
> org.apache.kafka.connect.storage.OffsetStorageWriter.beginFlush(OffsetStorageWriter.java:111)
>   at
> org.apache.kafka.connect.runtime.WorkerSourceTask.commitOffsets(WorkerSourceTask.java:438)
>   at
> org.apache.kafka.connect.runtime.WorkerSourceTask.execute(WorkerSourceTask.java:257)
>   at
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)
>   at
> org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)
>   at
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
>   at java.util.concurrent.FutureTask.run(FutureTask.java:266)
>   at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
>   at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
>   at java.lang.Thread.run(Thread.java:748)
>
> Thank you
>
>
>


Re: Correction needed in Documentation for Camel Source Connector

2021-01-04 Thread Andrea Cosentino
I created this one
https://github.com/apache/camel-kafka-connector/issues/822

Il giorno lun 4 gen 2021 alle ore 15:18 Arundhati Bhende <
arundhati.bhe...@prudential.com> ha scritto:

> Thanks Andrea.
>
> Do I make a feature / enhancement request for this through some mechanism
> or will it be taken up by the development team as and when they can fit it
> into their plan?
>
> Thanks
>
> On 12/27/20, 11:12 AM, "Andrea Cosentino"  wrote:
>
> I don't thikn  it can be done through transforms.
>
> It must be supported on camel level eventually
>
> Il giorno dom 27 dic 2020 alle ore 16:58 Arundhati Bhende <
> arundhati.bhe...@prudential.com> ha scritto:
>
> > Sure.
> >
> > Do you have any suggestion on to how can I split the lines in the
> file as
> > single records in adding to the topic or do I have to write a
> separate app
> > that adds reads the single message and splits it as the multiple
> messages
> > to yet another topic
> >
> > If you have any suggestions, please let me know.
> >
> > One other place this is repeated is on the RedHat site's copy of the
> same
> > document.  Not sure how to inform them.   Hopefully fixing it on
> Camel site
> > will get propagated there.
> >
> > Thank you
> >
> > -Original Message-
> > From: Andrea Cosentino 
> > Sent: Sunday, December 27, 2020 10:48 AM
> > To: users@camel.apache.org
> > Subject: Re: Correction needed in Documentation for Camel Source
> Connector
> >
> > Thanks for spotting.
> >
> > Il dom 27 dic 2020, 16:39 Arundhati Bhende <
> > arundhati.bhe...@prudential.com> ha scritto:
> >
> > > I am not yet familiar with other capabilities of a connector, have
> > > just started learning about it.
> > >
> > > In regards to the comment  :  You're wrong. The aggregation
> strategy
> > > can be used in sink too.
> > >
> > > I am not saying it cannot be used in the sink,  what I am saying
> is -
> > > The official page for Source Connector has Sink connector wording
> at
> > > the bottom - which I think should have been source.
> > >
> > > This is what is on the documentation page which I am saying the
> word
> > > sink is there by mistake, as well as two lines above this on the
> page
> > >
> > > The camel-aws-s3 sink connector has no aggregation strategies out
> of
> > > the box.
> > >
> > >
> > > -Original Message-
> > > From: Andrea Cosentino 
> > > Sent: Sunday, December 27, 2020 10:33 AM
> > > To: users@camel.apache.org
> > > Subject: Re: Correction needed in Documentation for Camel Source
> > > Connector
> > >
> > > You're wrong. The aggregation strategy can be used in sink too.
> > >
> > > Il dom 27 dic 2020, 16:23 Arundhati Bhende <
> > > arundhati.bhe...@prudential.com> ha scritto:
> > >
> > > > Hi,
> > > >
> > > > I do not know if this is the correct list for this, but I wanted
> to
> > > > bring it to notice of the team that maintains the documentation.
> > > >
> > > > In the page -  "camel-aws-s3-kafka-connector source
> configuration"
> > > >  at the link
> > > >
> https://nam01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fca
> > > > me
> > > > l.apache.org
> %2Fcamel-kafka-connector%2Flatest%2Fconnectors%2Fcamel-a
> > > > ws
> > > >
> -s3-kafka-source-connector.htmldata=04%7C01%7Carundhati.bhende%
> > > > 40
> > > > prudential.com
> %7Cec4cbc71190f49a7c11a08d8aa7cc7a0%7Cd8fde2f593924260
> > > > 8a
> > > >
> 030ad01f4746e9%7C0%7C0%7C637446800196356811%7CUnknown%7CTWFpbGZsb3d8
> > > > ey
> > > >
> JWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1
> > > > 00
> > > >
> 0sdata=USwkv0KWNDm%2FJdPEfkQOxmOYwD%2BX9jB0cS7hFYxI7c8%3Dr
> > > > es
> > > > erved=0  the wording at the bottom needs to be changed ( Seems
> to be
> > > > incorrect copy /paste from Sink connector )
> > > >
> > > >
> > > > Below is the documentation where the word sink needs to be
> replaced
> > > > with source
> > > >
> > > > The camel-aws-s3 sink connector supports 1 converters out of the
> > > > box, which are listed below.
> > > >
> org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
> > > >
> > > > The camel-aws-s3 sink connector supports 1 transforms out of the
> > > > box, which are listed below.
> > > >
> org.apache.camel.kafkaconnector.awss3.transformers.S3ObjectTransform
> > > > s
> > > >
> > > > The camel-aws-s3 sink connector has no aggregation strategies
> out of
> > > > the box.
> > > >
> > > > Thank you
> > > >
> > > >
> > >
> >
>
>


Re: Camel S3 Source Connector - value.converter

2020-12-27 Thread Andrea Cosentino
Currently I think so.
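
If you go that route, a minimal Camel sketch of such an intermediate application
could look like this; the broker address and the second topic name are placeholders:

from("kafka:TEST-S3-SOURCE-DZONE-POC?brokers=localhost:9092")
    // split the file payload into one Kafka record per line
    .split(body().tokenize("\n")).streaming()
        .to("kafka:TEST-S3-SOURCE-DZONE-POC-LINES?brokers=localhost:9092")
    .end();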

Il giorno dom 27 dic 2020 alle ore 17:32 Arundhati Bhende <
arundhati.bhe...@prudential.com> ha scritto:

> Thanks.  So my only option is the second one I mentioned earlier - write
> an application to read 1 message from this topic and do processing on that
> message and create individual messages in a different topic?
>
> On 12/27/20, 11:13 AM, "Andrea Cosentino"  wrote:
>
> I don't think it is feasible to do that at transformers level, we need
> to
> introduce that at camel level eventually.
>
> Il giorno dom 27 dic 2020 alle ore 15:36 Arundhati Bhende <
> arundhati.bhe...@prudential.com> ha scritto:
>
> > In order to overcome -"There is no splitting support actually.
> So it
> > will be something like 1 file
> > - 1 message."
> >
> > Is there a way - like using transformer ( I have never used a
> transformer,
> > but wondering if it will help here ) - to split the message into
> multiple
> > messages?  If it is possible, is there an example I could look at to
> define
> > how  I can use it?
> >
> > -Original Message-
> > From: Andrea Cosentino 
> > Sent: Saturday, December 26, 2020 10:19 AM
> > To: users@camel.apache.org
> > Subject: Re: Camel S3 Source Connector - value.converter
> >
> > You can use any of the dataformat provided by camel or converters
> from
> > kafka connect. So even avro or json.
> >
> > There is no splitting support actually. So it will be something like
> 1 file
> > - 1 message.
> >
> >
> >
> > Il sab 26 dic 2020, 16:08 Arundhati Bhende <
> > arundhati.bhe...@prudential.com> ha scritto:
> >
> > > Thank you.
> > >
> > > I will test with setting the autocloseBody to false.  In that case,
> > > will I be able to use any valid converter ?  What I mean is
> currently
> > > I am testing with a simple text file and hence was trying with
> > > StringConverter,  but in reality, those maybe JSON on Avro
> formatted
> > > messages in the files, so will I be able to use those formats as
> > converters?
> > >
> > > Other part of the question - after I get a simple example working
>  - a
> > > single file will contain multiple records, one record per line -
> do I
> > > need to set-up any other property to convert each line to a
> separate
> > > message to the topic?
> > >
> > > One doubt / question - to use the camel-aws2-s3 connector, our aws
> set-up
> > > must be at version 2, right?   Because, currently it is at v1 and
> do not
> > > know, if we have immediate option of moving to v2
> > >
> > >
> > >
> > > On 12/26/20, 10:00 AM, "Andrea Cosentino" 
> wrote:
> > >
> > > Yes, the approach without url is correct
> > >  Autoclosebody will close s3object after consuming the
> payload, so
> > > you want
> > > to get something it must be equal to false, while using the
> > converter.
> > >
> > > My suggestion by the way is using the camel-aws2-s3 connector,
> > > based on ask
> > > v2
> > >
> > > Il sab 26 dic 2020, 15:53 Arundhati Bhende <
> > > arundhati.bhe...@prudential.com>
> > > ha scritto:
> > >
> > > > This is the full configuration that I have used.  I am been
> > > experimenting
> > > > with changing the value.converter between StringConverter and
> > > > S3ObjectConverter - otherwise everything is same.
> > > >
> > > >
> > > >
> > > > DATA=$( cat << EOF
> > > > {
> > > > "connector.class":
> > > >
> "org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector",
> > > > "key.converter":
> > > > "org.apache.kafka.connect.storage.StringConverter",
> > > > "value.converter":
> > > > "org.apache.kafka.connect.storage.StringConverter",
> > > > "camel.source.maxPollDuration": "1",
> > >

Re: Correction needed in Documentation for Camel Source Connector

2020-12-27 Thread Andrea Cosentino
I don't think it can be done through transforms.

It must be supported at the Camel level eventually.

Il giorno dom 27 dic 2020 alle ore 16:58 Arundhati Bhende <
arundhati.bhe...@prudential.com> ha scritto:

> Sure.
>
> Do you have any suggestion on to how can I split the lines in the file as
> single records in adding to the topic or do I have to write a separate app
> that adds reads the single message and splits it as the multiple messages
> to yet another topic
>
> If you have any suggestions, please let me know.
>
> One other place this is repeated is on the RedHat site's copy of the same
> document.  Not sure how to inform them.   Hopefully fixing it on Camel site
> will get propagated there.
>
> Thank you
>
> -Original Message-
> From: Andrea Cosentino 
> Sent: Sunday, December 27, 2020 10:48 AM
> To: users@camel.apache.org
> Subject: Re: Correction needed in Documentation for Camel Source Connector
>
> Thanks for spotting.
>
> Il dom 27 dic 2020, 16:39 Arundhati Bhende <
> arundhati.bhe...@prudential.com> ha scritto:
>
> > I am not yet familiar with other capabilities of a connector, have
> > just started learning about it.
> >
> > In regards to the comment  :  You're wrong. The aggregation strategy
> > can be used in sink too.
> >
> > I am not saying it cannot be used in the sink,  what I am saying is -
> > The official page for Source Connector has Sink connector wording at
> > the bottom - which I think should have been source.
> >
> > This is what is on the documentation page which I am saying the word
> > sink is there by mistake, as well as two lines above this on the page
> >
> > The camel-aws-s3 sink connector has no aggregation strategies out of
> > the box.
> >
> >
> > -Original Message-
> > From: Andrea Cosentino 
> > Sent: Sunday, December 27, 2020 10:33 AM
> > To: users@camel.apache.org
> > Subject: Re: Correction needed in Documentation for Camel Source
> > Connector
> >
> > You're wrong. The aggregation strategy can be used in sink too.
> >
> > Il dom 27 dic 2020, 16:23 Arundhati Bhende <
> > arundhati.bhe...@prudential.com> ha scritto:
> >
> > > Hi,
> > >
> > > I do not know if this is the correct list for this, but I wanted to
> > > bring it to notice of the team that maintains the documentation.
> > >
> > > In the page -  "camel-aws-s3-kafka-connector source configuration"
> > >  at the link
> > > https://nam01.safelinks.protection.outlook.com/?url=https%3A%2F%2Fca
> > > me
> > > l.apache.org%2Fcamel-kafka-connector%2Flatest%2Fconnectors%2Fcamel-a
> > > ws
> > > -s3-kafka-source-connector.htmldata=04%7C01%7Carundhati.bhende%
> > > 40
> > > prudential.com%7Cec4cbc71190f49a7c11a08d8aa7cc7a0%7Cd8fde2f593924260
> > > 8a
> > > 030ad01f4746e9%7C0%7C0%7C637446800196356811%7CUnknown%7CTWFpbGZsb3d8
> > > ey
> > > JWIjoiMC4wLjAwMDAiLCJQIjoiV2luMzIiLCJBTiI6Ik1haWwiLCJXVCI6Mn0%3D%7C1
> > > 00
> > > 0sdata=USwkv0KWNDm%2FJdPEfkQOxmOYwD%2BX9jB0cS7hFYxI7c8%3Dr
> > > es
> > > erved=0  the wording at the bottom needs to be changed ( Seems to be
> > > incorrect copy /paste from Sink connector )
> > >
> > >
> > > Below is the documentation where the word sink needs to be replaced
> > > with source
> > >
> > > The camel-aws-s3 sink connector supports 1 converters out of the
> > > box, which are listed below.
> > > org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
> > >
> > > The camel-aws-s3 sink connector supports 1 transforms out of the
> > > box, which are listed below.
> > > org.apache.camel.kafkaconnector.awss3.transformers.S3ObjectTransform
> > > s
> > >
> > > The camel-aws-s3 sink connector has no aggregation strategies out of
> > > the box.
> > >
> > > Thank you
> > >
> > >
> >
>
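
A minimal sketch of what splitting could look like once it is handled at the
Camel level, i.e. a plain Camel route (not the connector) that reads an object
from S3, splits it line by line and sends each line to Kafka as its own record.
The bucket, topic and broker values are placeholders, not taken from this thread:

    import org.apache.camel.builder.RouteBuilder;

    // Sketch only: one Kafka record per line of the S3 object.
    public class S3LineSplitRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("aws2-s3://my-bucket?deleteAfterRead=false")      // placeholder bucket
                .convertBodyTo(String.class)
                .split(body().tokenize("\n")).streaming()
                    .to("kafka:my-topic?brokers=localhost:9092")   // placeholder topic/broker
                .end();
        }
    }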


Re: Camel S3 Source Connector - value.converter

2020-12-27 Thread Andrea Cosentino
I don't think it is feasible to do that at transformers level, we need to
introduce that at camel level eventually.

Il giorno dom 27 dic 2020 alle ore 15:36 Arundhati Bhende <
arundhati.bhe...@prudential.com> ha scritto:

> In order to overcome -"There is no splitting support actually. So it
> will be something like 1 file
> - 1 message."
>
> Is there a way - like using transformer ( I have never used a transformer,
> but wondering if it will help here ) - to split the message into multiple
> messages?  If it is possible, is there an example I could look at to define
> how  I can use it?
>
> -Original Message-
> From: Andrea Cosentino 
> Sent: Saturday, December 26, 2020 10:19 AM
> To: users@camel.apache.org
> Subject: Re: Camel S3 Source Connector - value.converter
>
> You can use any of the dataformat provided by camel or converters from
> kafka connect. So even avro or json.
>
> There is no splitting support actually. So it will be something like 1 file
> - 1 message.
>
>
>
> Il sab 26 dic 2020, 16:08 Arundhati Bhende <
> arundhati.bhe...@prudential.com> ha scritto:
>
> > Thank you.
> >
> > I will test with setting the autocloseBody to false.  In that case,
> > will I be able to use any valid converter ?  What I mean is currently
> > I am testing with a simple text file and hence was trying with
> > StringConverter,  but in reality, those maybe JSON on Avro formatted
> > messages in the files, so will I be able to use those formats as
> converters?
> >
> > Other part of the question - after I get a simple example working   - a
> > single file will contain multiple records, one record per line - do I
> > need to set-up any other property to convert each line to a separate
> > message to the topic?
> >
> > One doubt / question - to use the camel-aws2-s3 connector, our aws set-up
> > must be at version 2, right?   Because, currently it is at v1 and do not
> > know, if we have immediate option of moving to v2
> >
> >
> >
> > On 12/26/20, 10:00 AM, "Andrea Cosentino"  wrote:
> >
> > Yes, the approach without url is correct
> >  Autoclosebody will close s3object after consuming the payload, so
> > you want
> > to get something it must be equal to false, while using the
> converter.
> >
> > My suggestion by the way is using the camel-aws2-s3 connector,
> > based on ask
> > v2
> >
> > Il sab 26 dic 2020, 15:53 Arundhati Bhende <
> > arundhati.bhe...@prudential.com>
> > ha scritto:
> >
> > > This is the full configuration that I have used.  I am been
> > experimenting
> > > with changing the value.converter between StringConverter and
> > > S3ObjectConverter - otherwise everything is same.
> > >
> > >
> > >
> > > DATA=$( cat << EOF
> > > {
> > > "connector.class":
> > > "org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector",
> > > "key.converter":
> > > "org.apache.kafka.connect.storage.StringConverter",
> > > "value.converter":
> > > "org.apache.kafka.connect.storage.StringConverter",
> > > "camel.source.maxPollDuration": "1",
> > > "topics": "TEST-S3-SOURCE-DZONE-POC",
> > > "camel.source.path.bucketNameOrArn": " push-json-poc",
> > > "camel.component.aws-s3.region": "US_EAST_1",
> > > "tasks.max": "1",
> > > "camel.source.endpoint.useIAMCredentials": "true",
> > > "camel.source.endpoint.autocloseBody": "true"
> > > }
> > > EOF
> > > )
> > > #"value.converter":
> > >
> "org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter",
> > >
> > >
> > > This is official one from the link in the other email.
> > > name=CamelAWSS3SourceConnector
> > >
> > >
> >
> connector.class=org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector
> > > key.converter=org.apache.kafka.connect.storage.StringConverter
> > >
> > >
> >
> value.converter=org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
> > >
> > > camel.s

Re: Correction needed in Documentation for Camel Source Connector

2020-12-27 Thread Andrea Cosentino
Thanks for spotting.

Il dom 27 dic 2020, 16:39 Arundhati Bhende 
ha scritto:

> I am not yet familiar with other capabilities of a connector, have just
> started learning about it.
>
> In regards to the comment  :  You're wrong. The aggregation strategy can
> be used in sink too.
>
> I am not saying it cannot be used in the sink,  what I am saying is -
>  The official page for Source Connector has Sink connector wording at the
> bottom - which I think should have been source.
>
> This is what is on the documentation page which I am saying the word sink
> is there by mistake, as well as two lines above this on the page
>
> The camel-aws-s3 sink connector has no aggregation strategies out of the
> box.
>
>
> -Original Message-
> From: Andrea Cosentino 
> Sent: Sunday, December 27, 2020 10:33 AM
> To: users@camel.apache.org
> Subject: Re: Correction needed in Documentation for Camel Source Connector
>
> You're wrong. The aggregation strategy can be used in sink too.
>
> Il dom 27 dic 2020, 16:23 Arundhati Bhende <
> arundhati.bhe...@prudential.com> ha scritto:
>
> > Hi,
> >
> > I do not know if this is the correct list for this, but I wanted to
> > bring it to notice of the team that maintains the documentation.
> >
> > In the page -  "camel-aws-s3-kafka-connector source configuration"
> >  at the link
> > https://camel.apache.org/camel-kafka-connector/latest/connectors/camel-aws-s3-kafka-source-connector.html
> > the wording at the bottom needs to be changed ( Seems to be
> > incorrect copy /paste from Sink connector )
> >
> >
> > Below is the documentation where the word sink needs to be replaced
> > with source
> >
> > The camel-aws-s3 sink connector supports 1 converters out of the box,
> > which are listed below.
> > org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
> >
> > The camel-aws-s3 sink connector supports 1 transforms out of the box,
> > which are listed below.
> > org.apache.camel.kafkaconnector.awss3.transformers.S3ObjectTransforms
> >
> > The camel-aws-s3 sink connector has no aggregation strategies out of
> > the box.
> >
> > Thank you
> >
> >
>


Re: Correction needed in Documentation for Camel Source Connector

2020-12-27 Thread Andrea Cosentino
You're wrong. The aggregation strategy can be used in sink too.

Il dom 27 dic 2020, 16:23 Arundhati Bhende 
ha scritto:

> Hi,
>
> I do not know if this is the correct list for this, but I wanted to bring
> it to notice of the team that maintains the documentation.
>
> In the page –  “camel-aws-s3-kafka-connector source configuration”
>  at the link
> https://camel.apache.org/camel-kafka-connector/latest/connectors/camel-aws-s3-kafka-source-connector.html
>  the wording at the bottom needs to be changed ( Seems to be incorrect copy
> /paste from Sink connector )
>
>
> Below is the documentation where the word sink needs to be replaced with
> source
>
> The camel-aws-s3 sink connector supports 1 converters out of the box,
> which are listed below.
> org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
>
> The camel-aws-s3 sink connector supports 1 transforms out of the box,
> which are listed below.
> org.apache.camel.kafkaconnector.awss3.transformers.S3ObjectTransforms
>
> The camel-aws-s3 sink connector has no aggregation strategies out of the
> box.
>
> Thank you
>
>


Re: Camel S3 Source Connector - value.converter

2020-12-26 Thread Andrea Cosentino
Yes

Il sab 26 dic 2020, 16:25 Arundhati Bhende 
ha scritto:

> Thanks again.  Will test with these combinations of different value
> converters after I am able to get the avro format files.
>
> In the previous response, you had mentioned it is better to use aws2-s3
> version of connector  - So, in order to do so,   to use the camel-aws2-s3
> connector, our aws set-up must be at version 2, right?   Because, currently
> it is at v1 and do not know, if we have immediate option of moving to v2
>
> -Original Message-----
> From: Andrea Cosentino 
> Sent: Saturday, December 26, 2020 10:19 AM
> To: users@camel.apache.org
> Subject: Re: Camel S3 Source Connector - value.converter
>
> You can use any of the dataformat provided by camel or converters from
> kafka connect. So even avro or json.
>
> There is no splitting support actually. So it will be something like 1 file
> - 1 message.
>
>
>
> Il sab 26 dic 2020, 16:08 Arundhati Bhende <
> arundhati.bhe...@prudential.com> ha scritto:
>
> > Thank you.
> >
> > I will test with setting the autocloseBody to false.  In that case,
> > will I be able to use any valid converter ?  What I mean is currently
> > I am testing with a simple text file and hence was trying with
> > StringConverter,  but in reality, those maybe JSON on Avro formatted
> > messages in the files, so will I be able to use those formats as
> converters?
> >
> > Other part of the question - after I get a simple example working   - a
> > single file will contain multiple records, one record per line - do I
> > need to set-up any other property to convert each line to a separate
> > message to the topic?
> >
> > One doubt / question - to use the camel-aws2-s3 connector, our aws set-up
> > must be at version 2, right?   Because, currently it is at v1 and do not
> > know, if we have immediate option of moving to v2
> >
> >
> >
> > On 12/26/20, 10:00 AM, "Andrea Cosentino"  wrote:
> >
> > Yes, the approach without url is correct
> >  Autoclosebody will close s3object after consuming the payload, so
> > you want
> > to get something it must be equal to false, while using the
> converter.
> >
> > My suggestion by the way is using the camel-aws2-s3 connector,
> > based on ask
> > v2
> >
> > Il sab 26 dic 2020, 15:53 Arundhati Bhende <
> > arundhati.bhe...@prudential.com>
> > ha scritto:
> >
> > > This is the full configuration that I have used.  I am been
> > experimenting
> > > with changing the value.converter between StringConverter and
> > > S3ObjectConverter - otherwise everything is same.
> > >
> > >
> > >
> > > DATA=$( cat << EOF
> > > {
> > > "connector.class":
> > > "org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector",
> > > "key.converter":
> > > "org.apache.kafka.connect.storage.StringConverter",
> > > "value.converter":
> > > "org.apache.kafka.connect.storage.StringConverter",
> > > "camel.source.maxPollDuration": "1",
> > > "topics": "TEST-S3-SOURCE-DZONE-POC",
> > > "camel.source.path.bucketNameOrArn": " push-json-poc",
> > > "camel.component.aws-s3.region": "US_EAST_1",
> > > "tasks.max": "1",
> > > "camel.source.endpoint.useIAMCredentials": "true",
> > > "camel.source.endpoint.autocloseBody": "true"
> > > }
> > > EOF
> > > )
> > > #"value.converter":
> > >
> "org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter",
> > >
> > >
> > > This is official one from the link in the other email.
> > > name=CamelAWSS3SourceConnector
> > >
> > >
> >
> connector.class=org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector
> > > key.converter=org.apache.kafka.connect.storage.StringConverter
> > >
> > >
> >
> value.converter=org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
> > >
> > > camel.source.maxPollDuration=1
> > >
> > > topics=mytopic

Re: Camel S3 Source Connector - value.converter

2020-12-26 Thread Andrea Cosentino
You can use any of the dataformat provided by camel or converters from
kafka connect. So even avro or json.

There is no splitting support actually. So it will be something like 1 file
- 1 message.



Il sab 26 dic 2020, 16:08 Arundhati Bhende 
ha scritto:

> Thank you.
>
> I will test with setting the autocloseBody to false.  In that case, will I
> be able to use any valid converter ?  What I mean is currently I am testing
> with a simple text file and hence was trying with StringConverter,  but in
> reality, those may be JSON or Avro formatted messages in the files, so will
> I be able to use those formats as converters?
>
> Other part of the question - after I get a simple example working   - a
> single file will contain multiple records, one record per line - do I need
> to set-up any other property to convert each line to a separate message to
> the topic?
>
> One doubt / question - to use the camel-aws2-s3 connector, our aws set-up
> must be at version 2, right?   Because, currently it is at v1 and do not
> know, if we have immediate option of moving to v2
>
>
>
> On 12/26/20, 10:00 AM, "Andrea Cosentino"  wrote:
>
> Yes, the approach without url is correct
>  Autoclosebody will close s3object after consuming the payload, so you
> want
> to get something it must be equal to false, while using the converter.
>
> My suggestion by the way is using the camel-aws2-s3 connector, based
> on ask
> v2
>
> Il sab 26 dic 2020, 15:53 Arundhati Bhende <
> arundhati.bhe...@prudential.com>
> ha scritto:
>
> > This is the full configuration that I have used.  I am been
> experimenting
> > with changing the value.converter between StringConverter and
> > S3ObjectConverter - otherwise everything is same.
> >
> >
> >
> > DATA=$( cat << EOF
> > {
> > "connector.class":
> > "org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector",
> > "key.converter":
> > "org.apache.kafka.connect.storage.StringConverter",
> > "value.converter":
> > "org.apache.kafka.connect.storage.StringConverter",
> > "camel.source.maxPollDuration": "1",
> > "topics": "TEST-S3-SOURCE-DZONE-POC",
> > "camel.source.path.bucketNameOrArn": " push-json-poc",
> > "camel.component.aws-s3.region": "US_EAST_1",
> > "tasks.max": "1",
> > "camel.source.endpoint.useIAMCredentials": "true",
> > "camel.source.endpoint.autocloseBody": "true"
> > }
> > EOF
> > )
> > #"value.converter":
> > "org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter",
> >
> >
> > This is official one from the link in the other email.
> > name=CamelAWSS3SourceConnector
> >
> >
> connector.class=org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector
> > key.converter=org.apache.kafka.connect.storage.StringConverter
> >
> >
> value.converter=org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
> >
> > camel.source.maxPollDuration=1
> >
> > topics=mytopic
> >
> > camel.source.url=aws-s3://camel-kafka-connector?autocloseBody=false
> >
> > camel.component.aws-s3.access-key=
> > camel.component.aws-s3.secret-key=
> > camel.component.aws-s3.region=EU_WEST_1
> >
> >
> >
> >
> > I can see two differences -
> > 1.  How the AWS credentials are set-up
> > 2.  In my configuration
> > I am using camel.source.path.bucketNameOrArn instead of
> > camel.source.url
> > Because I was getting error that bucketNameOrArn must be
> defined
> >
> > In my configuration - autocloseBody is set true to whereas
> in the
> > default configuration it is set to false
> >
> > How does autocloseBody work?  What impact does it have when
> > setting it to true vs. false?
> >
> >
> > Thank you
> >
> >
> > -Original Message-
> > From: Andrea Cosentino 
> > Sent: Saturday, December 26, 2020 3:59 AM
> > To: users@camel.apache.org
> > Subject: Re: Ca

Re: Camel S3 Source Connector - value.converter

2020-12-26 Thread Andrea Cosentino
Yes, the approach without url is correct.
autocloseBody will close the S3Object after consuming the payload, so if you want
to get something back it must be set to false while using the converter.

My suggestion by the way is using the camel-aws2-s3 connector, based on AWS SDK
v2

Il sab 26 dic 2020, 15:53 Arundhati Bhende 
ha scritto:

> This is the full configuration that I have used.  I have been experimenting
> with changing the value.converter between StringConverter and
> S3ObjectConverter - otherwise everything is same.
>
>
>
> DATA=$( cat << EOF
> {
> "connector.class":
> "org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector",
> "key.converter":
> "org.apache.kafka.connect.storage.StringConverter",
> "value.converter":
> "org.apache.kafka.connect.storage.StringConverter",
> "camel.source.maxPollDuration": "1",
> "topics": "TEST-S3-SOURCE-DZONE-POC",
> "camel.source.path.bucketNameOrArn": " push-json-poc",
> "camel.component.aws-s3.region": "US_EAST_1",
> "tasks.max": "1",
> "camel.source.endpoint.useIAMCredentials": "true",
> "camel.source.endpoint.autocloseBody": "true"
> }
> EOF
> )
> #"value.converter":
> "org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter",
>
>
> This is official one from the link in the other email.
> name=CamelAWSS3SourceConnector
>
> connector.class=org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector
> key.converter=org.apache.kafka.connect.storage.StringConverter
>
> value.converter=org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
>
> camel.source.maxPollDuration=1
>
> topics=mytopic
>
> camel.source.url=aws-s3://camel-kafka-connector?autocloseBody=false
>
> camel.component.aws-s3.access-key=
> camel.component.aws-s3.secret-key=
> camel.component.aws-s3.region=EU_WEST_1
>
>
>
>
> I can see two differences -
> 1.  How the AWS credentials are set-up
> 2.  In my configuration
> I am using camel.source.path.bucketNameOrArn instead of
> camel.source.url
> Because I was getting error that bucketNameOrArn must be defined
>
> In my configuration - autocloseBody is set true to whereas in the
> default configuration it is set to false
>
> How does autocloseBody work?  What impact does it have when
> setting it to true vs. false?
>
>
> Thank you
>
>
> -Original Message-
> From: Andrea Cosentino 
> Sent: Saturday, December 26, 2020 3:59 AM
> To: users@camel.apache.org
> Subject: Re: Camel S3 Source Connector - value.converter
>
> Can you show your full configuration?
>
> Il sab 26 dic 2020, 07:40 Arundhati Bhende <
> arundhati.bhe...@prudential.com> ha scritto:
>
> > I am trying to test The Camel S3 Source connector using basic example
> > from this link
> >
> https://dzone.com/articles/reading-aws-s3-file-content-to-kafka-topic.
> > Below is the file I am uploading to S3.
> >
> > testfile.txt
> > add one line
> > add another line
> > test the connector
> >
> >
> > After the connector picks up and processes the file, If I use this for
> > value converter
> >
> >
> > value.converter=org.apache.camel.kafkaconnector.awss3.converters.S3Obj
> > ectConverter
> >
> > When I run the consumer, it shows the message processed but the output
> > shows blank.
> >
> > If I use the String converter as value converter
> >
> > org.apache.kafka.connect.storage.StringConverter
> >
> > I get the output in the form of
> >
> > com.amazonaws.services.s3.model.S3ObjectInputStream@522e135a
> > com.amazonaws.services.s3.model.S3ObjectInputStream@6c6cf103
> >
> > From the example in the above link, the author doesn't seem to have to
> > do any actions to see the string.
> >
> > So, why would the S3ObjectConverter show a blank output for me?  I can
> > understand the String representation for the object really shows the
> > memory address.
> >
> > What do I need to look at to fix the output.
> >
> > Also, is it possible to change the converter to JSON or Avro converter
> > in the configuration file or I need to add either different properties
> > to the configuration or do some processing afterwards?
> >
> > Thank You
> >
> >
>
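
Putting the advice from this thread together, a revised version of the posted
configuration would roughly look like the following (same values as above, only
autocloseBody flipped to false and the S3ObjectConverter used as the value
converter; whether this is sufficient still needs to be verified against the
connector version in use):

    {
    "connector.class": "org.apache.camel.kafkaconnector.awss3.CamelAwss3SourceConnector",
    "key.converter": "org.apache.kafka.connect.storage.StringConverter",
    "value.converter": "org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter",
    "camel.source.maxPollDuration": "1",
    "topics": "TEST-S3-SOURCE-DZONE-POC",
    "camel.source.path.bucketNameOrArn": "push-json-poc",
    "camel.component.aws-s3.region": "US_EAST_1",
    "tasks.max": "1",
    "camel.source.endpoint.useIAMCredentials": "true",
    "camel.source.endpoint.autocloseBody": "false"
    }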


Re: Camel S3 Source Connector - value.converter

2020-12-26 Thread Andrea Cosentino
As example use the official one

https://github.com/apache/camel-kafka-connector-examples/blob/master/aws-s3-to-jms/config/CamelAWSS3SourceConnector.properties

Il sab 26 dic 2020, 09:59 Andrea Cosentino  ha scritto:

> Can you show your full configuration?
>
> Il sab 26 dic 2020, 07:40 Arundhati Bhende <
> arundhati.bhe...@prudential.com> ha scritto:
>
>> I am trying to test The Camel S3 Source connector using basic example
>> from this link
>> https://dzone.com/articles/reading-aws-s3-file-content-to-kafka-topic.
>> Below is the file I am uploading to S3.
>>
>> testfile.txt
>> add one line
>> add another line
>> test the connector
>>
>>
>> After the connector picks up and processes the file, If I use this for
>> value converter
>>
>>
>> value.converter=org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
>>
>> When I run the consumer, it shows the message processed but the output
>> shows blank.
>>
>> If I use the String converter as value converter
>>
>> org.apache.kafka.connect.storage.StringConverter
>>
>> I get the output in the form of
>>
>> com.amazonaws.services.s3.model.S3ObjectInputStream@522e135a
>> com.amazonaws.services.s3.model.S3ObjectInputStream@6c6cf103
>>
>> From the example in the above link, the author doesn't seem to have to do
>> any actions to see the string.
>>
>> So, why would the S3ObjectConverter show a blank output for me?  I can
>> understand the String representation for the object really shows the memory
>> address.
>>
>> What do I need to look at to fix the output.
>>
>> Also, is it possible to change the converter to JSON or Avro converter in
>> the configuration file or I need to add either different properties to the
>> configuration or do some processing afterwards?
>>
>> Thank You
>>
>>


Re: Camel S3 Source Connector - value.converter

2020-12-26 Thread Andrea Cosentino
Can you show your full configuration?

Il sab 26 dic 2020, 07:40 Arundhati Bhende 
ha scritto:

> I am trying to test The Camel S3 Source connector using basic example from
> this link
> https://dzone.com/articles/reading-aws-s3-file-content-to-kafka-topic.
> Below is the file I am uploading to S3.
>
> testfile.txt
> add one line
> add another line
> test the connector
>
>
> After the connector picks up and processes the file, If I use this for
> value converter
>
>
> value.converter=org.apache.camel.kafkaconnector.awss3.converters.S3ObjectConverter
>
> When I run the consumer, it shows the message processed but the output
> shows blank.
>
> If I use the String converter as value converter
>
> org.apache.kafka.connect.storage.StringConverter
>
> I get the output in the form of
>
> com.amazonaws.services.s3.model.S3ObjectInputStream@522e135a
> com.amazonaws.services.s3.model.S3ObjectInputStream@6c6cf103
>
> From the example in the above link, the author doesn't seem to have to do
> any actions to see the string.
>
> So, why would the S3ObjectConverter show a blank output for me?  I can
> understand the String representation for the object really shows the memory
> address.
>
> What do I need to look at to fix the output.
>
> Also, is it possible to change the converter to JSON or Avro converter in
> the configuration file or I need to add either different properties to the
> configuration or do some processing afterwards?
>
> Thank You
>
>


R: RE: Apache Camel EOL Versions

2020-12-15 Thread Andrea Cosentino
3.4.x will be supported as lts for one year from the release date of 3.4.0

Inviato da Yahoo Mail su Android 
 
  Il mar, 15 dic, 2020 alle 17:40, Ajmera, Hemang C ha 
scritto:   
Hi
  My understanding was that an LTS version will be supported for 1 year. So even
if 3.7 is released, version 3.4 will continue to be supported with bug and
security fixes. Did I misunderstand something?

This is the reference 
https://camel.apache.org/blog/2020/03/LTS-Release-Schedule/


Thanks and Regards,
Hemang Ajmera


-Original Message-
From: Zoran Regvart  
Sent: 15 December 2020 21:42
To: Symphoni Bush - NOAA Affiliate 
Cc: users@camel.apache.org
Subject: Re: Apache Camel EOL Versions


EXTERNAL SENDER:  Do not click any links or open any attachments unless you 
trust the sender and know the content is safe.
EXPÉDITEUR EXTERNE:    Ne cliquez sur aucun lien et n’ouvrez aucune pièce 
jointe à moins qu’ils ne proviennent d’un expéditeur fiable, ou que vous ayez 
l'assurance que le contenu provient d'une source sûre.

Hi Symphoni,
I wonder where you found the owner mail alias? Moving the conversation to the 
users mailing list.

I'll try to answer to the best of my abilities, others may correct me.

Answer differs slightly based on the major version (2.x or 3.x). For 2.x we 
supported the latest two releases. Since version 3.x we support the latest and 
last two LTS versions. Exception to that is when we released 3.0 we opted to 
support the latest 2.x release as well. All other versions should be considered 
EOL at the release of a newer version. This is with respect to what we're 
willing to do here at ASF.
Organizations providing commercial support for Camel would most likely have 
their own definition of support and EOL dates.

So the date a version is considered EOL is:

1/ for 2.x releases: when a new minor release was done, the second oldest minor 
release was considered EOL, for example 2.25.0 was released on 2020-01-23, 
making 2.23.x EOL from that date.

2/ for non-LTS 3.x: we consider non-LTS releases EOL as soon as a newer non-LTS 
is released, for example 3.6.0 was released on
2020-09-20 making 3.5.0 EOL on that date.

3/ for LTS 3.x: we support the last two LTS releases (currently 3.4 and soon 
3.7), 3.4 will be EOL with the the LTS release done after 3.7

List of versions and release dates is available in the release archive on the 
website here:

https://camel.apache.org/releases/

It goes back only to 2.18 as that's where we stopped migrating old data to the 
new website. For older dates you need to either go to the mailing list archives 
or to the wiki[1] which was the source for the old website.

You can find further information in the LTS schedule for 2020 blog post:

https://camel.apache.org/blog/2020/03/LTS-Release-Schedule/

zoran

[1] 
https://cwiki.apache.org/confluence/display/CAMEL/Download

On Tue, Dec 15, 2020 at 4:25 PM Symphoni Bush - NOAA Affiliate 
 wrote:
>
> Good morning,
>
> I am trying to figure out what versions of Apache Camel are EOL and if so, 
> what dates these versions became EOL. If there is somewhere this information 
> is stored, please let me know.
>
> Thanks



--
Zoran Regvart
  


Re: Service Now Connector Working Example

2020-12-08 Thread Andrea Cosentino
Hello,

We still need to focus on the ServiceNow Kafka connector. We are going
ahead and trying to test as much as we can, but for third party services
like ServiceNow it is a bit complicated.

Il giorno lun 7 dic 2020 alle ore 19:13 simon.pe...@sbb.ch <
simon.pe...@sbb.ch> ha scritto:

> Hi folks
>
> We would like to use the Service Now Connector, to connect Kafka to
> Service Now.
> We are using the Camel Version 0.6.0.
> However, there seems to be an error, when sending the data to the server.
>
> "org.apache.kafka.connect.errors.ConnectException: Exiting WorkerSinkTask
> due to unrecoverable exception.\n\tat
> org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:568)\n\tat
>
> org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:326)\n\tat
>
> org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:228)\n\tat
>
> org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:196)\n\tat
>
> org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:184)\n\tat
>
> org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:234)\n\tat
>
> java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)\n\tat
> java.util.concurrent.FutureTask.run(FutureTask.java:266)\n\tat
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)\n\tat
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)\n\tat
> java.lang.Thread.run(Thread.java:748)\nCaused by:
> org.apache.kafka.connect.errors.ConnectException:
> Exchange delivery has failed!\n\tat
> org.apache.camel.kafkaconnector.CamelSinkTask.put(CamelSinkTask.java:152)\n\tat
>
> org.apache.kafka.connect.runtime.WorkerSinkTask.deliverMessages(WorkerSinkTask.java:546)\n\t...
> 10 more\nCaused by: java.lang.IllegalArgumentException: Unable to
> process
> exchange\n\tat
> org.apache.camel.component.servicenow.AbstractServiceNowProcessor.process(AbstractServiceNowProcessor.java:74)\n\tat
>
> org.apache.camel.support.BaseSelectorProducer.process(BaseSelectorProducer.java:36)\n\tat
>
> org.apache.camel.support.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:66)\n\tat
>
> org.apache.camel.processor.SendDynamicProcessor.lambda$process$0(SendDynamicProcessor.java:195)\n\tat
>
> org.apache.camel.impl.engine.DefaultProducerCache.doInAsyncProducer(DefaultProducerCache.java:317)\n\tat
>
> org.apache.camel.processor.SendDynamicProcessor.process(SendDynamicProcessor.java:180)\n\tat
>
> org.apache.camel.processor.errorhandler.RedeliveryErrorHandler$SimpleTask.run(RedeliveryErrorHandler.java:404)\n\tat
>
> org.apache.camel.impl.engine.DefaultReactiveExecutor$Worker.schedule(DefaultReactiveExecutor.java:148)\n\tat
>
> org.apache.camel.impl.engine.DefaultReactiveExecutor.scheduleMain(DefaultReactiveExecutor.java:60)\n\tat
>
> org.apache.camel.processor.Pipeline.process(Pipeline.java:147)\n\tat
> org.apache.camel.processor.CamelInternalProcessor.process(CamelInternalProcessor.java:288)\n\tat
>
> org.apache.camel.component.direct.DirectProducer.process(DirectProducer.java:67)\n\tat
>
> org.apache.camel.processor.SharedCamelInternalProcessor.process(SharedCamelInternalProcessor.java:217)\n\tat
>
> org.apache.camel.processor.SharedCamelInternalProcessor$1.process(SharedCamelInternalProcessor.java:111)\n\tat
>
> org.apache.camel.impl.engine.DefaultAsyncProcessorAwaitManager.process(DefaultAsyncProcessorAwaitManager.java:83)\n\tat
>
> org.apache.camel.processor.SharedCamelInternalProcessor.process(SharedCamelInternalProcessor.java:108)\n\tat
>
> org.apache.camel.impl.engine.DefaultProducerCache.send(DefaultProducerCache.java:189)\n\tat
>
> org.apache.camel.impl.engine.DefaultProducerTemplate.send(DefaultProducerTemplate.java:175)\n\tat
>
> org.apache.camel.impl.engine.DefaultProducerTemplate.send(DefaultProducerTemplate.java:147)\n\tat
>
> org.apache.camel.kafkaconnector.CamelSinkTask.put(CamelSinkTask.java:149)\n\t..."
>
> This is the configuration:
> class: CamelServicenowSinkConnector
>   config:
> camel.sink.endpoint.userName: service
> camel.sink.endpoint.password: ***
> camel.sink.endpoint.table: incident
> camel.sink.path.instanceName: entw
> camel.sink.endpoint.resource: table
> camel.sink.endpoint.release: HELSINKI
> camel.sink.endpoint.basicPropertyBinding: false
> topics: kafka.connector.servicenow
> value.converter: org.apache.kafka.connect.storage.StringConverter
>
>
> The payload is String value = "{\"short_description\":\"test incident
> kafka\",\"description\":\"this is a test\"}";
>
> Are we missing something?
>
> Cheers Simon
>
>
>


Re: Camel jcifs for smb protocol for camel 3.x?

2020-10-26 Thread Andrea Cosentino
You need to ask the camel-extra team to update to Camel 3. We can't control
the version there.

Il lun 26 ott 2020, 14:34 William Juwono  ha
scritto:

> Hello, I need to connect to windows share folder using smb protocol.
> So, from what I see, I need camel jcifs, however the latest version seems
> to be 2.25.2
> I'm currently using camel 3.5.0 and I need version 3 at least because the
> same route needs to connect to S3.
>
> Any suggestions?
>


Re: Character encoding lost when using Camel AWS2S3Endpoint

2020-10-16 Thread Andrea Cosentino
Yes, it should be related to EC2 instance environment.

Il giorno ven 16 ott 2020 alle ore 15:04 राकेश रवळेकर <
rakesh.ravle...@gmail.com> ha scritto:

> Hi Team,
>
> It works with 3.6.0 Snapshot on our local windows/mac machines with
> convertBodyTo(String.class,
> "ISO-8859-1"), however when we run the same on AWS EC2 instance it is
> unable to read the characters. I assume this is related to encoding charset
> or do you have any suggestions?
>
> Thanks
>
> On Fri, Oct 16, 2020 at 12:00 PM राकेश रवळेकर 
> wrote:
>
> > Thanks for the prompt response, surely I will check with the snapshot
> > version and get back.
> >
> > On Fri, Oct 16, 2020 at 11:49 AM Andrea Cosentino 
> > wrote:
> >
> >> Hello,
> >>
> >> This should be fixed in 3.6.0. We changed the way we populate the body,
> so
> >> the charset is not enforced anymore.
> >>
> >> You can try with 3.6.0-SNAPSHOT.
> >>
> >> Let us know if it works, the release for 3.6.0 should be on vote this
> >> weekend.
> >>
> >> Cheers.
> >>
> >> Il giorno ven 16 ott 2020 alle ore 08:11 राकेश रवळेकर <
> >> rakesh.ravle...@gmail.com> ha scritto:
> >>
> >> > Hi All,
> >> >
> >> > We have been using Camel to read files from AWS S3 bucket, however we
> >> could
> >> > see the accented characters in the file body are not readable despite
> >> using
> >> > covertBodyTo in our routes. Further check in AWS2S3Endpoint.java shows
> >> that
> >> > the default charset is used as UTF-8 only?
> >> >
> >> > Reader reader = new BufferedReader(new InputStreamReader(s3Object,
> >> > Charset.forName(StandardCharsets.UTF_8.name(;
> >> >
> >> > Is there any other way if we need to read the file body containing
> such
> >> > accented characters? Have even tried to change the default charset
> using
> >> >
> >> > System.setProperty("org.apache.camel.default.charset", "Cp1252");
> >> >
> >> > We are using below versions
> >> >
> >> > 11
> >> > 2.13.56
> >> > 3.5.0
> >> >
> >> > Please suggest if we are missing anything?
> >> >
> >>
> >
>
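
One thing worth checking on the EC2 side (an assumption, not something confirmed
in this thread) is the JVM default charset, which on a bare instance is often not
the same as on a developer machine. A small sketch that logs it and keeps the
explicit charset on the conversion (bucket name is a placeholder):

    import java.nio.charset.Charset;
    import org.apache.camel.builder.RouteBuilder;
    import org.slf4j.Logger;
    import org.slf4j.LoggerFactory;

    // Sketch: verify the default charset on the instance and force ISO-8859-1
    // when converting the S3 body to a String.
    public class S3CharsetRoute extends RouteBuilder {
        private static final Logger LOG = LoggerFactory.getLogger(S3CharsetRoute.class);

        @Override
        public void configure() {
            LOG.info("JVM default charset: {}", Charset.defaultCharset());

            from("aws2-s3://my-bucket")
                .convertBodyTo(String.class, "ISO-8859-1")
                .to("log:s3content?showBody=true");
        }
    }

If the logged charset differs from what is expected, setting LANG/LC_ALL or
-Dfile.encoding on the instance is the usual knob to try.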


Re: Character encoding lost when using Camel AWS2S3Endpoint

2020-10-16 Thread Andrea Cosentino
Hello,

This should be fixed in 3.6.0. We changed the way we populate the body, so
the charset is not enforced anymore.

You can try with 3.6.0-SNAPSHOT.

Let us know if it works, the release for 3.6.0 should be on vote this
weekend.

Cheers.

Il giorno ven 16 ott 2020 alle ore 08:11 राकेश रवळेकर <
rakesh.ravle...@gmail.com> ha scritto:

> Hi All,
>
> We have been using Camel to read files from AWS S3 bucket, however we could
> see the accented characters in the file body are not readable despite using
> convertBodyTo in our routes. Further check in AWS2S3Endpoint.java shows that
> the default charset is used as UTF-8 only?
>
> Reader reader = new BufferedReader(new InputStreamReader(s3Object,
> Charset.forName(StandardCharsets.UTF_8.name(;
>
> Is there any other way if we need to read the file body containing such
> accented characters? Have even tried to change the default charset using
>
> System.setProperty("org.apache.camel.default.charset", "Cp1252");
>
> We are using below versions
>
> Java 11
> AWS SDK 2.13.56
> Camel 3.5.0
>
> Please suggest if we are missing anything?
>


Re: Unable to use Spring boot properties to configure Encoders for Netty Component in 3.4.0

2020-10-07 Thread Andrea Cosentino
This should be fixed in 3.6.x

Il mer 7 ott 2020, 21:47 Yifan Wu  ha scritto:

> Hi,
>
> I am trying to use Netty Spring boot autoconfiguration.  I want to set
> encoders in the properties with the following line.
> camel.component.netty.encoders=#myCustomEncoder
>
> I have my bean defined:
> <bean id="myCustomEncoder" class="io.netty.handler.codec.string.StringEncoder"/>
>
> The Camel application failed to start because:
>
> Failed to bind properties under 'camel.component.netty.encoders' to
> java.util.List:
> Reason: No converter found capable of converting from type
> [java.lang.String] to type [java.util.List]
>
>
> Any help is appreciated.
>
> Thanks
> Yifan
>
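
Until the binding is fixed, a workaround sketch (assuming the encoder bean is
registered with the id used above) is to reference the encoder on the endpoint
URI instead of through the camel.component.netty.* properties:

    import io.netty.handler.codec.string.StringEncoder;
    import org.apache.camel.builder.RouteBuilder;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    // Sketch only: host/port are placeholders; the encoder bean id matches the
    // #myCustomEncoder reference from the question.
    @Configuration
    public class NettyEncoderConfig {

        @Bean("myCustomEncoder")
        public StringEncoder myCustomEncoder() {
            return new StringEncoder();
        }

        @Bean
        public RouteBuilder nettyRoute() {
            return new RouteBuilder() {
                @Override
                public void configure() {
                    from("direct:send")
                        .to("netty:tcp://localhost:5150?encoders=#myCustomEncoder&sync=false");
                }
            };
        }
    }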


Re: Apache camel running on multiple nodes

2020-09-30 Thread Andrea Cosentino
I believe you already have an answer on this.

Il giorno gio 1 ott 2020 alle ore 07:50 SRAVAN KUMAR <
pabbasrava...@gmail.com> ha scritto:

> Hi Team,
>
>  I want to run the Apache Camel file route on multiple nodes; the source
> folder is the same for both nodes and I expect the route to process files
> in parallel
>
> > when I tried running Camel on multiple nodes, files are processed in
> a round-robin fashion (only one node processing at a time)
> > Example: if Node1 picks file1 then Node2 waits for Node1 to
> process;
> > after Node1's processing completes, Node2 will pick a new file (Node1 will
> wait for Node2 to process its file)
>
>
> MY FILE ROUTE :
> from("file:\\\folder?maxMessagesPerPoll=1=true&
> readLockMinLength=0=changed=1000
> eadLockCheckInterval=1000)
>
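
For what it's worth, when several nodes poll the same shared folder the usual
approach is to make sure each file is claimed by exactly one node, typically a
read lock plus a shared idempotent repository; whether that addresses the
round-robin behaviour above depends on the setup, so treat the following as a
sketch only (folder and bean names are placeholders):

    import org.apache.camel.builder.RouteBuilder;

    // Sketch: shared-folder consumer with a read lock and a shared idempotent
    // repository (the #fileRepo bean would be a cluster-wide store, e.g. JDBC).
    public class SharedFolderRoute extends RouteBuilder {
        @Override
        public void configure() {
            from("file://shared/folder"
                    + "?readLock=changed"
                    + "&readLockCheckInterval=1000"
                    + "&readLockMinLength=0"
                    + "&idempotent=true"
                    + "&idempotentRepository=#fileRepo"
                    + "&maxMessagesPerPoll=10")
                .log("Processing ${file:name} on this node")
                .to("direct:process");
        }
    }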


Re: Camel 3.4.4 feature:install camel-json-validator causes an error

2020-09-28 Thread Andrea Cosentino
We have a verify phase in camel-karaf. This seems pretty weird. I have to
check. Can you please open an issue? Thanks

Il lun 28 set 2020, 21:38 Gerald Kallas  ha scritto:

> Dear all,
>
> I'm trying to install
>
> feature:install camel-json-validator
>
> on Karaf 4.2.9. It causes an error
>
> java.lang.ArrayIndexOutOfBoundsException: Index 19 out of bounds for
> length 19
> at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:576)
> at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:494)
> at aQute.bnd.osgi.Clazz.parseClassFileWithCollector(Clazz.java:483)
> at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:473)
> at aQute.bnd.osgi.Analyzer.analyzeJar(Analyzer.java:2177)
> at aQute.bnd.osgi.Analyzer.analyzeBundleClasspath(Analyzer.java:2083)
> at aQute.bnd.osgi.Analyzer.analyze(Analyzer.java:138)
> at aQute.bnd.osgi.Analyzer.calcManifest(Analyzer.java:616)
> at org.ops4j.pax.swissbox.bnd.BndUtils.createBundle(BndUtils.java:161)
> at
> org.ops4j.pax.url.wrap.internal.Connection.getInputStream(Connection.java:83)
> at java.base/java.net.URL.openStream(URL.java:1140)
> at
> org.apache.karaf.features.internal.download.impl.SimpleDownloadTask.download(SimpleDownloadTask.java:78)
> at
> org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60)
> at
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
> java.lang.ArrayIndexOutOfBoundsException: Index 19 out of bounds for
> length 19
> at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:576)
> at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:494)
> at aQute.bnd.osgi.Clazz.parseClassFileWithCollector(Clazz.java:483)
> at aQute.bnd.osgi.Clazz.parseClassFile(Clazz.java:473)
> at aQute.bnd.osgi.Analyzer.analyzeJar(Analyzer.java:2177)
> at aQute.bnd.osgi.Analyzer.analyzeBundleClasspath(Analyzer.java:2083)
> at aQute.bnd.osgi.Analyzer.analyze(Analyzer.java:138)
> at aQute.bnd.osgi.Analyzer.calcManifest(Analyzer.java:616)
> at org.ops4j.pax.swissbox.bnd.BndUtils.createBundle(BndUtils.java:161)
> at
> org.ops4j.pax.url.wrap.internal.Connection.getInputStream(Connection.java:83)
> at java.base/java.net.URL.openStream(URL.java:1140)
> at
> org.apache.karaf.features.internal.download.impl.SimpleDownloadTask.download(SimpleDownloadTask.java:78)
> at
> org.apache.karaf.features.internal.download.impl.AbstractRetryableDownloadTask.run(AbstractRetryableDownloadTask.java:60)
> at
> java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
> at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
> at
> java.base/java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:304)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
> at
> java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
> at java.base/java.lang.Thread.run(Thread.java:834)
>
> Should I raise a ticket?
>
> Best
> Gerald


Re: Performance regression with bean and ognl expressions in Simple language version 3.4.x

2020-09-21 Thread Andrea Cosentino
Thanks a lot!

Il giorno lun 21 set 2020 alle ore 14:22 Denis Chirov
 ha scritto:

> Hi,
>
> The sample and a Java Flight recording were uploaded on GitHub:
> https://github.com/dchirov/camel-performance-sample.git
>
> Best Regards, Denis
>
> -Original Message-
> From: Claus Ibsen 
> Sent: Saturday, September 19, 2020 6:28 PM
> To: users@camel.apache.org
> Subject: Re: Performance regression with bean and ognl expressions in
> Simple language version 3.4.x
>
> Hi
>
> Many things have changed of course when you go from a major version v2 to
> v3.
> Can you put together a very small example application that can run
> standalone that can be used to reproduce the issue. And if it can run
> outside Spring Boot with just a basic public static void main then its
> maybe even easier.
>
> And then put the sample somewhere like on github or create a JIRA ticket
> and attach as .zip.
>
>
> On Thu, Sep 17, 2020 at 2:32 AM Corneliu Chitic
>  wrote:
> >
> > Hi,
> >
> > we've identified a performance regression while running same code with
> Apache Camel 3.4.3 + Spring Boot vs Apache Camel 2.24.2 with Spring
> framework 5.1.9. We've migrated one application to this LTS version and we
> face this impact.
> > The main bottleneck is the synchronized block from:
> org.apache.camel.impl.engine.AbstractCamelContext.resolveLanguage(String).
> The root cause is the time spent to validate Simple expressions when using
> bean language (${bean:name?method=something}) or OGNL like calls to POJO
> methods (${exchangeProperty.pojo.method}). According to the stack traces
> the new version spends time to allocate the bean + full setup of it.
> Blocking times are quite high (average 100ms, max could be ~300ms) and as
> the number of parallel processing threads increases it goes up steadily.
> >
> > Has anything changed in version 3.x (or more precisely 3.4.x)? The
> changelogs and upgrade tutorial didn't suggested anything in this area.
> > Is there any configuration flag that would allow us to switch back to
> version 2.x mode of working for this functionality?
> >
> > We have run repeated trials and have consistent results with both
> versions; we have a project setup to demo this and also some Java Flight
> recordings for comparison. I don't think I can attach anything to this
> maillist, please let me know how I can provide any additional input if
> needed.
> >
> > Thank you, Corneliu
> > This email is subject to Computaris email terms of use:
> > https://www.computaris.com/email-terms-use/
>
>
>
> --
> Claus Ibsen
> -
> http://davsclaus.com @davsclaus
> Camel in Action 2: https://www.manning.com/ibsen2
> This email is subject to Computaris email terms of use:
> https://www.computaris.com/email-terms-use/
>


Re: SAP RFC calls with Apache Camel

2020-09-15 Thread Andrea Cosentino
Hello,

Currently no, there is no examples/attempts in Apache Camel codebase

Cheers
Andrea

Il mar 15 set 2020, 21:22 Gerald Kallas  ha scritto:

> Hi all,
>
> I know that there's a camel-sap component from RedHat Fuse. Is there a
> similar way to interact with SAP over RFC with Apache Camel?
>
> Thanks in advance
> - Gerald
>


Re: [ANNOUNCEMENT] Apache Camel 3.5.0 (LTS) with Spring Boot and Karaf Sub-Projects Released

2020-09-08 Thread Andrea Cosentino
For point 2: the backports on the LTS branch are just for bugs. So it's a patch
release branch. It will be supported for one year from the 3.4.0 release,
so June 2021.

More details: https://camel.apache.org/blog/2020/03/LTS-Release-Schedule/

Il mar 8 set 2020, 14:37 ski n  ha scritto:

> I do it through the mailing list as the blog (
> https://camel.apache.org/blog/2020/09/Camel35-Whatsnew/) don't allow
> comments.
>
> 1) What are the use case of the new "LambdaRouteBuilder" when should I use
> it and when not?
> 2) How long will fixes be backported to LTS 3.4.X. Is that all fixes or a
> subset?
> 3) Will this (or the LTS) release also land in ServiceMix (don't see a lot
> of development going on there)?
>
> Regards,
>
> Raymond
>
> Op za 5 sep. 2020 om 10:33 schreef Gregor Zurowski <
> gre...@list.zurowski.org
> >:
>
> > The Camel community announces the immediate availability of Camel,
> > Camel Spring Boot and Camel Karaf 3.5.0, a new minor release with 240
> > new features, improvements and fixes.
> >
> > The artifacts are published and ready for you to download [1] either
> > from the Apache mirrors or from the Maven Central repository. For more
> > details please take a look at the release notes [2, 3].
> >
> > Many thanks to all who made this release possible.
> >
> > On behalf of the Camel PMC,
> > Gregor Zurowski
> >
> > [1] http://camel.apache.org/download.html
> > [2] https://camel.apache.org/blog/release-3-5-0.html
> > [3] https://camel.apache.org/releases/release-3.5.0/
> >
>
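
On point 1, which was not covered above: LambdaRouteBuilder is a functional
interface, so small routes can be declared as lambdas instead of full
RouteBuilder subclasses; in Spring Boot such beans are picked up automatically.
A minimal sketch (endpoint URIs are placeholders):

    import org.apache.camel.builder.LambdaRouteBuilder;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    // Sketch: a whole route declared as a lambda bean.
    @Configuration
    public class MyRoutes {

        @Bean
        public LambdaRouteBuilder timerRoute() {
            return rb -> rb.from("timer:foo?period=5000")
                           .log("tick")
                           .to("log:demo");
        }
    }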


Re: claimCheck availability within ServiceMix 7.0.1 (Apache Camel 2.16.5)

2020-09-01 Thread Andrea Cosentino
No, ClaimCheck has been introduced in 2.21.0

Il giorno mar 1 set 2020 alle ore 14:47 Gerald Kallas 
ha scritto:

> Dear all,
>
> for a legacy project we're working with Apache Servicemix 7.0.1. This
> contains Camel 2.16.5.
>
> While trying to use the claimCheck pattern I'm getting
>
> org.springframework.beans.factory.xml.XmlBeanDefinitionStoreException:
> Line 106 in XML document from URL
> [bundle://425.8:0/META-INF/spring/eib.routes.mappings.RDHRDH011.xml] is
> invalid; nested exception is org.xml.sax.SAXParseException:
> cvc-complex-type.2.4.a: Invalid content was found starting with element
> 'claimCheck'.
>
> Is claimCheck not supported with Camel 2.16.5?
>
> The route starts with
>
> 
>
> http://www.springframework.org/schema/beans; xmlns:xsi="
> http://www.w3.org/2001/XMLSchema-instance;
> xmlns:camel="http://camel.apache.org/schema/spring; xmlns:prop="
> http://camel.apache.org/schema/placeholder;
> xsi:schemaLocation="http://camel.apache.org/schema/spring
> http://camel.apache.org/schema/spring/camel-spring.xsd
> http://www.springframework.org/schema/beans
> http://www.springframework.org/schema/beans/spring-beans.xsd;>
>
>  class="de.ag.cas.eib.mapping.smx.MappingServiceImpl" />
>
> http://camel.apache.org/schema/spring; streamCache="true">
>
> Best
> Gerald
>
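
Since the EIP is not available in 2.16.5, a rough approximation (not a drop-in
replacement for claimCheck) is to park the original body in an exchange property
and restore it later in the same route, using the Spring XML DSL already in use
above:

    <!-- Sketch only: emulate a simple push/pop claim check in Camel 2.16.x -->
    <setProperty propertyName="originalBody">
        <simple>${body}</simple>
    </setProperty>

    <!-- ... transform / enrich the message here ... -->

    <setBody>
        <simple>${property.originalBody}</simple>
    </setBody>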


Re: Help required for Camel components deployed in cloud

2020-08-31 Thread Andrea Cosentino
There are no cloud formation templates available in the project actually.

Il lun 31 ago 2020, 12:06 senthil kumar sk  ha
scritto:

> Hi ,
>
> We require help with Camel components deployed in the AWS cloud.
> Please share any CloudFormation template if one is available.
>
>
>
> Thanks
> SenthilKumar SK
>


Re: No signature of method: java.util.LinkedHashMap.resolve() error when get property in groovy script

2020-08-28 Thread Andrea Cosentino
It's probably outdated. Please open an issue so we won't forget. Thanks

Il ven 28 ago 2020, 18:06 Mikhail Lukyanov  ha
scritto:

>
> There is the same thing, it is written that you can call the property like
> this
>
> [image: image.png]
>
>
> пт, 28 авг. 2020 г. в 19:01, Andrea Cosentino :
>
>> You're looking at wrong documentation.
>>
>> https://camel.apache.org/components/2.x/languages/groovy-language.html
>>
>> Il ven 28 ago 2020, 17:58 Mikhail Lukyanov  ha
>> scritto:
>>
>> > Hello, everyone.
>> >
>> > I'm using Camel version 2.25.1.
>> >
>> > I get an error when I want to get the property as indicated here
>> >
>> >
>> https://camel.apache.org/components/latest/languages/groovy-language.html#_using_properties_function
>> through
>> > properties.resolve("test_name") (in JavaScript this works)
>> > [image: image.png]
>> >
>> >  Error
>> >
>> >>
>> >> No signature of method: java.util.LinkedHashMap.resolve() is applicable
>> >> for argument types: (java.lang.String) values: [test_name]
>> >> Possible solutions: remove(java.lang.Object), remove(java.lang.Object),
>> >> remove(java.lang.Object, java.lang.Object), clone()
>> >> at
>> >>
>> org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:58)
>> >> at
>> >>
>> org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:49)
>> >> at
>> >>
>> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
>> >> at
>> >>
>> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
>> >> at
>> >>
>> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
>> >> at
>> script1598627841751704336236.run(script1598627841751704336236.groovy:1)
>> >
>> >
>> > And I also found the same question on
>> >
>> https://stackoverflow.com/questions/25472136/apache-camel-groovy-scripting/39465657
>> > .
>> >
>> > It turns out that there is an error in the documentation and you can't
>> get
>> > a property in the groovy?
>> > --
>> > *With best regards, Lukyanov Mikhail*
>> > *Tel: **+7-909-69-71-547*
>> >
>>
>
>
> --
> *With best regards, Lukyanov Mikhail*
> *Tel: **+7-909-69-71-547​*
>
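
For completeness: in the Camel 2.x groovy language the 'properties' binding is
the exchange property map (hence the LinkedHashMap in the error), so there is no
resolve() method on it. A sketch of what does work in 2.25.x:

    // exchange property access in a Camel 2.x groovy expression
    def value = properties['test_name']
    // or equivalently
    def same = exchange.getProperty('test_name')

    // if a properties-component placeholder was meant instead (assumption):
    def resolved = exchange.context.resolvePropertyPlaceholders('{{test_name}}')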


Re: No signature of method: java.util.LinkedHashMap.resolve() error when get property in groovy script

2020-08-28 Thread Andrea Cosentino
You're looking at the wrong documentation.

https://camel.apache.org/components/2.x/languages/groovy-language.html

Il ven 28 ago 2020, 17:58 Mikhail Lukyanov  ha
scritto:

> Hello, everyone.
>
> I'm using Camel version 2.25.1.
>
> I get an error when I want to get the property as indicated here
>
> https://camel.apache.org/components/latest/languages/groovy-language.html#_using_properties_function
>  through
> properties.resolve("test_name") (in JavaSctipt this works)
> [image: image.png]
>
>  Error
>
>>
>> No signature of method: java.util.LinkedHashMap.resolve() is applicable
>> for argument types: (java.lang.String) values: [test_name]
>> Possible solutions: remove(java.lang.Object), remove(java.lang.Object),
>> remove(java.lang.Object, java.lang.Object), clone()
>> at
>> org.codehaus.groovy.runtime.ScriptBytecodeAdapter.unwrap(ScriptBytecodeAdapter.java:58)
>> at
>> org.codehaus.groovy.runtime.callsite.PojoMetaClassSite.call(PojoMetaClassSite.java:49)
>> at
>> org.codehaus.groovy.runtime.callsite.CallSiteArray.defaultCall(CallSiteArray.java:48)
>> at
>> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:113)
>> at
>> org.codehaus.groovy.runtime.callsite.AbstractCallSite.call(AbstractCallSite.java:125)
>> at script1598627841751704336236.run(script1598627841751704336236.groovy:1)
>
>
> And I also found the same question on
> https://stackoverflow.com/questions/25472136/apache-camel-groovy-scripting/39465657
> .
>
> It turns out that there is an error in the documentation and you can't get
> a property in the groovy?
> --
> *With best regards, Lukyanov Mikhail*
> *Tel: **+7-909-69-71-547*
>


Re: camel-undertow in Camel > 3.2

2020-08-04 Thread Andrea Cosentino
Hello,

Yes. The reason is here: https://issues.apache.org/jira/browse/CAMEL-14939

From Undertow 2.1.0 onwards they won't support OSGi anymore, and more and more
third party libraries are doing the same thing.

Il giorno mar 4 ago 2020 alle ore 12:04 Gerald Kallas
 ha scritto:

> Hi folks,
>
> I was trying to setup a Blueprint route with undertow. In the
> documentation I saw
>
> "The following components is no longer supported in OSGi and has been
> removed from the Camel Karaf features file: camel-undertow, ... ."
>
> Does that finally mean that undertow could no longer be used in Camel
> Blueprint? Several things seem to be much easier with undertow than jetty ..
>
> Best
> - Gerald
>


Re: Redesigning the Camel website front page

2020-07-31 Thread Andrea Cosentino
Personally I really like it, but will we remove that computer on the right and
will we add a Camel logo there?

I find that image a bit like an example site...

Other than that, super cool and big +1

Il giorno ven 31 lug 2020 alle ore 18:54 Zoran Regvart 
ha scritto:

> Hi Cameleers,
> in the past few months we had a lot of help with the Camel website
> from two interns from the Outreachy program.
>
> We're now at a point where we're redesigning the website front page
> and we would like to hear some feedback from you. Take a look at the
> preview:
>
> https://deploy-preview-442--camel.netlify.app/
>
> And if you wish to provide some feedback chime in on the pull request:
>
> https://github.com/apache/camel-website/pull/442
>
> thanks!
>
> zoran
> --
> Zoran Regvart
>


R: Re: OSGI service Call using Camel route

2020-07-19 Thread Andrea Cosentino
For help with Fuse support please ask Red Hat support

Inviato da Yahoo Mail su Android 
 
  Il dom, 19 lug, 2020 alle 21:30, vignesh k ha scritto:   
// Blueprint referencing the OSGI service


http://www.osgi.org/xmlns/blueprint/v1.0.0; 
xmlns:cm="http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0;>

    
    
    http://camel.apache.org/schema/blueprint;>
        
    


// Camel Java DSL

public class HelloRoutes extends RouteBuilder {


@Override
public void configure() throws Exception {

from("timer:foo?repeatCount=1")
    .bean(com.test.api.Hello.class, "getGreeting")
      .log("The message contains: ${body}");
}
}

//Actual Error Trace

 ERROR [Blueprint Event Dispatcher: 1] Error occurred during starting 
CamelContext: hello-routes-service
org.apache.camel.FailedToCreateRouteException: Failed to create route route19 
at: >>> Bean[com.test.api.Hello] <<< in route: 
Route(route19)[[From[timer:foo?repeatCount=1]] -> [Bean[com because of 
org.apache.camel.component.bean.MethodNotFoundException: Static method with 
name: getGreeting not found on class: com.test.api.Hello
        at 
org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:1303) 
~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.model.RouteDefinition.addRoutes(RouteDefinition.java:204) 
~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.impl.DefaultCamelContext.startRoute(DefaultCamelContext.java:1143)
 ~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.impl.DefaultCamelContext.startRouteDefinitions(DefaultCamelContext.java:3729)
 ~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.impl.DefaultCamelContext.doStartCamel(DefaultCamelContext.java:3443)
 ~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.impl.DefaultCamelContext.access$000(DefaultCamelContext.java:209)
 ~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:3251) 
~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.impl.DefaultCamelContext$2.call(DefaultCamelContext.java:3247) 
~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.impl.DefaultCamelContext.doWithDefinedClassLoader(DefaultCamelContext.java:3270)
 ~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.impl.DefaultCamelContext.doStart(DefaultCamelContext.java:3247)
 ~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.support.ServiceSupport.start(ServiceSupport.java:61) 
~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.impl.DefaultCamelContext.start(DefaultCamelContext.java:3163) 
~[62:org.apache.camel.camel-core:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.blueprint.BlueprintCamelContext.start(BlueprintCamelContext.java:255)
 ~[60:org.apache.camel.camel-blueprint:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.blueprint.BlueprintCamelContext.maybeStart(BlueprintCamelContext.java:297)
 ~[60:org.apache.camel.camel-blueprint:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.camel.blueprint.BlueprintCamelContext.blueprintEvent(BlueprintCamelContext.java:188)
 [60:org.apache.camel.camel-blueprint:2.21.0.fuse-730078-redhat-1]
        at 
org.apache.aries.blueprint.container.BlueprintEventDispatcher$3.call(BlueprintEventDispatcher.java:190)
 [51:org.apache.aries.blueprint.core:1.10.1]
        at 
org.apache.aries.blueprint.container.BlueprintEventDispatcher$3.call(BlueprintEventDispatcher.java:188)
 [51:org.apache.aries.blueprint.core:1.10.1]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
        at 
java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [?:?]
        at java.util.concurrent.FutureTask.run(FutureTask.java:266) [?:?]
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
 [?:?]
        at 
java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
 [?:?]
        at 
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) 
[?:?]
        at 
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) 
[?:?]
        at java.lang.Thread.run(Thread.java:745) [?:?]
Caused by: org.apache.camel.RuntimeCamelException: 
org.apache.camel.component.bean.MethodNotFoundException: Static method with 
name: getGreeting not found on class: 
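
One note on the trace: Camel typically reports "Static method with name: ... not found"
when it only has the Class literal and does not create an instance of it (for example
when the class has no public no-arg constructor), so the method lookup is restricted to
static methods. A minimal sketch of one way around it, assuming com.test.api.Hello is
instantiable and getGreeting is an instance method, is to give Camel an instance (or a
registry bean id) instead of the Class:

// Minimal sketch, not the original poster's code
import org.apache.camel.builder.RouteBuilder;

public class HelloRoutes extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("timer:foo?repeatCount=1")
            // an instance (or a bean id, e.g. .bean("hello", "getGreeting"))
            // avoids the static-method lookup; making getGreeting static also works
            .bean(new com.test.api.Hello(), "getGreeting")
            .log("The message contains: ${body}");
    }
}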

Re: Camel Kafka Connector - running locally

2020-07-12 Thread Andrea Cosentino
Maybe report the actual error, otherwise it's a bit hard to help. Report it in
the issue tracker on GitHub.

Il dom 12 lug 2020, 17:21 Vikas Jaiswal  ha scritto:

> Yes it is.. I unzip a connector in the plugin location which has its own
> directory.
>
> -Original Message-----
> From: Andrea Cosentino [mailto:anco...@gmail.com]
> Sent: 12 July 2020 20:15
> To: users@camel.apache.org
> Subject: Re: Camel Kafka Connector - running locally
>
>
> Each connector needs to be in its own directory. What is your plugin
> path location?
>
> Il dom 12 lug 2020, 16:32 Vikas Jaiswal  ha scritto:
>
> > Hi Camel Team,
> > I tried running one of the examples locally for Kafka connect:
> >
> >
> https://github.com/apache/camel-kafka-connector-examples/tree/master/file/file-source
> >
> > I get lots of camel exception. I have set the "plugin.path" to the
> > location where I have unzipped the connector.
> >
> > I do get the file in the queue. Also how do I read the file that is
> there?
> > Is there a way to read the file record by record and upload to the queue?
> >
> >
> > Regards,
> > Vikas
> >
> >
>
>
>
>
>


Re: Camel Kafka Connector - running locally

2020-07-12 Thread Andrea Cosentino
Each connector needs to be in its own directory. What is your plugin
path location?

Il dom 12 lug 2020, 16:32 Vikas Jaiswal  ha scritto:

> Hi Camel Team,
> I tried running one of the examples locally for Kafka connect:
>
> https://github.com/apache/camel-kafka-connector-examples/tree/master/file/file-source
>
> I get lots of camel exception. I have set the "plugin.path" to the
> location where I have unzipped the connector.
>
> I do get the file in the queue. Also how do I read the file that is there?
> Is there a way to read the file record by record and upload to the queue?
>
>
> Regards,
> Vikas
>
>
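
The layout the reply describes, with hypothetical paths and connector names (each
connector archive unzipped into its own sub-directory under the plugin path):

# connect-standalone.properties (illustrative values)
plugin.path=/opt/kafka/connectors

# one directory per connector, containing all of its jars, e.g.
#   /opt/kafka/connectors/camel-file-kafka-connector/*.jar
#   /opt/kafka/connectors/camel-aws2-sqs-kafka-connector/*.jar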


[SECURITY] CVE-2020-11994 - Server-Side Template Injection and arbitrary file disclosure on Camel templating components

2020-07-08 Thread Andrea Cosentino
A new security advisory has been released for Apache Camel, that is fixed in
the recent 2.25.2 and 3.4.0 releases.

CVE-2020-11994: Server-Side Template Injection and arbitrary file disclosure on 
Camel templating components

Severity: MEDIUM

Vendor: The Apache Software Foundation

Versions Affected: Camel 2.25.0 to 2.25.1, Camel 3.0.0 to 3.3.0. The 
unsupported Camel 2.x (2.24 and earlier) versions may be also affected.

Description: Server-Side Template Injection and arbitrary file disclosure on 
Camel templating components

Mitigation: 2.x users should upgrade to 2.25.2, 3.x users should upgrade to
3.4.0. The JIRA tickets https://issues.apache.org/jira/browse/CAMEL-15013 and
https://issues.apache.org/jira/browse/CAMEL-15050 refer to the various commits
that resolved the issue, and have more details.

Credit: This issue was discovered by GHSL team member @pwntester (Alvaro Muñoz)

On behalf of the Apache Camel PMC

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd


Re: Basic authentication of WAB using Jaas in Karaf - the trick doesn't work any longer w/ Karaf 4.2.9 and Camel 3.4.0

2020-06-28 Thread Andrea Cosentino
I think it's good to have the details shared in public.

Il lun 29 giu 2020, 07:30 Jean-Baptiste Onofre  ha scritto:

> Hi,
>
> Yes Karaf 4.2.9 upgraded to Pax Web 7.2.15 and Jetty 9.4.28.v20200408.
>
> Can you please send a private message about issues you have with Karaf
> 4.2.9 and Camel 3.4.0 (as I’m working on camel karaf for 3.5.0) ?
>
> Thanks,
> Regards
> JB
>
> > Le 28 juin 2020 à 22:02, Gerald Kallas  a écrit :
> >
> > I tested the combination Karaf 4.2.8 and Camel 3.3.0, with this the
> workaround works as expected. Seems that Jetty has been updated in Karaf
> 4.2.9?
> >
> > (The combination Karaf 4.2.8 and Camel 3.4.0 doesn't work due to other
> issues.)
> >
> >> Gerald Kallas  hat am 28.06.2020 18:12
> geschrieben:
> >>
> >>
> >> Hi all,
> >>
> >> I was updating the runtime to Karaf 4.2.9 and Camel 3.4.0.
> >>
> >> after removing one of the org.eclipse.jetty.jaas.JAASLoginService
> entries in my etc/jetty.xml I'm getting an error as attached below.
> >>
> >> Neither hawtio nor my servlet are working any longer. Seems that now
> both entries of org.eclipse.jetty.jaas.JAASLoginService are mandatory.
> >>
> >> With both entries, as you found Grzegorz, the authentication doesn't
> work.
> >>
> >> Should I create a JIRA ticket and if yes, within Karaf? Or maybe you
> have another workaround for that behaviour?
> >>
> >> Best
> >> - Gerald
> >>
> >>
> >> 2020-06-28T16:06:47,673 | ERROR | FelixStartLevel  |
> HttpServiceStarted   | 266 - org.ops4j.pax.web.pax-web-runtime
> - 7.2.16 | Could not start the servlet context for context path []
> >> java.lang.SecurityException: AuthConfigFactory error:
> java.lang.ClassNotFoundException:
> org.apache.geronimo.components.jaspi.AuthConfigFactoryImpl not found by
> org.apache.geronimo.specs.geronimo-jaspic_1.0_spec [169]
> >>at
> javax.security.auth.message.config.AuthConfigFactory.getFactory(AuthConfigFactory.java:77)
> ~[?:?]
> >>at
> org.eclipse.jetty.security.jaspi.JaspiAuthenticatorFactory.getAuthenticator(JaspiAuthenticatorFactory.java:90)
> ~[?:?]
> >>at
> org.eclipse.jetty.security.SecurityHandler.doStart(SecurityHandler.java:394)
> ~[?:?]
> >>at
> org.eclipse.jetty.security.ConstraintSecurityHandler.doStart(ConstraintSecurityHandler.java:419)
> ~[?:?]
> >>at
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
> ~[?:?]
> >>at
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> ~[?:?]
> >>at
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> ~[?:?]
> >>at
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
> ~[?:?]
> >>at
> org.eclipse.jetty.server.handler.ScopedHandler.doStart(ScopedHandler.java:120)
> ~[?:?]
> >>at
> org.eclipse.jetty.server.session.SessionHandler.doStart(SessionHandler.java:504)
> ~[?:?]
> >>at
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
> ~[?:?]
> >>at
> org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:169)
> ~[?:?]
> >>at
> org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:110)
> ~[?:?]
> >>at
> org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:97)
> ~[?:?]
> >>at
> org.eclipse.jetty.server.handler.ScopedHandler.doStart(ScopedHandler.java:120)
> ~[?:?]
> >>at
> org.eclipse.jetty.server.handler.ContextHandler.startContext(ContextHandler.java:898)
> ~[?:?]
> >>at
> org.eclipse.jetty.servlet.ServletContextHandler.startContext(ServletContextHandler.java:356)
> ~[?:?]
> >>at
> org.ops4j.pax.web.service.jetty.internal.HttpServiceContext.startContext(HttpServiceContext.java:396)
> ~[?:?]
> >>at
> org.eclipse.jetty.server.handler.ContextHandler.doStart(ContextHandler.java:838)
> ~[?:?]
> >>at
> org.eclipse.jetty.servlet.ServletContextHandler.doStart(ServletContextHandler.java:275)
> ~[?:?]
> >>at
> org.ops4j.pax.web.service.jetty.internal.HttpServiceContext.doStart(HttpServiceContext.java:272)
> ~[?:?]
> >>at
> org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:72)
> ~[?:?]
> >>at
> org.ops4j.pax.web.service.jetty.internal.JettyServerImpl$1.start(JettyServerImpl.java:329)
> ~[?:?]
> >>at
> org.ops4j.pax.web.service.internal.HttpServiceStarted.registerServlet(HttpServiceStarted.java:255)
> [!/:?]
> >>at
> org.ops4j.pax.web.service.internal.HttpServiceStarted.registerServlet(HttpServiceStarted.java:226)
> [!/:?]
> >>at
> org.ops4j.pax.web.service.internal.HttpServiceStarted.registerServlet(HttpServiceStarted.java:210)
> [!/:?]
> >>at
> org.ops4j.pax.web.service.internal.HttpServiceProxy.registerServlet(HttpServiceProxy.java:69)
> [!/:?]
> >>at
> 

Re: Camel to not lookup OSGi services when looking up a bean

2020-06-25 Thread Andrea Cosentino
What is the camel version for this?

Il giorno gio 25 giu 2020 alle ore 17:09 Martin Lichtin
 ha scritto:

> Is there a way to tell Camel to not search OSGi services when looking up a
> bean?
>
> It seems by default it's continuously looking up the service references,
> would like to turn this off.
>
> Stack TraceSample CountPercentage(%)
> java.util.TreeMap.getEntryUsingComparator(Object)173.753
> java.util.TreeMap.getEntry(Object)173.753
>java.util.TreeMap.get(Object)173.753
> org.apache.felix.framework.ServiceRegistrationImpl.getProperty(String)
> 112.428
> org.apache.felix.framework.ServiceRegistrationImpl$ServiceReferenceMap.get(Object)
> 112.428
> org.apache.felix.framework.capabilityset.CapabilitySet.match(Set,
> SimpleFilter)112.428
> org.apache.felix.framework.capabilityset.CapabilitySet.match(SimpleFilter,
> boolean)91.987
> org.apache.felix.framework.ServiceRegistry.getServiceReferences(String,
> SimpleFilter)91.987
> org.apache.felix.framework.Felix.getServiceReferences(BundleImpl, String,
> String, boolean)91.987
> org.apache.felix.framework.Felix.getAllowedServiceReferences(BundleImpl,
> String, String, boolean)91.987
> org.apache.felix.framework.BundleContextImpl.getServiceReferences(String,
> String)91.987
> org.apache.camel.core.osgi.OsgiServiceRegistry.lookupByName(String) 9
> 1.987
> org.apache.camel.impl.CompositeRegistry.lookupByName(String)9 1.987
> org.apache.camel.impl.PropertyPlaceholderDelegateRegistry.lookupByName(String)
> 91.987
> org.apache.camel.component.bean.RegistryBean.lookupBean()9 1.987
> org.apache.camel.component.bean.RegistryBean.getBean()91.987
> org.apache.camel.component.bean.AbstractBeanProcessor.process(Exchange,
> AsyncCallback)91.987
> org.apache.camel.component.bean.BeanProcessor.process(Exchange,
> AsyncCallback)91.987
> org.apache.camel.component.bean.BeanProducer.process(Exchange,
> AsyncCallback)91.987
> org.apache.camel.processor.SendProcessor.process(Exchange,
> AsyncCallback)91.987
> org.apache.camel.processor.RedeliveryErrorHandler.process(Exchange,
> AsyncCallback)91.987
> org.apache.camel.processor.CamelInternalProcessor.process(Exchange,
> AsyncCallback)91.987
> org.apache.camel.processor.CamelInternalProcessor.process(Exchange,
> AsyncCallback)91.987
> org.apache.camel.processor.DelegateAsyncProcessor.process(Exchange) 9
> 1.987
> org.apache.camel.http.common.CamelServlet.doService(HttpServletRequest,
> HttpServletResponse)91.987
>
> - Martin
>
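
One option that may reduce the per-exchange registry traffic, sketched for a Camel 2.x
route with a hypothetical bean id "orderService": the bean endpoint's cache option
resolves the bean from the registry once and reuses it, instead of looking it up (and
thus querying OSGi service references) on every exchange.

import org.apache.camel.builder.RouteBuilder;

public class CachedBeanRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("servlet:orders")
            // cache=true: look the bean up once, then reuse the same instance
            .to("bean:orderService?method=process&cache=true");
    }
}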


Re: camel-velocity warnings in Camel 3.4.0

2020-06-24 Thread Andrea Cosentino
https://issues.apache.org/jira/browse/CAMEL-15240

Il giorno gio 25 giu 2020 alle ore 06:36 Andrea Cosentino
 ha scritto:

> This is something in the component: we are setting properties there, and we
> need to review the deprecation. Anyway, it seems they just changed the names.
>
> --
> Andrea Cosentino
> --
> Apache Camel PMC Chair
> Apache Karaf Committer
> Apache Servicemix PMC Member
> Email: ancosen1...@yahoo.com
> Twitter: @oscerd2
> Github: oscerd
>
>
>
>
>
>
> On Thursday, June 25, 2020, 01:39:06 AM GMT+2, Gerald Kallas <
> catsh...@mailbox.org> wrote:
>
>
>
>
>
> Dear all,
>
> while executing camel-velocity templates I'm getting
>
> deprecation | 207 - org.apache.velocity.engine-core - 2.1.0 |
> configuration key 'class.resource.loader.class' has been deprecated in
> favor of 'resource.loader.class.class'
> deprecation | 207 - org.apache.velocity.engine-core - 2.1.0 |
> configuration key 'resource.loader' has been deprecated in favor of
> 'resource.loaders'
> deprecation | 207 - org.apache.velocity.engine-core - 2.1.0 |
> configuration key 'class.resource.loader.description' has been deprecated
> in favor of 'resource.loader.class.description'
>
> I'm just calling the velocity to process templates.
>
> What does that mean and how to avoid this?
>
> Best
> - Gerald
>


Re: camel-velocity warnings in Camel 3.4.0

2020-06-24 Thread Andrea Cosentino
This is something in the component: we are setting properties there, and we need
to review the deprecation. Anyway, it seems they just changed the names.

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd






On Thursday, June 25, 2020, 01:39:06 AM GMT+2, Gerald Kallas 
 wrote: 





Dear all,

while executing camel-velocity templates I'm getting

deprecation | 207 - org.apache.velocity.engine-core - 2.1.0 | configuration key 
'class.resource.loader.class' has been deprecated in favor of 
'resource.loader.class.class'
deprecation | 207 - org.apache.velocity.engine-core - 2.1.0 | configuration key 
'resource.loader' has been deprecated in favor of 'resource.loaders'
deprecation | 207 - org.apache.velocity.engine-core - 2.1.0 | configuration key 
'class.resource.loader.description' has been deprecated in favor of 
'resource.loader.class.description'

I'm just calling the velocity to process templates.

What does that mean and how to avoid this?

Best
- Gerald


Re: camel release 3.4.0 - struts-core-1.3.8.jar

2020-06-21 Thread Andrea Cosentino
Can you please find out which component it is?

Il giorno lun 22 giu 2020 alle ore 07:32 Vikas Jaiswal 
ha scritto:

> If you include the camel parent pom struts component gets downloaded. Not
> sure which component is using struts.
>
> Regards,
> Vikas
>
> -Original Message-
> From: Andrea Cosentino [mailto:anco...@gmail.com]
> Sent: 22 June 2020 11:00
> To: users@camel.apache.org
> Subject: Re: camel release 3.4.0 - struts-core-1.3.8.jar
>
>
> What component?
>
> Il giorno lun 22 giu 2020 alle ore 07:27 Vikas Jaiswal <
> vika...@saksoft.com>
> ha scritto:
>
> > Hi,
> >  One of the camel components is downloading a very old struts version
> > 1.3.8.jar . This has got security vulnerabilities. Can this please be
> > updated?
> >
> >
> https://www.cvedetails.com/vulnerability-list/vendor_id-45/product_id-6117/version_id-164423/Apache-Struts-1.3.8.html
> > Regards,
> > Vikas
> >
>
>
>
>
>


Re: camel release 3.4.0 - struts-core-1.3.8.jar

2020-06-21 Thread Andrea Cosentino
What component?

Il giorno lun 22 giu 2020 alle ore 07:27 Vikas Jaiswal 
ha scritto:

> Hi,
>  One of the camel components is downloading a very old struts version
> 1.3.8.jar . This has got security vulnerabilities. Can this please be
> updated?
>
> https://www.cvedetails.com/vulnerability-list/vendor_id-45/product_id-6117/version_id-164423/Apache-Struts-1.3.8.html
> Regards,
> Vikas
>


Re: Camel website: documentation or projects

2020-06-18 Thread Andrea Cosentino
I'd say this should be under Documentation, and I'd expect to find
descriptions of the projects and links to the docs, I guess.

One page is for sure clearer than two.

Thanks for your time on this :-)

Il giorno mer 17 giu 2020 alle ore 23:05 Zoran Regvart 
ha scritto:

> Hi Cameleers,
> we have a discussion on pull request #397[1] about merging the
> Documentation and Projects sections on the website. This came about
> from a fair bit of duplication we currently have between those two
> pages and we're trying to consolidate and simplify.
>
> We're looking for community feedback on this:
>  - would you expect this unified page to be under "Projects" or under
> "Documentation"?
>  - what do you expect to find on this page (description, links, etc?)
>
> thanks :)
>
> zoran
>
> [1] https://github.com/apache/camel-website/pull/397
> --
> Zoran Regvart
>


Re: Apache Camel Issue with Spring Boot 2.1.9 RELEASE

2020-06-17 Thread Andrea Cosentino
The keystore password is wrong.

Il mer 17 giu 2020, 22:33 Mrinal Sharma  ha scritto:

> Can you elaborate more?
>
> -Original Message-
> From: Andrea Cosentino 
> Sent: Wednesday, June 17, 2020 3:54 PM
> To: users@camel.apache.org
> Subject: Re: Apache Camel Issue with Spring Boot 2.1.9 RELEASE
>
>
>
> The error is explicit.
>
> Il mer 17 giu 2020, 21:47 Mrinal Sharma  ha
> scritto:
>
> > Hello,
> >
> > I have a Jhipster UAA Server based Spring Boot(2.1.9 RELEASE)
> > application which loads PKCS12 file in UAAServer. Once I added Camel 3
> > Dependency the KeyStore API's started giving errors.
> >
> > --
> > --
> > ---
> >
> > KeyPair keyPair = new KeyStoreKeyFactory(
> >  new
> > ClassPathResource(uaaProperties.getKeyStore().getName()),
> > uaaProperties.getKeyStore().getPassword().toCharArray())
> >  .getKeyPair(uaaProperties.getKeyStore().getAlias());
> >
> > --
> > --
> > ---
> > The error that is thrown is :
> > Caused by: java.io.IOException: keystore password was incorrect
> >
> > Why would addition of Camel 3 cause this error?
> >
> > Thanks,
> > Mrinal Sharma
> >
> >
> >
>


Re: Apache Camel Issue with Spring Boot 2.1.9 RELEASE

2020-06-17 Thread Andrea Cosentino
The error is explicit.

Il mer 17 giu 2020, 21:47 Mrinal Sharma  ha scritto:

> Hello,
>
> I have a Jhipster UAA Server based Spring Boot(2.1.9 RELEASE) application
> which loads PKCS12 file in UAAServer. Once I added Camel 3 Dependency the
> KeyStore API's started giving errors.
>
> ---
>
> KeyPair keyPair = new KeyStoreKeyFactory(
>  new ClassPathResource(uaaProperties.getKeyStore().getName()),
> uaaProperties.getKeyStore().getPassword().toCharArray())
>  .getKeyPair(uaaProperties.getKeyStore().getAlias());
>
> ---
> The error that is thrown is :
> Caused by: java.io.IOException: keystore password was incorrect
>
> Why would addition of Camel 3 cause this error?
>
> Thanks,
> Mrinal Sharma
>
>
>


Re: GSON unmarshalTypeName

2020-05-22 Thread Andrea Cosentino
Hello,

No. Unmarshal expects a data format, a data format definition or a reference
to a data format.

So you'll have to define multiple data formats, one for each unmarshalTypeName.

Il giorno ven 22 mag 2020 alle ore 17:31 Jeremy Ross <
jeremy.g.r...@gmail.com> ha scritto:

> Hi all,
>
> Using the GSON data format, is it possible to specify unmarshalTypeName for
> each invocation of unmarshal()?
>
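
A minimal sketch of what that looks like in the Java DSL (Foo and Bar are hypothetical
target POJOs standing in for your own types):

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.gson.GsonDataFormat;

public class GsonRoutes extends RouteBuilder {

    // hypothetical target types
    public static class Foo { public String name; }
    public static class Bar { public int id; }

    @Override
    public void configure() throws Exception {
        // one data format instance per unmarshal target type
        GsonDataFormat fooFormat = new GsonDataFormat(Foo.class);
        GsonDataFormat barFormat = new GsonDataFormat(Bar.class);

        from("direct:foo").unmarshal(fooFormat).to("mock:foo");
        from("direct:bar").unmarshal(barFormat).to("mock:bar");
    }
}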


Re: [SECURITY] New security advisory CVE-2020-11972 released for Apache Camel

2020-05-14 Thread Andrea Cosentino
Let me add the credit too

Credit: This issue was discovered by Colm O. HEigeartaigh  from Apache Software Foundation

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd






On Thursday, May 14, 2020, 04:24:12 PM GMT+2, Andrea Cosentino 
 wrote: 





A new security advisory has been released for Apache Camel, that is fixed in
the recent 2.25.1 and 3.2.0 releases.

CVE-2020-11972: Apache Camel RabbitMQ enables Java deserialization by default

Severity: MEDIUM

Vendor: The Apache Software Foundation

Versions Affected: Camel 2.25.0, Camel 3.0.0 to 3.1.0. The unsupported Camel 
2.x (2.24 and earlier) versions may be also affected.

Description: Apache Camel RabbitMQ enables Java deserialization by default

Mitigation: 2.x users should upgrade to 2.25.1, 3.x users should upgrade to
3.2.0. The JIRA ticket https://issues.apache.org/jira/browse/CAMEL-14711
refers to the various commits that resolved the issue, and has more details.

On behalf of the Apache Camel PMC

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd


[SECURITY] New security advisory CVE-2020-11973 released for Apache Camel

2020-05-14 Thread Andrea Cosentino
A new security advisory has been released for Apache Camel, that is fixed in
the recent 2.25.1 and 3.2.0 releases.

CVE-2020-11973: Apache Camel Netty enables Java deserialization by default

Severity: MEDIUM

Vendor: The Apache Software Foundation

Versions Affected: Camel 2.25.0, Camel 3.0.0 to 3.1.0. The unsupported Camel 
2.x (2.24 and earlier) versions may be also affected.

Description: Apache Camel Netty enables Java deserialization by default

Mitigation: 2.x users should upgrade to 2.25.1, 3.x users should upgrade to
3.2.0. The JIRA ticket https://issues.apache.org/jira/browse/CAMEL-14447
refers to the various commits that resolved the issue, and has more details.

Credit: This issue was discovered by Colm O. HEigeartaigh  from Apache Software Foundation

On behalf of the Apache Camel PMC

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd


[SECURITY] New security advisory CVE-2020-11972 released for Apache Camel

2020-05-14 Thread Andrea Cosentino
A new security advisory has been released for Apache Camel, that is fixed in
the recent 2.25.1 and 3.2.0 releases.

CVE-2020-11972: Apache Camel RabbitMQ enables Java deserialization by default

Severity: MEDIUM

Vendor: The Apache Software Foundation

Versions Affected: Camel 2.25.0, Camel 3.0.0 to 3.1.0. The unsupported Camel 
2.x (2.24 and earlier) versions may be also affected.

Description: Apache Camel RabbitMQ enables Java deserialization by default

Mitigation: 2.x users should upgrade to 2.25.1, 3.x users should upgrade to
3.2.0. The JIRA ticket https://issues.apache.org/jira/browse/CAMEL-14711
refers to the various commits that resolved the issue, and has more details.

On behalf of the Apache Camel PMC

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd


[SECURITY] New security advisory CVE-2020-11971 released for Apache Camel

2020-05-14 Thread Andrea Cosentino
A new security advisory has been released for Apache Camel, that is fixed in
the recent 2.25.1 and 3.2.0 releases.

CVE-2020-11971: Apache Camel JMX Rebind Flaw Vulnerability

Severity: MEDIUM

Vendor: The Apache Software Foundation

Versions Affected: Camel 2.25.0, Camel 3.0.0 to 3.1.0. The unsupported Camel 
2.x (2.24 and earlier) versions may be also affected.

Description: Apache Camel JMX Rebind Flaw Vulnerability

Mitigation: 2.x users should upgrade to 2.25.1, 3.x users should upgrade to
3.2.0. The JIRA ticket https://issues.apache.org/jira/browse/CAMEL-14811
refers to the various commits that resolved the issue, and has more details.

Credit: This issue was discovered by Colm O. HEigeartaigh  from Apache Software Foundation and Jonathan Gallimore  from Tomitribe

On behalf of the Apache Camel PMC

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd


Re: Availability of camel-netty-http in Camel 3.x?

2020-05-06 Thread Andrea Cosentino
No, we removed the camel-netty-http that came from Camel 2.x.

The camel-netty-http in Camel 3 is the equivalent of camel-netty4-http from
Camel 2.x.

It won't be removed in Camel 3, because there is just one version of the
component: no distinction between Netty and Netty 4, just one netty-http
component.

Il giorno mer 6 mag 2020 alle ore 16:33 Gerald Kallas 
ha scritto:

> Hi all,
>
> I was reading in the migration documentation
>
> "We have removed all deprecated components from Camel 2.x, including the
> old camel-http, camel-hdfs, camel-mina, camel-mongodb, camel-netty,
> camel-netty-http, camel-quartz, camel-restlet and camel-rx components."
>
> Does that mean that camel-netty-http will be removed soon from Camel 3
> (currently it's still available)?
>
> Thanks in advance
> - Gerald
>


Re: Camel Quarkus Core causes Exception

2020-04-30 Thread Andrea Cosentino
Please report it on the GitHub issue tracker of the camel-quarkus project.

Il giorno gio 30 apr 2020 alle ore 11:31 Mikael Andersson Wigander <
mikael.grevs...@gmail.com> ha scritto:

> Hi
>
> I just migrated my quarkus sample app to Camel 1.0.0-M7 and using
> 1.4.1.Final of Quarkus on GraalVM for Java 11 and I now get this exception
> on startup:
>
> 2020-04-30 11:24:03,079 ERROR [io.qua.dep.dev.DevModeMain] (main) Failed
> to start Quarkus: java.lang.NoSuchMethodError: 'void
> io.quarkus.builder.BuildChainBuilder.setClassLoader(java.lang.ClassLoader)'
> at
> io.quarkus.deployment.QuarkusAugmentor.run(QuarkusAugmentor.java:92)
> at
> io.quarkus.runner.bootstrap.AugmentActionImpl.runAugment(AugmentActionImpl.java:245)
> at
> io.quarkus.runner.bootstrap.AugmentActionImpl.createInitialRuntimeApplication(AugmentActionImpl.java:130)
> at
> io.quarkus.runner.bootstrap.AugmentActionImpl.createInitialRuntimeApplication(AugmentActionImpl.java:52)
> at io.quarkus.deployment.dev
> .IsolatedDevModeMain.firstStart(IsolatedDevModeMain.java:61)
> at io.quarkus.deployment.dev
> .IsolatedDevModeMain.accept(IsolatedDevModeMain.java:265)
> at io.quarkus.deployment.dev
> .IsolatedDevModeMain.accept(IsolatedDevModeMain.java:40)
> at
> io.quarkus.bootstrap.app.CuratedApplication.runInCl(CuratedApplication.java:129)
> at
> io.quarkus.bootstrap.app.CuratedApplication.runInAugmentClassLoader(CuratedApplication.java:82)
> at io.quarkus.deployment.dev
> .DevModeMain.start(DevModeMain.java:116)
> at io.quarkus.deployment.dev.DevModeMain.main(DevModeMain.java:56)
>
>
> It is due to the camel-quarkus-core dependency probably because when left
> out this error is not thrown.
>
>
> Sample available here
>
> https://github.com/hakuseki/CamelQuarkusDynamicHttp
>
>
> Pls advice
>
> M


[ANNOUNCEMENT] Apache Camel Quarkus 1.0.0-M7 Released

2020-04-30 Thread Andrea Cosentino
The Camel community announces the immediate availability of Camel-Quarkus 
1.0.0-M7

The artifacts are published and ready for you to download either from the 
Apache mirrors or from the Github repository. For more details you
can have a look at the github repository [1]

Many thanks to all who made this release possible.

[1] apache/camel-quarkus


Re: camel s3 consumer filename

2020-04-27 Thread Andrea Cosentino
Let me correct myself, it's




Il giorno lun 27 apr 2020 alle ore 15:11 Andrea Cosentino 
ha scritto:

> When you're consuming file from S3, the name is set in the
> header CamelAwsS3Key
>
> So you could do something like
>
>  />
> 
>
> I didn't test this.
>
> Il giorno lun 27 apr 2020 alle ore 14:34 Axel Bock <
> axel.bock.mail+l...@gmail.com> ha scritto:
>
>> hi list, since you (read: andrea ;) helped a lot on my last question, I
>> have another one. I consume files from s3. but the files are saved under
>> generic file names in my target directory, which I don't want. I actually
>> want to save them under the name they have in s3.
>>
>> this is my config (broken up and without s ...):
>>
>> > />
>> 
>>
>>
>> ... and those are the undesirable file names:
>>
>> $ ll live_test_in
>> -rw-r--r-- 1 tm staff 0 Apr 24 18:10
>> ID-MacBock-Pro-2-fritz-box-1587744607230-0-1
>> -rw-r--r-- 1 tm staff 0 Apr 24 18:10
>> ID-MacBock-Pro-2-fritz-box-1587744607230-0-2
>> -rw-r--r-- 1 tm staff 0 Apr 24 18:10
>> ID-MacBock-Pro-2-fritz-box-1587744607230-0-3
>> -rw-r--r-- 1 tm staff 0 Apr 24 18:10
>> ID-MacBock-Pro-2-fritz-box-1587744607230-0-4
>>
>>
>> annoying. any help here? greatly appreciated :)
>> cheers,
>> axel.
>>
>


Re: camel s3 consumer filename

2020-04-27 Thread Andrea Cosentino
When you're consuming file from S3, the name is set in the
header CamelAwsS3Key

So you could do something like




I didn't test this.

Il giorno lun 27 apr 2020 alle ore 14:34 Axel Bock <
axel.bock.mail+l...@gmail.com> ha scritto:

> hi list, since you (read: andrea ;) helped a lot on my last question, I
> have another one. I consume files from s3. but the files are saved under
> generic file names in my target directory, which I don't want. I actually
> want to save them under the name they have in s3.
>
> this is my config (broken up and without s ...):
>
>  />
> 
>
>
> ... and those are the undesirable file names:
>
> $ ll live_test_in
> -rw-r--r-- 1 tm staff 0 Apr 24 18:10
> ID-MacBock-Pro-2-fritz-box-1587744607230-0-1
> -rw-r--r-- 1 tm staff 0 Apr 24 18:10
> ID-MacBock-Pro-2-fritz-box-1587744607230-0-2
> -rw-r--r-- 1 tm staff 0 Apr 24 18:10
> ID-MacBock-Pro-2-fritz-box-1587744607230-0-3
> -rw-r--r-- 1 tm staff 0 Apr 24 18:10
> ID-MacBock-Pro-2-fritz-box-1587744607230-0-4
>
>
> annoying. any help here? greatly appreciated :)
> cheers,
> axel.
>
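
The XML snippets in this thread were stripped by the archive; a minimal sketch of the
idea in the Java DSL, reusing the endpoint options quoted above (credentials omitted),
is to copy the S3 object key into CamelFileName before the file endpoint:

import org.apache.camel.Exchange;
import org.apache.camel.builder.RouteBuilder;

public class S3ToFileRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("aws-s3://MY_BUCKET?prefix=/some/dir&region=EU_CENTRAL_1&deleteAfterRead=true")
            // CamelAwsS3Key holds the object key; use it as the target file name
            .setHeader(Exchange.FILE_NAME, header("CamelAwsS3Key"))
            .to("file:live_test_in");
    }
}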


Re: org.apache.camel.processor.RedeliveryPolicy not found by org.apache.camel.camel-base

2020-04-21 Thread Andrea Cosentino
It's here

https://github.com/apache/camel/blob/camel-3.0.1/core/camel-base/src/main/java/org/apache/camel/processor/errorhandler/RedeliveryPolicy.java

I think it is missing in the migration guide.

Il mar 21 apr 2020, 23:21 Gerald Kallas  ha scritto:

> I'm working with Camel 3.0.1.
>
> When defining
>
> <bean id="redeliveryPolicyConfig" class="org.apache.camel.processor.RedeliveryPolicy">
>     <property name="..." value="..."/>
>     ...
> </bean>
>
> <bean id="..." class="org.apache.camel.builder.DeadLetterChannelBuilder">
>     ...
>     <property name="redeliveryPolicy" ref="redeliveryPolicyConfig"/>
> </bean>
> 
>
> as documented I'm getting an error
>
> org.osgi.service.blueprint.container.ComponentDefinitionException: Unable
> to load class org.apache.camel.processor.RedeliveryPolicy from recipe
> BeanRecipe[name='redeliveryPolicyConfig']
> ...
> Caused by: java.lang.ClassNotFoundException:
> org.apache.camel.processor.RedeliveryPolicy not found by
> org.apache.camel.camel-base [94]
>
> Does there have something changed in Camel 3?
>
> Best
> - Gerald
>
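
In Camel 3 the class moved to the errorhandler sub-package (see the link above), so the
blueprint bean needs class="org.apache.camel.processor.errorhandler.RedeliveryPolicy".
For reference, a rough Java DSL equivalent of a dead letter channel with a redelivery
policy (the endpoint and values below are hypothetical, not the original configuration):

import org.apache.camel.builder.RouteBuilder;

public class DlqRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        // hypothetical DLQ endpoint and redelivery values
        errorHandler(deadLetterChannel("log:dlq")
                .maximumRedeliveries(3)
                .redeliveryDelay(2000));

        from("direct:start")
            .log("processing ${body}");
    }
}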


Re: S3 and special chars in AWS_SECRET_ACCESS_KEY

2020-04-21 Thread Andrea Cosentino
Hello,

Please use the following notation in this case



Il giorno mar 21 apr 2020 alle ore 15:12 Axel Bock <
axel.bock.mail+l...@gmail.com> ha scritto:

> hi list, i have this problem: when using the s3 module of camel I got the
> following error:
>
> The request signature we calculated does not match the signature you
> provided.
>
> after lots of searching i found out it goes away when the
> aws_secret_access_key does NOT have special chars included and conforms to
> [a-zA-Z0-9]+. the configuration in the XML was this:
>
> 
> uri="aws-s3://MY_BUCKET?prefix=/some/dir&region=EU_CENTRAL_1&deleteAfterRead=true&accessKey=AKIAKIKEYKEY&secretKey=THESECRETKEY+THATDOESNOTWORK"
> />
>
> am I doing something wrong or is this a known limitation?
>
>
> cheers!
> axel.
>
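
The notation in the reply above was stripped by the archive; the documented way to keep
special characters in an endpoint option literal is to wrap the value in RAW(). A
minimal sketch, reusing the placeholder keys quoted in the question:

import org.apache.camel.builder.RouteBuilder;

public class S3RawSecretRoute extends RouteBuilder {

    @Override
    public void configure() throws Exception {
        from("aws-s3://MY_BUCKET?prefix=/some/dir&region=EU_CENTRAL_1&deleteAfterRead=true"
                + "&accessKey=AKIAKIKEYKEY&secretKey=RAW(THESECRETKEY+THATDOESNOTWORK)")
            .to("file:target/s3-in");
    }
}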


Re: Spring Main unavailable in 3.2

2020-04-17 Thread Andrea Cosentino
Hello,

We moved it to camel-spring-main.

https://camel.apache.org/manual/latest/camel-3x-upgrade-guide.html#_main_in_camel_spring

Il giorno ven 17 apr 2020 alle ore 09:53 Rafael Sainz <
rafael.sa...@tcmpartners.com> ha scritto:

> Hello,
>
> While I was upgrading from Camel 2.x to 3.2. I realized that
> org.apache.camel.spring.Main class is no longer available in Camel 3.2
> though it is within 3.1
>
> Is it going to be supported?
>
> Thanks
>
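
A minimal sketch, assuming the camel-spring-main dependency has been added to the
project: the class keeps its org.apache.camel.spring.Main name, only the artifact
changed.

import org.apache.camel.spring.Main;

public class MyApplication {

    public static void main(String[] args) throws Exception {
        // boots a Spring-based Camel application, as in Camel 2.x
        Main main = new Main();
        main.run(args);
    }
}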


Re: "since camel 3.3"

2020-04-16 Thread Andrea Cosentino
It will be available from 3.3.0. The site documentation is based on master, so
you're looking at the latest documentation; if you select 3.2.x in the
component reference side bar, that component won't appear.

Il giorno gio 16 apr 2020 alle ore 09:26 Axel Bock <
axel.bock.mail+l...@gmail.com> ha scritto:

> hi all, I found a weird information on this page:
>
>
> https://camel.apache.org/components/latest/azure-storage-blob-component.html
>
> basically it says the "new" azure blob component is releases "since camel
> 3.3". is that an error or will this really be valid from the next version
> on?
>
>
> cheers!
> axel.
>


Re: Veracode & Black Duck Issues

2020-04-14 Thread Andrea Cosentino
It depends on whether we consider the issues reported by these tools real
issues or not.

Il mar 14 apr 2020, 19:00 Vikas Jaiswal  ha scritto:

> Hi Camel Team,
>   What would be the way to get Veracode &
> Black Duck issues resolved in the Camel code?
> -Vikas
>


[ANNOUNCEMENT] Apache Camel Quarkus 1.0.0-M6 Released

2020-04-12 Thread Andrea Cosentino
The Camel community announces the immediate availability of Camel-Quarkus 
1.0.0-M6

The artifacts are published and ready for you to download either from the 
Apache mirrors or from the Github repository. For more details you
can have a look at the github repository [1]

Many thanks to all who made this release possible.

[1] apache/camel-quarkus

--
Andrea Cosentino 
--
Apache Camel PMC Chair
Apache Karaf Committer
Apache Servicemix PMC Member
Email: ancosen1...@yahoo.com
Twitter: @oscerd2
Github: oscerd

