Re: destination topics in mm2 larger than source topic

2020-07-02 Thread Ricardo Ferreira

Iftach,

I think you should try to observe whether this happens with other topics. Maybe 
something unrelated has already happened to the topic that currently 
has ~3TB of data, which would make things even harder to troubleshoot.


I would recommend creating a new topic with a few partitions and configuring 
that topic in the whitelist, as sketched below. Then observe whether the same 
behavior occurs. If it does, then something might be wrong with MM2 -- likely 
a bug or a misconfiguration. If not, then you can eliminate MM2 as the cause 
and work at a smaller scale to see if something went south with the topic. 
It might even be something unrelated to MM2, such as network failures that 
forced the internal producer of MM2 to retry multiple times and hence produce 
more data than it should.
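
For instance, a minimal MM2 properties sketch for such a test could look like 
the following (the cluster aliases, bootstrap servers, and topic name are 
placeholders, not taken from this thread):

clusters = source, target
source.bootstrap.servers = source-broker:9092
target.bootstrap.servers = target-broker:9092

# replicate only the small test topic
source->target.enabled = true
source->target.topics = mm2-size-test

# as discussed in this thread, keep the destination topic's configs independent
sync.topic.configs.enabled = false

Create mm2-size-test with a few partitions on the source cluster, run MM2 with 
bin/connect-mirror-maker.sh, and compare the topic's size on both sides after 
a day or two.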


The bottom line is that some troubleshooting exercises are hard, or even 
impossible, when the case at hand may simply be an outlier.


-- Ricardo

On 7/1/20 10:02 AM, Iftach Ben-Yosef wrote:

Hi Ryanne, thanks for the quick reply.

I had the thought it might be compression. I see that the topics have the
following config "compression.type=producer". This is for both the source
and destination topics. Should I check something else regarding compression?
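
"compression.type=producer" means the broker keeps whatever codec the producing 
client used, so the source and destination copies can differ: MM2 decompresses 
on consume and re-compresses (or not) on produce according to its own producer 
settings. A quick way to confirm the topic-level setting on each cluster (a 
sketch; the broker address and topic name are placeholders):

bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic --describe

If the source producers compress but MM2's producer does not, the mirrored copy 
can legitimately be much larger than the source.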

Also, the destination topics are larger than the same topics mirrored using
mm1: the sum of the 3 topics mirrored by mm2 is much larger than the 1 topic
that mm1 produced (they have the same 3 source topics; mm1 just aggregates
them into 1 destination topic). Retention is again the same between the mm1
destination topic and the mm2 destination topics.

Thanks,
Iftach


On Wed, Jul 1, 2020 at 4:54 PM Ryanne Dolan  wrote:


Iftach, is it possible the source topic is compressed?

Ryanne

On Wed, Jul 1, 2020, 8:39 AM Iftach Ben-Yosef wrote:


Hello everyone.

I'm testing mm2 for our cross dc topic replication. We used to do it using
mm1 but faced various issues.

So far, mm2 is working well, but I have 1 issue which I can't really
explain: the destination topic is larger than the source topic.

For example, we have 1 topic which on the source cluster is around
2.8-2.9TB with retention.ms=8640

I added the "sync.topic.configs.enabled=false" config to our mm2 cluster and
edited the retention.ms of the destination topic to be 5760. Other than that,
I haven't touched the topic created by mm2 on the destination cluster.

By logic I'd say that if I shortened the retention on the destination, the
topic size should decrease, but in practice I see that it is larger than
the source topic (it's about 4.6TB).
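
For comparisons like this it can help to measure the on-disk size per partition 
and to double-check the effective retention override directly on the brokers 
(a sketch; broker and topic names are placeholders):

bin/kafka-log-dirs.sh --describe --bootstrap-server localhost:9092 \
  --topic-list my-topic

bin/kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name my-topic --describe

Also note that retention.ms deletes whole segments, and a segment becomes 
eligible only once its newest record is older than the limit, so on-disk size 
can stay well above what the retention window alone would suggest.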
This same behaviour is seen on all 3 topics which I am currently mirroring
(all 3 from different source clusters, into the same destination cluster).
Does anyone have any idea as to why mm2 acts this way for me?

Thanks,
Iftach




Kafka : Windows support

2020-07-02 Thread Hiremath, Santhosh Boloshankar
Hi,

We are considering using Kafka on the Windows 10 OS. When we went through the 
documentation (http://kafka.apache.org/documentation/#os) we found the 
following statements:

Kafka should run well on any unix system and has been tested on Linux and 
Solaris.
We have seen a few issues running on Windows and Windows is not currently a 
well-supported platform though we would be happy to change that.

Kindly let us know when we can expect official support for the Windows 
operating system.


With best regards,
Santhosh Boloshankar Hiremath

Siemens Technology and Services Private Limited
IOT DS AA PD ARCH
84, Hosur Road
Bengaluru 560100, India
Tel.: +91 80 33131303
Fax: +91 80 33134503
Mobile: +91 9449250810
mailto:santhosh.hirem...@siemens.com
www.siemens.co.in/STS
www.siemens.com/ingenuityforlife




Vending and support services.

2020-07-02 Thread Олег Нестеров
Hi!
I'm interested in vendor and support services for Apache Kafka in the
Russian Federation.
Can you recommend a company in the Russian Federation?

-- 

Best regards,
Oleg Nesterov
Software Engineer, Prooftech IT
89130107969


Re: Problem in reading From JDBC SOURCE

2020-07-02 Thread Ricardo Ferreira

Vishnu,

I think it is hard to troubleshoot things without the proper context. In 
your case, could you please share an example of the rows contained in 
the table `sample`, as well as its DDL?
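
For instance (a purely hypothetical sketch -- these column names and types are 
illustrative, not taken from the actual database), something like this would 
be enough context:

CREATE TABLE sample (
  ID INT NOT NULL PRIMARY KEY,
  a DECIMAL(10,3),  -- DECIMAL/NUMERIC columns are the usual suspects
  b DECIMAL(10,3),  -- when JSON output shows base64-looking strings
  c DECIMAL(10,3),
  d DECIMAL(10,3)
);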


-- Ricardo

On 7/2/20 9:29 AM, vishnu murali wrote:

I went through that documentation, where it describes that DECIMAL is not
supported in MySQL.

Also, there is no example for MySQL -- is there any other sample with MySQL?






Re: Problem in reading From JDBC SOURCE

2020-07-02 Thread vishnu murali
I went through that documentation, where it describes that DECIMAL is not
supported in MySQL.

Also, there is no example for MySQL -- is there any other sample with MySQL?





Re: Problem in reading From JDBC SOURCE

2020-07-02 Thread Robin Moffatt
Check out this article, which covers decimal handling:
https://www.confluent.io/blog/kafka-connect-deep-dive-jdbc-source-connector/#bytes-decimals-numerics
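
For what it's worth, those strings look like Connect's Decimal logical type: 
JsonConverter renders a decimal as the base64-encoded bytes of its unscaled 
integer value. A rough way to inspect one of the values from this thread 
(assuming a shell with base64 and xxd; the scale has to come from the column 
definition):

$ echo -n 'Aote' | base64 -d | xxd -p
028b5e

Here 0x028b5e is the unscaled value 166750; the actual number would be 
166750 divided by 10^scale.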


-- 

Robin Moffatt | Senior Developer Advocate | ro...@confluent.io | @rmoff




Problem in reading From JDBC SOURCE

2020-07-02 Thread vishnu murali
Hi Guys,

I am having a problem while reading from MySQL using the JDBC source
connector; I receive values like the ones below.
Does anyone know the reason and how to solve this?

"a": "Aote",

  "b": "AmrU",

  "c": "AceM",

  "d": "Aote",


Instead of

"a": 0.002,

  "b": 0.465,

  "c": 0.545,

  "d": 0.100


This is my configuration:


{
    "name": "sample",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://localhost:3306/sample",
        "connection.user": "",
        "connection.password": "xxx",
        "topic.prefix": "dample-",
        "poll.interval.ms": 360,
        "table.whitelist": "sample",
        "schemas.enable": "false",
        "mode": "bulk",
        "value.converter.schemas.enable": "false",
        "numeric.mapping": "best_fit",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "transforms": "createKey,extractInt",
        "transforms.createKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
        "transforms.createKey.fields": "ID",
        "transforms.extractInt.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
        "transforms.extractInt.field": "ID"
    }
}
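
If "numeric.mapping": "best_fit" really does not take effect for MySQL DECIMAL 
columns (the limitation discussed upthread), one possible workaround is to cast 
the values in SQL before the connector sees them. This is a sketch only -- it 
assumes the column names from this thread and MySQL 8.0.17+ for 
CAST(... AS DOUBLE), and note that with "query" the "table.whitelist" option 
must be dropped and "topic.prefix" becomes the full topic name:

{
    "name": "sample-cast",
    "config": {
        "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
        "connection.url": "jdbc:mysql://localhost:3306/sample",
        "connection.user": "",
        "connection.password": "xxx",
        "mode": "bulk",
        "poll.interval.ms": 360,
        "topic.prefix": "dample-sample",
        "query": "SELECT ID, CAST(a AS DOUBLE) AS a, CAST(b AS DOUBLE) AS b, CAST(c AS DOUBLE) AS c, CAST(d AS DOUBLE) AS d FROM sample",
        "value.converter": "org.apache.kafka.connect.json.JsonConverter",
        "value.converter.schemas.enable": "false"
    }
}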