Hi Zoltan,

Sqoop aims for the best possible throughput when moving data from source to destination, so your issue might be tricky to solve. I was thinking about it and I do have a couple of ideas:
1) Did you try to limit the number of concurrent connections using the "-m" parameter?

2) I can imagine that heavy parallelism in Sqoop can give MySQL's single-threaded replication a hard time. Thinking out of the box, what about creating a table that won't be replicated (MySQL can limit replication at both the database and the table level) on all your nodes and performing your load into all of them (it doesn't matter whether sequentially or in parallel)? Once every node has the data, you can atomically switch the table on all nodes at once.

I'm not sure whether it's feasible or whether it will actually work; I'm just trying to help. Rough sketches of both ideas follow.
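To make 1) concrete, a minimal sketch (the connect string, credentials, table and HDFS directory are made-up placeholders, adjust them to your setup); "-m 1" forces a single mapper, i.e. a single connection writing to MySQL, at the cost of a slower export:

    sqoop export \
      --connect jdbc:mysql://db-master.example.com/reporting \
      --username etl -P \
      --table daily_stats_staging \
      --export-dir /user/hive/warehouse/daily_stats \
      -m 1

And a rough sketch of 2), assuming a staging table that every slave is configured to skip; again, all names are made up and I haven't tried this end to end:

    # my.cnf on each slave: don't replicate the staging table
    replicate-ignore-table = reporting.daily_stats_staging

Run the export directly against every node (master and slaves), and once a node has the data, swap the table in atomically on that node:

    -- on the master you may first want SET SESSION sql_log_bin = 0;
    -- so this statement isn't replayed a second time on the slaves
    RENAME TABLE reporting.daily_stats         TO reporting.daily_stats_old,
                 reporting.daily_stats_staging TO reporting.daily_stats;

RENAME TABLE swaps both names in one atomic step, so readers never see a half-loaded table. Treat all of this as a starting point rather than a recipe.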
Jarcec

On Thu, Sep 13, 2012 at 08:41:13AM +0000, Zoltán Tóth-Czifra wrote:
> Hi,
>
> Thank you for your answers!
>
> I have been reading about Sqoop 2, but since it's still under development
> it doesn't really serve me. Besides, my problem is not limiting
> connections, but somehow limiting the throughput of even one connection.
>
> This problem might not be Sqoop-specific, but I wondered if anyone has
> faced this and solved it somehow.
>
> Thank you!
> ________________________________________
> From: Kathleen Ting [[email protected]]
> Sent: Thursday, September 13, 2012 1:27 AM
> To: [email protected]
> Subject: Re: Throttling inserts to avoid replication lags
>
> Chuck, Zoltán,
>
> In Sqoop 2, it has been discussed that connections will allow the
> specification of a resource policy, in that resources will be managed by
> limiting the total number of physical Connections open at one time, with
> an option to disable Connections.
>
> More info:
> https://blogs.apache.org/sqoop/entry/apache_sqoop_highlights_of_sqoop
>
> Regards, Kathleen
>
> On Wed, Sep 12, 2012 at 8:08 AM, Connell, Chuck
> <[email protected]> wrote:
> > In my opinion, this is not a Sqoop problem. It is related to the RDBMS
> > and the way it handles high-volume updates. Those updates might be
> > coming from Sqoop, or they might be coming from a realtime stock market
> > price feed.
> >
> > I would go ahead and test the system as is. Let Sqoop do all its
> > updates. If you actually have a problem with inconsistencies or poor
> > performance, then I would deal with it as a purely MySQL issue.
> >
> > (A low-tech approach… run the Sqoop jobs at night??)
> >
> > Chuck
> >
> > From: Zoltán Tóth-Czifra [mailto:[email protected]]
> > Sent: Wednesday, September 12, 2012 10:48 AM
> > To: [email protected]
> > Subject: Throttling inserts to avoid replication lags
> >
> > Hi guys,
> >
> > We are using Sqoop (cdh3u3) to export Hive tables to relational
> > databases. Usually these databases are only used by business
> > intelligence to further analyze and filter the data. However, in
> > certain cases we need to export to relational databases that are
> > heavily accessed by our products and users.
> >
> > Our concern is that Sqoop exports would interfere with this random
> > access by our users. Temporal inconsistency of the data can be solved
> > with a staging table and an atomic swap; however, we are concerned
> > about the replication lag between the master and the slaves.
> >
> > If we write large data quickly with Sqoop to the master (even to a
> > staging table), it takes time (minutes) to be replicated to the slaves
> > and causes an inconsistency we can't allow, that is, other writes from
> > our users will be queued up. I wonder if any of you have had similar
> > problems. We are talking about a MySQL cluster, by the way.
> >
> > From what I know, Sqoop doesn't have any built-in throttle
> > functionality (for example a delay between inserts). We have been
> > thinking of solving this with a proxy, but the existing solutions on
> > the market are very incomplete.
> >
> > Any other ideas? The more transparent, the better.
> >
> > Thanks!
