I did wind up using LOAD DATA INFILE.

When I started, I was afraid I was going to have to process about 20 tables
every day, but I redid the data exchange to avoid that.

Regards,

Jerry Schwartz
Global Information Incorporated
195 Farmington Ave.
Farmington, CT 06032

860.674.8796 / FAX: 860.674.8341
E-mail: je...@gii.co.jp
Web site: www.the-infoshop.com

>-----Original Message-----
>From: mos [mailto:mo...@fastmail.fm]
>Sent: Tuesday, January 04, 2011 12:26 PM
>To: Jerry Schwartz; 'mos'; mysql@lists.mysql.com
>Subject: RE: This just seems too slow
>
>At 05:08 PM 1/3/2011, Jerry Schwartz wrote:
>>Folks, this is getting creepy. It seems like each of the variations you've
>>given me works sometimes and not others. I haven't found the pattern yet.
>
>The SQL I sent you works just fine on my server with your table and your
>data. BTW, there is no way to get a list of the warnings produced by
>mysqlimport. You will need to switch to SQL and execute "Load Data Infile
>....", and after that has executed, do a "Show Warnings" to display the
>warnings. I always use "Load Data Infile" rather than mysqlimport because I
>have more control over its execution. Load Data should also be 10x faster
>than using Inserts, so it is worth considering.
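>
>For example, something along these lines should work against your table (a
>sketch; the file location and terminators are assumptions to adjust):
>
>Load Data Infile 't_dmu_history.txt'
>Into Table t_dmu_history
>Fields Terminated By ',' Optionally Enclosed By '"'
>Lines Terminated By '\r\n'
>(DM_History_DM_ID, DM_History_Customer_ID);
>
>Show Warnings;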
>
>The MySQL server prefers the import file to be in the server's data
>directory. If you are trying to import it from another location, you need to
>change your my.ini file. Please see the article at
>http://dev.mysql.com/doc/refman/5.0/en/load-data-local.html
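>
>For instance, a my.ini fragment like this enables "Load Data Local Infile"
>on both the server and the client (illustrative; check the page above for
>the details for your version):
>
>[mysqld]
>local-infile=1
>
>[client]
>local-infile=1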
>
>Mike
>
>
>>During the hours it took my initial data loads to finish, I rewrote the
>>import process so that (I hope) I won't have to go through this again.
>>
>>Thanks for your help.
>>
>>Regards,
>>
>>Jerry Schwartz
>>Global Information Incorporated
>>195 Farmington Ave.
>>Farmington, CT 06032
>>
>>860.674.8796 / FAX: 860.674.8341
>>E-mail: je...@gii.co.jp
>>Web site: www.the-infoshop.com
>>
>>
>> >-----Original Message-----
>> >From: mos [mailto:mo...@fastmail.fm]
>> >Sent: Monday, January 03, 2011 3:25 PM
>> >To: Jerry Schwartz; mysql@lists.mysql.com
>> >Subject: RE: This just seems too slow
>> >
>> >Jerry,
>> >     Try this:
>> >
>> >mysqlimport -uusername -ppassword --verbose --debug-info --delete
>> >--columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local
>> >--fields-terminated-by="," --fields-optionally-enclosed-by="\""
>> >--lines-terminated-by="\r\n" --host=localhost yourdbname t_dmu_history.txt
>> >
>> >(Note the double quotes around the option values: cmd.exe on Windows does
>> >not treat single quotes as quoting characters, so values quoted with '
>> >reach mysqlimport with the quote characters still attached.)
>> >
>> >
>> >I use Load Data Infile all the time, usually inside of a program like
>> >Delphi.
>> >
>> >Mike
>> >
>> >At 09:56 AM 1/3/2011, Jerry Schwartz wrote:
>> >>Okay, I have a confession to make: I have never gotten Load Data Infile
>> >>or mysqlimport to work.
>> >>
>> >>Here's my CSV file, named "t_dmu_history.txt":
>> >>
>> >>13071,299519
>> >>13071,299520
>> >>13071,299521
>> >>13071,299522
>> >>13071,299524
>> >>13071,299526
>> >>13071,299527
>> >>...
>> >>
>> >>Here's my mysqlimport command:
>> >>
>> >>mysqlimport -uaccess -pxxx --delete
>> >>--columns=`dm_history_dm_id`,`DM_History_Customer_ID` --local --silent
>> >>--fields-terminated-by=',' --lines-terminated-by='\r\n' --host=localhost
>> >>maintable_usa t_dmu_history.txt
>> >>
>> >>I'm running on Windows Vista, and mysqlimport is "Ver 3.7 Distrib
>> >>5.1.31, for Win32 (ia32)".
>> >>
>> >>It runs for a while, but I wind up with only one record:
>> >>
>> >>localhost >select * from t_dmu_history;
>> >>+------------------+------------------+------------------------+
>> >>| t_dmu_history_id | DM_History_DM_ID | DM_History_Customer_ID |
>> >>+------------------+------------------+------------------------+
>> >>|                1 |            13071 |                   NULL |
>> >>+------------------+------------------+------------------------+
>> >>1 row in set (0.00 sec)
>> >>
>> >>Obviously mysqlimport is parsing the input file incorrectly, but I
>> >>don't know why.
>> >>
>> >>Here's the table itself:
>> >>
>> >>+------------------------+---------+------+-----+---------+----------------+
>> >>| Field                  | Type    | Null | Key | Default | Extra          |
>> >>+------------------------+---------+------+-----+---------+----------------+
>> >>| t_dmu_history_id       | int(11) | NO   | PRI | NULL    | auto_increment |
>> >>| DM_History_DM_ID       | int(11) | YES  | MUL | NULL    |                |
>> >>| DM_History_Customer_ID | int(11) | YES  | MUL | NULL    |                |
>> >>+------------------------+---------+------+-----+---------+----------------+
>> >>
>> >>        Table: t_dmu_history
>> >>Create Table: CREATE TABLE `t_dmu_history` (
>> >>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>> >>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>> >>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>> >>   PRIMARY KEY (`t_dmu_history_id`),
>> >>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>> >>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>> >>) ENGINE=InnoDB AUTO_INCREMENT=2 DEFAULT CHARSET=utf8
>> >>
>> >>Regards,
>> >>
>> >>Jerry Schwartz
>> >>Global Information Incorporated
>> >>195 Farmington Ave.
>> >>Farmington, CT 06032
>> >>
>> >>860.674.8796 / FAX: 860.674.8341
>> >>E-mail: je...@gii.co.jp
>> >>Web site: www.the-infoshop.com
>> >>
>> >>
>> >> >-----Original Message-----
>> >> >From: mos [mailto:mo...@fastmail.fm]
>> >> >Sent: Sunday, January 02, 2011 11:42 PM
>> >> >To: mysql@lists.mysql.com
>> >> >Subject: Re: This just seems too slow
>> >> >
>> >> >Jerry,
>> >> >       Use "Load Data Infile" when loading a lot of data. Whoever is
>> >> >giving you the data should be able to dump it to a CSV file. Your
>> >> >imports will be much faster.
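>> >> >
>> >> >If the source happens to be another MySQL server, for instance,
>> >> >something like this writes the CSV directly (a sketch; the output path
>> >> >is illustrative):
>> >> >
>> >> >SELECT DM_History_DM_ID, DM_History_Customer_ID
>> >> >FROM T_DMU_History
>> >> >INTO OUTFILE 'C:/temp/t_dmu_history.txt'
>> >> >FIELDS TERMINATED BY ','
>> >> >LINES TERMINATED BY '\r\n';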
>> >> >
>> >> >Mike
>> >> >
>> >> >At 07:51 PM 1/2/2011, you wrote:
>> >> >>I'm trying to load data into a simple table, and it is taking many
>> >> >>hours (and still not done). I know hardware, etc., can have a big
>> >> >>effect, but NOTHING should have this big an effect.
>> >> >>
>> >> >>=====
>> >> >>us-gii >show create table t_dmu_history\G
>> >> >>*************************** 1. row ***************************
>> >> >>        Table: t_dmu_history
>> >> >>Create Table: CREATE TABLE `t_dmu_history` (
>> >> >>   `t_dmu_history_id` int(11) NOT NULL AUTO_INCREMENT,
>> >> >>   `DM_History_DM_ID` int(11) DEFAULT NULL,
>> >> >>   `DM_History_Customer_ID` int(11) DEFAULT NULL,
>> >> >>   PRIMARY KEY (`t_dmu_history_id`),
>> >> >>   KEY `DM_History_DM_ID` (`DM_History_DM_ID`),
>> >> >>   KEY `DM_History_Customer_ID` (`DM_History_Customer_ID`)
>> >> >>) ENGINE=InnoDB AUTO_INCREMENT=1446317 DEFAULT CHARSET=utf8
>> >> >>=====
>> >> >>
>> >> >>Here's a snip of what the input file looks like:
>> >> >>=====
>> >> >>SET autocommit=1;
>> >> >>
>> >> >>#
>> >> >># Dumping data for table 'T_DMU_History'
>> >> >>#
>> >> >>
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299519);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299520);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299521);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299522);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299524);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299526);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299527);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299528);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299529);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299531);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299532);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299533);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299534);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13071, 299535);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298880);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298881);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298882);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298883);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298884);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298885);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298886);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298887);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298889);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298890);
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`) VALUES (13073, 298891);
>> >> >>=====
>> >> >>
>> >> >>There are about 870,000 records.
>> >> >>
>> >> >>I realize that using one INSERT per row is going to hurt, but I don't
>> >> >>control the format of the incoming data.
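>> >> >>
>> >> >>(For instance, I gather batching rows into multi-row statements, along
>> >> >>the lines of
>> >> >>INSERT INTO `T_DMU_History` (`DM_History_DM_ID`, `DM_History_Customer_ID`)
>> >> >>VALUES (13071, 299519), (13071, 299520), (13071, 299521);
>> >> >>would be much faster, but the file arrives as-is.)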
>> >> >>
>> >> >>Besides, I'd have thought this would be pretty quick regardless of
>> >> >>how clumsy the method was.
>> >> >>
>> >> >>Is that "autocommit" a problem? This is a bulk load into an empty
>> >> >>table, so I'm not worried about ACID.
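>> >> >>
>> >> >>For instance, would wrapping the whole load in a single transaction,
>> >> >>something like this, make a difference?
>> >> >>
>> >> >>SET autocommit=0;
>> >> >># ... all of the INSERT statements ...
>> >> >>COMMIT;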
>> >> >>
>> >> >>Any suggestions?
>> >> >>
>> >> >>Regards,
>> >> >>
>> >> >>Jerry Schwartz
>> >> >>Global Information Incorporated
>> >> >>195 Farmington Ave.
>> >> >>Farmington, CT 06032
>> >> >>
>> >> >>860.674.8796 / FAX: 860.674.8341
>> >> >>E-mail: je...@gii.co.jp
>> >> >>Web site: www.the-infoshop.com
>> >> >>