LOAD DATA giving BIG mysql-bin files ...

2006-03-30 Thread C.R.Vegelin
Hi List, I would appreciate your help on the following. When using LOAD DATA INFILE 'inputfile.txt' into a MyISAM table, it creates mysql-bin.nn files under my database directory, each about the size of 'inputfile.txt' (about 200 MB). Since I have to load 12 input files, I get about 2.5 GB of

Re: LOAD DATA giving BIG mysql-bin files ...

2006-03-30 Thread Adrian Bruce
I think this is normal, as the binary log will contain a record of all changes made to the data; therefore, if you are loading large files regularly, the bin logs will be quite large. If you do not want binary logging, edit the my.cnf file, comment out the line log-bin (#log-bin) and
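
For a one-off bulk load, an alternative to removing log-bin from my.cnf is to switch binary logging off only for the loading session. A minimal sketch, assuming the SUPER privilege and a hypothetical target table my_table; rows loaded this way are absent from the binary log, so they will not replicate and cannot be recovered from it:

    SET sql_log_bin = 0;   -- disable binary logging for this session only (requires SUPER)
    LOAD DATA INFILE 'inputfile.txt' INTO TABLE my_table;
    SET sql_log_bin = 1;   -- re-enable before running anything that should be logged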

Re: LOAD DATA giving BIG mysql-bin files ...

2006-03-30 Thread C.R.Vegelin
Thanks Adrian, Dilipkumar, Dhandapani. I changed the my.ini file, restarted the server, and now it's okay. Regards, Cor

RE: Expiration date on users utilizing freeradius and mysql

2006-03-30 Thread Atkins, Dwane P
I submitted this yesterday and was not sure if maybe it did not get out to folks. How would I put an expiration date on a mysql field so that it would match a radius entry? Also, is there a way that I can call up a web based screen and have all the information at my fingertips for inputting user

Re: Compound Insert Statement

2006-03-30 Thread SGreen
Sheeri is correct. Rich's statement should have worked. What Rich is looking for is the syntax for doing what the manual calls extended inserts. Quoting TFM (http://dev.mysql.com/doc/refman/5.0/en/insert.html): INSERT [LOW_PRIORITY | DELAYED | HIGH_PRIORITY] [IGNORE] [INTO] tbl_name
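
For reference, an extended insert simply lists several row value lists after a single VALUES keyword; the table and column names below are placeholders:

    INSERT INTO tbl_name (col1, col2) VALUES
      (1, 'first'),
      (2, 'second'),
      (3, 'third');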

RE: Expiration date on users utilizing freeradius and mysql

2006-03-30 Thread SGreen
We got the question. However, what you ask isn't purely a database issue. How does your authentication program (freeradius?) handle expiration dates? If it doesn't, then adding those to the database won't help a bit. If it does, then there should already be a date column (or two) in the
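
If FreeRADIUS is using its SQL module with the stock schema, expiration is usually expressed as a check attribute in the radcheck table rather than as a bare date column. A hedged sketch, assuming that schema; the username and the date format are only examples, so check the FreeRADIUS documentation for the Expiration value formats it accepts:

    INSERT INTO radcheck (username, attribute, op, value)
    VALUES ('jdoe', 'Expiration', ':=', 'Mar 30 2007');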

Fulltext search issues

2006-03-30 Thread Yemi Obembe
Using the full-text function to run a search on a table. All seems to be working fine, except I'm looking for ways to fine-tune the relevance. I found out that the function ranks a row higher by the number of keywords found in it and not by whether all keywords are present in the row... for

Re: DBD::mysql::st execute failed: MySQL server has gone away

2006-03-30 Thread Luke Vanderfluit
Ravi Prasad LR wrote: Luke, Yup. Basically if I do a particular query from the command line, I get the following error: === InnoDB: Error: tried to read 16384 bytes at offset 1 3469819904. InnoDB: Was only able to read -1. 060327 8:25:41 InnoDB: Operating

stunningly slow query

2006-03-30 Thread Chris Kantarjiev
We're having some serious problems with concurrent queries. This is a dual-processor amd64 machine with 16GB RAM, running NetBSD and MySQL 4.0.25. key_buffer_size is 3GB. When I have a long running query going, otherwise short queries take a very very long time to execute. For example, I have

Re: stunningly slow query

2006-03-30 Thread Mike Wexler
Can you post the output of SHOW FULL PROCESSLIST during the time when both sets of queries are running? Also what storage engine are you using for your tables? Chris Kantarjiev wrote: We're having some serious problems with concurrent queries. This is a dual-processor amd64 machine with 16GB

Re: stunningly slow query

2006-03-30 Thread Chris Kantarjiev
Can you post the output of SHOW FULL PROCESSLIST during the time when both sets of queries are running? mysql> show full processlist;

Re: stunningly slow query

2006-03-30 Thread Mike Wexler
Chris Kantarjiev wrote: Can you post the output of SHOW FULL PROCESSLIST during the time when both sets of queries are running? That throws out my first theory about table locks. What do vmstat and top say? Is it CPU bound? I/O bound? Also you might want to do a show status before and

Re: stunningly slow query

2006-03-30 Thread Chris Kantarjiev
That throws out my first theory about table locks. That's what I thought, too. What do vmstat and top say? Is it CPU bound? I/O bound? Certainly not CPU bound. Maybe I/O bound, not conclusive. My current theory is that there is some thrashing on key buffer blocks. Also you might want to
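
One quick way to test the key-buffer-thrashing theory is to sample the key cache counters while the long query is running; a sketch (the ratio threshold is only a rough rule of thumb):

    SHOW STATUS LIKE 'Key_read%';
    -- Key_reads / Key_read_requests is the fraction of index block reads that
    -- missed the key cache and went to disk; if it climbs well above ~0.01 while
    -- the long query runs, that query is likely evicting the short queries'
    -- index blocks from the 3GB key buffer.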

Re: Force a COMMIT on InnoDB tables? (set autocommit=0)

2006-03-30 Thread patrick
Is there any way to make this the default behaviour? I did a Google search, and it was suggested I put the following line in /etc/my.cnf: [mysqld] init_connect='set autocommit=0' This works fine, but I worry that this will affect all incoming connections regardless of whether or not they are

Re: Force a COMMIT on InnoDB tables? (set autocommit=0)

2006-03-30 Thread patrick
I've confirmed that this does affect ALL incoming connections. On 3/30/06, patrick [EMAIL PROTECTED] wrote: Is there any way to make this the default behaviour? I did a Google search, and it was suggested I put the following line in /etc/my.cnf: [mysqld] init_connect='set autocommit=0'
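
Worth noting: init_connect is not executed for users with the SUPER privilege, so it is not quite universal either. A less intrusive alternative is to leave the server default alone and disable autocommit only in the sessions (or application connections) that need it; a minimal sketch:

    SET autocommit = 0;   -- affects this session only
    -- ... INSERT / UPDATE / DELETE statements ...
    COMMIT;               -- nothing is made permanent on InnoDB until the explicit COMMIT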

Re: stunningly slow query

2006-03-30 Thread Mike Wexler
It doesn't really answer your question, but have you tried INSERT DELAYED as a workaround? Also, the updated status is strange, because that generally indicates that it's looking for the record to be updated, but since the record is new, there is no record to be updated. Could it be checking
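
For context, INSERT DELAYED returns to the client immediately and queues the row to be written when the table is not in use; it only works for MyISAM and a few other non-transactional engines, and queued rows are lost if the server crashes before they are flushed. A sketch with a hypothetical table:

    INSERT DELAYED INTO access_log (page, hits) VALUES ('/index', 1);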

TIMESTAMP field not automatically updating last_updated field

2006-03-30 Thread Ferindo Middleton Jr
I think I've seen this complaint posted before but ignored it, but now I realize that in some of my db tables the last_updated field's value is automatically updating on UPDATEs to records, while in other tables the last_updated fields for some strange reason aren't automatically updating. I'll

Re: TIMESTAMP field not automatically updating last_updated field

2006-03-30 Thread Scott Haneda
I think I've seen this complaint posted before but ignored it, but now I realize that in some of my db tables the last_updated field's value is automatically updating on UPDATEs to records, while in other tables the last_updated fields for some strange reason aren't automatically updating. I'll

Re: TIMESTAMP field not automatically updating last_updated field

2006-03-30 Thread jonathan
Are you having two timestamp fields in a table (i.e. a created and a last_updated)? -j On Mar 30, 2006, at 5:17 PM, Ferindo Middleton Jr wrote: I think I've seen this complaint posted before but ignored it, but now I realize that in some of my db tables the last_updated field's value is

Re: TIMESTAMP field not automatically updating last_updated field

2006-03-30 Thread Ferindo Middleton Jr
jonathan wrote: Are you having two timestamp fields in a table (i.e. a created and a last_updated)? -j On Mar 30, 2006, at 5:17 PM, Ferindo Middleton Jr wrote: I think I've seen this complaint posted before but ignored it, but now I realize that in some of my db tables the last_updated field's
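
On the MySQL versions of this era (4.1/5.0), only one TIMESTAMP column per table gets the automatic behaviour, and with no explicit options it is the first TIMESTAMP column in the definition; a table where another TIMESTAMP (e.g. created) comes first will leave last_updated untouched. A sketch of a definition that keeps the auto-update on last_updated (the table name is hypothetical):

    CREATE TABLE widgets (
      id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
      -- the one auto-maintained TIMESTAMP column
      last_updated TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
                             ON UPDATE CURRENT_TIMESTAMP,
      -- use DATETIME (or a TIMESTAMP with an explicit constant default) for created
      -- so it does not claim the auto-update behaviour
      created DATETIME
    );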

Delete Duplicates

2006-03-30 Thread Rich
Hi there. Any quick way of killing duplicate records? Cheers

Re: stunningly slow query

2006-03-30 Thread Christopher A. Kantarjiev
Mike Wexler wrote: It doesn't really answer your question, but have you tried INSERT DELAYED as a work around? We've not had a lot of luck with this in the past, but it's worth a try. Also the updated status is strange, because that generally indicates that its looking for the record to be

AlterTable Structure Across Multiple DBs

2006-03-30 Thread Jason Dimberg
I have about 25 databases with the same structure and occasionally need to update the table structure. For example, I recently found a mistake in a field that was of type SET and needed to be VARCHAR. I will now need to edit each table. Is there an easy method to alter table structure
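
If the server is 5.0 or later, one low-effort approach is to let INFORMATION_SCHEMA generate the ALTER statements and then feed the output back through the mysql client; a sketch with hypothetical table and column names (on 4.x, where INFORMATION_SCHEMA does not exist, the same ALTER can simply be run once per database using fully qualified names such as db1.my_table):

    SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name,
                  '` MODIFY my_col VARCHAR(64) NOT NULL;') AS stmt
    FROM information_schema.tables
    WHERE table_name = 'my_table'
      AND table_schema NOT IN ('mysql', 'information_schema');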

Re: Delete Duplicates

2006-03-30 Thread Barry
Rich wrote: Hi there. Any quick way of killing duplicate records? Cheers Subqueries probably. -- Smileys rule (cX.x)C --o(^_^o) Dance for me! ^(^_^)o (o^_^)o o(^_^)^ o(^_^o)

Re: Fulltext search issues

2006-03-30 Thread Gabriel PREDA
You ought to use *Boolean Full-Text Searches*. You would then do: SELECT title, Comment FROM table_name WHERE MATCH (Comment) AGAINST ('+foo +bar' IN BOOLEAN MODE); With the leading +, only rows that contain both words match; drop the + and rows containing both words simply rank higher than rows containing only one. Or
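
To surface the computed relevance, the same MATCH() expression can be repeated in the select list and used for ordering; a sketch reusing the table and column names from the quoted query:

    SELECT title,
           MATCH (Comment) AGAINST ('+foo +bar' IN BOOLEAN MODE) AS score
    FROM table_name
    WHERE MATCH (Comment) AGAINST ('+foo +bar' IN BOOLEAN MODE)
    ORDER BY score DESC;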

Re: Delete Duplicates

2006-03-30 Thread Prasanna Raj
Subqueries will help you. --Praj On Thu, 30 Mar 2006 21:11:56 -0500 Rich [EMAIL PROTECTED] wrote: Hi there. Any quick way of killing duplicate records? Cheers
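
One concrete way to do it on MySQL 4.0 and later is a multi-table self-join delete rather than a subquery: keep the lowest id in each duplicate group and delete the rest. A sketch, assuming a hypothetical table t with a unique id column and a dup_col that defines what counts as a duplicate:

    DELETE t1
    FROM t AS t1
    JOIN t AS t2
      ON  t1.dup_col = t2.dup_col
      AND t1.id > t2.id;   -- t2 is an earlier (lower-id) row with the same value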

Re: mysql performance problems.

2006-03-30 Thread Prasanna Raj
Is that query the problem? Then turn on your slow query log and try optimizing those slow queries. Post your queries and table description for further help :) --Praj On Wed, 29 Mar 2006 12:33:20 -0500 Jacob, Raymond A Jr [EMAIL PROTECTED] wrote: After 23 days of running mysql, I have
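
To turn on the slow query log on servers of that era (4.x/5.0), the usual my.cnf settings are the following; the file path is just an example, and newer servers renamed these options to slow_query_log / slow_query_log_file:

    [mysqld]
    log-slow-queries = /var/log/mysql/slow.log   # queries slower than long_query_time
    long_query_time  = 2                         # seconds
    # log-queries-not-using-indexes              # optional, available from 4.1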