I have this problem too on large tables. Currently my table is 5
million rows or so, but it could easily grow to 10 to 50 times that.
I ended up adding a field called Status, and then adding a composite index
on the primary key and status fields.
Then I do my UPDATEs or DELETEs for a particular status value using a
LIMIT clause. If I use a LIMIT of, say, 10000, then each operation is
very quick, and I just keep running it until the number of affected rows
returned is zero. But without the status filter and LIMIT in my SQL, it
chokes after trying for several hours and I get the record locks error.
--James
Brent Baisley wrote:
You would need to convert the table to InnoDB, as George mentioned.
Alternatively, break up your SELECT into many smaller selects. If your
query is running slow now, it's only going to get slower and slower as
your table grows. I've noticed with MySQL that queries with large result
sets don't slow down linearly. You will probably find that the many
smaller queries actually shorten the total time it takes.
As an example, I've got a process that merges a 6 million row table
with a 300 million row table on a daily basis. It merges by selecting
records within a 20 minute time period (there is a certain amount of
comparison involved in the merging). The tables have to be MyISAM
because we are using MERGE tables.
It used to take 9 hours with much smaller data sets. After breaking it
up, it now takes about 80 minutes with triple the amount of data. Each
query only takes a few seconds, so locking isn't a big issue.
Inserts/updates complete between the selects. I also don't have to worry
about MySQL running out of memory and swapping to disk, regardless of
how big the dataset gets.
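The windowing described above might look something like this (table and column names are hypothetical, for illustration only):

```sql
-- Hypothetical: process one 20-minute slice at a time instead of
-- the whole range. Each query finishes in seconds and releases its
-- table lock, so concurrent inserts/updates can run between slices.
SELECT *
FROM big_table
WHERE created_at >= '2006-10-19 10:00:00'
  AND created_at <  '2006-10-19 10:20:00';
-- The application advances the window bounds and repeats until the
-- full time range is covered.
```

An index on the windowing column (here `created_at`) is what keeps each slice cheap.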
----- Original Message ----- From: "Jon Ribbens"
<[EMAIL PROTECTED]>
To: <mysql@lists.mysql.com>
Sent: Thursday, October 19, 2006 10:06 AM
Subject: How can I do a SELECT without locking the table against updates?
I have a simple single-table SELECT query that takes several
minutes to complete, due to the very large number of result rows
involved. I don't think there is any way to optimise the query - MySQL
is already using the appropriate index etc; it's just a huge table and
the query has a large result set.
While the SELECT is executing, any INSERT/UPDATE queries on the table
are blocked.
Is there any way I can tell MySQL *not* to lock the table while the
SELECT is executing? I know this could lead to ambiguities, but in
this application it doesn't matter - if, for example, a new matching
row is added after the SELECT starts but before it finishes, it simply
doesn't matter whether that row is returned in the results or not.
If there is no way to do this in MySQL, does anyone have any
innovative suggestions as to how I could work around the problem? The
table is very large, so solutions involving multiple copies are tricky
to implement.
--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]