Re: Query Optimization

2009-01-13 Thread Ken Menzel

Try a UNION instead of an OR condition:
http://dev.mysql.com/doc/refman/5.0/en/union.html
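
For example, a sketch against the tables described in the quoted message
below (column and index names are Johnny's; note that UNION also removes
duplicate rows, so use UNION ALL if duplicates are acceptable and you want
to skip that pass):

SELECT customer.id, customer.ssn, customer_id, id_num
FROM customer
LEFT JOIN customer_id ON customer.id = customer_id.customer_id
WHERE ssn = '123456789'
UNION
SELECT customer.id, customer.ssn, customer_id, id_num
FROM customer
-- plain JOIN here: the id_num condition filters out the NULL rows a
-- LEFT JOIN would add anyway
JOIN customer_id ON customer.id = customer_id.customer_id
WHERE customer_id.id_num = '123456789';

Each branch has a single indexable condition, so the first can use the
index on ssn and the second the index on id_num.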

Johnny Withers wrote:

I have the following tables:

Customer: id,ssn
Customer_Id: id,customer_id,id_num

The customer table holds customers along with their SSN and the customer_id
table holds identifications for each customer (Driver's License, State
Issued ID, Student ID, etc). The SSN column from the customer table is
VARCHAR(9) and the id_num column from the customer_id table is VARCHAR(32).
Both of these columns have an index on them.

The following query uses the index on customer.ssn and executes in 0ms:

SELECT SQL_NO_CACHE customer.id,customer.ssn,customer_id,id_num
FROM customer USE INDEX(idx_ssn)
LEFT JOIN customer_id ON customer.id=customer_id.customer_id
WHERE ssn='123456789';

Explain output:

*** 1. row ***
           id: 1
  select_type: SIMPLE
        table: customer
         type: ref
possible_keys: idx_ssn
          key: idx_ssn
      key_len: 35
          ref: const
         rows: 1
        Extra: Using where; Using index
*** 2. row ***
           id: 1
  select_type: SIMPLE
        table: customer_id
         type: ref
possible_keys: customer_key
          key: customer_key
      key_len: 5
          ref: aca_ecash.customer.id
         rows: 1
        Extra:

Now, this is the query I have trouble with: it does not use the index (or
says it does but doesn't?), and on a busy system (200+ queries per second)
it can take 20 seconds or more to execute:

SELECT SQL_NO_CACHE customer.id,customer.ssn,customer_id,id_num
FROM customer USE INDEX(idx_ssn)
LEFT JOIN customer_id ON customer.id=customer_id.customer_id
WHERE ssn='123456789' OR id_num='123456789';

Explain output:

*** 1. row ***
           id: 1
  select_type: SIMPLE
        table: customer
         type: index
possible_keys: idx_ssn
          key: idx_ssn
      key_len: 35
          ref: NULL
         rows: 165843
        Extra: Using index
*** 2. row ***
           id: 1
  select_type: SIMPLE
        table: customer_id
         type: ref
possible_keys: customer_key
          key: customer_key
      key_len: 5
          ref: aca_ecash.customer.id
         rows: 1
        Extra: Using where


Is there some way I can make it use the index? I've thought about
redesigning the query to select from the customer_id table first and, if a
row is found, just return the matching customer from the customer table,
but I wanted to see whether I'm going about this the wrong way before I
engineer a workaround.

Thanks in advance,

-
Johnny Withers
601.209.4985
joh...@pixelated.net




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



MySQL University session on January 15: Low-Level Locking in mysqld and InnoDB

2009-01-13 Thread Stefan Hinz
MySQL University: Low-Level Locking in mysqld and InnoDB

Happy New Year!

MySQL University sessions are starting again after the winter break.
This Thursday, we're beginning with Tim Cook's presentation on low-level
locking in mysqld and InnoDB – the good, the bad, and the ugly. Tim
works in the Performance and Applications Engineering department at Sun
Microsystems. Expect to get some deep insights into the inner workings
of the MySQL Server and InnoDB.

Tim's based in California, so note that his session will start rather
late for Europeans.

For MySQL University sessions, point your browser to this page:

http://webmeeting.dimdim.com/portal/JoinForm.action?confKey=mysqluniversity

You need a browser with a working Flash plugin. You may register for a
Dimdim account, but you don't have to.

MySQL University is a free educational online program for
engineers/developers. MySQL University sessions are open to anyone, not
just Sun employees. Sessions are recorded (slides and audio), so if you
can't attend the live session you can look at the recording anytime
after the session.

Here's the schedule for the upcoming weeks (see
http://forge.mysql.com/wiki/MySQL_University for a better format of this
list):

January 15, 2009, 17:00 UTC / 9am PST (Pacific) / 11am CST (Central) /
12pm EST (Eastern) / 17:00 GMT (London) / 18:00 CET (Berlin) /
20:00 MSK (Moscow)
Low-Level Locking in mysqld and InnoDB - the Good, the Bad, and the Ugly
(Tim Cook)

January 22, 2009, 08:00 UTC / 8:00 GMT / 9:00 CET / 11:00 MSK (Moscow) /
13:30 IST (India) / 16:00 CST (Beijing) / 17:00 JST (Tokyo) /
19:00 AEDT (Melbourne)
Scalability by Design - Coding for Systems With Large CPU Counts
(Richard Smith)

January 29, 2009, 16:00 UTC / 8am PST (Pacific) / 10am CST (Central) /
11am EST (Eastern) / 16:00 GMT (London) / 17:00 CET (Berlin) /
19:00 MSK (Moscow)
Scalability Challenges in an InnoDB-based Replication Environment
(David Lutz)

February 5, 2009, 08:00 UTC / 8:00 GMT / 9:00 CET / 11:00 MSK (Moscow) /
13:30 IST (India) / 16:00 CST (Beijing) / 17:00 JST (Tokyo) /
19:00 AEDT (Melbourne)
MySQL Performance and Scalability Project - Issues and Opportunities
(Allan Packer)

February 12, 2009, 14:00 UTC / 8am CST (Central) / 9am EST (Eastern) /
14:00 GMT / 15:00 CET / 17:00 MSK (Moscow)
Using DTrace with MySQL (MC Brown)

February 19, 2009, 14:00 UTC / 8am CST (Central) / 9am EST (Eastern) /
14:00 GMT / 15:00 CET / 17:00 MSK (Moscow)
Developing MySQL on Solaris (MC Brown and Trond Norbye)

February 26, 2009, 14:00 UTC / 8am CST (Central) / 9am EST (Eastern) /
14:00 GMT / 15:00 CET / 17:00 MSK (Moscow)
Backing up MySQL using file system snapshots (Lenz Grimmer)

March 5, 2009, 14:00 UTC / 8am CST (Central) / 9am EST (Eastern) /
14:00 GMT / 15:00 CET / 17:00 MSK (Moscow)
Good Coding Style (Konstantin Osipov)

March 12, 2009, 14:00 UTC / 8am CST (Central) / 9am EST (Eastern) /
14:00 GMT / 15:00 CET / 17:00 MSK (Moscow)
MySQL and ZFS (MC Brown)

The session address (Dimdim URL) for all sessions is:

http://webmeeting.dimdim.com/portal/JoinForm.action?confKey=mysqluniversity

Please bookmark this address, since it will remain valid for all future
MySQL University sessions. Remember, though, that the meeting room will
open only 15 minutes before the session starts.

Dimdim is the conferencing system we're using for MySQL University
sessions. It provides integrated voice streaming, chat, whiteboard,
session recording (slides and voice), and more. All you need to do to
attend MySQL University sessions is point your browser to the address
given above.

All MySQL University sessions are recorded, that is, slides and voice
can be viewed as a Flash file (.flv). You can find those recordings on
the respective MySQL University session pages which are listed on the
MySQL University home page:

http://forge.mysql.com/wiki/MySQL_University

Cheers,

Stefan
-- 
***
Sun Microsystems GmbH          Stefan Hinz
Sonnenallee 1                  Manager Documentation, Database Group
85551 Kirchheim-Heimstetten    Phone: +49-30-82702940
Germany                        Fax:   +49-30-82702941
http://www.sun.de/mysql        mailto: stefan.h...@sun.com

Amtsgericht Muenchen: HRB161028
Geschaeftsfuehrer: Thomas Schroeder, Wolfgang Engels, Dr. Roland Boemer
Vorsitzender des Aufsichtsrates: Martin Haering
***

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Question about Master-Master replication: Is this possible?

2009-01-13 Thread Frank Becker
Hello everyone,

I have successfully set up master-master replication between two
servers. My question is: is it possible to set up such a replication
between three (or more) servers? Like this:


Master3 --- Master1 --- Master2
               |
            Master4

Thanks for your feedback
Best regards

Frank

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question about Master-Master replication: Is this possible?

2009-01-13 Thread Baron Schwartz
On Tue, Jan 13, 2009 at 12:32 PM, Frank Becker
<computersac...@beckerwelt.de> wrote:
> Hello everyone,
>
> I have successfully set up master-master replication between two
> servers. My question is: is it possible to set up such a replication
> between three (or more) servers? Like this:
>
>
> Master3 --- Master1 --- Master2
>                |
>             Master4


These types of questions can always be answered by asking: does my
proposed setup require any server to have more than one master?  If
so, it's currently not possible.

You didn't draw arrows between the servers, so I can't really answer
you.  I can say that this is possible:

Master3 <--- Master1 ---> Master2
                |
                v
             Master4


but this is not:


Master3 ---> Master1 <--- Master2
                |
                v
             Master4


Why not?  Simply because in this diagram, Master1 is the slave of both
Master2 and Master3, which is impossible.

--
Baron Schwartz, Director of Consulting, Percona Inc.
Our Blog: http://www.mysqlperformanceblog.com/
Our Services: http://www.percona.com/services.html

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



RE: Question about Master-Master replication: Is this possible?

2009-01-13 Thread Rolando Edwards
In the topology you just illustrated, you need to be specific about your scheme 
using arrows. Here are some examples:

==

Example 1: This is MultiMaster Replication among 4 servers

Master1--->Master2
   ^          |
   |          |
   |          V
Master4<---Master3

==

Example 2: This is Tree Replication among 4 servers

          Master1
             |
            /|\
           / | \
          /  |  \
         V   V   V
    Slave2 Slave3 Slave4

==

Example 3: This is Chained Replication among 4 servers

Master1--->Slave2--->Slave3--->Slave4

==

Example 4: This is MultiMaster Replication among 4 servers
Using two masters and two slaves

Master1 <---> Master2
   |             |
   |             |
   V             V
Slave3        Slave4

(The top link is circular replication: Master1 and Master2 replicate
from each other.)

==

As long as you obey the rule (A SLAVE CANNOT HAVE TWO MASTERS), many
replication topologies are possible.
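
For reference, each arrow in these diagrams is just a slave pointed at one
master with CHANGE MASTER TO. A rough sketch for one arrow, say
Master1--->Slave2 (host name, credentials, and log coordinates are
placeholders):

-- Run on Slave2 to make it replicate from Master1:
CHANGE MASTER TO
    MASTER_HOST = 'master1.example.com',
    MASTER_USER = 'repl',
    MASTER_PASSWORD = 'secret',
    MASTER_LOG_FILE = 'mysql-bin.000001',
    MASTER_LOG_POS  = 4;
START SLAVE;

For the ring and chained layouts (Examples 1 and 3), each intermediate
server also needs log-bin and log-slave-updates enabled so that replicated
changes are passed downstream.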

-Original Message-
From: Frank Becker [mailto:computersac...@beckerwelt.de] 
Sent: Tuesday, January 13, 2009 12:33 PM
To: mysql@lists.mysql.com
Subject: Question about Master-Master replication: Is this possible?

Hello everyone,

I have successfully set up master-master replication between two
servers. My question is: is it possible to set up such a replication
between three (or more) servers? Like this:


Master3 --- Master1 --- Master2
               |
            Master4

Thanks for your feedback
Best regards

Frank

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=redwa...@logicworks.net


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question about Master-Master replication: Is this possible?

2009-01-13 Thread Frank Becker
Hello Baron, thanks for your response.

> These types of questions can always be answered by asking: does my
> proposed setup require any server to have more than one master?  If
> so, it's currently not possible.

What I want to do is the following:
eGroupware is an enterprise groupware solution. I started with eGroupware
on a single server. When I or my wife is out of the office (e.g. on a
train), we have no internet access for our notebooks; it's too expensive.
OK. I then set up a virtual server with VMware and set up master-master
replication between the server (master1) and the virtual server (master2).
This works fine. The notebook has its own server, and when the notebook is
in the home network it replicates the changes.

Now I want to connect another notebook in the same way. If I do it again
and again, the result would be a star topology of masters.

That is why I ask. I don't want a master of disaster.

You asked for arrows. Here they are:

Master3 (virtual) <--> Master1 <--> Master2 (virtual)
                          ^
                          |
                          v
                    Master4 (virtual)
 
Thank you for your help.

Frank Becker



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-13 Thread Dan
On Tue, 2009-01-13 at 12:19 +0530, Chandru wrote:

> Hi,
>
>   Did you try using this command?
>
> mysqldump --opt db_name > db_name.sql -p 2>bkp.err

Not quite. Firstly, I had to alter the normal backup cron job, and that
doesn't happen until late at night.

Secondly, yes I added the redirection to capture errors. There were none
( empty file this time ).

Thirdly, I didn't use '--opt'. I had no other suggestions yesterday
( before I went to bed anyway - there's one in my inbox this morning ), so
I did some experimenting of my own and changed the dump command to:

mysqldump --skip-opt --add-drop-table --add-locks --create-options
--quick --lock-tables --set-charset --disable-keys dbmail > dbmail.sql
-pSOME_PASSWORD 2>bkp.err

This made mysqldump write one INSERT per record.

The backup *appears* to have completed successfully. At least the end of
the dump file looks valid. It ends dumping the last table, then a view,
then I get:

-- Dump completed on 2009-01-13 17:23:13

Previously it just finished part-way through dumping a blob.

I have yet to do extensive testing on it. I suppose I should try
importing the dump file into another server and see if I get the correct
number of rows in each table ...
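
If it helps anyone else checking an import the same way: a quick,
approximate comparison is available from information_schema ( assuming the
database is named dbmail, as in the dump command above; TABLE_ROWS is only
an estimate for InnoDB tables, so SELECT COUNT(*) per table is the
authoritative check ):

SELECT TABLE_NAME, TABLE_ROWS
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'dbmail'
ORDER BY TABLE_NAME;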

The only issue now is that the dump file is much smaller than I would
have expected. When using --opt, I was getting 30GB dump files. I would
have expected the current format ( 1 insert statement per record ) to be
much bigger, but it's 23GB. Now having said that, I did email the
current DB administrator and ask him to get people to archive all emails
with huge attachments somewhere on a network share ( people have some
pretty big attachments ). Also I asked him to get people to clean out
their Trash ( which happens only when we tell them to ). So I suppose
it's not completely infeasible that this alone is responsible for the
difference.

Anyway, it's been a very disconcerting experience. It goes without
saying that people would expect that anything that gets into a MySQL
database should be able to be backed up by mysqldump. And it's worrying
that the default --opt can't do that. When I get some time I'll enter a
bug ...

Thanks for your help, Chandru.

Dan



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Why does changing a table property rebuild the table?

2009-01-13 Thread mos

At 09:17 PM 1/12/2009, you wrote:

> Why would delay_key_writes require a table rebuild? It's not
> modifying the data. Reloading tens of millions of rows for several
> hours seems to be a waste of time.

It probably flips a bit in the .frm file or something like that, but I
have not investigated it myself.

My guess is that you can hack this to do what you want.  We wrote
about this in our book -- you can alter ENUM lists without a table
rebuild, for example.  I'm betting you can do the same thing here.
Rather than describe the whole thing let me show you the blog post
Aurimas wrote about it:

http://www.mysqlperformanceblog.com/2007/10/29/hacking-to-make-alter-table-online-for-certain-changes/

Baron



Baron,
 Yes, that worked. Thanks. :)

It saved me about 4 hours per table, and I had 6 tables to do. So I was
done in 5 minutes instead of 24 hours.
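
For anyone finding this in the archives, the trick from that post boils
down to the following sketch ( table name and option are illustrative;
this bypasses the server's normal checks, so test it on a copy first ):

-- 1. Create an empty table with the desired definition; altering an
--    empty table is nearly instant:
CREATE TABLE big_table_new LIKE big_table;
ALTER TABLE big_table_new DELAY_KEY_WRITE = 1;

-- 2. Close and lock the tables, swap the .frm files in the data
--    directory from a shell, then unlock:
FLUSH TABLES WITH READ LOCK;
--    (shell)  mv big_table.frm big_table_old.frm
--             mv big_table_new.frm big_table.frm
UNLOCK TABLES;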


Mike 



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-13 Thread Andrew Garner
This sounds like you need to raise max_allowed_packet for mysqldump
(and possibly mysqld); these are separate settings for the client and
the server.  You can set them via my.cnf (or ~/.my.cnf) or specify the
option on the command line: mysqldump --opt ...
--max_allowed_packet=1G dbname > backup-file.
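
In config-file form that might look like this ( a sketch; 1G is the
server-side maximum, and mysqldump reads its own [mysqldump] group ):

[mysqld]
max_allowed_packet = 1G

[mysqldump]
max_allowed_packet = 1G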

On Tue, Jan 13, 2009 at 2:58 PM, Dan <d...@entropy.homelinux.org> wrote:
> On Tue, 2009-01-13 at 12:19 +0530, Chandru wrote:
>
>> Hi,
>>
>>   Did you try using this command?
>>
>> mysqldump --opt db_name > db_name.sql -p 2>bkp.err
>
> Not quite. Firstly, I had to alter the normal backup cron job, and that
> doesn't happen until late at night.
>
> Secondly, yes I added the redirection to capture errors. There were none
> ( empty file this time ).
>
> Thirdly, I didn't use '--opt'. I had no other suggestions yesterday
> ( before I went to bed anyway - there's one in my inbox this morning ), so
> I did some experimenting of my own and changed the dump command to:
>
> mysqldump --skip-opt --add-drop-table --add-locks --create-options
> --quick --lock-tables --set-charset --disable-keys dbmail > dbmail.sql
> -pSOME_PASSWORD 2>bkp.err
>
> This made mysqldump write one INSERT per record.
>
> The backup *appears* to have completed successfully. At least the end of
> the dump file looks valid. It ends dumping the last table, then a view,
> then I get:
>
> -- Dump completed on 2009-01-13 17:23:13
>
> Previously it just finished part-way through dumping a blob.
>
> I have yet to do extensive testing on it. I suppose I should try
> importing the dump file into another server and see if I get the correct
> number of rows in each table ...
>
> The only issue now is that the dump file is much smaller than I would
> have expected. When using --opt, I was getting 30GB dump files. I would
> have expected the current format ( 1 insert statement per record ) to be
> much bigger, but it's 23GB. Now having said that, I did email the
> current DB administrator and ask him to get people to archive all emails
> with huge attachments somewhere on a network share ( people have some
> pretty big attachments ). Also I asked him to get people to clean out
> their Trash ( which happens only when we tell them to ). So I suppose
> it's not completely infeasible that this alone is responsible for the
> difference.
>
> Anyway, it's been a very disconcerting experience. It goes without
> saying that people would expect that anything that gets into a MySQL
> database should be able to be backed up by mysqldump. And it's worrying
> that the default --opt can't do that. When I get some time I'll enter a
> bug ...
>
> Thanks for your help, Chandru.
>
> Dan
>
>
>
> --
> MySQL General Mailing List
> For list archives: http://lists.mysql.com/mysql
> To unsubscribe:
> http://lists.mysql.com/mysql?unsub=andrew.b.gar...@gmail.com



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Query Optimization

2009-01-13 Thread Andrew Garner
Do you have an index on id_num?  What sort of explain output do you
get when you don't use a query hint? Your USE INDEX hint may be
causing MySQL to ignore a better strategy.

If you have separate indexes on ssn and id_num, MySQL may be able to
efficiently use an index merge optimization.  See
http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html.
This is only in 5.0+; on older versions of MySQL you may find a union
more efficient.
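
That is, compare the plans with and without the hint ( the same query from
your message, unchanged except that USE INDEX(idx_ssn) is removed ):

EXPLAIN SELECT SQL_NO_CACHE customer.id,customer.ssn,customer_id,id_num
FROM customer
LEFT JOIN customer_id ON customer.id=customer_id.customer_id
WHERE ssn='123456789' OR id_num='123456789'\G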


On Mon, Jan 12, 2009 at 9:43 AM, Johnny Withers <joh...@pixelated.net> wrote:
> I have the following tables:
>
> Customer: id,ssn
> Customer_Id: id,customer_id,id_num
>
> The customer table holds customers along with their SSN and the customer_id
> table holds identifications for each customer (Driver's License, State
> Issued ID, Student ID, etc). The SSN column from the customer table is
> VARCHAR(9) and the id_num column from the customer_id table is VARCHAR(32).
> Both of these columns have an index on them.
>
> The following query uses the index on customer.ssn and executes in 0ms:
>
> SELECT SQL_NO_CACHE customer.id,customer.ssn,customer_id,id_num
> FROM customer USE INDEX(idx_ssn)
> LEFT JOIN customer_id ON customer.id=customer_id.customer_id
> WHERE ssn='123456789';
>
> Explain output:
>
> *** 1. row ***
>            id: 1
>   select_type: SIMPLE
>         table: customer
>          type: ref
> possible_keys: idx_ssn
>           key: idx_ssn
>       key_len: 35
>           ref: const
>          rows: 1
>         Extra: Using where; Using index
> *** 2. row ***
>            id: 1
>   select_type: SIMPLE
>         table: customer_id
>          type: ref
> possible_keys: customer_key
>           key: customer_key
>       key_len: 5
>           ref: aca_ecash.customer.id
>          rows: 1
>         Extra:
>
> Now, this is the query I have trouble with: it does not use the index (or
> says it does but doesn't?), and on a busy system (200+ queries per second)
> it can take 20 seconds or more to execute:
>
> SELECT SQL_NO_CACHE customer.id,customer.ssn,customer_id,id_num
> FROM customer USE INDEX(idx_ssn)
> LEFT JOIN customer_id ON customer.id=customer_id.customer_id
> WHERE ssn='123456789' OR id_num='123456789';
>
> Explain output:
>
> *** 1. row ***
>            id: 1
>   select_type: SIMPLE
>         table: customer
>          type: index
> possible_keys: idx_ssn
>           key: idx_ssn
>       key_len: 35
>           ref: NULL
>          rows: 165843
>         Extra: Using index
> *** 2. row ***
>            id: 1
>   select_type: SIMPLE
>         table: customer_id
>          type: ref
> possible_keys: customer_key
>           key: customer_key
>       key_len: 5
>           ref: aca_ecash.customer.id
>          rows: 1
>         Extra: Using where
>
>
> Is there some way I can make it use the index? I've thought about
> redesigning the query to select from the customer_id table first and, if a
> row is found, just return the matching customer from the customer table,
> but I wanted to see whether I'm going about this the wrong way before I
> engineer a workaround.
>
> Thanks in advance,
>
> -
> Johnny Withers
> 601.209.4985
> joh...@pixelated.net


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-13 Thread Dan
On Tue, 13 Jan 2009 18:34:44 -0600, Andrew Garner
<andrew.b.gar...@gmail.com> wrote:

> This sounds like you need to raise max_allowed_packet for mysqldump
> (and possibly mysqld); these are separate settings for the client and
> the server.  You can set them via my.cnf (or ~/.my.cnf) or specify the
> option on the command line: mysqldump --opt ...
> --max_allowed_packet=1G dbname > backup-file.

This is certainly the most common advice for this error, yes. I increased
the max_allowed_packet size from 1M to 128M when the problem initially
occurred. This didn't fix anything.

Since dbmail splits all email bodies / attachments into small chunks and
inserts these chunks as separate records, I really don't see how a
max_allowed_packet size of 128M would fail ... especially since the data
got in there with a max_allowed_packet size of 1M to begin with. The
biggest email in the database is 50M. So even if dbmail *hadn't* split the
email into separate records, a max_allowed_packet size of 128M should be
*easily* big enough, shouldn't it?

As for a max_allowed_packet size of 1G, that just sounds dangerous. The
server has 900MB or so of physical RAM and 512MB of swap. It's also
running a LOT of other services. I don't want something stupid happening
like Linux's out-of-memory killer coming along and killing MySQL, causing
database corruption. Can someone please comment on this? If it's not
dangerous, I will try it. As noted in a prior post, I 'successfully'
completed a backup last night, and I'm testing it now, but it took 10
hours to complete and was still running when people came in this morning,
which is obviously not desirable. So if I can somehow still use the --opt
option of mysqldump by raising max_allowed_packet to some absolutely
astronomical level without endangering things, maybe that's the way to go.
Maybe ...

Anyway, thanks for the comments Andrew.

Dan


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Query Optimization

2009-01-13 Thread Baron Schwartz
> If you have separate indexes on ssn and id_num, MySQL may be able to
> efficiently use an index merge optimization.  See
> http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html.
> This is only in 5.0+; on older versions of MySQL you may find a union
> more efficient.

And in newer versions, too.  The optimizer frequently underestimates
the cost of the merge operation and the required random I/O for row
lookups.  So, yes it can use an index merge, but... efficiency is
another question.  I've seen table scans outperform a two-way index
merge by orders of magnitude.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Query Optimization

2009-01-13 Thread Andrew Garner
On Tue, Jan 13, 2009 at 7:07 PM, Baron Schwartz <ba...@xaprb.com> wrote:
>> If you have separate indexes on ssn and id_num, MySQL may be able to
>> efficiently use an index merge optimization.  See
>> http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html.
>> This is only in 5.0+; on older versions of MySQL you may find a union
>> more efficient.
>
> And in newer versions, too.  The optimizer frequently underestimates
> the cost of the merge operation and the required random I/O for row
> lookups.  So, yes it can use an index merge, but... efficiency is
> another question.  I've seen table scans outperform a two-way index
> merge by orders of magnitude.

These appeared to be high-selectivity indexes, but perhaps I assumed
too much. :)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-13 Thread Andrew Garner
On Tue, Jan 13, 2009 at 6:06 PM, Dan <d...@entropy.homelinux.org> wrote:
> On Tue, 13 Jan 2009 18:34:44 -0600, Andrew Garner
> <andrew.b.gar...@gmail.com> wrote:
>
>> This sounds like you need to raise max_allowed_packet for mysqldump
>> (and possibly mysqld); these are separate settings for the client and
>> the server.  You can set them via my.cnf (or ~/.my.cnf) or specify the
>> option on the command line: mysqldump --opt ...
>> --max_allowed_packet=1G dbname > backup-file.
>
> This is certainly the most common advice for this error, yes. I increased
> the max_allowed_packet size from 1M to 128M when the problem initially
> occurred. This didn't fix anything.

My apologies.  I hadn't read up-thread where this was discussed, and
given that, max_allowed_packet is almost certainly not the problem.
Sorry for the noise.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: stuck commits

2009-01-13 Thread Scott Edwards
On Tuesday 13 January 2009 07:23:52 am Krishna Chandra Prajapati wrote:
> Hi Scott,
>
> I believe something is wrong with the innodb parameters. They should be
> optimal; in your case they might be too high or too low. Take a look at
> the log file size. Please send your SHOW VARIABLES and SHOW STATUS data
> so we can reach a conclusion.



Okay. This is not during a time of incident.  Another server is handling
the load from the queries and database.  So far we don't notice any issues
with stuck commits on the new server, but it's only handling minimal load
outside of amavis queries.  We would like to run this on the original
system, because it has raid1+drbd+heartbeat (2 nodes), whereas the
temporary solution only uses raid1.
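
( One thing that stands out in the list below: innodb_log_file_size is
still the 5MB default against a 256MB buffer pool. If that turns out to be
the culprit, the change would look roughly like this; the sizes are
illustrative, and on 5.0 resizing requires a clean shutdown and moving the
old ib_logfile* files out of the way before restarting. )

[mysqld]
innodb_buffer_pool_size = 256M
innodb_log_file_size    = 64M
innodb_log_buffer_size  = 8M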

Variable_name   Value
auto_increment_increment1
auto_increment_offset   1
automatic_sp_privileges ON
back_log50
basedir /usr/
binlog_cache_size   32768
bulk_insert_buffer_size 8388608
character_set_client    latin1
character_set_connection        latin1
character_set_database  latin1
character_set_filesystem        binary
character_set_results   latin1
character_set_server    latin1
character_set_system    utf8
character_sets_dir      /usr/share/mysql/charsets/
collation_connection    latin1_swedish_ci
collation_database      latin1_swedish_ci
collation_server        latin1_swedish_ci
completion_type 0
concurrent_insert   1
connect_timeout 5
datadir /var/www/mysql/
date_format %Y-%m-%d
datetime_format %Y-%m-%d %H:%i:%s
default_week_format 0
delay_key_write ON
delayed_insert_limit    100
delayed_insert_timeout  300
delayed_queue_size  1000
div_precision_increment 4
engine_condition_pushdown   OFF
expire_logs_days        10
flush   OFF
flush_time  0
ft_boolean_syntax       + -><()~*:""&|
ft_max_word_len 84
ft_min_word_len 4
ft_query_expansion_limit20
ft_stopword_file        (built-in)
group_concat_max_len    1024
have_archive    YES
have_bdb        NO
have_blackhole_engine   NO
have_compress   YES
have_crypt      YES
have_csv        YES
have_dynamic_loading    YES
have_example_engine NO
have_federated_engine   YES
have_geometry   YES
have_innodb YES
have_isam   NO
have_merge_engine   YES
have_ndbcluster DISABLED
have_openssl    DISABLED
have_query_cache        YES
have_raid       NO
have_rtree_keys YES
have_symlink    YES
init_connect
init_file
init_slave
innodb_additional_mem_pool_size 1048576
innodb_autoextend_increment 8
innodb_buffer_pool_awe_mem_mb   0
innodb_buffer_pool_size 268435456
innodb_checksumsON
innodb_commit_concurrency   0
innodb_concurrency_tickets  500
innodb_data_file_path   ibdata1:10M:autoextend
innodb_data_home_dir
innodb_doublewrite  ON
innodb_fast_shutdown    1
innodb_file_io_threads  4
innodb_file_per_table   OFF
innodb_flush_log_at_trx_commit  1
innodb_flush_method
innodb_force_recovery   0
innodb_lock_wait_timeout50
innodb_locks_unsafe_for_binlog  OFF
innodb_log_arch_dir
innodb_log_archive  OFF
innodb_log_buffer_size  1048576
innodb_log_file_size5242880
innodb_log_files_in_group   2
innodb_log_group_home_dir   ./
innodb_max_dirty_pages_pct  90
innodb_max_purge_lag    0
innodb_mirrored_log_groups  1
innodb_open_files   300
innodb_rollback_on_timeout  OFF
innodb_support_xa   ON
innodb_sync_spin_loops  20
innodb_table_locks  ON
innodb_thread_concurrency   8
innodb_thread_sleep_delay   1
interactive_timeout 28800
join_buffer_size3141632
key_buffer_size 50331648
key_cache_age_threshold 300
key_cache_block_size1024
key_cache_division_limit100
language        /usr/share/mysql/english/
large_files_support ON
large_page_size 0
large_pages OFF
lc_time_names   en_US
license GPL
local_infile    ON
locked_in_memory        OFF
log OFF
log_bin ON
log_bin_trust_function_creators OFF
log_error
log_queries_not_using_indexes   OFF
log_slave_updates   OFF
log_slow_queriesON
log_warnings1
long_query_time 3
low_priority_updates    OFF
lower_case_file_system  OFF
lower_case_table_names  0
max_allowed_packet  16776192
max_binlog_cache_size   18446744073709551615
max_binlog_size 104857600
max_connect_errors  1
max_connections 250
max_delayed_threads 20
max_error_count 64
max_heap_table_size 209715200
max_insert_delayed_threads  20
max_join_size   18446744073709551615
max_length_for_sort_data1024
max_prepared_stmt_count 16382
max_relay_log_size  0
max_seeks_for_key   18446744073709551615
max_sort_length 1024
max_sp_recursion_depth  0
max_tmp_tables  32
max_user_connections    0
max_write_lock_count18446744073709551615
multi_range_count   256
myisam_data_pointer_size6
myisam_max_sort_file_size   9223372036854775807
myisam_recover_options  OFF
myisam_repair_threads   1
myisam_sort_buffer_size 8388608
myisam_stats_method nulls_unequal
ndb_autoincrement_prefetch_sz   32
ndb_force_send  ON
ndb_use_exact_count ON
ndb_use_transactions    ON