Re: MySQL runs on 16-cores server

2009-04-10 Thread Andrew Garner
On Fri, Apr 10, 2009 at 10:40 AM, Wm Mussatto mussa...@csz.com wrote:
 On Fri, April 10, 2009 05:24, Uwe Kiewel wrote:
 Moon's Father wrote:
 Hi.
    If the server has 16 cores, how should I set parameters to make MySQL
 run well?

 IIRC, mysqld is multi-threaded - so if you have parallel queries, mysqld
 will spam multiple threads across multiple cores.
 ---
 Don't you mean spaN.  I hope mySQL doesn't SPAM. ;-}

On 16 cores, spam is probably a little more accurate :)
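
More seriously: mysqld will spread its threads across all 16 cores on
its own, so there is no core-count setting to flip.  What usually needs
attention on a box that size is memory and concurrency tuning.  A
hedged my.cnf sketch - the values below are illustrative assumptions,
not recommendations from this thread:

    [mysqld]
    # 0 lets InnoDB manage its own concurrency; alternatively cap it at
    # roughly 2x the number of cores.
    innodb_thread_concurrency = 0
    # Usually the biggest single win on a dedicated DB server: size the
    # buffer pool to most of the available RAM.
    innodb_buffer_pool_size = 8G
    # Reuse threads between connections instead of creating new ones.
    thread_cache_size = 64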

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: When will MySQL support array datatype?

2009-03-10 Thread Andrew Garner
This is part of the SQL Standard.  MySQL has a worklog open on it:
http://forge.mysql.com/worklog/task.php?id=2081
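
Until that lands, the usual relational substitute is a child table with
one row per array element - a sketch with made-up table names, not
anything taken from the worklog:

    CREATE TABLE item (
      id INT NOT NULL AUTO_INCREMENT PRIMARY KEY
    ) ENGINE=InnoDB;

    -- one row per "array" element, kept in order by position
    CREATE TABLE item_tag (
      item_id  INT NOT NULL,
      position INT NOT NULL,
      tag      VARCHAR(64) NOT NULL,
      PRIMARY KEY (item_id, position),
      FOREIGN KEY (item_id) REFERENCES item (id)
    ) ENGINE=InnoDB;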

On Sat, Feb 14, 2009 at 9:05 PM, Moon's Father
yueliangdao0...@gmail.com wrote:
 Hi.
   Who could tell me when MySQL will support an array datatype?
 Any reply will be appreciated.

 --
 I'm a MySQL DBA in china.
 More about me just visit here:
 http://yueliangdao0608.cublog.cn


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Algorithm for resolving foreign key dependencies?

2009-02-03 Thread Andrew Garner
Sounds like you want to walk tables in order of their fk dependencies
- a topological ordering.  You might want to take a look at SQLAlchemy
which has some methods to do just this in sqlalchemy.sql.util:

def sort_tables(tables, reverse=False):
    """Sort a collection of Table objects in order of their
    foreign-key dependency."""
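
A rough sketch of how that could be wired up for the DELETE-ordering
problem below - the DSN is a placeholder and the emitted statements are
only skeletons, so treat it as illustrative rather than tested code:

    # Reflect the schema, then emit DELETEs child-first so that rows
    # referencing a parent are removed before the parent row itself.
    from sqlalchemy import MetaData, create_engine
    from sqlalchemy.sql.util import sort_tables

    engine = create_engine('mysql://user:pass@localhost/mydb')  # placeholder DSN
    meta = MetaData()
    meta.reflect(bind=engine)  # loads tables and their foreign keys

    # sort_tables() returns parents before children; reversed() gives the
    # child-before-parent order the DELETEs need.
    for table in reversed(sort_tables(meta.tables.values())):
        print('DELETE FROM `%s` WHERE ...;' % table.name)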

~Andrew

On Tue, Feb 3, 2009 at 3:40 PM, Philip Pemberton usene...@philpem.me.uk wrote:
 Hi,
  First of all, I apologise in advance for any mind-altering or
 headache-inducing effects this question may have. I've spent the past two
 days trying to figure it out, and all I've got to show for it is a
 mostly-working recursive depth-first-search routine and an empty packet of
 painkillers.

 MySQL version: 5.0.67-0ubuntu6

 I'm trying to write a code generator (in Python) that reads in a MySQL
 database, enumerates all the tables, then produces INSERT, DELETE and UPDATE
 code in PHP. The INSERT and UPDATE code generation was fairly easy, and
 works quite well. What I'm having trouble with is the DELETE code generator
 -- more specifically, resolving foreign key references.

 Basically, what I have is a tree built in memory, so I can go:
  tableinfo['thetable']['fieldname']['refs']
 And get a complete list of all the tables (and the fields within that table)
 that reference 'fieldname' in 'thetable'.

 What I want is an answer to the question: If all my foreign keys were set
 to 'ON DELETE CASCADE', what would I need to do to delete row 'X' in table
 'Y' without violating any foreign key constraints?



 Here's an example. Let's say I've got these tables:

 CREATE TABLE `Manufacturers` (
  `idManufacturer` int(11) NOT NULL auto_increment,
  `name` varchar(255) NOT NULL,
  PRIMARY KEY  (`idManufacturer`)
 ) ENGINE=InnoDB

 CREATE TABLE `Parts` (
  `idPart` int(11) NOT NULL auto_increment,
  `idManufacturer` int(11) NOT NULL,
  `partnumber` int(11) NOT NULL,
  PRIMARY KEY  (`idPart`),
  KEY `Parts_idManufacturer_FKIndex` (`idManufacturer`),
  CONSTRAINT `Parts_ibfk_1` FOREIGN KEY (`idManufacturer`) REFERENCES
 `Manufacturers` (`idManufacturer`)
 ) ENGINE=InnoDB

 And my database contains:
 Manufacturers:
  idManufacturer  name
  123             Any Company Inc.

 Parts:
  idPart  idManufacturer  partnumber
  1       123             12345

 Now, let's say I want to do this:
  DELETE FROM Manufacturers WHERE idManufacturer=123

 Because I have a part that references Manufacturer #123, I have to do this
 instead:
  DELETE FROM Parts WHERE idManufacturer=123
  DELETE FROM Manufacturers WHERE idManufacturer=123


 What I want is something I can feed the table definitions to, and the name
 of the table I want to delete a row from (in this case 'Manufacturers'), and
 generate a list of the DELETE commands that would allow me to delete that
 row while enforcing FK dependencies.

 I figure this is going to have to work something like mathematical
 expression evaluation -- build up a list of dependencies, then deal with the
 deepest dependency first. Catch being I can't see an obvious way to deal
 with generating the necessary DELETE commands without having to write a
 massive if recursion_level = 0 then generate_a_straight_delete else if
 recursion_level = 1 then... statement...

 Thanks,
 --
 Phil.
 usene...@philpem.me.uk
 http://www.philpem.me.uk/
 If mail bounces, replace 08 with the last two digits of the current year.


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
  http://lists.mysql.com/mysql?unsub=andrew.b.gar...@gmail.com



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-13 Thread Andrew Garner
This sounds like you need to raise max_allowed_packet for mysqldump
(and possibly mysqld) - these are separate settings for the client and
the server.  You can set them via my.cnf (or ~/.my.cnf) or specify the
option on the command line: mysqldump --opt ...
--max_allowed_packet=1G dbname > backup-file.
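
For reference, a minimal my.cnf sketch covering both sides - the 1G
value is only an illustration; size it above the largest row or blob
you expect to dump:

    [mysqld]
    max_allowed_packet = 1G

    [mysqldump]
    max_allowed_packet = 1G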

On Tue, Jan 13, 2009 at 2:58 PM, Dan d...@entropy.homelinux.org wrote:
 On Tue, 2009-01-13 at 12:19 +0530, Chandru wrote:

 Hi,

   Did you try using this command?


 mysqldump --opt db_name > db_name.sql -p 2>bkp.err

 Not quite. Firstly, I had to alter the normal backup cron job, and that
 doesn't happen until late at night.

 Secondly, yes I added the redirection to capture errors. There were none
 ( empty file this time ).

 Thirdly, I didn't use '--opt'. I had no other suggestions yesterday
 ( before I went to bed anyway - there's 1 in my inbox this morning ), so
 I did some experimenting of my own and changed the dump command to:

 mysqldump --skip-opt --add-drop-table --add-locks --create-options
 --quick --lock-tables --set-charset --disable-keys dbmail > dbmail.sql
 -pSOME_PASSWORD 2>bkp.err

 This made mysql do 1 insert per record.

 The backup *appears* to have completed successfully. At least the end of
 the dump file looks valid. It ends dumping the last table, then a view,
 then I get:

 -- Dump completed on 2009-01-13 17:23:13

 Previously it just finished part-way through dumping a blob.

 I have yet to do extensive testing on it. I suppose I should try
 importing the dump file into another server and see if I get the correct
 number of rows in each table ...

 The only issue now is that the dump file is much smaller than I would
 have expected. When using --opt, I was getting 30GB dump files. I would
 have expected the current format ( 1 insert statement per record ) to be
 much bigger, but it's 23GB. Now having said that, I did email the
 current DB administrator and ask him to get people to archive all emails
 with huge attachments somewhere on a network share ( people have some
 pretty big attachments ). Also I asked him to get people to clean out
 their Trash ( which happens only when we tell them to ). So I suppose
 it's not completely infeasible that this alone is responsible for the
 difference.

 Anyway, it's been a very disconcerting experience. It goes without
 saying that people would expect that anything that gets into a MySQL
 database should be able to be backed up by mysqldump. And it's worrying
 that the default --opt can't do that. When I get some time I'll enter a
 bug ...

 Thanks for your help, Chandru.

 Dan



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:
 http://lists.mysql.com/mysql?unsub=andrew.b.gar...@gmail.com



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Query Optimization

2009-01-13 Thread Andrew Garner
Do you have an index on id_num?  What EXPLAIN output do you get when
you don't use a query hint?  Your USE INDEX hint may be causing MySQL
to ignore a better strategy.

If you have separate indexes on ssn and id_num, MySQL may be able to
efficiently use an index merge optimization.  See
http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html.
This is only available in 5.0+; on older versions of MySQL you may find
a UNION more efficient.
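
As a concrete illustration, the OR can be split into a UNION so that
each branch uses its own index - an untested sketch based on the tables
in the quoted message below:

    SELECT c.id, c.ssn, ci.customer_id, ci.id_num
    FROM customer c
    LEFT JOIN customer_id ci ON c.id = ci.customer_id
    WHERE c.ssn = '123456789'
    UNION
    SELECT c.id, c.ssn, ci.customer_id, ci.id_num
    FROM customer c
    JOIN customer_id ci ON c.id = ci.customer_id
    WHERE ci.id_num = '123456789';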


On Mon, Jan 12, 2009 at 9:43 AM, Johnny Withers joh...@pixelated.net wrote:
 I have the following tables:

 Customer: id,ssn
 Customer_Id: id,customer_id,id_num

 The customer table holds customers along with their SSN and the customer_id
 table holds identifications for each customer (Driver's License, State
 Issued ID, Student ID, etc). The SSN column from the customer table is
 VARCHAR(9) and the id_num column from the customer_id table is VARCHAR(32).
 Both of these columns have an index on them.

 The following query uses the index on customer.ssn and executes in 0ms:

 SELECT SQL_NO_CACHE customer.id,customer.ssn,customer_id,id_num
 FROM customer USE INDEX(idx_ssn)
 LEFT JOIN customer_id ON customer.id=customer_id.customer_id
 WHERE ssn='123456789';

 Explain output:

 *** 1. row ***
            id: 1
   select_type: SIMPLE
         table: customer
          type: ref
 possible_keys: idx_ssn
           key: idx_ssn
       key_len: 35
           ref: const
          rows: 1
         Extra: Using where; Using index
 *** 2. row ***
            id: 1
   select_type: SIMPLE
         table: customer_id
          type: ref
 possible_keys: customer_key
           key: customer_key
       key_len: 5
           ref: aca_ecash.customer.id
          rows: 1
         Extra:

 Now, this is the query I have trouble with; it does not use the index (or
 says it does but doesn't?) and on a busy system (200+ queries per second)
 can take up to 20 seconds or more to execute:

 SELECT SQL_NO_CACHE customer.id,customer.ssn,customer_id,id_num
 FROM customer USE INDEX(idx_ssn)
 LEFT JOIN customer_id ON customer.id=customer_id.customer_id
 WHERE ssn='123456789' OR id_num='123456789';

 Explain output:

 *** 1. row ***
            id: 1
   select_type: SIMPLE
         table: customer
          type: index
 possible_keys: idx_ssn
           key: idx_ssn
       key_len: 35
           ref: NULL
          rows: 165843
         Extra: Using index
 *** 2. row ***
            id: 1
   select_type: SIMPLE
         table: customer_id
          type: ref
 possible_keys: customer_key
           key: customer_key
       key_len: 5
           ref: aca_ecash.customer.id
          rows: 1
         Extra: Using where


 Is there some way I can make it use the index? I've thought about
 redesigning the query to select from the customer_id table first and, if a
 row is found, just return the matching customer_id from the customer table,
 but I wanted to see if maybe I'm going about this the wrong way before I
 engineer some way around this.

 Thanks in advance,

 -
 Johnny Withers
 601.209.4985
 joh...@pixelated.net


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Query Optimization

2009-01-13 Thread Andrew Garner
On Tue, Jan 13, 2009 at 7:07 PM, Baron Schwartz ba...@xaprb.com wrote:
 If you have separate indexes on ssn and id_num, MySQL may be able to
 efficiently use an index merge optimization.  See
 http://dev.mysql.com/doc/refman/5.0/en/index-merge-optimization.html.
 This is only available in 5.0+; on older versions of MySQL you may find
 a UNION more efficient.

 And in newer versions, too.  The optimizer frequently underestimates
 the cost of the merge operation and the required random I/O for row
 lookups.  So, yes it can use an index merge, but... efficiency is
 another question.  I've seen table scans outperform a two-way index
 merge by orders of magnitude.

These appeared to be high-selectivity indexes, but perhaps I assumed
too much. :)

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: ? Solved ? Re: mysqldump: Error 2013: Lost connection to MySQL server

2009-01-13 Thread Andrew Garner
On Tue, Jan 13, 2009 at 6:06 PM, Dan d...@entropy.homelinux.org wrote:
 On Tue, 13 Jan 2009 18:34:44 -0600, Andrew Garner
 andrew.b.gar...@gmail.com wrote:

 This sounds like you need to raise max_allowed_packet for mysqldump
 (and possibly mysqld) - these are separate settings for the client and
 the server.  You can set them via my.cnf (or ~/.my.cnf) or specify the
 option on the command line: mysqldump --opt ...
 --max_allowed_packet=1G dbname > backup-file.

 This is certainly the most common advice for this error, yes. I increased
 the max_allowed_packet size from 1M to 128M when the problem initially
 occurred. This didn't fix anything.

My apologies.  I hadn't read up-thread where this was discussed, and
given that, max_allowed_packet is almost certainly not the problem.
Sorry for the noise.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: mysql binlogs and their expiry times

2008-10-08 Thread Andrew Garner
On Wed, Oct 8, 2008 at 3:10 AM, Zbigniew Szalbot [EMAIL PROTECTED] wrote:
 Hi there,

 I hope someone can help. Due to the way my HD has been sliced I had
 to move the mysql database to /usr/local/mysql. All works fine. Last week
 I added this entry:

 #expire bin logs
 expire_logs_days = 7

 to /usr/local/mysql/my.cnf

 I restarted the MySQL server and now I have been waiting for the
 binlogs to automatically expire but this is not happening:

 $ ls -l /usr/local/mysql

 -r--r--r--  1 mysql  mysql        4954 Oct  1 07:30 my.cnf
 drwx------  2 mysql  mysql        1536 Sep 27 07:10 mysql
 -rw-rw----  1 mysql  mysql  1073745213 Sep  2 04:07 mysql-bin.47
 -rw-rw----  1 mysql  mysql  1073746878 Sep  7 03:48 mysql-bin.48
 -rw-rw----  1 mysql  mysql  1073745707 Sep 11 20:07 mysql-bin.49
 -rw-rw----  1 mysql  mysql   175527890 Sep 12 08:32 mysql-bin.50
 -rw-rw----  1 mysql  mysql      128272 Sep 12 08:40 mysql-bin.51
 -rw-rw----  1 mysql  mysql  1073745119 Sep 17 04:35 mysql-bin.52
 -rw-rw----  1 mysql  mysql  1073747657 Sep 22 04:26 mysql-bin.53
 -rw-rw----  1 mysql  mysql  1073744456 Sep 27 03:28 mysql-bin.54
 -rw-rw----  1 mysql  mysql   986782722 Oct  1 07:32 mysql-bin.55
 -rw-rw----  1 mysql  mysql  1073742442 Oct  6 04:18 mysql-bin.56
 -rw-rw----  1 mysql  mysql   536487381 Oct  8 07:45 mysql-bin.57
 -rw-r-----  1 mysql  mysql         209 Oct  6 04:18 mysql-bin.index

 Do you have any idea why? Or is /usr/local/mysql/ a correct
 location for the my.cnf file? Perhaps it should go in /usr/local/etc/ ?

 If it matters, I use mysql-server-5.0.67 on a FreeBSD 7.0-RELEASE machine.

 Many thanks!

I've seen MySQL get confused in some situations - seemingly after
running out of disk space, or when someone manually maintains binlogs
outside of mysql-bin.index (e.g. find+rm).  expire_logs_days stops
working - an explicit FLUSH LOGS doesn't trigger expiration, and log
rotation doesn't trigger it either.  In the cases I've run into, an
explicit PURGE MASTER LOGS or a restart usually fixes things, and
further FLUSH LOGS / log rotations appear to work again - sometimes
only after correcting mysql-bin.index.  There's a similar bug report
here: http://bugs.mysql.com/bug.php?id=28238.  It might also be useful
to be aware of http://bugs.mysql.com/bug.php?id=37027.
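
If it comes to that, the manual cleanup looks roughly like this - an
illustrative sketch; the log name and cutoff date obviously depend on
your own server:

    -- drop everything up to (but not including) a named binlog
    PURGE MASTER LOGS TO 'mysql-bin.55';

    -- or drop everything older than a given date
    PURGE MASTER LOGS BEFORE '2008-10-01 00:00:00';

    -- confirm the variable is set, then rotate the logs
    SHOW VARIABLES LIKE 'expire_logs_days';
    FLUSH LOGS;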

~Andrew

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]