Re: Indexing about 40 Billion Entries

2012-06-21 Thread Brent Clark


On 20/06/2012 11:45, Christian Koetteritzsch wrote:

Hi guys,

As the title says I'm trying to index 40 billion entries with two indexes on a 
server with 16 cores and 128GB RAM. The table is the one below and it is a 
myisam table. The *.myd file is about 640GB



Hiya

I am unable to help, but one question that was running through my mind as I
was reading this is:

How do you do your backups?

I use mylvmbackup on a slave (master-slave replication) server, but I would
love to know what you use and how you do your backups.
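For what it's worth, a minimal sketch of the kind of mylvmbackup invocation I
mean (the volume group, logical volume and paths are made up; adjust to your
layout):

mylvmbackup --user=root --password=secret \
            --vgname=vg00 --lvname=mysql \
            --backuptype=tar --backupdir=/backup/mysql

Running it on the slave means the brief FLUSH TABLES WITH READ LOCK it takes
never blocks production traffic.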

Hope you come right with your problem.

Thanks
Brent




mysqld got signal 6 (problem way bigger than I initially realised)

2012-03-27 Thread Brent Clark

Hey Guys

Yesterday I sent an email about '1 client is using or hasn't closed the table
properly', but the problem is actually bigger than I realised.

We run Mysql replication, and on the second node, Mysql is crashing with 
'mysqld got signal 6' every so often.

We have replaced the hardware (other than the drives), and still the problem
persists.

Googling, I thought it was a memory exhaustion issue, so I started playing with 
the following variables and values.

key_buffer_size=8384512
max_connections=41

But it still crashes:

Mar 27 12:36:53 xyz-web02 mysqld: key_buffer_size=8384512
Mar 27 12:36:53 xyz-web02 mysqld: read_buffer_size=131072
Mar 27 12:36:53 xyz-web02 mysqld: max_used_connections=3
Mar 27 12:36:53 xyz-web02 mysqld: max_threads=41
Mar 27 12:36:53 xyz-web02 mysqld: thread_count=1
Mar 27 12:36:53 xyz-web02 mysqld: connection_count=1
Mar 27 12:36:53 xyz-web02 mysqld: It is possible that mysqld could use up to
Mar 27 12:36:53 xyz-web02 mysqld: key_buffer_size + (read_buffer_size + 
sort_buffer_size)*max_threads = 97828 K  bytes of memory

xyz-web02:# free -m
             total       used       free     shared    buffers     cached
Mem:         12038      11966         71          0        117      11378
-/+ buffers/cache:        469      11568
Swap:         2047          0       2047
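
As a sanity check, plugging the logged values into that formula shows the
worst case is tiny next to the 12 GB in the box:

echo $(( (8384512 + (131072 + 2097144) * 41) / 1024 ))   # prints 97403

The server reports 97828 K, slightly more, presumably from per-thread extras
not shown here; either way it is under 100 MB, which fits the glibc "double
free or corruption" below pointing at a bug rather than memory exhaustion.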

We're running Debian Squeeze 64-bit.

I have attached the full crash message.

If anyone can help, I would be *most* grateful (If you are in South Africa, I 
will buy you a meal, let alone a beer :)

Kindest Regards
Brent Clark
*** glibc detected *** /usr/sbin/mysqld: double free or corruption (!prev): 
0x7f5e38003b60 ***
======= Backtrace: =========
/lib/libc.so.6(+0x71bd6)[0x7f5e4575dbd6]
/lib/libc.so.6(cfree+0x6c)[0x7f5e4576294c]
/usr/sbin/mysqld(_ZN15Query_log_eventD0Ev+0x20)[0x7f5e474f2ed0]
/usr/sbin/mysqld(handle_slave_sql+0x835)[0x7f5e47567255]
/lib/libpthread.so.0(+0x68ca)[0x7f5e46c738ca]
/lib/libc.so.6(clone+0x6d)[0x7f5e457bb86d]
======= Memory map: ========
7f5e3000-7f5e30021000 rw-p  00:00 0 
7f5e30021000-7f5e3400 ---p  00:00 0 
7f5e3800-7f5e38158000 rw-p  00:00 0 
7f5e38158000-7f5e3c00 ---p  00:00 0 
7f5e3e019000-7f5e3e01a000 ---p  00:00 0 
7f5e3e01a000-7f5e3e81a000 rw-p  00:00 0 
7f5e3e81a000-7f5e3e81b000 ---p  00:00 0 
7f5e3e81b000-7f5e3f01b000 rw-p  00:00 0 
7f5e3f01b000-7f5e3f01c000 ---p  00:00 0 
7f5e3f01c000-7f5e3f81c000 rw-p  00:00 0 
7f5e3f81c000-7f5e3f81d000 ---p  00:00 0 
7f5e3f81d000-7f5e4001d000 rw-p  00:00 0 
7f5e4001d000-7f5e4001e000 ---p  00:00 0 
7f5e4001e000-7f5e40a24000 rw-p  00:00 0 
7f5e40c47000-7f5e40c48000 ---p  00:00 0 
7f5e40c48000-7f5e41448000 rw-p  00:00 0 
7f5e41448000-7f5e41449000 ---p  00:00 0 
7f5e41449000-7f5e41c49000 rw-p  00:00 0 
7f5e41c49000-7f5e41c4a000 ---p  00:00 0 
7f5e41c4a000-7f5e448c4000 rw-p  00:00 0 
7f5e448c4000-7f5e448d r-xp  fd:00 328894 
/lib/libnss_files-2.11.3.so
7f5e448d-7f5e44acf000 ---p c000 fd:00 328894 
/lib/libnss_files-2.11.3.so
7f5e44acf000-7f5e44ad r--p b000 fd:00 328894 
/lib/libnss_files-2.11.3.so
7f5e44ad-7f5e44ad1000 rw-p c000 fd:00 328894 
/lib/libnss_files-2.11.3.so
7f5e44ad1000-7f5e44adb000 r-xp  fd:00 328896 
/lib/libnss_nis-2.11.3.so
7f5e44adb000-7f5e44cda000 ---p a000 fd:00 328896 
/lib/libnss_nis-2.11.3.so
7f5e44cda000-7f5e44cdb000 r--p 9000 fd:00 328896 
/lib/libnss_nis-2.11.3.so
7f5e44cdb000-7f5e44cdc000 rw-p a000 fd:00 328896 
/lib/libnss_nis-2.11.3.so
7f5e44cdc000-7f5e44ce3000 r-xp  fd:00 328892 
/lib/libnss_compat-2.11.3.so
7f5e44ce3000-7f5e44ee2000 ---p 7000 fd:00 328892 
/lib/libnss_compat-2.11.3.so
7f5e44ee2000-7f5e44ee3000 r--p 6000 fd:00 328892 
/lib/libnss_compat-2.11.3.so
7f5e44ee3000-7f5e44ee4000 rw-p 7000 fd:00 328892 
/lib/libnss_compat-2.11.3.so
7f5e44eeb000-7f5e44eec000 ---p  00:00 0 
7f5e44eec000-7f5e456ec000 rw-p  00:00 0 
7f5e456ec000-7f5e45845000 r-xp  fd:00 328847 
/lib/libc-2.11.3.so
7f5e45845000-7f5e45a44000 ---p 00159000 fd:00 328847 
/lib/libc-2.11.3.so
7f5e45a44000-7f5e45a48000 r--p 00158000 fd:00 328847 
/lib/libc-2.11.3.so
7f5e45a48000-7f5e45a49000 rw-p 0015c000 fd:00 328847 
/lib/libc-2.11.3.so
7f5e45a49000-7f5e45a4e000 rw-p  00:00 0 
7f5e45a4e000-7f5e45a64000 r-xp  fd:00 328801 
/lib/libgcc_s.so.1
7f5e45a64000-7f5e45c63000 ---p 00016000 fd:00 328801 
/lib/libgcc_s.so.1
7f5e45c63000-7f5e45c64000 rw-p 00015000 fd:00 328801 
/lib/libgcc_s.so

Re: mysqld got signal 6 (problem way bigger than I initially realised)

2012-03-27 Thread Brent Clark

Good day KarthiK.P.R

Other than the replication settings in '/etc/mysql/conf.d/replication.cnf', our
only changes are in /etc/mysql/conf.d/custom.cnf:

xyz-web02:/data# cat /etc/mysql/conf.d/custom.cnf
[mysqld]
innodb_file_per_table
bind-address = 0.0.0.0
datadir  = /data
binlog_format=mixed
key_buffer_size=8384512
max_connections=41

The original /etc/mysql/my.cnf is untouched.

The only other services we have running are DRBD (active/active) and Apache;
nothing is hitting Apache as this is the standby node.
There is no load and nothing consuming resources.

Thanks
Brent


On 27/03/2012 14:04, P.R.Karthik wrote:

Hi Brent,

Can you please paste your configuration file settings (my.cnf)? It will help
to identify where things went wrong.

Is there any other memory-consuming application running on the server beyond
MySQL?

Regards,
KarthiK.P.R



On Tue, Mar 27, 2012 at 4:17 PM, Brent Clark brentgclarkl...@gmail.com wrote:

Hey Guys

Yesterday I sent an email, about '1 client is using or hasn't closed the 
table properly', but the problem is actually bigger than I realised.

We run Mysql replication, and on the second node, Mysql is crashing with 
'mysqld got signal 6' every so often.

Other than the drives, we have replaced the hardware, and still the problem 
persists.

Googling, I thought it was a memory exhaustion issue, so I started playing 
with the following variables and values.

key_buffer_size=8384512
max_connections=41

But it still crashes

Mar 27 12:36:53 xyz-web02 mysqld: key_buffer_size=8384512
Mar 27 12:36:53 xyz-web02 mysqld: read_buffer_size=131072
Mar 27 12:36:53 xyz-web02 mysqld: max_used_connections=3
Mar 27 12:36:53 xyz-web02 mysqld: max_threads=41
Mar 27 12:36:53 xyz-web02 mysqld: thread_count=1
Mar 27 12:36:53 xyz-web02 mysqld: connection_count=1
Mar 27 12:36:53 xyz-web02 mysqld: It is possible that mysqld could use up to
Mar 27 12:36:53 xyz-web02 mysqld: key_buffer_size + (read_buffer_size + 
sort_buffer_size)*max_threads = 97828 K  bytes of memory

xyz-web02:# free -m
             total       used       free     shared    buffers     cached
Mem:         12038      11966         71          0        117      11378
-/+ buffers/cache:        469      11568
Swap:         2047          0       2047

We running Debian Squeeze 64bit.

I have attached the full crash message.

If anyone can help, I would be *most* grateful (If you are in South Africa, 
I will buy you a meal, let alone a beer :)

Kindest Regards
Brent Clark





Re: mysqld got signal 6 (problem way bigger than I initially realised)

2012-03-27 Thread Brent Clark
Hi Rick

Thank you for replying.

Please see my answers to your questions.

On Tue, Mar 27, 2012 at 7:58 PM, Rick James rja...@yahoo-inc.com wrote:
 Do you have 12GB of RAM?

             total       used       free     shared    buffers     cached
Mem:         12038      11959         78          0        139      11234
-/+ buffers/cache:        586      11451
Swap:         2047          0       2047

 Is this a 64-bit mysqld?

Linux hostname 2.6.32-5-amd64 #1 SMP Mon Jan 16 16:22:28 UTC 2012
x86_64 GNU/Linux

 Let's see
   SHOW VARIABLES LIKE '%buffer%';

+-------------------------+---------+
| Variable_name           | Value   |
+-------------------------+---------+
| bulk_insert_buffer_size | 8388608 |
| innodb_buffer_pool_size | 8388608 |
| innodb_log_buffer_size  | 1048576 |
| join_buffer_size        | 131072  |
| key_buffer_size         | 8384512 |
| myisam_sort_buffer_size | 8388608 |
| net_buffer_length       | 16384   |
| preload_buffer_size     | 32768   |
| read_buffer_size        | 131072  |
| read_rnd_buffer_size    | 262144  |
| sort_buffer_size        | 2097144 |
| sql_buffer_result       | OFF     |
+-------------------------+---------+

   SHOW GLOBAL STATUS LIKE 'Max%';

+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| Max_used_connections | 3     |
+----------------------+-------+

   SHOW GLOBAL STATUS LIKE 'Up%';

+---------------------------+-------+
| Variable_name             | Value |
+---------------------------+-------+
| Uptime                    | 18492 |
| Uptime_since_flush_status | 18492 |
+---------------------------+-------+

 What is Apache's MaxClients?

MaxClients  1000

Thank you again.
Brent



 On 3/27/12 6:25 AM, Brent Clark wrote:

 Good day KarthiK.P.R

 Other than the replication settings in '/etc/mysql/conf.d/replication.cnf'
 and our /etc/mysql/conf.d/custom.cnf

 xyz-web02:/data# cat /etc/mysql/conf.d/custom.cnf
 [mysqld]
 innodb_file_per_table
 bind-address = 0.0.0.0
 datadir      = /data
 binlog_format=mixed
 key_buffer_size=8384512
 max_connections=41

 The original /etc/mysql/my.cnf is untouched.

 The only other service we have running is DRBD (active / active) and
 apache, nothing is hitting apache as this is the standby node.
 And there is no load, or anything consuming resources.

 Thanks
 Brent


 On 27/03/2012 14:04, P.R.Karthik wrote:

 HI Brent,

 Can you please paste your configuration file settings (my.cnf) ? It will
 help to identify where things went wrong.

 Is there any other memory consuming application running on the server
 beyond mysql ?

 Regards,
 KarthiK.P.R



 On Tue, Mar 27, 2012 at 4:17 PM, Brent Clark brentgclarkl...@gmail.com wrote:

     Hey Guys

     Yesterday I sent an email, about '1 client is using or hasn't closed
 the table properly', but the problem is actually bigger than I realised.

     We run Mysql replication, and on the second node, Mysql is crashing
 with 'mysqld got signal 6' every so often.

     Other than the drives, we have replaced the hardware, and still the
 problem persists.

     Googling, I thought it was a memory exhaustion issue, so I started
 playing with the following variables and values.

     key_buffer_size=8384512
     max_connections=41

     But it still crashes

     Mar 27 12:36:53 xyz-web02 mysqld: key_buffer_size=8384512
     Mar 27 12:36:53 xyz-web02 mysqld: read_buffer_size=131072
     Mar 27 12:36:53 xyz-web02 mysqld: max_used_connections=3
     Mar 27 12:36:53 xyz-web02 mysqld: max_threads=41
     Mar 27 12:36:53 xyz-web02 mysqld: thread_count=1
     Mar 27 12:36:53 xyz-web02 mysqld: connection_count=1
     Mar 27 12:36:53 xyz-web02 mysqld: It is possible that mysqld could
 use up to
     Mar 27 12:36:53 xyz-web02 mysqld: key_buffer_size + (read_buffer_size
 + sort_buffer_size)*max_threads = 97828 K  bytes of memory

     xyz-web02:# free -m
                  total       used       free     shared    buffers
 cached
     Mem:         12038      11966         71          0        117
  11378
     -/+ buffers/cache:        469      11568
     Swap:         2047          0       2047

     We running Debian Squeeze 64bit.

     I have attached the full crash message.

     If anyone can help, I would be *most* grateful (If you are in South
 Africa, I will buy you a meal, let alone a beer :)

     Kindest Regards
     Brent Clark






 --
 Rick James - MySQL Geek






1 client is using or hasn't closed the table properly

2012-03-26 Thread Brent Clark

Hi Guys

I've got a problem whereby replication (master-master) is broken between my two
nodes.

What's really interesting is that when I try to recover, i.e. take an LVM
snapshot (mylvmbackup) and uncompress/recover on the slave, then on starting
MySQL I just see a range of:

Mar 26 14:43:54 xyz-web02 mysqld: 120326 14:43:54 [ERROR] /usr/sbin/mysqld: 
Table './db/comm_opens' is marked as crashed and should be repaired
Mar 26 14:43:54 xyz-web02 mysqld: 120326 14:43:54 [Warning] Checking table:   
'./db/comm_opens'
Mar 26 14:43:54 xyz-web02 mysqld: 120326 14:43:54 [ERROR] 1 client is using or 
hasn't closed the table properly

Would anyone know what is causing this, and how I can go about fixing it?
Do you think the problem could be on node 1?
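
For the archives: '1 client is using or hasn't closed the table properly' is
what MyISAM reports when its files were copied while the server still had them
open, e.g. if the snapshot was taken without the tables being flushed and
locked first. A hedged way to sweep the restored copy clean before starting
the slave threads (assumes credentials in ~/.my.cnf):

mysqlcheck --all-databases --check --auto-repair

This only repairs the MyISAM side; it says nothing about whether the snapshot
is consistent with the master's binlog position.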

Kind Regards
Brent Clark




Monitor 'mysqladmin flush-hosts'

2011-11-23 Thread Brent Clark

Hiya

I was wondering: is there any way MySQL can be monitored for when you need to
run mysqladmin flush-hosts?
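
One hedged approach (the log path is an assumption; Debian sends mysqld output
to syslog): a host only needs flush-hosts once it exceeds max_connect_errors
and the server starts refusing it, and that event is logged, so a Nagios-style
check can simply watch for it:

#!/bin/sh
# hypothetical check: alert when any host has been blocked for connect errors
if grep -q "blocked because of many connection errors" /var/log/syslog; then
    echo "CRITICAL: blocked host found - run 'mysqladmin flush-hosts'"
    exit 2
fi
echo "OK: no blocked hosts logged"
exit 0

Watching the Aborted_connects counter (SHOW GLOBAL STATUS) trend upward gives a
softer early warning.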

Kind Regards
Brent Clark




Mysql user kill privilege

2011-09-27 Thread Brent Clark

Hiya

I got a request from a client for the rights to kill his queries if something 
goes wrong. Long story short, he doesn't want to have to phone.

I see via the MySQL documentation that the client would need the PROCESS and
SUPER privileges. I'm not happy with that.

Would anyone know of an alternative solution? I could reply "write better
queries", but somehow I don't think the client would be too happy with that.
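
One pattern that avoids handing out SUPER (a common workaround, not from the
docs; names are hypothetical, and it assumes 5.1+ so that
information_schema.PROCESSLIST exists and KILL can be prepared): have an admin
create a SQL SECURITY DEFINER procedure that refuses to kill anything not
owned by the caller, and grant the client EXECUTE on just that.

mysql <<'SQL'
DELIMITER //
CREATE DEFINER='root'@'localhost' PROCEDURE dba.kill_own_thread(IN p_id BIGINT)
SQL SECURITY DEFINER
BEGIN
  DECLARE v_user VARCHAR(16);
  -- look up who owns the thread the caller wants dead
  SELECT user INTO v_user
    FROM information_schema.processlist WHERE id = p_id;
  -- only kill it if it belongs to the calling account
  IF v_user = SUBSTRING_INDEX(CURRENT_USER(), '@', 1) THEN
    SET @kill_sql := CONCAT('KILL ', p_id);
    PREPARE s FROM @kill_sql;
    EXECUTE s;
    DEALLOCATE PREPARE s;
  END IF;
END//
DELIMITER ;
GRANT EXECUTE ON PROCEDURE dba.kill_own_thread TO 'client'@'%';
SQL

(The dba database is just a placeholder schema to hold admin routines.)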

Kind Regards
Brent Clark




Concerned: Developer getting "There is no 'root'@'%' registered" error message

2011-08-17 Thread Brent Clark

Hiya

I'll be up front by saying my knowledge of stored procedures is limited.

A Developer gave me a procedure to load.

It starts off with:

CREATE DEFINER=`root`@`%` PROCEDURE 


But now the developer informs me that he gets the following message:
"There is no 'root'@'%' registered"


Googling reveals the following link:
http://forums.mysql.com/read.php?10,237843,238950#msg-238950


And that I can run:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'mysql' WITH 
GRANT OPTION;


I'm hesitant to do it.

My MySQL root access details are:

mysql> select user, host from user WHERE user = 'root';
+------+-----------+
| user | host      |
+------+-----------+
| root | 127.0.0.1 |
| root | localhost |
+------+-----------+

If someone could share their opinions or thoughts, or shares the same concerns,
it would be appreciated.


Kind Regards
Brent Clark



Re: Concerned: Developer getting "There is no 'root'@'%' registered" error message

2011-08-17 Thread Brent Clark

Hiya

Thank you so much for replying. I really appreciate it.

I know the answer (well, I think I do :) ), but I'm still going to ask:
what is the risk if I do the GRANT ALL PRIVILEGES ON *.* TO 'root'@'%'
IDENTIFIED BY 'mysql' WITH GRANT OPTION;

To satisfy the developer.

Thank you again.
Brent

On 17/08/2011 15:42, Johnny Withers wrote:


Change the definer to one of your registered root accounts. Root@127 
or root@localhost.


On Aug 17, 2011 8:39 AM, Brent Clark brentgclarkl...@gmail.com wrote:


Hiya

Ill be up front my saying my knowledge of store procedures is limited.

A Developer gave me a procedure to load.

It starts off with:

CREATE DEFINER=`root`@`%` PROCEDURE 


But now, the developer informs me that he gets the following message. 
There is no 'root'@'%' registered


Googling reveals the following link : 
http://forums.mysql.com/read.php?10,237843,238950#msg-238950


And that I can run:

GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY 'mysql' WITH 
GRANT OPTION;


Im hesitant to do it.

My Mysql root acess details are:

mysql> select user, host from user WHERE user = 'root';
+------+-----------+
| user | host      |
+------+-----------+
| root | 127.0.0.1 |
| root | localhost |
+------+-----------+

If someone can share their opinion, thoughts or share the same 
concerns it would be appreciated.


Kind Regards
Brent Clark





Innodb as its default storage engine for Mysql 5.0 / 5.1

2011-05-25 Thread Brent Clark

Hiya

I think I read somewhere that MySQL 5.5 is defaulting to InnoDB as its
storage engine.


The question I would like to ask is: for those still running MySQL 5.0 /
5.1, have any of you set the default-storage-engine variable to InnoDB?
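
For reference, the variable in question is default-storage-engine; a one-line
my.cnf sketch:

[mysqld]
default-storage-engine = InnoDB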


Regards
Brent Clark




Error in accept: Too many open files

2011-04-26 Thread Brent Clark

Hiya

I recently imported data onto a new server (5.1), but before importing
I enabled 'innodb_file_per_table'.


I now want to enable master-master replication, but the problem is I'm
seeing the following:


--
Apr 24 23:32:50 maj-web01 mysqld: 110424 23:32:50 [ERROR] Error in 
accept: Too many open files
Apr 24 23:35:03 maj-web01 mysqld: 110424 23:35:03 [ERROR] 
/usr/sbin/mysqld: Can't open file: './maj_fs2/sites.frm' (errno: 24)

--

I've been going through High Performance MySQL, and a key area the book
addresses is the config option 'innodb_open_files'.


If I do a file count for *.ibd files, I only have 147 files, and I see
the limit is 300:


mysql> show global variables like 'innodb_open_files'\G
*** 1. row ***
Variable_name: innodb_open_files
Value: 300
1 row in set (0.00 sec)

Could the other option to look at be 'open_files_limit'?

The database is a mixture of InnoDB and MyISAM.

I really need to get replication working; if someone could help me
understand this issue, it would be appreciated.
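
For what it's worth, 'Too many open files' (errno 24) is the OS per-process
descriptor limit, and MyISAM makes it easy to hit because every open table
can hold two descriptors (.MYD plus .MYI) on top of the InnoDB .ibd files, so
counting only *.ibd undercounts. A hedged my.cnf sketch (numbers illustrative,
not tuned):

[mysqld]
open_files_limit  = 8192   # the server raises its ulimit toward this where it can
table_open_cache  = 1024   # 5.1's name for table_cache; each cached MyISAM table may use two fds
innodb_open_files = 1024

Then confirm with: mysql -e "SHOW GLOBAL VARIABLES LIKE 'open_files_limit'"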


Regards
Brent Clark





Re: Mysql issue / crashing

2011-04-20 Thread Brent Clark

Sorry

Would you be so kind as to explain your thinking?

How would upgrading Mysql fix the issue?

Regards
Brent Clark

On 20/04/2011 06:23, Suresh Kuna wrote:
Install the latest version of mysql on top of the current version and 
start the database.


On Tue, Apr 19, 2011 at 9:34 PM, Brent Clark brentgclarkl...@gmail.com wrote:


Thanks for replying

5.1.55


On 19/04/2011 13:55, Suresh Kuna wrote:

What is the version of MySQL you are using currently?

On Tue, Apr 19, 2011 at 3:46 PM, Brent Clark brentgclarkl...@gmail.com wrote:

Ive added

innodb_force_recovery=4

Still no go.


 Original Message 
Subject:Mysql issue / crashing
Date:   Tue, 19 Apr 2011 12:15:30 +0200
From:   Brent Clark brentgclarkl...@gmail.com
To:     mysql@lists.mysql.com



Hiya

Im getting the following

I ran myisamchk --silent --force */*.MYI

But still I get the following.

I cant see how I can bring Mysql up.

# mysqld2912

110419 12:13:22 [Warning] 'for replication startup options' is
deprecated and will be removed in a future release. Please
use ''CHANGE
MASTER'' instead.
110419 12:13:22 [Note] Plugin 'FEDERATED' is disabled.
110419 12:13:22  InnoDB: Initializing buffer pool, size = 512.0M
110419 12:13:22  InnoDB: Completed initialization of buffer pool
InnoDB: Log scan progressed past the checkpoint lsn 10 96395066
110419 12:13:22  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the
doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 10
96451805
110419 12:13:22  InnoDB: Starting an apply batch of log
records to the
database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13
14 15 16
17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36
37 38 39 40
41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60
61 62 63 64
65 66 67 68 69 70 110419 12:13:22 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that
this binary
or one of the libraries it was linked against is corrupt,
improperly built,
or misconfigured. This error can also be caused by
malfunctioning hardware.
We will try our best to scrape up some info that will
hopefully help
diagnose
the problem, but since we have already crashed, something is
definitely
wrong
and this may fail.

key_buffer_size=201326592
read_buffer_size=2097152
max_used_connections=0
max_threads=100
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size +
sort_buffer_size)*max_threads =
606853 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information
to find out
where mysqld died. If you see no messages after this,
something went
terribly wrong...
stack_bottom = (nil) thread_stack 0x2
mysqld(my_print_stacktrace+0x2d) [0xb75de06d]
mysqld(handle_segfault+0x49c) [0xb72ac0cc]
[0xb7018400]
mysqld(page_cur_parse_insert_rec+0x580) [0xb7500390]
mysqld [0xb74ea325]
mysqld(recv_recover_page+0x502) [0xb74ec2e2]
mysqld(buf_page_io_complete+0x624) [0xb74a22e4]
mysqld(fil_aio_wait+0x12d) [0xb74bdb8d]
mysqld [0xb7533d80]
/lib/i686/cmov/libpthread.so.0 [0xb6ffd4c0]
/lib/i686/cmov/libc.so.6(clone+0x5e) [0xb6ded84e]
The manual page at
http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the
crash




-- 
Thanks

Suresh Kuna
MySQL DBA





--
Thanks
Suresh Kuna
MySQL DBA




Re: Mysql issue / crashing

2011-04-20 Thread Brent Clark

On 20/04/2011 10:10, Suresh Kuna wrote:

It will, try it out.


Thanks for replying.

My colleague and I tried a different route.

We retried innodb_force_recovery.

But this time we started at 1 and progressed to 6.

At 6 we were able to start working.

So for our recovery procedure we have opted for mysqldump and reimport.
Don't get me wrong, we know it's slower and may not be bulletproof; we are
not seeing missing data, but we are reimporting as an extra measure.
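
For the archives, the rough shape of that recovery (a sketch, not a tested
runbook; paths are assumptions):

# with innodb_force_recovery=6 in my.cnf, the server comes up far enough to dump
mysqldump --all-databases > /backup/alldb.sql
/etc/init.d/mysql stop
mkdir /root/broken-innodb
mv /var/lib/mysql/ibdata1 /var/lib/mysql/ib_logfile* /root/broken-innodb/
# remove innodb_force_recovery from my.cnf so InnoDB initialises fresh files
/etc/init.d/mysql start
mysql < /backup/alldb.sql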


Brent
P.S. The one cool thing is that we have been able to add
'innodb_file_per_table'.







Mysql issue / crashing

2011-04-19 Thread Brent Clark

Hiya

I'm getting the following.

I ran myisamchk --silent --force */*.MYI

But still I get the following.

I can't see how I can bring MySQL up.

# mysqld
110419 12:13:22 [Warning] 'for replication startup options' is 
deprecated and will be removed in a future release. Please use ''CHANGE 
MASTER'' instead.

110419 12:13:22 [Note] Plugin 'FEDERATED' is disabled.
110419 12:13:22  InnoDB: Initializing buffer pool, size = 512.0M
110419 12:13:22  InnoDB: Completed initialization of buffer pool
InnoDB: Log scan progressed past the checkpoint lsn 10 96395066
110419 12:13:22  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 10 96451805
110419 12:13:22  InnoDB: Starting an apply batch of log records to the 
database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 
17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 
41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 
65 66 67 68 69 70 110419 12:13:22 - mysqld got signal 11 ;

This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help 
diagnose
the problem, but since we have already crashed, something is definitely 
wrong

and this may fail.

key_buffer_size=201326592
read_buffer_size=2097152
max_used_connections=0
max_threads=100
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 
606853 K

bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = (nil) thread_stack 0x2
mysqld(my_print_stacktrace+0x2d) [0xb75de06d]
mysqld(handle_segfault+0x49c) [0xb72ac0cc]
[0xb7018400]
mysqld(page_cur_parse_insert_rec+0x580) [0xb7500390]
mysqld [0xb74ea325]
mysqld(recv_recover_page+0x502) [0xb74ec2e2]
mysqld(buf_page_io_complete+0x624) [0xb74a22e4]
mysqld(fil_aio_wait+0x12d) [0xb74bdb8d]
mysqld [0xb7533d80]
/lib/i686/cmov/libpthread.so.0 [0xb6ffd4c0]
/lib/i686/cmov/libc.so.6(clone+0x5e) [0xb6ded84e]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash




Fwd: Mysql issue / crashing

2011-04-19 Thread Brent Clark

I've added

innodb_force_recovery=4

Still no go.

 Original Message 
Subject:Mysql issue / crashing
Date:   Tue, 19 Apr 2011 12:15:30 +0200
From:   Brent Clark brentgclarkl...@gmail.com
To: mysql@lists.mysql.com



Hiya

Im getting the following

I ran myisamchk --silent --force */*.MYI

But still I get the following.

I cant see how I can bring Mysql up.

# mysqld
110419 12:13:22 [Warning] 'for replication startup options' is
deprecated and will be removed in a future release. Please use ''CHANGE
MASTER'' instead.
110419 12:13:22 [Note] Plugin 'FEDERATED' is disabled.
110419 12:13:22  InnoDB: Initializing buffer pool, size = 512.0M
110419 12:13:22  InnoDB: Completed initialization of buffer pool
InnoDB: Log scan progressed past the checkpoint lsn 10 96395066
110419 12:13:22  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 10 96451805
110419 12:13:22  InnoDB: Starting an apply batch of log records to the
database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40
41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64
65 66 67 68 69 70 110419 12:13:22 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help
diagnose
the problem, but since we have already crashed, something is definitely
wrong
and this may fail.

key_buffer_size=201326592
read_buffer_size=2097152
max_used_connections=0
max_threads=100
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads =
606853 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = (nil) thread_stack 0x2
mysqld(my_print_stacktrace+0x2d) [0xb75de06d]
mysqld(handle_segfault+0x49c) [0xb72ac0cc]
[0xb7018400]
mysqld(page_cur_parse_insert_rec+0x580) [0xb7500390]
mysqld [0xb74ea325]
mysqld(recv_recover_page+0x502) [0xb74ec2e2]
mysqld(buf_page_io_complete+0x624) [0xb74a22e4]
mysqld(fil_aio_wait+0x12d) [0xb74bdb8d]
mysqld [0xb7533d80]
/lib/i686/cmov/libpthread.so.0 [0xb6ffd4c0]
/lib/i686/cmov/libc.so.6(clone+0x5e) [0xb6ded84e]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html contains
information that should help you find out what is causing the crash



Re: Mysql issue / crashing

2011-04-19 Thread Brent Clark

Thanks for replying

5.1.55

On 19/04/2011 13:55, Suresh Kuna wrote:

What is the version of MySQL you are using currently?

On Tue, Apr 19, 2011 at 3:46 PM, Brent Clark brentgclarkl...@gmail.com wrote:


Ive added

innodb_force_recovery=4

Still no go.


 Original Message 
Subject:Mysql issue / crashing
Date:   Tue, 19 Apr 2011 12:15:30 +0200
From:   Brent Clark brentgclarkl...@gmail.com
To:     mysql@lists.mysql.com



Hiya

Im getting the following

I ran myisamchk --silent --force */*.MYI

But still I get the following.

I cant see how I can bring Mysql up.

# mysqld2912
110419 12:13:22 [Warning] 'for replication startup options' is
deprecated and will be removed in a future release. Please use
''CHANGE
MASTER'' instead.
110419 12:13:22 [Note] Plugin 'FEDERATED' is disabled.
110419 12:13:22  InnoDB: Initializing buffer pool, size = 512.0M
110419 12:13:22  InnoDB: Completed initialization of buffer pool
InnoDB: Log scan progressed past the checkpoint lsn 10 96395066
110419 12:13:22  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the
doublewrite
InnoDB: buffer...
InnoDB: Doing recovery: scanned up to log sequence number 10 96451805
110419 12:13:22  InnoDB: Starting an apply batch of log records to the
database...
InnoDB: Progress in percents: 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16
17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38
39 40
41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62
63 64
65 66 67 68 69 70 110419 12:13:22 - mysqld got signal 11 ;
This could be because you hit a bug. It is also possible that this
binary
or one of the libraries it was linked against is corrupt,
improperly built,
or misconfigured. This error can also be caused by malfunctioning
hardware.
We will try our best to scrape up some info that will hopefully help
diagnose
the problem, but since we have already crashed, something is
definitely
wrong
and this may fail.

key_buffer_size=201326592
read_buffer_size=2097152
max_used_connections=0
max_threads=100
threads_connected=0
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads =
606853 K
bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

Thread pointer: 0x0
Attempting backtrace. You can use the following information to
find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
stack_bottom = (nil) thread_stack 0x2
mysqld(my_print_stacktrace+0x2d) [0xb75de06d]
mysqld(handle_segfault+0x49c) [0xb72ac0cc]
[0xb7018400]
mysqld(page_cur_parse_insert_rec+0x580) [0xb7500390]
mysqld [0xb74ea325]
mysqld(recv_recover_page+0x502) [0xb74ec2e2]
mysqld(buf_page_io_complete+0x624) [0xb74a22e4]
mysqld(fil_aio_wait+0x12d) [0xb74bdb8d]
mysqld [0xb7533d80]
/lib/i686/cmov/libpthread.so.0 [0xb6ffd4c0]
/lib/i686/cmov/libc.so.6(clone+0x5e) [0xb6ded84e]
The manual page at http://dev.mysql.com/doc/mysql/en/crashing.html
contains
information that should help you find out what is causing the crash




--
Thanks
Suresh Kuna
MySQL DBA




` vs '

2011-03-30 Thread Brent Clark

Hiya

I'm wondering if someone could help me understand this. If you look at my
two queries below, in the ORDER BY one is using ` and the other '; as a
result, if you do an EXPLAIN you will see that the top query does a
filesort while the other does not.


Would anyone know why?

mysql> explain SELECT `Contact`.`id`, `Contact`.`name`, `Contact`.`surname`,
`Contact`.`mobile`, `Contact`.`user_id`, `Contact`.`active`, `Contact`.`created`
FROM `contacts` AS `Contact` WHERE `Contact`.`user_id` = 10203 AND
`Contact`.`active` = '1' ORDER BY `name` asc LIMIT 10;
+----+-------------+---------+------+---------------+------+---------+------+--------+-----------------------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows   | Extra                       |
+----+-------------+---------+------+---------------+------+---------+------+--------+-----------------------------+
|  1 | SIMPLE      | Contact | ALL  | NULL          | NULL | NULL    | NULL | 344709 | Using where; Using filesort |
+----+-------------+---------+------+---------------+------+---------+------+--------+-----------------------------+
1 row in set (0.00 sec)

mysql> explain SELECT `Contact`.`id`, `Contact`.`name`, `Contact`.`surname`,
`Contact`.`mobile`, `Contact`.`user_id`, `Contact`.`active`, `Contact`.`created`
FROM `contacts` AS `Contact` WHERE `Contact`.`user_id` = 10203 AND
`Contact`.`active` = '1' ORDER BY 'name' asc LIMIT 10;
+----+-------------+---------+------+---------------+------+---------+------+--------+-------------+
| id | select_type | table   | type | possible_keys | key  | key_len | ref  | rows   | Extra       |
+----+-------------+---------+------+---------------+------+---------+------+--------+-------------+
|  1 | SIMPLE      | Contact | ALL  | NULL          | NULL | NULL    | NULL | 344710 | Using where |
+----+-------------+---------+------+---------------+------+---------+------+--------+-------------+
1 row in set (0.00 sec)

Thanks
Brent
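
The difference is the quoting: `name` in backticks is the column, while 'name'
in single quotes is a string literal, so the second query orders every row by
the same constant. There is nothing to sort, the optimiser drops the sort
(hence no filesort), and the rows come back in whatever order the scan finds
them. The same effect shows with any constant:

mysql> EXPLAIN SELECT * FROM contacts ORDER BY 'anything' LIMIT 10;  -- no filesort, but no useful ordering either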




mysql binlog format

2011-03-16 Thread Brent Clark

Hiya

I run master-master replication for my MySQL setup.

I've just upgraded to MySQL 5.1 and as a result I'm seeing the following
in my syslog:


 'Statement may not be safe to log'

I've been reading up on the following:
http://dev.mysql.com/doc/refman/5.1/en/replication-sbr-rbr.html

http://www.slideshare.net/ligaya/mysql-51-replication
(See from slide 12.)


The question I would like to ask is: is it safe to just change the
binlog format, or is there a procedure that needs to be exercised first?
With the above warning, is this something I can ignore, or is it a
serious matter (silly question I know, but I felt I needed to ask it)?
After reading the above URLs, I'm still stuck as to which binlog format
to use. I'm leaning toward mixed, but the fact that it is still being
tested is a little unnerving.
And then lastly, if the change is made and something goes wrong, is
there a rollback procedure, or can I just change the binlog format back
to STATEMENT?
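
For what it's worth, binlog_format is dynamic in 5.1, so the change and the
rollback are each a single statement (it needs SUPER, and it cannot be changed
inside a transaction or with open temporary tables). A sketch:

mysql -e "SET GLOBAL binlog_format = 'MIXED'"   # existing sessions keep their old format
# persist it across restarts in my.cnf under [mysqld]:  binlog_format = mixed
# rolling back is the same statement with 'STATEMENT'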


If anyone can help me understand this, it would greatly be appreciated.

Kind Regards
Brent Clark




Mysql 5.1 -> 5.0

2011-03-10 Thread Brent Clark

Hiya

We have a client that is using Ubuntu, therefore MySQL is 5.1, but where I
work we still standardise on Debian Lenny (upgrading to Squeeze is in
the pipeline), therefore MySQL is 5.0.


What I would like to know is: can I just take a copy of the MySQL
database files and copy them to the Lenny box?


If anyone could help / answer, it would be appreciated.

Kind Regards
Brent Clark




mysql vs source

2011-03-10 Thread Brent Clark

Hiya

I just found that I can run
 mysql db -e 'source exporteddbdata.sql'

The question I would like to ask is: is there a speed difference between

mysql db < exporteddbdata.sql
and
mysql db -e 'source exporteddbdata.sql'
(using source)?

The reason I'm asking is, I've got an exported 5.4GB database file, and I need
to import it as quickly as possible.

When I initially tested it, it ran for 170m40.935s.

Generally I just scp the files (learnt this from mylvmbackup), but the
problem is that the source DB is MySQL 5.1 and the target is MySQL 5.0.
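
There should be no real speed difference between < and source; both push the
same statements through one session. What usually speeds up a big text import
is switching off per-statement safety work for the duration, a hedged sketch
(drop the sql_log_bin line if the import itself must replicate):

( echo "SET sql_log_bin = 0;"          # needs SUPER; keeps the import out of the binlog
  echo "SET foreign_key_checks = 0;"
  echo "SET unique_checks = 0;"
  echo "SET autocommit = 0;"
  cat exporteddbdata.sql
  echo "COMMIT;" ) | mysql db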


Thanks
Brent




Re: Back-up Plan for Large Database

2010-12-28 Thread Brent Clark

On 27/12/2010 19:14, Wagner Bianchi wrote:

*You have no guarantee the data on the slave matches the master 100%.*
*
*
Try it with *semi-synchronous* replication.

Best regards.
--
Wagner Bianchi



Hiya

That's why you Nagios-monitor replication on your slaves (I do master-master
replication monitoring).


I use mylvmbackup. Works like a bomb.

HTH
Brent




localhost vs domain for connection string

2010-11-23 Thread Brent Clark

Hiya

Is there a difference if someone makes the connection string a domain (a
hosts file entry makes the machine look at its own IP) as opposed to just
using localhost?

If so would a performance hit be incurred?

I have this client that has used the domain, and in netstat I'm seeing
all this:


tcp0  0 own.ex.ip:50340  own.ex.ip:3306   
ESTABLISHED 30324/apache2
tcp0  0 own.ex.ip:50287  own.ex.ip:3306   
ESTABLISHED 30309/apache2
tcp0  0 own.ex.ip:3306   own.ex.ip:50287  
ESTABLISHED 29234/mysqld
tcp0  0 own.ex.ip:50357  own.ex.ip:3306   
ESTABLISHED 31714/apache2
tcp0  0 own.ex.ip:3306   own.ex.ip:50335  
ESTABLISHED 29234/mysqld


But I have another client that is using localhost and netstat is quiet 
as a mouse.


This actually never dawned on me, hence the reason I'm asking. But the real
reason is that the first client's machine is under heavy load and we are
trying to see what can be improved.
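
One difference is certain: to the MySQL client library the literal name
localhost means the Unix socket, while any other hostname or IP (even the
machine's own) goes over TCP, and those TCP sessions are exactly what netstat
is showing. A quick check:

mysql -h localhost -e status | grep -i connection    # "Localhost via UNIX socket"
mysql -h 127.0.0.1 -e status | grep -i connection    # "127.0.0.1 via TCP/IP"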


Kind Regards
Brent Clark




Master Master Replication ... do a fail over and a week agos data is revealed.

2010-10-18 Thread Brent Clark

Hiya

I run MySQL master-master replication. I've had an interesting
situation whereby I failed over using heartbeat, but what is
interesting is that via the application (vBulletin) I saw that the
forums were showing week-old data.


I had to fail back over to the original server, and then the forums were up
to date again.


Has anyone seen this type of issue with MySQL? I promise you that both
slaves are up and synced; I actually monitor it with Nagios.


If anyone could share some thoughts on MySQL's goings-on, or anything,
it would be appreciated.


Kind Regards
Brent Clark




Peer review maatkits mk-parallel-dump and mk-parallel-restore usage

2010-06-07 Thread Brent Clark

Hiya

I'm trying to make use of Maatkit as a means of dumping a database and
then restoring it to another database.


For dumps: mk-parallel-dump --user abc --password xyz --databases 
currentdb --base-dir /tmp/dump


For restore: mk-parallel-restore --create-databases --user abc 
--password xyz --database newdb /tmp/dump


My question is: is my logic and understanding correct, and would it be
OK to do it like this?


Kind Regards
Brent Clark





High MySQL sleep count

2010-05-26 Thread Brent Clark

Hiya

I have a server that gets connections / requests from Germany and South 
Africa, Johannesburg (my server's based in Cape Town).


All servers are in datacentres. I'm not 100% sure if it's latency related,
but I see a load of SLEEP entries when I do mysqladmin proc, which appear
to be hogging MySQL.


My question is: would adjusting timeout values address this problem?

I was thinking of using / setting these values:
[mysqld]
set-variable= connect_timeout=20
set-variable= wait_timeout=20
set-variable= interactive_timeout=20
set-variable= net_write_timeout=20
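
One aside: set-variable= is the old pre-4.1 spelling (removed in 5.5); the
same settings in plain my.cnf form, with wait_timeout being the one that
actually reaps idle Sleep connections:

[mysqld]
connect_timeout     = 20
wait_timeout        = 20
interactive_timeout = 20
net_write_timeout   = 20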

Can anyone share some thoughts or opinions on this, or what's the correct
practice?


Kind Regards
Brent Clark








tcpdump mysql ?

2010-04-20 Thread Brent Clark

Hiya

I tried getting the following command running, based on this YouTube clip:


http://www.youtube.com/watch?v=Zofzid6xIZ4
Look at 19:25

I know I can use tcpdump with Maatkit (I'm not always able to install
Maatkit on clients' machines). But based on what's above in the clip, did
Mr Callaghan make a typo or leave something out?


This is the command as I understand it.
tcpdump -c 100 -s 1000 -A -n -p port 3306 | grep SELECT | sed 
's/\/\*.*\*\///g' | sed 's/.*\(SELECT.*\)/\1/gI' | sort | uniq -c | sort 
-r -n -k 1,1 | head -5


The other question is: what commands do you use to help with debugging and testing?

Kind Regards
Brent Clark




Freespace threshold before having to OPTIMIZE TABLE?

2010-03-26 Thread Brent Clark

Hiya

My query below looks for tables with more than 10% free space where that
space is also greater than 100K.


mysql> SHOW TABLE STATUS WHERE Data_free / Data_length > 0.1 AND
Data_free > 102400 \G

*** 1. row ***
   Name: bayes_words
 Engine: MyISAM
Version: 10
 Row_format: Dynamic
   Rows: 97134
 Avg_row_length: 27
Data_length: 2960488
Max_data_length: 281474976710655
   Index_length: 4182016
  Data_free: 330136
 Auto_increment: NULL
Create_time: 2010-02-21 05:17:26
Update_time: 2010-03-27 01:12:01
 Check_time: 2010-02-21 05:17:27
  Collation: latin1_swedish_ci
   Checksum: NULL
 Create_options:
Comment:
1 row in set (0.01 sec)

The question I would like to ask is: what is an acceptable threshold or
amount of free space before I must run OPTIMIZE TABLE?
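
Whatever threshold you pick, the same filter can drive the cleanup directly;
a hedged cron-style sketch for one database (name and credentials are
placeholders, and remember OPTIMIZE TABLE locks the table while it rebuilds):

mysql -N -B db -e "SHOW TABLE STATUS WHERE Data_free / Data_length > 0.1 AND Data_free > 102400" \
  | awk '{print $1}' \
  | while read t; do mysql db -e "OPTIMIZE TABLE \`$t\`"; done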


Kind Regards
Brent Clark







mysql proxy in production?

2010-03-11 Thread Brent Clark

Hiya

I work for a pretty large hosting company, and we have some clients that you
could call "in demand" clients (well, here where I live anyway :) ).

We're already making use of heartbeat for high availability etc., but the one
area that we have not tackled is load balancing.

I just read the following, which makes use of mysql proxy.

http://agiletesting.blogspot.com/2009/04/mysql-load-balancing-and-read-write.html

I would like to ask: does anyone make use of mysql-proxy in production, and if
so, are you using it under heavy load? How do you find its performance under
load?

If anyone can share their failures, successes, or even just thoughts and
opinions on mysql-proxy (even SQL load balancing in general), I would be most
grateful.

Kind Regards
Brent Clark




Re: mysql proxy in production?

2010-03-11 Thread Brent Clark

On 11/03/2010 16:52, Krishna Chandra Prajapati wrote:

Hi Brent

You can visit the below link.

http://www.mysqlperformanceblog.com/2009/06/09/mysql-proxy-urgh-performance-and-scalability/


Well, that's disappointing.

sigh
So what are we supposed to use for load balancing MySQL?

Thank you for your reply.

Kind Regards
Brent Clark
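
Not an endorsement, but the fallback several shops used instead was plain TCP
balancing in front of the read slaves, e.g. haproxy; a minimal sketch
(addresses invented, and a TCP check only proves the port answers, not that
the slave is caught up):

listen mysql-read
    bind 0.0.0.0:3306
    mode tcp
    balance roundrobin
    server db1 10.0.0.11:3306 check
    server db2 10.0.0.12:3306 check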




Allowing triggers stored procedures on MySQL

2010-03-06 Thread Brent Clark

Hi everyone,

Currently we have a policy that prohibits our customers from creating
stored procedures and triggers on their DBs, which I imagine must be
driving them up the walls. It's like having a car with a boot that you
are not able to use. :)


Are there any reasons why we wouldn't want customers to make use of
these built-in features, and what other means are available?


My reading showed that you need the CREATE ROUTINE privilege, and you
*may* require the SUPER privilege if you have binary logging enabled
(and that only becomes a potential issue if you are actually replaying
those logs, i.e. either for replication or for media recovery).


I think I was reading the MySQL 5.1 manual - so maybe this is different 
with 5.0?


Kind Regards
Brent Clark




Replications oddity.

2010-01-15 Thread Brent Clark

Hiya

I have four servers.

   1  -  2
   |     |
   4     5

I have master - master replication working flawlessly between server id 
1 and 2.

Server id 4 is a slave of 1.
Server id 5 is a slave of 2.

For the slaves the replication works great, but ... I just realised ...
if I create a database on id 1, it replicates to 2 and to 4, but does
not replicate to 5. And it's the same if I do the inverse, i.e.

Create a DB on 2, replication works to 1 and 5, but not to 4.

Am I doing something wrong, or am I misunderstanding how the process
works? I could make id 5 replicate from 1, but I was hoping to make
it off 2.


If anyone could help me understand this, it would be greatly appreciated.

Kind Regards
Brent





Re: Replications oddity.

2010-01-15 Thread Brent Clark

On 15/01/2010 11:54, Krishna Chandra Prajapati wrote:

Hi Brent,

I believe you missed log_slave_updates component on 2 and 1. You have 
to enable binary logs + log_slave_updates on 2 and 1.


http://dev.mysql.com/doc/refman/5.1/en/replication-options-slave.html#option_mysqld_log-slave-updates

Thanks,
Krishna



Thank you so much.

Works like a bomb.
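
For the archive, the change Krishna describes is two lines in my.cnf on
servers 1 and 2 (a sketch; log-bin will already be set on a master):

[mysqld]
log-bin           = mysql-bin
log-slave-updates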

-Brent




Re: Importing large databases faster

2009-12-18 Thread Brent Clark

On 17/12/2009 17:46, mos wrote:

Load Data ... is still going to be much faster.

Mike



Hiya

If you are on Linux and using LVM, look at mylvmbackup.

HTH

Brent Clark




Re: LIMIT/OFFSET to paginate results

2009-11-23 Thread Brent Baisley
The order the records are returned is not guaranteed unless you  
specify an ORDER BY. You could run the same query multiple times and  
the order the records are returned could be different each time.  
Although this is rarely the case, especially with caching enabled.
Always do an ORDER BY with pagination, and make sure what you are  
ordering by is unique. This is simple to do, just add the unique ID  
field as the last order by field. Then you will always get  
consecutive rows.
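
In Miguel's own schema (quoted below), that advice comes out as the sketch
here, and it also answers his main doubt: LIMIT/OFFSET is applied to the
filtered, ordered result set, not to raw ID positions.

SELECT name FROM MyTable
WHERE SID = '02'
ORDER BY name, ID     -- unique ID last keeps the order stable across pages
LIMIT 2 OFFSET 2;     -- rows 3-4 of the SID='02' results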


Brent


On Nov 23, 2009, at 12:09 PM, Miguel Cardenas wrote:


Hello list :)

I am developing an application that will show records in paginated
documents, i.e. 10 records per page

Let's suppose this row structure:

MyTable
ID(autoincrement)   SectionID   Name   Description

The ID is an automatic auto-increment for unique records; the SectionID is to
separate items into different sections.

If I query a particular SectionID's rows, it should return all those rows.


If I use LIMIT x,10 it should return 10 rows beginning at record  
#x, but

my doubt is:

Does the OFFSET x assume its value to be #x number of consecutive rows, or
is it relative to the query results?

For example

ID   SID  name   description
--
01   01   nameA   descriptionA
02   02   nameB   descriptionB
03   01   nameC   descriptionC
04   02   nameD   descriptionD
05   02   nameE   descriptionE
06   01   nameF   descriptionF
07   02   nameG   descriptionG
08   01   nameH   descriptionH
09   02   nameI   descriptionI
10   02   nameJ   descriptionJ
--

If I do
  SELECT name from MyTable SID where SID='02' LIMIT 1,2 (offset 1, 2
elements)
it should return: nameB, nameD

then if I do
  SELECT name from MyTable SID where SID='02' LIMIT 3,2 (offset 3, 2
elements)
it should return: nameE, nameG

and if I do
  SELECT name from MyTable SID where SID='02' LIMIT 5,2 (offset 5, 2
elements)
it should return: nameI, nameJ

Is it correct? My doubt is if the OFFSET is relative to the SELECT  
results
(to ignore the # first result rows) that match the condition  
SID='02' only,

to show 2 by 2 records (or N by N)...

Thanks for your help






Re: Table size vs Memory requirements?

2009-11-23 Thread Brent Baisley


On Nov 22, 2009, at 8:54 AM, Ryan Chan wrote:


Hello,

It is commonly heard from people that if you have a large table (assume
MyISAM in my case), you need a lot of memory in order to have the
key/index in memory for performance; otherwise, a table scan on disk is
slow.

But how to estimate how much memory I need?

Consider a simple case, a MyISAM table is 10GB in size, with 2GB
index, how much memory I need?


Thanks.



It's not the size of the table, it's the size of the index that you  
need to watch. MyISAM keeps the table and index separate, so the  
memory requirements can be considerably less than the size of the  
table. What you have likely heard is in reference to InnoDB tables.  
Since InnoDB clusters the index with the data, the memory  
requirements can be much greater. You may notice a significant drop  
off in performance from InnoDB once the data size passes a certain  
level, which is based on your RAM and InnoDB settings.
MyISAM performance is usually fairly steady as the size of the table
increases.
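
A rough way to act on that for MyISAM (a sketch; the variables are real, the
thresholds are judgement): size key_buffer_size toward the index size you
actually touch, and watch the key cache miss rate to see when it is too small.

mysql -e "SHOW GLOBAL STATUS LIKE 'Key_read%'"
# Key_reads / Key_read_requests = share of index read requests that hit disk;
# if it stays high with a 2GB index, grow key_buffer_size toward 2GB.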


--
Brent Baisley




Re: a better way, code technique?

2009-09-04 Thread Brent Baisley
You should store the current user id in a session variable. Then you
don't have to hit the database at all.

There really is no short way of doing it. Normally you would create a
function (i.e. runQuery) that you pass the query to. Then it handles
running the query, fetching the data, error checking, etc. That way
you don't have to rewrite the same lines every time you want to run a
query.
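
A sketch of both suggestions in the thread's own PHP (the session key and
helper name are made up):

// at login, cache the id once so later pages never re-query it
$_SESSION['uid'] = $u;

// wrap query + error handling so callers stay short
function runQuery($sql) {
    $res = mysql_query($sql);
    if ($res === false) { die('query failed: ' . mysql_error()); }
    return $res;
}

$row = mysql_fetch_array(runQuery(sprintf(
    "SELECT uid FROM users WHERE username='%s'",
    mysql_real_escape_string($username))));
$u = $row['uid'];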

Brent Baisley

On Fri, Sep 4, 2009 at 6:51 AM, AndrewJames andrewhu...@gmail.com wrote:
 is there a better way (hopefully simpler) to code this?

 I want to get the user id of the logged-in user to use in my next statement.

 $q1 = sprintf("SELECT uid FROM users WHERE users.username='$username'");
 $result1 = mysql_query($q1);
 $uid = mysql_fetch_array($result1);
 $u = $uid['uid'];

 it seems like a long way around to get 1 bit of data??







Re: 1 Machine with 4 GB RAM for Big Size MySQL Data Size

2009-09-04 Thread Brent Baisley
The size of the data is largely irrelevant, it depends on how much of
it you need to use at once. For most setups, 4GB should be more than
enough. A single server is always a bad idea since it's a single point
of failure.
Concurrent users isn't really relevant with the database either since
it's unlikely all of them will be running a query at the same time.
Unless your queries are really slow.

I'm using a server with 2GB RAM that gets 300K hits a day, adds 250K
records per day to the databases, with one table having almost 50
million records.

Brent Baisley

On Fri, Sep 4, 2009 at 12:48 PM, muhammad subair msub...@gmail.com wrote:
 One of my potential clients wants to migrate their application to web based
 (PHP & MySQL); the estimated data size is 24GB and growth per month is
 20MB of data. Unfortunately, they can only use 1 server machine with 4GB
 RAM.

 The application is used on an intranet, just running simple transactions, and
 the number of concurrent users is under 10.

 I need information and suggestions about this situation: is the effort
 spent on implementation and future maintenance not too large to use
 MySQL under these conditions?

 *btw sorry for my English*

 Thanks you very much,
 --
 Muhammad Subair





Re: Query for rolling totals

2009-09-03 Thread Brent Baisley
You can do this using a variable. Set the variable starting value with
a query:
set @runningTotal := 0

Then add the calculation to your total:
SELECT a.trans_id, a.tran_date, b.cat_type, a.payee, a.amnt,
@runningTotal := @runningTotal+a.amnt AS rollingTotal from
transactions a join categories b on a.cat_id = b.cat_id where
a.user_id=1 and a.acc_id=3 order by a.tran_date ASC

Brent Baisley

On Thu, Sep 3, 2009 at 1:56 PM, John Daisley
john.dais...@butterflysystems.co.uk wrote:
 Hi,

 Hoping someone can help me with this little issue! It seems really
 simple but my brain is refusing to work.

 We have a transactions tables like so...

 mysql> desc transactions;
 +-----------+----------------------+------+-----+---------+----------------+
 | Field     | Type                 | Null | Key | Default | Extra          |
 +-----------+----------------------+------+-----+---------+----------------+
 | trans_id  | int(10) unsigned     | NO   | PRI | NULL    | auto_increment |
 | user_id   | smallint(5) unsigned | NO   | MUL | NULL    |                |
 | acc_id    | smallint(5) unsigned | NO   | MUL | NULL    |                |
 | tran_date | date                 | NO   |     | NULL    |                |
 | payee     | varchar(25)          | NO   |     | NULL    |                |
 | amnt      | decimal(8,2)         | NO   |     | NULL    |                |
 | cat_id    | int(10) unsigned     | NO   | MUL | NULL    |                |
 +-----------+----------------------+------+-----+---------+----------------+
 7 rows in set (0.00 sec)



 ...this joins to a few other tables and has the following simple query
 run on it to show all transactions for a particular user and account.

 mysql> SELECT a.trans_id, a.tran_date, b.cat_type, a.payee, a.amnt from
 transactions a join categories b on a.cat_id = b.cat_id where
 a.user_id=1 and a.acc_id=3 order by a.tran_date ASC;

 Which returns a list something like this...

 +----------+------------+----------+-------------------+-------+
 | trans_id | tran_date  | cat_type | payee             | amnt  |
 +----------+------------+----------+-------------------+-------+
 |        1 | 2009-08-31 | Income   | Opening Balance   |  0.00 |
 |        3 | 2009-09-02 | Income   | Test Transactions |  0.20 |
 |       23 | 2009-09-02 | Income   | Tester            |  1.20 |
 |      102 | 2009-09-02 | Income   | Debit             | -1.09 |
 +----------+------------+----------+-------------------+-------+
 4 rows in set (0.00 sec)

 Now this has been fine for a long time, until this afternoon when I got a
 call saying the query needs editing to add another column to the output
 showing a rolling account balance. This means the current output shown
 above needs to change to something like this...

 +----------+------------+----------+-------------------+-------+-----------------+
 | trans_id | tran_date  | cat_type | payee             | amnt  | Rolling Balance |
 +----------+------------+----------+-------------------+-------+-----------------+
 |        1 | 2009-08-31 | Income   | Opening Balance   |  0.00 |            0.00 |
 |        3 | 2009-09-02 | Income   | Test Transactions |  0.20 |            0.20 |
 |       23 | 2009-09-02 | Income   | Tester            |  1.20 |            1.40 |
 |      102 | 2009-09-02 | Income   | Debit             | -1.09 |            0.31 |
 +----------+------------+----------+-------------------+-------+-----------------+
 4 rows in set (0.00 sec)

 Anyone got any tips on how to achieve this? GROUP BY with ROLLUP doesn't
 really do what I need. I've tried a couple of subqueries but can't get
 the output I need.

 I want to avoid storing a rolling balance into the table because this
 would take a while due to the number of records and could create
 problems when someone goes in and modifies a single transaction.

 Any tips would be much appreciated.

 Regards





METAPHON does not exist

2009-08-31 Thread Brent Clark

Hiya

I've got a Debian server with MySQL 5 and I get the following message.

ERROR 1305 (42000): FUNCTION suppor_db2.METAPHON does not exist.

Would anyone know how I can fix this?

Got the following installed

ii  libdbd-mysql-perl4.007-1  A Perl5 
database interface to the MySQL database
ii  libmysqlclient15off  5.0.51a-24+lenny1MySQL 
database client library
ii  mysql-client-5.0 5.0.51a-24+lenny1MySQL 
database client binaries
ii  mysql-common 5.0.51a-24+lenny1MySQL 
database common files
ii  mysql-server-5.0 5.0.51a-24+lenny1MySQL 
database server binaries
ii  php5-mysql   5.2.6.dfsg.1-1+lenny3MySQL 
module for php5

Kind Regards
Brent Clark

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: store timezone

2009-07-28 Thread Brent Baisley
MySQL doesn't store a timezone with its date types (I think Postgres
does). I usually just store dates as Greenwich Mean Time and store the
time zone offset from GMT in a separate field.
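
A minimal sketch of that layout (table and column names are just for
illustration; the offset is in minutes rather than hours so half-hour
zones fit):

CREATE TABLE events (
  event_id      INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
  happened_at   DATETIME NOT NULL,   -- always stored as GMT
  tz_offset_min SMALLINT NOT NULL    -- offset from GMT in minutes, e.g. -300
);

-- convert back to local time when selecting
SELECT event_id,
       DATE_ADD(happened_at, INTERVAL tz_offset_min MINUTE) AS local_time
FROM events;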

Brent Baisley

On Tue, Jul 28, 2009 at 7:59 AM, Manoj Singhmanojsingh2...@gmail.com wrote:
 Hi All,

 Is it possible to store the timezone with datetime data type in mysql or any
 other approach available to achieve this in mysql.

 Looking for your help.

 Thanks,
 Manoj


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Index selection problem

2009-07-21 Thread Brent Baisley
Try doing a SHOW INDEX FROM orders and look at the cardinality
column. These are the stats MySQL uses to determine which index to
use. Sometimes they aren't updated properly and you may need to
run ANALYZE on the table.

But, you can also tell MySQL to use the index you want.
SELECT * FROM orders USE INDEX (index_a) WHERE ...
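
To refresh the stats and, failing that, override the optimizer's
choice outright (FORCE INDEX is the stronger form of USE INDEX):

ANALYZE TABLE orders;

SELECT * FROM orders FORCE INDEX (index_a)
WHERE item_id = 9602 AND customer_id = 5531 AND status_id IN (1,2);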

Brent Baisley

On Tue, Jul 21, 2009 at 5:52 AM, Mortenmy.li...@mac.com wrote:

 Hi, I have a table orders with the columns

  item_id INT FK items(id)
  customer_id INT FK customers(id)
  status_id TINYINT -- Between 1 and 4 always
  ordered_at DATETIME
  delivered_at DATETIME

 There are indexes:

  index_a: (item_id, customer_id, status_id)
  index_b: (item_id, status_id, ordered_at, delivered_at)

 Given this query:

  SELECT * FROM orders WHERE item_id = 9602 AND customer_id = 5531 AND
 status_id IN (1,2)

 Then the key chosen is index_b. Same happens if I use (status_id = 1 OR
 status_id = 2). If I only check against one status_id, then the correct
 index_a gets picked with ref const,const,const.

 I'm not even doing a range scan on status_id and even if I were, it's the
 last column in index_a. Since ordered_at and delivered_at are both dates
 then index_b will have a very high selectivity. In reality, index_b may make
 little sense, but I still don't understand why MySQL would ever pick that
 when 3 columns in the query can use the covering index_a

 Can anyone give me some input on how to make sense of this?

 Thanks,

 Morten

 select count(*) from orders where item_id = 9602 - 4534 records
 select count(*) from orders where item_id = 9602 and status_id IN (1,2) -
 4181 records
 select count(*) from orders where item_id = 9602 and customer_id = 5531 -
 1226 records
 select count(*) from orders where item_id = 9602 and customer_id = 5531 and
 status_id IN (1,2) - 1174 records



 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=brentt...@gmail.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Hard? query to with group order by group head's name

2009-07-21 Thread Brent Baisley
On Thu, Jul 16, 2009 at 1:20 AM, Elim PDTe...@pdtnetworks.net wrote:
 My table group_member looks like this:
 +---+---+-+
 | member_id | name  | head_id |
 +---+---+-+
 |         1 | Elim  |    NULL |
 |         2 | Ann   |       1 |
 |         3 | David |    NULL |
 |         4 | John  |       3 |
 |         5 | Jane  |       3 |
 +---+---+-+

 Record with null head_id means
 the member is a group head.
 Records with head_id k are in the
 group with head whose id equals k.

 I like to fetch the rows in the following ordaer

 |         3 | David |    NULL |
 |         4 | John  |       3 |
 |         5 | Jane  |       3 |
 |         1 | Elim  |    NULL |
 |         2 | Ann   |       1 |

 That is:
 (1) A head-row followed by the group members with that head
 (2) Head rows are ordered alphabetically by name.

 What the query looks like?

 Thanks



You need to create your own sort value and link to the head name.
So really you are sorting on head name + head_id. Since sometimes the
head name is in the current record and sometimes it's in a parent
record, you need to conditionally check which type of record it is and
build the sort value. Note the IS NULL test below (head_id = NULL would
never match), and the extra ORDER BY term that keeps each head ahead of
its members.

SELECT tablename.*,
IF(tablename.head_id IS NULL,
  CONCAT(tablename.name, tablename.member_id),
  CONCAT(heads.name, tablename.head_id)
) AS SortValue
FROM tablename LEFT JOIN tablename AS heads ON tablename.head_id=heads.member_id
ORDER BY SortValue, tablename.head_id IS NOT NULL


Brent Baisley

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Removing Duplicate Records

2009-07-14 Thread Brent Baisley
That's assuming that there is a unique identifier field, like an auto
increment field. Although that could be added after the fact. Also,
you need to run the query multiple times until it returns no affected
records. So if there are 4 copies of a record, it would need to be run
3 times to get rid of all the dups.
But I agree, that is the best way to remove duplicates in place
provided the table is not too large.
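
If you'd rather clear every duplicate in one pass, a self-join works
too (a sketch that keeps the lowest id per link_id, reusing the column
names from Marcus's query):

DELETE t1 FROM table1 t1
JOIN table1 t2 ON t1.link_id = t2.link_id AND t1.id > t2.id;

Every row that has a twin with a lower id gets deleted, no matter how
many copies there are.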

Brent Baisley

On Tue, Jul 14, 2009 at 11:52 AM, Marcus
Bointonmar...@synchromedia.co.uk wrote:
 You can combine the two queries you have in option 3 (you'll need to change
 field names, but you should get the idea), something like this:
 DELETE table1 FROM table1, (SELECT MAX(id) AS dupid, COUNT(id) AS dupcnt
 FROM table1 WHERE field1 IS NOT NULL GROUP BY link_id HAVING dupcnt > 1) AS
 dups WHERE table1.id=dups.dupid;
 Marcus
 --
 Marcus Bointon
 Synchromedia Limited: Creators of http://www.smartmessages.net/
 UK resellers of i...@hand CRM solutions
 mar...@synchromedia.co.uk | http://www.synchromedia.co.uk/




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Growing database Performance

2009-06-26 Thread Brent Baisley
1 million records is not a lot at all if you have it indexed right.
I've worked with systems where tables grew by 30 million records per
month. Although we only kept about 300 million records at any one
time.

It's not really the size or record count you need to worry about, but
the activity. You may end up with locking issues if you are using
MyISAM. Although you are probably not doing a lot of deletes and
updates, just mainly inserts, so it may not be a problem.
InnoDB won't have locking issues, but it could have performance issues
if you don't have enough memory.

You can use partitions or merge tables to split your data, but
splitting your tables into different databases isn't really going to
help you.

Splitting your mail into inbox and outbox may be helpful since your
inbox would probably only ever have records added to it, whereas the
outbox would have a lot of activity (inserts and deletes).
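
For example, MySQL 5.1's hash partitioning can split the mailbox table
by the id you query on most (a sketch; note that receiverId would have
to be part of every unique key on the table):

ALTER TABLE mailbox
PARTITION BY HASH (receiverId)
PARTITIONS 16;

A query with WHERE receiverId = 123 then only has to touch one of the
16 partitions.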

Brent Baisley

On Fri, Jun 26, 2009 at 11:25 AM, fa sofak...@yahoo.com wrote:
 I have a website where my database is continuously growing. And I started 
 being worried about performance.
 I have a couple of questions, and I would appreciate it very much if you can 
 elaborate on them.

 - I have about 70 tables in the same database.  some of them are becoming 
 very large (over 1 million record), and I guess in a couple of months some of 
 them will be double in size, like the mailbox table. Would it be beneficial 
 to divide this database tables across different databases where each database 
 holds some tables?

 - I was looking at MySQL table partition, and I would like to try it. I am 
 not sure though what is the best way to do it, for example in my mailbox 
 table, I have senderId and receiverId as keys and I query inbox and 
 outbox using these keys.. If I want to paritition the table, what is the 
 partion by method I should use?

 - do you think dividing my mailbox table into separate tables for inbox and 
 outbox like: mailbox_inbox, mailbox_outbox, and mailbox_messages would be 
 beneficial?


 - I am also guessing that my mailbox table will be holding 10s of
 millions of records in a year or two period.. and I am not sure about
 the best way to habdle such a table when it grows very much.. any ideas
 on how to plan for such a senario?
 I can imagine the best way would be to create many tables each holding a 
 portion of the mailbox table while using MySQL partition on each of them... I 
 am wondering though about the best way to map senderId and receiverId to 
 the correct table

 thank you





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Half Hour Sub-query in MySQL vs. 5 Seconds in VFP?

2009-06-19 Thread Brent Baisley
It sounds like you want to use spatial indexes, but they only became
available in v4.1
http://dev.mysql.com/doc/refman/5.0/en/create-index.html
http://dev.mysql.com/doc/refman/5.0/en/using-a-spatial-index.html

You would need to switch your table type from InnoDB to MyISAM, which
is fairly easy with ALTER TABLE. But that should allow you to drop all
your calculations in the query.
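
As a rough sketch of the spatial route (hypothetical column names, and
MBRContains compares plain degrees, so the box below only approximates
a 5 mile radius around your test point):

ALTER TABLE zipcodes ENGINE=MyISAM;
ALTER TABLE zipcodes ADD COLUMN loc POINT;
UPDATE zipcodes
SET loc = GeomFromText(CONCAT('POINT(', longitude, ' ', latitude, ')'));
ALTER TABLE zipcodes MODIFY loc POINT NOT NULL;
CREATE SPATIAL INDEX sp_loc ON zipcodes (loc);

SELECT zip FROM zipcodes
WHERE MBRContains(
GeomFromText('POLYGON((-77.216 39.007, -77.029 39.007, -77.029 39.151, -77.216 39.151, -77.216 39.007))'),
loc);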

You don't have to do any re-architecture to change your subquery to a join:
SELECT custzip FROM customers
JOIN
(SELECT zip FROM zipcodes
WHERE degrees(acos(sin(radians(39.0788994))*sin(radians(latitude)) +
cos(radians(39.0788994))*cos(radians(latitude)) *
cos(radians(-77.1227036 - longitude)))) * 60 * 1.1515 < 5) AS zips
ON custzip=zip

Oftentimes that simple change speeds things up considerably in MySQL.
An EXPLAIN should show a DERIVED table if I recall correctly.

Brent Baisley

On Thu, Jun 18, 2009 at 9:06 PM, Matt Neimeyerm...@neimeyer.org wrote:
 I'm converting a PHP app from using Visual FoxPro as the database
 backend to using MySQL as the backend. I'm testing on MySQL 4.1.22 on
 Mac OSX 10.4. The end application will be deployed cross platform and
 to both 4.x and 5.x MySQL servers.

 This query returned 21 records in .27 seconds.

   SELECT zip FROM zipcodes WHERE
 degrees(acos(sin(radians(39.0788994))*sin(radians(latitude)) +
 cos(radians(39.0788994))*cos(radians(latitude)) *
 cos(radians(-77.1227036 - longitude)))) * 60 * 1.1515 < 5

 This query returned 21442 records in 1.08 seconds.

   SELECT custzip FROM customers

 This query is still running half an hour later, with a Time of 2167
 and a State of Sending Data (according to the mysql process list)

   SELECT custzip FROM customers WHERE custzip IN (SELECT zip FROM
 zipcodes WHERE degrees(acos(sin(radians(39.0788994))*sin(radians(latitude)) +
 cos(radians(39.0788994))*cos(radians(latitude)) *
 cos(radians(-77.1227036 - longitude)))) * 60 * 1.1515 < 5)

 When I try to EXPLAIN the query it gives me the following...

 id,select_type,table,type,possible_keys,key,key_len,ref,rows,Extra
 1,PRIMARY,customers,index,NULL,cw_custzip,30,NULL,21226,Using where; Using 
 index
 2,DEPENDENT SUBQUERY,zipcodes,ALL,NULL,NULL,NULL,NULL,42144,Using where

 If it matters both tables are INNODB and both customers.custzip and
 zipcodes.zip are indexed. We used a program called DBConvert from
 DMSoft to convert the data so it's exactly the same on both the VFP
 side and the MySQL side. With all that in mind... VFP returns the
 exact same query in 5-10 seconds and that includes render time in the
 web browser.

 By comparison... the query WHERE id IN (SELECT id FROM phone WHERE
 phonedate >= '2001-01-01' AND phonedate <= '2009-06-18') returns
 almost instantly.

 I'm at a complete loss... The suggestions I've seen online for
 optimizing Dependent Subquery's basically revolve around changing it
 from a sub-query to a join but that would require more
 re-architecturing than I want to do... (Unless I'm forced) Especially
 since more than a few of those solutions suggested precalculating the
 distance between zipcodes which only works if the distances are known
 (only allow 10, 50 and 100 mile radii for example)

 Any ideas?

 Thanks in advance!

 Matt

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=brentt...@gmail.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Question about query - can this be done?

2009-06-02 Thread Brent Baisley
On Tue, Jun 2, 2009 at 11:52 AM, Ray r...@stilltech.net wrote:
 Hello,

 I've tried the manual and google, but I am not even sure what to call what I
 want to do.

 simplified data example:
 I have a table of start and end times for an event, and an id for that event
 in a table. each event may occur multiple times, but never more than 5 times
 and rarely more than 3.
 I want a query that will provide one record per event with all times included.
 feel free to answer RTFM or STFW as long as you provide the manual section or
 key words.  ;)
 Thanks,
 Ray


 chart form follows:

 id | event_id | start | end
 ---
 1  |    4         | t1    | t2
 2  |    4         | t3    | t4
 3  |    4         | t5    | t6
 4  |    5         | t1    | t2
 5  |    5         | t3    | t4

 becomes

 id | event_id | start | end | start | end | start | end
 ---
 ?  |     4        | t1    | t2     | t3    | t4    |  t5    | t6
 ?  |     5        | t1    | t2     | t3    | t4


I think what you are looking for is GROUP_CONCAT. You can just GROUP
BY event id, and then process the resulting delimited string on the
front end.
SELECT event_id, GROUP_CONCAT(start) start_dates, GROUP_CONCAT(end) end_dates
FROM events GROUP BY event_id

Or even combine start and end dates into a single string and group them.
SELECT event_id, GROUP_CONCAT( CONCAT(start, '-', end) ) start_end
FROM events GROUP BY event_id

But, if you really want to get it in the column format you indicate,
you can make a much more complicated query. Use SUBSTRING_INDEX to
split out the parts of the group you need.

SELECT event_id,
SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 1 ) start1,
SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 1 ) end1,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 2 ), ',',
-1 ) start2,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 2 ), ',', -1 ) end2,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 3 ), ',',
-1 ) start3,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 3 ), ',', -1 ) end3,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 4 ), ',',
-1 ) start4,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 4 ), ',', -1 ) end4,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(start), ',', 5 ), ',',
-1 ) start5,
SUBSTRING_INDEX( SUBSTRING_INDEX(GROUP_CONCAT(end), ',', 5 ), ',', -1 ) end5
FROM events GROUP BY event_id;

I think that will give the format you specified, but I am not
recommending you do it this way.

Hope that helps.

Brent Baisley

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Mysql Locked Process Hang

2009-05-06 Thread Brent Baisley
You have 2 queries that are probably blocking everything. They are in
the update state.
| 30 | root | localhost | bsm  | Query   |  138 | update | insert into
upbcgww03
values(/var/tmp/zypp.067D9R/zypp-trusted-kr9rzhrO,trustdb.gpg,1200,b18a1a

| 30 | root | localhost | bsm  | Query   |  138 | update | insert into
upbcgww03
values(/var/tmp/zypp.067D9R/zypp-trusted-kr9rzhrO,trustdb.gpg,1200,b18a1a

But regardless, it doesn't matter which database or table type you
use, you are not going to get decent performance by doing many, many
single inserts. Even your selects are going to hurt things. To process
a file with only 100 lines, you're going to do between 100 and 200
queries (100 selects + 100 possible inserts).
You should try to batch your selects so you get a bunch of matches you
can filter on, rather than just one. For your inserts you should
absolutely use bulk inserts. Just build up a list of values that need
to be saved and when you hit 100 (or some other batch size), bulk
insert into the database and bulk write to the file.
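
A multi-row insert looks like this (batching the same values from your
process list):

INSERT INTO objects VALUES
('aztcd.ko', 33246, 'ed2c466b200d3fb38420f23c73b31da6'),
('ib_mad.ko', 41540, 'd1513ed2dafa6ea1ec53ca31f16a6ea6'),
('ib_ucm.ko', 23704, '1df574fe480402cd1baa02bfe53dea25');

One round trip instead of three, and the ratio only improves as the
batch grows.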

Brent Baisley

On Wed, May 6, 2009 at 12:02 PM, Andrew Carlson naclos...@gmail.com wrote:
 Any ideas about this?  It's very aggravating and I have no idea how to debug
 this any further.  Thanks.

 Hi.

 I am having a problem with a program I am writing.  The program reads a
 file, checks an object file, and if the record doesn't exist, it inserts to
 a node table (one table per node) and the objects file.  I wrote a C program
 to do this, with multiple processes running at one time.  I have 15 odd
 nodes, with 40 files (one file per filesystem on the 15 nodes).  I kick of
 40 processes at once.  It runs for some time, and hangs with the following
 process list (not at the same point every time):

 mysql show processlist;
 ++--+---+--+-+--++--+
 | Id | User | Host      | db   | Command | Time | State  |
 Info
 |
 ++--+---+--+-+--++--+
 |  2 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(aztcd.ko,33246,ed2c466b200d3fb38420f23c73b31da6)
 |
 |  3 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(ib_mad.ko,41540,d1513ed2dafa6ea1ec53ca31f16a6ea6)
 |
 |  4 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(ib_ucm.ko,23704,1df574fe480402cd1baa02bfe53dea25)
 |
 |  5 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(pam_make.so,4992,9ecb181ecd67bd51f9d1c47381a02e8f)
 |
 |  6 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(istallion.ko,38828,c0f89798d35eed8bb447465f1771c13c)
 |
 | 10 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(CIM_MemoryCheck.CIM_Check,11758,1607681b1648d7ef2e105dffc59f7122)
 |
 | 12 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(i2c-amd756.ko,8120,2106913cdc436edbfbd79f4638f0f266)
 |
 | 15 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(ice_not.hpp,777,b3da4cef03bb7ede418858da3e74d29f)
 |
 | 16 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(hand-pub.html,4069,e03dee2afd9ddb0460307f58e01599a9)
 |
 | 17 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(081107035101.dat,3969,8aac4f8e16c8c450cfb14e3c573d62e6)
 |
 | 19 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(iptable_nat.o,29116,a156a3cdb6a9bfc85bad4c6016a90d70)
 |
 | 20 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(SYS_LC_MESSAGES,63,9e8cf9be98236c7327c479ea65447570)
 |
 | 18 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(proddon_backup.070512154501.log,2122,917485542f6cf8afccf610905de809ba
 |
 | 22 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(whfwdata6.htm,7283,50337d6d9d0fa796aa7ebcc3edd26e0a)
 |
 | 23 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(xfrm6_mode_transport.ko,7748,f484c1aef7174af8852113b2ced0aa9e)
 |
 | 24 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(stock_bluetooth.png,2490,2c0325756b662464839152a62f78ab8a)
 |
 | 25 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(231.lst,163,31a38641a00279721cbfc0eaacabd6d0)
 |
 | 26 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(DEPEND,289,0049695fe84c6117e007623d9db38ea8)
 |
 | 27 | root | localhost | bsm  | Query   |  138 | Locked | insert into
 objects
 values(kioslaverc,36,fba9d778b2bd00fddd07d9ff4b7c8afd)
 |
 | 28 | root | localhost | bsm  | Query   |  138 | Locked

Re: Small InnoDB table with many concurrent queries

2009-04-20 Thread Brent Baisley
MyISAM should be fine with many SELECTs running at once. Tables are
only locked for INSERTs, DELETEs and UPDATEs, and conditionally at
that. You can set MySQL to always append INSERTs to the end of the
file so you can insert while selecting.
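
That's the concurrent_insert server setting; 2 means always append,
and is available from MySQL 5.0.6 on:

SET GLOBAL concurrent_insert = 2;

or in my.cnf, in the [mysqld] section:

concurrent_insert = 2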

What is the whole picture? If you were having locking problems, you
are doing more than just a lot of SELECTs. How many inserts, updates,
deletes are you doing? If you are doing a lot of updates, even InnoDB
will block if you are trying to update the same record across queries.

If you have a lot of queries stuck in the 'sending data' state, check
which ones are in a different state. One of those may be the culprit.

Brent Baisley

On Mon, Apr 20, 2009 at 10:28 AM, living liquid | Christian Meisinger
c.meisin...@livingliquid.com wrote:
 Hi there.

 I've a small table with my daily banner hits.
 1. version was with myisam but with a lot of concurrent queries (all SELECTs) 
 i get too many table locks.
 so i changed it to an innodb table.
 works great most of the time.

 sometimes it seems to be too much, starting at about 500 concurrent queries i 
 see a huge amount of processes
 taking about 3 minutes to finish 'sending data'.

 the SELECT queries use the correct index and data returned is small (2 
 integers).

 the table has only 4MB and about 35000 rows.
 it can't be the size of the table...

 mysql server is connected with a 1G switch.
 so i don't think it's network related.

 mysql server is a dual xeon 2,3GHz with 8G ram and SCSI disk RAID5.

 did i hit a innodb limit with this server configuration?
 or is my my.cnf bad?

 my.cnf ---

 key_buffer              = 750M
 max_allowed_packet      = 32M
 table_cache             = 1
 sort_buffer_size        = 4M
 join_buffer_size        = 4M
 read_buffer_size        = 2M
 read_rnd_buffer_size    = 4M
 myisam_sort_buffer_size = 128M
 query_cache_size        = 750M
 query_cache_limit       = 16M
 thread_cache            = 32
 thread_concurrency      = 16

 tmp_table_size          = 700M
 max_heap_table_size     = 700M

 net_buffer_length = 16K

 skip-external-locking

 innodb_additional_mem_pool_size = 16M
 innodb_buffer_pool_size         = 2G
 innodb_thread_concurrency       = 16
 innodb_flush_log_at_trx_commit  = 2
 innodb_log_buffer_size          = 8M
 innodb_lock_wait_timeout        = 120
 innodb_log_file_size            = 256M
 innodb_log_files_in_group       = 3

 --


 thanks for any info

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=brentt...@gmail.com



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Select query locks tables in Innodb

2009-03-12 Thread Brent Baisley
The nice thing about InnoDB is that it won't have to access the data
portion of the file if it doesn't have to. So if all the information
you are retrieving is contained in an index, it only accesses the
index to get the information it needs. The data portion is never
accessed, and thus never locked.

Something like this is probably going on. All the information it needs
for the 100,000 records is contained in the index, so the data portion
is never accessed until it needs to retrieve the 60,000 records.

That's a simplistic overview of what could be going on. But it sounds
like your issue has been resolved.

Interesting, your temp1 attached file shows mysql switched from using
the org_date index to the organization index.

Brent Baisley

2009/3/12 Carl c...@etrak-plus.com:
 Brent,

 After a delay while I was busy killing alligators, I did as you suggested
 (added a composite index of date and organization_serial on
 journal_entry_master... in the spirit of your suggestion, anyway.)  The
 results were interesting:

 1.  In my test environment, I could not force a locked file even though I
 opened the dates up to cover 2+ years and changed to an organization that
 had more records.  The 'Explain' is attached as temp1.txt.  You will note
 that it starts with 100,000+ records while the eventual set of records for
 the report is 60,000 because the 100,000+ number includes some journal
 entries for refund/void/etc. transactions which we have no interest in.

 2.  I tried various combinations of indexes but couldn't seem to get any
 better than the composite one on the journal_entry_master.  I did not check
 whether the other options would produce locked files.

 I am now going to put this into production and see if it will actually fly.

 I am still a little puzzled about how we could have a relatively large set
 of records (100,000+) and yet not cause any table to be locked as the server
 has only 8GB of memory.

 Thanks for all your help and Baron's suggestions also.

 Carl





 - Original Message - From: Brent Baisley brentt...@gmail.com
 To: Carl c...@etrak-plus.com
 Sent: Thursday, March 05, 2009 1:12 PM
 Subject: Re: Select query locks tables in Innodb


 Ok, so you have 687 unique organization serial numbers. That's not
very unique; on average it will only narrow down the table to 1/687 of
its full size. This is probably the source of your locking problem
 and where you want to focus.
 InnoDB will lock on a query that doesn't use an index. It would have
 to lock every record anyway, so why not lock the table?
 36,000 records still may be too large of a result set to do record
 versioning. But, optimizing your query is the only way to go.

 Your date_effective is a lot more granular, so you may want to focus
 on that. If you do a lot of these types of searches, you can try
 creating a compound index on organization_serial+date_effective.
CREATE INDEX org_date ON
journal_entry_master (organization_serial, date_effective)

MySQL would/should then use that index, which will narrow things down
 quicker and better. It shouldn't have to try to do versioning on
 56,000 records while it tries to get the subset of that (36,000).

 Brent

 On Thu, Mar 5, 2009 at 6:02 AM, Carl c...@etrak-plus.com wrote:

 Brent,

 The query returns about 36,000 rows. The 56,000 rows from the
 journal_entry_master table is all the entries for organization 16 (they
 span
 more than the dates I have asked for.)

 SHOW INDEX FROM journal_entry_master shows 1,554,000+ for the primary
 index
 (auto-increment), 687 for the organization_serial (the one I am using), 18
 for the organization_shift_start (I tried this before, i.e., starting with
 the organization_shift, but it quickly got mired down) and 777,000+ for
 the
 date_effective.

 If I understand correctly, you have suggested using the date index. The
 difficulty is the data contains many organizations and so the date range
 query returns 163,000+ rows.

 Also, I would expect scaling a query where I had to programmatically cut it
 up would 1) be difficult and 2) wouldn't really solve the problem but
 would
 rather just shorten the time of the locks. I am not suggesting that I
 might
 not end up there, only hoping for a better solution.

 Thanks for all your insight and feel free to suggest away.

 Carl

 - Original Message - From: Brent Baisley brentt...@gmail.com
 To: Carl c...@etrak-plus.com
 Sent: Wednesday, March 04, 2009 4:23 PM
 Subject: Re: Select query locks tables in Innodb


 Is the result of the query returning 56,000+ rows? How many rows are
 you expecting to be returned once the query is finished running?
 Your date range is over a year. You may actually get much better
 performance (and avoid locking) by running more queries with a
 narrower date range and linking them through a UNION. It's using the
 organization index rather than the date index.
 I don't know your dataset, but typically you want your query to use
 the date index since

Re: Confused About Rows and Skipping On Import with MySQL Migration Toolkit 1.1

2009-03-07 Thread Brent Baisley
It's not skipping any rows. When you select records from a database,  
it gets them in the order that is quickest to retrieve them, not the  
order they were entered. The natural order is how they are stored on  
disk. As your database is updated over time, this order may change.
If you have an auto increment column, order it by that value. That  
field will have the order the records were imported in.
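
For example, assuming the table is named address and addressID is the
auto increment column:

SELECT * FROM address ORDER BY addressID;

-- the 655th row in entry order
SELECT * FROM address ORDER BY addressID LIMIT 654, 1;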


Brent Baisley


On Mar 6, 2009, at 9:10 PM, revDAVE wrote:


Hi folks,

I'm trying to use MySQL Migration Toolkit 1.1 with MS SQL server 2005
http://dev.mysql.com/downloads/gui-tools/5.0.html

- basically all is working great - some tables import no problem -  
except...


I'm trying to import an address table and in the summary it says that
there's a few problems like:

incorrect string value for column 'street' at row 655
0 rows transferred

The problem is when I try to import just 650 or so, then I go to
phpMyAdmin to look - well the rows are not in the original order -
they start with addressID 1-2-3-4 etc but randomly skips some so
there's no way I can match the imported row 655 to the addressID (655)
(they were in order - no deletions) - to find the bad field to fix
it...

Q: why is it importing rows and seemingly skipping many?

Q: how can I figure out which is the REAL row 655 (without counting
by hand) to fix the field text string error?

Thanks in advance - dave


--
Thanks - RevDave
Cool @ hosting4days . com
[db-lists 09]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=brentt...@gmail.com




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Select query locks tables in Innodb

2009-03-03 Thread Brent Baisley
A SELECT will/can lock a table. It almost always does in MyISAM (no
insert/updates), almost never does in InnoDB. There is an exception to
every rule. The problem is most likely in the 107448 rows part of the
query. That's too many rows for InnoDB to keep a version history on, so
it's likely just locking the table.

Is that how many records you want to return? That seems like a lot.
Maybe reworking your query may help. Heck, post the sizeable query.
You've been spending weeks on it.

Brent Baisley

On Tue, Mar 3, 2009 at 10:53 AM, Carl c...@etrak-plus.com wrote:
 I have been wrestling with this problem for a couple of weeks and have been 
 unable to find a solution.

 The MySQL version is  5.0.37 and it is running on a Slackware Linux 11 box.

 The problem:

 A query that is selecting data for a report locks the files that it accesses 
 forcing users who are attempting to enter transactions to wait until the 
 select query is finished.

 The query is sizable so I have not included it here (I can if that would be 
 helpful.)  Explain shows (abbreviated):

 id  select_type  table              type    possible_keys                key_len  ref                  rows    Extra
 1   SIMPLE       transactions       ref     PRIMARY,person,organization  4        const                107448  *
 1   SIMPLE       person             eq_ref  PRIMARY                      4        person_serial        1
 1   SIMPLE       regs               ref     transaction                  4        transactions_serial  1
 1   SIMPLE       transaction_event  ref     PRIMARY,transaction,receipt  4        transactions_serial  1
 1   SIMPLE       receipt_master     ref     PRIMARY                      4        receipt_serial       1

 The 107448 rows are the transactions for the organization I am reporting.  
 The person is linked directly to the transaction.  During the select query, 
 the person table is locked thereby stopping updates to any person in the 
 table.

 I have always thought a select is only a read and would, therefore, not lock 
 any tables.

 Anyone have any ideas?

 TIA,

 Carl

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MyISAM large tables and indexes managing problems

2009-03-01 Thread Brent Baisley
Be careful with using InnoDB with large tables. Performance drops  
quickly and quite a bit once the size exceeds your RAM capabilities.



On Mar 1, 2009, at 3:41 PM, Claudio Nanni wrote:


Hi Baron,
I need to try some trick like that, a sort of offline index building.
Luckily I have a slave on that is basically a backup server.
Tomorrow I am going to play more with the dude.
Do you think that there would be any improvement in converting the
table to InnoDB, forcing it to use multiple files as the tablespace?

Thanks
Claudio
Baron Schwartz wrote:

Claudio,

http://www.mysqlperformanceblog.com/2007/10/29/hacking-to-make-alter-table-online-for-certain-changes/

Your mileage may vary, use at your own risk, etc.

Basically: convince MySQL that the indexes have already been built but
need to be repaired, then run REPAIR TABLE.  As long as the index is
non-unique, this can be done by sort.  In your case, the index
(PRIMARY) is unique, so you'll need to see if you can work around that
somehow.  Maybe you can create it under another name as non-unique,
build it, then swap it and the .frm file out.  Have fun.

This is the only option I see for you, but maybe there are others.







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: multiple choice dropdown box puzzle

2009-02-23 Thread Brent Baisley
It's actually a very simple solution, and you should do it all in a
single INSERT. Putting INSERTs in a loop will kill your performance
when you try to scale.

$sql4 = 'INSERT INTO temp (example) VALUES (' . implode('),(',
$_POST['categoriesIN']) . ')';
$result4 = mysql_query($sql4, $db);

That example does not sanitize the data before inserting.
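
So if the user picked categories 1, 3 and 5, the statement that
actually gets sent is a single multi-row insert:

INSERT INTO temp (example) VALUES (1),(3),(5);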

Brent

On Mon, Feb 23, 2009 at 10:25 AM, PJ af.gour...@videotron.ca wrote:
 I think this is a tough one... and way above my head:
 PLEASE READ ALL OF THE ABOVE TO UNDERSTAND WHAT I AM TRYING TO DO.
 Having a bit of a rough time figuring out how to formulate php-mysql to 
 insert data into fields using a multiple dropdown box in a form.

 to post I am using the following:
 snip...
 $categoriesIN   = $_POST[categoriesIN];

 ...snip...

  <select name="$categoriesIN[]" multiple="multiple">
 <OPTION>Choose Categories...</option>
 <OPTION VALUE="<? echo $categoriesIN; ?>">1
 <OPTION VALUE="<? echo $categoriesIN; ?>">2
 <OPTION VALUE="<? echo $categoriesIN; ?>">3
 <OPTION VALUE="<? echo $categoriesIN; ?>">4
 <OPTION VALUE="<? echo $categoriesIN; ?>">5
 </SELECT>

 ...snip...

 $sql4 = FOR ( $ii = 0 ; $ii < count($categoriesIN) ; $ii++ )
INSERT INTO temp (example) $categoriesIN[$ii] ;
$result4 = mysql_query($sql4, $db);
 ...snip

 this does not work! The other posts work like a charm... but this...

 I cannot figure out what I should be entering where... I have tried several 
 different configurations, but nothing seems to work...

 I found this as a model for entering the selections but can't figure out how 
 to modify it for my needs:

 <select name="branch_no[]" multiple="multiple" size="5">
 <option> Choose your location(s) </option>
 <option value="3100">3100</option>
 <option value="3105">3105</option>
 <option value="3503">3503</option>
 <option value="3504">3504</option>
 </select>

 What I would like to do is something like the following:
 <select name="$categoriesIN[]" multiple="multiple">
<OPTION>Choose Categories...</option>
<OPTION VALUE="1">History
<OPTION VALUE="2">Temples
<OPTION VALUE="2">Pharaohs and Queens
<OPTION VALUE="4">Cleopatra
<OPTION VALUE="4">Mummies
</SELECT>
 and going further, I would like to be able to use a table that actually holds 
 these values to feed them to the code above. I am sure this is possible but 
 it must take some huge knowledge and experience to do it.

 BUT ...
 as I look at things, I am wondering if the FOR statement in the above should 
 be used to do several INSERTs, that is, one $sql(number) per selected 
 category... now, would that require many $sqls or many INSERTs within the 
 $sql ?


 --

 Phil Jourdan --- p...@ptahhotep.com
   http://www.ptahhotep.com
   http://www.chiccantine.com


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=brentt...@gmail.com



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Record IDs

2009-02-23 Thread Brent Baisley
On Mon, Feb 23, 2009 at 11:40 AM, Hagen Finley finha...@comcast.net wrote:
 I have a number of duplicate records in my table which are identical hence,
 I can't do a delete on the columns without deleting both records. One
 deletion strategy I have considered is identifying the records by their
 table record id - but I don't know for certain that such an identifier
 exists or how I would show it via a select statement. Any suggestions?

 Hagen Finley
 Boulder, CO

If you do not want to delete duplicates one at a time (i.e. LIMIT 1),
then you need a unique record identifier. MySQL does not have a
default unique record id, you need to create that field yourself. Just
specify it as an auto-increment and you will have a unique ID for each
record.
Check if you already have an auto-increment field, otherwise add one.

To delete your duplicate records, use something similar to the following
query, grouping by the column (or columns - dupColumn is a placeholder
here) that makes a record a duplicate:
DELETE tableName FROM tableName JOIN
(SELECT max(recordID) dupID, count(*) c FROM tableName GROUP BY
dupColumn HAVING c > 1) AS dupTable ON tableName.recordID=dupTable.dupID

That will delete the latest duplicate record. If you have more than
2 copies of a record (i.e. triplicates), this will not clean them all
out. You can keep running the query to delete multiple duplicates of
records.

Test the query first to make sure it's working properly. Just switch
DELETE tableName with SELECT fieldName(s).
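
For example, the test version of the delete above (same hypothetical
dupColumn placeholder):

SELECT tableName.* FROM tableName JOIN
(SELECT max(recordID) dupID, count(*) c FROM tableName GROUP BY
dupColumn HAVING c > 1) AS dupTable ON tableName.recordID=dupTable.dupID;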

Brent Baisley

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Best RAID for a DB + LVM?

2009-02-23 Thread Brent Baisley
SCSI isn't necessarily faster now. The big difference used to be
SCSI's support for command queueing, which is why it was faster in
multi-user environments. Command queueing is now fairly common in SATA
drives.
The highest end SCSI is probably still faster than the highest end
SATA, but you will have less disk space and it will cost much more.
I would recommend using one of the RAID in a box solutions. They have
big caches for the whole RAID and they are optimized to the
controllers. If money isn't really an issue, you may look into
something like NetApp. That would have everything you need.

Brent Baisley

On Mon, Feb 23, 2009 at 6:58 AM, Andy Smith a.sm...@ukgrid.net wrote:
 What RAID level to use, whether to use SCSI or SATA etc are all pretty much
 how long is a piece of string? questions. If you have a really high end
 hardware array RAID 5 may be faster than RAID1+0 is on a cheaper system.

 Basically

 RAID 5 = slower
 SATA = slower
 RAID 1+0 = faster
 SCSI = faster
 more physical disks = faster
 more expensive controller = faster

 ;)

 If you want to compare specific hardware you'll need to get your hands on it
 or find someone else who has already done a comparison. But it will make a
 huge difference to performance what disk array you have hooked up, just
 depends how much you want to spend

 Quoting Waynn Lue waynn...@gmail.com:

 I currently have a RAID 5 setup for our database server.  Our space is
 running out, so I'm looking to increase the disk space.  Since I'm doing
 that anyway, I decided to re-evaluate our current disk array.  I was told
 that RAID 5 isn't a good choice for databases since it's slower to write.
 In addition, I've also been considering setting up LVM to take quick db
 snapshots, after reading various links on the web (and posts to this
 list).

 So on to the questions!  First, if that's what I eventually want to do
 (get
 a new RAID server with LVM), do I need to do anything special to set up
 LVM
 on the new system?  Second, what is a good RAID setup for databases?  RAID
 10?  0+1?  Third, I have the choice of using SATA or SCSI in conjuction
 with
 the RAID drives I choose.  How much of a difference is there in using SATA
 instead of SCSI, especially in light of whatever RAID I end up going with?

 Thanks for any insights,
 Waynn




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=brentt...@gmail.com



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Optimizing IN queries?

2009-01-26 Thread Brent Baisley
If you are running MySQL 5, try moving the WHERE condition into the
JOIN condition, which is really where you want the filter since it's
part of the join.

SELECT me.id, me.quotation_id, quotation.qt, part.id, part.cit
FROM cwGroup me
JOIN quotation ON (
quotation.id = me.quotation_id AND quotation.id IN (107037, 304650,
508795, 712723, 1054653))
JOIN part ON ( part.id = quotation.part_id )

That may or may not help, check if the explain changes.

Brent Baisley


On Mon, Jan 26, 2009 at 6:16 AM, Jesse Sheidlower jes...@panix.com wrote:

 I have an app that joins results from a MySQL query with the
 results of a lookup against an external search engine, which
 returns its results in the form of primary-key id's of one of
 the tables in my database. I handle this by adding these
 results with an IN query. (My impression had been that this is
 faster than a long chain of OR's.)

 In the simplest case, if I'm _only_ searching against these
 results, the query will look something like this (I've
 removed some columns from the SELECT list for readability):

 SELECT me.id, me.quotation_id, quotation.qt, part.id, part.cit
 FROM cwGroup me
 JOIN quotation ON (
 quotation.id = me.quotation_id )
 JOIN part ON ( part.id = quotation.part_id )
 WHERE ( quotation.id IN (107037, 304650, 508795, 712723, 1054653))

 When I ran this on a query that generated a moderate number of
 results (over 1000, but not millions), it took MySQL 26
 seconds to reply on my dev box.

 Can someone suggest what I can look at to speed this up? The
 section of the manual that talked about optimizing range
 queries spent a lot of time explaining how they work but very
 little on how to speed them up. The EXPLAIN didn't really
 help--only one column got a lot of results, and it's not clear
 to me why MySQL would take 26 seconds to fetch 1214 records.

 The EXPLAIN looks like this:

 ---
 *** 1. row ***
   id: 1
  select_type: SIMPLE
table: me
 type: range
 possible_keys: quotation_id
  key: quotation_id
  key_len: 4
  ref: NULL
 rows: 1214
Extra: Using where
 *** 2. row ***
   id: 1
  select_type: SIMPLE
table: quotation
 type: eq_ref
 possible_keys: PRIMARY,part_id
  key: PRIMARY
  key_len: 4
  ref: rqs_incs.me.quotation_id
 rows: 1
Extra:
 *** 3. row ***
   id: 1
  select_type: SIMPLE
table: part
 type: eq_ref
 possible_keys: PRIMARY
  key: PRIMARY
  key_len: 4
  ref: rqs_incs.quotation.part_id
 rows: 1
 ---

 Thanks very much.

 Jesse Sheidlower

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=brentt...@gmail.com



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Unused and empty tables - what impact on mysql

2009-01-14 Thread Brent Clark

Hiya

I just inherited a project, and before I get started, I'd like to do a
little clean up.


There are a *load* of unused and empty tables in the db. My question
is, does this affect the performance of MySQL in any way, and
if so, how?


Kind Regards
Brent Clark

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Unused and empty tables - what impact on mysql

2009-01-14 Thread Brent Clark

Claudio Nanni wrote:

Empty files, like empty tables, are not synonymous with useless or unused.
Of course it depends on the storage engine used, but
in the list of performance tuning removing empty tables is way, way down;
they have almost no impact while not used.
The bottom line: remove only when 100% sure of their uselessness.

Cheers
Claudio Nanni

Thanks for your reply and feedback.

Regards

Brent Clark

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Distinct Query Problem

2008-12-22 Thread Brent Baisley
On Fri, Dec 19, 2008 at 1:03 PM, Tompkins Neil
neil.tompk...@googlemail.com wrote:
 Hi,

 I've the following query which I'm having problems with.  Basically I have
 a 5 tables as follows :

 MasterTB - Contains list of master records
 LookupTB - Contains relationship between MasterTB to ProductTB
 ContentTB - Contains description of product, and location of data files
 PriceTB - Contains list of prices per day for each product
 ProductTB - List of products

 SELECT MasterTB.MasterID, ProductTB.Supplier, MasterTB.Name,
 ContentTB.Title, ContentTB.FolderName, MIN(PriceTB.Price) As PriceDiscounts
 FROM MasterTB
 INNER JOIN LookupTB ON LookupTB.MasterID = MasterTB.MasterID
 INNER JOIN ProductTB ON LookupTB.ProductID = ProductTB.ProductID
 INNER JOIN PriceTB ON ProductTB.ProductID = PriceTB.ProductID
 INNER JOIN ContentTB ON ProductTB.ProductID = ContentTB.ProductID
 WHERE MasterTB.Enabled = 'Yes'
 AND ContentTB.Language = 'ENG' AND ContentTB.Site = 'www'
 AND PriceTB.Price  0
 AND PriceTB.Quantity  0
 GROUP BY ProductTB.ProductID, MasterTB.MasterID
 ORDER BY ProductTB.MarkUp DESC

 Basically each product is listed in the master table, and can have a number
 of suppliers linked to it (ProductTB).  The query above will show me a list
 of products for all suppliers for a particular product.  However I want to
 be able to show the lowest price product from just the lowest priced
 supplier.

 Any ideas ?

 Thanks,
 Neil


You are actually going to need at least 2 queries, which will be
nested. You need to first find the lowest price, then figure out which
supplier has that lowest price. If more than one supplier has the same
lowest price, you won't be able to do it in a single query and will
likely need to do post processing.
Just an example to point you in the right direction. First, get the
lowest price for the product(s) you are interested in:
SELECT PriceTB.ProductID, MIN(PriceTB.Price) As MinPrice
FROM PriceTB GROUP BY ProductID

Then you use that as a virtual table (MinPriceList) to join on the
supplier with that price for that product.
SELECT ProductTB.Supplier, MinPriceList.ProductID,
MinPriceList.MinPrice As PriceDiscounts
FROM MasterTB
INNER JOIN LookupTB ON LookupTB.MasterID = MasterTB.MasterID
INNER JOIN ProductTB ON LookupTB.ProductID = ProductTB.ProductID
INNER JOIN PriceTB ON ProductTB.ProductID = PriceTB.ProductID
INNER JOIN (
SELECT PriceTB.ProductID, MIN(PriceTB.Price) As MinPrice
FROM PriceTB GROUP BY ProductID
) AS MinPriceList ON PriceTB.ProductID=MinPriceList.ProductID AND
PriceTB.Price=MinPriceList.MinPrice
INNER JOIN ...

Basically what you are doing is creating a virtual table on the fly
based on a select statement. It's sort of like a temp table, but
without having to go through the creation and management of it. Treat
the virtual table created from the query as if it was a regular table.
As I mentioned, this will break if more than one supplier has the same
price. You'll get an arbitrary supplier ID out of those with the
minimum price. This is because there is no unique value to join on.

Hope that points you in the right direction.

Brent Baisley

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Average Rating, like Netflix

2008-12-22 Thread Brent Baisley
The ratings field would be NULL. You could also add a count in your
query to tell how many ratings there were. If count is 0, you know
there are no ratings.
SELECT count(ratings.rating_id) AS rate_count, ...

Brent Baisley

On Mon, Dec 22, 2008 at 12:39 PM, Brian Dunning br...@briandunning.com wrote:
 If I did the left join to include movies with no ratings, how would I tell
 if it had no ratings? If I used mysql_fetch_array in PHP, would
 $result['rating'] == 0, or '', or NULL, or what?

 On Dec 22, 2008, at 9:29 AM, Brent Baisley wrote:

 The biggest problem is your join condition (and no group by). It's
 fine for MySQLv4, but things have changed in v5. You should start
 getting in the habit of moving the join filters from the WHERE clause
 to a specific JOIN condition. Use the WHERE clause to perform filters
 after the join occurs.
 For example:
 SELECT movies.*, AVG(ratings.rating) FROM movies
 INNER JOIN ratings ON movies.movie_id=ratings.movie_id
 GROUP BY movies.movie_id

 Change the INNER JOIN to a LEFT JOIN if you want all movies, even
 those with no ratings.

 Brent Baisley


 On Mon, Dec 22, 2008 at 11:13 AM, Brian Dunning br...@briandunning.com
 wrote:

 Pretend I'm Netflix and I want to return a list of found movies,
 including
 the average of related ratings for each movie. Something like this:

 select movies.*, average(ratings.rating) from movies, ratings where
 movies.movie_id=ratings.movie_id

 I'm sure that's wrong in about 10 different ways but hopefully you get
 what
 I'm trying to do. Thanks.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Uptimize: join table on if()

2008-12-08 Thread Brent Baisley
On Mon, Dec 8, 2008 at 7:14 AM, Mogens Melander [EMAIL PROTECTED] wrote:
 Hi list

 I have this statement that really needs optimizing. The result is
 about 5500 rows, and it runs for about 3-5 minutes. If I do the
 same in a PHP script (outer loop, inner loop) it runs in 20 sec.

 The idea is that data in tdata might be shared between 2 or more
 records in main. The main.parent field is a pointer to main.code,
 so if main.parent is positive, i need to retrieve data linked to parent.

 Did i miss something?

 select m.code, m.parent, t.data
  from main m
  left join tdata t
  on ( if( m.parent > 0, t.code = m.parent, t.code = m.code ) and 
 t.country='dk' )
  where m.active = 'on' and m.tdataon = 'on'
  order by m.code;

 CREATE TABLE  `main` (
  `code` int(10) unsigned NOT NULL default '0',
  `parent` int(10) unsigned NOT NULL default '0',
  `active` varchar(2) NOT NULL,
  `tdataon` varchar(2) NOT NULL default '',
  PRIMARY KEY  (`code`),
  KEY `parent` (`parent`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

 CREATE TABLE  `data` (
  `code` int(10) unsigned NOT NULL default '0',
  `country` varchar(2) NOT NULL default '',
  `data` varchar(130) NOT NULL default '',
  PRIMARY KEY  (`code`,`country`),
  KEY `code` (`code`),
  KEY `country` (`country`)
 ) ENGINE=InnoDB DEFAULT CHARSET=latin1;

 --

What you are doing wrong is putting a condition on the JOIN. This
prevents MySQL from optimizing the query because it has to check every
record to determine the join. Put the condition in the field list you
pull. Alias the table you are joining on so you can join it twice, once
for each condition.

select m.code, m.parent,
if( m.parent > 0, t.data, t1.data ) AS data
 from main m
 left join tdata t
on  t.code = m.parent and t.country='dk'
left join tdata t1
on t1.code=m.code and t1.country='dk'
 where m.active = 'on' and m.tdataon = 'on'
 order by m.code;

That may not be completely correct. What you are doing is getting 2
copies of the data field and conditionally adding the one you need to
the retrieved record.

Brent Baisley

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Randomize by Score and DESC

2008-12-06 Thread Brent Baisley

That's because RAND() returns a decimal between 0 and 1 (e.g.
0.37689672), so it only breaks ties within equal scores. Try score*RAND().

Brent Baisley

On Nov 30, 2008, at 2:03 AM, sangprabv wrote:


Hi,
Thanks for the reply, I have tried it but I don't see the RAND()
working. The list is always the same. TIA


Willy


-Original Message-
From: mos [EMAIL PROTECTED]
To: sangprabv [EMAIL PROTECTED], mysql@lists.mysql.com
Subject: Re: Randomize by Score and DESC
Date: Sat, 29 Nov 2008 13:05:09 -0600
Mailer: QUALCOMM Windows Eudora Version 6.0.0.22

At 09:15 AM 11/29/2008, sangprabv wrote:

Hi,
I have a query like this SELECT * FROM table ORDER BY score DESC,
RAND() The aim is to randomize the result by score with descending
order. But it doesn't work. What missed here? TIA.


Willy



Willy,
   That is because you are ordering by Score then Rand. If Score is an
integer, try


SELECT * FROM table ORDER BY score+RAND() DESC



This will randomize the higher scores first, followed by the lower  
scores etc..




Mike



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SELECT through many databases

2008-11-21 Thread Brent Baisley
On Fri, Nov 21, 2008 at 12:44 PM, Andre Matos [EMAIL PROTECTED] wrote:
 Hello,

 Let's suppose I have 5 database: db1, db2, db3, db4, and db5. They all have
 the same structure but different data.

 I would like perform this select

 SELECT TaskDoneOn, TaskDoneBy
 FROM {database}
 WHERE TaskDoneOn IS NOT NULL

 and collect the data from all 5 database. However, I would like to avoid
 doing something like this:

 SELECT TaskDoneOn, TaskDoneBy
 FROM db1
 WHERE TaskDoneOn IS NOT NULL
 UNION
 SELECT TaskDoneOn, TaskDoneBy
 FROM db2
 WHERE TaskDoneOn IS NOT NULL
 UNION
 SELECT TaskDoneOn, TaskDoneBy
 FROM db3
 WHERE TaskDoneOn IS NOT NULL
 UNION
 SELECT TaskDoneOn, TaskDoneBy
 FROM db4
 WHERE TaskDoneOn IS NOT NULL
 UNION
 SELECT TaskDoneOn, TaskDoneBy
 FROM db5
 WHERE TaskDoneOn IS NOT NULL


 Today I have 5, but tomorrow I can have 50 and I don't want to forget any
 database.

 Thanks for any help.

 Andre

Create a MERGE table that is all those tables combined. Then you just
need to do 1 select as if it was one table. Just be sure to update the
MERGE table definition whenever you add a database.
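
A sketch, assuming every database has an identical MyISAM table named
tasks (MERGE requires identical MyISAM tables, but they may live in
different databases; the column definitions below are illustrative):

CREATE TABLE all_tasks (
TaskDoneOn DATETIME,
TaskDoneBy VARCHAR(50)
) ENGINE=MERGE UNION=(db1.tasks, db2.tasks, db3.tasks, db4.tasks, db5.tasks);

SELECT TaskDoneOn, TaskDoneBy FROM all_tasks WHERE TaskDoneOn IS NOT NULL;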

Brent Baisley

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: SELECT through many databases

2008-11-21 Thread Brent Baisley
A MERGE table is just a virtual table that is made up of other tables.
You treat it no differently than any other table, pretend it's a real
table.

You could even create multiple MERGE tables from different tables. A
good example is Q1, Q2, Q3, Q4, LatestQ, for quarterly information.
You just roll the underlying tables from one to the other as quarters
close, you never have to change your scripts or move data around.
Underlying tables can be members of more than one MERGE table.

You can even INSERT into them, just specify which underlying table the
new data is supposed to go into.

On Fri, Nov 21, 2008 at 2:12 PM, Andre Matos [EMAIL PROTECTED] wrote:
 Sounds interesting, but does the MERGER support complex SELECT statements
 and LEFT JOIN?

 Andre


 On 21-Nov-08, at 1:45 PM, Brent Baisley wrote:

 On Fri, Nov 21, 2008 at 12:44 PM, Andre Matos [EMAIL PROTECTED]
 wrote:

 Hello,

 Let's suppose I have 5 database: db1, db2, db3, db4, and db5. They all
 have
 the same structure but different data.

 I would like perform this select

 SELECT TaskDoneOn, TaskDoneBy
 FROM {database}
 WHERE TaskDoneOn IS NOT NULL

 and collect the data from all 5 database. However, I would like to avoid
 doing something like this:

 SELECT TaskDoneOn, TaskDoneBy
 FROM db1
 WHERE TaskDoneOn IS NOT NULL
 UNION
 SELECT TaskDoneOn, TaskDoneBy
 FROM db2
 WHERE TaskDoneOn IS NOT NULL
 UNION
 SELECT TaskDoneOn, TaskDoneBy
 FROM db3
 WHERE TaskDoneOn IS NOT NULL
 UNION
 SELECT TaskDoneOn, TaskDoneBy
 FROM db4
 WHERE TaskDoneOn IS NOT NULL
 UNION
 SELECT TaskDoneOn, TaskDoneBy
 FROM db5
 WHERE TaskDoneOn IS NOT NULL


 Today I have 5, but tomorrow I can have 50 and I don't want to forget any
 database.

 Thanks for any help.

 Andre

 Create a MERGE table that is all those tables combined. Then you just
 need to do one SELECT as if it were one table. Just be sure to update the
 MERGE table definition whenever you add a table.

 Brent Baisley

 --
 Dr. André Matos
 [EMAIL PROTECTED]





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: How to remove the duplicate values in my table!

2008-11-19 Thread Brent Baisley


On Nov 19, 2008, at 3:24 AM, jean claude babin wrote:


Hi,

I found the bug in my servlet. When I run my application it enters one
record into the database without duplicate values. Now I want to clean my
table by removing all duplicate rows. Any thoughts?


I assume you have a unique record identifier like an auto_increment
field? If not, add an auto_increment field; you have to have a
unique ID.


Assuming the deviceId field is what indicates a duplicate:
SELECT max(uniqueId) maxUid, deviceId, count(*) c FROM table GROUP BY  
deviceId HAVING c > 1


That will give you the highest unique Id of each duplicate, which is  
what you want to delete assuming you want to keep the first record. If  
you want to keep the latest, change it to min.


Then you want to join on that select so you can use it as your delete  
filter.

DELETE table FROM table JOIN (
SELECT max(uniqueId) maxUid, deviceId, count(*) c FROM table GROUP BY  
deviceId HAVING c > 1

) as dupSet ON dupSet.maxUid=table.uniqueId

That will delete one duplicate record for each duplicate group at a
time, so if you have 10 copies of the same duplicate, you need to run the
query 9 times. It wouldn't be too hard to add another subquery (i.e. a
LEFT JOIN on the dup select with WHERE table.uniqueId IS NULL) so you
can delete all duplicates in one shot; a sketch of that follows. This has
always been something I had to do very infrequently, so I never bothered
taking it further.
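Here is a minimal sketch of that one-shot variant, using the same
placeholder names as above ('table' stands in for your real table name):
keep the lowest uniqueId per deviceId and delete everything else.

DELETE t FROM table AS t
LEFT JOIN (
SELECT min(uniqueId) AS keepId FROM table GROUP BY deviceId
) AS keepers ON keepers.keepId = t.uniqueId
WHERE keepers.keepId IS NULL;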


Hope that helps!

Brent Baisley

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Overhead Issue

2008-11-19 Thread Brent Baisley
If you are frequently deleting many records, then that can cause  
fragmentation. But if you are deleting that many records, you usually  
are deleting old records. To work around this overhead, I usually  
use MERGE tables. On a periodic basis you create a new table, add it  
to the MERGE list while also removing the oldest table from the MERGE  
list. You still have all the data, but you've removed it from normal  
use with virtually no overhead.


Brent Baisley


On Nov 17, 2008, at 9:53 PM, Micah Stevens wrote:

I don't think this is indicative of a design issue. Some tables need  
data removed more often than others, however Moon's Father brings up  
an excellent point. If you CAN resolve this with a change in design,  
that would be the best solution of course.


-Micah

On 11/17/2008 06:50 PM, Moon's Father wrote:


Maybe your tables were not properly designed.

On Tue, Nov 18, 2008 at 10:35 AM, sangprabv [EMAIL PROTECTED]  
wrote:




Thanks for the reply. Does this overhead reduce performance? And are
there any tips to avoid this overhead? TIA.


WM


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]










Re: Overhead Issue

2008-11-17 Thread Brent Baisley
On Mon, Nov 17, 2008 at 7:56 PM, sangprabv [EMAIL PROTECTED] wrote:
 Hi,
 I just want to know what things cause table/db overhead, because my
 tables always get overhead problems and I must run an OPTIMIZE query
 every morning. Is there any other solution? TIA.


 Willy

What is happening that you feel you need to run Optimize? You may just
not have your settings optimized. I've run tables with many millions
of records without having to run optimize.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Most efficient way of handling a large dataset

2008-10-24 Thread Brent Baisley
On Fri, Oct 24, 2008 at 6:59 AM, Mark Goodge [EMAIL PROTECTED] wrote:
 I'd appreciate some advice on how best to handle a biggish dataset
 consisting of around 5 million lines. At the moment, I have a single table
 consisting of four fields and one primary key:

 partcode varchar(20)
 region varchar(10)
 location varchar(50)
 qty int(11)
 PRIMARY KEY (partcode, region, location)

 The biggest variable is partcode, with around 80,000 distinct values. For
 statistical purposes, I need to be able to select a sum(qty) based on the
 other three fields (eg, select sum(qty) from mytable where partcode ='x'
 and region = 'y' and location = 'z') as well as generating a list of
 partcodes and total quantities in each region and location (eg, select
 sum(qty), partcode from mytable where region = 'y' and location = 'z' group
 by partcode).

 The selection is done via a web-based interface. Unfortunately, it's too
 slow. So I want to be able to optimise it for faster access. Speed of
 updating is less crucial, as it isn't updated in real-time - the table gets
 updated by a nightly batch job that runs outside normal working hours (and,
 apart from the rare occasion when a location is added or removed, the only
 thing that changes is the value in qty).

 Does anyone have any suggestions? My initial thought is to replace the
 region and location varchar fields with int fields keyed to a separate list
 of region and location names. Would that help, or is there a better way?

 Mark
 --
 http://mark.goodge.co.uk - my pointless blog
 http://www.good-stuff.co.uk - my less pointless stuff

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Your first query uses an index, so that should be fairly quick. Your
second query does not because your only index starts with partcode,
but you are not searching on partcode. Add an index on
region+location. That should speed the second query up considerably.
If you want to normalize your data, you should replace region and
location with ids that link to another table. While that will speed
up searches, the speed improvement likely won't be noticeable for the
searches you listed.

Make sure query cache is enabled. That will help a lot since the
result of the search will be cached until the table changes.
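Concretely, for the table described above:

ALTER TABLE mytable ADD INDEX region_location (region, location);
SHOW VARIABLES LIKE 'query_cache%';  -- confirm the cache is enabled and sized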

Brent Baisley

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Confusion over query stratergy

2008-10-17 Thread Brent Baisley
Why are you creating a subquery/derived table?

Just change your limit to 1,2
 ORDER BY updates.AcctSessionTime DESC LIMIT 1,2

Like you did in the outer query.

Brent

On Fri, Oct 17, 2008 at 5:12 AM, Ian Christian [EMAIL PROTECTED] wrote:
 Hi all,

 I'm trying to work out the difference in a field between the last 2
 updates in an updates table.   I'm doing this as shown below:

 mysql> SELECT
     ->   (@in - AcctInputOctets) AS AcctInputOctets,
     ->   (@out - AcctOutputOctets) AS AcctOutputOctets,
     ->   (@in := AcctInputOctets),
     ->   (@out := AcctOutputOctets)
     -> FROM updates
     -> WHERE acctuniqueid = '4b9fe4a361344536'
     -> ORDER BY updates.AcctSessionTime DESC LIMIT 2
     -> ;
 +----------------------+----------------------+--------------------------+----------------------------+
 | AcctInputOctets      | AcctOutputOctets     | (@in := AcctInputOctets) | (@out := AcctOutputOctets) |
 +----------------------+----------------------+--------------------------+----------------------------+
 | 18446744073654284768 | 18446744073171813223 |                 55266848 |                  537738393 |
 |                 9508 |                18620 |                 55257340 |                  537719773 |
 +----------------------+----------------------+--------------------------+----------------------------+
 2 rows in set (0.02 sec)

 mysql> explain (query above)
 *************************** 1. row ***************************
            id: 1
   select_type: SIMPLE
         table: updates
          type: ref
 possible_keys: AcctUniqueID
           key: AcctUniqueID
       key_len: 34
           ref: const
          rows: 327
         Extra: Using where; Using filesort
 1 row in set (0.00 sec)


 As can be seen, this query uses a key, and runs well.  However, I only
 require the 2nd row of that dataset.  I couldn't figure out a better
 way of doing it than this:

 mysql> SELECT AcctInputOctets, AcctOutputOctets FROM
     ->   (SELECT
     ->     (@in - AcctInputOctets) AS AcctInputOctets,
     ->     (@out - AcctOutputOctets) AS AcctOutputOctets,
     ->     (@in := AcctInputOctets),
     ->     (@out := AcctOutputOctets)
     ->   FROM updates
     ->   WHERE acctuniqueid = '4b9fe4a361344536'
     ->   ORDER BY updates.AcctSessionTime DESC LIMIT 2
     ->   ) AS t1 LIMIT 1,2
     -> ;
 +-----------------+------------------+
 | AcctInputOctets | AcctOutputOctets |
 +-----------------+------------------+
 |            9508 |            18620 |
 +-----------------+------------------+
 1 row in set (0.02 sec)


 This does exactly what I want, but to me feels wrong, I think I'm
 missing a trick to doing this 'the right way'.  Also, look at how the
 query runs:


 mysql> explain SELECT AcctInputOctets, AcctOutputOctets FROM
     ->   (SELECT
     ->     (@in - AcctInputOctets) AS AcctInputOctets,
     ->     (@out - AcctOutputOctets) AS AcctOutputOctets,
     ->     (@in := AcctInputOctets),
     ->     (@out := AcctOutputOctets)
     ->   FROM updates
     ->   WHERE acctuniqueid = '4b9fe4a361344536'
     ->   ORDER BY updates.AcctSessionTime DESC LIMIT 2
     ->   ) AS t1 LIMIT 1,2
     -> \G
 *************************** 1. row ***************************
            id: 1
   select_type: PRIMARY
         table: <derived2>
          type: ALL
 possible_keys: NULL
           key: NULL
       key_len: NULL
           ref: NULL
          rows: 2
         Extra:
 *************************** 2. row ***************************
            id: 2
   select_type: DERIVED
         table: updates
          type: ALL
 possible_keys: AcctUniqueID
           key: AcctUniqueID
       key_len: 34
           ref:
          rows: 28717165
         Extra: Using filesort
 2 rows in set (0.02 sec)


 Apparently, it's doing a full table scan over all 29 million records.
 Whilst this query still appears to run fast, surely it's not right that
 a full table scan is needed?

 Thanks,

 Ian




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why are joins between tables in dif db so slow?

2008-10-03 Thread Brent Baisley
Both times seem a bit long, even if your database has millions of rows.
Can you post an EXPLAIN of your query? That they are in different
databases should have minimal effect on your query.


Brent

On Oct 3, 2008, at 12:14 PM, mos wrote:

I have two indexed MyISAM tables, each in a separate database. If I  
do a left join on the two tables, it takes 2 minutes to return the  
5,000 rows. The same join on similar tables in the same database  
would take 5-10 seconds. Both databases are on the same drive. So  
why is it 10x slower when the other table is in another database?


TIA

Mike


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: if count

2008-09-23 Thread Brent Baisley
It doesn't work because you are counting values, and an empty string is
still a value. Perhaps you are trying to SUM instead? If you are
really looking for a count, you should also use SUM.


sum(if(a.Type = 'Signature Based Return', 1, 0))

That will return a count of those records where a.Type = 'Signature
Based Return'.
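As a complete statement it might look like this (the table name and the
grouping column are made up for illustration):

SELECT a.office, SUM(IF(a.Type = 'Signature Based Return', 1, 0)) AS sig_returns
FROM accounts a
GROUP BY a.office
ORDER BY sig_returns DESC;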


Brent

On Sep 23, 2008, at 9:29 PM, kalin m wrote:



hi all...

can somebody explain why a conditional count like this one doesn't
work:


count(if(a.Type = 'Signature Based Return', a.amount, '')) ... group by ... order by ...

or

if(a.Type = 'Signature Based Return', count(a.amount), '') ... group by ... order by ...



thanks...

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: too many connections

2008-09-19 Thread Brent Baisley
One thing a lot of people miss is that the web server's KeepAliveTimeout
setting has an effect on pconnect. Apache will keep the thread
handling that client open for the KeepAliveTimeout duration, which
will keep the database connection open for reuse.
You can lower your KeepAliveTimeout or not use pconnect.
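For example, in httpd.conf (the value is illustrative; tune it to your
traffic):

KeepAlive On
KeepAliveTimeout 2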

Brent Baisley

On Fri, Sep 19, 2008 at 3:51 PM, Jaime Fuentes [EMAIL PROTECTED] wrote:
 You have to use mysql 64bits on S.O. 64bits

 --Mensaje original--
 De: Martin Gainty
 Para: Kinney, Gail
 Para: 'mysql@lists.mysql.com'
 Enviado: 19 Sep 2008 10:51
 Asunto: RE: too many connections


 in my.cnf configuration file try upping the number of connections
 max_connections=3072
 to
 max_connections=6144

 Martin
 __
 Disclaimer and confidentiality note
 Everything in this e-mail and any attachments relates to the official 
 business of Sender. This transmission is of a confidential nature and Sender 
 does not endorse distribution to any party other than intended recipient. 
 Sender does not necessarily endorse content contained within this 
 transmission.


 From: [EMAIL PROTECTED]
 To: mysql@lists.mysql.com
 Date: Fri, 19 Sep 2008 09:33:58 -0600
 Subject: too many connections

 Hello,  We have MySQL 4.0.14 and have just gotten an error:  too many 
 connections.  we can't connect to our site using MySQL admin.  Please help.

 Gail Kinney
 Webmaster UC Denver
 [EMAIL PROTECTED]mailto:[EMAIL PROTECTED]


 _
 Want to do more with Windows Live? Learn 10 hidden secrets from Jamie.
 http://windowslive.com/connect/post/jamiethomson.spaces.live.com-Blog-cns!550F681DAD532637!5295.entry?ocid=TXT_TAGLM_WL_domore_092008


 Enviado desde mi  BlackBerry de Claro.

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Appropriate Design

2008-09-17 Thread Brent Baisley
On Wed, Sep 17, 2008 at 7:19 AM, Kevin Hunter [EMAIL PROTECTED] wrote:
 At 5:46am -0400 on Wed, 17 Sep 2008, Jeffrey Santos wrote:
 I'm developing an application
 that will require information from various sources.  Since what
 might be considered required information about those sources will
 vary (occasionally dramatically), I'm unsure as to the correct
 design for the database tables.  In other words I'm left with (I
 believe) two choices:

 1.   Create a source table that will contain, among other things,
 every possible field for a source type and sort out what is
 considered required information at the application level.

 If possible, maybe a general conventional wisdom statement
 would greatly help my education on these matters!

 This is a common pitfall of noobs to data modeling.  The idea is to try
 to think of everything at the forefront, which is almost always an
 impossible task, or to think of a minimal case to jump start the coding
 process, and then add model components later as necessary.

 The second pitfall is keeping model logic in the application.  This
 violates two similar principles: Single-Point-of-Authority and
 Don't-Repeat-Yourself.

 You are using a database so that it can maintain the structure and
 integrity of your data.  There is absolutely no other compelling reason
 to use a DB.  (If you didn't need integrity and structure, you'd use a
 simpler and mucho faster flat file.)  Let the DB do its job and be the
 single-point-of-authority.  The application should certainly do things
 the right way, putting data where it needs to go, but it should not be
 the application's /responsibility/ to keep data integrity and structure.
  If you rely on your application to maintain your structure, you presume
 that your programmers are perfect and will think of every little detail.
  (They're not and they won't.  I promise.)

 As a general technology, databases receive a lot of work so that
 applications developers don't have to sweat the little details.  Like
 making sure that every user in a table has a last name.  Like making
 sure the user typed a 0 instead of an O.  Like defining constraints so
 that developers don't have to make sure an account has enough money to
 make a withdraw.  All they need know is that it didn't go through.

 The other principle of DRY also holds true.  At the point you have your
 application try to maintain data constraints, you will inevitably have
 lots of repeated or similar code to maintain similar data cases.  That's
 a different kind of nightmare.  Similarly, with the DB, it's silly to
 define multiple columns for similar data.  That's spreadsheet think.
 Like col_toyota, col_ford, col_chevy, col_lexus, col_buick.  No.  Make a
 single column as a foreign key to another table.

 Some keywords to use with Google:

 normalize
 normalization
 foreign keys
 foreign key constraints
 innodb

 Wikipedia is often a good starting point.

 Once you've learned the concept, I can't stress enough that you should
 normalize, normalize, normalize.

 2.   Create a few source tables detailing the required information about
 each source type.  The only way I can think of doing this is something along
 the lines of:

 and then pull only the appropriate data from the database at query time
 using JOINs.

 This is closer to a better approach.  That said, be more specific about
 the problem you're trying to solve and folks may be better able to
 provide useful advices.

 Kevin


In cases like these, I go vertical instead of horizontal. Meaning
don't use columns, use records. Using columns will force you to modify
the table structure every time a new data element comes in, which will
probably be often in this case.
Think of fields as labels for your data. If you create a table with 2
fields (label, value), you can have unlimited fields and add new
ones at will because they are just new records with a different label.
Depending on your data, you may want to have multiple value fields
for different data types (i.e. number, text, date).

It's harder to program, but it makes things very flexible. Indexing
every item is very feasible since you are not creating an index for
every field, just label+value. The table will grow quickly as far as
number of records, but MySQL handles millions of records without a
problem.
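A minimal sketch of that layout (all names hypothetical; one value column
per data type, indexed on label+value):

CREATE TABLE source_attributes (
  source_id INT NOT NULL,
  label VARCHAR(64) NOT NULL,
  value_text TEXT,
  value_num DECIMAL(20,6),
  value_date DATETIME,
  PRIMARY KEY (source_id, label),
  KEY label_value (label, value_text(32))
);

INSERT INTO source_attributes (source_id, label, value_text)
VALUES (1, 'publisher', 'Acme Press');  -- a new "field" is just a new row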

Hope that helps

Brent Baisley

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: DATATYPES

2008-09-09 Thread Brent Baisley
You pretty much answered your own question. A char will always use the
same amount of space (the max size), regardless of how little data you
put in it. A varchar will only use enough space to store the data, so
the amount of space used for each record will be different. You can
also specify a max size for varchar. Text is like varchar, but with a
fixed max size of 65,000 characters.

Brent Baisley

On Tue, Sep 9, 2008 at 8:24 AM, Krishna Chandra Prajapati
[EMAIL PROTECTED] wrote:
 Hi,

 I would like to know the difference between char, varchar and text.

 char limit 255 character fixed length
 varchar limit 65,000 character variable length
 text limit 65,000 character variable length.

 --
 Krishna Chandra Prajapati


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Full text search and highlight results

2008-09-08 Thread Brent Baisley
MySQL has no idea how you are presenting the data (html, rtf, etc.),
so it couldn't highlight the words for you. It shouldn't really be that
tricky using grep and PHP.


Brent

On Sep 8, 2008, at 10:58 AM, Fco. Mario Barcala Rodríguez wrote:


Hi all:

I was reading the documentation and searching the mail archives but I
didn't find a solution to the following question:

Is there any way to highlight results from a full-text search? I know
some tricky methods using PHP but I want to know if mysql (5.0 or 5.1
versions) offers some method or function to do this.

I want to write the keywords in context (KWIC) in boldface and some
previous and following words around them in normal face.

Thanks in advance,

 Mario Barcala


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: Wierd INSERT ... SELECT syntax problem

2008-09-06 Thread Brent Baisley
Well, for your simple example, you can use query variables to add the  
counters.

SET @cntr:=0, @lastVal:='A';
INSERT INTO tableB (LOC, DATA)
SELECT CONCAT(LOC, CONCAT( IF(@lastVal=LOC, @cntr:=@cntr+1, @cntr:=0),
IF(@lastVal:=LOC,'','') )) AS LOC, CONCAT(DATA, @cntr)
FROM tableA ORDER BY LOC;


That should add a sequential number to LOC and DATA that will reset to  
0 whenever the value of LOC changes. Some of the IFs in there are just  
to suppress output of variable assignment.


Hope that helps

Brent Baisley


On Sep 5, 2008, at 5:44 PM, Dan Tappin wrote:

I have an existing data set - here is an example (the real one is  
more complex than this)


LOC DATA
--- ----
A   1
B   2
C   3
D   4
E   5
F   6
...

and I am looking to run some sort of INSERT ... SELECT on this to  
make a new table like this:


LOC DATA
--- ----
A0  10
A1  11
A2  12
A3  13
B0  20
B1  21
B2  22
B3  23
C0  30
C1  31
C2  32
C3  33
D0  40
D1  41
D2  42
D3  43
E0  50
E1  51
E2  52
E3  53
F0  60
F1  61
F2  62
F3  63

I basically want to take the data from each row, perform n number of  
operations on it and insert it into a new table.  I could make a PHP  
script that does this but I figured there had to be a better way.


Any ideas?

Dan

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: use of wildcards or regular expressions in IFNULL, how to create a view that substitutes NULL by 0?

2008-09-06 Thread Brent Baisley
There is no way that I know of to reference every field in a table  
without using a stored procedure. If you really need to do something  
like that, and you will be adding new columns frequently, then it's  
most likely an indication that your table structure is not normalized.  
Those columns should probably be records with a column indicating what  
type of data it is.
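That said, if hand-maintaining the IFNULL list is the real pain, one
workaround sketch is to generate the column list from information_schema
(MySQL 5.0+) and paste it into the CREATE VIEW whenever the table changes
(the schema and table names are placeholders):

SELECT GROUP_CONCAT(CONCAT('IFNULL(`', COLUMN_NAME, '`,0) AS `', COLUMN_NAME, '`'))
FROM information_schema.COLUMNS
WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';
-- mind group_concat_max_len for very wide tables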


Brent Baisley


On Sep 4, 2008, at 5:11 AM, drflxms wrote:


Dear MySQL specialists,

this is a MySQL-newbie question: I want to create a view of a table,
where all NULL-values are substituted by 0. Therefore I tried:

SELECT *,
IFNULL(*,0)
FROM table

Unfortunately IFNULL seems  not to accept any wildcards like * as
placeholder for the column-name. REGEXP didn't work either - well  
maybe

I made a mistake in the syntax?

Everything works fine, when I write an IFNULL-command for every column
in my table:

SELECT *,
IFNULL(b1,0) AS b1,
IFNULL(b2,0) AS b2,
IFNULL(b3,0) AS b3,
...

But besides causing a lot of writing work, this solution has the
problem that it doesn't reflect new columns of the original table in the
view, as there is no corresponding IFNULL-command in the view. This is
not acceptable in my case.

So is there a way to use wildcards/regular expressions in IFNULL? Is
there another way to create a view that substitutes every NULL-value  
with 0?

I'd appreciate any kind of help very much!

Kind regards and greetings from Munich,
Felix

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: innodb/myisam performance issues

2008-09-06 Thread Brent Baisley
Hey Josh, I came in really late on this discussion. It's been my  
experience that InnoDB is great until the size of the database/indexes  
surpasses the amount of memory you can give to InnoDB for  caching.  
The performance drop off is pretty quick and dramatic. I've seen this  
happen on live tables that performed great one day and then horrible  
the next, although this was on a table of about 20 million rows, not 130M.
Based on your table size, you would need to be running a 64-bit system  
and 64-bit mysql so you could allocate enough memory to InnoDB. You  
don't see the system swapping because InnoDB is working within its
defined memory allocation limits. Using EXPLAIN on your queries  
probably isn't showing you anything helpful because MySQL is using the  
proper indexes, but InnoDB can't fit the entire index in memory. My  
best guess is that InnoDB is loading part of the index, searching,  
loading the next part, searching, etc. Which is why you don't see  
consistent high IO or CPU. If you run vmstat 1, that may show you that  
IO is occurring, followed by CPU, then back to IO.
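A quick way to sanity-check that theory, assuming you're on 5.0 with
information_schema available: compare each big InnoDB table's data+index
size against innodb_buffer_pool_size.

SELECT table_name,
  ROUND((data_length + index_length)/1024/1024/1024, 1) AS total_gb
FROM information_schema.TABLES
WHERE engine = 'InnoDB'
ORDER BY total_gb DESC;

If the big table's total_gb is well above the buffer pool size, the
cache-miss explanation fits.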


For very large tables I stick with MyISAM and use MERGE tables if they  
are applicable.


Hope that helps or points you in the right direction.

Brent Baisley


On Sep 4, 2008, at 4:26 PM, Josh Miller wrote:


Good afternoon,

I have recently converted a large table from MyISAM to InnoDB and am  
experiencing severe performance issues because of it.  HTTP response  
times have gone from avg .25 seconds to avg 2-3 seconds.  Details  
follow:


PHP/MySQL website, no memcached, 3 web nodes that interact with DB,  
one that serves images, one master DB that serves all reads/writes,  
backup DB that only serves for backup/failover at this time (app  
being changed to split reads/writes, not yet).


The one table that I converted is 130M rows, around 10GB data MyISAM  
to 22GB InnoDB.  There are around 110 tables on the DB total.



My.cnf abbreviated settings:

[mysqld]
port  = 3306
socket  = /tmp/mysql.sock
skip-locking
key_buffer= 3G
sort_buffer_size  = 45M
max_allowed_packet  = 16M
table_cache = 2048

tmp_table_size= 512M
max_heap_table_size = 512M

myisam_sort_buffer_size = 512M
myisam_max_sort_file_size = 10G
myisam_repair_threads   = 1
thread_cache_size   = 300

query_cache_type  = 1
query_cache_limit = 1M
query_cache_size  = 600M

thread_concurrency  = 8
max_connections   = 2048
sync_binlog = 1

innodb_buffer_pool_size = 14G
innodb_log_file_size  = 20M
innodb_flush_log_at_trx_commit=1
innodb_flush_method = O_DIRECT
skip-innodb-doublewrite
innodb_support_xa = 1
innodb_autoextend_increment = 16
innodb_data_file_path   = ibdata1:40G:autoextend

We're seeing a significantly higher percentage of IO wait on the  
system,  averaging 20% now with the majority of that being user IO.   
The system is not swapping at all.


Any ideas for what to check or modify to increase the performance  
here and let MyISAM and InnoDB play better together?  The plan is to  
convert all tables to InnoDB which does not seem like a great idea  
at this point, we're considering moving back to MyISAM.


Thanks!
Josh Miller, RHCE

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: Large Query Question.

2008-09-03 Thread Brent Baisley
That's a lot of data to return; make sure you factor in data load and
transfer time. You may try breaking your query into smaller parts and
recombining the results in a scripting language. If you are searching
on a range (e.g. a date range), break the range into smaller parts and
run multiple queries.
Divide and conquer; it will scale better.

Brent Baisley

On Wed, Sep 3, 2008 at 1:05 PM, Jim Leavitt [EMAIL PROTECTED] wrote:
 Greetings List,

 We have a medium-large size database application which we are trying to
 optimize and I have a few questions.

 Server Specs
 1 Dual Core 2.6 Ghz
 2GB Ram

 Database Specs
 51 Tables
 Min 10 rows, Max 100 rows
 Total size approx 2GB

 My.cnf
 [mysqld]
 set-variable=local-infile=0
 log-slow-queries=slow-queries.log
 datadir=/var/lib/mysql
 socket=/var/lib/mysql/mysql.sock
 old_passwords=1
 key_buffer = 512M
 max_allowed_packet=4M
 sort_buffer_size = 512M
 read_buffer_size = 512M
 read_rnd_buffer_size = 256M
 record_buffer = 256M
 myisam_sort_buffer_size = 512M
 thread_cache = 128
 query_cache_limit = 1M
 query_cache_type = 1
 query_cache_size = 32M
 join_buffer = 512M
 table_cache = 512


 We are having trouble with certain queries which are returning anywhere from
 10 - 30 rows.  Total query time is taking approx 1 - 2 mins
 depending on load.  Is there anything in our conf file which could improve
 our performance?  Are there any hardware recommendations that could help us
 improve the speed?  Would more memory help us?  Any comments or
 recommendations are greatly appreciated.

 Thanks much.


 Jim Leavitt
 Developer
 Treefrog Interactive Inc. (www.treefrog.ca)
 Bringing the Internet to Life







-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Impossible WHERE in explain

2008-09-01 Thread Brent Baisley
It might be because you are comparing user_id to a string, when
the field type is a decimal. Drop the quotes around the user_id search
value and see if that works.
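That is, something like this (same values as the original query):

SELECT value FROM user_cookie WHERE user_id = 10538485 AND param = 'TIMEOUT';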


Brent Baisley

On Sep 1, 2008, at 3:59 PM, Krishna Chandra Prajapati wrote:


Hi,

In the query below, explain gives 'Impossible WHERE noticed after'.
What does this mean?

CREATE TABLE `user_cookie` (
 `user_id` decimal(22,0) NOT NULL default '0',
 `param` varchar(128) NOT NULL default '',
 `value` varchar(128) default NULL,
 PRIMARY KEY  (`user_id`,`param`),
   CONSTRAINT `fk_user_cookie` FOREIGN KEY (`user_id`) REFERENCES
`user_info` (`user_id`) ON DELETE NO ACTION ON UPDATE NO ACTION
) ENGINE=InnoDB DEFAULT CHARSET=latin1

  SELECT VALUE FROM user_cookie WHERE USER_ID = '10538485' AND PARAM = 'TIMEOUT';

id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra
 1 | SIMPLE      |       |      |               |     |         |     |      | Impossible WHERE noticed after reading const tables
--
Krishna Chandra Prajapati



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: another INNODB vs MYISAM question

2008-08-16 Thread Brent Baisley
First, databases do not have a table type, they are mainly just a  
logical grouping of tables. Mixing table types in a database is quite  
alright and is what you are supposed to do. I generally use MYISAM,  
but if I have a table with lots of activity (inserts, deletes,  
selects) or needs transaction support, I use InnoDB.


What you did was not only switch the default table type, but you  
disabled the InnoDB table type. As you may already know, MySQL's table  
types are different engines that are really plug-ins. You can disable  
those plug-ins if you like, which is what you did. Just re-enable the  
InnoDB stuff and you should be alright. You can leave your default  
engine as MYISAM and if you like, you can use ALTER TABLE to convert  
your InnoDB tables to MYISAM.
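For example (database and table names are placeholders):

SHOW TABLE STATUS FROM mydb;                 -- the Engine column shows which tables are InnoDB
ALTER TABLE mydb.some_table ENGINE=MyISAM;   -- repeat per InnoDB table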


--
Brent Baisley


On Aug 15, 2008, at 1:01 AM, [EMAIL PROTECTED] wrote:


Hello mysql,

As I have previously mentioned, I installed WAMPSERVER 2.0 on my
Windows XP pro box recently. It installed INNODB as the Default
Engine.

All of my legacy Databases are MYISAM and after the installation, I
copied them all into the DATA folder and everything worked, even
adding new tables etc. but the new stuff was INNODB. So I ended up
with some MYISAM databases that contained INNODB tables in them.

After a few weeks I got to thinking that mixing INNODB and MYISAM
might not be a good thing and switched the Default Engine to MYISAM in
my.ini file. I didn't just switch the default, I commented out all the
INNODB calls in the my.ini file as well.

As I half expected, all the databases to which I had added INNODB tables
failed when I tried to fire up the applications that used them.

Although I am not new to mysql, I have had a bit of MYISAM tunnel
vision with it so my question is, if I had just switched the default
engine and NOT disabled the INNODB calls in my.ini, would that have
prevented the problem? I restored all the MYISAM files and got
everything back working again.

I don't want to go through the lengthy reproduction exercise of
reinstalling everything to test the theory so if someone has had some
experience with this, I would appreciate hearing from them.

--
Best regards,
mikesz  mailto:[EMAIL PROTECTED]


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: Help with query to remove all records where foreign key doesn't have corresponding records

2008-08-13 Thread Brent Baisley

Just do a left join with the delete query.

DELETE feed_tag FROM feed_tag LEFT JOIN feed ON  
feed_tag.feed_id=feed.id WHERE feed.id IS NULL


That should do it. You can change DELETE feed_tag to SELECT and  
test it first.
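Spelled out, the test version would be:

SELECT feed_tag.* FROM feed_tag LEFT JOIN feed ON
feed_tag.feed_id=feed.id WHERE feed.id IS NULL;

If that returns exactly the orphaned rows you expect, swap the SELECT
feed_tag.* back to DELETE feed_tag.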


--
Brent Baisley


On Aug 13, 2008, at 4:51 PM, Daevid Vincent wrote:

I want to remove all records from 'feed_tag' where the feed_id  
foreign key

doesn't have any corresponding records in feed.

For instance I may have a record in feed_tag that is like (23, 10,  
4543,

'... (some date)').

Then lets say there is no record in feed that has a primary id key  
of 10.


I want that record (or usually records because of the 1 feed to many
feed_tag relationship) to be removed.

CREATE TABLE IF NOT EXISTS `feed` (
  `id` int(11) NOT NULL auto_increment,
  `title` varchar(100) default NULL,
  `url` varchar(255) default NULL,
  `host` varchar(100) default NULL,
  `type` varchar(100) default NULL,
  `status` char(1) default NULL,
  `total_stories` int(11) default '0',
  `created_at` datetime default NULL,
  `updated_at` datetime default NULL,
  PRIMARY KEY  (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;

CREATE TABLE IF NOT EXISTS `feed_tag` (
  `id` int(11) NOT NULL auto_increment,
  `feed_id` int(11) default NULL,
  `tag_id` int(11) default NULL,
  `created_at` datetime default NULL,
  PRIMARY KEY  (`id`),
  KEY `feed_tag_FI_1` (`feed_id`),
  KEY `feed_tag_FI_2` (`tag_id`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8 ;

As you can see the foreign key 'feed_id' is the issue here (ignore the
tag_id key).


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: Unique Id generation

2008-08-12 Thread Brent Baisley
Why would the auto_increment not work for you? The only case where you
would have a problem is if the last record was deleted before mysql
shutdown. If you are really concerned about this unique scenario,
insert a dummy record before shutdown to guard against it and delete
the dummy record after you start back up.
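A sketch of that guard (the table name is hypothetical, and the noted id
must be carried across the restart, e.g. in a shutdown script):

INSERT INTO mytable () VALUES ();      -- before shutdown: dummy row pins the counter
SELECT LAST_INSERT_ID();               -- note this id somewhere
-- after restart:
DELETE FROM mytable WHERE id = 12345;  -- the id noted above (illustrative value)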

--
Brent Baisley

On Tue, Aug 12, 2008 at 8:29 AM,  [EMAIL PROTECTED] wrote:
 Hi all,

 I try to generate a unique id for each row in a Mysql-InnoDB Table. Because 
 of many deletes I  can't use an auto_increment column.
 After a Mysql restart, the next value for an auto_increment-column  is 
 max(auto_increment-column)+1, and I need a really unique id.


 My first solution looks like this:
 I use a second table with an auto-increment-column, and add an insert trigger 
 to the first table.
 The insert trigger adds a row in the second table and uses  the 
 last_insert_id() to get the unique value.
 The (last) row in the second table will never be deleted.
 Can anybody confirm this solution?

 ###
 drop table unique_id_messages_1;
 create table unique_id_messages_1 (id bigint not null unique, subject text);

 drop table id_sequences_1;
 create table id_sequences_1 (id bigint not null primary key auto_increment);

 drop trigger trg_unique_id_messages_1;

 DELIMITER |

 create trigger trg_unique_id_messages_1 BEFORE INSERT ON unique_id_messages_1
 FOR EACH ROW BEGIN
  insert into id_sequences_1 values ();
  set NEW.id = (select last_insert_id());
 END;
 |
 DELIMITER ;

 insert into unique_id_messages_1 (subject) values (x1);
 insert into unique_id_messages_1 (subject) values (x2);
 insert into unique_id_messages_1 (subject) values (x3);
 insert into unique_id_messages_1 (subject) values (x4);
 select * from unique_id_messages_1;
 ###


 Thanks in advance

 Rudi



 --
 GMX startet ShortView.de. Hier findest Du Leute mit Deinen Interessen!
 Jetzt dabei sein: http://www.shortview.de/[EMAIL PROTECTED]




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: removing duplicate entries

2008-08-12 Thread Brent Baisley
You should be able to do it with the select you already have,
something like this:

delete ACCOUNTACTION from ACCOUNTACTION join (
select ACCOUNTACTION.ID from ACCOUNTACTION
where ACCOUNTACTION.ACTIONDATE like '2008-08-01 02:00%'
group by ACCOUNTACTION.ACCOUNT_ID
having (count(ACCOUNTACTION.ACCOUNT_ID) > 1 and ACCOUNTACTION.ID !=
min(ACCOUNTACTION.ID))
) as duprecs
ON ACCOUNTACTION.ID=duprecs.ID

Almost any select statement can be turned into a delete statement. The
tables named between DELETE and FROM will have records deleted, and the
tables after FROM will be used as a filter. You could actually delete
from multiple tables in the same query.

Hope that helps.

Brent Baisley

On Wed, Aug 6, 2008 at 4:31 AM, Magnus Smith
[EMAIL PROTECTED] wrote:
 I have the following two tables

 ACCOUNTACTION
 +---+--+--+-+-+---+
 | Field | Type | Null | Key | Default | Extra |
 +---+--+--+-+-+---+
 | ID| bigint(20)   | NO   | PRI | |   |
 | AccountActionType | varchar(31)  | YES  | | NULL|   |
 | DESCRIPTION   | varchar(255) | YES  | | NULL|   |
 | ACTIONDATE| datetime | YES  | | NULL|   |
 | ACCOUNT_ID| bigint(20)   | YES  | MUL | NULL|   |
 +---+--+--+-+-+---+

 and

 ACCOUNTPAYMENTACTION
 +---++--+-+-+---+
 | Field | Type   | Null | Key | Default | Extra |
 +---++--+-+-+---+
 | ID| bigint(20) | NO   | PRI | |   |
 | AMOUNTINPENCE | bigint(20) | YES  | | NULL|   |
 +---++--+-+-+---+

 ACCOUNTPAYMENTACTION shares the primary key with ACCOUNTACTION

 I need to remove duplicate entries that occured at a specific time in
 ACCOUNTACTION I then plan to remove the rows in ACCOUNTPAYMENTACTION
 that are no longer referenced in ACCOUNTACTION by using an outer join

 I can select the duplicate records in ACCOUNTACTION using

 select ACCOUNTACTION.ID from ACCOUNTACTION
 where ACCOUNTACTION.ACTIONDATE like '2008-08-01 02:00%'
 group by ACCOUNTACTION.ACCOUNT_ID
 having (count(ACCOUNTACTION.ACCOUNT_ID) > 1 and ACCOUNTACTION.ID !=
 min(ACCOUNTACTION.ID));

 I am trying to delete these records but am having trouble with the sql
 delete

 I tried the following but nothing happened

 delete ACCOUNTACTION where ACCOUNTACTION.ID in
 (select ACCOUNTACTION.ID from ACCOUNTACTION
 where ACCOUNTACTION.ACTIONDATE like '2008-08-01 02:00%'
 group by ACCOUNTACTION.ACCOUNT_ID
 having (count(ACCOUNTACTION.ACCOUNT_ID) > 1 and ACCOUNTACTION.ID !=
 min(ACCOUNTACTION.ID)));

 Can anyone help me?


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query optimization help

2008-08-12 Thread Brent Baisley
First, you might want to move the WHERE...t3.int_a = some integer  
condition into the join condition for t3.
You're not using anything from t4, so I'm not sure why you have that
table in your query.


You can suggest or force mysql to use an index if it's using the wrong  
one:

http://dev.mysql.com/doc/refman/5.0/en/index-hints.html

On very rare occasions I've had to do this. What's happening is that  
mysql is analyzing the information it has about the data and indexes  
and coming to the wrong conclusion, perhaps even opting for an entire  
table scan. You can run ANALYZE TABLE to force mysql to update the  
information it has about the data. This may actually solve your problem.


Try SHOW INDEX FROM t1 to see what data mysql has about the indexes.  
Sometimes the CARDINALITY (uniqueness) column will be null which can  
indicate a problem.
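Concretely, the sequence might be the following (the index name
int_a_string is hypothetical, and 5 stands in for your "some integer";
this also folds in the join-condition tip and drops t4 as suggested):

ANALYZE TABLE t1;
SHOW INDEX FROM t1;

SELECT DISTINCT t1.string FROM t1 FORCE INDEX (int_a_string)
LEFT JOIN t2 ON t1.string=t2.string
LEFT JOIN t3 ON t1.int_a=t3.int_a AND t3.int_a = 5
WHERE t1.string != '' AND t2.string IS NULL
ORDER BY t1.string ASC;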


Posting the result of your EXPLAIN will actually be helpful.

Hope that helps.

Brent Baisley


On Aug 11, 2008, at 8:26 AM, Jonathan Terhorst wrote:


I have this query:

SELECT DISTINCT t1.string FROM t1
LEFT JOIN t2 ON t1.string=t2.string
LEFT JOIN t3 ON t1.int_a=t3.int_a
LEFT JOIN t4 ON t1.int_b=t4.int_b
WHERE
t1.string != '' AND
t2.string IS NULL AND
t3.int_a = some integer
ORDER BY
t1.string ASC

This query is executing slower than it should. EXPLAIN has it using  
temporary and using filesort.
I have indexes on every column in the query, but I think the problem  
is the one-index-per-table limitation. According to EXPLAIN, there  
are two possible indices in use for t1 (int_a and string), but only  
int_a is being used. So I tried constructing a compound index on  
int_a and string. Although this new index appears in possible_keys,  
EXPLAIN still shows the key actually being used as int_a. I tried  
building the compound key in both orders and had the same results.  
How do I get mysql to use all possible keys on t1 when running the query?
Thanks!




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: Major Performance Degradation after replacing Hard Drive

2008-07-21 Thread Brent Baisley
Copying 5GB files shows you what kind of performance you would get for
working with say video, or anything with large contiguous files.
Database access tends to be random, so you want a drive with faster
random access, not streaming speed. Try copying thousands of small
files and compare the speeds.

One odd thing to check is if the old drive supports command queueing
and the new one does not. I assume they are both SATA drives. All SCSI
drives support command queueing and it can make a huge difference
depending on access patterns.

Brent

On Mon, Jul 21, 2008 at 8:42 AM, Phil [EMAIL PROTECTED] wrote:
 Nothing else running and no queries go against that table, it's effectively
 created just for this, so I would expect the table lock.

 Show (full) processlist has nothing but this running..

 Confirmed the faster disks by copying 5Gb files between two of the same type
 of disk (I installed two of them). 2xfaster than previous disks.

 my.cnf

 [mysqld]
 datadir=/var/lib/mysql
 socket=/var/lib/mysql/mysql.sock
 user=mysql
 # Default to using old password format for compatibility with mysql 3.x
 # clients (those using the mysqlclient10 compatibility package).
 old_passwords=1
 key_buffer_size=3072M
 max_allowed_packet=16M
 thread_stack=128K
 thread_cache_size=64
 thread_concurrency=8
 sort_buffer_size=32M
 join_buffer_size=3M
 read_buffer_size=16M
 query_cache_size=64M
 query_cache_limit=8M
 table_cache=300
 max_connections=500
 max_heap_table_size=1024M
 tmp_table_size=1024M
 myisam_sort_buffer_size=128M
 wait_timeout=3000

 set-variable=long_query_time=6
 log-slow-queries=/var/log/mysql-slow-queries.log

 8Gb Ram on this machine which is an intel quad core.

 Anything else I'm missing? It's *possible* a colleague had changed the
 my.cnf in the last few weeks and mysqld hadn't been restarted, but I don't
 see anything obvious in there and he can't remember.

 :(



 On Mon, Jul 21, 2008 at 7:08 AM, Ananda Kumar [EMAIL PROTECTED] wrote:

 Is there any other job running while the update is happening? Because
 myisam does a table-level lock. Please check show full processlist.

 Also run mysqladmin -uroot -pxxx status. This would write lock information
 into the machine.err log file. Check in this file also if there is any
 locking happening.

 Are you sure this disk is a FASTER disk than the earlier one?




 On 7/21/08, Phil [EMAIL PROTECTED] wrote:

 Hi All,


 Given a fairly simple table as follows

 CREATE TABLE `common_userx2` (
 `t_proj` char(6) default NULL,
 `t_id` int(11) NOT NULL default '0',
 `t_nick` varchar(50) character set latin1 collate latin1_bin NOT NULL
 default '',
 `t_country` varchar(50) NOT NULL default '',
 `t_cpid` varchar(50) NOT NULL default '',
 `t_url` varchar(50) default NULL,
 `t_create_date` int(11) default NULL,
 `t_create_time` bigint(20) NOT NULL,
 `t_has_profile` char(1) NOT NULL,
 `t_team0` int(11) default NULL,
 `t_metric1` double(20,6) NOT NULL default '0.00',
 `t_metric2` double NOT NULL default '0',
 `t_metric3` double NOT NULL default '0',
 `t_metric4` double default NULL,
 `t_active` char(1) NOT NULL default '',
 `t_rev_metric1` double(20,6) NOT NULL default '100.00',
 `t_projrank0` int(11) default NULL,
 `t_rev_metric2` double(20,6) NOT NULL default '100.00',
 `t_racrank0` int(11) default NULL,
 `t_teamrank0` int(11) default NULL,
 `t_countryrank0` int(11) default NULL,
 `t_createdaterank0` int(11) default NULL,
 PRIMARY KEY  (`t_id`),
 KEY `prank` (`t_rev_metric1`,`t_id`),
 KEY `trank` (`t_team0`,`t_rev_metric1`,`t_id`),
 KEY `countryrank` (`t_country`,`t_rev_metric1`,`t_id`),
 KEY `createdaterank` (`t_create_date`,`t_rev_metric1`,`t_id`),
 KEY `racrank` (`t_rev_metric2`,`t_id`)
 ) ENGINE=MyISAM DEFAULT CHARSET=latin1

 I have a ranking update statement as follows

 set @rank = 0;
 update common_userx2 set projrank0 = @rank:=@rank+1 order by
 t_rev_metric1,t_id;

 For the largest case this has close to 1M rows.

 For weeks it was taking around 10 seconds to do this. Yesterday I replaced
 the main data drive in the machine with a faster SATA Raptor drive. No
 problems occurred, but since then (and the subsequent reboot of the
 machine)
 this particular query is taking 45 minutes!

 I can't, for the life of me figure out why performance would be degraded
 so
 much. At first I thought perhaps it might be just disk/mysql caching but
 the
 performance has not increased any in subsequent runs.

 Any advice on where to look ?

 Phil

 --
 Help build our city at http://free-dc.myminicity.com !





 --
 Help build our city at http://free-dc.myminicity.com !


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Error with max and group by

2008-06-21 Thread Brent Baisley
Your query is just grabbing the max date within the group, but you are  
not specifying which record you should be pulling for the location, so  
it's using an arbitrary one within the group. In your query, there is  
no relation between max date and location.


What you need to do is find out what the max date is, and then find  
out what record that max date belongs to so you can pull the rest of  
the information from that record. You can do this by using a select on
a virtual (derived) table.


First, get the max date:
select name,max(acq_date) AS mx_acq_date from cust_full group by name;

Now you want to get the record associate with the matching name/max  
date, so you need to join the result of the above query with the same  
table:

select name, item_id, location, mx_acq_date from cust_full
join
(select name,max(acq_date) AS mx_acq_date from cust_full group by  
name) AS mx_cust_full
on cust_full.name=mx_cust_full.name AND  
cust_full.acq_date=mx_cust_full.mx_acq_date


The name+acq_date is going to be your unique string to join on. You're
finding out the max, then finding out which record is associated with
the max.


Brent Baisley
I write code.

On Jun 20, 2008, at 10:50 PM, Joe Pearl wrote:


Hi,

My sql is rusty but I'm trying to solve a problem and I'm getting a  
result that does not make sense.


The table is

mysql select * from cust_full;
+---+-+--++
| name  | item_id | location | acq_date   |
+---+-+--++
| Jim   |   1 | OH   | 2007-03-15 |
| Mary  |   2 | PA   | 2007-01-15 |
| Sally |   1 | OH   | 2007-03-15 |
| John  |   0 |  | -00-00 |
| Jim   |   3 | PA   | 2008-01-03 |
+---+-+--++

I want to get back only the most recent entry for each person and I  
don't care about the order.  I want the result to show Jim with the  
acq_date of 2008-01-03, Mary and Sally with the location and date  
for all of them.  However, when I run what I think should be the  
sql, I get:


mysql select name, item_id, location, max(acq_date) from cust_full  
group by name;

+---+-+--+---+
| name  | item_id | location | max(acq_date) |
+---+-+--+---+
| Jim   |   1 | OH   | 2008-01-03|
| John  |   0 |  | -00-00|
| Mary  |   2 | PA   | 2007-01-15|
| Sally |   1 | OH   | 2007-03-15|
+---+-+--+---+


Why am I getting the wrong location for Jim?  It should be PA.

This is on a Mac.

joe.
813.528.3859
My LinkedIn profile:  http://www.linkedin.com/in/joepearl

 We could learn a lot from crayons... Some are sharp, some are  
pretty and some are dull. Some have weird names, and all are  
different colors, but they all have to live in the same box.  -  
unknown











--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: why does left join gives more results?

2008-05-05 Thread Brent Baisley
If you are getting more results, I would guess that your users table
is not a list of unique users, but a list of user logins. If that is
the case, then it's your LEFT JOIN and the count(*) that is causing
you to get more results.
If a user logs in 5 times, but only has 1 post, you will get 5 records
for that user out of the LEFT JOIN. That's the way left joins work.
You're then doing a count on all the records, but you really just want
a count of the number of posts.
If all my assumptions are correct, then a quick fix for your query is
to change your count(*) to this:
count(DISTINCT posts.post_id) as counted

That will count the number of unique posts. I don't know what your
unique field name is for the posts table.

Brent Baisley
Systems Architect

On Sat, May 3, 2008 at 9:00 PM, Patrick Aljord [EMAIL PROTECTED] wrote:
 hey all,

  I have my query that counts posts per user:

  SELECT count(*) as counted, c.user_id FROM posts c  group by c.user_id
  having counted > 1 order by counted DESC LIMIT 20

  I wanted to add user login for each count so I did:

  SELECT count(*) as counted, u.login FROM posts c left join users u on
  posts.poster_id=u.id  group by c.user_id having counted > 1 order by
  counted DESC LIMIT 20

  but now I get more results.

  Any idea what I'm doing wrong?

  Thanks in advance

  Pat




-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Odd Results on Mysql LIMIT and ORDER BY

2008-04-26 Thread Brent Baisley
You may want to check on the version you are running. There have been
a few odd bugs in various MySQL versions in regard to limits and
order by filtering, although they usually involved joins and/or unions.
If you can't or don't want to upgrade your MySQL version, you can try
restructuring your query like this:


select * from (
select * from containers where upload_date > 1209208414 and category_id =
120 order by upload_date desc ) as filter
limit 175,25

Technically, it's the same query and should return the same results.  
It will be a little more intensive, since the inner query returns all  
records, then a limit is imposed.


Brent Baisley
Systems Architect


On Apr 26, 2008, at 7:22 AM, j's mysql general wrote:


Hi Guys,

Firstly, this is the only time I have ever encountered this problem  
and
searching archives or google shed no luck since yesterday so here I  
am .


I have a table described below:

mysql describe containers;
+--------------------+------------------------+------+-----+---------+----------------+
| Field              | Type                   | Null | Key | Default | Extra          |
+--------------------+------------------------+------+-----+---------+----------------+
| internal_id        | mediumint(20) unsigned | NO   | PRI | NULL    | auto_increment |
| category_id        | smallint(20) unsigned  | YES  | MUL | NULL    |                |
| user_id            | mediumint(20) unsigned | YES  | MUL | NULL    |                |
| parts_amount       | int(2)                 | NO   |     | 0       |                |
| file_name          | varchar(64)            | NO   | MUL |         |                |
| file_format        | varchar(5)             | NO   | MUL |         |                |
| file_info          | text                   | NO   |     | NULL    |                |
| file_description   | text                   | YES  |     | NULL    |                |
| admin_comments     | text                   | YES  |     | NULL    |                |
| is_approved        | tinyint(1)             | YES  | MUL | 0       |                |
| is_shared          | tinyint(1)             | YES  | MUL | 1       |                |
| is_deleted         | tinyint(1)             | YES  |     | 0       |                |
| upload_date        | bigint(10)             | NO   | MUL | 0       |                |
| downloads          | int(11)                | YES  | MUL | 0       |                |
| last_download_date | bigint(10)             | NO   | MUL | 0       |                |
| rating             | decimal(3,1)           | YES  | MUL | 0.0     |                |
| ftp_site           | smallint(6)            | NO   | MUL | 0       |                |
| total_votes        | int(11)                | NO   | MUL | NULL    |                |
| total_dnloads      | int(11)                | NO   |     | NULL    |                |
| total_votes_ave    | float                  | NO   |     | 0       |                |
| total_votes_sum    | int(11)                | NO   |     | NULL    |                |
| file_img           | varchar(120)           | NO   |     | NULL    |                |
| file_extended_info | text                   | NO   |     | NULL    |                |
| file_exist         | tinyint(4)             | NO   | MUL | 0       |                |
| post_options       | varchar(20)            | NO   |     | NULL    |                |
+--------------------+------------------------+------+-----+---------+----------------+
25 rows in set (0.00 sec)

mysql> select count(*) from containers;
+----------+
| count(*) |
+----------+
|     9504 |
+----------+
1 row in set (0.00 sec)

mysql> select count(*) from containers where upload_date > 1209208414 and
category_id = 120;
+----------+
| count(*) |
+----------+
|      795 |
+----------+
1 row in set (0.01 sec)

And I have queries like these:

select * from containers where upload_date > 1209208414 and category_id =
120 order by upload_date desc limit 0,25

and

select * from containers where upload_date > 1209208414 and category_id =
120 order by upload_date desc limit 175,25

These queries are dynamically generated and the output is paged for
browser display, so the second query means I am on the 8th page of 25
items per page.

The problem is, offsets 0...150 (LIMIT [0...150],25) will not return any
results while 175 onwards will. This happens only when I am filtering
category_id 120; all other categories do not yield this odd result. I have
no clue whatsoever what is going on; executing the query directly from the
server yields the same results. Now, if I omit either the order by or limit
clauses I get results all throughout.

Hope someone can shed some light.

Jervin



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Upgrading from 4.1 to 5.0

2008-04-23 Thread Brent Baisley
You may want to try replication. Setup your replication server as  
5.0.  That server gives you a chance to play to get things right  
without affecting the master server. You'll still need to do a dump to  
get the slave up to speed. Once you get everything right, you can  
switch over and the slave becomes the master.


Very simple in theory, a bit more complicated in practice.
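The mechanics on the 5.0 slave, once the dump is loaded, are roughly as
follows (host, user, password, and binlog coordinates are all
placeholders; use the coordinates recorded when the dump was taken):

CHANGE MASTER TO MASTER_HOST='master.example.com', MASTER_USER='repl',
  MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000012', MASTER_LOG_POS=4;
START SLAVE;
SHOW SLAVE STATUS\G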

Brent Baisley
Systems Architect


On Apr 23, 2008, at 2:28 PM, Paul Choi wrote:

Does anyone have experience with upgrading large databases (~500GB  
each)
from MySQL 4.1 to 5.0? The tables are in InnoDB format. We are using  
the

Community version.

I've read that it's recommended that you use mysqldump and then  
restore,

but this is not possible for us, as we cannot have our databases down
for long, nor can we have our tables locked while doing a dump.

I've tried doing the following steps:
  ibbackup --restore
  copy over mysql table dirs.
  set default char set to latin1 (or will default to utf8) in my.cnf
because that's the original char set in 4.1
  Upgrade only mysql database (user and privilege tables)
  mysqlcheck --check-upgrade --auto-repair mysql
  mysql_fix_privilege_tables

I've written a script to compare data between the original 4.1 and the
new 5.0. Looks like certain rows have different numerical data... so
this is not good.

I didn't want to do mysql_upgrade on all databases in this instance of
MySQL because that resulted in 2 things happening:
  1) Don't set default char set to latin1. Run mysql_upgrade
 Some rows had data truncated in certain columns.
  2) Set default char set to latin1. Run mysql_upgrade
 Copies to TMP table. Takes forever... This is unacceptable for  
us.


What is the recommended way to upgrade from 4.1 to 5.0? Or are we  
stuck

using 4.1 forever?

-Paul Choi
Plaxo, Inc.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: Replication for auto-increment tables

2008-04-20 Thread Brent Baisley
If you are getting duplicate ids on the slave, then something is not
set up correctly. The slave should have the same ids as the master. Just
because a field is auto-increment doesn't mean you can't enter your
own value. Think of auto-increment as a default value setting.


Just because a database is set up as a slave doesn't mean you can't use
it like a typical database. You can insert, delete, update, etc. just
like any other DB. Something or someone is likely adding records directly
to the slave, which is then generating its own auto-increment value.
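
One hedged safeguard, assuming writes on the slave should only ever come
from replication (the read_only option exists in 4.0.14 and later; it
blocks writes from accounts without the SUPER privilege, and replication
itself is unaffected):

  # in the slave's my.cnf
  [mysqld]
  read-only

  -- or at runtime:
  SET GLOBAL read_only = 1;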


Brent Baisley
Systems Architect


On Apr 18, 2008, at 11:36 AM, Chanchal James wrote:


Hi,

Has anyone got MySQL master-slave replication set up on v4.1? Were you
able to get tables with auto_increment to update properly on the slave?
If yes, please let me know. I need some advice on how to set it up to
work well. I get stuck on duplicate errors quite often, and those are not
real duplicates; it's just that the id on the slave was already occupied
by some previous entry!!

I see MySQL 5 has options like auto-increment-increment and
auto-increment-offset, but what about v4.1?

Any help is appreciated.
Thanks!



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Query Confusion!

2008-04-12 Thread Brent Baisley
Does the 'plus' query match more than 50% of the records? If so, MySQL
won't return anything, since a result set that common isn't considered relevant.
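
A quick way to test that theory (a sketch against the poster's booktitles
table; note the 50% threshold applies to natural-language full-text
searches, so comparing the MATCH with and without IN BOOLEAN MODE may also
be informative):

  mysql> SELECT COUNT(*) FROM booktitles;
  mysql> SELECT COUNT(*) FROM booktitles
      -> WHERE bookTitle LIKE '%plus%' OR author LIKE '%plus%';

If the second count is more than half of the first, the 50% rule would
explain the empty result.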


Brent Baisley
Systems Architect


On Apr 11, 2008, at 8:08 AM, Barry wrote:

I am confused (nothing new there); what I thought was a simple search is
proving not to be so.


Can anyone tell me why this query for the word 'plus':
mysql> SELECT *
    -> FROM booktitles
    -> WHERE MATCH (category , publisher , bookTitle , author)
    -> AGAINST (CONVERT( _utf8 'plus' USING latin1 )
    -> IN BOOLEAN MODE)
    -> ORDER BY category;
Empty set (0.00 sec)

returns an empty result set, when this query:

mysql> SELECT *
    -> FROM `booklist`.`booktitles`
    -> WHERE `id` LIKE '%plus%'
    -> OR `category` LIKE CONVERT( _utf8 '%plus%' USING latin1 ) COLLATE latin1_swedish_ci
    -> OR `publisher` LIKE CONVERT( _utf8 '%plus%' USING latin1 ) COLLATE latin1_swedish_ci
    -> OR `bookTitle` LIKE CONVERT( _utf8 '%plus%' USING latin1 ) COLLATE latin1_swedish_ci
    -> OR `author` LIKE CONVERT( _utf8 '%plus%' USING latin1 ) COLLATE latin1_swedish_ci
    -> OR `publishDate` LIKE '%plus%';
+-----+----------+---------------+----------------------------------------------------------------------------+--------------------------------------+-------------+
| id  | category | publisher     | bookTitle                                                                  | author                               | publishDate |
+-----+----------+---------------+----------------------------------------------------------------------------+--------------------------------------+-------------+
|  39 | C++      | SAMS          | C++ Primer Plus Fourth Edition                                             | Stephen Prata                        | 2001        |
| 162 | Linux    | Wiley         | Ubuntu Linux Toolbox 1000 plus Commands for Ubuntu and Debian Power Users  | Christopher Negus François Caen      | 2007        |
| 496 | C++      | Prentice Hall | C Plus Plus GUI Programming With Qt 4 2nd Edition                          | Jasmin Blanchette, Mark Summerfield  | 2008        |
+-----+----------+---------------+----------------------------------------------------------------------------+--------------------------------------+-------------+

3 rows in set (0.00 sec)

provides the correct answer?

Thinking that the first query wasn't picking up a four-letter term, I ran
this search for the word 'real':


mysql> SELECT *
    -> FROM booktitles
    -> WHERE MATCH (category , publisher , bookTitle , author)
    -> AGAINST (CONVERT( _utf8 'real' USING latin1 )
    -> IN BOOLEAN MODE)
    -> ORDER BY category;
+-----+----------+---------------+----------------------------------------------------------+----------------------+-------------+
| id  | category | publisher     | bookTitle                                                | author               | publishDate |
+-----+----------+---------------+----------------------------------------------------------+----------------------+-------------+
| 134 | Linux    | Prentice Hall | Embedded Linux Primer: A Practical, Real-World Approach  | Christopher Hallinan | 2006        |
+-----+----------+---------------+----------------------------------------------------------+----------------------+-------------+

1 row in set (0.00 sec)

and as you can see it came up with the correct result.

Thanks for looking

Barry

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







Re: left/right join concept

2008-03-01 Thread Brent Baisley
That sounds like the technical answer. I prefer an analogy a 5th
grader could understand. If you have 2 overlapping circles, an inner
join is the area that overlaps. A left/outer join is all of the
left circle, plus the content of the right circle that overlaps. A
right/outer join is just the opposite.


An outer join doesn't filter the table, it just finds any matching
content if it's present. Anything without matching content gets a
NULL where matched content would normally be.
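
A minimal illustration (hypothetical two-row tables):

  CREATE TABLE a (id INT);
  CREATE TABLE b (id INT);
  INSERT INTO a VALUES (1), (2);
  INSERT INTO b VALUES (2), (3);

  SELECT a.id, b.id FROM a INNER JOIN b ON a.id = b.id;  -- 1 row:  (2, 2)
  SELECT a.id, b.id FROM a LEFT JOIN b ON a.id = b.id;   -- 2 rows: (1, NULL), (2, 2)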



Brent Baisley
PHP, MySQL, Linux, Mac
I write code



On Mar 1, 2008, at 4:16 AM, Thufir wrote:


I'm trying to understand the terminology a bit.

A left or right join can only exist for an outer join.  For an inner
join, the terminology would be out of context because inner joins are
symmetrical (whereas outer joins are asymmetrical).

Would this be a correct understanding?  Anything to add or correct,
please?



thanks,

Thufir


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?[EMAIL PROTECTED]






Re: select from otherdb.table question?

2008-01-20 Thread Brent Baisley
When you establish a connection, it's a connection to a server, not a  
specific DB. You can set a default db so that you don't always have  
to specify the db name you are working with. So to answer your  
question, no, a new connection is not established.
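
For example (database and table names hypothetical; this assumes the
connected account has privileges on both databases):

  USE maindb;
  SELECT * FROM otherdb.sometable;        -- same connection, no reconnect
  SELECT m.id, o.name
  FROM maindb.t1 AS m
  JOIN otherdb.t2 AS o ON o.id = m.id;    -- cross-database join over the one connection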


Brent


On Jan 19, 2008, at 10:19 AM, Alex K wrote:


Hi Guys,

What does the statement select * from otherdb.table do if I haven't
explicitly connected to otherdb previously? I would assume it connects
to otherdb and does the select on table but does it create a new
connection each time? Is it as efficient as explicitly connecting to
otherdb and then querying. I'm using webware DBUtils for connection
pooling. Would these connections also be taken into account?

Thank you so much,

Alex






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: help with query optimization

2008-01-09 Thread Brent Baisley
Your biggest problem is probably the subquery/IN you are performing.
You should change that to a join. And I'm not sure about using
SQL_CALC_FOUND_ROWS in a full-text query that's not boolean; you
shouldn't use it unless you have a LIMIT clause anyway.

SELECT SQL_CALC_FOUND_ROWS table_1.id, table_2.*
FROM table_2
JOIN
  (SELECT shared_id FROM table_1_view) AS table_3
  ON table_2.shared_id = table_3.shared_id
LEFT JOIN table_1 ON table_2.shared_id = table_1.shared_id
WHERE MATCH (table_2.field) AGAINST ('value')
AND table_2.current = 1

I know the difference doesn't seem that much, but MySQL optimizes it  
very differently.
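
For reference, SQL_CALC_FOUND_ROWS is normally paired with a LIMIT and a
follow-up FOUND_ROWS() call, along these lines (a generic sketch, table
name hypothetical):

  SELECT SQL_CALC_FOUND_ROWS id, title FROM articles WHERE published = 1 LIMIT 0, 25;
  SELECT FOUND_ROWS();   -- total matching rows, ignoring the LIMIT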


Brent


On Jan 4, 2008, at 5:47 PM, Eben wrote:


Hi,

I have a query that has to run on a full-text indexed table with
many millions of records.  I'm trying to figure out some
optimizations for it.  Here's the general query:

SELECT SQL_CALC_FOUND_ROWS table_1.id, table_2.*
FROM table_2
LEFT JOIN table_1 ON table_2.shared_id = table_1.shared_id
WHERE MATCH (table_2.field) AGAINST ('value')
AND table_2.current = 1
AND table_2.shared_id IN (SELECT shared_id FROM table_1_view)

Some details:
1. table_1.shared_id and table_2.shared_id are indexed
2. table_2.current is an indexed tinyint
3. table_1_view is derived from a query like:

SELECT shared_id
FROM table_1
WHERE some_field LIKE 'some_value%'

table_1 is a relatively small table, i.e. < 100k records
table_2 is massive, with > 10 million records

Any ideas or suggestions are appreciated

thanks,
Eben








--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: left join problem

2008-01-09 Thread Brent Baisley
Yes, that is the correct behavior of a LEFT JOIN. A left join keeps
all the records from the original/left table and will link up any
related data from the right table, but if there is no related data,
it sets it to NULL. If you want the join to act as a filter, then just
use a regular JOIN.
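
If, on the other hand, you want each right-hand table matched
independently, so that a missing phone row does not blank out the address
columns, one option is to LEFT JOIN the tables separately (a sketch using
the poster's tables; column names follow the post):

  SELECT c.account_num, c.first_name, c.last_name,
         a.address_1, a.city, p.phone_1
  FROM contacts AS c
  LEFT JOIN address AS a ON a.account_num = c.account_num
  LEFT JOIN phone   AS p ON p.account_num = c.account_num
  WHERE c.account_num LIKE '%something%'
     OR a.address_1   LIKE '%something%'
     OR p.phone_1     LIKE '%something%'
  ORDER BY c.last_name;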


Brent


On Jan 8, 2008, at 12:24 PM, Wes Hegge wrote:

I am attempting to left join several tables.  This was working at one
time but seems to have stopped just recently.  I have checked and
installed the latest version of MySQL via Debian Etch apt-get.  I am
running version 5.0.32.

I have simplified the example down to just 3 tables but the problem  
exists

at this point.

Table 'contacts'  - Does not matter what the fields are, still the  
same
problem.  I am not using TEXT fields though.  Most are int's or  
varchar's

   account_num
   first_name
   last_name

Table 'address'
   account_num
   address_1
   address_2
   city
   state
   zip

Table 'phone'
   account_num
   phone_1
   phone_1_type
   phone_2
   phone_2_type

What I want to do is search all three tables for something, return  
anything

that matches.  So here is the select statement I have been using:
SELECT contacts.account_num, first_name, last_name, address_1, city_1, phone_1
FROM contacts
LEFT JOIN (address, phone)
  ON (contacts.account_num = address.account_num
  AND contacts.account_num = phone.account_num)
WHERE contacts.account_num LIKE '%something%'
   OR contacts.first_name LIKE '%something%'
   OR address.address_1 LIKE '%something%'
   OR address.address_2 LIKE '%something%'
   OR address.city LIKE '%something%'
   OR phone.phone_1 LIKE '%something%'
   OR phone.phone_2 LIKE '%something%'
ORDER BY last_name;

When I run this query I only get data back from the 'contacts'  
table.  What
I have been able to track down is that if I am missing data from  
any of the
tables that I LEFT JOIN'd then all the data from all the LEFT  
JOIN'd tables
will be NULL.  In other words if I have account data in tables  
'contacts'
and 'address' but nothing in 'phone' then no data from tables  
'address' or
'phone' will be returned.  If I add data to 'phone' then data is  
returned

properly.

Is this correct behavior?  If so, any suggestions on how to solve this
problem would be great.  Realize this is a smaller example of what  
I am
really trying to do.  There are at least 4 tables in the select  
statement at

any one time and could be as many as 6.

Thanks!

--
Wes Hegge

- If the phone rings.  Its not me.
-- Jimmy  Buffet



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: query_cache TimeToLive

2008-01-09 Thread Brent Baisley
The query_cache TimeToLive is variable. A query will stay in the cache
as long as the underlying data does not change. Once a table's data
changes, the cache entries for queries on that table are cleared. It's
not the best implementation, but it's way better than nothing.
MySQL 5 does have an on-demand query cache setting. This allows you to
specify which queries should be cached. This is generally useful when
most of your tables change constantly (making a cache useless), but a
few tables do not. The setting is one of the variables you can set
(SHOW VARIABLES) to either 0, 1, or 2 as I recall.
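
A sketch of the on-demand mode (query_cache_type and the SQL_CACHE hint
are the standard MySQL 5 query cache mechanisms; the table is
hypothetical):

  SET GLOBAL query_cache_type = 2;   -- 2 = DEMAND: cache only when asked
  SELECT SQL_CACHE id, name FROM products WHERE category_id = 5;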


The Falcon engine (MySQL 6) actually has a very good caching  
mechanism, but that's not officially released yet.


Brent

On Jan 8, 2008, at 11:20 AM, Thomas Raso wrote:


Hi all,

how mysql manage the query_cache TimeToLive (live) and how can I  
change it ?


Thanks



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySQL SELECT Statement with Date help request

2008-01-09 Thread Brent Baisley
Sounds like you should create a MERGE table that links all the
underlying tables together. Then you just query the merge table, and
MySQL handles which tables it needs to pull data from, so you no longer
have to work out which databases cover the range yourself. A sketch
follows.
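
A minimal sketch, assuming each database holds a MyISAM table with an
identical definition (table and column names are hypothetical; a MERGE
table's UNION list may reference tables in other databases):

  CREATE TABLE all_logs (
    entry_date DATE,
    message    VARCHAR(255)
  ) ENGINE=MERGE UNION=(db1.logs, db2.logs, db3.logs) INSERT_METHOD=NO;

  SELECT * FROM all_logs
  WHERE entry_date BETWEEN '2007-02-15' AND '2007-05-12';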


On Jan 9, 2008, at 9:12 AM, Cx Cx wrote:


Hi List,

I am wondering if someone can help me with a query to check what
databases are on the MySQL server and then check which of those
databases are either partially or completely within the date range
I require.

The scenario is as follows:

db1 : 2007-01-01 to 2007-02-01
db2 : 2007-02-01 to 2007-03-01
db3 : 2007-03-01 to 2007-04-01
db4 : 2007-04-01 to 2007-05-01
db5 : 2007-05-01 to 2007-06-01
db6 : 2007-06-01 to 2007-07-01

I require a select statement to query all the dbs to identify records
within the date range, e.g. 2007-02-15 to 2007-05-12.

Logically this tells me that the databases that will have this  
information

will be db2,db3,db4 and db5.

My problem is that I have multiple servers running at different
locations that use the same app writing to the MySQL db. However, the
number of databases on each server differs, as do their date ranges.

Is there a way of getting such a result with MySQL?

Thanks in advance,

Craig



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Eliminating duplicates from self join results

2007-12-12 Thread Brent Baisley
Taking it step by step, this query will give you the lowest id for each
address that has duplicates.

SELECT min(id), email, count(*) AS cnt
FROM addressbook
GROUP BY email
HAVING cnt > 1

Now think of that query as an already existing table, which you can  
do, you just need to name the query result, in this case I'm calling  
it t1.
Then you LEFT JOIN it with the addressbook table, but filtering out  
the ids you already have.

SELECT id1, email1,
       t2.id AS id2, t2.email AS email2
FROM
(
  SELECT min(id) AS id1, email AS email1, count(*) AS cnt
  FROM addressbook
  GROUP BY email
  HAVING cnt > 1
) AS t1
LEFT JOIN
  addressbook AS t2
  ON t1.email1 = t2.email AND t1.id1 != t2.id
ORDER BY email1

I haven't tested it, but that query should work and give you the  
output you want. I don't recall if it works in v4.0, but v4.1 and  
above should work fine.
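
An alternative on 4.1 and up, if one row per duplicated address is enough
(a sketch; GROUP_CONCAT appeared in 4.1):

  SELECT MIN(id) AS keep_id, email, GROUP_CONCAT(id) AS all_ids
  FROM addressbook
  GROUP BY email
  HAVING COUNT(*) > 1;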


Brent


On Dec 12, 2007, at 8:35 AM, Yashesh Bhatia wrote:


Hello:

I have the following table

select * from addressbook
+++
| id | email  |
+++
|  1 | [EMAIL PROTECTED] |
|  2 | [EMAIL PROTECTED] |
|  3 | [EMAIL PROTECTED] |
+++
3 rows in set (0.00 sec)

Now I wanted to find a list of duplicate contacts wherein I can get
the first contact with a given email and merge it with the others. So in
the above case, id 1 has duplicates 2 and 3, 2 has 1 & 3, and 3 has 1 & 2.

I'm only interested in getting the first set of duplicates, i.e. 1 has
duplicates 2 & 3.

So I tried the query

select t1.id as id1, t2.id as id2, t1.email as email1, t2.email as email2
from addressbook t1, addressbook t2
where t1.email = t2.email and t1.id != t2.id
order by t1.id
+-----+-----+--------+--------+
| id1 | id2 | email1 | email2 |
+-----+-----+--------+--------+
|   1 |   2 | [EMAIL PROTECTED] | [EMAIL PROTECTED] |
|   1 |   3 | [EMAIL PROTECTED] | [EMAIL PROTECTED] |
|   2 |   1 | [EMAIL PROTECTED] | [EMAIL PROTECTED] |
|   2 |   3 | [EMAIL PROTECTED] | [EMAIL PROTECTED] |
|   3 |   1 | [EMAIL PROTECTED] | [EMAIL PROTECTED] |
|   3 |   2 | [EMAIL PROTECTED] | [EMAIL PROTECTED] |
+-----+-----+--------+--------+

Then I tried a query similar to the one suggested in MySQL Cookbook
Recipe 14.5:

select DISTINCT if(t1.id < t2.id, t1.id, t2.id) as id1,
       if(t1.id < t2.id, t1.email, t2.email) as email1,
       if(t1.id < t2.id, t2.id, t1.id) as id2,
       if(t1.id < t2.id, t2.email, t1.email) as email2
from addressbook t1, addressbook t2
where t1.email = t2.email and t1.id != t2.id
order by t1.id
+-----+--------+-----+--------+
| id1 | email1 | id2 | email2 |
+-----+--------+-----+--------+
|   1 | [EMAIL PROTECTED] |   2 | [EMAIL PROTECTED] |
|   1 | [EMAIL PROTECTED] |   3 | [EMAIL PROTECTED] |
|   2 | [EMAIL PROTECTED] |   3 | [EMAIL PROTECTED] |
+-----+--------+-----+--------+

I'm stuck trying to get a query that will give me only
+-----+--------+-----+--------+
| id1 | email1 | id2 | email2 |
+-----+--------+-----+--------+
|   1 | [EMAIL PROTECTED] |   2 | [EMAIL PROTECTED] |
|   1 | [EMAIL PROTECTED] |   3 | [EMAIL PROTECTED] |
+-----+--------+-----+--------+

Any help, feeback is deeply appreciated. Thanks a bunch in advance.

Yashesh Bhatia






--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: mysql eluding query debugging?

2007-11-10 Thread Brent Baisley
A function on a column does not negate the use of an index. That only
happens if you use the function in a filtering part of the query (JOIN,
WHERE, etc.).


You may want to run optimize table on the 2 tables involved. That  
will update the table stats that mysql uses to optimize the queries.  
MySQL may occasionally be taking a different optimization path when  
it runs the query, which may or may not be the best path. Those may  
be the times your query is slow.  You'll notice in your explain it's  
looking in the movement_items table first.
Dan Nelson was correct about MySQL having to do 2300+ lookups in the  
second table, although in this case the second table is the  
movements table. Which is the opposite of what you are expecting in  
your query. You are correct, 2300 records should be a breeze and your  
query should always be fast. You want the movements table to be  
queried first.
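
For the stats refresh mentioned above, a one-liner covers both tables
(table names from the thread):

  OPTIMIZE TABLE movement_items, movements;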


I don't know your table structure, but it seems item_id is part of  
the movement_items table. Which means you should put item_id=21311 in  
the join statement.
...LEFT OUTER JOIN movements ON movements.id =  
movement_items.movement_id AND movement_items.item_id=21311


From the manual:
The LEFT JOIN condition is used to decide how to retrieve rows from  
table B. (In other words, any condition in the WHERE clause is not  
used.)


The part in () is the important part. The WHERE clause should only  
have information to filter the result set, not anything to filter the  
JOIN. That information should be in the JOIN.


Try making that change. I believe I explained what is happening and  
why. Although I may be wrong, please post if it doesn't work.  
Regardless, you want to see the movements table listed first in your  
EXPLAIN.


Brent


On Nov 9, 2007, at 3:46 PM, Moritz von Schweinitz wrote:

Thank you for your answer (even though I only got it because you sent
it directly to me; somehow the mailing-list emails are not getting
through to me. Weird).


1.) All that can have a UNIQUE index have one, but the ones I am using
here are not unique (except for movements.id, which is a PRIMARY).
2.) As far as I can see, the use of the function on the column quant
shouldn't influence performance in this case - MySQL should (and says
that it does in the EXPLAIN, as far as I can tell) use all available
indexes to reduce the number of examined rows to a mere 2300, and then
sum them up - which shouldn't take longer than a second, in my experience.
3.) I think I can't combine them in an index, because item_id and
type_id are in different tables.


But my main problem is still that the first run of the query is slow
while the following ones are fast enough - this way, I can't really
debug the query. Any tips on how to make MySQL stop doing whatever it
is doing that speeds up the following runs?


thanks,

M.

Martin Gainty wrote:

Some unknowns:
1) Are all the columns you are referencing indexed with UNIQUE indexes?
2) Using a function on any column negates the use of the referencing
index, and in your case you are doing a SUM(quant). Is there any
capability of denormalising, say by storing the sum, preferably in the
movement_items table?
3) Assuming either that both item_id and type_id can be referenced via
UNIQUE indexes, OR using a concatenated index on item_id and type_id,
will go a long way to speed up the query.

Viel Gluck/
Martin
- Original Message -
From: Moritz von Schweinitz [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Friday, November 09, 2007 1:16 PM
Subject: mysql eluding query debugging?




OK, I don't get it.

I have the following query:

SELECT
  SUM(quant)
FROM
  movement_items
LEFT OUTER JOIN movements ON movements.id = movement_items.movement_id
WHERE
  item_id = 21311
AND
  movements.type_id = 1

where 'movement_items' has about 1.3M rows, and 'movements' about 0.5M
rows.


EXPLAIN gives me the following:

+----+-------------+----------------+--------+---------------------+---------+---------+-----------------------------------+------+-------------+
| id | select_type | table          | type   | possible_keys       | key     | key_len | ref                               | rows | Extra       |
+----+-------------+----------------+--------+---------------------+---------+---------+-----------------------------------+------+-------------+
|  1 | SIMPLE      | movement_items | ref    | movement_id,item_id | item_id | 5       | const                             | 2327 | Using where |
|  1 | SIMPLE      | movements      | eq_ref | PRIMARY,type_id     | PRIMARY | 4       | pague9.movement_items.movement_id |    1 | Using where |
+----+-------------+----------------+--------+---------------------+---------+---------+-----------------------------------+------+-------------+
2 rows in set (0.01 sec)

which seems ok to me (2327 rows to examine should be a breeze,  
right?)


Now, my problem: sometimes, this query takes up to 10 seconds
