filename-safe conversion of database/table names

2016-08-09 Thread Simon Fromme

Hello,

In order to do MySQL-dumps to a file on a Linux system under

$BACKUP_DIR/$DB_NAME/$TABLE_NAME.sql

I need to convert both the names of databases and tables to a 
filename-safe form (escaping "/" and other characters as in [1]). The 
mapping from a MySQL database/table name to the corresponding filename 
should preferably be the same one that MySQL (or the particular storage 
engine) uses. If that's not possible, the mapping should at least be 
injective and preferably human-readable.


I found out that MySQL uses the C function 
tablename_to_filename(...) [2] internally, but I didn't find any way in 
which it exposes this conversion function to the outside.


Did I overlook some way this could be done? If not, would this be a 
feature that a future version of MySQL should provide?
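For what it's worth, one approach that may be worth testing is the server's internal "filename" character set, which implements the mapping described in [1] (unsafe characters become @XXXX, the hex code point, so the result is injective and fairly readable). Whether CONVERT() accepts that character set varies by version, so treat this as a probe to try, not a guaranteed interface:

```sql
-- Ask the server itself for the encoded form of an identifier.
-- If your build rejects the 'filename' character set, fall back to
-- REPLACE()-based escaping of '/' and the other unsafe characters.
SELECT CONVERT(_utf8'my db/table' USING filename);
```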



Best regards
Simon Fromme


[1]: https://dev.mysql.com/doc/refman/5.7/en/identifier-mapping.html
[2]: http://osxr.org:8080/mysql/source/sql/sql_table.cc

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/mysql



mysql 5.5 crashed on Debian 6 after server crash

2013-08-28 Thread Simon Loewenthal
Hi there,

  I just had my MySQL DB crash really badly on a small server I'm running,
and I wonder if someone could point me in the right direction.  I'm pretty
new to MySQL... I set up master/slave replication to another site so that
it could serve as a backup, as long as the errors were not replicated over
there.

MySQL boots with the errors recorded in syslog shown below. My my.cnf is
also included.


If you know of ideas that could be of help, then I would be really
grateful.  I've been trying to get this to run for 4 hours solid.

Best regards, Simon

My.cnf
(My server only has 512 MB of RAM, so MySQL has to make do with very little.)


# grep -v ^$ /etc/mysql/my.cnf|grep -v ^#
[client]
port= 3306
socket  = /var/run/mysqld/mysqld.sock
[mysqld_safe]
socket  = /var/run/mysqld/mysqld.sock
nice= 0
[mysqld]
user= mysql
pid-file= /var/run/mysqld/mysqld.pid
socket  = /var/run/mysqld/mysqld.sock
port= 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir  = /tmp
skip-external-locking
bind-address= 127.0.0.1
max_allowed_packet = 1M
key_buffer  = 16K
sort_buffer_size = 64K
read_buffer_size = 256K
thread_stack = 64K
thread_cache_size   = 8
myisam-recover  = BACKUP
max_connections = 34
log_slow_queries= /var/log/mysql/mysql-slow.log
query_cache_limit   = 64K
query_cache_size= 2M
expire_logs_days= 10
max_binlog_size = 100M
table_open_cache= 4
table_cache = 48
read_rnd_buffer_size = 256K
net_buffer_length = 128K
server-id   = 1
log-bin=/var/log/mysql/mysql-bin.log
innodb_flush_log_at_trx_commit  = 2
max_binlog_size = 15M
expire_logs_days= 2
[mysqldump]
quick
quote-names
max_allowed_packet = 1M
[mysql]
[isamchk]
key_buffer  = 16M
!includedir /etc/mysql/conf.d/
[myisamchk]
key_buffer_size = 386K
sort_buffer_size = 64K
read_buffer_size = 256K

Syslog records 36,000 lines of errors from MySQL; below is a
snippet:

Aug 28 11:50:14 emailserver mysqld_safe: Number of processes running now: 0
Aug 28 11:50:14 emailserver mysqld_safe: mysqld restarted
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14 [Warning] option
'thread_stack': unsigned value 65536 adjusted to 131072
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14 [Warning] The syntax
'--log-slow-queries' is deprecated and will be removed in a future release.
Please use '--slow-query-log'/'--slow-query-log-file' instead.
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14 [Note] Plugin
'FEDERATED' is disabled.
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14 InnoDB: The InnoDB
memory heap is disabled
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14 InnoDB: Mutexes and
rw_locks use GCC atomic builtins
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14 InnoDB: Compressed
tables use zlib 1.2.3.4
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14 InnoDB: Using Linux
native AIO
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14 InnoDB: Initializing
buffer pool, size = 128.0M
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14 InnoDB: Completed
initialization of buffer pool
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14 InnoDB: highest
supported file format is Barracuda.
Aug 28 11:50:14 emailserver mysqld: 130828 11:50:14  InnoDB: Waiting for
the background threads to start
Aug 28 11:50:15 emailserver mysqld: 130828 11:50:15 InnoDB: 5.5.31 started;
log sequence number 1211163767
Aug 28 11:50:15 emailserver mysqld: 130828 11:50:15 [Note] Recovering after
a crash using /var/log/mysql/mysql-bin
Aug 28 11:50:15 emailserver mysqld: 130828 11:50:15 [Note] Starting crash
recovery...
Aug 28 11:50:15 emailserver mysqld: 130828 11:50:15 [Note] Crash recovery
finished.
Aug 28 11:50:15 emailserver mysqld: 130828 11:50:15 [Note] Server hostname
(bind-address): '127.0.0.1'; port: 3306
Aug 28 11:50:15 emailserver mysqld: 130828 11:50:15 [Note]   - '127.0.0.1'
resolves to '127.0.0.1';
Aug 28 11:50:15 emailserver mysqld: 130828 11:50:15 [Note] Server socket
created on IP: '127.0.0.1'.
Aug 28 11:50:15 emailserver mysqld: 130828 11:50:15 [Note] Event Scheduler:
Loaded 0 events
Aug 28 11:50:15 emailserver mysqld: 130828 11:50:15 [Note]
/usr/sbin/mysqld: ready for connections.
Aug 28 11:50:15 emailserver mysqld: Version: '5.5.31-1~dotdeb.0-log'
socket: '/var/run/mysqld/mysqld.sock'  port: 3306  (Debian)
Aug 28 11:50:15 emailserver mysqld: InnoDB: Dump of the tablespace extent
descriptor:  len 40; hex
01de01660002eaaaeafaabaabbaa;
ascf;
Aug 28 11:50:15 emailserver mysqld: InnoDB: Serious error! InnoDB is trying
to free page 815
Aug 28 11:50:15 emailserver mysqld: InnoDB: though it is already marked as
free in the tablespace!
Aug 28 11:50:15 emailserver mysqld: InnoDB: The tablespace free space info
is corrupt.
Aug 28 11:50:15 emailserver mysqld: InnoDB: You may need to dump

Re: mysql 5.5 crashed on Debian 6 after server crash

2013-08-28 Thread Simon Loewenthal
Hi,
 I have managed to get the DB open with
 [mysqld]
innodb_force_recovery = 2

I did a mysqldump of all tables, and restarted with innodb_force_recovery,
but I still have the same old error messages.
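For reference, the usual salvage sequence (a general sketch, not specific to this crash) is to start with forced recovery, dump everything, then rebuild; restarting with the corrupt tablespace still in place will reproduce the same errors:

```ini
# /etc/mysql/my.cnf -- temporary setting while salvaging data
[mysqld]
# Levels run 1..6; use the lowest level that lets the server stay up.
# Levels >= 4 risk further damage, so back up the raw files first.
innodb_force_recovery = 2
```

Then run mysqldump for all databases, stop the server, remove the setting, recreate the InnoDB tablespace (or drop and recreate the affected databases), and reload the dump.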

Regards, S


On Wed, Aug 28, 2013 at 12:12 PM, Simon Loewenthal 
simon.loewent...@gmail.com wrote:

 [quoted original message trimmed]

Re: Connect to MySQL server from a c++ application

2012-06-08 Thread Simon Walter


On 06/08/2012 01:55 AM, Claudio Nanni wrote:

Hi,

you guys don't like the official API?

http://dev.mysql.com/downloads/connector/c/



That's C isn't it? I think there is also a C++ connector. I'm interested 
to hear how that performs. It seems like a waste of time to write a 
bunch of wrappers for the C connector.



--
simonsmicrophone.com




Re: Connect to MySQL server from a c++ application

2012-06-07 Thread Simon Walter


On 06/07/2012 12:29 PM, Lars Nilsson wrote:


On Wed, Jun 6, 2012 at 10:41 PM, Simon Walter si...@gikaku.com wrote:

However, memory leaks are not acceptable. So I am open to suggestions. What
do other c++ programmers use?


I've been happy using SQLAPI++ (http://www.sqlapi.com/) where I work.
Commercial and not open source, but it's cross-platform and supports a
dozen or so different databases.



It looks nice. I'm looking for something open source. I'm fine using one 
of the SQL connectors. I just need to know which one works. How does 
SQLAPI++ connect to MySQL? Is it thread safe?


--
simonsmicrophone.com




Connect to MySQL server from a c++ application

2012-06-06 Thread Simon Walter


What is the most stable and performant way to connect to a MySQL server 
from a c++ application?


I've been using libmyodbc via unixODBC running under Debian squeeze. 
Suffice it to say, I am sorely disappointed. First of all the libmyodbc 
driver that's included with Debian is quite old. However, even after 
building and utilizing the latest version, there are still memory leaks 
in the driver.


I'm not stuck on using ODBC. Though it's nice to be able to use an 
ODBC library so that I can connect to various DBs without having to 
learn new APIs. There is also the benefit of being able to change 
databases without much effort.


However, memory leaks are not acceptable. So I am open to suggestions. 
What do other c++ programmers use?


(note: I know this is probably not the place to ask this, but the 
libmyodbc mailing list is as dead as a doornail, with people's 
auto-responders going off like a digital ghost town. :/)


Thanks,

Simon

--
simonsmicrophone.com





date comparison query

2012-03-16 Thread Simon Wilkinson
Hi,

I have a table that holds two datetime columns.  I am trying to find values
from this table that fall into specific time ranges, but am getting some
strange results.  For example, if I try to find rows where the difference
between the two columns is between 47 and 48 weeks, I get back a result
where the actual difference is less than 1 month.

My query for this is as follows: select * from table where table.date1 -
table.date2 between 28425600 and 29030400;

The result returns a row where date1 is 2010-10-31 18:24:49, and date2
is 2010-10-02 20:29:54.

I seem to get proper results for some values (I am trying to find results
that fall into different weekly ranges), but then some are just way off.
 Does anybody have any ideas for why this is happening?
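The likely culprit: in a numeric context MySQL turns each DATETIME into the number YYYYMMDDHHMMSS, so `date1 - date2` is not a count of seconds. For the row above, 20101031182449 - 20101002202954 = 28979495, which happens to fall inside BETWEEN 28425600 AND 29030400 even though the real gap is only about 2,498,095 seconds (roughly 4 weeks). Comparing actual elapsed seconds avoids this:

```sql
-- TIMESTAMPDIFF counts real elapsed seconds between the two columns,
-- so the 47-48 week range (in seconds) behaves as intended.
SELECT *
FROM `table`
WHERE TIMESTAMPDIFF(SECOND, `table`.date2, `table`.date1)
      BETWEEN 28425600 AND 29030400;
```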

Thanks,

Simon


RE: Formatting Numbers with commas

2012-02-12 Thread Simon Griffiths
Please see 

http://dev.mysql.com/doc/refman/5.1/en/string-functions.html#function_format
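In short: store the raw number and add the commas only when reading it back, e.g.:

```sql
-- FORMAT(X, D) returns X as a string with thousands separators
-- and D decimal places.
SELECT FORMAT(159600, 0);  -- '159,600'
SELECT FORMAT(78450, 0);   -- '78,450'
```

Storing the value with commas embedded would turn it into a string and break numeric comparisons and arithmetic, so formatting at retrieval time is the usual approach.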

Regards,

Simon Griffiths

-Original Message-
From: Mike Blezien [mailto:mick...@frontiernet.net] 
Sent: 12 February 2012 16:00
To: MySQL List
Subject: Formatting Numbers with commas

Hello,

Is there a function to automatically format long numerical values before
they're entered into the table? I.e., I have a value like 159600 that would
be entered as 159,600, or 78450 that would be entered as 78,450, etc.?

Thank you,
Mike(mickalo)Blezien
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
Thunder Rain Internet Publishing
Custom Programming & Web Hosting Services
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- 





change from an OR to an AND condition

2011-09-27 Thread Simon Wilkinson
Hi,

I have a query where I'm using a left outer join to connect two tables,
entries and entries_tags, where entries_tags is a join table between entries
and another table, tags.  In my query I am selecting values from the entries
table where the entries have any of the provided tags - WHERE
entries_tags.tag_id IN (x, y, z)

What I would like to do is switch this from an OR condition to an AND
condition, so that an entry is only returned if it has all of x, y, and z.

How would I go about doing this?

Thanks,

Simon


Re: change from an OR to an AND condition

2011-09-27 Thread Simon Wilkinson
Just to clarify, here is some more info:

Here is what the query looks like:

SELECT DISTINCT `entries`.id FROM `entries` LEFT OUTER JOIN `entries_tags`
ON `entries_tags`.entry_id = `entries`.id LEFT OUTER JOIN `tags` ON
`tags`.id = `entries_tags`.tag_id WHERE (`entries_tags`.tag_id IN (5,6));

So in this instance, what I'm looking for are the entries where they have
two entries in entries_tags with tag ids 5 and 6, instead of entries that
have either.
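One common way to express this "has all of the tags" condition is to keep the IN() filter but group by entry and require the number of distinct matched tags to equal the number of tags asked for. A sketch against the tables above:

```sql
-- Only entries matched by BOTH tag 5 and tag 6 survive the HAVING clause.
SELECT entries.id
FROM entries
JOIN entries_tags ON entries_tags.entry_id = entries.id
WHERE entries_tags.tag_id IN (5, 6)
GROUP BY entries.id
HAVING COUNT(DISTINCT entries_tags.tag_id) = 2;
```

The inner join also removes the need for DISTINCT over a LEFT OUTER JOIN, since unmatched entries are excluded by the WHERE clause anyway.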

Thanks again,

Simon

On 27 September 2011 09:46, Simon Wilkinson simon.wilkin...@gmail.com wrote:

 Hi,

 I have a query where I'm using a left outer join to connect two tables,
 entries and entries_tags, where entries_tags is a join table between entries
 and another table, tags.  In my query I am selecting values from the entries
 table where the entries have any of the provided tags - WHERE
 entries_tags.tag_id IN (x, y, z)

 What I would like to do is switch this from an OR condition to an AND
 condition, so that an entry is only returned if it has all of x, y, and z.

 How would I go about doing this?

 Thanks,

 Simon





Re: optimizing query

2011-01-21 Thread Simon Wilkinson
Thanks for the suggestions everybody.

I added in columns to store the day, month and year of the created_at value,
and then added an index on (newsletter_id, created_month, created_day),
and the slow queries went from around 20 seconds to 0.5 seconds!  I
also removed the redundant indexes.
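For anyone finding this thread later, that change amounts to something like the following (column types and index name are assumptions):

```sql
-- Materialize the date parts so the WHERE clause filters on plain
-- columns instead of MONTH()/DAY() expressions, letting the index be used.
ALTER TABLE articles
  ADD COLUMN created_month TINYINT UNSIGNED,
  ADD COLUMN created_day   TINYINT UNSIGNED;

UPDATE articles
  SET created_month = MONTH(created_at),
      created_day   = DAY(created_at);

ALTER TABLE articles
  ADD INDEX nl_month_day (newsletter_id, created_month, created_day);
```

The new columns must also be kept in sync on INSERT/UPDATE, e.g. in application code or triggers.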

Cheers,

Simon

On 19 January 2011 02:11, Steve Meyers steve-mysql-l...@spamwiz.com wrote:

 On 1/18/11 10:22 AM, Simon Wilkinson wrote:

 SELECT articles.* FROM articles INNER JOIN newsletters ON
 articles.newsletter_id = newsletters.id INNER JOIN users ON users.id =
 newsletters.user_id WHERE users.id =12 AND MONTH(articles.created_at) =
 '12'
 AND DAY(articles.created_at) = '5' ORDER BY YEAR(articles.created_at),
 LENGTH(articles.body);


 Simon -

 There are a few issues that are slowing down your query.  First, you're
 running functions to calculate the month and day of each article that is
 looked at.  As an aside, are you sure you don't want the DAYOFMONTH()
 function?

 Second, it's ideal to have the where clause in your query filter down
 (using an index) to as few rows as possible of the first table.  Other
 tables you join should ideally be 1 to 1 from the first table.  To
 accomplish this, you would probably need the user_id in your articles table.

 Another aside -- I noticed you have index_articles_on_newsletter_id as well
 as index_articles_on_newsletter_id_and_created_at.  The first index is
 redundant, the second index will take care of it.  This will slow down your
 INSERT/UPDATE/DELETE queries to some degree.

 Steve



optimizing query

2011-01-18 Thread Simon Wilkinson
Hi,

I am trying to optimize the following query:

SELECT articles.* FROM articles INNER JOIN newsletters ON
articles.newsletter_id = newsletters.id INNER JOIN users ON users.id =
newsletters.user_id WHERE users.id =12 AND MONTH(articles.created_at) = '12'
AND DAY(articles.created_at) = '5' ORDER BY YEAR(articles.created_at),
LENGTH(articles.body);

I am trying to retrieve all the articles created on a specific day of a
specific month that belong to a user, ordered by the oldest and then longest
article.

I have run explain on the query, and get the following:

id | select_type | table       | type  | possible_keys                                                                   | key                                            | key_len | ref                  | rows | Extra
 1 | SIMPLE      | users       | const | PRIMARY                                                                         | PRIMARY                                        | 4       | const                | 1    | Using index; Using temporary; Using filesort
 1 | SIMPLE      | newsletters | ref   | PRIMARY,index_newsletters_on_user_id                                            | index_newsletters_on_user_id                   | 4       | const                | 1    | Using index
 1 | SIMPLE      | articles    | ref   | index_articles_on_newsletter_id,index_articles_on_newsletter_id_and_created_at | index_articles_on_newsletter_id_and_created_at | 4       | my_db.newsletters.id | 3    | Using where

3 rows in set (0.00 sec)

This seems pretty decent, and does perform pretty well for some users (~0.5
- 1 sec), but for some users (seemingly those with large numbers of
articles) the query can take 20 - 30 seconds to run.  This seems really slow
to me.  I tried adding in the index
'index_articles_on_newsletter_id_and_created_at' but the performance doesn't
seem to be any different than when it uses just the
'index_articles_on_newsletter_id' index.  I think this might be because of
the functions I am using on the created_at column to get the day and month
from it, making an index on created_at useless in this instance.

Running both an 'optimize table entries' and 'analyze table entries' also
didn't seem to have any real impact on the performance.

I was wondering if anybody had any suggestions for what else I might be able
to try, or if there is a better way to search on dates in this manner.  Any
ideas would be greatly appreciated.

Thanks,

Simon


Re: help with query

2011-01-12 Thread Simon Wilkinson
Thank you, that did the trick.

Simon

On 11 January 2011 12:09, Steve Meyers steve-mysql-l...@spamwiz.com wrote:

 On 1/11/11 9:31 AM, Simon Wilkinson wrote:

 select users.id from users where users.id in (select newletters.user_id
 from
 newletters left join articles on newletters.id = articles.newsletter_id
 where articles.newsletter_id is null);


 I think this would do what you require:

 SELECT
  u.id AS user_id,
  COUNT(DISTINCT n.id) AS num_newsletters,
  COUNT(DISTINCT a.id) AS num_articles

 FROM
  users u
  JOIN newsletters n ON n.user_id=u.id
  LEFT JOIN articles a ON a.newsletter_id=n.id

 GROUP BY
  u.id

 HAVING
  num_newsletters > 0
  AND num_articles = 0



help with query

2011-01-11 Thread Simon Wilkinson
Hi,

I have 3 tables that I am trying to search across, and could use some help
on how to structure the query.  I have a users table, a newsletter table,
and an articles table.  The newsletter table has a user_id column, and the
articles table has a newsletter_id column.  A user can have multiple
newsletters, and a newsletter can have multiple articles.  What I would like
to do is find the list of users that have only newsletters with no content.

My current query is as follows:
select users.id from users where users.id in (select newletters.user_id from
newletters left join articles on newletters.id = articles.newsletter_id
where articles.newsletter_id is null);

But I believe this is finding users that have any empty newsletters, and not
users that have only empty newsletters.  How could I change this to return
only the users that have only empty newsletters?

Thanks,

Simon


Index not being used

2010-03-05 Thread Simon Kimber
Hi Everyone,
 
I have the following table:
 
CREATE TABLE `form_fields_items` (
  `ID` int(11) NOT NULL auto_increment,
  `siteid` int(11) NOT NULL default '0',
  `fieldid` int(11) NOT NULL default '0',
  `value` varchar(150) NOT NULL default '',
  `sortorder` int(11) NOT NULL default '0',
  PRIMARY KEY  (`ID`),
  KEY `siteid` (`siteid`),
  KEY `fieldid` (`fieldid`),
  KEY `sortorder` (`sortorder`),
  KEY `sitefieldsort` (`siteid`,`fieldid`,`sortorder`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=61219 ;
 
And I am running the following query:
 
SELECT * FROM form_fields_items WHERE siteid = 1234 AND fieldid = 5678
ORDER BY sortorder

And an explain returns the following:

id | select_type | table             | type | possible_keys                | key     | key_len | ref   | rows | Extra
 1 | SIMPLE      | form_fields_items | ref  | siteid,fieldid,sitefieldsort | fieldid | 4       | const | 9    | Using where; Using filesort


Can anyone tell me why this is not using the sitefieldsort index?

If I change the query to something that returns no rows, such as:

SELECT * FROM form_fields_items WHERE siteid = 1 AND fieldid = 1 ORDER
BY sortorder

An explain shows it using the correct index.
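If the index choice really matters here (with only 9 estimated rows the filesort is probably cheap, which is likely why the optimizer doesn't bother), an index hint can force the composite index; a sketch:

```sql
-- FORCE INDEX makes the optimizer use sitefieldsort, which covers the
-- two equality filters and the sort column, avoiding the filesort.
SELECT *
FROM form_fields_items FORCE INDEX (sitefieldsort)
WHERE siteid = 1234 AND fieldid = 5678
ORDER BY sortorder;
```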

Thanks for your time!

Simon




Simple join very slow

2009-08-20 Thread Simon Kimber
Hi Everyone,
 
Can anyone suggest why the following query is taking upwards of 5
seconds to run?
 
SELECT * FROM users
JOIN sites ON users.ID = sites.userid
WHERE users.username = 'user1234'
OR users.email = 't...@test.com'
OR sites.email = 't...@test.com' 
 
The users table has an index on the username field and another on the
email field.
 
The sites table has an index on the email field.
 
Both tables contain around 200k rows.
 
An explain of the query shows that no indexes are being used:
 
id | select_type | table | type | possible_keys          | key    | key_len | ref           | rows   | Extra
 1 | SIMPLE      | users | ALL  | PRIMARY,username,email | NULL   | NULL    | NULL          | 155424 |
 1 | SIMPLE      | sites | ref  | userid,email           | userid | 4       | dyos.users.ID | 1      | Using where
 
Can anyone tell me how to make it use the available indexes?  Or do I
need to add some sort of multi-column index?  I'm guessing not, as I'm
using ORs.
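A common workaround for OR conditions that span two joined tables is to split the query into one indexed lookup per condition and UNION the results (a sketch; UNION also de-duplicates rows that match more than one condition):

```sql
-- Each branch can use a single index (username, users.email, sites.email).
SELECT users.*, sites.* FROM users JOIN sites ON users.ID = sites.userid
WHERE users.username = 'user1234'
UNION
SELECT users.*, sites.* FROM users JOIN sites ON users.ID = sites.userid
WHERE users.email = 't...@test.com'
UNION
SELECT users.*, sites.* FROM users JOIN sites ON users.ID = sites.userid
WHERE sites.email = 't...@test.com';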
 
Thanks
 
Simon


Simple query slow on large table

2009-08-18 Thread Simon Kimber
Hi Everyone,
 
I'm having a very simple query that often takes several seconds to run and
would be hugely grateful for any advice on how I might speed this up.
 
The table contains around 500k rows and the structure is as follows:
 
+-----------+--------------+------+-----+-------------------+----------------+
| Field     | Type         | Null | Key | Default           | Extra          |
+-----------+--------------+------+-----+-------------------+----------------+
| ID        | int(11)      |      | PRI | NULL              | auto_increment |
| siteid    | int(11)      |      | MUL | 0                 |                |
| sender    | varchar(255) |      |     |                   |                |
| subject   | varchar(255) |      | MUL |                   |                |
| message   | text         |      |     |                   |                |
| datestamp | timestamp    | YES  | MUL | CURRENT_TIMESTAMP |                |
| msgtype   | int(1)       |      | MUL | 0                 |                |
| isread    | int(1)       |      |     | 0                 |                |
+-----------+--------------+------+-----+-------------------+----------------+

I have indexes on siteid, datestamp and msgtype.

Queries such as the following are constantly appearing in the slow
queries log:

SELECT * FROM enquiries WHERE siteid = 59255 AND msgtype = 0 ORDER BY
datestamp DESC LIMIT 5;

An EXPLAIN on the above query returns:

id | select_type | table     | type | possible_keys  | key    | key_len | ref   | rows | Extra
 1 | SIMPLE      | enquiries | ref  | siteid,msgtype | siteid | 4       | const | 1940 | Using where; Using filesort

Shouldn't MySQL be using the datestamp index for sorting the records?
When I remove the ORDER BY clause the query is considerably faster.  Do
I need to do something to make sure it uses the index when sorting?
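A single composite index, with the equality filters first and the sort column last, would let MySQL satisfy both the WHERE and the ORDER BY from the index (index name is an assumption):

```sql
-- Rows sharing (siteid, msgtype) are stored in datestamp order within
-- this index, so ORDER BY datestamp DESC LIMIT 5 needs no filesort.
ALTER TABLE enquiries
  ADD INDEX siteid_msgtype_datestamp (siteid, msgtype, datestamp);
```

The single-column indexes on siteid, msgtype and datestamp cannot be combined here: MySQL picks one (siteid), then must sort the ~1940 matching rows.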

Any help will be greatly appreciated!

Regards

Simon




Re: Need advice on a good setup for generic queries

2009-07-13 Thread Simon J Mudd
mo...@fastmail.fm (mos) writes:

 At 08:06 PM 7/12/2009, Morten wrote:
 
 If you can get rid of the DateTime and switch to just Date it may
 speed up the indexes.

While not as pretty, it's more compact to convert timestamp values into
a bigint: for example, seconds since the epoch.  If you know the ranges
to put in the query then store them this way, saving some storage and
therefore improving performance. May be worth considering?
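MySQL can do the conversion in both directions, so queries stay readable even with epoch-seconds storage; for example:

```sql
-- Store: pack a datetime into seconds since the epoch.
SELECT UNIX_TIMESTAMP('2009-07-13 00:00:00') AS epoch_seconds;

-- Read: unpack it again for display.
SELECT FROM_UNIXTIME(UNIX_TIMESTAMP('2009-07-13 00:00:00')) AS as_datetime;
```

Note that both functions interpret values in the session time zone, so the stored integers depend on it.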

...

 These queries which involve easily indexable fields (status_id,
 assignee_id, company_id) and multiple conditions on different ranges
 are what's difficult. The table is about 2.500.000 records and grows
 at a daily rate of about 50.000 records (that number is growing
 though). Once an action has been closed, it gets status closed and
 is no longer of interest. 70% of the records in the table will be
 status closed.

As mentioned, if you are not interested in closed records, get rid of them:
put them in another table.

That reduces the number of rows and hence the query time.

Simon




Re: Replication, Stored Proceedures and Databases

2009-07-11 Thread Simon J Mudd
g...@primeexalia.com (Gary Smith) writes:

...

 In database G we have 150+ stored procedures.

150k stored procedures? Sounds rather large. Do you really need this?

 What's the best approach to fix this problem?  Is it as simple as adding the 
 appropriate USE statement inside of the stored procedure right before the 
 insert/update/delete/whatever?

I'd suggest row based replication. In your previous post you mentioned
you were using 5.1.35 so you can do that.  One of the reasons for
using RBR is precisely to make life clearer when replicating from one
server to another. The rows changed on the master will be changed on
the slave.  You don't need to depend on the effect of the stored
procedure on master and slave being the same.
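Switching to row-based replication is a one-line change on the master (in 5.1 the variable can also be set dynamically with SET GLOBAL, with some restrictions; a restart is the simplest route):

```ini
[mysqld]
# Log the row changes themselves rather than the statements that made
# them, so the slave applies exactly the same row images regardless of
# which database a stored procedure happened to be running in.
binlog_format = ROW
```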

Simon




Re: Indexing dynamics in MySQL Community Edition 5.1.34

2009-06-27 Thread Simon J Mudd
mo...@fastmail.fm (mos) writes:

 At 12:37 AM 6/25/2009, you wrote:

...

 my.cnf based on my-huge.cnf, expanding key_buffer to 8G,
 myisam_sort_buffer_size to 256M, and putting tmpdir on the fiber channel
 disk.
 
  You mean key_buffer_size don't you and not key_buffer? If you
 are using MySQL 5.0 then the upper limit is 4gb but 5.1 allows for
 more than 4gb.

That's not entirely true. Later versions of 5.0 (above 5.0.56?) also allow
key_buffer_size to be greater than 4GB and we are using that on
several machines. Earlier versions of 5.0 did indeed have this problem.

Simon





Should I be able to DELETE FROM ... WHERE .. IN ( ... ) with multiple a column key?

2009-05-27 Thread Simon J Mudd
This is in 5.0.68 and 5.1.34.

I'm trying to cleanup some old data in a table which looks like the following:

CREATE TABLE `transaction_history` (
 `customer_id` int(10) unsigned NOT NULL default '0',
 `transaction_id` int(10) unsigned NOT NULL default '0',
 `first_timestamp` datetime NOT NULL default '0000-00-00 00:00:00',
 `last_timestamp` datetime NOT NULL default '0000-00-00 00:00:00',
 PRIMARY KEY  (`transaction_id`,`first_timestamp`,`customer_id`),
 KEY `customer_id` (`customer_id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1

mysql> SELECT * FROM transaction_history LIMIT 10;
+-++-+-+
| customer_id | transaction_id | first_timestamp | last_timestamp  |
+-++-+-+
|  10 |   31536827 | 2009-01-22 13:25:26 | 2009-01-22 13:40:21 |
|  306636 |   31536827 | 2009-01-22 13:43:56 | 2009-01-22 13:44:02 |
|  10 |   31536827 | 2009-01-22 13:50:24 | 2009-01-22 13:50:46 |
|  306636 |   31536827 | 2009-01-22 13:50:53 | 2009-01-22 13:59:13 |
|  304142 |   31536827 | 2009-01-22 14:53:00 | 2009-01-22 14:53:00 |
|  306636 |   31536827 | 2009-01-22 15:03:59 | 2009-01-22 15:03:59 |
|  10 |   31536827 | 2009-01-22 15:06:15 | 2009-01-22 15:09:01 |
|  306636 |   31536827 | 2009-01-22 15:09:41 | 2009-01-22 15:10:32 |
|  10 |   31536827 | 2009-01-22 15:10:42 | 2009-01-22 15:19:48 |
|  306636 |   31536827 | 2009-01-22 15:30:41 | 2009-01-22 16:01:28 |
+-++-+-+
10 rows in set (0.02 sec)

I need to identify the rows to be deleted and was planning on doing something 
like:

mysql> EXPLAIN SELECT * FROM transaction_history WHERE 
(`transaction_id`,`first_timestamp`,`customer_id`) IN ( ( 31536827, '2009-01-22 
13:25:26', 10 ), ( 31536827, '2009-01-22 13:43:56', 306636 ) );
id | select_type | table               | type | possible_keys | key  | key_len | ref  | rows     | Extra
 1 | SIMPLE      | transaction_history | ALL  | NULL          | NULL | NULL    | NULL | 73181118 | Using where

1 row in set (0.00 sec)

As you can see MySQL is ignoring or not recognising the primary key
definition in the where clause and thus planning on doing a table scan.

The simple approach is recognised correctly:

mysql> EXPLAIN SELECT * FROM transaction_history WHERE 
(`transaction_id`,`first_timestamp`,`customer_id`) = ( 31536827, '2009-01-22 
13:25:26', 10 ) OR (`transaction_id`,`first_timestamp`,`customer_id`) = ( 
31536827, '2009-01-22 13:43:56', 306636 );
++-+---+---+--+-+-+--+--+-+
| id | SELECT_type | table | type  | possible_keys| key 
| key_len | ref  | rows | Extra   |
++-+---+---+--+-+-+--+--+-+
|  1 | SIMPLE  | transaction_history   | range | PRIMARY,customer_id  | 
PRIMARY | 16  | NULL |2 | Using WHERE |
++-+---+---+--+-+-+--+--+-+
1 row IN set (0.02 sec)

So is the format of the DELETE FROM .. WHERE ... IN ( ... )  clause I
propose valid and SHOULD the optimiser recognise this and be expected
to just find the 2 rows by searching on the primary key?
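Based on the two EXPLAIN outputs above, one workaround until the optimiser handles row-constructor IN lists is to expand the tuples into ORed row comparisons, which do use the primary key. A sketch of the corresponding DELETE (same hypothetical rows as the SELECTs, untested here):

```sql
-- Expand the row-constructor IN list into ORed row comparisons;
-- per the second EXPLAIN, these are resolved as a range scan on
-- the PRIMARY KEY instead of a full table scan.
DELETE FROM transaction_history
 WHERE (`transaction_id`, `first_timestamp`, `customer_id`)
         = (31536827, '2009-01-22 13:25:26', 10)
    OR (`transaction_id`, `first_timestamp`, `customer_id`)
         = (31536827, '2009-01-22 13:43:56', 306636);
```

For long candidate lists the OR chain can be generated programmatically, or the candidate keys can be loaded into a temporary table and deleted via a join.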

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Should I be able to DELETE FROM ... WHERE .. IN ( ... ) with a multiple column key?

2009-05-27 Thread Simon J Mudd
per...@elem.com (Perrin Harkins) writes:

 On Wed, May 27, 2009 at 6:43 AM, Simon J Mudd sjm...@pobox.com wrote:
  So is the format of the DELETE FROM .. WHERE ... IN ( ... )  clause I
  propose valid and SHOULD the optimiser recognise this and be expected
  to just find the 2 rows by searching on the primary key?
 
 Not according to the docs:
 http://dev.mysql.com/doc/refman/5.1/en/comparison-operators.html#function_in

I'm not sure that the reference makes anything clear. The statements
as written ARE valid SQL and, even though they contain multiple column
values, those values ARE constants.

The problem is I'm finding it hard to find a definitive reference to something
like this. I'll have to check my Joe Celko books to see if he mentions this.

Simon

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Replication config

2009-05-16 Thread Simon J Mudd
On Thu, May 14, 2009 at 04:45:44PM -0700, Scott Haneda wrote:

 It's true that initial mysql replication setup is a bit fiddly, but  
 once you've done it once or twice it's not so hard.
 
 I have it set up and working in test.  I will redo it again once I get  
 a better handle on it.  I am still a little confused on one aspect.   
 In the mysql sample cfg file, the section that has:
 #Replication Slave there is a very clear OR to use either #1 OR #2.
 
 I did the suggestions of #2, issuing
   #CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
   #MASTER_USER=<user>, MASTER_PASSWORD=<password>;
 on the slave.

Sounds fine.

 I also, in section [mysqld]
 # Begin slave config 05/14/2009
 server-id = 2
 master-host = ip.add.re.ss
 master-user = user-replicate
 master-password = xx
 master-port = 3306
 # End slave config

No, it's not necessary, as the information is stored in the master info file.

 Am I correct in that this is not needed.  I know I for certain need  
 server_id, but is that all I need, and I have redundant data?  I  
 figure also better to not have raw user and pass in a cnf file if it  
 is not needed.

The server-id IS needed and MUST be different on each server.

...

  log-bin = /usr/local/mysql/var/bin.log

This can be in the datadir just fine. If you server is very busy with updates
some people recommend putting this on a different filesystem to spread the I/O.
Depending on your setup that may or may not help. If you don't need it now
don't bother.

  log-slave-updates

Only needed if you have a daisy-chained replication environment.
Without it the slave will only log the commands run on the slave itself,
missing the commands replicated from the original master. If you want to make a
slave from the SLAVE server then without this option you won't pick up 
all the replication commands.

  auto_increment_increment = 10

Unless you are running master-master replication ignore this.
   
  replicate-do-db = somedbname1
  replicate-do-db = somedbname2

required if you don't want to replicate all the dbs on the server.

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Replication config

2009-05-16 Thread Simon J Mudd
On Fri, May 15, 2009 at 12:48:18AM -0700, Scott Haneda wrote:
 
 Also, how do I set the slave to be read only?  I set read-only in  
 my.cnf and it made all databases read only.


SET GLOBAL read_only = true;
and as you've done in the my.cnf file.

Unless the user has SUPER rights he can't change things in the database.

There are some minor exceptions:
- you can create temporary tables
- you can run ANALYZE TABLE 

These are normally not an issue.
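A minimal sketch of what this looks like in practice (table names hypothetical, error text approximate):

```sql
-- On the slave: make the server read-only for non-SUPER users.
SET GLOBAL read_only = ON;

-- A non-SUPER user now gets an error on writes, something like:
--   ERROR 1290 (HY000): The MySQL server is running with the
--   --read-only option so it cannot execute this statement
-- but temporary tables are still allowed:
CREATE TEMPORARY TABLE scratch (id INT);
```

Setting it at runtime as above, plus `read-only` in my.cnf, covers both the running server and restarts.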

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Replication config

2009-05-13 Thread Simon J Mudd
talkli...@newgeo.com (Scott Haneda) writes:

 Hello, I am confused about repliction setup.  Reading a config file,
 and the docs, leads me to believe this is an either code choice, pick
 #1 or #2.  If that is the case, why would I want to use #1 over #2?
 
 My confusion comes from several online references where there is a
 combination of #1 and #2 going on:
 
 # To configure this host as a replication slave, you can choose between
 # two methods :
 #
 # 1) Use the CHANGE MASTER TO command (fully described in our manual) -
 #the syntax is:
 #
 #CHANGE MASTER TO MASTER_HOST=<host>, MASTER_PORT=<port>,
 #MASTER_USER=<user>, MASTER_PASSWORD=<password>;
 #
 #where you replace <host>, <user>, <password> by quoted strings and
 #<port> by the master's port number (3306 by default).

Use this method. It works and is the correct way to do things. It will also keep
working if you stop and restart the server, with replication carrying on
from where it left off.

The procedure is quite simple:

1. Ensure binlogging is enabled on the master.
2. Ensure you setup grant permissions so the slave can connect to the master.
3. Configure on the slave the replication (which databases need to be 
replicated)
4. Get the master and slave in sync (via rsync, load/dump or whatever)
5. Run show master status on the master (assuming binlogging is enabled)
   to get the current position on the master
6. use CHANGE MASTER TO on the slave providing the appropriate permissions.
7. Run: START SLAVE
8. Use: show slave status\G to check how the replication is working,
   and adjust as necessary.

It's true that initial mysql replication setup is a bit fiddly, but once you've
done it once or twice it's not so hard.
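The numbered steps above map onto SQL roughly as follows. This is a sketch only: the host names, account, password and binlog coordinates are placeholders, not values from this thread.

```sql
-- On the master: create a replication account (step 2).
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'slave.example.com'
    IDENTIFIED BY 'secret';

-- On the master: note the current coordinates (step 5).
SHOW MASTER STATUS;   -- record the File and Position columns

-- On the slave: point it at the master (step 6) and start (step 7).
CHANGE MASTER TO
    MASTER_HOST='master.example.com',
    MASTER_PORT=3306,
    MASTER_USER='repl',
    MASTER_PASSWORD='secret',
    MASTER_LOG_FILE='mysql-bin.000001',   -- from SHOW MASTER STATUS
    MASTER_LOG_POS=98;                    -- ditto
START SLAVE;

-- Step 8: verify.
SHOW SLAVE STATUS\G   -- Slave_IO_Running / Slave_SQL_Running should be Yes
```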

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Replication config

2009-05-13 Thread Simon J Mudd
replying only to the list...

On Wed, May 13, 2009 at 10:19:21AM -0700, Scott Haneda wrote:

 3. Configure on the slave the replication (which databases need to  
 be replicated)
 
 This is where I need a little clarification, is the only thing I need  
 to do is adjust my.cnf to have in the [mysqld] section
 server-id = 2

That's the minimal configuration. You may need to specify which databases
need to be replicated or which tables. By default everything is replicated
which is probably fine.

 4. Get the master and slave in sync (via rsync, load/dump or whatever)
 
 Is this mandatory?  There is not a lot of data, hundred rows or so,  
 can I use LOAD DATA FROM MASTER; ?

I think that only works in MySQL 4, and I have never used it on our production
servers (5.0). Yes, checking the MySQL 5.0 documentation it says: 
http://dev.mysql.com/doc/refman/5.0/en/load-data-from-master.html

--quote--
12.6.2.2. LOAD DATA FROM MASTER Syntax

LOAD DATA FROM MASTER

This feature is deprecated. We recommend not using it anymore. It is
subject to removal in a future version of MySQL.
--quote--

If you've only got hundreds of rows then just do a normal mysqldump.
The problem is that if you have a database with GB or hundreds of GB of data
then this process is really slow, and if at the same time you can't afford
to stop your master then that makes life harder.

 Seems most instructions say to use a dump.  This does not make a lot  
 of sense to me, I am setting up replication, is it not the point to be  
 able to pull the data down?  Why does it need priming like this?

For 5.0 and above you can't use LOAD DATA FROM MASTER, so just
use the dump, and don't change the master while you are doing this.

 5. Run show master status on the master (assuming binlogging is  
 enabled)
   to get the current position on the master
 
 I can do this now, gives back a position.  It seems to change over  
 time.  Since it is a moving target, if I am using LOAD DATA FROM  
 MASTER; I take it I need to lock the tables while the first load is  
 happening?

If you're using 5.0 you shouldn't be using LOAD DATA FROM MASTER.
Do a mysqldump and load from that, or if you use some sort of unix with
snapshotting possibilities then make a (lvm) snapshot of the filesystem
and copy that. That's what we typically do at work and it leaves the
master down for just a second or so. The later copy can take place
while the master is running.
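One common way to coordinate the dump or snapshot with a binlog position, sketched as a general technique rather than what the poster describes, is to hold a global read lock for the duration of the snapshot:

```sql
-- Quiesce writes and record the binlog position before taking the
-- dump or LVM snapshot; keep this window as short as possible.
FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;   -- record File/Position for the later CHANGE MASTER TO
-- ... take the snapshot (or start mysqldump) from another session here ...
UNLOCK TABLES;
```

The recorded File/Position pair is exactly what the slave's CHANGE MASTER TO needs, so the slave starts replicating from the moment of the snapshot with no gaps.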

 6. use CHANGE MASTER TO on the slave providing the appropriate  
 permissions.

This just tells the slave where to start replicating from. That is
which statements or rows in the binlog to download from the master
and apply on the slave.

 7. Run: START SLAVE

This starts the replication process.

 While not part of my plan, if the master goes down and I want to start  
 using the slave as the master while I am fixing the master server
 
 What is the best way to do this?  Can the slave be treated like a  
 master by just pointing any client to the slave assuming I set a user  
 to allow it?

You can do this from the point of view of the database users but then
the slave will be more up to date than the master and if you've not
configured things properly and don't have the right information you
won't be able to get the master back in sync.

So you can't just switch between boxes without taking special care.

 With the slave temporarily becoming the master, the data will of  
 course change.  When I bring the master back online, what is the best  
 way to reverse sync and get back to where I was?  Probably take the  
 entire thing thing down, copy the database from the current temp live  
 slave that has been used as a master, and go from there?

In the end that may be necessary. You can configure master / master
replication but as I said you have to be careful with this as it can
be quite critical how you actually setup your tables. If you don't do
things correctly it won't work.  I think it is documented however in
the MySQL manual so I'd suggest you read that.

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: splitting large tables vertically

2009-05-10 Thread Simon J Mudd
kimky...@fhda.edu (Kyong Kim) writes:

 I was wondering about a scale out problem.
 Lets say you have a large table with 3 cols and 500+ million rows.
 
 Would there be much benefit in splitting the columns into different tables
 based on INT type primary keys across the tables?

To answer your question properly requires more information:

1. Expected table structure. Can you show the current CREATE TABLE xxx\G output?
2. Expected use cases to extract data?
3. Do you expect to delete data frequently, or are you only inserting data,
   or is there a mix of inserts and deletes? If so provide more info.

I've come across situations where a large table like this caused lots
of problems. There were lots of concurrent delete batches (cleaning
up) and at the same time lot of inserts. At the same time there were
large groups of selects to collect certain sets of data for
presentation. Perhaps you are doing something similar? If you do
something similar you may find that it's extremely important to get
the keys right especially the primary keys so that data retrieval (for
SELECTs or DELETEs) is as fast as possible (using clustered indexes
[PRIMARY KEY in InnoDB]). If not, or if the queries overlap, you may
find performance degradation a big issue as InnoDB manages the locks
to ensure that the concurrent statements don't interfere.

You can also use merge tables sitting on top of MyISAM per year or
per whatever data in each table. That avoids you having to find data
for 2009 as you look in table xxx_2009, so this can be a big
win. MyISAM has the inconvenience that if the server ever crashes,
recovery of these tables can be very time-consuming. InnoDB has a
larger footprint for the same data.

So it's hard without more information on the structure and the use
cases to answer your question. In fact if you have the time, try out
and benchmark different approaches and see which is best for your
requirements. Just remember that as the data grows the initial
measurements may not be consistent with behaviour you see later. Also
if you are looking at a large amount of data like this appropriate
server tuning can influence performance significantly.

Hope this helps.

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Still going in cicrles

2009-05-10 Thread Simon J Mudd
compu...@videotron.ca (michel) writes:

 I set up MySQL and when I try to start it it fails telling me that I
 need to run 'mysql_upgrade'.

Show us the full error output and provide information on the version
of MySQL you are using.

 When I run 'mysql_upgrade' it runs
 'mysqlcheck' which is supposed to only be run when the server works
 ...

No, mysql_upgrade does call mysqlcheck to see if things need
adjusting. Look at the documentation
http://dev.mysql.com/doc/refman/5.1/en/mysql-upgrade.html

Look at http://dev.mysql.com/doc/refman/5.1/en/upgrade.html which
gives information on upgrading MySQL. However be careful as IMO this
documentation can be a bit confusing and is incomplete. Nevertheless
it's a good starting point.

If however you don't think you are upgrading then you need to provide
more information on how you are installing MySQL so we can determine
why MySQL thinks that it needs to do an upgrade. My guess would be
that you have multiple mysql binaries on your server and are not
running the version you expect.

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Still going in cicrles

2009-05-10 Thread Simon J Mudd
compu...@videotron.ca (michel) writes:

 I might have it fixed! I tried /home/qsys/mysql-5.1.32/libexec/mysqld
 and I would get
 
 /home/qsys/mysql-5.1.32/libexec/mysqld: Table 'mysql.plugin' doesn't exist
 
 090510 0:19:54 [ERROR] Can't open the mysql.plugin table. Please run
 mysql_upgrade to create it.
 
 090510 0:19:54 [ERROR] Fatal error: Can't open and lock privilege
 tables: Table 'mysql.host' doesn't exist

So you had built the binaries but didn't have an initial mysql database
created?

 From reading around I tried running
 
 /home/qsys/mysql-5.1.32/bin/mysql_install_db
 
 /home/qsys/mysql-5.1.32/libexec/mysqld
 
 and now I get
 
 090510 0:32:38 [Note] Event Scheduler: Loaded 0 events
 
 090510 0:32:38 [Note] /home/qsys/mysql-5.1.32/libexec/mysqld: ready
 for connections.
 
 Version: '5.1.32' socket: '/home/qsys/mysql-5.1.32/mysql.sock' port:
 3305 Source distribution

This looks correct.
 
 
 
 I would suggest that the developers team might change the error messages.

Indeed, I'll create a bug report for this.

http://bugs.mysql.com/44765

Simon


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: splitting large tables vertically

2009-05-10 Thread Simon J Mudd
kimky...@fhda.edu (Kyong Kim) writes:

 I don't have all the details of the schema and workload. Just an
 interesting idea that was presented to me.
 I think the idea is to split a lengthy secondary key lookup into 2 primary
 key lookups and reduce the cost of clustering secondary key with primary
 key data by using a shorter INT type surrogate key. Another downside is
 the possible need of foreign keys and added complexity of insertions and
 multi-column updates.
 
 Have you found primary key lookups to be at least twice as fast as
 secondary key lookups with VARCHAR type primary key in InnoDB? The whole
 idea is based on the assumption that it is.

That's why you really need to be more precise in the data structures
you are planning on using. This can change the results significantly.

So no, I don't have any specific answers to your questions as you don't
provide any specific information in what you ask.

 Also, MyISAM conversion is an option too. Have you found the table
 maintenance to be a significant overhead? I've experienced MyISAM table
 corruptions in production and I'm more inclined to go with InnoDB for its
 reliability. This is a fairly important table.

Disk (and memory) usage can also be important, so since InnoDB
storage is less efficient this may actually degrade
performance. Until you are more concrete it's hard to say what will
work best for you.

Simon


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: innodb rollback 30x slower than commit normal?

2009-05-09 Thread Simon J Mudd
nik...@doppelganger.com (Nikita Tovstoles) writes:

 We have a java-based webapp that talks to MySQL 5.1 INNODB in READ_COMMITTED. 
 We use Hibernate and optimistic concurrency, so periodically concurrent write 
 attempts cause app-level Exceptions that trigger rollbacks (and then we retry 
 tx). We've added app-level caching and turned down our tomcat NIO thread 
 count to just 8 (very little contention inside the app) but now we're seeing 
 that rollbacks appear to be up to 30x slower than commits?! Is that normal?
 
 Here's a typical TX:
 
 Set autocommit=0;
 Select * from users where name=bob;
 Update users set visit_count=X where id=bobId and version=Y
 Commit;
 Set autocommit=1;
 
 When this tx is executed about 100 times/sec, appserver latency is about 
 10-15 ms per http request (including db time). However, when instead of 
 commit a 'rollback' is issued, the latency spikes to 600-1100 ms (nearly all 
 of that time in appserver appears to be spent waiting on db).
 
 So is that expected cost of a rollback?

InnoDB is heavily optimised on the assumption that a transaction will
commit successfully. It is not optimised for rollback,
and as such a rollback *IS* very expensive.

I've seen similar behaviour on some servers I use at work so what you
are seeing is I think normal.

 Can anything be done to speed it up?

I'm not aware of anything so I think you have to accept it and make
sure that where possible you try to avoid situations where you need to
rollback. That's not always possible of course but sometimes the scope
of the transaction can be narrowed and that should help a bit.

However in your example you could easily do a single atomic update
involving the SELECT and UPDATE.  That would be much easier as you
would either run the combined UPDATE or not. Perhaps that would work
for you?
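A sketch of that combined statement for the example transaction above. The increment of visit_count and version is an assumption about the app's intent; the original sets visit_count to a computed value X:

```sql
-- Combine the SELECT and UPDATE into one atomic statement; zero
-- affected rows means the optimistic-concurrency check failed and
-- the application can retry without ever issuing a rollback.
UPDATE users
   SET visit_count = visit_count + 1,
       version     = version + 1
 WHERE id = @bobId            -- placeholder for the user's id
   AND version = @expected;   -- placeholder for the version read earlier
-- Check the affected-rows count (ROW_COUNT() or the client API's
-- equivalent) to see whether this update won the race.
```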

Simon


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Help with mysql query, multiple list

2009-05-09 Thread Simon J Mudd
abhishek@gmail.com (Abhishek Pratap) writes:

 I am kind of stuck with this query, can't expand my thinking. Maybe this is a
 limitation. Here it is:
 
 I have a database with many cols two of which are start and end position for
 an event.
 
 Now I have a list of event time stamps, I want to find all the info once the
 current event time stamp is >= start time of event and <= end time of event.
 
 something  like this
 
 select * from table_name where start <= ( LIST of time stamps) AND end >= (
 list of time stamps).
 
 Clearly above query accepts only one item in the list. Is there a way to do
 this for multiple items in the list ??? I can't think of anything at this
 moment.

You said in a later post that you have thousands of events. If they are already 
in a table then use that, otherwise put the events into a temporary table
and join the 2 tables together.

Something like this simple example:

mysql> select * from events;
+---------------------+
| event_ts            |
+---------------------+
| 2009-05-09 10:29:00 |
+---------------------+
1 row in set (0.00 sec)

mysql> select * from table_name;
+----+---------------------+---------------------+
| id | start_ts            | end_ts              |
+----+---------------------+---------------------+
|  1 | 2009-05-09 10:00:00 | 2009-05-09 11:00:00 |
|  2 | 2009-05-10 10:00:00 | 2009-05-10 11:00:00 |
+----+---------------------+---------------------+
2 rows in set (0.00 sec)

mysql> select t.* from table_name t, events WHERE event_ts >= start_ts and
event_ts <= end_ts;
+----+---------------------+---------------------+
| id | start_ts            | end_ts              |
+----+---------------------+---------------------+
|  1 | 2009-05-09 10:00:00 | 2009-05-09 11:00:00 |
+----+---------------------+---------------------+
1 row in set (0.00 sec)

Hope this helps.

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: Oracle , what else ?

2009-04-21 Thread Simon Connah

On 21 Apr 2009, at 14:06, Gilles MISSONNIER wrote:


hello people,
bad joke is not it ?

After MySQL bought by the java maker,
and now Sun bought by Oracle,

what are we gonna run as RDBMS ?


I don't see what the problem is really. Anyway if there ever is a  
problem in the future (which I doubt) there is always PostgreSQL to  
fall back on.


Simon.



Trying to work out why a join query is so slow

2009-02-01 Thread Simon Kimber
Hi Everyone, 

I'm trying to run a very simple query on two joined tables but it's taking a 
long time to run. 

I have two tables, users and sites, both of which have an email address field 
that I'm querying. 

here's my query: 

SELECT * 
FROM sites 
INNER JOIN users ON sites.userid = users.ID 
WHERE sites.email = 'per...@domain.com' 
OR users.email = 'per...@domain.com' 

both tables contain over 100k rows. users.ID is a primary key, and 
sites.userid, sites.email and users.email all have indices. 

The query above is taking over 3.3 seconds to run, but if i only use one of the 
where clauses, ie. I only search on users.email or I only search on 
sites.email, the query takes around 0.002 seconds to run. 

As soon as I try and run the query with BOTH where clauses it takes 
exponentially longer! 

Can anyone suggest what might be the problem or how I could rewrite the query 
to significantly speed it up? 
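One common rewrite for an OR condition that spans two joined tables is to split the query into a UNION so that each half can use its own index. A sketch, not a tested fix for this schema:

```sql
-- Each branch filters on exactly one indexed column, so both can use
-- the email indexes; UNION (without ALL) removes any duplicate rows
-- that match both conditions.
SELECT *
  FROM sites INNER JOIN users ON sites.userid = users.ID
 WHERE sites.email = 'per...@domain.com'
UNION
SELECT *
  FROM sites INNER JOIN users ON sites.userid = users.ID
 WHERE users.email = 'per...@domain.com';
```

With OR across two tables the optimiser often cannot use either index and falls back to scanning one side of the join, which would explain the 3.3-second timing.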

Thanks! 

Simon


Re: Problem with MySQL prompt

2008-12-22 Thread Simon J Mudd
prajapat...@gmail.com (Krishna Chandra Prajapati) writes:

 You are running three mysql instance on single server. You can have three
 my.cnf say my.cnf, my1.cnf, my2.cnf with different port and socket and other
 information in them. In this way you can set the prompt for different
 instance.

It's a shame that the prompt can't be defined dynamically based on
characteristics of the current connection.  That would make it much
easier to distinguish which db instance you are talking to and only
have a single configuration.

Is there not an entry in bugs.mysql.com for this? I had a quick look
but couldn't find one.

Simon


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql?unsub=arch...@jab.org



Re: MySQl and LVM

2008-12-01 Thread Simon J Mudd
[EMAIL PROTECTED] (Shain Miley) writes:

 I am trying to  plan we in advance our methods for backup and recovery
 of our new MySQL replication cluster.  After doing some research it
 looks like a lot of people are using LVM snapshots as their backup
 solution.  We currently have two MySQL servers with 2 300 GB (Raid 1).
 What I am confused about is the best disk layout to use at this point.

 Do I need to create a separate volume for the snapshots?  Can anyone
 provide any suggestions on disk layout for two disks of this size?

The filesystem layout is largely irrelevant. Basically what you want
is to have all your mysql files on a separate LVM filesystem. So you
could do this by creating a new filesystem and mounting it at
/var/lib/mysql. Then install MySQL.

Once you have the filesystem mounted you can use mysql as normal.

To take snapshots do the following:

1. stop mysql
2. make a snapshot LV of the volume mounted at /var/lib/mysql
3. start mysql
4. mount the snapshot and back it up to a real filesystem or to tape or 
whatever.
5. unmount the snapshot and remove it.

Step 4 is very important: if you don't do this, the snapshot will
eventually run out of space and suddenly lose its contents. It's
only a temporary staging area.

Hope this helps.

Simon


-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Virtualizing MySQL

2008-11-19 Thread Simon J Mudd
[EMAIL PROTECTED] (Shain Miley) writes:

 I am looking into the idea of setting up 10 - 15 virtualized instances
 of MySQL.  The reason for this is as follows...we are going to be
 setting up a 3 to 4 node MySQL replication cluster (1 master-rw and 2
 slaves-ro)...each having 16 to 32 GB of RAM.

 In order for our development team to do their work...they must have
 access to some Mysql resources that are close to the production
 environment.  I am not currently in a position to provide each
 developer two MySQL servers (one master and one slave with 16 to 32 GB
 of RAM) for testing...or obvious reasons...mainly cost ;-)

 So I have been thinking about how best to provide such resources,  at
 this  point I am thinking that I can use OpenVZ to help me out a bit.

 I was wondering if anyone had any thoughts on this issue...should I
 just run 10 instances of MySQL on the same server...are there other
 options?

 I am concerned with trying to ensure that the metrics, resources,
 workloads, etc from these development servers has some sort of
 relevance to our production environment...otherwise we are testing
 apples and oranges...which the dev team will clearly point out...and
 in a way I know we are...but I would like to minimize the effects

My only concern would be that if you have busy mysql instances that
they will interfere with each other. We used to have a couple of busy
mysqld processes running on the same Linux server only to find that
the performance characteristics were worse than 1/2 of the performance
of having each instance on a separate server. Both mysqld instances
were busy and so fought each other for I/O and for CPU often at the
same time. If this might be an issue for you, virtual servers may not
be an ideal solution, as most of the free virtualisation options don't
sufficiently control the hardware resources allocated to each
virtual machine.

YMMV.

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Server Setup Question

2008-11-12 Thread Simon J Mudd
[EMAIL PROTECTED] (Shain Miley) writes:

 Hello all,
 I was wondering if anyone had any good insight into running  the 32
 bit and  64 bit versions of MySQL?  We are going to be using a
 replication setup within my organization very shortly.  We intend to a
 have at least one master (writable) DB and several (let's say 3 for
 this excersise ) read-only DB's.

 One suggestion that I got was to use 64 bit version of MySQL so that
 we can make better use of our servers memory as we are using servers
 that have 16 - 32 GB of RAM.

Yes, use the x86_64 bit version as you won't be limited in memory by
the 32-bit architecture. MySQL seems to work pretty well with the
32-bit version but using it with more than 4GB of RAM is going to be a
problem. The 64-bit version doesn't have any trouble with 32GB (not
tried more).

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Why different engines in one database?

2008-10-12 Thread Simon J Mudd
[EMAIL PROTECTED] (hezjing) writes:

 When and why we create tables in different storage engines within a same
 database?
 
 Take for example a normal inventory application that provides CRUD operation
 to
 - USER table (e.g. create new user)
 - STOCK table (e.g. when there is new stock arrives)
 - CUSTOMER table (e.g. create new customer)
 - SALE table (e.g. when a stock is bough by a customer)
 
 I think it is always a best choice to use InnoDB since many applications are
 transactional.

For transactional stuff, yes, InnoDB is probably best.

 How would one wants to create a USER table in MyISAM engine and SALE table
 in InnoDB engine?
 
 Can you give some example?

MyISAM is [almost] the original table type that came with MySQL so
it's still supported. It also has a smaller footprint on the
filesystem than InnoDB. There are a few things you can do with MyISAM
which can't be done with InnoDB (merge tables[1] comes to mind) and
therefore it can sometimes be better to use a different storage
engine.

As long as you are aware of the advantages and limitations of the
different engines you should be fine.

Simon

[1] If your sales table was huge it might make sense to have a sales table by 
month:

sales_200810
sales_200809
sales_200808
...

(all the above tables HAVE to be MyISAM tables) and use a merge table
sales_all being a combination of the above tables. Many people might
suggest using a view for this but the implementation in MySQL of merge
tables is more efficient than views which is why it's frequently used.
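A sketch of the monthly layout described in [1]. Table and column definitions are hypothetical; the point is the ENGINE=MERGE declaration over identically structured MyISAM tables:

```sql
-- One MyISAM table per month (all must have identical structure).
CREATE TABLE sales_200809 (
    id     INT NOT NULL,
    amount DECIMAL(10,2)
) ENGINE=MyISAM;
CREATE TABLE sales_200810 LIKE sales_200809;

-- The merge table spans them; INSERT_METHOD=LAST routes inserts
-- to the most recent underlying table.
CREATE TABLE sales_all (
    id     INT NOT NULL,
    amount DECIMAL(10,2)
) ENGINE=MERGE UNION=(sales_200809, sales_200810) INSERT_METHOD=LAST;
```

Rolling a month off is then just an ALTER TABLE on sales_all to change the UNION list, with no mass DELETE required.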

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Master-master setup

2008-10-09 Thread Simon J Mudd
[EMAIL PROTECTED] (Carl) writes:

 I am running 5.0.24a on Slackware Linux.  I would like to set up a 
 master-master replication process so that I can use both servers as master as 
 add/delete/update records on both servers from different application servers 
 (Tomcat.)  I suspect the inserts will be OK but don't understand how the 
 edits and deletes would work (primary key is autoincrement):
 
                                    (Serial)   (Serial)
 Transaction                        Server A   Server B
 Add to server A                    1
 Replicated                                    1
 Add to server A                    2
 Add to server B (before record 2              2
   is replicated)
 Replicate to server B                         ?
 Replicate to server A              ?
 Does replication control the order in which transactions are applied so that 
 somehow the replication from server A to server B is applied before the 
 insert to server B?

You need to set 2 variables to ensure you don't have problems.

# when you have 2 master servers
auto_increment_increment = 2
# each server has a different offset (values in this case 1,2)
auto_increment_offset= 1

This way each master will generate unique ids

Note: doing this means that you will get gaps in your ids as each
server uses its own value to generate new ids and these increment by
auto_increment_increment every time.

Be aware that if the updates to the tables are very frequent it's
quite possible that replication delay may mean that the data on both
servers is not the same. The only way to ensure that this is avoided
is to use explicit WHERE id IN (1,3,43,5,...,nn) clauses so that you
are absolutely certain that the changes applied on one master will be
produced on the other one.
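The two my.cnf settings above can be sketched as runtime variables to see the effect (server A shown; server B would use auto_increment_offset = 2):

```sql
-- Server A: generate only odd ids.
SET GLOBAL auto_increment_increment = 2;
SET GLOBAL auto_increment_offset    = 1;

-- Server A now hands out 1, 3, 5, ... while server B (offset 2)
-- hands out 2, 4, 6, ..., so rows inserted independently on either
-- master can never collide on an auto-generated primary key.
```

Note that SET GLOBAL only affects new connections and is lost on restart, so for a real master-master setup the values belong in my.cnf as shown above.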

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: C api - mysql_list_fields

2008-10-07 Thread Simon J Mudd
[EMAIL PROTECTED] (Mike Aubury) writes:

 I'm probably being a bit stupid - but I'm trying to determine (in code) the 
 length of the string in the schema for a given table.
 
 So - for example : 
 
 
   create table a (
   blah char(20)
   )
 
 
 I want to return '20', but I'm getting '60' when I use mysql_list_fields..
 (Always seems to be 3x longer than I'm expecting)...
 
 Am I missing something ? (or should I just divide by 3!!)

Is the table or database using UTF-8? If it is, I think MySQL will
allocate space for each character and is forced to reserve 3x20 bytes,
since a utf8 character can be up to 3 bytes long. SHOW CREATE
TABLE a\G should show if this is the case.
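If UTF-8 does turn out to be the cause, the character-vs-byte distinction can also be checked directly in SQL (a sketch; the table and column names come from the example above):

```sql
-- CHAR_LENGTH counts characters, LENGTH counts bytes:
SELECT CHAR_LENGTH(blah) AS chars, LENGTH(blah) AS bytes FROM a;
-- For a utf8 column the byte count can be up to 3x the character count.
```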

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Consulting

2008-09-30 Thread Simon J Mudd
[EMAIL PROTECTED] (Database System) writes:

 There were no any error message on console or in error log.
 
 The symptoms are
 1) the binary log file goes to /data/mysql/data/ dir, not as defined in 
 /data/mysql/log/
 2) the binary log files name start with mysql-bin, not as I defined in 
 my.cnf
 
 I created directories as following and changed the owner to mysql account
 /data/
 /data/mysql/
 /data/mysql/log/
 /data/mysql/data/

Perhaps rather late for a reply to this thread but, ...

datadir = /data/mysql/data
log-bin = /data/mysql/log/binlog

will do what you want. Don't change basedir.

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Configuration and improvement advice.

2008-07-18 Thread Simon J Mudd
[EMAIL PROTECTED] (Josh Miller) writes:

 I have recently become responsible for a LAMP site which has a decent
 MySQL install (v5.0.24a).  The database is around 40GB with a single
 master to single slave replication scheme, although all activity goes
 to the master at this time, with the exception of backups which are
 taken from the slave.
 
 I have several tables which are fairly large, one has 120 million
 records, and I need to migrate these tables to InnoDB from MyISAM to
 reduce the number of table locks that occur on a daily basis which
 bring down the site's performance.
 
 What is the best way to perform this migration?  Should I simply take
 an outage and alter table to set the engine type to InnoDB, or should
 I rename the table, and select into a new table?

It depends on the table's usage. If you can, then the ideal approach
might be to create the new table with a temporary name and fill it in
the background, finally updating any values that changed during the
process. This might be time-consuming for 120,000,000 rows but
may work. If you use replication, beware of the delays that may arise
from doing this in anything but small enough chunks.
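A sketch of the background-copy approach described above; the table name, column name, and chunk boundaries are all hypothetical:

```sql
CREATE TABLE big_table_new LIKE big_table;
ALTER TABLE big_table_new ENGINE=InnoDB;

-- Copy in modest chunks so replication doesn't fall far behind:
INSERT INTO big_table_new SELECT * FROM big_table WHERE id BETWEEN 1 AND 1000000;
-- ... repeat for later ranges, re-apply rows changed during the copy, then swap:
RENAME TABLE big_table TO big_table_old, big_table_new TO big_table;
```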

 What are the upper limits of MySQL performance in terms of data set
 size using MyISAM vs InnoDB?

Be careful: the InnoDB footprint of this table may be much larger than
your existing MyISAM footprint. I've seen issues with this especially
as you'll need to adjust the innodb_buffer_pool_size and
key_buffer values during this process. Thus you _may_ suffer a
performance problem, not because of the engine change but because of
the increased memory requirements. Consider also the use of
innodb_file_per_table which makes the resulting files easier to
manage.

A different solution might be to make a new slave, convert the
table(s) on the slave to InnoDB, finally promoting it to be the new
master. You'd also need to rebuild your existing slave. This avoids
downtime to the site except for the master switchover period. It also
gives you time to tweak all values while doing the conversion from
MyISAM to InnoDB.

Hope this helps.

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: revoke all on *.* ??

2008-06-10 Thread Simon J Mudd
[EMAIL PROTECTED] (Pawel Eljasz) writes:

 there is a user with following grants:
 GRANT USAGE ON *.* TO 'ff'@'localhost' IDENTIFIED BY PASSWORD
 'x'
 is it possible to:
 revoke all on *.* from $above_user  OR revoke usage on *.* from $above_user
 ?

DROP USER [EMAIL PROTECTED];

Simon

-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large import into MYISAM - performance problems

2008-06-05 Thread Simon Collins
I'm loading the data through the command below

mysql -f -u root -p enwiki < enwiki.sql


The version is MySQL 5.0.51a-community

I've disabled the primary key, so there are no indexes. The CPU has 2 
cores and 2 Gigs memory.


The import fell over overnight with a table full error as it hit 1T (I 
think this may be a file system problem). As it's no longer importing, 
show status isn't going to provide any interesting info; however, 
I did notice that mysql was not consuming much CPU time ~ 10%.


I wouldn't like to split the data up into separate tables as it would 
change the schema and I'm not in charge of the schema design - just the 
DBA at the backend.


Cheers

Simon

mos wrote:

Simon,
As someone else mentioned, how are you loading the data? Can you post 
the SQL?


You have an Id field, so is that not the primary key? If so, the 
slowdown could be maintaining the index. If so, add up to 30% of your 
available ram to your key_bufer_size in your my.cnf file and restart 
the server. How much RAM do you have on your machine and how many 
CPU's do you have? What version of MySQL are you using? Also can you 
post your Show Status output after it has started to slow down? How 
much CPU is being used after the import slows down?


Now from what you've said, it looks like you are using this table as a 
lookup table, so if it just has an id and a blob field, you probably 
return the blob field for a given id, correct? If it were up to me, I 
would break the data into more manageable tables. If you have 100 
million rows, then I'd break it into 10x10 million row tables. Table_1 
would have id's from 1 to 9,999,999, and table_2 with id's from 10 
million to 10,999,999 etc. Your lookup would call a stored procedure 
which determines which table to use based on the Id it was given. If 
you really had to search all the tables you can then use a Merge table 
based on those 10 tables. I use Merge tables quite a bit and the 
performance is quite good.
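The split-table scheme Mike describes could look roughly like this; all names are hypothetical, and note that a MERGE table requires identical underlying MyISAM tables and does not enforce uniqueness across them:

```sql
CREATE TABLE pages_1 (
  id INT UNSIGNED NOT NULL,
  body MEDIUMTEXT,
  PRIMARY KEY (id)
) ENGINE=MyISAM;
CREATE TABLE pages_2 LIKE pages_1;   -- ... and so on up to pages_10

-- A MERGE table allows searching across all the underlying tables:
CREATE TABLE pages_all (
  id INT UNSIGNED NOT NULL,
  body MEDIUMTEXT,
  KEY (id)
) ENGINE=MERGE UNION=(pages_1, pages_2) INSERT_METHOD=LAST;
```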


Mike

At 11:42 AM 6/4/2008, you wrote:

Dear all,

I'm presently trying to import the full wikipedia dump for one of our 
research users. Unsurprisingly it's a massive import file (2.7T)


Most of the data is importing into a single MyISAM table which has an 
id field and a blob field. There are no constraints / indexes on this 
table. We're using an XFS filesystem.


The import starts off quickly but gets increasingly slower as it 
progresses, starting off at about 60G per hour; now that the MyISAM 
table is ~1TB it's slowed to a load of about 5G per hour. At this 
rate the import will not finish for a considerable time, if at all.


Can anyone suggest to me why this is happening and if there's a way 
to improve performance. If there's a more suitable list to discuss 
this, please let me know.


Regards

Simon






--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large import into MYISAM - performance problems

2008-06-05 Thread Simon Collins
I can do - if the load data infile command definitely improves 
performance and splitting the file does the same, I have no problem with 
doing this. It just seems strange that the problem lies with the way the 
import file is constructed. I thought the problem would somehow be with 
the table getting bigger.


Regards

Simon


Ananda Kumar wrote:

Simon,
Why dont u split the file and use LOAD DATA INFILE command which would
improve the performance while loading into an empty table with keys
disabled.

regards
anandkl


On 6/5/08, Simon Collins [EMAIL PROTECTED] wrote:
  

I'm loading the data through the command below mysql -f -u root -p enwiki <
enwiki.sql

The version is MySQL 5.0.51a-community

I've disabled the primary key, so there are no indexes. The CPU has 2 cores
and 2 Gigs memory.

The import fell over overnight with a table full error as it hit 1T (I
think this may be a file system problem). As it's no longer importing,
show status isn't going to provide any interesting info; however, I
did notice that mysql was not consuming much CPU time ~ 10%.

I wouldn't like to split the data up into separate tables as it would
change the schema and I'm not in charge of the schema design - just the DBA
at the backend.

Cheers

Simon

mos wrote:



Simon,
As someone else mentioned, how are you loading the data? Can you post the
SQL?

You have an Id field, so is that not the primary key? If so, the slowdown
could be maintaining the index. If so, add up to 30% of your available ram
to your key_bufer_size in your my.cnf file and restart the server. How much
RAM do you have on your machine and how many CPU's do you have? What version
of MySQL are you using? Also can you post your Show Status output after it
has started to slow down? How much CPU is being used after the import slows
down?

Now from what you've said, it looks like you are using this table as a
lookup table, so if it just has an id and a blob field, you probably return
the blob field for a given id, correct? If it were up to me, I would break
the data into more manageable tables. If you have 100 million rows, then I'd
break it into 10x10 million row tables. Table_1 would have id's from 1 to
9,999,999, and table_2 with id's from 10 million to 10,999,999 etc. Your
lookup would call a stored procedure which determines which table to use
based on the Id it was given. If you really had to search all the tables you
can then use a Merge table based on those 10 tables. I use Merge tables
quite a bit and the performance is quite good.

Mike

At 11:42 AM 6/4/2008, you wrote:

  

Dear all,

I'm presently trying to import the full wikipedia dump for one of our
research users. Unsurprisingly it's a massive import file (2.7T)

Most of the data is importing into a single MyISAM table which has an id
field and a blob field. There are no constraints / indexes on this table.
We're using an XFS filesystem.

The import starts off quickly but gets increasingly slower as it
progresses, starting off at about 60G per hour; now that the MyISAM table is
~1TB it's slowed to a load of about 5G per hour. At this rate the import
will not finish for a considerable time, if at all.

Can anyone suggest to me why this is happening and if there's a way to
improve performance. If there's a more suitable list to discuss this, please
let me know.

Regards

Simon




  

--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]





  



--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Large import into MYISAM - performance problems

2008-06-05 Thread Simon Collins

Olaf, Mike

Thanks for the input, the blob data is just text, I'll have a go at 
using the load data command


Regards

Simon

mos wrote:

At 10:30 AM 6/5/2008, you wrote:

Simon,

In my experience load data infile is a lot faster than a sql file through
the client.
I would parse the sql file and create a csv file with just the columns of
your table and then use load data infile using the created csv file

Olaf


Olaf,
   Using Load Data on an empty table is at least 10x faster than 
using SQL inserts. But I thought his blob field was binary (images), in 
which case Load Data won't work. If his blob field is plain text, then 
of course Load Data will work nicely. :)
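The LOAD DATA route being recommended in this thread could look like the following; the file path, table name, and column names are made up for illustration:

```sql
-- Keys are already disabled in this case, but in general:
ALTER TABLE enwiki_text DISABLE KEYS;

LOAD DATA INFILE '/data/import/enwiki_part01.csv'
  INTO TABLE enwiki_text
  FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
  LINES TERMINATED BY '\n'
  (id, body);

-- ... load the remaining split files, then rebuild the indexes once:
ALTER TABLE enwiki_text ENABLE KEYS;
```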


Mike




--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Large import into MYISAM - performance problems

2008-06-04 Thread Simon Collins

Dear all,

I'm presently trying to import the full wikipedia dump for one of our 
research users. Unsurprisingly it's a massive import file (2.7T)


Most of the data is importing into a single MyISAM table which has an id 
field and a blob field. There are no constraints / indexes on this 
table. We're using an XFS filesystem.


The import starts off quickly but gets increasingly slower as it 
progresses, starting off at about 60G per hour; now that the MyISAM table 
is ~1TB it's slowed to a load of about 5G per hour. At this rate the 
import will not finish for a considerable time, if at all.


Can anyone suggest to me why this is happening and if there's a way to 
improve performance. If there's a more suitable list to discuss this, 
please let me know.


Regards

Simon

--
Dr Simon Collins
Data Grid Consultant
National Grid Service
University of Manchester
Research Computing Services
Kilburn Building
Oxford Road
Manchester
M13 9PL

Tel 0161 275 0604


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: external mysqldump

2008-02-20 Thread Simon Elliston Ball

Ah!

What you actually need is for the users to have SELECT access to 
mysql.proc.


GRANT SELECT ON mysql.proc TO user@'%'  identified by 

Of course this is something of a security risk as well, in that any  
user could see any other user's functions and stored procedures, but it  
will achieve the permissions you need without giving away all the data  
as well.


Simon

Simon Elliston Ball
[EMAIL PROTECTED]



On 20 Feb 2008, at 12:03, Andre Hübner wrote:

i tried, but always got error: ERROR 1221 (HY000): Incorrect usage  
of DB GRANT and GLOBAL PRIVILEGES
i followed this and found a former discussion. It seems to be not  
possible to grant this privilege with that line.

http://lists.mysql.com/mysql/198421

hmm, bad case, isnt it?

Andre

- Original Message - From: Simon Elliston Ball [EMAIL PROTECTED] 


To: Andre Hübner [EMAIL PROTECTED]
Sent: Wednesday, February 20, 2008 12:08 PM
Subject: Re: external mysqldump


GRANT SUPER ON userdatabase.* to user@'%' identified by 'password';

That way each external user can only do super things to their own db.

simon

Simon Elliston Ball
[EMAIL PROTECTED]



On 20 Feb 2008, at 11:03, Andre Hübner wrote:

Unfortunately, not. For internal uses I can use root or another   
special user. But if my users want to do backups on their own with   
external mysqldump they get this error.
If I log in with their userdata from the console I can do mysqldump; it's   
only the external connect which causes this problem.


- Original Message - From: Ben Clewett [EMAIL PROTECTED]
To: Andre Hübner [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Wednesday, February 20, 2008 10:26 AM
Subject: Re: external mysqldump


I use a specific user (backup) for my backups, therefore user   
privileges are not effected.  Is there something like this which   
you could use?


Andre Hübner wrote:

Thank you for answering.
is there a way to do this without granting the super privilege? For   
security reasons i cannot grant too-high privileges for normal  
db users.

rights should be limited to own db.
Thanks
Andre

- Original Message - From: Ben Clewett  [EMAIL PROTECTED] 


To: Andre Hübner [EMAIL PROTECTED]
Cc: mysql@lists.mysql.com
Sent: Wednesday, February 20, 2008 9:46 AM
Subject: Re: external mysqldump



Try:

 mysql -u root -p 

mysql GRANT SUPER ON *.* TO myuser@'%';
mysql GRANT SUPER ON *.* TO myuser@'localhost';


Andre Hübner wrote:

Hi List,

i wrote this alrready in mysql-forum a few days ago, but did  
not  get any answer. :(


i try to do backup with mysqldump from an external host, with routines.
mysqldump -R -h my.host.name -u myuser -p'mypass' mydb > filename.sql
I got error: myuser has insufficent privileges to SHOW CREATE  
FUNCTION `countuser`!
It works if i do the same mysqldump line directly on the server   
where the db is installed.
In the mysql table myuser has the same privileges for % and localhost.   
Are there some further restrictions for external connects?

I dont have an idea what to change now.

Thank you
Andre





--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe: http://lists.mysql.com/[EMAIL PROTECTED]



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: transfer huge mysql db

2008-01-24 Thread Simon Elliston Ball
A binary copy will require that you shut down the db, rather than just  
lock some tables for a while, which may be more desirable.


I've always found the mysql compression to be a bit weak over a slow  
link. The way I tend to do this sort of thing is:
mysqldump --opt -B dbname | bzip2 -9c | ssh [EMAIL PROTECTED] 'cat - | bzip2 -9dc | mysql -uwhatever etc'

assuming of course that you have nice fast processors and horribly  
slow connectivity (gzip, and something less severe than -9, would do  
almost as well)


simon


Simon Elliston Ball
[EMAIL PROTECTED]



On 24 Jan 2008, at 12:20, Saravanan wrote:


do binary copy. sql dump will be slow.

Saravanan


--- On Thu, 1/24/08, Ivan Levchenko [EMAIL PROTECTED] wrote:


From: Ivan Levchenko [EMAIL PROTECTED]
Subject: transfer huge mysql db
To: mysql@lists.mysql.com
Date: Thursday, January 24, 2008, 6:12 PM
Hi All,

What would be the best way to transfer a 20 gig db from one
host to another?

Thanks in advance for your answers!

--
Best Regards,

Ivan Levchenko
[EMAIL PROTECTED]

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:
http://lists.mysql.com/[EMAIL PROTECTED]



  


Never miss a thing.  Make Yahoo your home page.
http://www.yahoo.com/r/hs

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: MySql on Leopard

2007-12-07 Thread Simon Elliston Ball
The easiest way to get mysql working on os x is with a fink build. The  
version is a little behind 5.0.38, but I can't imagine there would be  
any problems with a source build from the latest. Maybe you could give  
it a go, and report any problems, then I'll happily help out.


Simon Elliston Ball
[EMAIL PROTECTED]



On 6 Dec 2007, at 21:24, Tommy Nordgren wrote:

Have anyone out there any experience building, installing, and using  
MySql on Leopard (PowerPC G4)
(MySql Community server 5.0.51) If so, I would be thankful for info  
on possible problems.

--
Skinheads are so tired of immigration, that they are going to move  
to a country that don't accept immigrants!

Tommy Nordgren
[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]




--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



help with error messages please

2007-08-17 Thread Simon Tierney
I am unable to connect to mysql 5.1 with phpmyadmin 2.10.2 on win2k running
php 5.2.3.

I get the following errors:-

Event viewer:-

Event Type: Information
Event Source: Application Popup
Event Category: None

Application popup: httpd.exe - Entry Point Not Found : The procedure entry
point mysql_get_character_set_info could not be located in the dynamic link
library LIBMYSQL.dll.

Event Type: Information
Event Source: Application Popup

Description:
Application popup: httpd.exe - Entry Point Not Found : The procedure entry
point mysql_get_character_set_info could not be located in the dynamic link
library LIBMYSQL.dll.

Apache error log:-

[Fri Aug 17 10:26:37 2007] [notice] Child 1108: Child process is exiting
[Fri Aug 17 10:26:58 2007] [notice] Apache/2.2.4 (Win32) PHP/5.2.3
configured -- resuming normal operations
[Fri Aug 17 10:26:58 2007] [notice] Server built: Jan  9 2007 23:17:20
[Fri Aug 17 10:26:58 2007] [notice] Parent: Created child process 2328
PHP Warning:  PHP Startup: Unable to load dynamic library 'C:\\Program
Files\\PHP\\ext\\php_mysql.dll' - The specified procedure could not be
found.\r\n in Unknown on line 0
PHP Warning:  PHP Startup: Unable to load dynamic library 'C:\\Program
Files\\PHP\\ext\\php_mysqli.dll' - The specified procedure could not be
found.\r\n in Unknown on line 0
PHP Warning:  PHP Startup: Unable to load dynamic library 'C:\\Program
Files\\PHP\\ext\\php_pdo_mysql.dll' - The specified module could not be
found.\r\n in Unknown on line 0
PHP Warning:  PHP Startup: Unable to load dynamic library 'C:\\Program
Files\\PHP\\ext\\php_mysql.dll' - The specified procedure could not be
found.\r\n in Unknown on line 0
PHP Warning:  PHP Startup: Unable to load dynamic library 'C:\\Program
Files\\PHP\\ext\\php_mysqli.dll' - The specified procedure could not be
found.\r\n in Unknown on line 0
[Fri Aug 17 10:27:05 2007] [notice] Child 2328: Child process is running
[Fri Aug 17 10:27:05 2007] [notice] Child 2328: Acquired the start mutex.
[Fri Aug 17 10:27:05 2007] [notice] Child 2328: Starting 250 worker threads.
[Fri Aug 17 10:27:05 2007] [notice] Child 2328: Starting thread to listen on
port 80.
[Fri Aug 17 10:38:47 2007] [error] [client 0.0.0.0] File does not exist:
C:/Program Files/Apache Software Foundation/Apache2.2/htdocs/favicon.ico
[Fri Aug 17 10:38:47 2007] [error] [client 0.0.0.0] File does not exist:
C:/Program Files/Apache Software Foundation/Apache2.2/htdocs/favicon.ico
[Fri Aug 17 10:38:59 2007] [error] [client 0.0.0.0] PHP Warning:
session_start() [a
href='function.session-start'function.session-start/a]:
open(C:\\DOCUME~1\\SIMONT~1\\LOCALS~1\\Temp\\php\\session\\sess_v1ssm27k1hc7
f8ffp23fh2k4l4, O_RDWR) failed: No such file or directory (2) in C:\\Program
Files\\Apache Software
Foundation\\Apache2.2\\htdocs\\pma\\libraries\\session.inc.php on line 100,
referer: http://localhost/pma/
[Fri Aug 17 10:38:59 2007] [error] [client 0.0.0.0] PHP Warning:  Unknown:
open(C:\\DOCUME~1\\SIMONT~1\\LOCALS~1\\Temp\\php\\session\\sess_v1ssm27k1hc7
f8ffp23fh2k4l4, O_RDWR) failed: No such file or directory (2) in Unknown on
line 0, referer: http://localhost/pma/
[Fri Aug 17 10:38:59 2007] [error] [client 0.0.0.0] PHP Warning:  Unknown:
Failed to write session data (files). Please verify that the current setting
of session.save_path is correct
(C:\\DOCUME~1\\SIMONT~1\\LOCALS~1\\Temp\\php\\session) in Unknown on line 0,
referer: http://localhost/pma/

I checked and php_mysql.dll and php_mysqli.dll are in the \ext directory.
Does anyone know the solution to this problem?



-- 
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Update failing with error 1062

2007-02-14 Thread Simon Giddings

Found the source of my problem.
I had an update trigger connected to this table which was trying to 
create a new entry in another table instead of updating an existing one!
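The failure mode Simon describes can be reproduced with a sketch like this; except for `calendarentry` and its columns from the schema posted below, all names are hypothetical:

```sql
CREATE TABLE audit_log (
  entry_id INT UNSIGNED NOT NULL PRIMARY KEY,
  note VARCHAR(80)
) ENGINE=InnoDB;

-- A trigger that always INSERTs will raise error 1062 (duplicate key)
-- the second time the same calendar entry is updated, and the error
-- surfaces on the outer UPDATE statement:
CREATE TRIGGER trg_after_update AFTER UPDATE ON calendarentry
FOR EACH ROW
  INSERT INTO audit_log (entry_id, note)
  VALUES (NEW.idCalendarEntry, NEW.Sujet);

-- One possible fix is to update the existing row instead, e.g. with
-- INSERT ... ON DUPLICATE KEY UPDATE note = NEW.Sujet;
```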


There we go!
Simon

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: Update failing with error 1062

2007-02-13 Thread Simon Giddings

Hi Michael

Here is the table schema :
DROP TABLE IF EXISTS `clients`.`calendarentry`;
CREATE TABLE  `clients`.`calendarentry` (
 `idCalendarEntry` int(10) unsigned NOT NULL auto_increment,
 `Sujet` varchar(80) NOT NULL,
 `Debut` datetime NOT NULL,
 `Fin` datetime NOT NULL,
 `Notes` varchar(2048) default NULL,
 `Location` varchar(1023) default NULL,
 `ContactName` varchar(110) default NULL,
 `Structure` varchar(80) default NULL,
 `Telephone` varchar(30) default NULL,
 `ClientId` int(10) unsigned default NULL,
 `AllDayEvent` tinyint(4) NOT NULL,
 `IsMeeting` tinyint(4) NOT NULL,
 `HasReminder` tinyint(4) NOT NULL,
 `NextReminder` datetime default NULL,
 `ReminderMinutesBeforeStart` int(11) default '0',
 `ReminderIsMinutes` tinyint(4) default '0',
 `CEOid` int(10) unsigned default '0',
 `Repeats` tinyint(4) default '0',
 `RepeatPatternId` int(10) unsigned default '0',
 PRIMARY KEY  (`idCalendarEntry`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1;

As you can see, there are no other unique keys here.  In addition, there 
are no foreign keys which point here either.

Any ideas?

Simon

Michael Dykman wrote:

Simon,

  send in the schema for the table in question, that should show
something.   The only condition I can think of off the top of my head
which might do that is if you have another unique key in your
structure and that is the one this error is complaining about.

On 2/12/07, Simon Giddings [EMAIL PROTECTED] wrote:

Good morning,

I issue an update statement containing a where clause on the primary
index, so as to update a single record. However this is failing with
Duplicate entry '6' for key 1 -
update clients.calendarentry set Subject = 'presentation' where
idCalendarEntry = 6;

In the table, the field 'idCalendarEntry' is declared as :
`idCalendarEntry` int(10) unsigned NOT NULL auto_increment

The server version of MySql I am using is 5.0.24
The client version of MySql I am using is 5.0.11

Is anyone able to help?
Simon

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]







--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Update failing with error 1062

2007-02-12 Thread Simon Giddings

Good morning,

I issue an update statement containing a where clause on the primary 
index, so as to update a single record. However this is failing with 
Duplicate entry '6' for key 1 -
update clients.calendarentry set Subject = 'presentation' where 
idCalendarEntry = 6;


In the table, the field 'idCalendarEntry' is declared as :
`idCalendarEntry` int(10) unsigned NOT NULL auto_increment

The server version of MySql I am using is 5.0.24
The client version of MySql I am using is 5.0.11

Is anyone able to help?
Simon

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Too many table-locks

2006-08-21 Thread Marco Simon
Hi everybody,

I've got a little problem with a web and mysql based
bulleting-board-application.
The board is quite well visited and users are writing approx. 1 new post
per second. In total the db gets approx. 250 queries/sec.

The webserver and mysql-server are running on different hosts, the db server
is running on

Linux db 2.6.14.2 #4 SMP Thu Nov 17 09:54:44 CET 2005 x86_64 GNU/Linux

model name  :   Intel(R) Pentium(R) D CPU 3.00GHz
stepping: 4
cpu MHz : 3000.229
cache size  : 1024 KB

4GB RAM.

My problem is that I get a lot of very long locks (30 sec and more) on
the table which is holding the users' posts. There are approx. 3.3
million records in that table, which consume 1.5 GB. The table format
is MyISAM.

So far, switching to InnoDB has mostly solved my problems with table
locks. But because of the table size and the limited memory I can't
switch to InnoDB in this case.

Are there any other conceptual or technical ideas for how to reduce the
(long-lasting) table locks?

Thanks for any idea in advance !

Best regards,
  Marco


smime.p7s
Description: S/MIME Cryptographic Signature


How to switch off auto-checks ?

2006-08-21 Thread Marco Simon
My mysql db starts a "check tables xy fast" for all tables as soon
as I restart the db server. I didn't find a place where I can control
whether the db should be checked at start or not. So where can I switch
off these checks at db-server start?

Thanks in advance
 Greetings Marco


smime.p7s
Description: S/MIME Cryptographic Signature


Who's locking ?

2006-07-29 Thread Marco Simon
Hello list,

I've a question about understanding table-locks of myisam-tables:

From time to time it happens that my processlist gets flooded by
queries that are waiting for a locked table. Quickly there are 100-500
waiting queries in the queue - some of them waiting for more than 1000
seconds.

My question is: how can I see who (which query) set the lock which all
the other queries are waiting to have released? I thought there should
be at least one update query in the processlist which set the lock and
which is still working. But there are only locked queries for the
locked table - no updates, no deletes.

Could somebody try to explain how I can investigate who's blocking the
queue?





smime.p7s
Description: S/MIME Cryptographic Signature


Re: Reset (or Defrag) the AUTO_INCREMENT columns

2006-06-14 Thread Marco Simon
Hi wolverine,

of course you could defrag your autoincrement values, but there's
no automation for that - you'd have to do it via normal insert/update
statements.
Perhaps you'll need an intermediate table.

But:
In most cases the autoincrement value is used as an id (as in your
case) - in db language it is often the (primary) key - which normally
is never changed through the lifetime of a data record. If you change
your primary key you'll have to change all references to that key in
your detail tables.
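One hedged sketch of a manual renumbering, using the `category` table from the question; the user-variable idiom is widely used but its evaluation order inside UPDATE is not officially guaranteed, and every value in `user.category` would need the same old-id-to-new-id mapping applied afterwards, which is exactly the pain described above:

```sql
-- Renumber category ids consecutively, in ascending id order:
SET @n := 0;
UPDATE category SET id = (@n := @n + 1) ORDER BY id;
-- Reset the counter so the next insert continues from the new maximum:
ALTER TABLE category AUTO_INCREMENT = 5;
```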

Greetings,
  Marco

wolverine my schrieb:
 Hi!

 I have the following tables and the data,

 CREATE TABLE category (
id TINYINT UNSIGNED AUTO_INCREMENT PRIMARY KEY
name VARCHAR(50) NOT NULL
 );

 CREATE TABLE user (
id TINYINT UNSIGNED AUTO_INCREMENT PRIMARY KEY,
name VARCHAR(50) NOT NULL,
category TINYINT UNSIGNED REFERENCES category(id)
 );


 SELECT * FROM category;
 +++
 | id | name   |
 +++
 |  1 | Classic|
 |  2 | Gold   |
 |  5 | Platinum   |
 |  6 | Blacklist  |
 +++

 SELECT * FROM user;
 +++--+
 | id | name   | category |
 +++--+
 |  2 | John   | 1|
 |  3 | Mark   | 2|
 |  5 | Kenneth| 5|
 |  6 | Sammy  | 6|
 |  8 | Jane   | 5|
 +++--+


 Based on the above, the values of both ids are defragmented.
 The category.id 3 and 4 are deleted and
 the user.id 1, 4 and 7 are deleted.

 May I know if there is any way we can reset (or defrag?) the values so
 that they look like the following?


 SELECT * FROM category;
 +++
 | id | name   |
 +++
 |  1 | Classic|
 |  2 | Gold   |
 |  3 | Platinum   |
 |  4 | Blacklist  |
 +++

 SELECT * FROM user;
 +++--+
 | id | name   | category |
 +++--+
 |  1 | John   | 1|
 |  2 | Mark   | 2|
 |  3 | Kenneth| 3|
 |  4 | Sammy  | 4|
 |  5 | Jane   | 3|
 +++--+




smime.p7s
Description: S/MIME Cryptographic Signature


Re: example when indexing hurts simple select?

2006-06-12 Thread Marco Simon
Hi Gaspar,

MySQL allows you to pack the index - to make it smaller and to gain
performance.
Some information about that can be found here:
http://www.mysqlperformanceblog.com/2006/05/13/to-pack-or-not-to-pack-myisam-key-compression/



Gaspar Bakos schrieb:
 Hi,


 RE:
   
 Have you tried
 analyze table x;
 

 This was quick:

 mysql analyze table TEST;
 Table  Op  Msg_typeMsg_text
 CAT.TEST   analyze status  Table is already up to date

 --

 mysql show index from TEST;
+-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table | Non_unique | Key_name   | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| TEST  |          1 | MMtestfiel |            1 | MMtestfiel  | A         |         838 | NULL     | NULL   |      | BTREE      | NULL    |
| TEST  |          1 | MMi_m      |            1 | MMi_m       | A         |       25857 | NULL     | NULL   | YES  | BTREE      | NULL    |
+-------+------------+------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+


 ---
 I am trying to figure out what the Packed field means.

 Gaspar

   





Trouble with aborted connections

2006-04-14 Thread Marco Simon
Hello everybody,

I'm using mysql in an clustered environment:
Several loadbalanced webservers get the requests from the users' clients
and query the needed information from several database servers. Every webserver
connects to every database-server. So in this case the webservers are the
mysql-clients. Webserver and mysql-server are in the same private network
with their own switch between them.

Ok, here's my Problem:
I'm getting a lot (see timestamps) of the following kind of messages in my
/var/log/mysql/error.log.err:

060414 12:01:45 [Warning] Aborted connection 2149 to db: 'board_5' user: '
boardu_5' host: `ws4' (Got timeout reading communication packets)
060414 12:02:44 [Warning] Aborted connection 3020 to db: 'board_5' user: '
 boardu_5' host: `ws5' (Got timeout reading communication packets)
060414 12:03:18 [Warning] Aborted connection 3508 to db: 'board_5' user: '
 boardu_5' host: `ws5' (Got timeout reading communication packets)
060414 12:03:19 [Warning] Aborted connection 3538 to db: 'board_5' user: '
 boardu_5' host: `ws3' (Got timeout reading communication packets)
060414 12:04:01 [Warning] Aborted connection 4173 to db: 'board_5' user: '
 boardu_5' host: `ws5' (Got timeout reading communication packets)
060414 12:04:33 [Warning] Aborted connection 4719 to db: 'board_5' user: '
 boardu_5' host: `ws3' (Got timeout reading communication packets)

As you can see, the error occurs from different clients. But the same
problem
exists on the other mysql-servers as well. The db-server has a load
between 0.7 and 1.7
Versions:
 Linux:  Linux db5 2.6.14.2 #4 SMP Thu Nov 17 09:54:44 CET 2005 x86_64
GNU/Linux
 Mysql: *4.1.11-Debian_4-log
 *Apache: Apache/2.0.55 (Unix) PHP/4.4.2
 Mysql-System: Intel(R) Pentium(R) D CPU 3.00GHz with 4 Gig Memory

On the client side (webserver) I'm getting the following error message:

Lost connection to MySQL server during query
mysql error number: 2013

for each of the above entries.

Actually I've no idea where I could go on searching for the
bottleneck or any existing problem. Why are the connections
timing out? What parameters are relevant?
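For reference, these are the server variables usually involved in "Got timeout reading communication packets" warnings; the values below are illustrative defaults, not recommendations:

```ini
# my.cnf fragment (illustrative values)
[mysqld]
wait_timeout       = 28800   # seconds an idle connection may stay open
net_read_timeout   = 30      # abort if reading a packet stalls this long
net_write_timeout  = 60      # same, for writes back to the client
max_allowed_packet = 16M     # oversized packets also abort connections
```

On the PHP side, scripts that hold a connection open past wait_timeout without issuing queries will see "Lost connection to MySQL server during query" on their next statement.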

I'd be thankful for every idea and suggestion.

Greetings
  Marco







Re: auto_increment and the value 0

2006-03-29 Thread Simon Garner

On 30/03/2006 12:31 p.m., Daniel Kasak wrote:

[EMAIL PROTECTED] wrote:
I suppose that would be a lot easier than trying to bump the PK and 
related FK values of the whole table by 1, just to give the first row 
in the table the auto_increment value of 1?
  


Yes. That sounds messy.

What about before migrating the database, just adding a new row to the 
end of the table, that would duplicate the data in the first row, then 
deleting the first row from the table?


Would that work?
  


Yes but it would be no different to just updating the primary key of the 
existing record.


Either way, you will have issues with restoring from backups if you keep 
an auto_increment column with a zero value around (as you've discovered), 
so whatever you do, you need to get rid of those zero values.




Another option would be to reassign the zero row to be -1 (if the column 
is not UNSIGNED). Assuming the current zero row has some kind of special 
significance, this may make more sense than just giving it the next 
unused auto_increment value. This would also keep it in the same place 
with an ORDER BY.
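A sketch of both approaches, assuming a table named `t` with an auto_increment column `id` (hypothetical names, not from the thread):

```sql
-- Option 1: move the special row out of the zero slot
-- (only works if the column is signed):
UPDATE t SET id = -1 WHERE id = 0;

-- Option 2: keep id = 0, but make inserts and dump/restore
-- preserve explicit zeros instead of generating a new id:
SET sql_mode = 'NO_AUTO_VALUE_ON_ZERO';
```

With NO_AUTO_VALUE_ON_ZERO set, inserting 0 into an AUTO_INCREMENT column stores 0 rather than the next sequence value, which is exactly the dump-restore hazard discussed above.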


-Simon

--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/[EMAIL PROTECTED]



Re: auto_increment syntax

2006-03-23 Thread Simon Garner

On 24/03/2006 11:06 a.m., Eric Beversluis wrote:

Can someone illustrate the correct syntax for using auto_increment in
making a table? I've studied the manual and I'm not seeing how it comes
out.

EG: 
CREATE TABLE Books (

 bookID INT(5) PRIMARY KEY AUTO_INCREMENT...

THEN WHAT? 


Thanks.
EB



Should be:

bookID INT(5) NOT NULL AUTO_INCREMENT PRIMARY KEY

Primary keys have to be NOT NULL, and AUTO_INCREMENT must appear 
before PRIMARY KEY


AUTO_INCREMENT fields should normally be INT UNSIGNED as well, since you 
generally won't want to store a negative ID number, and UNSIGNED gives you 
an extra bit of range (doubling the maximum value).
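Putting the advice together, a complete statement might look like this (the columns beyond bookID are illustrative):

```sql
CREATE TABLE Books (
  bookID INT UNSIGNED NOT NULL AUTO_INCREMENT,
  title  VARCHAR(100) NOT NULL,
  PRIMARY KEY (bookID)
);

-- bookID is assigned automatically when omitted:
INSERT INTO Books (title) VALUES ('First book');
```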


-Simon




Re: Signal 11 crashes on MySQL V5

2006-03-09 Thread Simon Garner

On 9/03/2006 9:43 a.m., Kishore Jalleda wrote:

Could you tell us if these 6 are in a cluster or in a replication setup?
You also said the 3 Linux boxes all crash at once - did you check the logs?
Do they crash under load? What about the OS - is it stable when mysql
crashes?

Kishore Jalleda



We use cluster and replication. We were seeing random crashes on the 
replication slaves, which are only used for SELECT queries. They 
wouldn't all crash at the same time (then again they aren't all doing 
the same queries at the same time). No OS problems.


This is what we would get in the error log:

mysqld got signal 11;
This could be because you hit a bug. It is also possible that this binary
or one of the libraries it was linked against is corrupt, improperly built,
or misconfigured. This error can also be caused by malfunctioning hardware.
We will try our best to scrape up some info that will hopefully help 
diagnose
the problem, but since we have already crashed, something is definitely 
wrong

and this may fail.

key_buffer_size=134217728
read_buffer_size=1044480
max_used_connections=15
max_connections=100
threads_connected=4
It is possible that mysqld could use up to
key_buffer_size + (read_buffer_size + sort_buffer_size)*max_connections 
= 335471 K

bytes of memory
Hope that's ok; if not, decrease some variables in the equation.

thd=0xaea70058
Attempting backtrace. You can use the following information to find out
where mysqld died. If you see no messages after this, something went
terribly wrong...
Cannot determine thread, fp=0x9bb5ac, backtrace may not be correct.
Stack range sanity check OK, backtrace follows:
0x81913f0
0xfbc420
0x827a1c1
0x8279d88
0x827a1c1
0x8279f0d
0x827a03a
0x8278554
0x81a6b39
0x81ae100
0x81a5213
0x81a4d4d
0x81a429e
0x960b80
0x6549ce
New value of fp=(nil) failed sanity check, terminating stack trace!

I did run the backtrace but didn't save the results, now I can't resolve 
it without reinstalling mysql 5 because I don't have the right sym file :/


This is using the 5.0.18 Linux RPMs from mysql.com (tried both the 
glibc23 and statically linked ones, no difference - the above trace is 
from the glibc23 one). Running a mix of both Fedora 3 and Fedora 4 on 
Intel P4s.


-Simon


On 3/8/06, Dave Pullin [EMAIL PROTECTED] wrote:

I am running MySQL on 6 servers - 3 Linux and 3 Windows. I recently
upgraded
to V5 on all servers. Now MySQL is crashing regularly (several times per
day, some days) with 'got signal 11'.

My 3 Linux servers are very different machines running different software:
a uniprocessor Pentium with 512MB running Redhat9 with MySQL 5.0.18-0.i386,
a new dual XEON with 8GB running Fedora Core 4 with 64bit MySQL
5.0.18-0.glibc23.x86_64,
and an old quad XEON with 4GB running Fedora Core 4 with MySQL 5.0.18-0.i386

The windows machines are not having a problem. All 6 are running
essentially
the same application.

It seems unlikely to be a hardware problem because its on 3 machines at
once. It looks like a MySQL V5 problem but I can't pin it down to anything
specific enough to report a bug.

Anyone had similar experiences with MySQL V5?

Dave












Re: Signal 11 crashes on MySQL V5

2006-03-08 Thread Simon Garner

On 9/03/2006 8:42 a.m., Dave Pullin wrote:

I am running MySQL on 6 servers - 3 Linux and 3 Windows. I recently upgraded
to V5 on all servers. Now MySQL is crashing regularly (several times per
day, some days) with 'got signal 11'.

My 3 Linux servers are very different machines running different software:
a uniprocessor Pentium with 512MB running Redhat9 with MySQL 5.0.18-0.i386,
a new dual XEON with 8GB running Fedora Core 4 with 64bit MySQL
5.0.18-0.glibc23.x86_64,
and an old quad XEON with 4GB running Fedora Core 4 with MySQL 5.0.18-0.i386

The windows machines are not having a problem. All 6 are running essentially
the same application.

It seems unlikely to be a hardware problem because its on 3 machines at
once. It looks like a MySQL V5 problem but I can't pin it down to anything
specific enough to report a bug.

Anyone had similar experiences with MySQL V5?



Yep, we were seeing the same thing with 5.0. We had to roll back to 4.1 
the other day, 5.0 was just too unstable.


I haven't reported a bug though because I couldn't narrow down the cause 
of the problem. Just seemed to be random crashes, several times per day 
(more often when under load).


-Simon




Re: selecting records newer than say 20 min

2006-03-07 Thread Marco Simon
select * from table where mytimestamp > (unix_timestamp() - 20*60) ?
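Spelled out for both common storage styles (the column name `created` is hypothetical):

```sql
-- If `created` is an INT holding a Unix timestamp
-- (20 minutes = 20*60 seconds):
SELECT * FROM mytable WHERE created > UNIX_TIMESTAMP() - 20*60;

-- If `created` is a DATETIME/TIMESTAMP column:
SELECT * FROM mytable WHERE created > NOW() - INTERVAL 20 MINUTE;
```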

Gregory Machin schrieb:
 Hi
 What is the easiest way to select all the records created in the last 20 min,
 based on a column that has a timestamp record?

 Many Thanks

 --
 Gregory Machin
 [EMAIL PROTECTED]
 [EMAIL PROTECTED]
 www.linuxpro.co.za
 www.exponent.co.za
 Web Hosting Solutions
 Scalable Linux Solutions
 www.iberry.info (support and admin)

 +27 72 524 8096
   





Re: Using Network Block Device on Linux to build HUGE/cheap memory-base MySQL boxes.

2006-02-20 Thread Simon Garner

On 19/02/2006 10:57 p.m., Kevin Burton wrote:
I was talking to a friend tonight about how they use NBD to run a single 
system image in memory.


NBD (Network Block Device) allows one Linux box to export a block device 
and for you to mount it on another filesystem.   For the memory 
component they just use a ram disk.



What if you booted a MySQL install and told it to use NBD mounted 
memory?  Theoretically you could build MUCH cheaper and MUCH faster 
clusters.  Your DB writes would still back to the local (RAID) 
filesystem but your innodb buffer pool and other buffers would be 
running out of swap and into your network memory subsystem.


This would allow you to have a HUGE buffer for MySQL.  Buffer your whole 
damn database in MEMORY.


The main downside I can see is fault tolerance if the ethernet port was 
pulled.  The box would fail.  Of course at this point its a bit like 
pulling a SCSI cable out.


If this turns out to be a good way to scale MySQL someone could just pay 
to have NBD enhanced to support fault tolerance with mirror nodes.


Thoughts?



Why would this be better than just using MySQL Cluster (NDB)?

With this method you're reliant on one single server, all you get is a 
large space for buffering. Which could be good if your database is 
mostly static and you can take advantage of query caching and key 
buffers, and your operating system may be able to buffer the entire 
database into memory. But would be less advantageous if you have many 
updates.


But it doesn't really give you any scalability, since everything has to 
go through the one MySQL server still. It's just a MySQL server with 
lots and lots of high-speed swap space.


And I'd imagine it would be very fault-prone. If any of the buffer 
servers was to die, it would most likely take the entire database with 
it (operating systems can't handle losing swap). And even if you solved 
that, the single MySQL server would still be a single point of failure.


If you have many servers with lots of RAM, MySQL Cluster by contrast 
enables you to run your database in RAM and make real gains in 
scalability and fault tolerance (although it is a little rough round the 
edges still).


-Simon




Re: Adding data from one table to another

2006-01-11 Thread simon elliston ball
The easiest way to do that would be to CREATE TABLE new_table_name  
SELECT (query you use to join the tables)


The old tables then become redundant.
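Alternatively, since MySQL 4.0 a multi-table UPDATE can fill the new column in place, without creating a third table. A sketch with hypothetical table, column, and key names:

```sql
-- Add the new field to table A:
ALTER TABLE table_a ADD extra_field VARCHAR(50);

-- Copy the values across from table B via the shared key:
UPDATE table_a a, table_b b
SET a.extra_field = b.extra_field
WHERE a.person_id = b.person_id;
```

This keeps table A's existing indexes and permissions intact, which the CREATE TABLE ... SELECT route would not.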


On 11 Jan 2006, at 17:30, 2wsxdr5 wrote:

I have two tables with data on people in them.  Table A is a subset  
of table B.  However, there is data about these people in table B  
that is not in table A.  With a simple select I can do a join and  
get a result set with all the data I need to show, but what I would  
like to do is change table A so it also has one of the fields from  
table B.  Adding the field to table A is trivial, but how do I  
then populate that new field with data from table B?


--
Chris W
KE5GIX

Gift Giving Made Easy
Get the gifts you want & give the gifts they want
One stop wish list for any gift, from anywhere, for any occasion!
http://thewishzone.com








ISAM tables broken

2006-01-04 Thread Simon Faulkner

An old server died during the holidays.

I had a database with 2 tables on it on an old version of MySQL
(circa Redhat 7?)

I have the latest backup of the files but not a dump (kicks himself)

I put the files in the correct /var/lib/mysql/gallery and restarted 
MySQL.  It shows the gallery database but not the tables



I have tried ISAM check and it seems OK but MySQL won't open the tables.

I have even tried the files in a fresh compiled MySQL rather than the 
FC4 version in case that's a problem.


Are my tables doomed?

TIA

Simon
PS they are at
http://titanic.co.uk/gallery/tblGallery.ISD
http://titanic.co.uk/gallery/tblGallery.ISM
http://titanic.co.uk/gallery/tblPicture.ISD
http://titanic.co.uk/gallery/tblPicture.ISM

if anyone needs to see them...

There's nothing private in them.


[EMAIL PROTECTED] gallery]# ls -l
total 1196
-rwxrwxrwx  1 mysql mysql6511 Jan  4 15:30 tblGallery.ISD
-rwxrwxrwx  1 mysql mysql2048 Jan  4 15:30 tblGallery.ISM
-rwxrwxrwx  1 mysql mysql 1185360 Jan  4 15:30 tblPicture.ISD
-rwxrwxrwx  1 mysql mysql   18432 Jan  4 15:30 tblPicture.ISM
[EMAIL PROTECTED] gallery]# mysqlshow
+---+
| Databases |
+---+
| gallery   |
| mysql |
| test  |
+---+
[EMAIL PROTECTED] gallery]# mysqlshow gallery
Database: gallery
++
| Tables |
++
++
[EMAIL PROTECTED] gallery]# isam
isamchk  isamlog
[EMAIL PROTECTED] gallery]# isamchk tblGallery.ISD
isamchk: error: 'tblGallery.ISD' is not a ISAM-table
[EMAIL PROTECTED] gallery]# isamchk tblGallery.ISM
Checking ISAM file: tblGallery.ISM
Data records:  17   Deleted blocks:   0
- check file-size
- check delete-chain
- check index reference
[EMAIL PROTECTED] gallery]# isamchk tblPicture.ISD
isamchk: error: 'tblPicture.ISD' is not a ISAM-table
[EMAIL PROTECTED] gallery]# isamchk tblPicture.ISM
Checking ISAM file: tblPicture.ISM
Data records:2640   Deleted blocks:   0
- check file-size
- check delete-chain
- check index reference
[EMAIL PROTECTED] gallery]# isamchk -r tblPicture.ISM
- recovering ISAM-table 'tblPicture.ISM'
Data records: 2640
- Fixing index 1





Re: ERROR 2003 (HY000): Can't connect to MySQL server on 'gandalf' (111)

2005-12-04 Thread Simon Garner

On 5/12/2005 11:56 a.m., Mike Smith wrote:

Hi,



Hi,


I've already check my /etc/my.cnf file for a binding
line.  Its not 
there.  I also found an item online that indicated

adding:



Look for the option skip-networking. This disables TCP/IP so the 
server only accepts local connections via the Unix socket. This sounds 
like your situation.
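For reference, the relevant my.cnf fragment looks like this; comment the line out (and restart mysqld) to allow TCP/IP connections:

```ini
[mysqld]
# skip-networking            # when present, only the Unix socket works
# bind-address = 127.0.0.1   # alternative: TCP/IP restricted to localhost
```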


Note that a could not connect error means just that. If the problem 
was related to user privileges you would get an access denied error.


HTH,
-Simon




Re: LEFT JOIN not working on 5.0.16 - urgent help

2005-11-23 Thread Simon Garner

On 24/11/2005 2:22 p.m., Terence wrote:


SELECT um.username,rm.role_name,dm.department_name
FROM user_master um, role_master rm
LEFT JOIN department_master dm ON um.department_id = dm.department_id
WHERE um.role_id = rm.role_id;


1054 - Unknown column 'um.department_id' in 'on clause'
 Query:
 SELECT um.username,rm.role_name,dm.department_name
FROM user_master um, role_master rm
LEFT JOIN department_master dm ON um.department_id = dm.department_id
WHERE um.role_id = rm.role_id
=


Has the left join syntax changed?



Yeah this caught me out too. The precedence of JOINs has changed to more 
strictly follow the SQL standard.


In 5.0, MySQL is now interpreting your query as:

SELECT ... FROM user_master um, (role_master rm LEFT JOIN 
department_master dm ON um.department_id = dm.department_id) ...


And the table um doesn't exist in the join between rm and dm. The 
solution is to put the FROM tables in parentheses, like:


SELECT ... FROM (user_master um, role_master rm) LEFT JOIN 
department_master dm ON um.department_id = dm.department_id ...


You can read more about this here:
http://dev.mysql.com/doc/refman/5.0/en/join.html

-Simon




Re: Reset root password to mysql?

2005-11-22 Thread Marco Simon

Jerry Swanson schrieb:


How to reset mysql password to mysql?


mysql -u root
ERROR 1045 (0): Access denied for user: '[EMAIL PROTECTED]' (Using
password: NO)

 

You could start your mysqld with the --skip-grant-tables option. But 
keep in mind that this will
disable the entire permission system of mysqld and the database will be 
accessible to everyone.





Re: Mysql 4.1 full table (nearly) - best practice to alter?

2005-11-20 Thread Simon

sheeri kritzer wrote:


On 11/18/05, Michael Stassen [EMAIL PROTECTED] wrote:


Sheeri,

I think you've missed the point.  The 4Gb limit is in mysql, not the filesystem.
 Mysql effectively doesn't limit tablesize, but the *default* pointer size
limits you to a max of about 4Gb in a MyISAM table.  To have a larger table, you
need to tell mysql that it needs to use a larger pointer for that table, either
at table creation, or with an ALTER TABLE such as the one Simon is proposing to
run.  See the last half of
http://dev.mysql.com/doc/refman/5.0/en/table-size.html for more.



You're right, I did miss the point.  Thanx for pointing this out; I
wasn't sure why he was asking the question!

-Sheeri


Hi There,

I'm asking the question because :) ... I am running dbmail, which is a 
mail server that stores its mail in mysql. The table where all the mail 
is stored is getting close to 4GB, which is its Max_data_length limit:


show table status from dbmail like 'dbmail_messageblks';

gives Data_length = 3204062980 and Max_data_length = 4294967295

I'm wanting a simple way of increasing the size of Max_data_length in the 
shortest amount of time, as it is a live mail server.
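For reference, the pointer-size change described in the earlier thread can be applied in place; the MAX_ROWS/AVG_ROW_LENGTH values below are illustrative, and the table is locked while MySQL rebuilds it, so dbmail should be paused:

```sql
-- Raise the row-pointer size so the MyISAM table can grow past 4GB:
ALTER TABLE dbmail_messageblks
  MAX_ROWS = 1000000000
  AVG_ROW_LENGTH = 500;

-- Verify the new limit afterwards:
SHOW TABLE STATUS FROM dbmail LIKE 'dbmail_messageblks';
```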


Simon






Mysql 4.1 full table (nearly) - best practice to alter?

2005-11-17 Thread Simon

Hi There,

We are reaching 4GB with one of our tables (v4.1 on debian sarge) and 
are needing to run:


ALTER TABLE tbl_name MAX_ROWS=10 AVG_ROW_LENGTH=nnn;

as per the documentation. I would be grateful for any input on best 
practices to do this, as the database needs to be offline for the 
absolute minimal amount of time. The fastest process that I can 
think of would be:


1). Backup everything! :)
2). mysqldump the table to a file
3). drop the table
4). recreate the table structure
5). do the alter
6). import the data back in

Other questions: can the ALTER be done on live data? How does this 
work?


Simon




Re: Active user sessions

2005-10-30 Thread Simon Garner

On 31/10/2005 3:43 p.m., Cabbar Duzayak wrote:

Hi,

Is there a way of listing all the active db sessions and their IP
addresses and/or db user names? Something like v$session in oracle?

Thanks...



SHOW PROCESSLIST;

-Simon




Does MySQL 3.23.58 Support UNION

2005-10-28 Thread Simon Longstaff
I'm trying to run this :
 
SELECT DISTINCT A.B_IP FROM w3t_Posts A , w3t_Users B WHERE A.B_PosterID =
B.U_Number and B.U_Username = 'user1' 
UNION
SELECT DISTINCT C.B_IP FROM w3t_Posts C , w3t_Users D WHERE C.B_PosterID =
D.U_Number and D.U_Username = 'user2' 

and it's failing saying 
 
SQL Error: You have an error in your SQL syntax near 'UNION SELECT DISTINCT
A.B_IP FROM w3t_Posts A , w3t_Users B WHERE A.B_PosterID ' at line 2
SQL Error #: 1064
Query: SELECT DISTINCT A.B_IP FROM w3t_Posts A , w3t_Users B WHERE
A.B_PosterID = B.U_Number and B.U_Username = 'Simon' UNION SELECT DISTINCT
A.B_IP FROM w3t_Posts A , w3t_Users B WHERE A.B_PosterID = B.U_Number and
B.U_Username = 'MartyX' 

 
I've been using DB2 SQL for ages.
 
 


compiling + making source for 5.7 on alpha

2005-06-23 Thread Simon Tierney
Can anyone advise please, tried binary installation but other users have
reported encountering similar problems so I thought I'd try a source
installation.

Adjusted the pentiumpro flags out and configure reported no errors - when I
use make I get "*** No targets.  Stop."

The makefile is in there, am I overlooking something elementary?



-- 
No virus found in this outgoing message.
Checked by AVG Anti-Virus.
Version: 7.0.323 / Virus Database: 267.7.11/26 - Release Date: 22/06/2005





Re: Shifting dates

2005-06-21 Thread Simon Garner

On 21/06/2005 6:55 p.m., Sebastian wrote:
i never understand why people use datetime anyway.. unix timestamp is so 
much easier to work with.




Because DATETIME is stored natively as a date and time, which means you 
can then use the date and time SQL functions with them (such as 
DATE_ADD, DATE_SUB, DATE_FORMAT etc). If you just store your date as a 
timestamp in an INT then you first have to convert it into a DATETIME 
using FROM_UNIXTIME. Admittedly not difficult, but an additional and 
unnecessary conversion.


For me it's really just a matter of rightness - why store a date as an 
int when you can store it as a date? :)


Also a question of whether you prefer to do date manipulations in your 
application or at the SQL level. But both ways work.


-Simon




Re: Shifting dates

2005-06-20 Thread Simon Garner

On 21/06/2005 2:45 p.m., Scott Haneda wrote:

I need to run a BETWEEN select where I put in a date range; the time was at
one point irrelevant, but now the client is in a new time zone +3 hours
ahead, so 


BETWEEN 2005010100 AND 20051201235959 is what I pass in now, which is
wrong, how can I add three hours to it and get the days and months to wrap
as needed.

I would love to do this in SQL, not in application.


Try something like:

SELECT
*
FROM
table
WHERE
datefield > '2005-01-01 00:00:00'
AND datefield < DATE_ADD('2005-01-01 00:00:00', INTERVAL 3 HOUR)

http://dev.mysql.com/doc/mysql/en/date-and-time-functions.html

-Simon




problem installing 5.7 on alpha

2005-06-17 Thread Simon Tierney
Can anyone help, please, when I tar -xvzf this file I am getting the error

Archive contains obsolescent base-64 headers
gzip: stdin: invalid compressed data--crc error

and installation exits

I tried to acquire the Compaq compilers mentioned under platforms in case
it is a gcc problem, but the page is history.



-- 





Re: idempotent inserts? Allowing INSERT to fail faster than SELECT/INSERT

2005-06-14 Thread Simon Garner

On 15/06/2005 10:28 a.m., Kevin Burton wrote:

I've been thinking about this for a while now.

If you have an app that can compute a unique key (hashcode) and you have 
a unique index it should be possible to just do an INSERT instead of a 
SELECT first to see if the record doesn't exist and then an INSERT.


This should be 2x faster than the SELECT/INSERT combo right?



I'm not entirely clear what you're talking about, but you could also 
have a look at INSERT IGNORE..., or INSERT... ON DUPLICATE KEY UPDATE, 
or REPLACE INTO...:


http://dev.mysql.com/doc/mysql/en/insert.html
http://dev.mysql.com/doc/mysql/en/replace.html

-Simon




Re: idempotent inserts? Allowing INSERT to fail faster than SELECT/INSERT

2005-06-14 Thread Simon Garner

On 15/06/2005 11:22 a.m., Kevin Burton wrote:

Simon Garner wrote:

I'm not entirely clear what you're talking about, but you could also 
have a look at INSERT IGNORE..., or INSERT... ON DUPLICATE KEY UPDATE, 
or REPLACE INTO...:




The problem is that I do NOT want it to update.
Also.. REPLACE causes the row to be DELETED and INSERTED again which is 
really ugly.


I just want to take one SELECT and INSERT pair and reduce it to one 
INSERT which should be 2x faster :)


Kevin



Then you want INSERT IGNORE :)
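A minimal sketch of the pattern, assuming a UNIQUE index on the hash column (table and column names are hypothetical):

```sql
CREATE TABLE items (
  hash    CHAR(32) NOT NULL,
  payload TEXT,
  UNIQUE KEY (hash)
);

-- One round trip: inserts the row, or silently does nothing
-- if a row with that hash already exists (no UPDATE, no DELETE).
INSERT IGNORE INTO items (hash, payload) VALUES ('d41d8cd9...', 'some data');
```

The client can check the affected-row count to learn whether the insert actually happened.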

-Simon




Re: CASE .. WHEN .. returns NULL

2005-06-09 Thread Simon Garner

Alvaro Cobo wrote:
Hi guys: 

I am here again asking for your generous and wise advise: 

I have a table (tbl_f4Granjas) with the information of some farms, their whole land extension (field: GraExtUPA) and the portion of the land they are managing in an agro-ecological way (field: GraExtPredio). 

What I need is to get the percentage by organization (field: FK_ProjectHolderId) of the land managed in the agroecological way. After that, I need to recode the data based on a range (0%-10%=0; 11%-25%=3, etc.). 

I am using the CASE .. WHEN .. THEN .. function, but it keeps giving me a NULL result. What am I doing wrong? I have searched everywhere, googled it, checked the MySQL manual, but I couldn't find the answer. My server is: MySQL 4.1.10 in a Debian Sarge box. 



I think your problem is you are saying <=10 and then >=11 for the 
next level, but your percentage could be e.g. 10.55, which wouldn't 
match any of your CASEs.


This should work...

SELECT
a.FK_ProjectHolderId,
CASE
WHEN (SUM(a.GraExtPredio)/SUM(a.GraExtUPA)*100) > 75 THEN 12
WHEN (SUM(a.GraExtPredio)/SUM(a.GraExtUPA)*100) > 50 THEN 9
WHEN (SUM(a.GraExtPredio)/SUM(a.GraExtUPA)*100) > 25 THEN 6
WHEN (SUM(a.GraExtPredio)/SUM(a.GraExtUPA)*100) > 10 THEN 3
ELSE 0 END
AS agroland
FROM
tbl_f4Granjas AS a
GROUP BY
FK_ProjectHolderId

-Simon




Re: If statment in query

2005-06-05 Thread Simon Garner

Sebastian wrote:

I have two fields: topic | title

topic does not always have data in it, so i want to select `title` when 
`topic` is null..


i thought i could do this (does not work):

IF(title IS NULL, topic, title) AS heading

Thanks.



Try

SELECT IFNULL(title, topic) AS heading

-Simon




Re: Sync 2 live MySQL Databases

2005-05-26 Thread Simon Garner

On 26/05/2005 6:40 p.m., Cecil Brand wrote:

Hi,

I was wondering if anyone knows of a stable and reliable way to sync 2
live mysql databases, both ways.
I know I can use a master and slave, and yes, have set up a few without
any problem, but as all
of us know this is just a one-way downstream sync. I need to sync both
ways and basically live; the every
5 or 10 min cronjob sync with a perl/php/c++ app just won't cut it.



Unfortunately, the simple answer is you can't.

While it is possible to set up multi-master replication with MySQL, this 
is not useful for the vast majority of applications. If you are sending 
write queries to both masters then you have the possibility of the two 
servers becoming out of sync. This is particularly true with 
auto_increment primary keys - if you have a table and you send one 
insert statement to each server,  both at the same time, then they might 
both assign the same auto_increment number to each row, and then they 
will both be forced to ignore the replicated inserts they receive from 
each other - screwing your table.


Your options are to use MySQL Cluster (not an option for a lot of people 
as your entire database has to be stored in RAM), or if you're just 
after high availability, set up one-way replication with dynamic master 
failover (which I am currently trying to figure out how to do myself).


-Simon




Replication with failover

2005-05-18 Thread Simon Garner
Hi,
Anybody have any experience with setting up MySQL replication with 
dynamic failover, preferably using keepalived (http://www.keepalived.org)?

What we need is a system so that if the master server dies one of the 
slaves will become the master.

It looks like it should be possible, and merely involves a bit of 
scripting to glue it all together, but I was hoping somebody would have 
created such scripts already so I don't screw it up myself :)

tia
-Simon


Re: Replication with failover

2005-05-18 Thread Simon Garner
Kevin Burton wrote:
Gleb Paharenko wrote:
Hello.
I don't remember solutions with keepalived, but this issue is
discussed in the list from time to time. Search in archives at:
 http://lists.mysql.com/mysql
 

Someone should create a wiki page on this subject... it's a commonly 
asked question...

Kevin
That would be nice, because I googled and searched the list archives and 
found nothing.

-Simon


Re: Different TIMESTAMP columns

2005-05-18 Thread Simon Garner
Lieven De Keyzer wrote:
But how do I initialize the 2 datetime fields? With no DEFAULT, their 
values are 0000-00-00 00:00:00. Just what I wanted to avoid with TIMESTAMP

Set their value to NOW() if you want the current date/time.
-Simon


Re: Different TIMESTAMP columns

2005-05-18 Thread Simon Garner
Lieven De Keyzer wrote:
mysql> CREATE TABLE bookmark (
    ->   bookmark_id INTEGER NOT NULL AUTO_INCREMENT,
    ->   bookmarkname VARCHAR (80) NOT NULL,
    ->   url VARCHAR (150) NOT NULL,
    ->   folder_id INTEGER NOT NULL,
    ->   last_scanned DATETIME DEFAULT NOW(),
    ->   last_notified DATETIME DEFAULT NOW(),
    ->   PRIMARY KEY (bookmark_id),
    ->   FOREIGN KEY (folder_id) REFERENCES folder(folder_id) ON DELETE CASCADE
    -> ) TYPE = InnoDB;

No, not their default - you have to set the value when you insert or 
update the row. The default can only be a particular date, not a dynamic 
value.

e.g.
UPDATE bookmark SET last_scanned=NOW()
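As a quick self-contained illustration of the insert-then-update approach (SQLite from Python here purely so it can be run anywhere; in MySQL you would write NOW() where SQLite uses datetime('now'), and the schema is simplified):

```python
import sqlite3

# The timestamp is supplied by the INSERT/UPDATE statement itself,
# not by a column default.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE bookmark (
        bookmark_id   INTEGER PRIMARY KEY AUTOINCREMENT,
        bookmarkname  TEXT NOT NULL,
        url           TEXT NOT NULL,
        last_scanned  TEXT,
        last_notified TEXT
    )
""")
conn.execute(
    "INSERT INTO bookmark (bookmarkname, url, last_scanned, last_notified) "
    "VALUES ('MySQL', 'http://mysql.com', datetime('now'), datetime('now'))"
)
# Later, refresh the scan time the same way:
conn.execute(
    "UPDATE bookmark SET last_scanned = datetime('now') WHERE bookmark_id = 1"
)
row = conn.execute("SELECT last_scanned, last_notified FROM bookmark").fetchone()
print(row)  # both columns hold real timestamps, never 0000-00-00 00:00:00
```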
-Simon


Re: Using AVG

2005-05-18 Thread Simon Garner
Mike Blezien wrote:
Hello,
when using the AVG function like this:
SELECT AVG((5+8+10)/3) AS rate;
it returns NULL??
Can AVG be used to do a literal math calculation?
TIA
That doesn't make any sense... AVG is a GROUP BY function.
If you have 3 rows with values 5, 8 and 10 then surely
SELECT (5+8+10)/3 AS rate;
is what you want?
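To make the aggregate-vs-expression distinction concrete (SQLite via Python so the example is self-contained; the table and values are made up, but the semantics match MySQL):

```python
import sqlite3

# AVG() aggregates a column across rows; it is not a calculator function.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE rates (rate REAL)")
conn.executemany("INSERT INTO rates VALUES (?)", [(5,), (8,), (10,)])

# Aggregate over the three rows:
avg_rate = conn.execute("SELECT AVG(rate) FROM rates").fetchone()[0]
# Plain arithmetic needs no aggregate at all:
scalar = conn.execute("SELECT (5 + 8 + 10) / 3.0 AS rate").fetchone()[0]

print(avg_rate, scalar)  # both are (5+8+10)/3
```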
-Simon


Re: select count(*) table

2005-05-14 Thread Simon Garner
[EMAIL PROTECTED] wrote:
I have a curious situation I was hoping someone could shed some light on.
mysql> select count(*) table;
+-------+
| table |
+-------+
|     0 |
+-------+
1 row in set (0.00 sec)
mysql> select count(*) from table;
+----------+
| count(*) |
+----------+
|    25965 |
+----------+
1 row in set (0.00 sec)
I found it curious that the first query didn't return an error with 
there being no 'from', and even more curious that it returned a 0.  Is 
the first query actually legit, and if so, what does the 0 mean?
Yes, you can do a select without a table. This allows you to get the 
values of expressions or functions.

E.g.
SELECT 1+1
will return 2, and
SELECT NOW()
will return the current date and time.
Your query is selecting COUNT(*) AS table rather than COUNT(*) FROM 
table. Naturally, without a table, COUNT(*) will return 0.
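The alias-versus-FROM difference is easy to check yourself (SQLite from Python here so it runs standalone; the table name and row count are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x INTEGER)")
conn.executemany("INSERT INTO t VALUES (?)", [(i,) for i in range(25)])

# A SELECT with no FROM clause just evaluates expressions...
two = conn.execute("SELECT 1 + 1").fetchone()[0]
# ...whereas counting rows needs the FROM clause:
n = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
# And "COUNT(*) t" makes `t` an alias for the count, not a table reference:
aliased = conn.execute("SELECT COUNT(*) t FROM t").fetchone()
print(two, n, aliased)
```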

-Simon


Re: beginner guide

2005-05-09 Thread Simon Garner
On 10/05/2005 1:29 p.m., ganesan malairaja wrote:
./configure --prefix=/usr/local/mysql
it returns this
This is a MySQL binary distribution. It's ready to run, you don't
need to configure it!
which MySQL distribution do I use to follow the instructions from 
http://www.linuxhelp.net/guides/lamp/

Well first of all why are you bothering with those instructions? Most 
Linux distributions come with Apache, MySQL and PHP out of the box.

But to answer your question, you need a source tarball. Just go to
http://dev.mysql.com/downloads/mysql/4.1.html
... and scroll down a bit further until you get to Source downloads, 
or click
http://dev.mysql.com/get/Downloads/MySQL-4.1/mysql-4.1.11.tar.gz/from/pick

-Simon


Re: amPiguous!

2005-05-06 Thread Simon Garner
On 7/05/2005 11:00 a.m., Rhino wrote:
Actually, the error message is misleading. There is nothing that I would
call ambiguous in your query: you have a syntax error. The join should be
written:
select pk from a inner join b on a.col1 = b.col2
Of course, you need to replace 'a.col1' and 'b.col2' with real column names
from tables a and b respectively.
Rhino
Not true, his join syntax is correct.
select pk from a inner join b using (pk)
is the same as saying
select pk from a inner join b on a.pk = b.pk
However, MySQL is complaining about the select pk part because it 
doesn't know whether to select a.pk or b.pk. I think what Dan is arguing 
is that MySQL should know from the join that a.pk and b.pk are the same, 
so it doesn't matter which one it uses.

-Simon

- Original Message - 
From: Dan Bolser [EMAIL PROTECTED]
To: mysql@lists.mysql.com
Sent: Friday, May 06, 2005 6:50 PM
Subject: amPiguous!


Why are columns included in the join between two tables ambiguous?
It seems that they should *not* be ambiguous!
Like this
select pk from a inner join b using (pk);
ERROR 1052 (23000): Column 'pk' in field list is ambiguous!!!
Is this a bug, or is it like this for a reason? It drives me nuts, because
it is not ambiguous (as far as I can tell). Am I a fool?


Re: no /tmp/mysql.sock

2005-05-05 Thread Simon Garner
Mark Sargent wrote:
Hi All,
mysql is running, but, there seems to be no /tmp/mysql.sock file. How is 
this file generated..? I can't connect, keep getting errors. What about 
via a port..? What is the argument for that..? Cheers.

Mark Sargent.
It's probably not in /tmp any more. Try /var/lib/mysql/mysql.sock, or 
look at `mysqladmin variables` and check the 'socket' value.

-Simon


Re: no /tmp/mysql.sock

2005-05-05 Thread Simon Garner
On 6/05/2005 4:18 p.m., Mark Sargent wrote:
Ok, I just took a punt, and changed the setting in my.cnf to reflect the 
config settings, to /tmp/mysql.sock, and all is well. What I don't 
understand is why the default settings were like this; I certainly made 
no changes to either the configs or the my.cnf file. Weird. Cheers.

Mark Sargent.
Hi Mark,
The default location for the socket file in modern distributions is in 
the MySQL data directory now (usually /var/lib/mysql), as /tmp is not 
always 100% safe, and in /tmp it is liable to be e.g. cleaned up by 
tmpwatch. And I expect if you were to talk in terms of filesystem 
standards, /tmp would not be the 'correct' place for a socket file.

-Simon


Re: ERROR 1251: Client does not support authentication protocol requested by server; consider upgrading MySQL client

2005-05-02 Thread Simon Garner
On 2/05/2005 6:05 p.m., Mark Sargent wrote:
Hi All,
master/client are the same machine. I installed via tar.gz, ver 4.1 
following this page

http://dev.mysql.com/doc/mysql/en/installing-binary.html
for installation on FC3. Why would it say that the client doesn't 
support it.? Cheers.

P.S. I now can't log onto the server as root. So, I tried following this 
page,

I would suggest you start again by deleting the mysql database (ie 
/var/lib/mysql/mysql) and running mysql_install_db to reinitialise the 
privilege tables.

Alternatively try adding
old-passwords
to the [mysqld] section in your my.cnf file.
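That is, something like this in my.cnf (the file's path varies by distribution; the option reverts the server to the pre-4.1 password hash):

```ini
[mysqld]
old-passwords
```

Restart mysqld afterwards; accounts whose passwords were already stored in the new 4.1 format will still need their passwords reset.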
-Simon


Re: ERROR 1251: Client does not support authentication protocol requested by server; consider upgrading MySQL client

2005-05-01 Thread Simon Garner
On 2/05/2005 5:21 p.m., Mark Sargent wrote:
Hi All,
I did this below,
mysql> SET PASSWORD FOR 'root'@'localhost' = PASSWORD('whatafraka');
and then tried logging on,
[EMAIL PROTECTED] bluefish-1.0]# mysql -h localhost -u root -pwhatafraka
ERROR 1251: Client does not support authentication protocol requested by 
server;  consider upgrading MySQL client

What is this..? Why is it asking me to upgrade the client.? Cheers.
Mark Sargent.
See here:
http://dev.mysql.com/doc/mysql/en/password-hashing.html
Presumably your server is running 4.1 with new passwords and your client 
is an older version.

-Simon


Re: many to many

2005-04-20 Thread Simon Garner
On 21/04/2005 3:36 p.m., Perry Merritt wrote:
Hi, I'm a novice.
 
Hi Perry,
I've designed a database that supports many to many relationships (actually many to many to many) and I need help creating the query to find my data. I know things like JOINs exist, but don't have a clue how to use them.
 
I have three main tables and two link tables, The main tables are A, B, and C. Each are defined with id INT and word VARCHAR(32); The link tables are X and Y. X links A and B with the columns a_id and b_id. Y links the tables B and C with columns b_id and c_id.
 
Here's what I want to accomplish:
 
  Get the A.id where A.word = some value
  Use A.id to search X where X.a_id = A.id (from above)
  Use all occurrences of X.a_id = A.id to select word from B using B.id = X.b_id
  AND finally,
  select C.id where C.word = some other value
  and given Y.c_id = C.id use the matching Y.b_id to further limit the select on B.word
 
 Can this convoluted mess be understood and if so, can a single query 
pull it off?


I'm not sure I completely understand what you're trying to do, but try 
the following:

SELECT
    A.id AS a_id,
    B.word AS b_word,
    C.id AS c_id
FROM
    A
    LEFT JOIN X ON X.a_id = A.id
    LEFT JOIN B ON B.id = X.b_id
    LEFT JOIN Y ON Y.b_id = B.id
    LEFT JOIN C ON C.id = Y.c_id
WHERE
    A.word = 'some value'
    AND C.word = 'some other value'
I haven't tested this but I think it would work...
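For anyone wanting to sanity-check the shape of a query like this, here is a self-contained run with SQLite from Python. The rows are invented; only the table and column names follow Perry's description:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE A (id INTEGER, word TEXT);
    CREATE TABLE B (id INTEGER, word TEXT);
    CREATE TABLE C (id INTEGER, word TEXT);
    CREATE TABLE X (a_id INTEGER, b_id INTEGER);
    CREATE TABLE Y (b_id INTEGER, c_id INTEGER);

    INSERT INTO A VALUES (1, 'some value');
    INSERT INTO B VALUES (1, 'kept'), (2, 'filtered out');
    INSERT INTO C VALUES (1, 'some other value'), (2, 'different');
    INSERT INTO X VALUES (1, 1), (1, 2);
    INSERT INTO Y VALUES (1, 1), (2, 2);
""")

# Same join shape as above; the WHERE on C.word effectively turns the
# LEFT JOINs into inner joins for the matching rows.
rows = conn.execute("""
    SELECT A.id, B.word, C.id
    FROM A
    LEFT JOIN X ON X.a_id = A.id
    LEFT JOIN B ON B.id = X.b_id
    LEFT JOIN Y ON Y.b_id = B.id
    LEFT JOIN C ON C.id = Y.c_id
    WHERE A.word = 'some value' AND C.word = 'some other value'
""").fetchall()
print(rows)  # only the B row linked (via X and Y) to both matching words survives
```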

I'm implementing this in perl, so I can break the queries into pieces if I absolutely have to.
 
Thanks

-Simon


Multi-master replication / clustering

2005-04-12 Thread Simon Garner
Hello,
I am working on a web application (php) that will have a largish mysql 
database (millions of rows, eventually), and for which high availability 
will be important. I am wondering if anyone here can suggest options for 
multi-master replication or clustering.

The application will be mostly read queries but also a significant 
number of writes, and it will be important to have synchronicity between 
nodes.

Load balancing apache servers appears to be reasonably straightforward 
using the software from www.linuxvirtualserver.org. This allows high 
availability and high performance using a cluster of servers acting as one.

Load balancing mysql servers appears to be, of course, much more difficult.
I have considered using replication but this has severe limitations. 
Having one master server and replicating to several slaves would allow 
read queries to be processed on the slaves leaving the master to do 
writes only. However mysql replication is asynchronous so it is possible 
that the data on the slaves would differ from the master and from each 
other. It also offers no gain in write performance, since every slave 
still has to apply the entire write volume. And most importantly 
this leaves a single point of failure - the master. So replication is 
not a solution.

Next I looked at MySQL Cluster. But this too has important limitations - 
firstly, the requirement for the entire database to be held in RAM. This 
means the hardware cost becomes very high for a large database. 
Secondly, NDB does not support FULLTEXT indexes. We need to be able to 
search across a TEXT and a VARCHAR column (i.e. "title" and 
"description") in one of the tables - without FULLTEXT indexes this is 
very slow (with LIKE). I am still considering ways we can make this work 
though.

I have also looked at the third-party m/cluster software offered by EMIC 
(www.emicnetworks.com). This is a multi-master mysql clustering 
solution, and looks like it would do exactly what I want. Unfortunately, 
it is a lot more expensive than I can afford! Would be nice if MySQL 
could implement something like this themselves.

Does anybody know of any other options for MySQL clustering?
TIA,
-Simon


install failure on XP toshiba laptop

2005-01-13 Thread Simon Tierney
Hi, Can anyone help please, I am trying to install 4.1 onto this machine as
a windows service but installation will not proceed and I get the message:-

Resource messagedlg-confirmation not found

So:-
 I unchecked the 'install as windows service' and tried to start it
standalone from a command prompt, but I got ERROR 2003 (HY000): Can't
connect to MySQL server on 'localhost' (10061)

So:-
I tried to start it from administrative tools, services, but I got the
message:-
Could not start the MySQL service on Local Computer. Error 193: 0xc1

Can anyone offer any help, because I have never had these problems before,
and I have installed MySQL on several Windows (as well as linux) machines.



-- 
No virus found in this outgoing message.
Checked by AVG Anti-Virus.
Version: 7.0.300 / Virus Database: 265.6.11 - Release Date: 12/01/2005




