Full text search not matching 2 letter word

2019-01-08 Thread Andrew Wood
I'm trying to run a full text query on the two letter keyword 'K7'. I have 
set ft_min_word_len=2 and restarted the server, and if I view the system 
variables in MySQL Workbench it shows it is set correctly.


I have then dropped and re-created the index on the descrip column. It 
is an InnoDB table so I cannot do REPAIR TABLE.


I'm running the following query, which I expect to match the following 
record, but it doesn't. Full text searches for other words match OK.


select * from asset where type ='DOCUMENTS' and (match(descrip) against 
('K7' in boolean mode)) ;



+-----+-----------+-----------------------------------------------------------------+---------+-------------+----------+-------------+
| id  | type      | descrip                                                         | subtype | intendeduse | location | assetfileid |
+-----+-----------+-----------------------------------------------------------------+---------+-------------+----------+-------------+
| 153 | DOCUMENTS | Telephone Kiosk No. 7 K7 Interior promo photo from field trial. | PHOTO   | DISPLAY     | STORAGE  | 152         |
+-----+-----------+-----------------------------------------------------------------+---------+-------------+----------+-------------+


Any ideas why this is not working?
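One common gotcha worth checking: `ft_min_word_len` only affects MyISAM full-text indexes. An InnoDB full-text index (MySQL 5.6 and later) is governed by a separate variable, `innodb_ft_min_token_size`, whose default of 3 silently drops two-character tokens like 'K7'. A sketch of the checks (the index name is assumed):

```sql
-- Compare both minimum-token variables; InnoDB indexes honour only the second.
SHOW VARIABLES LIKE 'ft_min_word_len';
SHOW VARIABLES LIKE 'innodb_ft_min_token_size';

-- After setting innodb_ft_min_token_size=2 in my.cnf and restarting,
-- rebuild the full-text index so the short tokens are actually indexed:
ALTER TABLE asset DROP INDEX descrip;                    -- index name assumed
ALTER TABLE asset ADD FULLTEXT INDEX descrip (descrip);
```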

Thanks

Andrew



--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql



Re: Workbench MySQL Enterprise Backup Error

2016-03-22 Thread Andrew Moore
The beauty of running enterprise versions is that you have support from
Oracle. I would gently point you in their direction, if only to get what you
paid for, but also because most of us on this list are unequipped to help:
we don't use the software you are having problems with.

Good luck
On 22 Mar 2016 8:15 pm, "Lisa Smith"  wrote:

> Hello all,
>
> I have not run across this problem where Workbench will not let me access
> the "Online Backup"
> link and claims that my version of MySQL Enterprise Backup is 0.0.0 (it is
> 4.0.0).
>
> I had backups running and scheduled through Workbench previously. Yesterday
> I changed my data files to another drive and when I restarted Workbench I
> was no longer able to access Online Backup.
>
> I feel like I may be missing something obvious so any suggestions would be
> greatly appreciated.
>
> I am running MySQL Enterprise 5.7 on Windows Server 2012.
>
> Thank you for reading.
>


Re: server-side logging of query errors?

2015-06-25 Thread Andrew Moore
Furthermore, use logstash to collect the audit events and you can filter
out anything that wasn't an error, leaving a dedicated query error log.
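As a sketch of that filtering step: Percona audit records carry a `status` field that is non-zero when the statement failed (1064 in the example in this thread), so a small parser can keep only the failures. The record shape below is simplified from the example:

```python
import json

def error_records(lines):
    """Yield audit records whose statement failed (non-zero status)."""
    for line in lines:
        record = json.loads(line)["audit_record"]
        if int(record.get("status", 0)) != 0:
            yield record

# Two sample log lines: one successful query, one syntax error.
log_lines = [
    '{"audit_record": {"name": "Query", "status": 0, "sqltext": "SELECT 1"}}',
    '{"audit_record": {"name": "Query", "status": 1064, "sqltext": "select fark from"}}',
]
errors = list(error_records(log_lines))
print(len(errors), errors[0]["status"])  # prints: 1 1064
```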

On Wed, Jun 24, 2015 at 5:32 PM, Singer Wang w...@singerwang.com wrote:

 Yep, as shown below:

 root@audit-db.ec2:(none) select fark from fark from fark fark fark from
 frak;
 ERROR 1064 (42000): You have an error in your SQL syntax; check the manual
 that corresponds to your MySQL server version for the right syntax to use
 near 'from fark fark fark from frak' at line 1
 root@audit-db.ec2:(none)

 Jun 24 16:29:52 audit-db percona-audit:

 {"audit_record":{"name":"Query","record":"217_2015-06-24T16:29:52","timestamp":"2015-06-24T16:29:52 UTC","command_class":"error","connection_id":"59","status":1064,"sqltext":"select fark from fark from fark fark fark from frak","user":"root[root] @ localhost []","host":"localhost","os_user":"","ip":""}}

 error 1064


 On Wed, Jun 24, 2015 at 11:40 AM, Tomasz Chmielewski man...@wpkg.org
 wrote:

  Normal general log also logs everything including bad queries (i.e.
 SLCECT
  BLAH BLAH;) - although does not say if it was an error (i.e. syntax) or
 not.
 
  Does the audit plugin log the actual error?
 
 
  Tomasz
 
  On 2015-06-25 00:32, Singer Wang wrote:
 
  The performance hit of the Percona Audit is 15% for disk logging and
  for remote syslog we found it is lower. It logs everything including
  bad queries (eg. select fark from fark from fark fark fark from frak).
  You should be able to write a JSON parser that extracts what you want
  based on the log (eg. STATUS, COMMAND, NAME).
 
  On Wed, Jun 24, 2015 at 11:05 AM, Johan De Meersman
  vegiv...@tuxera.be wrote:
 
   -
 
   FROM: Singer X.J. Wang w...@singerwang.com
  SUBJECT: Re: server-side logging of query errors?
 
 
   You could log all queries using the audit plugin, 15% hit..
 
  Fair point, though: maybe one of the different audit plugins has
  the capability to specifically log faulty requests. Have a look
  through the slides from Percona Live London 2014, there was a talk
  about auditing.
 
  --
 
  Unhappiness is discouraged and will be corrected with kitten
  pictures.
 
 
 



Re: checking for repetitive queries

2015-05-18 Thread Andrew Moore
pt-query-digest will help here. You can use a slow query log or tcpdump to
track what the application is asking of the db. The report provided by the
tool can be per db if you wish or the complete schema. It will provide a
summary of the collection and you'll see how often yo
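A typical invocation sketch (log path hypothetical; the tcpdump recipe follows Percona's documented usage):

```shell
# Digest an existing slow query log, grouped by statement fingerprint:
pt-query-digest /var/log/mysql/mysql-slow.log > digest-report.txt

# Or capture live traffic with tcpdump and digest that instead:
tcpdump -s 65535 -x -nn -q -tttt -i any -c 20000 port 3306 > mysql.tcp.txt
pt-query-digest --type tcpdump mysql.tcp.txt
```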

On Mon, May 18, 2015 at 10:37 PM, Steve Quezadas st...@modelprinting.com
wrote:

 I want to make sure my caching system is working properly and I want
 to make sure my mysql server isn't being held up by repetitive queries
 (ie like the side products table that appears on every web page).
 I'm pretty sure I cached the site pretty well, but want to make sure
 that I didn't miss anything.

 Is there some sort of tool that allows me to check for repetitive queries?

 - Steve





Re: checking for repetitive queries

2015-05-18 Thread Andrew Moore
cut short on the previous message but I'm sure you get the gist.

A

On Mon, May 18, 2015 at 11:25 PM, Andrew Moore eroomy...@gmail.com wrote:

 pt-query-digest will help here. You can use a slow query log or tcpdump to
 track what the application is asking of the db. The report provided by the
 tool can be per db if you wish or the complete schema. It will provide a
 summary of the collection and you'll see how often yo

 On Mon, May 18, 2015 at 10:37 PM, Steve Quezadas st...@modelprinting.com
 wrote:

 I want to make sure my caching system is working properly and I want
 to make sure my mysql server isn't being held up by repetitive queries
 (ie like the side products table that appears on every web page).
 I'm pretty sure I cached the site pretty well, but want to make sure
 that I didn't miss anything.

 Is there some sort of tool that allows me to check for repetitive queries?

 - Steve






Re: time stamp specific to columns

2015-04-08 Thread Andrew Wallace

I think you'd have to do that with a trigger.
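A sketch of such a trigger (table and column names hypothetical; the null-safe `<=>` comparison is used so NULLs compare sanely):

```sql
-- Assumes `t` has watched columns a, b, c, d and a plain DATETIME column
-- `changed_at` (no ON UPDATE CURRENT_TIMESTAMP, so only the trigger sets it).
DELIMITER //
CREATE TRIGGER t_touch_changed_at
BEFORE UPDATE ON t
FOR EACH ROW
BEGIN
  IF NOT (NEW.a <=> OLD.a AND NEW.b <=> OLD.b
      AND NEW.c <=> OLD.c AND NEW.d <=> OLD.d) THEN
    SET NEW.changed_at = NOW();
  END IF;
END//
DELIMITER ;
```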


On 4/8/15 6:36 AM, Martin Mueller wrote:

I understand how a timestamp column automatically changes when there is a
change in a data row. Is it possible to limit the update to changes in
particular columns? I have a table where I care about changes in any of
four different columns, but I don't care about changes in other columns or
added columns.

Is there a command that says "update the time stamp if and only if there
is a change in columns a, b, c, or d"?








Re: longtext fields in a row

2015-04-01 Thread Andrew Wallace

I thought that TEXT fields only stored a pointer to the actual data in the
table, not the data itself - storing 9 to 12 bytes in the table:

    BLOB and TEXT columns count from one to four plus eight bytes each
    toward the row-size limit because their contents are stored separately
    from the rest of the row.
    (https://dev.mysql.com/doc/refman/5.0/en/blob.html)

  -- https://dev.mysql.com/doc/refman/5.0/en/column-count-limit.html

also: https://dev.mysql.com/doc/refman/5.0/en/storage-requirements.html

That said, I would think a better structure would be to have the data stored in 
a different
table, keyed to a table containing the date and integer fields...
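A sketch of that split (names hypothetical; the wide HTML fragments move to a child table that is only read when needed):

```sql
CREATE TABLE stats (
  stat_date DATETIME NOT NULL PRIMARY KEY,
  n1 BIGINT, n2 BIGINT, n3 BIGINT, n4 BIGINT,
  n5 BIGINT, n6 BIGINT, n7 BIGINT, n8 BIGINT
) ENGINE=InnoDB;

CREATE TABLE stats_html (
  stat_date DATETIME NOT NULL,
  slot      TINYINT  NOT NULL,   -- which of the 16 HTML fragments
  body      LONGTEXT,
  PRIMARY KEY (stat_date, slot),
  FOREIGN KEY (stat_date) REFERENCES stats (stat_date)
) ENGINE=InnoDB;
```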



On 4/1/15 10:35 AM, Andrew Mueller wrote:

There is a max row size of 65,535 bytes.

There is no real way to get around this limit other than placing the HTML
code somewhere else, perhaps in a different table.

On Wednesday, April 1, 2015, Trianon 33 triano...@gmail.com wrote:


Hello,

I'm fiddling with a table where I put in a date field (datetime, also a key),
some integer fields (8 of them, mostly 14 digits long) and some longtext
fields (16 of them).

The longtext fields are filled with some statistics I generate, complete
with HTML around them, something like this: <td>12.925.965</td> but then
bigger, though mostly smaller than 1 MB.

This row is initially created by filling the first 10 fields (datetime,
the integers and the 1st longtext) and then updated each time the next
longtext value becomes available.

However, this is OK up to the 10th longtext field, and then it stops. The
next longtext operation runs OK, no error messages etc., but the longtext
field itself remains empty.

Up to now I have no clue about my wrongdoings, so do you have any
suggestions?

Is there a max of longtext fields in 1 row?
Is there a max of longtext size in 1 row?

Other idea's?

Thanks in advance for any advice, best regards, Schimanski.







Re: longtext fields in a row

2015-04-01 Thread Andrew Mueller
There is a max row size of 65,535 bytes.

There is no real way to get around this limit other than placing the HTML
code somewhere else, perhaps in a different table.

On Wednesday, April 1, 2015, Trianon 33 triano...@gmail.com wrote:

 Hello,

 I'm fiddling with a table where I put in a date field (datetime, also a key),
 some integer fields (8 of them, mostly 14 digits long) and some longtext
 fields (16 of them).

 The longtext fields are filled with some statistics I generate, complete
 with HTML around them, something like this: <td>12.925.965</td> but then
 bigger, though mostly smaller than 1 MB.

 This row is initially created by filling the first 10 fields (datetime,
 the integers and the 1st longtext) and then updated each time the next
 longtext value becomes available.

 However, this is OK up to the 10th longtext field, and then it stops. The
 next longtext operation runs OK, no error messages etc., but the longtext
 field itself remains empty.

 Up to now I have no clue about my wrongdoings, so do you have any
 suggestions?

 Is there a max of longtext fields in 1 row?
 Is there a max of longtext size in 1 row?

 Other idea's?

 Thanks in advance for any advice, best regards, Schimanski.




--
Andrew R. Mueller
603-748-8449


Re: questions about timestamps and DST

2015-03-30 Thread Andrew Moore
When you use a timezone with DST there is no such thing as 2.30am on the
date of changeover. That hour doesn't exist.

Look up the difference between timestamp and datetime data types.

A
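One way to avoid the ambiguity entirely is to pin the session time zone to UTC before inserting, since TIMESTAMP values are converted through the session time zone (table name hypothetical):

```sql
-- With the session in UTC there is no nonexistent 02:00-03:00 local hour:
SET time_zone = '+00:00';
INSERT INTO readings (ts) VALUES ('2015-03-08 02:30:00');  -- stored as UTC

-- Convert back to local time explicitly when reading
-- (named zones require the time zone tables to be loaded):
SELECT CONVERT_TZ(ts, '+00:00', 'US/Eastern') FROM readings;
```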
On 31 Mar 2015 05:43, Larry Martell larry.mart...@gmail.com wrote:

 I have a column that was a timestamp type. I was inserting rows using
 NOW(). When we switched to DST and the hour from 2am to 3am was
 non-existent I of course had no data for that hour. For reasons I
 don't need to go into, that missing hour caused problems downstream.
 To prevent this from happening next year I changed the insert to use
 UTC_TIMESTAMP() and I wanted to fill in data for that missing hour.
 But no matter what I do it will not let me insert values for that hour
 - it gives me an 'Invalid TIMESTAMP value' warning and inserts a row
 with a time of 3:00 for any time in that hour I give. This makes me
 think that I have not actually solved the problem for next year (I
 can't test this to know).

 So my questions are:

 1) How can I actually insert a timestamp value that will not be
 affected by the time change and not have the missing hour?
 2) Why is it not allowing me to insert UTC times for that missing
 hour? How can I insert UTC values for that missing hour?


 TIA!
 -larry





Re: mysqldump with single-transaction option.

2014-10-08 Thread Andrew Moore
We will tend to use binary backups (Xtrabackup) for full consistent dataset
restore (think slave provisioning and disaster recovery) and logical
backups to perform single table restores in the event that a rollback may
need to occur if someone drops a table or carries out an insane update. We
will also use mydumper instead of mysqldump due to the features of
compression and encryption. Mysqldump stops being useful on full|large
datasets due to it's single-threaded-ness.
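A sketch of the two styles described above (paths and flags illustrative; consult each tool's documentation):

```shell
# Physical, consistent full backup with Percona XtraBackup:
xtrabackup --backup --target-dir=/backups/full

# Parallel, compressed logical dump with mydumper:
mydumper --database=mydb --threads=4 --compress --outputdir=/backups/dump
```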



On Tue, Oct 7, 2014 at 8:35 AM, yoku ts. yoku0...@gmail.com wrote:

 Maybe no, as you knew.

  It means that after lock is released, dump is made while the read and
 write
  activity is going on.   This dump then, would be inconsistent.

 Not only binary logs - each table in your dump is based on the time when
 mysqldump began to dump *that* table.
 It means, for example, table1 in your dump is based on 2014-10-07 00:00:00,
 the next, table2, on 2014-10-07 00:00:01, the next, table3, on ..

 I don't have a motivation for restoring its consistency..


 Regards,


 2014-10-07 15:44 GMT+09:00 geetanjali mehra mailtogeetanj...@gmail.com:

  So, in the case of a *NOT InnoDB* storage engine, say MyISAM, will this
  dump be of any use?
 
  Best Regards,
  Geetanjali Mehra
  Senior Oracle and MySQL DBA Corporate Consultant and Database Security
  Specialist
 
 
  On Tue, Oct 7, 2014 at 11:55 AM, yoku ts. yoku0...@gmail.com wrote:
 
   Hello,
  
    If you use any *NOT InnoDB* storage engine, you're right;
    mysqldump with --single-transaction doesn't give you any consistency.
  
    If you use InnoDB for all databases and tables, your dumping process is
    protected by the transaction isolation level REPEATABLE-READ.
  
  
  
 
 http://dev.mysql.com/doc/refman/5.6/en/mysqldump.html#option_mysqldump_single-transaction
  
   Regards,
  
  
   2014-10-07 12:52 GMT+09:00 geetanjali mehra 
 mailtogeetanj...@gmail.com
  :
  
   It seems to me that once the read lock is acquired, only the binary
 log
   coordinates are read. Soon after binary log coordinates are read, lock
  is
   released.  Is there anything else that happens here?
  
   It means that after lock is released, dump is made while the read and
   write
   activity is going on.   This dump then, would be inconsistent.  So, to
   make
   this dump a consistent one when restoring it, binary log will be
 applied
   starting from the binary log  coordinates that has been read earlier.
  
   This is what I understand. Please correct me if my understanding is
  wrong.
  
   Best Regards,
   Geetanjali Mehra
   Senior Oracle and MySQL DBA Corporate Consultant and Database Security
   Specialist
  
  
   On Tue, Oct 7, 2014 at 6:22 AM, shawn l.green 
 shawn.l.gr...@oracle.com
  
   wrote:
  
Hello Geetanjali,
   
On 9/23/2014 7:14 AM, geetanjali mehra wrote:
   
Can anybody please mention the internals that works when we use
   mysqldump
as follows:
   
   
*mysqldump --single-transaction --all-databases 
   backup_sunday_1_PM.sql*
   
MySQL manual says:
   
This backup operation acquires a global read lock on all tables at
  the
beginning of the dump (using *FLUSH TABLES WITH READ LOCK
http://dev.mysql.com/doc/refman/5.6/en/flush.html*). As soon as
  this
lock
has been acquired, the binary log coordinates are read and the lock
  is
released. If long updating statements are running when the FLUSH
http://dev.mysql.com/doc/refman/5.6/en/flush.html statement is
   issued,
the backup operation may stall until those statements finish. After
   that,
the dump becomes lock-free and does not disturb reads and writes on
  the
tables.
   
Can anyone explain it more? Please.
   
   
Which part would you like to address first?
   
I have a feeling it's more about how FLUSH TABLES WITH READ LOCK
 works
   but
I want to be certain before answering.
   
Yours,
--
Shawn Green
MySQL Senior Principal Technical Support Engineer
Oracle USA, Inc. - Hardware and Software, Engineered to Work
 Together.
Office: Blountville, TN
   
   
   
  
  
  
 



Re: Replication problem

2014-08-29 Thread Andrew Moore
Whilst there are a few possibilities, check on the master that your binary
logs are being written to. Another possible reason could be filtering.
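Concretely, the checks could look like this (run on master and slave respectively):

```sql
-- On the master: confirm the binary log is enabled and advancing.
SHOW MASTER STATUS;
SHOW BINARY LOGS;

-- On the slave: check for errors, lag, and any filter rules.
-- Inspect Last_IO_Error, Last_SQL_Error, Seconds_Behind_Master,
-- and the Replicate_Do_DB / Replicate_Ignore_DB filter columns.
SHOW SLAVE STATUS\G
```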
On 29 Aug 2014 21:36, william drescher will...@techservsys.com wrote:


 Replication novice

 I have a master server at the office and a replication server at home.
 This setup has been working for a couple of years. Occasionally the
 replication server gets out of sync (usually following a internet problem
 and the vpn going down.)
 I just stop the slave, make sure there is nothing going to the master
 (when the office is closed),
 copy the database,
 transfer the file,
 load the backup, and
 start the slave and all is well.

 This time there was not a communications problem of which I am aware.  The
 slave status said the slave_IO_state was Waiting for master to send event
 but it was not replicating.

 I did the usual

 now it is not updating the replication database (transactions made on the
 master do not show on the slave - using phpMyAdmin on both servers) BUT
 show master status shows the correct log file and the position is
 incrementing AND show slave status shows the same master log file and the
 same position as the master.  So, looking at the status info it seems to be
 running fine, but the transactions do not appear to appear on the slave.

 I seek suggestions how to 1) find out what goes wrong when the vpn goes
 down, and 2) (much more important now) how to find out whether or not the
 slave is actually replicating or not.

 --bill






Re: Big innodb tables, how can I work with them?

2014-05-19 Thread Andrew Moore
What kind of queries is this table serving? 8GB is not a huge amount of
data at all and IMO it's not enough to warrant sharding.


On Thu, May 15, 2014 at 1:26 PM, Antonio Fernández Pérez 
antoniofernan...@fabergroup.es wrote:

  Hi,

 I have some tables in my server database that are too big and produce
 slow queries, even with correct indexes created.

 For my application, it's necessary to have all the data because we run an
 authentication process for RADIUS users (AAA protocol) to determine whether
 a user can browse the Internet or not (depending on the total time of all
 his sessions).

 So, with 8GB of data in one table, what would you advise?
 Fragmentation and sharding are discarded because we are working with disk
 arrays, so they don't apply. Another option is to delete rows, but in this
 case I can't. On the other hand, maybe the only possible solution is to
 increase the resources (RAM).

 Any ideas?

 Thanks in advance.

 Regards,

 Antonio.​



Re: Decode Json in MySQL query

2014-03-21 Thread Andrew Moore
May also be of interest;

http://www.slideshare.net/blueskarlsson/using-json-with-mariadb-and-mysql
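Client-side, the decoding described in this thread is only a few lines, e.g. in Python (the payload shape is taken from the example in the question):

```python
import json

# The JSON string from the question, as it would sit in the text column:
payload = '[{"name": "Abc", "age": 20}, {"name": "Xyz", "age": 18}]'

# One row per element of the array:
rows = [(item["name"], item["age"]) for item in json.loads(payload)]
for name, age in rows:
    print(name, age)
# prints:
# Abc 20
# Xyz 18
```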




On Fri, Mar 21, 2014 at 12:27 PM, Sukhjinder K. Narula
narula...@gmail.comwrote:

 Many Thanks for the kind replies.

 I have decoded in my code but just wondering in case I missed any solution
 to decode via query.


 On Thu, Mar 20, 2014 at 3:05 PM, Michael Dykman mdyk...@gmail.com wrote:

  Short answer, no.  There is nothing in MySQL to facilitate this. In
  general, storing structured data as a blob (JSON, CSV, XML-fragment,
  etc..) is an anti-pattern in a relational environment.  There are
  NoSQL solutions that provide the facility: Mongo comes to mind; there
  are some others, I am sure.
 
 
 
  On Thu, Mar 20, 2014 at 2:59 PM, Karr Abgarian a...@apple.com wrote:
   Hi, you probably want to perform this conversion on your client.
 There
  are JSON parser libraries available for Java, PHP and the like.   Cheers,
  Karr
  
   On Mar 20, 2014, at 11:35 AM, Sukhjinder K. Narula 
 narula...@gmail.com
  wrote:
  
   Hello,
   I would like to know if there is a way to decode the json string
 stored
  in
   one of the fields as text without using triggers or stored procedures.
   What I want to do is is within the query, I would like to get one row
  per
   element within the json string.
   For example: the json string is as follow:
  
    [
      {
        "name": "Abc",
        "age": 20
      },
      {
        "name": "Xyz",
        "age": 18
      }
    ]
  
   and after query, I want result as:
    Name   Age
    Abc    20
    Xyz    18
  
  
   Would this be possible, I greatly appreciate any help regarding this
   matter.
  
   Many Thanks,
   Sukhjinder
  
  
  
 
 
 
  --
   - michael dykman
   - mdyk...@gmail.com
 
   May the Source be with you.
 
 
 



Re: SHOW CREATE TABLE suddenly slow on InnoDB?

2014-03-17 Thread Andrew Moore
Hey Brad. What version are you using? My immediate thought is to check if
innodb_stats_on_metadata is off. If it is on, switch off and check your
timings again.
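The variable is dynamic, so if it is on it can be switched off without a restart (a sketch):

```sql
SHOW GLOBAL VARIABLES LIKE 'innodb_stats_on_metadata';
-- When ON, metadata statements (e.g. SHOW TABLE STATUS) can trigger
-- statistics recalculation; it can be disabled on the fly:
SET GLOBAL innodb_stats_on_metadata = OFF;
```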

Regards
On 17 Mar 2014 04:40, Brad Heller b...@cloudability.com wrote:

 Hey all,

 I'm trying to figure out how InnoDB executes a SHOW CREATE TABLE query so I
 can figure out what could possibly have made them suddenly slow down?

 mysql SHOW CREATE TABLE `my_table`;
 ...
 1 row in set (37.48 sec)

 We tend to execute many of these statements concurrently, but it's never
 been a problem until recently. I upgraded the IO subsystem, and our
 statistics indicate that it's not maxing out IO (at least IOPS).

 This is problematic because the ORM we're using uses that to figure out the
 structure of our DB...

 *Brad Heller* | Director of Engineering | Cloudability.com | 541-231-1514 |
 Skype: brad.heller | @bradhe (http://www.twitter.com/bradhe) |
 @cloudability (http://www.twitter.com/cloudability)

 We're hiring! https://cloudability.com/jobs



Re: New to MySQL

2014-03-05 Thread Andrew Moore
Next action is to review the MySQL error log for the reason that it failed.

A


On Wed, Mar 5, 2014 at 10:55 AM, Asma rabe asma.r...@gmail.com wrote:

 Hi All,

 Thank you very much.

 I checked if mysql installed
 rpm -qa | grep mysql

 and found it is installed, so no need for yum installation. when i tried to
 start the service
 chkconfig mysql on
 error reading information on service mysql: No such file or directory

 sorry to disturb you.

 Regards,
 Rabe



 On Tue, Mar 4, 2014 at 9:40 PM, Reindl Harald h.rei...@thelounge.net
 wrote:

 
 
  Am 04.03.2014 13:20, schrieb Asma rabe:
   I have checked before installation if mysql is installed using which
  mysql
   i found no mysql is installed.
 
  which is pointless
  rpm -qa | grep mysql
 
   next i did rpm installation and got the errors
   rpm -i MySQL-server-5.6.16-1.el6.x86_64.rpm
  
   I got the following errors
   file /usr/share/mysql/czech/errmsg.sys from install of
   MySQL-server-5.6.16-1.el6.x86_64 conflicts with file from package
   mysql-libs-5.1.61-1.el6_2.1.x86_64
   file /usr/share/mysql/danish/errmsg.sys from install of
   MySQL-server-5.6.16-1.el6.x86_64 conflicts with file from package
   mysql-libs-5.1.61-1.el6_2.1.x86_64
 
  clearly shows there are at least the libraries installed which is
  why i said which is pointless
 
  *do not* use the raw rpm command it can't solve any dependencies
  use yum which works also for local downloaded RPM files
 
   when i check now which mysql
   i found it has installed in /bin/mysql
   when i try to run mysql
 
   I doubt that MySQL-server contains /bin/mysql, or that after
   the error above anything was installed, unless you did not
   show everything that happened on your machine
 
   Can't connect to local MySQL server through socket
  '/var/mysql/mysql.sock'
   Any idea?
 
   installing something does not mean it gets started or even enabled.
   mysql is the client CLI connecting to a server; without specifying one
   it connects to the local unix socket, but given that your install
   failed and you have not tried to start mysqld, that must fail
 
 



Re: New to MySQL

2014-03-05 Thread Andrew Moore
enable *is not* start - :D

This is true. The mailing list is not an interactive article on the basics
of installing and starting MySQL. There are 1000s of blog articles,
documentation and guides on doing this.

Perhaps looking here;

http://www.rackspace.com/knowledge_center/article/installing-mysql-server-on-centos

If you have some error messages pending the completion of the steps please
be complete with your problem and include the messages you receive.

A


On Wed, Mar 5, 2014 at 11:06 AM, Reindl Harald h.rei...@thelounge.netwrote:



 Am 05.03.2014 12:03, schrieb Andrew Moore:
  Next action is to review the MySQL error log for the reason that it
 failed.

 oh no - don't reply with no clue
 chkconfig comes long before mysqld is invoked
 even if he would have spelled the service name correctly
  enable *is not* start
 see my last answer

  On Wed, Mar 5, 2014 at 10:55 AM, Asma rabe asma.r...@gmail.com mailto:
 asma.r...@gmail.com wrote:
  Thank you very much.
 
  I checked if mysql installed
  rpm -qa | grep mysql
 
  and found it is installed, so no need for yum installation. when i
 tried to
  start the service
  chkconfig mysql on
  error reading information on service mysql: No such file or directory
 
  sorry to disturb you




Re: New to MySQL

2014-03-04 Thread Andrew Moore
Percona, MariaDB and Oracle provide YUM repos now.
Remove all traces of MySQL before starting install using ;

rpm -e {package} --nodeps

then grab your YUM packages.

A
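A sketch of those steps on the system from this thread (package and repo names vary; mysql-community-server is Oracle's):

```shell
# Remove the conflicting distro package first, then install from a vendor repo
# so dependencies resolve cleanly:
rpm -e mysql-libs --nodeps
yum install mysql-community-server
service mysqld start
```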




On Tue, Mar 4, 2014 at 7:01 PM, Brian Van der Westhuizen [DATACOM] 
brian.vanderwesthui...@datacom.co.nz wrote:

 Why don't you just use yum install .
 Should sort out most if not all your problems regarding installation of
 mysql.

 Regards
 Brian vd Westhuizen

 -Original Message-
 From: Asma rabe [mailto:asma.r...@gmail.com]
 Sent: Wednesday, 5 March 2014 1:21 a.m.
 To: geetanjali mehra
 Cc: Johan De Meersman; mysql@lists.mysql.com
 Subject: Re: New to MySQL

 Thank you very much for responding.
  I checked before installation if mysql was installed using `which mysql`;
  I found no mysql installed.
  Next I did an rpm installation and got these errors:

 rpm -i MySQL-server-5.6.16-1.el6.x86_64.rpm

 I got the following errors
 file /usr/share/mysql/czech/errmsg.sys from install of
 MySQL-server-5.6.16-1.el6.x86_64 conflicts with file from package
 mysql-libs-5.1.61-1.el6_2.1.x86_64
 file /usr/share/mysql/danish/errmsg.sys from install of
 MySQL-server-5.6.16-1.el6.x86_64 conflicts with file from package
 mysql-libs-5.1.61-1.el6_2.1.x86_64


  When I now check `which mysql`,
  I find it is installed as /bin/mysql.
  When I try to run mysql:

 Can't connect to local MySQL server through socket '/var/mysql/mysql.sock'
 (2)

 Any idea?
 Thank you all for your kind help

 Rabe




 On Tue, Mar 4, 2014 at 12:30 AM, geetanjali mehra 
 mailtogeetanj...@gmail.com wrote:

  Thanks for responding.
  MySQL installation here does not require the mysql-libs package. As far as
  I know, there is no harm in using this command. I too got the same
  problem, and I didn't face any problem after removing this package.
  This package is installed by default and not needed here. So, this
  command is very safe.
 
 
  On Mon, Mar 3, 2014 at 7:49 PM, Johan De Meersman vegiv...@tuxera.be
  wrote:
 
   - Original Message -
From: geetanjali mehra mailtogeetanj...@gmail.com
Subject: Re: New to MySQL
   
use
rpm -i --replacefiles MySQL-server*.rpm
  
   While that will work, it really shouldn't happen. I'm a Debian man
  myself,
    so I don't know whether it's a problem with the packages or if you
   simply don't need the -libs package when installing the server.
  
   I would, however, suggest using Rug, Zypper or a similar advanced
   package manager instead of barebones RPM, as those will actually
   find and install any necessary dependencies, too.
  
  
  
   --
   Unhappiness is discouraged and will be corrected with kitten pictures.
  
  
  
 
 
  --
  Geetanjali Mehra
  Oracle DBA Corporate Trainer
  Koenig-solutions
  Moti Nagar,New Delhi
 





Re: Lost connection to MySQL server - need help.

2013-10-12 Thread Andrew Moore
Could be a crash related to innodb data dictionary being out of sync. Could
be a bug.

http://bugs.mysql.com/bug.php?id=55277
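If it does turn out to be corruption, the recovery procedure linked in the log below starts from a my.cnf fragment like this (raise the level one step at a time, starting at 1; levels above 3 risk data loss):

```
[mysqld]
innodb_force_recovery = 1
```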
On 12 Oct 2013 11:21, Jørn Dahl-Stamnes sq...@dahl-stamnes.net wrote:

 On Saturday 12 October 2013 12:01, nixofortune wrote:
  You might want to comment
 
  bind-address= 127.0.0.1
 
  in your my.cnf and restart mysql server.

 It does not explain why it works under low load and not under high load.

 However, I seem to have found something. When I started phpMyAdmin and
 selected one of the database, the server went away again and I found this
 in /var/log/syslog:

 Oct 12 11:53:33 cebycny mysqld: 131012 11:53:33  InnoDB: Assertion failure
 in
 thread 140182892447488 in file
 ../../../storage/innobase/handler/ha_innodb.cc
 line
  8066
 Oct 12 11:53:33 cebycny mysqld: InnoDB: Failing assertion: auto_inc > 0
 Oct 12 11:53:33 cebycny mysqld: InnoDB: We intentionally generate a memory
 trap.
 Oct 12 11:53:33 cebycny mysqld: InnoDB: Submit a detailed bug report to
 http://bugs.mysql.com.
 Oct 12 11:53:33 cebycny mysqld: InnoDB: If you get repeated assertion
 failures
 or crashes, even
Oct 12 11:53:33 cebycny mysqld: InnoDB: immediately after the mysqld startup, there may be
Oct 12 11:53:33 cebycny mysqld: InnoDB: corruption in the InnoDB tablespace. Please refer to
Oct 12 11:53:33 cebycny mysqld: InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
Oct 12 11:53:33 cebycny mysqld: InnoDB: about forcing recovery.
Oct 12 11:53:33 cebycny mysqld: 09:53:33 UTC - mysqld got signal 6 ;
Oct 12 11:53:33 cebycny mysqld: This could be because you hit a bug. It is also possible that this binary
Oct 12 11:53:33 cebycny mysqld: or one of the libraries it was linked against is corrupt, improperly built,
Oct 12 11:53:33 cebycny mysqld: or misconfigured. This error can also be caused by malfunctioning hardware.
Oct 12 11:53:33 cebycny mysqld: We will try our best to scrape up some info that will hopefully help
Oct 12 11:53:33 cebycny mysqld: diagnose the problem, but since we have already crashed,
Oct 12 11:53:33 cebycny mysqld: something is definitely wrong and this may fail.
Oct 12 11:53:33 cebycny mysqld:
Oct 12 11:53:33 cebycny mysqld: key_buffer_size=2147483648
Oct 12 11:53:33 cebycny mysqld: read_buffer_size=2097152
Oct 12 11:53:33 cebycny mysqld: max_used_connections=8
Oct 12 11:53:33 cebycny mysqld: max_threads=100
Oct 12 11:53:33 cebycny mysqld: thread_count=2
Oct 12 11:53:33 cebycny mysqld: connection_count=2
Oct 12 11:53:33 cebycny mysqld: It is possible that mysqld could use up to
Oct 12 11:53:33 cebycny mysqld: key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 3941387 K  bytes of memory
Oct 12 11:53:33 cebycny mysqld: Hope that's ok; if not, decrease some variables in the equation.
Oct 12 11:53:33 cebycny mysqld:
Oct 12 11:53:33 cebycny mysqld: Thread pointer: 0x7f7f1bf997c0
Oct 12 11:53:33 cebycny mysqld: Attempting backtrace. You can use the following information to find out
Oct 12 11:53:33 cebycny mysqld: where mysqld died. If you see no messages after this, something went
Oct 12 11:53:33 cebycny mysqld: terribly wrong...
Oct 12 11:53:33 cebycny mysqld: stack_bottom = 7f7edf81fe88 thread_stack 0x3
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(my_print_stacktrace+0x29) [0x7f7edff62b59]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(handle_fatal_signal+0x483) [0x7f7edfd774a3]
Oct 12 11:53:33 cebycny mysqld: /lib/libpthread.so.0(+0xeff0) [0x7f7edf4c9ff0]
Oct 12 11:53:33 cebycny mysqld: /lib/libc.so.6(gsignal+0x35) [0x7f7eddf6c1b5]
Oct 12 11:53:33 cebycny mysqld: /lib/libc.so.6(abort+0x180) [0x7f7eddf6efc0]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(ha_innobase::innobase_peek_autoinc()+0x8f) [0x7f7edfe1fa2f]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(ha_innobase::info_low(unsigned int, bool)+0x18f) [0x7f7edfe2524f]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(ha_innobase::update_create_info(st_ha_create_information*)+0x29) [0x7f7edfe256b9]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(+0x49e3dc) [0x7f7edfd953dc]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(mysqld_show_create(THD*, TABLE_LIST*)+0x7a8) [0x7f7edfd9d388]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(mysql_execute_command(THD*)+0x184a) [0x7f7edfc7cb0a]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(mysql_parse(THD*, char*, unsigned int, char const**)+0x3fb) [0x7f7edfc80dbb]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(dispatch_command(enum_server_command, THD*, char*, unsigned int)+0x115a) [0x7f7edfc81f2a]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(do_command(THD*)+0xea) [0x7f7edfc8285a]
Oct 12 11:53:33 cebycny mysqld: /usr/sbin/mysqld(handle_one_connection+0x235) [0x7f7edfc74435]
Oct 12 11:53:33 cebycny mysqld: /lib/libpthread.so.0(+0x68ca) [0x7f7edf4c18ca]
Oct 12 11:53:33 cebycny mysqld: /lib/libc.so.6(clone+0x6d) [0x7f7ede00992d]
 Oct 12 

Re: binlog_format and pt-table-checksum?

2013-09-11 Thread Andrew Moore
Rafał,

pt-table-checksum will set binlog_format to statement for its own session.

As for data drift, there are many causes, but running statement-based
replication as your general replication configuration leaves you open to
non-deterministic functions causing diffs throughout your data.

HTH

A

On 9/11/13, Rafał Radecki radecki.ra...@gmail.com wrote:
 Hi All.

 I use binlog_format = row for my production databases. In this format most
 changes are written to the binary log not as SQL statements but as row
 events.
 I understand that when binlog_format = statement is used, queries on master
 and slave can give different results, but should pt-table-checksum be used
 in situations when we use binlog_format = row?

 I've seen opinions that regardless of binlog_format the data on slave and
 master may differ. What are your experiences?

 Best regards,
 Rafał.


--
MySQL General Mailing List
For list archives: http://lists.mysql.com/mysql
To unsubscribe:http://lists.mysql.com/mysql
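Editor's note: a minimal sketch of the session-level override described above. The variable names are standard MySQL; the checksum query itself is omitted. The tool switches binlog_format only on its own connection, so the global ROW setting stays in force for everyone else:

```sql
-- Global format stays ROW for all other sessions:
SELECT @@global.binlog_format, @@session.binlog_format;

-- pt-table-checksum does the equivalent of this on its own connection
-- (requires the SUPER privilege):
SET SESSION binlog_format = 'STATEMENT';

-- Its checksum statements then replicate as SQL text, so each slave
-- recomputes the checksum against its own copy of the data, which is
-- exactly how differences are detected.
```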



Re: restore problem

2013-09-08 Thread Andrew Moore
I would suggest making a physical backup. Shut down MySQL on the source, copy
the datadir and start it on the destination server. (Observe configuration
differences between the two machines.)

*Benefits;* consistent backup of non-transactional files.
*Drawbacks;* downtime required.


On Sun, Sep 8, 2013 at 10:16 PM, Michael Dykman mdyk...@gmail.com wrote:

 I would suggest that you test your backup file on another full-featured
 server to determine that it is valid first.  I have done a little work
 with the raspberry pi and I doubt that the mysql distribution for that
 platform comes with all the features your server-class ubuntu does, so it
 is quite possible that your backup file is trying to take advantage of some
 facilities available on the source host that are not available on your
 lightweight target.



 On Sun, Sep 8, 2013 at 3:05 PM, Trianon33 triano...@gmail.com wrote:

  Hello,
 
  Thanks for your quick reply.
 
  I use WEBMIN and within the Webmin modules you can simply make an SQL
  backup, which is in fact a flat file consisting of MySQL commands.
 
  Does this explain enough?
 
  Thanks, BR
 
 
 
  Op 8 sep. 2013, om 21:02 heeft Luis H. Forchesatto 
  luisforchesa...@gmail.com het volgende geschreven:
 
   How did you back up the MySQL database?
  
   Att.
   Luis H. Forchesatto
  
   Em 08/09/2013 16:00, Trianon33 triano...@gmail.com escreveu:
   Hello all,
  
   As this is my first post to this list (though I've been reading for some
  time now), I may not be complete in asking my question. Apologies for that.
  
   On the other hand, this is not a prio 1 problem, so if it takes some
  mailing, that's ok.
  
   I have a website on my private webserver, which consists of some kind of
  a LAMP-server on a raspberry. Previously it ran on a laptop with Ubuntu and
  the same kind of LAMP-server.
  
   I'm trying to restore the SQL-backup from the old machine onto
  the raspberry. While finding the file and starting the restore is no
  problem, I get the following message and a full stop:
  
   SELECT MAX( version )
   FROM `phpmyadmin`.`pma_tracking`
   WHERE `db_name` = 'bbz'
   AND `table_name` = 'wp_links VALUES'
   AND FIND_IN_SET( 'INSERT', tracking ) > 0
  
   MySQL returned:
  
   #1100 - Table 'pma_tracking' was not locked with LOCK TABLES
  
   Since I do a full restore I'm processing the younameit.SQL file
  
   I'm bad at interpreting the error message; I don't understand what it
  exactly means and what I can do to cure this problem.
  
   Anyone a suggestion?
  
   Thanks in advance, BR
 
 


 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.



Re: Performance Improvements with VIEW

2013-07-30 Thread Andrew Moore
I think you're reducing the number of rows referenced throughout the proc
using the view. This might be where you're seeing a performance difference.
If you create an innodb table where the structure and row count match the
view maybe you'll see another difference? I'll wait for Rick James' input
before I say anything more. ;-)


On Tue, Jul 30, 2013 at 12:13 PM, Manivannan S. 
manivanna...@spanservices.com wrote:

 Hi,

 I've a table with 10 Million records in MySQL with INNODB engine. Using
 this table I am doing some calculations in STORED PROCEDURE and getting the
 results.

 In the stored procedure I use the base table, trying to process all the
 records in the table. But it's taking more than 15 minutes to execute the
 procedure. When executing the procedure, in the process list I see 3
 states: 'Sending data', 'Sorting Result' and 'Sending data' again.

 Then I created a view over the base table and updated the procedure
 by using that view in place of the base table; it took only 4 minutes
 to execute the procedure with the view. When executing the procedure, in
 the process list I see 2 states: 'Sorting Result' and 'Sending data'.
 The first 'Sending data' state does not happen with the view; it starts
 directly with the 'Sorting Result' state.

 When reading some MySQL sites and other blogs, I have seen it said that
 VIEWs will never improve performance. But here I see some improvement
 with a view.

 I would like to know how the VIEW is improving performance.

 Regards
 Manivannan S


 DISCLAIMER: This email message and all attachments are confidential and
 may contain information that is Privileged, Confidential or exempt from
 disclosure under applicable law. If you are not the intended recipient, you
 are notified that any dissemination, distribution or copying of this email
 is strictly prohibited.  If you have received this email in error, please
 notify us immediately by return email to mailad...@spanservices.com and
 destroy the original message.  Opinions, conclusions and other information
 in this message that do not relate to the official business of SPAN shall be
 understood to be neither given nor endorsed by SPAN.
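Editor's note: a sketch of the comparison suggested in the reply above — materialize the view's result set into a real InnoDB table and time the procedure against both. All object names here are hypothetical:

```sql
-- Hypothetical view over the 10M-row base table:
CREATE OR REPLACE VIEW v_calc AS
    SELECT id, amount, created_at
    FROM base_table
    WHERE status = 'ACTIVE';

-- The same result set, materialized once as a real InnoDB table:
CREATE TABLE t_calc ENGINE=InnoDB AS
    SELECT id, amount, created_at
    FROM base_table
    WHERE status = 'ACTIVE';

-- If the procedure runs as fast against t_calc as against v_calc, the
-- speed-up comes from the smaller row/column set, not from the view itself.
```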



Re: Audit Table storage for Primary Key(s)

2013-05-31 Thread Andrew Moore
There's been a thirst for this kind of thing for some time, but possibly
you're looking for a cheaper option? Since 5.5 there's an audit plugin API
which can be extended for your own needs; it should allow you to persist the
results to a log file which could be processed into a table before you run
your daily query.

http://dev.mysql.com/doc/refman/5.5/en/writing-audit-plugins.html
http://karlssonondatabases.blogspot.co.uk/2010/03/mysql-audit-plugin-api.html

HTH

Andy


On Fri, May 31, 2013 at 12:05 PM, Carsten Pedersen cars...@bitbybit.dk wrote:

 Based on the little information available, I would make a lookup field
 consisting of tablename and primary keys.

 (although I still believe that storing this information in the database in
 the first place is probably the wrong approach, but to each his own)

 / Carsten


 On 31-05-2013 12:58, Neil Tompkins wrote:

  The kind of look-ups will be trying to diagnose when and by whom an update
  was applied.  So the primary key of the audit is important.  My question is,
  for performance, should the primary key be stored as an indexed field
  like I mentioned before, or should I have an actual individual field per
  primary key?


 On Fri, May 31, 2013 at 11:03 AM, Carsten Pedersen cars...@bitbybit.dk
  wrote:

 Again: Unless you can give some idea as to the kind of lookups you
 will be performing (which fields? Temporal values? etc.), it is
 impossible to give advice on the table structure. I wouldn't blame
 anyone for not being able to do so; saving data for debugging will
 always be a moving target and almost by definition you don't know
 today what you'll be looking for tomorrow.

 That's why I think that using CSV tables _the contents of which can
 subsequently be analyzed using any of a number of text file
 processing tools_ may indeed be your best initial option.

 On UUIDs vs. INTs: (1) Please do yourself a favor and read up on how
 UUIDs are generated. If it's the same server that generates all the
 UUIDs, you won't get a lot of uniqueness for the amount of space
 you'll be using for your data and index; (2) Please do the math of
 just how many inserts you can do per second over the next 1.000
 years if you use a longint auto-increment field for your PK.

 / Carsten

  On 31-05-2013 11:14, Neil Tompkins wrote:

 Thanks for your response.  We expect to use the Audit log when
 looking into
 exceptions and/or any need to debug table updates.  I don't
 think a CSV
  table would be sufficient as we want to use an interface
 to query
 this data at least on a daily basis if not weekly.

  I use UUID because we currently have 54 tables, of which probably 30 will
  be audited.  So an INT PK wouldn't work because of the number of updates we
  are applying.


 On Fri, May 31, 2013 at 9:58 AM, Carsten Pedersen
  cars...@bitbybit.dk wrote:


 On 30-05-2013 09:27, Neil Tompkins wrote:

 Hi,

  I've created an Audit table which tracks any changed
 fields for multiple
 tables.  In my Audit table I'm using a UUID for the
 primary key.  However
 I
 need to have a reference back to the primary key(s) of
 the table audited.

 At the moment I've a VARCHAR field which stores primary
 keys like

 1
 1|2013-05-29
 2|2013-05-29
 2
 3
 1|2|2
 etc

  Is this the best approach, or should I have an individual field in the
  audit table for all primary keys?  At the moment I think the max number
  of primary keys on any given table is 3.

 Thanks
 Neil


 First you need to ask yourself how you expect to use the
 table in the
 future. Will you be looking up the data on a regular basis?
 Or will lookups
 only be something you will do in exceptional situtions?

 What is the intended goal of having a UUID for the primary
 key rather
 than, say, an integer - or having no PK at all?

 My immediate thought when reading this was why even store
 that data in a
 table? - if it's a simple log, use a log file. Especially
 if you don't
 know how you intend to search for data later on. There are
 many tools that
 are far superior to SQL when it comes to searching for text
 strings. You
 could even consider having a 

Re: Audit Table storage for Primary Key(s)

2013-05-31 Thread Andrew Moore
Ah-ha, excuse my earlier response, I was under the impression you were
trying to track schema changes etc.

A


On Fri, May 31, 2013 at 7:54 PM, Rick James rja...@yahoo-inc.com wrote:

 UUID PRIMARY KEY (or even secondary index) --
 Once the table gets big enough (bigger than RAM cache), each row INSERTed
 (or SELECTed) will be a disk hit.  (Rule of Thumb:  only 100 hits/sec.)
  This is because _random_ keys (like UUID) make caching useless.  Actually,
 the slowdown will be gradual.  For example, once the table is 5 times as
 big as the cache, 80% (1-1/5) of the INSERTs/SELECTs will hit disk.
 Bottom line -- Avoid UUIDs in huge tables, if at all possible.
  (Exception:  The bits in type-1 UUIDs can be rearranged to be roughly
 chronological.)

 BIGINT -- You cannot possibly hit its max with any existing hardware.

 MyISAM -- PRIMARY KEY is just another secondary index.  Secondary indexes
 are separate BTrees.

 InnoDB -- PRIMARY KEY and data coexist in the same BTree.  Secondary
 indexes are separate BTrees.

 So, assuming this audit table will be huge (too big to be cached), you
 need to carefully consider every index, both for writing and for reading.

 You mentioned that you might audit 50 tables?  An index that starts with
 table_name would be inserting/selecting in 50 spots.  If the second part of
 the index is something 'chronological', such as an AUTO_INCREMENT or
 TIMESTAMP, then there would be 50 hot spots in the index.  This is quite
 efficient.  INDEX(table_name, UUID) would be bad because of the randomness.

 InnoDB may be the preferred engine, even though the footprint is bigger.
  This is because careful design of the PK could lead to INSERTs into hot
 spot(s), plus SELECTs being able to take advantage of locality of
 reference.  With PRIMARY KEY(table_name, ...), and SELECT .. WHERE
 tablename='...', InnoDB will find all the rows together (fewer disk hits);
 MyISAM will find the data scattered (more disk hits, hence slower).

 Another aspect...  Would your SELECTs say WHERE ... AND timestamp
 BETWEEN... ?  And, would you _usually_ query _recent_ times?  If so, there
 could be a boost from doing both of these
 ** PARTITION BY RANGE(TO_DAYS(timestamp))
 ** Move timestamp to the _end_ of any indexes that it is in.

 I would be happy to discuss these principles further.  To be able to
 discuss more specifically, please provide
 ** Your tentative SHOW CREATE TABLE
 ** how big you plan for the table to become (#rows or GB),
 ** how much RAM you have

  -Original Message-
  From: Carsten Pedersen [mailto:cars...@bitbybit.dk]
  Sent: Friday, May 31, 2013 4:05 AM
  Cc: [MySQL]
  Subject: Re: Audit Table storage for Primary Key(s)
 
  Based on the little information available, I would make a lookup field
  consisting of tablename and primary keys.
 
  (although I still believe that storing this information in the database
  in the first place is probably the wrong approach, but to each his own)
 
  / Carsten
 
  On 31-05-2013 12:58, Neil Tompkins wrote:
   The kind of look-ups will be trying to diagnose when and by whom
   an update was applied.  So the primary key of the audit is important.  My
   question is, for performance, should the primary key be stored as an
   indexed field like I mentioned before, or should I have an actual
   individual field per primary key?
  
  
   On Fri, May 31, 2013 at 11:03 AM, Carsten Pedersen
    cars...@bitbybit.dk wrote:
  
   Again: Unless you can give some idea as to the kind of lookups
  you
   will be performing (which fields? Temporal values? etc.), it is
   impossible to give advice on the table structure. I wouldn't
  blame
   anyone for not being able to do so; saving data for debugging
  will
   always be a moving target and almost by definition you don't know
   today what you'll be looking for tomorrow.
  
   That's why I think that using CSV tables _the contents of which
  can
   subsequently be analyzed using any of a number of text file
   processing tools_ may indeed be your best initial option.
  
   On UUIDs vs. INTs: (1) Please do yourself a favor and read up on
  how
   UUIDs are generated. If it's the same server that generates all
  the
   UUIDs, you won't get a lot of uniqueness for the amount of space
   you'll be using for your data and index; (2) Please do the math
  of
   just how many inserts you can do per second over the next 1.000
   years if you use a longint auto-increment field for your PK.
  
   / Carsten
  
    On 31-05-2013 11:14, Neil Tompkins wrote:
  
   Thanks for your response.  We expect to use the Audit log
  when
   looking into
   exceptions and/or any need to debug table updates.  I don't
   think a CSV
    table would be sufficient as we want to use an
   interface
   to query
   this data at least on a daily basis if not weekly.
  
   I 
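Editor's note: putting Rick's points together — a PK starting with table_name, a chronological second component, the timestamp at the end of the index, and range partitioning by day — might look roughly like this. All names, column types and partition bounds are hypothetical:

```sql
CREATE TABLE audit_log (
    seq        BIGINT UNSIGNED NOT NULL AUTO_INCREMENT,
    table_name VARCHAR(64)     NOT NULL,
    changed_at DATETIME        NOT NULL,
    pk_1       VARCHAR(64)     NOT NULL,   -- up to 3 source-table key parts
    pk_2       VARCHAR(64)         NULL,
    pk_3       VARCHAR(64)         NULL,
    changed_by VARCHAR(64)     NOT NULL,
    change_set TEXT,
    -- ~50 insertion "hot spots", one per audited table; timestamp last:
    PRIMARY KEY (table_name, seq, changed_at),
    -- InnoDB requires the auto-inc column to lead some index:
    KEY (seq)
) ENGINE=InnoDB
PARTITION BY RANGE (TO_DAYS(changed_at)) (
    PARTITION p2013_06 VALUES LESS THAN (TO_DAYS('2013-07-01')),
    PARTITION p_max    VALUES LESS THAN MAXVALUE
);
```

Note that every unique key in a partitioned table must include the partitioning column, which is why changed_at appears in the primary key.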

Re: Bug in BETWEEN same DATETIME

2013-05-23 Thread Andrew Moore
Sorry, that was meant to be;

WHERE (new column stored as date) = '2013-04-16'


On Thu, May 23, 2013 at 10:16 PM, Andrew Moore eroomy...@gmail.com wrote:

 Personally I don't share your view that it's a bug. Omitting the time
 results in midnight by default so this screws between because there's no
 time between 00:00:00 and 00:00:00.

 Are you having operational issues here or are you simply fishing for bugs?

 WHERE  `transaction_date` = DATE(datetime)
 or
 WHERE  `transaction_date` = (new column stored as date)





 On Thu, May 23, 2013 at 9:55 PM, Daevid Vincent dae...@daevid.com wrote:

 I just noticed what I consider to be a bug; and related, has this been
 fixed
 in later versions of MySQL?

 We are using:
 mysql  Ver 14.12 Distrib 5.0.92, for portbld-freebsd8.1 (amd64) using  5.2

 If you use BETWEEN and the same date for both parts (i.e. you want a
 single
 day) it appears that the operator isn't smart enough to consider the full
 day in the cases where the column is a DATETIME


 http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_between

 WHERE `transaction_date` BETWEEN '2013-04-16' AND '2013-04-16'

 I actually have to format it like this to get results

 WHERE `transaction_date` BETWEEN '2013-04-16 00:00:00' AND '2013-04-16
 23:59:59'

 As it appears that in the first instance it defaults the time to 00:00:00
 always, as verified by this:

 WHERE `transaction_date` BETWEEN '2013-04-16' AND '2013-04-16 23:59:59'

 So,  I think it's probably safe to assume that if someone is using the
 BETWEEN on datetime columns, their intent more often than not is to get
 the
 full 24 hour period, not the 0 seconds it currently pulls by default.

 I also tried these hacks as per the web page above, but this doesn't yield
 results either

 WHERE `transaction_date` BETWEEN CAST('2013-04-16' AS DATE) AND
 CAST('2013-04-16' AS DATE)
 WHERE `transaction_date` BETWEEN CAST('2013-04-16' AS DATETIME) AND
 CAST('2013-04-16' AS DATETIME)

 This one works, but I fail to see how it's any more beneficial than using
 a
 string without the CAST() overhead?

 WHERE `transaction_date` BETWEEN CAST('2013-04-16 00:00:00' AS DATETIME)
 AND
 CAST('2013-04-16 23:59:59' AS DATETIME)

 Or is there some other magical incantation that is supposed to be used
 (without me manually appending the time portion)?
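Editor's note: the usual way to cover the whole calendar day without appending a time string or paying for CAST() is a half-open range instead of BETWEEN. The column name is from the thread; the table name is hypothetical. This form also stays friendly to an index on the column:

```sql
-- Matches everything from midnight up to, but not including, the next
-- midnight -- i.e. the full 24-hour period:
SELECT *
FROM transactions                 -- hypothetical table name
WHERE transaction_date >= '2013-04-16'
  AND transaction_date <  '2013-04-16' + INTERVAL 1 DAY;
```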





Re: Bug in BETWEEN same DATETIME

2013-05-23 Thread Andrew Moore
Personally I don't share your view that it's a bug. Omitting the time
results in midnight by default so this screws between because there's no
time between 00:00:00 and 00:00:00.

Are you having operational issues here or are you simply fishing for bugs?

WHERE  `transaction_date` = DATE(datetime)
or
WHERE  `transaction_date` = (new column stored as date)





On Thu, May 23, 2013 at 9:55 PM, Daevid Vincent dae...@daevid.com wrote:

 I just noticed what I consider to be a bug; and related, has this been
 fixed
 in later versions of MySQL?

 We are using:
 mysql  Ver 14.12 Distrib 5.0.92, for portbld-freebsd8.1 (amd64) using  5.2

 If you use BETWEEN and the same date for both parts (i.e. you want a single
 day) it appears that the operator isn't smart enough to consider the full
 day in the cases where the column is a DATETIME


 http://dev.mysql.com/doc/refman/5.0/en/comparison-operators.html#operator_between

 WHERE `transaction_date` BETWEEN '2013-04-16' AND '2013-04-16'

 I actually have to format it like this to get results

 WHERE `transaction_date` BETWEEN '2013-04-16 00:00:00' AND '2013-04-16
 23:59:59'

 As it appears that in the first instance it defaults the time to 00:00:00
 always, as verified by this:

 WHERE `transaction_date` BETWEEN '2013-04-16' AND '2013-04-16 23:59:59'

 So,  I think it's probably safe to assume that if someone is using the
 BETWEEN on datetime columns, their intent more often than not is to get the
 full 24 hour period, not the 0 seconds it currently pulls by default.

 I also tried these hacks as per the web page above, but this doesn't yield
 results either

 WHERE `transaction_date` BETWEEN CAST('2013-04-16' AS DATE) AND
 CAST('2013-04-16' AS DATE)
 WHERE `transaction_date` BETWEEN CAST('2013-04-16' AS DATETIME) AND
 CAST('2013-04-16' AS DATETIME)

 This one works, but I fail to see how it's any more beneficial than using a
 string without the CAST() overhead?

 WHERE `transaction_date` BETWEEN CAST('2013-04-16 00:00:00' AS DATETIME)
 AND
 CAST('2013-04-16 23:59:59' AS DATETIME)

 Or is there some other magical incantation that is supposed to be used
 (without me manually appending the time portion)?



RE: Chain Replication QUestion

2013-05-01 Thread Andrew Morgan
If you're able to use MySQL 5.6 and enable GTIDs then it gets a whole lot 
simpler as you don't need to worry about finding the correct positions in the 
binary logs. Take a look at 
http://www.mysql.com/why-mysql/white-papers/mysql-replication-high-availability/
 and http://www.mysql.com/why-mysql/white-papers/mysql-replication-tutorial/

Andrew.

 -Original Message-
 From: Rick James [mailto:rja...@yahoo-inc.com]
 Sent: 01 May 2013 16:29
 To: Richard Reina; Manuel Arostegui
 Cc: mysql@lists.mysql.com
 Subject: RE: Chain Replication QUestion
 
  1) Enable log-bin on master2 (slave that will be converted to a
 master)
 That does not 'convert' it -- it makes it both a Master and a Slave (a
 Relay).
 
 The CHANGE MASTER is probably correct, but it is difficult to find the
 right spot.
 A simple way is to
 1. Stop all writes everywhere.
 2. Wait for replication to catchup everywhere.
 3. FLUSH LOGS everywhere.
 4. Now CHANGE MASTER on the Slave to the start (POS=0 or 4) of the
 freshly created binlog in the machine that is the Slave's new Master.
 5. Start writes.
 
 
  -Original Message-
  From: Richard Reina [mailto:gatorre...@gmail.com]
  Sent: Wednesday, May 01, 2013 6:00 AM
  To: Manuel Arostegui
  Cc: mysql@lists.mysql.com
  Subject: Re: Chain Replication QUestion
 
  Hello Manuel,
 
  Thank you for your reply. Could I do the following?:
 
  1) Enable log-bin on master2 (slave that will be converted to a
 master)
  2) Enable log-slave-updates on master2
  3) Execute CHANGE MASTER TO on another existing slave so that it gets
  its updates from master2 instead of master1.
 
  Thanks for the help thus far.
 
 
  2013/4/30, Manuel Arostegui man...@tuenti.com:
   2013/4/30 Richard Reina gatorre...@gmail.com
  
   I have a few slaves set up on my local network that get updates
 from
   my main mysql database master. I was hoping to turn one into a
  master
   while keeping it a slave so that I can set up a chain.  Does
 anyone
   know where I can find a how to or other documentation for this
   specific task?
  
  
   It is quite easy:
  
   Enable log-slave-updates in the slave you want to be a master.
   Do a mysqldump -e --master-data=2 and put that mysqldump in the
  future
   slaves. Take a look at the first lines of the mysqldump where
 you'll
   find the position and logfile those slaves need to start the
  replication from.
   You can also use xtrabackup if you like.
  
   Manuel.
  
 
 
 
 

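Editor's note: step 4 of Rick's recipe, written out. The host, credentials and binlog file name below are placeholders, and log-bin plus log-slave-updates must already be enabled on the relay (master2):

```sql
-- On the slave being repointed, after writes have stopped and
-- FLUSH LOGS has been run everywhere:
STOP SLAVE;
CHANGE MASTER TO
    MASTER_HOST     = 'master2.example.com',  -- the new relay master
    MASTER_USER     = 'repl',
    MASTER_PASSWORD = '...',                  -- placeholder
    MASTER_LOG_FILE = 'mysql-bin.000042',     -- the freshly rotated binlog
    MASTER_LOG_POS  = 4;                      -- start of that file
START SLAVE;
SHOW SLAVE STATUS\G
```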



RE: MySQL Cluster or MySQL Cloud

2013-04-30 Thread Andrew Morgan
Hi Neil,

 If you use MySQL Cluster then you have synchronous replication between the 2 
data nodes which means that if one should fail you're guaranteed that the other 
contains the effects of every committed transaction and that the change has 
already been applied and so there is no delay while relay logs are applied 
before the automatic failover kicks in - which is why it can take less than a 
second.

 You also have a good scale-out story with MySQL Cluster as you can just 
continue to add more nodes (256 in total, 48 of which can be data nodes) without 
having to worry about partitioning, failover etc.

Regards, Andrew.

 -Original Message-
 From: Neil Tompkins [mailto:neil.tompk...@googlemail.com]
 Sent: 29 April 2013 14:50
 To: Andrew Morgan
 Cc: [MySQL]
 Subject: Re: MySQL Cluster or MySQL Cloud
 
 Hi Andrew,
 
 Thanks for your response and the useful white paper.  I've read the
 document in great detail.  I'm looking for the best uptime possible for my
 application and am still struggling to see the major differences with MySQL
 Cluster compared to MySQL in the Cloud on multiple servers; apart from
 MySQL Cluster being a much better solution for automatic failover, including
 IP failover.
 
 Regards, Neil
 
 
 On Mon, Apr 29, 2013 at 8:47 AM, Andrew Morgan
  andrew.mor...@oracle.com wrote:
 
  Hi Neil,
 
   I hate just sending people off to white papers but you might get some
  good insights by taking a look at the MySQL Guide to High
  Availability Solutions paper -
  http://www.mysql.com/why-mysql/white-papers/mysql-guide-to-high-availability-solutions/
 
  Regards, Andrew.
 
  Andrew Morgan - MySQL High Availability Product Management
  andrew.mor...@oracle.com @andrewmorgan www.clusterdb.com
 
   -Original Message-
   From: Neil Tompkins [mailto:neil.tompk...@googlemail.com]
   Sent: 27 April 2013 23:28
   To: [MySQL]
   Subject: Fwd: MySQL Cluster or MySQL Cloud
  
    If deploying MySQL in the Cloud with two MySQL servers with
    master-to-master replication I have a good failover solution.
    
    What's the difference in terms of availability if we opted for MySQL
    Cluster instead?
  
  
 




RE: MySQL Cluster or MySQL Cloud

2013-04-29 Thread Andrew Morgan
Hi Neil,

 I hate just sending people off to white papers but you might get some good 
insights by taking a look at the MySQL Guide to High Availability Solutions 
paper - 
http://www.mysql.com/why-mysql/white-papers/mysql-guide-to-high-availability-solutions/

Regards, Andrew.

Andrew Morgan - MySQL High Availability Product Management
andrew.mor...@oracle.com
@andrewmorgan
www.clusterdb.com

 -Original Message-
 From: Neil Tompkins [mailto:neil.tompk...@googlemail.com]
 Sent: 27 April 2013 23:28
 To: [MySQL]
 Subject: Fwd: MySQL Cluster or MySQL Cloud
 
  If deploying MySQL in the Cloud with two MySQL servers with
 master-to-master replication I have a good failover solution.
 
  What's the difference in terms of availability if we opted for MySQL
 Cluster instead?
 
 




Re: mysql 5.6.10 won't start

2013-04-15 Thread Andrew Moore
Have you got permissions to start the daemon (sudo)? Is the destination for
the pid file writable by the MySQL user?

A


On Mon, Apr 15, 2013 at 2:20 PM, Paul Nowosielski paulnowosiel...@yahoo.com
 wrote:

 Hi,

 I'm running mysql 5.6.10 on Fedora.
 When I try to boot mysql I get this:

 # /etc/init.d/mysql start
 Starting MySQL.The server quit without updating PID file
 (/[FAILED]xt/mysql/veritian.**pid).

 Any ideas?

 Thank you,

 Paul





RE: Mysql Cluster Sync-UP

2013-04-10 Thread Andrew Morgan
Hi Kevin,

 What do you mean by running MySQL in cluster mode - MySQL Cluster? If so then 
the data is stored in the data nodes rather than the MySQL Servers and so if 
bad data is written to one MySQL Server then that same bad data will be viewed 
through the other MySQL Server too.

Regards, Andrew.

 -Original Message-
 From: Kevin Peterson [mailto:qh.res...@gmail.com]
 Sent: 09 April 2013 04:58
 To: mysql@lists.mysql.com
 Subject: Mysql Cluster Sync-UP
 
 Hi,
 
 I am running MySQL in cluster mode with two machines. I want to know: if the
 mysql database gets corrupted on one of the machines, will it force the
 corruption on the other machine too, or will the sync between the two
 mysql instances stop after the corruption?
 
 Thanks,
 Kevin Peterson
 
 
 
 




RE: Converting Mysql to mysql cluster

2013-03-27 Thread Andrew Morgan


 -Original Message-
 From: Kevin Peterson [mailto:qh.res...@gmail.com]
 Sent: 27 March 2013 06:58
 To: mysql@lists.mysql.com
 Subject: Converting Mysql to mysql cluster
 
 Hi,
 
 My site is using MySQL and PHP; now, for scaling purposes, I want to
 introduce MySQL Cluster. A few questions are - 1. Do I need to change
 any code which is written in PHP?

The answer is yes and no. There's a good chance that your application will 
work fine with MySQL Cluster without any changes *but* there are a few gotchas 
such as:

 - the current GA version of MySQL Cluster (7.2) doesn't implement Foreign Keys 
(coming in Cluster 7.3)
 - (ignoring BLOBs) rows cannot be larger than 13 Kb
 - no geo-spatial indexes
 - no full-text search

A good place to get more information is 
http://www.mysql.com/why-mysql/white-papers/mysql-cluster-evaluation-guide/ . 
That guide also gives you some advice on scenarios where you *shouldn't* use 
MySQL Cluster.

In addition, as you should expect, to get the best performance out of MySQL 
Cluster you may want to tweak your schema and/or application - you can get lots 
of tips from 
http://www.mysql.com/why-mysql/white-papers/guide-to-optimizing-performance-of-the-mysql-cluster/

 2. What are the steps to convert mysql to mysql-cluster.

Basically, you need to backup your database (mysqldump), load it into a MySQL 
Server that's part of your Cluster (use 
http://www.clusterdb.com/mysql-cluster/auto-installer-labs-release/) to get 
your first Cluster up and running and then issue ALTER TABLE tab-name 
ENGINE=ndb;

 
 Appreciate the help.
 
 
 
 

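Editor's note: a quick pre-flight check for one of the gotchas listed above (foreign keys, which Cluster 7.2 does not support), followed by the per-table conversion. The schema and table names are hypothetical:

```sql
-- Any rows returned here identify tables that rely on foreign keys:
SELECT constraint_schema, table_name, constraint_name
FROM information_schema.referential_constraints
WHERE constraint_schema = 'mydb';

-- Then convert table by table:
ALTER TABLE mydb.customers ENGINE = NDBCLUSTER;  -- NDB is a synonym
```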



RE: Promoting MySQL 5.5 slave to master

2013-03-21 Thread Andrew Morgan


 -Original Message-
 From: Miguel Gonzalez [mailto:miguel_3_gonza...@yahoo.es]
 Sent: 21 March 2013 08:29
 To: Manuel Arostegui
 Cc: mysql@lists.mysql.com
 Subject: Re: Promoting MySQL 5.5 slave to master
 
 
 
 
 
 - Original Message -
 From: Manuel Arostegui man...@tuenti.com
 To: Miguel Gonzalez miguel_3_gonza...@yahoo.es
 CC: mysql@lists.mysql.com
 Sent: Thursday, 21 March 2013 9:17
 Subject: Re: Promoting MySQL 5.5 slave to master
 
 2013/3/21 Miguel Gonzalez miguel_3_gonza...@yahoo.es
 
 
 
 
  Can you elaborate about this? I thought that once you fixed the issues
  in the master server you needed to set it as slave of the new promoted
  master server, and do the other way round.
 
 
 
  By having both MySQL replicating from each other, you'd avoid this.
 
 I'm trying not to overcomplicate things. Also, the server where the replica of
 the production server lives is not active at all. I thought master-master
 configurations were more common when you were behind a web balancer.
 
 I also thought that replication had changed in MySQL 5.5. Is it that difficult to
 find a clear procedure for performing a MySQL failover when a server has
 crashed?

[AM] If you've the option to use MySQL 5.6 then managing replication is a lot 
simpler and more reliable... 
http://www.clusterdb.com/mysql-replication/mysql-5-6-ga-replication-enhancements/

Failing over with MySQL 5.6 is described in section 5 of 
http://www.mysql.com/why-mysql/white-papers/mysql-replication-tutorial/ - note 
that the paper deals with a more complex scenario where there are multiple 
slaves, so you could simplify it for your setup.

 
 Make sure you do reply all instead of replying only to me :-)
 
 Sorry about that, I thought the list was configured differently
 
 Miguel
 
 
 




RE: mysql cluster and auto shard

2013-03-18 Thread Andrew Morgan


 -Original Message-
 From: Mike Franon [mailto:kongfra...@gmail.com]
 Sent: 18 March 2013 13:34
 To: mysql@lists.mysql.com
 Subject: mysql cluster and auto shard
 
 I am looking at  the best way to scale writes.
 
 Either using sharding with our existing infrastructure, or moving to
 mysql cluster.
 
  Does anyone have any pros/cons to using MySQL Cluster?  I am trying to
  gain a better understanding of how the auto-sharding works.  Is it
  true that we do not need to change code much at the application level?

As a starting point, I think it's worth taking a look at this white paper... 
http://www.mysql.com/why-mysql/white-papers/mysql-cluster-evaluation-guide/ 

Most things will continue to work when migrating to MySQL Cluster but of course 
(as with any storage engine) to get the best performance you'll probably need 
to make some changes; this second paper explains how to optimize for MySQL 
Cluster - hopefully that will give a good feeling for the types of changes that 
you might need/want to make... 
http://www.mysql.com/why-mysql/white-papers/guide-to-optimizing-performance-of-the-mysql-cluster/


 
 
 Thanks
 
 




RE: MySQL Cluster Solution

2013-03-07 Thread Andrew Morgan
Hi Neil,

 MySQL Cluster *does* support stored procedures. There are some limitations that 
MySQL Cluster has; this white paper would be a good place to start... 
http://www.mysql.com/why-mysql/white-papers/mysql-cluster-evaluation-guide/

Regards, Andrew.



 -Original Message-
 From: Neil Tompkins [mailto:neil.tompk...@googlemail.com]
 Sent: 07 March 2013 14:57
 To: [MySQL]
 Subject: MySQL Cluster Solution
 
 Hi,
 
 I've used in the past MySQL Community Server 5.x.  Everything is fine,
 however I'm now wanting to implement a new High Availability solution
 and am considering MySQL Cluster.  However, I heard that MySQL Cluster
 doesn't support store procedures ?  Are there any other restrictions I
 need to be aware of.
 
 Thanks
 Neil




Re: SELECT subquery problem

2013-02-05 Thread Andrew Moore
Try using a CASE construct in the select. Should work for this.

A
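
A sketch of that CASE approach, using the table and column names from the quoted question below:

```sql
SELECT last_name, first_name, phone,
       CASE WHEN pub_email = 'Y' THEN email ELSE '' END AS email
FROM `mydatabasetable`
WHERE `current_member` = 'Y'
  AND `pub_name` = 'Y'
ORDER BY last_name ASC;
```

Rows with pub_email = 'N' still appear, but with an empty string in the email column.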


On Tue, Feb 5, 2013 at 3:25 PM, Stefan Kuhn stef...@web.de wrote:

  You cannot do this. An SQL result always has the same number of columns in
  each row. You could have NULL or '' (an empty string) in the column, though. This
  could be done via the IF(,,) function of MySQL, or by using a UNION of two
  SELECTs, one for pub_email='N' and the other for the rest.


 Gesendet: Dienstag, 05. Februar 2013 um 15:49 Uhr
 Von: cl c...@nimbleeye.com
 An: mysql@lists.mysql.com
 Betreff: SELECT subquery problem
 De-lurking here.

 I am trying to figure out how to return results from a query. What I need
 to do is to return 4 columns from a database. This is easy:

 SELECT last_name, first_name, phone, email FROM `mydatabasetable` WHERE
 `current_member` = Y AND `pub_name` = Y ORDER BY last_name ASC

 This works fine, as expected.

 But, I want to only display the value in `email` if the value in another
 field, `pub_email` = Y So, the resultant output would look like this, for
 instance, if the value of `pub_email` =N for Mr. Wills:

 Jones John 555-555- johnjo...@nowhere.com
 Smith Jim 555-222- jimsm...@nothing.com
 Wills Chill 555-111-
 Zorro Felicity 555-999- felicityzo...@madeup.com

 Can't quite figure out how to express this.

 TIA for your suggestions!










Re: log sequence number InnoDB: is in the future!?

2013-01-28 Thread Andrew Moore
Dump and reload or use some scripting to create and drop some fake data to
increase the lsn towards the 'future' value.

http://dba.stackexchange.com/questions/8011/any-better-way-out-of-mysql-innodb-log-in-the-future
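
A rough sketch of the fake-data approach, assuming a throwaway InnoDB table (the table name is made up; keep generating writes until SHOW ENGINE INNODB STATUS reports a log sequence number past the 'future' value from the error):

```sql
CREATE TABLE lsn_bump (id INT AUTO_INCREMENT PRIMARY KEY,
                       pad CHAR(255)) ENGINE=InnoDB;
-- Each round of inserts and deletes writes redo log and advances the LSN;
-- repeat (e.g. from a shell loop) until the LSN has caught up.
INSERT INTO lsn_bump (pad)
  SELECT REPEAT('x', 255) FROM information_schema.tables LIMIT 500;
DELETE FROM lsn_bump;
DROP TABLE lsn_bump;
```

Dump-and-reload remains the safer option when downtime is acceptable.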


On Mon, Jan 28, 2013 at 12:01 PM, walter harms wha...@bfs.de wrote:

 hi list,

 I am using MySQL 5.1.53.
 After a crash I have the following error in my log:

 130128 10:45:25  InnoDB: Error: page 61 log sequence number 0 2871649158
 InnoDB: is in the future! Current system log sequence number 0 2494349480.
 InnoDB: Your database may be corrupt or you may have copied the InnoDB
 InnoDB: tablespace but not the InnoDB log files. See
 InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
 InnoDB: for more information.

 according to the doc's at
 http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
 I need to restore the database from scratch (short version). What I do not
 understand is what exactly is broken. The whole DBMS? One instance?
 (mysqlcheck says all tables are OK.)

 Not all tables are InnoDB. Is it possible to restore only the InnoDB tables?
 (Having fun with foreign keys.)

 Or is there a better way to handle this?

 re,
  wh





Re: log sequence number InnoDB: is in the future!?

2013-01-28 Thread Andrew Moore
So this isn't production - well just rebuild it from a backup? It's a pain
in the rear to get the lsn aligned again through data creation/removal but
if it's a system critical instance without possible downtime you've got
some work to do...


On Mon, Jan 28, 2013 at 2:21 PM, walter harms wha...@bfs.de wrote:



 Am 28.01.2013 15:01, schrieb Manuel Arostegui:
  2013/1/28 walter harms wha...@bfs.de
 
  hi list,
 
  i am using mysql 5.1.53.
  after a crash I have the following error in my log:
 
  130128 10:45:25  InnoDB: Error: page 61 log sequence number 0
 2871649158
  InnoDB: is in the future! Current system log sequence number 0
 2494349480.
  InnoDB: Your database may be corrupt or you may have copied the InnoDB
  InnoDB: tablespace but not the InnoDB log files. See
  InnoDB: http://dev.mysql.com/doc/refman/5.1/en/forcing-recovery.html
  InnoDB: for more information.
 
  according to the doc's at
  http://dev.mysql.com/doc/refman/5.1/en/forcing-innodb-recovery.html
  I need to restore the database from scratch (short version). What i do
 not
  understand is what
  exactly is broken ?  Whole DBM ? One Instance ? (mysqlcheck says all
  tables are ok).
 
   Not all tables are InnoDB. Is it possible to restore only InnoDB tables?
   (Having fun with foreign keys)
 
  Or is there a better way to handle this ?
 
 
 
  Hello,
 
  I reckon you really need to think of what caused your MySQL to crash. If
  there's not a clear reason (HW problem) you might want to dig into that
 to
  prevent this happening again. I am saying this because it is not the
 first
  time I see someone fixing a corruption (re-building the database or
 fixing
  corrupted tables) and then getting it corrupted again within some hours.
 
 very simple: a power outage.
 Our production servers are on UPS, but I was running tests on this one and, to
 be fair, power outages are very rare.

  The problem itself has a solution: increasing the log sequence counter. I
  wouldn't do it if it's not totally necessary (ie: you don't have another
  machine to copy the data from). If you can get the data copied again from
  some other server, that is probably the safest solution here to make sure
  your data isn't corrupted. If not, I would suggest to run
 pt-table-checksum
  to make sure the data is okay. Once your DB is recovered from this crash.
 

   pt-table-checksum means this tool? [
  http://www.percona.com/doc/percona-toolkit/2.1/pt-table-checksum.html]
 I would need to run it only once; from the description I had the impression it
 is intended for monitoring. Could you please explain?

 re,
  wh

  Cheers
  Manuel.
 





Re: MySQL Cluster alerts

2012-12-18 Thread Andrew Morgan
Hi Bheemsen,

 looks like a few different things going on there; if you have a MySQL support 
contract/subscription then it would be worth raising SRs - it doesn't need to 
be a bug, it's fine to ask questions too. A couple of things that spring to 
mind in-line

 I am frequently seeing the following alerts in our production MySQL Cluster
 environment. Do you have any metrics, guidelines and scripts to monitor and
 fix these alerts? Any help is appreciated.

 Temporary Tables To Disk Ratio Excessive
 Excessive Disk Temporary Table Usage Detected

 Table Scans Excessive
 Indexes Not Being Used Efficiently

If you're using MySQL Cluster 7.2 then you should run OPTIMIZE TABLE for each 
of your tables (repeat that step whenever you make schema changes to it, add 
an index or make very significant data changes). This will make the optimizer 
make better use of available indexes. Use the query analyzer in MySQL 
Enterprise Monitor (MEM) to see which queries are taking the time as these are 
likely to be the table scans (full table scans should be avoided as much as 
possible). You can use the EXPLAIN command to see if individual queries are 
making use of the available indexes. Try adding new indexes if they're missing 
for high-running transactions.
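
That workflow could look like the following sketch (the database, table, and column names are placeholders):

```sql
-- Re-run after schema changes, new indexes, or large data changes
OPTIMIZE TABLE app.orders;
-- Check whether a slow query uses an index or does a full table scan
EXPLAIN SELECT * FROM app.orders WHERE customer_id = 42;
-- Add an index if the EXPLAIN output shows no usable key
ALTER TABLE app.orders ADD INDEX idx_customer (customer_id);
```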


 Thread Cache Size May Not Be Optimal

 Cluster DiskPageBuffer Hit Ratio Is Low

Note that you might observe this after restarting a data node as the cache must 
be repopulated as queries come in. If you're seeing this at other times or the 
MEM graphs show that the DiskPageBuffer Hit Ratio is consistently low then 
consider increasing it... 
http://dev.mysql.com/doc/mysql-cluster-excerpt/5.1/en/mysql-cluster-ndbd-definition.html#ndbparam-ndbd-diskpagebuffermemory
 






Re: MySQL dying?

2012-12-05 Thread Andrew Moore
MySQL runs the internet. Go ahead kill it. I dare you.


On Tue, Dec 4, 2012 at 11:18 PM, Karen Abgarian a...@apple.com wrote:

 MySQL, like all other products, can be peachy or bitchy.   Good ones, they
 also die.Wish I was kidding :-)


 On Dec 4, 2012, at 2:37 PM, Andrés Tello wrote:

  Are u kidding?
 
  MySQL is dead easy and damn good... obviously it has its quirks, but any
 database engine has them... (I'm looking at you, DB2)...
 
  There have been a lot of improvements lately; I feel that MySQL is
 moving much faster under the Oracle umbrella than when it was alone...
 
  Replication... omg, replication is DEAD easy! So easy that usually you
 double-check things just because you are unsure it is SO easy...
 
  Partitioning, views, stored procedures, EXPLAIN and ANALYZE are good
 enough... it simply works... I never, ever had a database corruption (I'm
 looking at you, MSSQL!) ...
 
  I have tables with more than 5 million rows, no problem, and I have a table with
 more than 40 million rows... and of course I have troubles, but they have always
 been resolved...
 
  MySQL is... wonderful, it just works... You want it to work as
 storage without integrity? Done. With integrity? Done. Replication,
 encryption, secure connections, partitioning... name your feature!
 
 
 
 
 
 
 
 
 
  On Tue, Dec 4, 2012 at 12:50 PM, Singer Wang w...@singerwang.com
 wrote:
  Lol! Good point Karen!
 
 
  On Tue, Dec 4, 2012 at 1:02 PM, Karen Abgarian a...@apple.com wrote:
 
   A touch of realism: we are all dying.   For some, it may take a while,
   hopefully.
  
   On 04.12.2012, at 9:53, Tim Pownall wrote:
  
Mysql is used by just about every web host and is one of the most
 common
database servers around the world. I do not have any intent to stop
 using
mysql unless they start charging for it which I do not think will
 happen.
   
Thanks,
   
Tim Pownall
Sr. Linux Systems Monitoring
Hostgator.com LLC
   
On Tue, Dec 4, 2012 at 11:45 AM, Anthony Pace 
 anthony.p...@utoronto.ca
   wrote:
   
I have heard that due to Oracle taking over, the OS community is
   shifting
to other type of DB's .
   
Any thoughts?
   
   
   
   
   
--
   
Thanks,
   
Tim Pownall
GNU/Linux Systems Monitoring
610-621-9712
pownall...@gmail.com
  
  
  
  
 




Re: Update One of Three tables in a single query

2012-11-29 Thread Andrew Moore
What's your use case? I've not heard of anything like this being done at the
table/SQL level.

You could use stored procedures and dynamic SQL to achieve it, though.
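
For this particular case a plain IF-based procedure is enough (no dynamic SQL needed); a sketch using the table and column names from the quoted question below:

```sql
DELIMITER //
CREATE PROCEDURE set_status(IN a INT, IN b INT, IN c INT,
                            IN new_status VARCHAR(32))
BEGIN
  -- Exactly one of a, b, c is non-zero; update the matching table only
  IF a > 0 THEN
    UPDATE TableA SET Status = new_status WHERE idA = a;
  ELSEIF b > 0 THEN
    UPDATE TableB SET Status = new_status WHERE idB = b;
  ELSEIF c > 0 THEN
    UPDATE TableC SET Status = new_status WHERE idC = c;
  END IF;
END //
DELIMITER ;

-- Example: only B is set, so only TableB is updated
CALL set_status(0, 42, 0, 'CLOSED');
```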


On Thu, Nov 29, 2012 at 5:44 PM, Chris W 4rfv...@cox.net wrote:

 I have three tables, TableA, TableB, and TableC each has a unique ID
 field, idA, idB, idC respectively each table also has a field called
 'Status'.  The id field is always greater than zero.  Now suppose I have
 three variables A, B, and C.   Two of them are zero and the other is a
 valid ID for the corresponding table.  Only I don't know which one.  Is
 there a way to write a single update statement that will update only the
  one table that has the matching ID?  I have tried a few different ideas but
 none seem worth mentioning here since they all either update too many
 records or don't update any records.

 Thanks for the help.

 Chris W





Re: innodb_lock_wait_timeout and replication

2012-10-12 Thread Andrew Moore
You are subscribed to the MySQL mailing list. You will need to unsubscribe
yourself.

On Fri, Oct 12, 2012 at 6:58 PM, Hubert de Donnea 
hubertdedon...@hotmail.com wrote:

 I have been getting all your mails for years... could you not help me and
 remove my name from your contacts?  Thanks

  To: mysql@lists.mysql.com
  From: markus.f...@fasel.at
  Subject: innodb_lock_wait_timeout and replication
  Date: Fri, 12 Oct 2012 16:08:42 +0200
 
  I encountered an error
  MySQL Error: Lock wait timeout exceeded; try restarting transaction
  MySQL Error No: 1205
 
  For this very statement an entry in the binlog was filed
 
  ...
  # Query_time: 52  Lock_time: 0  Rows_sent: 0  Rows_examined: 0
  ...
 
  Why is there an entry in the binlog if the statement failed?
  If it is in the binlog it will be replicated to a slave and on the slave
  it will succeed (maybe).
 
  Does this mean that a Lock wait timeout exceeded potentially drive the
  slave out of sync?
  --
  Kind Regards, Markus Falb
 




Re: InnoDB corrupt after power failure

2012-10-04 Thread Andrew Miklas
Hi Manuel,

Thanks for the fast reply.

On Oct 4, 2012, at 12:05 AM, Manuel Arostegui wrote:
snip
 it shouldn't be a biggie if you have a BBU. Do you guys use HW RAID + BBU?

We've checked with our hosting provider, and the database was indeed stored on 
a BBU RAID.

 What's your innodb_flush_log_at_trx_commit value?

mysql> show variables like 'innodb_flush_log_at_trx_commit'\G
*************************** 1. row ***************************
Variable_name: innodb_flush_log_at_trx_commit
Value: 1
1 row in set (0.00 sec)


 Have you tried playing with innodb_force_recovery option to try to get the 
 server started at least? That way you might be able to identify which 
 table(s) is/are the corrupted one and the one(s) preventing the whole server 
 from booting up. 

As the affected machine was just a read only slave, it was easier for me to get 
things back into service by just reloading off the master.  Unfortunately, I 
didn't think to keep the corrupted ibd files for later debugging.

At this point, I'm more trying to figure out if there's something wrong with 
the DB or host config.  There was effectively no data loss, but I'm worried we 
might have data loss or availability issues if this error crops up on our 
master server.


Thanks,


Andrew





Re: InnoDB corrupt after power failure

2012-10-04 Thread Andrew Miklas
Hi Rick,

On Oct 4, 2012, at 2:40 PM, Rick James wrote:

 I hope you turned OFF caching on the drives, themselves.  The BBU should be 
 the single place that caches and is trusted to survive a power outage.

The DB server in question is running in a virtualized environment, so the array 
shows up as a SCSI device inside our VM.  I can't use hdparm to directly check 
whether the disks are doing write caching, but our hosting provider assures us 
that once data is sent to the virtual SCSI device from inside the VM, it will 
be persisted to disk even if there's a power failure.

I'm a bit suspicious of a recent change we did to switch our ext3 journals from 
data=ordered to data=writeback.  The ext3 docs say a crash+recovery can cause 
incorrect data to appear in files which were written shortly before the crash 
[1].  As a result, if a tablespace were extended just before the power failure, 
it might be possible that when MySQL restarts, it will see random data at the 
end of the tablespace.  It seems like this could happen even if the disks are 
BBU / not write caching, because the increase of the ibd's file size in the 
inode and the zeroing out of the new blocks assigned to the file are not atomic 
with respect to one another.

Is the InnoDB recovery process OK with this scenario?  Has anyone else seen 
corruption problems with data=writeback?


-- Andrew


[1] http://lxr.linux.no/linux+v3.5.2/Documentation/filesystems/ext3.txt



InnoDB corrupt after power failure

2012-10-03 Thread Andrew Miklas
Hi guys,

I recently had a data corruption issue with InnoDB.  MySQL was shut down 
improperly (power failure), and when the system came back up, MySQL refused to 
start.  On inspection of the logs (see below), it looks like the tablespace 
became seriously corrupted.  In the end, I had to rebuild the slave using 
mysqldump.

I'm curious what happened here, since I thought InnoDB wasn't supposed to 
become corrupted on an improper shutdown.  One possibility that we were 
exploring was that the filesystem journal setting was incorrect.  We were using 
ext3 with the journal set to writeback mode.  Is this a known bad config with 
InnoDB?


Thanks for any help,


Andrew


---

MySQL server version: Server version: 5.5.27-1~ppa1~lucid-log (Ubuntu)
(Running on Ubuntu 10.04.2 LTS)

120831 20:56:01 InnoDB: The InnoDB memory heap is disabled
120831 20:56:01 InnoDB: Mutexes and rw_locks use GCC atomic builtins
120831 20:56:01 InnoDB: Compressed tables use zlib 1.2.3.3
120831 20:56:02 InnoDB: Initializing buffer pool, size = 5.0G
120831 20:56:03 InnoDB: Completed initialization of buffer pool
120831 20:56:06 InnoDB: highest supported file format is Barracuda.
InnoDB: Log scan progressed past the checkpoint lsn 62096393185
120831 20:56:06  InnoDB: Database was not shut down normally!
InnoDB: Starting crash recovery.
InnoDB: Reading tablespace information from the .ibd files...
InnoDB: Restoring possible half-written data pages from the doublewrite
InnoDB: buffer...
InnoDB: Warning: database page corruption or a failed
InnoDB: file read of space 0 page 230.
InnoDB: Trying to recover it from the doublewrite buffer.
InnoDB: Recovered the page from the doublewrite buffer.
InnoDB: Warning: database page corruption or a failed
InnoDB: file read of space 0 page 373.
InnoDB: Trying to recover it from the doublewrite buffer.
InnoDB: Recovered the page from the doublewrite buffer.
InnoDB: Warning: database page corruption or a failed
InnoDB: file read of space 0 page 214.
InnoDB: Trying to recover it from the doublewrite buffer.
InnoDB: Recovered the page from the doublewrite buffer.
InnoDB: Warning: database page corruption or a failed
InnoDB: file read of space 0 page 222.
InnoDB: Trying to recover it from the doublewrite buffer.
InnoDB: Recovered the page from the doublewrite buffer.
InnoDB: Warning: database page corruption or a failed
InnoDB: file read of space 0 page 2673.
InnoDB: Trying to recover it from the doublewrite buffer.
InnoDB: Recovered the page from the doublewrite buffer.
InnoDB: Warning: database page corruption or a failed
InnoDB: file read of space 0 page 2681.
InnoDB: Trying to recover it from the doublewrite buffer.
InnoDB: Recovered the page from the doublewrite buffer.
InnoDB: Warning: database page corruption or a failed
InnoDB: file read of space 0 page 46.
InnoDB: Trying to recover it from the doublewrite buffer.
InnoDB: Recovered the page from the doublewrite buffer.
InnoDB: Warning: database page corruption or a failed
InnoDB: file read of space 0 page 62.
InnoDB: Trying to recover it from the doublewrite buffer.
InnoDB: Recovered the page from the doublewrite buffer.
InnoDB: Doing recovery: scanned up to log sequence number 62096881152
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 198.
InnoDB: You may have to recover from a backup.
120831 20:56:33  InnoDB: Page dump in ascii and hex (16384 bytes):
 len 16384; hex
{Big dump here -- I can supply if needed}

InnoDB: End of page dump
120831 20:56:33  InnoDB: Page checksum 3859504003, prior-to-4.0.14-form 
checksum 1080681687
InnoDB: stored checksum 3859504003, prior-to-4.0.14-form stored checksum 
3870577874
InnoDB: Page lsn 14 1966349405, low 4 bytes of lsn at page end 1966973261
InnoDB: Page number (if stored to page already) 198,
InnoDB: space id (if created with >= MySQL-4.1.1 and stored already) 0
InnoDB: Page may be a system page
InnoDB: Database page corruption on disk or a failed
InnoDB: file read of page 198.
InnoDB: You may have to recover from a backup.
InnoDB: It is also possible that your operating
InnoDB: system has corrupted its own file cache
InnoDB: and rebooting your computer removes the
InnoDB: error.
InnoDB: If the corrupt page is an index page
InnoDB: you can also try to fix the corruption
InnoDB: by dumping, dropping, and reimporting
InnoDB: the corrupt table. You can use CHECK
InnoDB: TABLE to scan your table for corruption.
InnoDB: See also 
http://dev.mysql.com/doc/refman/5.5/en/forcing-innodb-recovery.html
InnoDB: about forcing recovery.
InnoDB: Ending processing because of a corrupt database page.
120831 20:56:33  InnoDB: Assertion failure in thread 140548948399904 in file 
buf0buf.c line 3609
InnoDB: We intentionally generate a memory trap.
InnoDB: Submit a detailed bug report to http://bugs.mysql.com.
InnoDB: If you get repeated assertion failures or crashes, even
InnoDB: immediately after the mysqld startup, there may be
InnoDB: corruption in the InnoDB tablespace. Please refer

Re: mysql is eating up lot of CPU and memory

2012-09-20 Thread Andrew Moore
Hey Simon,

You just performed the classic "I have an issue, but I won't provide any
info."

Please elaborate on the issue. Provide hardware specs and config information.
Tell us why you suspect that mysqld is consuming too many resources.

We can attempt to assist with more info as there are no silver bullets here
except using the blackhole storage engine for all tables (joke) .

Andrew
On 20 Sep 2012 18:15, Simon K k_simo...@yahoo.com wrote:

 Hi All,

 I am using HP-UX box.

 I am having a problem with the mysqld daemon; it is eating too much
 processor and memory. Are there any kernel tweaks for memory and processor
 I need to make to get MySQL to run? It is also taking too much time to run
 queries on the HP-UX box.

 Has anyone faced this kind of problem? Please help me sort this out.


 Thanks & Regards,
 Simon


Re: create roles/groups in MYSQL

2012-08-02 Thread Andrew Moore
There's nothing built in, but if you want to explore this, there is an extension:

http://www.securich.com/


On 1 Aug 2012 21:56, Aastha aast...@gmail.com wrote:

 Hello,

 I have 10 different users and I have to give different access rights to
 different tables.
 Is it possible to create groups with different sets of access rights on
 different tables?

 I know there are ROLES and PROFILES in Oracle. Is there something similar
 in MySQL.

 Thanks,
 Aastha Gupta



Re: Disable DB without deleting data

2012-07-20 Thread Andrew Moore
Or rename the users themselves. This backs onto the revoke idea. If only
the DBA knows the new names then the privileges stay intact and the old
usernames could be restored with a reversal script when the db can be
accessed again. Applications will simply get an access denied at connection
time. Not the most elegant solution but practical as long as you're sure
the other apps are privileged correctly.

On Friday, July 20, 2012, Reindl Harald wrote:

 the permissions are in the mysql database
 so there are two possibilites

 * mysql is not interested and do not touch them at all
   means they are still for the old names and work after
    rename back as before

 * mysql does magic and update the permission-tables

 i bet case one will happen
 and this is what you want

 after rename this way there is no access except with root-permissions
 after rename back all is as before

 however, you can export permissions easily with phpMyAdmin
 as normal sql-statements

 Am 20.07.2012 23:09, schrieb Kirk Hoganson:
  Would this keep permissions intact?  I need something that would make it
 easy to automatically restore the database
  including any custom permissions?
 
  Thanks again for the input,
  Kirk
 
  On 07/20/2012 12:16 PM, Rick James wrote:
  No.  RENAME DATABASE does not exist.  It may have something to do with
 internal table numbers in InnoDB.
  Instead...
  CREATE DATABASE new ...;
  RENAME TABLE old.t1 TO new.t1, ...
  DROP DATABASE old;
  This should work cross-device.
 
  -Original Message-
  From: Reindl Harald [mailto:h.rei...@thelounge.net javascript:;]
  Sent: Friday, July 20, 2012 2:51 AM
  To: Johan De Meersman
  Cc: mysql@lists.mysql.com javascript:;
  Subject: Re: Disable DB without deleting data
 
 
 
  Am 20.07.2012 11:20, schrieb Johan De Meersman:
  - Original Message -
  From: Reindl Haraldh.rei...@thelounge.net javascript:;
 
  impossible with innodb
 
  * dump
  * delete
 
  with myisam you could stop the server and move the databasedir do
  not
  try this with innodb even with innodb_file_per_table!
  Mmh, it should be pretty easy to write a small script that creates a
  new database namedoldname_offline and renames all tables into it,
  no?
  Shame that rename database doesn't exist any more :-)
  my reply was to Something like moving the databases subdirectory to a
  different directory
 
  surely, you can rename the database
  but if you want to move it to another mount-point you are lost with
  innodb
 
 

 --

 Reindl Harald
 the lounge interactive design GmbH
 A-1060 Vienna, Hofmühlgasse 17
 CTO / CISO / Software-Development
 p: +43 (1) 595 3999 33, m: +43 (676) 40 221 40
 icq: 154546673, http://www.thelounge.net/

 http://www.thelounge.net/signature.asc.what.htm




Re: Disable DB without deleting data

2012-07-19 Thread Andrew Moore
Revoke all privileges to it. No access, no use.
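
A minimal sketch of that idea (the user, host, and database names are placeholders; capture the SHOW GRANTS output first so the privileges can be restored later):

```sql
-- Record the current privileges so they can be re-granted later
SHOW GRANTS FOR 'appuser'@'%';
-- Cut off access to the database without touching its data
REVOKE ALL PRIVILEGES ON mydb.* FROM 'appuser'@'%';
-- To restore, replay the saved grants, e.g.:
-- GRANT SELECT, INSERT, UPDATE, DELETE ON mydb.* TO 'appuser'@'%';
```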

On Thu, Jul 19, 2012 at 11:47 PM, Kirk Hoganson kirkhogan...@gmail.comwrote:

 I need to find an easy way to make a database unavailable without deleting
 the data, so that it could be easily and quickly restored if necessary,
 without impacting the availability of the other databases.  Something like
 moving the databases subdirectory to a different directory, or the detach
 feature in MSSQL.  The MySQL server (5.0.45) is running on a Windows
 server, and is using the InnoDB storage engine.

 Thank you for any suggestions.





Re: GA download reverted back to 5.5.24?

2012-07-06 Thread Andrew Moore
Charming, Noel. Are you Devops? :-D

On Fri, Jul 6, 2012 at 3:09 AM, Noel Butler noel.but...@ausics.net wrote:

 **
 For those interested, 5.5.25a has been released overnight, long after
 Oracle claimed it was there.
 Frankly, I think they ought to have used 5.5.26.

 To those who replied to me directly, a few facts...

 1/ it never affected me directly - my gripe with them was on principle and
 their actions (or lack thereof) towards those that were affected

 2/ to the wanker who said people deserve what they get for not testing on a
 DEV bed first ...  this is true  _IF_ it was a major release
 (as I hope we all do)   _BUT_  you don't expect to get fucked over by
 a point release. To have that happen shows incompetence on the part of
 the software developer, not the users.



 On Sat, 2012-06-30 at 14:15 +1000, Noel Butler wrote:

 I wonder if you would have the same opinion to say your Operating System
 environment, Apache, php, any mainstream server daemon, how about they pull
 the current version for a serious bug, but dont tell anyone...

 Oracle have been quick to announce new releases of mysql, but failed to
 issue a notice saying  uhoh, you better not use it instead, putting a
 small notice, where, on a fricken manual page FFS. who the hell reads that!
 and they say use version a which does not even exist, I'd hate to think
 of how many high profile sites are at risk of being screwed over by yet
 MORE oracle incompetence.
 No one would think  any less of them if they sent that notice, many would
 be appreciative, but to hide such a serious issue that was enough for
 them to withdraw and remove that version, is outright despicable.





 On Fri, 2012-06-29 at 22:58 -0400, Govinda wrote:

  That was nice of oracle to announce this wasn't it ...(/sarcasm)
 

 I am not aligned with any side.. and I am also not known/qualified/respected 
 in this group enough to make much of a statement...  but:
 IMHO, In almost all matters, *appreciation* is the only approach that will 
 serve... let alone sustain happiness...
 ...and especially when we consider what little we must give to have right to 
 use MySQL.

 Sure, desire for better communication/usability makes total sense.. but I am 
 just also observing/suggesting: please add (positively) to the atmosphere.. 
 for everyones' sake.  Just us humans under the hood.

 -Govinda






Re: GA download reverted back to 5.5.24?

2012-06-30 Thread Andrew Moore
Agreed - if you installed this version in production without fully testing
then it's your problem, you'll need to downgrade. If you've
tested thoroughly and hit the bug then you're aware of the issue
already and do not have it installed in production. If you tested
thoroughly and didn't hit the bug then it's likely you're not creating the
conditions that cause the problem. Pointing out the obvious but the message
here is test, test and test some more.

On Sat, Jun 30, 2012 at 1:57 PM, Mihail Manolov 
mihail.mano...@liquidation.com wrote:

 Noel,

 I am really sorry for those high profile sites, which in your scenario
 haven't tested their database in dev/test/stage before upgrading. Bugs
 happen. Big organizations have slower communication.

 Cheers,
 Mihail

 On Jun 30, 2012, at 0:16, Noel Butler noel.but...@ausics.net wrote:

  I wonder if you would have the same opinion to say your Operating System
 environment, Apache, php, any mainstream server daemon, how about they pull
 the current version for a serious bug, but dont tell anyone...
 
  Oracle have been quick to announce new releases of mysql, but failed to
 issue a notice saying  uhoh, you better not use it instead, putting a
 small notice, where, on a fricken manual page FFS. who the hell reads that!
 and they say use version a which does not even exist, I'd hate to think
 of how many high profile sites are at risk of being screwed over by yet
 MORE oracle incompetence.
  No one would think  any less of them if they sent that notice, many
 would be appreciative, but to hide such a serious issue that was enough
 for them to withdraw and remove that version, is outright despicable.
 
 
 
 
 
  On Fri, 2012-06-29 at 22:58 -0400, Govinda wrote:
 
   That was nice of oracle to announce this wasn't it ...(/sarcasm)
  
 
  I am not aligned with any side.. and I am also not
 known/qualified/respected in this group enough to make much of a
 statement...  but:
  IMHO, In almost all matters, *appreciation* is the only approach that
 will serve... let alone sustain happiness...
  ...and especially when we consider what little we must give to have
 right to use MySQL.
 
  Sure, desire for better communication/usability makes total sense.. but
 I am just also observing/suggesting: please add (positively) to the
 atmosphere.. for everyones' sake.  Just us humans under the hood.
 
  -Govinda
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: New guy

2012-06-21 Thread Andrew Moore
Welcome Raymond, hope you enjoy your stay :)

On Thu, Jun 21, 2012 at 8:47 PM, Raymond D'Anjou 
radan...@osullivan-quebec.qc.ca wrote:

 I don't know if new members usually introduce themselves here...
 ...but if not, why not?

 I was very active on the SQL Server news group a few years back but this
 MYSQL stuff is new to me.
 Seems I'll be muddling around for at least a little while so I might have
 a few questions in the near future.

 Thanks in advance,
 Raymond from Québec City

 Raymond D'Anjou
 Programmeur-analyste WEB
 Artmérik International
 600, rue Saint-Jean
 Québec (Québec) G1R 1P8
 www.artmerik.comhttp://www.artmerik.com
 Tél.: 418-529-6800
 Téléc.: 418-529-1982




Re: i need advice on redundancy of mysql server.

2012-06-11 Thread Andrew Moore
That's not a description of 'load balancing'; it is a high availability
solution you're looking for.

On Mon, Jun 11, 2012 at 4:43 PM, Joey L mjh2...@gmail.com wrote:

 I understand ..I am looking for load balancing - something that i do
 not have to worry about if one server goes down - the other server
 will be up and running by itself and i can bring back the other server
 later on when i have time.

 On Mon, Jun 11, 2012 at 10:36 AM, Ananda Kumar anan...@gmail.com wrote:
  when you say redundancy:
  Do you just want replication like master-slave, which will be
 active-passive,
  or
  Master-master, which will be active-active?
 
  Master-slave will work just as DR; when your current master fails you can
  fail over to the slave, with NO load balancing.
 
  Master-master allows load balancing.
 
  On Mon, Jun 11, 2012 at 7:56 PM, Joey L mjh2...@gmail.com wrote:
 
  I am running a site with about 50gig myisam databases which are the
  backend to different websites.
  I can not afford any downtime and the data is realtime.
 
  What is the best method for this setup? master-master or master-slave?
 
  What are the best utilities to create and maintain this setup?  as far
  as load balancing between the two physical servers that i am running.
  I am currently working with percona utilities - is there something
 better
  ?
  what would you use to load balance mysql ? what would you use to load
  balance apache.
 
 
  thanks
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: i need advice on redundancy of mysql server.

2012-06-11 Thread Andrew Moore
Not forgetting Pythian http://www.pythian.com, Baron ;)

On Mon, Jun 11, 2012 at 8:12 PM, Baron Schwartz ba...@xaprb.com wrote:

 Ultimately, if you intend to use MyISAM, you must keep in mind that it
 eliminates some of your options. One problem is that MyISAM is very
 slow to repair after a crash. Remember, if a crash can happen, it
 eventually will, it's just a question of when. And MyISAM doesn't have
 recovery -- it only has repair, which will not necessarily recover
 all of your data.

 If you are not aware of Percona XtraDB Cluster, it might be
 interesting for you. (I work for Percona.) There is also Continuent
 Tungsten to consider.

 Frankly, though, I'd step back a bit from such microscopic focus on
 technologies. It looks like you need advice from someone who's done
 this before, to get the high-level things right before you dive deeply
 into details. If it's really this important, I personally wouldn't
 trust it to a mailing list, I'd hire someone. It's well worth it.
 There's Percona again, of course, but there's also MySQL, SkySQL,
 PalominoDB, and lots more to choose from.

 Baron

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: multi master auto syncing when servers back on line

2012-06-06 Thread Andrew Moore
Joey, sounds like you've got a lot of testing to do before you can deploy
and support this in production yourself without fear of shooting yourself
in the foot. Whilst you're trialling things, you could try 'XtraDB Cluster'
from Percona.

A


On Wed, Jun 6, 2012 at 5:09 AM, Joey L mjh2...@gmail.com wrote:

 My current situation arises out of testing really - i restored a db
 that is out of sync with the other server (having less records) and
 then i tried to update the master server - and only seen the records
 that I updated and not the records that were additional to the other
 server when I first restored.

 I am just trying different scenerios before i put this in production.
 I was looking to do load balancing and i am concerned about
 high-availablitity.
 Is there a product that is opensource out there that I can use to
 maintian a master master setup with ease ?


 thanks
 mjh

 On Tue, Jun 5, 2012 at 6:18 PM, Andrew Moore eroomy...@gmail.com wrote:
  Joey,
 
  The master master replication topology comes with its own potential
  pitfalls and trials. Be sure your use case needs master master and that
  you're not implementing it because you think it's 'nice to have'.
 
  pt-table-sync should assist you getting your data in sync but how have
 you
  arrived at this out-of-sync predicament you find yourself in?
 
  A
 
 
 
  On Tue, Jun 5, 2012 at 11:03 PM, Joey L mjh2...@gmail.com wrote:
 
  with all due respect - I am new to this - I did read the docs and am
  having a hard time.
  I was also asking if you know of something easier that does the trick as
  this utility does.
  thanks
 
  On Tue, Jun 5, 2012 at 5:06 PM, Claudio Nanni claudio.na...@gmail.com
  wrote:
   Joey,
  
   from what I can see from your email you lack of a lot of basics and I
   suggest you to read some documentation before proceeding.
  
   Maatkit is now Percona Toolkit and contains some of the best tools for
   MySQL.
  
   Cheers
  
   Claudio
  
   2012/6/5 Joey L mjh2...@gmail.com
  
   Hi -
   I have set up a MySQL multi master setup on Debian Squeeze.
   I have realized that the databases have to be initially in sync
 before
   multi master can operate properly.
  
   This can require a lot of down time on the one functioning server.
   Is there a way to do an automatic sync from the 1 server that is
 still
   running ??
  
   I have found a tool dpkg package called Maakit , but having trouble
   running it - get this error on the master:
  
mk-table-sync -u sl2 -ppswd --print --sync-to-master --replicate
   mailserver 192.168.1.11
   DBI
  
 connect(';host=192.168.1.11;mysql_read_default_group=client','sl2',...)
   failed: Access denied for user 'sl2'@'mybox.somedomain.com' (using
   password: YES) at /usr/bin/mk-table-sync line 1284
  
   can anyone assist with the error ?
   Or can someone tell me of a better opensource tool to use to sync the
   servers without a mysql dump ? my db is rather large.
  
   thanks
  
   --
   MySQL General Mailing List
   For list archives: http://lists.mysql.com/mysql
   To unsubscribe:http://lists.mysql.com/mysql
  
  
  
  
   --
   Claudio
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: I am trying to setup a multi master mysql setup.

2012-06-05 Thread Andrew Moore
Joey, please consult your MySQL error log. Something has gone wrong on your
server such that InnoDB has not started.
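If it helps, a couple of quick checks (illustrative statements, not from the thread) can confirm whether the InnoDB engine is registered before digging through the error log:

```sql
-- List storage engines; InnoDB should show SUPPORT = YES or DEFAULT
SHOW ENGINES;

-- On 5.0/5.1-era servers this variable also reports InnoDB availability
SHOW VARIABLES LIKE 'have_innodb';
```

If InnoDB is missing, the error log usually names the cause, e.g. a bad innodb_* setting or insufficient memory for the buffer pool.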

On Tue, Jun 5, 2012 at 3:53 PM, Joey L mjh2...@gmail.com wrote:

 I am following or trying to follow this document:
 http://www.howtoforge.com/mysql5_master_master_replication_debian_etch

 I think it is running in a good state because when i query the show
 slave state, I get awaiting connection from host, like :

 on server1:

 mysql SHOW MASTER STATUS;
 +--------------+----------+--------------+------------------+
 | File         | Position | Binlog_Do_DB | Binlog_Ignore_DB |
 +--------------+----------+--------------+------------------+
 | mysql-bin.35 |      106 | mailserver   |                  |
 +--------------+----------+--------------+------------------+
 1 row in set (0.00 sec)

 mysql show slave status\G;
 *** 1. row ***
   Slave_IO_State: Waiting for master to send event
  Master_Host: 192.168.2.121
  Master_User: slave1_user
  Master_Port: 3306
Connect_Retry: 60
  Master_Log_File: mysql-bin.24
  Read_Master_Log_Pos: 106
   Relay_Log_File: relay-bin.25
Relay_Log_Pos: 251
Relay_Master_Log_File: mysql-bin.24
 Slave_IO_Running: Yes
Slave_SQL_Running: Yes
  Replicate_Do_DB: mailserver
  Replicate_Ignore_DB:
   Replicate_Do_Table:
   Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
   Last_Errno: 0
   Last_Error:
 Skip_Counter: 0
  Exec_Master_Log_Pos: 106
  Relay_Log_Space: 545
  Until_Condition: None
   Until_Log_File:
Until_Log_Pos: 0
   Master_SSL_Allowed: No
   Master_SSL_CA_File:
   Master_SSL_CA_Path:
  Master_SSL_Cert:
Master_SSL_Cipher:
   Master_SSL_Key:
Seconds_Behind_Master: 0
 Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
   Last_SQL_Errno: 0
   Last_SQL_Error:
 1 row in set (0.00 sec)

 ERROR:
 No query specified


 On Server2:

 mysql show master status ;
 +--------------+----------+--------------+------------------+
 | File         | Position | Binlog_Do_DB | Binlog_Ignore_DB |
 +--------------+----------+--------------+------------------+
 | mysql-bin.24 |      106 | mailserver   |                  |
 +--------------+----------+--------------+------------------+
 1 row in set (0.00 sec)

 mysql show slave status\G;
 *** 1. row ***
   Slave_IO_State:
  Master_Host: 192.168.2.121
  Master_User: slave2_user
  Master_Port: 3306
Connect_Retry: 60
  Master_Log_File:
  Read_Master_Log_Pos: 4
   Relay_Log_File: relay-bin.01
Relay_Log_Pos: 4
Relay_Master_Log_File:
 Slave_IO_Running: No
Slave_SQL_Running: No
  Replicate_Do_DB: exampledb,mailserver
  Replicate_Ignore_DB:
   Replicate_Do_Table:
   Replicate_Ignore_Table:
  Replicate_Wild_Do_Table:
  Replicate_Wild_Ignore_Table:
   Last_Errno: 0
   Last_Error:
 Skip_Counter: 0
  Exec_Master_Log_Pos: 0
  Relay_Log_Space: 125
  Until_Condition: None
   Until_Log_File:
Until_Log_Pos: 0
   Master_SSL_Allowed: No
   Master_SSL_CA_File:
   Master_SSL_CA_Path:
  Master_SSL_Cert:
Master_SSL_Cipher:
   Master_SSL_Key:
Seconds_Behind_Master: NULL
 Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
   Last_SQL_Errno: 0
   Last_SQL_Error:
 1 row in set (0.00 sec)

 ERROR:
 No query specified

 On server1 - I am able to do a select statement to the table and get
 back results.
 On server2 - i get the error regarding no innodb.below:



 ERROR 1286 (42000): Unknown table engine 'InnoDB'
 mysql select * from virtual_users ;
 ERROR 1286 (42000): Unknown table engine 'InnoDB'


 can anyone assist ?

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: multi master auto syncing when servers back on line

2012-06-05 Thread Andrew Moore
Joey,

The master master replication topology comes with its own potential
pitfalls and trials. Be sure your use case needs master master and that
you're not implementing it because you think it's 'nice to have'.

pt-table-sync should assist you getting your data in sync but how have you
arrived at this out-of-sync predicament you find yourself in?

A



On Tue, Jun 5, 2012 at 11:03 PM, Joey L mjh2...@gmail.com wrote:

 with all due respect - I am new to this - I did read the docs and am
 having a hard time.
 I was also asking if you know of something easier that does the trick as
 this utility does.
 thanks

 On Tue, Jun 5, 2012 at 5:06 PM, Claudio Nanni claudio.na...@gmail.com
 wrote:
  Joey,
 
  from what I can see from your email you lack of a lot of basics and I
  suggest you to read some documentation before proceeding.
 
  Maatkit is now Percona Toolkit and contains some of the best tools for
  MySQL.
 
  Cheers
 
  Claudio
 
  2012/6/5 Joey L mjh2...@gmail.com
 
  Hi -
  I have set up a MySQL multi master setup on Debian Squeeze.
  I have realized that the databases have to be initially in sync before
  multi master can operate properly.
 
  This can require a lot of down time on the one functioning server.
  Is there a way to do an automatic sync from the 1 server that is still
  running ??
 
  I have found a tool dpkg package called Maakit , but having trouble
  running it - get this error on the master:
 
   mk-table-sync -u sl2 -ppswd --print --sync-to-master --replicate
  mailserver 192.168.1.11
  DBI
  connect(';host=192.168.1.11;mysql_read_default_group=client','sl2',...)
  failed: Access denied for user 'sl2'@'mybox.somedomain.com' (using
  password: YES) at /usr/bin/mk-table-sync line 1284
 
  can anyone assist with the error ?
  Or can someone tell me of a better opensource tool to use to sync the
  servers without a mysql dump ? my db is rather large.
 
  thanks
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
  --
  Claudio

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: table cache value error in my.cnf file

2012-06-04 Thread Andrew Moore
Joey, you've over allocated the cache. MySQL is telling you that it has
corrected the allocation.

Check out the docs for the meaning behind the numbers.
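
As a rough illustration (hypothetical diagnostics, not from the thread), you can compare the configured cache against what the server has actually needed:

```sql
-- How many tables MySQL has had to open since startup;
-- a rapidly growing number suggests table_cache is too small
SHOW GLOBAL STATUS LIKE 'Opened_tables';

-- The currently configured limit (5.0/5.1-era name)
SHOW VARIABLES LIKE 'table_cache';
```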

On Mon, Jun 4, 2012 at 2:01 PM, Reindl Harald h.rei...@thelounge.netwrote:



 Am 04.06.2012 14:45, schrieb Joey L:
  Can you explain this further ?
  Sorry a little slow ?
 
  table count * expected connections


 http://dev.mysql.com/doc/refman/5.0/en/server-system-variables.html#sysvar_table_cache





Re: Need help for performance tuning with Mysql

2012-05-23 Thread Andrew Moore
Yu,

The upgrade to 5.5 that Jonny advises should NOT be your first action. If
MySQL is mis-configured on 5.0 it will likely be misconfigured on 5.1 and
5.5. Test your application thoroughly on the new version before heeding
that advice. Read the change logs and known bugs. Running the upgrade might
seem painless but if you have some legacy feature in place then things will
not work how you may expect them to.

Review your needs and see if a switch to innodb storage engine will give
you any performance gain. The locking differences alone might make this
worthwhile. TEST it.

You did not state your data and index size. You will benefit from having
enough RAM so that your 'working' data set fits to memory. This isn't
possible/practical for large data but if you have a 5G dataset and 8G
available memory you might not need to rush out and spend money.

If you're heavily using MyISAM, review and tune the MyISAM related buffers.
If you are working mostly with InnoDB, tune those variables. Measure, change,
measure again. It might be an iterative process but you will learn lots
along the way.

Good luck.

Andy
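
As a starting point only (the numbers below are illustrative assumptions, not recommendations for Yu's 8 GB server), the buffers Andy mentions live in my.cnf:

```ini
[mysqld]
# MyISAM: caches index blocks only; data blocks rely on the OS cache
key_buffer_size = 1G

# InnoDB: caches both data and index pages
innodb_buffer_pool_size = 4G
```

Measure with SHOW GLOBAL STATUS (e.g. Key_reads vs Key_read_requests) after each change, as suggested above.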

On Wed, May 23, 2012 at 5:44 AM, Tsubasa Tanaka yoku0...@gmail.com wrote:

 Hello, Yu-san,
 (Apologies for my shaky English.)

 May I assume that you have already tuned the indexes on the tables?
 If not yet, please create appropriate indexes.

 MyISAM caches only indexes, not data.
 I take these approaches to decrease disk seeks:
  1) create more indexes on the tables, if the tables aren't updated very
 often,
   forcibly including data columns in the index.
   This makes inserts and updates slower, and it is a dirty idea, I think.
(I include frequently SELECTed columns in the index itself.
However, this doesn't help when queries are mostly SELECT * FROM,
and it is quite an inelegant trick.)
  2) tune the filesystem and disk drive parameters for the datadir.
   MyISAM table data is cached only in the filesystem cache.
   But I regret that I don't have much knowledge around filesystems.

 Sorry I can't be of more help.

 regards,


 ts. tanaka//

 2012/5/23 Yu Watanabe yu.watan...@jp.fujitsu.com:
  Hello Tsubasa.
 
   Thank you for the reply.
 
  Our high loaded DB are both INNODB and MyISAM.
  Espicially , on MyISAM.
 
  I will consider the tuning of innodb_buffer_pool_size as well.
 
  Do you know the tips for how to tune the disk access for MyISAM?
 
  Thanks,
  Yu
 
   Tsubasa Tanaka wrote:
 Hello,
 
  It seems your mysqld doesn't use enough memory.
 
 Date   Time  CPU%  RSS VSZ
 2012/5/22  21:00:39  109   294752  540028
 
  If your mysqld uses InnoDB often,
  edit innodb_buffer_pool_size in your my.cnf.
 
 
 http://dev.mysql.com/doc/refman/5.1/en/innodb-parameters.html#sysvar_innodb_buffer_pool_size
 
 
 table_cache
 thread_cache_size
 tmp_table_size
 max_heap_table_size
 
 but made not much difference.
 
  That is a solution only for large SQL results, I think.
  If you don't think the problem is caused by large results,
  you should try other approaches, too.
 
 regards,
 
 
 ts. tanaka//
 
 2012/5/23 Yu Watanabe yu.watan...@jp.fujitsu.com:
  Also following is the free command result.
 
                total       used       free     shared    buffers     cached
   Mem:       8162380    7843676     318704          0      95632    5970892
   -/+ buffers/cache:    1777152    6385228
   Swap:      8032492      23560    8008932
 
  Thanks,
  Yu
 
 
   Yu Watanabe wrote:
 Hello all.
 
 I would like to ask for advice with performance tuning with MySQL.
 
 Following are some data for my server.
 
 CPU: Xeon(TM) 2.8GHz (2CPUs - 8core total)
 Memory : 8GB
 OS : RHEL 4.4 x86_64
 MySQL  : MySQL 5.0.50sp1-enterprise
 
 Attached file
 # my.cnf.txt  : my.cnf information
 # mysqlext_20120522131034.log : variable and status information from
 mysqladmin
 
 I have 2 database working with high load.
 
 I wanted to speed up my select and update queries not by
 optimizing the query itself but tuning the my.cnf.
 
 I have referred to following site,
 http://dev.mysql.com/doc/refman/5.0/en/server-status-variables.html
 
 and read Hiperformance Mysql vol.2 ,
 and increased the following values,
 
 table_cache
 thread_cache_size
 tmp_table_size
 max_heap_table_size
 
 but made not much difference.
 
 According to the ps and sar result
 
 *1 PS result
 Date   Time  CPU%  RSS VSZ
 2012/5/22  21:00:39  109   294752  540028
 
 *2 SAR
 Average CPU user 25%
 sys  5%
 io   3%
 
 I assume that MySQL can work more but currently not.
 
  I am considering offloading 1 high-load database to a
  separate process and making MySQL work as multiple processes.
 
 It would be a great help if people in this forum can give
 us an adivice for the tuning.
 
 Best Regards,
 Yu Watanabe
 
 __
 
 
 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 --
 

Re: Myisam won't support replication

2012-05-03 Thread Andrew Moore
Charles,

How do you know your replication isn't working?



On Thu, May 3, 2012 at 9:50 PM, Brown, Charles cbr...@bmi.com wrote:

 I noticed that my replication stopped working after migrating to MySQL
 cluster. My current engine is myisam. Does anyone have an idea why repl
 won't work. Do I have to use the ndbengine?

 
 This message is intended only for the use of the Addressee and
 may contain information that is PRIVILEGED and CONFIDENTIAL.

 If you are not the intended recipient, you are hereby notified
 that any dissemination of this communication is strictly prohibited.

 If you have received this communication in error, please erase
 all copies of the message and its attachments and notify us
 immediately.

 Thank you.
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Postal code searching

2012-04-25 Thread Andrew Moore
If nothing else a great intro to the UK postcode. I find this very
interesting/useful.

Thanks Mark.

On Wed, Apr 25, 2012 at 10:14 AM, Mark Goodge m...@good-stuff.co.uk wrote:

 On 24/04/2012 17:24, Tompkins Neil wrote:

 How about if I want to only return postal codes that are like W1U 8JE
 not W13 0SU.

 Because in this example I have W1 as the postal code and W13 is the other
 postal code


 No, you don't. In this example you have W1U as one outbound code and W13
 as the other.

 W1U postcodes are not a subset of W1 postcodes, any more than IP27
 postcodes are a subset of IP2 postcodes. The fact that in W1U the district
 segment is in the form of NA rather than NN doesn't change the fact that
 it's an indivisible two-character code.

 So I think the first question has to be, why do you want to get W1 as a
 particular substring from the postcode W1U 8JE?

 British postcodes have a structure which is easy for humans to understand,
 although (unfortunately) rather hard to parse automatically. Essentially,
 every full postcode contains four elements:

 Area code: one or two alpha characters, either A or AA
 District code: one or two alphanumeric characters the first of which is
 always numeric, either N, NN or NA
 Sector code: single numeric character, always N
 Walk code: two alpha characters, always AA

 It's customary, but not part of the formal specification, to insert
 whitespace between the District and Sector codes.

 So, given the postcode WC1H 8EJ, we have:

 Area: WC
 District: 1H
 Sector: 8
 Walk: EJ

 Taken together, the first two sections form the outbound part of the
 postcode, and the second two form the inbound. (That is, the first two
 identify the destination sorting depot that the originating depot will send
 the post to, and the second two are used by the destination depot to make
 the actual delivery).

 The reason for mentioning this is that postcodes, having a wide range of
 possible formats, are not easy to handle with simple substring searches if
 you're trying to extract outbound codes from a full postcode. It can be
 done with regular expressions, but you have to be wary of assuming that the
 space between District and Sector will always be present as, particularly
 if you're getting data from user input, it might not be.

 In my own experience (which is quite extensive, as I've done a lot of work
 with systems, such as online retail, which use postcodes as a key part of
 the data), I've always found it simpler to pre-process the postcodes prior
 to inserting them into the database in order to ensure they have a
 consistent format (eg, inserting a space if none exists). That then makes
 it easy to select an outbound code, as you can use the space as a boundary.
 But if you want to be able to go further up the tree and select area codes
 (eg, distinguishing between EC, WC and W) then it's harder, as you have to
 account for the fact that some are two characters and some are only one.
 You can do it with a regular expression, taking everything prior to the
 first digit, but it's a lot easier in this case to extract the area code
 prior to inserting the data into the database and store the area code in a
 separate column.

 Mark
 --
  Sent from my ZX Spectrum HD
  http://mark.goodge.co.uk


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
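
Once postcodes are normalized with a single space before the sector, Mark's "use the space as a boundary" approach is a one-liner in SQL (a sketch; the table and column names are hypothetical):

```sql
-- Outbound code (area + district), e.g. 'W1U' from 'W1U 8JE'
SELECT SUBSTRING_INDEX(postcode, ' ', 1) AS outcode
FROM addresses
WHERE postcode = 'W1U 8JE';
```

Extracting the area code alone ('W' vs 'W1U') still needs a regular expression or pre-processing, as Mark notes, since the area is one or two letters.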




Re: Instance tuning

2012-04-11 Thread Andrew Moore
Hey Bruce,

Much of the output is inaccurate and the tool is rather dated.

A

On Wed, Apr 11, 2012 at 10:23 PM, Bruce Ferrell bferr...@baywinds.orgwrote:

 I've long used mysqltuner.pl and have recently heard that it may not be
 the best tool for the job.  what are others using?  What experiences have
 you had with mysqltuner.pl

 Inquiring minds want to know

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Licensing question about mysql_com.h

2012-04-09 Thread Andrew Moore
So what's the deal with Twitter's mysql code...how can it be BSD licensed?
I'm a bit unsure about the intricacies of licensing.

A

On Tue, Apr 10, 2012 at 1:19 AM, Michael Dykman mdyk...@gmail.com wrote:

  Your code might not qualify for the linking exception, but users of
  your code can use the linking exception to licence their product
  however they choose.

  - michael dykman

 On Mon, Apr 9, 2012 at 2:43 PM, James Ots my...@jamesots.com wrote:
  I don't think I can use a linking exception when I license my code, as
  the GPL says I must license my code with the same licence that the
  original code used.
 
  James Ots
 
  On 8 April 2012 00:52, Michael Dykman mdyk...@gmail.com wrote:
  Not quite true.  Your driver would likely have to be published under
  GPL but that allows the linking exception which allows users of your
  driver to avoid having to open-source their own works which utilize
  the driver.  Should someone decide to code bug fixes or extensions
  for your driver, those would necessarily be GPL.
 
   - michael dykman
 
  On Sat, Apr 7, 2012 at 6:52 PM, James Ots my...@jamesots.com wrote:
  I am writing a MySQL connector for the Dart programming language. I
  was hoping to licence it under the BSD Licence, but since it uses
  modified parts of mysql_com.h, which is licensed under the GPL, I'm
  guessing that I'll have to licence my connector under the GPL as well?
  And therefore, anyone who used the connector would also have to
  licence their software under the GPL too?
 
  Am I correct about this? I looked at the FOSS exception, but it
  doesn't seem to apply in this case.
 
  James Ots
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
 
 
  --
   - michael dykman
   - mdyk...@gmail.com
 
   May the Source be with you.
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 
 
  --
  MySQL General Mailing List
  For list archives: http://lists.mysql.com/mysql
  To unsubscribe:http://lists.mysql.com/mysql
 



 --
  - michael dykman
  - mdyk...@gmail.com

  May the Source be with you.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: two 5.1 servers, different behaviour

2012-02-24 Thread Andrew Moore
To rule out a version issue have you tried another host with the
problematic version and same/similar config?
On Feb 24, 2012 6:47 PM, William Bulley w...@umich.edu wrote:

 According to Michael Dykman mdyk...@gmail.com on Fri, 02/24/12 at 13:23:
 
  At this point, I would not know what else to do except fire up
  wireshark and start debugging the packets.

 Well, doesn't that beat all, sigh...  Now I'm back to square zero...  :-(

  Are both servers on the same subnet?

 Negative.

  Is your DBVisualizer client local to either of these or on the same
  subnet as one and not another?

 The DbVisualizer client tool is local to the 5.1.60 server.

  You mentioned a minor version difference between the servers..  Have
  you read the relevant release notes between those versions?

 I think that might be a next step, but even more expedient would be
 upgrading the 5.1.58 server to 5.1.60 or 5.1.61 version and retest.

  This looks like an ugly one.  I don't envy you.

 Gee, thanks for those words of encouragement - NOT!   :-)

 Regards,

 web...

 --
 William Bulley Email: w...@umich.edu

 72 characters width template -|

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re:Cant drop trigger.

2012-02-20 Thread Andrew Moore
Is there white space at the end of the trigger name?

A
On Feb 20, 2012 10:04 PM, Brent Clark brentgclarkl...@gmail.com wrote:

 Hi Guys

 I wonder if someone could urgently help me

 Im trying to drop some triggers. But mysql says the trigger does not exist.

 I ran

 SELECT TRIGGER_SCHEMA, EVENT_OBJECT_TABLE, TRIGGER_NAME FROM
 INFORMATION_SCHEMA.TRIGGERS


 +----------------+--------------------+---------------------------------------+
 | TRIGGER_SCHEMA | EVENT_OBJECT_TABLE | TRIGGER_NAME                          |
 +----------------+--------------------+---------------------------------------+
 | abc_db1        | foods              | Foods Insert to Contributions,Search  |
 +----------------+--------------------+---------------------------------------+

 But if I run

 DROP TRIGGER `abc_db1`.`Foods Insert to Contributions\,Search`;
 ERROR 1360 (HY000): Trigger does not exist

 I tried backtick, single quotes, doubles quotes. nothing appears to work.

 Thanks
 Brent

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql




Re: Date and Time

2012-01-08 Thread Andrew Moore
What's your problem/reason with how it is?

Andy

On Sun, Jan 8, 2012 at 8:21 PM, Donovan Brooke li...@euca.us wrote:

 Hello, I'm doing an insert into with date and time type fields.

 I was reading:
 http://dev.mysql.com/doc/**refman/5.1/en/date-and-time-**literals.htmlhttp://dev.mysql.com/doc/refman/5.1/en/date-and-time-literals.html

 My question is: is the format always 'year month day'?.. or can we save
 dates in 'month day year' as well?

 Thanks,
 Donovan


 --
 D Brooke

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql
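
For reference, MySQL date literals are year-first; a month-day-year string can be converted at insert time (a sketch, assuming a table t with a DATE column d):

```sql
-- 'YYYY-MM-DD' is the literal format MySQL accepts directly
INSERT INTO t (d) VALUES ('2012-01-08');

-- A 'month/day/year' string must be converted first
INSERT INTO t (d) VALUES (STR_TO_DATE('01/08/2012', '%m/%d/%Y'));
```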




Re: Cleaning up old files

2011-11-14 Thread Andrew Moore
The binary log is useful for more than just replication and can be used to
take incremental backups or to perform forensics on what your server has
done. I would recommend learning how to use them to your advantage over
turning binary logging off as some have recommended. Make sure you're
monitoring your storage so you don't end up running out of space. Use
expire_logs_days to auto purge your logs and if you have replication in
place ensure that this is set high enough so that you don't remove logs
that haven't been used by the slave.



On Mon, Nov 14, 2011 at 6:15 PM, Basil Daoust bdao...@lemonfree.com wrote:

 If you search on mysql-bin.01 you get a lot of good info.
 o They are mysql log files it contains all the queries u can view
 these files with the command mysqlbinlog just man it for more details
 o These are your binary log files.. you might not want to switch them
 off depending on your setup - but you can purge them (look in the
 manual for PURGE MASTER LOGS)

 I've heard people talk about using them for recovery.
 I know they are used for replication.
 You can delete/purge the old ones that you have a backup for, because
 trying to recover by playing them all back is probably not a realistic
 solution.

 Basil

 On Mon, Nov 14, 2011 at 12:01 PM, Rob Tanner rtan...@linfield.edu wrote:
  Hi,
 
  In my MySQL directory, I have more than a few gig and a half sized
 files, mysql-bin.01, mysql-bin.01 and et cetera.  They date from
 today all the way back to early 2010.  I don't know exactly what those
 files are but I would like to delete as many as are no longer is use since
 I had a 40GB partition fill up over the weekend which resulted in bringing
 down our web server.  So what are those files and can I delete all but the
 most recent?
 
  Thanks.
 
 
  Rob Tanner
  UNIX Services Manager
  Linfield College, McMinnville Oregon
 
 

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=eroomy...@gmail.com
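
A hedged sketch of the two purge options mentioned above (pick a retention that covers your slowest slave and your backup cycle):

```sql
PURGE BINARY LOGS BEFORE NOW() - INTERVAL 7 DAY;  -- one-off purge of old binlogs
SET GLOBAL expire_logs_days = 7;                  -- auto-purge; also set in my.cnf to survive restarts
```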




Re: add index to slave but not master

2011-10-31 Thread Andrew Moore
Functionally not a problem. Many people use a slave for different workloads
than their master, and different indexes for different queries make sense. Be
aware that if you intend to take backups off the slave, or leave yourself
open to promoting the slave to master as an HA strategy, things will differ
from the original master and you might notice a performance difference as a
result.

Hth

Andrew
On Oct 31, 2011 7:47 AM, Jeff Pang jeffrp...@gmail.com wrote:

 Hello,

 I have a question that, if I add the index to the table in slave, but
 don't do it in master, will it make problems?

 Thanks.

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=eroomy...@gmail.com
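
A sketch of adding a slave-only index (table and column names are hypothetical). Disabling sql_log_bin for the session keeps the change out of the slave's own binary log, in case the slave is itself a master:

```sql
-- Run on the slave only (requires SUPER privilege):
SET sql_log_bin = 0;
ALTER TABLE mydb.mytable ADD INDEX idx_reporting (report_col);
SET sql_log_bin = 1;
```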




Re: Synchronize two mysql servers (single user, only one server (but not both) can be changed at any given time)

2011-10-18 Thread Andrew Moore
As Johan describes, replication is the way. If that's not an option due to
connectivity between the hosts you could arrange for a logical dump to be
copied and restored. I would certainly opt for replication in your
situation.

A




On Tue, Oct 18, 2011 at 7:02 AM, Johan De Meersman vegiv...@tuxera.be wrote:

 - Original Message -
  From: Peng Yu pengyu...@gmail.com
 
  Suppose I have two servers (one on mac and one on ubuntu), at any
  time I only change one server but not both servers. After I have done
 
  I think that probably it is OK to synchronize these two servers at
  the database file level using something like unison. However, this will

 It's not generally a good idea to copy database files across platforms, let
 alone possibly different versions of the mysql server.

 You probably want to set up master-master replication, which should work
 fine across platforms. If you are sure that you'll never change the (same)
 data on both sides concurrently, there is pretty much nothing that can go
 wrong.

 Have a look at the online documentation at mysql.com, it should be your
 first stop for everything :-)


 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=eroomy...@gmail.com




Re: Beginner question

2011-10-11 Thread Andrew Moore
Hey, welcome to the lists,

Be mindful that your query lists 2 tables with no join condition between
them, and uses 'SELECT *' - without a join condition MySQL returns the cross
product of both tables.



On Tue, Oct 11, 2011 at 4:11 PM, Biz-comm b...@biz-comm.com wrote:

 I am trying to write a query for a web page that shows a list of users in a
 particular group.

 There are 3 tables:
 pm1_users that uses UserID
 pm1_groupsubscriptions that uses UserID and GroupID
 pm1_mailingroups that uses GroupID

 So I want to show all the users that belong to a specific mailingroup

 SELECT *
 FROM pm1_groupsubscriptions, pm1_users
 WHERE GroupID = 10

 (10 = one of the mailingroups)

 That isn't enough to get there. That shows all users.

 Thanks for any assistance.


 Regards,

 Patrice Olivier-Wilson
 828-628-0500
 http://Biz-comm.com
 b...@biz-comm.com





 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=eroomy...@gmail.com
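
For the archive, the missing piece is a join condition between the tables; without one the query pairs every user with every subscription row. A sketch assuming the column relationships described above:

```sql
SELECT u.*
FROM pm1_users AS u
JOIN pm1_groupsubscriptions AS gs ON gs.UserID = u.UserID
WHERE gs.GroupID = 10;
```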




Re: Socket not found

2011-10-09 Thread Andrew Moore
Hey Peter,

Two options:

Check the socket key-value in the my.cnf file under the [client] section, or
explicitly tell mysqladmin which one to use with the relevant command-line
option. See the documentation for syntax.

Hth

Andrew
On 9 Oct 2011 09:21, Peter Schrock peter.schr...@gmail.com wrote:

 I just loaded and built three different versions of mysql and all three
 versions have given me the same results. When trying to change the password
 to root, this is what I get:

 ./mysqladmin: connect to server at 'localhost' failed
 error: 'Can't connect to local MySQL server through socket
 '/tmp/mysql.sock'
 (2)'
 Check that mysqld is running and that the socket: '/tmp/mysql.sock' exists!

 Help please.

 Peter
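
A sketch of the two options, with hypothetical paths — the socket under [client] must match the one the server actually creates under [mysqld]:

```ini
# my.cnf
[client]
socket = /var/run/mysqld/mysqld.sock   ; point client tools at the server's real socket
```

Alternatively, pass it per invocation: `mysqladmin --socket=/var/run/mysqld/mysqld.sock -u root password 'newpass'`.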



Re: 4 minute slow on select count(*) from table - myisam type

2011-10-06 Thread Andrew Moore
Joey, does your 'large' table get

On Thu, Oct 6, 2011 at 3:22 PM, Joey L mjh2...@gmail.com wrote:

 here is mysqlreport ---
 
 root@rider:~/tmp# ./mysqlreport --user root --password barakobomb
 Use of uninitialized value $is in multiplication (*) at ./mysqlreport line
 829.
 Use of uninitialized value in formline at ./mysqlreport line 1227.
 MySQL 5.1.49-3-log   uptime 0 0:25:5Thu Oct  6 10:20:49 2011

 __ Key _
 Buffer used   727.43M of   2.00G  %Used:  35.52
  Current 963.24M%Usage:  47.03
 Write hit  29.41%
 Read hit   99.79%

 __ Questions ___
 Total  50.20k33.4/s
  QC Hits  32.56k21.6/s  %Total:  64.87
  DMS  12.28k 8.2/s   24.46
  Com_  3.21k 2.1/s6.39
  COM_QUIT  2.89k 1.9/s5.76
  -Unknown745 0.5/s1.48
 Slow 10 s  68 0.0/s0.14  %DMS:   0.55  Log: OFF
 DMS12.28k 8.2/s   24.46
  SELECT   11.09k 7.4/s   22.10 90.36
  UPDATE  539 0.4/s1.07  4.39
  INSERT  384 0.3/s0.77  3.13
  DELETE  260 0.2/s0.52  2.12
  REPLACE   0   0/s0.00  0.00
 Com_3.21k 2.1/s6.39
  set_option1.10k 0.7/s2.20
  show_fields   1.03k 0.7/s2.05
  admin_comma 707 0.5/s1.41

 __ SELECT and Sort _
 Scan1.65k 1.1/s %SELECT:  14.87
 Range 493 0.3/s4.44
 Full join 310 0.2/s2.79
 Range check   339 0.2/s3.06
 Full rng join   0   0/s0.00
 Sort scan 887 0.6/s
 Sort range628 0.4/s
 Sort mrg pass   0   0/s

 __ Query Cache _
 Memory usage5.96M of  16.00M  %Used:  37.25
 Block Fragmnt   5.17%
 Hits   32.56k21.6/s
 Inserts 5.66k 3.8/s
 Insrt:Prune   5.66k:1 3.8/s
 Hit:Insert 5.76:1

 __ Table Locks _
 Waited513 0.3/s  %Total:   3.62
 Immediate  13.65k 9.1/s

 __ Tables __
 Open 1024 of 1024%Cache: 100.00
 Opened 14.96k 9.9/s

 __ Connections _
 Max used   70 of  100  %Max:  70.00
 Total   2.89k 1.9/s

 __ Created Temp 
 Disk table  1.34k 0.9/s
 Table   2.35k 1.6/sSize:  32.0M
 File5 0.0/s

 __ Threads _
 Running32 of   37
 Cached  0 of8  %Hit:  93.26
 Created   195 0.1/s
 Slow0   0/s

 __ Aborted _
 Clients 0   0/s
 Connects2 0.0/s

 __ Bytes ___
 Sent  100.33M   66.7k/s
 Received   12.48M8.3k/s

 __ InnoDB Buffer Pool __
 Usage   1.67M of   8.00M  %Used:  20.90
 Read hit   99.70%
 Pages
  Free405%Total:  79.10
  Data107 20.90 %Drty:   0.00
  Misc  0  0.00
  Latched  0.00
 Reads  26.18k17.4/s
  From file78 0.1/s0.30
  Ahead Rnd 2 0.0/s
  Ahead Sql 1 0.0/s
 Writes  3 0.0/s
 Flushes 3 0.0/s
 Wait Free   0   0/s

 __ InnoDB Lock _
 Waits   0   0/s
 Current 0
 Time acquiring
  Total 0 ms
  Average   0 ms
  Max   0 ms

 __ InnoDB Data, Pages, Rows 
 Data
  Reads96 0.1/s
  Writes   12 0.0/s
  fsync11 0.0/s
  Pending
Reads   0
Writes  0
fsync   0

 Pages
  Created   0   0/s
  Read107 0.1/s
  Written   3 0.0/s

 Rows
  Deleted   0   0/s
  Inserted  0   0/s
  Read 20.98k13.9/s
  Updated   0   0/s
 root@rider:~/tmp#

 and the mysqltuner.pl report :
 -

 root@rider:~/tmp# perl mysqltuner.pl

MySQLTuner 

Re: 4 minute slow on select count(*) from table - myisam type

2011-10-06 Thread Andrew Moore
Sorry, hit send by accident there! *face palm*

Just had a quick scan of the report. You've got 2 1GB disks in software raid
- RAID1 or RAID5? I can also see you're creating a lot of temporary files on
disk. I think in your previous email your biggest table's index(es) were
larger than the key buffer size. I would suspect that you're disk bound, with
limited IO performance through 2 disks and effectively 1 if in a mirrored
configuration. The stats show that you're configured for MyISAM and that your
tables are taking reads and writes (read heavy, though). MyISAM doesn't like
high-concurrency mixed workloads such as yours; it will cause locking, and
maybe that's why your count has such a delay. Such activity may be better
suited to the InnoDB engine (you must configure and tune for this, not JUST
change the engine).

HTH

Andy



On Thu, Oct 6, 2011 at 5:05 PM, Andrew Moore eroomy...@gmail.com wrote:

 Joey, does your 'large' table get


 On Thu, Oct 6, 2011 at 3:22 PM, Joey L mjh2...@gmail.com wrote:

 here is mysqlreport ---
 
 root@rider:~/tmp# ./mysqlreport --user root --password barakobomb
 Use of uninitialized value $is in multiplication (*) at ./mysqlreport line
 829.
 Use of uninitialized value in formline at ./mysqlreport line 1227.
 MySQL 5.1.49-3-log   uptime 0 0:25:5Thu Oct  6 10:20:49 2011

 __ Key _
 Buffer used   727.43M of   2.00G  %Used:  35.52
  Current 963.24M%Usage:  47.03
 Write hit  29.41%
 Read hit   99.79%

 __ Questions ___
 Total  50.20k33.4/s
  QC Hits  32.56k21.6/s  %Total:  64.87
  DMS  12.28k 8.2/s   24.46
  Com_  3.21k 2.1/s6.39
  COM_QUIT  2.89k 1.9/s5.76
  -Unknown745 0.5/s1.48
 Slow 10 s  68 0.0/s0.14  %DMS:   0.55  Log: OFF
 DMS12.28k 8.2/s   24.46
  SELECT   11.09k 7.4/s   22.10 90.36
  UPDATE  539 0.4/s1.07  4.39
  INSERT  384 0.3/s0.77  3.13
  DELETE  260 0.2/s0.52  2.12
  REPLACE   0   0/s0.00  0.00
 Com_3.21k 2.1/s6.39
  set_option1.10k 0.7/s2.20
  show_fields   1.03k 0.7/s2.05
  admin_comma 707 0.5/s1.41

 __ SELECT and Sort _
 Scan1.65k 1.1/s %SELECT:  14.87
 Range 493 0.3/s4.44
 Full join 310 0.2/s2.79
 Range check   339 0.2/s3.06
 Full rng join   0   0/s0.00
 Sort scan 887 0.6/s
 Sort range628 0.4/s
 Sort mrg pass   0   0/s

 __ Query Cache _
 Memory usage5.96M of  16.00M  %Used:  37.25
 Block Fragmnt   5.17%
 Hits   32.56k21.6/s
 Inserts 5.66k 3.8/s
 Insrt:Prune   5.66k:1 3.8/s
 Hit:Insert 5.76:1

 __ Table Locks _
 Waited513 0.3/s  %Total:   3.62
 Immediate  13.65k 9.1/s

 __ Tables __
 Open 1024 of 1024%Cache: 100.00
 Opened 14.96k 9.9/s

 __ Connections _
 Max used   70 of  100  %Max:  70.00
 Total   2.89k 1.9/s

 __ Created Temp 
 Disk table  1.34k 0.9/s
 Table   2.35k 1.6/sSize:  32.0M
 File5 0.0/s

 __ Threads _
 Running32 of   37
 Cached  0 of8  %Hit:  93.26
 Created   195 0.1/s
 Slow0   0/s

 __ Aborted _
 Clients 0   0/s
 Connects2 0.0/s

 __ Bytes ___
 Sent  100.33M   66.7k/s
 Received   12.48M8.3k/s

 __ InnoDB Buffer Pool __
 Usage   1.67M of   8.00M  %Used:  20.90
 Read hit   99.70%
 Pages
  Free405%Total:  79.10
  Data107 20.90 %Drty:   0.00
  Misc  0  0.00
  Latched  0.00
 Reads  26.18k17.4/s
  From file78 0.1/s0.30
  Ahead Rnd 2 0.0/s
  Ahead Sql 1 0.0/s
 Writes  3 0.0/s
 Flushes 3 0.0/s

Re: 4 minute slow on select count(*) from table - myisam type

2011-10-06 Thread Andrew Moore
Precisely my point Singer. There's a workload here that isn't friendly with
table level locking and I would hazard a guess that there's some fights over
IO due to load vs resources. The count is going to be queued as you
describe.

A

On Thu, Oct 6, 2011 at 6:09 PM, Singer X.J. Wang w...@singerwang.com wrote:

 Okay, lets hold on for a minute here and go back. We're side tracking too
 much.

 Lets state the facts here:

 1) MyISAM stores the row count internally, a 'select count(*) from table'
 DOES NOT DO A FULL TABLE SCAN
 2) hell, a software RAID6 of 2 MFM drives could do a seek to the metadata
 faster then 4 minutes..

 But lets remember that if another thread is writing or updating the MyISAM
 table, the count(*) must wait..

 So I recommend this:

 run a select count(*) from the table that you see is long.. if it is taking
 a long time open another session, do a show processlist

 I bet you that you will see another process updating or deleting or
 inserting into the MyISAM table.


 On Thu, Oct 6, 2011 at 12:35, Joey L mjh2...@gmail.com wrote:

 i did google search - myisam is faster...i am not really doing any
 transaction stuff.
 thanks

 On Thu, Oct 6, 2011 at 12:16 PM, Andrew Moore eroomy...@gmail.com
 wrote:
  Sorry, hit send by accident there! *face palm*
  Just had a quick scan of the report. You've got 2 1GB disks in software
 raid
  - RAID1 or RAID5? I can also see you're creating a lot of temporary
 files on
  disk. I think in your previous email that your biggest table's index(s)
 were
  larger then the keybuffer size. I would suspect that you're disk bound
 with
  limited IO performance through 2 disks and effectively 1 if in a
 mirrored
  configuration. The stats show that you're configured for MyISAM and that
  you're tables are taking reads and writes (read heavy though), MyISAM
  doesn't like high concurrency mixed workloads such as yours, it will
 cause
  locking and maybe thats why your count has such a delay. Such activity
 may
  be better suited to InnoDB engine (you must configure and tune for this,
 not
  JUST change the engine).
  HTH
  Andy
 
 
  On Thu, Oct 6, 2011 at 5:05 PM, Andrew Moore eroomy...@gmail.com
 wrote:
 
  Joey, does your 'large' table get
 
  On Thu, Oct 6, 2011 at 3:22 PM, Joey L mjh2...@gmail.com wrote:
 
  here is mysqlreport ---
  
  root@rider:~/tmp# ./mysqlreport --user root --password barakobomb
  Use of uninitialized value $is in multiplication (*) at ./mysqlreport
  line 829.
  Use of uninitialized value in formline at ./mysqlreport line 1227.
  MySQL 5.1.49-3-log   uptime 0 0:25:5Thu Oct  6 10:20:49
 2011
 
  __ Key
 _
  Buffer used   727.43M of   2.00G  %Used:  35.52
   Current 963.24M%Usage:  47.03
  Write hit  29.41%
  Read hit   99.79%
 
  __ Questions
 ___
  Total  50.20k33.4/s
   QC Hits  32.56k21.6/s  %Total:  64.87
   DMS  12.28k 8.2/s   24.46
   Com_  3.21k 2.1/s6.39
   COM_QUIT  2.89k 1.9/s5.76
   -Unknown745 0.5/s1.48
  Slow 10 s  68 0.0/s0.14  %DMS:   0.55  Log:
 OFF
  DMS12.28k 8.2/s   24.46
   SELECT   11.09k 7.4/s   22.10 90.36
   UPDATE  539 0.4/s1.07  4.39
   INSERT  384 0.3/s0.77  3.13
   DELETE  260 0.2/s0.52  2.12
   REPLACE   0   0/s0.00  0.00
  Com_3.21k 2.1/s6.39
   set_option1.10k 0.7/s2.20
   show_fields   1.03k 0.7/s2.05
   admin_comma 707 0.5/s1.41
 
  __ SELECT and Sort
 _
  Scan1.65k 1.1/s %SELECT:  14.87
  Range 493 0.3/s4.44
  Full join 310 0.2/s2.79
  Range check   339 0.2/s3.06
  Full rng join   0   0/s0.00
  Sort scan 887 0.6/s
  Sort range628 0.4/s
  Sort mrg pass   0   0/s
 
  __ Query Cache
 _
  Memory usage5.96M of  16.00M  %Used:  37.25
  Block Fragmnt   5.17%
  Hits   32.56k21.6/s
  Inserts 5.66k 3.8/s
  Insrt:Prune   5.66k:1 3.8/s
  Hit:Insert 5.76:1
 
  __ Table Locks
 _
  Waited513 0.3/s  %Total:   3.62
  Immediate  13.65k 9.1/s
 
  __ Tables
 __
  Open 1024 of 1024%Cache: 100.00
  Opened 14.96k 9.9/s
 
  __ Connections

Re: 4 minute slow on select count(*) from table - myisam type

2011-10-06 Thread Andrew Moore
Glad you got to the bottom of it Joey.
On 7 Oct 2011 01:23, Joey L mjh2...@gmail.com wrote:
 Guys - I wanted to thank you all very much for your help
 I found the offending code on the website !
 thank you very very very much...
 what did it for me was a combination of show processlist and show full
 processlist.

 I saw the full queries and the main thing was that it was doing a
 query about 20 miles long.

 thanks again
 mjh

 On Thu, Oct 6, 2011 at 7:48 PM, Jan Steinman j...@bytesmiths.com wrote:
 From: Joey L mjh2...@gmail.com

 i did google search - myisam is faster...i am not really doing any
 transaction stuff.

 That's true for read-only. But if you have a mix of reads and writes,
MYISAM locks tables during writes, which could be blocking reads.

 
 In a museum in Havana, there are two skulls of Christopher Columbus; one
when he was a boy and one when he was a man. -- Mark Twain
  Jan Steinman, EcoReality Co-op 


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=mjh2...@gmail.com
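
For reference, the diagnostic combination that cracked it — the plain form truncates the Info column, so the FULL variant is needed to see a very long query:

```sql
SHOW PROCESSLIST;        -- Info column truncated to the first 100 characters
SHOW FULL PROCESSLIST;   -- complete query text
```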




Re: A tidbit for those of us who want to play with InnoDB compression

2011-10-04 Thread Andrew Moore
Nice one Johan, thanks for the info.

On Tue, Oct 4, 2011 at 2:17 PM, Johan De Meersman vegiv...@tuxera.be wrote:

 As noted in the title, I'm messing about a bit with InnoDB compressed
 tables. As such, I found a rather glaring hole in the Internet: how the hell
 do you turn compression off again? :-D

 After messing about a lot and googling until my fingers hurt, I happened
 upon this bug report: http://bugs.mysql.com/bug.php?id=56628

 So, you turn compression off on a table by:

 set session innodb_strict_mode=off;
 alter table YOURTABLEHERE engine=InnoDB row_format=compact
 key_block_size=0;

 Of course, if you're running 5.1.55+ or 5.5.9+, you'll not need to tinker
 with your innodb_strict_mode ; but it's still a glaring hole in the
 documentation.

 --
 Bier met grenadyn
 Is als mosterd by den wyn
 Sy die't drinkt, is eene kwezel
 Hy die't drinkt, is ras een ezel
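
For completeness, a sketch of the opposite direction — turning compression on. This requires innodb_file_per_table and the Barracuda file format, as noted in the next thread; the table name is hypothetical:

```sql
ALTER TABLE yourtable ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
```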



Re: how to shrink ibdata1

2011-10-03 Thread Andrew Moore
File per table is required if you want to implement compression via the
barracuda file format.
On 3 Oct 2011 06:39, Adarsh Sharma adarsh.sha...@orkash.com wrote:
 innodb_file_per_table creates a separate .ibd file for each table. What to
 do if some tables' data are deleted frequently?
 I have an InnoDB table which was truncated after reaching 150GB in a MySQL database.


 Reindl Harald wrote:
 Am 02.10.2011 22:10, schrieb Dhaval Jaiswal:

 Hi All,

 How to shrink the Mysql ibdata1 files.

 The actual size of database is only hardly 10G, but ibdata1 is showing
73G
 full. I did some googling and found the following procedure.

 Do a mysqldump of all databases, procedures, triggers etc
 Drop all databases
 Stop mysql
 Delete ibdata1 and ib_log files
 Start mysql
 Restore from dump

 When you start MySQL in step 5 the ibdata1 and ib-log files will be
 recreated.


 Is this the only option with mysql? Can't we update the statistics of
 database  reduce the file size.

 I am currently using 5.0.X.


 innodb_file_per_table is your friend, but way too late
 that should have been considered BEFORE the setup

 now you have only the option dump your databases
 reset the server
 configure innodb_file_per_table
 restore the backup
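
The setting discussed above, as it would appear in my.cnf. It must be in place before the restore, since only tables created afterwards get their own .ibd file:

```ini
[mysqld]
innodb_file_per_table = 1
```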





Re: 4 minute slow on select count(*) from table - myisam type

2011-10-02 Thread Andrew Moore
Is your table MyISAM or InnoDB?

A

On Sun, Oct 2, 2011 at 2:44 PM, Joey L mjh2...@gmail.com wrote:

 I am having issues with a MySQL db - I am doing a select count(*) from a
 table -- and it takes 3 to 4 min.
 My table has about 9,000,000 records in it.
 I have noticed issues on my web pages so that is why i did this test.
 I have about 4 gig of memory on the server.
 Is there anything I can do to fix the issue 
 My my.cnf looks like this :
 # * Fine Tuning
 #
 key_buffer  = 256M
 max_allowed_packet  = 16M
 thread_stack= 192K
 thread_cache_size   = 32
 # This replaces the startup script and checks MyISAM tables if needed
 # the first time they are touched
 myisam-recover = BACKUP
 max_connections= 100
 table_cache= 1024
 thread_concurrency = 20
 #
 # * Query Cache Configuration
 #
 query_cache_limit   = 1M
 query_cache_size= 512M
 #
 # * Logging and Replication
 #
 # Both location gets rotated by the cronjob.
 # Be aware that this log type is a performance killer.
 # As of 5.1 you can enable the log at runtime!
 general_log_file= /var/log/mysql/mysql.log
 general_log = 1
 #
 # Error logging goes to syslog due to
 /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
 #
 # Here you can see queries with especially long duration
 #log_slow_queries   = /var/log/mysql/mysql-slow.log
 #long_query_time = 2
 #log-queries-not-using-indexes
 #
 # The following can be used as easy to replay backup logs or for
 replication.
 # note: if you are setting up a replication slave, see README.Debian about
 #   other settings you may need to change.
 #server-id  = 1
 #log_bin= /var/log/mysql/mysql-bin.log
 expire_logs_days= 10
 max_binlog_size = 100M
 #binlog_do_db   = include_database_name
 #binlog_ignore_db   = include_database_name
 #
 # * InnoDB
 #
 # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
 # Read the manual for more InnoDB related options. There are many!
 #
 # * Security Features
 #
 # Read the manual, too, if you want chroot!
 # chroot = /var/lib/mysql/
 #
 # For generating SSL certificates I recommend the OpenSSL GUI tinyca.
 #
 # ssl-ca=/etc/mysql/cacert.pem
 # ssl-cert=/etc/mysql/server-cert.pem
 # ssl-key=/etc/mysql/server-key.pem



Re: 4 minute slow on select count(*) from table - myisam type

2011-10-02 Thread Andrew Moore
Did you fix the issue?

On Sun, Oct 2, 2011 at 4:05 PM, Singer X.J. Wang w...@singerwang.com wrote:

 Are you sure? Do a show create table and send it to us please




 On Sun, Oct 2, 2011 at 10:02, Joey L mjh2...@gmail.com wrote:

 thanks for the quick reply!
 My table is MyISAM
 further top says this:
 top - 10:01:29 up  8:25,  4 users,  load average: 1.42, 1.85, 2.69
 Tasks: 338 total,   1 running, 337 sleeping,   0 stopped,   0 zombie
 Cpu(s): 10.3%us,  0.9%sy,  0.0%ni, 56.6%id, 32.0%wa,  0.0%hi,  0.2%si,
  0.0%st
 Mem:   8198044k total,  8158784k used,39260k free,   199852k buffers
 Swap:  8210416k total,44748k used,  8165668k free,  5457920k cached

  PID USER  PR  NI  VIRT  RES  SHR S %CPU %MEMTIME+  COMMAND
 10682 mysql 20   0  958m 343m 6588 S   31  4.3  57:25.69
 /usr/sbin/mysqld --basedir=/usr --datadir=/var/lib/mysql --user=mysql
 --pid-file=/va
 14627 www-data  20   0 50088  14m 4744 S3  0.2   0:10.43
 /usr/sbin/apache2 -k start
 14637 www-data  20   0 50088  14m 4744 S3  0.2   0:07.66
 /usr/sbin/apache2 -k start
 14737 www-data  20   0 50092  14m 4744 S3  0.2   0:07.25
 /usr/sbin/apache2 -k start
 14758 www-data  20   0 50092  14m 4748 S3  0.2   0:07.36
 /usr/sbin/apache2 -k start
 15145 root  20   0  2596 1328  896 R1  0.0   0:00.55 top
  1895 bind  20   0 98452  24m 1980 S1  0.3   0:31.34
 /usr/sbin/named
 -u bind
  401 root  20   0 000 D0  0.0   0:42.63 [md0_raid1]
  1398 root  20   0 000 S0  0.0   2:59.33 [flush-9:0]
  2428 asterisk -11   0 33500  15m 6660 S0  0.2   0:19.39
 /usr/sbin/asterisk -p -U asterisk
1 root  20   0  2032  604  568 S0  0.0   0:01.14 init [2]
2 root  20   0 000 S0  0.0   0:00.01 [kthreadd]
3 root  RT   0 000 S0  0.0   0:00.04 [migration/0]
4 root  20   0 000 S0  0.0   0:00.16 [ksoftirqd/0]


 On Sun, Oct 2, 2011 at 9:55 AM, Andrew Moore eroomy...@gmail.com wrote:

  Is your table MyISAM or InnoDB?
 
  A
 
 
  On Sun, Oct 2, 2011 at 2:44 PM, Joey L mjh2...@gmail.com wrote:
 
  I have having issues with mysql db - I am doing a select count(*) from
  table -- and it take 3 to 4 min.
  My table has about 9,000,000 records in it.
  I have noticed issues on my web pages so that is why i did this test.
  I have about 4 gig of memory on the server.
  Is there anything I can do to fix the issue 
  My my.cnf looks like this :
  # * Fine Tuning
  #
  key_buffer  = 256M
  max_allowed_packet  = 16M
  thread_stack= 192K
  thread_cache_size   = 32
  # This replaces the startup script and checks MyISAM tables if needed
  # the first time they are touched
  myisam-recover = BACKUP
  max_connections= 100
  table_cache= 1024
  thread_concurrency = 20
  #
  # * Query Cache Configuration
  #
  query_cache_limit   = 1M
  query_cache_size= 512M
  #
  # * Logging and Replication
  #
  # Both location gets rotated by the cronjob.
  # Be aware that this log type is a performance killer.
  # As of 5.1 you can enable the log at runtime!
  general_log_file= /var/log/mysql/mysql.log
  general_log = 1
  #
  # Error logging goes to syslog due to
  /etc/mysql/conf.d/mysqld_safe_syslog.cnf.
  #
  # Here you can see queries with especially long duration
  #log_slow_queries   = /var/log/mysql/mysql-slow.log
  #long_query_time = 2
  #log-queries-not-using-indexes
  #
  # The following can be used as easy to replay backup logs or for
  replication.
  # note: if you are setting up a replication slave, see README.Debian
 about
  #   other settings you may need to change.
  #server-id  = 1
  #log_bin= /var/log/mysql/mysql-bin.log
  expire_logs_days= 10
  max_binlog_size = 100M
  #binlog_do_db   = include_database_name
  #binlog_ignore_db   = include_database_name
  #
  # * InnoDB
  #
  # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/.
  # Read the manual for more InnoDB related options. There are many!
  #
  # * Security Features
  #
  # Read the manual, too, if you want chroot!
  # chroot = /var/lib/mysql/
  #
  # For generating SSL certificates I recommend the OpenSSL GUI tinyca.
  #
  # ssl-ca=/etc/mysql/cacert.pem
  # ssl-cert=/etc/mysql/server-cert.pem
  # ssl-key=/etc/mysql/server-key.pem
 
 
 


 --
 Pythian at Oracle OpenWorld: 8 sessions packed with hot tips, real-world 
 experiences and valuable insight. bit.ly/pythianoow11




Re: mysql listed as attach page by google?

2011-09-26 Thread Andrew Moore
Yeah I noticed that today too...

On Mon, Sep 26, 2011 at 7:30 PM, Michael Albert m_albert...@yahoo.com wrote:

 I don't suppose I am the first to notice this, but most of
 the pages on dev.mysql.com have been listed by google
 as attack pages, e.g http://dev.mysql.com/downloads/.
 Has there been a problem, or is google being overzealous?

 Thanks!

 -Mike



Re: trying to change wait_timeout

2011-09-08 Thread Andrew Moore
Check that you're looking at the variable in the GLOBAL scope not the
SESSION scope.

SHOW GLOBAL VARIABLES LIKE 'wait_timeout';

Andy

On Thu, Sep 8, 2011 at 11:34 AM, Bruce Ferrell bferr...@baywinds.org wrote:

 On 09/08/2011 02:56 AM, Johan De Meersman wrote:

 - Original Message -

 From: Bruce Ferrell bferr...@baywinds.org
 To: mysql@lists.mysql.com
 Sent: Thursday, 8 September, 2011 3:10:16 AM
 Subject: trying to change wait_timeout

 I've read the documentation on MySQL for version 5.1 and it says all
 I have to do is to place the following:
 wait_timeout=xxx
 under [mysqld]

 That, and restart the service, of course. You *did* think of restarting
 the service, I trust? :-p

 That being said, it is also a dynamic variable, so if you didn't restart,
 prefer not to restart *and* are certain your config file is correct; you can
 also do set global wait_timeout=xxx to have it take effect immediately for
 all new sessions. Yes, that means you'll have to disconnect/reconnect to see
 the change in your own session.


 Good question to ask.  Yes, I did restart mysql.  Both before and after
 show variables like 'wait_time%' returns 28800.  Most confusing.




 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:    http://lists.mysql.com/mysql?unsub=eroomy...@gmail.com
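
A sketch of checking both scopes — the session copy is inherited at connect time, so an already-open session keeps showing the old value even after the global changes:

```sql
SHOW GLOBAL VARIABLES LIKE 'wait_timeout';   -- what new sessions will get
SHOW SESSION VARIABLES LIKE 'wait_timeout';  -- what this session got at connect time
SET GLOBAL wait_timeout = 300;               -- takes effect for new sessions only
```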




Re: Query Optimization

2011-09-08 Thread Andrew Moore
Thinking outside the query, is there any archiving that could happen to make
your large tables kinder in the range scan?

Andy

On Thu, Sep 8, 2011 at 7:03 PM, Brandon Phelps bphe...@gls.com wrote:

 On 09/01/2011 01:32 PM, Brandon Phelps wrote:

 On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:

 On 9/1/2011 09:42, Brandon Phelps wrote:

 On 09/01/2011 04:59 AM, Jochem van Dieten wrote:
   ...

   WHERE
   (open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30 00:00:00')
   AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30 12:36:53')

  In that case your logic here simplifies to:
  WHERE
  open_dt >= '2011-08-30 00:00:00'
  AND
  close_dt <= '2011-08-30 12:36:53'

  Now add an index over open_dt and close_dt and see what happens.

  Jochem

 Jochem,

 I can't really use your WHERE logic because I also need to retrieve
 results where the open_dt time is out of the range specified. For
 example, a very large file download might span multiple days so given
 your logic if the connection was started 2 days ago and I want to pull 1
 days worth of connections, I would miss that entry. Basically I want to
 SELECT all of the records that were opened OR closed during the
 specified time period, ie. if any activity happened between my start and
 end dates, I need to see that record.

 Any other ideas?


 I believe Jochem was on the right track but he got his dates reversed.

 Let's try a little ASCII art to show the situation. I will setup a query
 window with two markers (s) and (e). Events will be marked by || markers
 showing their durations.

 a) (s) (e)
 b) |---|
 c) |---|
 d) |---|
 e) ||
 f) |---|
 g) |---|

 To describe these situations:
 a) is the window for which you want to query (s) is the starting time and
 (e) is the ending time for the date range you are interested in.
 b) the event starts and stops before your window exists. It won't be part
 of your results.
 c) the event starts before the window but ends within the window -
 include this
 d) the event starts and ends within the window - include this
 e) the event starts before the window and ends after the window - include
 this
 f) the event starts inside the window but ends beyond the window -
 include this.
 g) the event starts and ends beyond the window - exclude this.

 In order to get every event in the range of c-f, here is what you need
 for a WHERE clause

 WHERE start <= (ending time) AND end >= (starting time)

 Try that and let us know the results.


 Thanks Jochem and Shawn, however the following two queries result in the
 exact same EXPLAIN output: (I hope the tables don't wrap too early for you)

 Old method:
 SELECT
 sc.open_dt,
 sc.close_dt,
 sc.protocol,
 INET_NTOA(sc.src_address) AS src_address,
 sc.src_port,
 INET_NTOA(sc.dst_address) AS dst_address,
 sc.dst_port,
 sc.sent,
 sc.rcvd,
 spm.desc AS src_port_desc,
 dpm.desc AS dst_port_desc
 FROM firewall_connections AS sc
 LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
 LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE
 (open_dt >= '2011-08-31 09:53:31' OR close_dt >= '2011-08-31 09:53:31')
 AND (open_dt <= '2011-09-01 09:53:31' OR close_dt <= '2011-09-01 09:53:31')
 ORDER BY rcvd DESC
 LIMIT 0, 10;

 New method with BTREE index on open_dt, close_dt (index name is
 ndx_open_close_dt):
 SELECT
 sc.open_dt,
 sc.close_dt,
 sc.protocol,
 INET_NTOA(sc.src_address) AS src_address,
 sc.src_port,
 INET_NTOA(sc.dst_address) AS dst_address,
 sc.dst_port,
 sc.sent,
 sc.rcvd,
 spm.desc AS src_port_desc,
 dpm.desc AS dst_port_desc
 FROM firewall_connections AS sc
 LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
 LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE
 open_dt <= '2011-09-01 09:53:31' AND close_dt >= '2011-08-31 09:53:31'
 ORDER BY rcvd DESC
 LIMIT 0, 10;

 EXPLAIN output for old method:
 +----+-------------+-------+--------+---------------------------+----------+---------+--------------------+------+-------------+
 | id | select_type | table | type   | possible_keys             | key      | key_len | ref                | rows | Extra       |
 +----+-------------+-------+--------+---------------------------+----------+---------+--------------------+------+-------------+
 |  1 | SIMPLE      | sc    | index  | open_dt,ndx_open_close_dt | ndx_rcvd | 4       | NULL               |   10 | Using where |
 |  1 | SIMPLE      | spm   | eq_ref | PRIMARY                   | PRIMARY  | 2       | syslog.sc.src_port |    1 |             |
 |  1 | SIMPLE      | dpm   | eq_ref | PRIMARY                   | PRIMARY  | 2       | syslog.sc.dst_port |    1 |             |
 +----+-------------+-------+--------+---------------------------+----------+---------+--------------------+------+-------------+

 EXPLAIN output for new method with new index:
 +----+-------------+-------+------+---------------+-----+---------+-----+------+-------+
 | id | select_type | table | type | possible_keys | key | key_len | ref | rows | Extra |
 +----+-------------+-------+------+---------------+-----+---------+-----+------+-------+

Re: Query Optimization

2011-09-08 Thread Andrew Moore
I don't think I saw any query timings in the emails (maybe I missed them).

What version of MySQL are you currently using?
What does the explain look like when your remove the limit 10?
Is your server tuned for MyISAM or InnoDB?
What kind of disk setup is in use?
How much memory is in your machine?


On Thu, Sep 8, 2011 at 7:27 PM, Brandon Phelps bphe...@gls.com wrote:

 Thanks for the reply Andy.  Unfortunately the users will be selecting
 varying date ranges and new data is constantly coming in, so I am not sure
 how I could archive/cache the necessary data that would be any more
 efficient than simply using the database directly.



 On 09/08/2011 02:16 PM, Andrew Moore wrote:

 Thinking outside the query, is there any archiving that could happen to
 make
 your large tables kinder in the range scan?

 Andy

 On Thu, Sep 8, 2011 at 7:03 PM, Brandon Phelpsbphe...@gls.com  wrote:

  On 09/01/2011 01:32 PM, Brandon Phelps wrote:

  On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:

  On 9/1/2011 09:42, Brandon Phelps wrote:

  On 09/01/2011 04:59 AM, Jochem van Dieten wrote:

 ...


  WHERE
 (open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30 00:00:00')
 AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30 12:36:53')

  In that case your logic here simplifies to:
 WHERE
 open_dt >= '2011-08-30 00:00:00'
 AND
 close_dt <= '2011-08-30 12:36:53'


  Now add an index over open_dt and close_dt and see what happens.


  Jochem


 Jochem,

 I can't really use your WHERE logic because I also need to retrieve
 results where the open_dt time is out of the range specified. For
 example, a very large file download might span multiple days so given
 your logic if the connection was started 2 days ago and I want to pull
 1
 days worth of connections, I would miss that entry. Basically I want
 to
 SELECT all of the records that were opened OR closed during the
 specified time period, ie. if any activity happened between my start
 and
 end dates, I need to see that record.

 Any other ideas?


  I believe Jochem was on the right track but he got his dates
 reversed.

 Let's try a little ASCII art to show the situation. I will setup a
 query
 window with two markers (s) and (e). Events will be marked by ||
 markers
 showing their durations.

  a)        (s)                          (e)
  b)  |---|
  c)      |-----|
  d)             |---|
  e)    |-----------------------------------|
  f)                                 |--------|
  g)                                            |---|

 To describe these situations:
 a) is the window for which you want to query (s) is the starting time
 and
 (e) is the ending time for the date range you are interested in.
 b) the event starts and stops before your window exists. It won't be
 part
 of your results.
 c) the event starts before the window but ends within the window -
 include this
 d) the event starts and ends within the window - include this
 e) the event starts before the window and ends after the window -
 include
 this
 f) the event starts inside the window but ends beyond the window -
 include this.
 g) the event starts and ends beyond the window - exclude this.

 In order to get every event in the range of c-f, here is what you need
 for a WHERE clause

  WHERE start <= (ending time) AND end >= (starting time)

 Try that and let us know the results.
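The "include c through f, exclude b and g" rule above is just the standard interval-overlap test, and it can be sanity-checked outside MySQL. A minimal sketch using Python's sqlite3 — the table name, labels, and timestamps here are invented for the illustration (the real table in the thread is firewall_connections):

```python
import sqlite3

# Window: (s) = start of range, (e) = end of range.
S, E = "2011-08-30 00:00:00", "2011-08-30 12:36:53"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE conns (label TEXT, open_dt TEXT, close_dt TEXT)")
conn.executemany(
    "INSERT INTO conns VALUES (?, ?, ?)",
    [
        ("b", "2011-08-29 01:00:00", "2011-08-29 02:00:00"),  # ends before window
        ("c", "2011-08-29 23:00:00", "2011-08-30 01:00:00"),  # starts before, ends inside
        ("d", "2011-08-30 05:00:00", "2011-08-30 06:00:00"),  # fully inside
        ("e", "2011-08-29 23:00:00", "2011-08-30 23:00:00"),  # spans the whole window
        ("f", "2011-08-30 12:00:00", "2011-08-30 23:00:00"),  # starts inside, ends after
        ("g", "2011-08-30 23:00:00", "2011-08-31 01:00:00"),  # starts after window
    ],
)

# The overlap test: start <= (ending time) AND end >= (starting time).
# 'YYYY-MM-DD HH:MM:SS' strings compare correctly as text.
rows = conn.execute(
    "SELECT label FROM conns WHERE open_dt <= ? AND close_dt >= ? ORDER BY label",
    (E, S),
).fetchall()
print([r[0] for r in rows])  # -> ['c', 'd', 'e', 'f']
```

Exactly the c-f cases come back; b and g are excluded, matching the ASCII-art description.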


 Thanks Jochem and Shawn, however the following two queries result in the
 exact same EXPLAIN output: (I hope the tables don't wrap too early for
 you)

 Old method:
 SELECT
 sc.open_dt,
 sc.close_dt,
 sc.protocol,
 INET_NTOA(sc.src_address) AS src_address,
 sc.src_port,
 INET_NTOA(sc.dst_address) AS dst_address,
 sc.dst_port,
 sc.sent,
 sc.rcvd,
 spm.desc AS src_port_desc,
 dpm.desc AS dst_port_desc
 FROM firewall_connections AS sc
 LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
 LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE
  (open_dt >= '2011-08-31 09:53:31' OR close_dt >= '2011-08-31 09:53:31')
  AND (open_dt <= '2011-09-01 09:53:31' OR close_dt <= '2011-09-01
  09:53:31')
 ORDER BY rcvd DESC
 LIMIT 0, 10;

 New method with BTREE index on open_dt, close_dt (index name is
 ndx_open_close_dt):
 SELECT
 sc.open_dt,
 sc.close_dt,
 sc.protocol,
 INET_NTOA(sc.src_address) AS src_address,
 sc.src_port,
 INET_NTOA(sc.dst_address) AS dst_address,
 sc.dst_port,
 sc.sent,
 sc.rcvd,
 spm.desc AS src_port_desc,
 dpm.desc AS dst_port_desc
 FROM firewall_connections AS sc
 LEFT JOIN port_mappings AS spm ON spm.port = sc.src_port
 LEFT JOIN port_mappings AS dpm ON dpm.port = sc.dst_port
 WHERE
  open_dt <= '2011-09-01 09:53:31' AND close_dt >= '2011-08-31 09:53:31'
 ORDER BY rcvd DESC
 LIMIT 0, 10;

 EXPLAIN output for old method:
  +----+-------------+-------+-------+---------------------------+-----+---------+-----+------+-------+
  | id | select_type | table | type  | possible_keys             | key | key_len | ref | rows | Extra |
  +----+-------------+-------+-------+---------------------------+-----+---------+-----+------+-------+
  | 1  | SIMPLE      | sc    | index | open_dt

Re: Query Optimization

2011-09-08 Thread Andrew Moore
Partitioning isn't a bad idea for this, however I'm still thinking about your
dataset size and possible hardware limitations. It's not likely going to fit
into the relevant buffers/memory, so you're going to be on disk more than you
want. You're probably creating temporary tables like crazy and I would bet
that a good load of them are heading to disk too. With your IO performance
limited to a small number of disks as you describe, you're not going to be
able to get much more from these queries. Although it's a dedicated DB
server, are there other databases being accessed on it? When looking at the
scope of your data, are you capturing more than you need? How often, and how
far back, are the users querying? How many users are concurrently performing
queries on the 32m record table?

On Thu, Sep 8, 2011 at 8:04 PM, Brandon Phelps bphe...@gls.com wrote:

 Mihail,

 I have considered this but have not yet determined how best to go about
 partitioning the table.  I don't think partitioning by dst_address or
 src_address would help because most of the queries do not filter on IP
 address (except very specific queries where the end-user is searching the
 table for history on a particular employee).

 I could potentially partition the table based on the day of week the
 connection was opened on which may improve performance for a while since
 this would take me from a single 32million record table down to roughly 4.5
 million records per partition (32 / 7) however we are looking to eventually
 store up to 2 months worth of data in the table, and so far that 32 million
 records is only for 1 month, so I estimate another 32 million-ish before the
 month is out, bringing me to roughly 70 million records total (it would be
 nice if I could store even more than 2 months, but given my currently
 performance dilemma I don't expect that to happen).  Also this does not take
 into account that the end-user will often be pulling data for multiple days
 at a time, meaning that multiple partitions in this scheme will need to be
 accessed anyway.

 The only other logical partitioning scheme I can think of would be to
 partition based on dst_port (the port the log relates to) but the majority
 of records are all to port 80 (standard web traffic) so I don't think this
 would be all that helpful.

 I have never messed with partitioning so it is totally possible that I am
 not thinking of something, so if you have any ideas on a decent partitioning
 scheme based on my criteria and queries below, please let me know.

 Thanks,
 Brandon
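For what it's worth, the reason day-based RANGE partitioning can help even when a query spans several days is partition pruning: the server only opens partitions whose range overlaps the queried interval. MySQL does this internally; the sketch below just mimics the bookkeeping with invented day boundaries, so the numbers are illustrative, not Brandon's schema:

```python
from datetime import date, timedelta

# Day-boundary partitions, like RANGE partitioning on TO_DAYS(open_dt):
# partition i holds rows with boundaries[i] <= open_dt < boundaries[i+1].
start = date(2011, 9, 1)
boundaries = [start + timedelta(days=i) for i in range(8)]  # one week of partitions

def partitions_to_scan(q_start, q_end):
    """Return indices of partitions overlapping the queried date range."""
    hit = []
    for i in range(len(boundaries) - 1):
        lo, hi = boundaries[i], boundaries[i + 1]
        if lo <= q_end and hi > q_start:  # interval overlap test
            hit.append(i)
    return hit

# A 2-day query touches only 3 of the 7 partitions, not the whole table.
print(partitions_to_scan(date(2011, 9, 3), date(2011, 9, 5)))  # -> [2, 3, 4]
```

The catch the thread already identifies still applies: pruning only helps if the WHERE clause constrains the partitioning column, so partitioning by day-of-week or by dst_port would defeat it for these date-range queries.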


 On 09/08/2011 02:47 PM, Mihail Manolov wrote:

 If you're running version 5.1+ you may wanna take a look at table
 partitioning options you may have.

 On Sep 8, 2011, at 2:27 PM, Brandon Phelps wrote:

  Thanks for the reply Andy.  Unfortunately the users will be selecting
 varying date ranges and new data is constantly coming in, so I am not sure
 how I could archive/cache the necessary data that would be any more
 efficient than simply using the database directly.


 On 09/08/2011 02:16 PM, Andrew Moore wrote:

 Thinking outside the query, is there any archiving that could happen to
 make
 your large tables kinder in the range scan?

 Andy

 On Thu, Sep 8, 2011 at 7:03 PM, Brandon Phelpsbphe...@gls.com
 wrote:

  On 09/01/2011 01:32 PM, Brandon Phelps wrote:

  On 09/01/2011 12:47 PM, Shawn Green (MySQL) wrote:

  On 9/1/2011 09:42, Brandon Phelps wrote:

  On 09/01/2011 04:59 AM, Jochem van Dieten wrote:

 ...


   WHERE
  (open_dt >= '2011-08-30 00:00:00' OR close_dt >= '2011-08-30 00:00:00')
  AND (open_dt <= '2011-08-30 12:36:53' OR close_dt <= '2011-08-30 12:36:53')

   In that case your logic here simplifies to:
  WHERE
  open_dt >= '2011-08-30 00:00:00'
  AND
  close_dt <= '2011-08-30 12:36:53'


  Now add an index over open_dt and close_dt and see what happens.


  Jochem


 Jochem,

 I can't really use your WHERE logic because I also need to retrieve
 results where the open_dt time is out of the range specified. For
 example, a very large file download might span multiple days so
 given
 your logic if the connection was started 2 days ago and I want to
 pull 1
 days worth of connections, I would miss that entry. Basically I want
 to
 SELECT all of the records that were opened OR closed during the
 specified time period, ie. if any activity happened between my start
 and
 end dates, I need to see that record.

 Any other ideas?


  I believe Jochem was on the right track but he got his dates
 reversed.

 Let's try a little ASCII art to show the situation. I will setup a
 query
 window with two markers (s) and (e). Events will be marked by ||
 markers
 showing their durations.

  a)        (s)                          (e)
  b)  |---|
  c)      |-----|
  d)             |---|
  e)    |-----------------------------------|
  f)                                 |--------|
  g)                                            |---|

 To describe these situations:
 a) is the window for which you want to query (s) is the starting time
 and
 (e) is the ending time for the date range you are interested in.
 b) the event starts and stops before your window exists. It won't

Re: why does throughput increase with number of connections even if it exceeds number of processors?

2011-09-06 Thread Andrew Moore
Hong,

What you are seeing is probably your server's ability to keep up with your
benchmark with some comfort. I assume your benchmark isn't completing in a
couple of seconds, but would presume that you are not IO bound and all
changes are happening in memory. It's hard to give you 'the' answer as you
haven't provided enough info to work with, but in my estimation you aren't
working your machine hard enough. How true to your application is your
benchmark? If you are not expecting an append-only workload then you might
be wasting your time merely adding records to a table, which is what your
benchmark sounds like it's doing.

Hth

Andy
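A plausible explanation for the scaling Hong observed: inserts from a client spend most of their time waiting (network round-trips, log flushes) rather than computing, and while one connection waits the server can service another, so throughput keeps growing well past the core count. A toy sketch of that principle only — each "insert" here is simulated as a 20 ms wait, not a real MySQL call:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_insert(_):
    time.sleep(0.02)  # stands in for network + commit latency, not CPU work

def run(threads, inserts=16):
    """Time `inserts` latency-bound operations across `threads` workers."""
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=threads) as pool:
        list(pool.map(fake_insert, range(inserts)))
    return time.perf_counter() - t0

serial = run(1)    # ~0.32 s: one latency-bound insert at a time
parallel = run(8)  # far less, even on a 2-core box: the waits overlap
print(f"{serial:.2f}s vs {parallel:.2f}s")
```

Once the workload becomes CPU- or disk-saturated instead of latency-bound, this scaling flattens out, which is why a benchmark that only appends rows can be misleading.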
On Sep 6, 2011 3:27 AM, Chuntao HONG chuntao.h...@gmail.com wrote:
 Hi,

 I wrote a micro benchmark to test the performance of my MySQL server.
 In the benchmark, which is written in C#, I created several threads,
 each with a connection to the MySQL server, to insert rows into the
 same table. Totally 3200 rows are inserted into the table.

 When I try to vary the number of C# threads I found that the time
 taken to finish the benchmark decreases, thus increasing the
 throughput. The throughput increases almost linearly with the number
 of C# threads, until I reach 100 threads, which is the maximum number
 of connections allowed by my server.

 This is quite unexpected, since the server has only two processors. I
 expect the throughput to grow from one connection to two connections.
 But I don't expect it to grow with more than two connections. Why is
 it the case?

 My Server has one Intel Xeon X3360 CPU with two cores running at
 2.83GHz and 8GB of main memory. It runs Windows Server 2008 R2. The
 MySQL version is 5.5.15 x64 Edition.


 Thanks!

 HONG Chuntao

 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe: http://lists.mysql.com/mysql?unsub=eroomy...@gmail.com



Re: mysql

2011-08-23 Thread Andrew Moore
That's too bad. How did you configure things? What troubleshooting have you
done so far?
On Aug 23, 2011 9:18 AM, madu...@gmail.com madu...@gmail.com wrote:
 When I try to start my mysql DB I keep getting the following message:
 Can't connect to local MySQL server through socket
 '/var/run/mysqld/mysqld.sock'

 Thanks




Re: Reg...My Hung MYSQL instance

2011-08-23 Thread Andrew Moore
It will only do what you let it. If your server is consuming too much memory,
it's because you've let it.
On Aug 23, 2011 9:22 AM, Shafi AHMED shafi.ah...@sifycorp.com wrote:
 Dear all, today my database suddenly went into a hung state due to Out of Memory
 [ Killed process 1330 (mysqld) ].

 Please advise me, folks. This happens often now.

 Shafi







Re: about the config file

2011-08-02 Thread Andrew Moore
Hey!

I asked this one a long time ago. They are interchangeable in most cases. I
can recall there was one or two options (pid-file and log-error...I think?!)
that were funky with the '_' but I can't be sure that's still an issue.

Try them out and don't worry too much about the aesthetics of the file.

Andy
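The interchangeability comes from the server normalizing dashes to underscores when it reads option names, so both spellings collapse to the same key. A toy illustration of that normalization — this shows the idea only, it is not MySQL's actual parser:

```python
def normalize(option):
    """MySQL-style option-name normalization: '-' and '_' are equivalent."""
    name, _, value = option.partition("=")
    return name.strip().replace("-", "_"), value.strip()

# Both spellings end up as the same (name, value) pair, so either works in my.cnf.
print(normalize("log-error = /var/log/mysql/error.log"))
print(normalize("log_error = /var/log/mysql/error.log"))
```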

On Tue, Aug 2, 2011 at 11:18 AM, Feng He short...@gmail.com wrote:

 Hello,

 In mysql's config file my.cnf, are the variable names with _ and - the
 same?
 for example,

 log_error = ...
 log-error = ...

 Thanks.





Re: access from everywhere but localhost?

2011-06-20 Thread Andrew Moore
Grant only to the hosts you want to have access to your data.

Andy
On 20 Jun 2011 14:20, Willy Mularto sangpr...@gmail.com wrote:
 Replace GRANT with REVOKE



 On Jun 20, 2011, at 8:08 PM, Matthias Leopold wrote:

 hi,

 this surely is a beginners question:

 i already created a user and privileges like this:

 GRANT ALL PRIVILEGES ON `xxx`.* TO 'yyy'@'%' identified by 'zzz';

 how do i modify this setup to remove access from localhost/127.0.0.1?

 mysql version is 5.1.49

 thx
 matthias




 Willy Mularto
 F300HD+MR18DE (NLC1725)











Re: access from everywhere but localhost?

2011-06-20 Thread Andrew Moore
Nope, for example you can assign access to db1.* to user1@192.168.% which
doesn't include localhost or 127.0.0.1.
On 20 Jun 2011 14:35, Matthias Leopold matth...@aic.at wrote:
 does this mean that access from everywhere but localhost is impossible?

 matthias

 Am 2011-06-20 15:22, schrieb Andrew Moore:
 Grant only to the hosts you want to have access to your data.

 Andy

 On 20 Jun 2011 14:20, Willy Mularto sangpr...@gmail.com
 mailto:sangpr...@gmail.com wrote:
  Replace GRANT with REVOKE
 
 
 
  On Jun 20, 2011, at 8:08 PM, Matthias Leopold wrote:
 
  hi,
 
  this surely is a beginners question:
 
  i already created a user and privileges like this:
 
  GRANT ALL PRIVILEGES ON `xxx`.* TO 'yyy'@'%' identified by 'zzz';
 
  how do i modify this setup to remove access from localhost/127.0.0.1
 http://127.0.0.1?
 
  mysql version is 5.1.49
 
  thx
  matthias
 
 
 
 
  Willy Mularto
  F300HD+MR18DE (NLC1725)
 
 
 
 
 
 
 
 
 


 --
 Mit freundlichen Grüssen

 Matthias Leopold
 System  Network Administration

 Streams Telecommunicationsservices GmbH
 Universitaetsstrasse 10/7, 1090 Vienna, Austria

 tel: +43 1 40159113
 fax: +43 1 40159300
 


Re: How to copy file from an old server to reconstitute a database ?

2011-06-16 Thread Andrew Moore
Lee,

You will need to take the datadir directory, typically found under
/var/lib/mysql/

It would also be a good idea to grab your configuration file from
/etc/mysql/my.cnf or /etc/my.cnf (ubuntu tends to be the former).

Once transplanted into your new server, remember to check the permissions on
the copied files. Starting the MySQL server daemon should initiate the InnoDB
crash recovery procedure and you should be up and running again.

That's a 50,000ft overview of the process but unless you're having some
other more serious issues with the data then the previous steps should get
you working again.

A
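The overview above boils down to three mechanical steps: copy the whole datadir, bring the config along, and hand file ownership back to the mysql user. A hedged sketch of those steps — the paths and the 'mysql' owner are assumptions typical of an Ubuntu box, not details from Lee's system:

```python
import shutil
from pathlib import Path

def transplant(old_root: Path, new_datadir: Path, conf_dst: Path, owner=None):
    """Copy a rescued MySQL datadir and config onto a new server.

    old_root is the mountpoint of the dead machine's filesystem. On a real
    restore pass owner='mysql' so mysqld can open the files (skipped when
    owner is None, e.g. for a dry run).
    """
    # Copy everything: InnoDB needs ibdata*/ib_logfile* too,
    # not just the per-database subdirectories.
    shutil.copytree(old_root / "var/lib/mysql", new_datadir, dirs_exist_ok=True)
    shutil.copy2(old_root / "etc/mysql/my.cnf", conf_dst)
    if owner:
        for p in [new_datadir, *new_datadir.rglob("*")]:
            shutil.chown(p, user=owner, group=owner)

# On the real box this would be something like:
# transplant(Path("/mnt/rescue"), Path("/var/lib/mysql"),
#            Path("/etc/mysql/my.cnf"), owner="mysql")
```

After that, starting mysqld kicks off InnoDB crash recovery as described above.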

On Thu, Jun 16, 2011 at 6:50 PM, leegold leeg...@fastmail.fm wrote:

 Hi,

 I have a situation where the box with a mysql server will not boot. But
 I can boot it from a CD and browse the hard drive and I can mount
 partitions so files and data are accessible. I want to somehow take the
 Mysql DB structure and content, copy it, and move it to a bootable
 working unit. Normally I'd run the script that puts it all into a
 comma delimited file that can be reconstituted at a later date. But in
 my case I can only copy and paste files from an unbootable unit to a
 working server and try to reconstitute it from there.

 It is Ubuntu 10.4 server. Is what I want to do possible? What
 directories and files would I need to copy? First I would have a working
 Mysql server on the new unit then I assume I would copy to the new
 server and process the data if necessary...any help on how to proceed
 appreciated.

 Thanks,

 Lee G.





Re: MySQL loses password?

2011-06-13 Thread Andrew Moore
Try starting the server using --skip-grant-tables and then resetting the
password for the root account.
http://dev.mysql.com/doc/refman/5.5/en/server-options.html#option_mysqld_skip-grant-tables

I've only seen/heard of the problems you're having with legacy system
tables. Did you restore the data into 5.5 using an old backup file?
http://dev.mysql.com/doc/refman/5.5/en/mysql-upgrade.html

Might save your data!

Andy


On Mon, Jun 13, 2011 at 10:55 AM, Tim Thorburn webmas...@athydro.comwrote:

 Ok, ran several scans on my dev PC.  Unsurprisingly, nothing was found.
  Trying to use command line produces the error:
 ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using
 password: YES)

 The same Access denied for user 'root'@'localhost' (using password: YES)
 happens when trying to use MySQL Workbench 5.2.33 CE.  phpMyAdmin gives an
 error #2000 Cannot log in to the MySQL server.

 mysql --version returns:
 mysql Ver 14.14 Distrib 5.5.11, for Win64 (x86)

 I'm guessing I'll be uninstalling/removing MySQL completely tomorrow and
 doing a clean install of 5.5.13.  Although it would be handy to know what
 caused this problem in the first place.  It's more than a little concerning
 when Friday everything works, shutdown computer for the night, log in again
 Saturday to find MySQL doesn't so much like my passwords anymore.


 On 6/12/2011 3:33 PM, Claudio Nanni wrote:


 This happens when you use an old client / library. Can you try from
 command line? Also issue: mysql --version
 Claudio

 On Jun 12, 2011 3:09 PM, Tim Thorburn webmas...@athydro.com mailto:
 webmas...@athydro.com wrote:
  There's been no upgrade or old pre-upgrade on this machine. I've kept the
   same dev password for years, as it is the same password I've used on my
   dev machines throughout the years.
 
  This machine started off as a fresh install of the OS several months
  ago. The only version of MySQL ever on this particular machine is
  5.5.11. My confusion is mostly centered around the it worked fine on
  Friday, then Saturday happened issue. I've run a complete virus scan
  and found nothing, as well as several spyware/malware scans - it's in
  the process of running a scan from a rescue disc, so I'll know if
  there's anything else afterward.
 
  Seems odd anyone would bother hacking into this dev machine that's
  barely connected to the Internet.
 
  On 6/12/2011 8:59 AM, Johan De Meersman wrote:
   If it's recently been upgraded or had an old pre-upgrade backup
  restored, that's not a major surprise; and surely you won't have been running
  mysql on a win7 for several years :-)
 
   In brief, mysql changed password encryptions between 4.1 and 5.0, for
  various reasons. The old password scheme is still supported, but IIRC the
  default setting for that in newer versions is off, yielding you the error
  you are reporting.
 
  Of course, if none of that is the case, you may have been hacked; but
 it seems somehow strange that a hacker would bother to install oldstyle
 passwords.
 
  The error message you provide also mentions the oldpasswd flag for PHP
 5.2 - also worth looking at.
 
 
  - Original Message -
  From: Tim Thorburnwebmas...@athydro.com mailto:
 webmas...@athydro.com
  To: mysql@lists.mysql.com mailto:mysql@lists.mysql.com
  Sent: Sunday, 12 June, 2011 2:50:22 PM
  Subject: MySQL loses password?
 
  Hi all,
 
  I came across something strange today on my dev machine and thought
  I'd
  see if anyone here has run into a similar problem. To begin, my dev
  machine is Win7 Ultimate 64-bit, running MySQL 5.5.11 (also 64-bit).
  Today when I tried to log into the server using the old MySQL GUI
  tools
  as root, I got an error number 1045 Access denied for user
  'root'@'localhost' (using password: YES). This is odd because I've
  not
  changed the root password on my dev machine in years. When I tried
  to
  goto a site on this same machine, Apache throws the following
  message:
 
  mysqlnd cannot connect to MySQL 4.1+ using the old insecure
  authentication. Please use an administration tool to reset your
  password with the command SET PASSWORD =
  PASSWORD('your_existing_password'). This will store a new, and more
  secure, hash value in mysql.user. If this user is used in other
  scripts
  executed by PHP 5.2 or earlier you might need to remove the
  old-passwords flag from your my.cnf file
 
  This is the first time I've seen such a message, or had MySQL
  randomly
  stop accepting my root password. I'll likely be doing a complete
  uninstall and reinstall in a few hours on this machine, but thought
  I'd
  ask here to see if anyone had any thoughts as to why this happened,
  and
  how I might correct it? If at all possible, I'd prefer to not have
  to
  do an uninstall/reinstall as I wisely hadn't backed up a few tables I
  was working on over the last couple days.
 
  Any thoughts?
 
  Thanks in advance,
  -Tim
 
 
 

Re: need help with -- unknown variable

2011-06-13 Thread Andrew Moore
Check if this is in the [mysqldump] section of your my.cnf file(s). Of
course it might not be a valid option in mysqldump...I haven't checked...
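A quick way to confirm the diagnosis is to walk each section of the config and list the ones that define the option: mysqldump reads the [client] and [mysqldump] groups, and datadir is only valid for the server, so a hit there explains the "unknown variable" error. A sketch using configparser — the inline config is a made-up example, and allow_no_value covers bare flags like skip-locking:

```python
from configparser import ConfigParser

# Hypothetical my.cnf contents, with datadir misplaced in [mysqldump].
MY_CNF = """\
[mysqld]
datadir = /var/lib/mysql

[mysqldump]
quick
datadir = /var/lib/mysql
"""

cfg = ConfigParser(allow_no_value=True, strict=False)
cfg.read_string(MY_CNF)

# Any section that mysqldump reads but that defines 'datadir' is the culprit.
offenders = [s for s in cfg.sections() if cfg.has_option(s, "datadir")]
print(offenders)  # -> ['mysqld', 'mysqldump']
```

Real my.cnf files aren't strictly INI (include directives, !includedir), so treat this as a first-pass check rather than a full parser.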
On 13 Jun 2011 17:00, Brown, Charles cbr...@bmi.com wrote:
 Hello All ~ I attempted to do a mysqldump when I got this message -- see
below:

 mysqldump: unknown variable 'datadir=/var/lib/mysql'

 Please can someone help me.

 Thanks





 


Patching MySQL CLI to log information

2011-06-02 Thread Andrew Braithwaite
Has anyone ever patched the MySQL or libmysql to log to some logfiles
with information like the UNIX user, time, server connected to, port
etc?

I'm just trying to save myself a bit of C patching.

Cheers,

A


Re: Not finding my.cnf file

2011-06-01 Thread Andrew Moore
You need to provide a my.cnf for your server. There are some sample files
included with the server binaries that you can start with. These won't be
tailored for your application/server so will need to be changed as per your
requirements.

Note that several of the options are static values so will require a MySQL
restart before they will work.

Best of luck

Andrew

On Wed, Jun 1, 2011 at 10:24 AM, Adarsh Sharma adarsh.sha...@orkash.comwrote:

 Dear all,

 I install mysql in CentOS -5.4 through 2 commands :

 yum install mysql-server
 yum install mysql-client

 And I can see directories created in /var/lib/mysql directory.

 But now I want to change it to my /hdd1-1 directory and also set logging
 directories.
 So , I search my.cnf file as:

 [root@ws-test ~]# find / -name my.cnf
 [root@ws-test ~]#

 Now how can I solve this issue: from where does MySQL pick its
 configuration, or does it go with its defaults?


 Thanks  best Regards,
 Adarsh Sharma





Re: Not finding my.cnf file

2011-06-01 Thread Andrew Moore
Check the permissions in the datadir.

On Wed, Jun 1, 2011 at 11:50 AM, Adarsh Sharma adarsh.sha...@orkash.comwrote:

 Thanks, it works, but a new error occurs:

 110601 16:16:16 mysqld_safe Starting mysqld daemon with databases from
 /hdd2-1/myisam_data
 /usr/sbin/mysqld: File '/hdd2-1/myisam_data/mysql-bin.index' not found
 (Errcode: 13)
 110601 16:16:16 [ERROR] Aborting

 110601 16:16:16 [Note] /usr/sbin/mysqld: Shutdown complete

 I checked, and the /hdd2-1/myisam_data/mysql-bin.index file is there, so
 how come?
 Thanks


 Peter Boros wrote:

 Hi,

 You probably didn't run mysql_install_db.

 Peter Boros

 On Wed, 2011-06-01 at 15:52 +0530, Adarsh Sharma wrote:


 I got the error after setting my.cnf file in /etc directory.

 110601 15:23:02 [Note] Plugin 'FEDERATED' is disabled.
 /usr/sbin/mysqld: Table 'mysql.plugin' doesn't exist

  After some research I found the cause of this error: the new my.cnf
  is very old and mysql_upgrade is needed.

  So, can someone please give me a standard my.cnf file that contains
  all the parameters?

  Is the my.cnf file different for different versions?
  My mysql version is 5.1.4. Please check my attached my.cnf file.

 Thanks

 John Daisley wrote:

 I think the default location on Centos is /etc/my.cnf

 Regards
 John

 On 1 June 2011 10:24, Adarsh Sharma adarsh.sha...@orkash.com wrote:



 Dear all,

 I install mysql in CentOS -5.4 through 2 commands :

 yum install mysql-server
 yum install mysql-client

 And I can see directories created in /var/lib/mysql directory.

 But now i want to change it to my /hdd1-1 diretcory and alse set
 logging
 directories.
 So , I search my.cnf file as:

 [root@ws-test ~]# find / -name my.cnf
 [root@ws-test ~]#

 Now how could I solve this issue  from where mysql picks its
 configuration
 or it go for its default.


 Thanks  best Regards,
 Adarsh Sharma







 plain text document attachment (my.cnf)
 #This is for a large system with memory = 512M where the system runs
 mainly
 # MySQL.
 #
 # You can copy this file to
 # /etc/my.cnf to set global options,
 # mysql-data-dir/my.cnf to set server-specific options (in this
 # installation this directory is @localstatedir@) or
 # ~/.my.cnf to set user-specific options.
 #
 # In this file, you can use all long options that a program supports.
 # If you want to know which options a program supports, run the program
 # with the --help option.

 # The following options will be passed to all MySQL clients

 [client]
 #password = your_password
 port  = 3306
 socket  = /var/lib/mysql/mysql.sock

 # Here follows entries for some specific programs
 # The MySQL server
 [mysqld]
 port  = 3306
 socket  = /var/lib/mysql/mysql.sock
 #skip-locking

 # Caches and Buffer Sizes
 key_buffer = 256M
 max_allowed_packet=16M
 table_cache = 256
 sort_buffer_size = 2M
 read_buffer_size = 2M
 read_rnd_buffer_size = 4M

 #record_buffer = 1M

 myisam_sort_buffer_size = 128M
 thread_cache = 128
 query_cache_limit = 2M
 query_cache_type = 1
 query_cache_size = 32M
 key_buffer = 16M
 join_buffer = 2M
 table_cache = 1024

 datadir = /hdd2-1/myisam_data

 log-bin=mysql-bin

 #Time Outs
 interactive_timeout = 100
 wait_timeout = 100
 connect_timeout = 10

 # Try number of CPU's*2 for thread_concurrency
 thread_concurrency = 2

 # Maximum connections allowed
 max_connections = 100
 max_user_connections = 50
 max_connect_errors = 10

 # Don't listen on a TCP/IP port at all. This can be a security
 enhancement,
 # if all processes that need to connect to mysqld run on the same host.
 # All interaction with mysqld must be made via Unix sockets or named
 pipes.
 # Note that using this option without enabling named pipes on Windows
 # (via the enable-named-pipe option) will render mysqld useless!
 #
 #skip-networking

 # Replication Master Server (default)
 # binary logging is required for replication

 # required unique id between 1 and 2^32 - 1
 # defaults to 1 if master-host is not set
 # but will not function as a master if omitted
 server-id = 1

 # Replication Slave (comment out master section to use this)
 #
 # To configure this host as a replication slave, you can choose between
 # two methods :
 #
 # 1) Use the CHANGE MASTER TO command (fully described in our manual) -
 #the syntax is:
 #
 #CHANGE MASTER TO MASTER_HOST=host, MASTER_PORT=port,
 #MASTER_USER=user, MASTER_PASSWORD=password ;
 #
 #where you replace host, user, password by quoted strings and
 #port by the master's port number (3306 by default).
 #
 #Example:
 #
 #CHANGE MASTER TO MASTER_HOST='125.564.12.1', MASTER_PORT=3306,
 #MASTER_USER='joe', MASTER_PASSWORD='secret';
 #
 # OR
 #
 # 2) Set the variables below. However, in case you choose this method, then
 #start replication for the first time (even unsuccessfully, for example
 #if you mistyped the 

Re: Renaming a database?

2011-05-24 Thread Andrew Moore
Instead of renaming it, revoke permissions on it. If it's being used, you
should see access-denied errors in the application.

Andy

On Tue, May 24, 2011 at 4:19 PM, Jerry Schwartz je...@gii.co.jp wrote:

 It looks like there’s no way to rename a database. Is that true?



 I have two reasons for wanting to do this:



 - I want to get a current version of a database out of the way so that I
 can load in an older version, without having to pay the penalty of reloading
 the current version.

 - I think I have a database that isn’t being used (don’t ask), and I want
 to hide it to see if anything blows up.



 Suggestions?



 Regards,



 Jerry Schwartz

 Global Information Incorporated

 195 Farmington Ave.

 Farmington, CT 06032



 860.674.8796 / FAX: 860.674.8341

 E-mail: je...@gii.co.jp

 Web site: http://www.the-infoshop.com/





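A third option that comes up regularly on this list: MySQL has no working
RENAME DATABASE (the 5.1 statement was removed as unsafe), but RENAME TABLE
can move tables between schemas as a quick metadata operation. A sketch with
hypothetical schema and table names — views, triggers, stored routines and
grants are not moved and would need recreating by hand:

```
-- Hypothetical names; repeat the RENAME for every table in the schema.
CREATE DATABASE infoshop_old;
RENAME TABLE infoshop.products TO infoshop_old.products,
             infoshop.orders   TO infoshop_old.orders;
DROP DATABASE infoshop;  -- only once it is empty
```

This also covers the "hide it and see what blows up" case: once the tables
are moved, anything still using the old schema fails immediately.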

Re: Renaming a database?

2011-05-24 Thread Andrew Moore
Could you try restoring the other db with another name and changing the
connection string in the app?
On 24 May 2011 16:20, Jerry Schwartz je...@gii.co.jp wrote:
 It looks like there’s no way to rename a database. Is that true?



 I have two reasons for wanting to do this:



 - I want to get a current version of a database out of the way so that I
can load in an older version, without having to pay the penalty of reloading
the current version.

 - I think I have a database that isn’t being used (don’t ask), and I want
to hide it to see if anything blows up.



 Suggestions?



 Regards,



 Jerry Schwartz

 Global Information Incorporated

 195 Farmington Ave.

 Farmington, CT 06032



 860.674.8796 / FAX: 860.674.8341

 E-mail: je...@gii.co.jp

 Web site: http://www.the-infoshop.com/





Re: MySQL ignores foreign key constraints

2011-05-20 Thread Andrew Moore
Try

show create table ... ;

A

On Fri, May 20, 2011 at 12:07 PM, Mimi Cafe mimic...@googlemail.com wrote:

 Hi

 Any ideas why MySQL silently ignores any foreign key constraints I define
 for the following tables?


 mysql desc book;

 +----------------+---------------------+------+-----+---------+-------+
 | Field          | Type                | Null | Key | Default | Extra |
 +----------------+---------------------+------+-----+---------+-------+
 | pkisbn         | varchar(20)         | NO   | PRI | NULL    |       |
 | fkpublisher_id | tinyint(3) unsigned | NO   | MUL | NULL    |       |
 | title          | varchar(50)         | NO   |     | NULL    |       |
 | subtitle       | varchar(50)         | NO   |     | NULL    |       |
 13 rows in set (0.01 sec)

 mysql desc book_author;
 +-------------+------------------------+------+-----+---------+-------+
 | Field       | Type                   | Null | Key | Default | Extra |
 +-------------+------------------------+------+-----+---------+-------+
 | fkauthor_id | mediumint(10) unsigned | NO   | MUL | NULL    |       |
 | fkisbn      | varchar(20)            | NO   | MUL | NULL    |       |
 +-------------+------------------------+------+-----+---------+-------+

 mysql desc author;

 +-------------+------------------------+------+-----+---------+----------------+
 | Field       | Type                   | Null | Key | Default | Extra          |
 +-------------+------------------------+------+-----+---------+----------------+
 | pkauthor_id | mediumint(10) unsigned | NO   | PRI | NULL    | auto_increment |
 | fname       | varchar(20)            | NO   |     | NULL    |                |
 | initial     | varchar(5)             | YES  |     | NULL    |                |
 | lname       | varchar(20)            | NO   |     | NULL    |                |
 +-------------+------------------------+------+-----+---------+----------------+
 4 rows in set (0.00 sec)


 Mimi

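For the archive, the two usual culprits behind silently ignored foreign keys:
MyISAM (the default storage engine before MySQL 5.5) parses FOREIGN KEY
clauses and discards them, and a column-level REFERENCES clause is ignored
even on InnoDB — the constraint must be declared at table level. A sketch of
a definition that does enforce the keys, reusing the column names from
Mimi's tables (constraint names are made up):

```
-- Constraints must be table-level and the engine must be InnoDB;
-- otherwise MySQL accepts the syntax and silently drops the keys.
CREATE TABLE book_author (
    fkauthor_id MEDIUMINT(10) UNSIGNED NOT NULL,
    fkisbn      VARCHAR(20) NOT NULL,
    CONSTRAINT fk_ba_author FOREIGN KEY (fkauthor_id)
        REFERENCES author (pkauthor_id),
    CONSTRAINT fk_ba_book   FOREIGN KEY (fkisbn)
        REFERENCES book (pkisbn)
) ENGINE=InnoDB;
```

If SHOW CREATE TABLE doesn't list the constraints afterwards, they were
dropped at creation time. Note the referenced columns must be indexed and
the column types must match exactly (they do here: mediumint unsigned and
varchar(20) on both sides).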


Re: URGENT: Change Default Location of where Database Files get written?

2011-05-13 Thread Andrew Moore
There absolutely is:

there is a configuration file belonging to MySQL named `my.cnf`. It can
exist in many places, and there's a hierarchical order of precedence; the
most common location is /etc/my.cnf.

Within this file you may specify the `datadir` option to set the location
where you want your data to reside. This is a static variable that cannot be
changed while the server is running, and there are things you need to do
before changing it (stop the server and move the existing files, preserving
ownership). There is a wealth of documentation on this option at...

http://dev.mysql.com/doc/refman/5.5/en/mysqld-option-tables.html

HTH

Andy

On Fri, May 13, 2011 at 4:21 PM, Tina Matter ti...@umich.edu wrote:

 I have a MySQL question that I'm hoping someone can help answer.

 We have a linux machine which has MySQL 5.5.8 installed.
 It is currently installed in this location:/opt/mysql

 When creating a new database, a folder (with the name of the databas) gets
 created in this location:
 /opt/mysql/data

 Is there any way to change the location of where data is stored?
 The database that I need to create is going to have over a billion records
 in it,
 so it needs to be in a specific place.

 I want the database folder to get created here:

 /science/databases/databasename

 Thanks for any help.
 Tina

 --
 Tina Matter
 Web Applications Developer
 University of Michigan
 Department of Epidemiology
 1415 Washington Heights, Suite 4605
 Ann Arbor, MI 48109


 --
 MySQL General Mailing List
 For list archives: http://lists.mysql.com/mysql
 To unsubscribe:http://lists.mysql.com/mysql?unsub=eroomy...@gmail.com


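A minimal sketch of the change, using the path from Tina's question: stop
mysqld, copy the old data directory to the new location preserving
ownership (e.g. `cp -Rp /opt/mysql/data/. /science/databases/databasename/`
followed by `chown -R mysql:mysql` on the target), then point the server at
it and restart:

```
# /etc/my.cnf (fragment) -- path taken from the original question
[mysqld]
datadir = /science/databases/databasename
```

Alternatively, a symlink from the old datadir to the new filesystem works
without touching the config, though SELinux/AppArmor policies may need
updating in either case.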


Re: operation with dates

2011-05-12 Thread Andrew Moore
Rocio,

there are specific date functions you need for this kind of query. Please
check the MySQL documentation for these.

HTH

Andy

On Thu, May 12, 2011 at 4:05 PM, Rocio Gomez Escribano 
r.go...@ingenia-soluciones.com wrote:

 Hello! I’m trying to subtract two dates in my query, but I can’t get it to
 work. I tried:







 mysql select userID from user where (userPaymentDate - now()) > 365 ;





 It didn’t work. Do you know how to do it? Thank you so much!



 Regards



 *Rocío Gómez Escribano*

 r.go...@ingenia-soluciones.com




 Polígono Campollano C/F, nº21T

 02007 Albacete (España)

 Tlf:967-504-513  Fax: 967-504-513

 www.ingenia-soluciones.com





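For the archive: subtracting two DATETIME values with `-` performs numeric
subtraction of their digit representations, not a difference in days, which
is why the query above matches nothing sensible. DATEDIFF() or INTERVAL
arithmetic is what the documentation points to. A sketch using the table and
column names from the post, assuming the intent is to find users whose
payment date is more than a year in the past:

```
-- Users whose last payment was more than 365 days ago.
SELECT userID
FROM   user
WHERE  DATEDIFF(NOW(), userPaymentDate) > 365;

-- Equivalent form using INTERVAL arithmetic:
SELECT userID
FROM   user
WHERE  userPaymentDate < NOW() - INTERVAL 365 DAY;
```

The second form can use an index on userPaymentDate, since the column
appears unmodified on one side of the comparison.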